After some wrestling, I have managed to put together a JSON file that can draw me a map of London and its boroughs. It's here http://graphitti.org/admin2/files/experiments/03_scaledTM5.html
Apart from the boundaries not being perfect, the map seems OK, but there's a problem: some, but not all, of the paths apparently take up the whole SVG space. So when I inspect them in the DOM, I end up selecting the whole area, and likewise if I were to colour them in with a fill, I'd turn the whole space that colour.
I've noticed that this tends to happen with the larger boroughs, and the smaller ones are selectable through the DOM. That's not always true, but is there some limit to the number of coordinates that D3 can handle here? Or is there something else up with the code?
The JSON is here http://www.graphitti.org/admin2/files/experiments/EWNI2.json if anyone wants to inspect it. City of London is a "good" borough, as are Hackney and Tower Hamlets. Most of the others, e.g. Barnet and Barking, are "bad" in that they fill the SVG.
Some of your paths are "inside-out" as far as the d3 mapping functions are concerned. The order of points in a path or sub-path has to be clockwise for it to get mapped properly.
Counter-clockwise loops are interpreted as "holes" within a larger shape, the bounds of which are assumed to be outside the map. As a result, when you colour in the shape, everything except the area within the path gets coloured.
The docs recommend running your GeoJSON file through this utility to re-order the vertices correctly. Of course, you could also write your own utility to reverse the order, and just manually go through and figure out which ones are incorrect.
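If you go the do-it-yourself route, here is a rough sketch of what that pass could look like (plain JavaScript; it uses a planar shoelace test rather than d3's spherical area check, and the signedRingArea / rewindPolygon helpers are just names made up for this illustration):

// Returns a positive value for a clockwise ring (longitude/latitude, y up),
// negative for counter-clockwise, using the shoelace formula.
function signedRingArea(ring) {
  let area = 0;
  for (let i = 0; i < ring.length - 1; i++) {
    const [x1, y1] = ring[i];
    const [x2, y2] = ring[i + 1];
    area += (x2 - x1) * (y2 + y1);
  }
  return area;
}

// The first ring of a Polygon is the exterior (should be clockwise here);
// any further rings are holes and should wind the other way.
function rewindPolygon(rings) {
  return rings.map((ring, i) => {
    const isClockwise = signedRingArea(ring) > 0;
    const wantClockwise = i === 0;
    return isClockwise === wantClockwise ? ring : ring.slice().reverse();
  });
}

geojson.features.forEach(f => {
  if (f.geometry.type === "Polygon") {
    f.geometry.coordinates = rewindPolygon(f.geometry.coordinates);
  } else if (f.geometry.type === "MultiPolygon") {
    f.geometry.coordinates = f.geometry.coordinates.map(rewindPolygon);
  }
});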
I am using the Edit2D extension on an SVF created from a 2D DWG file and have a question about transforms. The Autodesk.Edit2D.Polygon objects that are created have a getArea() method, which is great. However, the result is not in the correct unit scale: I tested one, and something that should be roughly 230 sq ft in size is coming back as about 2.8.
I notice that the method takes an argument of type Autodesk.Edit2D.MeasureTransform, which I'm sure is what I need; however, I don't know how to get that transform. I see that I can get viewer.model.getData().viewports[1].transform, but that is just an array of 16 numbers, not a transform object, so it creates an error when I try to pass it in.
I have not been able to find any documentation on this. Can someone tell me what units this is coming back in and/or how to convert to the same units as the underlying DWG file?
Related question, how do I tell what units the underlying DWG is in?
EDIT
To add to this, I tried to get all the polylines in the drawing, which have an area property. In this case I was able to figure out that the polyline in the underlying DWG was reporting its area in square inches (not sure if that's always the case). I generated Edit2D polygons based on the polylines, so it basically just drew over them.
I then compared the area property from the polyline to the result of getArea() on the polygon to find the ratio. In this case it was always about 83 or 84 times smaller than the square-foot value of the polyline it came from (there is some degree of error in my tracing system, so I don't expect them to be exact at this point). However, that doesn't fit any unit conversion that I know of. So, remaining questions:
What unit is this?
Is this consistent or do I need to look somewhere else for this scale?
Maybe you missed section 3.2, Units for Areas and Lengths, of https://forge.autodesk.com/en/docs/viewer/v7/developers_guide/advanced_options/edit2d-use/
If you use Edit2D without the MeasureExtension, it will display all coordinates in model units. You can customize units by modifying or replacing DefaultUnitHandler. More information is available in the Customize Edit2D tutorial.
and https://forge.autodesk.com/en/docs/viewer/v7/developers_guide/advanced_options/edit2d-customize/
BTW, we can get the DefaultUnitHandler via edit2dExt.defaultContext.unitHandler.
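For example, a minimal sketch of getting hold of it (following the extension-loading pattern from the Edit2D guide linked above; treat it as a sketch, not a definitive recipe):

// Load the Edit2D extension and register its default tools, as in the guide.
const edit2dExt = await viewer.loadExtension('Autodesk.Edit2D');
edit2dExt.registerDefaultTools();

// The default context exposes the unit handler used to format measurements;
// customizing or replacing it changes how lengths and areas are displayed.
const unitHandler = edit2dExt.defaultContext.unitHandler;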
OK, after a great deal of experimentation and frustration, I think I have it working. I ended up looking directly into the JS for the getArea() method in dev tools. Searching through the script, I found a class called DefaultMeasureTransform that inherits from MeasureTransform and takes a viewer argument. I was able to construct that and then pass it in as an argument to getArea():
const transform = new Autodesk.Edit2D.DefaultMeasureTransform(viewer);
const area = polygon.getArea(transform);
Now the area variable matches the units in the original CAD file (within acceptable rounding error anyway; it's about 0.05 square inches off).
It would be nice to have better documentation on the coordinate systems; am I missing it somewhere? Either way, this is working, so hopefully it helps someone else.
I have a cabinet and I want to get the orientation of the cabinet; specifically, I want to automatically locate the front of the cabinet. I don't know how to get the orientation.
This is a tricky problem.
"Front" is relative to something. If you have a cabinet, one can assume the front is where the handles are. Now, assuming the handle is a block (AutoCAD), a family (Revit), or a part (Inventor), your code may search for such an instance.
Once you have found that object, calculate a vector from the cabinet's center to it. That will be the base vector for a raytrace.
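As a rough, platform-agnostic sketch of that last step in JavaScript (all names here are hypothetical; how you obtain the two center points depends on whether the handle lives in AutoCAD, Revit, or Inventor):

// cabinetCenter and handleCenter are assumed to be {x, y, z} points, e.g. the
// centers of the cabinet's and the handle instance's bounding boxes.
function frontDirection(cabinetCenter, handleCenter) {
  const dx = handleCenter.x - cabinetCenter.x;
  const dy = handleCenter.y - cabinetCenter.y;
  const dz = handleCenter.z - cabinetCenter.z;
  const len = Math.hypot(dx, dy, dz);
  // Normalized direction from the cabinet's center toward the handle;
  // use this as the base vector for the raytrace described above.
  return { x: dx / len, y: dy / len, z: dz / len };
}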
I'm pretty new at SolidWorks!
But I've been able to create a solid from an STL file. It's a truncated tetrahedron shape.
Now I wanted this shape to be hollow (for 3D printing and adding threads).
So I searched for a while and found a tutorial for the Shell tool. This didn't work out because it gave me an error that the faces may offset in adjacent spaces.
So I thought: if I had one part and then the same part but scaled down by 3 mm, placed them in the same spot, and then subtracted one from the other somehow, it would give me the same shelled shape I want.
Would this work, and does anybody know a way to do this or have a better way to hollow out my solid?
STL & PART upload: Files (Google Drive)
If you have SOLIDWORKS Professional or Premium, you can use ScanTo3D to turn the part into a Solid / Surface body. At that point, you can manipulate the geometry as you would anything else in SOLIDWORKS.
Here's a video showing both turning on ScanTo3D and how to use it.
https://youtu.be/ZjzqWCfNfmQ
"So I thought if I had one part and then a the same part but scale it
3mm. Place them on the same spot and then subtract them of some sort.
It would give me the same shelled shape I want."
Use the Move/Copy Body command to copy it
Use the Scale command to scale it
Use the Combine feature to subtract the smaller body from the main body
Alternatively, use the Check Geometry feature to find any faulty faces, and ALWAYS run Import Diagnostics on an imported body. If you can find and fix a faulty face, try the Shell tool again. If the minimum radius is too small, then you will need to manually offset faces using the Offset Surface command.
I'm absolutely new to Pixel Bender (started a couple of hours ago).
My client wants a classic folding effect for his app. I've shown him some examples of folding effects done via masks and he didn't like them, so I decided to dive into Pixel Bender to try to write him a custom shader.
Thank goodness I've found this one, and I'm modifying it by playing with values. But how can I trace / print / echo values from the Pixel Bender Toolkit? This would greatly speed up all the tests I'm doing.
Here I've found in the comments that it's not possible; is that true?
Thx a lot
Well, you cannot directly trace values in Pixel Bender, but you can, for example, make a known bitmap, apply the filter with the requested values, and trace the resultant bitmap's pixel values to find out which source point corresponds to the one selected.
For example, you make a 256x256 bitmap where each point (x, y) has a red component of x and a green component of y. Then you apply the filter with the selected values and either display the result, or respond to clicks and trace the underlying color's red and green values, which will give you the exact point on the source bitmap.
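To make that concrete, here is the same trick sketched in JavaScript with a canvas, purely as an illustration of the encode/decode idea; in Flash you would build a BitmapData, apply your Pixel Bender filter (e.g. via a ShaderFilter), and read the pixels back in the same way:

const size = 256;
const canvas = document.createElement('canvas');
canvas.width = canvas.height = size;
const ctx = canvas.getContext('2d');

// Encode each source pixel's coordinates into its color: R = x, G = y.
const img = ctx.createImageData(size, size);
for (let y = 0; y < size; y++) {
  for (let x = 0; x < size; x++) {
    const i = (y * size + x) * 4;
    img.data[i] = x;       // red channel stores the source x
    img.data[i + 1] = y;   // green channel stores the source y
    img.data[i + 2] = 0;
    img.data[i + 3] = 255; // opaque
  }
}
ctx.putImageData(img, 0, 0);

// ...apply the folding filter to this bitmap here, then decode on click:
canvas.addEventListener('click', (e) => {
  const [r, g] = ctx.getImageData(e.offsetX, e.offsetY, 1, 1).data;
  console.log(`this output pixel came from source point (${r}, ${g})`);
});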
I have a simple Flex paint application which lets the user draw anything they want. My problem is how I can save the drawing into a MySQL database without converting it to an image format. Moreover, I want to be able to save it and later retrieve it in case there is an unfinished drawing.
Thank you.
Define what objects can be drawn, e.g. straight lines, points, polygons with controlled corners, etc. For each object, create serialization methods. It could be a binary format (I guess you won't need to search drawings in the database by the features used): object type first, then its attributes. For a line, that would be the end points, color, and maybe width and drawing style (solid, striped, dotted).
The entire drawing will have some properties too, like width/height and format version. Write those in the header, and then all the drawing objects follow. If you need layers, you can make a special tag for them, which will act as a separator between drawing objects:
header - layer 1 tag - line - line - line - layer 2 tag - square - circle
A binary format also gives you the ability to save the drawing to a file (or into the database as a blob). Alternatively, you can go with XML; it will just use many more bytes (but will be easier to debug).
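To make the structure concrete, here is a rough sketch in JavaScript that uses JSON instead of a raw binary format, purely because it is easier to read; the object shapes and field names are hypothetical, and in Flex you would do the equivalent with a ByteArray or XML:

// Hypothetical drawing structure: a header plus layers, each holding objects.
function serializeDrawing(drawing) {
  return JSON.stringify({
    // Header: drawing-wide properties
    version: 1,
    width: drawing.width,
    height: drawing.height,
    // Layer tags act as separators between groups of drawing objects
    layers: drawing.layers.map(layer => ({
      name: layer.name,
      objects: layer.objects.map(obj => {
        switch (obj.type) {
          case 'line':
            // A line: end points, color, optional width and drawing style
            return { type: 'line', from: obj.from, to: obj.to,
                     color: obj.color, width: obj.width, style: obj.style };
          case 'polygon':
            return { type: 'polygon', points: obj.points, color: obj.color };
          default:
            return { ...obj };
        }
      }),
    })),
  });
}

// The resulting string can go into a TEXT or BLOB column and be parsed back
// with JSON.parse() to resume an unfinished drawing.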