I'm new to GeoTools. I need to generate a heat map showing data density.
I found a kernel density estimation process here: https://jira.codehaus.org/browse/GEOT-4175, and so far it gives me a heatmap surface over a set of irregular data points as a GridCoverage2D.
My question is: how can I display it in a heat map fashion? Thanks a lot!
I am using the ITK library to get a mesh from a 3D image; the 3D image is a volume of slices. I get the mesh using itk::BinaryMask3DMeshSource, but I need the physical coordinates of each mesh node and I don't know how to get them.
I know how to obtain the physical coordinate of a voxel in an image with ITK, using the TransformIndexToPhysicalPoint function. But when I have a mesh like this, or an itk::Mesh, I don't know how to do it. I need to know if there is any relationship between the nodes of the mesh and the voxels in the image, so I can find the physical coordinates.
Mesh points should already be in physical space, judging by both the code and the accompanying comment.
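Here is a minimal sketch (assuming ITK's Python wrapping; the file name is hypothetical) of what that implies: mesh points come back in the same physical space that TransformIndexToPhysicalPoint maps voxel indices into, so no further transform is needed.

import itk

image = itk.imread("mask.nii.gz")  # hypothetical binary mask volume

# Physical coordinate of voxel (0, 0, 0), derived from origin/spacing/direction:
p_voxel = image.TransformIndexToPhysicalPoint([0, 0, 0])
print(p_voxel)

# For a mesh produced from this image (e.g. by itk::BinaryMask3DMeshSource),
# each node is already a physical coordinate, directly comparable to p_voxel:
# p_node = mesh.GetPoint(0)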
I am developing a program, and one of the requirements is to accept DXF as input. The input is limited to the 2D case only. The program itself is in C++/Qt, but to test it I need some sample DXF input. The spline import is already implemented; the next step is a polyline with spline fit points or control points added. I decided to use Python/ezdxf to generate such a polyline, as I don't have AutoCAD.
My first approach was to create a spline from fit points using add_spline_control_frame, then convert it to a polyline. The problem is that there turned out to be no conversion from spline to polyline (although I think I saw one in the docs, I cannot find it anymore).
The current approach is to build the polyline with add_polyline2d(points), setting each point's DXF flags field to 8 (spline vertex created by spline-fitting). The problem is that the points need to be of type DXFVertex (the docs state Vertex, but it is absent), and that type is private to ezdxf.
Please share your approaches either to the problems I've faced with ezdxf, or to the initial problem.
P.S. I tried to use LibreCAD to generate such a polyline, but it's hardly possible to make a closed polyline from spline fit points there.
The ability to create B-splines with the POLYLINE entity was used by AutoCAD before the SPLINE entity was added in DXF R2000. The usage of this feature is not documented by Autodesk, and it is not promoted by ezdxf in any way.
Use the SPLINE entity if you can, but if you have to use DXF R12, there is a helper class in ezdxf to create such splines, ezdxf.render.R12Spline, and a usage example here.
But you may be disappointed: BricsCAD and AutoCAD show a very visible polygon structure:
Because not only the control points but also the approximated curve points have to be stored as polyline points, you have to use many approximation points to get a smoother curve; but then you could just as well use a regular POLYLINE approximation. I assume the control points were only stored to keep the spline editable.
All I know about this topic is documented in the r12spline.py file. If you find a better way to create smooth B-splines for DXF R12 with fewer approximation points, please let me know.
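For reference, a minimal sketch of the R12Spline helper mentioned above (based on the ezdxf docs; the control points are made up and the exact signatures may vary between versions):

import ezdxf
from ezdxf.render import R12Spline

doc = ezdxf.new("R12")
msp = doc.modelspace()

# Hypothetical control points of the B-spline frame:
control_points = [(0, 0), (2, 4), (6, 4), (8, 0)]
spline = R12Spline(control_points, degree=2, closed=False)

# Renders a POLYLINE entity with spline-fit vertices into the modelspace:
spline.render(msp, segments=40, dxfattribs={"color": 1})
doc.saveas("r12spline.dxf")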
Example to approximate a SPLINE entity spline as points, which can be used by the POLYLINE entity:

bspline = spline.construction_tool()  # "spline" is an existing SPLINE entity
msp.add_polyline3d(bspline.approximate(segments=20))

The SPLINE entity is a 3D entity; if you want to squash the spline into the xy-plane, remove the z-axis:

xy_pts = [p.xy for p in bspline.approximate(segments=20)]  # drop the z-axis
msp.add_polyline2d(xy_pts)
# or as LWPOLYLINE entity:
msp.add_lwpolyline(xy_pts, format='xy')
I would like to display 100,000 or more polygons in Cesium. The polygons have a lot of shared boundaries --- they are essentially like US zip code polygons but smaller, so there are more of them --- so I'd like to use a representation that takes advantage of this and is "aware" of the topology of shared boundaries and only stores each vertex once.
I'm fairly new to programming with Cesium (but familiar with 3D graphics in general); I've scanned the tutorials and docs and don't immediately see a way to create a polygon collection with shared vertices. I have my polygons in a topojson file and tried loading it using code like what is in the topojson example:
var promise = Cesium.GeoJsonDataSource.load('./polygons.topojson');
promise.then(function(dataSource) {
    viewer.dataSources.add(dataSource);
    ...
});
But:
1. this doesn't take advantage of the shared vertices, because the GeoJsonDataSource converts each individual polygon to a GeoJson object, and
2. it crashes my browser, presumably because 100,000 separate GeoJson objects is more than it can handle.
I feel fairly sure (and hopeful) that there is a way to do this in Cesium, but I haven't found it yet. Can someone tell me what the most effective approach would be, and in particular which primitives / loader utilities I should be looking at?
Ultimately, by the way, the application I want to write will never actually render all 100,000 polygons at the same time --- it will choose which ones to render based on the mouse position, and at any one time it will only render a few thousand of them. But I want to load them all into memory ahead of time, so that I can change which ones are being rendered in real time as the cursor moves around.
I am a beginner in deep learning. In convolutional networks such as LeNet-5, there are 6 feature maps in the C1 layer, and each feature map is associated with a unique convolution kernel (a 5x5 matrix).
What is the difference between any two feature maps in the same layer? For a black-and-white image dataset like MNIST (without RGB), people still use 6 feature maps.
I guess that, initially, the 6 convolution kernels are randomly generated 5x5 matrices, so when the same input image is convolved with different kernels, the resulting feature maps will be different. And this is the main motivation, right?
Every filter in your convolutional layer extracts a specific feature from the input. One filter could be sensitive to horizontal edges while another is sensitive to vertical edges; a third filter may be sensitive to a triangular shape. You want the feature maps to be as different from each other as possible, to avoid redundancy. Avoiding redundancy improves the network's capacity to capture as many variations in the data as possible.
Random initialization prevents learning duplicate filters.
Why 6 feature maps? This is a result of trying out other numbers of filters. Keep in mind that increasing the number of filters results in higher computational overhead and possibly overfitting (memorizing the training data rather than learning to classify new images correctly). Another intuition for 6 is that there's not that much variation at the pixel level; you'll extract more complex features in subsequent layers. 6 feature maps for C1 ended up working well for the MNIST dataset.
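To make this concrete, here is a minimal sketch of a C1-like layer (written in PyTorch purely as an illustration; LeNet-5 itself predates modern frameworks):

import torch
import torch.nn as nn

# C1: 1 input channel (grayscale), 6 feature maps, 5x5 kernels.
c1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)

x = torch.randn(1, 1, 32, 32)  # one 32x32 grayscale image (LeNet-5 input size)
print(c1(x).shape)             # torch.Size([1, 6, 28, 28]): 6 different maps
print(c1.weight.shape)         # torch.Size([6, 1, 5, 5]): 6 random 5x5 kernels

# The 6 kernels start from different random values, so the same input yields
# 6 different feature maps; training then specializes each kernel further.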
I need to retrieve the latitude and longitude coordinates of the intersection of a polygon with the street (see the blue point on the edge of the circle in the image).
I need this data in order to calculate the road length from the center of the circle to its edge. Does anybody know if this task is possible, and if so, which technology allows for doing that?
This works only if you have the vector data of all streets; it does not work with an image (jpg, bmp).
When you have the vector data, you do a simple circle-line intersection, which you learned in school.
You might first transform the vectors to a Cartesian x,y plane, such that you don't work with the latitude/longitude values from the street vectors directly. A sketch follows below.
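A minimal sketch (plain Python; the helper name is made up) of the circle/segment intersection, assuming the street coordinates have already been projected to a local Cartesian plane in meters around the circle center:

import math

def circle_segment_intersections(cx, cy, r, x1, y1, x2, y2):
    # Parametrize the segment as P(t) = P1 + t*(P2 - P1), 0 <= t <= 1,
    # and solve |P(t) - C|^2 = r^2, a quadratic in t.
    dx, dy = x2 - x1, y2 - y1
    fx, fy = x1 - cx, y1 - cy
    a = dx * dx + dy * dy
    if a == 0:
        return []  # degenerate segment
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # the street segment misses the circle
    sq = math.sqrt(disc)
    hits = []
    for t in ((-b - sq) / (2 * a), (-b + sq) / (2 * a)):
        if 0.0 <= t <= 1.0:  # keep only intersections inside the segment
            hits.append((x1 + t * dx, y1 + t * dy))
    return hits

# Unit circle at the origin, a straight street through the center:
print(circle_segment_intersections(0, 0, 1, -2, 0, 2, 0))  # [(-1.0, 0.0), (1.0, 0.0)]

The resulting points can then be projected back to latitude/longitude.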
You can get vector data for free from OpenStreetMap, or from TomTom or NavTeq if it is a huge project. Sometimes the state provides this data, too.
A common data format for such vector data is the ESRI shapefile format (.shp).