H3 hexagons render with swapped lat, long in kepler.gl

I want to plot H3 hexagons of Austria.
Download and unzip
https://biogeo.ucdavis.edu/data/gadm3.6/gpkg/gadm36_AUT_gpkg.zip
The full code, available at https://gist.github.com/geoHeil/b5b74887e20e4b659d4bb693a700a402, generates hexagons like:
import pandas as pd
from h3 import h3  # or simply `import h3`, depending on the h3-py version

size = 7
hexagons = pd.DataFrame(h3.polyfill(geoJson, size), columns=['hexagons'])  # geoJson: the Austria polygon in EPSG:4326
hexagons.head()
8752e5b80ffffff
8752ee6c1ffffff
Note that h3 expects EPSG:4326 and outputs coordinates in the same projection (https://github.com/uber/h3/issues/121).
This gives a file containing the hexagon IDs and their WKT geometries.
Now when moving to https://kepler.gl/ and uploading the data, I see three strange things happening:
1. The polygons from the WKT strings are distorted. This would indicate that the wrong projection is used, but converting to the supported Web Mercator (https://github.com/keplergl/kepler.gl/blob/6b380ac6db94e10fed0a76f5e78ef7e55406df21/docs/user-guides/b-kepler-gl-workflow/a-add-data-to-the-map.md) does not fix it.
2. When manually adding a hexagon layer, it is rendered in Yemen (based on the H3 address). This seems strange. Could this be a bug in the Kepler demo? It seems really weird, as the geometries are generated from the hexagons using h3_to_geo_boundary.
3. The hexagon centroids are not filled. When converting to hexagon centroids using h3_to_geo and adding the data back in as a HexBin layer, not all the hexagons are filled. That is strange, as originally all hexagons were available (see 1 and 2).
Notice how in (3) the HexBin hexagons are projected correctly as hexagons and not distorted.

I think there are a few things going on here:
Assuming you're using the master branch of h3-py, the signature of polyfill is polyfill(geo_json, res, geo_json_conformant=False). You need to add geo_json_conformant=True to your polyfill call, or the coordinates in your polygon will be interpreted as lat,lng instead of lng,lat. That's probably the source of your issues (see the sketch below this answer).
I'm not a Kepler expert, but I believe the HexBin layer uses a generated Cartesian, north/south-aligned hex grid, which is why those hexagons look "correct" on screen. H3 hexagons have low distortion, but they do have some shape and area distortion, and they are never north/south aligned. When you display them with a Mercator projection, as in Kepler, they will have even more distortion, especially toward the poles, as a function of the projection. However, the main distortion issue here is probably the switched lat,lng: the h3_to_multi_polygon function also requires an extra boolean argument to output GeoJSON-conformant coordinates.
I believe that Kepler also supports an H3 hexagon layer, so one option is to feed the raw points into Kepler and let Kepler do the aggregation to H3 indexes.
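A minimal sketch of the corrected calls, assuming the h3-py 3.x parameter names (geo_json_conformant for polyfill, geo_json for the boundary helper); the small polygon here is only a stand-in for the Austria polygon from the gist:

import pandas as pd
import h3  # h3-py 3.x (older releases exposed the same API via `from h3 import h3`)

# Stand-in for the Austria polygon: a small box near Vienna, coordinates in lng,lat (GeoJSON) order
geoJson = {
    "type": "Polygon",
    "coordinates": [[[16.2, 48.1], [16.6, 48.1], [16.6, 48.3], [16.2, 48.3], [16.2, 48.1]]],
}

size = 7
# geo_json_conformant=True tells polyfill the coordinates are lng,lat rather than lat,lng
hexagons = pd.DataFrame(
    list(h3.polyfill(geoJson, size, geo_json_conformant=True)),
    columns=['hexagons'],
)

# The boundary helper takes a similar flag to emit lng,lat (GeoJSON-conformant) output
boundary = h3.h3_to_geo_boundary(hexagons['hexagons'].iloc[0], geo_json=True)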

Kepler uses instanced rendering for all hexagons: it assumes all your H3 hexagons are relatively close to each other, and it uses your current map center to calculate the hexagon distortion and applies it to all hexagons. Not perfect, but it significantly improves performance, because it is too costly to calculate the distortion for each hexagon individually.

Related

Is there code in Astropy or other library (any language) that can locate the centroid of a star in a .fits image to sub-pixel precision?

Is there any code available in a Python library such as Astropy or a library in any other language that can:
Take the .fits image of a star as input
Locate the centroid of the star on the .fits image to a sub-pixel precision. The sub-pixel precision is necessary.
Place the image of the star centroid in a .fits file to within sub-pixel precision.
Again, the sub-pixel precision is what makes this project unique. All software out there that does similar processing (based on what I could find) only works down to the precision of a single pixel.
I have spent weeks reading through astronomy-related libraries available in Python and other languages, and I have found code that can obtain star centroids to within a 1-pixel precision. But I have not been able to find any code in any library that can obtain the centroid to within sub-pixel precision. Any help offered would be greatly appreciated!
What do you mean by sub-pixel precision? The FITS file will have a header containing astrometry information, including resolution and projection. Anyway, it seems like you want the center of a star at higher precision. You would have to upscale the image (say the resolution of the image was 6'/pixel: change it to 2'/pixel or whatever precision you want), find the centroid (coordinates) from the upscaled image, and then make a coordinates-to-pixel transformation at the original resolution. This will be a floating-point value, which I believe will give you the sub-pixel precision(?)
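A rough Python sketch of that idea, assuming the star sits in a small cutout stored in a hypothetical star.fits, using scipy for the upscaling and an intensity-weighted centroid:

from astropy.io import fits
from scipy.ndimage import zoom, center_of_mass

data = fits.getdata("star.fits").astype(float)  # hypothetical cutout around the star

factor = 3                               # e.g. 6'/pixel -> 2'/pixel
upscaled = zoom(data, factor, order=3)   # cubic upsampling
cy, cx = center_of_mass(upscaled)        # intensity-weighted centroid, in upscaled pixels

# Transform back to the original pixel grid: a floating-point (sub-pixel) position
print(cy / factor, cx / factor)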

Cesium Resampling

I know that Cesium offers several different interpolation methods, including linear (or bilinear in 2D), Hermite, and Lagrange. One can use these methods to resample sets of points and/or create curves that approximate sampled points, etc.
However, the question I have is what method does Cesium use internally when it is rendering a 3D scene and the user is zooming/panning all over the place? This is not a case where the programmer has access to the raster, etc, so one can't just get in the middle of it all and call the interpolation functions directly. Cesium is doing its own thing as quickly as it can in response to user control.
My hunch is that the default is bilinear, but I don't know that nor can I find any documentation that explicitly says what is used. Further, is there a way I can force Cesium to use a specific resampling method during these activities, such as Lagrange resampling? That, in fact, is what I need to do: force Cesium to employ Lagrange resampling during scene rendering. Any suggestions would be appreciated.
EDIT: Here's a more detailed description of the problem…
Suppose I use Cesium to set up a 3-D model of the Earth including a greyscale image chip at its proper location on the model Earth's surface, and then I display the results in a Cesium window. If the view point is far enough from the Earth's surface, then the number of pixels displayed in the image chip part of the window will be fewer than the actual number of pixels that are available in the image chip source. Some downsampling will occur. Likewise, if the user zooms in repeatedly, there will come a point at which there are more pixels displayed across the image chip than the actual number of pixels in the image chip source. Some upsampling will occur. In general, every time Cesium draws a frame that includes a pixel data source there is resampling happening. It could be nearest neighbor (doubt it), linear (probably), cubic, Lagrange, Hermite, or any one of a number of different resampling techniques. At my company, we are using Cesium as part of a large government program which requires the use of Lagrange resampling to ensure image quality. (The NGA has deemed that best for its programs and analyst tools, and they have made it a compliance requirement. So we have no choice.)
So here's the problem: while the user is interacting with the model, for instance zooming in, the drawing process is not in the programmer's control. The resampling is either happening in the Cesium layer itself (hopefully) or in even still lower layers (for instance, the WebGL functions that Cesium may be relying on). So I have no clue which technique is used for this resampling. Worse, if that technique is not Lagrange, then I don't have any clue how to change it.
So the question(s) would be this: is Cesium doing the resampling explicitly? If so, then what technique is it using? If not, then what drawing packages and functions are Cesium relying on to render an image file onto the map? (I can try to dig down and determine what techniques those layers may be using, and/or have available.)
UPDATE: Wow, my original answer was a total misunderstanding of your question, so I've rewritten from scratch.
With the new edits, it's clear your question is about how images are resampled for the screen while rendering. These images are texture maps in WebGL, and the process of getting them to the screen quickly is implemented in hardware, on the graphics card itself. Software on the CPU is not performant enough to map individual pixels to the screen one at a time, which is why we have hardware-accelerated 3D cards.
Now for the bad news: this hardware supports nearest neighbor, linear, and mipmapping. That's it. 3D graphics cards do not use any fancier interpolation, as it needs to be done in a fraction of a second to keep the frame rate as high as possible.
Mipmapping is described well by @gman in his article WebGL 3D Textures. It's a long article, but search for the word "mipmap" and skip ahead to his description of it. Basically, a single image is reduced into smaller images prior to rendering, so an appropriately sized starting point can be chosen at render time. But there will always be a final mapping to the screen, and as you can see, the choices are NEAREST or LINEAR.
Quoting @gman's article here:
You can choose what WebGL does by setting the texture filtering for each texture. There are 6 modes
NEAREST = choose 1 pixel from the biggest mip
LINEAR = choose 4 pixels from the biggest mip and blend them
NEAREST_MIPMAP_NEAREST = choose the best mip, then pick one pixel from that mip
LINEAR_MIPMAP_NEAREST = choose the best mip, then blend 4 pixels from that mip
NEAREST_MIPMAP_LINEAR = choose the best 2 mips, choose 1 pixel from each, blend them
LINEAR_MIPMAP_LINEAR = choose the best 2 mips, choose 4 pixels from each, blend them
I guess the best news I can give you is that Cesium uses the best of those, LINEAR_MIPMAP_LINEAR, to do its own rendering. If you have a strict requirement for more time-consuming imagery interpolation, that means you have a requirement not to use a realtime 3D hardware-accelerated graphics card, as there is no way to do Lagrange image interpolation during a realtime render.
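To make the LINEAR mode above concrete, here is a small numpy illustration (not Cesium or WebGL code) of the blend the hardware performs for a single sample:

import numpy as np

def sample_linear(texture, u, v):
    # Bilinear (LINEAR) lookup: blend the 4 texels nearest to the continuous
    # texel-space coordinate (u = column, v = row).
    h, w = texture.shape
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = u - x0, v - y0
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bottom = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bottom

tex = np.array([[0.0, 1.0],
                [2.0, 3.0]])
print(sample_linear(tex, 0.5, 0.5))  # 1.5: the average of all four texels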

Drawing over terrain with depth test?

I'm trying to render geometric shapes over uneven terrain (loaded from a heightmap; the shape geometry is also generated based on averaged heights across the heightmap, but it does not fit it exactly). I have the following problem: sometimes the terrain shows through the shape, as shown in the picture.
I need to draw both the terrain and the shapes with depth testing enabled so they do not obstruct other objects in the scene. Could someone suggest a solution to make sure the shapes are always rendered on top? Lifting them up is not really feasible; I need to replace the colors of the actual pixels on the terrain, and doing this in the pixel shader seems too expensive.
Thanks in advance.
I had a similar problem and this is how I solved it:
1. First render the terrain and keep the depth buffer. Do not render any objects.
2. Render a solid bounding box of the shape you want to put on the terrain. You need to make sure that your bounding box covers the full height range the shape covers; an over-conservative estimate is to use the global minimum and maximum elevation of the entire terrain.
3. In the pixel shader, read the depth buffer and reconstruct the world-space position.
4. Check whether this position is inside your shape. In your case you can check whether its xy (xz) projection is within the given distance from the center of your circle.
5. Transform this position into your shape's local coordinate system and compute the desired color.
6. Alpha-blend over the render target.
This method results in shapes perfectly aligned with the terrain surface. It also does not produce any artifacts and works with any terrain.
The possible drawback is that it requires using deferred-style shading and I do not know if you can do this. Still, I hope this might be helpful for you.
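As an illustration of steps 3 and 4 (reading the depth buffer, reconstructing the world-space position, and testing it against a circle), here is a numpy sketch of the math the pixel shader would do, assuming an OpenGL-style depth range (a D3D-style projection would skip the remapping of depth to [-1, 1]):

import numpy as np

def world_pos_from_depth(depth, uv, inv_view_proj):
    # depth: value in [0, 1] read from the depth buffer
    # uv:    screen-space coordinates in [0, 1]
    # inv_view_proj: inverse of (projection @ view), shape (4, 4)
    ndc = np.array([uv[0] * 2.0 - 1.0,     # map to normalized device coordinates [-1, 1]
                    uv[1] * 2.0 - 1.0,
                    depth * 2.0 - 1.0,
                    1.0])
    world = inv_view_proj @ ndc
    return world[:3] / world[3]            # perspective divide

def inside_circle_xz(world_pos, center_xz, radius):
    # Step 4 for the circle case: test the xz projection against the circle
    dx = world_pos[0] - center_xz[0]
    dz = world_pos[2] - center_xz[1]
    return dx * dx + dz * dz <= radius * radius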

How do I pass barycentric coordinates to an AGAL shader? (AGAL wireframe shader)

I would like to create a wire frame effect using a shader program written in AGAL for Stage3D.
I have been Googling and I understand that I can determine how close a pixel is to the edge of a triangle using barycentric coordinates (BC) passed into the fragment program via the vertex program, then colour it accordingly if it is close enough.
My confusion is in what method I would use to pass this information into the shader program. I have a simple example set up with a cube, 8 vertices and an index buffer to draw triangles between using them.
If I were to place the BCs into the vertex buffer, that wouldn't make sense, as they would need to be different depending on which triangle was being rendered; e.g. Vertex1 might need (1,0,0) when rendered with Vertex2 and Vertex3, but another value when rendered with Vertex5 and Vertex6. Perhaps I am not understanding the method completely.
Do I need to duplicate vertex positions and add the additional data into the vertex buffer, essentially making 3 vertices per triangle and tripling my vertex count?
Do I always give the vertex a (1,0,0), (0,1,0) or (0,0,1) value or is this just an example?
Am I over complicating this and is there an easier way to do wire-frame with shaders and Stage3d?
Hope that fully explains my problems. Answers are much appreciated, thanks!
It all depends on your geometry, and this problem is in fact a problem of graph vertex coloring: you need your geometry graph to be 3-colorable. A good starting point is the Wikipedia article.
Just as an example, let's assume that the (1, 0, 0) basis vector is red, (0, 1, 0) is green and (0, 0, 1) is blue. It's obvious that if you build your geometry using the following basic element,
then you can avoid duplicating vertices, because such a graph will be 3-colorable (i.e. each edge, and thus each triangle, will have differently colored vertices). You can tile this basic element in any direction, and the graph will remain 3-colorable:
You've stumbled upon the thing that drives me nuts about AGAL/Stage3D. Limitations in the API prevent you from using shared vertices in many circumstances. Wireframe rendering is one example where things break down...but simple flat shading is another example as well.
What you need to do is create three unique vertices for each triangle in your mesh. For each vertex, add an extra parameter (or design your engine to accept vertex normals and reuse those, since you won't likely be shading your wireframe).
Assign the three vertices of each triangle the unit vectors A[1,0,0], B[0,1,0] and C[0,0,1] respectively. This will get you started. Note that the obvious solution (thresholding in the fragment shader and conditionally drawing pixels) produces pretty ugly aliased results. Check out this page for some insight into techniques to anti-alias your fragment-program-rendered wireframes:
http://cgg-journal.com/2008-2/06/index.html
As I mentioned, you need to employ a similar technique (unique vertices for each triangle) if you wish to implement flat shading. Since there is no equivalent to GL_FLAT and no way to make the varying registers return an average, the only way to implement flat shading is for each vertex pass for a given triangle to calculate the same lighting...which implies that each vertex needs the same vertex normal.
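A small Python/numpy sketch of that vertex-duplication step (the actual buffers would of course be uploaded through Stage3D's VertexBuffer3D; this only shows how the per-triangle barycentric attributes are laid out):

import numpy as np

def add_barycentric(vertices, indices):
    # vertices: (N, 3) positions, indices: (M, 3) triangle indices.
    # Returns (M * 3, 3) duplicated positions and matching (1,0,0)/(0,1,0)/(0,0,1)
    # barycentric attributes, i.e. three unique vertices per triangle.
    vertices = np.asarray(vertices, dtype=float)
    indices = np.asarray(indices, dtype=int)
    positions = vertices[indices.reshape(-1)]
    barycentrics = np.tile(np.eye(3), (len(indices), 1))
    return positions, barycentrics

# Example: one quad made of two triangles sharing an edge
verts = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
tris = [[0, 1, 2], [0, 2, 3]]
pos, bary = add_barycentric(verts, tris)   # 6 duplicated vertices, 6 barycentric triples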

How to find pixel co-ordinates of corners of a square pattern?

This may not be programming related, but programmers would possibly be in the best position to answer it.
For camera calibration I have an 8 x 8 square pattern printed on a sheet of paper. I have to manually enter these co-ordinates into a text file. The software would then pick them up from there and compute the calibration parameters.
Is there a script or some software that I can run on these images and get the pixel co-ordinates of the 4 corners of each of the 64 squares?
You can do this with a traditional chessboard pattern (i.e. black and white squares with no gaps) using cvFindChessboardCorners(). You can read more about the function in the OpenCV API Reference and see some sample code in O'Reilly's OpenCV Book or elsewhere online. As an added bonus, OpenCV has built-in functions that calculate the intrinsic parameters of the camera and an array of extrinsic parameters for the multiple views of a planar calibration object.
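A minimal OpenCV (Python) sketch of that approach; the file name is hypothetical, the (7, 7) pattern size assumes an 8 x 8 board of alternating squares (which has 7 x 7 inner corners), and cornerSubPix is an optional extra step to refine the detections:

import cv2
import numpy as np

img = cv2.imread("calibration_target.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

found, corners = cv2.findChessboardCorners(gray, (7, 7))
if found:
    # Refine the detected corners to sub-pixel accuracy
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    np.savetxt("corners.txt", corners.reshape(-1, 2))  # one x,y pixel coordinate per line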
I would:
1. Apply a threshold and get a binarized image.
2. Apply a Sobel X filter to the image. You get an image with the vertical lines; these belong to the sides of the squares that are almost vertical. Keep this as image1.
3. Apply a Sobel Y filter to the image. You get an image with the horizontal lines; these belong to the sides of the squares that are almost horizontal. Keep this as image2.
4. Compute (image1 xor image2). You get a black image with white pixels indicating the corner positions.
Hope it helps.
I'm sure there are many computer vision libraries with varying capabilities and licenses out there, but one that I can remember off the top of my head is ARToolKit, which should be able to recognize this pattern. And if that's not possible, it comes with a set of very good patterns that are tailored so that they can be recognized even if they're partially obscured.
I don't know ARToolKit (although I've heard a lot about it), but with OpenCV this processing is trivial.