When GPS positioning is unavailable (and sometimes even when it is available), Google Maps for mobile shows a blue circle of uncertainty around the blue self-localization dot. What exactly, statistically, does this blue circle represent?
Is it the 95% confidence interval? Since it changes in size, I assume it is some representation of accuracy. Is it just a rough guideline, or are there actual numbers going into an accuracy calculation which is then represented visually?
We define accuracy as the radius of 68% confidence. You can get your current location, and its accuracy, via getLastLocation.
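For context, Android's `Location.getAccuracy()` documents this same 68%-confidence-radius convention. If you assume the horizontal error is roughly 2-D Gaussian, so the error radius follows a Rayleigh distribution (an assumption on my part, not something the Maps documentation states), you can convert the reported radius to other confidence levels:

```python
import math

def accuracy_radius(r68: float, p: float) -> float:
    """Convert a 68%-confidence radius to confidence level p,
    assuming the radial error follows a Rayleigh distribution."""
    # Rayleigh CDF: P(r) = 1 - exp(-r^2 / (2 * sigma^2))
    sigma = r68 / math.sqrt(-2.0 * math.log(1.0 - 0.68))
    return sigma * math.sqrt(-2.0 * math.log(1.0 - p))

# A reported 10 m accuracy is roughly a 16 m radius at 95% confidence.
print(round(accuracy_radius(10.0, 0.95), 1))  # → 16.2
```

So under this model, the 95% circle would be about 1.6 times the radius the API reports.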
Is there any code available in a Python library such as Astropy or a library in any other language that can:
Take the .fits image of a star as input
Locate the centroid of the star on the .fits image to a sub-pixel precision. The sub-pixel precision is necessary.
Place an image of the star, with its centroid positioned to sub-pixel precision, into a .fits file.
Again, the sub-pixel precision is what makes this project unique. All software out there that does similar processing (based on what I could find) only works down to the precision of a single pixel.
I have spent weeks reading through astronomy-related libraries available in Python and other languages, and I have found code that can obtain star centroids to within 1-pixel precision. But I have not been able to find any code in any library that can obtain the centroid to within sub-pixel precision. Any help offered would be greatly appreciated!
What do you mean by sub-pixel precision? The FITS file will have a header containing astrometry information, including resolution and projection. Anyway, it seems you want the center of a star at higher precision. You could upscale the image (say the resolution is 6'/pixel; change it to 2'/pixel or whatever precision you need), find the centroid coordinates in the upscaled image, and then transform those coordinates back to pixels at the original resolution. That will be a floating-point value, which I believe gives you the sub-pixel position.
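Note that a plain intensity-weighted centroid (center of mass) already returns floating-point, i.e. sub-pixel, coordinates without any upscaling; real libraries such as `scipy.ndimage.center_of_mass` and photutils' centroid functions do essentially this (plus background subtraction, which a real frame would need). A minimal NumPy sketch on a synthetic star:

```python
import numpy as np

# Synthetic star: a 2-D Gaussian placed at a deliberately non-integer position.
yy, xx = np.mgrid[0:32, 0:32]
true_y, true_x = 15.3, 16.7
image = np.exp(-((xx - true_x) ** 2 + (yy - true_y) ** 2) / (2 * 2.0 ** 2))

# Intensity-weighted centroid: the result is a float, i.e. sub-pixel,
# with no upscaling required.
total = image.sum()
cy = (yy * image).sum() / total
cx = (xx * image).sum() / total
print(cy, cx)  # close to (15.3, 16.7)
```

On real data you would first subtract the sky background, otherwise the background level biases the centroid toward the image center.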
For a given geoJSON file, which in this case is the county boundaries for Illinois, I'm overlapping the google.maps.Data() layer and D3.geo.path() with mercator projection, and I notice a displacement between both projections.
The google data layer (in red) is an object which displays in the same canvas as any google overlay (polygons, markers, polylines, etc) while the D3.geo.path() (in blue) shows up in an SVG container I overlap on top of the google container.
As you can see, the westernmost counties show the google borderline to the left of the D3 line, while towards the east, the D3 is the one displaced to the left.
At first I thought I was mis-centering the SVG container, but this example shows that the displacement is not uniform, so I can't fix it playing with SVG transform matrix.
This phenomenon happens with any geoJSON, no matter the zoom level or the part of the world I try. Now, this is strange, because if d3.geo.mercator() were mistranslating coordinates to pixels, the miscalculation would be uniform. It isn't an approximation issue either, because it doesn't vary with zoom.
Is this just a minor discrepancy between different projection engines? I wouldn't expect Google, D3, Leaflet and OpenLayers to translate a coordinate pair to the exact same pixel, but I don't want to ignore this one in case I'm doing something wrong.
Any ideas would be appreciated. #mbostock, I summon thee.
EDIT: It took me a while to come up with a self-contained example, but it helped me understand better what was happening under the hood.
Please see http://bl.ocks.org/amenadiel/ba21cbada391e053d899
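For what it's worth, both libraries implement the same spherical Web Mercator math (d3's Mercator differs only by scale and translation), so the formula itself is unlikely to be the source of the offset; the difference has to come from how each layer is placed in its container. A sketch of the projection onto Google's 256-pixel "world" square, for checking individual coordinates by hand:

```python
import math

TILE_SIZE = 256  # Google's base "world" is a 256x256 square at zoom 0

def latlng_to_world(lat: float, lng: float) -> tuple:
    """Spherical (Web) Mercator projection into world coordinates."""
    siny = math.sin(math.radians(lat))
    siny = min(max(siny, -0.9999), 0.9999)  # clamp near the poles
    x = TILE_SIZE * (0.5 + lng / 360.0)
    y = TILE_SIZE * (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi))
    return x, y

print(latlng_to_world(0.0, 0.0))  # → (128.0, 128.0), the center of the world square
```

Multiply by 2^zoom to get pixel coordinates at a given zoom level.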
We are creating a Google Map with a heatmap layer. We are trying to show the energy use of ~1300 companies spread out over the United States. For each company we have its lat/long and energy use in kWh. Our plan is to weight the companies on the heatmap by their kWh use. We have been able to produce the map with the heatmap layer; however, because we have such a huge variance in energy use (ranging from thousands to billions of kWh), the companies using smaller amounts of energy are not showing up at all. Even when you zoom in on their location, you can't see any coloring on the map.
Is there a way to have all companies show up in the heatmap, no matter how small their energy use is? We have tried setting the MaxIntensity, but still have some of the smaller companies not showing up. We are also concerned about setting the MaxIntensity too low, since we would then be treating a company using 50 million kWh the same as one using 3 billion kWh. Is there any way to set a MinIntensity? Or to have some coloring visible on the map for all the companies?
Heatmap layers accept a gradient property, expecting an array of colors as its value. These colors are always mapped linearly against your sample, starting from zero. Also, the first color (let's say, gradient[0]) should be transparent, for it's supposed to map zeroes or nulls. If you give a non-transparent color to the first gradient point, then the whole world will have that color.
This means that if, for example, you supply a gradient of 20 points, all points weighing less than 1/20th of the maximum will be shown interpolated between gradient[0] (transparent) and gradient[1] (the first non-transparent color in your gradient). This results in semi-transparent data points for non-normalized samples.
If you need to somehow flatten your values universe, you'll have to feed the Heatmap with precomputed values. For example, the value of log(kWh) will be a flatter curve to represent.
Another workaround would be to offset every value with a fraction of the maximum (for example, 10% of the maximum), so the minimum will be displaced from the zero in at least one color interval.
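The log-flattening idea from the answer above can be sketched as follows (the kWh values are made up for illustration):

```python
import math

# Raw energy use spans six orders of magnitude, so small consumers
# fall entirely inside the transparent gradient[0] interval.
raw_kwh = [5_000, 80_000, 50_000_000, 3_000_000_000]

# Log-transform flattens the dynamic range: every company now lands
# at least a few color intervals above zero.
weights = [math.log10(kwh) for kwh in raw_kwh]
print([round(w, 2) for w in weights])  # → [3.7, 4.9, 7.7, 9.48]
```

You would then feed these precomputed weights to the heatmap instead of the raw kWh figures; the smallest company now sits at roughly a third of the maximum weight rather than one six-hundred-thousandth of it.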
I'm trying to render geometrical shapes over uneven terrain (loaded from a heightmap; the shapes' geometry is also generated based on averaged heights across the heightmap, but they do not fit it exactly). I have the following problem: sometimes the terrain shows through the shape, as shown in the picture.
I need to draw both the terrain and the shapes with depth testing enabled so they do not obstruct other objects in the scene. Could someone suggest a solution to make sure the shapes are always rendered on top? Lifting them up is not really feasible; I need to replace the colors of the actual pixels on the terrain, and doing this in the pixel shader seems too expensive.
Thanks in advance.
I had a similar problem and this is how I solved it:
1. First render the terrain and keep the depth buffer. Do not render any objects yet.
2. Render a solid bounding box of the shape you want to put on the terrain. Make sure the bounding box covers the full height range the shape covers; an over-conservative estimate is to use the global minimum and maximum elevation of the entire terrain.
3. In the pixel shader, read the depth buffer and reconstruct the world-space position.
4. Check whether this position is inside your shape. In your case, check whether its xy (xz) projection is within the given distance from the center of your circle.
5. Transform this position into your shape's local coordinate system and compute the desired color.
6. Alpha-blend over the render target.
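The containment test from step 4 can be illustrated in Python/NumPy (in practice this logic lives in the pixel shader, in HLSL or GLSL; the buffer shape and values here are invented for the example):

```python
import numpy as np

def decal_mask(world_pos: np.ndarray, center: np.ndarray, radius: float) -> np.ndarray:
    """Vectorized over an (H, W, 3) buffer of reconstructed world positions:
    keep pixels whose horizontal (x, z) projection lies within `radius`
    of the circle's center."""
    dx = world_pos[..., 0] - center[0]
    dz = world_pos[..., 2] - center[2]
    return dx * dx + dz * dz <= radius * radius

# Toy 2x2 buffer of world positions (x, y, z); y (elevation) is ignored,
# which is exactly why the decal hugs the terrain regardless of height.
positions = np.array([[[0.0, 5.0, 0.0], [3.0, 7.0, 0.0]],
                      [[0.0, 2.0, 3.0], [3.0, 1.0, 3.0]]])
mask = decal_mask(positions, center=np.array([0.0, 0.0, 0.0]), radius=3.5)
print(mask)  # the far corner (3, 3) falls outside the circle
```

Pixels where the mask is true would then get the shape's color alpha-blended over them (step 6).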
This method results in shapes perfectly aligned with the terrain surface. It also does not produce any artifacts and works with any terrain.
The possible drawback is that it requires using deferred-style shading and I do not know if you can do this. Still, I hope this might be helpful for you.
Given a particular zoom level, how accurate is the scale provided by the satellite view in google maps?
Can one use it to ~accurately determine the square footage of a given building in the picture?
Thanks.
The imagery is very accurate, and at the finest zoom levels (19 or 20) you will be able to perform area calculations with great precision. The location information in Google Maps would definitely be more accurate than trying to get readings from a handheld GPS device (there are some apps out there that let you walk around a perimeter setting waypoints, and then calculate the internal area based on those waypoints).
Here is a relatively painless utility that demonstrates this:
http://www.daftlogic.com/projects-google-maps-area-calculator-tool.htm
One issue if you are trying to calculate square footage from the imagery, however, would be determining the number of stories.
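The waypoint-based area calculation mentioned above can be sketched in Python. This uses an equirectangular local projection plus the shoelace formula, which is a reasonable approximation for building-sized polygons (an assumption; large or high-latitude polygons would need a proper geodesic calculation):

```python
import math

def polygon_area_m2(latlngs: list) -> float:
    """Approximate area of a small polygon given (lat, lng) waypoints:
    project to local meters around the polygon's mean latitude,
    then apply the shoelace formula."""
    lat0 = math.radians(sum(p[0] for p in latlngs) / len(latlngs))
    m_per_deg = 111_320.0  # meters per degree of latitude
    pts = [(lng * m_per_deg * math.cos(lat0), lat * m_per_deg)
           for lat, lng in latlngs]
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# A square roughly 100 m on a side near the equator: about 10,000 m^2.
square = [(0.0, 0.0), (0.0, 0.0009), (0.0009, 0.0009), (0.0009, 0.0)]
print(round(polygon_area_m2(square)))
```

Utilities like the one linked above do essentially this with the waypoints you click on the map.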
Not sure about the accuracy. At the 200 ft. zoom level I superimposed the scale over Rice Stadium in Houston and it shows the playing field as a little over 200 ft. long and 50 meters wide. That means the width is about right but the length is way off since the standard football field is 300 ft. long. Probably has something to do with the angle of the photo. If the satellite is directly overhead it's probably more accurate. Just a thought.
The graphic scale is not consistent as you zoom in and out.
I placed images from two zoom levels into a CAD program and scaled them by measuring the graphic scale; I got two maps of different sizes.