Plotting maps using OSM or other shapefiles and matplotlib for a standardized report - GIS

We are developing a standardized report for our activities. The last graph I need is to display the geographic area of the activities (there are close to 100 locations).
The output for these reports is PDF at letter or A4 size.
The report is a matplotlib figure, where:
fig = plt.figure(figsize=(8.5, 11))
rect0 = 0, .7, 0.18, 0.3
rect1 = .3, .7, .18, .3
rect2 = .8, .29, .2, .7
rect3 = 0, 0, .8, .4
ax1 = fig.add_axes(rect0)
ax2 = fig.add_axes(rect1)
ax3 = fig.add_axes(rect2)
ax4 = fig.add_axes(rect3)
The contents and layout for axes 1-3 are settled and work great. However ax4 is where the map contents would be displayed (ideally).
I was hoping to do something like this:
map1 = Basemap(llcrnrlon=6.819087, llcrnrlat=46.368452, urcrnrlon=6.963978,
               urcrnrlat=46.482906, resolution='h', projection='tmerc',
               lon_0=6.88, lat_0=46.42, ax=ax4)
map1.readshapefile('a valid shape file that works') #<----- this is the sticking point
map1.draw(insert locator coordinates)
plt.savefig(report to be inserted to document)
plt.show()
However, I have not been successful in obtaining a shapefile that works from OpenStreetMap or other GIS sources.
Nor have I identified the correct process to transform the data from OpenStreetMap.
Nor have I identified the process to extract that information from the OSM XML document or the transformed GeoJSON document.
Ideally I would like to grab the bounding box information from openstreetmaps and generate the map directly.
What is the process to get a shapefile that works with the .readshapefile() call?
Or alternatively, how do I get the defined map into a matplotlib axes?

It might be easiest to use the cartopy.io.img_tiles module, which will automatically pull the OSM tiles for use with cartopy. Using the pre-rendered tiles would spare you the trouble of handling and styling individual shapefiles/XML.
See the cartopy docs on using these tiles within cartopy.
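A minimal sketch of that approach, slotted into the figure layout from the question (this assumes cartopy is installed; the zoom level and the single marker are purely illustrative):
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from cartopy.io.img_tiles import OSM

tiler = OSM()  # pre-rendered OpenStreetMap tiles

fig = plt.figure(figsize=(8.5, 11))
# Same layout as in the question; only ax4 needs a map projection.
ax4 = fig.add_axes([0, 0, 0.8, 0.4], projection=tiler.crs)

# Bounding box from the question: lon_min, lon_max, lat_min, lat_max
ax4.set_extent([6.819087, 6.963978, 46.368452, 46.482906], crs=ccrs.PlateCarree())
ax4.add_image(tiler, 12)  # zoom level: higher = more detail, slower to fetch

# Activity locations can then be plotted in lon/lat (example point):
ax4.plot(6.88, 46.42, marker='o', color='red', transform=ccrs.PlateCarree())

fig.savefig('report.pdf')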

Related

How can I concatenate the 4 corners of an image quickly when loading images in deep learning?

What is the most effective way to concatenate the 4 corners, as shown in this photo?
(This is done in __getitem__().)
left_img = Image.open('image.jpg')
...
output = right_img
This is how I would do it.
First, I would temporarily convert the image to a tensor:
from torchvision import transforms
tensor_image = transforms.ToTensor()(image)
Now assume you have a 3-channel image (although similar principles apply to matrices with any number of channels, including 1-channel grayscale images).
You can find the red channel with tensor_image[0], the green channel with tensor_image[1], and the blue channel with tensor_image[2].
You can make a for loop iterating through each channel like
for i in range(tensor_image.size(0)):
    curr_channel = tensor_image[i]
Now, inside that for loop, you can extract from each channel the
top-left corner pixel with float(curr_channel[0][0])
the top-right corner pixel with float(curr_channel[0][-1])
the bottom-left corner pixel with float(curr_channel[-1][0])
and the bottom-right corner pixel with float(curr_channel[-1][-1])
Make sure to convert all the pixel values to float or double values before this next appending step
Now you have four values that correspond to the corner pixels of each channel
Then you can make a list called new_image = []
You can then append the above mentioned pixel values using
new_image.append([[float(curr_channel[0][0]), float(curr_channel[0][-1])], [float(curr_channel[-1][0]), float(curr_channel[-1][-1])]])
After iterating through every channel, you should have a big list that contains three (or, more generally, tensor_image.size(0)) lists of lists.
The next step is to convert this list of lists of lists to a torch.tensor by running
new_image = torch.tensor(new_image)
To make sure everything is right, new_image.size() should return torch.Size([3, 2, 2]).
If that is the case, you now have the image you wanted, but in tensor format.
The way to convert it back to PIL is to run
final_pil_image = transforms.ToPILImage()(new_image)
If everything went well, you should have a PIL image that fulfills your task. The only code it uses is clever indexing and one for loop.
It may be possible, with more clever indexing, to avoid the for loop entirely and perform the operation on all the channels at once.
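For what it's worth, here is a loop-free sketch of that idea using plain tensor indexing (the random tensor is just a stand-in for transforms.ToTensor()(image)):
import torch
from torchvision import transforms

tensor_image = torch.rand(3, 224, 224)   # stand-in for transforms.ToTensor()(image)

# Pick the first/last row, then the first/last column, of every channel at once
corners = tensor_image[:, [0, -1], :][:, :, [0, -1]]
print(corners.shape)                      # torch.Size([3, 2, 2])

final_pil_image = transforms.ToPILImage()(corners)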
I don't know how quick this is but here:
import numpy as np
from PIL import Image

img = np.array(Image.open('image.jpg'))
h, w = img.shape[0], img.shape[1]
# the window size:
r = 4
upper_left = img[:r, :r]
lower_left = img[h-r:, :r]
upper_right = img[:r, w-r:]
lower_right = img[h-r:, w-r:]
upper_half = np.concatenate((upper_left, upper_right), axis=1)
lower_half = np.concatenate((lower_left, lower_right), axis=1)
img = np.concatenate((upper_half, lower_half))
or short:
upper_half = np.concatenate((img[:r, :r], img[:r, w-r:]), axis=1)
lower_half = np.concatenate((img[h-r:, :r], img[h-r:, w-r:]), axis=1)
img = np.concatenate((upper_half, lower_half))

Maps into Forge Viewer

I am trying to follow the steps in https://forge.autodesk.com/blog/add-mapbox-google-maps-forge-viewer but I can't place the model correctly on the map.
I am running the functions listed here: https://learn.microsoft.com/en-us/bingmaps/articles/bing-maps-tile-system:
LatLongToPixelXY(latitude, longitude, 7, out pixelX, out pixelY);
PixelXYToTileXY(pixelX, pixelY, out tileX, out tileY);
The result pixelX = 16225, pixelY = 12249, tileX = 63, tileY = 47.
I substitute the previous values:
map.position.set(16225,12249,-45);
class MapPlaneNode extends MapNode {
constructor(parentNode = null, mapView = null, location = MapNode.ROOT, level = 7, x = 63, y = 47)
The result is that the model comes out small and not positioned correctly. In the image, the red arrow is where the model is inserted, and the green arrow is where it should be.
image of result
What am I doing wrong?
Thank you very much
Positioning the model is a little tricky.
In the demo I created, I originally used world coordinates, where I set the root tile to level 0 and used the lat/long coordinate utils function to position the Revit model in the correct location.
Unfortunately, the precision caused a rendering problem with the post-renderer (line edges were missing, and there were some strange z-fighting precision issues)...
So I decided to hack the level, move the map into the position I wanted, and center the Revit model at the origin (0, 0, 0).
This made things a lot more manual and rather tricky, but it got around the rendering issue and also limited the user into a small area in the world, which I preferred.
I suggest changing the root tile back to zero, and adjusting the model's globalOffset position to the value from the lat/long WGS84 utils. See the blog post and also the coordinates section of the geo-three repo for more details: https://github.com/tentone/geo-three#coordinates
Found a trick to adjust the map. It is still manual but it's fairly quick:
Calculate Tile X and Y (you did that step already, it's just for reference):
Copy the TileSystem class from the bing-maps-tile-system link you posted into https://dotnetfiddle.net/
(you'll also need to add: using System.Text)
Change the Main method as follows:
public static void Main()
{
    int pixelX, pixelY, tileX, tileY;
    TileSystem.LatLongToPixelXY(YOUR LAT HERE, YOUR LONG HERE, 7, out pixelX, out pixelY);
    Console.WriteLine("LatLongToPixelXY: " + pixelX.ToString() + ", " + pixelY.ToString());
    TileSystem.PixelXYToTileXY(pixelX, pixelY, out tileX, out tileY);
    Console.WriteLine("PixelXYToTileXY: " + tileX.ToString() + ", " + tileY.ToString());
}
This will give you the TileX and Tile Y that you'll need to replace in the Extension.
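If you'd rather not set up the C# fiddle, the same two helpers can be approximated in Python. This is a rough port of the formulas in the Bing Maps tile-system article linked above (256-px tiles; the coordinates below are arbitrary examples, substitute your own):
import math

def lat_long_to_pixel_xy(lat, lon, level, tile_size=256):
    # Clamp to the latitude/longitude range the Bing tile system supports
    lat = min(max(lat, -85.05112878), 85.05112878)
    lon = min(max(lon, -180.0), 180.0)
    map_size = tile_size * 2 ** level
    x = (lon + 180.0) / 360.0
    sin_lat = math.sin(lat * math.pi / 180.0)
    y = 0.5 - math.log((1 + sin_lat) / (1 - sin_lat)) / (4 * math.pi)
    pixel_x = int(min(max(x * map_size + 0.5, 0), map_size - 1))
    pixel_y = int(min(max(y * map_size + 0.5, 0), map_size - 1))
    return pixel_x, pixel_y

def pixel_xy_to_tile_xy(pixel_x, pixel_y, tile_size=256):
    return pixel_x // tile_size, pixel_y // tile_size

px, py = lat_long_to_pixel_xy(47.6062, -122.3321, 7)   # example coordinates only
print(px, py, pixel_xy_to_tile_xy(px, py))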
Calculate Position
In the Extension, set the X, Y position to 0, 0, and adjust the Z so that the map is below your model
map.position.set(0, 0, z);
Run the Extension and see where your project lands on the map. Now locate this landing point in Google Maps (I found it useful at this stage to search the map using a corner between two streets, for example: Parker St & Wilson Rd). When you've found it, click on the landing point in Google Maps to place a marker, then right-click on the marker and select Measure Distance. You will have to measure the distance to your destination both vertically and horizontally (not directly to it). For example, you'll get dH = 43.5 km and dV = 17.8 km.
And this is where the magic happens: multiply both numbers by 3400 if your distance is in km (or by 2113 if your distance is in miles) and set the position with those values:
dH * 3400 = 147900
dV * 3400 = 60520
If your destination is to the E or S, use positive values.
If your destination is to the W or N, use negative values.
map.position.set(147900, -60520, z);
Now it won't be perfect, but it'll be close enough to finish adjusting the value manually.
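Restated as a tiny script (the example distances, the 3400 factor, and the sign convention are taken from the steps above; z is whatever height puts the map below your model):
dH_km = 43.5          # horizontal distance measured in Google Maps
dV_km = 17.8          # vertical distance measured in Google Maps

x = dH_km * 3400      # positive because the destination lies to the E (W would be negative)
y = -(dV_km * 3400)   # negative because the destination lies to the N (S would be positive)

print(x, y)           # 147900.0 -60520.0, i.e. map.position.set(147900, -60520, z)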

Forge function generateTexture()

In the following example, there is a function called generateTexture().
Is it possible to draw text (numbers) into the pixel array? Or is it possible to draw text (numbers) on top of that shader?
Our goal is to draw a circle with a number inside of it.
https://forge.autodesk.com/blog/using-dynamic-texture-inside-custom-shaders
UPDATE:
We noticed that each circle can't use a unique generateTexture(); the generateTexture() result is shared by every single one of them. The only things that can be customized per object are the color and which texture is used.
We could create a workaround for this: generate every texture from 0 to 99, and then have each object choose the correct texture based on the number we want to display. We don't know if this will be efficient enough to work properly, though. Otherwise, it might have to be 0 to 9+ or something in that direction. Any guidance on our updated question would be really appreciated. Thanks.
I am able to successfully display text with the following code. Simply replace generateTexture() with generateCanvasTexture() in the sample and you should get the result below:
const generateCanvasTexture = () => {
    const canvas = document.createElement("canvas")
    const ctx = canvas.getContext('2d')
    ctx.font = '20pt Arial'
    ctx.textAlign = 'center'
    ctx.textBaseline = 'middle'
    ctx.fillText(new Date().toLocaleString(),
        canvas.width / 2, canvas.height / 2)
    const canvasTexture = new THREE.Texture(canvas)
    canvasTexture.needsUpdate = true
    canvasTexture.flipX = false
    canvasTexture.flipY = false
    return canvasTexture
}
It is possible, but you would need to implement it yourself. Shaders are a pretty low-level feature, so there is no way to directly draw a number or text, but you can convert a given character into its representation as a 2D pixel array.
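As a rough, non-Forge-specific illustration of that last point, here is one way to rasterize a single digit into a 2-D pixel array with Pillow (the font, canvas size, and placement are arbitrary):
from PIL import Image, ImageDraw, ImageFont
import numpy as np

def digit_to_pixels(digit, size=64):
    # Draw the character onto a small grayscale canvas, then read it back as an array
    img = Image.new("L", (size, size), color=0)
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    draw.text((size // 3, size // 3), str(digit), fill=255, font=font)
    return np.asarray(img)       # 2-D array of 0-255 intensities

pixels = digit_to_pixels(7)
print(pixels.shape)              # (64, 64)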

Graphhopper: Cannot create location index when graph has invalid bounds

I am using GraphHopper 0.8 via Maven in my Java project. I create a network with the following code:
FlagEncoder encoder = new CarFlagEncoder();
EncodingManager em = new EncodingManager(encoder);
// Creating and saving the graph
GraphBuilder gb = new GraphBuilder(em).
setLocation(testDir).
setStore(true).
setCHGraph(new FastestWeighting(encoder));
GraphHopperStorage graph = gb.create();
for (Node node : ALL NODES OF MY NETWORK) {
    graph.getNodeAccess().setNode(uniqueNodeId, nodeX, nodeY);
}
for (Link link : ALL LINKS OF MY NETWORK) {
    EdgeIteratorState edge = graph.edge(fromNodeId, toNodeId);
    edge.setDistance(linkLength);
    edge.setFlags(encoder.setProperties(linkSpeedInMeterPerSecond * 3.6, true, false));
}
Weighting weighting = new FastestWeighting(encoder);
PrepareContractionHierarchies pch = new PrepareContractionHierarchies(graph.getDirectory(), graph, graph.getGraph(CHGraph.class), weighting, TraversalMode.NODE_BASED);
pch.doWork();
graph.flush();
LocationIndex index = new LocationIndexTree(graph.getBaseGraph(), graph.getDirectory());
index.prepareIndex();
index.flush();
At this point, the bounding box saved in the graph shows the correct numbers. Files are written to disk including the "location_index". However, reloading the data gets me the following error
Exception in thread "main" java.lang.IllegalStateException: Cannot create location index when graph has invalid bounds: 1.7976931348623157E308,1.7976931348623157E308,1.7976931348623157E308,1.7976931348623157E308
at com.graphhopper.storage.index.LocationIndexTree.prepareAlgo(LocationIndexTree.java:132)
at com.graphhopper.storage.index.LocationIndexTree.prepareIndex(LocationIndexTree.java:287)
The reading is done with the following code
FlagEncoder encoder = new CarFlagEncoder();
EncodingManager em = new EncodingManager(encoder);
GraphBuilder gb = new GraphBuilder(em).
setLocation(testDir).
setStore(true).
setCHGraph(new FastestWeighting(encoder));
// Load and use the graph
GraphHopperStorage graph = gb.load();
// Load the index
LocationIndex index = new LocationIndexTree(graph.getBaseGraph(), graph.getDirectory());
if (!index.loadExisting()) {
index.prepareIndex();
}
So LocationIndexTree.loadExisting runs fine until entering prepareAlgo. At this point, the graph is loaded. However, the bounding box is not set and is kept at the defaults?! Reading the location index does not update the bounding box; hence the error downstream. What am I doing wrong? How do I preserve the bounding box in the first place? How can I reconstruct the bbox?
TL;DR: Don't use Cartesian coordinates; stick to the WGS84 coordinates used by OSM.
A Cartesian coordinate system such as EPSG:25832 may have coordinates in the range of millions. After performing some math, the coordinates may increase further in magnitude. Eventually, GraphHopper stores the coordinates as integers, so all coordinates may end up as Integer.MAX_VALUE. Hence, an invalid bounding box.
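A back-of-the-envelope illustration of why the projected coordinates overflow (the scaling factor used here is only an assumption about the order of magnitude of GraphHopper's internal degrees-to-int conversion, not its exact value):
INT_MAX = 2**31 - 1        # 2147483647, the ceiling for a Java int
SCALE = 5_000_000          # assumed degrees-to-int factor (order of magnitude only)

wgs84_lon = 13.4           # a longitude in degrees
utm_easting = 500_000.0    # a typical EPSG:25832 easting, in metres

print(wgs84_lon * SCALE)               # 67,000,000 -> fits comfortably in an int
print(utm_easting * SCALE > INT_MAX)   # True -> the stored coordinate is meaningless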

Projection drift when rendering in WebGL over Google Map

I am trying to implement WebGL-based rendering on Google Maps (API v3), as I want to render a massive amount of dynamic geometries.
Basically, I create a google.maps.OverlayView with a WebGL canvas attached to the map.
However, I encountered some problems with the mapping of the projection. Basically, I extracted the "fromLatLngToPoint" function from the Google Maps API as follows:
function fromLatLngToPoint(a){
var c={x:0,y:0},
d=this.j;
c.x=d.x+a.lng*this.B;
var e=oe(m.sin(re(a.lat)),-(1-1E-15),1-1E-15);
c.y=d.y+.5*m.log((1+e)/(1-e))*-this.F;
return c
}
function oe(a,b,c){null!=b&&(a=m.max(a,b));null!=c&&(a=m.min(a,c));return a}
function re(a){return m.PI/180*a}
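For readability, that minified function is essentially the standard Web Mercator world-coordinate projection from the Google Maps documentation; a plain sketch of it (256-px world at zoom 0, example coordinates only):
import math

TILE_SIZE = 256   # the "world coordinate" space is 256 x 256 pixels at zoom 0

def from_lat_lng_to_point(lat, lng):
    siny = math.sin(lat * math.pi / 180.0)
    siny = min(max(siny, -(1 - 1e-15)), 1 - 1e-15)   # keep the log term finite near the poles
    x = TILE_SIZE * (0.5 + lng / 360.0)
    y = TILE_SIZE * (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi))
    return x, y

# Pixel coordinates at a given zoom are world coordinates scaled by 2**zoom
x, y = from_lat_lng_to_point(40.0, -74.0)             # example coordinates
print(x * 2**12, y * 2**12)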
Then I implemented it in my vertex shader based on the documentation on Google Maps coordinates.
Basically, I have an event listener that sends the updated projection constants, the viewport bounds, and the zoom level to my shader.
My shader then calculates the new screen coordinates based on these inputs.
highp float e, x, y, offsetY, offsetX;
// projection transformation for target points
e = sin(p.y* PI/180.0);
y = prj_y + 0.5 * log((1.0+e)/(1.0-e))*(-F);
x = prj_x + p.x*B;
// projection transformation for offset (bounds)
e = sin(bound_y*PI/180.0);
offsetY = prj_y + 0.5 * log((1.0+e)/(1.0-e))*(-F);
offsetX = prj_x + bound_x*B;
// calculate actual pixel coord wrt zoom/numTiles
x = (x* numTiles - offsetX* numTiles);
y = (y* numTiles - offsetY* numTiles);
gl_PointSize = 5.0;
gl_Position = projectionMatrix * modelViewMatrix * vec4(x,y,0.0,1.0);
However, as shown in the screenshot below, it seems there are some errors: the rendered geometries are distorted. (I used the Google Maps polygon API to render some of the geometries for comparison.)
Screenshot Here
I am totally at a loss as to what might be the reason for this distortion.
I suspect that the single precision in the shader is giving rise to the error, so I am wondering if there is any workaround.
It is hard to debug this piece of code and diagnose the cause of the issue. I would suggest using the CanvasLayer library, which hides all these concrete details of computing the coordinates at which to draw the polygons. You would instead be able to focus on your app code and functionality, and the rendering performance of the projected image will also be better.
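On the single-precision suspicion raised in the question: a quick back-of-the-envelope check shows how little resolution float32 leaves once world coordinates are multiplied by the tile count (zoom 21 assumed; the exact error depends on the coordinate):
import numpy as np

world_x = 128.123456789                     # a Mercator world coordinate (0..256)
num_tiles = 2 ** 21                         # tile count at zoom 21

exact = world_x * num_tiles                 # double precision
single = float(np.float32(world_x)) * num_tiles
print(exact - single)                       # typically several pixels of error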