I'm trying to replicate this example from the dash-leaflet documentation, but for world countries instead of US states. However, when I run the code from the documentation on my machine I don't see the blue state borders in the output visual.
I figured this was because I don't have the right GeoJSON data locally, so I downloaded some country-border GeoJSON data from here, but it's unclear to me how to get the dl.GeoJSON component to make use of that data. How can I get country borders to show up on the world map in the same way the states do in the linked example?
You should set the url property of the GeoJSON component to point to the data that you want to visualize. For all countries, as shown in your link, the code would be along the lines of:
import dash_leaflet as dl
from dash import Dash

# A URL pointing to the data that you want to show.
url = "https://pkgstore.datahub.io/core/geo-countries/countries/archive/23f420f929e0e09c39d916b8aaa166fb/countries.geojson"

# Create example app.
app = Dash()
app.layout = dl.Map(children=[dl.TileLayer(), dl.GeoJSON(url=url)],
                    style={'width': '100%', 'height': '50vh', 'margin': "auto", "display": "block"})

if __name__ == '__main__':
    app.run_server()
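If you would rather use the GeoJSON file you downloaded instead of a remote URL, one option is to serve it from Dash's assets folder, which Dash exposes automatically under /assets/. This is a minimal sketch, assuming you saved the file as assets/countries.geojson next to your script (that filename is an assumption; adjust it to whatever you downloaded):

import dash_leaflet as dl
from dash import Dash

# Assumes the downloaded file lives at ./assets/countries.geojson;
# Dash serves everything in the assets folder under the /assets/ route.
url = "/assets/countries.geojson"

app = Dash()
app.layout = dl.Map(children=[dl.TileLayer(), dl.GeoJSON(url=url)],
                    style={'width': '100%', 'height': '50vh', 'margin': "auto", "display": "block"})

if __name__ == '__main__':
    app.run_server()

As far as I know, the GeoJSON component also has a data property that accepts the parsed GeoJSON directly, but the url route is usually the lighter option for a file of this size.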
Related
I am trying to build a pygame editor with Python and tkinter.
I have this situation: on a tkinter canvas I can place tiles neatly and without overlapping (see attached image, tiles are taken from Ghosts 'n Goblins).
Now I would like to export what is displayed on the canvas to a csv file:
for instance, if in a particular cell there's no image I would like to export "-1" to the csv file;
if in the cell nearby there's an image (let's say 1.png) I would like to export "1" to the csv;
if the image displayed is "2.png" I would like to export "2" and so on.
I am able to open a csv file and write to it, but I cannot retrieve the information I need from tkinter.
I think I could use some kind of for loop, but I do not know where to start.
I've tried:
img_on_grid = []

def export_csv():
    for row in range(8):
        test_1 = canvas.itemcget("image_B", "image")
        img_on_grid.append(test_1)
    print(img_on_grid)
but it doesn't work: it just repeats the same item id.
Any help will be appreciated.
I started using Cesium for rendering 3D maps and was trying to add Point of Interest data on top of this 3D view. I tried local PNG icons and it worked. I also realized I can add icons from the built-in assets. I tried the code below and it worked perfectly. I have a varied set of PoIs, but I could not find the labels (icon IDs) with which I can add them onto the map.
For example, to add a hospital I used "hospital" as the icon ID.
var hospitalPin = Cesium.when(
  pinBuilder.fromMakiIconId("hospital", Cesium.Color.RED, 48),
  function (canvas) {
    return viewer.entities.add({
      name: "Hospital",
      position: Cesium.Cartesian3.fromDegrees(77.311, 32.826),
      billboard: {
        image: canvas.toDataURL(),
        verticalOrigin: Cesium.VerticalOrigin.BOTTOM,
      },
    });
  }
);
In a similar way, is there any reference for the icon IDs that I can use in my code to represent the PoIs? Any help is appreciated.
I found the source images located at \Build\Cesium\Assets\Textures\maki in the Cesium library, which I think can be used.
The image you posted is from the PinBuilder docs, but check out the original source for that: it's a screenshot of the Cesium Sandcastle GeoJson Demo.
As you noticed, the images of the individual pins are stored in a folder called maki, which contains files with names like this:
airport.png
alcohol-shop.png
america-football.png
art-gallery.png
bakery.png
bank.png
bar.png
baseball.png
...
Just strip the .png off the end of the image filename, and that's it. That's the ID.
The dashes are allowed in the ID:
pinBuilder.fromMakiIconId("america-football", Cesium.Color.RED, 48)
But the first few pins in the original screenshot don't have corresponding Maki icons; they're simple letters or numbers. You can build pins from text instead, using a different function, like this:
pinBuilder.fromText("A", Cesium.Color.RED, 48)
You can also build pins with Unicode characters on them:
pinBuilder.fromText("\u267b", Cesium.Color.RED, 48)
And finally, if you have a custom PNG file of your own, similar to a Maki icon but customized for your app's needs, you can build a pin directly from the URL of that PNG file.
pinBuilder.fromUrl(url, Cesium.Color.RED, 48)
I am using a CesiumJS instance to display a base map of the Earth using an imageryProvider from source A.
var viewer = new Cesium.Viewer('cesiumContainer', { imageryProvider: providerA });
Now, while using the Viewer, I would like to be able to switch this map to imagery from providerB when a certain event occurs.
I tried:
viewer.scene.imageryLayers.get(0).imageryProvider.url = providerB.url
However, that does not seem to work, and it also feels quite like a hack anyway.
I could not find anything in Cesium's documentation.
Is this at all possible without restarting / recreating the viewer instance?
I know that there is a Cesium.BaseLayerPicker (https://cesium.com/docs/cesiumjs-ref-doc/BaseLayerPicker.html).
However, I do not see what method this picker calls on "select".
Thanks a lot.
The BaseLayerPicker widget calls this code when the user selects a new layer.
There's a lot of boilerplate widget management in that block of code, but for your sake, only a couple of the lines are critical. First, the old existing active imagery layer is searched for, and removed:
imageryLayers.remove(layer);
Then, a new imagery provider is constructed and added at index 0, the first position, which is the base imagery layer:
imageryLayers.addImageryProvider(newProviders, 0);
You can directly change the URL of the provider, but you should also change the appropriate parameters ("layers" in the case of WMS; "layer", "style", "format", "tileMatrixSetID", ... in the case of WMTS) depending on the type of provider (WMS or WMTS).
I have a basic three.js scene in which I am attempting to get objects exported from Blender (as JSON files with embedded morphs) to function and update their shapes with user input.
Here is a test scene
http://onthez.com/temphosting/three-js-morph-test/morph-test.html
The slab is being resized without morphs by simply scaling a box, which is working just fine.
I must be missing something fundamental with the little monument on top. It has 3 morphs (width, depth, height) that are intended to allow it to resize.
I am using this code to implement the morph based on the user's dat.gui input.
folder1.add( params, 'width', 12, 100 ).step(1).name("Width").onChange( function () {
    updateFoundation();
    building.morphTargetInfluences['width'] = params.width/100;
    roofL.morphTargetInfluences['width'] = params.width/100;
    roofR.morphTargetInfluences['width'] = params.width/100;
    building.updateMorphs();
});
The materials for building, roofL, and roofR each have morphTargets set to true.
I've been going over the three.js examples here:
http://threejs.org/examples/?q=morph#webgl_morphtargets_human
as well as #webgl_morphtargets and #webgl_morphtargets_horse
Any thoughts or input would be much appreciated!
I believe I've reached a solution to my question. I was under the impression that the JSON loader preserved the morph target names so they could be used in place of an index number with morphTargetInfluences,
something like morphTargetInfluences['myMorphTargetName'],
but after closer inspection in the console it seems they should be referred to by number, like morphTargetInfluences[0].
Not the most intuitive, but I can work with it.
I just tried to use Google Map Buddy to get a satellite image from Google Maps. This application first downloads small images from Google Maps and then sticks them together into a new image. I had to wait about 2 hours for the images to download to my computer, and it looks like it downloaded all of them (22,194 images), but then the app told me that it cannot stick them together. When I started the app again I thought it would reuse the images already on my computer, but it started downloading them again. So I had to stop the process and ask you if you know how I can put this puzzle together.
The naming pattern of those images goes like this:
x=92651y=48130zoom=17.png
x=92652y=48130zoom=17.png
x=92653y=48130zoom=17.png
x=92654y=48130zoom=17.png
x=92655y=48130zoom=17.png
...
...
x=92664y=48131zoom=17.png
x=92665y=48131zoom=17.png
x=92666y=48131zoom=17.png
x=92667y=48131zoom=17.png
...
...
x=92689y=48132zoom=17.png
x=92690y=48132zoom=17.png
x=92691y=48132zoom=17.png
x=92692y=48132zoom=17.png
x=92693y=48132zoom=17.png
What can I do to stick them together programmatically using some simple scripting language? I have access to Mac and Windows systems and can install just about any simple scripting language if needed.
Thanks
You could use Python with the Python Imaging Library (PIL).
First, I'd make a list of the filenames and their coordinates: extract the integer coordinates from the filenames with a regular expression and store them in a list of dictionaries:
>>> import re
>>> filename = 'x=92664y=48131zoom=17.png'
>>> imagePattern = re.compile(r'^x=(\d{5})y=(\d{5})zoom=17\.png$')
>>> x, y = map(int, imagePattern.search(filename).groups())
>>> {'x':x, 'y':y, 'filename':filename}
{'y': 48131, 'x': 92664, 'filename': 'x=92664y=48131zoom=17.png'}
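Putting that together, here is a rough sketch of building the whole list by scanning the folder that holds the downloaded tiles (it assumes the PNGs sit in the current working directory; adjust the path if they live elsewhere):

import os
import re

imagePattern = re.compile(r'^x=(\d{5})y=(\d{5})zoom=17\.png$')

tileList = []
for filename in os.listdir('.'):           # the folder containing the downloaded tiles
    match = imagePattern.search(filename)
    if match:                               # ignore files that don't follow the naming pattern
        x, y = map(int, match.groups())
        tileList.append({'x': x, 'y': y, 'filename': filename})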
Having a list of dictionaries enables you to sort them according to either dimension:
tileListSortedByX = sorted(tileList, key = lambda i: i["x"])
and also filter them, for example to pick out one row:
tileListWhereY48131 = [tile for tile in tileList if tile["y"] == 48131]
With these two operations you can easily build the for loops to iterate over the tiles row by row, as sketched below.
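For instance, a minimal sketch of that iteration (the variable names here are mine, not part of the original answer):

# Walk the tiles one row (y value) at a time, left to right within each row.
for y in sorted(set(tile['y'] for tile in tileList)):
    row = sorted((t for t in tileList if t['y'] == y), key=lambda t: t['x'])
    for tile in row:
        print(tile['filename'])   # placeholder: this is where each tile gets pasted (see below)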
The last thing you need is to create a big empty image (with PIL) to paste the small tile images into. Its size will be a multiple of the tile size.
>>> from PIL import Image
>>> bigImage = Image.new('RGB', (300, 300), (255, 255, 255))
# creates a white 300x300 image
Pasting the small images into the big one looks like this:
>>> smallImage = Image.open(tile["filename"])
>>> bigImage.paste(smallImage,(0,0))
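To make the offsets concrete, here is a rough end-to-end sketch of the paste loop. It assumes each tile is 256x256 pixels (typical for these map tiles, but check one of your PNGs) and that tileList was built as above:

from PIL import Image

TILE_SIZE = 256   # assumed tile size in pixels; verify against one of your files

xs = [tile['x'] for tile in tileList]
ys = [tile['y'] for tile in tileList]
minX, minY = min(xs), min(ys)

# The output image is a multiple of the tile size in each dimension.
bigImage = Image.new('RGB',
                     ((max(xs) - minX + 1) * TILE_SIZE,
                      (max(ys) - minY + 1) * TILE_SIZE),
                     (255, 255, 255))

for tile in tileList:
    smallImage = Image.open(tile['filename'])
    # Each tile's offset follows directly from its x/y indices.
    bigImage.paste(smallImage, ((tile['x'] - minX) * TILE_SIZE,
                                (tile['y'] - minY) * TILE_SIZE))

bigImage.save('stitched.png')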
Hope you get the idea.
The process of "sticking images together" is usually called "stitching" or "mosaicing".
I found a list of many applications that do this in the Wikipedia article "Comparison of Photo Stitching Applications".
Edit: removed the link to the single app I found and replaced it with the Wikipedia list of software.