I just tried to use Google Map Buddy to get a satellite image from Google Maps. The application first downloads small tile images from Google Maps and then stitches them together into one new image. I had to wait about two hours for the download, and it looks like all 22,194 images came down, but then the app told me it could not stitch them together. When I restarted the app, I expected it to reuse the images already on my computer, but it started downloading them all over again. So I stopped the process and am asking you, guys, if you know how I can put this puzzle together.
The naming pattern of those images goes like this:
x=92651y=48130zoom=17.png
x=92652y=48130zoom=17.png
x=92653y=48130zoom=17.png
x=92654y=48130zoom=17.png
x=92655y=48130zoom=17.png
...
...
x=92664y=48131zoom=17.png
x=92665y=48131zoom=17.png
x=92666y=48131zoom=17.png
x=92667y=48131zoom=17.png
...
...
x=92689y=48132zoom=17.png
x=92690y=48132zoom=17.png
x=92691y=48132zoom=17.png
x=92692y=48132zoom=17.png
x=92693y=48132zoom=17.png
What can I do to stitch them together programmatically using some simple scripting language? I have access to Mac and Windows systems and can probably install any simple scripting language.
Thanks
You could use Python with the Python Imaging Library (PIL).
First I'd make a list of the filenames and their coordinates. Extract the integer coordinates from each filename with a regular expression and store them in a list of dictionaries:
>>> import re
>>> filename = 'x=92664y=48131zoom=17.png'
>>> imagePattern = re.compile(r'^x=(\d{5})y=(\d{5})zoom=17\.png$')
>>> x, y = map(int, imagePattern.search(filename).groups())
>>> {'x': x, 'y': y, 'filename': filename}
{'y': 48131, 'x': 92664, 'filename': 'x=92664y=48131zoom=17.png'}
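To build the full list, you can scan the download folder. Here's a minimal sketch; the folder name 'tiles' is a placeholder for wherever Google Map Buddy saved the files:

import os
import re

imagePattern = re.compile(r'^x=(\d{5})y=(\d{5})zoom=17\.png$')

tileList = []
for filename in os.listdir('tiles'):  # 'tiles' is a placeholder path
    match = imagePattern.match(filename)
    if match:  # skip any file that doesn't look like a tile
        x, y = map(int, match.groups())
        tileList.append({'x': x, 'y': y, 'filename': filename})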
Having a list of dictionaries enables you to sort the tiles by either dimension:
tileListSortedByX = sorted(tileList, key=lambda tile: tile["x"])
and also filter them:
tileRowWhereY48131 = [tile for tile in tileList if tile["y"] == 48131]
With these two operations you can build the loops that walk over the tiles row by row (the consolidated sketch at the end of this answer puts it all together).
The last thing you need is to create a big empty image (with PIL) into which you'll paste the small tile images. Its size will be a multiple of the tile size.
>>> from PIL import Image
>>> bigImage = Image.new('RGB', (300, 300), (255, 255, 255))  # creates a white 300x300 image
Pasting the small images into the big one looks like this:
>>> smallImage = Image.open(tile["filename"])
>>> bigImage.paste(smallImage,(0,0))
Hope you get the idea.
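Putting it all together, here's a minimal end-to-end sketch that continues from the tileList built above. It assumes the tiles are the standard 256x256 pixels Google Maps uses; if yours differ, adjust TILE_SIZE. Note that with 22,194 tiles the final image will be very large, so this may need a lot of memory:

import os
from PIL import Image

TILE_SIZE = 256  # assumed tile size; Google Maps tiles are normally 256x256

# The top-left tile has the smallest x and y; tile y grows downwards,
# which matches PIL's image coordinates.
minX = min(tile['x'] for tile in tileList)
minY = min(tile['y'] for tile in tileList)
maxX = max(tile['x'] for tile in tileList)
maxY = max(tile['y'] for tile in tileList)

bigImage = Image.new('RGB',
                     ((maxX - minX + 1) * TILE_SIZE,
                      (maxY - minY + 1) * TILE_SIZE),
                     (255, 255, 255))

for tile in tileList:
    smallImage = Image.open(os.path.join('tiles', tile['filename']))
    # Offset each tile by its grid position relative to the top-left tile.
    bigImage.paste(smallImage,
                   ((tile['x'] - minX) * TILE_SIZE,
                    (tile['y'] - minY) * TILE_SIZE))

bigImage.save('stitched.png')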
The process of "sticking images together" is usually called "stitching" or "mosaicing".
I found a list of many applications that do this in the Wikipedia article "Comparison of Photo Stitching Applications".
I started using Cesium to render 3D maps and am trying to add point-of-interest (POI) data on top of the 3D view. I tried local PNG icons and that worked, and I also realized I can add icons from the built-in assets. I tried the code below and it worked perfectly. I have a varied set of POIs, but I could not find the IDs with which I can add them onto the map.
For example, to add a hospital I used "hospital" as the icon ID:
var hospitalPin = Cesium.when(
  pinBuilder.fromMakiIconId("hospital", Cesium.Color.RED, 48),
  function (canvas) {
    return viewer.entities.add({
      name: "Hospital",
      position: Cesium.Cartesian3.fromDegrees(77.311, 32.826),
      billboard: {
        image: canvas.toDataURL(),
        verticalOrigin: Cesium.VerticalOrigin.BOTTOM,
      },
    });
  }
);
Along the same lines, is there a reference for the icon IDs that I can use in my code to represent the POIs? Any help is appreciated.
I found the source images located at \Build\Cesium\Assets\Textures\maki in the Cesium library, which I think can be used.
The image you posted is from the PinBuilder docs, but check out the original source for that: it's a screenshot of the Cesium Sandcastle GeoJson demo.
As you noticed, the images for the individual pins are stored in a folder called maki, with names like these:
airport.png
alcohol-shop.png
america-football.png
art-gallery.png
bakery.png
bank.png
bar.png
baseball.png
...
Just strip the .png off the end of the image filename, and that's it: that's the ID.
Dashes are allowed in the ID:
pinBuilder.fromMakiIconId("america-football", Cesium.Color.RED, 48)
But the first few pins in the original screenshot don't have corresponding Maki icons; they're simple letters or numbers. You can build pins from text using a different function, like this:
pinBuilder.fromText("A", Cesium.Color.RED, 48)
You can also put Unicode characters on pins:
pinBuilder.fromText("\u267b", Cesium.Color.RED, 48)
And finally, if you have a custom PNG file of your own, similar to a Maki icon but tailored to your app's needs, you can build a pin directly from the URL of that PNG file:
pinBuilder.fromUrl(url, Cesium.Color.RED, 48)
I have lately been working with the Forge Reality Capture API, using simple curl commands to reconstruct scenes from images.
The process goes through smoothly, but I never obtain a complete mesh.
1. I have tried increasing the number of images about fivefold (from 20 to 100).
2. I have tried both the OBJ and RCM formats (my scenetype=object).
3. I investigated the camera positions after exporting the RCM mesh to ReCap Photo, and only about 15 positions are shown, although I used about 100 frames from several positions. Only the images from these camera positions are stitched, so I get an incomplete mesh.
Is this an algorithm issue in the reconstruction?
Do I have to capture more pictures? The area is relatively small, a corridor of 50 m × 20 m.
Can I re-process the same scene by adding additional photos?
Does the scene need a certain amount of texture?
I am grateful for any answers.
Cheers!
I suggest having a look at my blog post on the Reality Capture API, https://forge.autodesk.com/blog/hitchhikers-guide-reality-capture-api, which might help you debug and identify the source of the problems.
The source of the problem could range from the object having transparent or reflective surfaces to some of your images not being properly uploaded.
In general, if you don't get a complete mesh, the best solution is to take more pictures of the missing spots instead of more pictures of the entire object. If there are missing spots, it means the engine could not figure out from your images how to stitch them; more images of those areas should help.
I have a basic three.js scene in which I am attempting to get objects exported from Blender (as JSON files with embedded morph targets) to work and update their shapes with user input.
Here is a test scene:
http://onthez.com/temphosting/three-js-morph-test/morph-test.html
The slab is being resized without morphs by simply scaling a box, which is working just fine.
I must be missing something fundamental with the little monument on top. It has three morph targets (width, depth, height) that are intended to allow it to resize.
I am using this code to drive the morphs from the user's dat.gui input:
folder1.add(params, 'width', 12, 100).step(1).name("Width").onChange(function () {
  updateFoundation();
  building.morphTargetInfluences['width'] = params.width / 100;
  roofL.morphTargetInfluences['width'] = params.width / 100;
  roofR.morphTargetInfluences['width'] = params.width / 100;
  building.updateMorphs();
});
The materials for building, roofL, and roofR each have morphTargets set as true.
I've been going over the three.js examples here:
http://threejs.org/examples/?q=morph#webgl_morphtargets_human
as well as #webgl_morphtargets and #webgl_morphtargets_horse
Any thoughts or input would be much appreciated!
I believe I've reached a solution to my question. I was under the impression that the JSON loader preserved the morph target names so they could be used in place of an index with morphTargetInfluences,
something like morphTargetInfluences['myMorphTargetName'],
but after closer inspection in the console it seems they should be referenced by index, like morphTargetInfluences[0] (so, for the first target, building.morphTargetInfluences[0] = params.width / 100).
Not the most intuitive, but I can work with it.
I am looking for some kind of command-line tool with which I can build an application that converts input images into Deep Zoom images.
I have around 500 images. I have used Deep Zoom Composer to generate Deep Zoom Image (DZI) content one image at a time, and I was looking for a better way to process multiple images.
It looks like there was a tool, SparseImageTool.exe, in the Deep Zoom installation folder, but it is no longer available.
Here's a list of tools for creating DZI and similar images:
http://openseadragon.github.io/examples/creating-zooming-images/
I found a command-line tool that generates files for Deep Zoom:
https://libvips.github.io/libvips/API/current/Making-image-pyramids.md.html
Example:
vips dzsave some-huge-file.tif my-dzi-name
This will write my-dzi-name.dzi and a directory called my-dzi-name_files/ containing all the image tiles.
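Since the question mentions around 500 images, a small script can drive vips dzsave in a loop. Here's a minimal Python sketch; it assumes the vips binary is on your PATH, and the folder names and the .tif extension are placeholders to adjust:

import pathlib
import subprocess

input_dir = pathlib.Path('input_images')   # placeholder folder of source images
output_dir = pathlib.Path('dzi_output')    # placeholder output folder
output_dir.mkdir(exist_ok=True)

for image in sorted(input_dir.glob('*.tif')):
    # vips itself appends ".dzi" and "_files/" to the output basename
    target = output_dir / image.stem
    subprocess.run(['vips', 'dzsave', str(image), str(target)], check=True)
    print('Wrote', target.with_suffix('.dzi'))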
I am rendering an image to a Sprite inside an Iterator. I'd like each render (iteration) to remain on the canvas indefinitely, so that each successive render layers on top of the previous ones. How can I do this?
There are no Clears or any other layers in my composition.
In Quartz Composer, you'll almost always want to use a Clear patch; don't assume that you can rely on the prior contents of the framebuffer. So, to accomplish this, you'll need to load all of your images into a structure (probably by using JavaScript to feed an Image Loader patch and build a Queue from that) and then display all of the images each frame using an Iterator.
Check out Apple's "Image TV" sample composition, available in the OS X Developer Library in the Quartz Composer Conceptual Compositions bundle. This example demonstrates how to load a series of images into a structure and then display them.