3ds Max BlendedBoxMap support for the Forge Viewer - autodesk-forge

We are really interested in adding BlendedBoxMaps to certain objects in our model (such as terrain and larger geometry) to avoid obvious repeating in the texture.
However, all our tests have failed: objects containing a BlendedBoxMap (see image below) turn black after being translated to SVF. Any guidance would be highly appreciated.
Update:
If the above doesn't work, is there any alternative to BlendedBoxMapping for achieving good-looking textures on large terrain? We are aware that baking the texture onto a large mesh gives very blurry results, as the SVF translation reduces all larger texture resolutions to 1024x1024 (which seems to be impossible to avoid) and stretches the 1024x1024 texture as much as needed to fit the large object.

If materials using BlendedBoxMap fail to appear correctly in the viewer, as a workaround I would suggest trying to bake your material into a single bitmap.
Here is an example of how to do so using bake to texture:
https://knowledge.autodesk.com/support/3ds-max/learn-explore/caas/CloudHelp/cloudhelp/2016/ENU/3DSMax/files/GUID-37414F9F-5E33-4B1C-A77F-547D0B6F511A-htm.html

Related

Slicing up large heterogeneous images with binary annotations

I'm working on a deep learning project and have encountered a problem. The images that I'm using are very large and extremely detailed. They also contain a huge amount of necessary visual information, so it's hard to downgrade the resolution. I've gotten around this by slicing my images into 'tiles,' with resolution 512 x 512. There are several thousand tiles for each image.
Here's the problem: the annotations are binary and the images are heterogeneous. Thus, an annotation can be applied to a tile of the image that has no impact on the actual classification. How can I lessen the impact of tiles that are 'improperly' labeled?
One thought is to cluster the tiles with something like a t-SNE plot and compare the ratio of the binary annotations for different regions (or 'classes'). I could then assign weights to tiles based on where they're located and use that as an extra layer in my training. Very new to all of this, so I wouldn't be surprised if that's an awful idea! Just thought I'd take a stab.
For background, I'm using transfer learning on Inception v3.
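To make that weighting idea concrete, here is a rough sketch (TypeScript; the Tile shape with its clusterId and label fields is made up, and the cluster assignments are assumed to come from a separate t-SNE/clustering step) of down-weighting tiles whose binary label disagrees with the label ratio of their cluster:
// Sketch only: Tile, clusterId and label are hypothetical names; clustering happens elsewhere.
interface Tile {
  id: string;
  clusterId: number; // which region/"class" the tile was grouped into
  label: 0 | 1;      // the binary annotation inherited from the parent image
}

// Fraction of positively labeled tiles in each cluster.
function positiveRatioByCluster(tiles: Tile[]): Map<number, number> {
  const counts = new Map<number, { pos: number; total: number }>();
  for (const t of tiles) {
    const c = counts.get(t.clusterId) ?? { pos: 0, total: 0 };
    c.pos += t.label;
    c.total += 1;
    counts.set(t.clusterId, c);
  }
  const ratios = new Map<number, number>();
  for (const [clusterId, c] of counts) ratios.set(clusterId, c.pos / c.total);
  return ratios;
}

// Weight each tile by how consistent its label is with its cluster: a positive tile in a
// mostly negative cluster (or vice versa) gets a small weight.
function tileWeights(tiles: Tile[]): Map<string, number> {
  const ratios = positiveRatioByCluster(tiles);
  const weights = new Map<string, number>();
  for (const t of tiles) {
    const r = ratios.get(t.clusterId)!; // every tile's cluster has a ratio
    weights.set(t.id, t.label === 1 ? r : 1 - r);
  }
  return weights;
}
These weights could then be fed to the training loop as per-sample weights.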

Size of image for prediction with SageMaker object detection?

I'm using the AWS SageMaker "built in" object detection algorithm (SSD) and we've trained it on a series of annotated 512x512 images (image_shape=512). We've deployed an endpoint and when using it for prediction we're getting mixed results.
If the image we use for prediction is around that 512x512 size we're getting great accuracy and good results. If the image is significantly larger (e.g. 8000x10000) we get either wildly inaccurate results, or no results at all. If I manually resize those large images to 512x512 pixels, the features we're looking for are no longer discernible to the eye, which suggests that if my endpoint is resizing images, that would explain why the model is struggling.
Note: Although the size in pixels is large, my images are basically line drawings on a white background. They have very little color and large patches of solid white, so they compress very well. I'm not running into the 6 MB request size limit.
So, my questions are:
Does training the model at image_shape=512 mean my prediction images should also be that same size?
Is there a generally accepted method for doing object detection on very large images? I can envisage how I might chop the image into smaller tiles then feed each tile to my model, but if there's something "out of the box" that will do it for me, then that'd save some effort.
Your understanding is correct. The endpoint resizes images based on the parameter image_shape. To answer your questions:
As long as the scale of objects (i.e., their size in pixels) in the resized images is similar between training and prediction data, the trained model should work.
Cropping is one option. Another method is to train separate models for large and small images as David suggested.
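For the tiling route, here is a rough sketch (TypeScript; the Box format and the detectTile callback are placeholders for whatever your endpoint client returns) of covering a large image with 512x512 windows and shifting each tile's detections back into full-image coordinates:
// Sketch only: detectTile stands in for a call to your deployed endpoint; the box format is an assumption.
interface Box { xmin: number; ymin: number; xmax: number; ymax: number; score: number; }
interface TileWindow { x: number; y: number; size: number; }

// Cover a width x height image with size x size windows, with some overlap so that
// objects sitting on a tile border are not always cut in half.
function tileWindows(width: number, height: number, size = 512, overlap = 64): TileWindow[] {
  const step = size - overlap;
  const windows: TileWindow[] = [];
  for (let y = 0; y < height; y += step) {
    for (let x = 0; x < width; x += step) {
      windows.push({ x, y, size }); // edge tiles may need clipping or padding when cropped
    }
  }
  return windows;
}

// Run detection on each tile and translate the boxes back into full-image coordinates.
async function detectLargeImage(
  width: number,
  height: number,
  detectTile: (w: TileWindow) => Promise<Box[]>,
): Promise<Box[]> {
  const results: Box[] = [];
  for (const w of tileWindows(width, height)) {
    for (const b of await detectTile(w)) {
      results.push({ ...b, xmin: b.xmin + w.x, xmax: b.xmax + w.x, ymin: b.ymin + w.y, ymax: b.ymax + w.y });
    }
  }
  // In practice, detections from overlapping tiles should be de-duplicated
  // (e.g. with non-maximum suppression) before the results are used.
  return results;
}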

libGDX: Best way to store many small images in libGDX for fast drawing

I'm writing a client for a multiplayer tile-based game in libGDX (for Android and Desktop).
The game world is composed of thousands of small 32x32 png images that are drawn into a large rectangular view area. The images are downloaded over the socket connection (network) as needed.
What is the best (fastest and most resource-efficient) way to store these images in "memory" so they can be drawn really fast onto the screen when needed?
So far, I have implemented a very naive algorithm that loads each and every 32x32 image into a Texture and keeps it in memory indefinitely. (It is pure coincidence that my images have a size that is a power of two.) It seems to work, but I am worried that this is very inefficient and possibly exceeds GPU resources on older devices or something.
I am aware of the TextureAtlas, but that seems to work only for static images that are packed and stored in the compiled Android app. As I receive my images over the network dynamically, I believe this won't work for me.
I have found this libgdx SpriteBatch render to texture post that suggests rendering many small images into a FrameBuffer, then using this as a source of TextureRegions. This seems promising to me. Is this a good solution?
Is there a better way?
I also wonder if drawing and storing my small images into a large Pixmap might be helpful. Is this possibly a better approach than drawing into a FrameBuffer, as described above?
As I understand it from the docs, Pixmaps are purely memory-based. That might be an advantage, as it probably doesn't need graphics resources; on the other hand, it might be slower, since loading into a Texture is an expensive operation. Thoughts on this?
Actually, TextureAtlas is the best way to store many images (small or not), and fortunately the TextureAtlas instance does not have to be created in a static way.
Take a look at the
addRegion(java.lang.String name, Texture texture, int x, int y, int width, int height)
method of TextureAtlas. It makes it possible to build the atlas dynamically.
So what you should do is create an empty atlas:
TextureAtlas atlas = new TextureAtlas();
then add your images in some kind of loop:
for (Texture texture : yourTexturesCollection) {
    // arguments follow the signature above; nameOf(texture) stands for whatever unique
    // name your game gives each tile, so the region can be looked up again with findRegion
    atlas.addRegion(nameOf(texture), texture, 0, 0, texture.getWidth(), texture.getHeight());
}
Then you can use your atlas via findRegion or another method (take a look at the reference).
Note that for Android devices it is recommended not to use an atlas larger than 2048 x 2048 px.
For other kinds of devices (like desktop) this limit can be different (usually bigger). It is not a libGDX limit but OpenGL's!

Cesium Resampling

I know that Cesium offers several different interpolation methods, including linear (or bilinear in 2D), Hermite, and Lagrange. One can use these methods to resample sets of points and/or create curves that approximate sampled points, etc.
However, the question I have is what method does Cesium use internally when it is rendering a 3D scene and the user is zooming/panning all over the place? This is not a case where the programmer has access to the raster, etc, so one can't just get in the middle of it all and call the interpolation functions directly. Cesium is doing its own thing as quickly as it can in response to user control.
My hunch is that the default is bilinear, but I don't know that nor can I find any documentation that explicitly says what is used. Further, is there a way I can force Cesium to use a specific resampling method during these activities, such as Lagrange resampling? That, in fact, is what I need to do: force Cesium to employ Lagrange resampling during scene rendering. Any suggestions would be appreciated.
EDIT: Here's a more detailed description of the problem…
Suppose I use Cesium to set up a 3-D model of the Earth including a greyscale image chip at its proper location on the model Earth's surface, and then I display the results in a Cesium window. If the view point is far enough from the Earth's surface, then the number of pixels displayed in the image chip part of the window will be fewer than the actual number of pixels that are available in the image chip source. Some downsampling will occur. Likewise, if the user zooms in repeatedly, there will come a point at which there are more pixels displayed across the image chip than the actual number of pixels in the image chip source. Some upsampling will occur. In general, every time Cesium draws a frame that includes a pixel data source there is resampling happening. It could be nearest neighbor (doubt it), linear (probably), cubic, Lagrange, Hermite, or any one of a number of different resampling techniques. At my company, we are using Cesium as part of a large government program which requires the use of Lagrange resampling to ensure image quality. (The NGA has deemed that best for its programs and analyst tools, and they have made it a compliance requirement. So we have no choice.)
So here's the problem: while the user is interacting with the model, for instance zooming in, the drawing process is not in the programmer's control. The resampling is either happening in the Cesium layer itself (hopefully) or in even still lower layers (for instance, the WebGL functions that Cesium may be relying on). So I have no clue which technique is used for this resampling. Worse, if that technique is not Lagrange, then I don't have any clue how to change it.
So the question(s) would be this: is Cesium doing the resampling explicitly? If so, then what technique is it using? If not, then what drawing packages and functions are Cesium relying on to render an image file onto the map? (I can try to dig down and determine what techniques those layers may be using, and/or have available.)
UPDATE: Wow, my original answer was a total misunderstanding of your question, so I've rewritten from scratch.
With the new edits, it's clear your question is about how images are resampled for the screen while rendering. These images are texture maps in WebGL, and the process of getting them to the screen quickly is implemented in hardware, on the graphics card itself. Software on the CPU is not performant enough to map individual pixels to the screen one at a time, which is why we have hardware-accelerated 3D cards.
Now for the bad news: this hardware supports nearest neighbor, linear, and mipmapping. That's it. 3D graphics cards do not use any fancier interpolation, as it needs to be done in a fraction of a second to keep the frame rate as high as possible.
Mipmapping is described well by @gman in his article WebGL 3D Textures. It's a long article, but search for the word "mipmap" and skip ahead to his description of it. Basically, a single image is reduced into smaller images prior to rendering, so an appropriately sized starting point can be chosen at render time. But there will always be a final mapping to the screen, and as you can see, the choices are NEAREST or LINEAR.
Quoting @gman's article here:
You can choose what WebGL does by setting the texture filtering for each texture. There are 6 modes
NEAREST = choose 1 pixel from the biggest mip
LINEAR = choose 4 pixels from the biggest mip and blend them
NEAREST_MIPMAP_NEAREST = choose the best mip, then pick one pixel from that mip
LINEAR_MIPMAP_NEAREST = choose the best mip, then blend 4 pixels from that mip
NEAREST_MIPMAP_LINEAR = choose the best 2 mips, choose 1 pixel from each, blend them
LINEAR_MIPMAP_LINEAR = choose the best 2 mips, choose 4 pixels from each, blend them
I guess the best news I can give you is that Cesium uses the best of those, LINEAR_MIPMAP_LINEAR, to do its own rendering. If you have a strict requirement for more time-consuming imagery interpolation, that means you have a requirement not to use a realtime 3D hardware-accelerated graphics card, as there is no way to do Lagrange image interpolation during a realtime render.
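For reference, selecting one of those modes in raw WebGL looks roughly like the sketch below (TypeScript; gl and image are assumed to already exist, and note that in WebGL 1 generateMipmap requires power-of-two texture dimensions):
// Sketch: upload an image and ask the GPU for trilinear filtering (LINEAR_MIPMAP_LINEAR).
function createTrilinearTexture(gl: WebGLRenderingContext, image: HTMLImageElement): WebGLTexture {
  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  gl.generateMipmap(gl.TEXTURE_2D); // WebGL 1 needs power-of-two dimensions here
  // Minification: blend 4 pixels from each of the 2 best mips (LINEAR_MIPMAP_LINEAR).
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
  // Magnification: only NEAREST or LINEAR exist; no higher-order interpolation is available.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  return texture;
}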

Textures in CANVAS 2D Context

Just starting to experiment with filling the canvas, and I'm trying to apply a texture to an object (the blobs from the blob example - http://www.blobsallad.se/). This example uses the 2D context and doesn't appear to be implementing WebGL. All the information on texturing I could find uses WebGL, and I was wondering how easy it would be to accomplish this. Is there any way I could incorporate the texturing features of WebGL into this canvas without rewriting the code? Summed up, I guess this question is asking whether or not the methods available to the 2D context are also available to the WebGL context... If so, I suppose I could just change the context and apply my texture? If I'm thinking about this all wrong or am confused conceptually, please let me know.
Thanks,
Brandon
I've experimented with drawing an image to a 2D canvas before using it as a texture for a WebGL canvas. It works, but the performance is horrible (it really varies from browser to browser). I'm currently considering a few other options for refactoring it. I wouldn't recommend it for anything more than statically drawing an image to one or two 2D canvases.
You can see an example of the craziness in lanyard/src/render/SurfaceTileRenderer.js in the project at: http://github.com/fintler/lanyard
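The basic shape of that approach is roughly the sketch below (TypeScript; the 'glCanvas' id and the 2D drawing are placeholders). A canvas element is an accepted source for texImage2D, which is what makes the trick possible; re-uploading the canvas contents every frame is where the cost comes from.
// Sketch: draw with the 2D API into an offscreen canvas, then upload that canvas as a WebGL texture.
const srcCanvas = document.createElement('canvas');
srcCanvas.width = 256;
srcCanvas.height = 256;
const ctx2d = srcCanvas.getContext('2d')!;
ctx2d.fillStyle = 'orange';
ctx2d.fillRect(0, 0, 256, 256); // placeholder: draw your blob/texture content here

const glCanvas = document.getElementById('glCanvas') as HTMLCanvasElement; // hypothetical id
const gl = glCanvas.getContext('webgl')!;
const texture = gl.createTexture()!;
gl.bindTexture(gl.TEXTURE_2D, texture);
// The current contents of the 2D canvas become the texture image.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, srcCanvas);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR); // avoid needing mipmaps
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
// Calling texImage2D again every frame is the expensive part that hurts performance.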
Are you looking to apply a texture to a 2D shape?
Try something like this
http://jsfiddle.net/3U9pm/
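If you stay with the plain 2D context instead, one way to texture a shape is createPattern; here is a minimal sketch (TypeScript; the canvas id, image URL and the circle standing in for a blob are all placeholders):
// Sketch: fill a 2D-context shape with a repeating image texture via createPattern (no WebGL).
const blobCanvas = document.getElementById('blobCanvas') as HTMLCanvasElement; // hypothetical id
const ctx = blobCanvas.getContext('2d')!;
const img = new Image();
img.src = 'texture.png'; // placeholder texture image
img.onload = () => {
  const pattern = ctx.createPattern(img, 'repeat')!;
  ctx.fillStyle = pattern;
  ctx.beginPath();
  ctx.arc(150, 150, 80, 0, Math.PI * 2); // stand-in for the blob outline path
  ctx.fill();
};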