How to draw a texture on an OBJ model through an OptiX example - CUDA

I'm very new to OptiX and CUDA.
I'm trying to modify an OptiX SDK sample to present a 3D model with ray tracing, starting from the "progressivePhotonMap" example. Because I lack OptiX/CUDA knowledge, I don't know how to draw a texture on the 3D model. Can anyone who is familiar with the SDK samples help me?
I read other texture-drawing examples like "swimmingShark" or "cook" and tried to find clues to use there, but those examples seem to draw textures in different ways.
So far, I know I have to load the texture in the .cpp file:
GeometryInstance instance = m_context->createGeometryInstance( mesh, &m_material, &m_material+1 );
instance["diffuse_map"]->setTextureSampler(loadTexture( m_context, ... );
and create a TextureSampler in the CUDA file:
rtTextureSampler<float4, 2> diffuse_map; // Corresponds to OBJ mtl params
and sample it with texture coordinates, like this:
float3 Kd = make_float3( tex2D( diffuse_map, texcoord.x*diffuse_map_scale, texcoord.y*diffuse_map_scale ) );
However, I cannot find where texcoord gets the texture coordinate data in the CUDA file.
It seems there should be some code like this in the .cpp file:
GI["texcoord"]->setBuffer(texcoord);
Could anyone tell me where texcoord gets its texture coordinate data, and how to match the coordinate data with the texture to present the 3D model with ray tracing?
I can't find a tutorial on Google; I really need help or a direction to reach my goal. Thank you.

You should read up on the OptiX documentation first, specifically the paragraph regarding attribute variables.
IIRC the texcoord variable is an attribute of the form
rtDeclareVariable( float3, texcoord, attribute texcoord, );
that is computed in the intersection program and passed along to the closest-hit program (attributes are designed to pass data from the intersection point to the shading point).
Short answer: it is set in another CUDA function, the intersection program, which, conceptually, computes the data needed by that line.
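For illustration, here is a minimal sketch of such an intersection program, modeled on the pattern in the SDK's triangle-mesh samples (e.g. triangle_mesh.cu). The buffer names are assumptions; the SDK's OBJ loader typically attaches the per-vertex texture coordinates to the Geometry node on the host side, which is where your setBuffer() intuition fits in:

// Minimal sketch of a triangle-mesh intersection program (pre-7.x OptiX API).
// Buffer names are assumptions; the host side fills them, e.g.
// geometry["texcoord_buffer"]->setBuffer(...).
#include <optix_world.h>
using namespace optix;

rtBuffer<float3> vertex_buffer;
rtBuffer<float2> texcoord_buffer;   // per-vertex UVs, set from the .cpp side
rtBuffer<int3>   index_buffer;      // one int3 of vertex indices per triangle

rtDeclareVariable(float3, texcoord, attribute texcoord, );
rtDeclareVariable(float3, geometric_normal, attribute geometric_normal, );
rtDeclareVariable(optix::Ray, ray, rtCurrentRay, );

RT_PROGRAM void mesh_intersect(int primIdx)
{
    const int3 idx = index_buffer[primIdx];
    float3 n;
    float  t, beta, gamma;
    if (intersect_triangle(ray, vertex_buffer[idx.x], vertex_buffer[idx.y],
                           vertex_buffer[idx.z], n, t, beta, gamma)) {
        if (rtPotentialIntersection(t)) {
            geometric_normal = normalize(n);
            // Interpolate per-vertex UVs with the barycentric coordinates;
            // this value is what the closest-hit program reads as texcoord.
            const float2 uv = texcoord_buffer[idx.x] * (1.0f - beta - gamma)
                            + texcoord_buffer[idx.y] * beta
                            + texcoord_buffer[idx.z] * gamma;
            texcoord = make_float3(uv.x, uv.y, 0.0f);
            rtReportIntersection(0);
        }
    }
}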

Related

Implementing shortest path algorithm in Autodesk Forge Viewer

I am trying to draw a geometry in my viewer based on the shortest path between 2 objects.
So far I know how to draw custom geometry using Vector3.
I have also figured out which algorithms I can use to find the shortest path from point A to point B.
Here are a few:
Dijkstra's
A* Search
I have seen this example where such an algorithm is implemented, and I am trying a similar solution in the Forge Viewer; here is the Link.
Also, can someone help me restrict the first-person view from going through walls, like in the sample above? Right now in the Forge Viewer I can pass through walls, which I want to avoid. Or is there any way I can identify the walls?
Unfortunately the viewer does not provide a lot of support for path-finding, so most of it you would have to do manually.
Here's some of the functionality that is available and might be handy in your case:
you can "shoot rays" inside the scene and compute their intersections with the nearest geometry, for example using viewer.impl.rayIntersect(ray, ignoreTransparent)
for example, this could be used to detect collisions with walls if you had some kind of an avatar inside the scene (see the sketch after the code below)
if needed, you can retrieve the geometry of individual objects in the scene using the "fragment list":
let frags = viewer.model.getFragmentList();
let tree = viewer.model.getInstanceTree();
tree.enumNodeFragments(dbid, function (fragid) {
    let mesh = frags.getVizmesh(fragid);
    // Do something with the mesh...
});
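As a sketch of the wall-collision idea (hedged: THREE here is the viewer's bundled three.js, isBlocked is an illustrative helper name, and the exact shape of the intersection result can vary between viewer versions), you could cast a short ray from the avatar's position in its movement direction and block the move whenever something is hit nearby:

// Returns true if moving from `position` along `direction` would hit
// scene geometry within `maxDistance` world units.
function isBlocked(viewer, position, direction, maxDistance) {
    const ray = new THREE.Ray(position, direction.clone().normalize());
    const hit = viewer.impl.rayIntersect(ray, true /* ignoreTransparent */);
    return !!hit && hit.distance <= maxDistance;
}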

Where exactly does the trainable_variables method belong in TensorFlow?

I'm a newbie in both deep learning and TensorFlow, and I'm now trying to learn how to implement deep learning code based on the functional API (not Keras) by following example code.
Inside the code I'm looking at, I found a line saying gradients = tape.gradient(loss, model.trainable_variables).
I intuitively get what trainable variables means, but in order to understand it clearly I tried to search the TensorFlow documentation (which module or class the method belongs to, which key arguments it takes, etc.), but I wasn't able to find the information I want. (The trainable_variables method was not in their documentation index, and I'm wondering why.)
So can anyone please tell me the module/class that the trainable_variables method belongs to, which arguments it takes, and also how it is able to identify and collect all the trainable variables from the model?
The reason you did not find this method is that trainable_variables is not a method but an attribute/property. The Model class has a trainable_variables attribute, which is not documented officially. It is inherited from the base class Layer, and, to put it shortly, the list (of trainable variables) gets populated as new layers are added, since all layers have an __init__ parameter trainable (this comes from the base class Layer too). You can check the source code if you want to: "the source of the property", "adding new weights to a layer appends to the list".
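As a minimal sketch of the behavior (layer sizes here are arbitrary): it is a property you read, not a function you call, and freezing a layer with trainable=False removes its weights from the list.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, input_shape=(8,)),  # trainable by default
    tf.keras.layers.Dense(2, trainable=False),   # frozen: excluded from the list
])

# Two variables: the kernel and bias of the first layer only.
print(len(model.trainable_variables))  # 2

with tf.GradientTape() as tape:
    loss = tf.reduce_sum(model(tf.zeros((1, 8))) ** 2)

# One gradient per entry in model.trainable_variables.
grads = tape.gradient(loss, model.trainable_variables)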

How to use the various Forge Viewer transforms

Below are the various transforms I have found so far using NOP_VIEWER.model.getData().
I'm using the transforms to bring a position into viewer space, and I haven't been able to find any good documentation describing what they all do. My hope here is that this question can help by providing some documentation of the role of these transforms and how/when to use them.
The model originally comes from Revit.
GlobalOffset (Vector3)
placementWithOffset (Matrix4) - seems to be just the inverse of GlobalOffset as a matrix?
placementTransform (Matrix4) - undefined in all models I've tested, I've seen some hints that this is a user defined matrix.
refPointTransform (Matrix4)
Also, there are some transforms in the NOP_VIEWER.model.getData().metadata. These may be Revit specific:
metadata.georeference.positionLL84 (Array[3]) - this is where the model's GPS coords are stored
metadata.georeference.refPointLMV (Array[3]) - no idea what this is, and it has huge and seemingly random values on many models. For example, on my current model it is [-17746143.211481072, -6429345.318822183, 27.360225423452952]
metadata.[custom values].angleToTrueNorth - I guess this is specifying whether the model is aligned to true or magnetic north?
metadata.[custom values].refPointTransform - (Array[12]) - data used to create the refPointTransform matrix above
Can someone help by documenting what these transforms do?
Related: Place a custom object into viewer space using GPS coords
As an alternative solution, the Viewer works with extensions, and the Autodesk.Geolocation extension provides a few methods to handle the data structure you mentioned:
Load the extension:
let geoExt;
NOP_VIEWER.loadExtension('Autodesk.Geolocation').then((e) => { geoExt = e; });
Or get the already-loaded extension:
let geoExt = NOP_VIEWER.getLoadedExtensions()['Autodesk.Geolocation'];
Then use its methods to convert the coordinates:
geoExt.lmvToLonLat
geoExt.lonLatToLmv
Here is a quick article on it.
You may call .activate() on the extension to see additional information about the model's geolocation.
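For example (a hedged sketch: THREE is the viewer's bundled three.js, and the exact return shape of these methods may vary between viewer versions):

NOP_VIEWER.loadExtension('Autodesk.Geolocation').then((geoExt) => {
    // Convert a point from viewer (LMV) coordinates to longitude/latitude...
    const lonLat = geoExt.lmvToLonLat(new THREE.Vector3(10, 20, 0));
    console.log('GPS:', lonLat);
    // ...and back from GPS coordinates to viewer space.
    console.log('LMV:', geoExt.lonLatToLmv(lonLat));
});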

Strange problem when attempting to render 3D models in WebGL

So basically I am trying to make a model loader that takes in Wavefront OBJ files and renders them in WebGL. Eventually I would like to be able to rotate, translate, and scale these objects.
I have the interface all set up and it works nicely. However, I am having problems with rendering.
I have taken in an OBJ file and checked that the arrays all have the correct number of elements; I even checked using Chrome's WebGL debug plugin, and it appears the arrays match up (even the element values match).
Number of Vertices: 10932
Number of Indices: 18960
Anyway, when I run gl.drawElements(gl.TRIANGLES, numItems, gl.UNSIGNED_SHORT, 0);
I get no Chrome error, but in the WebGL debug plugin I get 'INVALID_OPERATION' with no additional information.
I have found that by changing numItems (which should be the number of indices, 18960) to a much lower number, it will render a teapot (slightly wrong). The lucky number for some reason is 11034: if I go above it, nothing renders; if I go below it, I get my slightly wrong teapot. I need this number to really be the full number of indices, as obviously I cannot hard-code it.
So I am very confused as to why this is happening, for my full code for debug:
http://webdesignscript.net/assignment/graphics_a3/
Rendering part of the code:
http://webdesignscript.net/assignment/graphics_a3/scripts/webglengine.js
Teapot model that is loaded:
http://webdesignscript.net/assignment/graphics_a3/models/teapot.obj
Cheers, Josh
I hope you remembered that faces in OBJ files use vertex indices that start at 1 rather than 0. So perhaps those later faces (the ones that make it fail) just reference an invalid vertex (one past the end). If so, just subtract 1 from the faces' vertex indices after reading them from the file.
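A minimal sketch of that fix (toZeroBased is an illustrative helper; it assumes the raw 1-based indices have already been parsed from the OBJ "f" lines):

// OBJ "f" entries count vertices from 1, while gl.drawElements counts
// from 0. An unshifted index can point one past the last vertex, which
// WebGL rejects as INVALID_OPERATION.
function toZeroBased(objIndices, vertexCount) {
    const indices = new Uint16Array(objIndices.length);
    for (let i = 0; i < objIndices.length; i++) {
        indices[i] = objIndices[i] - 1;
        if (indices[i] >= vertexCount) {
            throw new Error('Face index ' + objIndices[i] + ' is out of range');
        }
    }
    return indices;
}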

BlackBerry - Exception when creating a graphics object from a bitmap

I am making the following call in my BlackBerry application (API ver 4.5)...
public void annotate(String msg, EncodedImage ei)
{
    Bitmap bitmap = ei.getBitmap();
    Graphics g = new Graphics(bitmap);
    g.drawText(msg, 0, 0);
}
And I keep getting an IllegalArgumentException when I instantiate the Graphics object. The documentation for Graphics is confusing, as it leaves many things unstated.
What does it mean by 'default type of the device'?
How do you know if the type of 'bitmap' is not supported? Does this mean that there are different types of bitmaps? Can different EncodedImages generate different types of bitmaps?
Is there another way to add my string to the associated encoded image?
public Graphics(Bitmap bitmap)
Constructs a Graphics object for drawing to a bitmap.
Parameters:
bitmap - Bitmap to draw into. Must be Bitmap.COLUMNWISE_MONOCHROME or the default type of the device.
Throws:
IllegalArgumentException - If the type of 'bitmap' is not supported, or the bitmap is readonly.
Are you sure that your Bitmap is mutable? You can't create Graphics objects from immutable Bitmaps; that is one cause of an IllegalArgumentException. You can set the decode mode for your EncodedImage (EncodedImage.setDecodeMode). There are different modes that allow you to specify whether the image is native or read-only, along with other modes that can be combined.
The size of the bitmap might be another cause of an IllegalArgumentException. Of course, this is relevant to the target device.
I'd imagine that the default type depends on the graphics chip and hardware. (If you have a monochrome screen, the default would probably be different than if you had a color one.)
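If the decoded Bitmap turns out to be immutable, a hedged workaround (a sketch only; it relies on the Graphics(Bitmap) constructor and the drawImage overload that takes an EncodedImage, both available in API 4.5) is to copy the image into a freshly created, mutable Bitmap of the device default type and annotate the copy:

public Bitmap annotateCopy(String msg, EncodedImage ei)
{
    // A Bitmap constructed this way is mutable and of the default type.
    Bitmap copy = new Bitmap(ei.getWidth(), ei.getHeight());
    Graphics g = new Graphics(copy);
    // Draw frame 0 of the encoded image into the mutable bitmap...
    g.drawImage(0, 0, ei.getWidth(), ei.getHeight(), ei, 0, 0, 0);
    // ...then overlay the annotation text.
    g.drawText(msg, 0, 0);
    return copy;
}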
Bitmap has a static method getDefaultType(), which "queries the default Bitmap type for the device". There's also a non-static method getType(). The documentation seems to be telling you that for the code above to work, either
bitmap.getType() == Bitmap.getDefaultType()
...or...
bitmap.getType() == Bitmap.COLUMNWISE_MONOCHROME
must hold, and presumably neither of these conditions is true. You can do a sanity check on that, and maybe print out the result of getDefaultType() so you know what your target is.
Looks like you'll have to convert the bitmap or get it from somewhere else.
The Graphics object isn't normally constructed explicitly. Rather, you are given an instance of it in the paint() method, if you've overridden it.
I suspect what you want to do is create a subclass of BitmapField and override the paint() method to include your code for drawing text on the bitmap.
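A minimal sketch of that approach (hedged: the class and field names are illustrative; BitmapField and its paint() override are standard in API 4.5):

import net.rim.device.api.system.Bitmap;
import net.rim.device.api.ui.Graphics;
import net.rim.device.api.ui.component.BitmapField;

class AnnotatedBitmapField extends BitmapField
{
    private final String msg;

    AnnotatedBitmapField(Bitmap bitmap, String msg)
    {
        super(bitmap);
        this.msg = msg;
    }

    protected void paint(Graphics g)
    {
        super.paint(g);          // draw the bitmap itself
        g.drawText(msg, 0, 0);   // overlay the annotation text
    }
}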