Set point size in libGDX when using GL20

I need to render some points with a specified size using ShapeRenderer.
When using GL20, Gdx.gl and Gdx.gl20 will be initialized, while Gdx.gl10 and Gdx.gl11 will be null.
I can set line width and render using code such as this:
Gdx.gl.glLineWidth(5);
mShapeRenderer.begin(ShapeType.Rectangle);
mShapeRenderer.rect(0.0f, 0.0f, 50.0f, 50.0f);
mShapeRenderer.end();
But from what I've figured out, I can only set the point size using Gdx.gl10.glPointSize(5) or Gdx.gl11.glPointSize(5), which won't work in my case since both gl10 and gl11 are null.
Are there any simple solutions to this problem?

OpenGL ES dropped support for glPointSize (among other things) in 2.0, which is why you can't find it in Gdx.gl20 or in Gdx.gl.
Instead of setting a point size, just use ShapeRenderer's filledCircle to render the "large" points. (I would point to the API documentation, but they just changed the API last week and I'm not sure which version you are using.)
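For example, with the newer API it would look something like this (assuming the post-rewrite names ShapeType.Filled and circle(); older versions expose ShapeType.FilledCircle and filledCircle() instead):
mShapeRenderer.begin(ShapeType.Filled);
// Draw the "point" at (25, 25) as a filled circle with a 5-pixel radius,
// standing in for the unavailable glPointSize(5).
mShapeRenderer.circle(25.0f, 25.0f, 5.0f);
mShapeRenderer.end();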

Related

How to use ballot and shfl with variable-sized thread block tiles in CUDA cooperative groups

I am using cooperative_groups::thread_block for the first time, and I'm a bit lost. I would like code where I can decide on the cooperative_groups::thread_block_tile size at runtime; however, this doesn't seem to be possible with thread_block_tile, since the size has to be passed as a template argument at compile time. I see you can also call tiled_partition without a template argument to get a thread_group instead of a thread_block_tile; however, the ballot and shfl methods then aren't available, and I seem to need __ballot_sync and __shfl_sync instead. These intrinsics require a mask, and I'm not quite clear on how that works. Would calling __activemask() work to do a ballot within a single thread_block, or would you have to build the mask in some other way? Or is there some way to get variably sized thread_block_tiles that I'm missing?
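For illustration of the constraint: since the tile size must be a template argument, a runtime choice seems to require dispatching to compile-time instantiations, along these lines (a sketch only; vote_in_tile is a made-up name):
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

template <unsigned int TileSize>
__device__ unsigned int vote_in_tile(int pred) {
    // TileSize is a compile-time constant, so the tile-scoped ballot works.
    auto tile = cg::tiled_partition<TileSize>(cg::this_thread_block());
    return tile.ballot(pred);
}

__device__ unsigned int vote(int pred, unsigned int tileSize) {
    // The runtime size has to be mapped onto compile-time instantiations.
    switch (tileSize) {
        case 8:  return vote_in_tile<8>(pred);
        case 16: return vote_in_tile<16>(pred);
        default: return vote_in_tile<32>(pred);
    }
}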

Transform Edit2D Areas

I am using the Edit2D extension on an SVF created from a 2D DWG file, and I have a question about transforms. The Autodesk.Edit2D.Polygons that are created have a getArea() method, which is great; however, it doesn't return values in the correct unit scale. I tested one, and something that should be roughly 230 sq ft in size comes back as about 2.8.
I notice that the method takes an argument of type Autodesk.Edit2D.MeasureTransform, which I'm sure is what I need, but I don't know how to get that transform. I see that I can get viewer.model.getData().viewports[1].transform; however, that is just an array of 16 numbers, not a transform object, so it causes an error when I try to pass it in.
I have not been able to find any documentation on this. Can someone tell me what units this is coming back in and/or how to convert to the same units as the underlying DWG file?
Related question: how do I tell what units the underlying DWG is in?
EDIT
To add to this, I tried to get all polylines in the drawing that have an area property. In this case I was able to figure out that the polylines in the underlying DWG report their area in square inches (not sure if that's always the case). I generated Edit2D polygons based on the polylines, so it basically just drew over them.
I then compared the area property of each polyline to the result of getArea() on the corresponding polygon to find the ratio. In this case the getArea() value was always about 83 or 84 times smaller than the square-foot value of the polyline it came from, roughly consistent with the 230 sq ft area reading as about 2.8 above (there is some degree of error in my tracing system, so I don't expect them to be exact at this point). However, that doesn't fit any unit conversion I know of. So the remaining questions are:
What unit is this?
Is this consistent, or do I need to look somewhere else for this scale?
Maybe you missed section 3.2, Units for Areas and Lengths, of https://forge.autodesk.com/en/docs/viewer/v7/developers_guide/advanced_options/edit2d-use/
If you use Edit2D without the MeasureExtension, it will display all coordinates in model units. You can customize units by modifying or replacing DefaultUnitHandler. More information is available in the Customize Edit2D tutorial.
and https://forge.autodesk.com/en/docs/viewer/v7/developers_guide/advanced_options/edit2d-customize/
BTW, we can get the DefaultUnitHandler via edit2dExt.defaultContext.unitHandler.
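For example (assuming the extension is already loaded under its standard id):
const edit2dExt = viewer.getExtension('Autodesk.Edit2D');
const unitHandler = edit2dExt.defaultContext.unitHandler; // the DefaultUnitHandler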
OK, after a great deal of experimentation and frustration, I think I have it working. I ended up looking directly into the JS for the getArea() method in dev tools. Searching through the script, I found a class called DefaultMeasureTransform that inherits from MeasureTransform and takes a viewer argument. I was able to construct one and then pass it in as an argument to getArea():
const transform = new Autodesk.Edit2D.DefaultMeasureTransform(viewer);
const area = polygon.getArea(transform);
Now the area variable matches the units in the original CAD file (within acceptable rounding error, anyway; it's about 0.05 square inches off).
It would be nice to have better documentation on the coordinate systems; am I missing it somewhere? Either way, this is working, so hopefully it helps someone else.

Autodesk Forge Viewer - Near and Far clipping issues

We recently updated one of our projects to use the latest version of the Autodesk Forge Viewer (v7.x).
In general, the migration went fine; however, we noticed that for some models we have issues with the far clipping plane of the camera. I figured out that this happens because the model contains some elements that are very far from the rest of the model, so the bounding box of the entire model is orders of magnitude larger than it should be. This results in camera near = 1 and far = 10000, and it seems that far would need to be higher in order not to hide parts of the model when zooming out.
For now, we have been able to work around the issue by specifying the "nearRadius" load option, but since we have to set it to a value like 50 or 100 to avoid the far-plane issues, we still get some near-plane clipping.
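For reference, that workaround looks roughly like this (doc and viewable are placeholders for however the model is loaded):
viewer.loadDocumentNode(doc, viewable, {
    nearRadius: 50 // large enough to avoid the far-plane issues, at the cost of some near-plane clipping
});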
I was wondering if there is any way to fix the model once it is loaded, so that the viewer uses a more realistic bounding box. So far I have found that during the load it already sets the variable verylargebbox to true and therefore does not use nearRadius = 0, but either the value passed in the load options or 1. I was able to set nearRadius to a negative value to get the same behaviour as nearRadius = 0, but due to the large bounding box of the model, we still experience clipping issues.
In order to fix the model I have already tried:
Excluding the ids of the elements that are far away with ids = [...] in the load options
Setting the visibility of these elements to off using setNodeOff()
Using NOP_VIEWER.navigation.fitBounds() to set the bounding box (as suggested in "near and far calculation in Autodesk Forge Viewer")
However, getVisibleBounds() still returns the huge bounding box.
We would like to find a solution in which we don't need to modify the source model file before translating it to .svf.
The engineering team has confirmed that there are pathological cases where, because the bounding box of the model is extremely large, the computation of the near/far camera planes causes clipping artifacts. While this is being addressed, the suggested workaround is to set the near radius to 1:
viewer.impl.setNearRadius(1);

ITK difference between SetInitialTransform and SetMovingInitialTransform

I am using the ITK library for image registration. I wonder, when setting initial parameters for an ImageRegistrationMethodv4-type registration, should I use SetMovingInitialTransform and SetFixedInitialTransform as in the tutorial, or just SetInitialTransform?
Does the "transform" in SetInitialTransform mean the transform for the moving image or for the fixed image? Thank you :)
(Please read this with caution--I do not have the library with me to test this answer; it's based on memory only.)
I believe SetInitialTransform() refers to the transform which is actually optimized by the registration method. In other words, it is a collection of transform parameters that specify an "initial guess" for the optimization process; these parameters will then start moving around at each iteration. (They are therefore applied to the moving image.)
I think SetMovingInitialTransform() and SetFixedInitialTransform() refer to static initial transforms which do not change at all during the registration process. They merely "set up" the moving and fixed images to desired starting locations, if you are not satisfied with their default positions in space.
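In code, the distinction would look roughly like this (same caveat as above: a sketch from memory, using a simple translation transform for concreteness):
#include "itkImage.h"
#include "itkImageRegistrationMethodv4.h"
#include "itkTranslationTransform.h"

int main()
{
  using ImageType = itk::Image<float, 2>;
  using TransformType = itk::TranslationTransform<double, 2>;
  using RegistrationType =
      itk::ImageRegistrationMethodv4<ImageType, ImageType, TransformType>;

  auto registration = RegistrationType::New();

  // Static set-up transform for the moving image; never touched by the optimizer.
  auto movingInitial = TransformType::New();
  registration->SetMovingInitialTransform(movingInitial);

  // The transform whose parameters the optimizer actually updates each iteration,
  // starting from this initial guess.
  auto initialGuess = TransformType::New();
  registration->SetInitialTransform(initialGuess);

  return 0;
}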
If you have some simple 2D images, try testing this answer out with simple initial transformations, like a 5-unit translation transformation or something.
You could try reading the ImageRegistrationMethodv4 documentation for a little more info.

RenderTargetBitmap::RenderAsync ignores the size arguments?

The documentation says "Specifies the target width at which to render" / "Specifies the target height at which to render".
However, the resulting RenderTargetBitmap completely ignores those arguments.
E.g. in the 1080p WP8.1 simulator it renders as 1920x1080, even if the arguments are 853x480.
Is there a way to make the RenderTargetBitmap::RenderAsync method work as advertised? Thanks in advance.
It does not ignore them. It just automatically multiplies them by DisplayInformation.RawPixelsPerViewPixel.
I don't think this can be avoided at the moment. But you can always calculate what arguments you need to pass to get the result that you want.
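Since it multiplies rather than ignores, you can divide the size you actually want by the scale factor before calling RenderAsync. A rough C++/CX sketch (CaptureAt and element are placeholders for your own code):
#include <ppltasks.h>
using namespace Windows::Graphics::Display;
using namespace Windows::UI::Xaml::Media::Imaging;

void CaptureAt(Windows::UI::Xaml::UIElement^ element)
{
    double scale = DisplayInformation::GetForCurrentView()->RawPixelsPerViewPixel;
    auto bitmap = ref new RenderTargetBitmap();
    // Passing 853/scale view pixels yields ~853 raw pixels after the implicit multiply.
    concurrency::create_task(bitmap->RenderAsync(element,
            (int)(853 / scale), (int)(480 / scale)))
        .then([bitmap]()
    {
        // bitmap->PixelWidth x bitmap->PixelHeight should now be close to 853x480.
    });
}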