Looking for IMU simulation software - kalman-filter

If I am posting this in the wrong place, please let me know (I'll move the question there).
I am looking for a program that can generate 6- or 9-DoF IMU data along with the actual path that produced it.
I am working on implementing a Kalman filter and need data. The requirement is that there is a camera C and an object X. X and C each have an IMU attached, and both can move freely. The camera has a function that can extract the 3D position of X.
I need a simulator that can simulate this scenario.
Thank you.
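To make the requirement concrete, here is a rough sketch of the kind of ground truth and measurements I'm after (Python/NumPy, with a made-up circular trajectory, gravity ignored, and a simple bias-plus-white-noise error model, so purely illustrative rather than a real simulator):

```python
# Minimal synthetic-IMU sketch. Assumptions: planar circular motion, gravity ignored,
# constant bias + white noise as the only IMU errors. Just enough ground truth and
# measurements to exercise a Kalman filter, not a substitute for a real simulator.
import numpy as np

def simulate_circle(radius, omega, t):
    """Ground-truth position, velocity, acceleration of a point on a circle."""
    pos = np.stack([radius * np.cos(omega * t),
                    radius * np.sin(omega * t),
                    np.zeros_like(t)], axis=1)
    vel = np.stack([-radius * omega * np.sin(omega * t),
                    radius * omega * np.cos(omega * t),
                    np.zeros_like(t)], axis=1)
    acc = -omega**2 * pos  # centripetal acceleration
    return pos, vel, acc

def imu_measurements(acc_true, gyro_true, rng, acc_noise=0.05, gyro_noise=0.01,
                     acc_bias=0.1, gyro_bias=0.02):
    """Corrupt true acceleration / angular rate with a constant bias + white noise."""
    accel_meas = acc_true + acc_bias + rng.normal(0.0, acc_noise, acc_true.shape)
    gyro_meas = gyro_true + gyro_bias + rng.normal(0.0, gyro_noise, gyro_true.shape)
    return accel_meas, gyro_meas

rng = np.random.default_rng(0)
dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)

# Object X moves on a small fast circle, camera C on a larger slow one.
pos_x, vel_x, acc_x = simulate_circle(radius=1.0, omega=1.0, t=t)
pos_c, vel_c, acc_c = simulate_circle(radius=3.0, omega=0.2, t=t)

# Constant yaw rate for each body, matching the circular motion.
gyro_x = np.tile([0.0, 0.0, 1.0], (len(t), 1))
gyro_c = np.tile([0.0, 0.0, 0.2], (len(t), 1))

imu_x = imu_measurements(acc_x, gyro_x, rng)
imu_c = imu_measurements(acc_c, gyro_c, rng)

# The camera's "3D position of X" function: true relative position + measurement noise.
cam_meas = (pos_x - pos_c) + rng.normal(0.0, 0.02, pos_x.shape)

print(t.shape, imu_x[0].shape, cam_meas.shape)  # (2000,) (2000, 3) (2000, 3)
```

A dedicated simulator would of course add proper orientation dynamics, gravity, and realistic noise models; the sketch above only shows the shape of the data I need.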

Related

Tensorflow 2 Object Detection API - Can/Should I use K-Fold Cross Validation?

I have a small dataset of about 1000 images and am training my model to detect 8 classes. I divided my dataset in an 80:20 ratio (training:validation) and want to apply k-fold cross-validation to make the most of my dataset.
#1: Is this line of thinking proper, or am I misunderstanding something? In another post about k-fold cross-validation in object detection, someone mentioned that since we have confidence scores, we don't need k-fold cross-validation. However, I don't see the connection between training my model on k folds and confidence scores.
#2: Is this something that has to be done manually, or does TensorFlow 2.x have built-in support for k-fold cross-validation?
Any clarification would be greatly appreciated! Thanks!
About your queries 1 and 2:
IMO, it would be proper to do k-fold. FYI, splitting the dataset in an 80:20 ratio is called the holdout method; AFAIK, it's not k-fold. When you do k-fold there are things you probably need to consider, such as class distribution, bounding-box distribution, etc. However, as you haven't provided any sample data or code, here is a similar discussion that might help you.
It has to be done manually. K-fold cross-validation is a resampling procedure used to evaluate machine-learning models on a limited data sample; it's not something that comes integrated with the framework.
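To illustrate the manual approach (this is only a sketch: the directory layout and the create_tf_record helper are placeholders for your own pipeline, not part of the Object Detection API), you can split the list of annotated images with scikit-learn's KFold and build one training/validation record set per fold:

```python
# Sketch of manual K-fold splitting for an object-detection dataset.
# Assumption: you have a folder of annotated images; how each split is turned
# into TFRecords (create_tf_record below) is up to your own conversion script.
from pathlib import Path
from sklearn.model_selection import KFold

image_paths = sorted(Path("dataset/images").glob("*.jpg"))  # hypothetical layout

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(image_paths)):
    train_files = [image_paths[i] for i in train_idx]
    val_files = [image_paths[i] for i in val_idx]
    print(f"fold {fold}: {len(train_files)} train / {len(val_files)} val")
    # create_tf_record(train_files, f"train_fold{fold}.record")  # your own helper
    # create_tf_record(val_files, f"val_fold{fold}.record")
    # then train and evaluate the detector once per fold and average the metrics
```

With only ~1000 images, stratifying the folds by class (or at least checking the class and box distribution per fold) is worth the extra effort.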

AR / VR Toolkit Reduce Model Mesh to display in AR

Is there a way to reduce mesh polygons?
As an example project I use the TGA model provided by Autodesk. (https://knowledge.autodesk.com/support/revit-products/getting-started/caas/CloudHelp/cloudhelp/2019/ENU/Revit-GetStarted/files/GUID-61EF2F22-3A1F-4317-B925-1E85F138BE88-htm.html rme_advanced_sample_project.rvt)
If you add all instances to the scene you get a polygon count of about 1.3M.
For a desktop computer this is no problem at all: the model downloads in about a minute and is displayed completely.
For my iPhone (an iPhone 8), this is clearly too big.
As soon as I start the AR scene and download the model, the memory usage rises to over 1.2 GB (before: 0.15 GB) and the app crashes.
Even if I exclude some instances (walls, ceilings, etc.) before processing the scene, so that only the technical building equipment is displayed, the model is still too big for the iPhone.
Are there ways to reduce the mesh with the ar-vr-toolkit API, or do I have to do this manually in Revit?
Edit: 27.06.18
Here is the model I want to display in AR (tris: 2.8M, verts: 2.4M).
Steps:
1) Uploaded the original .rvt file (70 MB) to my bucket.
2) Translated the file via Forge.
3) Created a scene with the ar-vr-toolkit API.
4) Processed the scene with the ar-vr-toolkit API.
5) Downloaded the scene to Unity.
6) Created a prefab.
The meshes are way too detailed. The graphics would not change much if I reduced the vertex count to 10-15%.
In Unity I can use assets like Mesh Simplify (https://assetstore.unity.com/packages/tools/modeling/mesh-simplify-43658) to reduce the count.
Another way is to export the model to e.g. 3ds Max or Maya to reduce the count.
But I want to do this automatically.
My question is: is there a way to do this with Forge?
[Image 1]
[Image 2]
My colleagues who are experts in this area are on vacation now, so let me try to answer your question first; they may add more information later.
Unfortunately, the answer is no, AFAIK. For the Forge AR|VR Toolkit service, I remember some mesh reduction is done automatically on the server side if it detects that the client device is a HoloLens or DAQRI; you can see that in https://github.com/wallabyway/ARVRToolkit/blob/master/unity-src/ARVRToolkit/Assets/Forge/ARKit/RequestQueue.cs#L155. But that's it: we do not provide an API to help reduce the mesh, and there is no other API within Forge that can do it either.
As you already know, you may need to do the mesh reduction in some other product, like 3ds Max; that's the only approach I can think of right now.
My colleagues may have more comments on this when they come back.
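If exporting to 3ds Max or Maya is too heavyweight, one workaround (outside Forge, and just a suggestion on my side) is to export the downloaded geometry to a neutral format such as OBJ and decimate it offline before building the Unity prefab. A rough sketch with Open3D's quadric decimation (the model.obj filename is a placeholder for whatever you export):

```python
# Offline decimation sketch using Open3D (pip install open3d).
# Assumption: the Forge/ARVR-Toolkit scene has already been exported to a mesh
# file such as model.obj; this is not a Forge API, just a post-processing step.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("model.obj")  # hypothetical exported file
print(f"before: {len(mesh.triangles)} triangles")

# Reduce to roughly 15% of the original triangle count, matching the budget above.
target = int(len(mesh.triangles) * 0.15)
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
simplified.compute_vertex_normals()  # Unity needs normals for shading

print(f"after:  {len(simplified.triangles)} triangles")
o3d.io.write_triangle_mesh("model_reduced.obj", simplified)
```

Quadric decimation usually preserves the overall shape reasonably well at the 10-15% target you mention, so it may be good enough for the iPhone while keeping the full-resolution model for desktop viewing.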

Providing the position (angle) of the camera from which photos were taken?

I have a question that is very important to me. I would like to use the Autodesk Reality Capture API in my app. I read the API documentation but did not find an answer. I know the position of the camera and would like to send this information to the Reality Capture API. For example, a circle is divided into 24 parts, so I know that each photo was taken every 15 degrees. Is there any parameter that allows me to provide the position of the camera?
There is no way of passing this kind of information to the Reality Capture API (at least no official way), and even though that is debatable, there is not much use for such input.
Roughly speaking, the engine will "stitch" the given images together based on common pixels/regions/patches. For complex objects, a photo every 15 degrees might not be enough to capture the complex geometry, and you will have to add more photos aimed at that specific region.
The main benefit is that you can process your images, get the result, see the missing or low-detail spots, take a bunch of photos of those specific spots, add them to the project, process your project again, and repeat until you get a satisfying result. From this perspective, the "rule" of a photo every 15 degrees breaks down very fast.
If you are getting wrong results, 80% of the time (again, the Pareto principle) this is caused by a missing scenetype parameter, which defaults to aerial, when people usually expect the object type.
Check The Hitchhiker's Guide to ... Reality Capture API for more details.
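For the scenetype point, this is roughly what creating a photoscene with the object scene type looks like (the endpoint path and field names are from my memory of the photo-to-3d docs, so please double-check them against the guide above):

```python
# Sketch of creating a Reality Capture photoscene with scenetype=object.
# Assumption: endpoint and form fields as I recall them from the photo-to-3d docs;
# verify against the official documentation before relying on this.
import requests

FORGE_TOKEN = "..."  # a valid 2-legged OAuth token (not shown here)

resp = requests.post(
    "https://developer.api.autodesk.com/photo-to-3d/v1/photoscene",
    headers={"Authorization": f"Bearer {FORGE_TOKEN}"},
    data={
        "scenename": "my_turntable_scan",
        "scenetype": "object",  # the parameter discussed above; default is "aerial"
        "format": "obj",
    },
)
print(resp.status_code, resp.json())
```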

LabVIEW: Icon identification

I'm entirely new to LabVIEW, and as a pet project I'm trying to recreate a pulse detector. The thing is, the .vi was saved in LabVIEW 2010 and I can't open it in LabVIEW 2009, so we're trying to remake it by looking at the module. I do, however, have an image, but since I'm pretty new, I can't identify some of the components used. Below is an image of the .vi, with the parts I don't know circled in red and numbered. What exactly are these? Thanks!
To make a shift register, right-click on the edge of the while loop and place a shift register. The Wait (ms) node is found in the timing functions palette. #1 and #3 are found in the waveform generation palette. #2 is a waveform graph that is bound to the output of the filter; just right-click on the output of the filter and create an indicator.
I only have limited experience with the specific features in this code, so I don't have exact names, but it should point you in the right direction:
2 is a dynamic data indicator.
1 converts it to a waveform (it probably appears automatically if you hook a DDT wire up to a waveform function).
3 unbundles the data from the waveform. It should be in the waveform palette.
4 is a shift register.
5 is a wait function.
In general, I would recommend that you get to learning, as you will need to understand these things at least to that level before you can be proficient.
Also, the NI forums are much more suitable for this type of question, and they have many more users. I would suggest that if you have questions you can't answer yourself, you post them there.

Using d3, redraw the graph based on node click

I am using a d3 force-directed graph that produces a hairball of data. :-)
I think it's either this one: http://bl.ocks.org/1138500 or this one: http://bl.ocks.org/4062045
(I inherited this, so I'm not exactly sure, but these two visualizations are really similar!)
When I click a node, I'd like to zoom in on, or redraw the graph based on, just that node. So on clicking a node, the other nodes and edges would disappear. How can I do that? I'm using Ruby on Rails and Neo4j, so a Cypher query creates the JSON data that d3 uses.
Will I have to re-query the database via Cypher? I hope not. I'm hoping this can be done in d3.js alone.
Thanks in advance for your ideas. If you have a working example, I'd love to see it!
You probably need to ask on the D3 mailing list.