How to keep models in Cesium bright at all times - cesiumjs

I want to disable the sun lighting effects for all models presented in the Cesium viewer when I manipulate the current time. I would like all the models to be bright all day long.
I tried using enableLighting and it does not help. Any suggestions?
This can be demonstrated using the 3D Models scenario in Sandcastle: https://sandcastle.cesium.com/?src=3D%20Models.html
Example:
At 07:00:00 UTC the model is way too dark.
At 18:00:00 UTC the model is brighter.

Using a DirectionalLight (assigned to viewer.scene.light) keeps all the models lit regardless of the simulation time.
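For reference, here is a minimal sketch (TypeScript, assuming the standard Viewer setup from the Sandcastle example) of the DirectionalLight approach: the default time-dependent sun light is replaced by a light that always points where the camera points, so models stay evenly lit no matter what the clock says.

```typescript
import * as Cesium from "cesium";

const viewer = new Cesium.Viewer("cesiumContainer");

// Replace the time-dependent sun light with a camera-following light.
const cameraLight = new Cesium.DirectionalLight({
  direction: Cesium.Cartesian3.clone(viewer.scene.camera.directionWC),
});
viewer.scene.light = cameraLight;

// Re-align the light with the camera before every frame is rendered.
viewer.scene.preRender.addEventListener((scene: Cesium.Scene) => {
  cameraLight.direction = Cesium.Cartesian3.clone(
    scene.camera.directionWC,
    cameraLight.direction
  );
});
```

A fixed world-space direction also works if you prefer constant shading over headlight-style shading.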

Related

Trying to track visual change in water quality in GIS

I am trying to quantify the change in water quality around storm drains before and after a rain event by quantifying imagery I have acquired. What I have in mind would be similar to an NDVI, but for areas of dirty versus clean water. I have been looking around for options on what to use for this but haven't found anything successful. The end goal would be to quantify a few images and then create the equivalent of a dNDVI for the change over the days. Does anybody have recommendations for this?
I have tried change detection and compute change but have come up empty thus far.
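To make the "NDVI-like" idea concrete, here is a hedged TypeScript sketch of a normalized-difference index over two bands and its change over a rain event. The red/green pairing follows the turbidity-oriented NDTI (Lacaux et al. 2007); treat the band choice as an assumption to validate against your own imagery.

```typescript
// (Red - Green) / (Red + Green), computed per pixel over flat band arrays.
function normalizedDifference(red: Float32Array, green: Float32Array): Float32Array {
  const out = new Float32Array(red.length);
  for (let i = 0; i < red.length; i++) {
    const sum = red[i] + green[i];
    out[i] = sum === 0 ? 0 : (red[i] - green[i]) / sum;
  }
  return out;
}

// The dNDVI analogue: per-pixel difference of the index before and after rain.
function indexChange(before: Float32Array, after: Float32Array): Float32Array {
  return after.map((v, i) => v - before[i]);
}
```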

How can I track small, fast-moving, free-falling stones in real time?

I am looking for the fastest real-time object trackers for tracking small, fast-moving stones that are free-falling vertically, given that there can be up to 50 objects in a single frame and their shapes are very similar.
I have trained a YOLOv5 object detection model on stones and the inference speed is pretty good (120 FPS), but when I pass the .pt weights file to the DeepSort algorithm for object tracking and test it on a normal-speed video, it does not track my objects at all. However, when I slowed the video down to 0.25x speed and re-tested DeepSort, it worked, but it was not able to associate stones and differentiate well between them (one ID is given to multiple objects).
Note: I am using the weights pre-trained on pedestrians for the deep (appearance) part of DeepSort.
Is there any solution to:
1- Make the model work on the normal-speed video without having to slow it down?
2- Solve the problem of ID switching and ID repeating?
3- Should I re-train the deep part of DeepSort on my dataset of stones, or can I use the pre-trained weights?
Any help of any kind will be very appreciated :)
1- Make the model work on the normal-speed video without having to slow it down?
Most of the GitHub repos that implement DeepSort perform the tracking offline. That is, once the object detection + association process is done for one frame, the tracker takes the next frame, and so on until the video ends. So the FPS of your video shouldn't affect your tracking results, as the only thing that changes when you slow a video down is the presentation timestamp (PTS) of each frame.
2- Solve the problem of ID switching and ID repeating?
Most of the DeepSort implementations on GitHub (https://github.com/nwojke/deep_sort, https://github.com/ZQPei/deep_sort_pytorch) have not implemented the lambda weighting from Eq. (5) in the paper. This implies that the positions of the objects are not taken into consideration when performing the ID association. In your case this is a waste of information, especially as the stones are falling and their movement is easily predictable.
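For concreteness, Eq. (5) in the DeepSort paper blends the Mahalanobis (motion) distance d(1) and the appearance cosine distance d(2) into one association cost, c(i, j) = lambda * d(1)(i, j) + (1 - lambda) * d(2)(i, j). A minimal TypeScript sketch of that blending (matrix names are illustrative):

```typescript
// Blend a motion cost matrix (Mahalanobis distance from the Kalman
// prediction) with an appearance cost matrix (cosine distance between
// ReID embeddings) before the assignment step.
function combinedCost(
  motion: number[][],     // d1[i][j]: track i vs. detection j
  appearance: number[][], // d2[i][j]: track i vs. detection j
  lambda: number          // 1 = motion only, 0 = appearance only
): number[][] {
  return motion.map((row, i) =>
    row.map((d1, j) => lambda * d1 + (1 - lambda) * appearance[i][j])
  );
}
```

For falling stones a relatively high lambda should pay off, since the Kalman prediction of the next position is very reliable.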
3- Should I re-train the deep part of DeepSort on my dataset of stones, or can I use the pre-trained weights?
Visually, your stones most likely look very similar. This means that training a custom ReID model on stones would have very little effect on your final tracking results. Hence, in your specific case, it is more important that the stones' positions are taken into account when performing the ID association, so we are back at the previous point.
Here is a repo that implements most of what you need (https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch)
Start with computer vision basics before pouring more power into your YOLOv5 model. Have you heard about the atmospheric turbulence model? You can read about it in Chapter 5 (Image Restoration and Reconstruction) of Digital Image Processing, 3rd Edition, by Rafael Gonzalez.
Perhaps this paper will help you understand more about fast-moving objects: https://openaccess.thecvf.com/content/CVPR2021/html/Rozumnyi_DeFMO_Deblurring_and_Shape_Recovery_of_Fast_Moving_Objects_CVPR_2021_paper.html
Good luck and enjoy!

What is the fastest way to export floor plans (visually)?

I'm looking for the fastest way to export floor plans (hundreds) via the Revit API from a Revit model, so that the output is a visual representation of each floor plan (image, DWG, DXF, thumbnail, PDF).
Given over 400 floor plans, I've tried:
Image export with various settings, tested as low as 72 DPI at 256 pixels - about 20 min
DWG export - about 17 min
DXF export - about 17 min
Are there any other ways to export the floor plans in a quick manner?
Speed is the key in my problem, as long as there is some viable output for each plan.
I took this up with the development team for you. So far, they have provided one pretty interesting suggestion:
Is it possible to open the model in multiple instances of Revit at once, say N, and have each instance export one view (or 400/N views)?
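A hypothetical sketch of the fan-out logic only; how each Revit session is launched and driven (journal file, add-in, or otherwise) is left open:

```typescript
// Split the view names into N batches, one batch per Revit session.
function partition<T>(items: T[], workers: number): T[][] {
  const batches: T[][] = Array.from({ length: workers }, () => []);
  items.forEach((item, i) => batches[i % workers].push(item));
  return batches;
}

// e.g. 400 floor plans across 4 sessions -> 100 views each
const viewNames = Array.from({ length: 400 }, (_, i) => `Floor Plan ${i + 1}`);
console.log(partition(viewNames, 4).map((b) => b.length)); // [100, 100, 100, 100]
```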

Heatmap / feature map from Yolov3

I'm currently working on YOLOv3 and have spent the last two days trying to implement the Grad-CAM approach, without success. At the end I link both GitHub repositories I used.
Since I failed to create a heatmap with that approach, I am looking for other ways to create a heatmap for a given class and picture. But so far I could not find any implementation showing how to do this.
Which approaches could I also pursue? Or which ideas should I still try?
Yolov3: https://github.com/zzh8829/yolov3-tf2
Grad-Cam: https://github.com/sicara/tf-explain
This notebook is great: https://colab.research.google.com/drive/1CJQSVaXG1ceBQGFrTHnt-pt1xjO4KFDe. It's by the creator of Keras and it uses TF2.
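For orientation, what Grad-CAM computes for a class c from a chosen convolutional layer's feature maps A^k is:

```latex
\alpha_k^c = \frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial y^c}{\partial A^k_{ij}},
\qquad
L^c_{\text{Grad-CAM}} = \operatorname{ReLU}\Big(\sum_k \alpha_k^c A^k\Big)
```

Here Z is the number of spatial positions and y^c is the class score. Any port to Yolov3 has to pick a suitable layer and a per-box class score to differentiate, which is likely where a generic classification-oriented implementation like tf-explain's stumbles.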

Detecting features like holes in components

Is it possible to get data on hole features on a component within a model - quantity and diameter? The specific use case we are looking at is to determine the size and quantity of bolts required for a component/assembly - bolts, nuts and washers are often not modelled by our customers. Example - if we can determine that a component has 4 x 13 mm holes, we can programmatically add 4 x M12 bolts to the BOM.
Only a geometric approach is possible: the viewer loads simple geometry; it is not a parametric CAD modeler. You could pre-process the CAD seed file to add custom properties (in Inventor, for example); you can then access those in the component properties in the viewer.
To get access to the geometry (vertices & faces) in the viewer take a look at this post: http://adndevblog.typepad.com/cloud_and_mobile/2015/07/accessing-mesh-information-with-the-view-data-api.html
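To make the geometric approach concrete, here is a hedged TypeScript sketch that estimates a hole's diameter from rim vertices, assuming you have already extracted a boundary loop from the mesh (per the post above) and projected its points onto the hole's plane. The fit is the standard least-squares (Kåsa) circle fit.

```typescript
type Pt = { x: number; y: number };

// Solve a 3x3 linear system M u = b with Cramer's rule.
function solve3(m: number[][], b: number[]): number[] {
  const det = (a: number[][]) =>
    a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1]) -
    a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0]) +
    a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]);
  const d = det(m);
  const col = (k: number) => m.map((row, i) => row.map((v, j) => (j === k ? b[i] : v)));
  return [det(col(0)) / d, det(col(1)) / d, det(col(2)) / d];
}

// Least-squares circle through the rim points:
// model x^2 + y^2 = 2*cx*x + 2*cy*y + k, with k = R^2 - cx^2 - cy^2.
function fitCircle(pts: Pt[]): { cx: number; cy: number; diameter: number } {
  let sxx = 0, sxy = 0, syy = 0, sx = 0, sy = 0, szx = 0, szy = 0, sz = 0;
  for (const p of pts) {
    const z = p.x * p.x + p.y * p.y;
    sxx += p.x * p.x; sxy += p.x * p.y; syy += p.y * p.y;
    sx += p.x; sy += p.y;
    szx += p.x * z; szy += p.y * z; sz += z;
  }
  const M = [
    [4 * sxx, 4 * sxy, 2 * sx],
    [4 * sxy, 4 * syy, 2 * sy],
    [2 * sx, 2 * sy, pts.length],
  ];
  const [cx, cy, k] = solve3(M, [2 * szx, 2 * szy, sz]);
  return { cx, cy, diameter: 2 * Math.sqrt(k + cx * cx + cy * cy) };
}

// e.g. a 13 mm hole sampled at 12 rim vertices
const rim: Pt[] = Array.from({ length: 12 }, (_, i) => {
  const a = (2 * Math.PI * i) / 12;
  return { x: 5 + 6.5 * Math.cos(a), y: 3 + 6.5 * Math.sin(a) };
});
console.log(fitCircle(rim).diameter.toFixed(1)); // ~13.0
```

Counting the distinct fitted circles per component then gives the hole quantity.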
Hope that helps