Heatmap / feature map from Yolov3 - deep-learning

I'm currently working on YOLOv3 and have spent the last two days trying to implement the Grad-CAM approach, without success. At the end I link both GitHub repositories I used.
Since I failed to create a heatmap with that approach, I am looking for other ways to create a heatmap for a given class and image, but so far I have not found any implementation showing how to do this.
Which other approaches could I pursue, or which ideas should I still try?
Yolov3: https://github.com/zzh8829/yolov3-tf2
Grad-Cam: https://github.com/sicara/tf-explain

This notebook is great: https://colab.research.google.com/drive/1CJQSVaXG1ceBQGFrTHnt-pt1xjO4KFDe. It's by the creator of Keras and uses TF2.
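For reference, a minimal Grad-CAM sketch in TF2/Keras, in the spirit of that notebook, could look like the following. It assumes a plain classifier-style tf.keras model whose output is a vector of class scores; the layer name and class index are placeholders, and for the YOLOv3 model from the linked repo you would instead pick one detection head and differentiate a single box/class score.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index):
    """Compute a Grad-CAM heatmap for one image and one class index."""
    # Model mapping the input to (last conv feature maps, predictions)
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )

    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, class_index]          # scalar score to explain

    # Gradient of the class score w.r.t. the conv feature maps
    grads = tape.gradient(score, conv_out)
    # Channel weights: global-average-pool the gradients
    weights = tf.reduce_mean(grads, axis=(1, 2))
    # Weighted sum of the feature maps, then ReLU and normalisation to [0, 1]
    cam = tf.reduce_sum(conv_out * weights[:, tf.newaxis, tf.newaxis, :], axis=-1)
    cam = tf.nn.relu(cam)[0]
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()
```

The resulting low-resolution map can then be resized to the input size and blended over the picture.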

Related

QuadTreeProvider and WFS data

With each passing day I'm getting more and more annoyed with Cesium.
There was a project that used QuadTreeProvider to create 3D structures from WFS data.
But the Cesium team saw that the 3D business was more lucrative, and not only made it harder to create primitives from WFS data but also added mechanisms that discourage any use other than their "Ion 3D Tiles".
Now they answer any question involving 3D with a commercial reply like:
"That sounds like the ideal use case for 3DTiles except for the
dynamic data. Depending on your needs and resources, you could always
contact Todd at todd#agi.com to talk about possibly getting your data
working with 3DTiles."
Of course we can contact Todd. But I don't want to! I want to use free 3D stuff. I WANT to use Geoserver WFS data, Todd. Can you allow this for me? Can you allow this project to work again as before?
If you want to make money from your products, you have every right to do so, but don't stop users from creating their own solutions if they don't want to contact Todd.
After all, my question is: how can I use WFS data from GeoServer to create 3D objects in Cesium without needing to use Ion / 3DTiles?

Suggestion about automatic map creation problem

I have two layers: the first is the ROAD layer and the second is the PARCEL layer, as shown in Figure 1. I can get the data in both DXF and SHP formats.
My task is to compute the area of intersection between the ROAD and PARCEL layers; this is the easy part, since I can compute the intersection easily with QGIS or GeoPandas. The difficult part is creating a map for each parcel. Sometimes I have to create more than a hundred maps for a single project. For the mapping there is a template I have to use, similar to Figure 2, and some attribute data, such as the owner of the parcel, should be included in each map.
These maps should be in both PDF and DXF format, and each map should be A3 size. What libraries or programming languages should I use to produce such maps? I have experience with the GeoPandas library but I am not sure whether it is enough for this task.
Should I try QGIS plugin development or ArcPy? Could you please share your experiences and ideas about this problem?
I am looking forward to hearing from you.
Any help and suggestions are appreciated.
Thanks in advance.
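As a starting point, here is a rough sketch of how the intersection-plus-batch-PDF part could look with GeoPandas and Matplotlib alone (no QGIS). The file names, the 'owner' column, and the A3 figure size are assumptions; the fixed template and the DXF output would still need something like a QGIS print layout / Atlas or a dedicated DXF library.

```python
import geopandas as gpd
import matplotlib.pyplot as plt

# Hypothetical input files and column names
roads = gpd.read_file("road.shp")
parcels = gpd.read_file("parcel.shp")

# Area of intersection between ROAD and PARCEL (the "easy part")
inter = gpd.overlay(parcels, roads, how="intersection")
inter["int_area"] = inter.geometry.area

# One A3 landscape PDF per parcel (420 x 297 mm, roughly 16.54 x 11.69 in)
for idx, parcel in parcels.iterrows():
    fig, ax = plt.subplots(figsize=(16.54, 11.69))
    gpd.GeoSeries([parcel.geometry]).plot(ax=ax, facecolor="none", edgecolor="black")
    inter[inter.intersects(parcel.geometry)].plot(ax=ax, color="red", alpha=0.5)
    ax.set_title(f"Parcel {idx} - owner: {parcel.get('owner', 'unknown')}")
    ax.set_axis_off()
    fig.savefig(f"parcel_{idx}.pdf", bbox_inches="tight")
    plt.close(fig)
```

If the template is complex (title block, scale bar, legend), driving a QGIS print layout or Atlas from PyQGIS is probably the more maintainable route.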

Create Series of Families On a Selected Path In Revit Using Dynamo

I've been exploring Dynamo for a while now and am quite enjoying its power. I've started work on a project, and I'm wondering if someone would share their expert views on how to create a series of families from one starting point to another. See the following image to understand it visually. I'm sure we can achieve such functionality via Dynamo. I appreciate any help. Thank you.
Here is a discussion of using a Dynamic Model Updater (DMU) in conjunction with the Idling event to achieve a couple of complex synchronisation tasks, including a video of almost exactly what you are asking for: Updater Queues Multi-Transaction Operation for Idling.
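For the placement step itself, one rough approach is a Python Script node in Dynamo that divides the selected path into evenly spaced points and then feeds them into a standard FamilyInstance.ByPoint node. This is only a sketch under the assumption that the path arrives as a single Dynamo curve on IN[0] and the instance count on IN[1].

```python
# Dynamo Python Script node: evenly spaced points along a selected curve.
# IN/OUT are provided by Dynamo; the curve arrives already as a
# Autodesk.DesignScript.Geometry.Curve, so no extra references are needed here.

curve = IN[0]   # the selected path, as a Dynamo Curve
count = IN[1]   # number of family instances to place

# Parameters from 0 to 1 along the curve, one per instance
step = 1.0 / max(count - 1, 1)
points = [curve.PointAtParameter(i * step) for i in range(count)]

# Wire OUT into a FamilyInstance.ByPoint node together with the family type
OUT = points
```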

How to create a CNN model for image recognition with Tensorflow to compare with Inception v3

I'm studying image recognition with Tensorflow. I have already read the topic How to retrain Inception's layer for new categories on Tensorflow.org, which uses the Inception v3 model.
Now I want to create my own CNN model in order to compare it with Inception v3, but I don't know how to begin.
Does anyone know of any step-by-step guides for this problem?
I'd appreciate any suggestions.
Thanks in advance
First baby steps
The gold standard for getting started in image recognition is processing MNIST images. Tensorflow has a great tutorial on how to get started and also how to move to convolutional networks.
From there it is a long hard road to compete with Inception without just copying someone else's graph. You'll probably want to get a feel for what the different layers of convolution do. I created a basic Tensorflow Tutorial which contains an example python file that demos different convolution graphs and their resulting accuracy.
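To make "getting a feel for the layers" concrete, here is a small TF2/Keras CNN on MNIST. It is only a baseline sketch to experiment with (the layer sizes, dropout and epoch count are arbitrary choices), not something that competes with Inception.

```python
import tensorflow as tf

# Load and normalise MNIST (28x28 grayscale digits, 10 classes)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Two conv/pool blocks followed by a small classifier head
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```

Swapping the convolution sizes, adding or removing blocks, and watching the validation accuracy move is exactly the kind of experiment meant above.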
Going deeper
After conquering MNIST you'll need a lot of images (you can get them from ImageNet), a lot of GPU time (to run all your training), and a software setup so that you can run and test not only your model but dozens (if not hundreds) of variations to explore your hyperparameters (like learning rate, convolution size, dropout, etc.). Remember, it took a team of leading-edge machine-learning experts many months (possibly years) of iteration and thousands of CPU/GPU hours to create something like Inception and arrive at the model they use today.
If you are trying to understand what is going on and what makes a good graph, then trying to recreate Inception is a great idea. If you just want an excellent Image recognition model, then reuse an existing one.
If you are trying to have fun, just do it!
Cheers-

GraphHopper Dynamic Routing

I know the question of GH dynamic edge weights has been raised in various forums, but I still find myself lost on how to implement this. I have seen options such as changing the edge weights and recalculating the contracted graph, or disabling contraction hierarchies altogether. Could someone please explain this from a beginner's point of view: where do I begin, what are the available options and the drawbacks of each, and which packages and classes in the library are used to achieve this? Thanks.