saliency map for multi-class multi-label classification? - deep-learning

I have found ways to do CAM/saliency maps for multi-class classification, but not for multi-label multi-class. Do you know of any resources I can use so I don't reinvent the wheel, or do you have advice for implementing it?
My specific use case is a transfer-learned ResNet that outputs a binary 1x11 vector. Each entry corresponds to the presence of a certain feature in the input image. I want to get a saliency map for each feature, so I can see what the network was looking at when deciding whether each image has each of those features.
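For multi-label outputs the usual trick is to backpropagate one output logit at a time: the gradient of label k's logit with respect to the input pixels is that label's saliency map. Below is a minimal PyTorch sketch; the tiny stand-in network and the 32x32 input are assumptions for illustration, but the `saliency_for_label` function applies unchanged to a transfer-learned ResNet with an 11-unit head.

```python
import torch
import torch.nn as nn

# Hypothetical small conv net standing in for the ResNet:
# 11 output logits, one per feature (multi-label).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 11),
)
model.eval()

def saliency_for_label(model, image, label_idx):
    """Gradient of one output logit w.r.t. the input pixels."""
    image = image.clone().requires_grad_(True)
    logits = model(image.unsqueeze(0))          # shape (1, 11)
    logits[0, label_idx].backward()             # backprop only this label's logit
    return image.grad.abs().max(dim=0).values   # (H, W) saliency map

img = torch.rand(3, 32, 32)
maps = [saliency_for_label(model, img, k) for k in range(11)]
```

Calling the function once per label index gives 11 maps for one image. Grad-CAM works the same way, except the gradient is taken with respect to the last convolutional feature map instead of the input pixels.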

Related

How to get information about routes between elements using Viewer API

I'm trying to create a web application using the Autodesk Forge API, and I want it to output information about routes between elements. In brief, the application should output the shortest path between two elements when they are selected.
I tried using getIsolated() and isolate(), but I don't understand what the "isolated" state is, so I haven't been able to make progress.
Could you please tell me how to solve this? I'm sorry for my bad English.
Isolation is a way to highlight certain objects in the scene by making other objects "ghosted" (semitransparent) or hidden completely. If you want to actually select objects, use the .select() and .getSelection() methods.
Unfortunately, the Forge Viewer does not provide any path planning. It can give you the bounding boxes of all objects in the scene (here's an example), but you would have to find the path among them yourself.
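As a rough illustration of the "find the path among them yourself" step, one simple approach is to rasterize the bounding boxes onto a grid and run a breadth-first search between the two selected elements. Everything below is an assumption for the sketch (2D instead of 3D, integer grid resolution, boxes as `(xmin, ymin, xmax, ymax)` tuples), not part of the Viewer API:

```python
from collections import deque

def shortest_path(width, height, boxes, start, goal):
    """BFS shortest path on a 4-connected grid.
    `boxes` are obstacle rectangles (xmin, ymin, xmax, ymax),
    e.g. derived from the viewer's bounding boxes."""
    blocked = {(x, y) for (x0, y0, x1, y1) in boxes
               for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)}
    if start in blocked or goal in blocked:
        return None
    prev = {start: None}               # doubles as the visited set
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:                # reconstruct path by walking back
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in blocked and nxt not in prev):
                prev[nxt] = cur
                queue.append(nxt)
    return None                        # goal unreachable

# A wall at x=3 covering y=0..8 forces a detour over the top.
path = shortest_path(10, 10, [(3, 0, 3, 8)], (0, 0), (9, 0))
```

For a real model you would do this in 3D (or use A* with a distance heuristic), but the structure is the same.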

Mask R-CNN annotation tool

I'm new to deep learning. I was reading some state-of-the-art papers and found that Mask R-CNN is widely used for segmentation and classification of images. I would like to apply it to my MSc project, but I have some questions that you may be able to answer. I apologize if this isn't the right place to ask them.
First, I would like to know the best strategies for getting the annotations. It seems labor-intensive, and I'm not sure whether there is an easier way. Following that, I would like to know of any annotation tool for Mask R-CNN that generates the binary masks drawn manually by the user.
I hope this turns into a productive and informative thread, so any suggestion or experience would be highly appreciated.
Regards
You can use Mask R-CNN; I recommend it. It is a two-stage framework: the first stage scans the image and generates region proposals that are likely to contain an object, and the second stage classifies the proposals and produces bounding boxes and masks.
But there are two big questions: how do you train a model from scratch, and what happens when you want to train on your own dataset?
You can use annotations downloaded from the internet, or you can start creating your own annotations, which takes a lot of time!
You have tools like:
VIA (VGG Image Annotator)
http://www.robots.ox.ac.uk/~vgg/software/via/via_demo.html
It runs online, so you don't have to download any program. It is the one I recommend: it saves the annotations in a .json file, so you can reuse the balloon dataset class that comes by default in the samples folder of the Mask R-CNN framework; you would only have to supply your JSON file and your images to train on your dataset.
But there are always more options. There is LabelImg, which is also widely used for annotation but saves the files as XML, so you will have to make a few changes to your dataset class in Python. There are also labelme, Labelbox, etc.
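For reference, the balloon sample converts the VIA polygon regions into binary masks (with skimage.draw.polygon) inside its dataset class. Here is a dependency-light sketch of the same idea; the VIA record layout (`regions` / `shape_attributes` / `all_points_x` / `all_points_y`) matches the demo's JSON export, and a plain even-odd point-in-polygon test stands in for skimage:

```python
import numpy as np

def via_regions_to_mask(via_record, height, width):
    """Rasterize VIA polygon regions into a (H, W, num_regions) boolean
    mask, one channel per annotated instance."""
    regions = via_record["regions"]
    mask = np.zeros((height, width, len(regions)), dtype=bool)
    ys, xs = np.mgrid[0:height, 0:width]          # pixel coordinate grids
    for i, region in enumerate(regions):
        px = np.array(region["shape_attributes"]["all_points_x"], dtype=float)
        py = np.array(region["shape_attributes"]["all_points_y"], dtype=float)
        inside = np.zeros((height, width), dtype=bool)
        j = len(px) - 1
        for k in range(len(px)):                  # even-odd crossing test
            crosses = (py[k] > ys) != (py[j] > ys)
            x_int = (px[j] - px[k]) * (ys - py[k]) / (py[j] - py[k] + 1e-12) + px[k]
            inside ^= crosses & (xs < x_int)
            j = k
        mask[..., i] = inside
    return mask
```

In practice you would `json.load` the file VIA exports and call this once per image record; Mask R-CNN's `load_mask` expects exactly such a per-instance boolean stack plus a class-id array.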

In Keras (deep learning library), is such a custom embedding layer possible?

I recently moved to Keras from Theano and Lasagne.
In Theano, I used a custom embedding layer as described here:
How to keep the weight value to zero in a particular location using theano or lasagne?
It was useful when dealing with variable-length input by adding padding.
Is such a custom embedding layer possible in Keras? If so, how can I make it?
Or is such an embedding layer perhaps the wrong approach?
This may not be exactly what you want, but the solution I personally use, as in the Keras examples (e.g. this one), is to pad the data to a constant length before feeding it to the network.
Keras itself provides this pre-processing tool for sequences: keras.preprocessing.sequence.pad_sequences(sequences, maxlen=length).
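To make the behavior concrete, here is a minimal pure-NumPy reimplementation of what pad_sequences does with its default settings (padding="pre", truncating="pre"); in practice you would just call the Keras function itself:

```python
import numpy as np

def pad_sequences(seqs, maxlen, value=0):
    """Minimal sketch of keras.preprocessing.sequence.pad_sequences
    with its defaults: pad on the left, truncate from the left."""
    out = np.full((len(seqs), maxlen), value, dtype=int)
    for i, seq in enumerate(seqs):
        trunc = seq[-maxlen:]                   # keep the last maxlen items
        out[i, maxlen - len(trunc):] = trunc    # right-align, zero-pad left
    return out

print(pad_sequences([[1, 2, 3], [4, 5]], maxlen=4))
# [[0 1 2 3]
#  [0 0 4 5]]
```

Padding with 0 pairs nicely with Embedding(..., mask_zero=True), which tells downstream layers to skip the padded timesteps.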

Convert and add obj model to Web gl scene - without three.js

I want someone to tell me the steps to follow to convert a .obj model to a JSON object so I can add it to my WebGL scene like this: http://learningwebgl.com/blog/?p=1658
I've tried everything: Python scripts, online converters, etc. Every one has its flaws, and I can't fix them.
I don't use the three.js library.
Why can't you fix them?
There is no simple answer for how. The format for .obj is documented here. Read it, pull the data out you want in a format you design.
There are an infinite number of ways to convert it and an infinite number of ways to store the data. For example:
Maybe you'd like to read out the data and store the vertices in JavaScript arrays.
Maybe you'd like to store them in binary which you download with XHR.
Maybe you'd like to apply lossy compression to them so they download faster.
Maybe you'd like to split the vertices when they reach some limit.
Maybe you'd like to throw away texture coordinates because your app doesn't need them.
Maybe you'd like to read higher-order definitions and tessellate them into triangles.
Maybe you'd like to read only some of the material parameters because you don't support all of them.
Maybe you'd like to split the vertices by materials so you can more easily handle geometries with multiple materials.
Maybe you'd like to reindex them so you can use gl.drawElements, or maybe you'd like to flatten them so you can use gl.drawArrays.
The question you're asking is far too broad.
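That said, once you pick answers to those questions, a minimal converter is only a few dozen lines. The sketch below reads vertices, normals and UVs, unifies the v/vt/vn index triplets, fan-triangulates n-gon faces, and emits the JSON key names used by the model files in that learningwebgl lesson (an assumption on my part). It ignores materials, groups, negative indices and free-form geometry, so treat it as a starting point rather than a finished converter:

```python
import json

def obj_to_json(obj_text):
    """Minimal .obj -> JSON converter: positions, normals, UVs and
    triangulated indices only."""
    positions, normals, uvs = [], [], []
    out_pos, out_norm, out_uv, indices = [], [], [], []
    vert_cache = {}                       # "v/vt/vn" string -> unified index
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            positions.append([float(x) for x in parts[1:4]])
        elif parts[0] == "vt":
            uvs.append([float(x) for x in parts[1:3]])
        elif parts[0] == "vn":
            normals.append([float(x) for x in parts[1:4]])
        elif parts[0] == "f":
            face = []
            for vert in parts[1:]:
                if vert not in vert_cache:
                    ids = (vert.split("/") + ["", ""])[:3]
                    out_pos.extend(positions[int(ids[0]) - 1])  # 1-based
                    if ids[1]:
                        out_uv.extend(uvs[int(ids[1]) - 1])
                    if ids[2]:
                        out_norm.extend(normals[int(ids[2]) - 1])
                    vert_cache[vert] = len(vert_cache)
                face.append(vert_cache[vert])
            for k in range(1, len(face) - 1):   # fan-triangulate n-gons
                indices += [face[0], face[k], face[k + 1]]
    return json.dumps({"vertexPositions": out_pos,
                       "vertexNormals": out_norm,
                       "vertexTextureCoords": out_uv,
                       "indices": indices})
```

The unified index buffer is what lets you use gl.drawElements on the WebGL side; dropping `indices` and duplicating vertices instead would give you the gl.drawArrays layout.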

How to aggregate points with same value into polygons from a shapefile using GDAL or any other opensource solution

I have a shapefile with around 19,000 points; it's basically an export from a raster. Now I need to extract polygons by aggregating the points which have the same value. The field whose value I am going to use for aggregation is dynamically calculated each time from the elevation of the points. How can I do that using GDAL? Is there a utility for it? Any other open-source solutions are welcome.
I have ArcGIS, which has a tool called 'Aggregate Points', but somehow the licence for it is missing.
Here are some possibilities:
You can write a program using GDAL (actually OGR) in C++ or Python (or any other language for which GDAL/OGR provides bindings) and construct Polygon objects from selections (sub-sets) of your points. Then you can serialise those polygons into a Shapefile or any other storage supported by OGR.
Alternatively, forget about GDAL/OGR and load your data into a PostgreSQL database enabled with PostGIS. Then use the PostGIS functionality to construct the polygons.
There is an example of polygon construction from points, based on brute-force string manipulation and use of a geometry constructor, posted in the postgis-users thread Making a Polygon from Points.
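As a pure-Python illustration of the first approach (group points by the computed value, build one polygon per group), here is a sketch that emits a convex hull per value as WKT; ogr.CreateGeometryFromWkt can then turn each string into an OGR geometry for writing to a shapefile. Using the convex hull is my own assumption about how to aggregate each group, and groups with fewer than three points produce degenerate polygons:

```python
from collections import defaultdict

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygons_by_value(points):
    """Group (x, y, value) points by value and return one WKT POLYGON
    (the group's convex hull) per distinct value."""
    groups = defaultdict(list)
    for x, y, v in points:
        groups[v].append((x, y))
    wkts = {}
    for v, pts in groups.items():
        hull = convex_hull(pts)
        ring = hull + hull[:1]                    # close the ring
        coords = ", ".join(f"{x} {y}" for x, y in ring)
        wkts[v] = f"POLYGON (({coords}))"
    return wkts
```

In a real OGR script you would read the points with ogr.Open on the shapefile, compute the aggregation field, and write each hull to a polygon layer with the value as an attribute.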