I'd like to process different types of data separately first and then fuse them in a common layer. Is this possible in Caffe, and if so, what would be the best way to do it?
I've read that one can define several data layers in the same prototxt file. But how do I fuse them?
Can I just create an InnerProduct layer and specify several bottom blobs? Or do I have to concatenate the individual blobs first using a Concat layer?
For any small code example I would be very thankful!
As discussed in the comments above, InnerProduct works with a single input. The fusion (concatenation) can therefore be done beforehand by a dedicated Concat layer, with a configuration like this:
layer {
  name: "concat"
  bottom: "in1"
  bottom: "in2"
  top: "out"
  type: "Concat"
  concat_param {
    axis: 1
  }
}
The official documentation has more details about that layer: http://caffe.berkeleyvision.org/tutorial/layers.html
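To make the whole pipeline concrete, here is a minimal sketch of two branches fused by Concat and then processed by a single InnerProduct layer. All layer names, blob names, and num_output values are placeholders, and the two data layers are assumed to produce blobs named "data_a" and "data_b":

layer { name: "ip_a" type: "InnerProduct" bottom: "data_a" top: "in1"
        inner_product_param { num_output: 64 } }
layer { name: "ip_b" type: "InnerProduct" bottom: "data_b" top: "in2"
        inner_product_param { num_output: 64 } }
# fuse the two branches along the channel axis
layer { name: "concat" type: "Concat" bottom: "in1" bottom: "in2" top: "out"
        concat_param { axis: 1 } }
# a single InnerProduct layer then operates on the fused blob
layer { name: "ip_fused" type: "InnerProduct" bottom: "out" top: "fused"
        inner_product_param { num_output: 10 } }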
I have a couple of questions about combined models in the Forge Viewer (loading a list of URNs into one viewer):
1. When I combine models, I can only get data from the one main model in the combination. For instance:
var instanceTree = GlobalViewer.model.getData().instanceTree;
var allDbIdsStr = Object.keys(instanceTree.nodeAccess.dbIdToIndex);
var list = allDbIdsStr.map(function (id) { return parseInt(id); });
list returns all dbIds of the main model; how can I access the data of all models in the combination?
2. What is the unique ID for an object in a combined model? I do some work based on dbIds, and I realized the same dbId can appear in other models too.
3. When I combine a 3D model (Revit) with a 2D model (AutoCAD), there are two cases: if the 3D model loads first I can rotate as normal, but if the 2D model loads first I cannot rotate the model any more. How can I ensure the model can always be rotated?
4. The AutoCAD units seem different from the model in the viewer; it is always scaled down compared to the other model. How can I fix that?
I'd appreciate any comments.
Regarding #1: viewer.model obviously only references one of the models (I believe it's the last one you loaded), but you can use viewer.getVisibleModels() or viewer.getHiddenModels() to get other loaded models as well.
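For illustration, a sketch of how question #1 could be handled with that API, reusing the snippet from the question (GlobalViewer is the viewer instance from the question; the other calls are standard viewer APIs):

GlobalViewer.getVisibleModels().forEach(function (model) {
  var instanceTree = model.getData().instanceTree;
  var dbIds = Object.keys(instanceTree.nodeAccess.dbIdToIndex)
    .map(function (id) { return parseInt(id, 10); });
  console.log('model #' + model.id + ' contains ' + dbIds.length + ' objects');
});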
Regarding #2: dbIDs are only unique within a single model; many of the viewer methods accept an additional parameter specifying the model on which to operate, for example, you could say viewer.select([123, 456], oneOfMyModels).
Regarding #3: that's a good question; loading a 2D model first puts the viewer into 2D viewing mode (where only zoom and pan are allowed); if you know you will be working with 3D models, I'd recommend always loading those first.
Regarding #4: yes, each loaded model can have different units; when loading a model using the loadDocumentNode method you can specify additional options (for example, a placement transform for the loaded geometries), and one of them is an object called applyScaling, for example, like so:
viewer.loadDocumentNode(doc, viewable, {
applyScaling: { to: 'mm' }
});
I have a term project that needs to use data stored in MySQL to train a classification model, using TensorFlow or anything else.
I've tried to use the examples from https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/keras/feature_columns.ipynb, and it took me a lot of time to export the data to a CSV file and modify the Python script. Since I need to run a lot of experiments, is there perhaps a simpler tool for training and experimenting on my MySQL dataset?
Maybe SQLFlow can meet your needs; I tried to build an SQLFlow script for the dataset you described, and it would look like this:
SELECT *
FROM Heart_Disease
TRAIN DNNClassifier /* a pre-defined TensorFlow estimator, tf.estimator.DNNClassifier */
WITH n_classes = 3, hidden_units = [10, 20] /* parameters of the estimator's constructor */
COLUMN Age, Sex, CP, FBS .. /* from the raw data, pick the columns that you think will help predict the target */
LABEL Target /* label column */
INTO Heart_Disease.test_model; /* the trained model is saved to the specified table */
It is also very easy to apply this model:
SELECT *
FROM Heart_Disease.predict
PREDICT Heart_Disease.predict_result.Target
USING Heart_Disease.test_model;
The Target column of Heart_Disease.predict is empty; the predicted Target values are saved to the Target column of the Heart_Disease.predict_result table.
FYI: https://github.com/sql-machine-learning/sqlflow/blob/develop/doc/demo.md
This is my first answer. I hope it helps.
What I think you can do: if the data is not realtime and not being updated, get a dump of the data from MySQL and then use that dump for the rest.
Or you can create a MySQL connection and feed that connection into pandas' read_sql function to get a DataFrame. A way to do that is sketched below.
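A minimal sketch of the read_sql route (the connection string, credentials, and the Heart_Disease table name are placeholders; SQLAlchemy and a MySQL driver such as pymysql are assumed to be installed):

import pandas as pd
from sqlalchemy import create_engine

# placeholder credentials and database; adjust to your setup
engine = create_engine("mysql+pymysql://user:password@localhost/mydb")

# pull the whole table into a DataFrame for training
df = pd.read_sql("SELECT * FROM Heart_Disease", engine)
print(df.head())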
Also, if you're new to TensorFlow, you should look at TensorFlow's Estimator API, which should do your work. Apart from that, you can use TensorFlow's Keras API, which also eases the work of building a neural network.
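As an illustration, a minimal Keras sketch under the same assumptions as above (it reuses the hypothetical df DataFrame from the previous sketch, and the Target column with 3 classes comes from the SQLFlow example; layer sizes are placeholders):

import tensorflow as tf

# df is the DataFrame loaded in the previous sketch;
# all feature columns are assumed to be numeric
features = df.drop(columns=["Target"]).values.astype("float32")
labels = df["Target"].values

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # 3 classes, as in the SQLFlow example
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(features, labels, epochs=10)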
I am currently converting a shapefile into a GML file for an online map viewer. This application requires the geometry to be in a separate attribute and to consist of MultiCurve features.
Using the GeometryExtractor, I get the following:
<rrgs:geometrie>
  <gml:LineString gml:id="id-9f7691bb-868d-457e-9061-aceb37980a59-0" srsName="EPSG:28992" srsDimension="2">
    <gml:posList>260471.21250000037 591380.1363999993 260457.43054999973 591385.7507499998</gml:posList>
  </gml:LineString>
</rrgs:geometrie>
However, the application for uploading to the online map viewer requires the geometry as follows:
<rrgs:geometrie>
  <gml:MultiCurve gml:id="…" srsName="EPSG:28992" srsDimension="2">
    <gml:curveMember>
      <gml:LineString gml:id="id-9f7691bb-868d-457e-9061-aceb37980a59-0">
        <gml:posList>260471.21250000037 591380.1363999993 260457.43054999973 591385.7507499998</gml:posList>
      </gml:LineString>
    </gml:curveMember>
  </gml:MultiCurve>
</rrgs:geometrie>
Would it be possible in FME to convert LineString features into MultiCurve features?
Thanks in advance!
I would try just setting an Aggregator before writing so all geometries are multi-geometries.
You would need to use an attribute with a unique value in the Group By parameter so the different features aren't grouped together. If there's none, try the UUIDGenerator.
The reason I use Boost.js is to export CSV files with big data as well as to improve performance.
I have a problem when using boost with more than 9 series: the bar chart displays incorrectly. Therefore I tried the workaround of increasing the threshold to deactivate boost. I am also concerned about keeping the CSV export possible.
Do we have any official update about this problem from the Highcharts Team?
First of all, boost is not available for bar charts (https://github.com/highcharts/highcharts/issues/6602). The solution to this inconvenience is to set the chart type to 'column' and invert it.
chart: {
  type: 'column',
  inverted: true
}
API Reference:
http://api.highcharts.com/highcharts/chart.type
http://api.highcharts.com/highcharts/chart.inverted
Example:
http://jsfiddle.net/obx1pbkw/
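For illustration, a sketch of a complete configuration combining the inverted-column workaround with a raised boost threshold (the container ID, the data, and the seriesThreshold value are placeholders; boost.seriesThreshold controls how many series are needed before boost kicks in for the whole chart):

Highcharts.chart('container', {
  chart: {
    type: 'column',  // boost does not support 'bar', so use columns...
    inverted: true   // ...and invert the chart so it still reads as a bar chart
  },
  boost: {
    seriesThreshold: 10  // boost only activates once there are at least this many series
  },
  series: [
    { data: [1, 2, 3] },
    { data: [4, 5, 6] }
  ]
});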
In the given MNIST example in the Caffe installation: for any given test image, how do I get the softmax scores for each category and do some processing on them, say compute their mean and variance?
I am a newbie, so details would help me a lot. I am able to train the model and use the testing feature to get predictions, but I am not sure which files have to be edited in order to get the above results.
You can use the Python interface:
import caffe
net = caffe.Net('/path/to/deploy.prototxt', '/path/to/weights.caffemodel', caffe.TEST)
in_ = read_data(...)  # this is up to you: read a sample and convert it to a numpy array
out_ = net.forward(data=in_)  # assuming your net expects a blob named "data"
Now you have the output of your net in the dictionary out_ (the keys are the names of the output blobs). You can run this in a loop over several examples, etc.
I can try to answer your question. Assuming that in your deploy net the softmax layer looks like below:
layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc6"
  top: "prob"
}
In your Python code that processes the data, building on the snippet @Shai provided, you can get the probability of each category by adding:
predicted_prob = net.blobs['prob'].data
predicted_prob will be an array that contains the probabilities of all categories.
For example, if you only have two categories, predicted_prob[0][0] will be the probability that this testing data belongs to one category and predicted_prob[0][1] will be the probability of the other one.
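To do the processing mentioned in the question (mean and variance of the scores), a minimal sketch building on the snippets above; it assumes a single test image in the batch and the "prob" output blob defined earlier:

import numpy as np

probs = net.blobs['prob'].data[0]  # softmax scores for the first (only) image in the batch
print('mean:', np.mean(probs))
print('variance:', np.var(probs))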
PS:
If you don't want to write any additional Python script: according to https://github.com/BVLC/caffe/tree/master/examples/mnist,
this example will automatically do the testing every 500 iterations. The "500" is defined in the solver, e.g. https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_solver.prototxt
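For reference, the relevant settings in that solver file look like this (values taken from the linked lenet_solver.prototxt):

test_iter: 100      # how many forward passes one test pass carries out
test_interval: 500  # run a test pass every 500 training iterations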
So you would need to trace through the Caffe source code that processes the solver file; I guess it should be https://github.com/BVLC/caffe/blob/master/src/caffe/solver.cpp
I am not sure solver.cpp is the correct file to look at, but in this file you can see it has functions for testing and for calculating some values. I hope it gives you some ideas if no one else can answer your question.