generate .pb files from a Caffe-trained model - caffe

I want to run a trained Caffe model on Android. I was planning to use the AICamera example and then modify it to run my model. I was able to compile and build the project.
Currently I have the Caffe model definition as a .prototxt file and the pretrained model as a .caffemodel file, but AICamera reads its model from a squeeze_init_net.pb file and a squeeze_predict_net.pb file. So how can I convert the files I have to .pb files?

The AICamera sample that you linked is for Caffe2, which is not backward compatible with Caffe.
The Caffe2 Model Zoo page talks about it under Compatibility:
Caffe2 utilizes a newer format, usually found in the protobuf .pb file format, so original .caffemodel files will require conversion.
On the same page, they link to the Migration page, where they explain how to convert an older .caffemodel to the .pb file format.
Basically, they provide a Python script to convert your old format to the newer one; there is also a test script.
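If I remember correctly, that converter ships as a module you can run directly; a rough sketch (the module path and output names are what I recall from the Migration page, so double-check them against your Caffe2 version):

    python -m caffe2.python.caffe_translator deploy.prototxt pretrained.caffemodel

Here deploy.prototxt and pretrained.caffemodel stand in for your own files, and the script should write an init_net.pb and a predict_net.pb next to them - the same kind of init/predict net pair that AICamera loads.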
If you want to run your .caffemodel, there are two ways (that I know of):
Convert it to the newer .pb file format (as shown above)
Load your .caffemodel with the help of OpenCV's dnn module (see the sketch below)
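For the OpenCV route, a minimal sketch using the dnn module (the file names and input size are placeholders for your own model):

    import cv2
    import numpy as np

    # Load the original Caffe files directly; no conversion to .pb is needed
    net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "pretrained.caffemodel")

    # Dummy 224x224 BGR image just to demonstrate a forward pass
    image = np.zeros((224, 224, 3), dtype=np.uint8)
    blob = cv2.dnn.blobFromImage(image, scalefactor=1.0, size=(224, 224))
    net.setInput(blob)
    output = net.forward()

Note that on Android this means depending on OpenCV rather than Caffe2, so it only helps if you are not tied to the AICamera codebase.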

Related

Python - convert JSON to images (.jpg)

I filtered the Microsoft COCO dataset with filter.py from here, which generates a filtered.json file, and I'm wondering whether there is a way to convert the JSON back to images (.jpg)?
Actually, I'm doing a Mask R-CNN project to perform instance segmentation, and I don't really know how to deal with the training data I filtered.
pip install fiftyone and use this tool's App - it might be helpful.
reference
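A rough sketch of how that could look - the COCO JSON only holds annotations and references to the image files, so you point FiftyOne at both the images and the filtered JSON; the keyword names below are from recent FiftyOne versions and may differ in older ones:

    import fiftyone as fo

    # Load the filtered COCO annotations together with the original images
    dataset = fo.Dataset.from_dir(
        dataset_type=fo.types.COCODetectionDataset,
        data_path="/path/to/coco/images",      # the original .jpg files
        labels_path="/path/to/filtered.json",  # output of filter.py
    )

    # Browse the images and their instance annotations in the App
    session = fo.launch_app(dataset)
    session.wait()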

Is there a convenient method to only save model architecture information in Pytorch to a protobuf ruled file?

I know how to use torch.save to save both the weights and the net at the same time as a dictionary-structured object. But if I'd like to save the architecture alone to a separate file - like Caffe's prototxt, which lets you train from an initial state - is that possible? The file may be used somewhere else. Can ONNX do something like that?
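ONNX can get you fairly close: torch.onnx.export traces the model and writes a protobuf (.onnx) file describing the graph, and with export_params=False the weights are left out, which is roughly the architecture-only file you describe. A minimal sketch (the model and input shape are placeholders):

    import torch
    import torchvision

    model = torchvision.models.resnet18()  # placeholder model
    model.eval()

    # Example input used only to trace the graph; match your model's input shape
    dummy_input = torch.randn(1, 3, 224, 224)

    # export_params=False writes the graph structure without the trained weights
    torch.onnx.export(model, dummy_input, "model_arch.onnx", export_params=False)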

convert multiple files from .LWO to .OBJ or similar

I need to convert many files from .lwo format to .obj or .stl. I have too many to convert "by hand", meaning I don't want to use online tools or import/export the files one by one in Blender or similar.
So I'm trying to do this with a program that would load each file, convert it, then save a new .stl. The files are numbered "file000001", "file000002", etc., to make importing easier.
Is there any program out there that will do this? If not, how would I go about accomplishing my goal?
As far as languages go, I am most effective with Processing/Java. I found this, which might be similar but doesn't relate to LWOs.
Thanks for any help.
I just found assimp, which has a command-line tool to convert between different file types. Thanks to everyone who answered!
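For the batch part, something along these lines could drive the assimp command-line tool over the numbered files (a sketch assuming the assimp binary is on your PATH; the export subcommand syntax may differ between assimp versions):

    import glob
    import subprocess

    # Convert every numbered .lwo file in the current directory to .obj
    for lwo in sorted(glob.glob("file*.lwo")):
        obj = lwo.rsplit(".", 1)[0] + ".obj"
        subprocess.run(["assimp", "export", lwo, obj], check=True)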
I'm sure you can find a few editors that import .lwo and export .obj.
For example, Wings3D does that and is free/open-source/lightweight.
Wings is scriptable using Erlang.
Blender has an LWO importer too, but it's not enabled by default; you need to go to Preferences > Addons and enable it there.
Blender has a Python API which should be easy to pick up.
This would allow you to write a script that does a batch conversion (reads a directory, traverses the files, imports each .lwo, transforms it (scales/rotates if needed), and exports an .obj) - see the sketch below.
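A minimal sketch of such a batch script, run headless with something like blender --background --python batch_lwo_to_obj.py; the operator names are assumptions (import_scene.lwo comes from the LWO add-on, and the OBJ export operator has changed across Blender versions), so check them in your install:

    import glob
    import bpy

    for lwo in sorted(glob.glob("/path/to/file*.lwo")):
        # Start from an empty scene so objects don't pile up between files
        bpy.ops.wm.read_factory_settings(use_empty=True)

        # Import the LightWave object (operator provided by the LWO add-on)
        bpy.ops.import_scene.lwo(filepath=lwo)

        # Export everything in the scene as a Wavefront OBJ
        bpy.ops.export_scene.obj(filepath=lwo.rsplit(".", 1)[0] + ".obj")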
If you search around, there may well be a 3D file format batch converter already out there; .lwo and .obj are old enough formats that they are likely to be supported.
If you want to implement something from scratch, you need to look into each file format (e.g. LightWave object, OBJ) to be able to parse and export them.
Hopefully there's a Java library that does this for you. I'd start with a 3D Java game engine; for example, here's a Java .LWO importer found via jMonkeyEngine.

Loading a JSON file into the Titan graph database

I have been given a task to load a JSON file into TitanDB with DynamoDB as the back end. Is there any Java tutorial, or, if possible, could you please share some sample Java code...
Thanks.
Titan is an abstraction layer, so whether you use Cassandra, DynamoDB, HBase, etc., you merely need to find Titan data-loading instructions. They are a bit dated, but you might want to start with these blog posts:
http://thinkaurelius.com/2014/05/29/powers-of-ten-part-i/
http://thinkaurelius.com/2014/06/02/powers-of-ten-part-ii/
The code examples work with an older version of Titan (the schema portion), but the concepts still apply.
You will find that the strategy for data loading with Titan has a lot to do with the size of your graph. You said you are loading "a JSON file", so I imagine you have a smaller graph in the millions of edges. In this case, a simple Groovy script will likely suffice: write a script that parses your JSON and writes the data to Titan.

Reading an HDF5 file written in Java from Octave

I'm writing a framework to write HDF5 files that are compatible with Octave.
That is, I want my framework to be able to read HDF5 files written by Octave, and Octave to be able to read HDF5 files written by my framework.
I'm using HDF-Java to read and write HDF5 files.
The problem is that Octave cannot read the HDF5 files that I write in Java.
When I try to read such a file, I get an error:
d=load('check.h5')
error: value on right hand side of assignment is undefined
From the documentation for load in Octave-Forge:
HDF5 load and save are not available, as this Octave executable was not linked with the HDF5 library.
Is this the problem you are trying to solve with your framework? Or is it the problem that is preventing you from implementing your framework?
That is not the problem. If I create an HDF5 file that contains only datasets, the load works.
(The -hdf5 parameter is not mandatory; Octave can recognize the file type - I tried it.)
The problem is that I cannot use only datasets, because my framework requires the use of groups (for example, for a cell array of matrices I must use groups, as Octave does).
If I'm using groups then the problems start - loading a file that contains groups fails.