I have two or more JSON result files in the standard COCO results format, each produced by a different instance segmentation model. How can I use these JSON files to ensemble the outputs of the models?
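One possible way to combine them (a rough sketch, not a definitive recipe) is to pool all detections and apply a greedy, mask-IoU-based fusion per image and category. This assumes the files are plain COCO result lists (dicts with image_id, category_id, score, segmentation) whose segmentations are RLE-encoded so pycocotools can compute mask IoU; the file names below are placeholders.

    import json
    from collections import defaultdict

    from pycocotools import mask as mask_utils


    def load_results(paths):
        """Concatenate the detection lists from several COCO-format result files."""
        detections = []
        for path in paths:
            with open(path) as f:
                detections.extend(json.load(f))
        return detections


    def ensemble(paths, iou_threshold=0.5):
        detections = load_results(paths)

        # Group detections by image and category so models only compete on the same class.
        groups = defaultdict(list)
        for det in detections:
            groups[(det["image_id"], det["category_id"])].append(det)

        fused = []
        for dets in groups.values():
            dets.sort(key=lambda d: d["score"], reverse=True)
            kept = []
            for det in dets:
                if kept:
                    # Mask IoU against every detection kept so far (assumes RLE segmentations).
                    ious = mask_utils.iou([det["segmentation"]],
                                          [k["segmentation"] for k in kept],
                                          [0] * len(kept))
                    if ious.max() >= iou_threshold:
                        continue  # duplicate of a higher-scoring detection, drop it
                kept.append(det)
            fused.extend(kept)
        return fused


    if __name__ == "__main__":
        merged = ensemble(["model_a_results.json", "model_b_results.json"])
        with open("ensembled_results.json", "w") as f:
            json.dump(merged, f)

A variant is to average the scores of overlapping detections instead of discarding the lower-scoring ones; the grouping and IoU machinery stays the same.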
Related
I have an ONNX model and I want to debug the intermediate layer inputs and outputs of the model. How can this be done?
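A common trick is to expose the intermediate tensors as extra graph outputs and re-run the model with onnxruntime. A rough sketch, assuming the intermediate tensors are float-typed and using placeholder model and input names:

    import numpy as np
    import onnx
    import onnxruntime as ort

    model = onnx.load("model.onnx")  # placeholder path

    existing_outputs = {o.name for o in model.graph.output}
    for node in model.graph.node:
        for name in node.output:
            if name and name not in existing_outputs:
                # Expose the intermediate tensor as a graph output.
                # Type/shape are left loose here; adjust if a tensor is not float.
                model.graph.output.append(
                    onnx.helper.make_tensor_value_info(name, onnx.TensorProto.FLOAT, None))
    onnx.save(model, "model_debug.onnx")

    session = ort.InferenceSession("model_debug.onnx")
    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder shape
    results = session.run(None, {session.get_inputs()[0].name: dummy_input})

    # Every intermediate tensor is now returned alongside the original outputs.
    for out, value in zip(session.get_outputs(), results):
        print(out.name, value.shape)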
I want to use Flux to train a deep learning model on audio files. In the Flux documentation, the whole data array (with all examples) is passed to a DataLoader that feeds the train!() function a list of batches. The problem is that I do not have enough memory in my system to load all of the audio files at once.
In PyTorch, the DataLoader would be fed by a Dataset object that has the logic to open one file at a time in its __getitem__() method.
So, what is the right way to implement this in Flux/Julia? What is the equivalent of a Torch Dataset?
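For reference, here is roughly the PyTorch pattern I mean — a minimal sketch only, where the directory name, the use of torchaudio, and the placeholder labels are illustrative assumptions:

    import os

    import torchaudio
    from torch.utils.data import Dataset, DataLoader


    class LazyAudioDataset(Dataset):
        """Keeps only file paths in memory; each audio file is read on demand."""

        def __init__(self, audio_dir, labels):
            self.paths = sorted(os.path.join(audio_dir, f) for f in os.listdir(audio_dir))
            self.labels = labels  # one label per file, same order as self.paths

        def __len__(self):
            return len(self.paths)

        def __getitem__(self, idx):
            # Only the requested file is loaded here, one at a time.
            waveform, sample_rate = torchaudio.load(self.paths[idx])
            return waveform, self.labels[idx]


    # The DataLoader batches these on-demand samples for the training loop
    # (assumes fixed-length clips, or a custom collate_fn for variable lengths).
    dataset = LazyAudioDataset("audio/", labels=[0] * 100)  # placeholder dir and labels
    loader = DataLoader(dataset, batch_size=32, shuffle=True)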
I found this thread on the Julia Discourse forum that covers basically what I am asking in this question:
https://discourse.julialang.org/t/pytorch-dataloader-equivalent-for-training-large-models-with-flux/30763
Among the recommendations in that topic is the package MLDataUtils.jl, which offers similar functionality through the nobs() and getobs() functions.
Amazon SageMaker's Factorization Machines model gives different inference results depending on the input data format: I receive different predictions depending on whether the inference data is sent as JSON or as protobuf.
My JSON input data is sparse.
My protobuf RecordIO input data is also sparse.
I assumed that the Factorization Machines predictions should be the same regardless of the input data format. Is that assumption wrong?
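To rule out a payload-construction mismatch, one check is to build both request bodies from the same sparse row and compare what they encode. A rough sketch, assuming the SDK helper sagemaker.amazon.common.write_spmatrix_to_sparse_tensor for the RecordIO-protobuf side and the keys/shape/values JSON layout for sparse input; the feature indices and dimension below are made up:

    import io
    import json

    import scipy.sparse as sp
    from sagemaker.amazon.common import write_spmatrix_to_sparse_tensor

    feature_dim = 1024
    keys = [26, 58, 565]       # made-up non-zero feature indices
    values = [1.0, 1.0, 1.0]

    # JSON payload (content type application/json) for one sparse record.
    json_payload = json.dumps({
        "instances": [
            {"data": {"features": {"keys": keys, "shape": [feature_dim], "values": values}}}
        ]
    })

    # RecordIO-protobuf payload (content type application/x-recordio-protobuf) for the same record.
    row = sp.csr_matrix((values, ([0] * len(keys), keys)), shape=(1, feature_dim))
    buffer = io.BytesIO()
    write_spmatrix_to_sparse_tensor(buffer, row)
    protobuf_payload = buffer.getvalue()

    # If both payloads really encode the same keys/values/shape, the endpoint's
    # predictions should match; a difference points at how one of them is built.
    print(json_payload)
    print(len(protobuf_payload), "bytes of protobuf")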
I have a pre-trained deep-learning-based OCR model, but in my real applications there are several characters that are not included in the dictionary of my pre-trained model. A small dataset consisting mainly of those new characters is also available. I want the model to learn to recognize the new characters without weakening its ability to recognize characters in the original dictionary.
However, when I append those new characters to the model's classification layer and fine-tune the weights on my dataset, the model performs poorly because of the imbalanced class distribution of my dataset. Is there an effective way to add these new characters to my pre-trained model?
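One common approach is to copy the pre-trained weights into an enlarged classification layer so the old characters keep their learned rows, and to counter the imbalance with a class-weighted loss and/or replay of original-dictionary samples. A minimal PyTorch-style sketch with hypothetical layer sizes (the real model's head and dimensions would differ):

    import torch
    import torch.nn as nn

    # Hypothetical sizes standing in for the real model's classification head.
    hidden_dim, num_old_chars, num_new_chars = 256, 5000, 120

    old_head = nn.Linear(hidden_dim, num_old_chars)                   # the pre-trained head
    new_head = nn.Linear(hidden_dim, num_old_chars + num_new_chars)   # enlarged head

    with torch.no_grad():
        # Rows for the original dictionary keep their pre-trained weights;
        # rows for the new characters keep their fresh random initialization.
        new_head.weight[:num_old_chars] = old_head.weight
        new_head.bias[:num_old_chars] = old_head.bias

    # Weight the loss towards under-represented classes (e.g. inverse class frequency),
    # so the rare classes in the small dataset are not drowned out.
    class_weights = torch.ones(num_old_chars + num_new_chars)  # replace with real frequencies
    criterion = nn.CrossEntropyLoss(weight=class_weights)

    # A lower learning rate on the backbone than on the new head also helps preserve
    # recognition of the original dictionary while the new rows are being learned.

Mixing some original-dictionary samples (real or synthetically rendered) into each fine-tuning batch is the other standard way to keep accuracy on the old characters from degrading.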
I want to learn how to write Caffe Python layers.
However, I can only find examples of very simple layers, like pyloss.
How do I write a Caffe Python layer with trainable parameters?
For example, how would I write a fully connected Python layer?
Caffe stores a layer's trainable parameters as a vector of blobs. By default this vector is empty, and it is up to you to add parameter blobs to it in the layer's setup() method. There is a simple example of a layer with parameters in test_python_layer.py.
See this post for more information about "Python" layers in Caffe.
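For illustration, a fully connected Python layer could look roughly like the sketch below. It is not the example from test_python_layer.py; the param_str format, initialization, and shapes are assumptions. The relevant part is that setup() adds the weight and bias blobs via self.blobs.add_blob() and backward() writes their gradients into the blobs' diff arrays.

    import json

    import numpy as np
    import caffe


    class FullyConnectedLayer(caffe.Layer):
        """A toy inner-product layer whose trainable parameters live in self.blobs."""

        def setup(self, bottom, top):
            # python_param's param_str in the prototxt, e.g. param_str: '{"num_output": 10}'
            self.num_output = json.loads(self.param_str)["num_output"]
            input_dim = bottom[0].data.shape[1]

            # Add the trainable parameter blobs: weights (K x D), then bias (K).
            self.blobs.add_blob(self.num_output, input_dim)
            self.blobs.add_blob(self.num_output)
            self.blobs[0].data[...] = 0.01 * np.random.randn(self.num_output, input_dim)
            self.blobs[1].data[...] = 0

        def reshape(self, bottom, top):
            top[0].reshape(bottom[0].data.shape[0], self.num_output)

        def forward(self, bottom, top):
            # y = x W^T + b
            top[0].data[...] = bottom[0].data.dot(self.blobs[0].data.T) + self.blobs[1].data

        def backward(self, top, propagate_down, bottom):
            # Parameter gradients go into the parameter blobs' diff arrays ...
            self.blobs[0].diff[...] = top[0].diff.T.dot(bottom[0].data)
            self.blobs[1].diff[...] = top[0].diff.sum(axis=0)
            # ... and the input gradient is propagated to the bottom blob.
            if propagate_down[0]:
                bottom[0].diff[...] = top[0].diff.dot(self.blobs[0].data)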