How to convert model test prediction annotations into XML? - json

I trained my lane marking model and tested it on my dataset, where it gives accurate results. I need to convert those test result annotations into XML or JSON. How can I do this using Python?
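The exact fields depend on what your test loop produces, but the standard library covers both targets. A minimal sketch, assuming a hypothetical prediction record with `image`, `label`, and `points` keys (these names are illustrative, not part of any specific model API):

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical prediction record; adapt the keys to whatever your
# test loop actually emits.
prediction = {"image": "frame_0001.png", "label": "lane",
              "points": [[10, 200], [15, 180], [20, 160]]}

# JSON: the standard library handles it directly.
with open("prediction.json", "w") as f:
    json.dump(prediction, f, indent=2)

# XML: build one <annotation> element per prediction.
root = ET.Element("annotation")
ET.SubElement(root, "filename").text = prediction["image"]
obj = ET.SubElement(root, "object")
ET.SubElement(obj, "name").text = prediction["label"]
pts = ET.SubElement(obj, "points")
for x, y in prediction["points"]:
    ET.SubElement(pts, "point", x=str(x), y=str(y))
ET.ElementTree(root).write("prediction.xml")
```

Run this once per predicted image; if you need a specific schema such as Pascal VOC, rename the elements to match it.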

Related

Relation extraction using doccano

I want to do relation extraction using doccano. I have already annotated the entity relations with doccano, and the exported data is in JSONL format. I want to convert it into spaCy-format data so I can train BERT with spaCy on the JSONL-annotated data.
Drop this annotation and re-annotate it with a spaCy NER annotator instead.
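If you would rather keep the existing annotations, the doccano JSONL export can be reshaped into spaCy's classic `(text, {"entities": ...})` training tuples with the standard library alone. A sketch, assuming the older doccano export shape with a `labels` key of `[start, end, label]` triples (newer exports use an `entities` key with `start_offset`/`end_offset` fields, so adjust the field names accordingly):

```python
import json

def doccano_jsonl_to_spacy(jsonl_lines):
    """Convert doccano JSONL export lines into spaCy training tuples
    of the form (text, {"entities": [(start, end, label), ...]}).
    Assumes each record has "text" and "labels" keys."""
    examples = []
    for line in jsonl_lines:
        record = json.loads(line)
        entities = [(start, end, label)
                    for start, end, label in record.get("labels", [])]
        examples.append((record["text"], {"entities": entities}))
    return examples

examples = doccano_jsonl_to_spacy(
    ['{"text": "Alice met Bob", "labels": [[0, 5, "PERSON"]]}'])
```

These tuples can then be fed into a spaCy training loop or converted to a `DocBin` for spaCy v3.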

The result of predicting with the CatBoostClassifier model's exported Python file differs from the result of predicting with the model directly

I want to verify that the results predicted with the exported file are consistent with those predicted directly.
I use the exported Python file containing the CatBoostClassifier model description to predict a result:
But the result predicted directly is 2.175615211102761. I have verified that this holds for multiple data points. I want to know why, and how to solve it.
float_sample and cat_sample look like
Supplementary question: the results predicted with the Python-language model file from the CatBoost tutorial differ from those predicted directly by the model.
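One common cause worth checking: CatBoost's exported Python scorer (`apply_catboost_model`) returns the raw formula value (the log-odds margin), not a probability or class label, while `predict` on a classifier applies a transformation by default. Comparing against `model.predict(..., prediction_type="RawFormulaVal")`, or converting the raw value with a sigmoid, should reconcile the two. A minimal sketch of the conversion for binary classification (the constant below is the value from the question; whether it is the raw margin in your case is an assumption):

```python
import math

def raw_to_probability(raw_value):
    # The exported Python scorer returns the raw formula value
    # (log-odds). Apply the sigmoid to match predict_proba, or
    # compare against predict(..., prediction_type="RawFormulaVal").
    return 1.0 / (1.0 + math.exp(-raw_value))

raw = 2.175615211102761  # value reported in the question
prob = raw_to_probability(raw)
```

If the values still disagree after matching the prediction type, check that the categorical features are passed to the exported scorer as raw strings, since it hashes them internally.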

Create LMDB for new test data

I have an LMDB training data file for the VPGNet CNN model, pre-trained on the Caltech Lanes dataset.
I would like to test it on a new dataset, different from the training dataset. How do I create an LMDB for the new test data?
Do I need to modify the prototxt files for testing with the pre-trained net? For testing, do I need a prototxt file, or is there a specific command?
Thanks
Lightning Memory-Mapped Database (LMDB) files can be processed efficiently as input data.
We create the native format (LMDB) for training and validating the model.
Once the trained model has converged and the loss has been calculated on the training and validation data,
we use separate data (unseen data that was not used for training) for model inference.
If we are running classification inference on a single image or a set of images,
we do not need to convert them to LMDB. Instead, we can just run a forward pass on the stacked topology with the image(s) converted into the desired format (NumPy arrays).
For more info:
https://software.intel.com/en-us/articles/training-and-deploying-deep-learning-networks-with-caffe-optimized-for-intel-architecture
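For the case where you do want an LMDB, the record layout is the important part. A stdlib-only sketch of the key/value convention Caffe-style tools such as `convert_imageset` follow (zero-padded sequential keys, one serialized sample per value); in a real pipeline the value would be a serialized `caffe.proto` Datum and the pairs would be written through the `lmdb` package's write transaction, so `pickle` and the generator below are stand-ins, not the Caffe API:

```python
import pickle

def make_lmdb_records(samples):
    """Yield (key, value) pairs in the layout Caffe-style LMDB
    datasets use: zero-padded sequential keys so iteration order
    matches insertion order, and one serialized (image, label)
    payload per key. pickle stands in for the Datum protobuf here."""
    for idx, (image, label) in enumerate(samples):
        key = "{:08d}".format(idx).encode("ascii")
        value = pickle.dumps((image, label))
        yield key, value

records = list(make_lmdb_records([([0, 1], 0), ([2, 3], 1)]))
```

With the real `lmdb` package you would write each pair inside `env.begin(write=True)` via `txn.put(key, value)`. For testing with the pre-trained net, you point the data layer of a deploy/test prototxt at the new LMDB; the learned weights stay in the `.caffemodel` file.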

dump weights of cnn in json using keras

I want to use the dumped weights and model architecture in other framework for testing.
I know that:
model.get_config() can give the configuration of the model
model.to_json() returns a representation of the model as a JSON string, but the representation includes only the architecture, not the weights
model.save_weights(filepath) saves the weights of the model as an HDF5 file
I want to save the architecture as well as weights in a json file.
Keras does not have any built-in way to export the weights to JSON.
Solution 1:
For now, you can do it by iterating over the weights and saving them to a JSON file.
weights_list = model.get_weights()
This returns a list of all weight tensors in the model, as NumPy arrays.
Then, all you have to do is iterate over this list and write each array to the file:
for i, weights in enumerate(weights_list):
    writeJSON(weights)  # writeJSON stands in for your own serialization helper
Solution 2:
import json
weights_list = model.get_weights()
# get_weights() returns a plain Python list of NumPy arrays, so convert
# each array individually; the list itself has no tolist() method.
print(json.dumps([w.tolist() for w in weights_list]))
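Both solutions can be combined so the architecture and the weights land in a single JSON document. A stdlib-only sketch; it takes the outputs of `model.to_json()` and `model.get_weights()` as arguments, so the example inputs below are placeholders rather than a real Keras model:

```python
import json

def model_to_single_json(architecture_json, weights_list):
    """Bundle a Keras model's architecture string (from model.to_json())
    and its weights (from model.get_weights(), a list of NumPy arrays)
    into one JSON document. Arrays are converted via .tolist();
    plain nested lists are passed through unchanged."""
    weights = [w.tolist() if hasattr(w, "tolist") else w
               for w in weights_list]
    return json.dumps({"architecture": json.loads(architecture_json),
                       "weights": weights})

bundle = model_to_single_json('{"class_name": "Sequential"}',
                              [[1.0, 2.0]])
```

On the other side, `json.loads(bundle)["architecture"]` can be re-serialized and fed to `keras.models.model_from_json`, and the weight lists converted back to arrays for `model.set_weights`.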

JUnit JavaBean assert not null deep

How can a JavaBean be tested for deep not null?
I have a JavaBean with about 400 properties. MyBatis fetches the data from a database and uses a Result Map to initialize the JavaBean. What I'm looking to do is test the Result Map for correctness. The first step I'm considering is to test for deep not null.
Use java.beans.Introspector to get the BeanInfo, then iterate over getPropertyDescriptors(), invoking each property's read method via reflection and asserting that the returned value is not null.