CatBoost: how to save a model to an in-memory Python object instead of to disk

We would like to use CatBoost in an environment where we don't have permission to save data to disk. We found: https://github.com/catboost/tutorials/blob/master/model_analysis/model_export_as_json_tutorial.ipynb
Is there a way to pipe the model into an in-memory Python JSON object without saving to disk?

Although it won't be JSON, you can use the protected method _serialize_model on the CatBoostClassifier to get the model blob. To load it back, call load_model(blob=serialized_model) on a fresh CatBoostClassifier instance.
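A minimal sketch of that round trip (the toy data is just for illustration; _serialize_model is a non-public method, so pin your CatBoost version if you rely on it):

from catboost import CatBoostClassifier

# Toy training data, just to have a fitted model
X = [[0, 1], [1, 0], [1, 1], [0, 0]]
y = [0, 1, 1, 0]

model = CatBoostClassifier(iterations=10, verbose=False)
model.fit(X, y)

# _serialize_model is protected (non-public API) and may
# change between CatBoost releases
blob = model._serialize_model()  # bytes, never touches disk

# Restore the model from the in-memory blob
restored = CatBoostClassifier()
restored.load_model(blob=blob)
print(restored.predict([[1, 0]]))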

Related

Save keras model with weights to JSON file

I have a Keras model and I want to save it to JSON. The commonly used method is to save the model architecture to JSON and the weights in an .h5 file.
However, I need to save the model, including weights, in a JSON file. Is there a way to do that?
You can try saving them manually. model.get_weights() returns a list of NumPy arrays, which are not JSON-serializable as-is, so convert them to nested lists first:
import json

weights_list = model.get_weights()  # a list of NumPy arrays
# tolist() turns each array into nested Python lists
with open('weights.json', 'w') as f:
    json.dump([w.tolist() for w in weights_list], f)

How do I Pass JSON as a parameter to AWS Lambda

I have a CloudFormation template that consists of a Lambda function that reads messages from an SQS queue.
The Lambda function reads each message from the queue and transforms it using a JSON template (which I want to inject externally).
I will deploy different stacks for different products, and for each product I will provide a different JSON template to be used for the transformation.
I have different options but couldn't decide which one is better:
1. I can write all JSON files under the project, pack them together, and pass the relevant JSON file name as a parameter to the Lambda.
2. I can store the JSON files on S3 and pass an S3 URL to the Lambda, so I can read them at runtime.
3. I can store the JSON files in DynamoDB and read from there, using the same approach as option 2.
The first one seems like the better approach, as I don't need to read from an external source on every Lambda execution, but I will need to pack all templates together. The last two are cleaner, but require an external call to read the JSON on every invocation.
Another approach could be (I'm not sure if it is possible) to inject a JSON file into the Lambda on deploy from an S3 bucket or similar, and have the Lambda function read it like an environment variable.
As you can see from the CloudFormation documentation, Lambda environment variables can only be a Map of Strings, so the actual value you pass to the function as an environment variable must be a String. You could pass your JSON as a string, but the problem is that the maximum size for all environment variables combined is 4 KB.
If your templates are bigger and you don't want to call S3 or DynamoDB at runtime, a workaround is a simple shell script that copies the correct template file into the Lambda folder before building and deploying the stack. That way the Lambda is deployed in a package containing the code and only the desired JSON template.
I decided to go with the S3 setup and also improved efficiency by storing the JSON in a global variable after reading it the first time, so I read it once and reuse it for the lifetime of the Lambda container.
I'm not sure this is the best solution, but it works well enough for my scenario.
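A minimal sketch of that caching pattern in Python (the TEMPLATE_BUCKET and TEMPLATE_KEY environment variable names are hypothetical):

import json
import os
import boto3

s3 = boto3.client('s3')
_template = None  # module-level, so it survives warm invocations

def get_template():
    global _template
    if _template is None:
        # S3 is only hit on a cold start of the container
        obj = s3.get_object(Bucket=os.environ['TEMPLATE_BUCKET'],
                            Key=os.environ['TEMPLATE_KEY'])
        _template = json.loads(obj['Body'].read())
    return _template

def handler(event, context):
    template = get_template()
    # ... transform the SQS messages using the template ...
    return {'templateKeys': list(template)}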

Is there a way to import json data and map each object to a separate doc in Firestore?

I have a large JSON file, an array with lots of objects, that I want to import into Firestore, with each object becoming a document. I am looking for the most efficient way to do it. Any advice?
I have tried parsing the file and looping through some of the objects, running let res = db.collection('mycoll').add(obj) for each object.
This works; is there a smarter way to do it?
I want to import these into firestore, and I want each object to become a document
According to the official documentation, you can import/export your data, but only once you already have a database to export from:
You can use the Cloud Firestore managed export and import service to recover from accidental deletion of data and to export data for offline processing.
If you only have a JSON file, then you need to write some code for that.
I have tried parsing and looping through some of the objects in the file and for each object run let res = db.collection('mycoll').add(obj)
That's the easiest way to do it. You can also add the writes to a batch so they are committed atomically.
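For example, a sketch using the Python Admin SDK, google-cloud-firestore (the question's snippet is JavaScript, but the same idea applies). Firestore batches are capped at 500 writes, so a large array has to be chunked:

import json
from google.cloud import firestore

db = firestore.Client()

with open('data.json') as f:
    objects = json.load(f)  # the JSON array of objects

# Commit in chunks of 500, the maximum batch size
for start in range(0, len(objects), 500):
    batch = db.batch()
    for obj in objects[start:start + 500]:
        # document() with no ID auto-generates one
        batch.set(db.collection('mycoll').document(), obj)
    batch.commit()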
This works, is there a smarter way to do it?
Once you have a database, use the import/export feature.

Swift - Store JSON globally

I have this JSON array stored in a local variable:
let bigJsonArray = JSON(response)
My question is whether there is any way to store this bigJsonArray in a global variable/session/cookie/config so I can access it in every view of my app.
Does anybody know how to do this and could help me?
Greetings and thanks!
What you can do is define bigJsonArray as a global variable simply by declaring it outside of any class; the Swift compiler will treat it as a global, and you can access it from anywhere in your code.
For example:
import UIKit
import SwiftyJSON  // assuming the JSON type comes from SwiftyJSON

// Declared outside of any class, so it is visible app-wide
var bigJsonArray: JSON?

class A {
    var x = 0
}
// Later, when the response arrives: bigJsonArray = JSON(response)
That will of course not save the data if you kill the app, but from what I understand of your question, you just need to be able to access it from the whole app without re-sending a request to the server.
If you want to save the JSON data permanently, just store the data you received as a file, and the next time you need it, read it from the file and parse it (there's actually a method for that) instead of downloading and parsing it again. Much easier than trying to store the parsed data.
If this is data that can be downloaded again, read the appropriate documentation to make sure the file isn't backed up and is stored in a cache directory where the OS can remove it if space is tight.

Load BSON Model in Three.js?

Three.js has a JSONLoader to load JSON models.
Is it possible to convert a JSON 3D model to BSON and load it using Three.js? I ask because I need to load a huge JSON model.
I don't have a suggestion for BSON, but you can take a look at the source in three.js\utils\converters\msgpack and http://threejs.org/examples/#webgl_loader_msgpack, and you can read more about the format here: http://msgpack.org/
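As a sketch of the conversion step itself, assuming the msgpack Python package and a model.json input file (the three.js repo ships its own converter script for this):

import json
import msgpack  # pip install msgpack

# Read the JSON model and re-encode it as MessagePack,
# which is typically smaller and faster to parse
with open('model.json') as f:
    model = json.load(f)

with open('model.pack', 'wb') as f:
    f.write(msgpack.packb(model))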