Are There Any Ways to Keep a CreateML Training Model Up to Date?

I'm a Core ML and Create ML newbie. I've been playing around with Create ML for the past few days and started to wonder whether there's any way to keep a trained model up to date without creating a new model from an updated CSV file.
Is there a way to get a JSON file from online and train it?
I've tried:
let dataTable = try MLDataTable(contentsOf: URL(string: "the API in JSON format I am using"))
and it gave me the error: "MLDataTable can only load from a file URL."
The basic gist: is there a way to train a model without having to download an updated version of the CSV or JSON file every day?
Thanks :D
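The error itself suggests a workaround: MLDataTable only accepts file URLs, so you can fetch the JSON yourself and write it to a local file before loading it. A minimal Swift sketch under that assumption (the endpoint is a placeholder, and the JSON must already be in a layout MLDataTable understands), suitable for a script or playground:

import CreateML
import Foundation

// Placeholder endpoint; substitute the API you are actually using.
let endpoint = URL(string: "https://example.com/training-data.json")!

// Fetch the JSON and write it to a local file, since MLDataTable
// can only load from file URLs.
let jsonData = try Data(contentsOf: endpoint)
let localFile = FileManager.default.temporaryDirectory
    .appendingPathComponent("training-data.json")
try jsonData.write(to: localFile)

// A file URL is now accepted.
let dataTable = try MLDataTable(contentsOf: localFile)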

Related

Converting JSON annotations to COCO format

I have annotated my data using VoTT, and its default export format is JSON. I want to load my data into a Detectron2 model, but it seems the required format is COCO.
Can anyone tell me how I can convert my data from VoTT JSON to COCO format?
My classmates and I have created a Python package called PyLabel to help others with this kind of task and other labeling tasks. You can see an example in this notebook: https://github.com/pylabel-project/samples/blob/main/coco2voc.ipynb.
You might be able to use the package's importer tool to import your data and convert it to COCO.
You can find the code for the package here: https://github.com/pylabel-project/.
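If PyLabel's importers don't cover the VoTT layout directly, a small hand-rolled converter is also an option. The following is a minimal Python sketch; the field names ("assets", "regions", "boundingBox", "tags") follow a typical VoTT JSON export, so verify them against your own file:

import json

def vott_to_coco(vott_path, coco_path):
    with open(vott_path) as f:
        vott = json.load(f)

    images, annotations, categories = [], [], {}
    ann_id = 1
    for img_id, asset in enumerate(vott["assets"].values(), start=1):
        images.append({
            "id": img_id,
            "file_name": asset["asset"]["name"],
            "width": asset["asset"]["size"]["width"],
            "height": asset["asset"]["size"]["height"],
        })
        for region in asset.get("regions", []):
            box = region["boundingBox"]  # VoTT boxes are left/top/width/height
            for tag in region["tags"]:
                cat_id = categories.setdefault(tag, len(categories) + 1)
                annotations.append({
                    "id": ann_id,
                    "image_id": img_id,
                    "category_id": cat_id,
                    "bbox": [box["left"], box["top"], box["width"], box["height"]],
                    "area": box["width"] * box["height"],
                    "iscrowd": 0,
                })
                ann_id += 1

    coco = {
        "images": images,
        "annotations": annotations,
        "categories": [{"id": cid, "name": name} for name, cid in categories.items()],
    }
    with open(coco_path, "w") as f:
        json.dump(coco, f)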

ArcGIS developers - export data from a FeatureServer into a CSV file

Good day to all,
I am new to Survey123 and ArcGIS Online, but after many attempts, I am asking the community for help.
I am trying to export data from the ArcGIS REST API in CSV format from the command line. As I have hundreds of Feature Layers to export, I do not want to download them through the "Export Data" button.
I have already downloaded the data in JSON format through the REST API, but converting nested JSON into CSV is tricky, so I would prefer to download the CSV file from ArcGIS directly from the command line.
For information, the feature layer data I want to export comes from the results of a survey made with Survey123.
I thus have a table of data accessible at the following link: https://developers.arcgis.com/layers/6f8228f7bfa24a61a029aa9751da55d8/data/0
There is a Python tutorial for exporting CSV from a layer, but in that tutorial the layer is a dataset, whereas I have a Feature Layer, not a dataset:
https://developers.arcgis.com/labs/python/download-data/
The documentation for querying a feature layer is available at the following links, but a query can only return HTML or JSON objects:
https://developers.arcgis.com/labs/rest/query-a-feature-layer/
https://developers.arcgis.com/rest/services-reference/query-feature-service-layer-.htm
The documentation for exporting data is available at the following link:
https://developers.arcgis.com/rest/users-groups-and-items/export-item.htm
However, I cannot find the request that returns the CSV data for this table. I know how to do it through the web interface, but I want to know how to do it with requests.
I have already tried the following requests with Postman, but unfortunately none of them has worked:
https://mycompany.maps.arcgis.com/home/rest/content/users/chargeetudeti/export?token=UsSMUWq.......&itemId=6f8228f7bfa24a61a029aa9751da55d8&exportFormat=CSV
https://mycompany.maps.arcgis.com/sharing/rest/content/users/chargeetudeti/export?token=UsSMUWq.......&itemId=6f8228f7bfa24a61a029aa9751da55d8&exportFormat=CSV
https://www.arcgis.com/sharing/rest/content/users/chargeetudeti/export?token=UsSMUWq.......&itemId=6f8228f7bfa24a61a029aa9751da55d8&exportFormat=CSV&exportParameters={"layers" : [ { "id" : 0 } ] }
https://services6.arcgis.com/xNifwnpI45tLCPU7/ArcGIS/rest/services/service_70df166a7c9649878cc260d81ee85e4e/FeatureServer/export?token=UsSMUWq.......&exportFormat=CSV&exportParameters={"layers" : [ { "id" : 0 } ] }
Do you have any clues?
I have made the data public so you can try to export it to CSV from this feature layer: https://services6.arcgis.com/xNifwnpI45tLCPU7/arcgis/rest/services/service_70df166a7c9649878cc260d81ee85e4e/FeatureServer
I would be very grateful if you could help me :)
If you think the title of this post or its tags are not appropriate, please let me know.
Adrien
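For what it's worth, the export operation in the export-item documentation linked above is a POST that expects an f=json parameter and runs asynchronously (it creates a new CSV item that you then download), which may be why the plain GET requests in Postman failed. Here is a hedged Python sketch of that flow; the username, token, and item ID are placeholders, and the response field names should be verified against the documentation:

import time
import requests

ORG = "https://www.arcgis.com/sharing/rest"
USER = "chargeetudeti"   # your ArcGIS Online username (placeholder)
TOKEN = "..."            # a valid token (placeholder)
ITEM_ID = "6f8228f7bfa24a61a029aa9751da55d8"

# 1. Start an export job (POST, with f=json).
job = requests.post(f"{ORG}/content/users/{USER}/export", data={
    "itemId": ITEM_ID,
    "exportFormat": "CSV",
    "title": "my_csv_export",
    "f": "json",
    "token": TOKEN,
}).json()
export_item_id = job["exportItemId"]  # field name per the export-item docs; verify

# 2. Poll the status of the new item until the export completes.
while True:
    status = requests.get(
        f"{ORG}/content/users/{USER}/items/{export_item_id}/status",
        params={"f": "json", "token": TOKEN},
    ).json()
    if status.get("status") == "completed":
        break
    time.sleep(2)

# 3. Download the exported CSV item's data.
csv_bytes = requests.get(
    f"{ORG}/content/items/{export_item_id}/data",
    params={"token": TOKEN},
).content
with open("export.csv", "wb") as out:
    out.write(csv_bytes)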

Is there a way to import JSON data and map each object to a separate document in Firestore?

I have a large JSON file: an array with lots of objects. I want to import these into Firestore, with each object becoming a document, and I am looking for the most efficient way to do it. Any advice?
I have tried parsing and looping through some of the objects in the file and, for each object, running let res = db.collection('mycoll').add(obj)
This works; is there a smarter way to do it?
I want to import these into firestore, and I want each object to become a document
According to the official documentation, you can import and export your data, but only from a previous export, not from an arbitrary JSON file:
You can use the Cloud Firestore managed export and import service to recover from accidental deletion of data and to export data for offline processing.
If you only have a JSON file, then you need to write some code for that.
I have tried parsing and looping through some of the objects in the file and, for each object, running let res = db.collection('mycoll').add(obj)
That's the easiest way to do it. You can also add the writes to a batch so that each batch is committed atomically.
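For illustration, here is a minimal sketch of the batched approach in Python with the google-cloud-firestore client (the question's snippet is Node, but the batching API is equivalent there); the file name and collection name are placeholders, and a single batch is limited to 500 writes:

import json
from google.cloud import firestore

db = firestore.Client()

with open("data.json") as f:
    objects = json.load(f)  # a JSON array of objects

BATCH_SIZE = 500  # Firestore's per-batch write limit
for start in range(0, len(objects), BATCH_SIZE):
    batch = db.batch()
    for obj in objects[start:start + BATCH_SIZE]:
        # document() with no arguments generates an auto ID, like add() does
        batch.set(db.collection("mycoll").document(), obj)
    batch.commit()  # each batch of up to 500 writes commits atomically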
This works, is there a smarter way to do it?
Once you have a database, use the import/export feature.

App Engine - load data from a static JSON file or load data into the Datastore?

I'm new to App Engine and writing a REST API. I'm wondering if anyone has faced this dilemma before.
The data I have is not a lot (3 to 4 pages), but it changes annually.
Option 1: Store the data as JSON and parse the JSON file every time a request comes in.
Option 2: Model the data as objects, put them in the Datastore, and retrieve them whenever a request comes in.
Does anyone know the pros and cons of each method, or any better solutions?
Of course the answer is it depends.
Here are some of the questions I'd ask myself to make a decision -
do you want changes to the data to depend on a code push?
is there sensitive information in the data that should not be checked in to a VCS?
what other parts of your system depend on this data?
how likely are your assumptions about the data to change, in terms of update frequency and size?
Assuming the data is small (<1MB) and there's no sensitive information in it, I'd start out loading the JSON file as it's the simplest solution.
You don't have to parse the data on each request, but you can parse it at the top level once and effectively treat it as a constant.
Something along these lines -
import os
import json

# Resolve the path relative to this module and parse the file once at import time.
DATA_FILE = os.path.join(os.path.dirname(__file__), 'YOUR_DATA_FILE.json')

with open(DATA_FILE, 'r') as dataFile:
    JSON_DATA = json.loads(dataFile.read())
You can then use JSON_DATA like a dictionary in your code.
awesome_data = JSON_DATA['data']['awesome']
In case you need to access the data in multiple places, you can move this into its own module (ex. config.py) and import JSON_DATA wherever you need it.
Ex. in main.py
from config import JSON_DATA
# do something w/ JSON_DATA

Export entire Logical Data Model from GoodData

In https://developer.gooddata.com/article/data-modeling-api there is a logical data model and its corresponding JSON. However, I can't find a way to extract that JSON from a logical data model via the REST API. Is there a way to do this other than using the single-load interface (which would be very inefficient)?
For the record, my end goal is to make a tool that extracts that JSON (which would be in dev), then posts it to the ldm manager2, and then applies the suggested changes through the returned MAQL to production. Any help is greatly appreciated.
Currently this works only for getting or updating the entire project. In any case, you can GET the whole model definition with a simple API call. See the documentation:
http://docs.gooddatadrafts.apiary.io/
There is a GET request which is asynchronous. You can build some logic on top of that on your end. You can get all the models and store per-dataset information, but at the end you need to POST the "final version", and all the updates will be applied.
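Since the draft docs describe that asynchronous GET, a rough Python sketch of the polling logic might look like this; the host, project ID, authentication, and the exact /model/view path are assumptions taken from the linked documentation, so double-check them:

import time
import requests

HOST = "https://secure.gooddata.com"   # placeholder host
PROJECT = "my_project_id"              # placeholder project ID
session = requests.Session()
session.headers["Accept"] = "application/json"
# ... authenticate the session here against the GoodData login resource ...

# Kick off the asynchronous export of the project's logical data model.
start = session.get(f"{HOST}/gdc/projects/{PROJECT}/model/view").json()
poll_uri = start["asyncTask"]["link"]["poll"]  # field names assumed; verify

# Poll until the task finishes; the final body contains the model JSON.
while True:
    body = session.get(HOST + poll_uri).json()
    if "asyncTask" not in body:
        break
    time.sleep(2)

project_model = body  # the JSON you can then diff and POST back, as described above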
Let me know if I can help you with anything!
Regards,
JT