I have a large dataset in JSON format that I would like to upload to IrisCouchDB.
I found the following instructions: http://kxepal.iriscouch.com/docs/1.3/api/database/common.html
But I am a newbie and it also seems to be for a single JSON document. I'm afraid I will just create one huge entry instead of multiple document entries, which is what I want.
I have NodeJS but I don't have Cradle. Will I need it in order to perform this function? Any help is greatly appreciated!
Oh, no! It's a bad idea to reference the docs at my Iris host - that was a preview dev build. Please follow the official docs instead: http://docs.couchdb.org/en/latest/api/index.html
To create or update multiple documents, you need to use the /db/_bulk_docs resource:
curl -X POST http://localhost:5984/db/_bulk_docs \
-H "Content-Type:application/json" \
-d '{"docs":[{"name":"Yoda"}, {"name":"Han Solo"}, {"name":"Leia"}]}'
So you see the format of the data you should send to CouchDB? Now it all depends on your JSON file's format. If the whole file is an array of objects, just wrap its content in a {"docs": ...} object. If not, you'll have to write a small script to convert the file's data into the required format.
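For instance, here is a minimal sketch of that wrapping step, assuming your file (call it data.json) is already a top-level JSON array, that the jq command-line tool is installed, and that db on localhost is the target database (all of these names are placeholders):
# wrap the top-level array into a {"docs": [...]} object
jq '{docs: .}' data.json > bulk.json
# POST the wrapped payload to the _bulk_docs endpoint
curl -X POST http://localhost:5984/db/_bulk_docs \
     -H "Content-Type: application/json" \
     -d @bulk.json
Each element of the array becomes its own document, so you don't end up with one huge entry.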
Related
I want to know whether it is possible to post data in JSON format to a dashboard from a Linux server.
My JSON data is in the format below:
{
  "data": "{\"actiontodo\":\"Action to do for test nr: 1\",\"critical\":\"LOW\",\"fixstatus\":\"NOTCONCERN\",\"host\":\"MTR_SOME_HOST\",\"message\":\"$message\",\"mgsApplication\":\"MTR\",\"sMxtype\":\"PROD\",\"scriptname\":\"$scriptname\"}",
  "msg": "NotificationReceiveDTO without dict to send at: 2020-06-07T11:14:09.794 Created at: 2020-06-07T11:14:09.797",
  "msgType": "DATA"
}
From the wget man page, it is possible to post data to a website using either the --post-data=string or the --post-file=file argument.
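As a hedged sketch, assuming the JSON above is saved as payload.json and the dashboard accepts POSTs at a placeholder URL, the upload could look like this (--post-file and --header are standard wget options):
wget --header="Content-Type: application/json" \
     --post-file=payload.json \
     -O - \
     https://dashboard.example.com/api/notifications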
I'm trying to send my inline JSON file to my Solr Database, but I'm having a problem with my nested objects.
I have two nested objects inside my _source object, which are media_gallery and stock. Previously my upload used to crash, but I managed to get it working after a few corrections; however, my media_gallery and stock are added as separate documents, so instead of the original 1000 objects I get 3000 objects in my Solr DB after the upload.
I'm currently using this command to upload my JSON file:
curl 'http://192.168.99.100:8983/solr/gettingstarted/update/json/docs?split=/_source/media_gallery|/_source/stock&commit=true' \
--data-binary @catalog.json \
-H 'Content-type:application/json'
Basically I'm uploading the file catalog.json to http://192.168.99.100:8983/solr/gettingstarted.
My media_gallery and stock are both objects inside an object named _source, and they're getting split off as separate documents.
Could anyone help me with this? I need my media_gallery and stock objects to stay nested inside my _source object rather than being indexed as separate documents.
Thank you.
Solution:
Basically there was no need to split the nested objects. Since I'm uploading everything as a single Solr document, I can use the path "/".
curl 'http://192.168.99.100:8983/solr/gettingstarted/update/json/docs?split=&commit=true' --data-binary @catalog.json -H 'Content-type:application/json'
You should change your split param (remove /_source/media_gallery & /_source/stock).
If the entire JSON makes a single Solr document, the path must be "/".
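Concretely, a sketch of the command with the path set explicitly to "/" (same placeholder host and collection as above):
curl 'http://192.168.99.100:8983/solr/gettingstarted/update/json/docs?split=/&commit=true' \
     --data-binary @catalog.json \
     -H 'Content-type:application/json'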
Solr Guide: json mapping-parameters
Influx, how do I upload JSON with the REST API?
When I read data from Influx using, say, the REST query endpoint, it comes back in JSON format. Now, to upload, they are saying JSON is deprecated and we have to do it in a binary format - really?
Reading with this gives me data in JSON format:
curl -G 'http://localhost:8086/query' --data-urlencode "db=my_db" --data-urlencode "q=select * from \"server1.rte.set\" limit 1" > test.txt
Writes have to use this binary format:
curl -i -XPOST 'http://localhost:8086/write?db=my_db' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
Why would anybody do this? Either keep both JSON or keep both binary.
The current version of InfluxDB doesn't support the JSON write path or a binary protocol. The predominant reason it was deprecated was that decoding JSON was the largest performance bottleneck in the system.
For more information, see the GitHub issue comments 107043910 and 106968181.
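As a sketch of batching writes under that constraint, the same /write endpoint also accepts a file containing multiple line-protocol entries (points.txt is a hypothetical file name):
# points.txt holds one line-protocol entry per line, e.g.
# cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000
curl -i -XPOST 'http://localhost:8086/write?db=my_db' --data-binary @points.txt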
Currently I am able to PUT a single JSON file to a document in Cloudant using this: curl -X PUT 'https://username.cloudant.com/dummydb/doc3' -H "Content-Type: application/json" -d @numbers.json. I have many JSON files to be uploaded as different documents in the same DB. How can it be done?
So you definitely want to use Cloudant's _bulk_docs API endpoint in this scenario. It's more efficient (and cost-effective) if you're doing a bunch of writes. You basically POST an array that contains all your JSON docs. Here's the documentation on it: https://docs.cloudant.com/document.html#bulk-operations
Going one step further, so long as you've structured your JSON file properly, you can just upload the file to _bulk_docs. In cURL, that would look something like this: curl -X POST -d @file.json <domain>/db/_bulk_docs ... (plus the content type and all that other verbose stuff).
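Spelled out, a sketch of that request could look like the following, reusing the account and database names from the question and assuming bulk.json is already shaped as {"docs": [...]}:
curl -X POST 'https://username.cloudant.com/dummydb/_bulk_docs' \
     -H "Content-Type: application/json" \
     -d @bulk.json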
One step up from that would be using the ccurl (CouchDB/Cloudant cURL) tool that wraps your cURL statements to Cloudant and makes them less verbose. See https://developer.ibm.com/clouddataservices/2015/10/19/command-line-tools-for-cloudant-and-couchdb/ from https://stackoverflow.com/users/4264864/glynn-bird for more.
Happy Couching!
You can create a for loop that creates a document from each JSON file.
For example, in the command below I have 4 JSON files in my directory and I create 4 documents in my people database:
for file in *.json
do
  curl -d @$file https://username:password@myinstance.cloudant.com/people/ -H "Content-Type:application/json"
done
{"ok":true,"id":"763a28122dad6c96572e585d56c28ebd","rev":"1-08814eea6977b2e5f2afb9960d50862d"}
{"ok":true,"id":"763a28122dad6c96572e585d56c292da","rev":"1-5965ef49d3a7650c5d0013981c90c129"}
{"ok":true,"id":"763a28122dad6c96572e585d56c2b49c","rev":"1-fcb732999a4d99ab9dc5462593068bed"}
{"ok":true,"id":"e944282beaedf14418fb111b0ac1f537","rev":"1-b20bcc6cddcc8007ef1cfb8867c2de81"}
I am creating a web application that stores users' journeys, and I'm building this application in CakePHP 2.2.5.
I have a Journey model, and that Journey model has many Coordinate objects which store the lat/long coordinates.
I'd like to send an API call to the Journey's add method containing JSON in the body to create a new object.
I've tried to find out how to format the JSON so that the controller can use it, but I can't seem to figure it out. I'd also like to know how to add the child objects to the JSON.
I am testing the method using this cURL command:
curl -i -H "ACCEPT: application/json" -X POST --data @/home/sam/data.json http://localhost/journeys/add.json
Running this cURL command creates a blank entry in the database.
The JSON file included in the cURL command contains:
{"description" : "Hello World", "user_id":3}
It's better to convert the JSON to an array (with the json_decode function), then manipulate your array; if you want, at the end convert it back to JSON (with the json_encode function).
CakePHP handles this task with JsHelper::object.
Edit 1: But I think when PHP can do it for you, it's better to use the pure PHP functions instead of the CakePHP functions.
Edit 2: I think CakePHP doesn't handle it; you should handle it yourself: convert your JSON to an array, change your array as you want in a foreach, then pass it to the saveAll function to insert or update in the database.
I've done some more research and found a solution to my problem:
CakePHP will save JSON data and it will also save nested model information.
$data = $this->request->input('json_decode');
Which is similar to what @Arash said, but it will also save nested model information by calling
$this->Journey->saveAssociated($data)
The JSON I'm providing to the controller looks like:
{"Journey":
{"description":"Not even an article from the map", "user_id":3},
"Coordinate":[
{"lat":54.229149, "lng":-5.114115},
{"lat":54.39153, "lng":-5.114113}
]
}
Where Coordinate belongs to Journey.