I'm trying to send an inline JSON file to my Solr database, but I'm having a problem with my nested objects.
I have two nested objects inside my _source object, media_gallery and stock. The upload used to crash; I managed to get it through after a few corrections, but media_gallery and stock are added as separate documents, so instead of the original 1000 objects I get 3000 objects in my Solr DB after the upload.
I'm currently using this command to upload my JSON file:
curl 'http://192.168.99.100:8983/solr/gettingstarted/update/json/docs?split=/_source/media_gallery|/_source/stock&commit=true' \
--data-binary @catalog.json \
-H 'Content-type:application/json'
Basically I'm uploading the file catalog.json to http://192.168.99.100:8983/solr/gettingstarted.
My media_gallery and stock are both objects inside an object named _source, and they're being split out as separate documents.
Could anyone help me with this? I need my media_gallery and stock objects to be uploaded as nested objects inside my _source object, not as separate documents.
Thank you.
Solution:
Basically there was no need to split the nested objects. Since I'm uploading everything as a single Solr document, I can use the path "/".
curl 'http://192.168.99.100:8983/solr/gettingstarted/update/json/docs?split=&commit=true' --data-binary @catalog.json -H 'Content-type:application/json'
You should change your split param (remove /_source/media_gallery & /_source/stock)
If the entire JSON makes a single Solr document, the path must be
"/"
Solr Guide: json mapping-parameters
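For illustration, here's a hypothetical minimal catalog.json (the id and field values are invented; only the _source/media_gallery/stock shape matters). With the split param left empty, Solr treats the whole object, nested media_gallery and stock included, as one document:

```shell
# Hypothetical minimal catalog.json; the id and field values are
# invented for illustration.
cat > catalog.json <<'EOF'
{"_source": {"id": "1",
             "media_gallery": [{"image": "a.jpg"}, {"image": "b.jpg"}],
             "stock": {"qty": 5}}}
EOF

# Indexed with an empty split param, this becomes a single Solr document:
# curl 'http://192.168.99.100:8983/solr/gettingstarted/update/json/docs?split=&commit=true' \
#   --data-binary @catalog.json -H 'Content-type:application/json'
```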
Related
All,
I am trying to upload a simple JSON slug that should meet the spec, but I keep getting an error from Firebase on the command line.
Here is a sample of the JSON:
[{"act":"draw","arg":"20","art":"650.923","block":7207397,"deleted":false,"id":4387,"ink":"10.351568024950279","ire":"643.796950752935","lad":"0x25b8CCE3fD037c11226C64980e4128480A932eBc","pip":"120.68","per":"1.0404295226106932","ratio":"199.675367095938625390002206918521800","tab":"1299.732889761896578547364065340249904","time":"2019-02-11T16:21:48.000Z","tx":"0xda74c7780d0b778b99ddf35c0d45ad24aa088b320b7e15169badd4c261ca9f76","idx":72,"timestamp":1549902108000},{"act":"open","arg":"","art":"0","block":6746816,"deleted":false,"id":4387,"ink":"0","ire":"0","lad":"0x3e294e9EA60249999839d829CDAFE9bC3A67Cef4","pip":"131.01","per":"1.0350112435677474","ratio":null,"tab":"0.000000000000000000","time":"2018-11-21T17:25:31.000Z","tx":"0x71f513384fd358b4e945bb7b5e55f49f3a3e61f157333211e91fa0f3f7df0715","idx":103,"timestamp":1542821131000}]
$ curl -X PATCH -d @4387.json 'https://cname.firebaseio.com/data/.json?auth=xxxxxxxx'
Any help is greatly appreciated.
What you're showing is not a JSON object. It's a JSON array. You can tell because it begins and ends with square brackets. If there is just one JSON object element in the array, perhaps you meant to extract the one item and make it the input here.
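If the array really does hold just the one object you want, pulling it out can be a one-liner (a sketch assuming Python 3 is available; jq '.[0]' does the same job). The sample file below stands in for 4387.json:

```shell
# Stand-in for 4387.json: an array containing the objects.
printf '%s' '[{"id":4387,"act":"draw"},{"id":4387,"act":"open"}]' > 4387.json

# Extract the first element of the array into its own file.
python3 -c 'import json; print(json.dumps(json.load(open("4387.json"))[0]))' > 4387-object.json
```

You could then PATCH 4387-object.json instead of the original array.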
I was also facing the same issue in Flutter with Firebase. The solution is to use json.encode('json object here'); it resolved my issue.
Currently I am able to PUT a single JSON file to a document in Cloudant using this: curl -X PUT 'https://username.cloudant.com/dummydb/doc3' -H "Content-Type: application/json" -d @numbers.json. I have many JSON files to be uploaded as different documents in the same DB. How can it be done?
So you definitely want to use Cloudant's _bulk_docs API endpoint in this scenario. It's more efficient (and cost-effective) if you're doing a bunch of writes. You basically POST an array that contains all your JSON docs. Here's the documentation on it: https://docs.cloudant.com/document.html#bulk-operations
Going one step further, so long as you've structured your JSON file properly, you can just upload the file to _bulk_docs. In cURL, that would look something like this: curl -X POST -d #file.json <domain>/db/_bulk_docs ... (plus the content type and all that other verbose stuff).
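As a sketch of what "structured properly" means here (the file name and documents are invented), the file wraps all of your documents in a top-level "docs" array:

```shell
# Hypothetical file.json in the shape _bulk_docs expects: one object
# with a "docs" array holding the individual documents.
cat > file.json <<'EOF'
{"docs": [{"name": "doc one"}, {"name": "doc two"}]}
EOF

# Then (domain and credentials are placeholders):
# curl -X POST -d @file.json -H "Content-Type: application/json" \
#   'https://username.cloudant.com/dummydb/_bulk_docs'
```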
One step up from that would be using the ccurl (CouchDB/Cloudant cURL) tool that wraps your cURL statements to Cloudant and makes them less verbose. See https://developer.ibm.com/clouddataservices/2015/10/19/command-line-tools-for-cloudant-and-couchdb/ from https://stackoverflow.com/users/4264864/glynn-bird for more.
Happy Couching!
You can create a for loop and create documents from each JSON file.
For example, in the command below I have 4 JSON files in my directory and I create 4 documents in my people database:
for file in *.json
do
  curl -d @$file https://username:password@myinstance.cloudant.com/people/ -H "Content-Type:application/json"
done
{"ok":true,"id":"763a28122dad6c96572e585d56c28ebd","rev":"1-08814eea6977b2e5f2afb9960d50862d"}
{"ok":true,"id":"763a28122dad6c96572e585d56c292da","rev":"1-5965ef49d3a7650c5d0013981c90c129"}
{"ok":true,"id":"763a28122dad6c96572e585d56c2b49c","rev":"1-fcb732999a4d99ab9dc5462593068bed"}
{"ok":true,"id":"e944282beaedf14418fb111b0ac1f537","rev":"1-b20bcc6cddcc8007ef1cfb8867c2de81"}
I am using the pycsw extension to produce a CSW file. I have harvested data from one CKAN instance [1], into another [2], and am now looking to run the pycsw 'paster load' command:
paster ckan-pycsw load -p /etc/ckan/default/pycsw.cfg -u [CKAN INSTANCE]
I get the error:
Could not pass xml doc from [ID], Error: Start tag expected, '<' not found, line 1, column 1
I think it is because when I visit this URL:
[CKAN INSTANCE 2]/harvest/object/[ID]
it comes up with a JSON file as opposed to the XML it is expecting.
I have run the pycsw load command on other CKAN instances and have had no problems with them. They also display an XML file at the URL stated above, so I wanted to know: how do I get CKAN to serve an XML file instead of JSON?
Thanks in advance for any help!
As you've worked out, your datasets need to be in ISO(XML) format to load into a CSW server. A CKAN instance only has a copy of a dataset in ISO(XML) format if it harvested it from a CSW.
If you use the CKAN(-to-CKAN) harvester in the chain then the ISO(XML) record doesn't get transferred with it. So you'd either need to add this functionality to the CKAN(-to-CKAN) harvester, or get rid of the CKAN-to-CKAN harvest step.
Alternatively if the record originated in a CKAN, then it has no ISO(XML) version anyway, and you'd need to create that somehow.
I have a large dataset in JSON format that I would like to upload to IrisCouchDB.
I found the following instructions: http://kxepal.iriscouch.com/docs/1.3/api/database/common.html
But I am a newbie, and it also seems to be for a single JSON document. I'm afraid I will just create one huge entry instead of the multiple document entries I want.
I have NodeJS but I don't have Cradle. Will I need it in order to perform this function? Any help is greatly appreciated!
Oh, no! It's a bad idea to rely on the docs at my Iris host - that was a preview dev build. Please follow the official docs: http://docs.couchdb.org/en/latest/api/index.html
To update multiple documents you need to use the /db/_bulk_docs resource:
curl -X POST http://localhost:5984/db/_bulk_docs \
-H "Content-Type:application/json" \
-d '{"docs":[{"name":"Yoda"}, {"name":"Han Solo"}, {"name":"Leia"}]}'
So you can see the format of the data you should send to CouchDB. Now it all depends on your JSON file's format. If the whole file is an array of objects, just wrap its content in a {"docs": ...} object. If not, you'll have to write a small script to convert the file's data into the required format.
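If the whole file is a bare array, the wrapping step can itself be a one-liner rather than a small library (a sketch assuming Python 3 is available; data.json and bulk.json are stand-in names):

```shell
# Stand-in input file: a bare JSON array of documents.
printf '%s' '[{"name":"Yoda"},{"name":"Han Solo"},{"name":"Leia"}]' > data.json

# Wrap the array in the {"docs": [...]} object that _bulk_docs expects.
python3 -c 'import json,sys; json.dump({"docs": json.load(sys.stdin)}, sys.stdout)' < data.json > bulk.json
```

bulk.json is then ready to POST to the /db/_bulk_docs resource with curl as shown above.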
I am creating a web application that stores users' journeys, and I'm building it in CakePHP 2.2.5.
I have a Journey model, and that Journey model has many Coordinate objects which store the lat/long coordinates.
I'd like to send an API call to the Journey's add method with JSON in the body to create a new object.
I've tried to find out how to format the JSON so the controller can use it, but I can't seem to work it out. I'd also like to know how to add the child objects to the JSON.
I am testing the method using this cURL command
curl -i -H "ACCEPT: application/json" -X POST --data @/home/sam/data.json http://localhost/journeys/add.json
Running this cURL command creates a blank entry in the database.
The JSON file included in the cURL request contains
{"description" : "Hello World", "user_id":3}
It's better to convert the JSON to an array (with the json_decode function) and then manipulate the array; if you need JSON at the end, convert it back (json_encode function).
CakePHP handles this task with JsHelper::object.
edit 1: but I think that when PHP can do it for you, it's better to use the pure PHP functions instead of the CakePHP ones.
edit 2: I think CakePHP doesn't handle it; you should handle it yourself: convert your JSON to an array, change the array as you want in a foreach, then pass it to the saveAll function to insert or update the database.
I've done some more research and found a solution to my problem:
CakePHP will save JSON data and it will also save nested model information.
$data = $this->request->input('json_decode');
Which is similar to what @Arash said but it will also save nested model information by calling
$this->Journey->saveAssociated($data)
The JSON I'm providing to the controller looks like:
{"Journey":
{"description":"Not even an article from the map", "user_id":3},
"Coordinate":[
{"lat":54.229149, "lng":-5.114115},
{"lat":54.39153, "lng":-5.114113}
]
}
Where Coordinate belongs to Journey.