I have a Windows batch script that performs a POST request using curl and reads the data from a JSON file. It works fine with only a single object in the file, and it looks like this:
curl -u username:password -H "Content-Type: application/json" -d @file.json http://apiurl.com
and the json file is this:
{
"name": "Empty name",
"properties": {
"active": "True",
"subcity_zone": "East Hararge",
"woreda": "Meta"
}
}
But now I want to send a request for each object in the array by iterating over the items. So how do I iterate over each JSON object from the file?
Here is what the new JSON array file looks like:
[{
"name": "test facility I",
"properties": {
"active": "True",
"city": "",
"subcity_zone": "East Hararge",
"woreda": "Meta"
}
},
{
"name": "test facility II",
"properties": {
"active": "True",
"subcity_zone": "East Hararge",
"woreda": "Girawa"
}
}]
Using jq:
jq -c '.[]' file | while read -r js; do
curl -u username:password -H "Content-Type: application/json" -d @<(echo "$js") http://apiurl.com
done
The jq command extracts each object onto its own line, which the read command reads into the $js variable.
The <(echo "$js") process substitution creates a temporary file descriptor that is passed to curl.
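Since each object already sits in the $js shell variable, a slightly simpler variant (a sketch, assuming a bash-compatible shell such as Git Bash or WSL, because <() process substitution is not available in plain cmd.exe batch files) is to pass the data directly:
jq -c '.[]' file | while read -r js; do
  # send each compact JSON object straight from the variable
  curl -u username:password -H "Content-Type: application/json" -d "$js" http://apiurl.com
done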
I have the following error when I try to ingest a JSON file:
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"failed to parse"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"not_x_content_exception","reason":"Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"}},"status":400}
My file is like this:
[{
"name":"John",
"age":30,
"cars":[ "Ford", "BMW", "Fiat" ]
},
{
"name":"John2",
"age":30,
"cars":[ "Ford2", "BMW2", "Fiat2" ]
}]
It works if I have only one record, but not with more than one.
The curl command is like this:
curl -XPOST localhost:9200/cars/doc/1 -H "Content-Type: application/json" -d @cars.json
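Posting the whole array to /cars/doc/1 asks Elasticsearch to parse the array as a single document, which is most likely what triggers the mapper_parsing_exception. One way around it, reusing the jq loop from the first answer (a sketch, assuming jq is available and an Elasticsearch version that still accepts the doc type, as in the original command), is to index each element separately and let Elasticsearch assign the IDs:
jq -c '.[]' cars.json | while read -r doc; do
  # each array element becomes its own document; omitting the ID lets Elasticsearch generate one
  curl -XPOST "localhost:9200/cars/doc" -H "Content-Type: application/json" -d "$doc"
done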
I have a Nexus Repository server where my artifacts are stored, and I want to write a shell script to download artifacts from it. When I run the curl request curl --user username:password -X GET "http://your_ip:your_port/service/rest/v1/search?repository=your_repository" -H "accept: application/json" I get a list of the items in my repository, which looks like this:
{
"items": [
{
"id": "dGVzdC1hcHA6ZDM1MTBiN2FkMThkODJjZGU1NjNhMWVlMWFmOWIwMGQ",
"repository": "test-app",
"format": "maven2",
"group": "no.ahj",
"name": "test-app",
"version": "1.0-20190715.130341-2",
"assets": [
{
"downloadUrl": "http://192.168.56.2:8081/repository/test-app/no/ahj/test-app/1.0-SNAPSHOT/test-app-1.0-20190715.130341-2.pom",
"path": "no/ahj/test-app/1.0-SNAPSHOT/test-app-1.0-20190715.130341-2.pom",
"id": "dGVzdC1hcHA6Yzc3MDE2OWMwYjJlM2VkODU0MGMyOGEwOWQ0Njk4ZTQ",
"repository": "test-app",
"format": "maven2",
"checksum": {
"sha1": "5fd032774dd3ae6fbbd6484b3dc6ef2582d9b397",
"md5": "3a6aa8e295a734fdb8a8df782c0a14d5"
}
},
I would like my shell script to run this curl request, extract the value from the downloadUrl field, store it in a variable and then use wget with this variable to download the file. So my question is: how can I take the URL from downloadUrl and store/use it in my shell script?
A way to parse it with no extra dependencies (Python is installed by default on most Linux distributions) is to use Python itself:
user@host ~ % JSON=$(curl --user username:password -X GET "http://your_ip:your_port/service/rest/v1/search?repository=your_repository" -H "accept: application/json")
user@host ~ % echo "$JSON" | python -c 'import sys, json; print(json.load(sys.stdin)["items"][0]["assets"][0]["downloadUrl"])'
http://192.168.56.2:8081/repository/test-app/no/ahj/test-app/1.0-SNAPSHOT/test-app-1.0-20190715.130341-2.pom
If you are going to do a lot of JSON parsing in this script, it may be worth considering writing the entire script in Python, too, instead of shell script.
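If jq is available (it is used elsewhere on this page), an equivalent approach is to let jq pull every downloadUrl and feed each one to wget; a sketch, assuming the same placeholder credentials and URL as above:
curl -s --user username:password "http://your_ip:your_port/service/rest/v1/search?repository=your_repository" -H "accept: application/json" \
  | jq -r '.items[].assets[].downloadUrl' \
  | while read -r url; do
      # download every asset listed in the search response
      wget "$url"
    done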
I am using jq to parse JSON returned by the Salesforce CLI.
My bash script is:
orgCreateResult=$(sfdx force:org:display -u mockScreenOrg --json)
orgCreateStatus=$(echo $orgCreateResult | ./bash-scripts/jq-win64.exe .status -r)
echo "orgCreateResult: "
echo "$orgCreateResult"
echo "orgCreateStatus: $orgCreateStatus"
instanceUrl=$(echo $checkDevHubResult | ./bash-scripts/jq-win64.exe .result -r)
echo "instanceUrl: $instanceUrl"
I am using bash terminal in VS Code on windows. Output I am getting from this script is :
orgCreateResult:
{
"status": 0,
"result": {
"username": "test-******#example.com",
"devHubId": "****#**.com",
"id": "************",
"createdBy": "*****#**.com",
"createdDate": "2019-05-13T14:15:32.000+0000",
"expirationDate": "2019-06-12",
"status": "Active",
"edition": "Developer",
"orgName": "HPESC",
"accessToken": "************************************",
"instanceUrl": "https://nosoftware-ruby-2532-dev-ed.cs6.my.salesforce.com",
"clientId": "PlatformCLI",
"alias": "mockScreenOrg"
}
}
orgCreateStatus: 0
instanceUrl:
I am able to read the status field from the JSON, but it fails with the result object. I want to read the instanceUrl field to use later in my script. I am not sure if I am doing something wrong; I am very new to bash and jq.
Try this:
instanceUrl=$(echo $orgCreateResult | ./bash-scripts/jq-win64.exe .result.instanceUrl -r)
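If instanceUrl still comes out empty, note that the script in the question pipes $checkDevHubResult, which is never assigned in the snippet shown; it should be $orgCreateResult, ideally quoted to avoid word splitting:
instanceUrl=$(echo "$orgCreateResult" | ./bash-scripts/jq-win64.exe -r .result.instanceUrl)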
I have a JSON file in the following format:
"rows": [
{
"key": [
null,
null,
"dco_test_group",
"3d3ce6270fdfuashge12e1d41af93179",
"test_djougou"
],
"value": {
"lat": "31.538208354844658",
"long": "91.98762580927113"
}
},
{
"key": [
null,
null,
"dco_test_group",
"4cda7jhsadgfs6123hsdaa9069ade2",
"test_ouake"
],
"value": {
"lat": "59.696798503352547",
"long": "11.6626995307082464"
}
},
I want to import the file such that each object inside rows becomes a CouchDB document. Right now, I have the following code:
curl -X PUT --data-binary @"C:\Users\me\Dropbox (Personal)\Research\Folder\location.json" http://127.0.0.1:5984/db/document_name
This adds all the data inside document_name.
If I try:
curl -X PUT --data-binary @"C:\Users\me\Dropbox (Personal)\Research\Folder\location.json" http://127.0.0.1:5984/db
a new db is created but no data gets added. How do I edit the code to get the desired output?
UPDATE 1
Does it matter if all the data is in one record? Are there any rules analogous to the five normal forms of relational databases?
Use the bulk document API for this. Here is an example from the docs: https://wiki.apache.org/couchdb/HTTP_Bulk_Document_API#Modify_Multiple_Documents_With_a_Single_Request
$ DB="http://127.0.0.1:5984/mydb"
$ curl -H "Content-type:application/json" -d '{"docs":[{"key":"baz","name":"bazzel"},{"key":"bar","name":"barry"}]}' -X POST $DB/_bulk_docs
$ curl -H "Content-type:application/json" -d #your_file.json -X POST $DB/_bulk_docs
Note that all docs are items within a 'docs' array.
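Applied to the file in the question, jq can rewrap the rows array under a docs key before posting; a sketch, assuming location.json is a complete JSON document whose top-level object contains the rows array shown above:
$ jq '{docs: .rows}' location.json > bulk.json
$ curl -H "Content-type:application/json" -d @bulk.json -X POST $DB/_bulk_docs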
I need to delete some wrong data that was inserted in a lot of processes, and I need to figure out whether this is possible with cURL and the REST API, using a script in sh, batch, or something similar:
curl -u admin:admin -i -H "Accept: application/json" -X GET "http://json_bpm.com/wle/v1/service/6112449?action=getData&fields=context"
First I just need the result map.
Output:
{"status":"200","data":{"result":"{\"context\":{\"name\":\"xxx\" (...)
"resultMap":{"context":{"name\":"xxx\" (...) }}}
I need to remove the userDelete entry (see below) for thousands of processes and set the data again using curl. If you know how to remove entries from JSON too, you're the man. :)
{
"context": {
"name": "Change Process",
"startUser": {
"user": "0001"
},
"endUser": {
"user": "0001"
},
"userDelete": {
"user": "0002"
},
"origin": "GUI",
"userAction": "Change Process"
}
}
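For the removal part, a minimal sketch, assuming the context document above has been fetched and saved locally as context.json (a hypothetical filename): jq's del() filter strips the userDelete entry, and the cleaned JSON can then be sent back with whatever update action the API supports.
# remove the unwanted userDelete object from the context
jq 'del(.context.userDelete)' context.json > context_clean.json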