I am getting an {"error":"bad_request","reason":"invalid_json"} response while running
curl -X PUT "http://localhost:5984/test" -d '{"valid":"json"}'
What should I do to insert a document into the test database through the command line?
When doing a PUT the _id of the document should be provided in the URL. So e.g.:
curl -X PUT "http://localhost:5984/test/my-id" -d '{"valid":"json"}'
If you want Couch to generate the id, use a POST instead.
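For example, a minimal POST (same host and database as in the question) might look like:
curl -X POST "http://localhost:5984/test" -H "Content-Type: application/json" -d '{"valid":"json"}'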
We have an Ambari cluster, version 2.5.0.3, and all client machines are Red Hat Linux.
First I generated the JSON file on my Linux machine (on the Ambari server machine) as follows:
curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://130.14.6.28:8080/api/v1/clusters/HDP01\?format\=blueprint > blueprint.json
Then I updated the blueprint.json file with some changes to the parameters and their values.
Finally, my goal is to upload the new blueprint.json to the Ambari cluster so that it takes effect.
path=/root
curl -H "X-Requested-By: ambari" --data # -X POST -u admin:admin http://130.14.6.28:8080/api/v1/blueprints/HDP01 -d #$path/blueprint.json
But I get the following errors (seemingly because of wrong syntax):
Warning: Couldn't read data from file "", this makes an empty POST.
{
  "status" : 400,
  "message" : "Invalid Request: Malformed Request Body. An exception occurred parsing the request body: Unexpected character ('&' (code 38)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')\n at [Source: java.io.StringReader@4a3484a6; line: 1, column: 3]"
}
Please advise what is wrong in my syntax.
And what is the right syntax to upload the updated blueprint.json file?
Did you try to validate your JSON online, e.g. at https://jsonformatter.curiousconcept.com/? It looks like the problem is with general JSON syntax.
In the curl command used to upload the new blueprint.json, you are using --data @ as well as -d @$path/blueprint.json. -d and --data serve the same purpose, so the first occurrence, i.e. --data @, takes effect and the command tries to locate a file with no path, i.e. "".
You may remove --data @ to fix the Couldn't read data from file "" error.
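A sketch of the corrected upload, assuming the same host, credentials, and blueprint name as in the question:
path=/root
curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://130.14.6.28:8080/api/v1/blueprints/HDP01 -d @$path/blueprint.json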
I have been trying to call the CloudFlare API v4, using an example provided in their own documentation.
This is the code of the example:
curl -X PUT "https://api.cloudflare.com/client/v4/zones/023e105f4ecef8ad9ca31a8372d0c353/dns_records/372e67954025e0ba6aaa6d586b9e0b59" \
     -H "X-Auth-Email: user@example.com" \
     -H "X-Auth-Key: c2547eb745079dac9320b638f5e225cf483cc5cfdda41" \
     -H "Content-Type: application/json" \
     --data '{"id":"372e67954025e0ba6aaa6d586b9e0b59","type":"A","name":"example.com","content":"1.2.3.4","proxiable":true,"proxied":false,"ttl":120,"locked":false,"zone_id":"023e105f4ecef8ad9ca31a8372d0c353","zone_name":"example.com","created_on":"2014-01-01T05:20:00.12345Z","modified_on":"2014-01-01T05:20:00.12345Z","data":{}}'
Which can also be found at
Update DNS Records
Using Windows cmd.exe to run this command, I need to make it a single line first, so I removed the trailing backslashes and reformatted it (twice), making sure I altered no part in the process.
This is the same code in one line:
curl -X PUT "https://api.cloudflare.com/client/v4/zones/023e105f4ecef8ad9ca31a8372d0c353/dns_records/372e67954025e0ba6aaa6d586b9e0b59" -H "X-Auth-Email: user#example.com" -H "X-Auth-Key: c2547eb745079dac9320b638f5e225cf483cc5cfdda41" -H "Content-Type: application/json" --data '{"id":"372e67954025e0ba6aaa6d586b9e0b59","type":"A","name":"example.com","content":"1.2.3.4","proxiable":true,"proxied":false,"ttl":120,"locked":false,"zone_id":"023e105f4ecef8ad9ca31a8372d0c353","zone_name":"example.com","created_on":"2014-01-01T05:20:00.12345Z","modified_on":"2014-01-01T05:20:00.12345Z","data":{}}'
When I run this single-liner in cmd, it executes but I get a "malformed JSON in request body" error. However, a visual check, formatting in Notepad++, and a run through a JSON validator are all positive; this JSON (copied from the CloudFlare documentation) is not malformed.
Error Message
{"success":false,"errors":[{"code":6007,"message":"Malformed JSON in request body"}],"messages":[],"result":null}
Googling this error message or the error code gives me nothing and this same command works on a PC running Linux.
Can someone tell me if this is a known bug, if the JSON really is malformed or if something else comes to mind?
I found the answer in the blog post "Expecting to find valid JSON in request body..." about curl for Windows.
For example, for "Purge everything" the --data value will be:
# On Linux
--data '{"purge_everything":true}'
# On Windows
--data "{\"purge_everything\":true}"
For Windows:
Replace the single quotes with double quotes: ' --> "
Escape the double quotes with a backslash: " --> \"
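Applying those two rules to the command from the question (with the JSON body shortened to a few fields purely for illustration) gives something cmd.exe will pass through intact:
curl -X PUT "https://api.cloudflare.com/client/v4/zones/023e105f4ecef8ad9ca31a8372d0c353/dns_records/372e67954025e0ba6aaa6d586b9e0b59" -H "X-Auth-Email: user@example.com" -H "X-Auth-Key: c2547eb745079dac9320b638f5e225cf483cc5cfdda41" -H "Content-Type: application/json" --data "{\"type\":\"A\",\"name\":\"example.com\",\"content\":\"1.2.3.4\",\"ttl\":120}"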
cmd.exe doesn't support single quotes; to run those commands straight from the docs you can use Bash.
Bash can be enabled in Windows 10: https://www.laptopmag.com/uk/articles/use-bash-shell-windows-10
Alternatively, Git Bash comes with Git for Windows: https://gitforwindows.org/
I have a curl command which sends application/json data. When I type this URL directly in a Unix console, it works fine. But when I store it in a CSV file and, from a shell script, read each curl command from that file and execute it through backticks, I face two problems:
1) It does not allow spaces in the JSON data being posted.
2) The content type is not being set.
Please find the same command below:
curl -i -X PUT -H 'content-type:application/json' -H "Accept:application/json" -d '{"startTime":1426172400000,"endTime":1426173300000,"attributes":{"title":"X X X","link":"https://someurl.com"}}' http://10.10.7.90:9084/myapp/rest/app/706128.api
I found the issue. It was in my shell script. Instead of using the backtick ` to execute the curl command, I used eval curl, which solved the issue.
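A minimal sketch of that approach, assuming the curl commands are stored one per line in a file called commands.csv (the file name is illustrative):
# read each stored curl command and run it with eval so the quoted JSON keeps its spaces
while IFS= read -r cmd; do
  eval "$cmd"
done < commands.csv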
Running the following command from a Windows command line using cURL attempting to post a new document to an existing CouchDB database (named test) fails:
curl -H "Content-Type: application/json" -X POST "http://127.0.0.1:5984/test" -d {"valid":"json"}
It returns the error:
{"error":"bad_request","reason":"invalid_json"}
The JSON is valid so what gives?
The answer is related to the formatting of the JSON string on the command line. Even though it is proper JSON when you type it, the command line, it seems, must reformat it before sending it. (Maybe someone else can explain why it does this in more detail.) To fix this, you need to escape your quotation marks on the command line like so:
curl -H "Content-Type: application/json" -X POST "http://127.0.0.1:5984/test" -d {"""valid""":"""json"""}
See the extra quotation marks? This should work and return {"ok":true} along with an id and revision number.
You also have to quote the whole statement to support spaces, like: -d "{\"title\":\"There is Nothing Left to Lose\" , \"artist\":\"Foo Fighters\"}"
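Put together, a full insert in that style might look like this (the field values are just the example from the comment above):
curl -H "Content-Type: application/json" -X POST "http://127.0.0.1:5984/test" -d "{\"title\":\"There is Nothing Left to Lose\",\"artist\":\"Foo Fighters\"}"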
I am new to Elasticsearch and have been entering data manually up until this point. For example I've done something like this:
$ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
"message" : "trying out Elastic Search"
}'
I now have a .json file and I want to index this into Elasticsearch. I've tried something like this too, but no success:
curl -XPOST 'http://jfblouvmlxecs01:9200/test/test/1' -d lane.json
How do I import a .json file? Are there steps I need to take first to ensure the mapping is correct?
The right command if you want to use a file with curl is this:
curl -XPOST 'http://jfblouvmlxecs01:9200/test/_doc/1' -d @lane.json
Elasticsearch is schemaless, therefore you don't necessarily need a mapping. If you send the json as it is and you use the default mapping, every field will be indexed and analyzed using the standard analyzer.
If you want to interact with Elasticsearch through the command line, you may want to have a look at the elasticshell which should be a little bit handier than curl.
2019-07-10: It should be noted that custom mapping types are deprecated and should not be used. I updated the type in the URL above to make it easier to see which was the index and which was the type, as having both named "test" was confusing.
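If you do want control over the mapping before indexing, one rough sketch (for Elasticsearch 7+, where mapping types are gone; the field names are only illustrative) is to create the index with an explicit mapping first:
curl -XPUT 'http://jfblouvmlxecs01:9200/test' -H 'Content-Type: application/json' -d '{
  "mappings": {
    "properties": {
      "user":      { "type": "keyword" },
      "post_date": { "type": "date" },
      "message":   { "type": "text" }
    }
  }
}'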
Per the current docs, https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html:
If you’re providing text file input to curl, you must use the
--data-binary flag instead of plain -d. The latter doesn’t preserve newlines.
Example:
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests
We made a little tool for this type of thing https://github.com/taskrabbit/elasticsearch-dump
One thing I've not seen anyone mention: the JSON file must have one line specifying the index the next line belongs to, for every line of the "pure" JSON file.
i.e.:
{"index":{"_index":"shakespeare","_type":"act","_id":0}}
{"line_id":1,"play_name":"Henry IV","speech_number":"","line_number":"","speaker":"","text_entry":"ACT I"}
Without that, nothing works, and it won't tell you why
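With a file in that format, the bulk upload itself is just (file name assumed; the index is already named on each action line, so the generic _bulk endpoint works):
curl -s -H "Content-Type: application/json" -XPOST localhost:9200/_bulk --data-binary @shakespeare.json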
I'm the author of elasticsearch_loader
I wrote ESL for this exact problem.
You can download it with pip:
pip install elasticsearch-loader
And then you will be able to load json files into elasticsearch by issuing:
elasticsearch_loader --index incidents --type incident json file1.json file2.json
I just made sure that I was in the same directory as the JSON file and then simply ran this:
curl -s -H "Content-Type: application/json" -XPOST localhost:9200/product/default/_bulk?pretty --data-binary @product.json
So make sure you too are in the same directory as the file and run it this way.
Note: product/default/ in the command is something specific to my environment. You can omit it or replace it with whatever is relevant to you.
Adding to KenH's answer
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests
You can replace @requests with @complete_path_to_json_file
Note: the @ is important before the file path
Just get Postman from https://www.getpostman.com/docs/environments and give it the file location along with the /test/test/1/_bulk?pretty command.
You are using
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests
If 'requests' is a json file then you have to change this to
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests.json
Now before this, if your JSON file is not indexed, you have to insert an index line before each line inside the JSON file. You can do this with jq. Refer to the link below:
http://kevinmarsh.com/2014/10/23/using-jq-to-import-json-into-elasticsearch.html
Go to the Elasticsearch tutorials (for example the Shakespeare tutorial), download the sample JSON file used, and have a look at it. In front of each JSON object (each individual line) there is an index line. This is what you are looking for after using the jq command. This format is mandatory for the bulk API; plain JSON files won't work.
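For instance, a rough jq one-liner in the spirit of that link (assuming file.json holds a JSON array of documents, with placeholder index/type names) would be:
jq -c '.[] | {"index": {"_index": "myindex", "_type": "mytype"}}, .' file.json > requests.json
curl -s -H "Content-Type: application/json" -XPOST localhost:9200/_bulk --data-binary @requests.json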
As of Elasticsearch 7.7, you also have to specify the content type:
curl -s -H "Content-Type: application/json" -XPOST localhost:9200/_bulk --data-binary @<absolute path to JSON file>
I wrote some code to expose the Elasticsearch API via a filesystem API.
It is a good idea for clean export/import of data, for example.
I created a prototype, elasticdriver. It is based on FUSE.
If you are using Elasticsearch version 7.7 or above, then follow the command below.
curl -H "Content-Type: application/json" -XPOST "localhost:9200/bank/_bulk? pretty&refresh" --data-binary #"/Users/waseem.khan/waseem/elastic/account.json"
In the above command, the file path is /Users/waseem.khan/waseem/elastic/account.json.
If you are using Elasticsearch 6.x, then you can use the command below.
curl -X POST "localhost:9200/bank/_bulk?pretty&refresh" --data-binary @"/Users/waseem.khan/waseem/elastic/account.json" -H 'Content-Type: application/json'
Note: Make sure you add one empty line at the end of your .json file, otherwise you will get the exception below (a quick fix follows the error message).
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "The bulk request must be terminated by a newline [\n]"
}
],
"type" : "illegal_argument_exception",
"reason" : "The bulk request must be terminated by a newline [\n]"
},
"status" : 400
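A simple way to add that terminating newline is to append an empty line to the file, e.g.:
# append a final newline so the bulk request is properly terminated
echo "" >> /Users/waseem.khan/waseem/elastic/account.json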
If you are using VirtualBox with Ubuntu in it, or you are simply using Ubuntu, then this can be useful:
wget https://github.com/andrewvc/ee-datasets/archive/master.zip
sudo apt-get install unzip (only if unzip module is not installed)
unzip master.zip
cd ee-datasets
java -jar elastic-loader.jar http://localhost:9200 datasets/movie_db.eloader
If you want to import a json file into Elasticsearch and create an index, use this Python script.
import json
from elasticsearch import Elasticsearch

# connect to the local Elasticsearch node
es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

# load the documents from the JSON file
with open('el_dharan.json') as raw_data:
    json_docs = json.load(raw_data)

# index each document, using a running counter as the document id
i = 0
for json_doc in json_docs:
    i = i + 1
    es.index(index='ind_dharan', doc_type='doc_dharan', id=i, body=json.dumps(json_doc))