curl -XPUT on Windows: Couldn't read data from file - JSON

I am using cURL on Windows from the command prompt. When I execute this command:
curl -XPUT "localhost:8983/solr/techproducts/schema/feature-store" --data-binary "@/path/myFeatures.json" -H "Content-type:application/json"
I am getting the following error:
Warning: Couldn't read data from file "/path/myFeatures.json", this makes an
Warning: empty POST.
I have updated the file permissions, and the file is present at the specified path.
What can be the possible solutions?

If you really have a file named myFeatures.json in a folder named path inside the directory where you're running curl, just remove the leading slash from the path.
curl -XPUT "localhost:8983/solr/techproducts/schema/feature-store" --data-binary "@path/myFeatures.json" -H "Content-type:application/json"
Otherwise, specify the absolute path to your myFeatures.json.
curl -XPUT "localhost:8983/solr/techproducts/schema/feature-store" --data-binary "@C:\your\complete\path\myFeatures.json" -H "Content-type:application/json"

I had the same problem, and in my case, it turned out that this was caused by using ~ in my path.
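A minimal sketch of that fix for Windows (path and command are illustrative): cmd does not expand ~, so spell the home directory out, e.g. via %USERPROFILE%:
:: instead of --data-binary "@~/myFeatures.json"
curl -XPUT "localhost:8983/solr/techproducts/schema/feature-store" --data-binary "@%USERPROFILE%\myFeatures.json" -H "Content-type:application/json"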

The simplest solution is to change the file's extension from ".json" to ".txt".

Related

JSON format error thrown by cUrl but not Wget?

I am sending a POST request via a batch script on Windows. I manually installed wget and everything works fine, but I'd like to use cURL since it recently became a standard Windows 10 feature and can be run on other, newer computers. The problem is that cURL throws a JSON formatting error despite the payload not seeming to contain any problematic special characters.
I've tried changing quotations to apostrophes and vice versa, and using backslashes and carets as escape characters. This wget script works:
wget --quiet ^
--method POST ^
--header 'content-type: application/json' ^
--body-data '{"method":"passthrough", "params": {"deviceId": "[MyId]", "requestData": "{\"system\":{\"set_relay_state\":{\"state\":0}}}" }}' ^
--output-document ^
- 'https://eu-wap.tplinkcloud.com/?token=[MyToken]'
However the exact same JSON in cUrl is refused:
curl -X POST -H 'content-type:application/json' -d '{"method":"passthrough","params":{"deviceId":"[MyId]","requestData":"{\"system\":{\"set_relay_state\":{\"state\":0}}}"}}' https://wap.tplinkcloud.com?token=[MyToken]
Thanks for any advice.
You are using curl (C:\Windows\System32\curl.exe) on Windows 10 from the Windows command line.
If my understanding is correct, how about this modification?
Modified curl command for Windows:
curl -H "content-type:application/json" -d "{\"method\":\"passthrough\",\"params\":{\"deviceId\":\"[MyId]\",\"requestData\":\"{\\\"system\\\":{\\\"set_relay_state\\\":{\\\"state\\\":0}}}\"}}" https://eu-wap.tplinkcloud.com/?token=[MyToken]
On the Windows command line, use double quotes instead of single quotes.
Inside the JSON object, escape each double quote.
Note:
I'm not sure which URL is correct from your reply, so please use the correct URL when you test the above.
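A minimal sketch of the same quoting rule on a tiny payload (the URL here is a placeholder, not the real endpoint):
:: a Unix shell accepts:  -d '{"state":0}'
:: cmd.exe needs double quotes outside and escaped double quotes inside:
curl -H "content-type:application/json" -d "{\"state\":0}" "https://example.com/api"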
I'm also using curl on Windows and kept getting a JSON error.
I switched the single and double quotes and it worked fine:
curl -H "Content-Type:application/json" -H "Authorization:Bearer <token>" -d "{'fileid':'<fileid>','printerid':'<printerid>','type':'pdf'}" "https://printapi.ezeep.com/sfapi/Print"

Why do I get a malformed JSON in request body in this cURL call?

I have been trying to call the CloudFlare API v4, using an example provided in their own documentation.
This is the code of the example
curl -X PUT "https://api.cloudflare.com/client/v4/zones/023e105f4ecef8ad9ca31a8372d0c353/dns_records/372e67954025e0ba6aaa6d586b9e0b59" \ -H "X-Auth-Email: user#example.com" \ -H "X-Auth-Key: c2547eb745079dac9320b638f5e225cf483cc5cfdda41" \ -H "Content-Type: application/json" \ --data '{"id":"372e67954025e0ba6aaa6d586b9e0b59","type":"A","name":"example.com","content":"1.2.3.4","proxiable":true,"proxied":false,"ttl":120,"locked":false,"zone_id":"023e105f4ecef8ad9ca31a8372d0c353","zone_name":"example.com","created_on":"2014-01-01T05:20:00.12345Z","modified_on":"2014-01-01T05:20:00.12345Z","data":{}}'
Which can also be found at
Update DNS Records
Using Windows cmd.exe to run this command, I need to make it a single line first, so I removed the backslash line continuations and reformatted it (twice), making sure I altered no other part in the process.
This is the same code in one line:
curl -X PUT "https://api.cloudflare.com/client/v4/zones/023e105f4ecef8ad9ca31a8372d0c353/dns_records/372e67954025e0ba6aaa6d586b9e0b59" -H "X-Auth-Email: user#example.com" -H "X-Auth-Key: c2547eb745079dac9320b638f5e225cf483cc5cfdda41" -H "Content-Type: application/json" --data '{"id":"372e67954025e0ba6aaa6d586b9e0b59","type":"A","name":"example.com","content":"1.2.3.4","proxiable":true,"proxied":false,"ttl":120,"locked":false,"zone_id":"023e105f4ecef8ad9ca31a8372d0c353","zone_name":"example.com","created_on":"2014-01-01T05:20:00.12345Z","modified_on":"2014-01-01T05:20:00.12345Z","data":{}}'
When I run this single-liner in cmd, the request goes through but I get "malformed JSON in request body". However, a visual check, formatting in Notepad++, and a run through a JSON validator are all positive: this JSON (copied from the CloudFlare documentation) is not malformed.
Error Message
{"success":false,"errors":[{"code":6007,"message":"Malformed JSON in request body"}],"messages":[],"result":null}
Googling this error message or the error code gives me nothing, and the same command works on a PC running Linux.
Can someone tell me if this is a known bug, if the JSON really is malformed or if something else comes to mind?
I found the answer in the blog post: "Expecting to find valid JSON in request body..." curl for Windows.
For example, for "Purge everything" the --data value will be:
# On Linux
--data '{"purge_everything":true}'
# On Windows
--data "{\"purge_everything\":true}"
For Windows:
Replace the single quotes with double quotes: ' --> "
Escape the double quotes with a backslash: " --> \"
cmd.exe doesn't support single quotes; to run those commands straight from the docs you can use Bash.
Bash can be enabled on Windows 10: https://www.laptopmag.com/uk/articles/use-bash-shell-windows-10
or Git Bash, which comes with Git for Windows: https://gitforwindows.org/

Use cURL to add a JSON web page's data in Solr

I see from the UpdateJSON page how to use a command prompt to index a standalone file stored locally. Using this example I was able to successfully make a .json file accessible through Solr:
curl 'http://localhost:8983/solr/update/json?commit=true' --data-binary @books.json -H 'Content-type:application/json'
What I'm not able to find is the proper syntax to do the same for a webpage containing JSON data. I've tried with the @:
curl 'http://localhost:8983/solr/update/json?commit=true' --data-binary @[URL] -H 'Content-type:application/json'
and without:
curl 'http://localhost:8983/solr/update/json?commit=true' --data-binary [URL] -H 'Content-type:application/json'
Both ways lead to errors. How do I configure a command to prompt Solr to index the contents at [URL]?
According to the documentation (https://wiki.apache.org/solr/ContentStream), you should first ensure remote streaming is enabled (in solrconfig.xml, search for enableRemoteStreaming).
Then the command should be of this kind:
curl 'http://localhost:8983/solr/update/json?commit=true&stream.url=YOURURL' -H 'Content-type:application/json'
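For reference, a sketch of where that flag lives in solrconfig.xml (the upload limit value here is illustrative):
<requestDispatcher>
  <requestParsers enableRemoteStreaming="true" multipartUploadLimitInKB="2048" />
</requestDispatcher>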

Curl commands not executed properly when run through a shell script

I have a curl command which sends application/json data. When I type this command directly into a Unix console, it works fine. But when I store it in a CSV file and, from a shell script, read each curl command from that file and execute it through backticks, I face two problems:
1) spaces in the JSON data being posted are not allowed
2) the content type is not being set
Please find the command below:
curl -i -X PUT -H 'content-type:application/json' -H "Accept:application/json" -d '{"startTime":1426172400000,"endTime":1426173300000,"attributes":{"title":"X X X","link":"https://someurl.com"}}' http://10.10.7.90:9084/myapp/rest/app/706128.api
I found the issue. It was in my shell script. Instead of using backticks '`' to execute the curl command, I used eval curl, which solved the issue.
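As a rough sketch of that pattern (the file name and loop are illustrative, not from the original post), the script ends up doing something like this instead of wrapping each stored command in backticks:
# commands.csv holds one complete curl command per line
while IFS= read -r cmd; do
  eval "$cmd"   # eval re-parses the quoting, so the JSON body and Content-Type header survive intact
done < commands.csv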

Import/Index a JSON file into Elasticsearch

I am new to Elasticsearch and have been entering data manually up until this point. For example I've done something like this:
$ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
"message" : "trying out Elastic Search"
}'
I now have a .json file and I want to index this into Elasticsearch. I've tried something like this too, but no success:
curl -XPOST 'http://jfblouvmlxecs01:9200/test/test/1' -d lane.json
How do I import a .json file? Are there steps I need to take first to ensure the mapping is correct?
The right command if you want to use a file with curl is this:
curl -XPOST 'http://jfblouvmlxecs01:9200/test/_doc/1' -d @lane.json
Elasticsearch is schemaless, therefore you don't necessarily need a mapping. If you send the json as it is and you use the default mapping, every field will be indexed and analyzed using the standard analyzer.
If you want to interact with Elasticsearch through the command line, you may want to have a look at the elasticshell which should be a little bit handier than curl.
2019-07-10: It should be noted that custom mapping types are deprecated and should not be used. I updated the type in the URL above to make it easier to see which is the index and which is the type, as having both named "test" was confusing.
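Note that recent Elasticsearch versions (6.0 and later) also reject body requests without an explicit Content-Type header, so on those versions the command above would need to look roughly like this (host and file name are the ones from the question):
curl -XPOST 'http://jfblouvmlxecs01:9200/test/_doc/1' -H 'Content-Type: application/json' --data-binary @lane.json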
Per the current docs, https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html:
If you’re providing text file input to curl, you must use the
--data-binary flag instead of plain -d. The latter doesn’t preserve newlines.
Example:
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests
We made a little tool for this type of thing: https://github.com/taskrabbit/elasticsearch-dump
One thing I've not seen anyone mention: the JSON file must contain one line specifying the index that the next line belongs to, for every line of the "pure" JSON file.
For example:
{"index":{"_index":"shakespeare","_type":"act","_id":0}}
{"line_id":1,"play_name":"Henry IV","speech_number":"","line_number":"","speaker":"","text_entry":"ACT I"}
Without that, nothing works, and it won't tell you why.
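To make that concrete, a minimal two-document bulk body (index, type, and fields here are illustrative, loosely based on the Shakespeare tutorial data) alternates action lines and document lines and must end with a trailing newline:
{"index":{"_index":"shakespeare","_type":"act","_id":0}}
{"line_id":1,"play_name":"Henry IV","text_entry":"ACT I"}
{"index":{"_index":"shakespeare","_type":"act","_id":1}}
{"line_id":2,"play_name":"Henry IV","text_entry":"SCENE I. London. The palace."}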
I'm the author of elasticsearch_loader
I wrote ESL for this exact problem.
You can download it with pip:
pip install elasticsearch-loader
And then you will be able to load json files into elasticsearch by issuing:
elasticsearch_loader --index incidents --type incident json file1.json file2.json
I just made sure that I was in the same directory as the JSON file and then simply ran this:
curl -s -H "Content-Type: application/json" -XPOST localhost:9200/product/default/_bulk?pretty --data-binary @product.json
So make sure you too are in the same directory as your file and run it this way.
Note: product/default/ in the command is something specific to my environment. You can omit it or replace it with whatever is relevant to you.
Adding to KenH's answer
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests
You can replace @requests with @complete_path_to_json_file
Note: the @ is important before the file path.
Just get Postman from https://www.getpostman.com/docs/environments and give it the file location along with the /test/test/1/_bulk?pretty command.
You are using
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests
If 'requests' is a JSON file then you have to change this to
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests.json
Before this, if your JSON file is not indexed, you have to insert an index line before each line inside the JSON file. You can do this with jq; see the link below:
http://kevinmarsh.com/2014/10/23/using-jq-to-import-json-into-elasticsearch.html
Go to the Elasticsearch tutorials (for example the Shakespeare tutorial), download the sample JSON file used, and have a look at it. In front of each JSON object (each individual line) there is an index line. This is what you are aiming for after running the jq command. This format is mandatory for the bulk API; plain JSON files won't work.
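A sketch of the kind of jq filter the linked post describes, assuming your file is a top-level JSON array (the index name is illustrative):
jq -c '.[] | {"index":{"_index":"myindex"}}, .' file.json > requests.json
Each array element then comes out as an action line followed by the document itself, ready for the _bulk endpoint.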
As of Elasticsearch 7.7, you have to specify the content type also:
curl -s -H "Content-Type: application/json" -XPOST localhost:9200/_bulk --data-binary #<absolute path to JSON file>
I wrote some code to expose the Elasticsearch API via a Filesystem API.
It is a good idea for clean export/import of data, for example.
I created a prototype, elasticdriver. It is based on FUSE.
If you are using Elasticsearch 7.7 or above, use the command below.
curl -H "Content-Type: application/json" -XPOST "localhost:9200/bank/_bulk?pretty&refresh" --data-binary @"/Users/waseem.khan/waseem/elastic/account.json"
In the command above, the file path is /Users/waseem.khan/waseem/elastic/account.json.
If you are using Elasticsearch 6.x, you can use the command below.
curl -X POST "localhost:9200/bank/_bulk?pretty&refresh" --data-binary @"/Users/waseem.khan/waseem/elastic/account.json" -H 'Content-Type: application/json'
Note: Make sure you add one empty line at the end of your .json file, otherwise you will get the exception below.
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "The bulk request must be terminated by a newline [\n]"
}
],
"type" : "illegal_argument_exception",
"reason" : "The bulk request must be terminated by a newline [\n]"
},
`enter code here`"status" : 400
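If the file is missing that final newline, one simple way to append it from a Unix-like shell (using the same file path as above) is:
echo "" >> /Users/waseem.khan/waseem/elastic/account.json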
If you are using Ubuntu (inside VirtualBox or directly), this can be useful:
wget https://github.com/andrewvc/ee-datasets/archive/master.zip
sudo apt-get install unzip (only if unzip is not installed)
unzip master.zip
cd ee-datasets
java -jar elastic-loader.jar http://localhost:9200 datasets/movie_db.eloader
If you want to import a JSON file into Elasticsearch and create an index, use this Python script:
import json
from elasticsearch import Elasticsearch

es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

i = 0
with open('el_dharan.json') as raw_data:
    json_docs = json.load(raw_data)
    for json_doc in json_docs:
        i = i + 1
        # index each document individually, using its position in the file as the document id
        es.index(index='ind_dharan', doc_type='doc_dharan', id=i, body=json.dumps(json_doc))