cURL download files issue - json

When I give the URL (http://192.168.150.41:8080/filereport/31779/json/) in a browser, it automatically downloads the file as 31779_report.json.
Now I'm trying to download the file using curl, but I get the following error.
$ curl -O http://192.168.150.41:8080/filereport/31779/json/
curl: Remote file name has no length!
curl: try 'curl --help' or 'curl --manual' for more information
When using the '-L' switch, I get the JSON content displayed, but the file is not saved.
$ curl -L http://192.168.150.41:8080/filereport/31779/json/
{
.....
.....
}
How can I download the file as "31779_report.json" using cURL / wget?
I don't want to redirect (>) the contents to a file (31779_report.json) manually.
Any suggestions?

The -O flag of curl tries to use the remote name of the file, but because your URL does not end with a filename, it cannot do this. The -o flag (lower-case o) can be used to specify a file name manually, without redirecting STDOUT, like so:
curl <address> -o filename.json
You can manually construct the filename format you want using awk. For example:
URL=http://192.168.150.41:8080/filereport/31779/json/
file_number=$(echo "$URL" | awk -F/ '{print $(NF-2)}')
file_name="${file_number}_report.json"
curl -L "$URL" -o "$file_name"
Hope this is more helpful.
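If you'd rather avoid the awk dependency, the same filename can be derived with plain Bash parameter expansion. A sketch, assuming the URL always ends in /<number>/json/ as in the question:

```shell
URL=http://192.168.150.41:8080/filereport/31779/json/
trimmed=${URL%/json/}                    # strip the trailing /json/
file_name="${trimmed##*/}_report.json"   # keep only the last path segment
echo "$file_name"                        # 31779_report.json
# curl -L "$URL" -o "$file_name"
```

The commented curl line then fetches the content into the derived name, exactly as in the awk version above.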

wget --content-disposition did the trick for me (https://askubuntu.com/a/77713/18665)
$ wget --content-disposition https://www.archlinux.org/packages/core/x86_64/lib32-glibc/download/
...
Saving to: 'lib32-glibc-2.33-4-x86_64.pkg.tar.zst'
Compare to curl:
$ curl -LO https://www.archlinux.org/packages/core/x86_64/lib32-glibc/download/
curl: Remote file name has no length!
curl: (23) Failed writing received data to disk/application
And wget without --content-disposition:
$ wget https://www.archlinux.org/packages/core/x86_64/lib32-glibc/download/
...
Saving to: 'index.html'
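For completeness, curl does have a rough equivalent of wget's --content-disposition: the -J (--remote-header-name) flag, which must be combined with -O. A sketch against the same URL:

```shell
# -J tells curl to honor the filename from the server's
# Content-Disposition header; -O is still required, and -L
# follows redirects as before.
curl -OJL https://www.archlinux.org/packages/core/x86_64/lib32-glibc/download/
```

Note that very old curl versions lack -J, and recent versions may refuse to overwrite an existing file when saving with -J.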

Related

How to retrieve JSON files from URLs inside a text file

I have to check the JSON data from a URL using curl:
curl -H "User-agent: 'your bot 0.1'" url.json | jq
This code is working.
I wanted to try this for a .txt file containing 200 URLs,
like these:
https://www.reddit.com/user/wanderer_007_.json
https://www.reddit.com/....
https://www.reddit.com/....
https://www.reddit.com/....
https://www.reddit.com/....
These are just examples, but whenever I give the text file as input:
#!/usr/bin/bash
while read -r line; do
name="$line"
curl -H "User-agent: 'your bot 0.1'" $name | jq
done < test001.txt
curl: (3) URL using bad/illegal format or missing URL
curl: (3) URL using bad/illegal format or missing URL
curl: (3) URL using bad/illegal format or missing URL
curl: (3) URL using bad/illegal format or missing URL
curl: (3) URL using bad/illegal format or missing URL
But if I try to use the URLs individually, they work as intended.
Try wrapping the URL in the curl command in double quotes:
curl -H "User-agent: 'your bot 0.1'" "$name" | jq
The error may be due to Bash interpreting some weird characters in the URLs specially. For example, a space in the URL would cause $name to split into two command-line arguments, making curl unable to parse its input.
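Another common cause of this exact error, when each URL works on its own, is an invisible carriage return at the end of every line, i.e. a URL list saved with Windows (CRLF) line endings. A defensive version of the loop, as a sketch (the file name and User-agent string are taken from the question):

```shell
fetch_urls() {
    # read one URL per line, tolerating Windows (CRLF) line endings
    while IFS= read -r line; do
        line=${line%$'\r'}            # drop a trailing carriage return
        [ -n "$line" ] || continue    # skip blank lines
        curl -H "User-agent: 'your bot 0.1'" "$line" | jq
    done < "$1"
}
# fetch_urls test001.txt
```

Running `file test001.txt` or `cat -A test001.txt` is a quick way to confirm whether the list actually has CRLF line terminators.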

curl -XPUT on Windows: Couldn't read data from file

I am using cURL on Windows using the command prompt. When I execute this command:
curl -XPUT "localhost:8983/solr/techproducts/schema/feature-store" --data-binary "@/path/myFeatures.json" -H "Content-type:application/json"
I get the following error:
Warning: Couldn't read data from file "/path/myFeatures.json", this makes an
Warning: empty POST.
I have updated the file permissions and the file is also present at the specified path.
What are the possible solutions?
If you really have a file named myFeatures.json in a folder named path inside the current folder from which you're running curl, just remove the leading slash from the path:
curl -XPUT "localhost:8983/solr/techproducts/schema/feature-store" --data-binary "@path/myFeatures.json" -H "Content-type:application/json"
Otherwise, try specifying the absolute path to your myFeatures.json:
curl -XPUT "localhost:8983/solr/techproducts/schema/feature-store" --data-binary "@C:\your\complete\path\myFeatures.json" -H "Content-type:application/json"
I had the same problem, and in my case, it turned out that this was caused by using ~ in my path.
The simplest solution is to change the extension of a file from ".json" to ".txt".

Bash to get a JSON doc from GitHub

I am trying to use curl to basically clone a package.json into my local directory, using:
curl -U "<email>":"<pass>" -L "https://github.com/flowrepo/blob/master/package.json"
I managed to get past the redirect by using the -L flag, but I cannot get a valid JSON doc, as the command above returns the entire GitHub page. Any thoughts?
Access the "raw" variant by changing the URL to:
https://github.com/flowrepo/raw/master/package.json
[Solved]
Thanks Hans Z. for your answer. This is the final call that works:
curl -U "<email>":"<pass>" -L "https://github.com/flowrepo/raw/master/package.json?token=<retrieved_token>" -o output.txt
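For reference, a github.com "blob" URL can be rewritten to its raw form mechanically; a sketch using Bash string substitution (the repo path is taken from the question as-is):

```shell
url="https://github.com/flowrepo/blob/master/package.json"
raw=${url/blob/raw}    # swap the first "blob" path segment for "raw"
echo "$raw"            # https://github.com/flowrepo/raw/master/package.json
# curl -u "<email>:<pass>" -L "$raw" -o package.json
```

Note that curl's -u flag sends server credentials, while -U (as used above) sets proxy credentials; the authentication here presumably succeeded via the token in the URL rather than -U.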

Curl commands not executed properly when executed through a shell script

I have a curl command which sends application/json data. When I type the command directly into a Unix console, it works fine. But when I store it in a CSV file and, through a shell script, read each curl command and execute it with backticks, I face two problems:
1) it does not allow spaces in the JSON data being posted
2) the content type is not being set
Please find the command below:
curl -i -X PUT -H 'content-type:application/json' -H "Accept:application/json" -d '{"startTime":1426172400000,"endTime":1426173300000,"attributes":{"title":"X X X","link":"https://someurl.com"}}' http://10.10.7.90:9084/myapp/rest/app/706128.api
I found the issue. It was in my shell script. Instead of using backticks (`) to execute the curl command, I used eval curl, which solved the issue.
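If you store complete curl command lines in a file, an alternative to eval is to hand each line to a fresh shell; a sketch (the file name is hypothetical):

```shell
run_commands() {
    # execute each non-empty line of the given file as its own command
    while IFS= read -r cmd; do
        [ -n "$cmd" ] || continue
        bash -c "$cmd"
    done < "$1"
}
# run_commands commands.csv
```

Quoting inside the stored commands is preserved because bash -c re-parses the whole line, which is what eval was doing here as well; plain backtick substitution instead word-splits the line without honoring its quotes.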

Import/Index a JSON file into Elasticsearch

I am new to Elasticsearch and have been entering data manually up until this point. For example I've done something like this:
$ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
"message" : "trying out Elastic Search"
}'
I now have a .json file and I want to index it into Elasticsearch. I've tried something like this too, but with no success:
curl -XPOST 'http://jfblouvmlxecs01:9200/test/test/1' -d lane.json
How do I import a .json file? Are there steps I need to take first to ensure the mapping is correct?
The right command, if you want to use a file with curl, is this:
curl -XPOST 'http://jfblouvmlxecs01:9200/test/_doc/1' -d @lane.json
Elasticsearch is schemaless, therefore you don't necessarily need a mapping. If you send the json as it is and you use the default mapping, every field will be indexed and analyzed using the standard analyzer.
If you want to interact with Elasticsearch through the command line, you may want to have a look at the elasticshell which should be a little bit handier than curl.
2019-07-10: It should be noted that custom mapping types are deprecated and should not be used. I updated the type in the URL above to make it easier to see which is the index and which is the type, as having both named "test" was confusing.
Per the current docs, https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html:
If you’re providing text file input to curl, you must use the
--data-binary flag instead of plain -d. The latter doesn’t preserve newlines.
Example:
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests
We made a little tool for this type of thing https://github.com/taskrabbit/elasticsearch-dump
One thing I've not seen anyone mention: the JSON file must have one line specifying the index the next line belongs to, for every line of the "pure" JSON file.
I.e.:
{"index":{"_index":"shakespeare","_type":"act","_id":0}}
{"line_id":1,"play_name":"Henry IV","speech_number":"","line_number":"","speaker":"","text_entry":"ACT I"}
Without that, nothing works, and it won't tell you why.
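Generating those action lines can be scripted; a sketch that interleaves an action line before each document of a newline-delimited JSON stream (the index and type names follow the Shakespeare example above and are illustrative):

```shell
add_bulk_actions() {
    # read one JSON document per line on stdin; emit the bulk action
    # line, then the document itself, numbering _id from 0
    i=0
    while IFS= read -r doc; do
        printf '{"index":{"_index":"shakespeare","_type":"act","_id":%d}}\n' "$i"
        printf '%s\n' "$doc"
        i=$((i + 1))
    done
}
# add_bulk_actions < plain.json > bulk_body.json
```

The resulting file alternates action and source lines, which is the shape the _bulk endpoint expects.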
I'm the author of elasticsearch_loader
I wrote ESL for this exact problem.
You can download it with pip:
pip install elasticsearch-loader
And then you will be able to load json files into elasticsearch by issuing:
elasticsearch_loader --index incidents --type incident json file1.json file2.json
I just made sure that I was in the same directory as the JSON file and then simply ran this:
curl -s -H "Content-Type: application/json" -XPOST localhost:9200/product/default/_bulk?pretty --data-binary @product.json
So make sure you are in the same directory and run it this way.
Note: product/default/ in the command is specific to my environment. You can omit it or replace it with whatever is relevant to you.
Adding to KenH's answer
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests
You can replace @requests with @complete_path_to_json_file
Note: the @ is important before the file path
Just get Postman from https://www.getpostman.com/docs/environments and give it the file location with the /test/test/1/_bulk?pretty command.
You are using
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests
If 'requests' is a JSON file, then you have to change this to
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests.json
Now, before this, if your JSON file is not indexed, you have to insert an index line before each line inside the JSON file. You can do this with jq. See the link below:
http://kevinmarsh.com/2014/10/23/using-jq-to-import-json-into-elasticsearch.html
Go to the Elasticsearch tutorials (for example, the Shakespeare tutorial), download the sample JSON file used, and have a look at it. In front of each JSON object (each individual line) there is an index line. This is what you are looking for after using the jq command. This format is mandatory for the bulk API; plain JSON files won't work.
As of Elasticsearch 7.7, you have to specify the content type also:
curl -s -H "Content-Type: application/json" -XPOST localhost:9200/_bulk --data-binary @<absolute path to JSON file>
I wrote some code to expose the Elasticsearch API via a filesystem API.
It is useful for clean export/import of data, for example.
I created a prototype, elasticdriver. It is based on FUSE.
If you are using Elasticsearch 7.7 or above, then use the command below:
curl -H "Content-Type: application/json" -XPOST "localhost:9200/bank/_bulk?pretty&refresh" --data-binary @"/Users/waseem.khan/waseem/elastic/account.json"
Here the file path is /Users/waseem.khan/waseem/elastic/account.json.
If you are using Elasticsearch 6.x, you can use the command below:
curl -X POST "localhost:9200/bank/_bulk?pretty&refresh" --data-binary @"/Users/waseem.khan/waseem/elastic/account.json" -H 'Content-Type: application/json'
Note: make sure your .json file ends with an empty line, otherwise you will get the exception below:
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "The bulk request must be terminated by a newline [\n]"
}
],
"type" : "illegal_argument_exception",
"reason" : "The bulk request must be terminated by a newline [\n]"
},
"status" : 400
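The terminating newline can be added defensively before sending the bulk request; a minimal sketch (idempotent, so running it twice is safe):

```shell
ensure_trailing_newline() {
    # append a newline only if the file is non-empty and its last
    # byte is not already a newline: $(...) strips a trailing \n,
    # so a newline-terminated file yields an empty string here
    if [ -s "$1" ] && [ -n "$(tail -c 1 "$1")" ]; then
        printf '\n' >> "$1"
    fi
}
# ensure_trailing_newline account.json
```

Run it on the bulk file just before the curl command to avoid the "must be terminated by a newline" error.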
If you are using VirtualBox with Ubuntu in it, or simply Ubuntu, then this can be useful:
wget https://github.com/andrewvc/ee-datasets/archive/master.zip
sudo apt-get install unzip (only if unzip is not installed)
unzip master.zip
cd ee-datasets
java -jar elastic-loader.jar http://localhost:9200 datasets/movie_db.eloader
If you want to import a JSON file into Elasticsearch and create an index, use this Python script:
import json
from elasticsearch import Elasticsearch

es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

i = 0
with open('el_dharan.json') as raw_data:
    json_docs = json.load(raw_data)

for json_doc in json_docs:
    i = i + 1
    es.index(index='ind_dharan', doc_type='doc_dharan', id=i, body=json.dumps(json_doc))