How to GET JSON query results from the Windows command terminal

I have a JSON query that I am trying to pass through the command line.
I have tried replacing the outer single quotes with double quotes, but it still shows me an error:
curl -XGET "http://localhost:9200/honda/_search?pretty" -H 'Content-Type:
application/json' -d “#{“query”:{"match":{"color":"silver"}}}”
Expected: documents that match color "silver".
Actual error:
Warning: Couldn't read data from file "", this makes an empty POST
Then it displays all my documents:
{
  "_index" : "honda",
  "_type" : "_doc",
  "_id" : "234",
  "_score" : 1.0,
  "_source" : {
    "model" : "Accord EX",
    "price" : 28000,
    "color" : "red",
    "num_doors" : 4,
    "weight" : "9000lbs"
  }
}
...
curl: (6) Could not resolve host: application
{"query":{"match":{"color":"silver"}}}"
The filename, directory name, or volume label syntax is incorrect.

Try escaping the quote characters with backslashes.
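For example, in cmd.exe every double quote inside the body has to be backslash-escaped. This is a sketch, assuming the same index and field as above; the python check is just a local sanity test I added, not part of the original command:

```shell
# JSON body with every inner double quote backslash-escaped,
# as cmd.exe requires inside a double-quoted argument:
payload="{\"query\":{\"match\":{\"color\":\"silver\"}}}"

# Sanity-check locally that the escaping produced valid JSON:
echo "$payload" | python3 -m json.tool

# The full command (needs Elasticsearch listening on localhost:9200):
# curl -XGET "http://localhost:9200/honda/_search?pretty" ^
#      -H "Content-Type: application/json" -d "{\"query\":{\"match\":{\"color\":\"silver\"}}}"
```

If the escaped string fails to parse locally, it will fail on the server too, so this check catches quoting mistakes before curl ever runs.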

Related

Post request in JSON using cURL fails

I'm struggling to generate a proper JSON POST request using cURL. For that purpose I'm writing a short shell script, but there seems to be a problem with my JSON string (according to the error message listed below).
If I write the CSR directly into the JSON string, it will work just fine:
authToken="Here'sMyAuthToken"
curl --data-binary '{"authToken" : "'$authToken'", "order" : {"type" : "DomainValidatedCertificateOrder", "csr" : "-----BEGIN CERTIFICATE REQUEST-----
certificaterequestkey
-----END CERTIFICATE REQUEST-----", "adminContact" : {"title" : "mytitle", "firstName" : "myfirstname", "lastName" : "mylastname", "phoneNumber" : "X0000000", "emailAddress" : "test@example.com"}, "techContact" : {"title" : "mytitle", "firstName" : "myfirstname", "lastName" : "mylastname", "phoneNumber" : "000000000", "emailAddress" : "test@example.com"}, "productCode" : "ssl-geotrust-rapidssl-12m", "validationType" : "validateViaDns", "approverEmailAddress" : "postmaster@example.com", "autoRenew" : false}}' -i -X POST https://partner.http.net/api/ssl/v1/json/orderCreate
However, if I pass the CSR over by reading the csr file directly like this
authToken="Here'sMyAuthToken"
csr=$(<csr.csr)
curl --data-binary '{"authToken" : "'$authToken'", "order" : {"type" : "DomainValidatedCertificateOrder", "csr" : "'$csr'", "adminContact" : {"title" : "mytitle", "firstName" : "myfirstname", "lastName" : "mylastname", "phoneNumber" : "X0000000", "emailAddress" : "test@example.com"}, "techContact" : {"title" : "mytitle", "firstName" : "myfirstname", "lastName" : "mylastname", "phoneNumber" : "000000000", "emailAddress" : "test@example.com"}, "productCode" : "ssl-geotrust-rapidssl-12m", "validationType" : "validateViaDns", "approverEmailAddress" : "postmaster@example.com", "autoRenew" : false}}' -i -X POST https://partner.http.net/api/ssl/v1/json/orderCreate
it will give me the following error.
curl: option -----END: is unknown
curl: try 'curl --help' or 'curl --manual' for more information
I've already found a case where someone had exactly the same problem as mine:
POST request containing CSR fails in Bash
The user solved this problem using the jq package. Unfortunately, I can't install that package on the machine the script is supposed to run on, since I'm not allowed to install any packages at all.
Could someone give me advice on how to solve this problem?
Many thanks in advance!
I have been having a very hard time posting; I kept getting errors when saving.
This is the request and response with a Content-Type: application/json in the HTTP request header:
Content-Length: 540
Content-Type: application/json
Accept: application/json
Accept-Encoding: deflate, gzip, br
Host: eatled.com
BODY={"authToken": "$authToken","order": {"type": "DomainValidatedCertificateOrder","csr": "$csr","adminContact": {"title": "mytitle","firstName": "myfirstname","lastName": "mylastname","phoneNumber": "X0000000","emailAddress": "test@example.com"},"techContact": {"title": "mytitle","firstName": "myfirstname","lastName": "mylastname","phoneNumber": "000000000","emailAddress": "test@example.com"},"productCode": "ssl-geotrust-rapidssl-12m","validationType": "validateViaDns","approverEmailAddress": "postmaster@example.com","autoRenew": false}}
I removed the Content-Type header, and this is the request and response.
Notice how the header changed to Content-Type: application/x-www-form-urlencoded and how the JSON ended up in the KEY of the $_POST data as received by the server. When the server sees the form-data header, it may try to process the request as if it were a form; it depends on how well the API was written.
Content-Length: 540
Content-Type: application/x-www-form-urlencoded
Accept: application/json
Accept-Encoding: deflate, gzip, br
Host: eatled.com
$_POST
array (
'{"authToken":_"$authToken","order":_{"type":_"DomainValidatedCertificateOrder","csr":_"$csr","adminContact":_{"title":_"mytitle","firstName":_"myfirstname","lastName":_"mylastname","phoneNumber":_"X0000000","emailAddress":_"test@example_com"},"techContact":_{"title":_"mytitle","firstName":_"myfirstname","lastName":_"mylastname","phoneNumber":_"000000000","emailAddress":_"test@example_com"},"productCode":_"ssl-geotrust-rapidssl-12m","validationType":_"validateViaDns","approverEmailAddress":_"postmaster@example_com","autoRenew":_false}}' => '',
)
This is your curl code with all the escape codes, and with the options reorganized, because the order you had them in generated an error. This is tested and works.
curl -i -H "Content-Type: application/json" -X POST http://eatled.com/receiveheader.php --data-binary "{\"authToken\": \"$authToken\",\"order\": {\"type\": \"DomainValidatedCertificateOrder\",\"csr\": \"$csr\",\"adminContact\": {\"title\": \"mytitle\",\"firstName\": \"myfirstname\",\"lastName\": \"mylastname\",\"phoneNumber\": \"X0000000\",\"emailAddress\": \"test@example.com\"},\"techContact\": {\"title\": \"mytitle\",\"firstName\": \"myfirstname\",\"lastName\": \"mylastname\",\"phoneNumber\": \"000000000\",\"emailAddress\": \"test@example.com\"},\"productCode\": \"ssl-geotrust-rapidssl-12m\",\"validationType\": \"validateViaDns\",\"approverEmailAddress\": \"postmaster@example.com\",\"autoRenew\": false}}"
You are misquoting things in your command line. You're making frequent use of this sort of structure:
'{"somekey": "'$variable'"}'
That means that you're not quoting $variable, so if it contains whitespace you're going to end up with a command other than what you expect. You need to quote all your variables, so the above becomes:
'{"somekey": "'"$variable"'"}'
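A quick way to see the difference, using a hypothetical variable whose value contains spaces:

```shell
variable='multi word value'

# Unquoted: the shell word-splits on the spaces inside $variable,
# so curl would receive three separate arguments here.
set -- '{"somekey": "'$variable'"}'
echo "$# arguments"            # prints: 3 arguments

# Quoted: a single argument containing valid JSON.
set -- '{"somekey": "'"$variable"'"}'
echo "$# arguments"            # prints: 1 arguments
echo "$1"
```

The unquoted form is exactly what turns the `-----END CERTIFICATE REQUEST-----` line of the CSR into a stray command-line option.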
And your full command line is:
curl --data-binary '
{
  "authToken" : "'"$authToken"'",
  "order" : {
    "type" : "DomainValidatedCertificateOrder",
    "csr" : "'"$csr"'",
    "adminContact" : {
      "title" : "mytitle",
      "firstName" : "myfirstname",
      "lastName" : "mylastname",
      "phoneNumber" : "X0000000",
      "emailAddress" : "test@example.com"
    },
    "techContact" : {
      "title" : "mytitle",
      "firstName" : "myfirstname",
      "lastName" : "mylastname",
      "phoneNumber" : "000000000",
      "emailAddress" : "test@example.com"
    },
    "productCode" : "ssl-geotrust-rapidssl-12m",
    "validationType" : "validateViaDns",
    "approverEmailAddress" : "postmaster@example.com",
    "autoRenew" : false
  }
}
' -i -X POST https://partner.http.net/api/ssl/v1/json/orderCreate
You could simplify things by using a here document instead of trying to embed everything on the command line. That would look like:
curl -i -X POST --data-binary @- https://partner.http.net/api/ssl/v1/json/orderCreate <<EOF
{
  "authToken": "$authToken",
  "order": {
    "type": "DomainValidatedCertificateOrder",
    "csr": "$csr",
    "adminContact": {
      "title": "mytitle",
      "firstName": "myfirstname",
      "lastName": "mylastname",
      "phoneNumber": "X0000000",
      "emailAddress": "test@example.com"
    },
    "techContact": {
      "title": "mytitle",
      "firstName": "myfirstname",
      "lastName": "mylastname",
      "phoneNumber": "000000000",
      "emailAddress": "test@example.com"
    },
    "productCode": "ssl-geotrust-rapidssl-12m",
    "validationType": "validateViaDns",
    "approverEmailAddress": "postmaster@example.com",
    "autoRenew": false
  }
}
EOF
Now you don't need all those quoting tricks.
Here's how I've tested the above solution:
#!/bin/bash
# Use some sort of http debugging service to verify the content
# of the request.
url="https://eny65dku43a4g.x.pipedream.net"
# Create an example CSR
openssl req -new -nodes \
  -keyout req.key \
  -out req.csr \
  -subj '/O=Example$Organization+Inc,CN=example.com'
csr=$(<req.csr)
authToken='example+password$here'
curl -i -X POST "$url" --data-binary @- <<EOF
{
  "authToken": "$authToken",
  "order": {
    "type": "DomainValidatedCertificateOrder",
    "csr": "$csr",
    "adminContact": {
      "title": "mytitle",
      "firstName": "myfirstname",
      "lastName": "mylastname",
      "phoneNumber": "X0000000",
      "emailAddress": "test@example.com"
    },
    "techContact": {
      "title": "mytitle",
      "firstName": "myfirstname",
      "lastName": "mylastname",
      "phoneNumber": "000000000",
      "emailAddress": "test@example.com"
    },
    "productCode": "ssl-geotrust-rapidssl-12m",
    "validationType": "validateViaDns",
    "approverEmailAddress": "postmaster@example.com",
    "autoRenew": false
  }
}
EOF

curl for ElasticSearch7.16: why there must be a blank character in request body?

I use curl 7.81.0 (x86_64-pc-win32) in PowerShell 5.1.19041.610 to access Elasticsearch 7.16, all running on my Win10 20H2.
This command line is successful:
PS D:> ./curl -X PUT "localhost:9200/customer/_doc/1?pretty" -H "Content-Type:application/json" --data " {`"name`":`"John Doe`"}"
And get output:
{
  "_index" : "customer",
  "_type" : "_doc",
  "_id" : "1",
  "_version" : 17,
  "result" : "updated",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 35,
  "_primary_term" : 13
}
Notice there is a blank (space) character just after the first double-quotation mark in --data " {.
But if I delete that blank character, leaving just --data "{, the command gets an error result:
PS D:> ./curl -X PUT "localhost:9200/customer/_doc/1?pretty" -H "Content-Type:application/json" --data "{`"name`":`"John Doe`"}"
And get output:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "mapper_parsing_exception",
        "reason" : "failed to parse field [name] of type [text] in document with id '1'. Preview of field's value: ''"
      }
    ],
    "type" : "mapper_parsing_exception",
    "reason" : "failed to parse field [name] of type [text] in document with id '1'. Preview of field's value: ''",
    "caused_by" : {
      "type" : "json_e_o_f_exception",
      "reason" : "Unexpected end-of-input in VALUE_STRING\n at [Source: (ByteArrayInputStream); line: 1, column: 14]"
    }
  },
  "status" : 400
}
curl: (3) unmatched close brace/bracket in URL position 5:
Doe"}
^
Why is this blank character so important?
I have tried many times to get curl commands to run in PowerShell!
This issue comes from using PowerShell, not from curl or Elasticsearch. Curly braces { and } are special characters in PowerShell, which denote a script block or variable usage, and since they are also used for JSON, you have an issue sending JSON through PowerShell.
I do not know the exact syntax, but in this case PowerShell seems to be matching an opening brace to a closing brace, and adding a space before the brace appears to escape it.
https://www.tutorialspoint.com/powershell/powershell_brackets.htm
I recommend using another terminal, or a tool like Postman, to send these requests instead of curl with PowerShell.
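Another robust workaround, regardless of shell, is to put the JSON body in a file and let curl read it with the @file syntax, so the shell never sees the braces or quotes. A sketch, where the file name and the local validation step are mine:

```shell
# Write the body to a file once; no shell-specific escaping needed.
cat > body.json <<'EOF'
{"name": "John Doe"}
EOF

# Validate it locally before sending:
python3 -m json.tool body.json

# Then let curl read the file; the @file syntax works the same
# in cmd.exe, bash, and PowerShell (requires Elasticsearch running):
# curl -X PUT "localhost:9200/customer/_doc/1?pretty" \
#      -H "Content-Type: application/json" --data "@body.json"
```

Since the body never passes through the shell's command-line parser, the space-before-brace trick becomes unnecessary.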

curl: (3) [globbing] unmatched close brace/bracket

I tried to use curl to POST a JSON object to an Elasticsearch server but keep getting a globbing error.
This is my curl command:
curl -X POST "localhost:9200/school/_doc/10?pretty" -H "Content-type:application/json" -d "{"firstName":"Bilbo","lastName":"Baggins"}"
And the error I get from the server :
{
  "error" : {
    "root_cause" : [
      {
        "type" : "mapper_parsing_exception",
        "reason" : "failed to parse"
      }
    ],
    "type" : "mapper_parsing_exception",
    "reason" : "failed to parse",
    "caused_by" : {
      "type" : "json_e_o_f_exception",
      "reason" : "Unexpected end-of-input: expected close marker for Object (start marker at [Source: (byte[])\"{\"; line: 1, column: 1])\n at [Source: (byte[])\"{\"; line: 1, column: 2]"
    }
  },
  "status" : 400
}
curl: (3) [globbing] unmatched close brace/bracket in column 33
You have a syntax issue in your curl command; the proper form of your request is:
curl -v -XPOST -H "Content-type: application/json" -d '{"firstName":"Bilbo","lastName":"Baggins"}' 'localhost:9200/school/_doc/10?pretty'
The best way to use Elasticsearch is through its REST API with a REST client like Postman, but if you still want to use curl, you can use an online curl builder to avoid syntax issues.

How to index couchdb from elasticsearch server with the help of elasticsearch river plugin and hence get JSON data

I am working on a graphical representation of data. The graph accepts JSON data, hence I need to fetch the required data from CouchDB. I am using an Elasticsearch server to index CouchDB and thereby retrieve the required data.
I am using the Elasticsearch river plugin to connect CouchDB and the Elasticsearch server.
I have created the CouchDB database 'testdb' and created some test documents in it.
Then I set up Elasticsearch with the database.
On testing this with a curl GET command using the default search criteria, we should get 'total hits' greater than 0, and 'hits' should contain some response values for the searched criteria.
But we are getting 'total hits' as 0 and 'hits' : [] (i.e. empty).
The procedure I followed:
1) Downloaded and installed couchdb latest version
2) Verified CouchDB is running
curl localhost:5984
I got response that starts with:
{"couchdb":"Welcome"...
3) Downloaded ElasticSearch and installed service
service.bat install
curl http://127.0.0.1:9200
I got response as
{ "ok" : true, "status" : 200,.....
4) Installed the CouchDB River Plugin for ElasticSearch 1.4.2
plugin -install elasticsearch/elasticsearch-river-couchdb/2.4.1
5) To Create the CouchDB Database and ElasticSearch Index
curl -X PUT "http://127.0.0.1:5984/testdb"
6) To Create some test documents:
curl -X PUT "http://127.0.0.1:5984/testdb/1" -d "{\"name\":\"My Name 1\"}"
curl -X PUT "http://127.0.0.1:5984/testdb/2" -d "{\"name\":\"My Name 2\"}"
curl -X PUT "http://127.0.0.1:5984/testdb/3" -d "{\"name\":\"My Name 3\"}"
curl -X PUT "http://127.0.0.1:5984/testdb/4" -d "{\"name\":\"My Name 4\"}"
7) To Setup ElasticSearch with the Database
curl -X PUT "127.0.0.1:9200/_river/testdb/_meta" -d "{ \"type\" : \"couchdb\", \"couchdb\" : { \"host\" : \"localhost\", \"port\" : 5984, \"db\" : \"testdb\", \"filter\" : null }, \"index\" : { \"index\" : \"testdb\", \"type\" : \"testdb\", \"bulk_size\" : \"100\", \"bulk_timeout\" : \"10ms\" } }"
8) To test it
curl "http://127.0.0.1:9200/testdb/testdb/_search?pretty=true"
On testing, we should get this:
{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 4,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "testdb",
      "_type" : "testdb",
      "_id" : "4",
      "_score" : 1.0, "_source" : {"_rev":"1-7e9376fc8bfa6b8c8788b0f408154584","_id":"4","name":"My Name 4"}
    }, {
      "_index" : "testdb",
      "_type" : "testdb",
      "_id" : "1",
      "_score" : 1.0, "_source" : {"_rev":"1-87386bd54c821354a93cf62add449d31","_id":"1","name":"My Name"}
    }, {
      "_index" : "testdb",
      "_type" : "testdb",
      "_id" : "2",
      "_score" : 1.0, "_source" : {"_rev":"1-194582c1e02d84ae36e59f568a459633","_id":"2","name":"My Name 2"}
    }, {
      "_index" : "testdb",
      "_type" : "testdb",
      "_id" : "3",
      "_score" : 1.0, "_source" : {"_rev":"1-62a53c50e7df02ec22973fc802fb9fc0","_id":"3","name":"My Name 3"}
    } ]
  }
}
But I got something like this:
{
  "error" : "IndexMissingException[[testdb] missing]",
  "status" : 404
}
This curl string doesn't need the extra testdb path segment. This:
curl "http://127.0.0.1:9200/testdb/testdb/_search?pretty=true"
should be this:
curl 'http://localhost:9200/testdb/_search?pretty=true'
You can view all your indices by running the following, and ensure your search targets one of them:
curl -X GET 'localhost:9200/_cat/indices'

Elasticsearch queries on "empty index"

In my application I use several Elasticsearch indices, which contain no indexed documents in their initial state. I consider that what can be called "empty" :)
The documents' mapping is correct and working.
The application also has a relational database containing entities that MIGHT have documents associated with them in Elasticsearch.
In the initial state of the application it is very common that there are only entities without documents, so not a single document has been indexed, hence the "empty index". The index has been created nevertheless, and the document mapping has also been put to the index and is present in the index metadata.
Anyway, when I query Elasticsearch with a SearchQuery to find a document for one of the entities (the document contains a unique id from the entity), Elasticsearch throws an ElasticsearchException complaining that no mapping is present for field xy, etc.
BUT IF I insert one single blank document into the index first, the query won't fail.
Is there a way to "initialize" an index so as to prevent the query from failing, and to get rid of the silly "dummy document" workaround?
UPDATE:
Plus, the workaround with the dummy doc pollutes the index; for example, a count query now always returns +1... so I added a deletion to the workaround as well.
Your question lacks details and is not clear. If you had provided a gist of your index schema and query, that would have helped. You should also have provided the version of Elasticsearch you are using.
The "no mapping" exception you have mentioned has nothing to do with initializing the index with some data. Most likely you are sorting on a field which doesn't exist. This is common if you are querying multiple indexes at once.
Solution: the solution depends on the version of Elasticsearch. If you are on 1.3.x or lower, you should use ignore_unmapped. If you are on a version greater than 1.3.5, you should use unmapped_type.
Click here to read the official documentation.
If you find the documentation confusing, this example will make it clear:
Lets create two indexes testindex1 and testindex2
curl -XPUT localhost:9200/testindex1 -d '{"mappings":{"type1":{"properties":{"firstname":{"type":"string"},"servers":{"type":"nested","properties":{"name":{"type":"string"},"location":{"type":"nested","properties":{"name":{"type":"string"}}}}}}}}}'
curl -XPUT localhost:9200/testindex2 -d '{"mappings":{"type1":{"properties":{"firstname":{"type":"string"},"computers":{"type":"nested","properties":{"name":{"type":"string"},"location":{"type":"nested","properties":{"name":{"type":"string"}}}}}}}}}'
The only difference between these two indexes is that testindex1 has a "servers" field and testindex2 has a "computers" field.
Now let's insert test data into both indexes.
Index test data on testindex1:
curl -XPUT localhost:9200/testindex1/type1/1 -d '{"firstname":"servertom","servers":[{"name":"server1","location":[{"name":"location1"},{"name":"location2"}]},{"name":"server2","location":[{"name":"location1"}]}]}'
curl -XPUT localhost:9200/testindex1/type1/2 -d '{"firstname":"serverjerry","servers":[{"name":"server2","location":[{"name":"location5"}]}]}'
Index test data on testindex2:
curl -XPUT localhost:9200/testindex2/type1/1 -d '{"firstname":"computertom","computers":[{"name":"computer1","location":[{"name":"location1"},{"name":"location2"}]},{"name":"computer2","location":[{"name":"location1"}]}]}'
curl -XPUT localhost:9200/testindex2/type1/2 -d '{"firstname":"computerjerry","computers":[{"name":"computer2","location":[{"name":"location5"}]}]}'
Query examples:
Using "unmapped_type" for elasticsearch version > 1.3.x
curl -XPOST 'localhost:9200/testindex2/_search?pretty' -d '{"fields":["firstname"],"query":{"match_all":{}},"sort":[{"servers.location.name":{"order":"desc","unmapped_type":"string"}}]}'
Using "ignore_unmapped" for elasticsearch version <= 1.3.5
curl -XPOST 'localhost:9200/testindex2/_search?pretty' -d '{"fields":["firstname"],"query":{"match_all":{}},"sort":[{"servers.location.name":{"order":"desc","ignore_unmapped":"true"}}]}'
Output of query1:
{
  "took" : 15,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : null,
    "hits" : [ {
      "_index" : "testindex2",
      "_type" : "type1",
      "_id" : "1",
      "_score" : null,
      "fields" : {
        "firstname" : [ "computertom" ]
      },
      "sort" : [ null ]
    }, {
      "_index" : "testindex2",
      "_type" : "type1",
      "_id" : "2",
      "_score" : null,
      "fields" : {
        "firstname" : [ "computerjerry" ]
      },
      "sort" : [ null ]
    } ]
  }
}
Output of query2:
{
  "took" : 10,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : null,
    "hits" : [ {
      "_index" : "testindex2",
      "_type" : "type1",
      "_id" : "1",
      "_score" : null,
      "fields" : {
        "firstname" : [ "computertom" ]
      },
      "sort" : [ -9223372036854775808 ]
    }, {
      "_index" : "testindex2",
      "_type" : "type1",
      "_id" : "2",
      "_score" : null,
      "fields" : {
        "firstname" : [ "computerjerry" ]
      },
      "sort" : [ -9223372036854775808 ]
    } ]
  }
}
Note:
These examples were created on Elasticsearch 1.4.
These examples also demonstrate how to sort on nested fields.
Are you doing a sort when you search? I've run into the same issue ("No mapping found for [field] in order to sort on"), but only when trying to sort results. In that case, the solution is simply to add the ignore_unmapped: true property to the sort parameter in your query:
{
  ...
  "body": {
    ...
    "sort": [
      {"field_name": {
        "order": "asc",
        "ignore_unmapped": true
      }}
    ]
    ...
  }
  ...
}
I found my solution here:
No mapping found for field in order to sort on in ElasticSearch