I'm struggling to generate a proper JSON POST request using cURL. For that purpose I'm writing a short shell script, but there seems to be a problem with my JSON string (according to the error message listed below).
If I write the CSR directly into the JSON string, it will work just fine:
authToken="Here'sMyAuthToken"
curl --data-binary '{"authToken" : "'$authToken'", "order" : {"type" : "DomainValidatedCertificateOrder", "csr" : "-----BEGIN CERTIFICATE REQUEST-----
certificaterequestkey
-----END CERTIFICATE REQUEST-----", "adminContact" : {"title" : "mytitle", "firstName" : "myfirstname", "lastName" : "mylastname", "phoneNumber" : "X0000000", "emailAddress" : "test@example.com"}, "techContact" : {"title" : "mytitle", "firstName" : "myfirstname", "lastName" : "mylastname", "phoneNumber" : "000000000", "emailAddress" : "test@example.com"}, "productCode" : "ssl-geotrust-rapidssl-12m", "validationType" : "validateViaDns", "approverEmailAddress" : "postmaster@example.com", "autoRenew" : false}}' -i -X POST https://partner.http.net/api/ssl/v1/json/orderCreate
However, if I pass the CSR by reading the CSR file directly, like this:
authToken="Here'sMyAuthToken"
csr=$(<csr.csr)
curl --data-binary '{"authToken" : "'$authToken'", "order" : {"type" : "DomainValidatedCertificateOrder", "csr" : "'$csr'", "adminContact" : {"title" : "mytitle", "firstName" : "myfirstname", "lastName" : "mylastname", "phoneNumber" : "X0000000", "emailAddress" : "test@example.com"}, "techContact" : {"title" : "mytitle", "firstName" : "myfirstname", "lastName" : "mylastname", "phoneNumber" : "000000000", "emailAddress" : "test@example.com"}, "productCode" : "ssl-geotrust-rapidssl-12m", "validationType" : "validateViaDns", "approverEmailAddress" : "postmaster@example.com", "autoRenew" : false}}' -i -X POST https://partner.http.net/api/ssl/v1/json/orderCreate
it will give me the following error.
curl: option -----END: is unknown
curl: try 'curl --help' or 'curl --manual' for more information
I've already found a case where someone had the exact same problem as me here:
POST request containing CSR fails in Bash
That user solved the problem using the jq package. Unfortunately, I can't install jq on the machine the script is supposed to run on, since I'm not allowed to install any packages at all.
Could someone advise me on how to solve this problem?
Many thanks in advance!
I have been having a very hard time posting. I kept getting Errors when saving.
This is the request and response with a Content-Type: application/json in the HTTP request header:
Content-Length: 540
Content-Type: application/json
Accept: application/json
Accept-Encoding: deflate, gzip, br
Host: eatled.com
BODY={"authToken": "$authToken","order": {"type": "DomainValidatedCertificateOrder","csr": "$csr","adminContact": {"title": "mytitle","firstName": "myfirstname","lastName": "mylastname","phoneNumber": "X0000000","emailAddress": "test@example.com"},"techContact": {"title": "mytitle","firstName": "myfirstname","lastName": "mylastname","phoneNumber": "000000000","emailAddress": "test@example.com"},"productCode": "ssl-geotrust-rapidssl-12m","validationType": "validateViaDns","approverEmailAddress": "postmaster@example.com","autoRenew": false}}
I removed the Content-Type and this is the request and response.
Notice how the header changed to Content-Type: application/x-www-form-urlencoded and how the JSON ended up in the KEY of the $_POST data as received by the server. When the server sees the form-data header, it may try to process the request as if it were a form; it depends upon how well the API was written.
Content-Length: 540
Content-Type: application/x-www-form-urlencoded
Accept: application/json
Accept-Encoding: deflate, gzip, br
Host: eatled.com
$_POST
array (
'{"authToken":_"$authToken","order":_{"type":_"DomainValidatedCertificateOrder","csr":_"$csr","adminContact":_{"title":_"mytitle","firstName":_"myfirstname","lastName":_"mylastname","phoneNumber":_"X0000000","emailAddress":_"test#example_com"},"techContact":_{"title":_"mytitle","firstName":_"myfirstname","lastName":_"mylastname","phoneNumber":_"000000000","emailAddress":_"test#example_com"},"productCode":_"ssl-geotrust-rapidssl-12m","validationType":_"validateViaDns","approverEmailAddress":_"postmaster#example_com","autoRenew":_false}}' => '',
)
This is your curl code with all the escape codes and the options reorganized, because as you had them they generated an error. This is tested, and works.
curl -i -H "Content-Type: application/json" -X POST http://eatled.com/receiveheader.php --data-binary "{\"authToken\": \"$authToken\",\"order\": {\"type\": \"DomainValidatedCertificateOrder\",\"csr\": \"$csr\",\"adminContact\": {\"title\": \"mytitle\",\"firstName\": \"myfirstname\",\"lastName\": \"mylastname\",\"phoneNumber\": \"X0000000\",\"emailAddress\": \"test@example.com\"},\"techContact\": {\"title\": \"mytitle\",\"firstName\": \"myfirstname\",\"lastName\": \"mylastname\",\"phoneNumber\": \"000000000\",\"emailAddress\": \"test@example.com\"},\"productCode\": \"ssl-geotrust-rapidssl-12m\",\"validationType\": \"validateViaDns\",\"approverEmailAddress\": \"postmaster@example.com\",\"autoRenew\": false}}"
You are misquoting things in your command line. You're making frequent use of this sort of structure:
'{"somekey": "'$variable'"}'
That means that you're not quoting $variable, so if it contains whitespace you're going to end up with a command other than what you expect. You need to quote all your variables, so the above becomes:
'{"somekey": "'"$variable"'"}'
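A quick demonstration of the difference (a minimal sketch, using set -- to count how many arguments the shell actually produces):

```shell
# a value containing whitespace, like a multi-line CSR would be
var='hello world'

# unquoted expansion: word splitting breaks the string into two arguments
set -- '{"somekey": "'$var'"}'
echo "$#"   # prints 2

# quoted expansion: the string stays intact as a single argument
set -- '{"somekey": "'"$var"'"}'
echo "$#"   # prints 1
```

That split second argument is exactly why curl saw `-----END` as an option in the question above.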
And your full command line is:
curl --data-binary '
{
"authToken" : "'"$authToken"'",
"order" : {
"type" : "DomainValidatedCertificateOrder",
"csr" : "'"$csr"'",
"adminContact" : {
"title" : "mytitle",
"firstName" : "myfirstname",
"lastName" : "mylastname",
"phoneNumber" : "X0000000",
"emailAddress" : "test@example.com"
},
"techContact" : {
"title" : "mytitle",
"firstName" : "myfirstname",
"lastName" : "mylastname",
"phoneNumber" : "000000000",
"emailAddress" : "test@example.com"
},
"productCode" : "ssl-geotrust-rapidssl-12m",
"validationType" : "validateViaDns",
"approverEmailAddress" : "postmaster@example.com",
"autoRenew" : false
}
}
' -i -X POST https://partner.http.net/api/ssl/v1/json/orderCreate
You could simplify things by using a here document instead of trying to embed everything on the command line. That would look like:
curl -i -X POST --data-binary @- https://partner.http.net/api/ssl/v1/json/orderCreate <<EOF
{
"authToken": "$authToken",
"order": {
"type": "DomainValidatedCertificateOrder",
"csr": "$csr",
"adminContact": {
"title": "mytitle",
"firstName": "myfirstname",
"lastName": "mylastname",
"phoneNumber": "X0000000",
"emailAddress": "test@example.com"
},
"techContact": {
"title": "mytitle",
"firstName": "myfirstname",
"lastName": "mylastname",
"phoneNumber": "000000000",
"emailAddress": "test@example.com"
},
"productCode": "ssl-geotrust-rapidssl-12m",
"validationType": "validateViaDns",
"approverEmailAddress": "postmaster@example.com",
"autoRenew": false
}
}
EOF
Now you don't need all those quoting tricks.
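One caveat: a CSR read with $(<csr.csr) still contains literal newlines, which strict JSON parsers reject inside a string. If the API complains about that, the newlines can be converted to \n escape sequences with POSIX awk alone, no jq required (a sketch, assuming the CSR is in csr.csr):

```shell
# join the CSR's lines with literal \n sequences so the value is legal
# inside a JSON string (awk is POSIX, so no extra packages are needed)
csr=$(awk 'NR > 1 { printf "\\n" } { printf "%s", $0 }' csr.csr)
echo "$csr"
```

The resulting $csr can then be substituted into the here document exactly as before.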
Here's how I've tested the above solution:
#!/bin/bash
# Use some sort of http debugging service to verify the content
# of the request.
url="https://eny65dku43a4g.x.pipedream.net"
# Create an example CSR
openssl req -new -nodes \
-keyout req.key \
-out req.csr \
-subj '/O=Example$Organization+Inc/CN=example.com'
csr=$(<req.csr)
authToken='example+password$here'
curl -i -X POST "$url" --data-binary @- <<EOF
{
"authToken": "$authToken",
"order": {
"type": "DomainValidatedCertificateOrder",
"csr": "$csr",
"adminContact": {
"title": "mytitle",
"firstName": "myfirstname",
"lastName": "mylastname",
"phoneNumber": "X0000000",
"emailAddress": "test@example.com"
},
"techContact": {
"title": "mytitle",
"firstName": "myfirstname",
"lastName": "mylastname",
"phoneNumber": "000000000",
"emailAddress": "test@example.com"
},
"productCode": "ssl-geotrust-rapidssl-12m",
"validationType": "validateViaDns",
"approverEmailAddress": "postmaster@example.com",
"autoRenew": false
}
}
EOF
I am building a Kafka source connector and am able to successfully publish a Kafka SourceRecord with a key and its schema with a value and its schema. However, when I use kafka-json-schema-console-consumer to consume the message on the topic I expect both key and value to be received, but I only receive the value and the key is NOT received. Any help is appreciated!
Below is the relevant code that I have tried/used:
The message that I am publishing:
"message": {
"key": {
"id": "some-id"
},
"data": {
"text": "some-text"
}
}
The message gets put into the SourceRecord as follows:
// ...
SourceRecord sr = new SourceRecord(
sourcePartition, sourceOffset, this.topicName, partition,
keySchema, key, valueSchema, value);
log.info("HttpSourceTask:buildSourceRecord - source:{}", sr);
I use a log message to confirm the message was successfully processed... below is the console log message in Kafka which confirms that the key plus key schema and the value plus value schema are correctly set in the SourceRecord:
connect | [2022-01-29 21:24:47,455] INFO HttpSourceTask:buildSourceRecord - source:SourceRecord{sourcePartition={source=bgs-topic}, sourceOffset={offset=0}} ConnectRecord{topic='bgs-topic', kafkaPartition=null, key=Struct{id=some-id}, keySchema=Schema{STRUCT}, value=Struct{text=some-text}, valueSchema=Schema{STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}
Note that the key and keySchema fields are filled in with a STRUCT (as are the value and valueSchema fields).
I am using the following code (I am using Kafka in docker) to consume the message... this is the part that DOES NOT seem to return the right data (key is missing).
docker exec -it schema-registry \
/usr/bin/kafka-json-schema-console-consumer \
--bootstrap-server http://kafka:9092 \
--topic bgs-topic \
--from-beginning \
--property key.separator="|" \
--property value.schema='
{
"title": "simple-data-schema",
"type" : "object",
"required" : [ "text" ],
"properties" : {
"text" : {
"type": "string"
}
}
}' \
--property parse.key=true \
--property key.schema='
{
"title": "simple-key-schema",
"type" : "object",
"required" : [ "id" ],
"properties" : {
"id" : {
"type": "string"
}
}
}'
Here is the returned message - note that only the value is available:
{"text":"some-text"}
Found the error... I used "parse.key=true" but it should be "print.key=true"... once this change was made the correct output was provided:
{"id":"some-id"}|{"text":"some-text"}
The corrected command is shown below:
docker exec -it schema-registry \
/usr/bin/kafka-json-schema-console-consumer \
--bootstrap-server http://kafka:9092 \
--topic bgs-topic \
--from-beginning \
--property key.separator="|" \
--property value.schema='
{
"title": "simple-data-schema",
"type" : "object",
"required" : [ "text" ],
"properties" : {
"text" : {
"type": "string"
}
}
}' \
--property print.key=true \
--property key.schema='
{
"title": "simple-key-schema",
"type" : "object",
"required" : [ "id" ],
"properties" : {
"id" : {
"type": "string"
}
}
}'
I have a JSON query that I am trying to pass through the command line.
I have tried replacing the outer single quotes with double quotes, but it still shows me an error.
curl -XGET "http://localhost:9200/honda/_search?pretty" -H 'Content-Type:
application/json' -d “#{“query”:{"match":{"color":"silver"}}}”
Expected: Documents that match field:silver
Actual Error: Warning:
Couldn't read data from file "", this makes an empty POST
THEN DISPLAYS ALL MY DOCUMENTS:
{
"_index" : "honda",
"_type" : "_doc",
"_id" : "234",
"_score" : 1.0,
"_source" : {
"model" : "Accord EX",
"price" : 28000,
"color" : "red",
"num_doors" : 4,
"weight" : "9000lbs"
}
}.................................
curl: (6) Could not resolve host: application
{"query":{"match":{"color":"silver"}}}"
The filename, directory name, or volume label syntax is incorrect.
Try escaping the quote characters.
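Concretely, that means using straight double quotes around the body (the curly quotes in the question break the shell's parsing) and backslash-escaping the inner ones, with no leading @/# before the braces, since a leading @ tells curl to read the data from a file, which is where the "Couldn't read data from file" warning came from. A sketch:

```shell
# the body with inner quotes escaped; note there is no @ or # prefix,
# so curl sends the string itself rather than looking for a file
body="{\"query\":{\"match\":{\"color\":\"silver\"}}}"
echo "$body"   # prints {"query":{"match":{"color":"silver"}}}
```

It can then be sent with: curl -XGET "http://localhost:9200/honda/_search?pretty" -H "Content-Type: application/json" -d "$body" (or paste the escaped string directly after -d on Windows).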
Hi, I am new to Solr and trying to add a custom JSON document. I am following the link https://cwiki.apache.org/confluence/display/solr/Transforming+and+Indexing+Custom+JSON. First I created a core using the command below:
solr create -c my_collection -d data_driven_schema_configs
After that I added the content below from Cygwin to my core. But when I query through http://localhost:8983/solr/my_collection/select?wt=json&q=* it says there are no docs:
{"responseHeader":{"status":0,"QTime":1,"params":{"q":"*","wt":"json"}},"response":{"numFound":0,"start":0,"docs":[]}}
curl 'http://localhost:8983/solr/my_collection/update/json/docs'\
'?split=/exams'\
'&f=first:/first'\
'&f=last:/last'\
'&f=grade:/grade'\
'&f=subject:/exams/subject'\
'&f=test:/exams/test'\
'&f=marks:/exams/marks'\
-H 'Content-type:application/json' -d '
{
"first": "John",
"last": "Doe",
"grade": 8,
"exams": [
{
"subject": "Maths",
"test" : "term1",
"marks" : 90},
{
"subject": "Biology",
"test" : "term1",
"marks" : 86}
]
}'
The query should be *:*, not just *. Just check in the Admin UI and do the default search.
I have a JSON file and I need to index it on ElasticSearch server.
JSON file looks like this:
{
"sku": "1",
"vbid": "1",
"created": "Sun, 05 Oct 2014 03:35:58 +0000",
"updated": "Sun, 06 Mar 2016 12:44:48 +0000",
"type": "Single",
"downloadable-duration": "perpetual",
"online-duration": "365 days",
"book-format": "ePub",
"build-status": "In Inventory",
"description": "On 7 August 1914, a week before the Battle of Tannenburg and two weeks before the Battle of the Marne, the French army attacked the Germans at Mulhouse in Alsace. Their objective was to recapture territory which had been lost after the Franco-Prussian War of 1870-71, which made it a matter of pride for the French. However, after initial success in capturing Mulhouse, the Germans were able to reinforce more quickly, and drove them back within three days. After forty-three years of peace, this was the first test of strength between France and Germany. In 1929 Karl Deuringer wrote the official history of the battle for the Bavarian Army, an immensely detailed work of 890 pages; First World War expert and former army officer Terence Zuber has translated this study and edited it down to more accessible length, to produce the first account in English of the first major battle of the First World War.",
"publication-date": "07/2014",
"author": "Deuringer, Karl",
"title": "The First Battle of the First World War: Alsace-Lorraine",
"sort-title": "First Battle of the First World War: Alsace-Lorraine",
"edition": "0",
"sampleable": "false",
"page-count": "0",
"print-drm-text": "This title will only allow printing of 2 consecutive pages at a time.",
"copy-drm-text": "This title will only allow copying of 2 consecutive pages at a time.",
"kind": "book",
"fro": "false",
"distributable": "true",
"subjects": {
"subject": [
{
"-schema": "bisac",
"-code": "HIS027090",
"#text": "World War I"
},
{
"-schema": "coursesmart",
"-code": "cs.soc_sci.hist.milit_hist",
"#text": "Social Sciences -> History -> Military History"
}
]
},
"pricelist": {
"publisher-list-price": "0.0",
"digital-list-price": "7.28"
},
"publisher": {
"publisher-name": "The History Press",
"imprint-name": "The History Press Ireland"
},
"aliases": {
"eisbn-canonical": "1",
"isbn-canonical": "1",
"print-isbn-canonical": "9780752460864",
"isbn13": "1",
"isbn10": "0750951796",
"additional-isbns": {
"isbn": [
{
"-type": "print-isbn-10",
"#text": "0752460862"
},
{
"-type": "print-isbn-13",
"#text": "97807524608"
}
]
}
},
"owner": {
"company": {
"id": "1893",
"name": "The History Press"
}
},
"distributor": {
"company": {
"id": "3658",
"name": "asc"
}
}
}
But when I try to index this JSON file using the command
curl -XPOST 'http://localhost:9200/_bulk' -d @1.json
I get this error:
{"error":{"root_cause":[{"type":"action_request_validation_exception","reason":"Validation Failed: 1: no requests added;"}],"type":"action_request_validation_exception","reason":"Validation Failed: 1: no requests added;"},"status":400}
I don't know where I am making a mistake.
The bulk API of Elasticsearch uses a special syntax, which is actually made of JSON documents written on single lines. Take a look at the documentation.
The syntax is pretty simple. For indexing, creating and updating you need two single-line JSON documents: the first line gives the action, the second gives the document to index/create/update. To delete a document, only the action line is needed. For example (from the documentation):
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "field1" : "value1" }
{ "create" : { "_index" : "test", "_type" : "type1", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_type" : "type1", "_index" : "index1"} }
{ "doc" : {"field2" : "value2"} }
{ "delete" : { "_index" : "test", "_type" : "type1", "_id" : "2" } }
Don't forget to end your file with a new line.
Then, to call the bulk API, use the command:
curl -s -XPOST localhost:9200/_bulk --data-binary "@requests"
From the documentation:
If you’re providing text file input to curl, you must use the --data-binary flag instead of plain -d
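Applied to the book document from the question, the requests file would look something like this (the index name books, type book, and id 1 are illustrative assumptions, and the document is shortened here; the full document must sit on one physical line, followed by a trailing newline):

```shell
# build a minimal _bulk payload: one action line, one single-line document,
# and a newline at the end of the file
cat > requests <<'EOF'
{ "index" : { "_index" : "books", "_type" : "book", "_id" : "1" } }
{ "sku": "1", "author": "Deuringer, Karl", "title": "The First Battle of the First World War: Alsace-Lorraine" }
EOF

wc -l < requests   # both lines end with a newline, so this prints 2
```

Then POST it with the --data-binary "@requests" command shown above.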
Adding a newline at the end (press Enter in the case of Postman, or append "\n" if you are building the JSON body in a client API) did the trick for me.
I had a similar issue in that I wanted to delete a specific document of a specific type and via the above answer I managed to get my simple bash script working finally!
I have a file that has a document_id per line (document_id.txt) and using the below bash script I can delete documents of a certain type with the mentioned document_id's.
This is what the file looks like:
c476ce18803d7ed3708f6340fdfa34525b20ee90
5131a30a6316f221fe420d2d3c0017a76643bccd
08ebca52025ad1c81581a018febbe57b1e3ca3cd
496ff829c736aa311e2e749cec0df49b5a37f796
87c4101cb10d3404028f83af1ce470a58744b75c
37f0daf7be27cf081e491dd445558719e4dedba1
The bash script looks like this:
#!/bin/bash
es_cluster="http://localhost:9200"
index="some-index"
doc_type="some-document-type"
for doc_id in `cat document_id.txt`
do
request_string="{\"delete\" : { \"_type\" : \"${doc_type}\", \"_id\" : \"${doc_id}\" } }"
echo -e "${request_string}\r\n\r\n" | curl -s -XPOST "${es_cluster}/${index}/${doc_type}/_bulk" --data-binary @-
echo
done
The trick, after lots of frustration, was to use the -e option to echo and append \n\n to the output of echo before piping it into curl.
And then in curl I have the --data-binary option set to stop it stripping out the \n\n needed for the _bulk endpoint, followed by the @- argument to get it to read from stdin!
It was a weird mistake in my case: I was creating the bulkRequest object and clearing it before inserting into Elasticsearch.
The line that was causing the issue:
bulkRequest.requests().clear();
My issue was also the missing \n. Note that when you print the string, the \n is parsed and rendered as a newline, so it looks like the \n is missing. In case that helps anyone:
pseudocode:
document = '{"index": {"_index": "users", "_id": "1"}} \n {"first_name": "Bob"}'
print(document)
will print
{"index": {"_index": "users", "_id": "1"}}
{"first_name": "Bob"}
but that's ok -- as long as the string contains the \n separator then it should work
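The same two-line body can be produced from a shell, where printf makes the newline separator (and the trailing newline the _bulk endpoint requires) explicit:

```shell
# print the action line and the document line, each terminated by \n
printf '%s\n%s\n' \
  '{"index": {"_index": "users", "_id": "1"}}' \
  '{"first_name": "Bob"}'
```

This output can be piped straight into curl ... --data-binary @- as in the answers above.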
I am working on a graphical representation of data. The graph accepts JSON data, hence I need to fetch the required data from CouchDB. I am using an Elasticsearch server for indexing CouchDB and thereby retrieving the required data.
I am using the Elasticsearch river plugin to tie CouchDB and the Elasticsearch server together.
I have Created the CouchDB Database 'testdb' and created some test documents for the same.
Setup elasticsearch with the database.
On testing this by issuing a cURL GET command with the default search criteria, we should get 'total hits' greater than 0, and the 'hits' should contain some response value for the searched criteria.
But we are getting 'total hits' as 0 and 'hits':[] (i.e. null).
Procedures I followed.
1) Downloaded and installed couchdb latest version
2) Verified CouchDB is running
curl localhost:5984
I got response that starts with:
{"couchdb":"Welcome"...
3) Downloaded ElasticSearch and installed service
service.bat install
curl http://127.0.0.1:9200
I got response as
{ "ok" : true, "status" : 200,.....
4) Installed the CouchDB River Plugin for ElasticSearch 1.4.2
plugin -install elasticsearch/elasticsearch-river-couchdb/2.4.1
5) To Create the CouchDB Database and ElasticSearch Index
curl -X PUT "http://127.0.0.1:5984/testdb"
6) To Create some test documents:
curl -X PUT "http://127.0.0.1:5984/testdb/1" -d "{\"name\":\"My Name 1\"}"
curl -X PUT "http://127.0.0.1:5984/testdb/2" -d "{\"name\":\"My Name 2\"}"
curl -X PUT "http://127.0.0.1:5984/testdb/3" -d "{\"name\":\"My Name 3\"}"
curl -X PUT "http://127.0.0.1:5984/testdb/4" -d "{\"name\":\"My Name 4\"}"
7) To Setup ElasticSearch with the Database
curl -X PUT "127.0.0.1:9200/_river/testdb/_meta" -d "{ \"type\" : \"couchdb\", \"couchdb\" : { \"host\" : \"localhost\", \"port\" : 5984, \"db\" : \"testdb\", \"filter\" : null }, \"index\" : { \"index\" : \"testdb\", \"type\" : \"testdb\", \"bulk_size\" : \"100\", \"bulk_timeout\" : \"10ms\" } }"
8) To test it
curl "http://127.0.0.1:9200/testdb/testdb/_search?pretty=true"
On testing, we should get this:
{
"took" : 4,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits" : {
"total" : 4,
"max_score" : 1.0,
"hits" : [ {
"_index" : "testdb",
"_type" : "testdb",
"_id" : "4",
"_score" : 1.0, "_source" : {"_rev":"1-7e9376fc8bfa6b8c8788b0f408154584","_id":"4","name":"My Name 4"}
}, {
"_index" : "testdb",
"_type" : "testdb",
"_id" : "1",
"_score" : 1.0, "_source" : {"_rev":"1-87386bd54c821354a93cf62add449d31","_id":"1","name":"My Name"}
}, {
"_index" : "testdb",
"_type" : "testdb",
"_id" : "2",
"_score" : 1.0, "_source" : {"_rev":"1-194582c1e02d84ae36e59f568a459633","_id":"2","name":"My Name 2"}
}, {
"_index" : "testdb",
"_type" : "testdb",
"_id" : "3",
"_score" : 1.0, "_source" : {"_rev":"1-62a53c50e7df02ec22973fc802fb9fc0","_id":"3","name":"My Name 3"}
} ]
}
}
But I got something like this
{
"error" : "IndexMissingException[[testdb] missing]",
"status" : 404
}
This curl string doesn't need the additional testdb. This:
curl "http://127.0.0.1:9200/testdb/testdb/_search?pretty=true"
Should be this:
curl 'http://localhost:9200/testdb/_search?pretty=true'
You can view all your indices by running the following and ensuring your search is against one of them:
curl -X GET 'localhost:9200/_cat/indices'