I'm working on a CLI for the Pushbullet HTTP API using Bash scripting. Sending pushes (notes and links), as well as creating, deleting, and modifying contacts and devices, is all straightforward using curl and Bash. However, sending SMS and files is a bit more complex, as both require sending more elaborate JSON-formatted requests to the server (multiple JSON-formatted requests, in the case of pushing files).
I've tried sending many variations on the following (both with and without escape characters), but the server keeps replying with JSON-formatting errors. The following code is based on the example given in the Pushbullet HTTP API documentation.
curl -su $auth_id: -X POST https://api.pushbullet.com/v2/ephemerals --header "Content-Type: application/json" \
--data-binary '{ "\"type"\": "\"push"\", "\"push"\": { \
"\"type"\": "\"messaging_extension_reply"\", \
"\"package_name"\": "\"com.pushbullet.android"\", \
"\"source_user_iden"\": "\"$source_idens"\", \
"\"target_device_iden"\": "\"$target_idens"\", \
"\"conversation_iden"\": "\"$sms_device"\", \
"\"message"\": "\"Hello"\" \
} }'
Using bash -x, I can see that this is (supposedly) what is being sent to the server:
--data-binary '{"type": "push", "push": {
"type": "messaging_extension_reply",
"package_name": "com.pushbullet.android",
"source_user_iden": "<source_idens>",
"target_device_iden": "<device_idens>",
"conversation_iden": "<sms_phone_number>",
"message": "Hello" } }'
In all cases, the server returns:
{"error":{"type":"invalid_request","message":"Failed to parse JSON body.","cat":"(=^‥^=)"}}
What is the appropriate formatting of a JSON request using curl to send an SMS via the Pushbullet API? Am I overlooking something obvious? I'm trying to accomplish this using only curl and Bash; I see no reason why it shouldn't be possible (maybe not the fastest or most elegant way, but certainly possible).
I found the solution to my issue, so I thought I'd share it. It was actually very simple:
Because the curl command encloses the JSON-formatted request in single quotes, variable expansion was not occurring. This is a limitation (or perhaps a feature) of Bash. So, even though the server responded with { }, indicating no errors in the request, the requests were actually being sent without the proper values for parameters such as user_iden, source_user_iden, etc.
Solution:
For each variable expansion inside the JSON-formatted request, close the single-quoted string, expand the variable inside double quotes, then reopen the single quote, like so:
"'"$user_idens"'"
First, I'd like to apologize for how bad the API is, especially file upload and sending SMS. I was thinking of adding multipart or base64 file uploads to /v2/pushes. I think the first one might help you with curl; I'm not sure about the base64 one. Multipart is a huge pain, though, so I'd prefer to make it better than the current setup if possible, rather than about equally as bad. Suggestions are welcome.
I tried your command line and it seemed to work, so I'm not sure what is going wrong. Here's the command line I used. Perhaps your quote escaping or newlines are causing the JSON error?
curl -u <access_token> -X POST https://api.pushbullet.com/v2/ephemerals --header "Content-Type: application/json" --data-binary '{"type": "push", "push": {"type": "messaging_extension_reply","package_name": "com.pushbullet.android","source_user_iden": "iden","target_device_iden": "device_idens", "conversation_iden": "sms_phone_number","message": "Hello" } }'
I am making a GET request and receiving either JSON or "binary" data. Based on my testing, the chance of a JSON response is not far from a coin flip. When I receive binary data, I also get either a "Content-Length: xxx" or a "Transfer-Encoding: chunked" response header (also roughly a 50-50 chance). I have not noticed getting both headers at the same time (when I add the -i option to curl in the snippet below). However, I do get a "Content-Length" header for JSON responses. Usually the JSON response is about 280 kB in size, while the chunked response is about 40 kB.
curl -o output.json \
  -H "Content-Type: application/json;charset=UTF-8" \
  "www.example.com/data" \
  && ./process-output.sh
I would like to find a solution where I can be sure that the whole response is in "output.json" before I execute the next script.
--EDIT--
My current solution is to check the output with file output.json | grep -c "UTF-8 Unicode" and retry (at most 5 times). Most people would say this is an ugly workaround, and some might even stop talking to me. I hope I won't actually need a "solution" like this.
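Spelled out, that workaround looks something like this (a sketch; the exact string printed by file for JSON output can vary between platforms):

# Retry up to 5 times until `file` recognizes the output as UTF-8 text.
for attempt in 1 2 3 4 5; do
  curl -o output.json \
       -H "Content-Type: application/json;charset=UTF-8" \
       "www.example.com/data"
  if file output.json | grep -q "UTF-8 Unicode"; then
    ./process-output.sh
    break
  fi
done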
Does curl -N (no-buffer) fix your issue?
I saw this previously when piping curl into a JSON formatter: curl catches SIGPIPE and returns success (so your shell moves on to the processing step) before the last part of the chunked response has been written to STDOUT.
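If buffering is the culprit, it may be enough to add that flag to the original command; a sketch (the -f flag is an extra assumption here, making curl exit non-zero on HTTP errors so the && guard really protects the processing step):

curl -N -f -o output.json \
     -H "Content-Type: application/json;charset=UTF-8" \
     "www.example.com/data" \
  && ./process-output.sh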
I am sending a POST request from curl like this:
curl -H 'Content-Type:application/json" -X POST -d '{"key":"value"}' http://localhost:3000/parsejson
However, I am getting on my Node/Express server:
{'key':'value'} // req.body
So it is unclear to me whether the problem is in the curl request or in the configuration of my Node server. On my Node server, I'm using bodyParser.json() and bodyParser.urlencoded().
Thank you!
First of all, this line is incorrect and you cannot run curl with it:
curl -H 'Content-Type:application/json" -X POST -d '{"key":"value"}' http://localhost:3000/parsejson
I'm mentioning it because this is a question about single and double quotes, and you have single and double quotes mixed up in your example of how you make the request, resulting in a command that cannot possibly be executed. Since this is obviously not how you really make the request, it is not clear how you actually do it.
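For reference, a consistently quoted version of that command would look something like this (assuming the endpoint from the question):

curl -H 'Content-Type: application/json' -X POST \
     -d '{"key":"value"}' http://localhost:3000/parsejson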
Now, if you want to see what you really get in the request, don't use a body parser (temporarily switch it off, remove it, or comment it out) and run req.pipe(process.stdout) to display the request body on the server. Then you will know what you are getting from the client.
Also run curl with the -v option to see what it is actually sending.
If it turns out that curl is sending correct JSON and your server receives correct JSON in the request body, then your problem must lie somewhere other than the topics you are asking about in this question.
Of course, it's impossible to say what the problem is in that case, because you didn't include even a single line of code in your question.
I am using the following curl command to upload data to CouchDB:
curl -d @abcd.json -H "Content-Type: application/json" -X POST http://localhost:5984/database/_bulk_docs
The file contains multiple JSON documents and is valid JSON.
The response I get is: {"error":"bad_request","reason":"Request body must be a JSON object"}
I have studied other answers to similar questions but don't seem to be able to find the reason for the error.
(The file does not have a 'BOM' as far as I can see.)
I am running on Windows 10.
I have tried using the RESTClient addon in Firefox with the same result.
To solve this, I found that the input file needs an added wrapping structure, namely an additional:
{
"docs":
before the first "[" of the first JSON document in the file (naturally, with a matching closing "}" at the end); then everything works.
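For example, if abcd.json originally held a bare array of documents, the wrapped file might look like this (a sketch with placeholder documents):

# Write a wrapped bulk file with two placeholder docs, then POST it.
cat > abcd.json <<'EOF'
{
  "docs": [
    { "_id": "doc1", "value": "one" },
    { "_id": "doc2", "value": "two" }
  ]
}
EOF
curl -d @abcd.json -H "Content-Type: application/json" -X POST http://localhost:5984/database/_bulk_docs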
Sorry for the inconvenience.
This post jogged my thinking.
I have a collection of JSON documents in MarkLogic which I want to return from an API call as a JSON array.
fn.collection('my-users')
Returns a sequence of JSON docs, but I need a valid JSON object: an array. I am doing this in server-side JavaScript, pushing onto a new empty array.
There is no real example documentation to my knowledge, only some examples in XQuery. Google keeps referring to this very high-level documentation here
var myArray = [];
for (d of fn.collection('my-users')){
myArray.push(d);
}
myArray
Do I need to loop over each item in the sequence to push it into an array, or is there a more elegant/quicker solution?
hugo
If you need to return them as a JSON array, you're doing the right thing; there's no more elegant or quicker solution that I know of. But if you're looking for a more optimized, high-performance approach, try REST extensions, which will turn a sequence of documents into a multipart HTTP response.
Here's an example. Given example.sjs with the contents:
function get(context, params) {
return fn.collection('my-users')
}
exports.GET = get;
Installed like so:
curl --anyauth --user admin:admin -X PUT -i \
-H "Content-type: application/vnd.marklogic-javascript" \
--data-binary @./example.sjs \
http://localhost:8000/LATEST/config/resources/js-example
And the following docs inserted into the my-users collection (I assume you know how to insert these):
myuser.json
{"name":"Sue"}
myuser2.json
{"name":"Mary"}
myuser3.json
{"name":"Jane"}
myuser4.json
{"name":"Joe"}
You can call your rest extension like so:
curl --anyauth --user admin:admin \
http://localhost:8000/LATEST/resources/js-example
And you get the following multi-part http response:
--js-example-get-result
Content-Type: application/json
Content-Length: 14
{"name":"Sue"}
--js-example-get-result
Content-Type: application/json
Content-Length: 15
{"name":"Mary"}
--js-example-get-result
Content-Type: application/json
Content-Length: 15
{"name":"Jane"}
--js-example-get-result
Content-Type: application/json
Content-Length: 14
{"name":"Joe"}
--js-example-get-result--
Use your favorite client-side HTTP library to consume that efficient response, which delivers each document as an individual JSON part.
I should add that there's no need for a REST extension if your requirements are this simple. You could simply use the REST search endpoint:
curl --anyauth --user admin:admin \
-H accept:multipart/mixed \
http://localhost:8000/LATEST/search?collection=my-users
and get a very similar multipart HTTP response. I only provided the REST extension example because your question was about server-side JavaScript, and I figured you might have additional requirements that call for it.
Iterables are from ES6 and are (from what I understand) one of the only things carried over for the initial release of SJS, along with sub-sequences.
The reason for these is so that you get the same behaviour as with sequences and sub-sequences in XQuery (different notation in the two languages, but identical behaviour).
If there were a full implementation of ES6, then the answer for you would be Array.from(iterable)
However, without that feature, I think you are using the most efficient way. But be careful that you don't suck your entire database into memory by pushing from the iterator into the array.
I am curious about your use-case for needing them in an array, actually...
-David
You don't have to loop. The following should help with your need:
myArray.push(fn.doc(fn.collection('my-users')))
I am new to Elasticsearch, which I am accessing through the Postman Chrome extension.
I want to load bulk data into it from a JSON file using the Bulk API.
I have seen the command:
curl -s -XPOST 'http://jfblouvmlxecs01:9200/_bulk' --data-binary @bulk.json
While using Postman, I do not know where to store the file @bulk.json.
Currently I have stored it at C:\elasticsearch-1.5.2\bin\bulk.JSON
The command I am using is http://localhost:9200/_bulk --data-binary @bulk.JSON
This is throwing the following error:
"error": "InvalidIndexNameException[[_bulk --data-binary #bulk.JSON] Invalid index name [_bulk --data-binary #bulk.JSON], must not contain the following characters [\, /, *, ?, \", <, >, |, , ,]]",
"status": 400 }
Can someone please suggest where to store the JSON file? Or am I doing something wrong here?
You can store your file anywhere and then pass the path like this:
curl -s -XPOST 'http://jfblouvmlxecs01:9200/_bulk' --data-binary @"/blah/blah/bulk.json"
If you want to do it with Postman, there is an answer here that explains how.
But if you want to do this for large data, I wouldn't recommend using Postman or Sense. Use curl directly, or Logstash.
See also this: https://github.com/taskrabbit/elasticsearch-dump
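As an aside, the _bulk endpoint expects newline-delimited JSON: each action line is followed by its source document, and the body must end with a newline. A minimal sketch of building and sending such a file (index, type, and field names here are placeholders):

# Write a two-document bulk file; the heredoc supplies the trailing newline.
cat > bulk.json <<'EOF'
{ "index" : { "_index" : "myindex", "_type" : "mytype", "_id" : "1" } }
{ "field1" : "value1" }
{ "index" : { "_index" : "myindex", "_type" : "mytype", "_id" : "2" } }
{ "field1" : "value2" }
EOF
curl -s -XPOST 'http://localhost:9200/_bulk' --data-binary @bulk.json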