I use a PowerShell script to get JSON information from a web interface. Currently I store the JSON from there into a file before I use jq 1.5 (under Windows 10) to transform the JSON into a format that I can upload into a database. But since I use jq in the same PowerShell environment, I think I can avoid that redirection and work directly with the JSON text (either in a variable or passed directly in the jq command). I checked the manual but found no clear answer to that question. I found the --argjson option in the manual, which looks like what I need, but the manual is not clear on how to define the variable under Windows/PowerShell.
Regards
Timo
Sorry if the question is confusing. I found a way to work with variables directly on the command line, e.g.:
$variable | C:\jq.exe [Filter]
There was no explanation in the jq manual of how to pass JSON text directly on the command shell to jq, but I found it. Thanks for your help.
Regards
Timo
You could use the ConvertFrom-Json cmdlet to convert any piece of JSON directly into a PowerShell object.
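To illustrate the two approaches discussed above, here is a minimal sketch of passing JSON text to jq without an intermediate file, either via stdin or via --argjson. It is written in POSIX-shell syntax; the same idea works in PowerShell (e.g. `$json | jq.exe '.name'`), but the quoting rules there differ.

```shell
# Sample JSON held in a shell variable instead of a file:
json='{"name": "Timo", "id": 7}'

# Option 1: feed the JSON text to jq on stdin:
echo "$json" | jq '.name'
# → "Timo"

# Option 2: inject the JSON as a jq variable with --argjson
# (-n means jq does not read any input of its own):
jq -n --argjson data "$json" '$data.id'
# → 7
```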
I just installed jq on Ubuntu 18.04 and the output is really nicely formatted when making curl calls. I am wondering if there is a way to always have my terminal(s) output JSON data nicely formatted without having to append the | jq . command to the end of every call. I am familiar with setting up aliases in my .zshrc file, but there is no single command I would use this for, so I'm not sure how to make this happen.
Dummy command I ran
curl -X GET localhost:3000/products | jq .
I want the jq formatting to always be in effect when JSON is returned from the CLI.
Thanks!
While I won't go so far as to say this cannot be done, not only am I not aware of any straightforward way of doing so, I think it would be extremely challenging to do at all for the following reasons:
It's hard to reliably identify that a command is outputting JSON. Many things might look like JSON without being intended as JSON, and a command might output something that is mostly JSON but not entirely JSON.
It's impossible to know if a long-running command is outputting valid JSON until it has finished. This gives a dilemma of what to do in the interim - wait for it to finish, potentially indefinitely (especially if it's waiting for your input!) or find some way to roll-back the output if it turns out we guessed wrong.
It's hard to predict up-front if a command is going to return JSON. As in your example, you can't tell what the remote server is going to return from a curl command.
It's unclear what's the "right" thing to do for non-trivial commands, such as invoking shell functions or pipelines or applying redirections, or for programs such as text editors that use advanced control codes for redrawing the screen.
For these reasons, I think if this is possible it would require quite an advanced shell that captures the output of all commands the user executes, analyses and formats that output continuously, and has the capability to roll-back and reformat (or unformat) the output if it realises the output is incompatible with its method of display. I think that would be pretty interesting to see but I'm not aware of such a shell.
You could start with a simple script, e.g.
$ cat curljq
#!/bin/bash
curl "$@" | jq .
$ chmod +x curljq
$ ./curljq -Ss -H "Accept: application/json" https://reqbin.com/echo/get/json
{
  "success": "true"
}
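Since you mention keeping aliases in your .zshrc, the same wrapper also works as a shell function there, which avoids a separate script file. The name `curljson` is just an example:

```shell
# Drop this into ~/.zshrc (or ~/.bashrc); "curljson" is a made-up name.
# It forwards all arguments to curl and pretty-prints the response with jq.
curljson() {
  curl -sS "$@" | jq .
}
```

You would then call it exactly like curl, e.g. `curljson -H "Accept: application/json" localhost:3000/products`.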
This question already has answers here:
jq to replace text directly on file (like sed -i)
(9 answers)
How can I use a file in a command and redirect output to the same file without truncating it?
(14 answers)
I want to move a JS object from one file to another. I am currently trying to use jq to do this, but I'm having issues getting it to output the result to a file using pretty printing. In case you were wondering, I am trying to move duplicate swagger definitions into a common file.
This is my script:
content=`jq '.definitions.foo' ./src.swagger.json`
src_without_content=`jq 'del(.definitions.foo)' ./src.swagger.json`
echo $src_without_content > ./src.swagger.json
dest_with_content=`jq --argjson content "$content" '.definitions |= .+ { "foo": $content }' ./dest.swagger.json`
echo $dest_with_content > ./dest.swagger.json
Basically, I am trying to capture the object that I want, then remove it from the source file, then add it to the destination. I modify both files by creating the data that I want in the files, then overwriting them.
When I tried using the output to write directly to a file (instead of first storing it in a shell variable), the file was overwritten with a blank file:
jq 'del(.definitions.foo)' ./src.swagger.json > ./src.swagger.json
With my current script, the content is valid and as expected, but it isn't formatted nicely; instead, it is printed as one solid line. I read up on this, and by default jq is supposed to pretty-print its output. Perhaps the formatting is getting lost when I store the result in a shell variable? I've seen a few posts, but none of them seem to discuss how to write the output to a file.
Am I missing something? Is there a way to do this? Any help is appreciated. Thanks!
In case it matters, I am running this script on a mac.
Am I missing something?
Yes - you cannot reliably use shell redirection to read and write the same file in the way you are expecting. That's just the way things are :-(
Is there a way to do this?
Yes. One way is to use sponge. (On a Mac, you could install it using brew install moreutils.) Another is to use https://github.com/nicowilliams/inplace
For further details and options, see https://github.com/stedolan/jq/wiki/FAQ (search for sponge).
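As a concrete sketch of the two approaches mentioned above: sponge soaks up all of its input before writing, and the temporary-file variant needs only POSIX tools. The sample file and field names below stand in for your src.swagger.json.

```shell
# Demo file standing in for src.swagger.json:
printf '{"definitions":{"foo":{"type":"object"},"bar":{}}}' > src.swagger.json

# Option 1: sponge (from moreutils) reads everything before writing,
# so reading and writing the same file is safe:
#   jq 'del(.definitions.foo)' src.swagger.json | sponge src.swagger.json

# Option 2: plain POSIX tools, via a temporary file:
tmp=$(mktemp)
jq 'del(.definitions.foo)' src.swagger.json > "$tmp" && mv "$tmp" src.swagger.json
```

Incidentally, the one-solid-line output in your original script comes from the unquoted variable: `echo $src_without_content` lets the shell collapse the newlines, while `echo "$src_without_content"` preserves jq's pretty printing.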
I'd like to recursively download JSON resources from a RESTful HTTP endpoint and store these in a local directory structure, following links to related resources in the form of JSON strings containing HTTP URLs. Wget would seem to be a likely tool for the job, though its recursive download is apparently limited to HTML hyperlinks and CSS url() references.
The resources in question are Swagger documentation files similar to this one, though in my case all of the URLs are absolute. The Swagger schema is fairly complicated, but it would be sufficient to follow any string that looks like an absolute HTTP(S) URL. Even better would be to follow absolute or relative paths specified in 'path' properties.
Can anyone suggest a general purpose recursive crawler that would do what I want here, or a lightweight way of scripting wget or similar to achieve it?
I ended up writing a shell script to solve the problem:
API_ROOT_URL="http://petstore.swagger.wordnik.com/api/api-docs"
OUT_DIR=`pwd`
function download_json {
  echo "Downloading $1 to $OUT_DIR$2.json"
  curl -sS "$1" | jq . > "$OUT_DIR$2.json"
}
download_json "$API_ROOT_URL" /api-index
jq -r .apis[].path "$OUT_DIR/api-index.json" | while read -r API_PATH; do
  API_PATH=${API_PATH#$API_ROOT_URL}
  download_json "$API_ROOT_URL$API_PATH" "$API_PATH"
done
This uses jq to extract the API paths from the index file, and also to pretty print the JSON as it is downloaded. As webron mentions this will probably only be of interest to people still using the 1.x Swagger schema, though I can see myself adapting this script for other problems in the future.
One problem I've found with this for Swagger is that the order of entries in our API docs is apparently not stable. Running the script several times in a row against our API docs (generated by swagger-springmvc) results in minor changes to property orders. This can be partly fixed by sorting the JSON objects' property keys with jq's --sort-keys option, but this doesn't cover all cases, e.g. a model schema's 'required' property, which is a plain array of string property names.
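The limitation described above can be seen with two tiny inputs: --sort-keys normalises object key order, but it leaves array element order alone, so an array-valued field like 'required' still differs between runs.

```shell
# Object keys are reordered, which stabilises diffs:
echo '{"b": 2, "a": 1}' | jq --sort-keys -c .
# → {"a":1,"b":2}

# ...but array elements keep their original order, so this would still
# compare unequal to {"required":["id","name"]}:
echo '{"required": ["name", "id"]}' | jq --sort-keys -c .
# → {"required":["name","id"]}
```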
I'm on an Ubuntu system, and I'm trying to write a testing framework that has to (among other things) compare the output of a mongodump command. This command generates a bunch of BSON files, which I can compare. However, for human readability, I'd like to convert these to nicely formatted JSON instead, which I can do using the provided bsondump command. The issue is that this appears to be a one-way conversion.
While I can work around this if I absolutely need to, it would be a lot easier if there was a way to convert back from JSON to BSON on the command line. Does anyone know of a command line tool to do this? Google seems to have come up dry.
I haven't used them, but bsontools can convert from JSON, XML, or CSV.
As @WiredPrarie points out, the conversion from BSON to JSON is lossy, and it makes no sense to want to go back the other way. Workarounds include using mongoimport instead of mongorestore, or just using the original BSON. See the comments for more details. (Adding this answer mainly so I can close the question.)
You can try beesn; it converts data both ways. For your variant (JSON -> BSON) use the -x switch.
Example:
$ beesn -x -i test-data/01.json -o my.bson
Disclaimer: I am an author of this tool.
Does anyone know of a simple utility for editing a simple BSON database/file?
Did you try this: http://docs.mongodb.org/manual/reference/bsondump/ ?
The installation package that includes mongodump ('mongo-tools' on Ubuntu) should also include bsondump, for which the manpage says:
bsondump - examine BSON files in a human-readable form
You can convert BSON to JSON with the following:
bsondump --pretty <your_file.bson
As a data interchange format, BSON may not be suitable for editing directly. For manipulating a BSON dataset you could, of course, upload it to MongoDB and work with it there. Or you could open the bsondump-decoded JSON in an editor. But the BSON Wikipedia article indicates that compatible libraries exist in several languages, which suggests you should also be able to decode it programmatically into an internal map representation and edit that representation in code.