I have several large CSVs which I would like to export to a particular JSON format, but I'm not really sure how to convert them. It's a list of usernames and URLs.
b00nw33,harrypotter788.flv
b00nw33,harrypotter788.mov
b00nw33,levitation271.avi
b01spider,schimbvalutar109.avi
...
I want to export them to JSON, grouped by the username, like the following:
{
  "b00nw33": [
    "harrypotter788.flv",
    "harrypotter788.mov",
    "levitation271.avi"
  ],
  "b01spider": [
    "schimbvalutar109.avi"
  ]
}
What is the jq filter to do this? Thank you!
The key to a simple solution is the generic function aggregate_by:
# In this formulation, f must either always evaluate to a string or
# always to an integer, it being understood that negative integers
# might be problematic
def aggregate_by(s; f; g):
  reduce s as $x (null; .[$x|f] += [$x|g]);
If the CSV can be accurately parsed by simply splitting on commas, then the desired transformation can be accomplished using the following jq filter:
aggregate_by(inputs | split(","); .[0]; .[1])
This assumes jq is invoked with the -R (raw input) and -n (null input) options.
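For concreteness, here is a complete invocation, with the definition provided inline and assuming the CSV shown above is in a file named input.csv (the filename is illustrative):

jq -nR '
  def aggregate_by(s; f; g):
    reduce s as $x (null; .[$x|f] += [$x|g]);
  aggregate_by(inputs | split(","); .[0]; .[1])
' input.csv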
Output
With the given CSV input, the output would be:
{
  "b00nw33": [
    "harrypotter788.flv",
    "harrypotter788.mov",
    "levitation271.avi"
  ],
  "b01spider": [
    "schimbvalutar109.avi"
  ]
}
Handling non-trivial CSV
The above solution assumes that the CSV is as uncomplicated as the sample. If, on the contrary, the CSV cannot be accurately parsed by simply splitting at commas, a more general parser will be needed.
One approach would be to use the very robust and fast csv2json parser at https://github.com/fadado/CSV.
Alternatively, you could use one of the many available "csv2tsv" parsers to generate TSV, which jq can handle directly (by splitting on tabs, i.e. split("\t") rather than split(",")).
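With TSV input, the only change needed in the filter above would be the splitting character:

aggregate_by(inputs | split("\t"); .[0]; .[1])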
In any case, once the CSV has been converted to JSON, the filter aggregate_by defined above can be used.
If you are interested in a jq parser for CSV, you might want to look at fromcsvfile (https://gist.github.com/pkoppstein/bbbbdf7489c8c515680beb1c75fa59f2); see also the definitions for fromcsv being proposed at https://github.com/stedolan/jq/issues/1650#issuecomment-448050902.
Related
Consider the following JSON file example.json:
{
  "key1": ["arr value 1", "arr value 2", "arr value 3"],
  "key2": {
    "key2_1": ["a1", "a2"],
    "key2_2": {
      "key2_2_1": 1.43123123,
      "key2_2_2": 456.3123,
      "key2_2_3": "string1"
    }
  }
}
The following jq command extracts a value from the above file:
jq ".key2.key2_2.key2_2_1" example.json
Output:
1.43123123
Is there an option in jq that, instead of printing the value itself, prints the location (line and column, start and end position) of the value within a (valid) JSON file, given an Object Identifier-Index (.key2.key2_2.key2_2_1 in the example)?
The output could be something like:
some_utility ".key2.key2_2.key2_2_1" example.json
Output:
(6,25) (6,35)
Given JSON data and a query, there is no option in jq that, instead of printing the value itself, prints the location of possible matches.
This is because JSON parsers providing an interface to developers usually focus on processing the logical structure of a JSON input, not the textual stream conveying it. You would have to instruct it to explicitly treat its input as raw text, while properly parsing it at the same time in order to extract the queried value. In the case of jq, the former can be achieved using the --raw-input (or -R) option, the latter then by parsing the read-in JSON-encoded string using fromjson.
The -R option alone would read the input linewise into a stream of strings, which would have to be collected and concatenated (e.g. using add) in order to provide the whole input at once to fromjson. The other way round, you could provide the --slurp (or -s) option, which (in combination with -R) already concatenates the input into a single string; after parsing it with fromjson, that string would then have to be split again into lines (e.g. using /"\n") in order to provide row numbers. I found the latter to be more convenient.
That said, this could give you a starting point (the --raw-output (or -r) option outputs raw text instead of JSON):
jq -Rrs '
"\(fromjson.key2.key2_2.key2_2_1)" as $query # save the query value as string
| ($query | length) as $length # save its length by counting its characters
| ./"\n" | to_entries[] # split into lines and provide 0-based line numbers
| {row: .key, col: .value | indices($query)[]} # find occurrences of the query
| "(\(.row),\(.col)) (\(.row),\(.col + $length))" # format the output
'
(5,24) (5,34)
Now, this works for the sample query, but how about the general case? Your example queried a number (1.43123123), which is an easy target as it has the same textual representation when encoded as JSON. Therefore, a simple string search and length count did a fairly good job (not a perfect one, because it would still find any occurrence of that character stream, not just "values"). For more precision, and especially with more complex JSON datatypes being queried, you would need to develop a more sophisticated searching approach, probably involving more JSON conversions, whitespace stripping and other normalizing shenanigans. So, unless your goal is to rebuild a full JSON parser within another one, you should narrow down the kinds of queries you expect, and compose an appropriately tailored searching approach. This solution provides you with concepts for processing the input textually and structurally at the same time, and with a simple search-and-output integration.
I have the following JSON structure:
{
  "host1": "$PROJECT1",
  "host2": "$PROJECT2",
  "host3": "xyz",
  "host4": "$PROJECT4"
}
And the following environment variables in the shell:
PROJECT1="randomtext1"
PROJECT2="randomtext2"
PROJECT4="randomtext3"
I want to check the values for each key: if a value has a "$" character in it, replace it with the respective environment variable (which is already present in the shell), so that my JSON template is rendered with the correct environment variables.
I can use the --args option of jq but there are quite a lot of variables in my actual JSON template that I want to render.
I have been trying the following:
jq 'with_entries(.values as v | env.$v)
Basically, I am making each value a variable, then updating its value with the variable of the same name from the env object, but it seems I am missing some understanding. Is there a straightforward way of doing this?
EDIT
Thanks to the answers on this question, I was able to achieve my larger goal, for one part of which this question was asked:

iterating over each value in an object,
checking its value,
if it's a string and starts with the character "$",
    use the value to update it with an environment variable of the same name;
if it's an array,
    use the value to retrieve an environment variable of the same name,
    split that string with "," as the delimiter, which returns an array of strings,
    and update the value with the array of strings.
jq 'with_entries(.value |= (if (type=="array") then (env[.[0][1:]] | split(",")) elif (type=="string" and startswith("$")) then (env[.[1:]]) else . end))'
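For illustration, here is how this filter handles a hypothetical array-valued entry (the PROJECTS variable and its value below are made up):

export PROJECTS="a,b,c"
echo '{"hosts": ["$PROJECTS"]}' | jq '
  with_entries(.value |= (
    if (type=="array") then (env[.[0][1:]] | split(","))
    elif (type=="string" and startswith("$")) then (env[.[1:]])
    else . end))'

This produces {"hosts": ["a", "b", "c"]}.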
You need to export the Bash variables to be seen by jq:
export PROJECT1="randomtext1"
export PROJECT2="randomtext2"
export PROJECT4="randomtext3"
Then you can go with:
jq 'with_entries((.value | select(startswith("$"))) |= env[.[1:]])'
and get:
{
  "host1": "randomtext1",
  "host2": "randomtext2",
  "host3": "xyz",
  "host4": "randomtext3"
}
Exporting a large number of shell variables might not be such a good idea, and it does not address the problem of array-valued variables. It might therefore be better to think along the lines of printing the variable=value details to a file, and then combining that file with the template. This is easy to do, and examples abound on the internet, probably here on SO as well. You could, for example, use printf like so:
printf "%s\t" ${BASH_VERSINFO[@]}
3 2 57 1
You might also find declare -p helpful.
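For instance, a sketch along those lines (the file names are illustrative; $ARGS.named requires jq 1.6 or later):

# Write the variables to a JSON file without exporting them;
# jq --arg takes care of the quoting.
jq -n --arg PROJECT1 "$PROJECT1" --arg PROJECT2 "$PROJECT2" \
      --arg PROJECT4 "$PROJECT4" '$ARGS.named' > vars.json

# Combine the variables file with the template.
jq --slurpfile vars vars.json '
  with_entries((.value | select(startswith("$"))) |= $vars[0][.[1:]])
' template.json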
See also https://github.com/stedolan/jq/wiki/Cookbook#arbitrary-strings-as-template-variables
I'm working with multiple JSON files that are located in the same folder.
Files contain objects with the same properties and they are such as:
{
"identifier": "cameraA",
"alias": "a",
"rtsp": "192.168.1.1"
}
I want to replace a property for all the objects in the JSON files at the same time for a certain condition.
For example, let's say that I want to replace all the rtsp values of the objects with identifier equal to "cameraA".
I've been trying with something like:
jq 'if .identifier == \"cameraA" then .rtsp=\"cameraX" else . end' -c *.json
But it isn't working.
Is there a simple way to replace the property of an object among multiple JSON files?
jq has no in-place editing option; it can only read from STDIN and write to STDOUT. So the simplest approach would be to process one file at a time, e.g. by putting your jq program inside a shell loop; sponge is often used when employing this approach, as sketched below.
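A minimal sketch of that loop, using a filter along the lines of the question (sponge comes from moreutils):

for f in *.json; do
  jq 'if .identifier == "cameraA" then .rtsp = "cameraX" else . end' "$f" |
    sponge "$f"
done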
However, there is an alternative that has the advantage of efficiency. It requires only one invocation of jq, the output of which would include the filename information (obtained from input_filename). This output would then be the input of an auxiliary process, e.g. awk.
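Here is a sketch of that single-invocation approach (the .new suffix is illustrative; with -r, the filename strings print raw while the objects stay compact JSON):

jq -rc '
  input_filename,
  (if .identifier == "cameraA" then .rtsp = "cameraX" else . end)
' *.json |
awk 'NR % 2 == 1 { fn = $0; next }   # odd lines carry the filename
     { print > (fn ".new") }'        # even lines carry the updated object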
I want to aggregate the JSON present on each line of a file, based on the date and account. There might be multiple records with the same date and account; we have to aggregate the count based on date and account_no.
sample file:
{"date":"2019-04-01","count":0,"account_no":"1591"}
{"date":"2019-04-01","count":1,"account_no":"1592"}
Please suggest some solution.
The number of JSONs in the file is almost 2.5 crore (25 million).
jq using inputs is a good way to go.
First, here's a generic stream-oriented sigma_by function:
# In this formulation, f must either always evaluate to a string or
# always to an integer, it being understood that negative integers
# might be problematic
def sigma_by(s; f; g):
  reduce s as $x (null; .[$x|f] += ($x|g));
Then a solution could be achieved by:
sigma_by(inputs; "\(.date):\(.account_no)"; .count)
provided the -n command-line option is used.
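Putting these together, the complete invocation might look like this (the filename is illustrative):

jq -n '
  def sigma_by(s; f; g):
    reduce s as $x (null; .[$x|f] += ($x|g));
  sigma_by(inputs; "\(.date):\(.account_no)"; .count)
' file.jsonl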
Output
With the sample input, the output would be:
{
  "2019-04-01:1591": 0,
  "2019-04-01:1592": 1
}
Variations
Needless to say, there are many possible variations. In particular, a variant of sigma_by that uses a dictionary of dictionaries might be warranted, e.g. to save space, and to avoid potential parsing issues for recovering the two "aggregate by" strings:
def sigma_by(s; a; b; g):
  reduce s as $x (null; .[$x|a][$x|b] += ($x|g));

sigma_by(inputs; .date; .account_no; .count)
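With the sample input, this variant would produce a nested dictionary:

{
  "2019-04-01": {
    "1591": 0,
    "1592": 1
  }
}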
Note that jq's builtin "group_by" has a significant potential disadvantage for large arrays: it uses a sorting algorithm.
I have a json file like this:
{"caller_id":"123321","cust_name":"abc"}
{"caller_id":"123443","cust_name":"def"}
{"caller_id":"123321","cust_name":"abc"}
{"caller_id":"234432","cust_name":"ghi"}
{"caller_id":"123321","cust_name":"abc"}
....
I tried:
jq -s 'unique_by(.field1)'
but this will remove all the duplicated items. I'm looking to keep just one of the duplicated items, to get a file like this:
{"caller_id":"123321","cust_name":"abc"}
{"caller_id":"123443","cust_name":"def"}
{"caller_id":"234432","cust_name":"ghi"}
....
With field1, I doubt you are getting anything in the output, since there is no key/field with that name. If you simply change your command to jq -s 'unique_by(.caller_id)', it will give you the desired result, containing unique, sorted objects based on the caller_id key. It ensures that the result has exactly one object for each caller_id.
NOTE: Same as what @Jeff Mercado has explained in the comments.
If the file consists of a sequence (stream) of JSON objects, then a very simple way to produce a stream of the distinct objects would be to use the invocation:
jq -s 'unique[]'
A similar alternative would be:
jq -n '[inputs] | unique[]'
For large files, however, the above will probably be too inefficient, both with respect to RAM and run-time. Note that both unique and unique_by entail a sort.
A far better alternative would be to take advantage of the fact that the input is a stream, and to avoid the built-in unique and unique_by filters. This can be done with the assistance of the following filters, which are not yet built-in but likely to become so:
# emit a dictionary
def set(s): reduce s as $x ({}; .[$x | (type[0:1] + tostring)] = $x);
# distinct entities in the stream s
def distinct(s): set(s)[];
We now have only to add:
distinct(inputs)
to achieve the objective, provided jq is invoked with the -n command-line option.
This approach will also preserve the original ordering.
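For concreteness, a complete invocation with the two definitions inlined might look like this (the filename is illustrative):

jq -cn '
  def set(s): reduce s as $x ({}; .[$x | (type[0:1] + tostring)] = $x);
  def distinct(s): set(s)[];
  distinct(inputs)
' file.jsonl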
If the input is an array ...
If the input is an array, then using distinct as defined above still has the advantage of not requiring a sort. For arrays that are too large to fit comfortably in memory, it would be advisable to use jq's streaming parser to create a stream.
One possibility would be to proceed in two steps (jq --stream .... | jq -n ...), but it might be better to do everything in one step (jq -cn --stream ...), using the following "main" program:
distinct(fromstream(inputs
  | (.[0] |= .[1:])
  | select(. != [[]])))
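Putting it all together as a single invocation over an array-valued input (the filename is illustrative):

jq -cn --stream '
  def set(s): reduce s as $x ({}; .[$x | (type[0:1] + tostring)] = $x);
  def distinct(s): set(s)[];
  distinct(fromstream(inputs | (.[0] |= .[1:]) | select(. != [[]])))
' big.json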