Break JSON in pager "less" - json

I have been using the pager less for 20 years.
Times change, and I often look at files containing JSON.
A JSON dict that sits on a single line is not easy for me to read.
Is there a way to break the JSON into key-value pairs when looking at a log file?
Example:
How to display a line in a log file which looks like this:
{"timestamp": "2019-05-13 14:40:51", "name": "foo.views.error", "log_intent": "line1\nline2" ...}
roughly like this:
"timestamp": "2019-05-13 14:40:51"
"name": "foo.views.error"
"log_intent": "line1
line2"
....
I am not married to the pager less; if there is a better tool, please leave a comment.

In your case, the log file seems to consist of one JSON document per line, so you can use jq to preformat the log file before piping it to less:
jq -s . file.log | less
With colors:
jq -Cs . file.log | less -r
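To see what jq does here without leaving the shell, you can try it on a throwaway file (demo.log is just an illustrative name):

```shell
# One JSON document on a single line, as in the log file above.
printf '%s\n' '{"timestamp": "2019-05-13 14:40:51", "name": "foo.views.error"}' > demo.log

# Pretty-print: one key-value pair per line.
jq . demo.log

# The same, colorized and paged; -C forces colors through the pipe,
# and less -R renders the ANSI escape sequences.
jq -C . demo.log | less -R
```

Note that both -r and -R make less pass control characters through; -R is the safer choice when all you need is ANSI colors.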

Related

Providing a very large argument to a jq command to filter on keys

I am trying to parse a very large file which consists of JSON objects like this:
{"id": "100000002", "title": "some_title", "year": 1988}
Now I also have a very big list of IDs that I want to extract from the file, if they are there.
Now I know that I can do this:
jq '[ .[map(.id)|indices("1", "2")[]] ]' 0.txt > p0.json
Which produces the result I want, namely it fills p0.json with only the objects that have "id" 1 and "2". Now comes the problem: my list of ids is very long too (100k or so). So I have a Python program that outputs the relevant ids. My line of thought was to first assign that to a variable:
REL_IDS=`echo python3 rel_ids.py`
And then do:
jq --arg ids "$REL_IDS" '[ .[map(.id)|indices($ids)[]] ]' 0.txt > p0.json
I tried both with brackets [$ids] and without brackets, but no luck so far.
My question is: given a large number of arguments for the filter, how would I proceed with putting them into my jq command?
Thanks a lot in advance!
Since the list of ids is long, the trick is NOT to use --arg. The specifics, however, will depend on the exact form of your "long list of ids".
In general, though, you'd want to present the list of ids to jq as a file so that you could use --rawfile or --slurpfile or some such.
If for some reason you don't want to bother with an actual file, then provided your shell allows it, you could use these file-oriented options with process substitution: <( ... )
Example
Assuming ids.json contains a listing of the ids as JSON strings:
"1"
"2"
"3"
then one could write:
< objects.json jq -c -n --slurpfile ids ids.json '
inputs | . as $in | select( $ids | index($in.id))'
Notice the use of the -n command-line option.
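For example, assuming the ids come from a script that prints one JSON string per line (simulated below by a shell function standing in for python3 rel_ids.py), and that your shell is bash or zsh:

```shell
# Stand-in for `python3 rel_ids.py`: emits the wanted ids as JSON strings.
emit_ids() { printf '"%s"\n' 100000002 100000005; }

printf '%s\n' \
  '{"id": "100000002", "title": "some_title", "year": 1988}' \
  '{"id": "100000007", "title": "other_title", "year": 1990}' > objects.json

# Process substitution feeds the ids to --slurpfile without a temporary file.
jq -c -n --slurpfile ids <(emit_ids) \
  'inputs | . as $in | select($ids | index($in.id))' objects.json
```

Keep in mind that index performs a linear search per object; with 100k ids, building a lookup object from the list first will be considerably faster.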

Using jq to concatenate directory of JSON files

I have a directory of about 100 JSON files, each an array of 100 simple records, that I want to concatenate into one file for inclusion as static data in an app, so I don't have to make repeated API calls to retrieve small pieces. (I'm limited to downloading only 100 records at a time; that's why I have 100 short files.)
Here's a sample file, shortened to two records for display here:
[
{
"id": 11531,
"title": "category 1",
"count": 5
},
{
"id": 11532,
"title": "category 2",
"count": 5
}
]
My research led to a solution that works but only for two files with two records each:
jq -s '.[0] + .[1]' file1.json file2.json > output.json
This source also suggested this line would work to handle a directory (right now only two files in it):
jq -s 'reduce .[] as $item ({}; . * $item)' json_files/* > output.json
but I get an error:
jq: error (at json_files/categories-11-20.json:0): object ({}) and array ([{"id":1153...) cannot be multiplied
I thought maybe the problem was the `*` trying to multiply, so I tried + in that place, but then I get a ... cannot be added. message.
Is there a way to do this through jq or is there a better tool?
The simplest and perfectly reasonable approach would be to use the -s command-line option together with add, along the following lines:
jq -s add json_files/*
Of course you may wish to specify the list of files differently. The order in which they are specified is also significant.
Notes:
This Q is really just a variant of Use jq to concatenate JSON arrays in multiple files
reduce can also be used, but you would need to start with null or [] rather than {}.
The operator '*' is (not surprisingly) quite different from '+'!
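To illustrate the reduce variant from the notes, here is a sketch with two tiny made-up files:

```shell
printf '%s\n' '[{"id": 1}]' > a.json
printf '%s\n' '[{"id": 2}]' > b.json

# Same result as `jq -s add`, but spelled out with reduce; note the
# starting value [] and the + operator (starting with {} and using *
# is what produced the "cannot be multiplied" error).
jq -s 'reduce .[] as $item ([]; . + $item)' a.json b.json
```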

Retrieving the first entity out of several ones

I am a rank beginner with jq, and I've been going through the tutorial, but I think there is a conceptual difference I don't understand. A common problem I encounter is that a large JSON file will contain many objects, each of which is quite big, and I'd like to view the first complete object, to see which fields exist, what types, how much nesting, etc.
In the tutorial, they do this:
# We can use jq to extract just the first commit.
$ curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq '.[0]'
Here is an example with one object - here, I'd like to return the whole array (just like my_array=['foo']; my_array[0] would return foo in Python).
wget https://hacker-news.firebaseio.com/v0/item/8863.json
I can access and pretty-print the whole thing with .
$ cat 8863.json | jq '.'
{
"by": "dhouston",
"descendants": 71,
"id": 8863,
"kids": [
9224,
...
8876
],
"score": 104,
"time": 1175714200,
"title": "My YC app: Dropbox - Throw away your USB drive",
"type": "story",
"url": "http://www.getdropbox.com/u/2/screencast.html"
}
But trying to get the first element fails:
$ cat 8863.json | jq '.[0]'
jq: error (at <stdin>:0): Cannot index object with number
I get the same error with jq '.[0]' 8863.json, but strangely echo 8863.json | jq '.[0]' gives me parse error: Invalid numeric literal at line 2, column 0. What is the difference? Also, is this not the correct way to get the zeroth member of the JSON?
I've looked at other SO posts with this error message and at the manual, but I'm still confused. I think of the file as an array of JSON objects, and I'd like to get the first. But it looks like jq works with something called a "stream", and does operations on all of it (say, return one given field from every object).
Clarification:
Let's say I have 2 objects in my JSON:
{
"by": "pg",
"id": 160705,
"poll": 160704,
"score": 335,
"text": "Yes, ban them; I'm tired of seeing Valleywag stories on News.YC.",
"time": 1207886576,
"type": "pollopt"
}
{
"by": "dpapathanasiou",
"id": 16070,
"kids": [
16078
],
"parent": 16069,
"text": "Dividends don't mean that much: Microsoft in its dominant years (when they had 40%-plus margins and were raking in the cash) never paid a dividend (they did so only recently).",
"time": 1177355133,
"type": "comment"
}
How would I get the entire first object (lines 1-9) with jq?
Cannot index object with number
This error message says it all: you can't index objects with numbers. If you want to get the value of the by field, you need to do
jq '.by' file
Wrt
echo 8863.json | jq '.[0]' gives me parse error: Invalid numeric literal at line 2, column 0.
That is expected: without the -R/--raw-input flag, jq tries to parse its input as JSON, and the text 8863.json is not valid JSON, hence the parse error. Even with -R, which would read the input as a JSON string, array indexing cannot be applied to strings. (To get the first character as a string, you'd write .[0:1].)
If your input file consists of several separate entities, to get the first one:
jq -n 'input' file
or,
jq -n 'first(inputs)' file
To get the nth entity (nth counts from 0, so the following yields the sixth):
jq -n 'nth(5; inputs)' file
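For instance, with a small file holding a stream of three made-up objects:

```shell
printf '%s\n' '{"id": 1}' '{"id": 2}' '{"id": 3}' > stream.json

jq -n 'input' stream.json           # first entity
jq -n 'nth(1; inputs)' stream.json  # second entity (nth counts from 0)
```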
a large JSON file will contain many objects, each of which is quite big, and I'd like to view the first complete object, to see which fields exist, what types, how much nesting, etc.
As implied in @OguzIsmail's response, there are important differences between:
- a JSON file (i.e, a file containing exactly one JSON entity);
- a file containing a sequence (i.e., stream) of JSON entities;
- a file containing an array of JSON entities.
In the first two cases, you can write jq -n input to select the first entity; in the case of an array of entities, jq '.[0]' will suffice.
(In JSON-speak, a "JSON object" is a kind of dictionary, and is not to be confused with JSON entities in general.)
If you have a bunch of JSON objects (whether as a stream or array or whatever), just looking at the first often doesn't really give an accurate picture of all of them. For getting a bird's eye view of a bunch of objects, using a "schema inference engine" is often the way to go. For this purpose, you might like to consider my schema.jq schema inference engine. It's usually very simple to use, but how you use it will of course depend on whether you have a stream or an array of JSON entities. For basic details, see https://gist.github.com/pkoppstein/a5abb4ebef3b0f72a6ed; for related topics (e.g. verification), see the entry for JESS at https://github.com/stedolan/jq/wiki/Modules
Please note that schema.jq infers a structural schema that mirrors the entities under consideration. Such structural schemas have little in common with JSON Schema schemas, which you might also like to consider.

Using jq to combine multiple JSON files

First off, I am not an expert with JSON files or with JQ. But here's my problem:
I am simply trying to download card data (for the MtG card game) through an API, so I can use it in my own spreadsheets etc.
The card data from the API comes in pages, since there is so much of it, and I am trying to find a nice command line method in Windows to combine the files into one. That will make it nice and easy for me to use the information as external data in my workbooks.
The data from the API looks like this:
{
"object": "list",
"total_cards": 290,
"has_more": true,
"next_page": "https://api.scryfall.com/cards/search?format=json&include_extras=false&order=set&page=2&q=e%3Alea&unique=cards",
"data": [
{
"object": "card",
"id": "d5c83259-9b90-47c2-b48e-c7d78519e792",
"oracle_id": "c7a6a165-b709-46e0-ae42-6f69a17c0621",
"multiverse_ids": [
232
],
"name": "Animate Wall",
......
},
{
"object": "card",
......
}
]
}
Basically I need to take what's inside the "data" part from each file after the first, and merge it into the first file.
I have tried a few examples I found online using jq, but I can't get it to work. I think it might be because in this case the data is sort of under an extra level, since there is some basic information, then the "data" category is beneath it. I don't know.
Anyway, any help on how to get this going would be appreciated. I don't know much about this, but I can learn quickly so even any pointers would be great.
Thanks!
To merge the .data elements of all the responses into the first response, you could run:
jq 'reduce inputs.data as $s (.; .data += $s)' page1.json page2.json ...
Alternatives
You could use the following filter in conjunction with the -n command-line option:
reduce inputs as $s (input; .data += ($s.data))
Or if you simply want an object of the form {"data": [ ... ]} then (again assuming you invoke jq with the -n command-line option) the following jq filter would suffice:
{data: [inputs.data] | add}
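A minimal end-to-end check of the first approach, using two made-up one-card pages:

```shell
printf '%s\n' '{"object": "list", "data": [{"name": "Animate Wall"}]}' > page1.json
printf '%s\n' '{"object": "list", "data": [{"name": "Armageddon"}]}'  > page2.json

# Append each later page's .data onto the first response's .data:
jq 'reduce inputs.data as $s (.; .data += $s)' page1.json page2.json
```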
Just to provide closure, @peak provided the solution. I am using it in conjunction with the method found here for using wildcards in batch files to address multiple files. The code looks like this now:
set expanded_list=
for /f "tokens=*" %%F in ('dir /b /a:-d "All Cards\!setname!_*.json"') do call set expanded_list=!expanded_list! "All Cards\%%F"
jq-win32 "reduce inputs.data as $s (.; .data += $s)" !expanded_list! > "All Cards\!setname!.json"
All the individual pages for each card set are named "setname"_"pagenumber".json
The code finds all the pages for each set and combines them into one variable which I can pass into jq.
Thanks again!

Alter log file date with the command sed?

I have the following line multiple times in a log file, with other data.
I'd like to analyze this data by first importing the JSON part into a MongoDB and then running selected queries over it.
DEBUG 2015-04-18 23:13:23,374 [TEXT] (Class.java:19) - {"a":"1", "b":"2", ...}
To alter the data so that I get just the JSON part, I use:
cat mylog.log | sed "s/DEBUG.*19) - //g" > mylog.json
The main problem here is that I'd like to keep the date and time parts as well, as additional JSON values, to get something like this:
{"date": "2015-04-18", "time":"23:13:26,374", "a":"1", "b":"2", ...}
Here is the main question: how can I do this using the Linux console and the command sed, or an alternative console command?
Thanks in advance.
Since this appears to be a very rigid format, you could probably use sed like so:
sed 's/DEBUG \([^ ]*\) \([^ ]*\).*19) - {/{ "date": "\1", "time": "\2", /' mylog.log
Where [^ ]* matches a sequence of non-space characters and \(regex\) is a capturing group that makes a matched string available for use in the replacement as \1, \2, and so forth depending on its position. You can see these used in the replacement part.
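Running the sed command over the sample line from the question confirms the result:

```shell
line='DEBUG 2015-04-18 23:13:23,374 [TEXT] (Class.java:19) - {"a":"1", "b":"2"}'
printf '%s\n' "$line" |
  sed 's/DEBUG \([^ ]*\) \([^ ]*\).*19) - {/{ "date": "\1", "time": "\2", /'
# prints: { "date": "2015-04-18", "time": "23:13:23,374", "a":"1", "b":"2"}
```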
If it were me, though, I'd use Perl for its ability to split a line into fields and match non-greedily:
perl -ape 's/.*?{/{ "date": "$F[1]", "time": "$F[2]", /' mylog.log
The latter replaces everything up to the first { (because .*? matches non-greedily) with the string you want. $F[1] and $F[2] are the second and third whitespace-delimited fields in the line; -a makes Perl split the line into the @F array this way.