Getting the first element of json data with jq

I'm working with the Poloniex API. When I call the returnTicker function, the data comes back like this:
curl "https://poloniex.com/public?command=returnTicker"
{
"BTC_BTS": {
"id": 14,
"last": "0.00000111",
"lowestAsk": "0.00000112",
"highestBid": "0.00000110",
"percentChange": "0.09900990",
"baseVolume": "3.12079869",
"quoteVolume": "2318738.79293715",
"isFrozen": "0",
"high24hr": "0.00000152",
"low24hr": "0.00000098"
},
"BTC_DASH": {
"id": 24,
"last": "0.00466173",
"lowestAsk": "0.00466008",
"highestBid": "0.00464358",
"percentChange": "0.02318430",
"baseVolume": "1.98111396",
"quoteVolume": "425.22973220",
"isFrozen": "0",
"high24hr": "0.00482962",
"low24hr": "0.00450482"
....
},
"USDT_GRT": {
"id": 497,
"last": "0.72811272",
"lowestAsk": "0.75999916",
"highestBid": "0.72740000",
"percentChange": "0.48594450",
"baseVolume": "133995.43411815",
"quoteVolume": "194721.36672887",
"isFrozen": "0",
"high24hr": "0.79000000",
"low24hr": "0.45000020"
},
"TRX_SUN": {
"id": 498,
"last": "500.00000000",
"lowestAsk": "449.99999999",
"highestBid": "100.00000000",
"percentChange": "0.00000000",
"baseVolume": "0.00000000",
"quoteVolume": "0.00000000",
"isFrozen": "0",
"high24hr": "0.00000000",
"low24hr": "0.00000000"
}
}
I want the output to look like this:
BTC_BTS : 14 : 0.00000111 : 0.00000112 : 0.00000110 : 0.09900990 : 3.12079869 : 2318738.79293715 : 0 : 0.00000152 : 0.00000098
...
USDT_GRT : 497 : 0.72428700 : 0.75999958 : 0.72630001 : 0.47813685 : 133968.74968533 : 194695.96886712 : 0 : 0.79000000 : 0.45000020
TRX_SUN : 498 : 500.00000000 : 449.99999999 : 100.00000000 : 0.00000000 : 0.00000000 : 0.00000000 : 0 : 0.00000000 : 0.00000000
I am using jq, and my problem is accessing the currency pair name (the key).
I could get this:
14 : 0.00000111 : 0.00000112 : 0.00000110 : 0.09900990 : 3.12079869 : 2318738.79293715 : 0 : 0.00000152 : 0.00000098
...
497 : 0.72428700 : 0.75999958 : 0.72630001 : 0.47813685 : 133968.74968533 : 194695.96886712 : 0 : 0.79000000 : 0.45000020
498 : 500.00000000 : 449.99999999 : 100.00000000 : 0.00000000 : 0.00000000 : 0.00000000 : 0 : 0.00000000 : 0.00000000
by using this command:
curl "https://poloniex.com/public?command=returnTicker" |jq -r | jq '.[] | (.id|tostring) + " : " + (.last|tostring) + " : " + (.lowestAsk|tostring) + " : " + (.highestBid|tostring) + " : " + (.percentChange|tostring) + " : " + (.baseVolume|tostring) + " : " + (.quoteVolume|tostring) + " : " + (.isFrozen|tostring) + " : " + (.high24hr|tostring) + " : " + (.low24hr|tostring)'|jq -r
Not only that: in every jq pipeline I can't access the first element (the key name) of the JSON.
I don't mean the |jq .BTC_BTS or |jq .USDT_GRT pipelines.
|jq . gives the whole JSON, and |jq .[] gives the inner objects but drops the top-level keys.
How can I access that first path (the key name)?
By the way, I may have written a stupidly long pipeline with jq. If you have any idea how to convert the whole JSON to row-column data, I am open to your ideas.
Thank you all for your answers.

To be safe, it might be better not to assume that the ordering of the keys is the same in all the inner objects. Ergo:
keys_unsorted as $outer
| (.[$outer[0]] | keys_unsorted) as $keys
| $outer[] as $k
| [ $k, .[$k][$keys[]] ]
| join(" : ")
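For example, run directly against the API (with -r for raw output), something like this should print the rows:
curl -s "https://poloniex.com/public?command=returnTicker" | jq -r '
keys_unsorted as $outer
| (.[$outer[0]] | keys_unsorted) as $keys
| $outer[] as $k
| [ $k, .[$k][$keys[]] ]
| join(" : ")'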

I think this does what you want.
curl -s "https://poloniex.com/public?command=returnTicker" | \
jq -r 'to_entries
| .[]
| [ .key, (.value | to_entries | .[] | .value) ]
| join(" : ")'
In a nutshell, put everything in an array and use join to produce the desired output.
Update
As luciole75w notes, my solution has too many steps. This is better.
jq -r 'to_entries[] | [ .key, .value[] ] | join(" : ")'
That said, I would use peak's solution. Mine does not guarantee that the columns are the same for each line.
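If you do want the key plus a fixed column order with the to_entries approach, one option is to list the fields by hand (field names copied from the sample above), for example:
jq -r 'to_entries[] | [ .key, .value.id, .value.last, .value.lowestAsk, .value.highestBid, .value.percentChange, .value.baseVolume, .value.quoteVolume, .value.isFrozen, .value.high24hr, .value.low24hr ] | join(" : ")'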

Related

How to convert specific JSON file to CSV

Here is my sample.json:
{
"process" : {
"pid" : "1462",
"path" : "\/Applications\/Google Chrome.app\/Contents\/Frameworks\/Google Chrome Framework.framework\/Versions\/108.0.5359.98\/Helpers\/Google Chrome Helper.app\/Contents\/MacOS\/Google Chrome Helper",
"signature(s)" : {
"signatureIdentifier" : "com.google.Chrome.helper",
"signatureStatus" : 0,
"signatureSigner" : 3,
"signatureAuthorities" : [
"Developer ID Application: Google LLC (EQHXZ8M8AV)",
"Developer ID Certification Authority",
"Apple Root CA"
]
}
},
"connections" : [
{
"remoteHostName" : "n\/a",
"protocol" : "UDP",
"interface" : "",
"localAddress" : "::",
"state" : "n\/a",
"remotePort" : "0",
"localPort" : "5353",
"remoteAddress" : "::"
},
{
"remoteHostName" : "n\/a",
"protocol" : "TCP",
"interface" : "en0",
"localAddress" : "2a02:560:5424:b200:359c:f801:abab:cd28",
"state" : "Established",
"remotePort" : "443",
"localPort" : "50190",
"remoteAddress" : "2600:1f18:60d5:4e03:ffe8:813e:6d1a:d379"
}
]
}
I would like to create a custom CSV from this data to see all connections by process id (pid), but I can't get it to work.
What I have so far:
cat sample.json | jq '[.process.pid], (.connections | .[])'
Thanks in advance for your help!
jq -r '{pid: .process.pid} + .connections[] | to_entries | map(.value) | @csv' input.json
Output
"1462","n/a","UDP","","::","n/a","0","5353","::"
"1462","n/a","TCP","en0","2a02:560:5424:b200:359c:f801:abab:cd28","Established","443","50190","2600:1f18:60d5:4e03:ffe8:813e:6d1a:d379"

Text file to json using JQ command

I have a text file with data like below:
6111119268639|22|65024:3|2000225350|Samsung|ADD|234534643645|REMOVE|5645657|65067:3|Apple|ADD|234534643645|REMOVE|3432523|65023:3
6111119268639|22|65024:3|2000225350|Apple|ADD|234534643645|REMOVE|3432523|65023:3
6111119268639|22|65024:3|2000225350|Samsung|ADD|234534643645|REMOVE|3432523|65023:3
and so on ...
I want JSON output like this:
[{
"ExternalId": "6111119268639",
"ExternalIdType": "22",
"RPPI": "65024:3",
"NewPrimaryOfferId": "2000225350",
"Samsung": [{
"Action": "ADD",
"NewSecondaryOfferId": "234534643645"
},
{
"Action": "REMOVE",
"SecondaryProductOfferId": "5645657",
"RemoveSecondaryProductInstance": "65067:3"
}
],
"Apple": [
{
"Action": "ADD",
"NewComponentOfferId": "234534643645"
},
{
"Action": "REMOVE",
"ComponentOfferId": "3432523",
"RemoveAddOnProductInstance": "65023:3"
}
]
},
{
"ExternalId": "6111119268639",
"ExternalIdType": "22",
"RPPI": "65024:3",
"NewPrimaryOfferId": "2000225350",
"Apple": [{
"Action": "ADD",
"NewComponentOfferId": "234534643645"
},
{
"Action": "REMOVE",
"ComponentOfferId": "3432523",
"RemoveAddOnProductInstance": "65023:3"
}
]
},
{
"ExternalId": "6111119268639",
"ExternalIdType": "22",
"RPPI": "65024:3",
"NewPrimaryOfferId": "2000225350",
"Apple": [{
"Action": "Samsung",
"NewComponentOfferId": "234534643645"
},
{
"Action": "REMOVE",
"ComponentOfferId": "3432523",
"RemoveAddOnProductInstance": "65023:3"
}
]
}
]
Here ExternalId, ExternalIdType, RPPI and NewPrimaryOfferId are constant and will be there in every line. But Samsung and Apple can vary: there could be only 'Samsung' in one line, only 'Apple' in another, or both, as shown in the sample text.
I have written a jq command for this:
jq -Rn '[inputs / "|" | [[
["ExternalId"],["ExternalIdType"],["RPPI"],["NewPrimaryOfferId"],
(("Samsung", "Apple") as $p |
[$p, 0] + (["Action"], ["NewSecondaryOfferId"]),
[$p, 1] + (["Action"], ["SecondaryProductOfferId"], ["RemoveSecondaryProductInstance"])
)
],.] | transpose | reduce .[] as $k ({}; setpath($k[0];$k[1]))]' data.txt
But it is not giving me the desired output. Please suggest how I can write the jq command for this (using an if-else condition for the products), or any shell script, to get the desired JSON output. Thanks in advance!
Another approach:
jq -Rn '
[
inputs / "|" | reduce (.[4:] | while(. != [];.[6:])) as $prod (
.[:4] | with_entries(.key |= ["ExternalId","ExternalIdType","RPPI","NewPrimaryOfferId"][.]);
.[$prod[0]] = [
{Action:"ADD", NewComponentOfferId:$prod[2]},
{Action:"REMOVE", ComponentOfferId:$prod[4], RemoveAddOnProductInstance:$prod[5]}
]
)
]
' data.txt
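The idea, as far as I can reconstruct it from the sample lines: after the four fixed fields, each product occupies six fields (name, ADD, offer id, REMOVE, offer id, instance). while(. != []; .[6:]) repeatedly drops six fields, yielding the remainder of the row starting at each product, and $prod[0], $prod[2], $prod[4] and $prod[5] pick out the product name and the ADD/REMOVE details.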
This seems to work on your test data:
jq -nR '
def offer:
. as $data |
[[], 0] | until([$data[.[1]]] | inside(["ADD", "REMOVE"]) | not;
if $data[.[1]] == "ADD" then
[ .[0] + [{ Action: "ADD", NewComponentOfferId: $data[.[1] + 1] }], .[1] + 2 ]
else
[ .[0] + [{ Action: "REMOVE", ComponentOfferId: $data[.[1] + 1],
RemoveAddOnProductInstance: $data[.[1] + 2] }], .[1] + 3 ]
end);
def build:
(. / "|") as $data | ($data | length) as $len |
[ { ExternalId: $data[0], ExternalIdType: $data[1], RPPI: $data[2],
NewPrimaryOfferId: $data[3] }, 4 ] |
until(.[1] >= $len;
($data[.[1]+1:] | offer) as $off |
[ .[0] + { ($data[.[1]]): $off[0] }, .[1] + 1 + $off[1] ]) |
.[0];
[ inputs | build ]' data.txt
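Roughly: offer scans the ADD/REMOVE entries from a given position, returning both the parsed objects and the number of fields it consumed, and build walks each line, attaching one product key per group until the row is exhausted.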

Import JSON from CSV, grouping by multiple fields

I would like to create a JSON with array of nested objects with a grouping for different fields.
This is the CSV, and I would like to group it by sid, year and quarter (the first three fields):
S4446B3,2020,202001,2,345.45
S4446B3,2020,202001,4,24.44
S4446B3,2021,202102,5,314.55
S6506LK,2020,202002,3,376.55
S6506LK,2020,202003,3,76.23
After splitting the CSV with the following, I get an object for each record:
split("\n")
| map(split(","))
| .[0:]
| map({"sid" : .[0], "year" : .[1], "quarter" : .[2], "customer_type" : .[3], "obj" : .[4]})
But for each sid I would like to get an array of nested objects like this:
[
{
"sid" : "S4446B3",
"years" : [
{
"year" : 2020,
"quarters" : [
{
"quarter" : 202001,
"customer_type" : [
{
"type" : 2,
"obj" : "345.45"
},
{
"type" : 4,
"obj" : "24.44"
}
]
}
]
},
{
"year" : 2021,
"quarters" : [
{
"quarter" : 202102,
"customer_type" : [
{
"type" : 5,
"obj" : "314.55"
}
]
}
]
}
]
},
{
"sid" : "S6506LK",
"years" : [
{
"year" : 2020,
"quarters" : [
{
"quarter" : 202002,
"customer_type" : [
{
"type" : 3,
"obj" : "376.55"
}
]
},
{
"quarter" : 202003,
"customer_type" : [
{
"type" : 3,
"obj" : "76.23"
}
]
}
]
}
]
}
]
It'd be more intuitive if sid, year, quarter, etc. were the key names. With the -R/--raw-input and -n/--null-input options on the command line, this will do that:
reduce (inputs / ",")
as [$sid, $year, $quarter, $type, $obj]
(.; .[$sid][$year][$quarter] += [{$type, $obj}])
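On the sample data, this intermediate form looks like this (first sid only; note the values stay strings because they come from splitting raw text):
{
"S4446B3": {
"2020": { "202001": [ { "type": "2", "obj": "345.45" }, { "type": "4", "obj": "24.44" } ] },
"2021": { "202102": [ { "type": "5", "obj": "314.55" } ] }
}
}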
Then, to get your expected output, you can append these lines to the above program:
| .[][] |= (to_entries | map({quarter: .key, customer_type: .value}))
| .[] |= (to_entries | map({year: .key, quarters: .value}))
| . |= (to_entries | map({sid: .key, years: .value}))
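Putting it together on one command line (assuming the CSV is in input.csv):
jq -Rn '
reduce (inputs / ",") as [$sid, $year, $quarter, $type, $obj]
(.; .[$sid][$year][$quarter] += [{$type, $obj}])
| .[][] |= (to_entries | map({quarter: .key, customer_type: .value}))
| .[] |= (to_entries | map({year: .key, quarters: .value}))
| . |= (to_entries | map({sid: .key, years: .value}))
' input.csv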

JSON file to CSV file conversion using jq

I am trying to convert my JSON file to a CSV file using jq. Below is the sample input file, events.json:
{
"took" : 111,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 2,
"relation" : "eq"
},
"max_score" : 1.0,
"hits" : [
{
"_index" : "alerts",
"_type" : "_doc",
"_id" : "1",
"_score" : 1.0,
"_source" : {
"alertID" : "639387c3-0fbe-4c2b-9387-c30fbe7c2bc6",
"alertCategory" : "Server Alert",
"description" : "Successfully started.",
"logId" : null
}
},
{
"_index" : "alerts",
"_type" : "_doc",
"_id" : "2",
"_score" : 1.0,
"_source" : {
"alertID" : "2",
"alertCategory" : "Server Alert",
"description" : "Successfully stoped.",
"logId" : null
}
}
]
}
}
My rows in the CSV should have the data inside each _source object. So my columns would be alertId, alertCategory, description and logId, with their respective data.
I tried the below command:
jq --raw-output '.hits[] | [."alertId",."alertCategory",."description",."logId"] | @csv' < /root/events.json
and it's not working.
Can anyone help me with this?
Your path expression is not right: you have a hits array inside an object named hits, and the fields you are trying to put in the CSV are present under the _source object.
So your expression should have been as below. Use it along with the -r flag to get raw output:
.hits.hits[]._source | [ .alertID, .alertCategory, .description, .logId ] | @csv
If a field is null, its string representation in the output is just empty. If you want an explicit "null" string, use the alternative operator // on the field you expect to be null, e.g. instead of .logId you can write (.logId // "null").
To add the column names as a header row to the CSV output, you could use @csv or join(",") together with the raw output flag -r:
[ "alertId" , "alertCategory" , "description", "logId" ],
( .hits.hits[]._source | [ .alertID, .alertCategory, .description, .logId // "null" ]) | @csv
or
[ "alertId" , "alertCategory" , "description", "logId" ],
( .hits.hits[]._source | [ .alertID, .alertCategory, .description, .logId // "null" ]) | join(",")
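With the sample events.json above, the @csv variant would print something like:
"alertId","alertCategory","description","logId"
"639387c3-0fbe-4c2b-9387-c30fbe7c2bc6","Server Alert","Successfully started.","null"
"2","Server Alert","Successfully stoped.","null"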

jq Filter on sub object value

I have a json file people.json:
{
"Joe" : {"Job" : "Clown", "Age" : 22},
"Sally" : {"Job" : "Programmer", "Age" : 32},
"Anne" : {"Job" : "Clown", "Age" : 29}
}
I would like to select everyone who is a Clown. My output should look like this:
{
"Joe" : {"Job" : "Clown", "Age" : 22},
"Anne" : {"Job" : "Clown", "Age" : 29}
}
I have tried the .. operator as in
cat people.json | jq '. | map(select(.Job == "Clown"))'
But it seems to match Joe and Anne at multiple levels and produces more output than I want. Any ideas? Thanks.
Use with_entries to convert to/from an intermediate format that represents the data as an array of objects with key and value fields:
cat people.json | jq 'with_entries(select(.value.Job == "Clown"))'
as per the docs here: http://stedolan.github.io/jq/manual/
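To see that intermediate format, you can run to_entries on its own; for the sample the first entry looks like {"key": "Joe", "value": {"Job": "Clown", "Age": 22}}. with_entries applies the select to each such entry and converts back with from_entries.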
Here is a solution using reduce
. as $v
| reduce keys[] as $k (
{};
if $v[$k].Job == "Clown" then .[$k] = $v[$k] else . end
)