I'm trying to filter JSON output with jq. The filter below works as expected when the software version is found; however, when the software version is not present, jq raises an error. How do I make the parenthesized expression fail gracefully and return an empty value in the CSV file?
.result[] | [
  "https://vuldb.com/?id." + .entry.id,
  .software.vendor // "empty",
  .software.name // "empty",
  (.software.version[] | tostring // "empty"),
  .software.type // "empty",
  .software.platform // "empty"
]
"result": [
{
"entry": {
"id": "206880",
"title": "CrowdStrike Falcon 6.31.14505.0\/6.42.15610 Uninstallation authorization",
"summary": "A vulnerability was found in CrowdStrike Falcon 6.31.14505.0\/6.42.15610. It has been classified as problematic. Affected is some unknown functionality of the component Uninstallation Handler. There is no information about possible countermeasures known. It may be suggested to replace the affected object with an alternative product.",
"details": {
"affected": "A vulnerability was found in CrowdStrike Falcon 6.31.14505.0\/6.42.15610. It has been classified as problematic.",
"vulnerability": "CWE is classifying the issue as CWE-862. The software does not perform an authorization check when an actor attempts to access a resource or perform an action.",
"impact": "This is going to have an impact on availability.",
"exploit": "It is declared as functional. The vulnerability was handled as a non-public zero-day exploit for at least 54 days. During that time the estimated underground price was around $0-$5k.",
"countermeasure": "There is no information about possible countermeasures known. It may be suggested to replace the affected object with an alternative product.",
"sources": "Further details are available at modzero.com."
},
"timestamp": {
"create": "1661155277",
"change": "1661155462"
},
"changelog": [
"vulnerability_cvss3_meta_basescore",
"vulnerability_cvss3_meta_tempscore",
"vulnerability_cvss3_researcher_basescore"
]
},
"software": {
"vendor": "CrowdStrike",
"name": "Falcon",
"version": [
"6.31.14505.0",
"6.42.15610"
],
To suppress the error and provide an alternative for missing values, combine the error-suppression operator ? with the alternative operator //:
.result | map(
"https://vuldb.com/?id.\(.entry.id)",
(
.software |
.vendor // "empty",
.name // "empty",
(.version[] | tostring // "empty")? // "no version",
.type // "empty",
.platform // "empty"
)
)
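If you ever need the same fallback logic outside jq, it is easy to mirror in plain Python. A minimal sketch (the field names follow the vuldb sample above; the `rows` helper and the inlined test document are my own illustrative choices):

```python
import json

def rows(doc):
    """Yield one CSV-ready row per entry, substituting "empty" for
    missing fields -- the Python analogue of jq's `//` operator."""
    for item in doc.get("result", []):
        sw = item.get("software") or {}
        # A missing version array becomes a single "empty" placeholder,
        # so every entry still produces at least one row.
        versions = sw.get("version") or ["empty"]
        for v in versions:
            yield [
                "https://vuldb.com/?id." + item["entry"]["id"],
                sw.get("vendor") or "empty",
                sw.get("name") or "empty",
                str(v),
                sw.get("type") or "empty",
                sw.get("platform") or "empty",
            ]

doc = json.loads('{"result":[{"entry":{"id":"206880"},'
                 '"software":{"vendor":"CrowdStrike","name":"Falcon"}}]}')
print(list(rows(doc)))
```

Each row degrades gracefully instead of erroring, which is exactly what the `//` chain achieves in the jq version.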
Here is a simplified JSON version of a Terraform state file (let's call it dev.tfstate):
{
"version": 4,
"terraform_version": "0.12.9",
"serial": 2,
"lineage": "ba56cc3e-71fd-1488-e6fb-3136f4630e70",
"outputs": {},
"resources": [
{
"module": "module.rds.module.reports_cpu_warning",
"mode": "managed",
"type": "datadog_monitor",
"name": "alert",
"each": "list",
"provider": "module.rds.provider.datadog",
"instances": []
},
{
"module": "module.rds.module.reports_lag_warning",
"mode": "managed",
"type": "datadog_monitor",
"name": "alert",
"each": "list",
"provider": "module.rds.provider.datadog",
"instances": []
},
{
"module": "module.rds.module.cross_region_replica_lag_alert",
"mode": "managed",
"type": "datadog_monitor",
"name": "alert",
"each": "list",
"provider": "module.rds.provider.datadog",
"instances": []
},
{
"module": "module.rds",
"mode": "managed",
"type": "aws_db_instance",
"name": "master",
"provider": "provider.aws",
"instances": [
{
"schema_version": 0,
"attributes": {
"address": "dev-database.123456.us-east-8.rds.amazonaws.com",
"allocated_storage": 10,
"password": "",
"performance_insights_enabled": false,
"tags": {
"env": "development"
},
"timeouts": {
"create": "6h",
"delete": "6h",
"update": "6h"
},
"timezone": "",
"username": "admin",
"vpc_security_group_ids": [
"sg-1234"
]
},
"private": ""
}
]
}
]
}
There are many modules at the same level as module.rds inside the resources array. I removed many of them to create this simplified version of the raw data. The key takeaway: do not assume the array index will be constant in all cases.
I wanted to extract the password field in the above example.
My first attempt was to use an equality check to extract the relevant modules:
`jq '.resources[].module == "module.rds"' dev.tfstate`
but it just produced a list of boolean values. I don't see any mention of a builtin function like filter in jq's manual.
Then I tried to just access the field:
> jq '.resources[].module[].attributes[].password?' dev.tfstate
which throws the following error:
jq: error (at dev.tfstate:1116): Cannot iterate over string ("module.rds")
So what is the best way to extract the value? Ideally it would focus on the password attribute in the module.rds module only.
Edit:
My purpose is to detect whether a password has been left inside a state file. I want to ensure that passwords are stored exclusively in AWS Secrets Manager.
You can extract the module you want like this.
jq '.resources[] | select(.module == "module.rds")'
I'm not confident that I understand the requirements for the rest of the solution. So this might not only not be the best way of doing what you want; it might not do what you want at all!
If you know where password will be, you can do this.
jq '.resources[] | select(.module == "module.rds") | .instances[].attributes.password'
If you don't know exactly where password will be, this is a way of finding it.
jq '.resources[] | select(.module == "module.rds") | .. | .password? | values'
According to the manual, under the heading "Recursive Descent", ..|.a? will "find all the values of object keys “a” in any object found “below”".
values filters out the null results.
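The select-then-recurse approach is easy to reproduce outside jq as well. A rough Python equivalent of `.. | .password? | values` (the `find_key` helper is my own name, not part of any library):

```python
import json

def find_key(node, key):
    """Recursively yield every value stored under `key`, mirroring
    jq's recursive descent `.. | .password? | values`."""
    if isinstance(node, dict):
        if key in node:
            yield node[key]
        for v in node.values():
            yield from find_key(v, key)
    elif isinstance(node, list):
        for v in node:
            yield from find_key(v, key)

state = json.loads("""
{"resources": [{"module": "module.rds",
  "instances": [{"attributes": {"password": "hunter2"}}]}]}
""")
# Select the module first, then search anywhere beneath it.
rds = [r for r in state["resources"] if r.get("module") == "module.rds"]
print([p for r in rds for p in find_key(r, "password")])  # ['hunter2']
```

Like the jq `values` filter, a missing key simply yields nothing rather than an error.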
You could also get the password value out of the state file without jq by using Terraform outputs. Your module should define an output with the value you want, and you should also expose that output at the root module.
Without seeing your Terraform code, you'd want something like this:
modules/rds/main.tf
resource "aws_db_instance" "master" {
# ...
}
output "password" {
value = aws_db_instance.master.password
sensitive = true
}
example/main.tf
module "rds" {
source = "../modules/rds"
# ...
}
output "rds_password" {
value = module.rds.password
sensitive = true
}
The sensitive = true parameter means that Terraform won't print the output to stdout when running terraform apply but it's still held in plain text in the state file.
To then access this value without jq you can use the terraform output command which will retrieve the output from the state file and print it to stdout. From there you can use it however you want.
I have a main JSON file:
{
"swagger": "2.0",
"paths": {
"/agents/delta": {
"get": {
"description": "lorem ipsum doram",
"operationId": "getagentdelta",
"summary": "GetAgentDelta",
"tags": [
"Agents"
],
"parameters": [
{
"name": "since",
"in": "query",
"description": "Format - date-time (as date-time in RFC3339). The time from which you need changes from. You should use the format emitted by Date's toJSON method (for example, 2017-04-23T18:25:43.511Z). If a timestamp older than a week is passed, a business rule violation will be thrown which will require the client to change the from date. As a best-practice, for a subsequent call to this method, send the timestamp when you <b>started</b> the previous delta call (instead of when you completed processing the response or the max of the lastUpdateOn timestamps of the returned records). This will ensure that you do not miss any changes that occurred while you are processing the response from this method",
"required": true,
"type": "string"
}
]
}
}
}
}
And I have a smaller JSON file:
{
"name": "Authorization",
"description": "This parameter represents the Authorization token obtained from the OKTA Authorization server. It is the Bearer token provided to authorize the consumer. Usage Authorization : Bearer token",
"in": "header",
"required": true,
"type": "string"
}
Now I need to add the contents of the smaller JSON file into the main JSON file's parameters array.
I tried the below command
cat test.json | jq --argfile sub Sub.json '.paths./agents/delta.get.parameters[ ] += $sub.{}' > test1.json
But I get the below error:
jq: error: syntax error, unexpected '{', expecting FORMAT or QQSTRING_START (Unix shell quoting issues?) at <top-level>, line 1:
.paths += $sub.{}
jq: 1 compile error
cat: write error: Broken pipe
I tried this command.
cat test.json | jq '.paths./agents/delta.get.parameters[ ] | = (.+ [{ "name": "Authorization", "description": "This parameter represents the Authorization token obtained from the OKTA Authorization server. It is the Bearer token provided to authorize the consumer. Usage Authorization : Bearer token", "in": "header", "required": true, "type": "string" }] )' > test1.json
And I get no error and no output either. How do I get around this?
I would have to add the contents of the smaller JSON file directly first. Then, at a later stage, I would search for an existing name: Authorization entry and its other parameters, and remove and replace the whole name: Authorization piece with the actual contents of the smaller JSON file, under each path that starts with '/xx/yyy'.
Edited to add:
For the last part of the question, I could not use the walk function: I have jq 1.5, and since I am using the bash task within Azure DevOps, I can't update the jq installation to one that ships walk.
Meanwhile I found something similar to a wildcard in jq, and was wondering why I can't use it in this way:
jq --slurpfile newval auth.json '.paths | .. | objects | .get.parameters += $newval' test.json > test1.json
Can anyone please point out the issue in the above command? It did not work, and I am not sure why.
You want --slurpfile, and you need to quote the /agents/delta part of the path:
$ jq --slurpfile newval insert.json '.paths."/agents/delta".get.parameters += $newval' main.json
{
"swagger": "2.0",
"paths": {
"/agents/delta": {
"get": {
"description": "lorem ipsum doram",
"operationId": "getagentdelta",
"summary": "GetAgentDelta",
"tags": [
"Agents"
],
"parameters": [
{
"name": "since",
"in": "query",
"description": "Format - date-time (as date-time in RFC3339). The time from which you need changes from. You should use the format emitted by Date's toJSON method (for example, 2017-04-23T18:25:43.511Z). If a timestamp older than a week is passed, a business rule violation will be thrown which will require the client to change the from date. As a best-practice, for a subsequent call to this method, send the timestamp when you <b>started</b> the previous delta call (instead of when you completed processing the response or the max of the lastUpdateOn timestamps of the returned records). This will ensure that you do not miss any changes that occurred while you are processing the response from this method",
"required": true,
"type": "string"
},
{
"name": "Authorization",
"description": "This parameter represents the Authorization token obtained from the OKTA Authorization server. It is the Bearer token provided to authorize the consumer. Usage Authorization : Bearer token",
"in": "header",
"required": true,
"type": "string"
}
]
}
}
}
}
And here's one that first removes any existing Authorization objects from the parameters before inserting the new one into every parameters array, and doesn't depend on the exact path:
jq --slurpfile newval add.json '.paths |= walk(
if type == "object" and has("parameters") then
.parameters |= map(select(.name != "Authorization")) + $newval
else
.
end)' main.json
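For jq 1.5 users who cannot rely on walk, the same transformation takes only a few lines of Python. A sketch, assuming the Authorization object has been loaded from add.json (inlined here for brevity; `walk` and `upsert_auth` are my own helper names):

```python
import json

def walk(node, fn):
    """Apply fn bottom-up to every value, like jq's walk/1."""
    if isinstance(node, dict):
        node = {k: walk(v, fn) for k, v in node.items()}
    elif isinstance(node, list):
        node = [walk(v, fn) for v in node]
    return fn(node)

# Stand-in for the contents of add.json.
auth = {"name": "Authorization", "in": "header",
        "required": True, "type": "string"}

def upsert_auth(node):
    # Wherever a "parameters" array appears, drop any existing
    # Authorization entry and append the fresh one.
    if isinstance(node, dict) and "parameters" in node:
        node["parameters"] = [p for p in node["parameters"]
                              if p.get("name") != "Authorization"] + [auth]
    return node

doc = {"paths": {"/agents/delta": {"get": {"parameters": [
    {"name": "since", "in": "query"}]}}}}
doc["paths"] = walk(doc["paths"], upsert_auth)
print(json.dumps(doc, indent=2))
```

The bottom-up traversal mirrors jq's walk semantics, so nested parameters arrays at any depth are handled too.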
I want to process this data
{
"results": [
{
"headword": "binding",
"senses": [
{
"definition": [
"a promise, agreement etc that must be obeyed"
]
}
]
},
{
"headword": "non-binding",
"senses": [
{
"definition": [
"a non-binding agreement or decision does not have to be obeyed"
],
"examples": [
{
"text": "The industry has signed a non-binding agreement to reduce pollution."
}
]
}
]
}
]
}
into this
{
"headword": "binding",
"definition": "a promise, agreement etc that must be obeyed",
"examples": null
}
{
"headword": "non-binding",
"definition": "a non-binding agreement or decision does not have to be obeyed",
"examples": "The industry has signed a non-binding agreement to reduce pollution."
}
this command
cat data.json | jq '.results[] | {headword: .headword, definition: .senses[].definition[], examples: .senses[].examples[].text}'
errors out with 'Cannot iterate over null'
To overcome that, I tried this command using the '.[]?' filter:
cat data.json | jq '.results[] | {headword: .headword, definition: .senses[].definition[], examples: .senses[].examples[]?.text}'
but this outputs only
{
"headword": "non-binding",
"definition": "a non-binding agreement or decision does not have to be obeyed",
"examples": "The industry has signed a non-binding agreement to reduce pollution."
}
So, how do you iterate over null without skipping the record?
Using an if/else statement may help.
jq '.results[] | {
headword,
definition: .senses[0].definition[0],
examples: (if .senses[0].examples then .senses[0].examples[0].text else null end)
}' data.json
As @oguzismail has implicitly pointed out, assuming that the senses array has only one element is risky, especially as the choice of name suggests it was anticipated that each headword might have more than one sense. A similar observation could be made about .examples, but the question does not make it clear what should be done if .examples has more than one element. In the following I therefore opt for a safe approach, since it can easily be adjusted to meet more specific requirements.
.results[]
| { headword }
+ (.senses[]
| { definition: .definition[0],
examples: (if has("examples")
then [.examples[].text]
else null end) } )
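For comparison, here is a rough Python equivalent of the safe approach above. The `records` helper name and the null-versus-list choice for examples are my own illustrative choices:

```python
import json

def records(doc):
    """One output object per (headword, sense); `examples` is null
    when a sense has no examples array."""
    for r in doc["results"]:
        for sense in r["senses"]:
            ex = sense.get("examples")
            yield {
                "headword": r["headword"],
                "definition": sense["definition"][0],
                "examples": [e["text"] for e in ex] if ex else None,
            }

doc = {"results": [
    {"headword": "binding",
     "senses": [{"definition": ["a promise, agreement etc that must be obeyed"]}]},
    {"headword": "non-binding",
     "senses": [{"definition": ["a non-binding agreement or decision does not have to be obeyed"],
                 "examples": [{"text": "The industry has signed a non-binding "
                               "agreement to reduce pollution."}]}]},
]}
for rec in records(doc):
    print(json.dumps(rec))
```

As in the jq version, a headword with multiple senses produces multiple records rather than silently dropping data.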
I wish to store some building data for a calculator (for an existing game) in JSON. The thing is, some buildings can be upgraded while others cannot, even though they are the same type of building. Is there a way to dynamically set the size of the array based on the value of the maximum levels, or am I expecting too much from JSON? I intend to open-source the tool and would like a schema that validates for anyone who adds a JSON file to it. With the code below, Visual Studio Code warns me that maxItems expects an integer.
{
"$schema": "http://json-schema.org/draft-06/schema#",
"properties": {
"$schema": {
"type":"string"
},
"maxLevels": {
"description": "The maximum level that this building can be upgraded to",
"type":"integer",
"enum": [
1,
5
]
},
"goldCapacity": {
"description": "The maximum amount of gold that the building can hold.",
"type":"array",
"minItems": 1,
"maxItems": {"$ref": "#/properties/maxLevels"},
"items": {
"type":"integer",
"uniqueItems": true
}
}
}
}
There is a proposal for the $data reference, which allows using values from the data as the values of certain schema keywords. Using the $data reference you can write:
{
"$schema": "http://json-schema.org/draft-06/schema#",
"properties": {
"maxLevel": {
"description": "The maximum level that this building can be upgraded to",
"type":"integer",
"minimum": 1,
"maximum": 5
},
"goldCapacity": {
"description": "The maximum amount of gold that the building can hold.",
"type":"array",
"minItems": 1,
"maxItems": {"$data": "1/maxLevel"},
"items": {
"type":"integer",
"uniqueItems": true
}
}
}
}
In this way, the value of the property "maxLevel" (which should be >= 1 and <= 5) determines the maximum number of items the array "goldCapacity" can hold.
Currently only ajv (JavaScript) implements the $data reference, as far as I know, and it is being considered for inclusion in a future version of the specification (feel free to vote for it).
JSON (and JSON Schema) is basically a set of key/value pairs, so JSON Schema alone has no real support for what you want to do.
To accomplish it, construct the JSON with a default value for maxItems (e.g. 0), obtain a reference to your JSON object, and then update the value once you have your dynamic value using JavaScript:
jsonObj['maxItems'] = yourCalculatedValue;
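If your tool runs its own validation step anyway, the cross-field rule can also be checked there directly, outside the schema. A minimal Python sketch, assuming the property names from the schema above (`maxLevels`, `goldCapacity`); the `check_building` helper is hypothetical:

```python
def check_building(building):
    """Enforce the cross-field rule that standard JSON Schema cannot:
    goldCapacity must have between 1 and maxLevels entries."""
    max_levels = building["maxLevels"]
    capacity = building["goldCapacity"]
    if not 1 <= len(capacity) <= max_levels:
        raise ValueError(
            f"goldCapacity has {len(capacity)} entries; "
            f"expected between 1 and {max_levels}")
    return building

# A building upgradeable to level 5 with three recorded capacities passes.
check_building({"maxLevels": 5, "goldCapacity": [100, 250, 500]})
```

The static parts of the schema (types, minItems, the 1-to-5 range) stay in JSON Schema; only the data-dependent bound moves into code.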
I have a hierarchically deep JSON object created by a scientific instrument, so the file is somewhat large (1.3MB) and not readily readable by people. I would like to get a list of keys, up to a certain depth, for the JSON object. For example, given an input object like this
{
"acquisition_parameters": {
"laser": {
"wavelength": {
"value": 632,
"units": "nm"
}
},
"date": "02/03/2525",
"camera": {}
},
"software": {
"repo": "github.com/username/repo",
"commit": "a7642f",
"branch": "develop"
},
"data": [{},{},{}]
}
I would like output like this:
{
"acquisition_parameters": [
"laser",
"date",
"camera"
],
"software": [
"repo",
"commit",
"branch"
]
}
This is mainly for the purpose of being able to enumerate what is in a JSON object. After processing, the JSON objects from the instrument begin to diverge: for example, some may have a field like .frame.cross_section.stats.fwhm, while others may have .sample.species, so it would be convenient to be able to interrogate the JSON object on the command line.
The following should do exactly what you want:
jq '[(keys - ["data"])[] as $key | { ($key): .[$key] | keys }] | add'
This will give the following output, using the input you described above:
{
"acquisition_parameters": [
"camera",
"date",
"laser"
],
"software": [
"branch",
"commit",
"repo"
]
}
Given your purpose you might have an easier time using the paths builtin to list all the paths in the input and then truncate at the desired depth:
$ echo '{"a":{"b":{"c":{"d":true}}}}' | jq -c '[paths|.[0:2]]|unique'
[["a"],["a","b"]]
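The paths-and-truncate idea also translates directly into a short Python helper, if you prefer to post-process outside jq. A sketch (`paths` here is a hand-rolled stand-in for jq's builtin of the same name):

```python
import json

def paths(node, prefix=()):
    """Yield every key path in the document, like jq's `paths`."""
    if isinstance(node, dict):
        for k, v in node.items():
            yield prefix + (k,)
            yield from paths(v, prefix + (k,))
    elif isinstance(node, list):
        for i, v in enumerate(node):
            yield prefix + (i,)
            yield from paths(v, prefix + (i,))

doc = json.loads('{"a":{"b":{"c":{"d":true}}}}')
# Truncate every path to depth 2 and deduplicate, like `[paths|.[0:2]]|unique`.
truncated = sorted({p[:2] for p in paths(doc)})
print(truncated)  # [('a',), ('a', 'b')]
```

Adjusting the slice bound changes the depth, just as with jq's `.[0:n]`.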
Here is another variation using reduce and setpath, which assumes you have a specific set of top-level keys you want to examine:
. as $v
| reduce ("acquisition_parameters", "software") as $k (
{}; setpath([$k]; $v[$k] | keys)
)