How to retrieve a JSON value based on a string key

I have json data that looks like this:
{
  "deploy:success": 2,
  "deploy:RTX:success": 1,
  "deploy:BLX:success": 1,
  "deploy:RTX:BigTop:success": 1,
  "deploy:BLX:BigTop:success": 1,
  "deploy:RTX:BigTop:xxx:success": 1,
  "deploy:BLX:BigTop:yyy:success": 1
}
Each additional :<field> tacked on makes the key more specific. For example, a key of the form "deploy:RTX:success" is the count for the specific site RTX. I was planning on using a filter to show only the site-specific counts:
eval column_name=if($site_token$ = "", "deploy:success", "deploy:$site_token$:success")
Then rename the derived column:
rename column_name deploy
But rename treats that first argument as a literal field name to look up, not as a field holding the name, and I can't figure out how to get the values associated with that column for the life of me.
index=cloud_aws namespace=my namespace=Stats protov3=*
| spath input=protov3
| eval column_name=if("$site_token$" = "", "deploy:success", "deploy:$site_token$:success")
| rename column_name AS "deploy"
What have I done incorrectly?

It's not clear what the final result is supposed to be. If the result when $site_token$ is empty should be "deploy:success" then just use "deploy" as the target of the eval.
index=cloud_aws namespace=my namespace=Stats protov3=*
| spath input=protov3
| eval deploy=if("$site_token$" = "", "deploy:success", "deploy:$site_token$:success")
OTOH, if the result when $site_token$ is empty should be "2" then use the existing query with single quotes in the eval. Single quotes tell Splunk to treat the enclosed text as a field name rather than a literal string (which is what double quotes do).
index=cloud_aws namespace=my namespace=Stats protov3=*
| spath input=protov3
| eval deploy=if("$site_token$" = "", 'deploy:success', 'deploy:$site_token$:success')
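The single-quote behavior can be mirrored outside Splunk: the problem is really a dynamic dictionary lookup. Here is a minimal Python sketch of the same idea, using hypothetical sample data shaped like the question's payload (not the actual Splunk search):

```python
import json

# Hypothetical payload shaped like the question's protov3 data.
protov3 = json.dumps({
    "deploy:success": 2,
    "deploy:RTX:success": 1,
    "deploy:BLX:success": 1,
})

def deploy_count(payload, site_token=""):
    """Build the key the same way the eval does, then look up its value."""
    stats = json.loads(payload)
    key = "deploy:success" if site_token == "" else f"deploy:{site_token}:success"
    return stats.get(key)

print(deploy_count(protov3))         # overall count
print(deploy_count(protov3, "RTX"))  # site-specific count
```

The double-quoted eval produces the key string itself; the single-quoted eval performs the equivalent of the `stats.get(key)` lookup above.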

KQL | How do I extract or check for data in a long string with many quotation marks?

I'm a complete newbie to KQL and data in general.
I'm working with a data column containing long strings like this:
"data": {"stageID":1670839857060,"entities":[{"entity":{"key":"BearKnight","owner":0,"id":"[2|1]"},"levels":{"main":4,"star":1,"ShieldWall.main":4,"ShieldWall.enhance":0,"ShieldThrow.main":4,"ShieldThrow.enhance":0}},{"entity":{"key":"DryadHealer","owner":0,"id":"[3|1]"},"levels":{"main":5,"star":1,"HealingTouch.main":5,"HealingTouch.enhance":0,"CuringTouch.main":5,"CuringTouch.enhance":0}},{"entity":{"key":"HumanKnight","owner":1,"id":"[4|1]"},"levels":{"main":4,"star":0,"BullRush.main":4,"BullRush.enhance":0,"FinishingStrike.main":4,"FinishingStrike.enhance":0,"SwordThrow.main":4,"SwordThrow.enhance":0,"StrongAttack.main":0,"StrongAttack.enhance":0}},
I need to get a list of the HeroNames that appear here [ "key":"HeroName","owner":0 ] but not here [ "key":"HeroName","owner":1 ].
I've been trying the extract_all and has_any functions, but I can't work with the data while it has all the quotation marks. Can I parse this somehow and remove them?
My ideal output would be a list of hero names who have owner:0.
For example, for the top string the ideal output is: "BearKnight","DryadHealer"
print txt = 'data: {"stageID":1670839857060,"entities":[{"entity":{"key":"BearKnight","owner":0,"id":"[2|1]"},"levels":{"main":4,"star":1,"ShieldWall.main":4,"ShieldWall.enhance":0,"ShieldThrow.main":4,"ShieldThrow.enhance":0}},{"entity":{"key":"DryadHealer","owner":0,"id":"[3|1]"},"levels":{"main":5,"star":1,"HealingTouch.main":5,"HealingTouch.enhance":0,"CuringTouch.main":5,"CuringTouch.enhance":0}},{"entity":{"key":"HumanKnight","owner":1,"id":"[4|1]"},"levels":{"main":4,"star":0,"BullRush.main":4,"BullRush.enhance":0,"FinishingStrike.main":4,"FinishingStrike.enhance":0,"SwordThrow.main":4,"SwordThrow.enhance":0,"StrongAttack.main":0,"StrongAttack.enhance":0}}]}'
| parse txt with * ": " doc
| mv-apply e = parse_json(doc).entities on (where e.entity.owner == 0 | summarize HeroNames = make_list(e.entity.key))
| project-away txt, doc
HeroNames
["BearKnight","DryadHealer"]
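For comparison, the same extraction can be sketched in Python, assuming the column value is the `"data": {...}` string from the question (the sample below is a shortened, hypothetical version of it):

```python
import json

# Shortened stand-in for the question's column value, which starts with '"data": '.
raw = ('"data": {"stageID": 1, "entities": ['
       '{"entity": {"key": "BearKnight", "owner": 0}},'
       '{"entity": {"key": "DryadHealer", "owner": 0}},'
       '{"entity": {"key": "HumanKnight", "owner": 1}}]}')

def hero_names(txt, owner=0):
    # Split off the leading '"data": ' label, then parse the remainder as JSON,
    # mirroring the parse + parse_json steps in the KQL query.
    doc = json.loads(txt.split(": ", 1)[1])
    return [e["entity"]["key"] for e in doc["entities"]
            if e["entity"]["owner"] == owner]

print(hero_names(raw))  # ['BearKnight', 'DryadHealer']
```

As in the KQL answer, the quotation marks are not an obstacle once the string is parsed as JSON rather than treated as raw text.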

Why is MySQL saving JSON in reverse order? How do I fix it?

This is the code I use to insert JSON into MySQL:
function insert_ema($json) {
    $a = new Sql();
    $b = $a->connection;
    $sql = "INSERT INTO ema (ema_ten) VALUES ('$json')";
    if ($b->query($sql) === TRUE) {
        echo PHP_EOL . " New record created successfully \n";
    } else {
        echo PHP_EOL . " Error: " . $sql . "<br>" . $b->error;
    }
    $b->close();
}
insert_ema('{"firstName":"John", "lastName":"Doe","3":"Jo", "4":"Do"}');
+----------------------------------------------------------------+----+
| ema_ten | id |
+----------------------------------------------------------------+----+
| {"3": "Jo", "4": "Do", "lastName": "Doe", "firstName": "John"} | 1 |
| {"3": "Jo", "4": "Do", "lastName": "Doe", "firstName": "John"} | 2 |
+----------------------------------------------------------------+----+
The JSON saved above is in reverse order! How can I fix it?
The reason I want to preserve order is that I want to be able to convert the JSON to an array and use pop.
I think MySQL should save arrays and also sort out this issue.
https://dev.mysql.com/doc/refman/8.0/en/json.html says:
To make lookups more efficient, MySQL also sorts the keys of a JSON object. You should be aware that the result of this ordering is subject to change and not guaranteed to be consistent across releases.
This means you should not depend on any particular sort order of the keys in a JSON object. JSON arrays have order, but the keys of JSON objects don't.
JSON objects are equal if their keys and respective values are the same, regardless of order:
mysql> select cast('{"firstName":"John", "lastName":"Doe","3":"Jo", "4":"Do"}' as json)
= cast('{"3": "Jo", "4": "Do", "lastName": "Doe", "firstName": "John"}' as json)
as is_equal;
+----------+
| is_equal |
+----------+
| 1 |
+----------+
Re your comment:
The point of the above example is that you can't make MySQL store keys in your intended order. MySQL's implementation of JSON doesn't do that. It rearranges JSON object keys to make it more efficient for lookups. You don't get a say in this.
JSON arrays can be ordered. So your only option to preserve order is to use an array, where each element of the array is an object with a single key:
[{"firstName":"John"}, {"lastName":"Doe"}, {"3":"Jo"}, {"4":"Do"}]
I understand this is not what you asked for, but what you asked for cannot be achieved in MySQL.
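The array workaround can be sketched in plain Python to show that it preserves order and supports the pop the asker wants (this illustrates the JSON semantics, not MySQL itself):

```python
import json

# The workaround from the answer: an array of single-key objects.
# JSON arrays keep their order, unlike MySQL's JSON object storage.
ordered = [{"firstName": "John"}, {"lastName": "Doe"}, {"3": "Jo"}, {"4": "Do"}]
payload = json.dumps(ordered)

# Round-tripping through JSON preserves array order, so pop() behaves predictably.
restored = json.loads(payload)
last = restored.pop()
print(last)       # {'4': 'Do'}
print(restored)   # remaining elements, still in insertion order
```

Storing this array in a MySQL JSON column keeps the element order intact, because only object keys, not array elements, are reordered.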

JQ: Using variable for dot notation path

Is it possible to use a variable to hold a dot notation path? (I'm probably not using the correct term.)
For example, given the following json:
{
  "people": [
    {
      "names": {
        "given": "Alice",
        "family": "Smith"
      },
      "id": 47
    },
    {
      "id": 42
    }
  ]
}
Is it possible to construct something like:
.names.given as $ng | .people[] | select(.id==47) | ($ng)
and output "Alice"?
The idea is to allow easier modification of a complex expression. I've tried various parens and quotes with increasingly literal results ('.names.given' and '$ng').
The answer is no and yes: as you've seen, once you write an expression such as .names.given as $ng, $ng holds the JSON values, not the path.
But jq does support path expressions in the form of arrays of strings and/or non-negative integers. These can be used to access values in conjunction with the built-in getpath/1.
So you could, for example, write something along the lines of:
["names", "given"] as $ng
| .people[]
| select(.id==47)
| getpath($ng)
Converting jq paths to JSON arrays
It's possible to convert a "dot notation" path into an "array path" using path/1; e.g. the assignment to $ng above could be written as:
(null | path(.names.given)) as $ng
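To make the semantics of getpath concrete, here is a minimal Python analogue: a hypothetical helper (not part of jq) that walks a path expressed as a list of keys, applied to the question's document:

```python
import json

doc = json.loads('''{"people": [
  {"names": {"given": "Alice", "family": "Smith"}, "id": 47},
  {"id": 42}
]}''')

def getpath(value, path):
    """Walk a list of keys/indices like jq's getpath/1, returning None on a miss."""
    for step in path:
        if isinstance(value, dict):
            value = value.get(step)
        elif isinstance(value, list) and isinstance(step, int) and 0 <= step < len(value):
            value = value[step]
        else:
            return None
    return value

ng = ["names", "given"]
person = next(p for p in doc["people"] if p.get("id") == 47)
print(getpath(person, ng))  # Alice
```

As in jq, the path is data (a list), so it can be stored, passed around, and reused, which is exactly what a dot-notation expression cannot do.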
Your question and the example you provided seem very confusing to me. The gist I got is that you want to assign a name to a value obtained via dot notation and then use it at a later point.
See if this is of any help:
.people | map(select(.id == 47))[0].names.given as $ng | $ng

In Couchbase Java Query DSL, how do I filter for property-names that are not from the ASCII alphabet?

Couchbase queries should support any String for property-name in a filter ( where clause.)
But the query below returns no values for the fieldNames "7", "#", "&", "", and "?". It does work for fieldName "a".
Note that I'm using the Java DSL API, not N1ql directly.
OffsetPath statement = select("*").from(i(bucket.name())).where(x(fieldName).eq(x("$t")));
JsonObject placeholderValues = JsonObject.create().put("t", fieldVal);
N1qlQuery q = N1qlQuery.parameterized(statement, placeholderValues);
N1qlQueryResult result = bucket.query(q);
But my bucket does have each of these JsonObjects, including those with unusual property names, as shown by an unfiltered query:
{"a":"a"}
{"#":"a"}
{"&":"a"}
{"":"a"}
{"?":"a"}
How do I escape property names or otherwise support these legal names in queries?
(This question relates to another one, but that is about values and this is about field names.)
The field name is treated as an identifier. So, back-ticks are needed to escape them thus:
select("*").from(i(bucket.name())).where(x("`" + fieldName + "`").eq(x("$value")))
with parameterization of $value, of course.
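The back-tick wrapping is simple string surgery, sketched here in Python rather than the Java DSL. The doubling of embedded back-ticks is an assumption based on the usual escaped-identifier convention; names in the question contain none:

```python
def escape_identifier(name):
    """Wrap an identifier in back-ticks for N1QL, doubling any embedded back-ticks."""
    return "`" + name.replace("`", "``") + "`"

# A hypothetical sketch of the statement the Java DSL builds, as a raw N1QL string.
def build_query(bucket, field):
    return (f"SELECT * FROM {escape_identifier(bucket)} "
            f"WHERE {escape_identifier(field)} = $value")

print(build_query("my-bucket", "#"))
```

With this escaping, field names like "7", "#", "&", "", and "?" are treated as identifiers rather than tripping up the N1QL parser.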

Having separate arrays, how to extract value based on the column name?

I am trying to extract data from some JSON with JQ - I have already got it down to the last level of data that I need to extract from, but I am completely stumped as to how to proceed with how this part of the data is formatted.
An example would be:
{
  "values": [
    [
      1483633677,
      42
    ]
  ],
  "columns": [
    "time",
    "count_value"
  ],
  "name": "response_time_error"
}
I would want to extract just the value for a certain column (e.g. count_value) and I can extract it by using [-1] in this specific case, but I want to select the column by its name in case they change in the future.
If you're only extracting a single value and the arrays will always correspond with each other, you could find the index in the columns array and then use that index into the values array.
It seems like values is an array of rows with those values. Assuming you want to output the values of all rows with the selected column:
$ jq --arg col 'count_value' '.values[][.columns | index($col)]' input.json
If the specified column name does not exist in .columns, then Jeff's filter will fail with a rather obscure error message. It might therefore be preferable to check whether the column name is found. Here is an illustration of how to do so:
jq --arg col count_value '
(.columns | index($col)) as $ix
| if $ix then .values[][$ix] else empty end' input.json
If you want an informative error message to be printed, then replace empty with something like:
error("specified column name, \($col), not found")
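The index-then-extract approach, including the error case, translates directly to Python; this sketch mirrors the jq filters above on the question's sample document:

```python
data = {
    "values": [[1483633677, 42]],
    "columns": ["time", "count_value"],
    "name": "response_time_error",
}

def column_values(doc, col):
    """Find the column's index, then pull that position from each row of values."""
    try:
        ix = doc["columns"].index(col)
    except ValueError:
        # Equivalent of the jq error() branch for a missing column name.
        raise KeyError(f"specified column name, {col}, not found")
    return [row[ix] for row in doc["values"]]

print(column_values(data, "count_value"))  # [42]
```

Because the lookup is by name, the extraction keeps working even if the order of columns changes later.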