Postgres JSON Query on Unknown Keys

I have a jsonb column where the structure always remains the same, but the keys within the json may change. For example,
{
  "key-12345": {
    "values-12345": [
      {
        "type": 5200,
        "source": "somesource",
        "messageid": 707643203507,
        "timestamp": "2018-07-26T21:25:42.612Z",
        "destination": "somedestination",
        "previouslyRouted": false
      },
      {
        "type": 5200,
        "source": "anothersource",
        "messageid": 707643203507,
        "timestamp": "2018-07-26T21:26:01.542Z",
        "destination": "anotherdestination",
        "previouslyRouted": false
      }
    ]
  },
  "key-6789": {
    "values-34512": [
      {
        "type": 5200,
        "source": "yetanothersomesource",
        "messageid": 707643203507,
        "timestamp": "2018-07-26T21:25:42.612Z",
        "destination": "yetanothersomedestination",
        "previouslyRouted": false
      },
      {
        "type": 5200,
        "source": "anothersource",
        "messageid": 707643203507,
        "timestamp": "2018-07-26T21:26:01.542Z",
        "destination": "anotherdestination",
        "previouslyRouted": false
      }
    ]
  }
}
I know that the structure of the document will always be the same, but the keys could be anything. I can pull the keys themselves out easily enough with
select jsonb_object_keys(column) from table;
but I don't know how to pull out the object assigned to a given key and work with it. How do I select from a jsonb object based on the name of a key, rather than its value? Something like:
select object from document where json_key = 'key-12345';

This is how it's done:
SELECT object->>'key-12345' FROM document;
Note that ->> returns the value as text; use -> instead if you want it back as jsonb so you can keep operating on it. And with a WHERE clause:
SELECT * FROM document WHERE object->>'key-12345' IS NOT NULL;
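When the key names are not known in advance, a common pattern is to expand the object into rows with jsonb_each and filter on the key column. A minimal sketch, assuming the table is named document and the jsonb column is named object as above:

-- Expand each top-level key/value pair of the jsonb object into its own row;
-- kv.value holds the nested object as jsonb, ready for further operators.
SELECT kv.key, kv.value
FROM document,
     jsonb_each(object) AS kv(key, value)
WHERE kv.key = 'key-12345';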

Related

Using Recursive feature while Flattening in Snowflake

I have a JSON string which needs to be parsed to retrieve particular values. Here is an example I am working with:
{
  "assignable_type": "SHIPMENT",
  "rule": {
    "rules": [
      {
        "meta_data": {},
        "rules": [
          {
            "op": "IN",
            "target": "CLIENT_FID",
            "type": "ARRAY_VALUE_ASSERTION",
            "values": [
              "flx::core:client:dbid/64171",
              "flx::core:client:dbid/76049",
              "flx::core:client:dbid/34040",
              "flx::core:client:dbid/61806"
            ]
          }
        ],
        "type": "AND"
      }
    ],
    "type": "OR"
  },
  "type": "USER_DEFINED"
}
The goal is to get the values when "target": "CLIENT_FID".
The expected output for this JSON file should be:
["flx::core:client:dbid/64171",
"flx::core:client:dbid/76049",
"flx::core:client:dbid/34040",
"flx::core:client:dbid/61806"]
Here, as we can see, rules is a list of dictionaries, and we can have nested lists as seen in the example.
Similarly, we have another JSON file of the following type:
{
  "assignable_type": "SHIPMENT",
  "rule": {
    "rules": [
      {
        "meta_data": {},
        "rules": [
          {
            "op": "IN",
            "target": "PORT_OF_ENTRY_FID",
            "type": "ARRAY_VALUE_ASSERTION",
            "values": [
              "flx::core:port:dbid/566788",
              "flx::core:port:dbid/566931",
              "flx::core:port:dbid/561482"
            ]
          }
        ],
        "type": "AND"
      },
      {
        "meta_data": {},
        "rules": [
          {
            "op": "IN",
            "target": "PORT_OF_LOADING_FID",
            "type": "ARRAY_VALUE_ASSERTION",
            "values": [
              "flx::core:port:dbid/561465"
            ]
          },
          {
            "op": "IN",
            "target": "SHIPMENT_MODE",
            "type": "ARRAY_VALUE_ASSERTION",
            "values": [
              0
            ]
          },
          {
            "op": "IN",
            "target": "CLIENT_FID",
            "type": "ARRAY_VALUE_ASSERTION",
            "values": [
              "flx::core:client:dbid/28169"
            ]
          }
        ],
        "type": "AND"
      }
    ],
    "type": "OR"
  },
  "type": "USER_DEFINED"
}
For the second example, the expected output should be:
["flx::core:client:dbid/28169"]
As seen, we may need to read the values at different depths in the file. To address this, I used the following code:
/* first convert the string to a JSON object in cte1 */
with cte1 as (
    select to_json(json_string) as json_rep,
           parse_json(json_extract_path_text(json_rep, 'rule.rules')) as list_elem
    from table1),
cte2 as (
    select split_array,
           json_extract_path_text(split_array, 'target') as target_client
    from (
        select json_rep,
               list_elem,
               t.value as split_array,
               typeof(split_array) as obj_type,
               index
        from cte1,
             table(flatten(cte1.list_elem, recursive=>true)) as t) temp /* use the recursive feature */
    where split_array ilike '%"target":"client_fid"%' /* filter for rows containing this string */
      and obj_type = 'OBJECT')
select split_array,
       json_extract_path_text(split_array, 'values') as client_values
from cte2
where target_client = 'CLIENT_FID'; /* filter the rows where the dictionary contains the client fid */
To address the varying depth at which CLIENT_FID is found, we recurse while flattening the string into rows. The output obtained for both of the above inputs is provided below.
For the first string, we get the actual output in the column client_values as
["flx::core:client:dbid/64171",
"flx::core:client:dbid/76049",
"flx::core:client:dbid/34040",
"flx::core:client:dbid/61806"]
Similarly, for the second string we get the actual output as
["flx::core:client:dbid/28169"]
As seen, the code works and produces the correct output, but the way I filter in the final query on target_client = 'CLIENT_FID' seems very hacky. Is there a better approach to retrieving the CLIENT_FID values even though their depth can vary in the input?
Help is appreciated.
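A less hacky variant would be to filter directly on each flattened element's own "target" field instead of running an ILIKE over its text form. A minimal sketch, assuming the same table1 with a string column json_string as in the code above:

-- Flatten the parsed document recursively, keep only the objects whose
-- own "target" field is CLIENT_FID, and read their "values" array.
select t.value:"values" as client_values
from table1,
     lateral flatten(input => parse_json(json_string), recursive => true) t
where typeof(t.value) = 'OBJECT'
  and t.value:"target"::string = 'CLIENT_FID';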

Extract value of Tags from cloudTrail logs using Athena

I am trying to query cloudtrail logs using Athena. My goal is to find specific instances and extract them with their Tags.
The query I am using is:
SELECT eventTime,
       awsRegion,
       json_extract(responseelements, '$.instancesSet.items[0].instanceId') AS instanceId,
       json_extract(responseelements, '$.instancesSet.items[0].tagSet.items') AS TAGS
FROM cloudtrail_logs_PP
WHERE (eventName = 'RunInstances' OR eventName = 'StartInstances')
  AND requestparameters LIKE '%mytest1%'
  AND "timestamp" BETWEEN '2021/09/01' AND '2021/10/01'
ORDER BY eventTime;
Using this query, I am able to get all Tags under one column.
[screenshot: query output]
I want to extract only specific Tags and need help with that. How can I extract only a specific Tag?
I tried enhancing my query with json_extract(responseelements, '$.instancesSet.items[0].tagSet.items[0]'), but the order of Tags differs across logs, so I can't rely on the index position.
My json file in S3 is something like below:
{
  "eventVersion": "1",
  "eventTime": "2022-05-27T18:44:29Z",
  "eventName": "RunInstances",
  "awsRegion": "us-east-1",
  "requestParameters": {
    "instancesSet": {
      "items": [{
        "imageId": "ami-1234545",
        "keyName": "DDKJKD"
      }]
    },
    "instanceType": "m5.2xlarge",
    "monitoring": {
      "enabled": false
    },
    "hibernationOptions": {
      "configured": false
    }
  },
  "responseElements": {
    "instancesSet": {
      "items": [{
        "tagSet": {
          "items": [{
            "key": "11",
            "value": "DS"
          }, {
            "key": "1",
            "value": "A"
          }]
        }
      }]
    }
  }
}
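When the tag position varies, one option is to unnest the tagSet items into rows and filter on the tag key rather than an array index. A minimal sketch, assuming the cloudtrail_logs_PP table from the question; the tag key 'Name' is a placeholder for whichever tag you are after:

-- Cast the extracted JSON array to ARRAY(JSON), expand it with UNNEST,
-- and keep only the tag whose "key" matches, regardless of its position.
SELECT eventTime,
       json_extract_scalar(tag, '$.key')   AS tag_key,
       json_extract_scalar(tag, '$.value') AS tag_value
FROM cloudtrail_logs_PP
CROSS JOIN UNNEST(
    CAST(json_extract(responseelements, '$.instancesSet.items[0].tagSet.items') AS ARRAY(JSON))
) AS t(tag)
WHERE eventName = 'RunInstances'
  AND json_extract_scalar(tag, '$.key') = 'Name';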

How to POST a record with fk data in Strapi?

I have a problem when I try to POST a new record.
I have a legacy app that I'm consuming with Strapi, so I didn't let Strapi create the tables; it only uses what is already in the database.
This is one of the collection types (enfermedadrepeticion); note the repeticion_id attribute:
{
  "kind": "collectionType",
  "connection": "atdbconnection",
  "collectionName": "enfermedadrepeticion",
  "info": {
    "name": "RepeticionEnfermedad"
  },
  "options": {
    "increments": false,
    "timestamps": false
  },
  "attributes": {
    "valor": {
      "type": "decimal"
    },
    "enfermedad_id": {
      "type": "integer"
    },
    "valor_1": {
      "type": "string"
    },
    "valor_2": {
      "type": "string"
    },
    "fechaTomaDato": {
      "type": "string"
    },
    "repeticion_id": {
      "via": "repeticion_enfermedads",
      "model": "repeticion"
    }
  }
}
And this is the "repeticion" model; note the repeticion_enfermedads attribute:
{
  "kind": "collectionType",
  "connection": "atdbconnection",
  "collectionName": "repeticiones",
  "info": {
    "name": "Repeticion"
  },
  "options": {
    "increments": false,
    "timestamps": false
  },
  "attributes": {
    "activa": {
      "type": "boolean"
    },
    "altura": {
      "type": "string"
    },
    "linea_id": {
      "model": "linea"
    },
    "ensayo_id": {
      "type": "integer"
    },
    "esp": {
      "type": "string"
    },
    ... bunch of fields ...
    "repeticion_enfermedads": {
      "collection": "repeticion-enfermedad",
      "via": "repeticion_id"
    }
  }
}
My relation is: one Repeticion has 0 to N repeticion-enfermedad records, via the repeticion_id field.
Using this relation, when I fetch data from "Repeticion" I get one or more "Repeticion-Enfermedad".
So, when I need to update an entry in "Repeticion-Enfermedad", I PUT the data with this body:
body {
  enfermedad_id: 1,
  fechaTomaDato: '2021-05-05 03:29:29',
  fechaUltCambio: '2021-05-05 03:29:29',
  repeticion_id: { id: 392571 },
  valor: 60,
  valor_1: '60',
  valor_2: 'S'
}
and everything works fine!
But when I try to create a record using POST with the same body, I get a 500 error:
error Error: ER_NO_DEFAULT_FOR_FIELD: Field 'repeticion_id' doesn't have a default value
I tried sending the body using Swagger, and I get the same error.
I tried sending the "repeticion_id" field as an integer, string, object, etc., but I can't make it work.
My Strapi version is 3.0.6.
I really don't want to alter the model definition; it's working fine.
Any suggestions?
Best Regards
OK, I found the problem.
It has nothing to do with Strapi: I had to adjust the repeticion_id column in the RepeticionEnfermedad table so it no longer requires a value on insert (drop the NOT NULL constraint or give it a default).
It looks like Strapi first creates the record in that table, and then updates the recently added record with the relation.
I leave my answer in case someone has the same problem.
Ciao!
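For reference, a hypothetical MySQL statement for that kind of schema change (the column type is an assumption; adjust it to match the actual table):

-- Hypothetical: let the FK column be inserted without a value, so Strapi's
-- create-then-update sequence can complete.
ALTER TABLE enfermedadrepeticion
  MODIFY COLUMN repeticion_id INT NULL DEFAULT NULL;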

Will there be a performance overhead when using an index having Object_Pairs (in case of a covered query) - Couchbase

Suppose I create an index on OBJECT_PAIRS(values).val.data.
Will my index store the "values" field as an array (with elements name for the ID and val for the data, due to OBJECT_PAIRS)?
If so, and if my N1QL query is a covered query (fetching only OBJECT_PAIRS(values).val.data in the SELECT clause), will there still be a performance overhead? I am under the impression that, since the index would already contain the "values" field as an array, no OBJECT_PAIRS transformation would take place at query time, avoiding the overhead; only for a non-covered query would the actual document be fetched and the OBJECT_PAIRS transformation applied to the "values" field.
Couchbase document:
{
  "values": {
    "item_1": {
      "data": [{
        "name": "data_1",
        "value": "A"
      }, {
        "name": "data_2",
        "value": "XYZ"
      }]
    },
    "item_2": {
      "data": [{
        "name": "data_1",
        "value": "123"
      }, {
        "name": "data_2",
        "value": "A23"
      }]
    }
  }
}
UPDATE:
Suppose we plan to create the index on OBJECT_PAIRS(values)[*].val.data and OBJECT_PAIRS(values)[*].name.
Index: CREATE INDEX idx01 ON ent_comms_tracking(ARRAY { value.name, value.val.data} FOR value IN object_pairs(values) END)
Query: SELECT ARRAY { value.name, value.val.data} FOR value IN object_pairs(values) END as values_array FROM bucket
Can you please paste your full CREATE INDEX statement?
Creating an index on OBJECT_PAIRS(values).val.data indexes nothing.
You can check this by creating a primary index and then running the query below:
SELECT OBJECT_PAIRS(`values`).val FROM mybucket
Output is:
[
{}
]
OBJECT_PAIRS(`values`) returns an array of objects containing the attribute name/value pairs of the `values` object:
SELECT OBJECT_PAIRS(`values`) FROM mybucket
[
  {
    "$1": [
      {
        "name": "item_1",
        "val": {
          "data": [
            {
              "name": "data_1",
              "value": "A"
            },
            {
              "name": "data_2",
              "value": "XYZ"
            }
          ]
        }
      },
      {
        "name": "item_2",
        "val": {
          "data": [
            {
              "name": "data_1",
              "value": "123"
            },
            {
              "name": "data_2",
              "value": "A23"
            }
          ]
        }
      }
    ]
  }
]
It's an array, so .val cannot be referenced on it directly.
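Since OBJECT_PAIRS returns an array, an element must be selected before .val can be applied. A minimal sketch against the same placeholder bucket mybucket:

-- Index into the array first, then drill into val.data; in a real query
-- you would typically iterate the pairs with the ARRAY ... FOR ... IN ... END
-- construct (as in the index above) instead of hard-coding the position.
SELECT OBJECT_PAIRS(`values`)[0].val.data FROM mybucket;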

How to add TimeInstant, CreationDate and ModifiedDate into CrateDB with Orion Context Broker?

I'm setting up a firmware framework where I unfortunately have to add historical sensor values. But I also need the creation date and the modification date for other use cases.
Therefore I add metadata with the variable TimeInstant to the attribute. Then I create an entity, create an Orion subscription for that entity, and update the entity with my old sensor values.
The JSON I send to the Orion Context Broker to update the attribute looks like this:
{
  "metadata": {
    "TimeInstant": {
      "type": "DateTime",
      "value": "2015-02-02T11:35:25.0000Z"
    }
  },
  "type": "Number",
  "value": 0.0132361
}
The output in my MongoDB looks like this:
"_id": {
"id": "urn:ngsi-ld:SensorB-K1200____",
"type": "Sensor",
"servicePath": "/test/servicepath"
},
"attrNames": [
"Sensor_value"
],
"attrs": {
"Sensor_value": {
"value": 0.01632361,
"type": "Number",
"md": {
"TimeInstant": {
"type": "DateTime",
"value": 1422876989
}
},
"mdNames": [
"TimeInstant"
],
"creDate": 1568712813,
"modDate": 1568735930
}
},
"creDate": 1568712813,
"modDate": 1568735930,
"lastCorrelator": "0a129232-d964-11e9-8e5a-0242ac130009" }
But my CrateDB table only has the columns:
entity_id entity_type fiware_servicepath sensor_value time_index
My Subscription File looks like this:
{
  "expires": "2019-12-24T18:00:00",
  "notification": {
    "http": {
      "url": "http://quantumleap:8668/v2/notify"
    },
    "metadata": [
      "dateCreated",
      "dateModified",
      "TimeInstant"
    ]
  },
  "subject": {
    "entities": [
      {
        "id": "urn:ngsi-ld:SensorB-K1200____",
        "type": "Sensor"
      }
    ]
  },
  "throttling": 0
}
I've tried changing the metadata attributes in the subscription file and also tried restarting CrateDB, the Context Broker, etc.
I expect CrateDB to show all three values: dateCreated, dateModified, and TimeInstant.
Did you check what notification message is actually sent by Orion to QuantumLeap?
As regards the payload, I would try the following:
{
  "TimeInstant": {
    "type": "DateTime",
    "value": "2015-02-02T11:35:25.0000Z"
  },
  "type": "Number",
  "value": 0.0132361
}
Internally, we usually use dateObserved as the attribute name for this type of scenario, but it would make no difference compared to TimeInstant.
I am not actually sure you can attach metadata to the root of an NGSI message; I believe they are supposed to be attached only to attributes.
Anyhow, QuantumLeap does not support NGSI metadata (i.e. metadata attached to NGSI attributes). Still, it supports time indexing based on them.
The way QuantumLeap handles TimeInstant and other time metadata is via the time_index. See the documentation here: https://quantumleap.readthedocs.io/en/latest/user/#time-index