Parse JSON object dynamically in Bigquery + dbt - json

I have a JSON message like the one below. I am using dbt with the BigQuery plugin, and I need to create a table dynamically in BigQuery.
{
  "data": {
    "schema": "dev",
    "payload": {
      "lastmodifieddate": "2022-11-122 00:01:28",
      "changeeventheader": {
        "changetype": "UPDATE",
        "changefields": [
          "lastmodifieddate",
          "product_value"
        ],
        "committimestamp": 18478596845860,
        "recordIds": [
          "568069"
        ]
      },
      "product_value": 20000
    }
  }
}
I need to create the table dynamically from recordIds and the changed fields. The field list changes whenever the source sends an update.
Expected output:
recordIds | product_value | lastmodifieddate     | changetype
568069    | 20000         | 2022-11-122 00:01:28 | UPDATE
Thanks for your suggestions and help!

JSON objects can be saved in a BigQuery table. There is no need to use dbt here.
with tbl as (
  select 5 as row, JSON '''{
    "data": {
      "schema": "dev",
      "payload": {
        "lastmodifieddate": "2022-11-122 00:01:28",
        "changeeventheader": {
          "changetype": "UPDATE",
          "changefields": [
            "lastmodifieddate",
            "product_value"
          ],
          "committimestamp": 18478596845860,
          "recordIds": [
            "568069"
          ]
        },
        "product_value": 20000
      }
    }
  }''' as JS
)
select *,
  JSON_EXTRACT_STRING_ARRAY(JS.data.payload.changeeventheader.recordIds) as recordIds,
  JSON_EXTRACT_SCALAR(JS.data.payload.product_value) as product_value,
  JSON_VALUE(JS.data.payload.lastmodifieddate) as lastmodifieddate,
  JSON_VALUE(JS.data.payload.changeeventheader.changetype) as changetype
from tbl
If the JSON is saved as a string in a BigQuery table, use PARSE_JSON(column_name) to convert the string to JSON first.


ADF Data Flow flatten JSON to rows

In ADF Data Flow, how can I flatten JSON into rows rather than columns?
{
  "header": [
    {
      "main": {
        "id": 1
      },
      "sub": [
        {
          "type": "a",
          "id": 2
        },
        {
          "type": "b",
          "id": 3
        }
      ]
    }
  ]
}
In ADF I'm using the flatten task and get the result below, with main_id and sub_id as separate columns (screenshot omitted).
However, the result I'm trying to achieve is merging the two id columns into one column (screenshot omitted).
Since both main_id and sub_id belong in the same column, instead of using one flatten transformation for all of the data, flatten main and sub separately.
I have taken the following JSON as source for my dataflow.
{
  "header": [
    {
      "main": {
        "id": 1
      },
      "sub": [
        {
          "type": "a",
          "id": 2
        },
        {
          "type": "b",
          "id": 3
        }
      ]
    },
    {
      "main": {
        "id": 4
      },
      "sub": [
        {
          "type": "c",
          "id": 5
        },
        {
          "type": "d",
          "id": 6
        }
      ]
    }
  ]
}
I have taken two flatten transformations, flattenMain and flattenSub, instead of one; both use the same source.
For flattenMain, I have unrolled by header and selected header as the unroll root. Then I created an additional column from the source column header.main.id.
The data preview for flattenMain would be: (screenshot omitted)
For flattenSub, I have unrolled by header.sub and selected header.sub as the unroll root. Then I created two additional columns, from the source column header.sub.id as the id column and header.sub.type as the type column.
The data preview for flattenSub would be: (screenshot omitted)
Now I have applied a union transformation on flattenMain and flattenSub, using union by name.
The final data preview for this union transformation gives the desired result.
NOTE: All the highlighted rows in output images indicate the result that would be achieved when we use the JSON sample provided in the question.
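Outside of ADF, the flatten-main-and-sub-separately-then-union idea can be sketched in plain Python (a rough illustration of the logic, not ADF code; the column names id/type come from the sample JSON, and the row order here differs from the ADF previews):

```python
import json

# The two-record sample JSON used as the dataflow source above.
doc = json.loads("""
{"header": [
  {"main": {"id": 1},
   "sub": [{"type": "a", "id": 2}, {"type": "b", "id": 3}]},
  {"main": {"id": 4},
   "sub": [{"type": "c", "id": 5}, {"type": "d", "id": 6}]}
]}
""")

# "flattenMain": one row per header entry, id taken from main.id.
main_rows = [{"id": h["main"]["id"], "type": None} for h in doc["header"]]

# "flattenSub": one row per sub entry, id and type taken from the sub object.
sub_rows = [{"id": s["id"], "type": s["type"]}
            for h in doc["header"] for s in h["sub"]]

# "union by name": both row sets share the same column names, so the
# main ids and sub ids land in the single id column.
rows = main_rows + sub_rows
for r in rows:
    print(r)
```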

Parsing JSON in SAS

Does anyone know how to convert the following JSON to table format in SAS? I appreciate any help in advance!
JSON
{
  "totalCount": 2,
  "facets": {},
  "content": [
    [
      {
        "name": "customer_ID",
        "value": "1"
      },
      {
        "name": "customer_name",
        "value": "John"
      }
    ],
    [
      {
        "name": "customer_ID",
        "value": "2"
      },
      {
        "name": "customer_name",
        "value": "Jennifer"
      }
    ]
  ]
}
Desired Output
customer_ID | customer_name
1           | John
2           | Jennifer
Steps I've Taken
1- Call API
filename request "C:\path.request.txt";
filename response "C:\path.response.json";
filename status "C:\path.status.json";

proc http
  url="http://httpbin.org/get"
  method="POST"
  in=request
  out=response
  headerout=status;
run;
2- I have the following JSON map file saved:
{
  "DATASETS": [
    {
      "DSNAME": "customers",
      "TABLEPATH": "/root/content",
      "VARIABLES": [
        {
          "NAME": "name",
          "TYPE": "CHARACTER",
          "PATH": "/root/content/name"
        },
        {
          "NAME": "value",
          "TYPE": "CHARACTER",
          "PATH": "/root/content/value"
        }
      ]
    }
  ]
}
3- I use the above JSON map file as follows:
filename jmap "C:\path.jmap.map";
libname cust json map=jmap access=readonly;
proc copy inlib=cust outlib=work;
run;
4- This generates a table like this, which is not what I need:
name           | value
customer_id    | 1
customer_value | John
customer_id    | 2
customer_value | Jennifer
From where you are, you have a very trivial step to convert to what you want - PROC TRANSPOSE.
filename test "h:\temp\test.json";
libname test json;

data pre_trans;
  set test.content;
  if name='customer_ID' then row+1;
run;

proc transpose data=pre_trans out=want;
  by row;
  id name;
  var value;
run;
You could also do this directly in the data step; there are advantages to going either way.
data want;
  set test.content;
  retain customer_ID customer_name;
  if name='customer_ID' then customer_ID=input(value,best.);
  else if name='customer_name' then do;
    customer_name = value;
    output;
  end;
run;
This data step works okay for the example above - the proc transpose works better for more complex examples, as you only have to hardcode the one value.
I suspect you could do this more directly with a proper JSON map, but I don't usually do this sort of thing that way - it's easier for me to just get it into a dataset and then work with it from there.
In this case, SAS is getting tripped up by the doubly nested arrays with no content before the inner array; if there were some (any) content there, it would parse more naturally. Since there's nothing for SAS to judge what you want to do with that content array, it just lets you do whatever you want with it, which is easy enough.

Retrieve specific value from a JSON blob in MS SQL Server, using a property value?

In my DB I have a column storing JSON. The JSON looks like this:
{
  "views": [
    {
      "id": "1",
      "sections": [
        {
          "id": "1",
          "isToggleActive": false,
          "components": [
            {
              "id": "1",
              "values": [
                "02/24/2021"
              ]
            },
            {
              "id": "2",
              "values": []
            },
            {
              "id": "3",
              "values": [
                "5393",
                "02/26/2021 - Weekly"
              ]
            },
            {
              "id": "5",
              "values": [
                ""
              ]
            }
          ]
        }
      ]
    }
  ]
}
I want to create a migration script that will extract a value from this JSON and store it in its own column.
In the JSON above, in the components array, I want to extract the second value from the component with an id of "3" (among other things, but this is a good example). That is, I want to extract the value "02/26/2021 - Weekly" and store it in its own column.
I was looking at the JSON_VALUE docs, but I only see examples that specify fixed indexes for the JSON properties. I can't figure out what kind of JSON path I'd need. Is this even possible with JSON_VALUE?
EDIT: To clarify, the views and sections arrays can use static indexes, so I can use views[0].sections[0] for them. Currently, this is all I have for my SQL query:
SELECT *
FROM OPENJSON(@jsonInfo, '$.views[0].sections[0]')
You need to use OPENJSON to break out the inner array, then filter it with a WHERE clause, and finally select the correct value with JSON_VALUE:
SELECT JSON_VALUE(components.value, '$.values[1]')
FROM OPENJSON(@jsonInfo, '$.views[0].sections[0].components') components
WHERE JSON_VALUE(components.value, '$.id') = '3'
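For a quick sanity check of that filter-then-index logic outside SQL Server, the same extraction can be sketched in Python (illustrative only; the real query runs in T-SQL):

```python
import json

# The relevant slice of the JSON blob from the question.
blob = """
{"views": [{"id": "1", "sections": [{"id": "1", "components": [
  {"id": "1", "values": ["02/24/2021"]},
  {"id": "2", "values": []},
  {"id": "3", "values": ["5393", "02/26/2021 - Weekly"]},
  {"id": "5", "values": [""]}
]}]}]}
"""

components = json.loads(blob)["views"][0]["sections"][0]["components"]
# Filter by id (the WHERE clause), then take values[1] (the JSON_VALUE path).
target = next(c for c in components if c["id"] == "3")
print(target["values"][1])  # 02/26/2021 - Weekly
```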

Loop through csv file and build json file in python

I have a CSV file with 3 columns:
Col1: SCHEMA
Col2: TABLENAME
Col3: COLUMNNAME
There are going to be around 10k rows in it. I need a script to roll through the CSV and, for each row, build a JSON block as below, appending each block until the last:
{
  "name": "SCHEMA.TABLENAME",
  "table_manipulation": {
    "owner": "SCHEMA",
    "name": "TABLENAME",
    "transform_columns": [{
      "column_name": "COLUMNNAME",
      "action": "KEEP",
      "computation_expression": "replace($COLUMNNAME,\"'\",\"\")"
    }],
    "source_table_settings": {
      "unload_segments": {
        "ranges": {},
        "entry_names": {}
      }
    }
  }
},
As for the script, Python would be best, but PowerShell would also work.
Any help will be greatly appreciated. Thanks in advance!
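A minimal Python sketch of that loop might look like the following (the inline sample rows are hypothetical stand-ins for the real 10k-row file; swap in your own file path and write the result out with json.dump):

```python
import csv
import io
import json

# Inline sample standing in for the real CSV file (hypothetical data).
CSV_TEXT = """SCHEMA,TABLENAME,COLUMNNAME
dev,orders,note
dev,customers,comment
"""

def build_blocks(csv_file):
    """Build one JSON block per CSV row, in the shape shown in the question."""
    blocks = []
    for row in csv.DictReader(csv_file):
        col = row["COLUMNNAME"]
        blocks.append({
            "name": f'{row["SCHEMA"]}.{row["TABLENAME"]}',
            "table_manipulation": {
                "owner": row["SCHEMA"],
                "name": row["TABLENAME"],
                "transform_columns": [{
                    "column_name": col,
                    "action": "KEEP",
                    "computation_expression": f'replace(${col},"\'","")',
                }],
                "source_table_settings": {
                    "unload_segments": {"ranges": {}, "entry_names": {}}
                },
            },
        })
    return blocks

blocks = build_blocks(io.StringIO(CSV_TEXT))
# For a real file: with open("input.csv", newline="") as f: blocks = build_blocks(f)
print(json.dumps(blocks, indent=2))
```

Collecting all blocks in a list and serializing once at the end also sidesteps the trailing-comma problem that manual string appending would create.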

Obtain a different JSON object structure in AngularJS

I'm working in AngularJS.
In this part of the project my goal is to obtain a JSON structure after filling a form with some particular values.
Here's the fiddle of my simple form: Fiddle
With the form I will query KairosDB, my NoSQL database; I query data from it with a JSON object. The form is structured this way:
a Name
a certain number of Tags, with a tag id ("ch" for example) and a tag value ("932" for example)
a certain number of Aggregators to manipulate the data coming from the DB
a start timestamp and an end timestamp (for now they are static and only included in the final JSON object)
After filling this form, my code produces, for example, this JSON object:
{
  "metrics": [
    {
      "tags": [
        {
          "id": "ch",
          "value": "932"
        },
        {
          "id": "ch",
          "value": "931"
        }
      ],
      "aggregators": {
        "name": "sum",
        "sampling": [
          {
            "value": "1",
            "unit": "milliseconds",
            "type": "SUM"
          }
        ]
      }
    }
  ],
  "cache_time": 0,
  "start_absolute": 123,
  "end_absolute": 1234
}
Unfortunately, KairosDB accepts a different structure. As you can see, the tag id "ch" doesn't have an "id" string before it, and tag values coming from the same tag id are grouped together:
{
  "metrics": [
    {
      "tags": {
        "ch": [
          "932",
          "931"
        ]
      },
      "name": "AIENR",
      "aggregators": [
        {
          "name": "sum",
          "sampling": {
            "value": "1",
            "unit": "milliseconds"
          }
        }
      ]
    }
  ],
  "cache_time": 0,
  "start_absolute": 1367359200000,
  "end_absolute": 1386025200000
}
My question is: is there a way to obtain the JSON structure accepted by KairosDB from an AngularJS form? Thanks to everyone.
I've seen this topic as the one most similar to mine, but it isn't in AngularJS.
Personally, I'd do the refactoring work in the backend: have whatever server interface sends and receives the data do the manipulation. Otherwise you'll end up needing to refactor your data inside Angular anywhere you want to use that dataset, whereas doing it in the backend puts it in a single access point.
Of course, you could do it in Angular: just replace userString in the submitData method with a copy of the array, replace the tags section with data in the new format, and likewise refactor the returned result to the correct format when you get a reply.
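If the backend route is an option, the reshaping itself is a small grouping step. Here is a Python sketch of the idea (the function name to_kairosdb is hypothetical, and the metric name, here "AIENR", is assumed to come from the form's Name field):

```python
from collections import defaultdict

# JSON in the structure the form currently produces (from the question).
form_json = {
    "metrics": [{
        "tags": [{"id": "ch", "value": "932"},
                 {"id": "ch", "value": "931"}],
        "aggregators": {"name": "sum",
                        "sampling": [{"value": "1", "unit": "milliseconds",
                                      "type": "SUM"}]},
    }],
    "cache_time": 0,
    "start_absolute": 123,
    "end_absolute": 1234,
}

def to_kairosdb(doc, metric_name):
    """Reshape the form's JSON into the structure KairosDB accepts."""
    metrics = []
    for m in doc["metrics"]:
        # Group tag values under their tag id:
        # [{"id": "ch", "value": "932"}, ...] -> {"ch": ["932", "931"]}
        tags = defaultdict(list)
        for t in m["tags"]:
            tags[t["id"]].append(t["value"])
        agg = m["aggregators"]
        sampling = agg["sampling"][0]
        metrics.append({
            "tags": dict(tags),
            "name": metric_name,
            # aggregators becomes a list, and "type" is dropped from sampling.
            "aggregators": [{"name": agg["name"],
                             "sampling": {"value": sampling["value"],
                                          "unit": sampling["unit"]}}],
        })
    return {"metrics": metrics,
            "cache_time": doc["cache_time"],
            "start_absolute": doc["start_absolute"],
            "end_absolute": doc["end_absolute"]}

kairos = to_kairosdb(form_json, "AIENR")
print(kairos["metrics"][0]["tags"])  # {'ch': ['932', '931']}
```

The same grouping could be written in the Angular controller instead; the logic is identical, only the language changes.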