How to flatten nested JSON data in Azure Stream Analytics

I am having trouble writing a query to extract a table from the arrays in a JSON file.
I want to flatten the three arrays (case_Time, details, and others) into a normal SQL table.
Sample JSON data:
{
"case_Time": [
{
"v1": "1",
"v2": "0",
"v3": "0",
"date": "30 January ",
"dateymd": "2020-01-30",
"v4": "1",
"v5": "0",
"v6": "0"
},
{
"v1": "1",
"v2": "0",
"v3": "0",
"date": "31 January ",
"dateymd": "2020-01-31",
"v4": "1",
"v5": "0",
"v6": "0"
}],
"details": [
{
"d1": "281844",
"d2": "10124024",
"d3": "146791",
"d4": "0",
"d5": "0",
"d6": "0",
"lastupdatedtime": "24/12/2020 09:12:24",
"d7": "2746",
"d8": "9692643",
"d9": "Total",
"notes": "some text"
},
{
"d1": "281944",
"d2": "1012",
"d3": "1791",
"d4": "0",
"d5": "0",
"d6": "0",
"lastupdatedtime": "25/12/2020 09:12:24",
"d7": "2746",
"d8": "96643",
"d9": "Total",
"notes": "some text"
}],
"others": [
{
"p1": "",
"p2": "75.64",
"p3": "",
"p4": "",
"p5": "",
"p6": "",
"date": "13/03/2020",
"p7": "",
"p8": "1.20%",
"p9": "",
"p10": "83.33",
"p11": "5",
"p12": "5900",
"p13": "78"
},
{
"p1": "",
"p2": "75.64",
"p3": "",
"p4": "",
"p5": "",
"p6": "",
"date": "14/03/2020",
"p7": "",
"p8": "1.20%",
"p9": "",
"p10": "81.33",
"p11": "5",
"p12": "500",
"p13": "78"
}
]
}
I tried the query below, but I am only getting data from the first array. How do I flatten the remaining arrays?
WITH Cases AS
(
SELECT
arrayElement.ArrayIndex,
arrayElement.ArrayValue as av
FROM input as event
CROSS APPLY GetArrayElements(event.case_Time) AS arrayElement
)
SELECT av.v1, av.v2, av.v3,av.date,av.dateymd, av.v4,av.v5,av.v6
INTO powerbi
FROM Cases
Appreciate any help :)

You can CROSS APPLY all of your arrays; try something like this:
WITH Cases AS
(
SELECT
arrayElement.ArrayIndex as ai,
arrayElement.ArrayValue as av,
y.ArrayIndex as yi,
y.ArrayValue as dt,
z.ArrayIndex as zi,
z.ArrayValue as ot
FROM input as event
CROSS APPLY GetArrayElements(event.case_Time) AS arrayElement
CROSS APPLY GetArrayElements(event.details) AS y
CROSS APPLY GetArrayElements(event.others) AS z
)
SELECT
    av.v1, av.v2, av.v3, av.date, av.dateymd, av.v4, av.v5, av.v6,
    dt.d1, dt.d2, dt.d3, dt.d4, dt.d5, dt.d6, dt.lastupdatedtime, dt.d7, dt.d8, dt.d9, dt.notes,
    ot.p1, ot.p2, ot.p3, ot.p4, ot.p5, ot.p6, ot.p7, ot.p8, ot.p9, ot.p10, ot.p11, ot.p12, ot.p13, ot.date AS tdate
INTO powerbi
FROM Cases
This query produces a complete cross product, so you will get 8 rows (2 × 2 × 2). If you only want 2 rows (elements matched by index), add WHERE ai = yi AND yi = zi, as in the sketch below.
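For reference, a minimal sketch of that index-matched variant (same Cases step as above; only the final SELECT changes, and the column list is trimmed here for brevity):
SELECT
    av.v1, av.v2, av.v3, av.date, av.dateymd, av.v4, av.v5, av.v6,
    dt.d1, dt.lastupdatedtime, dt.notes,
    ot.p2, ot.p8, ot.date AS tdate
INTO powerbi
FROM Cases
WHERE ai = yi AND yi = zi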


Accessing specific JSON values in a Deluge script

I have a JSON API response that contains multiple entries, with different types of subscriptions and multiple users.
I need to search the list for a "user_name" AND a "subscription", then return any matching "duration". In some cases, there will be more than one "duration" for a user and subscription. I would need the total (sum) of the duration when there is more than one.
For example, here is part of an example JSON I am working with:
[
{
"id": 139387026,
"user_name": "John Smith",
"note": "",
"last_modify": "2022-03-28 14:16:35",
"date": "2022-03-28",
"locked": "0",
"addons_external_id": "",
"description": "",
"info": [
{
"subscription": "basic",
"duration": "22016",
}
]
},
{
"id": 139387027,
"user_name": "John Smith",
"note": "",
"last_modify": "2022-03-28 14:16:35",
"date": "2022-03-28",
"locked": "0",
"addons_external_id": "",
"description": "",
"info": [
{
"subscription": "advanced",
"duration": "10537",
}
]
},
{
"id": 139387028,
"user_name": "Martin Lock",
"note": "",
"last_modify": "2022-03-28 14:16:35",
"date": "2022-03-28",
"locked": "0",
"addons_external_id": "",
"description": "",
"info": [
{
"subscription": "basic",
"duration": "908",
}
]
}
]
So for example, for user_name: "John Smith" and subscription: "advanced", I need to return duration: "10537".
I've used toJsonlist(); to convert it, then used the code below, but it returns all values in the list. I can't figure out how to search for the specific values or add matching entries together.
rows = subscriptions.toJsonlist();
for each row in rows
{
info row;
user_name = row.getJson("user_name");
info "username: " + user_name;
subscription = row.getJson("subscription");
info "subscription: " + subscription;
subscriptionId = row.getJson("subscriptionId");
info "subscription Id: " + subscriptionId;
}
I'm fairly new to programming. Any help is appreciated!
Based on your needs, you want to filter your JSON data and get the corresponding value for your user_name and subscription filters.
Here is a Deluge script for that. I used clear variable names so that they will not confuse you.
//Your Entry Change this based on your filter
input_user_name = "John Smith";
input_subscription = "advanced";
//Your JSON data
json_string_data = '[ { "id": 139387026, "user_name": "John Smith", "note": "", "last_modify": "2022-03-28 14:16:35", "date": "2022-03-28", "locked": "0", "addons_external_id": "", "description": "", "info": [ { "subscription": "basic", "duration": "22016" } ] }, { "id": 139387027, "user_name": "John Smith", "note": "", "last_modify": "2022-03-28 14:16:35", "date": "2022-03-28", "locked": "0", "addons_external_id": "", "description": "", "info": [ { "subscription": "advanced", "duration": "10537" } ] }, { "id": 139387028, "user_name": "Martin Lock", "note": "", "last_modify": "2022-03-28 14:16:35", "date": "2022-03-28", "locked": "0", "addons_external_id": "", "description": "", "info": [ { "subscription": "basic", "duration": "908" } ] } ]';
//Declare the data as JSON
processed_json_data = json_string_data.toJsonlist();
initial_total_duration = 0; // Do not change this
list_of_duration = List();
total_duration_per_username_per_subscription = Map();
for each row in processed_json_data
{
if (row.get("user_name") == input_user_name )
{
info_list = row.get("info").toJSONList();
for each info_row in info_list
{
if (info_row.get("subscription") == input_subscription)
{
info_row_duration = info_row.get("duration").toLong(); // make it integer
list_of_duration.add(info_row_duration);
}
}
}
}
result_map = Map();
//Sum of list_of_duration
for each duration in list_of_duration
{
initial_total_duration = initial_total_duration + duration;
}
result_map.put("user_name",input_user_name);
result_map.put("subscription",input_subscription);
result_map.put("no_of_subscription",list_of_duration.size());
result_map.put("total_duration",initial_total_duration);
info result_map;
And the result should be:
{"user_name":"John Smith","subscription":"advanced","no_of_subscription":1,"total_duration":10537}
You can test this script at https://deluge.zoho.com/tryout.
Thanks,
Von

How can I convert a JSON traffic packet into the bulk-import JSON format for Elasticsearch?

I am trying to convert some JSON files describing TCP and DNP3 traffic for bulk import into Elasticsearch. I already know that tshark has a command that can generate bulk-import JSON from a pcap:
tshark -T ek -r dnp3_trace.pcap > dnp3_trace.json
However, I don't have the pcaps for some of the JSON files, and I don't know if there is anything that could transform the existing JSON into the bulk-index format.
For example, here is the JSON that I would like to convert:
{
"_index": "packets-2020-10-17",
"_type": "doc",
"_score": null,
"_source": {
"layers": {
"frame": {
"frame.interface_id": "0",
"frame.interface_id_tree": {
"frame.interface_name": "ens224"
},
"frame.encap_type": "1",
"frame.time": "Oct 17, 2020 10:51:44.072688465 Central Daylight Time",
"frame.offset_shift": "0.000000000",
"frame.time_epoch": "1602949904.072688465",
"frame.time_delta": "0.000000000",
"frame.time_delta_displayed": "0.000000000",
"frame.time_relative": "0.000000000",
"frame.number": "1",
"frame.len": "72",
"frame.cap_len": "72",
"frame.marked": "0",
"frame.ignored": "0",
"frame.protocols": "eth:ethertype:ip:tcp:dnp3",
"frame.coloring_rule.name": "TCP",
"frame.coloring_rule.string": "tcp"
},
"eth": {
"eth.dst": "00:00:00:aa:00:25",
"eth.dst_tree": {
"eth.dst_resolved": "00:00:00_aa:00:25",
"eth.dst.oui": "0",
"eth.dst.oui_resolved": "Officially Xerox, but 0:0:0:0:0:0 is more common",
"eth.addr": "00:00:00:aa:00:25",
"eth.addr_resolved": "00:00:00_aa:00:25",
"eth.addr.oui": "0",
"eth.addr.oui_resolved": "Officially Xerox, but 0:0:0:0:0:0 is more common",
"eth.dst.lg": "0",
"eth.lg": "0",
"eth.dst.ig": "0",
"eth.ig": "0"
},
"eth.src": "00:50:56:9c:5f:cc",
"eth.src_tree": {
"eth.src_resolved": "VMware_9c:5f:cc",
"eth.src.oui": "20566",
"eth.src.oui_resolved": "VMware, Inc.",
"eth.addr": "00:50:56:9c:5f:cc",
"eth.addr_resolved": "VMware_9c:5f:cc",
"eth.addr.oui": "20566",
"eth.addr.oui_resolved": "VMware, Inc.",
"eth.src.lg": "0",
"eth.lg": "0",
"eth.src.ig": "0",
"eth.ig": "0"
},
"eth.type": "0x00000800"
},
"ip": {
"ip.version": "4",
"ip.hdr_len": "20",
"ip.dsfield": "0x00000000",
"ip.dsfield_tree": {
"ip.dsfield.dscp": "0",
"ip.dsfield.ecn": "0"
},
"ip.len": "58",
"ip.id": "0x000009f9",
"ip.flags": "0x00004000",
"ip.flags_tree": {
"ip.flags.rb": "0",
"ip.flags.df": "1",
"ip.flags.mf": "0"
},
"ip.frag_offset": "0",
"ip.ttl": "64",
"ip.proto": "6",
"ip.checksum": "0x0000c405",
"ip.checksum.status": "2",
"ip.src": "172.16.0.2",
"ip.addr": "172.16.0.2",
"ip.src_host": "172.16.0.2",
"ip.host": "172.16.0.2",
"ip.dst": "192.168.0.5",
"ip.addr": "192.168.0.5",
"ip.dst_host": "192.168.0.5",
"ip.host": "192.168.0.5"
},
"tcp": {
"tcp.srcport": "41391",
"tcp.dstport": "20000",
"tcp.port": "41391",
"tcp.port": "20000",
"tcp.stream": "0",
"tcp.len": "18",
"tcp.seq": "1",
"tcp.seq_raw": "3359839259",
"tcp.nxtseq": "19",
"tcp.ack": "1",
"tcp.ack_raw": "1388983197",
"tcp.hdr_len": "20",
"tcp.flags": "0x00000018",
"tcp.flags_tree": {
"tcp.flags.res": "0",
"tcp.flags.ns": "0",
"tcp.flags.cwr": "0",
"tcp.flags.ecn": "0",
"tcp.flags.urg": "0",
"tcp.flags.ack": "1",
"tcp.flags.push": "1",
"tcp.flags.reset": "0",
"tcp.flags.syn": "0",
"tcp.flags.fin": "0",
"tcp.flags.str": "·······AP···"
},
"tcp.window_size_value": "501",
"tcp.window_size": "501",
"tcp.window_size_scalefactor": "-1",
"tcp.checksum": "0x00006cec",
"tcp.checksum.status": "2",
"tcp.urgent_pointer": "0",
"tcp.analysis": {
"tcp.analysis.bytes_in_flight": "18",
"tcp.analysis.push_bytes_sent": "18"
},
"Timestamps": {
"tcp.time_relative": "0.000000000",
"tcp.time_delta": "0.000000000"
},
"tcp.payload": "05:64:0b:c4:59:02:01:00:d4:49:ca:ca:01:3c:01:06:d1:ff",
"tcp.pdu.size": "18"
},
"dnp3": {
"Data Link Layer, Len: 11, From: 1, To: 601, DIR, PRM, Unconfirmed User Data": {
"dnp3.start": "0x00000564",
"dnp3.len": "11",
"dnp3.ctl": "0x000000c4",
"dnp3.ctl_tree": {
"dnp3.ctl.dir": "1",
"dnp3.ctl.prm": "1",
"dnp3.ctl.fcb": "0",
"dnp3.ctl.fcv": "0",
"dnp3.ctl.prifunc": "4"
},
"dnp3.dst": "601",
"dnp3.addr": "601",
"dnp3.src": "1",
"dnp3.addr": "1",
"dnp3.hdr.CRC": "0x000049d4",
"dnp.hdr.CRC.status": "1"
},
"dnp3.tr.ctl": "0x000000ca",
"dnp3.tr.ctl_tree": {
"dnp3.tr.fin": "1",
"dnp3.tr.fir": "1",
"dnp3.tr.seq": "10"
},
"Data Chunks": {
"Data Chunk: 0": {
"dnp.data_chunk": "ca:ca:01:3c:01:06",
"dnp.data_chunk_len": "6",
"dnp.data_chunk.CRC": "0x0000ffd1",
"dnp.data_chunk.CRC.status": "1"
}
},
"dnp3.al.fragments": {
"dnp3.al.fragment": "1",
"dnp3.al.fragment.count": "1",
"dnp3.al.fragment.reassembled.length": "5"
},
"Application Layer: (FIR, FIN, Sequence 10, Read)": {
"dnp3.al.ctl": "0x000000ca",
"dnp3.al.ctl_tree": {
"dnp3.al.fir": "1",
"dnp3.al.fin": "1",
"dnp3.al.con": "0",
"dnp3.al.uns": "0",
"dnp3.al.seq": "10"
},
"dnp3.al.func": "1",
"READ Request Data Objects": {
"dnp3.al.obj": "15361",
"dnp3.al.obj_tree": {
"Qualifier Field, Prefix: None, Range: No Range Field": {
"dnp3.al.objq.prefix": "0",
"dnp3.al.objq.range": "6"
},
"Number of Items: 0": ""
}
}
}
}
}
}
}
My goal is to convert this JSON into this format:
{"index":{"_index":"packets-2019-10-25","_type":"doc"}}
{"timestamp":"1571994793106","layers":{"frame":{"frame_frame_encap_type":"1","frame_frame_time":"2019-10-25T09:13:13.106208000Z","frame_frame_offset_shift":"0.000000000","frame_frame_time_epoch":"1571994793.106208000","frame_frame_time_delta":"0.000000000","frame_frame_time_delta_displayed":"0.000000000","frame_frame_time_relative":"0.000000000","frame_frame_number":"1","frame_frame_len":"78","frame_frame_cap_len":"78","frame_frame_marked":false,"frame_frame_ignored":false,"frame_frame_protocols":"eth:ethertype:ip:tcp:dnp3"},"eth":{"eth_eth_dst":"50:7b:9d:76:77:d5","eth_eth_dst_resolved":"LCFCHeFe_76:77:d5","eth_eth_dst_oui":"5274525","eth_eth_dst_oui_resolved":"LCFC(HeFei) Electronics Technology co., ltd","eth_eth_addr":"50:7b:9d:76:77:d5","eth_eth_addr_resolved":"LCFCHeFe_76:77:d5","eth_eth_addr_oui":"5274525","eth_eth_addr_oui_resolved":"LCFC(HeFei) Electronics Technology co., ltd","eth_eth_dst_lg":false,"eth_eth_lg":false,"eth_eth_dst_ig":false,"eth_eth_ig":false,"eth_eth_src":"d8:50:e6:05:a3:1e","eth_eth_src_resolved":"ASUSTekC_05:a3:1e","eth_eth_src_oui":"14176486","eth_eth_src_oui_resolved":"ASUSTek COMPUTER INC.","eth_eth_addr":"d8:50:e6:05:a3:1e","eth_eth_addr_resolved":"ASUSTekC_05:a3:1e","eth_eth_addr_oui":"14176486","eth_eth_addr_oui_resolved":"ASUSTek COMPUTER INC.","eth_eth_src_lg":false,"eth_eth_lg":false,"eth_eth_src_ig":false,"eth_eth_ig":false,"eth_eth_type":"0x00000800"},"ip":{"ip_ip_version":"4","ip_ip_hdr_len":"20","ip_ip_dsfield":"0x00000000","ip_ip_dsfield_dscp":"0","ip_ip_dsfield_ecn":"0","ip_ip_len":"64","ip_ip_id":"0x0000259f","ip_ip_flags":"0x00004000","ip_ip_flags_rb":false,"ip_ip_flags_df":true,"ip_ip_flags_mf":false,"ip_ip_frag_offset":"0","ip_ip_ttl":"128","ip_ip_proto":"6","ip_ip_checksum":"0x00000000","ip_ip_checksum_status":"2","ip_ip_src":"192.168.1.150","ip_ip_addr":["192.168.1.150","192.168.1.200"],"ip_ip_src_host":"192.168.1.150","ip_ip_host":["192.168.1.150","192.168.1.200"],"ip_ip_dst":"192.168.1.200","ip_ip_dst_host":"192.168.1.200"},"tcp":{"tcp_tcp_srcport":"53543","tcp_tcp_dstport":"20000","tcp_tcp_port":["53543","20000"],"tcp_tcp_stream":"0","tcp_tcp_len":"24","tcp_tcp_seq":"1","tcp_tcp_seq_raw":"3354368014","tcp_tcp_nxtseq":"25","tcp_tcp_ack":"1","tcp_tcp_ack_raw":"3256068755","tcp_tcp_hdr_len":"20","tcp_tcp_flags":"0x00000018","tcp_tcp_flags_res":false,"tcp_tcp_flags_ns":false,"tcp_tcp_flags_cwr":false,"tcp_tcp_flags_ecn":false,"tcp_tcp_flags_urg":false,"tcp_tcp_flags_ack":true,"tcp_tcp_flags_push":true,"tcp_tcp_flags_reset":false,"tcp_tcp_flags_syn":false,"tcp_tcp_flags_fin":false,"tcp_tcp_flags_str":"·······AP···","tcp_tcp_window_size_value":"2052","tcp_tcp_window_size":"2052","tcp_tcp_window_size_scalefactor":"-1","tcp_tcp_checksum":"0x000084e1","tcp_tcp_checksum_status":"2","tcp_tcp_urgent_pointer":"0","tcp_tcp_analysis":null,"tcp_tcp_analysis_bytes_in_flight":"24","tcp_tcp_analysis_push_bytes_sent":"24","text":"Timestamps","tcp_tcp_time_relative":"0.000000000","tcp_tcp_time_delta":"0.000000000","tcp_tcp_payload":"05:64:11:c4:01:00:02:00:c3:5a:c8:c8:01:3c:02:06:3c:03:06:3c:04:06:c0:4c","tcp_tcp_pdu_size":"24"},"dnp3":{"text":["Data Link Layer, Len: 17, From: 2, To: 1, DIR, PRM, Unconfirmed User Data","Data Chunks","Application Layer: (FIR, FIN, Sequence 8, 
Read)"],"dnp3_dnp3_start":"0x00000564","dnp3_dnp3_len":"17","dnp3_dnp3_ctl":"0x000000c4","dnp3_dnp3_ctl_dir":true,"dnp3_dnp3_ctl_prm":true,"dnp3_dnp3_ctl_fcb":false,"dnp3_dnp3_ctl_fcv":false,"dnp3_dnp3_ctl_prifunc":"4","dnp3_dnp3_dst":"1","dnp3_dnp3_addr":["1","2"],"dnp3_dnp3_src":"2","dnp3_dnp3_hdr_CRC":"0x00005ac3","dnp3_dnp_hdr_CRC_status":"1","dnp3_dnp3_tr_ctl":"0x000000c8","dnp3_dnp3_tr_fin":true,"dnp3_dnp3_tr_fir":true,"dnp3_dnp3_tr_seq":"8","text":["Data Chunk: 0","READ Request Data Objects"],"dnp3_dnp_data_chunk":"c8:c8:01:3c:02:06:3c:03:06:3c:04:06","dnp3_dnp_data_chunk_len":"12","dnp3_dnp_data_chunk_CRC":"0x00004cc0","dnp3_dnp_data_chunk_CRC_status":"1","dnp3_dnp3_al_fragments":null,"dnp3_dnp3_al_fragment":"1","dnp3_dnp3_al_fragment_count":"1","dnp3_dnp3_al_fragment_reassembled_length":"11","dnp3_dnp3_al_ctl":"0x000000c8","dnp3_dnp3_al_fir":true,"dnp3_dnp3_al_fin":true,"dnp3_dnp3_al_con":false,"dnp3_dnp3_al_uns":false,"dnp3_dnp3_al_seq":"8","dnp3_dnp3_al_func":"1","dnp3_dnp3_al_obj":["15362","15363","15364"],"text":["Qualifier Field, Prefix: None, Range: No Range Field","Number of Items: 0","Qualifier Field, Prefix: None, Range: No Range Field","Number of Items: 0","Qualifier Field, Prefix: None, Range: No Range Field","Number of Items: 0"],"dnp3_dnp3_al_objq_prefix":["0","0","0"],"dnp3_dnp3_al_objq_range":["6","6","6"]}}}
If anyone has any solution or suggestion, I would appreciate it :)
Thanks in advance.
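In the meantime, one workable direction is to generate the two-line bulk format yourself. Below is a minimal Python sketch under explicit assumptions: each input file holds one pretty-printed packet object shaped like the example above, the key renaming simply mimics the frame.time → frame_frame_time pattern visible in the target output, and the type conversions (string flags to booleans, the top-level timestamp field) are left out. Note also that a plain JSON parser collapses duplicated keys such as ip.addr, whereas real tshark -T ek output turns them into arrays, so this is a sketch rather than a faithful reimplementation:
import json

def flatten(layer, value):
    # Mimic the renaming seen in the target output: 'frame.time' inside the
    # 'frame' layer becomes 'frame_frame_time', and children of '_tree'
    # containers are hoisted to the top level (an assumption inferred from
    # the sample, not tshark's verified renaming rules).
    if not isinstance(value, dict):
        return value
    out = {}
    for key, val in value.items():
        if isinstance(val, dict):
            out.update(flatten(layer, val))  # hoist nested trees
        else:
            out[layer + "_" + key.replace(".", "_")] = val
    return out

def to_bulk(packet):
    # Emit the action line and the source line expected by the _bulk API.
    layers = packet["_source"]["layers"]
    renamed = {name: flatten(name, fields) for name, fields in layers.items()}
    action = json.dumps({"index": {"_index": packet.get("_index", "packets"), "_type": "doc"}})
    source = json.dumps({"layers": renamed})
    return action + "\n" + source + "\n"

with open("dnp3_trace.json") as f, open("dnp3_bulk.json", "w") as out:
    out.write(to_bulk(json.load(f)))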

Highcharts: How to plot a stacked bar graph with a line from the JSON response below

I am using the JSON below to plot a stacked graph with a line (it should look like the screenshot below):
[{
"TD": "2",
"TE": "5",
"TI": "3",
"TLI": "2",
"TR": "2",
"hour": "0",
"totalCount": "14"
},
{
"FINGERVERIFY": "4",
"LI": "1",
"TD": "3",
"TE": "9",
"TI": "4",
"TLI": "3",
"TLIP": "2",
"TR": "3",
"hour": "1",
"totalCount": "29"
},
{
"LI": "1",
"LIP": "1",
"LLI": "1",
"LLIP": "1",
"LR": "1",
"LRP": "1",
"hour": "2",
"totalCount": "6"
},
{
"FE": "2",
"TE": "2",
"hour": "8",
"totalCount": "4"
}
]
Chart Image
Description of the chart, based on the points below:
x-axis: "hour" from the JSON
the tip of the line shows the "totalCount"
the stacked bars show the other JSON properties
Can anyone please help me achieve a graph like the screenshot above, using this JSON?
Based on your data, you need to build the series structure required by Highcharts. Example:
const series = [];
data.forEach(dataEl => {
for (const key in dataEl) {
if (key === 'hour') continue;
const existingSeries = series.find(s => s.name === key);
if (!existingSeries) {
series.push({
name: key,
type: key === 'totalCount' ? 'line' : 'column',
data: [[Number(dataEl.hour), Number(dataEl[key])]]
});
} else {
existingSeries.data.push([Number(dataEl.hour), Number(dataEl[key])]);
}
}
});
Live demo: http://jsfiddle.net/BlackLabel/40pgqn9j/
API Reference: https://api.highcharts.com/highcharts/series
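To get the stacked look from the screenshot, the built series also needs column stacking enabled when the chart is created. A minimal sketch (the container id and titles are placeholders):
Highcharts.chart('container', {
    title: { text: 'Events per hour' },   // placeholder title
    xAxis: { title: { text: 'hour' } },
    plotOptions: {
        column: { stacking: 'normal' }    // stack all column series
    },
    series: series                        // the array built by the loop above
});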

How do I parse this JSON in a varchar column within my table? (Snowflake)

I have a table with a column called "message_json", and in that column this JSON is stored as a varchar data type:
{
"request_id": "b53e7cc3-89b1-495b-aab0-e0dd6243b32e",
"quote_id": "7a760b81-2c9c-4f20-9453-f7b72d4e06c6",
"tenant_abbreviation": "ahs",
"tenant_id": "ee312e77-8463-44bd-ad7e-2cd4e75c9e3d",
"event_detail": {
"source": "Quote service",
"event_name": "quote_created",
"timestamp": {
"seconds": 1608236418,
"nanos": 290575000
},
"id": "7a760b81-2c9c-4f20-9453-f7b72d4e06c6"
},
"quote": {
"attribute": {
"contract.renewal": "false",
"contract.yearsOfService": "0",
"description": "xx 3X3 xx $xx DC SC",
"mktgSourceKey": "5fdc118555b95efff7d29f23",
"order.method": "eCom",
"originalSalesChannel": "",
"plan.id": "xxx",
"product.familyName": "combo",
"product.name": "xx xx COMBO $x DC SC",
"product.origin": "TX3C217D",
"property.address1": "xxx xx xx",
"property.address2": "",
"property.ageOfHome": "",
"property.city": "xxx",
"property.country": "USA",
"property.dwellingType": "1",
"property.dwellingTypeCode": "SINGLE FAMILY RESIDENCE",
"property.motherInLaw": "",
"property.sizeOfHome": "4900",
"property.state": "xxxx",
"property.unitType": "",
"property.unitValue": "",
"property.zip5": "xxxxx",
"property.zip9": "xxxxxx",
"salesChannel": "DC",
"serviceFee": "xxxx"
}
}
I am trying to create a new table that assigns each key:value pair to a column.
I've tried parse_json(message_json), and for some reason all it returns is this:
{
"event_detail": {
"event_name": "xxxxxx",
"id": "77e49765-2b53-4d79-9442-8156b0bde3bc",
"source": "xxx xxx",
"timestamp": {
"nanos": 830472300,
"seconds": 1572679265
}
},
"quote_id": "77e49765-2b53-4d79-9442-8156b0bde3bc",
"request_id": "d8ad7a0a-f390-4660-8dc4-2838853d3846",
"tenant_abbreviation": "xxx",
"tenant_id": "ee312e77-8463-44bd-ad7e-2cd4e75c9e3d"
}
I've also tried message_json:request_id, and I get this error:
Invalid argument types for function 'GET': (VARCHAR(16777216), VARCHAR(10))
Any help is appreciated
I don't know what's wrong in your case, but I could fix the json string in the question by adding a closing bracket:
create or replace temp table texts as
select '{
"request_id": "b53e7cc3-89b1-495b-aab0-e0dd6243b32e",
"quote_id": "7a760b81-2c9c-4f20-9453-f7b72d4e06c6",
"tenant_abbreviation": "ahs",
"tenant_id": "ee312e77-8463-44bd-ad7e-2cd4e75c9e3d",
"event_detail": {
"source": "Quote service",
"event_name": "quote_created",
"timestamp": {
"seconds": 1608236418,
"nanos": 290575000
},
"id": "7a760b81-2c9c-4f20-9453-f7b72d4e06c6"
},
"quote": {
"attribute": {
"contract.renewal": "false",
"contract.yearsOfService": "0",
"description": "xx 3X3 xx $xx DC SC",
"mktgSourceKey": "5fdc118555b95efff7d29f23",
"order.method": "eCom",
"originalSalesChannel": "",
"plan.id": "xxx",
"product.familyName": "combo",
"product.name": "xx xx COMBO $x DC SC",
"product.origin": "TX3C217D",
"property.address1": "xxx xx xx",
"property.address2": "",
"property.ageOfHome": "",
"property.city": "xxx",
"property.country": "USA",
"property.dwellingType": "1",
"property.dwellingTypeCode": "SINGLE FAMILY RESIDENCE",
"property.motherInLaw": "",
"property.sizeOfHome": "4900",
"property.state": "xxxx",
"property.unitType": "",
"property.unitValue": "",
"property.zip5": "xxxxx",
"property.zip9": "xxxxxx",
"salesChannel": "DC",
"serviceFee": "xxxx"
}
}
}' input;
select parse_json(input):request_id
from texts
-- "b53e7cc3-89b1-495b-aab0-e0dd6243b32e"
It even comes back with the supposedly "missing" parts:
select parse_json(input):quote.attribute['property.address1']
from texts
-- "xxx xx xx"

jQuery object as parameter to a function

This is my jQuery response:
[
{ "depot":
{
"id": "D1",
"intersection": {
"first": "Markham",
"second": "Lawrence"
},
"address": {
"number": "25",
"street": "Cougar Court",
"city": "Scarborough",
"province": "ON",
"postal_code": "M1J"
}
},
"vehicle": [
{
"id": "V1",
"depot_id": "D1",
"model": "Ford Focus",
"price": "45",
"km_per_litre": "15",
"cargo_cu_m": "YES",
"category": "Compact car",
"image": "www.coolcarz.com"
}
,
{
"id": "V2",
"depot_id": "D1",
"model": "Honda Civic",
"price": "45",
"km_per_litre": "150",
"cargo_cu_m": "YES",
"category": "Compact car",
"image": "www.coolcarz.com"
}
,
{
"id": "V8",
"depot_id": "D1",
"model": "Pontiac Aztek",
"price": "10",
"km_per_litre": "6",
"cargo_cu_m": "YES",
"category": "SUV",
"image": "www.nocoolcarz.com"
}
,
{
"id": "V12",
"depot_id": "D1",
"model": "Chevrolet Impala",
"price": "45",
"km_per_litre": "12",
"cargo_cu_m": "YES",
"category": "Standard car",
"image": "www.coolcarz.com"
}
,
{
"id": "V29",
"depot_id": "D1",
"model": "Nissan Leaf",
"price": "150",
"km_per_litre": "0",
"cargo_cu_m": "YES",
"category": "Electronic Car",
"image": "www.coolcarz.com"
}
]
}
,
{ "depot":
{
"id": "A1",
"intersection": {
"first": "Markham",
"second": "Lawrence"
},
"address": {
"number": "25",
"street": "Cougar Court",
"city": "Scarborough",
"province": "ON",
"postal_code": "m1J"
}
},
"vehicle": [
]
}
]
What I want to do: at some point in my code, once I have received this response data, I want to pass, say, data[0] or data[0].vehicle[1] to a function.
The way I am doing it now is:
function (data) {
    var items = [];
    for (var i = 0; i < data.length; i++) {
        items.push('<b>' + data[i].depot.intersection.first + "-" + data[i].depot.intersection.second + " depot has following cars:" + '</b>');
        for (var k = 0; k < data[i].vehicle.length; k++) {
            var str = '<li>' + data[i].vehicle[k].category + ", $" + data[i].vehicle[k].price + ' a day</li>';
            items.push(str);
        }
    }
}
In effect, I am trying to create a hyperlink (with the vehicle category as its text), and when the user clicks this hyperlink, I want to pass the vehicle information to a new function called moreInfo that does its job. Right now, when I do this and click the hyperlink, I see the error:
missing ] after element list
timepass([object Object],[object Object]
Any ideas?
Remove the two square brackets from the beginning and end of your JSON and use it as an object, not as an array.
What you have is an array of objects, not a JSON object.
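As a side note on the [object Object] error in the question: an object cannot be serialized into an inline onclick string. One common pattern (a sketch of an alternative approach, not a fix to the bracket issue above; '#list' is a placeholder container) is to attach the object with jQuery's .data() and read it back in a delegated click handler:
// build the link without serializing the vehicle object into the markup
var $link = $('<a href="#">' + vehicle.category + '</a>').data('vehicle', vehicle);
$('#list').append($link);

// read the object back on click and hand it to moreInfo
$('#list').on('click', 'a', function (e) {
    e.preventDefault();
    moreInfo($(this).data('vehicle')); // passes the real object, not a string
});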