How to parse a dynamic JSON - Power Automate

I'm getting an HTTP response from Azure Log Analytics; the response is JSON like this:
{
"tables": [
{
"name": "PrimaryResult",
"columns": [
{
"name": "TimeGenerated",
"type": "datetime"
},
{
"name": "DestinationIP",
"type": "string"
},
{
"name": "DestinationUserName",
"type": "string"
},
{
"name": "country_name",
"type": "string"
},
{
"name": "country_iso_code",
"type": "string"
},
{
"name": "AccountCustomEntity",
"type": "string"
}
],
"rows": [
[
"2021-05-17T14:07:01.878Z",
"158.000.000.33",
"luis",
"United States",
"US",
"luis"
]
]
}
]
}
I will never get the same columns, and sometimes I will get more rows of data, like this:
{
"tables": [
{
"name": "PrimaryResult",
"columns": [
{
"name": "Account",
"type": "string"
},
{
"name": "Computer",
"type": "string"
},
{
"name": "IpAddress",
"type": "string"
},
{
"name": "AccountType",
"type": "string"
},
{
"name": "Activity",
"type": "string"
},
{
"name": "LogonTypeName",
"type": "string"
},
{
"name": "ProcessName",
"type": "string"
},
{
"name": "StartTimeUtc",
"type": "datetime"
},
{
"name": "EndTimeUtc",
"type": "datetime"
},
{
"name": "ConnectinCount",
"type": "long"
},
{
"name": "timestamp",
"type": "datetime"
},
{
"name": "AccountCustomEntity",
"type": "string"
},
{
"name": "HostCustomEntity",
"type": "string"
},
{
"name": "IPCustomEntity",
"type": "string"
}
],
"rows": [
[
"abc\\abc",
"EQ-DC02.abc.LOCAL",
"0.0.0.0",
"User",
"4624 - An account was successfully logged on.",
"10 - RemoteInteractive",
"C:\\Windows\\System32\\svchost.exe",
"2021-05-17T15:02:25.457Z",
"2021-05-17T15:02:25.457Z",
2,
"2021-05-17T15:02:25.457Z",
"abc\\abc",
"EQ-DC02.abc.LOCAL",
"0.0.0.0"
],
[
"abc\\eona",
"EQPD-SW01.abc.LOCAL",
"0.0.0.0",
"User",
"4624 - An account was successfully logged on.",
"10 - RemoteInteractive",
"C:\\Windows\\System32\\svchost.exe",
"2021-05-17T15:21:45.993Z",
"2021-05-17T15:21:45.993Z",
1,
"2021-05-17T15:21:45.993Z",
"abc\\abc",
"EQPD-SW01.abc.LOCAL",
"0.0.0.0"
]
]
}
]
}
I'm using Power Automate to parse this kind of JSON into an object, or to build a response.
The question is: how can I parse these "columns" and "rows" into an object?

A similar discussion happened in the community forum, and the solution identified there was: parse the JSON, transform it to XML, and then search for keys with XPath in the Flow.
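Conceptually, the transformation needed is to pair every row with the column names so that each row becomes one object, whatever columns the query happens to return. The XML/XPath approach (or a Select action) reproduces this inside the Flow; the Python sketch below only illustrates the target shape, using a trimmed copy of the first sample response:
import json

# response: the parsed Log Analytics HTTP response (trimmed to the essentials here)
response = {
    "tables": [{
        "name": "PrimaryResult",
        "columns": [{"name": "TimeGenerated"}, {"name": "DestinationIP"}],
        "rows": [["2021-05-17T14:07:01.878Z", "158.000.000.33"]],
    }]
}

table = response["tables"][0]
names = [col["name"] for col in table["columns"]]

# one object per row, keyed by column name, regardless of which columns came back
records = [dict(zip(names, row)) for row in table["rows"]]
print(json.dumps(records, indent=2))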

Unable to create sample data for Avro schema - Error creating a Kafka message to producer: Expected start-union. Got VALUE_STRING

{
"namespace": "de.morris.audit",
"type": "record",
"name": "AuditDataChangemorris",
"fields": [
{"name": "employeeID", "type": "string"},
{"name": "employeeNumber", "type": ["null", "string"], "default": null},
{"name": "serialNumbers", "type": [ "null", {"type": "array", "items": "string"}]},
{"name": "correlationId", "type": "string"},
{"name": "timestamp", "type": "long", "logicalType": "timestamp-millis"},
{"name": "employmentscreening","type":{"type": "enum", "name": "employmentscreening", "symbols": ["NO","YES"]}},
{"name": "vouchercodes","type": ["null",
{
"type": "array",
"items": {
"name": "Vouchercodes",
"type": "record",
"fields": [
{"name": "voucherName","type": ["null","string"], "default": null},
{"name": "authocode","type": ["null","string"], "default": null}
]
}
}], "default": null}
]
}
When I was trying to create sample data in JSON format based on the above .avsc for the Kafka consumer, I got the below error upon testing:
{
"employeeID": "qtete46524",
"employeeNumber": {
"string": "custnumber9813"
},
"serialNumbers": {
"type": "array",
"items": ["363536623","5846373733"]
},
"correlationId": "corr-656532443",
"timestamp": 1476538955719,
"employmentscreening": "NO",
"vouchercodes": [
{
"voucherName": "skygo",
"authocode": "A238472ASD"
}
]
}
I got the error below when I ran the Dataflow job in GCP:
Error message from worker: java.lang.RuntimeException: java.io.IOException: Insert failed: [{"errors":[{"debugInfo":"","location":"serialnumbers","message":"Array specified for non-repeated field: serialnumbers.","reason":"invalid"}],"index":0}]
How do I create correct sample data based on the above schema?
Read the spec
The value of a union is encoded in JSON as follows:
if its type is null, then it is encoded as a JSON null;
otherwise it is encoded as a JSON object with one name/value pair whose name is the type’s name and whose value is the recursively encoded value
So, here's the data it expects.
{
"employeeID": "qtete46524",
"employeeNumber": {
"string": "custnumber9813"
},
"serialNumbers": {"array": [
"serialNumbers3521"
]},
"correlationId": "corr-656532443",
"timestamp": 1476538955719,
"employmentscreening": "NO",
"vouchercodes": {"array": [
{
"voucherName": {"string": "skygo"},
"authocode": {"string": "A238472ASD"}
}
]}
}
With this schema
{
"namespace": "de.morris.audit",
"type": "record",
"name": "AuditDataChangemorris",
"fields": [
{
"name": "employeeID",
"type": "string"
},
{
"name": "employeeNumber",
"type": [
"null",
"string"
],
"default": null
},
{
"name": "serialNumbers",
"type": [
"null",
{
"type": "array",
"items": "string"
}
]
},
{
"name": "correlationId",
"type": "string"
},
{
"name": "timestamp",
"type": {
"type": "long",
"logicalType": "timestamp-millis"
}
},
{
"name": "employmentscreening",
"type": {
"type": "enum",
"name": "employmentscreening",
"symbols": [
"NO",
"YES"
]
}
},
{
"name": "vouchercodes",
"type": [
"null",
{
"type": "array",
"items": {
"name": "Vouchercodes",
"type": "record",
"fields": [
{
"name": "voucherName",
"type": [
"null",
"string"
],
"default": null
},
{
"name": "authocode",
"type": [
"null",
"string"
],
"default": null
}
]
}
}
],
"default": null
}
]
}
Here's an example of producing to and consuming from Kafka:
$ jq -rc . < /tmp/data.json | kafka-avro-console-producer --topic foobar --property value.schema="$(jq -rc . < /tmp/data.avsc)" --bootstrap-server localhost:9092 --sync
$ kafka-avro-console-consumer --topic foobar --from-beginning --bootstrap-server localhost:9092 | jq .
{
"employeeID": "qtete46524",
"employeeNumber": {
"string": "custnumber9813"
},
"serialNumbers": {
"array": [
"serialNumbers3521"
]
},
"correlationId": "corr-656532443",
"timestamp": 1476538955719,
"employmentscreening": "NO",
"vouchercodes": {
"array": [
{
"voucherName": {
"string": "skygo"
},
"authocode": {
"string": "A238472ASD"
}
}
]
}
}
^CProcessed a total of 1 messages
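For reference, a small Python sketch (not from the original answer; wrap_union is a hypothetical helper, not part of any Avro library) that builds the union-wrapped JSON encoding described by the spec and reproduces the expected data above:
import json

def wrap_union(value, branch):
    # Per the Avro spec: a union value is either null or a one-key object
    # whose key is the branch type's name.
    return None if value is None else {branch: value}

record = {
    "employeeID": "qtete46524",
    "employeeNumber": wrap_union("custnumber9813", "string"),
    "serialNumbers": wrap_union(["serialNumbers3521"], "array"),
    "correlationId": "corr-656532443",
    "timestamp": 1476538955719,
    "employmentscreening": "NO",
    "vouchercodes": wrap_union(
        [{
            "voucherName": wrap_union("skygo", "string"),
            "authocode": wrap_union("A238472ASD", "string"),
        }],
        "array",
    ),
}

print(json.dumps(record))  # single line, suitable for kafka-avro-console-producer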

How to create a nested and custom JSON format for a dataframe

I want to create sub-categories from the existing data frame.
The data frame's columns are listed below (sample table). The changes I need are at the column level, not in the data: the column names fall into three groups, identified by different prefixes (some columns share similar names apart from the prefix, others do not).
For example:
|payer_id|payer_name|halo_payer_name|delta_payer_name|halo_desc|delta_desc|halo_operations|delta_notes|halo_processed_data|delta_processed_data|extra|insurance_company|
I want halo_payer_name|halo_desc|halo_operations|halo_processed_data grouped into a halo group,
delta_payer_name|delta_desc|delta_notes|delta_processed_data grouped into a delta group,
and the remaining columns as one group,
so that when converted to JSON it comes out in this layout:
{
"schema": {
"fields": [{
"payer_details": [{
"name": "payer_id",
"type": "string"
},
{
"name": "payer_name",
"type": "string"
},
{
"name": "extra",
"type": "string"
},
{
"name": "insurance_company",
"type": "string"
}
]
},
{
"halo": [{
"name": "halo_payer_name",
"type": "string"
},
{
"name": "halo_desc",
"type": "string"
},
{
"name": "halo_operstions",
"type": "string"
},
{
"name": "halo_processed_data",
"type": "string"
}
]
}, {
"delta": [{
"name": "delta_payer_name",
"type": "string"
},
{
"name": "delta_desc",
"type": "string"
},
{
"name": "delta_notes",
"type": "string"
},
{
"name": "delta_processed_data",
"type": "string"
}
]
}
],
"pandas_version": "1.4.0"
},
"masterdata": [{
"payer_details": [{
"payer_id": "",
"payer_name": "",
"extra": "",
"insurance_company": ""
}],
"halo": [{
"halo_payer_name": "",
"halo_desc": "",
"halo_operations": "",
"halo_processed_data": "",
}],
"delta":[{
"delta_payer_name": "",
"delta_desc": "",
"delta_notes": "",
"delta_processed_data": "",
}]
}]
}
For this type of situation I couldn't find a solution, as it is column-based grouping instead of data-based grouping.
I came across a post today that helped with my situation (taking data from a data frame, building records in a loop, inserting them into a dict, and then converting the whole thing into a JSON file).
The reference that was helpful to me is link.
So the solution for this question goes like this:
schema={
"schema": {
"fields": [{
"payer_details": [{
"name": "payer_id",
"type": "string"
},
{
"name": "payer_name",
"type": "string"
},
{
"name": "extra",
"type": "string"
},
{
"name": "insurance_company",
"type": "string"
}
]
},
{
"halo": [{
"name": "halo_payer_name",
"type": "string"
},
{
"name": "halo_desc",
"type": "string"
},
{
"name": "halo_operstions",
"type": "string"
},
{
"name": "halo_processed_data",
"type": "string"
}
]
}, {
"delta": [{
"name": "delta_payer_name",
"type": "string"
},
{
"name": "delta_desc",
"type": "string"
},
{
"name": "delta_notes",
"type": "string"
},
{
"name": "delta_processed_data",
"type": "string"
}
]
}
],
"pandas_version": "1.4.0"
},
"masterdata": []
}
I derived the schema above in the shape I wanted.
payer_list = []
for i in df.index:
    case = {
        "payer_details": [{
            "payer_id": "{}".format(df['payer_id'][i]),
            "payer_name": "{}".format(df['payer_name'][i]),
            "extra": "{}".format(df['extra'][i]),
            "insurance_company": "{}".format(df['insurance_company'][i])
        }],
        "halo": [{
            "halo_payer_name": "{}".format(df['halo_payer_name'][i]),
            "halo_desc": "{}".format(df['halo_desc'][i]),
            "halo_operations": "{}".format(df['halo_operations'][i]),
            "halo_processed_data": "{}".format(df['halo_processed_data'][i])
        }],
        "delta": [{
            "delta_payer_name": "{}".format(df['delta_payer_name'][i]),
            "delta_desc": "{}".format(df['delta_desc'][i]),
            "delta_notes": "{}".format(df['delta_notes'][i]),
            "delta_processed_data": "{}".format(df['delta_processed_data'][i])
        }]
    }
    payer_list.append(case)
schema["masterdata"] = payer_list
I created an empty list, ran the loop to append each record to it, and then attached the list to the schema.
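If the populated structure then needs to be persisted, it can be written straight to disk with the standard json module (the output filename here is just an example):
import json

# write the combined schema/masterdata structure out as a JSON file
with open("payer_groups.json", "w") as f:
    json.dump(schema, f, indent=4)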

Mapping request body fields in logic apps

We have configured the common alert schema for alerts, and we are using ticketing software: whenever we receive an alert, it should create a ticket.
In the logic app I have to make a POST API call to create the ticket with the following JSON object; some fields are hard-coded values:
{
"subject": "",
"Id": "123456789", // Hard code value
"priority": "",
"email": "test#test.com", // Hard code value
"status": "Open" // Hard code value
}
Parse JSON sample payload schema for the alert:
{
"data": {
"alertContext": {
}
},
"customProperties": null,
"essentials": {
"alertContextVersion": "123",
"alertId": "123",
"alertRule": "Test Alerts",
"description": "test",
"severity": "Sev4"
},
"schemaId": "test"
}
I have to map "subject and "priority" fields with alert json object "description" and "severity":
subject-->description
priority --> severity // sev0 =high ,sev1=medium, sev2 =low
How can I achieve this using logic app?
After the Parse JSON action you can directly map its objects to the required fields in a Compose connector. Below is my logic app flow.
You can use the Code view below to reproduce the same in your Logic App:
{
"definition": {
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"actions": {
"Compose": {
"inputs": {
"customProperties": null,
"data": {
"alertContext": {}
},
"essentials": {
"alertContextVersion": "123",
"alertId": "123",
"alertRule": "Test Alerts",
"description": "test",
"severity": "Sev4"
},
"schemaId": "test"
},
"runAfter": {},
"type": "Compose"
},
"Compose_2": {
"inputs": {
"Id": "123456789",
"email": "test#test.com",
"priority": "#{body('Parse_JSON')?['essentials']?['severity']}",
"status": "Open",
"subject": "#{body('Parse_JSON')?['essentials']?['description']}"
},
"runAfter": {
"Parse_JSON": [
"Succeeded"
]
},
"type": "Compose"
},
"Parse_JSON": {
"inputs": {
"content": "#outputs('Compose')",
"schema": {
"properties": {
"customProperties": {},
"data": {
"properties": {
"alertContext": {
"properties": {},
"type": "object"
}
},
"type": "object"
},
"essentials": {
"properties": {
"alertContextVersion": {
"type": "string"
},
"alertId": {
"type": "string"
},
"alertRule": {
"type": "string"
},
"description": {
"type": "string"
},
"severity": {
"type": "string"
}
},
"type": "object"
},
"schemaId": {
"type": "string"
}
},
"type": "object"
}
},
"runAfter": {
"Compose": [
"Succeeded"
]
},
"type": "ParseJson"
}
},
"contentVersion": "1.0.0.0",
"outputs": {},
"parameters": {},
"triggers": {
"manual": {
"inputs": {},
"kind": "Http",
"type": "Request"
}
}
},
"parameters": {}
}
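Note that the Compose above passes the severity value straight through to priority. If the Sev0/Sev1/Sev2 values also need to be translated to high/medium/low as the question describes, that is just a lookup around the severity value (in the Logic App itself this would typically be an if() expression or a Switch action); the Python sketch below only illustrates the mapping, and the fallback value is an assumption:
# hypothetical lookup table from the question: Sev0 = high, Sev1 = medium, Sev2 = low
priority_by_severity = {"Sev0": "High", "Sev1": "Medium", "Sev2": "Low"}

severity = "Sev4"                                      # value taken from the parsed alert
priority = priority_by_severity.get(severity, "Low")   # assumed fallback for unmapped severities
print(priority)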

How to use Parser Transformation for JSON data in IICS?

I am new to IICS and I have JSON data as below, which I would like to parse into a CSV file. I am using this link as a reference to achieve the transformation. I created a valid mapping in IICS and the mapping runs fine. However, when I check my jobs I receive the error below. I went to the path mentioned and opened the Events.cme file in Notepad, but cannot make out what the file is talking about (note: in the output below I deleted a few of the numbers).
Not sure what is wrong. Do I need to save my JSON data file as a .txt file?
Any help will be appreciated! Thanks in advance!
ERROR after running the mapping
[ERROR] Failed to process data: File C:/IICSLabFiles/test.json doesn't exist or isn't readable- for more information see file://C:/PROGRA~1/Informatica Cloud Secure Agent/apps/Data_Integration_Server/data/CMReports/Tmp/2022-06-01/HierarchyParser_h2r_udt_8gns3_ONLY_H2R_XMAP_/Events.cme
Opening Events.cme file in notepad produces following
<B#80010%#>
!~109146~165266~~10.2.2.65()
<B#80032%#>
</B#8032%#>
<m -- XMap%m>
!~103149~1654220266~~Pages\/page_m_1.cmv%Pages\/page_m_1.json
<B#80037%XML#>
!~1031~1654220266~~Pages\/Input_of_m_1.cmv%Pages\/Input_of_m_1.json
<LocalFile>
!~309025~16542266~~C:\/IICSLabFiles\/test.json
</LocalFile>
!~103205~16540266~~C:\/IICSLabFiles\/test.json
!~3033~1654220266~~
</B#8007%XML#>
</m -- XMap>
</B#80010%#>
JSON Data that is saved in test.json (with File type as JSON File):
{
"current_page": 1,
"first_page_url": "https://covid-api.com/api/regions?per_page=20&page=1",
"last_page_url": "https://covid-api.com/api/regions?per_page=20&page=50",
"next_page_url": "https://covid-api.com/api/regions?per_page=20&page=2",
"prev_page_url": null,
"per_page": "20",
"last_page": 50,
"from": 1,
"path": "https://covid-api.com/api/regions",
"to": 20,
"total": 997,
"data": [
{
"iso": "CHN",
"name": "China"
},
{
"iso": "TWN",
"name": "Taipei and environs"
},
{
"iso": "USA",
"name": "US"
},
{
"iso": "JPN",
"name": "Japan"
},
{
"iso": "THA",
"name": "Thailand"
},
{
"iso": "KOR",
"name": "Korea, South"
},
{
"iso": "SGP",
"name": "Singapore"
},
{
"iso": "PHL",
"name": "Philippines"
},
{
"iso": "MYS",
"name": "Malaysia"
},
{
"iso": "VNM",
"name": "Vietnam"
},
{
"iso": "AUS",
"name": "Australia"
},
{
"iso": "MEX",
"name": "Mexico"
},
{
"iso": "BRA",
"name": "Brazil"
},
{
"iso": "COL",
"name": "Colombia"
},
{
"iso": "FRA",
"name": "France"
},
{
"iso": "NPL",
"name": "Nepal"
},
{
"iso": "CAN",
"name": "Canada"
},
{
"iso": "KHM",
"name": "Cambodia"
},
{
"iso": "LKA",
"name": "Sri Lanka"
},
{
"iso": "CIV",
"name": "Cote d'Ivoire"
}
]
}
JSON schema that is saved in the Hierarchy schema (with file type as JSON File):
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"current_page": {
"type": "integer"
},
"first_page_url": {
"type": "string"
},
"last_page_url": {
"type": "string"
},
"next_page_url": {
"type": "string"
},
"prev_page_url": {
"type": "null"
},
"per_page": {
"type": "string"
},
"last_page": {
"type": "integer"
},
"from": {
"type": "integer"
},
"path": {
"type": "string"
},
"to": {
"type": "integer"
},
"total": {
"type": "integer"
},
"data": {
"type": "array",
"items": [
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
},
{
"type": "object",
"properties": {
"iso": {
"type": "string"
},
"name": {
"type": "string"
}
},
"required": [
"iso",
"name"
]
}
]
}
},
"required": [
"current_page",
"first_page_url",
"last_page_url",
"next_page_url",
"prev_page_url",
"per_page",
"last_page",
"from",
"path",
"to",
"total",
"data"
]
}
Source connection setup
The Path_text file contains the following information:
Path
C:/IICSLabFiles/test.json
The error message "C:/IICSLabFiles/test.json doesn't exist or isn't readable" suggests you try reading local file. Is this the path to the file located on Secure Agent running the mapping or is it the path to a file stored on your laptop? What is your Source definition?
Keep in mind that you design the mapping on your laptop where you have access to files stored on your laptop - but once you execute, it gets processed by Secure Agent (that can be a different machine, cloud-hosted, etc.). In this case it seems the Secure Agent cannot access the file at the given location.
It's also possible to have Secure Agent installed on your machine and run the process on the laptop where you actually have been designing the mapping. In such case please make sure there are no typos in the path, no leading or trailing empty spaces. And if it's a Windows-based Secure Agent, verify the paths as the one you use has froward slashes while Windows uses backslashes usually:
C:/IICSLabFiles/test.json
vs
C:\IICSLabFiles\test.json
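As a quick sanity check (an assumption on my part, and only if Python happens to be available on the machine hosting the Secure Agent), you can confirm from that machine whether the file is actually visible and readable:
import os

path = r"C:\IICSLabFiles\test.json"   # the path the mapping points to
print("exists:", os.path.exists(path))
print("readable:", os.access(path, os.R_OK))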

Convert nested JSON schema to PySpark schema

I have a schema which has nested fields. When I try to convert it with:
jtopy=json.dumps(schema_message['SchemaDefinition']) #json.dumps take a dictionary as input and returns a string as output.
print(jtopy)
dict_json=json.loads(jtopy) # json.loads take a string as input and returns a dictionary as output.
print(dict_json)
new_schema = StructType.fromJson(dict_json)
print(new_schema)
It returns the error:
return StructType([StructField.fromJson(f) for f in json["fields"]])
TypeError: string indices must be integers
The schema definition described below is what I'm passing:
{
"type": "record",
"name": "tags",
"namespace": "com.tigertext.data.events.tags",
"doc": "Schema for tags association to accounts (role,etc..)",
"fields": [
{
"name": "header",
"type": {
"type": "record",
"name": "eventHeader",
"namespace": "com.tigertext.data.events",
"doc": "Metadata about the event record.",
"fields": [
{
"name": "topic",
"type": "string",
"doc": "The topic this record belongs to. e.g. messages"
},
{
"name": "server",
"type": "string",
"doc": "The server that generated this event. e.g. xmpp-07"
},
{
"name": "service",
"type": "string",
"doc": "The service that generated this event. e.g. erlang-producer"
},
{
"name": "environment",
"type": "string",
"doc": "The environment this record belongs to. e.g. dev, prod"
},
{
"name": "time",
"type": "long",
"doc": "The time in epoch this record was produced."
}
]
}
},
{
"name": "eventType",
"type": {
"type": "enum",
"name": "eventType",
"symbols": [
"CREATE",
"UPDATE",
"DELETE",
"INIT"
]
},
"doc": "event type"
},
{
"name": "tagId",
"type": "string",
"doc": "Tag ID for the tag"
},
{
"name": "orgToken",
"type": "string",
"doc": "org ID"
},
{
"name": "tagName",
"type": "string",
"doc": "name of the tag"
},
{
"name": "colorId",
"type": "string",
"doc": "color id"
},
{
"name": "colorName",
"type": "string",
"doc": "color name"
},
{
"name": "colorValue",
"type": "string",
"doc": "color value e.g. #C8C8C8"
},
{
"name": "entities",
"type": [
"null",
{
"type": "array",
"items": {
"type": "record",
"name": "entity",
"fields": [
{
"name": "entityToken",
"type": "string"
},
{
"name": "entityType",
"type": "string"
}
]
}
}
],
"default": null
}
]
}
Above is the schema of the Kafka topic that I want to convert into a PySpark schema.
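Two things seem worth noting here, as assumptions, since the thread ends without an accepted fix: json.dumps of a string followed by json.loads gives back the same string, so if SchemaDefinition is already a JSON string then dict_json is still a string and indexing it with ["fields"] raises exactly this TypeError; and StructType.fromJson expects Spark's own schema JSON (the format produced by StructType.jsonValue()), not an Avro schema. A minimal sketch that parses the string once and maps just the Avro constructs used above (record, string, long, enum, ["null", ...] unions, arrays) onto Spark types could look like this, with schema_message being the variable from the question:
import json
from pyspark.sql.types import (ArrayType, LongType, StringType,
                               StructField, StructType)

def avro_to_spark(avro_type):
    """Hedged sketch: handles only the Avro constructs present in this schema."""
    if isinstance(avro_type, str):
        return {"string": StringType(), "long": LongType()}[avro_type]
    if isinstance(avro_type, list):                      # union, e.g. ["null", X]
        non_null = [branch for branch in avro_type if branch != "null"]
        return avro_to_spark(non_null[0])
    if avro_type["type"] == "record":
        return StructType([
            StructField(f["name"], avro_to_spark(f["type"]), nullable=True)
            for f in avro_type["fields"]
        ])
    if avro_type["type"] == "enum":                      # represent enums as plain strings
        return StringType()
    if avro_type["type"] == "array":
        return ArrayType(avro_to_spark(avro_type["items"]), containsNull=True)
    return avro_to_spark(avro_type["type"])              # e.g. {"type": "long", "logicalType": ...}

avro_schema = json.loads(schema_message['SchemaDefinition'])   # parse once, no json.dumps round-trip
new_schema = avro_to_spark(avro_schema)
print(new_schema)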