Modify JSON generated in Maximo 7.6.1 - json

I'm able to successfully generate a JSON file from Maximo, however I would like to modify the JSON before it gets generated. Below is a sample of the JSON that Maximo currently generates:
{"lastreadingdate":"2020-01-30T16:48:33+01:00",
"linearassetmeterid":0,
"sinceinstall":0.0,
"lastreading":"1,150",
"plustinitrdng":0.0,
"sincelastinspect":0.0,
"_rowstamp":"568349195",
"assetnum":"RS100003",
"active":true,
"assetmeterid":85,
"lifetodate":0.0,
"measureunitid":"KWH",
"metername":"1010",
"remarks":"TESTING JSON"}
I need the JSON to be generated as below:
{"spi:action": "OSLC draft",
"spi:tri1readingdate":"2020-01-30T16:48:33+01:00",
"spi:tryassetmeterid":0,
"spi:install":0.0,
"spi:lastreadingTx":"1,150",
"spi:intrdngtrX":0.0,
and so on...}
Basically I need to change the target attribute names and prefix them with "spi". Below is the error occurring in JSON Mapping.

You're not specifying how you generate the JSON file, but I'll quickly explain how you can achieve this:
As Dex pointed out, there is a JSON Mapping app in the integration module that you can use to map your outbound object structure's fields to your target structure naming.
You define your JSON structure on the JSON Mapping tab by providing a JSON sample.
You then define your mapping with Maximo on the Properties tab, pairing each field of the object structure with its target attribute name.
Reading this IBM doc before jumping right into it should help you a lot:
https://www.ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/02db2a84-fc66-4667-b760-54e495526ec1/page/e10f6e96-435d-433c-8259-5690eb756779/attachment/169224c7-10a5-4cee-af72-697a476f8b2e/media/JSON
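Purely as an illustration of what the mapping accomplishes (the actual renaming is configured in the JSON Mapping app, not written in code), here is a minimal Python sketch using the attribute names from the question:

import json

# Source-to-target attribute names, taken from the question; extend as needed.
FIELD_MAP = {
    "lastreadingdate": "spi:tri1readingdate",
    "linearassetmeterid": "spi:tryassetmeterid",
    "sinceinstall": "spi:install",
    "lastreading": "spi:lastreadingTx",
    "plustinitrdng": "spi:intrdngtrX",
}

def remap(record):
    """Rename the outbound attributes and add the static action field."""
    out = {"spi:action": "OSLC draft"}
    for key, value in record.items():
        out[FIELD_MAP.get(key, "spi:" + key)] = value
    return out

source = {"lastreadingdate": "2020-01-30T16:48:33+01:00", "lastreading": "1,150"}
print(json.dumps(remap(source), indent=2))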

Related

Reading JSON in Azure Synapse

I'm trying to understand the code for reading a JSON file in Synapse Analytics. Here's the code provided by the Microsoft documentation:
Query JSON files using serverless SQL pool in Azure Synapse Analytics
select top 10 *
from openrowset(
        bulk 'https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/ecdc_cases/latest/ecdc_cases.jsonl',
        format = 'csv',
        fieldterminator = '0x0b',
        fieldquote = '0x0b'
    ) with (doc nvarchar(max)) as rows
go
I wonder why the format = 'csv'. Is it trying to convert JSON to CSV to flatten the file?
Why they didn't just read the file as a SINGLE_CLOB, I don't know.
When you use SINGLE_CLOB, the entire file is imported as one value, and the content of this file is not well formed as a single JSON document. Using SINGLE_CLOB would therefore mean more work after the openrowset before we could use the content as JSON (since the whole file is not valid JSON, we would need to split and parse the value ourselves). It can be done, but it would probably require more work.
The format of the file is multiple JSON-like strings, each on a separate line: "line-delimited JSON", as the document calls it.
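As a quick Python sketch (not part of the answer's SQL, just to illustrate what "line-delimited JSON" means), each line of the file is parsed as its own JSON document:

import json
import urllib.request

URL = ("https://pandemicdatalake.blob.core.windows.net/public/curated/"
       "covid-19/ecdc_cases/latest/ecdc_cases.jsonl")

# Each line of a .jsonl file is a complete JSON document, so we parse line by line
# instead of loading the whole file as one JSON value.
with urllib.request.urlopen(URL) as response:
    for i, line in enumerate(response):
        doc = json.loads(line)        # one JSON object per line
        print(list(doc)[:5])          # show a few of the keys of this document
        if i >= 9:                    # stop after 10 lines, like TOP 10 in the query
            break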
By the way, if you check the history of the document on GitHub, you will find that originally this was not the case. As far as I remember, the file originally contained a single JSON document with an array of objects (wrapped with [] once loaded). Someone named "Ronen Ariely" in fact found this issue in the document, which is why you can see my name in the list of authors of the document :-)
I wonder why the format = 'csv'. Is it trying to convert json to csv to flatten the hierarchy?
(1) JSON is not a data type in SQL Server. There is no data type named JSON. What we have in SQL Server are tools, such as functions, that work on text and provide support for strings in JSON format. Therefore, we do not CONVERT to JSON or from JSON.
(2) The format parameter has nothing to do with JSON. It specifies that the content of the file is a comma separated values file. You can (and should) use it whenever your file is well formatted as a comma separated values file (also commonly known as a CSV file).
In this specific sample in the document, the values in the CSV file are strings, each of which has a valid JSON format. Only after reading the file with openrowset do we start to parse the content of the text as JSON.
Notice that only after the heading "Parse JSON documents" does the document start to speak about parsing the text as JSON.

How do I parse a JSON from Azure Blob Storage file in Logic App?

I have a JSON file in Azure Blob storage that I need to parse and insert rows into SQL using the Logic App.
I am using "Get Blob Content" and my first attempt was to then pass the result to "Parse JSON". It returns an error: "InvalidTemplate. Unable to process template language expressions in action 'Parse_JSON' inputs at line '1' and column '2856'"
I found some discussion indicating that the content needs to be converted to a string, so I used "Compose" and edited the code as suggested to
"inputs": "#base64ToString(body('Get_blob_content').$content)"
This works, but then the InvalidTemplate issue gets pushed to the Parse function and I get the InvalidTemplate error there. I have tried wrapping the output in a JSON expression and a few other things, but I just can't get it to parse.
If I take a sample, or even the entire JSON, and put it into the INPUT of the Parse function, it works without issue, but it will not accept the blob content as JSON.
The only thing I have been able to do successfully from blob content is to take it as a string and update a row in SQL to later use the OPENJSON in SQL...but I run into an issue there that is for another post.
I am at a loss of what to do.
You don't post much information about your logic app actions, so maybe you could refer to my flow design. I tested with JSON data containing an array.
Below is my flow design. I'm not using a Compose action; instead I use decodeBase64(body('Get_blob_content')['$content']) as the Parse JSON content.
And if you select a property from the JSON, you need to set the array index. I set a variable to get a value with body('Parse_JSON')[1]['name'].
You could have a try with this; if it still fails, please provide more information or a sample so we can test.
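Outside the designer, the expression above boils down to base64-decoding the $content field and parsing the result. Here is a rough Python sketch of the same steps (the shape of the Get blob content body is an assumption for illustration):

import base64
import json

# Hypothetical "Get blob content" body: content type plus base64-encoded payload.
get_blob_content_body = {
    "$content-type": "application/json",
    "$content": base64.b64encode(b'[{"name": "Alice"}, {"name": "Bob"}]').decode(),
}

# Equivalent of decodeBase64(body('Get_blob_content')['$content'])
decoded = base64.b64decode(get_blob_content_body["$content"]).decode("utf-8")

# Equivalent of the Parse JSON action.
parsed = json.loads(decoded)

# Equivalent of body('Parse_JSON')[1]['name'] -- note the array index.
print(parsed[1]["name"])   # Bob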

AWS Lambda output format - JSON

I'm trying to format the output from a Lambda function as JSON. The Lambda function queries my Amazon Aurora RDS instance and returns an array of rows in the following format:
[[name,age,town,postcode]]
which gives an example output of:
[["James", 23, "Maidenhead","sl72qw"]]
I understand that mapping templates are designed to translate one format to another, but I don't understand how I can take the output above and map it into a JSON format using these mapping templates.
I have checked the documentation and it only covers converting one JSON to another.
Without seeing the code you're specifically using, it's difficult to give you a definitively correct answer, but I suspect what you're after is returning the data from Python as a dictionary and then converting that to JSON.
It looks like this thread contains the relevant details on how to do that.
More specifically, using the DictCursor
cursor = connection.cursor(pymysql.cursors.DictCursor)
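As a rough sketch only (the connection details, table, and column names are placeholders), the handler could look something like this, so that rows come back as dictionaries and json.dumps produces named fields instead of positional lists:

import json
import pymysql

def lambda_handler(event, context):
    # Placeholder credentials; in practice read them from environment variables or Secrets Manager.
    connection = pymysql.connect(
        host="my-aurora-endpoint",
        user="admin",
        password="secret",
        database="people",
        cursorclass=pymysql.cursors.DictCursor,  # rows are returned as dicts, not tuples
    )
    with connection.cursor() as cursor:
        cursor.execute("SELECT name, age, town, postcode FROM residents")
        rows = cursor.fetchall()  # e.g. [{"name": "James", "age": 23, ...}]

    return {
        "statusCode": 200,
        "body": json.dumps(rows),  # JSON objects with named fields
    }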

How to validate against runtime JSON object reference?

For sample JSON data which looks like this:
{
  "children": {
    "Alice": {...},
    "Jamie": {...},
    "Bob": {...}
    // Any new child with a given unique name will be added to this object
  },
  "childrenOrder": ["Alice", "Bob", "Jamie"]
}
In the corresponding JSON Schema, I am trying to limit the valid values in the "childrenOrder" array to the keys that exist in "children" at run time.
I didn't see any means of referring to runtime dynamic values in the official JSON Schema documentation (http://json-schema.org/documentation.html).
Is this even possible at the moment?
For the sake of brevity I omitted JSON Schema code. I can add it if folks think it is needed to address the question.
Thanks in advance.
No, it is not possible using the current JSON Schema specification. However, there is a proposal for the next version of JSON Schema that could change that:
https://github.com/json-schema/json-schema/wiki/%24data-(v5-proposal)
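Until something like $data is standardised, one workaround is to enforce that particular rule in application code alongside normal schema validation; a minimal Python sketch, assuming the document shape from the question:

def children_order_is_valid(doc):
    """Check that childrenOrder lists exactly the keys of children, with no duplicates."""
    children = set(doc.get("children", {}))
    order = doc.get("childrenOrder", [])
    return len(order) == len(set(order)) and set(order) == children

doc = {
    "children": {"Alice": {}, "Jamie": {}, "Bob": {}},
    "childrenOrder": ["Alice", "Bob", "Jamie"],
}
print(children_order_is_valid(doc))  # True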

How to display JSON data obtained in JSON2 format

I get JSON data from a Struts Action, like below:
I want to process this data in a JSP page, but I tried to use $.each and attr and neither works. I used json2.js JSON.stringify() to get this data, so how can I get each key and value from it?
[{"agreementNumber":"161446628","employeeIndicator":"N","enrollmentSrc":"363","fepIndicator":"N","groupCancelDate":null,"groupCancelDateTime":0,"groupCnlDate":"","groupEffDate":"20070701","groupEffectiveDate":{"date":1,"day":0,"hours":0,"minutes":0,"month":6,"seconds":0,"time":1183262400000,"timezoneOffset":240,"year":107},"groupEffectiveDateTime":1183262400000,"groupName":"Westminster College","groupNumber":"01501701 ","index":"1","memberList":{"enrollmetSrc":"363","groupNumber":"01501701 ","memberList":[{"agreementNumber":"161446628","birthDate":{"date":10,"day":0,"hours":0,"minutes":0,"month":1,"seconds":0,"time":-217450800000,"timezoneOffset":300,"year":63},"birthDateTime":-217450800000,"cancelDate":null,"cancelDateTime":0,"classCode":" I3","effectiveDate":{"date":1,"day":0,"hours":0,"minutes":0,"month":6,"seconds":0,"time":1183262400000,"timezoneOffset":240,"year":107},"effectiveDateTime":1183262400000,"firstName":"KENNETH ","gender":"M","groupName":"","groupNumber":"01501701 ","lastName":"ROMIG ","medicareAdvantage":"","memberId":375315,"middleName":"J ","pin":"1","preTtlName":" ","relation":"Self","relationCode":"1","sucTtlName":" "},{"agreementNumber":"161446628","birthDate":{"date":23,"day":5,"hours":0,"minutes":0,"month":7,"seconds":0,"time":-200692800000,"timezoneOffset":240,"year":63},"birthDateTime":-200692800000,"cancelDate":null,"cancelDateTime":0,"classCode":" I3","effectiveDate":{"date":1,"day":0,"hours":0,"minutes":0,"month":6,"seconds":0,"time":1183262400000,"timezoneOffset":240,"year":107},"effectiveDateTime":1183262400000,"firstName":"KIMBERLY ","gender":"F","groupName":"","groupNumber":"01501701 ","lastName":"ROMIG ","medicareAdvantage":"","memberId":1424959,"middleName":"G ","pin":"3","preTtlName":" ","relation":"Spouse","relationCode":"2","sucTtlName":" "},{"agreementNumber":"161446628","birthDate":{"date":8,"day":1,"hours":0,"minutes":0,"month":0,"seconds":0,"time":631774800000,"timezoneOffset":300,"year":90},"birthDateTime":631774800000,"cancelDate":null,"cancelDateTime":0,"classCode":" I3","effectiveDate":{"date":1,"day":0,"hours":0,"minutes":0,"month":6,"seconds":0,"time":1183262400000,"timezoneOffset":240,"year":107},"effectiveDateTime":1183262400000,"firstName":"NICOLE ","gender":"F","groupName":"","groupNumber":"01501701 ","lastName":"CRUMBACHER ","medicareAdvantage":"","memberId":375314,"middleName":"A ","pin":"4","preTtlName":" ","relation":"Child","relationCode":"3","sucTtlName":" "},{"agreementNumber":"161446628","birthDate":{"date":7,"day":6,"hours":0,"minutes":0,"month":6,"seconds":0,"time":994478400000,"timezoneOffset":240,"year":101},"birthDateTime":994478400000,"cancelDate":null,"cancelDateTime":0,"classCode":" I3","effectiveDate":{"date":1,"day":0,"hours":0,"minutes":0,"month":6,"seconds":0,"time":1183262400000,"timezoneOffset":240,"year":107},"effectiveDateTime":1183262400000,"firstName":"NATHAN ","gender":"M","groupName":"","groupNumber":"01501701 ","lastName":"ROMIG ","medicareAdvantage":"","memberId":1424960,"middleName":"J ","pin":"6","preTtlName":" ","relation":"Child","relationCode":"3","sucTtlName":" "}]},"ownerCode":"HM"}]
Your JSON object has a complex structure, so I am not going to write the function to map it properly, but I will give you some tools to understand it better and work with it.
To see your JSON object more clearly, use an online JSON parser like http://json.parser.online.fr/. Just paste your JSON object there and on the right you will see the tree-like structure.
Here you can see examples of how to access your JSON object properties: jsFiddle
I know that this is not exactly what you want, but it will help you build it.