I'm trying to format the output from a Lambda function into JSON. The Lambda function queries my Amazon Aurora RDS instance and returns an array of rows in the following format:
[[name,age,town,postcode]]
which gives an example output of:
[["James", 23, "Maidenhead", "sl72qw"]]
I understand that mapping templates are designed to translate one format to another, but I don't understand how I can take the output above and map it into a JSON format using these mapping templates.
I have checked the documentation and it only covers converting one JSON to another.
Without seeing the code you're specifically using it's difficult to give a definitive answer, but I suspect what you're after is returning the data from Python as a dictionary and then converting that to JSON.
It looks like this thread contains the relevant details on how to do that.
More specifically, using the DictCursor:
cursor = connection.cursor(pymysql.cursors.DictCursor)
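For completeness, here's a minimal sketch of how that might look in a Lambda handler (the connection details and table/column names are made up for illustration):

import json
import pymysql

# Hypothetical connection details; DictCursor makes every row come back as a dict.
connection = pymysql.connect(host="my-aurora-endpoint",
                             user="admin",
                             password="secret",
                             db="mydb",
                             cursorclass=pymysql.cursors.DictCursor)

def lambda_handler(event, context):
    with connection.cursor() as cursor:
        cursor.execute("SELECT name, age, town, postcode FROM people")
        rows = cursor.fetchall()  # e.g. [{"name": "James", "age": 23, ...}]
    # json.dumps turns the list of dicts into proper JSON for the response body.
    return {"statusCode": 200, "body": json.dumps(rows)}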
I have a JSON file in the following format:
{"result": [{"key1":"value1", "key2":"value2", "key3":"value3"}]}
When I use the crawler, the table created has classification UNKNOWN. I have done some research, and if you make a custom classifier with the JsonPath $[*] you should be able to get the whole array. Unfortunately this does not work, for me at least. I created a new crawler after creating the classifier, since attaching the classifier to the existing crawler did not work.
Has anyone run into this issue and can be of help?
Your JSONPath assumes that the root is a collection, e.g.:
[{"result ..},{}]
Since your root is not a collection, try a JSONPath like this:
$.result
That assumes the whole object is the value you want; you may also want to do:
$.result[*]
That will get each entry in the result collection as a separate object.
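If it helps to see the difference concretely, here is a rough Python analogue of what the two JSONPaths select (plain dict access standing in for the classifier; the sample document is the one from the question):

import json

doc = json.loads('{"result": [{"key1":"value1", "key2":"value2", "key3":"value3"}]}')

# $.result -> a single match: the whole array under "result"
whole_array = doc["result"]

# $.result[*] -> one match per entry in that array
for entry in doc["result"]:
    print(entry)  # {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}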
I found a workaround.
In my Python script I select the "result" array before writing the file; in other words, the output no longer has the "result" key and the root is the array itself. I can then use the classifier with the JsonPath $[*], as in the sketch below. This workaround worked fine for me.
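In case it's useful to anyone else, a minimal sketch of the unwrapping step (file names are made up):

import json

with open("raw.json") as f:
    doc = json.load(f)

# Keep only the array under "result" so the root of the new file is a
# collection, which is what the $[*] classifier expects to match.
with open("unwrapped.json", "w") as f:
    json.dump(doc["result"], f)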
Have a nice one!
When I use the BigQuery console manually, I can see that the 3 options when exporting a table to GCS are CSV, JSON (Newline delimited), and Avro.
With Airflow, when using the BigQueryToCloudStorageOperator operator, what is the correct value to pass to export_format in order to transfer the data to GCS as JSON (Newline delimited)? Is it simply JSON? All the examples I've seen online for BigQueryToCloudStorageOperator use export_format='CSV', never JSON, so I'm not sure what the correct value is here. Our use case needs JSON, since the second task in our DAG (after transferring data to GCS) is to load that data from GCS into our MongoDB cluster with mongoimport.
I found that the value export_format='NEWLINE_DELIMITED_JSON' was required, after finding the documentation at https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#jobconfigurationextract and referring to the values for destinationFormat.
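For reference, a minimal task definition might look like this (the project, dataset, and bucket names are placeholders; the import path shown is the Airflow 1.x contrib one and may differ in newer releases, where the operator lives under airflow.providers.google):

from airflow.contrib.operators.bigquery_to_gcs import BigQueryToCloudStorageOperator

export_to_gcs = BigQueryToCloudStorageOperator(
    task_id="bq_to_gcs_json",
    source_project_dataset_table="my-project.my_dataset.my_table",
    destination_cloud_storage_uris=["gs://my-bucket/export/data-*.json"],
    # NEWLINE_DELIMITED_JSON is the destinationFormat value from the BigQuery API docs.
    export_format="NEWLINE_DELIMITED_JSON",
)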
According to the BigQuery documentation, the three possible formats to which you can export BigQuery query results are CSV, JSON, and Avro (consistent with the UI drop-down menu).
I would try with export_format='JSON' as you already proposed.
I am having trouble identifying the proper format to store a JSON request body in a CSV file and then use the CSV file's value in a scenario.
This works properly within a scenario:
And request '{"contextURN":"urn:com.myco.here:env:booking:reservation:0987654321","individuals":[{"individualURN":"urn:com.myco.here:env:booking:reservation:0987654321:individual:12345678","name":{"firstName":"NUNYA","lastName":"BIDNESS"},"dateOfBirth":"1980-03-01","address":{"streetAddressLine1":"1 Myplace","streetAddressLine2":"","city":"LANDBRANCH","countrySubdivisionCode":"WV","postalCode":"25506","countryCode":"USA"},"objectType":"INDIVIDUAL"},{"individualURN":"urn:com.myco.here:env:booking:reservation:0987654321:individual:23456789","name":{"firstName":"NUNYA","lastName":"BIZNESS"},"dateOfBirth":"1985-03-01","address":{"streetAddressLine1":"1 Myplace","streetAddressLine2":"","city":"BRANCHLAND","countrySubdivisionCode":"WV","postalCode":"25506","countryCode":"USA"},"objectType":"INDIVIDUAL"}]}'
However, when it is stored in the CSV file as follows (I've tried quite a number of other formatting variations):
'{"contextURN":"urn:com.myco.here:env:booking:reservation:0987654321","individuals":[{"individualURN":"urn:com.myco.here:env:booking:reservation:0987654321:individual:12345678","name":{"firstName":"NUNYA","lastName":"BIDNESS"},"dateOfBirth":"1980-03-01","address":{"streetAddressLine1":"1 Myplace","streetAddressLine2":"","city":"LANDBRANCH","countrySubdivisionCode":"WV","postalCode":"25506","countryCode":"USA"},"objectType":"INDIVIDUAL"},{"individualURN":"urn:com.myco.here:env:booking:reservation:0987654321:individual:23456789","name":{"firstName":"NUNYA","lastName":"BIZNESS"},"dateOfBirth":"1985-03-01","address":{"streetAddressLine1":"1 Myplace","streetAddressLine2":"","city":"BRANCHLAND","countrySubdivisionCode":"WV","postalCode":"25506","countryCode":"USA"},"objectType":"INDIVIDUAL"}]}',
and used in scenario as:
And request requestBody
my test returns "javascript evaluation failed: " followed by the JSON above and :1:63 Missing close quote ^ at line number 1 at column number 63.
Can you please identify the correct formatting or the usage error I am missing? Thanks.
We just use a basic CSV library behind the scenes. I suggest you roll your own Java helper class that does whatever processing / pre-processing you need.
Do read this answer as well: https://stackoverflow.com/a/54593057/143475
I can't make sense of your JSON but if you are trying to fit JSON into CSV, sorry - that's not a good idea. See this answer: https://stackoverflow.com/a/62449166/143475
I'm able to successfully generate a JSON file from Maximo; however, I would like to modify the JSON before it gets generated. Below is a sample of the JSON that Maximo generates:
{"lastreadingdate":"2020-01-30T16:48:33+01:00",
"linearassetmeterid":0,
"sinceinstall":0.0,
"lastreading":"1,150",
"plustinitrdng":0.0,
"sincelastinspect":0.0,
"_rowstamp":"568349195",
"assetnum":"RS100003",
"active":true,
"assetmeterid":85,
"lifetodate":0.0,
"measureunitid":"KWH",
"metername":"1010",
"remarks":"TESTING JSON"}
I need the JSON to be generated as below:
{"spi:action": "OSLC draft",
"spi:tri1readingdate":"2020-01-30T16:48:33+01:00",
"spi:tryassetmeterid":0,
"spi:install":0.0,
"spi:lastreadingTx":"1,150",
"spi:intrdngtrX":0.0,
and so on...}
Basically, I need to change the target attribute names and add the "spi" prefix. Below is the error occurring in JSON Mapping.
You haven't specified how you generate the JSON file, but I'll quickly explain how you can achieve this:
As Dex pointed out, there is a JSON Mapping app in the integration module that you can use to map your outbound object structure's fields to your target structure naming.
You define your JSON structure on the JSON Mapping tab by providing a JSON sample.
You then define your mapping with Maximo on the Properties tab.
Reading this IBM doc before jumping right into it should help you a lot:
https://www.ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/02db2a84-fc66-4667-b760-54e495526ec1/page/e10f6e96-435d-433c-8259-5690eb756779/attachment/169224c7-10a5-4cee-af72-697a476f8b2e/media/JSON
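As an aside, if you ever post-process the file outside Maximo instead of using the JSON Mapping app, the rename itself is just a key mapping. A minimal Python sketch, with the mapping guessed from your two samples (the attribute names are assumptions):

FIELD_MAP = {
    "lastreadingdate": "spi:tri1readingdate",
    "linearassetmeterid": "spi:tryassetmeterid",
    "sinceinstall": "spi:install",
    "lastreading": "spi:lastreadingTx",
    "plustinitrdng": "spi:intrdngtrX",
}

def remap(payload):
    # Start with the fixed action attribute, then rename everything else,
    # falling back to a plain "spi:" prefix for unmapped fields.
    out = {"spi:action": "OSLC draft"}
    for key, value in payload.items():
        out[FIELD_MAP.get(key, "spi:" + key)] = value
    return out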
I'm working on some Python code for my local billiard hall and I'm running into problems with JSON encoding. When I dump my data into a file I obviously get all the data on a single line. However, I want my data to be dumped into the file following the format that I want (I had to use a picture, "My custom JSON format", to get the point across). I've looked up questions on custom JSONEncoders, but it seems they all have to do with datatypes that aren't JSON serializable. I never found a solution for my specific need, which is having everything laid out in the manner that I want. Basically, I want each list element to be on a separate row, but all of a dict's items to be on the same row. Do I need to write my own custom encoder, or is there some other approach I should take? Thanks!
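You don't necessarily need a custom encoder for this. Since JSON ignores whitespace, one simple trick is to serialize each element yourself and control the newlines. A minimal sketch, assuming your data is a list of dicts (the field names are made up):

import json

games = [
    {"player": "Sam", "balls_potted": 7, "won": True},
    {"player": "Alex", "balls_potted": 5, "won": False},
]

# Each list element gets its own row; each dict's items stay on one row
# because json.dumps without indent keeps a dict on a single line.
rows = ",\n".join("    " + json.dumps(game) for game in games)
with open("games.json", "w") as f:
    f.write("[\n" + rows + "\n]\n")

The result is still valid JSON, so json.load reads it back unchanged.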