I am a newbie in the Azure environment. I am getting a JSON response from my custom connector, but I need to convert that JSON into parameters so I can use them in further actions. Does anybody know how this is possible?
You want to use these parameters in further actions, but you don't mention the type in which the response is stored.
1. Stored as a string. In this case the whole JSON is a string, so selecting a property directly is not supported. You need to parse it with the Parse JSON action first; then you will be able to select properties. For the Parse JSON schema, just click Use sample payload to generate schema and paste in your JSON value, and the schema will be generated. Then select your property with @{body('Parse_JSON')?['name']} and it will work (see the sample below this list).
2. If it's stored as an object, just use the expression variables('test1')['name'] to get it.
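As a minimal sketch (the payload and property names here are made up), pasting a sample payload such as

{"name": "John", "age": 30}

into Use sample payload generates a schema along these lines:

{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "age": { "type": "integer" }
  }
}

after which @{body('Parse_JSON')?['name']} returns "John".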
I want to pass a value from my CSV file into the JSON Extractor, but it's not working.
I have tried [?(@.name == '${UserName}')].id, but when I simply write the user name literally instead of taking it from the CSV, it works.
Double-check that your UserName variable exists and has the anticipated value using the Debug Sampler and View Results Tree listener combination.
As long as the associated JMeter Variable exists, you should be able to use it in the JSON Extractor.
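As a sketch, assuming the response is a JSON array of objects with name and id fields, the JSON Extractor configuration would look like:

Names of created variables: userId
JSON Path expressions: $..[?(@.name == '${UserName}')].id
Match No.: 1

The Debug Sampler should then show both UserName (read from the CSV) and the extracted userId.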
I need to import JSON data into a WP database. I found the correct table in the database, but the example JSON data present in the table is not in a normal JSON format.
I need to import:
{"nome": "Pippo","cognome": "Paperino"}
but the example data in the table is:
a:2:{s:4:"nome";s:5:"Pippo";s:7:"cognome";s:8:"Paperino";}
How can I convert my JSON to this "WP JSON"?
The data is serialized; that's why it looks weird. You can use maybe_unserialize() in WordPress; this function will unserialize the data if it was serialized.
https://developer.wordpress.org/reference/functions/maybe_unserialize/
Some functions serialize data before saving it in WordPress, and some will also unserialize it when pulling from the DB. So depending on how you save it, and how you later extract it, you might end up with serialized data.
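As a minimal PHP sketch (the option name my_custom_option is made up), the conversion in both directions looks like:

$json = '{"nome": "Pippo","cognome": "Paperino"}';
$data = json_decode( $json, true ); // array( 'nome' => 'Pippo', 'cognome' => 'Paperino' )

// serialize() produces exactly the format you saw in the table:
echo serialize( $data );
// a:2:{s:4:"nome";s:5:"Pippo";s:7:"cognome";s:8:"Paperino";}

// Going the other way, maybe_unserialize() restores the array:
$restored = maybe_unserialize( 'a:2:{s:4:"nome";s:5:"Pippo";s:7:"cognome";s:8:"Paperino";}' );

// Note that update_option() serializes arrays for you automatically:
update_option( 'my_custom_option', $data );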
I have searched a lot but have not found an exact solution.
I have a REST service whose response contains rows, and each row is a JSON object, as given below:
{"event":"click1","properties":{ "time":"2 dec 2018","clicks":29,"parent":"jbar","isLast":"NO"}}
{"event":"click2","properties":{ "time":"2 dec 2018","clicks":35,"parent":"jbar3","isLast":"NO"}}
{"event":"click3","properties":{ "time":"2 dec 2018","clicks":10,"parent":"jbar2","isLast":"NO"}}
{"event":"click4","properties":{ "time":"2 dec 2018","clicks":9,"parent":"jbar1","isLast":"YES"}}
Each row is a JSON object (all are similar to each other). I have a database table with all those fields as columns. I want to loop through these rows and load all the data using Talend. What I have tried so far:
tRestClient--tNormalize--tExtractJsonFields--tOracleOutput
and provided the loop criteria and mapping in the tExtractJsonFields component, but it is not working and throws the error "json can not be null or empty".
I need help doing that.
Since your web service returns multiple JSON objects in the response, it's not a single valid JSON object but rather a JSON document.
You need to break it into individual JSON objects.
You can add a tNormalize between tRESTClient and tExtractJsonFields, and normalize the JSON document on the "\n" character.
The "json can not be null or empty" error is due to an error in your JsonPath queries. You have to set the loop query to "$" and reference the JSON properties as "event", "properties.time", and so on (a sketch of the settings follows).
Could you try this:
In your tExtractJsonFields, configure the Read By property to "JsonPath without loop".
Can I get help creating a table on AWS Athena?
For a sample of the data:
[{"lts": 150}]
AWS Glue generates the schema as:
array (array<struct<lts:int>>)
When I try to preview the table created by AWS Glue, I get this error:
HIVE_BAD_DATA: Error parsing field value for field 0: org.openx.data.jsonserde.json.JSONObject cannot be cast to org.openx.data.jsonserde.json.JSONArray
The error message is clear, but I can't find the source of the problem!
Hive running under AWS Athena uses Hive-JSON-Serde to serialize/deserialize JSON. For some reason, it doesn't support just any standard JSON. It asks for one record per line, without an array. In their words:
The following example will work.
{ "key" : 10 }
{ "key" : 20 }
But this won't:
{
"key" : 20,
}
Nor this:
[{"key" : 20}]
You should create a JSON classifier to convert the array into a list of objects instead of a single array object. Use the JSON path $[*] in your classifier and then set up the crawler to use it:
Edit crawler
Expand 'Description and classifiers'
Click 'Add' on the left pane to associate your classifier with the crawler
After that, remove the previously created table and re-run the crawler. It will create a table with the proper schema, but I think Athena will still complain when you try to query it. However, you can now read from that table using a Glue ETL job and process single record objects instead of array objects.
This JSON, [{"lts": 150}], would work like a charm with the query below:
select n.lts from table_name
cross join UNNEST(table_name.array) as t (n)
The output would be a single row with lts = 150.
But I have faced a challenge with JSON like [{"lts": 150},{"lts": 250},{"lts": 350}].
Even though there are 3 elements in the JSON, the query returns only the first element. This may be because of the limitation listed by @artikas.
We can definitely change the JSON as below to make it work:
{"lts": 150}
{"lts": 250}
{"lts": 350}
Please post if anyone has a better solution.
I am using Spark Jobserver (https://github.com/spark-jobserver/spark-jobserver) and Apache Spark for some analytic processing.
I am receiving the following structure back from the jobserver when a job finishes:
"status": "OK",
"result": [
"[17799.91015625,null,hello there how areyou?]",
"[50000.0,null,Hi, im fine]",
"[0.0,null,All good]"
]
The result doesn't contain valid JSON, as explained here:
https://github.com/spark-jobserver/spark-jobserver/issues/176
So I'm trying to convert the returned structure into a JSON structure. However, I can't simply insert single quotes into the result string based on the comma delimiter, as the result sometimes contains a comma itself.
How can I convert a Spark SQL Row into a JSON object in this situation?
I actually found a better way in the end:
from Spark 1.3.0 onwards you can use .toJSON on a DataFrame to convert it to JSON:
df.toJSON.collect()
To output a DataFrame's schema as JSON, you can use:
df.schema.json
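Putting it together, a minimal Scala sketch (assuming df is an existing DataFrame on Spark 1.3+):

// Each element is one row rendered as a JSON string, so embedded
// commas and quotes are escaped correctly:
val rowsAsJson: Array[String] = df.toJSON.collect()
rowsAsJson.foreach(println)

// The schema itself, as a JSON string:
println(df.schema.json)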