I am trying to read a JSON file as a string and use its contents as a validation schema for Cerberus. I am using a custom function with check_with. The schema works fine if I define it directly inside my Python test script.
abc.json
{
  "rows": {
    "type": "list",
    "schema": {
      "type": "dict",
      "schema": {
        "amt": {"type": "integer", "check_with": util_cls.amt_gt_than}
      }
    }
  }
}
Python test code
from cerberus import Validator

with open("abc.json") as f:
    s = f.read()
# s = ast.literal_eval(s)  # fails: the file is not a valid Python literal
v = Validator()
r = v.validate(json_data, s)  # json_data is the document being validated
Cerberus requires the variable s to be a dict, and I can't convert the contents of abc.json with json.loads because the file is not valid JSON (the function reference is unquoted). Any ideas on how to convert the string to a dict?
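One approach is to make the file valid JSON by quoting the function reference, then walking the parsed schema and swapping the string back for the actual callable. A minimal sketch, where the util_cls class below is a hypothetical stand-in for the real one in the question:

```python
import json

# Hypothetical stand-in for the real util_cls from the question.
class util_cls:
    @staticmethod
    def amt_gt_than(field, value, error):
        if value <= 0:
            error(field, "must be greater than 0")

# With the function reference quoted, the file parses as ordinary JSON.
raw = '''
{
  "rows": {
    "type": "list",
    "schema": {
      "type": "dict",
      "schema": {
        "amt": {"type": "integer", "check_with": "util_cls.amt_gt_than"}
      }
    }
  }
}
'''

def resolve(node, lookup):
    """Recursively replace "check_with" strings with the matching callable."""
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "check_with" and isinstance(value, str):
                node[key] = lookup[value]
            else:
                resolve(value, lookup)
    elif isinstance(node, list):
        for item in node:
            resolve(item, lookup)
    return node

schema = resolve(json.loads(raw),
                 {"util_cls.amt_gt_than": util_cls.amt_gt_than})
```

The resulting schema dict can then be passed to Validator as usual, e.g. v = Validator(schema); v.validate(json_data).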
I have a JSON file, and there is a trailing comma at the end of each JSON object. How do I remove the last comma after Item2?
Opening this file in Notepad++ with the JSON viewer plugin and running Format JSON removes the commas after Item1, Item2, and the last object.
Does PowerShell support reading this JSON and formatting it properly the way Notepad++ does?
I found this documentation: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/convertto-json?view=powershell-7.2
But I did not find any option in ConvertTo-Json to parse the JSON given below and write it back out in the correct format.
{
  "Name": "SampleName",
  "Cart": [
    {
      "Item1": "ItemOne",
    },
    {
      "Item2": "ItemTwo",
    },
  ]
}
Expected JSON output
{
  "Name": "SampleName",
  "Cart": [
    {
      "Item1": "ItemOne"
    },
    {
      "Item2": "ItemTwo"
    }
  ]
}
You can use the third-party module Newtonsoft.Json.
The cmdlet ConvertFrom-JsonNewtonsoft will accept this malformed JSON file.
Once it is converted to an object, you can convert it back to a valid JSON string:
$a = @"
{
  "Name": "SampleName",
  "Cart": [
    {
      "Item1": "ItemOne",
    },
    {
      "Item2": "ItemTwo",
    },
  ]
}
"@
$a | ConvertFrom-JsonNewtonsoft | ConvertTo-JsonNewtonsoft
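If a third-party module is not an option, the trailing commas can also be stripped with a regular expression before parsing. A minimal sketch in Python (the same substitution pattern would work in PowerShell's -replace):

```python
import json
import re

raw = '''
{
  "Name": "SampleName",
  "Cart": [
    {
      "Item1": "ItemOne",
    },
    {
      "Item2": "ItemTwo",
    },
  ]
}
'''

# Remove any comma that is followed only by whitespace and a closing } or ].
cleaned = re.sub(r',(\s*[}\]])', r'\1', raw)
data = json.loads(cleaned)
print(json.dumps(data, indent=2))
```

Note this naive pattern would also rewrite a comma-before-bracket sequence inside a string value, so it is only safe when the data is known not to contain such values.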
I'm using the following JSON and query to calculate the array length in the JMeter JSON extractor.
{
  "data": {
    "modal": "HJ",
    "technicalid": "e492fc62-a886-67a461b76de8",
    "viewModel": {
      "series": [
        {
          "name": "H_0_G_0_R_0",
          "UID": "J_0_G_0_R_0",
          "description": "Test1",
          "type": "series",
          "groups": [
            {
              "name": "H_0_G_0",
              "UID": "G_0_G_0",
              "description": "Group 1",
              "type": "group"
            }
          ],
          "postProcessing": null
        }
      ]
    },
    "status": "success"
  },
  "success": true,
  "statusCode": 200,
  "errorMessage": ""
}
Here is the query.
data.viewModel.series[0].groups.length
This works fine in the online jsonquerytool. When I use the query in the JMeter JSON extractor, however, it returns null. I assume this is because the query returns an integer, since other similar queries that return strings work fine with the JSON extractor. How do I find the array length with the JMeter JSON extractor?
Why use the JSON extractor to calculate the length? You could use a post-processor instead, for example a JSR223 PostProcessor with a Groovy script:
import groovy.json.*

def response = prev.responseDataAsString
def json = new JsonSlurper().parseText(response)
def sizeResultPractitioners = json.data.viewModel.series[0].groups.size()
log.info("----------> " + sizeResultPractitioners)
I tried this with your JSON response payload and also with a modified response payload.
With the JSON Extractor you can set "Match No." to -1, and the number of matches will then be available in the foo_matchNr JMeter variable.
An alternative option is the JSON JMESPath Extractor, which provides a length() function, so you can get the size of the array as:
length(data.viewModel.series[0].groups)
or, if you prefer pipe expressions:
data.viewModel.series[0].groups | length(#)
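The computation the extractor performs can be checked outside JMeter; a minimal Python sketch that mirrors the JMESPath expression with plain dict indexing, using a trimmed copy of the response from the question:

```python
import json

# The response from the question, trimmed to the part the query touches.
response = '''
{
  "data": {
    "viewModel": {
      "series": [
        {"name": "H_0_G_0_R_0", "groups": [{"name": "H_0_G_0"}]}
      ]
    }
  }
}
'''

doc = json.loads(response)
# Equivalent of length(data.viewModel.series[0].groups) in JMESPath.
size = len(doc["data"]["viewModel"]["series"][0]["groups"])
```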
I have a text file which contains raw data. I want to parse the data and clean it so that it can be used further. The following is the raw data:
"{\x0A \x22identifier\x22: {\x0A \x22company_code\x22: \x22TSC\x22,\x0A \x22product_type\x22: \x22airtime-ctg\x22,\x0A \x22host_type\x22: \x22android\x22\x0A },\x0A \x22id\x22: {\x0A \x22type\x22: \x22guest\x22,\x0A \x22group\x22: \x22guest\x22,\x0A \x22uuid\x22: \x221a0d4d6e-0c00-11e7-a16f-0242ac110002\x22,\x0A \x22device_id\x22: \x22423e49efa4b8b013\x22\x0A },\x0A \x22stats\x22: [\x0A {\x0A \x22timestamp\x22: \x222017-03-22T03:21:11+0000\x22,\x0A \x22software_id\x22: \x22A-ACTG\x22,\x0A \x22action_id\x22: \x22open_app\x22,\x0A \x22values\x22: {\x0A \x22device_id\x22: \x22423e49efa4b8b013\x22,\x0A \x22language\x22: \x22en\x22\x0A }\x0A }\x0A ]\x0A}"
I want to remove all the hexadecimal escape characters. I tried parsing the data, storing it in an array, and cleaning it with re.sub(), but it returns the same data.
for line in f:
    new_data = re.sub(r'[^\x00-\x7f],\x22', r'', line)
    data.append(new_data)
\x0A is the hex escape for a newline. After s = <your json string>, print(s) gives:
>>> print(s)
{
  "identifier": {
    "company_code": "TSC",
    "product_type": "airtime-ctg",
    "host_type": "android"
  },
  "id": {
    "type": "guest",
    "group": "guest",
    "uuid": "1a0d4d6e-0c00-11e7-a16f-0242ac110002",
    "device_id": "423e49efa4b8b013"
  },
  "stats": [
    {
      "timestamp": "2017-03-22T03:21:11+0000",
      "software_id": "A-ACTG",
      "action_id": "open_app",
      "values": {
        "device_id": "423e49efa4b8b013",
        "language": "en"
      }
    }
  ]
}
You should parse this with the json module's load (from a file) or loads (from a string) function. You will get a dict containing two dicts and a list holding one dict.
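A minimal sketch, assuming the file stores the literal backslash escapes shown in the question (the raw string below is a shortened excerpt): first interpret the \xNN escapes, then strip the surrounding quotes, then hand the result to the json module.

```python
import json

# A short excerpt of the raw data, stored exactly as in the file:
# literal \xNN escape sequences wrapped in double quotes.
raw = r'"{\x0A \x22identifier\x22: {\x0A \x22company_code\x22: \x22TSC\x22\x0A }\x0A}"'

# Interpret the hex escapes (\x0A -> newline, \x22 -> "), then drop the
# outer quotes before parsing.
decoded = raw.encode("ascii").decode("unicode_escape").strip().strip('"')
data = json.loads(decoded)
```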
My problem is that I have a JSON file made up of many small JSON objects, created with Node.js.
I couldn't consume the JSON from that link, and when I tested the file in a site like JSON Formatter I got this error: Multiple JSON root elements.
When I put only one JSON object into JSON Formatter it validates, but with two objects, as in this example, it is rejected.
Here is an example of my file with two JSON objects:
{"#timestamp":"2017-06-11T00:28:24.112Z","type_instance":"interrupt","plugin":"cpu","logdate":"2017-06-11T00:28:24.112Z","host":"node-2","#version":"1","collectd_type":"percent","value":0}
{"#timestamp":"2017-06-11T00:28:24.112Z","type_instance":"softirq","plugin":"cpu","logdate":"2017-06-11T00:28:24.112Z","host":"node-2","#version":"1","collectd_type":"percent","value":0}
This is not valid JSON; a JSON document must have a single root, either an object or an array:
[
  {
    "#timestamp": "2017-06-11T00:28:24.112Z",
    "type_instance": "interrupt",
    "plugin": "cpu",
    "logdate": "2017-06-11T00:28:24.112Z",
    "host": "node-2",
    "#version": "1",
    "collectd_type": "percent",
    "value": 0
  },
  {
    "#timestamp": "2017-06-11T00:28:24.112Z",
    "type_instance": "softirq",
    "plugin": "cpu",
    "logdate": "2017-06-11T00:28:24.112Z",
    "host": "node-2",
    "#version": "1",
    "collectd_type": "percent",
    "value": 0
  }
]
If you have the file contents as a string, then split the lines and JSON.parse them one by one:
const data =`{"a": 1}
{"b": 2}`;
const lines = data.split('\n')
const objects = lines.map(JSON.parse);
console.log(objects);
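The same line-by-line approach works in Python, where this newline-delimited format is often called JSON Lines (NDJSON); a minimal sketch:

```python
import json

data = '{"a": 1}\n{"b": 2}'

# One parse per non-empty line, mirroring the JavaScript version above.
objects = [json.loads(line) for line in data.splitlines() if line.strip()]
```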
I have a huge .json file like the one below. I want to convert that JSON to a DataFrame on Spark.
{
  "movie": {
    "id": 1,
    "name": "test"
  }
}
When I execute the following code, I get a _corrupt_record error:
val df = sqlContext.read.json("example.json")
df.first()
Lately I learned that Spark only supports one-line JSON files, like:
{ "movie": { "id": 1, "name": "test test" } }
How can I convert a JSON text from multiple lines to a single line?
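One way is to parse the file and re-serialize it without indentation, so the whole document collapses onto one line. A minimal Python sketch, assuming the file holds a single JSON document:

```python
import json

multi_line = '''
{
  "movie": {
    "id": 1,
    "name": "test"
  }
}
'''

# Re-serializing without indentation puts the document on a single line.
single_line = json.dumps(json.loads(multi_line))
```

Also note that Spark 2.2 and later can read multi-line JSON directly via spark.read.option("multiLine", true).json("example.json"), which may avoid the conversion entirely.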