Output from Response Body
{"data":[{"id”:122,"name”:”Test 1“,”description”:”TEST 1 Test 2 …..}]},{"id”:123,"name”:”DYNAMO”……}]},{"id”:126,”name”:”T DYNAMO”……
*** Keywords ***
Capture The Data Ids
@{ids}=    Create List    122    123    126    167    190
${header}    Create Dictionary    Authorization...
${resp}    Get Response    httpsbin    /data
${t_ids}=    Get Json Value    ${resp.content}    /data/0/id
Problem
I have created a list of the above ids in the test case, and I need to compare the created data against the ids returned in the response body.
t_ids returns 122, and when 0 is replaced by 1, it returns 123.
Rather than capturing each id individually, is it possible to capture them in a for loop?
:FOR    ${i}    IN    ${ids}
\    ${the_id}=    Get Json Value    ${resp.content}    /data/${i}/id ?
I tried this and failed.
What is the possible solution to compare the ids from the response data against the created list?
Thank you.
It is possible to do what you want, but it is always good to know what kind of data structure your variable contains. In the example below, loading a JSON file stands in for the answer received in ${resp.content}. To my knowledge this is a string, which is also what Get File returns.
The example is split into the json file and the robot file.
so_json.json
{
"data":[
{
"id":122,
"name": "Test 1",
"description": "TEST 1 Test 2"
},
{
"id": 123,
"name": "DYNAMO"
},
{
"id": 126,
"name": "T DYNAMO"
}
]
}
so_robot.robot
*** Settings ***
Library    HttpLibrary.HTTP
Library    OperatingSystem
Library    Collections
*** Test Cases ***
TC
${json_string}    Get File    so_json.json
${json_object}    Parse Json    ${json_string}
:FOR    ${item}    IN    @{json_object['data']}
\    Log To Console    ${item['id']}
Which in turn gives the following result:
==============================================================================
Robot - Example
==============================================================================
Robot - Example.SO JSON
==============================================================================
TC 122
123
126
| PASS |
------------------------------------------------------------------------------
Robot - Example.SO JSON | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
Robot - Example | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
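To finish the comparison against the prepared list, the same logic can be sketched in plain Python, assuming ${resp.content} parses to the structure shown in so_json.json (in Robot Framework itself, the ids collected in the loop can be compared with the Collections library's Lists Should Be Equal keyword):
import json

# Stand-in for ${resp.content}; here we reuse the example file from above.
with open("so_json.json") as f:
    json_object = json.load(f)

expected_ids = [122, 123, 126]  # the list created in the test case
actual_ids = [item["id"] for item in json_object["data"]]

# Fails with a readable message if the response ids differ from the list.
assert actual_ids == expected_ids, f"{actual_ids} != {expected_ids}"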
Related
I have a request that uploads a file; if a file with the same name already exists, the service responds with a message that the file already exists. This can be considered an expected result, and despite the error I would like the test to pass as it is.
This is the code I am using:
Create Session    mysession    ${test_env}
&{headers}    Create Dictionary    Content-Type=application/json; charset=utf-8    Authorization=${token}
${json}=    Catenate    { "FileName": "File.txt", "Content": "PD94bWwg..", "UserId": "email.com" }
${value}    Set Variable    2
${value}    Convert To Integer    ${value}
${json}=    Evaluate    json.loads('''${json}''')    json
#Set To Dictionary    ${json["FileName"]}
${json}=    Evaluate    json.dumps(${json})    json
${resp}    POST    url=${test_env}/api/nt    data=${json}    headers=${headers}
${log}=    Log To Console    ${resp.status_code}    400
Log To Console    ${resp.content}
Status Should Be    expected_status=any
The test stops at the POST request and never reaches the expected_status=any check, so the test is not considered a pass.
I would appreciate any hints on how to make it pass.
The code below will verify the 400 error and allow execution to continue:
Run Keyword And Expect Error    HTTPError: 400*    POST    url=${test_env}/api/nt    data=${json}    headers=${headers}
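For reference, the same tolerance can be sketched outside Robot Framework in plain Python with the requests library; the URL and payload below are placeholders mirroring the test, not the real endpoint:
import requests

# Hypothetical endpoint and payload, mirroring the Robot test above.
url = "https://test.example.com/api/nt"
payload = {"FileName": "File.txt", "Content": "PD94bWwg..", "UserId": "email.com"}

resp = requests.post(url, json=payload)
# Treat both success and the duplicate-file 400 as acceptable outcomes.
assert resp.status_code in (200, 201, 400), f"unexpected status {resp.status_code}"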
I'm trying to open a bunch of JSON files using read_json in order to get a DataFrame like the following:
ddf.compute()
id owner pet_id
0 1 "Charlie" "pet_1"
1 2 "Charlie" "pet_2"
3 4 "Buddy" "pet_3"
but the following code raises an error:
import pandas as pd
import dask.dataframe as dd

_meta = pd.DataFrame(
    columns=["id", "owner", "pet_id"]
).astype({
    "id": int,
    "owner": "object",
    "pet_id": "object"
})
ddf = dd.read_json("mypets/*.json", meta=_meta)
ddf.compute()
*** ValueError: Metadata mismatch found in `from_delayed`.
My JSON files looks like
[
{
"id": 1,
"owner": "Charlie",
"pet_id": "pet_1"
},
{
"id": 2,
"owner": "Charlie",
"pet_id": "pet_2"
}
]
As far as I understand, the problem is that I'm passing a list of dicts, so I'm looking for the right way to specify it in the meta= argument.
P.S.:
I also tried doing it in the following way
{
"id": [1, 2],
"owner": ["Charlie", "Charlie"],
"pet_id": ["pet_1", "pet_2"]
}
But Dask is wrongly interpreting the data
ddf.compute()
id owner pet_id
0 [1, 2] ["Charlie", "Charlie"] ["pet_1", "pet_2"]
1 [4] ["Buddy"] ["pet_3"]
The invocation you want is the following:
dd.read_json("data.json", meta=meta,
blocksize=None, orient="records",
lines=False)
which can be largely gleaned from the docstring.
meta looks OK from your code
blocksize must be None, since you have a whole JSON object per file and cannot split the file
orient "records" means list of objects
lines=False means this is not a line-delimited JSON file, which is the more common case for Dask (you are not assuming that a newline character means a new record)
So why the error? Probably Dask split your file on some newline character, and so a partial record got parsed, which therefore did not match your given meta.
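A minimal end-to-end sketch, assuming pandas and dask are installed (the file names under mypets/ are invented for the demonstration):
import json
import os

import dask.dataframe as dd
import pandas as pd

# Write two small files shaped like the data in the question.
os.makedirs("mypets", exist_ok=True)
chunks = [
    [{"id": 1, "owner": "Charlie", "pet_id": "pet_1"},
     {"id": 2, "owner": "Charlie", "pet_id": "pet_2"}],
    [{"id": 4, "owner": "Buddy", "pet_id": "pet_3"}],
]
for i, chunk in enumerate(chunks):
    with open(f"mypets/{i}.json", "w") as f:
        json.dump(chunk, f)

meta = pd.DataFrame(columns=["id", "owner", "pet_id"]).astype(
    {"id": int, "owner": "object", "pet_id": "object"}
)

# blocksize=None keeps one partition per file, so no record is split;
# orient="records" with lines=False reads each file as one JSON array.
ddf = dd.read_json("mypets/*.json", meta=meta,
                   blocksize=None, orient="records", lines=False)
print(ddf.compute())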
I want to assign a hash variable from Puppet to a Hiera data structure, but I only get a string.
Here is an example to illustrate what I want. Ultimately I don't want to access a fact.
---
filesystems:
  - partitions: "%{::partitions}"
And here is my debug code:
1 $filesystemsarray = lookup('filesystems',Array,'deep',[])
2 $filesystems = $filesystemsarray.map | $fs | {
3 notice("fs: ${fs['partitions']}")
4 }
5
6 notice("sda1: ${filesystemsarray[0]['partitions']['/dev/sda1']}")
The map leads to the following output:
Notice: Scope(Class[Profile::App::Kms]): fs: {"/dev/mapper/localhost--vg-root"=>{"filesystem"=>"ext4", "mount"=>"/", "size"=>"19.02 GiB", "size_bytes"=>20422066176, "uuid"=>"02e2ba2c-2ee4-411d-ac63-fc963c8026b4"}, "/dev/mapper/localhost--vg-swap_1"=>{"filesystem"=>"swap", "size"=>"512.00 MiB", "size_bytes"=>536870912, "uuid"=>"95ba4b2a-7434-48fd-9331-66443c752a9e"}, "/dev/sda1"=>{"filesystem"=>"ext2", "mount"=>"/boot", "partuuid"=>"de90a5ed-01", "size"=>"487.00 MiB", "size_bytes"=>510656512, "uuid"=>"398f2ab6-a7e8-4983-bd81-db03984fbd0e"}, "/dev/sda2"=>{"size"=>"1.00 KiB", "size_bytes"=>1024}, "/dev/sda5"=>{"filesystem"=>"LVM2_member", "partuuid"=>"de90a5ed-05", "size"=>"19.52 GiB", "size_bytes"=>20961034240, "uuid"=>"wLKRQm-9bdn-mHA8-M8bE-NL76-Gmas-L7Gp0J"}}
It seems to be a Hash as expected, but the notice in line 6 leads to:
Error: Evaluation Error: A substring operation does not accept a String as a character index. Expected an Integer at ...
What is my mistake?
I'm trying to parse a JSON file with OpenStruct. The JSON file has an array for Skills. When I parse it, I get some extra "garbage" returned. How do I get rid of it?
json
{
"Job": "My Job 1",
"Skills": [{ "Name": "Name 1", "ClusterName": "Cluster Name 1 Skills"},{ "Name": "Name 2", "ClusterName": "Cluster Name 2 Skills"}]
}
require 'ostruct'
require 'json'
json = File.read('1.json')
job = JSON.parse(json, object_class: OpenStruct)
puts job.Skills
#<OpenStruct Name="Name 1", ClusterName="Cluster Name 1 Skills">
#<OpenStruct Name="Name 2", ClusterName="Cluster Name 2 Skills">
If by garbage you mean #<OpenStruct and ">, it is just the way Ruby represents objects when printed with puts. It is useful for development and debugging, and it makes it easier to understand the difference between a String, an Array, a Hash and an OpenStruct.
If you just want to display the name and cluster name, and nothing else:
puts job.Job
job.Skills.each do |skill|
puts skill.Name
puts skill.ClusterName
end
It outputs:
My Job 1
Name 1
Cluster Name 1 Skills
Name 2
Cluster Name 2 Skills
EDIT:
When you use job = JSON.parse(json, object_class: OpenStruct), your job variable becomes an OpenStruct Ruby object, which has been created from a json file.
It doesn't have anything to do with JSON though: it is not a JSON object anymore, so you cannot just write it back to a .json file and expect it to have the correct syntax.
OpenStruct doesn't seem to work well with to_json, so it might be better to remove object_class: OpenStruct, and just work with hashes and arrays.
This code reads 1.json, converts it to a Ruby object, adds a skill, modifies the job name, writes the object to 2.json, and reads it again as JSON to check that everything worked fine.
require 'json'
json = File.read('1.json')
job = JSON.parse(json)
job["Skills"] << {"Name" => "Name 3", "ClusterName" => "Cluster Name 3 Skills"}
job["Job"] += " (modified version)"
# job[:Fa] = 'blah'
File.open('2.json', 'w'){|out|
out.puts job.to_json
}
require 'pp'
pp JSON.parse(File.read('2.json'))
# {"Job"=>"My Job 1 (modified version)",
# "Skills"=>
# [{"Name"=>"Name 1", "ClusterName"=>"Cluster Name 1 Skills"},
# {"Name"=>"Name 2", "ClusterName"=>"Cluster Name 2 Skills"},
# {"Name"=>"Name 3", "ClusterName"=>"Cluster Name 3 Skills"}]}
I am trying to load a JSON file that contains null values, using elephant-bird's JsonLoader.
sample.json
{"created_at": "Mon Aug 22 10:48:23 +0000 2016","id": 767674772662607873,"id_str": "767674772662607873","text": "KPIT Image Result for https:\/\/t.co\/Nas2ZnF1zZ... https:\/\/t.co\/9TnelwtIvm","source": "\u003ca href=\"http:\/\/twitter.com\" rel=\"nofollow\"\u003eTwitter Web Client\u003c\/a\u003e","truncated": false,"in_reply_to_status_id": 123,"in_reply_to_status_id_str": null,"in_reply_to_user_id": null,"in_reply_to_user_id_str": null,"in_reply_to_screen_name": null,"geo": null,"coordinates": null,"place": null,"contributors": null,"is_quote_status": false,"retweet_count": 0,"favorite_count": 0,"entities": {"hashtags": [],"urls": [{"url": "https:\/\/t.co\/Nas2ZnF1zZ","expanded_url": "http:\/\/miltonious.com\/","display_url": "miltonious.com","indices": [24, 47]}],"user_mentions": [],"symbols": []},"favorited": false,"retweeted": false,"possibly_sensitive": false,"filter_level": "low","lang": "en","timestamp_ms": "1471862903167"}
script:
REGISTER piggybank.jar
REGISTER json-simple-1.1.1.jar
REGISTER elephant-bird-pig-4.3.jar
REGISTER elephant-bird-core-4.1.jar
REGISTER elephant-bird-hadoop-compat-4.3.jar
json = LOAD 'sample.json' USING JsonLoader('created_at:chararray, id:chararray, id_str:chararray, text:chararray, source:chararray, in_reply_to_status_id:chararray, in_reply_to_status_id_str:chararray, in_reply_to_user_id:chararray, in_reply_to_user_id_str:chararray, in_reply_to_screen_name:chararray, geo:chararray, coordinates:chararray, place:chararray, contributors:chararray, is_quote_status:bytearray, retweet_count:long, favorite_count:chararray, entities:map[], favorited:bytearray, retweeted:bytearray, possibly_sensitive:bytearray, lang:chararray');
describe json;
dump json;
When I dump json, I get the following output and warning:
(Mon Aug 22 10:48:23 +0000 2016,767674772662607873,767674772662607873,google Image Result for Twitter Web Client,false,1234,12345,3214,43215,,,,,,,,,,,,,,)
WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger - org.apache.pig.builtin.JsonLoader(UDF_WARNING_1): Bad record, returning null for {complete json}
From the warning, I guess it is failing on the null values.
So how can we load JSON that has null values in it?
I have also tried another way, i.e.:
json = LOAD 'sample.json' USING com.twitter.elephantbird.pig.load.JsonLoader('created_at:chararray, id:chararray, id_str:chararray, text:chararray, source:chararray, in_reply_to_status_id:chararray, in_reply_to_status_id_str:chararray, in_reply_to_user_id:chararray, in_reply_to_user_id_str:chararray, in_reply_to_screen_name:chararray, geo:chararray, coordinates:chararray, place:chararray, contributors:chararray, is_quote_status:bytearray, retweet_count:long, favorite_count:chararray, entities:map[], favorited:bytearray, retweeted:bytearray, possibly_sensitive:bytearray, lang:chararray');
describe json;
Output
Schema for json unknown.
Please suggest a solution.
Thank you.
You can try something like this; the '-nestedLoad' option tells elephant-bird to load each record as a nested map, which tolerates missing and null fields, so the full schema does not have to be declared up front:
MY_JSON = LOAD 'sample.json' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad');
dump MY_JSON;