import groovy.json.JsonSlurper

def slurperresponse = new JsonSlurper().parseText(responseContent)
log.info(slurperresponse.WorkItems[0].WorkItemExternalId)
The above code helps me get the node value "WorkItems[0].WorkItemExternalId" using Groovy. Below is the response.
{
"TotalRecordCount": 1,
"TotalPageCount": 1,
"CurrentPage": 1,
"BatchSize": 10,
"WorkItems": [ {
"WorkItemUId": "4336c111-7cd6-4938-835c-3ddc89961232",
"WorkItemId": "20740900",
"StackRank": "0",
"WorkItemTypeUId": "00020040-0200-0010-0040-000000000000",
"WorkItemExternalId": "79853"
}
]
}
I need to append the string "WorkItems[0].WorkItemExternalId" (read from an Excel file), and multiple other such node paths, dynamically to "slurperresponse" to get the node values, rather than hard coding them as slurperresponse.WorkItems[0].WorkItemExternalId.
I tried append and the "+" operator, but I get a compilation error. What other way can I do this?
slurperresponse is an object, not a string; that's why the concatenation does not work.
JsonSlurper creates an object out of the input string. This object is dynamic by nature: you can access it, add fields to it, or alter the existing fields. Concatenation won't work here.
Here is an example:
import groovy.json.*
def text = '{"total" : 2, "students" : [{"name": "John", "age" : 20}, {"name": "Alice", "age" : 21}] }'
def json = new JsonSlurper().parseText(text)
json.total = 3 // alter the value of the existing field
json.city = 'LA' // add a totally new field
json.students[0].age++ // change the field in a list
println json
This yields the output:
[total:3, students:[[name:John, age:21], [name:Alice, age:21]], city:LA]
Now, if I've understood you correctly, you want to add a new student dynamically, where the input is text you've read from Excel. Here is the example:
json.students << new JsonSlurper().parseText('{"name" : "Tom", "age" : 25}')
// now there are 3 students in the list
Update
It's also possible to get the values without 'hardcoding' the property name:
// option 1
println json.city // prints 'LA'
// option 2
println json.get('city') // prints 'LA' but here 'city' can be a variable
// option 3
println json['city'] // the same as option 2
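If the whole path (like "WorkItems[0].WorkItemExternalId") arrives as a single string from the Excel file, one possible approach, a sketch rather than the only way, is Groovy's built-in Eval helper, which evaluates an expression against a bound object:

import groovy.json.JsonSlurper

def text = '{"WorkItems": [{"WorkItemExternalId": "79853"}]}'
def slurperresponse = new JsonSlurper().parseText(text)

// The path string can come from the Excel file at runtime.
def path = 'WorkItems[0].WorkItemExternalId'

// Eval.x evaluates the expression with 'x' bound to the given object.
println Eval.x(slurperresponse, 'x.' + path) // prints 79853

Note that Eval executes arbitrary Groovy code, so only feed it trusted path strings.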
I have a list of values that I can use for the title field in my json request. I would like to store a function in the common.feature file which randomizes the title value when a scenario is executed.
I have attempted using the random number function provided on the commonly needed utilities tab of the readme. I have generated a random number successfully; the next step is to use that randomly generated number within the JsonPath line in order to retrieve a value from my data list, which is in JSON.
* def myJson =
"""
{
"title" : {
"type" : "string",
"enum" : [
"MR",
"MRS",
"MS",
"MISS"
[...]
]
}
}
"""
* def randomNumber = random(3)
* def title = get[0] myJson.title.enum
* print title
The code above works but I would like to randomize the number within the get[0]. How is this possible in Karate?
I'm not sure what you want, but can't you just replace 0 with randomNumber, i.e. get[randomNumber] myJson.title.enum?
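A minimal sketch of that suggestion (assuming random(3) is the readme utility, returning 0, 1 or 2):

* def randomNumber = random(3)
* def title = get[randomNumber] myJson.title.enum
# if your Karate version rejects a variable index in get[...], plain
# JavaScript-style indexing works too:
* def title = myJson.title.enum[randomNumber]
* print title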
I am trying to make a localized version of this app: SMS Broadcast Ruby App
I have been able to read the JSON data from a local file, open it, and sanitize the numbers. However, I have been unable to extract the values and pair them up as a scrubbed hash. Here's what I have so far.
def data_from_spreadsheet
  file = open(spreadsheet_url).read
  JSON.parse(file)
end

def contacts_from_spreadsheet
  contacts = {}
  data_from_spreadsheet.each do |entry|
    puts entry['name']['number']
    contacts[sanitize(number)] = name
  end
  contacts
end
Here's the JSON data sample I'm working with.
[
{
"name": "Michael",
"number": 9045555555
},
{
"name": "Natalie",
"number": 7865555555
}
]
Here's how I would like the JSON to be expressed after the contacts_from_spreadsheet method.
{
'19045555555' => 'Michael',
'17865555555' => 'Natalie'
}
Any help would be much appreciated.
You could create an array of pairs (hashes) using map and then call reduce to get a single hash.
data = [
  { "name": "Michael", "number": 9045555555 },
  { "name": "Natalie", "number": 7865555555 }
]

data.map { |e| { e[:number] => e[:name] } }.reduce Hash.new, :merge
Result: {9045555555=>"Michael", 7865555555=>"Natalie"}
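A note on the design choice: reduce Hash.new, :merge builds a fresh hash at every step. Two equivalent single-pass sketches (to_h needs Ruby 2.1+):

# single pass, no intermediate hashes
data.each_with_object({}) { |e, acc| acc[e[:number]] = e[:name] }
# or build pairs and convert
data.map { |e| [e[:number], e[:name]] }.to_h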
You don't seem to have number or name extracted in any way. I think first you'll need to update your code to get those details.
i.e. If entry is a JSON object (or rather was before parsing), you can do the following:
def contacts_from_spreadsheet
  contacts = {}
  data_from_spreadsheet.each do |entry|
    contacts[sanitize(entry['number'])] = entry['name']
  end
  contacts
end
This doesn't strictly stay within JSON, but I have solved the problem. Here's what I used.
def data_from_spreadsheet
  file = open(spreadsheet_url).read
  YAML.load(file) # JSON is valid YAML, so this parses the same file
end

def contacts_from_spreadsheet
  contacts = {}
  data_from_spreadsheet.each do |entry|
    name = entry['name']
    number = entry['phone_number'].to_s
    contacts[sanitize(number)] = name
  end
  contacts
end
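The sanitize helper itself isn't shown in the thread; a hypothetical version consistent with the output below (US numbers normalized to E.164) might look like:

# Hypothetical helper, not from the original app: strip non-digits
# and prefix the US country code.
def sanitize(number)
  "+1#{number.to_s.gsub(/\D/, '')}"
end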
This returned a clean hash:
{"+19045555555"=>"Michael", "+17865555555"=>"Natalie"}
Thanks everyone who added input!
I'm using the latest Cassandra version, and trying to save JSON like below was successful:
INSERT INTO mytable JSON '{"username": "myname", "country": "mycountry", "userid": "1"}'
The above query saves the record like this:
"rows": [
{
"[json]": "{\"userid\": \"1\", \"country\": \"india\", \"username\": \"sai\"}"
}
],
"rowLength": 1,
"columns": [
{
"name": "[json]",
"type": {
"code": 13,
"type": null
}
}
]
Now I would like to retrieve the record based on userid:
SELECT JSON * FROM mytable WHERE userid = fromJson("1") // but this query throws error
All this occurs in a node/express app and I'm using dse-driver as the client driver.
The CQL command worked as below (note that CQL string literals take single quotes):
SELECT JSON * FROM mytable WHERE userid = '1';
However, if it has to be executed via the dse-driver, then the below snippet worked:
let query = 'SELECT JSON * FROM mytable WHERE userid = ?';
client.execute(query, ["1"], { prepare: true });
where client is,
const dse = require('dse-driver');
const client = new dse.Client({
contactPoints: ['h1', 'h2'],
authProvider: new dse.auth.DsePlainTextAuthProvider('username', 'pass')
});
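The result rows then carry a single [json] column holding the row as a JSON string; a sketch of reading it back (assuming the driver's promise-based API):

client.execute(query, ['1'], { prepare: true })
  .then(result => {
    const row = result.first();
    // SELECT JSON returns one '[json]' column per row
    const user = JSON.parse(row['[json]']);
    console.log(user.username); // 'myname'
  });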
If your Cassandra version is 2.1.x or below, you can use a Python-based approach: write a Python script using the Cassandra Python driver.
Here you have to fetch your row first and then use the loads method of Python's json module, which converts your JSON text column value into a JSON object (a dict in Python). Then you can work with Python dictionaries and extract your required nested keys. See the code snippet below.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
import json

if __name__ == '__main__':
    auth_provider = PlainTextAuthProvider(username='xxxx', password='xxxx')
    cluster = Cluster(['0.0.0.0'], port=9042, auth_provider=auth_provider)
    session = cluster.connect("keyspace_name")
    print("session created successfully")

    rows = session.execute('select * from user limit 10')
    for user_row in rows:
        # fetching your json column
        column_dict = json.loads(user_row.json_col)
        print(column_dict.keys())
Assuming userid is the partition key, and assuming you want to retrieve the JSON object corresponding to the user with id 1, you should try:
SELECT JSON * FROM mytable WHERE userid=1;
If userid is of type text, you will need to add some quotes.
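For example, if userid is of type text:

SELECT JSON * FROM mytable WHERE userid = '1';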
I see a lot of references to "compressed JSON" when it comes to different serialization formats. What exactly is it? Is it just gzipped JSON or something else?
Compressed JSON removes the repeated key:value structure of JSON encoding and stores keys and values in separate parallel arrays:
// uncompressed
JSON = {
data : [
{ field1 : 'data1', field2 : 'data2', field3 : 'data3' },
{ field1 : 'data4', field2 : 'data5', field3 : 'data6' },
.....
]
};
//compressed
JSON = {
data : [ 'data1','data2','data3','data4','data5','data6' ],
keys : [ 'field1', 'field2', 'field3' ]
};
I found this method of usage here. Content from the link (http://www.nwhite.net/?p=242):
I rarely find myself in a place where I am writing JavaScript applications that use AJAX in its pure form. I have long abandoned the 'X' and replaced it with 'J' (JSON). When working with JavaScript, it just makes sense to return JSON: a smaller footprint, easier parsing and an easier structure are all advantages I have gained since using JSON.
In a recent project I found myself unhappy with the large size of my result sets. The data I was returning was tabular data, in the form of objects for each row. I was returning a result set of 50 rows, with 19 fields each. What I realized is that if I augmented my result set I could get a form of compression.
I merged all my values into a single array and stored all my fields in a separate array. Returning a key/value pair for each result cost me 8,800 bytes (8.6 KB). Ripping the fields out and putting them in a separate array cost me 186 bytes. Total savings: 8.4 KB.
Now I have a much more compressed JSON payload, but the structure is different and harder to work with. So I implemented a solution in MooTools to make the decompression transparent.
Request.JSON.extend({
    options : {
        inflate : []
    }
});

Request.JSON.implement({
    success : function(text){
        this.response.json = JSON.decode(text, this.options.secure);
        if(this.options.inflate.length){
            this.options.inflate.each(function(rule){
                // write the expanded rows to rule.store if given, else back onto rule.data
                var target = $defined(rule.store) ? rule.store : rule.data;
                this.response.json[target] = this.expandData(this.response.json[rule.data], this.response.json[rule.keys]);
            }, this);
        }
        this.onSuccess(this.response.json, text);
    },
    expandData : function(data, keys){
        var arr = [];
        var len = data.length, klen = keys.length;
        var start = 0, stop = klen;
        while(stop <= len){ // '<=' so the final row is not dropped
            arr.push(data.slice(start, stop).associate(keys));
            start = stop; stop += klen;
        }
        return arr;
    }
});
Request.JSON now has an inflate option. You can inflate multiple segments of your JSON object if you so desire.
Usage:
new Request.JSON({
    url : 'url',
    inflate : [{ 'keys' : 'fields', 'data' : 'data' }],
    onComplete : function(json){}
});
Pass as many inflate objects as you like in the inflate option array. Each has an optional property called 'store'; if set, the inflated data set will be stored under that key instead.
The 'keys' and 'data' values expect strings that match a location in the root of your JSON object.
Based on Paniyar's answer, we can convert a list of objects into the "compressed" JSON format using C# like this:
var JsonString = serializer.Serialize(
    new
    {
        cols = new[] { "field1", "field2", "field3" },
        items = data.Select(x => new object[] { x.field1, x.field2, x.field3 })
    });
I used an array of objects because the fields can be int, bool, string, and so on.
More reduction:
If a string field repeats very often, you can compress a little further by adding a distinct list of that field's values; for instance, fields like job position or city are excellent candidates. Add a distinct list of those values and, in each item, replace the value with a reference number into that list. That will make your JSON lighter.
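For instance, a hypothetical sketch of that idea (the cities list and indexes are made up for illustration):

// before
items : [ ['Ana', 'London'], ['Bob', 'London'], ['Cy', 'Paris'] ]

// after: city strings replaced by indexes into a distinct list
cities : [ 'London', 'Paris' ],
items  : [ ['Ana', 0], ['Bob', 0], ['Cy', 1] ]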
Compressed:
[["KeyA", "KeyB", "KeyC", "KeyD", "KeyE", "KeyF"],
["ValA1", "ValB1", "ValC1", "ValD1", "ValE1", "ValF1"],
["ValA2", "ValB2", "ValC2", "ValD2", "ValE2", "ValF2"],
["ValA3", "ValB3", "ValC3", "ValD3", "ValE3", "ValF3"],
["ValA4", "ValB4", "ValC4", "ValD4", "ValE4", "ValF4"]]
Uncompressed:
[{KeyA: "ValA1", KeyB: "ValB1", KeyC: "ValC1", KeyD: "ValD1", KeyE: "ValE1", KeyF: "ValF1"},
{KeyA: "ValA2", KeyB: "ValB2", KeyC: "ValC2", KeyD: "ValD2", KeyE: "ValE2", KeyF: "ValF2"},
{KeyA: "ValA3", KeyB: "ValB3", KeyC: "ValC3", KeyD: "ValD3", KeyE: "ValE3", KeyF: "ValF3"},
{KeyA: "ValA4", KeyB: "ValB4", KeyC: "ValC4", KeyD: "ValD4", KeyE: "ValE4", KeyF: "ValF4"}]
The most likely answer is that it really is just gzipped JSON. There is no other standard meaning to this phrase.
Re-organizing a homogeneous array of JSON objects into a pair of arrays is a very useful technique for making the payload smaller and speeding up encoding and decoding, but it is not commonly called "compressed JSON". I haven't run across it in open source or in any open API, but we use this technique internally and call it "jsontable".