JMeter - JSON Extractor - large numbers saved in scientific notation - json

Hope someone can help me :)
I have the JSON response below:
"endValue":{"amount":12515920.97,"currencyCode":"EUR"}
I'm using the JSON Extractor to retrieve the "amount" number. It works fine for numbers with up to six digits before the decimal point, but for large numbers like this one it actually saves "1.251592097E7" into my variable. Is this a limitation, or is there another way to extract the full number?
Thanks in advance!

If you want to store the value "as is", the easiest option is to use a JSR223 Post-Processor and fetch the value with Groovy.
Example code:
// Parse the raw response bytes and store the amount exactly as parsed
vars.put('your_variable_name_here', new groovy.json.JsonSlurper().parse(prev.getResponseData()).endValue.amount as String)
More information:
JsonSlurper
Apache Groovy - Parsing and producing JSON
Apache Groovy - Why and How You Should Use It

All the digits of the number are there; it is just being displayed in scientific notation.
You can format the number when the program needs to display it, for example using DecimalFormat:
import java.text.DecimalFormat;

public class Example {
    public static void main(String[] args) {
        double x = 12515920.97;
        // Grouping separators plus exactly two decimal places
        DecimalFormat df = new DecimalFormat("#,###,###,##0.00");
        String result = df.format(x);
        System.out.println(result);
    }
}
Outputs:
12,515,920.97
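For reference, a small sketch (class and variable names are just illustrative) showing where the E-notation comes from, plus a BigDecimal alternative that keeps the plain form:
import java.math.BigDecimal;

public class Notation {
    public static void main(String[] args) {
        // Double.toString uses scientific notation for magnitudes of 10^7 and above
        System.out.println(Double.toString(12515920.97));                  // 1.251592097E7
        // BigDecimal keeps the plain decimal representation
        System.out.println(new BigDecimal("12515920.97").toPlainString()); // 12515920.97
    }
}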

Related

GSON | Extract JSON's Root Name | JsonPath Or JsonPointer

I am looking at extracting the root element name of a JSON document. It looks like this is possible with neither JsonPointer nor JsonPath, as my attempts to find such an expression have been unsuccessful. Any tips would be appreciated. TIA.
Sample document:
{
  "MESSAGE1_ROOT_INPUT": {
    "CTRL_SEG": "test"
  }
}
The expression below, using Gson 2.9.0:
$.*~
produces:
{"CTRL_SEG": "test"}
while JSONPath Online produces this:
[
"MESSAGE1_ROOT_INPUT"
]
The aim is to get the text "MESSAGE1_ROOT_INPUT" using a JsonPath/JsonPointer expression. Note that extracting it the traditional way (substring or regex on the stringified JSON text) would be my last resort.
Background: We are building an API service that accepts JSON documents with different roots, such as MESSAGE2_ROOT_INPUT, MESSAGE3_ROOT_INPUT, etc. The further routing of a message is based on this root name.
Supported/Employed Languages: Java/GSON Library/RegEx
Gson does not natively support JSONPath or JSON Pointer. However, you can quite efficiently obtain the name of the first property using JsonReader:
import java.io.IOException;
import java.io.Reader;
import com.google.gson.stream.JsonReader;

public static String getFirstPropertyName(Reader reader) throws IOException {
    // Don't have to call JsonReader.close(); that would just close the provided Reader
    JsonReader jsonReader = new JsonReader(reader);
    jsonReader.beginObject();
    return jsonReader.nextName();
}
There are however two things to keep in mind:
This only reads the beginning of the JSON document; it neither verifies that the complete document has valid syntax, nor checks whether there are more top-level properties
This consumes some data from the Reader; to process the data further you have to buffer it so it can be re-read (you can also first store the JSON in a String and pass a StringReader to JsonReader), as in the sketch below
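For illustration, a minimal usage sketch following the second caveat (the sample document and variable names are just placeholders):
import java.io.StringReader;

// Buffer the JSON in a String first so it can be re-read afterwards
String json = "{\"MESSAGE1_ROOT_INPUT\": {\"CTRL_SEG\": \"test\"}}";
String root = getFirstPropertyName(new StringReader(json));
System.out.println(root); // prints MESSAGE1_ROOT_INPUT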

Ballerina json datetime value

I have to index documents into Elasticsearch, into an index that has a date field mapping, and I'm trying to build a JSON value containing this date, but Ballerina says this seems not to be possible.
I thought about storing this date value in an XML value and then converting that to JSON, but XML has the same problem (I thought this might be a trick...).
I tried to store it in a string and then extract the JSON payload from that string, but it gives me this error:
error: {ballerina/io}GenericError message=unrecognized token 'date=time=1591128342000'
I thought about handling this string-to-date conversion inside Elasticsearch, but I would like to keep that scenario as the last resort. I don't like it, because I have to run queries based on the timestamp afterwards, and storing the date as a string would give me additional problems.
So is there any way to trick Ballerina into producing a JSON value containing a date?
Here is the error my code gives (the original post shows the code in a screenshot). It says:
incompatible types: expected 'json', found 'ballerina/time:Time'
JSON is a text format that is completely language-independent (see e.g. json.org).
time:Time is a Ballerina-specific type that JSON knows nothing about. Because there is no implicit conversion (for a good reason), one has to provide the conversion explicitly.
In this case you most likely want to convert time:Time to an ISO 8601 string representation with time:toString.
The following code (Ballerina 1.2):
import ballerina/io;
import ballerina/time;

public function main() {
    var btime = time:currentTime();
    var j = <json> {
        time: time:toString(btime)
    };
    io:println(j.toJsonString());
}
Correctly prints:
{"time":"2020-06-03T08:39:07.897+03:00"}
Maryam Ziyad has written a good introduction to Ballerina's JSON support.
The following code is updated for Ballerina Swan Lake Update 1 (2201.1.0) to show how to convert a Ballerina UTC time (time:Utc) to a JSON representation. Note that it's also possible to use localized time (time:Civil), but from the time-to-JSON conversion point of view that is no different.
One can read more about Ballerina time handling in the documentation of the time module.
import ballerina/io;
import ballerina/time;

public function main() {
    time:Utc now = time:utcNow(3);
    json j = {
        time: time:utcToString(now)
    };
    io:println(j.toJsonString());
}
That correctly prints:
{"time":"2022-07-20T06:03:46.078Z"}

How to split the data of an ObjectNode in Apache Flink

I'm using Flink to process data coming from a data source (such as Kafka, Pravega, etc.).
In my case, the data source is Pravega, which provides a Flink connector.
My data source is sending me some JSON data as below:
{"key": "value"}
{"key": "value2"}
{"key": "value3"}
...
...
Here is my piece of code:
PravegaDeserializationSchema<ObjectNode> adapter =
        new PravegaDeserializationSchema<>(ObjectNode.class, new JavaSerializer<>());

FlinkPravegaReader<ObjectNode> source = FlinkPravegaReader.<ObjectNode>builder()
        .withPravegaConfig(pravegaConfig)
        .forStream(stream)
        .withDeserializationSchema(adapter)
        .build();

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<ObjectNode> dataStream = env.addSource(source).name("Pravega Stream");

dataStream.map(new MapFunction<ObjectNode, String>() {
        @Override
        public String map(ObjectNode node) throws Exception {
            return node.toString();
        }
    })
    .keyBy("word") // ERROR
    .timeWindow(Time.seconds(10))
    .sum("count");
As you see, I used the FlinkPravegaReader and a proper deserializer to get the JSON stream coming from Pravega.
Then I try to transform the JSON data into Strings, key them with keyBy, and count them.
However, I get an error:
The program finished with the following exception:
Field expression must be equal to '*' or '_' for non-composite types.
org.apache.flink.api.common.operators.Keys$ExpressionKeys.<init>(Keys.java:342)
org.apache.flink.streaming.api.datastream.DataStream.keyBy(DataStream.java:340)
myflink.StreamingJob.main(StreamingJob.java:114)
It seems that keyBy threw this exception.
Well, I'm not a Flink expert, so I don't know why. I've read the source code of the official WordCount example. In that example, there is a custom splitter that splits the String data into words.
So I'm wondering whether I need some kind of splitter in this case too? If so, what kind of splitter should I use? Can you show me an example? If not, why did I get such an error, and how can I solve it?
I guess you have read the documentation about how to specify keys:
Specify keys
The example code uses keyBy("word") because word is a field of the POJO type WC.
// some ordinary POJO (Plain Old Java Object)
public class WC {
    public String word;
    public int count;
}

DataStream<WC> words = // [...]
DataStream<WC> wordCounts = words.keyBy("word").window(/* window specification */);
In your case, you put a map operator before keyBy, and the output of that map operator is a plain string, so there is obviously no word field. If you actually want to group this string stream, you need to write it as .keyBy(String::toString).
Or you can even implement a custom KeySelector to generate your own key, as sketched below.
Customized Key Selector
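For illustration, a minimal sketch of such a KeySelector (the class name is just a placeholder; the "key" field name is taken from the sample data), keying each JSON string by its top-level "key" field:
import org.apache.flink.api.java.functions.KeySelector;
import com.fasterxml.jackson.databind.ObjectMapper;

// Extracts the value of the top-level "key" field from each JSON string
public class JsonFieldKeySelector implements KeySelector<String, String> {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public String getKey(String json) throws Exception {
        return MAPPER.readTree(json).get("key").asText();
    }
}

// Usage: dataStream.map(...).keyBy(new JsonFieldKeySelector())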

JMeter, Groovy JSON slurper for keys that are variables

I recently read that using vars.get("variable") is much more efficient than using ${variable}. I had experience with the latter, and it resulted in an error, but I already worked around it so the error no longer occurs (I won't discuss it here since it is not my issue). Here is the relevant part of the code:
import groovy.json.JsonSlurper;

String response = prev.getResponseDataAsString();
def jsonSlurper = new JsonSlurper();
def json = jsonSlurper.parseText(response);

if (json.data.target_list) {
    Random random = new Random();
    String[] strLeadDBIDList = json.data.target_list.keySet();
    int idxLeadDBID = random.nextInt(strLeadDBIDList.length);
    String strLeadDBID = strLeadDBIDList[idxLeadDBID];
    log.info("Leads Dashboard - Customer ID: " + strLeadDBID);
    vars.put("strLeadDBID", strLeadDBID);
    String strLeadDBModule = json.data.target_list."${strLeadDBID}".parent_type;
    log.info("Leads Dashboard - Customer Type: " + strLeadDBModule);
    vars.put("strLeadDBModule", strLeadDBModule);
...
So my question: is there a way to use vars.get("strLeadDBID") instead of "${strLeadDBID}" in the String strLeadDBModule = json.data.target_list."${strLeadDBID}".parent_type; line? Or can I use the variable strLeadDBID directly, and how? Thanks!!!
My expectation is that it would be something like:
String strLeadDBModule = json.data.target_list.get(vars.get("strLeadDBID")).parent_type;
If that does not work, do the following:
log.info('Target list class name: ' + json.data.target_list.getClass().getName())
and look for the relevant line in jmeter.log file. Then check JavaDoc for the given class in Groovy GDK API documentation and look for a suitable function.
Also, as per the JSR223 Sampler documentation:
When using this feature, ensure your script code does not use JMeter variables directly in script code as caching would only cache first replacement. Instead use script parameters.
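For example, a sketch of the script-parameters approach (assuming the element's Parameters field is set to ${strLeadDBID}, and json is the parsed response from the script above):
// Parameters field of the JSR223 element: ${strLeadDBID}
// JMeter exposes the parameters to the script as the args string array
String strLeadDBID = args[0]
String strLeadDBModule = json.data.target_list.get(strLeadDBID).parent_type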
Check out Apache Groovy - Why and How You Should Use It article for more hints on using Groovy scripting in JMeter tests

What is the best way to check all JSON properties values type?

I'm trying to check in my integration test whether all values of a specific property have the same type. I was trying to do it with jsonPath and JsonPathResultMatchers, but without success. Finally, I did something like this:
MvcResult result = mockMvc.perform(get("/weather/" + existingCity))
        .andExpect(MockMvcResultMatchers.status().isOk())
        .andReturn();
String responseContent = result.getResponse().getContentAsString();

TypeRef<List<Object>> typeRef = new TypeRef<List<Object>>() {};
List<Object> humidities = JsonPath.using(configuration)
        .parse(responseContent)
        .read("$.*.humidity", typeRef);

Assertions.assertThat(humidities.stream().allMatch(humidity -> humidity instanceof Integer)).isTrue();
But I wonder if there is a clearer way to do this. Can the same result be achieved with JsonPath alone? Or does AssertJ have some method to do it without resorting to stream code?
Just answering the AssertJ part: Stream assertions are provided, with the caveat that the Stream under test is converted to a List in order to be able to perform multiple assertions (otherwise you couldn't, as a Stream can only be consumed once).
Javadoc: assertThat(BaseStream)
Example:
assertThat(DoubleStream.of(1, 2, 3)).isNotNull()
        .contains(1.0, 2.0, 3.0)
        .allMatch(Double::isFinite);
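Applied to the humidities list from the question, the manual stream can be dropped entirely, since AssertJ's iterable assertions also offer allMatch (a sketch, reusing the humidities variable from above):
// Iterable assertion; no explicit Stream needed
Assertions.assertThat(humidities)
        .allMatch(humidity -> humidity instanceof Integer);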
I have happily used https://github.com/lukas-krecan/JsonUnit to check JSON, you can give it a try and see if you like it.
I personally would rather validate it against a JSON Schema. There are Java validator implementations that could help you.
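For illustration, a minimal schema sketch (assuming the response is a JSON array of objects, each carrying a humidity property; the field name is taken from the question) that such a validator could enforce:
{
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "humidity": { "type": "integer" }
    },
    "required": ["humidity"]
  }
}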