JMeter - trying to test money values using Beanshell - CSV

I am trying to execute tests using a Beanshell assertion. I have a CSV file with expected money amounts, all to 2 decimal places, e.g. 145.16, 1945.21, etc., and I wish to compare them to the actual values that come back from my sampler's HTTP response in the same format. I want the test case to pass if the difference between the two is < 0.1, i.e. 10 cents/pence.
I started by parsing the initial string values to doubles or floats or shorts and using Math.abs to compare, but of course the accuracy was not there: e.g. if the difference was actually 10 cents (FAIL), the calculation would come out as, say, 0.0999999765 or similar, and so the test case would incorrectly PASS.
I have now moved on to BigDecimal with little success. I have tried to use setScale, which has made the comparison a bit more accurate.
So my question is: is BigDecimal the way to go? What do I do with the BigDecimal after I have created it - if I convert it to a short or float etc. I get the same problem again. Would DecimalFormat help? I need the values to have two decimal places at the point where I use Math.abs - is there an alternative to Math.abs?
Hope that makes sense and thanks in advance.

BigDecimal is a good candidate to go with; at least, all the Java solutions designed to work with money objects use BigDecimal under the hood. You may find the ROUND_HALF_EVEN rounding mode useful.
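For instance, here is a minimal sketch (plain Java, with an invented amount) of rounding a value to two decimal places with HALF_EVEN, the RoundingMode equivalent of the ROUND_HALF_EVEN constant:
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingDemo {
    public static void main(String[] args) {
        // Round an amount to 2 decimal places using banker's rounding (HALF_EVEN)
        BigDecimal amount = new BigDecimal("145.155");
        BigDecimal rounded = amount.setScale(2, RoundingMode.HALF_EVEN);
        System.out.println(rounded); // prints 145.16 (ties round to the even neighbour)
    }
}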
Be aware that there is the Joda-Money library, which provides a Money.isGreaterThan() method that can be used instead of custom logic.
Starting from JMeter 3.1 it is recommended to use JSR223 Test Elements and the Groovy language for any form of scripting, so consider switching to Groovy as soon as possible.

FYI this worked for me in the end -
import java.math.BigDecimal;

BigDecimal Actual_PAF = new BigDecimal("${Actual_PAF}");
BigDecimal Expected_PAF = new BigDecimal("${Expected_PAF}");

// subtract() already returns a BigDecimal, no need to round-trip through a String
BigDecimal ActualPAFDifference = Actual_PAF.subtract(Expected_PAF);

// Compare BigDecimals with compareTo() rather than <
if (ActualPAFDifference.abs().compareTo(new BigDecimal("0.001")) < 0) {
    Failure = false;
    vars.put("PAFPassOrFail", "PASS");
} else {
    Failure = true;
    vars.put("PAFPassOrFail", "FAIL");
}

Related

Actual value in JUnit in epsilon form

In a JUnit test I have, I get the following reason for failure:
java.lang.AssertionError: expected:<1200000> but was:<1.2E+6>
which is essentially the same value. The actual JSON response is 1200000 when I hit the endpoint from Postman, and the method that I am using to get the 1200000 field has a return type of BigDecimal.
I am not sure how to fix this 1.2E+6 as the actual value.
Quite likely you are comparing a BigDecimal to another type and the comparison fails. Please check the Javadoc: https://docs.oracle.com/javase/7/docs/api/java/math/BigDecimal.html#equals(java.lang.Object)
"true if and only if the specified Object is a BigDecimal whose value and scale are equal to this BigDecimal's."
If the two numbers are not both BigDecimal, you'll get false. If both are BigDecimal but don't have the same scale, you should use compareTo.
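A minimal illustration of that scale behaviour, using the same two values as in the question:
import java.math.BigDecimal;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.2E+6");   // unscaled value 12, scale -5
        BigDecimal b = new BigDecimal("1200000");  // unscaled value 1200000, scale 0

        System.out.println(a.equals(b));           // false: same value, different scale
        System.out.println(a.compareTo(b) == 0);   // true: numerically equal
    }
}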
The simplest thing that you can do is to compare it using long value:
assertEquals(new BigDecimal("1.2E+6").longValue(), 1200000);
Or you can use AssertJ to have a nice and tidy BigDecimal assertion:
assertThat(new BigDecimal("1.2E+6")).isEqualByComparingTo(new BigDecimal("1200000"));
//or
assertThat(new BigDecimal("1.2E+6")).isEqualByComparingTo("1200000");

How to best validate JSON on the server-side

When handling POST, PUT, and PATCH requests on the server-side, we often need to process some JSON to perform the requests.
It is obvious that we need to validate these JSONs (e.g. structure, permitted/expected keys, and value types) in some way, and I can see at least two ways:
Upon receiving the JSON, validate the JSON upfront as it is, before doing anything with it to complete the request.
Take the JSON as it is, start processing it (e.g. access its various key-values) and try to validate it on the go while performing the business logic, possibly using some exception handling to deal with rogue data.
The 1st approach seems more robust than the 2nd, but probably more expensive (in time), because every request will be validated (and hopefully most of them are valid, so the validation is somewhat redundant).
The 2nd approach may save the compulsory validation on valid requests, but mixing the checks into the business logic might be buggy or even risky.
Which of the two above is better? Or, is there yet a better way?
What you are describing with POST, PUT, and PATCH sounds like you are implementing a REST API. Depending on your back-end platform, you can use libraries that map JSON to objects, which is very powerful and performs that validation for you. In Java, you can use Jersey, Spring, or Jackson. If you are using .NET, you can use Json.NET.
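As a rough sketch of that mapping idea with Jackson (the Order class and its fields are invented for the example; any JSON that is malformed or does not match the declared types is rejected up front):
import com.fasterxml.jackson.databind.ObjectMapper;

public class OrderParser {
    // Hypothetical domain object the incoming JSON is mapped onto
    public static class Order {
        public String id;
        public int quantity;
    }

    public static Order parse(String json) throws Exception {
        // readValue fails fast with an exception if the JSON is malformed
        // or a field has the wrong type - that is the upfront validation step
        return new ObjectMapper().readValue(json, Order.class);
    }
}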
If efficiency is your goal and you want to validate every single request, it would be ideal if you could also validate on the front end; if you are using JavaScript, you can use json2.js.
Regarding a comparison of your two methods, here is a pros and cons list.
Method #1: Upon Request
Pros
The integrity of the business logic is maintained. As you mentioned, trying to validate while processing the business logic could flag valid data as invalid (and vice versa), and the validation could inadvertently affect the business logic negatively.
As Norbert mentioned, catching the errors beforehand will improve efficiency. The logical question this poses is: why spend the time processing if there are errors in the first place?
The code will be cleaner and easier to read. Keeping validation and business logic separated results in cleaner code that is easier to read and maintain.
Cons
It could result in redundant processing meaning longer computing time.
Method #2: Validation on the Go
Pros
It's theoretically efficient, saving processing and compute time by doing both at once.
Cons
In reality, the processing time saved is likely negligible (as mentioned by Norbert). You are still doing the validation check either way. In addition, processing time is wasted if an error is found.
The data integrity can be compromised. It is possible for the JSON to become corrupt when processing it this way.
The code is not as clear. When reading the business logic, it may not be apparent what is happening, because validation logic is mixed in.
What it really boils down to is accuracy vs. speed, and they generally have an inverse relationship: as you become more accurate and validate your JSON, you may have to compromise some on speed. This is really only noticeable on large data sets, as computers are very fast these days. It is up to you to decide what is more important, given how accurate you think your data will be when you receive it and whether that extra second or so is crucial. In some cases (e.g. the stock market and healthcare applications, where milliseconds matter) both are highly important, and it is in those cases that, as you increase one (for example accuracy), you may have to recover the speed by using a more powerful machine.
Hope this helps.
The first approach is more robust, but it does not have to be noticeably more expensive. It can even become cheaper, because you can abort the process on errors before the expensive part starts: your business logic usually takes >90% of the resources in a process, so if around 10% of requests fail validation, you are already resource neutral. If you optimize the validation process so that the validations needed by the business process are performed upfront, an even lower error rate (say 1 in 20 to 1 in 100) is enough to stay resource neutral.
For an example of an implementation that assumes upfront data validation, look at GSON (https://code.google.com/p/google-gson/).
GSON works as follows: every part of the JSON can be mapped onto an object, and this object is typed or contains typed data.
Sample object (Java used as the example language):
public class someInnerDataFromJSON {
    String name;
    String address;
    int housenumber;
    String buildingType;

    // Getters and setters
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    // etc.
}
Because GSON parses the data into the model you provide, the data is already type-checked.
This is the first point where your code can abort.
After this exit point, assuming the data conformed to the model, you can validate whether the data is within certain limits. You can also write that into the model.
Assume for this that buildingType is restricted to a list:
Single family house
Multi family house
Apartment
You can check the data during parsing by creating a setter which checks it, or you can check it after parsing, in a first pass of your business rule application. The benefit of checking the data first is that your later code will need less exception handling, which means less code that is easier to understand.
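A minimal sketch of such a validating setter for the buildingType example (the class name and allowed values below are just for illustration):
import java.util.Arrays;
import java.util.List;

public class Building {
    private static final List<String> ALLOWED_TYPES =
            Arrays.asList("Single family house", "Multi family house", "Apartment");

    private String buildingType;

    public String getBuildingType() { return buildingType; }

    public void setBuildingType(String buildingType) {
        // Reject anything outside the known list before it reaches the business logic
        if (!ALLOWED_TYPES.contains(buildingType)) {
            throw new IllegalArgumentException("Unknown building type: " + buildingType);
        }
        this.buildingType = buildingType;
    }
}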
I would definitely go for validation before processing.
Let's say you receive some json data with 10 variables of which you expect:
the first 5 variables to be of type string
6 and 7 are supposed to be integers
8, 9 and 10 are supposed to be arrays
You can do a quick variable type validation before you start processing any of this data and return a validation error response if one of the ten fails.
foreach ($data as $varName => $varValue) {
    $varType = gettype($varValue);
    if (!$this->isTypeValid($varName, $varType)) {
        // return validation error
    }
}
// continue processing
Think of the scenario where you are processing the data directly and the 10th value turns out to be of an invalid type. The processing of the previous 9 variables was a waste of resources, since you end up returning a validation error response anyway. On top of that, you have to roll back any changes already persisted to your storage.
I only used variable types in my example, but I would suggest full validation (length, max/min values, etc.) of all variables before processing any of them.
In general, the first option would be the way to go. The only reason you might need to consider the second option is if you are dealing with JSON data that is tens of MBs or larger.
In other words, only if you are trying to stream JSON and process it on the fly do you need to think about the second option.
Assuming that you are dealing with a few hundred KB at most per JSON, you can just go for option one.
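(If you ever do end up in that streaming case, a rough sketch of on-the-fly processing with Jackson's streaming API might look like the following; the JSON content and field name are invented for the example.)
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

public class StreamingDemo {
    public static void main(String[] args) throws Exception {
        String json = "{\"id\":\"A1\",\"quantity\":3}";
        JsonParser parser = new JsonFactory().createParser(json);

        // Walk the token stream and validate/process values as they arrive
        while (parser.nextToken() != null) {
            if (parser.getCurrentToken() == JsonToken.FIELD_NAME
                    && "quantity".equals(parser.getCurrentName())) {
                parser.nextToken();                  // move to the value
                int quantity = parser.getIntValue(); // throws if it is not an int
                System.out.println(quantity);
            }
        }
        parser.close();
    }
}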
Here are some steps you could follow:
1. Go for a JSON parser like GSON that would just convert your entire JSON input into the corresponding Java domain model object. (If GSON doesn't throw an exception, you can be sure that the JSON is perfectly valid.)
2. Of course, the objects which were constructed using GSON in step 1 may not be in a functionally valid state. For example, functional checks like mandatory fields and limit checks would have to be done. For this, you could define a validateState method which recursively validates the state of the object itself and its child objects.
Here is an example of a validateState method:
public void validateState() {
    // Assume this validateState is part of the Customer class.
    if (age < 12 || age > 150)
        throw new IllegalArgumentException("Age should be in the range 12 to 150");
    if (age < 18 && (guardianId == null || guardianId.trim().equals("")))
        throw new IllegalArgumentException("Guardian id is mandatory for minors");
    for (Account a : getAccounts()) {
        a.validateState(); // Throws appropriate exceptions if any inconsistency in state
    }
}
The answer depends entirely on your use case.
If you expect all calls to originate from trusted clients, then the upfront schema validation should be implemented so that it is activated only when you set a debug flag.
However, if your server delivers public api services then you should validate the calls upfront. This isn't just a performance issue - your server will likely be scrutinized for security vulnerabilities by your customers, hackers, rivals, etc.
If your server delivers private api services to non-trusted clients (e.g., in a closed network setup where it has to integrate with systems from 3rd party developers), then you should at least run upfront those checks that will save you from getting blamed for someone else's goofs.
It really depends on your requirements. But in general I'd always go for #1.
A few considerations:
For consistency I'd use method #1, for performance #2. However, when using #2 you have to take into account that rolling back in the case of invalid input may become complicated in the future as the logic changes.
JSON validation should not take that long. In Python you can use ujson for parsing JSON strings, which is an ultra-fast C implementation of the json Python module.
For validation, I use the jsonschema Python module, which makes JSON validation easy.
Another approach:
If you use jsonschema, you can validate the JSON request in steps. I'd perform an initial validation of the most common/important parts of the JSON structure, and validate the remaining parts along the business logic path. This allows you to write simpler and therefore more lightweight JSON schemas.
The final decision:
If (and only if) this decision is critical, I'd implement both solutions, time-profile them under correct and wrong input conditions, and weight the results by the wrong-input frequency. Therefore:
1c = average time spent with method 1 on correct input
1w = average time spent with method 1 on wrong input
2c = average time spent with method 2 on correct input
2w = average time spent with method 2 on wrong input
CR = correct input rate (or frequency)
WR = wrong input rate (or frequency)
if (1c * CR) + (1w * WR) <= (2c * CR) + (2w * WR):
    choose method 1
else:
    choose method 2

Junit assertEquals on objects with double fields

I have two Lists of Objects. These objects reference other objects which in turn contain doubles. I want to use assertEquals to test that the two lists are the same. I have verified by hand that they are, but assertEquals still fails. I think the reason is that the doubles are not exactly equal because of precision issues. I know that I can solve this problem by drilling down to the double fields and using assertEquals(d1, d2, delta), but that seems cumbersome. Is there any way to provide a delta to assertEquals (or another method) so that it uses that delta whenever it encounters doubles to compare?
Hamcrest matchers may make this a little easier. You can create a custom Matcher (or a FeatureMatcher - Is there a simple way to match a field using Hamcrest?), then compose it with a closeTo to test for doubles, and then use container matchers (How do I assert an Iterable contains elements with a certain property?) to check the list.
For example, to check for a list containing exactly one Thing, which has a getValue method returning approximately 10:
Matcher<Thing> thingWithExpectedDouble =
        Matchers.<Thing>hasProperty("value", Matchers.closeTo(10, 0.0001));
assertThat(listOfItems, Matchers.contains(thingWithExpectedDouble));

As3 BigInteger returns an Incorrect Answer

I am trying to implement an RSA encryption program in Flash. I looked into working with big numbers and found the BigInteger type in the Crypto package. I started playing around with BigIntegers, but my outputs are never the correct answer. For example, the code below outputs 5911 when the answer should be 9409. Any input about this error would be great.
var temp:BigInteger = new BigInteger(String(97));
temp = temp.pow(2);
trace(temp.toString());
Output = 5911
I'm not sure which crypto package you are referring to; I thought it was as3crypto, but I don't remember its implementation having a pow method with that signature. Either way, you always have to remember what base you are dealing with and what the library was designed for.
(97₁₆)² = 5911₁₆ (in decimal: 151² = 22801, which is 0x5911)
You are dealing with hex, not decimal, numbers.
Think of that geek-is-chic T-shirt that says "There are 10 kinds of people: those that understand binary and those that don't." In that case "10" is assumed to be 10₂, which equals 2₁₀. Unqualified bases almost always ruin everybody's day.
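For comparison, here is the same calculation in plain Java (not the AS3 API, just an illustration of how the radix changes the result), using java.math.BigInteger, which takes the radix explicitly:
import java.math.BigInteger;

public class RadixDemo {
    public static void main(String[] args) {
        BigInteger decimal = new BigInteger("97", 10);   // ninety-seven
        BigInteger hex     = new BigInteger("97", 16);   // 0x97 = 151

        System.out.println(decimal.pow(2));              // 9409
        System.out.println(hex.pow(2).toString(16));     // 5911 (0x5911 = 22801)
    }
}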

How should substring() work?

I do not understand why Java's String.substring() method (http://java.sun.com/j2se/1.5.0/docs/api/java/lang/String.html#substring(int,%20int%29) is specified the way it is. I can't tell it to start at a numbered position and return a specified number of characters; I have to compute the end position myself. And if I specify an end position beyond the end of the String, instead of just returning the rest of the String for me, Java throws an exception.
I'm used to languages where substring() (or substr()) takes two parameters: a start position, and a length. Is this objectively better than the way Java does it, and if so, can you prove it? What's the best language specification for substring() that you have seen, and when if ever would it be a good idea for a language to do things differently? Is that IndexOutOfBoundsException that Java throws a good design idea, or not? Does all this just come down to personal preference?
There are times when the second parameter being a length is more convenient, and there are times when the second parameter being the "offset to stop before" is more convenient. Likewise there are times when "if I give you something that's too big, just go to the end of the string" is convenient, and there are times when it indicates a bug and should really throw an exception.
The second parameter being a length is useful if you've got a fixed-length field. For instance:
// C#
String guid = fullString.Substring(offset, 36);
The second parameter being an offset is useful if you're going up to another delimiter:
// Java
int nextColon = fullString.indexOf(':', start);
if (nextColon == -1)
{
    // Handle error
}
else
{
    String value = fullString.substring(start, nextColon);
}
Typically, the one you want to use is the opposite to the one that's provided on your current platform, in my experience :)
"I'm used to languages where substring() (or substr()) takes two parameters: a start position, and a length. Is this objectively better than the way Java does it, and if so, can you prove it?"
No, it's not objectively better. It all depends on the context in which you want to use it. If you want to extract a substring of a specific length, it's bad, but if you want to extract a substring that ends at, say, the first occurrence of "." in the string, it's better than if you first had to compute a length. The question is: which requirement is more common? I'd say the latter. Of course, the best solution would be to have both versions in the API, but if you need the length-based one all the time, using a static utility method isn't that horrible.
As for the exception, yeah, that's definitely good design. You asked for something specific, and when you can't get that specific thing, the API should not try to guess what you might have wanted instead - that way, bugs become apparent more quickly.
Also, Java DOES have an alternative substring() method that returns the substring from a start index until the end of the string.
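For example (reusing the fullString and start names from the earlier snippet):
// One-argument form: returns everything from start to the end of the string
String tail = fullString.substring(start);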
The second parameter should be optional, and the first parameter should accept negative values.
If you leave off the 2nd parameter, it will go to the end of the string for you, without you having to compute it.
Having gotten some feedback, I see when the second-parameter-as-index scenario is useful, but so far all of those scenarios seem to be working around other language/API limitations. For example, the API doesn't provide a convenient routine to give me the Strings before and after the first colon in the input String, so instead I get that colon's index and call substring(). (And this explains why the second position parameter in substring() overshoots the desired index by 1, IMO.)
It seems to me that with a more comprehensive set of string-processing functions in the language's toolkit, the second-parameter-as-index scenario loses out to second-parameter-as-length. But somebody please post me a counterexample. :)
If you store this away, the problem should stop plaguing your dreams and you'll finally achieve a good night's rest:
public String skipsSubstring(String s, int index, int length) {
    return s.substring(index, index + length);
}