I'm facing an issue in JMeter. The API I am testing gets its parameters from a prior JDBC request.
This works fine when there is only 1 thread, but when I run multiple threads it throws the error below:
{"Message": "A transient error has occurred. Please try again. (1205)","Data":null}
I need to run 5 threads without having to run the JDBC request 5 times.
I can retrieve 5 results in 1 JDBC call and supply them sequentially, one to each thread. Is this possible? How can I do this?
Worst case, I will have to manually set up a CSV file instead of the JDBC calls.
Normally people use a setUp Thread Group for test data preparation and a tearDown Thread Group for eventual clean-up. I would suggest moving your JDBC Request under a setUp Thread Group and running it with 1 virtual user.
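As a minimal sketch (assuming a JSR223/Beanshell PostProcessor placed right after the JDBC Request in the setUp Thread Group, and variable names following the myVar example below), keep in mind that JMeter variables don't cross thread group boundaries, so you would copy them into properties first:
import org.apache.jmeter.util.JMeterUtils;

// copy every myVar_N row produced by the JDBC Request into a global JMeter property
int count = Integer.parseInt(vars.get("myVar_#"));
for (int i = 1; i <= count; i++) {
    JMeterUtils.setProperty("myVar_" + i, vars.get("myVar_" + i));
}
The main Thread Group can then pick its own row with ${__P(myVar_${__threadNum})}.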
If you have to keep the test plan structure as it is and can amend the SQL query to return more results, be aware that according to the JDBC Request sampler documentation the results look like:
myVar_#=5
myVar_1=foo
myVar_2=bar
myVar_3=baz
myVar_4=qux
myVar_5=corge
Therefore you can access the values using a combination of the __V() and __threadNum() functions, like:
${__V(myVar_${__threadNum},)}
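If you would rather do this in a scripting element, a rough equivalent in a JSR223/Beanshell PreProcessor would be (a sketch; the requestParam name is just an example, and note that ctx.getThreadNum() is zero-based while __threadNum is one-based):
// pick the row that matches the current thread; +1 because ctx.getThreadNum() starts at 0
String value = vars.get("myVar_" + (ctx.getThreadNum() + 1));
vars.put("requestParam", value); // then reference ${requestParam} in the HTTP sampler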
I am wondering if there is an execution id in Cloud Run like the one in Google Cloud Functions.
An ID that identifies each invocation separately is very useful with the "Show matching entries" feature in Cloud Logging to get all logs related to an execution.
I understand the execution model is different since Cloud Run allows concurrency, but is there a workaround to assign each log entry to a certain execution?
What I ultimately need is to group the request and the response on the same line, because for now I am printing them separately, and if a few requests arrive at the same time I can't tell which response corresponds to which request...
Thank you for your attention!
OpenTelemetry looks like a great solution, but the learning curve and setup time aren't negligible, so I'm going with a custom id created in before_request, stored in Flask's g, and included in every print().
import uuid
from flask import g

@app.before_request
def before_request_func():
    execution_id = uuid.uuid4()  # unique id for this request
    g.execution_id = execution_id
I use AMQ 6 (ActiveMQ) on OpenShift, and I use a queue with re-delivery with exponentialBackoff (set in connection query params).
When I have one consumer and two messages and the first message gets processed by my single consumer and does NOT get an ACK...
Will the broker deliver the 2nd message to the single consumer?
Or will the broker wait for the re-delivery to preserve message order?
This documentation states:
...Typically a consumer handles redelivery so that it can maintain message order while a message appears as inflight on the broker. ...
I don't want to have my consumer wait for re-delivery. It should consume other messages. Can I do this without multiple consumers? If so, how?
Note: In my connection query params I don't have the ActiveMQ exclusive consumer set.
I have read the Connection Configuration URI docs, but jms.nonBlockingRedelivery isn't mentioned there.
Can the resource adapter use it as a query param?
If you set jms.nonBlockingRedelivery=true on your client's connection URL then messages will be delivered to your consumer while others are in the process of redelivery. This is false by default.
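For example, with the ActiveMQ JMS client the connection URL could look roughly like this (a sketch; the broker host/port are placeholders and the back-off option mirrors what you already configured):
import org.apache.activemq.ActiveMQConnectionFactory;

// non-blocking redelivery lets other messages keep flowing to the consumer
// while a failed message sits in its (exponential back-off) redelivery schedule
ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
        "failover:(tcp://broker:61616)?jms.nonBlockingRedelivery=true"
        + "&jms.redeliveryPolicy.useExponentialBackOff=true");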
I have an application that is using Spring Boot + JPA/Hibernate (the DB being used is PostgreSQL).
I have a Controller method that returns a JSONArray. The size of the array has grown to 3.4 MB, and I have noticed it takes 30-35 seconds for a client to complete a request that retrieves it. I checked the query that gets generated, and the query itself finishes in 500 ms, so the DB isn't really the problem.
Is ~4 MB of data simply too much? I don't have much experience, so I was wondering if it's expected to take this long. This is a web service running on AWS on a stacked machine.
Any troubleshooting steps, debugging insights, or obvious things I should be doing? I looked into paginating the response but would like to avoid that if possible; I didn't think 3.4 MB was that big.
How is the information being formed on the back end? Are JOINs / multiple tables involved? Try sending 4 MB of static data as a test and see how that does. (You may need to make a fake REST endpoint to do this, but if you can get the JSON once you should be able to save it off to a file or something.)
You may also be able to place timers in your code, or use a tool like JVisualVM to connect to the running process and collect method timing information. Looking at the "method self time" metric may be useful here if the problem is in your Java code or its runtime dependencies.
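A hypothetical test endpoint for that static-data experiment might look like this (the class name and file path are made up; the point is that neither JPA nor any mapping logic is involved, so only Spring MVC and the network are being measured):
import java.nio.file.Files;
import java.nio.file.Paths;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StaticPayloadController {

    // serve a previously captured ~3.4 MB response straight from disk
    @GetMapping(value = "/api/static-test", produces = MediaType.APPLICATION_JSON_VALUE)
    public String getStaticPayload() throws Exception {
        return new String(Files.readAllBytes(Paths.get("/tmp/captured-response.json")));
    }
}
If this fake endpoint is also slow, the time is going into transfer or serialization rather than Hibernate.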
First I would check whether Hibernate is the bottleneck. You can test this by wrapping your backend call in some timing code; see the code below.
@RequestMapping("api")
public SomeObject getSomeObject() {
    long start = System.currentTimeMillis();
    SomeObject result = someService.getSomeObject(); // call the method that gets the object via JPA (hypothetical service)
    System.out.println("Got all results in '" + (System.currentTimeMillis() - start) / 1000 + "' seconds");
    return result;
}
If the time it outputs is > 30 seconds, then you know Hibernate is your bottleneck. If that is the case, you need to introduce some pagination to limit the results you return to the client.
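If pagination does turn out to be needed, a minimal Spring Data sketch could look like this (Item and itemRepository are assumed names):
// return one page at a time instead of the whole 3.4 MB payload
@GetMapping("/api/items")
public Page<Item> getItems(@RequestParam(defaultValue = "0") int page,
                           @RequestParam(defaultValue = "100") int size) {
    return itemRepository.findAll(PageRequest.of(page, size));
}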
I have 50 users in my Thread Group with a 50-second ramp-up (and 50 rows in my .csv config file). After a certain HTTP request I would like to test for a certain condition and, if it passes, continue to the next HTTP requests. I sort of read on Google that a BeanShell Assertion with the code
String response = SampleResult.getResponseDataAsString();
if(response.contains("\"HasError\":true")){
SampleResult.setStopThread(true);
}
should resolve my problem. But the problem is that this actually stops the entire test execution for all remaining users (and I might have more values in the .csv file to test). Is there any convenient way not to stop the entire test? If anybody has faced this problem, please advise.
You can set a thread to stop on a Sampler error by configuring it in the Thread Group component: select 'Stop Thread' in the 'Action to be taken after a Sampler error' section.
To ensure that you get a Sampler error, configure a Response Assertion.
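For example, based on the HasError flag in your response, the Response Assertion could be configured roughly like this (assumed values) so that it fails, and therefore triggers the Stop Thread action, whenever the flag is true:
Field to Test: Text Response
Pattern Matching Rules: Contains (with the "Not" checkbox ticked)
Patterns to Test: "HasError":true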
I'm doing API testing using JSON.
My JMeter Test Plan looks like this:
Test Plan
Thread Group 1 (run once)
- Login
Thread Group 2 (I will run this multiple times)
- Do some operation
Thread Group 3 (run once)
- Logout
I want to pass sessionid from Thread Group 1 to Thread Group 2 and 3.
To extract the sessionId, use a Regular Expression Extractor.
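For example (the field values below are assumptions about how the sessionid appears in the JSON response):
Reference Name: sessionId
Regular Expression: "sessionid":"(.+?)"
Template: $1$
Match No.: 1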
You can use the following code to pass a value to another thread group using a Beanshell PostProcessor element in JMeter.
Beanshell code to save the variable:
import org.apache.jmeter.util.JMeterUtils;
JMeterUtils.setProperty("propname", "value");
Beanshell code to retrieve the variable in another thread group:
import org.apache.jmeter.util.JMeterUtils;
vars.put("localvariable", JMeterUtils.getProperty("propname"));
String testVar = vars.get("localvariable");
log.info("# NEXT THREAD GROUP value=" + testVar);
This code uses JMeter's getProperty()/setProperty() API to pass the values. Alternatively, the JMeter Plugins project has Inter-Thread Communication.
hope this will help. :)
The How to use Beanshell guide contains an example of sharing cookies between different thread groups; scroll down to the Advanced Examples section.
If your "session" means a cookie-based session, you'll need to do the following:
Add HTTP Cookie Manager to the Test Plan (or each thread group if you prefer)
Tell the Cookie Manager to save cookies as variables by setting the CookieManager.save.cookies property to true, either in the jmeter.properties file (which lives under the /bin folder of your JMeter installation) or by passing it as a command-line argument as follows:
jmeter -JCookieManager.save.cookies=true -n ... -t ... -l ...
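With that property set, each cookie becomes a COOKIE_-prefixed JMeter variable in the thread that received it (the JSESSIONID name below is an assumption). Since plain variables don't cross thread group boundaries, the guide's trick is to copy the value into a property, e.g. in Beanshell/JSR223 elements:
// in Thread Group 1, after Login: publish the session cookie as a global property
props.put("sessionCookie", vars.get("COOKIE_JSESSIONID"));

// in Thread Group 2/3: read it back into a local variable, then use ${sessionCookie}
vars.put("sessionCookie", props.getProperty("sessionCookie"));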
Another approach is to have only one Thread Group instead of three and add a Cookie Manager to it, with a Loop Controller to run the operation multiple times.
Your test can be structured as:
Test Plan
Thread Group
- Cookie Manager
- Login
- Loop Controller (run this multiple times)
- Do some operation
- Logout