JMeter: How to save the response time taken by each thread in CSV

I am running a load test with a minimum of 1000 threads in JMeter in command-line mode, but at the end of the execution I only get an aggregated result. What I actually want is the time taken by each thread, in a CSV file or in a graph.
Note: the request and response are in JSON.

Please check the Simple Data Writer listener.
It will save the raw results to a CSV file. From the listener's Configure options you can control what to capture, and from the CSV you can find the time taken per thread.
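If you run the test in non-GUI mode, the same result fields can also be enabled through JMeter's Results File Configuration properties. A minimal sketch in user.properties (standard jmeter.save.saveservice keys; adjust to the columns you actually need):
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.thread_name=true
jmeter.save.saveservice.label=true
jmeter.save.saveservice.time=true
jmeter.save.saveservice.successful=true
Here thread_name identifies the thread that executed each sample and time is the elapsed time in milliseconds, so grouping the CSV by thread name gives the per-thread timings.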

Related

How to skip some response messages based on a pattern while generating html reports from JTL

In my tests, requests often return a very long response message that is dozens of lines long. My tests involve high load, so these messages occur thousands of times. In the end the JTL becomes huge, and because of these large response messages I get out-of-heap-memory and array-size-too-large exceptions when I try to generate HTML reports from the command line. The test machines have around 16 GB of RAM and the JTL files are only around 5 GB. I allocated a max heap of up to 15 GB but the issue persists.
As a workaround I need to make JMeter ignore just these response messages while generating the HTML report. But I still need the particular line item to be part of the report because it is part of the overall load. That is, I cannot skip these responses entirely.
Any way to do this? Please let me know if any additional details are required.
You can use a JSR223 PostProcessor to remove the part of the response message which is not interesting.
Example code which removes the part of the response message after the first dot:
prev.setResponseMessage(prev.getResponseMessage().split('\\.')[0])
In the above code, prev stands for the SampleResult class instance; see the JavaDoc for all available functions, and the Top 8 JMeter Java Classes You Should Be Using with Groovy article for more information on this and other JMeter API shorthands available from JSR223 Test Elements.
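If the response bodies themselves are also being saved and inflating the JTL, a similar Groovy sketch could additionally truncate the stored response data (the 512-byte limit below is an arbitrary example, not a JMeter default):
// keep only the part of the response message before the first dot
def message = prev.getResponseMessage()
if (message != null) {
    prev.setResponseMessage(message.split('\\.')[0])
}
// optionally truncate the stored response body as well
def body = prev.getResponseDataAsString()
int limit = 512
if (body != null && body.length() > limit) {
    prev.setResponseData(body.substring(0, limit), 'UTF-8')
}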

JMeter - when a number of threads are running, the variable from the JSON Extractor sometimes does not work

I am now using JMeter to run tests of APIs.
The situation is that I have a login API which returns a token inside the response. I use a JSON Extractor to save the token as a variable, then I use ${token} in the headers of other requests.
However, I found that when I was trying to run 40-50 threads, ${token} in some threads would be empty, which caused a high error rate.
Is there any method to solve this, and why does it happen?
Thanks very much.
Try saving the full response from the Login API; most probably your server gets overloaded, cannot return the token, and returns some error message instead.
There are the following options:
If you're running JMeter in command-line non-GUI mode, you can amend JMeter's Results File Configuration to store the results in XML form and include the response data. Add the next lines to the user.properties file:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
and when you run your test next time the .jtl results file will contain response bodies for all the requests.
Another option is using a Listener like the Simple Data Writer, configured to write the response data into a separate file such as responses.xml; when you run the test, the responses.xml file will contain the response data.
Both the .jtl results file and responses.xml can be inspected using the View Results Tree listener.
More information: How to Save Response Data in JMeter
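As an additional debugging aid (a sketch, not part of the original answer): a JSR223 PostProcessor placed after the JSON Extractor on the login request can log the full response only when the extracted variable is empty, so you don't have to store every response. The variable name token is taken from the question:
// Groovy: dump the login response only when token extraction failed
def token = vars.get('token')
if (token == null || token.trim().isEmpty()) {
    log.warn('Token extraction failed, response code: ' + prev.getResponseCode())
    log.warn('Response body: ' + prev.getResponseDataAsString())
}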

Payload for "onProgress" callback exceeds $ 5120 bytes limit

As part of a Revit add-in that I am running in Design Automation, I need to extract some data from the file, send it in JSON format to an external server for analysis, and get the result back to update my Revit file with new features. I was able to satisfy this requirement by following the approach described in https://forge.autodesk.com/blog/communicate-servers-inside-design-automation, which worked as I needed. The problem arises when the size of the data to send for analysis grows; it results in the following error:
[11/12/2020 07:54:08] Error: Payload for "onProgress" callback exceeds $ 5120 bytes limit.
When checking my data, it turns out that the payload is around 27000 bytes. Are there other ways to send data from Design Automation for payloads larger than 5120 bytes?
I was unable to find documentation related to the use of ACESAPI: acesHttpOperation
There is no other way at the moment to send data from your work item to another server.
So either you would have to split the data into multiple 5120-byte parts and send them like that, or have two work items: one for getting the data from the file before doing the analysis and one for updating the file afterwards.

Error when Reading from a Large Data Lake Store

I have an SSIS package reading web data from Azure Data Lake through the component called Azure Data Lake Store Source Editor.
The data I am reading is large, web-based data, i.e. there is a lot of unreadable stuff in it.
The data is JSON and I don't want to parse it in the Source component; I am parsing it in another component (a Script Transformation). I just need a delimiter setting that tells SSIS not to try to parse the data.
All is fine for an hour or two. SSIS is loading the data for many files, but then I get the error.
Error:
Microsoft.SqlServer.Dts.Pipeline.PipelineComponentHResultException (0xC02090F5): Pipeline component has returned HRESULT error code 0xC02090F5 from a method call. at Microsoft.SqlServer.IntegrationService.AdlsComponents.PipelineComponentSource.TransferToOutputBuffers(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
After some investigation, I found that this is the error you get when the delimiter is part of the data.
I have tried almost every character in the ASCII table and I am still getting the error after some processing.
Do you have any idea:
Is there a way to bypass the delimiter?
Is there a delimiter you can recommend (maybe some control characters) that can never appear in the data?
Thanks for reading & considering

MongoDB - Update collection hourly without interrupting normal query

I'm implementing a web service that needs to query a JSON file (size: ~100 MB; format: [{},{},...,{}]) about 70-80 times per second, and the JSON file will be updated every hour. "Query a JSON file" means checking whether there is a JSON object in the file that has an attribute with a certain value.
Currently I think I will implement the service in Node.js and import (mongoimport) the JSON file into a collection in MongoDB. When a request comes in, it will query the MongoDB collection instead of reading and searching the file directly. In the Node.js server there should also be a timer service which checks every hour whether the JSON file has been updated, and if it has, "repopulates" the collection with the data from the new file.
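For reference, the import/repopulate step described above might look like the following mongoimport call (database, collection and file names are placeholders; --jsonArray matches the [{},{},...,{}] format and --drop replaces the existing documents):
mongoimport --db mydb --collection items --file data.json --jsonArray --drop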
The JSON file is retrieved by sending a request to an external API. The API has two methods: methodA lets me download the entire JSON file; methodB is actually just an HTTP HEAD call, which simply tells whether the file has been updated. I cannot get the incrementally updated data from the API.
My problem is with the hourly update. With the service running, requests are coming in constantly. When the timer detects that there is an update to the JSON file, it will download it, and when the download finishes it will try to re-import the file into the collection, which I think will take at least a few minutes. Is there a way to do this without interrupting the queries to the collection?
Above is my first idea for approaching this. Is there anything wrong with the process? Looking the data up in the file directly just seems too expensive, especially with requests coming in about 100 times per second.