JMeter CSV issue

Please help me with the following issue:
I have a simple JMeter test where variables are stored in a CSV file. There is only one request in the test:
GET .../api/${page}, where ${page} is a variable from the CSV
Everything goes well with thread properties of, for example, 10 threads x 30 loop count.
If I increase either parameter, e.g. to 10x40 or 15x30, I receive at least one error, and it looks like a JMeter issue:
one request (at random) isn't able to take its variable from the CSV, and I get an error:
- .../api/page returns a 404 error
So the question is: is there any limit on JMeter's connection to the CSV file?
Thanks in advance.

A key point to focus on is the way your application manages the case where two different users request the same page.
There are a few checks that I would recommend:
be sure that the "Recycle on EOF" property is set to True
be sure that the CSV has more lines than the number of threads you are firing
use a "View Results Tree" listener to investigate the kind of error you are getting (see the sketch below)
Let us know

Related

How to correctly annotate a CSV file for uploading into a bucket in InfluxDB

I am trying to evaluate InfluxDB as a real-time, time series data visualization tool. I have an account with InfluxDB and I have created a bucket for data storage. I now want to upload a CSV file into the bucket via the click-to-upload feature, but I keep getting errors associated with incorrect annotations. The last error I received was:
'Failed to upload the selected CSV: error in csv.from(): failed to read metadata: failed to read header row: wrong number of fields'
I have tried to decipher their docs and examples on how to annotate a CSV file and have tried many different combinations of #datatype, #group and #default, but nothing works.
This is the latest attempt that generated the error above.
#datatype,string,string,double,dateTime
#group,true,true,false,false
#default,,,,
_measurement,station,_value,_time
device,MBL,-0.814075542,1.65E+18
device,MBL,-0.837942395,1.65E+18
device,MBL,-0.862699339,1.65E+18
device,MBL,-0.891686336,1.65E+18
device,MBL,-0.891492408,1.65E+18
device,MBL,-0.933193098,1.65E+18
device,MBL,-0.933193098,1.65E+18
device,MBL,-0.976859072,1.65E+18
device,MBL,-0.981019863,1.65E+18
device,MBL,-1.011647128,1.65E+18
device,MBL,-1.017813258,1.65E+18
Any thoughts would be greatly appreciated. Thanks.
From the sample data above, I assume "device" is the name of a measurement and "MBL" is a tag whose key is station. Hence, there are 1 measurement, 1 tag, 1 field, and a timestamp.
You are mixing data types and line protocol elements when using annotated CSV. You could try the following version:
#datatype,measurement,tag,double,dateTime
#default,device,MBL,,
thisIsYourMeasurementName,station,thisIsYourFieldKeyName,time
device,MBL,-0.814075542,1652669077000000000
device,MBL,-0.837942395,1652669077000000001
device,MBL,-0.862699339,1652669077000000002
device,MBL,-0.891686336,1652669077000000003
device,MBL,-0.891492408,1652669077000000004
device,MBL,-0.933193098,1652669077000000005
device,MBL,-0.933193098,1652669077000000006
device,MBL,-0.976859072,1652669077000000007
device,MBL,-0.981019863,1652669077000000008
device,MBL,-1.011647128,1652669077000000009
device,MBL,-1.017813258,1652669077000000010
Note that the time column should avoid scientific notation such as "1.65E+18"; write full integer timestamps instead.
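If the upload UI keeps rejecting the file, a quick way to cross-check the annotations is to write the same file with the influx CLI (assuming an active CLI config; the bucket and file names here are placeholders):
influx write --bucket my-bucket --file data.csv --format csv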

Working on migration of SPL 3.0 to 4.2 (TEDA)

I am working on migrating 3.0 code into the new 4.2 framework. I am facing a few difficulties:
How do I do CDR-level deduplication in the new 4.2 framework? (Note: table deduplication is already done.)
Where should I implement PostDedupProcessor - context or chainsink custom? In either case, do I need to remove duplicate hashcodes from the list, or just reject the tuples? Here I am also updating columns for a few tuples.
My file is not moving into the archive. The temporary output file is being generated, but it is empty and outside the load directory. What could be the possible reasons? I have thoroughly checked the config parameters, and after adding logs it seems the correct output is being sent from the transformer custom, so I don't know where it is stuck. I printed the TableRowGenerator stream to the logs (at the end of DataProcessor).
Regarding 1. and 2.:
You need to select the type of deduplication. There is not a big difference whether you choose table- or CDR-level deduplication.
The ite.businessLogic.transformation.outputType setting affects this. There is only one dedup; you cannot have both.
Select recordStream for CDR-level deduplication, and do the transformation to table row format (e.g. if you want to use the TableFileWriter) in xxx.chainsink.custom::PostContextDataProcessor.
In xxx.chainsink.custom::PostContextDataProcessor you need to add custom code for duplicate handling: reject (discard) tuples, set special column values, or write them to different target tables.
Regarding 3.:
Possible reasons could be:
missing forwarding of window punctuations or the statistics tuple
an error in the BloomFilter configuration; you would see this easily because the PE is down and the error log gives hints about wrong sha2 functions being used
To troubleshoot your ITE application, I recommend enabling the following debug sinks if checking the StreamsStudio live graph is not sufficient:
ite.businessLogic.transformation.debug=on
ite.businessLogic.group.debug=on
ite.businessLogic.sink.debug=on
Run a test with a single input file only and check the flow of your record and statistics tuples. Debug sinks also write punctuation markers to the debug files.

JMeter: set property for each loop

I'm trying to create a test that will loop depending on the number of files stored in one folder, then output results based on their filenames. I'm thinking of using each filename as the name of its result, so I created something like this in a BeanShell PreProcessor:
props.setProperty("filename", vars.get("current_tc"));
Then use it for the name of the result:
C:\\TEST\\Results\\${__property(filename)}
"current_tc" is the output variable name of a ForEach controller. It returns different value on each loop. e.g loop1 = test1.csv, loop2 = test2.csv ...
I'm expecting that the result name will be test1.csv, test2.csv .... but the actual result is just test1.csv and the result of the other file is also in there. I'm new to Jmeter. Please tell me if I'm doing an obvious mistake.
Test Plan Image
The way of setting the property seems okay-ish; the question is where and how you are trying to use this C:\\TEST\\Results\\${__property(filename)} line, so a snapshot of your test plan would be very useful.
In the meantime I would recommend the following:
Check the jmeter.log file for any suspicious entries; if something goes wrong, most probably you will be able to figure out the reason using this file. Normally it is located in JMeter's "bin" folder.
Use a Debug Sampler and View Results Tree listener combination to check your ${current_tc} variable value; maybe the variable is not being incremented. See the How to Debug your Apache JMeter Script article to learn more about troubleshooting techniques.
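For reference, here is a commented sketch of the property-setting logic as a BeanShell/JSR223 PreProcessor (the variable name current_tc comes from the question; the null check and log line are additions for debugging):
// Copy the ForEach Controller's current value into a global property
String current = vars.get("current_tc");   // set by the ForEach Controller on each iteration
if (current != null) {
    props.put("filename", current);        // properties are global across all threads
    log.info("filename property set to: " + current);
}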

JMeter: several HTTP requests reading from one CSV Data Set Config

Is it possible to have several HTTP requests read from one CSV Data Set Config?
I want HTTP request 1 to read from lines 1 to 50 and HTTP request 2 to read from lines 51 to 100 of the .csv file, and so on. Is this possible, or do I have to make more small CSV files and more CSV Data Set Configs?
Yes, it is possible, but generally not recommended. It will be much easier if smaller CSVs are used. Nevertheless, you can make the following changes to do it through one CSV Data Set Config:
Configure "Recycle on EOF" to False.
Configure "Stop thread on EOF" to False.
Set Sharing mode to "All threads".
Place the HTTP requests in different thread groups and set the total number of threads to the desired value. In your case, the value for the 1st and 2nd thread groups should be 50. Also make sure that the two thread groups do not start at the same time: add a startup delay for the 2nd thread group. A sketch of the resulting layout follows below.
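Under those settings, the test plan layout might look like this (the file name, loop counts, and delay value are assumptions for illustration):
Test Plan
  CSV Data Set Config (data.csv; Recycle on EOF = False; Stop thread on EOF = False; Sharing mode = All threads)
  Thread Group 1: 50 threads, 1 loop
    HTTP Request 1    <- consumes lines 1-50
  Thread Group 2: 50 threads, 1 loop, startup delay of e.g. 10 s
    HTTP Request 2    <- consumes lines 51-100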

How to save all console output in Chrome?

I'm saving Chrome's console.log output to a chrome_debug.log file using the standard --enable-logging flag, and it turns out that (whatever verbosity level is set) only the first argument passed is stored in the logs.
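(For reference, a typical invocation under this setup; the exact binary name and the location of chrome_debug.log in the user data directory vary by platform:)
chrome --enable-logging --v=1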
So, say, for this:
console.log('AAAAAAAAA', 'BBBBBB', 'CCCCCC');
You'll get just something like:
[68867:1295:0414/182234:INFO:CONSOLE(2)] "AAAAAAAAA", source: (2)
So, BBBBBB and CCCCCC won't be logged at all.
This is, well, frustrating. My question is: is there a way to save all console.log output to a file?
UPD: please keep in mind that this question is not a duplicate of the one linked here; in that question, the very fact that only the first param is logged is not mentioned at all. The question is, once again, about how to log all params passed to console.log.