JMeter - CSV Data Set Config - blank variable on 2nd iteration

I have a plan with a CSV Data Set Config element in my thread group. The CSV has 1 row in it. Recycle on EOF=true, Stop on EOF=false.
My thread group has 1 thread with loop count=2.
There is a single http request in my thread group that uses a variable assigned in CSV Data Set Config.
My problem: The http request succeeds on the 1st loop iteration, but fails on the 2nd. It appears the variable from the CSV is blank on the 2nd iteration.
I read over several stack overflow posts on the subject, but couldn't figure it out.
I must be misunderstanding something.
Any insight or suggestions appreciated.

Figured it out: the CSV input file had an extra blank line at the end.
Stupidly obvious. Sorry to distract and waste your time.
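For anyone hitting the same symptom: with Recycle on EOF=true, a trailing blank line becomes a real (empty) record, so the variable comes back blank every other iteration. A minimal C# sketch for stripping trailing blank lines from a data file before a run (data.csv is a placeholder name, not from the original post):

using System.IO;
using System.Linq;

class TrimTrailingBlanks
{
    static void Main()
    {
        string path = "data.csv"; // placeholder file name
        // Drop blank lines from the end of the file, keep everything else.
        string[] lines = File.ReadAllLines(path)
            .Reverse()
            .SkipWhile(l => string.IsNullOrWhiteSpace(l))
            .Reverse()
            .ToArray();
        File.WriteAllLines(path, lines);
    }
}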

Related

LabVIEW - writing data from multiple DAQ Assistants in the same .csv-file

I have the following problem with my VI, which I could not solve by myself or through research:
When running the VI, the data should be stored in a .csv file. In the pictures you can see the block diagram. When run, it produces the following file:
Test Steady State
T_saug_1/T_saug_2/Unbelegt/Unbelegt/T_ND/T_HD/T_Wasser_ein/T_Wasser_aus/T_front/T_back/T-right/T-left
18,320 18,491 20,873 20,838 20,463 20,969 20,353 20,543 20,480 20,618
20,618 20,238
As you can see, the data gets stored only in the first column (in the preview of the post it looks like a row, but it is really a column; Test Steady State is the header). But these temperatures are not the readings of the first sensor; it somehow stored the value of every sensor in the respective row of that one column. When the first column was filled, it stopped storing data entirely. I did not figure out how I could attach a file here, otherwise I would have done so... I want to store the data for each sensor in its associated column.
Another problem I have: the waveform chart, which shows all the temperatures, only updates every 4-6 seconds. Not only is the interval between updates inconsistent, but from my understanding it should update every second, since the while loop has a wait timer set to 1000 ms. I don't know what my mistake here is...
Please let me know if you have any ideas on how to solve these problems, or suggestions for where I could find answers to my questions. I am very new to LabVIEW; I am sorry if this question is silly.
With best regards and thank you for the patient help,
lempy.
[Attached images: csv-file, block diagram, DAQ Assistant for PT100, DAQ Assistant for TC]
The Write Delimited Spreadsheet VI has two boolean inputs: append to file? and transpose?
append to file? is not wired for the first write, so it defaults to FALSE. That means each write overwrites the file. For the second and third calls it is set to TRUE, so that data is appended.
The simplest solution is to move the first two write functions outside the main loop. The file is then overwritten with just the headers at the start of the VI, and values are appended afterwards as desired.
transpose? swaps rows and columns. Wire TRUE to it and check whether it works.
About your second question:
A loop runs only as fast as the slowest process inside it. If the graph updates only every 6 s, something takes 6 s to complete. My guess is that the temperature readings take that long...

Ignore last/corrupted record from flat file source in SSIS

I have the following CSV file:
col1, col2, col3
"r1", "r2", "r3"
"r11", "r22", "r33"
"totals","","",
followed by 2 blank lines. The import is failing because there is an extra comma at the end of the last data row, and it will most probably also fail because of the extra blank lines at the end.
Can I somehow skip the last row, or better yet, stop the import when I reach that row? It always has the string "totals" in "col1".
UPDATE:
As far as I understood from the answers, it is not possible to do this with the Flat File source. For now I have done it with a Script Component as the source.
You can do it by reading each row as a single string (one column).
Conditionally split out NULL rows and rows where LEFT(col0) == "totals".
In the Script Component you then use a split function,
and finally trim("\"") to remove the surrounding quotes.
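A rough C# sketch of that approach, with the Script Component acting as the source (the file path, the header handling, and the Output0 column names are assumptions, not from the original post):

// Hypothetical SSIS Script Component used as a source.
// (Inside the ScriptMain : UserComponent class generated by SSIS;
// Output0 is assumed to have three string columns: Col1, Col2, Col3.)
public override void CreateNewOutputRows()
{
    bool isHeader = true;
    foreach (string line in System.IO.File.ReadLines(@"C:\data\input.csv"))
    {
        if (isHeader) { isHeader = false; continue; }     // skip "col1, col2, col3"
        if (string.IsNullOrWhiteSpace(line)) continue;    // skip trailing blank lines
        if (line.StartsWith("\"totals\"")) continue;      // skip the summary row

        string[] f = line.Split(',');                     // naive split, assumes no embedded commas
        Output0Buffer.AddRow();
        Output0Buffer.Col1 = f[0].Trim().Trim('"');
        Output0Buffer.Col2 = f[1].Trim().Trim('"');
        Output0Buffer.Col3 = f[2].Trim().Trim('"');
    }
}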
I know of nothing built into SSIS that lets you ignore the LAST line of a CSV.
One way to handle this is to precede your data flow with a Script Task that uses the FileSystemObject to edit the CSV and remove the last line.
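The FileSystemObject is the classic VBScript/VB route; in a C# Script Task the same idea could look roughly like this (the path is a placeholder, and trailing blank lines are dropped as well, since the file in question ends with them):

// Hypothetical body of an SSIS Script Task's Main() method.
// Requires: using System.IO; using System.Linq;
string path = @"C:\data\input.csv";              // placeholder path
string[] kept = File.ReadAllLines(path)
    .Where(l => !string.IsNullOrWhiteSpace(l))   // drop the blank lines at the end
    .ToArray();
// Remove the final "totals" line, then rewrite the file in place.
File.WriteAllLines(path, kept.Take(kept.Length - 1));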
You will need to create a custom script where you read all lines but the last within SSIS.
This is old, but it came up for me when searching this topic. My solution was to redirect rows on the destination. The last row is redirected instead of failing, and the job completes. Of course, you will potentially redirect rows you don't want to. It all depends on how much you can trust the data.

Running a thread group multiple times for all the values in a csv file

I have recorded a series of 5 HTTP requests in a thread group (say TG). The response value of each request has to be sent as a parameter in the next request, and so on until the last request is made.
To send the parameter in the first request, I have created a CSV file with unique values (say 1,2,3,4,5).
Now I want this TG to run for all the values read from the CSV file (in the above case, TG should first run for value 1, then value 2, and so on up to 5).
How do I do this?
Given that your CSV file looks like:
1
2
3
4
5
In the Thread Group, set Loop Count to "Forever"
Add a CSV Data Set Config element under the Thread Group and configure it as follows:
Filename: if the file is in JMeter's bin folder - the file name only. If in another location - the full path to the CSV file
Variable Names: anything meaningful, e.g. parameter
Recycle on EOF - false
Stop thread on EOF - true
Sharing mode - according to your scenario
See the Using CSV Data Set Config guide for a more detailed explanation.
Another option is using the __CSVRead() function.
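For reference, __CSVRead() takes a file name and a column number, and the special argument next advances to the following row. A minimal usage sketch, assuming a one-column file named data.csv (a placeholder name):

${__CSVRead(data.csv,0)}    - returns column 0 of the current row
${__CSVRead(data.csv,next)} - moves the read pointer to the next row

Each iteration would typically reference column 0 in the request and call next once to step through the file.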
This method of creating an individual request for each record will not scale to many records. There is a more scalable solution here - Jmeter multiple executions for each record from CSV

Process Multiple Batches in Flat File SSIS

I have a flat file which contains multiple batches.
I want to read the file into a db table but maintain some reference to the batch each line belongs to.
My thought is to append to each detail row the date/timestamp from the header row of the batch to which it belongs.
What I have done is read the file into an in-memory recordset and then use the Foreach Loop container to examine and process each line.
I am stuck on the following:
1. examining each line to determine whether it is a header or not
2. appending the batch header information to each line.
Thanks
I found this sample script on MSDN which worked perfectly. If you are coding in C#, initialize your counter variable outside of the function, else it will keep reinitializing every time the function is called.
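To illustrate the point about the counter and carrying the header forward (a sketch only, not the MSDN sample itself; Line, BatchStamp, and the "HDR" marker are assumed names):

// Hypothetical SSIS Script Component (transformation).
// State lives at class level, so it survives across calls to
// Input0_ProcessInputRow; declared inside the method it would reset every row.
public class ScriptMain : UserComponent
{
    private int rowCounter = 0;            // class level, NOT inside the method
    private string currentBatchStamp = "";

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        rowCounter++;
        if (Row.Line.StartsWith("HDR"))    // assumed header-row marker
        {
            // Remember this batch's date/timestamp for its detail rows.
            currentBatchStamp = Row.Line.Substring(3).Trim();
        }
        else
        {
            Row.BatchStamp = currentBatchStamp;  // assumed output column
        }
    }
}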

ssis - capturing the bad rows

Hi, can you help me figure this out? Is there a way to get the row in which an error occurred in SSIS? I have this flat file with some 10k+ records which is being read via a Flat File Source.
Right now the error output defaults to error-column, error-code, and 'flatfilesourceerroroutputcolumn', and I use a Script Component to handle it. But none of these three inputs (to the Script Component) are user-friendly enough. So I want to get an output like the first column value (this is a unique identifier) of the row in which the error occurred. How can I add that?
While debugging this in SSIS, you can add a Data Viewer on the path leading to where your script handles errors. This path has all the columns of the original row where your error is.
If you want to handle your SSIS errors and also do something else with them, you can direct the error output from your Flat File Source to a Multicast and then send one stream down to a file, a table, or something else (a Recordset destination and a subsequent Foreach loop over the object used to store the Recordset will let you work row by row on the errored row(s)).
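If the goal is specifically to surface the first column value of the failed row, one option is to parse it out of the raw line carried on the Flat File Source error path. A hedged C# sketch for the Script Component (the generated column name, the DT_NTEXT/blob handling, and the naive comma split are assumptions about a typical setup):

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // The error output column is DT_NTEXT, so it surfaces as a blob.
    var blob = Row.FlatFileSourceErrorOutputColumn;
    byte[] bytes = blob.GetBlobData(0, (int)blob.Length);
    string rawRow = System.Text.Encoding.Unicode.GetString(bytes);

    // Naive split: assumes no embedded commas inside quoted fields.
    Row.RowIdentifier = rawRow.Split(',')[0].Trim().Trim('"');  // assumed output column
}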