JMeter read / write to CSV Data Set Config

So I am loading data from a CSV Data Set Config; what I would like to do is write the value of a variable back into that same file, in the same row, in the next column.
I check "url" for some specific response JSON, extract it into the variables "status" and "action", and want to add them to the row.
Is it even possible to write back to the source csv file? Maybe some post-processor script? Searching here is like a needle in a haystack sometimes.

It is possible, but I wouldn't recommend it: if you implement this post-processor logic and run your test with > 1 user, you will most probably run into a race condition where several threads write to the same file concurrently.
Alternatives are:
Add your status and action variable values to JMeter's .jtl results file: just declare the following Sample Variables property:
sample_variables=url,status,action
in the user.properties file, and the next time you run JMeter in command-line non-GUI mode you will see 3 extra columns in the .jtl results file holding the values of these 3 JMeter Variables.
If you want a separate file: first complete step 1 above (declaring the Sample Variables), then add a Flexible File Writer to your Test Plan and configure it to write the variables into a file; the relevant configuration would be something like:
variable#0|,|variable#1|,|variable#2|\r\n
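If you still want to go the scripting route despite the race-condition caveat above, a minimal JSR223 PostProcessor sketch (Groovy) could look like the following; it writes to a separate file rather than back into the source CSV, and the output file name and the crude string lock are assumptions:
// sketch only: append the extracted values to a separate file, not back into the source CSV
def line = [vars.get('url'), vars.get('status'), vars.get('action')].join(',')
// crude shared lock so concurrent threads don't interleave their writes
synchronized ('csv-write-lock') {
    new File('results-with-status.csv').append(line + System.lineSeparator())
}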

Related

Batch File in SSIS

I am producing two files from an SSIS package.
One is the main content and the other is the header.
After I have output both files - I am then merging them using an Execute Process Task.
So I have a content.txt and a header.txt.
/C copy /B \filepath\header.txt + \filepath\content.txt \filepath\result.txt
What I want to do at this stage is append the date to result.txt so that it becomes result_09102019.txt.
How do I achieve that within the snippet of code I have above?
I am no longer using an Execute Process Task to achieve the filename.
Instead I am using an Execute SQL Task to simply write the result set to a variable, which I then point at my Flat File Connection.
select 'filename_' + format(getdate(), 'yyyyMMddHHmm') + '.csv'
The single row result set gets written to a variable called OutputFileName.
I then have an OutputFolder variable, and combine OutputFolder and OutputFileName into another variable called OutputPath.
OutputPath is then applied to the file connection via an Expression.
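For reference, a sketch of the Expression on the OutputPath variable (assuming User::OutputFolder does not already end with a separator):
@[User::OutputFolder] + "\\" + @[User::OutputFileName]
The Flat File Connection then typically gets an Expression of @[User::OutputPath] on its ConnectionString property.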
Suggestion: explore the use of variables in building up file names, for example:
Store filename in variable and create tables with the filename in SSIS

Let Google BigQuery infer schema from csv string file

I want to upload csv data into BigQuery. When the data has different types (like string and int), BigQuery is capable of inferring the column names from the headers, because the headers are all strings whereas the other lines contain integers.
BigQuery infers headers by comparing the first row of the file with other rows in the data set. If the first line contains only strings, and the other lines do not, BigQuery assumes that the first row is a header row.
https://cloud.google.com/bigquery/docs/schema-detect
The problem is when your data is all strings ...
You can specify --skip_leading_rows, but BigQuery still does not use the first row as the names of your variables.
I know I can specify the column names manually, but I would prefer not doing that, as I have a lot of tables. Is there another solution ?
If your data is all of "string" type and the first row of your CSV file contains the column names, then I guess it is easy to write a quick script that parses the first line of your CSV and generates a similar "create table" command:
bq mk --schema name:STRING,street:STRING,city:STRING... -t mydataset.myNewTable
Use that command to create a new (empty) table, and then load your CSV file into that new table (using --skip_leading_rows as you mentioned).
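As an illustration, such a quick script could be a shell one-liner (assuming a comma-separated header with no embedded commas; myData.csv is a placeholder name, as in the update below):
# turn "name,street,city" into "name:STRING,street:STRING,city:STRING" and create the table
bq mk --schema "$(head -1 myData.csv | tr -d '\r' | sed 's/,/:STRING,/g; s/$/:STRING/')" -t mydataset.myNewTable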
Update 14/02/2018, thanks to Felipe's comment:
The above command can be simplified this way:
bq mk --schema `head -1 myData.csv` -t mydataset.myNewTable
It's not possible with the current API. You can file a feature request in the public BigQuery tracker: https://issuetracker.google.com/issues/new?component=187149&template=0.
As a workaround, you can add a single non-string value at the end of the second line in your file, and then set the allowJaggedRows option in the Load configuration. The downside is that you'll get an extra column in your table. If having an extra column is not acceptable, you can use a query instead of a load and SELECT * EXCEPT the added extra column, but querying is not free.
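With the bq command line, a sketch of that load (table and file names are assumptions) might look like:
# --autodetect lets the first row be picked up as the header once row 2 has a non-string value;
# --allow_jagged_rows tolerates the rows that are missing the extra trailing column
bq load --source_format=CSV --autodetect --allow_jagged_rows mydataset.myNewTable myData.csv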

How to write a JSON extracted value to a csv file in jmeter for a specific variable

I have a csv file that looks like this:
varCust_id,varCust_name,varCity,varStateProv,varCountry,varUserId,varUsername
When I run the HTTP Post Request to create a new customer, I get a JSON response. I am extracting the cust_id and cust_name using the JSON Extractor. How can I enter these new values into the csv under the correct variables? For example, after creating the customer, the csv would look like this:
varCust_id,varCust_name,varCity,varStateProv,varCountry,varUserId,varUsername
1234,My Customer Name
Or once I create a user, the file might look like this:
varCust_id,varCust_name,varCity,varStateProv,varCountry,varUserId,varUsername
1234,My Customer Name,,,,9876,myusername
In my searching through the net, I have found ways to append these extracted variables as a new line, but in my case I need to write the value in the correct location so it is associated with the correct variable I have set up in the csv file.
I believe what you're looking to do can be done via a BeanShell PostProcessor and is answered here.
Thank you for the reply. I ended up using User Defined Variables for some things and BeanShell PreProcessors for other bits vs. using the CSV.
Well, I have never tried this, but what you can do is create all these variables and set them to null / 0.
Once done, update them during your execution. At the end, you can concatenate them with any delimiter (say ";" or Tab) and just push them into the CSV as a single string (a sketch follows below).
Once you have the data in the CSV, you can easily split it in MS Excel.
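A minimal JSR223 sketch (Groovy) of that idea, intended to run once at the end of the flow; the output file name is an assumption and the variable names match the CSV header above:
// sketch only: join the collected variables with ';' and append them as one line
def row = ['varCust_id', 'varCust_name', 'varCity', 'varStateProv', 'varCountry', 'varUserId', 'varUsername']
        .collect { vars.get(it) ?: '' }
        .join(';')
new File('customers-out.csv').append(row + System.lineSeparator())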

Getting multiple values from a single cell in .CSV file in jmeter

How can I read multiple values from a single cell in a CSV file in JMeter? I have an Excel sheet saved as .csv input, and one of its columns has mobile numbers which have 2 or more values, e.g.
987#765#456. Which sampler should I use?
Now I want to split it at # into 987,765,456.
To read the csv file in JMeter, use CSV data set config.
Check this link to understand how to use CSV data set config in JMeter.
Let's assume the column name is mobileNo, which has the value 987#765#456.
Use a Beanshell PreProcessor to replace '#' with ','.
mobileNo = vars.get("mobileNo");
mobileNo = mobileNo.replace("#", ",");
vars.put("mobileNo",mobileNo);
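If you also need the individual numbers as separate JMeter variables, a small extension to the same PreProcessor could be (splitting the comma-separated value produced above; the mobileNo_1, mobileNo_2, ... names are assumptions):
// split the comma-separated value into mobileNo_1, mobileNo_2, ...
String[] numbers = vars.get("mobileNo").split(",");
for (int i = 0; i < numbers.length; i++) {
    vars.put("mobileNo_" + (i + 1), numbers[i]);
}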
You can use JMeter's __javaScript function to replace all occurrences of # with , as follows:
Given that your 987#765#456 bit lives in the ${mobileNumber} JMeter Variable:
${__javaScript("${mobileNumber}".split('#').join('\,'),mobileNumber)}
The script above replaces all "#" signs with commas and stores the result in the "mobileNumber" JMeter Variable.
To learn more about different JMeter Functions refer to How to Use JMeter Functions post series.

How to process multivariate time series given as multiline, multirow *.csv files with Apache Pig?

I need to process multivariate time series given as multiline, multirow *.csv files with Apache Pig. I am trying to use a custom UDF (EvalFunc) to solve my problem. However, all the Loaders I tried (except org.apache.pig.impl.io.ReadToEndLoader, which I cannot get to work) to load the data in my csv files and pass it to the UDF return one line of the file as one record. What I need, however, is one column (or the content of the complete file) in order to process a complete time series. Processing one value is obviously useless because I need longer sequences of values...
The data in the csv-files looks like this (30 columns, 1st is a datetime, all others are double values, here 3 sample lines):
17.06.2013 00:00:00;427;-13.793273;2.885583;-0.074701;209.790688;233.118828;1.411723;329.099170;331.554919;0.077026;0.485670;0.691253;2.847106;297.912382;50.000000;0.000000;0.012599;1.161726;0.023110;0.952259;0.024673;2.304819;0.027350;0.671688;0.025068;0.091313;0.026113;0.271128;0.032320;0
17.06.2013 00:00:01;430;-13.879651;3.137179;-0.067678;209.796500;233.141233;1.411920;329.176863;330.910693;0.071084;0.365037;0.564816;2.837506;293.418550;50.000000;0.000000;0.014108;1.159334;0.020250;0.954318;0.022934;2.294808;0.028274;0.668540;0.020850;0.093157;0.027120;0.265855;0.033370;0
17.06.2013 00:00:02;451;-15.080651;3.397742;-0.078467;209.781511;233.117081;1.410744;328.868437;330.494671;0.076037;0.358719;0.544694;2.841955;288.345883;50.000000;0.000000;0.017203;1.158976;0.022345;0.959076;0.018688;2.298611;0.027253;0.665095;0.025332;0.099996;0.023892;0.271983;0.024882;0
Has anyone an idea how I could process this as 29 time series?
Thanks in advance!
What do you want to achieve?
If you want to read all rows in all files as a single record, this can work:
a = LOAD '...' USING PigStorage(';') as <schema> ;
b = GROUP a ALL;
b will contain all the rows in a bag.
If you want to read each CSV file as a single record, this can work:
a = LOAD '...' USING PigStorage(';','-tagsource') as <schema> ;
b = GROUP a BY $0; --$0 is the filename
b will contain all the rows per file in a bag.
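From there you could hand each bag to your EvalFunc, for example (the jar and the myudfs.ProcessSeries UDF name are hypothetical):
-- hypothetical jar and EvalFunc; 'a' is the bag holding all rows of one group
REGISTER myudfs.jar;
c = FOREACH b GENERATE group, myudfs.ProcessSeries(a);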