Changing file path name every five minutes in Logstash CSV output plugin

My requirement is that the file name should change every 5 minutes. Currently I am using the configuration below, which changes it every minute. Please tell me a way to change it every 5 minutes.
output {
  stdout { codec => rubydebug }
  csv {
    # Elasticsearch field names
    fields => ["@timestamp","requestid","ngnix.responsebytes"]
    # This is the path where we store the output.
    path => "C:/Users/M1056317/ELK/csv/try6/csv-export-%{+YYYY-MM-dd_hh.mm}.csv"
  }
}

I don't think that the time format will allow you to do this.
Another way to achieve what you need is to always write to C:/Users/M1056317/ELK/csv/try6/csv-export.csv and use LogRotate (or equivalent) to rotate your logs every five minutes.
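If you would rather keep everything inside Logstash, a minimal sketch of another approach (untested; the [@metadata][five_min] field name is made up for this example) is to pre-compute a five-minute bucket in a ruby filter and reference it in the path:

filter {
  ruby {
    # Round the event timestamp down to the nearest 5 minutes (300 seconds)
    # and store the result in @metadata so it is not written into the CSV.
    code => "
      t = event.get('@timestamp').to_i
      event.set('[@metadata][five_min]', Time.at(t - (t % 300)).utc.strftime('%Y-%m-%d_%H.%M'))
    "
  }
}
output {
  csv {
    fields => ["@timestamp","requestid","ngnix.responsebytes"]
    # The path is re-evaluated per event, so a new file is started
    # whenever the bucket value rolls over.
    path => "C:/Users/M1056317/ELK/csv/try6/csv-export-%{[@metadata][five_min]}.csv"
  }
}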

Related

How to export .csv files using a script in Dymola?

I am running a big set of simulations in Dymola using a script; so far, it works well.
However, it remains incomplete because all the results are still in .mat files and I have not found a way to automatically save them as .csv.
I found the DataFiles.convertMATtoCSV() function, but it requires me to specify a list of variables to export. I would like it to export all the variables without writing them out one by one. Is that possible?
In the Dymola Manual, there is a section "Saving all values into a CSV file".
It contains the following example code:
// Define name of trajectory file (fileName) and CSV file
// (CSVfile)
fileName="PID_Controller.mat";
CSVfile="AllVariables.csv";
// Read the size of the trajectories in the result file and
// store in 'n'
n=readTrajectorySize(fileName);
// Read the names of the trajectories
names = readTrajectoryNames(fileName);
// Read the trajectories 'names' (and store in 'traj')
traj=readTrajectory(fileName,names,n);
// transpose traj
traj_transposed=transpose(traj);
// write the .csv file using the package 'DataFiles'
DataFiles.writeCSVmatrix(CSVfile, names, traj_transposed);
This should do what you want, and it leaves room for customization later if necessary.
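Since the question mentions running a big set of simulations from a script, a small extension (a sketch, assuming the result files are named result1.mat ... result10.mat; adjust to your own naming scheme) could convert all of them in one loop:

// Convert every result file in one pass (hypothetical file names)
for i in 1:10 loop
  fileName = "result" + String(i) + ".mat";
  CSVfile = "result" + String(i) + ".csv";
  n = readTrajectorySize(fileName);
  names = readTrajectoryNames(fileName);
  traj = readTrajectory(fileName, names, n);
  DataFiles.writeCSVmatrix(CSVfile, names, transpose(traj));
end for;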

CSV Data Set Config not looping

I'm using v5.1.1 of JMeter and attempting to use the "CSV Data Set Config". The file is read correctly as I can tell from the Debug Sampler/Results Tree, but the file is not being read line by line. In other words, it reads the first line and never proceeds to the next line for processing.
I would like to use the data inside the CSV to iterate over a series of HTTP Requests to an external API. I currently have a single thread with only the "CSV Data Set Config" and "HTTP Request".
Do I need to wrap this with a ForEach Controller or another looping construct? Perhaps I'm missing it, but I do not see anything in the documentation that would indicate it's necessary.
Thanks
You don't need to wrap this in a ForEach loop. The first line of the CSV file holds the variable names:
Let's say your CSV file looks like this:
foo, bar
1, John
2, George
3, Laura
If you then use an HTTP Request sampler, ${foo} and ${bar} will be iterated sequentially, one row per loop iteration. However, please be mindful of the CSV Data Set Config options (Recycle on EOF, Stop thread on EOF, and Sharing mode), as they control what happens across iterations and threads.
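For example, with a hypothetical endpoint, the HTTP Request sampler's Path field could be:

/api/users/${foo}?name=${bar}

On the first iteration this resolves using the first data row (foo=1, bar=John), on the second iteration using the next row (foo=2, bar=George), and so on.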
By default the CSV Data Set Config doesn't trigger any "looping"; it reads the next line from the CSV file for each thread (virtual user) on each iteration.
So if you want to see more values from the CSV file, either add more users, more loops, or both.
Given
This CSV file:
line1
line2
line3
With a CSV Data Set Config and a Thread Group set up as in the original answer's screenshots (not reproduced here), you will get the following values (using the __threadNum() function to visualize the current virtual user number and the ${__jm__Thread Group__idx} pre-defined variable to show the current Thread Group iteration):
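A rough reconstruction, assuming 2 threads, 2 loop iterations, and Recycle on EOF = True (the exact interleaving across threads is not guaranteed):

Thread 1, iteration 1: line1
Thread 2, iteration 1: line2
Thread 1, iteration 2: line3
Thread 2, iteration 2: line1 (recycled from the top of the file)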
Check out the JMeter Parameterization - The Complete Guide article for more information on various approaches to parameterizing JMeter tests using external data sources.

JMeter change CSV file when current file reaches its last data

I have a script for a stress test using JMeter; the problem is that there is so much data that I needed to divide it into multiple CSVs.
Is it possible in JMeter to switch the CSV file used as the data source once the current file reaches its last row?
Example:
I have 1 million rows of data in a CSV; at run time, when the iterations have consumed all 1 million rows, it should switch to a file with newer data.
You can have multiple CSV Data Set Config elements with different variable names, e.g. id, id1, id2.
Set Recycle on EOF to False for each of them:
Recycle on EOF? Should the file be re-read from the beginning on reaching EOF? (default is True)
When you get to the end of a file, its variable holds the literal string <EOF>, so you can check whether "${id}" == "<EOF>" and override id / use ${id1} instead.
Example (for instance in a JSR223 PreProcessor):
if ("<EOF>".equals(vars.get("Email"))) {
    if ("<EOF>".equals(vars.get("Email2"))) {
        // Both the first and second files are exhausted; fall back to the third.
        vars.put("Email", vars.get("Email3"));
        vars.put("Password", vars.get("Password3"));
    } else {
        // The first file is exhausted; fall back to the second.
        vars.put("Email", vars.get("Email2"));
        vars.put("Password", vars.get("Password2"));
    }
}

jmeter - how to skip specific rows from a CSV

I've a csv like this:
NAME;F1;F2;
test1;field1;field2
test2;field1;field2
test3;field1;field2
I want to test only test1, so I would change the CSV to:
ID;F1;F2;
test1;field1;field2
#test2;field1;field2
#test3;field1;field2
How can I skip the rows for test2 and test3 in JMeter?
There is always a way to do something.
Maybe my way is not the best or the prettiest, but it works!
Thread Group
  Loop Controller
    CSV Data Set Config
    If Controller
      HTTP Request
Inside the If Controller I added this expression:
${__groovy(vars.get('ID').take(1)!='#')}
This way, when you put a # at the start of a row, that row is skipped.
I hope this is helpful for someone.
You cannot. The only option I can think of is creating a new CSV file out of the existing one with just the first 2 lines, like this:
Add setUp Thread Group to your Test Plan
Add JSR223 Sampler to the setUp Thread Group
Put the following code into "Script" area
def newFile = new File('new.csv')
newFile.text = '' // truncate first so re-runs don't append duplicate lines
new File('original.csv').readLines().take(2).each { line ->
    newFile << line << System.getProperty('line.separator')
}
Replace original.csv with the path to your current CSV file and point the CSV Data Set Config at new.csv.
The above code writes the first 2 lines from original.csv into new.csv, so you will be able to access the limited external data instead of the full CSV file.
More information:
File.readLines()
Collection.take()
The Groovy Templates Cheat Sheet for JMeter

Cassandra RPC Timeout on import from CSV

I am trying to import a CSV into a column family in Cassandra using the following syntax:
copy data (id, time, vol, speed, occupancy, status, flags) from 'C:\Users\Foo\Documents\reallybig.csv' with header = true;
The CSV file is about 700 MB, and for some reason when I run this command in cqlsh I get the following error:
"Request did not complete within rpc_timeout."
What is going wrong? There are no errors in the CSV, and it seems to me that Cassandra should be able to suck this CSV in without a problem.
The Cassandra installation folder has a .yaml file (cassandra.yaml) with an RPC timeout setting, "rpc_timeout_in_ms"; you can increase the value and restart Cassandra.
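For example, a sketch assuming an older Cassandra release where this setting still exists (newer versions replace it with request-specific timeouts such as read_request_timeout_in_ms):

# cassandra.yaml
rpc_timeout_in_ms: 60000   # raised from the 10000 ms default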
Another way is to cut your big CSV into multiple smaller files and import them one by one.
This actually ended up being my own misinterpretation of COPY FROM, as the CSV was about 17 million rows. In this case the best option was to follow the bulk loader example and run sstableloader. However, the answer above would certainly work if I wanted to break the CSV into 17 different CSVs, which is an option.
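For reference, once the SSTables have been generated (the bulk loader example in the Cassandra documentation shows how), loading them looks roughly like this, where the last argument is the <keyspace>/<table> directory containing the generated SSTables (the host and path here are illustrative):

sstableloader -d 127.0.0.1 /path/to/mykeyspace/data/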