JMeter 3.0 confusing CSV columns when trying to generate a dashboard from a file recorded with different settings

Here is the situation: the JMeter results were recorded to .csv quite a long time ago (around six months). The JMeter version has not changed (3.0), but the config files have. Now I am trying to generate a report from the old CSV as usual, using
jmeter.bat -g my.csv -o reportFolder
Also, to work around the configuration incompatibility, I created a file named local-saveservice.properties and passed it via the -q command-line option. By adjusting settings in this file I managed to get past several errors such as "column number mismatch" and "No column xxx found in sample metadata", but I still could not generate the report successfully. Here is the trouble:
File 'D:\ .....\load_NSI_stepping3_2017-03-24-new.csv' does not contain the field names header, ensure the jmeter.save.saveservice.* properties are the same as when the CSV file was created or the file may be read incorrectly
An error occurred: Error while processing samples:Consumer failed with message :Consumer failed with message :Consumer failed with message :Consumer failed with message :Error in sample at line:1 converting field:Latency at column:11 to:long, fieldValue:'UTF-8'
However, in my .csv, column number 11 has the header "Latency" and contains numeric values; 'UTF-8' is the content of the next column, "Encoding".
Here are the first lines of my .csv:
timeStamp,elapsed,label,responseCode,responseMessage,success,bytes,grpThreads,allThreads,URL,Latency,Encoding,SampleCount,ErrorCount,Connect,threadName
1490364040950,665,searchItemsInCatalogRequest,200,OK,true,25457,1,1,http://*.*.*.*:9080/em/.....Service,654,UTF-8,1,0,9,NSI - search item in catalog
1490364041620,507,searchItemsInCatalogRequest,200,OK,true,25318,1,1,http://*.*.*.*:9080/em/.....Service,499,UTF-8,1,0,0,NSI - search item in catalog
1490364042134,495,searchItemsInCatalogRequest,200,OK,true,24266,2,2,http://*.*.*.*:9080/em/.....Service,487,UTF-8,1,0,0,NSI - search item in catalog
1490364043595,563,searchItemsInCatalogRequest,200,OK,true,24266,2,2,http://*.*.*.*:9080/em/.....Service,556,UTF-8,1,0,6,NSI - search item in catalog
P.S. I had to add threadName manually, because it was not saved during the initial data recording (my knowledge of JMeter was even more limited then than it is now :) )

First, you should upgrade to JMeter 3.3, as report-generation bugs have been fixed in the three versions released since 3.0.
Second, add to your command line:
jmeter.bat -p <path to jmeter.properties> -q <path to your custom.properties used when you generated the file> -g my.csv -o reportFolder
Ensure that in your custom.properties you set to false every property prefixed by jmeter.save.saveservice that did not yet exist at the time you generated the file.
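As a sketch, a custom.properties matching the header in the question could look like the following; the keys are standard jmeter.save.saveservice properties, but the exact true/false values are assumptions and must mirror whatever was in effect when the CSV was recorded:
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.print_field_names=true
jmeter.save.saveservice.timestamp_format=ms
jmeter.save.saveservice.latency=true
jmeter.save.saveservice.encoding=true
jmeter.save.saveservice.url=true
jmeter.save.saveservice.thread_name=true
# sent_bytes did not exist in 3.0, so switch it off when reading old files
jmeter.save.saveservice.sent_bytes=false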


Could not parse timeStamp <1.65269E+12> using format defined by property jmeter.save.saveservice.timestamp_format=ms on sample 1.65269E+12,3752

While generating the HTML report I get the error shown in the title. Please suggest how to overcome this issue. Thanks in advance.
There is a problem with your .jtl results file: JMeter expects to find a long value representing a timestamp in milliseconds since the beginning of the Unix epoch.
You should replace 1.65269E+12 with its "long" equivalent of 1652690000000
If you opened and saved JMeter's .jtl results file using Excel or an equivalent, you should re-save it and format the first column as plain numeric values without scientific notation.
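As a sketch, assuming a comma-separated results file named results.jtl with the timestamp in the first column, an awk one-liner can rewrite the scientific notation back to a plain long:
awk -F, 'NR==1 {print; next} {$1 = sprintf("%.0f", $1)} 1' OFS=, results.jtl > fixed.jtl
The first record (the field-names header) is passed through untouched; every other row gets its first field reprinted without an exponent.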
Also be aware that you can run a JMeter test and generate the HTML reporting dashboard in command-line non-GUI mode in one shot, like:
jmeter -n -t /path/to/testplan.jmx -l /path/to/testresult.jtl -e -o /path/to/dashboard
More information: Generating Reports

Opensmile: unreadable csv file while extracting prosody features from wav file

I am extracting prosody features from an audio file using the Windows version of openSMILE. It runs successfully and an output CSV is generated, but when I open the CSV, some rows are not readable. I used this command to extract the prosody features:
SMILEXtract -C \opensmile-3.0-win-x64\config\prosody\prosodyShs.conf -I audio_sample_01.wav -O prosody_sample1.csv
The output CSV shows up as unreadable binary content (screenshot omitted).
I even tried the sample wave file in the example audio folder of the openSMILE directory, and the output is the same (not readable). Can someone help me identify where the problem actually is, and how I can fix it?
You need to enable the csvSink component in the configuration file to make it work. The file config\prosody\prosodyShs.conf that you are using does not have this component defined and always writes binary output.
You can verify that it is the standard binary output this way: omit the -O parameter from your command so it becomes
SMILEXtract -C \opensmile-3.0-win-x64\config\prosody\prosodyShs.conf -I audio_sample_01.wav
and execute it. You will get an output.htk file which is exactly the same as prosody_sample1.csv.
To get CSV output, take a look at the example configuration in opensmile-3.0-win-x64\config\demo\demo1_energy.conf, where a csvSink component is defined.
You can find more information in the official documentation:
Get started page of the openSMILE documentation
The section on configuration files
Documentation for cCsvSink
This is how I solved the issue. First, I added the csvSink component to the list of component instances:
instance[csvSink].type = cCsvSink
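In the stock configurations that list lives in the component manager section, so, as a sketch (keeping whatever instances prosodyShs.conf already declares), the change sits here:
[componentInstances:cComponentManager]
; ...the instances already defined in prosodyShs.conf...
instance[csvSink].type = cCsvSink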
Next I added the configuration parameters for this instance.
[csvSink:cCsvSink]
reader.dmLevel = energy
filename = \cm[outputfile(O){output.csv}:file name of the output CSV file]
delimChar = ;
append = 0
timestamp = 1
number = 1
printHeader = 1
\{../shared/standard_data_output_lldonly.conf.inc}
Now if you run this file it will throw errors, because reader.dmLevel = energy depends on waveframes. So the final changes would be:
[energy:cEnergy]
reader.dmLevel = waveframes
writer.dmLevel = energy
[int:cIntensity]
reader.dmLevel = waveframes
[framer:cFramer]
reader.dmLevel=wave
writer.dmLevel=waveframes
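With those sections in place, re-running the original command from the question should now produce a readable, semicolon-delimited CSV; the -O value is picked up by the outputfile option declared in the csvSink's filename line:
SMILEXtract -C \opensmile-3.0-win-x64\config\prosody\prosodyShs.conf -I audio_sample_01.wav -O prosody_sample1.csv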
Further reference on how to write openSMILE configuration files can be found in the official documentation linked above.

Error finding and loading a file in Octave

I converted my .csv file to .dat format and tried to load the file into Octave. It throws an error:
unable to find file filename
I also tried to load the file in .csv format using the syntax
x = csvread(filename)
and it throws the error:
'filename' undefined near line 1 column 13.
I also tried opening the file in the editor and loading it from there, and now it shows me:
warning: load: 'filepath' found by searching load path
error: load: unable to determine file format of 'Salary_Data.dat'.
How can I load my data?
>> load Salary_Data.dat
error: load: unable to find file Salary_Data.dat
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> x = csvread(Salary_Data)
error: 'Salary_Data' undefined near line 1 column 13
>> x = csvread(Salary_Data.csv)
error: 'Salary_Data' undefined near line 1 column 13
>> load Salary_Data.dat
warning: load: 'C:/Users/vaith/Desktop\Salary_Data.dat' found by searching load path
error: load: unable to determine file format of 'Salary_Data.dat'
>> load Salary_Data.csv
warning: load: 'C:/Users/vaith/Desktop\Salary_Data.csv' found by searching load path
error: load: unable to determine file format of 'Salary_Data.csv'
Salary_Data.csv
YearsExperience,Salary
1.1,39343.00
1.3,46205.00
1.5,37731.00
2.0,43525.00
2.2,39891.00
2.9,56642.00
3.0,60150.00
3.2,54445.00
3.2,64445.00
3.7,57189.00
3.9,63218.00
4.0,55794.00
4.0,56957.00
4.1,57081.00
4.5,61111.00
4.9,67938.00
5.1,66029.00
5.3,83088.00
5.9,81363.00
6.0,93940.00
6.8,91738.00
7.1,98273.00
7.9,101302.00
8.2,113812.00
8.7,109431.00
9.0,105582.00
9.5,116969.00
9.6,112635.00
10.3,122391.00
10.5,121872.00
Ok, you've stumbled through a whole pile of issues here.
It would help if you didn't give us error messages without the commands that produced them.
The first message means you were telling Octave to open something called filename and it couldn't find anything called filename. Did you define the variable filename? Your second command and the error message suggest you didn't.
Do you know what Octave's working directory is? Is it the same as where the file is located? From the response to your load commands, I'd guess not. The file is located at C:/Users/vaith/Desktop. Octave's working directory is probably somewhere else.
(Try the pwd command and see what it tells you. Use the file browser or the cd command to navigate to the same location as the file. The help pwd and help cd commands would also provide useful information.)
The load command, used in command form (load file.txt), takes a bare word and treats it as a filename. In function form (load('file.txt') or csvread('file.txt')) the input must be a string, hence the quotes around file.txt. So all of your csvread calls thought you were giving them variable names, not filenames.
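For instance, with a hypothetical plain numeric file mydata.txt, both forms below read the same data; only the function form needs quotes:
>> load mydata.txt              % command form: the bare word is taken as a filename
>> x = load('mydata.txt');      % function form: the filename must be a quoted string
>> y = csvread('mydata.txt');   % csvread is a function too, so it also needs quotes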
Last, the fact that load couldn't read your data isn't overly surprising. Octave is trying to guess what kind of file it is and how to load it. I assume you tried help load to see what the different command options are? You can give it different options to help Octave figure it out. If it actually is a csv file though, and is all numbers not text, then csvread might still be your best option if you use it correctly. help csvread would be good information for you.
It looks from your data like you have a header line that is probably confusing the load command. For data formatted as simply as this, the csvread command can bring in the data; it will replace your header text with zeros.
So, first, navigate to the location of the file:
>> cd C:/Users/vaith/Desktop
then open the file:
>> mydata = csvread('Salary_Data.csv')
mydata =
0.00000 0.00000
1.10000 39343.00000
1.30000 46205.00000
1.50000 37731.00000
2.00000 43525.00000
...
If you plan to reuse the filename, you can assign it to a variable, then open the file:
>> myfile = 'Salary_Data.csv'
myfile = Salary_Data.csv
>> mydata = csvread(myfile)
mydata =
0.00000 0.00000
1.10000 39343.00000
1.30000 46205.00000
1.50000 37731.00000
2.00000 43525.00000
...
Notice how the filename is stored and used as a string with quotation marks, but the variable name is not. Also, csvread converted the non-numeric header data to zeros. The help for csvread and dlmread shows you how to change that to something other than zero, or to skip a certain number of rows. If you want to preserve the text, you'll have to use some other input function.
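For example, dlmread can skip the header row outright instead of zero-filling it; its third and fourth arguments are zero-based row and column offsets:
>> mydata = dlmread('Salary_Data.csv', ',', 1, 0)   % skip 1 header row, start at column 0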

BigQuery loading data from bq command line tool - how to skip header rows

I have a CSV data file with a header row that I am using to populate a BigQuery table:
$ cat dummy.csv
Field1,Field2,Field3,Field4
10.5,20.5,30.5,40.5
10.6,20.6,30.6,40.6
10.7,20.7,30.7,40.7
When using the Web UI, there is a text box where I am able to specify how many header rows to skip. However, if I upload the data into BigQuery using the bq command line tool, I do not have an option to do this, and always get the following error:
$ bq load my-project:my-dataset.dummydata dummy.csv Field1:float,Field2:float,Field3:float,Field4:float
Upload complete.
Waiting on bqjob_r7eccfe35f_0000015e3e8c_1 ... (0s) Current status: DONE
BigQuery error in load operation: Error processing job 'my-project:bqjob_r7eccfe35f_0000015e3e8c_1': CSV table encountered too many errors, giving up. Rows: 1;
errors: 1.
Failure details:
- file-00000000: Could not parse 'Field1' as double for field Field1
(position 0) starting at location 0
The bq command line tool quickstart documentation also does not mention any options for skipping headers.
One simple/obvious solution is to edit dummy.csv to remove the header row, but this is not an option if pointing to a CSV file on Google Cloud Storage instead of the local file dummy.csv.
This is possible to do through the web interface, and through the Python API, so it should also be possible to do with the bq tool.
Checking bq help load revealed a --skip_leading_rows option:
--skip_leading_rows : The number of rows at the beginning of the source file to skip.
(an integer)
Also found this option in the bq command line tool documentation (which is not the same as the quickstart documentation, linked to above).
Adding --skip_leading_rows=1 to the bq load command worked like a charm.
Here is the successful command:
$ bq load --skip_leading_rows=1 my-project:my-dataset.dummydata dummy.csv Field1:float,Field2:float,Field3:float,Field4:float
Upload complete.
Waiting on bqjob_r43eb07bad58_0000015ecea_1 ... (0s) Current status: DONE
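The same flag works when the source file lives on Google Cloud Storage instead of the local disk, e.g. with a hypothetical bucket name:
$ bq load --skip_leading_rows=1 my-project:my-dataset.dummydata gs://my-bucket/dummy.csv Field1:float,Field2:float,Field3:float,Field4:float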

Stacks software: Samples empty in Web interface

Using a reference genome
Manual:
(http://creskolab.uoregon.edu/stacks/manual/#sfiles)
I ran the following command line:
for FILE in $(ls /home/llcoutinho/Fabio/samples/*.bam); do ref_map.pl -T 7 -b 1 -B chicken_radtags -A F2 -D "Reference aligned genetic map RAD-Tag Samples" -o /home/llcoutinho/Fabio/samples/ -s $FILE; done
All outputs are present and accessible from the command line:
batch_1.catalog.alleles.tsv,
batch_1.catalog.snps.tsv,
batch_1.catalog.tags.tsv,
.matches.tsv (sample by sample),
.alleles.tsv (sample by sample),
.snps.tsv (sample by sample),
.tags.tsv (sample by sample),
batch_1.haplotypes,
batch_1.hapstats,
batch_1.markers,
batch_1.phistats,
batch_1.sumstats,
batch_1.sumstats_summary,
batch_1.populations,
ref_map.log
but when I access the web interface:
/localhost/stacks/index.php?db=chicken_radtags_
all the sample outputs appear, but without any information: totally empty, with no "unique stacks" and no "SNPs found".