I am writing a BDD test using Cucumber, but I have a problem reading a line for an assert: it doesn't read the tabulations at the beginning of the expected line in my feature file.
here is my assert:
lines.forEachIndexed { i, line -> assertEquals(expectedLines[i], line) }
here is the line causing the error in my feature file (expectedLines):
| 335374655.87 Files to be generated compliant with SLA figures 0 |
and here is the error:
org.junit.ComparisonFailure:
Expected :335374655.87 Files to be generated compliant with SLA figures 0
Actual : 335374655.87 Files to be generated compliant with SLA figures 0
any help please?
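For what it's worth, here is a minimal sketch (in Python, purely to illustrate the symptom; the string values are taken from the error above): Gherkin data-table cells are trimmed of surrounding whitespace, which would explain why the expected string has lost its leading tabs while the actual line keeps them. A comparison that strips leading whitespace on both sides then passes.

```python
expected = "335374655.87 Files to be generated compliant with SLA figures 0"
actual = "\t335374655.87 Files to be generated compliant with SLA figures 0"

# The strings differ only by invisible leading whitespace,
# which is why the Expected/Actual lines in the failure look identical.
assert expected != actual

# Stripping leading whitespace before comparing makes them equal.
assert expected == actual.lstrip()
```

In the Kotlin step definition, this corresponds to comparing `line.trimStart()` (or `trim()`) against the expected cell instead of the raw line.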
The Systemverilog code below is a single file testbench which reads a binary file into a memory using $fread then prints the memory contents. The binary file is 16 bytes and a view of it is included below (this is what I expect the Systemverilog code to print).
The output printed matches what I expect for the first 6 bytes (0-5). At that point the expected output is 0x80, but the printed output is a sequence of 3 bytes starting with 0xef which are not in the stimulus file. After those 3 bytes the output matches the stimulus again. The error seems to occur whenever bit 7 of the byte read is 1, almost as if the data were being treated as signed; but it is not signed, it is binary data printed as hex, and the memory is declared as type logic, which is unsigned.
This is similar to a question/answer in this post:
Read binary file data in Verilog into 2D Array.
However my code does not have the same issue (I use "rb" in the $fopen statement), so that solution does not apply here.
The Systemverilog spec 1800-2012 states in section 21.3.4.4, Reading binary data, that $fread can be used to read a binary file, and goes on to say how. I believe this example is compliant with what is stated in that section.
The code is posted on EDA Playground so that users can see it and run it.
https://www.edaplayground.com/x/5wzA
You need a (free) login to run it and download files. It provides access to full cloud-based versions of the industry-standard tools for HDL simulation.
I also tried running 3 different simulators on EDA Playground; they all produce the same result.
I have also tried re-arranging the stim.bin file so that the 0x80 value occurs at the beginning of the file rather than in the middle. In that case the error also occurs at the beginning of the printed output.
Maybe the Systemverilog code is fine and the problem is the binary file? I have provided a screenshot of what emacs hexl mode shows for its contents. I also viewed it in another viewer and it looked the same. You can download it when running on EDA Playground to examine it in another editor. The binary file was generated by GNU Octave.
I would prefer a solution which uses Systemverilog $fread rather than something else, in order to debug the original rather than work around it (learning). This will be developed into a Systemverilog testbench which applies stimulus read from a binary file generated in Octave/Matlab to a Systemverilog DUT. Binary file I/O is preferred because of the file access speed.
Why does the Systemverilog testbench print 0xef rather than 0x80 for mem[6]?
module tb();

  // file descriptors
  int read_file_descriptor;

  // memory
  logic [7:0] mem [15:0];

  // ---------------------------------------------------------------------------
  // Open the file
  // ---------------------------------------------------------------------------
  task open_file();
    $display("Opening file");
    read_file_descriptor = $fopen("stim.bin", "rb");
  endtask

  // ---------------------------------------------------------------------------
  // Read the contents of file descriptor
  // ---------------------------------------------------------------------------
  task readBinFile2Mem();
    int n_Temp;
    n_Temp = $fread(mem, read_file_descriptor);
    $display("n_Temp = %0d", n_Temp);
  endtask

  // ---------------------------------------------------------------------------
  // Close the file
  // ---------------------------------------------------------------------------
  task close_file();
    $display("Closing the file");
    $fclose(read_file_descriptor);
  endtask

  // ---------------------------------------------------------------------------
  // Shut down testbench
  // ---------------------------------------------------------------------------
  task shut_down();
    $stop;
  endtask

  // ---------------------------------------------------------------------------
  // Print memory contents
  // ---------------------------------------------------------------------------
  task printMem();
    foreach (mem[i])
      $display("mem[%0d] = %h", i, mem[i]);
  endtask

  // ---------------------------------------------------------------------------
  // Main execution loop
  // ---------------------------------------------------------------------------
  initial
  begin : initial_block
    open_file;
    readBinFile2Mem;
    close_file;
    printMem;
    shut_down;
  end : initial_block

endmodule
Binary Stimulus File:
Actual output:
Opening file
n_Temp = 16
Closing the file
mem[15] = 01
mem[14] = 00
mem[13] = 50
mem[12] = 60
mem[11] = 71
mem[10] = 72
mem[9] = 73
mem[8] = bd
mem[7] = bf
mem[6] = ef
mem[5] = 73
mem[4] = 72
mem[3] = 71
mem[2] = 60
mem[1] = 50
mem[0] = 00
Update:
An experiment was run in order to test whether the binary file may be getting modified during the process of uploading to EDA Playground. There is no Systemverilog code involved in these steps; it's just a file upload/download.
Steps:
(Used https://hexed.it/ to create and view the binary file)
Create/save binary file with the hex pattern 80 00 80 00 80 00 80 00
Create new playground
Upload new created binary file to the new playground
Check the 'download files after run' box on the playground
Save playground
Run playground
Save/unzip the results from the playground run
View the binary file; in my case it has been modified during the process of upload/download. A screenshot of the result is shown below:
This experiment was conducted on two different Windows workstations.
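The upload check above can also be done without a hex viewer. A small Python sketch (assuming the file is named stim.bin, as in the testbench) dumps the file's raw bytes as hex; if the local dump shows 80 but the downloaded copy shows ef bf bd, the corruption happened in transit.

```python
def dump_hex(path):
    """Return a file's raw bytes as a space-separated hex string."""
    with open(path, "rb") as f:
        return " ".join(f"{b:02x}" for b in f.read())

# Example: compare the local file against the downloaded copy.
# print(dump_hex("stim.bin"))
```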
Based on these results and the comments I am going to close this issue, with the disposition that this is not a Systemverilog issue, but is related to upload/download of binary files to EDA Playground. Thanks to those who commented.
The unexpected output produced by the testbench is due to modifications that occur to the binary stimulus file during/after upload to EDA Playground. (Notably, the three bytes printed for mem[6]..mem[8], 0xef 0xbf 0xbd, are the UTF-8 encoding of U+FFFD, the Unicode replacement character, which is exactly what a text re-encoding substitutes for an invalid byte such as 0x80.) The Systemverilog testbench performs as intended to print the contents of the binary file.
This conclusion is based on community comments and experimental results which are provided at the end of the updated question. A detailed procedure is given so that others can repeat the experiment.
I tried converting my .csv file to .dat format and tried to load the file into Octave. It throws an error:
unable to find file filename
I also tried to load the file in .csv format using the syntax
x = csvread(filename)
and it throws the error:
'filename' undefined near line 1 column 13.
I also tried opening the file in the editor and loading it from there, and now it shows:
warning: load: 'filepath' found by searching load path
error: load: unable to determine file format of 'Salary_Data.dat'.
How can I load my data?
>> load Salary_Data.dat
error: load: unable to find file Salary_Data.dat
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> Salary_Data
error: 'Salary_Data' undefined near line 1 column 1
>> x = csvread(Salary_Data)
error: 'Salary_Data' undefined near line 1 column 13
>> x = csvread(Salary_Data.csv)
error: 'Salary_Data' undefined near line 1 column 13
>> load Salary_Data.dat
warning: load: 'C:/Users/vaith/Desktop\Salary_Data.dat' found by searching load path
error: load: unable to determine file format of 'Salary_Data.dat'
>> load Salary_Data.csv
warning: load: 'C:/Users/vaith/Desktop\Salary_Data.csv' found by searching load path
error: load: unable to determine file format of 'Salary_Data.csv'
Salary_Data.csv
YearsExperience,Salary
1.1,39343.00
1.3,46205.00
1.5,37731.00
2.0,43525.00
2.2,39891.00
2.9,56642.00
3.0,60150.00
3.2,54445.00
3.2,64445.00
3.7,57189.00
3.9,63218.00
4.0,55794.00
4.0,56957.00
4.1,57081.00
4.5,61111.00
4.9,67938.00
5.1,66029.00
5.3,83088.00
5.9,81363.00
6.0,93940.00
6.8,91738.00
7.1,98273.00
7.9,101302.00
8.2,113812.00
8.7,109431.00
9.0,105582.00
9.5,116969.00
9.6,112635.00
10.3,122391.00
10.5,121872.00
Ok, you've stumbled through a whole pile of issues here.
It would help if you didn't give us error messages without the commands that produced them.
The first message means you were telling Octave to open something called filename, and it couldn't find anything called filename. Did you define the variable filename? Your second command and its error message suggest you didn't.
Do you know what Octave's working directory is? Is it the same as where the file is located? From the response to your load commands, I'd guess not. The file is located at C:/Users/vaith/Desktop. Octave's working directory is probably somewhere else.
(Try the pwd command and see what it tells you. Use the file browser or the cd command to navigate to the same location as the file. help pwd and help cd commands would also provide useful information.)
The load command, used as a command (load file.txt), can take an input that is or isn't defined as a string. The function form (load('file.txt') or csvread('file.txt')) must have a string input, hence the quotes around file.txt. So all of your csvread commands thought you were giving them variable names, not filenames.
Last, the fact that load couldn't read your data isn't overly surprising. Octave is trying to guess what kind of file it is and how to load it. I assume you tried help load to see what the different command options are? You can give it different options to help Octave figure it out. If it actually is a csv file though, and is all numbers not text, then csvread might still be your best option if you use it correctly. help csvread would be good information for you.
It looks from your data like you have a header line that is probably confusing the load command. For data formatted that simply, the csvread command can bring in the data; it will replace your header text with zeros.
So, first, navigate to the location of the file:
>> cd C:/Users/vaith/Desktop
then open the file:
>> mydata = csvread('Salary_Data.csv')
mydata =
0.00000 0.00000
1.10000 39343.00000
1.30000 46205.00000
1.50000 37731.00000
2.00000 43525.00000
...
If you plan to reuse the filename, you can assign it to a variable, then open the file:
>> myfile = 'Salary_Data.csv'
myfile = Salary_Data.csv
>> mydata = csvread(myfile)
mydata =
0.00000 0.00000
1.10000 39343.00000
1.30000 46205.00000
1.50000 37731.00000
2.00000 43525.00000
...
Notice how the filename is stored and used as a string with quotation marks, but the variable name is not. Also, csvread converted non-numeric header data to 'zeros'. The help for csvread and dlmread show you how to change it to something other than zero, or to skip a certain number of rows. If you want to preserve the text, you'll have to use some other input function.
Here is the situation: JMeter results were recorded in .csv quite a long time ago (around six months). The version of JMeter has not changed (3.0), but the config files did. Now I've been trying to generate a report from the old csv as usual, using
jmeter.bat -g my.csv -o reportFolder
Also, to defeat the incompatibility of configurations, I created a file named local-saveservice.properties and passed it through the -q command line option. Playing with settings in this file, I managed to get past several errors like "column number mismatch" or "No column xxx found in sample metadata", but I still haven't generated the report successfully. Here is the trouble:
File 'D:\ .....\load_NSI_stepping3_2017-03-24-new.csv' does not contain the field names header, ensure the jmeter.save.saveservice.* properties are the same as when the CSV file was created or the file may be read incorrectly
An error occurred: Error while processing samples:Consumer failed with message :Consumer failed with message :Consumer failed with message :Consumer failed with message :Error in sample at line:1 converting field:Latency at column:11 to:long, fieldValue:'UTF-8'
However, in my .csv, column number 11 has the header "Latency" and contains numeric values; 'UTF-8' is the content of the next column, "Encoding".
Here are first lines of my .csv
timeStamp,elapsed,label,responseCode,responseMessage,success,bytes,grpThreads,allThreads,URL,Latency,Encoding,SampleCount,ErrorCount,Connect,threadName
1490364040950,665,searchItemsInCatalogRequest,200,OK,true,25457,1,1,http://*.*.*.*:9080/em/.....Service,654,UTF-8,1,0,9,NSI - search item in catalog
1490364041620,507,searchItemsInCatalogRequest,200,OK,true,25318,1,1,http://*.*.*.*:9080/em/.....Service,499,UTF-8,1,0,0,NSI - search item in catalog
1490364042134,495,searchItemsInCatalogRequest,200,OK,true,24266,2,2,http://*.*.*.*:9080/em/.....Service,487,UTF-8,1,0,0,NSI - search item in catalog
1490364043595,563,searchItemsInCatalogRequest,200,OK,true,24266,2,2,http://*.*.*.*:9080/em/.....Service,556,UTF-8,1,0,6,NSI - search item in catalog
PS I had to add threadName manually, 'cos it was not saved during the initial data recording (my knowledge of JMeter was even less then than it is now :) )
First, you should update to JMeter 3.3, as report-generation bugs have been fixed in the three versions since 3.0.
Second add to your command line:
jmeter.bat -p <path to jmeter.properties> -q <path to your custom.properties used when you generated file> -g my.csv -o reportFolder
Ensure that in "your custom.properties" you set to false all fields that are prefixed by jmeter.save.saveservice that didn't yet exist at the time you generated the file.
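For example, the header shown in the question has no sentBytes or IdleTime columns, which newer JMeter versions write by default, so a plausible custom.properties (an assumption - adjust it to whatever columns your header actually lacks) would be:

```properties
# Columns JMeter 3.3 expects by default but the old 3.0 CSV does not contain
jmeter.save.saveservice.sent_bytes=false
jmeter.save.saveservice.idle_time=false
```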
I have a pretty long R code that takes approximately 2-3 hours to run and knit to HTML. However, even with minor errors or warnings, the knit aborts. In the example below it has done so due to a savehistory error.
processing file: model_v64.Rmd
|...................... | 33%
ordinary text without R code
|........................................... | 67%
label: unnamed-chunk-1 (with options)
List of 1
$ echo: logi TRUE
Quitting from lines 21-278 (model_v64.Rmd)
**Error in .External2(C_savehistory, file) : no history available to save**
Calls: <Anonymous> ... withCallingHandlers -> withVisible -> eval -> eval -> savehistory
Execution halted
Is there any way in which we can
Option 1 - Recover the partially created HTML from tmp directories or anywhere else
Option 2 - Get Knit HTML to continue on errors and not halt the code.
Option 3 - at least save the already-knit HTML and then stop.
Thanks, Manish
Set the chunk option error = TRUE to display the error instead of halting R:
knitr::opts_chunk$set(error = TRUE)
I'm trying to load a large csv file, 3,715,259 lines.
I created this file myself and there are 9 fields separated by commas.
Here's the error:
df = pd.read_csv("avaya_inventory_rev2.csv", error_bad_lines=False)
Skipping line 2924525: expected 9 fields, saw 11
Skipping line 2924526: expected 9 fields, saw 10
Skipping line 2924527: expected 9 fields, saw 10
Skipping line 2924528: expected 9 fields, saw 10
This doesn't make sense to me, I inspected the offending lines using:
sed -n "2924524,2924525p" infile.csv
I can't list the outputs as they contain proprietary information for a client. I'll try to synthesize a meaningful replacement.
Lines 2924524 and 2924525 look to have the same number of fields to me.
Also, I was able to load the same file into a mySQL table with no error.
create table Inventory (path varchar (255), isText int, ext varchar(5), type varchar(100), size int, sloc int, comments int, blank int, tot_lines int);
I don't know enough about mySQL to understand why that may or may not be a valid test, and why pandas would have a different outcome loading the same file.
TIA !
UPDATE: I tried to read with engine='python':
Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode?
When I create this csv, I'm using a shell script I wrote. I feed lines to the file with redirect >>
I tried the suggested fix:
input = open(input, 'rU')
df = pd.read_csv(input, engine='python')
Back to the same error:
ValueError: Expected 9 fields in line 5157, saw 11
I'm guessing it has to do with my csv creation script and how I dealt with quoting in that. I don't know how to investigate this further.
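One way to investigate further (a Python sketch; the separator and counting fields on raw bytes are assumptions about the file) is to scan the file in binary mode, flagging lines whose field count differs from the header's or that contain an embedded carriage return:

```python
def find_bad_lines(path, sep=b","):
    """Report (line_no, field_count, has_embedded_cr) for lines whose
    field count differs from the header's, or that hide a ^M (CR)."""
    with open(path, "rb") as f:
        lines = f.read().split(b"\n")
    expected = lines[0].rstrip(b"\r").count(sep) + 1
    problems = []
    for n, raw in enumerate(lines, start=1):
        line = raw.rstrip(b"\r")          # drop a normal CRLF line ending
        if not line:
            continue                      # skip trailing blank line
        fields = line.count(sep) + 1
        if fields != expected or b"\r" in line:
            problems.append((n, fields, b"\r" in line))
    return problems
```

Running find_bad_lines('avaya_inventory_rev2.csv') should point straight at lines like 5157 without needing vim or sed.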
I opened the csv input file in vim and on line 5157 there's a ^M which google says it Windows CR.
OK... I'm closer, although I did kind of suspect something like this and used dos2unix on the csv input.
I removed the ^M using vim and re-ran: same error about 11 fields. However, I can now see the 11 fields, whereas before I just saw 9. There are ',v' suffixes, which are likely some kind of Windows holdover?
SUMMARY: Somebody thought it'd be cute to name files like fobar.sh,v. So my profiler didn't mess up; it was just name weirdness... plus the random \cr\lf from Windows that snuck in.
Cheers