Receiving Error While Passing Dynamic Values From CSV in JMeter

Please check the below steps I followed in my test.
First, I created a CSV file with the username and password. I did this in a text editor and saved it as a .csv file.
My CSV Looks Like:
username password
user 1 pwd 1
user 2 pwd 2
I placed it at a specific path, and I referenced that same path in the CSV Data Set Config.
File Name: the full path of the file.
Variable names : username,password
Delimiter : ,
Next, I made the following changes in the HTTP Request:
username with ${username}
password with ${password}
After running the test, the user logins are not being picked up from the CSV and the test failed.
Please guide me if I have gone wrong somewhere.

Here's how your input file and JMeter element should look:

If your CSV file is actually separated by tab characters, you need to change the Delimiter to \t.
I would suggest removing the leading "username" and "password" header row from your CSV file; JMeter won't recognise it as a header, so you'll get at least one failed request.
Other bits look good.
You may also want to check out the Using CSV DATA SET CONFIG guide and the __CSVRead() function.
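For reference, with the Delimiter left as a comma, the data file itself needs to be comma-separated rather than space-separated. A minimal sketch of such a file (no header row, user/password values taken from the question):
user1,pwd1
user2,pwd2
With Variable names set to username,password, the HTTP Request parameters can then keep referencing ${username} and ${password} exactly as described in the question.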

Related

How can I import a .TSV file using the "Get Data" command using SPSS syntax?

I'm importing a .TSV file, with the first row being the variable names and the first column as IDs, into SPSS using syntax, but I keep getting a "Failure opening file" error in my output. This is my code so far:
GET DATA
/TYPE=TXT
/FILE=\filelocation\filename.tsv
/DELCASE=LINE
/DELIMTERS="/t"
/QUALIFIER=''
/ARRANGEMENT=DELIMITED
/FIRSTCASE=2
/IMPORTCASE=ALL
/VARIABLES=
/MAP
RESTORE.
CACHE.
EXECUTE.
SAVE OUTFILE = "newfile.sav"
I think I'm having an issue in the DELIMITERS or QUALIFIER subcommand. I'm wondering if I should also include the variables under the VARIABLES subcommand. Any advice would be helpful. Thanks!
The GET DATA command you cite above has an empty /VARIABLES= subcommand.
If you used the "File -> Import Data -> Text Data" wizard, it would have populated this subcommand for you. If you are writing the GET DATA syntax yourself, then you have to supply that list of field names yourself.
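Putting that together, a corrected command might look like the sketch below. The variable names and formats after /VARIABLES= are hypothetical placeholders for your actual field names, and note the subcommand is spelled /DELIMITERS with the tab written as "\t" (the question has /DELIMTERS="/t"):
GET DATA
/TYPE=TXT
/FILE="\filelocation\filename.tsv"
/DELCASE=LINE
/DELIMITERS="\t"
/QUALIFIER='"'
/ARRANGEMENT=DELIMITED
/FIRSTCASE=2
/IMPORTCASE=ALL
/VARIABLES=
id A10
score F8.2.
EXECUTE.
SAVE OUTFILE="newfile.sav".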

Problems in copying a CSV file from S3 to Redshift

I am getting the following error when I run a COPY command to copy the contents of a .csv file in S3 to a table in Redshift.
Error: "String length exceeds DDL length".
I am using the following COPY command:
COPY enjoy from 's3://nmk-redshift-bucket/my_workbook.csv' CREDENTIALS 'aws_access_key_id=****;aws_secret_access_key=****' CSV QUOTE '"' DELIMITER ',' NULL AS '\0'
I figured I would open the link given by S3 for my file through the AWS console.
The link for the workbook is:
link to my S3 bucket CSV file
The file behind the link is filled with many weird characters that I really don't understand.
The COPY command is taking these characters instead of the information I entered in my CSV file, hence the string-length-exceeded error.
I use SQL Workbench to query. My stl_load_errors table in Redshift has raw_field_values similar to the characters in the link I mentioned above; that's how I got to know what input it is taking in.
I am new to AWS and UTF-8 configs, so I would appreciate help on this.
The link you provide points to a .xlsx file (but with a .csv extension instead of .xlsx), which is actually a zip file.
That is why you see those strange characters, the first two bytes being 'PK', which means it is a zip file.
So you will have to export to .csv first, before using the file.
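If you want to confirm this before re-uploading, the first two bytes of the file tell you. A quick check in Python (the file name is taken from the question and assumed to be available locally):
with open('my_workbook.csv', 'rb') as f:
    print(f.read(2))  # b'PK' means a zip container such as a renamed .xlsx, not plain CSV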

Importing data from a CSV file in JMeter

I tried to import data from a CSV file, but it is not working. Can anybody help me?
My csv file:
username,password
usr1,pswd1
usr2,pswd2,
usr3,pswd3
......
.....
My CSV Data Set Config:
File name: D:\Jmeter\Data\Login.csv
Variable names: username,password
Allowed quoted data?:True
Recycle on EOF?:False
Stop thread on EOF?:True
But in the request body, the username and password are not reflected.
POST data:
__EVENTTARGET=ctl00%24ContentPlaceHolder1%24submit_btn&__EVENTARGUMENT=&__VIEWSTATE=5yDYRGJAvmOFe2yRZWqpXcU%2Fjy4d3kuQN0MiJSPN%2BSjcO4%2BWHatlhSCDH%2FYSsVkXcmaeSOeM5tgjxITgfplBaZdFcWAehSjSj6pCmpSNqI0%3D&__EVENTVALIDATION=rLAwRsfGRJxIUNuCrGNKSRwRWmH8KlXvFg85hbvt%2FUx9fI3qEEImGpMg%2Fi97mHb20kuESntswMVH5c%2BTkET8ludQxvA9%2Bnoz2wV2W4d%2FgcvK0rvRULKQhR4OxiVNJXQq2q3bR1cIUpYOIFbTEOyumwLlATsfrS1eAfuKZ8UEkeY%3D&ctl00%24ContentPlaceHolder1%24username_txt=%24%28username%29&ctl00%24ContentPlaceHolder1%24password_txt=%24%28password%29
I am just elaborating how to create/import data from a CSV file.
1. Create a CSV file in the desired path --> D:\Jmeter\Data\Login.csv
having data like this:
usr1,pswd1
usr2,pswd2
usr3,pswd3
2. Add the CSV Data Set Config, similar to the configuration sketched below.
3. Use ${username} and ${password} wherever you need them in the request, as also sketched below.
Hope it works for you.
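As a rough sketch of steps 2 and 3 (field values taken from the question, parameter names from the POST data):
File name: D:\Jmeter\Data\Login.csv
Variable names: username,password
Delimiter: ,
Recycle on EOF?: False
Stop thread on EOF?: True
and in the HTTP Request parameters:
ctl00$ContentPlaceHolder1$username_txt = ${username}
ctl00$ContentPlaceHolder1$password_txt = ${password}
Note that the POST data in the question contains %24%28username%29, which decodes to $(username); JMeter only substitutes variables written as ${username}, which would explain why the values are not reflected in the request body.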

JMeter data source CSV Data Set Config from command line

I have a JMeter setup which reads data from a CSV file configured in CSV Data Set Config element. It works fine, the CSV file is specified in CSV Data Set Config -> Filename.
Now I want to invoke JMeter from the command line instead of the GUI, and I want to specify a different filename for the above element. How do I go about it?
I tried "-JCSVNAME=" but it does not seem to work.
Ideas?
Just use the __P() function in the Filename field of the CSV Data Set Config: change the field to ${__P(datadir)}.
Then on the command line, add this:
-Jdatadir=full path to the CSV file
Include the following option when running JMeter:
-Jdatadir=path_to_the_config_csv
Example:
jmeter -Jdatadir=/home/InputData.csv -n -t /home/request.jmx -l some.csv
Reference: CSV Data Set Config
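If you also want the test plan to keep working from the GUI, the __P function takes an optional default value, so the Filename field can fall back to a fixed path when the property is not passed. A minimal sketch, reusing the paths from the example above:
Filename: ${__P(datadir,/home/InputData.csv)}
jmeter -n -t /home/request.jmx -Jdatadir=/home/other/InputData.csv -l some.csv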

How to copy a CSV data file to Amazon Redshift?

I'm trying to migrate some MySQL tables to Amazon Redshift, but I ran into some problems.
The steps are simple:
1. Dump the MySQL table to a csv file
2. Upload the csv file to S3
3. Copy the data file to RedShift
The error occurs in step 3:
The SQL command is:
copy TABLE_A from 's3://ciphor/TABLE_A.csv' CREDENTIALS
'aws_access_key_id=xxxx;aws_secret_access_key=xxxx' delimiter ',' csv;
The error info:
An error occurred when executing the SQL command: copy TABLE_A from
's3://ciphor/TABLE_A.csv' CREDENTIALS
'aws_access_key_id=xxxx;aws_secret_access_key=xxxx ERROR: COPY CSV is
not supported [SQL State=0A000] Execution time: 0.53s 1 statement(s)
failed.
I don't know if there are any limitations on the format of the CSV file, say the delimiters and quotes; I cannot find it in the documentation.
Can anyone help?
The problem was finally resolved by using:
copy TABLE_A from 's3://ciphor/TABLE_A.csv' CREDENTIALS
'aws_access_key_id=xxxx;aws_secret_access_key=xxxx' delimiter ','
removequotes;
More information can be found here http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
Now Amazon Redshift supports the CSV option for the COPY command. It's better to use this option to import CSV-formatted data correctly. The format is shown below.
COPY [table-name] FROM 's3://[bucket-name]/[file-path or prefix]'
CREDENTIALS 'aws_access_key_id=xxxx;aws_secret_access_key=xxxx' CSV;
The default delimiter is a comma ( , ) and the default quote character is a double quote ( " ). You can also import TSV-formatted data with the CSV and DELIMITER options like this:
COPY [table-name] FROM 's3://[bucket-name]/[file-path or prefix]'
CREDENTIALS 'aws_access_key_id=xxxx;aws_secret_access_key=xxxx' CSV DELIMITER '\t';
There is a disadvantage to the old way (DELIMITER and REMOVEQUOTES): REMOVEQUOTES does not support having a newline or a delimiter character within an enclosed field. If the data can include such characters, you should use the CSV option.
See the following link for the details.
http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
If you want to save yourself some code, or you have a very basic use case, you can use Amazon Data Pipeline.
It starts a spot instance and performs the transformation within the Amazon network, and it's a really intuitive tool (but very simple, so you can't do complex things with it).
You can try this:
copy TABLE_A from 's3://ciphor/TABLE_A.csv' CREDENTIALS 'aws_access_key_id=xxxx;aws_secret_access_key=xxxx' csv;
CSV itself means comma-separated values, so there is no need to provide a delimiter with it. Please refer to this link:
http://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-data-format.html#copy-format
I always use this code:
COPY clinical_survey
FROM 's3://milad-test/clinical_survey.csv'
iam_role 'arn:aws:iam::123456789123:role/miladS3xxx'
CSV
IGNOREHEADER 1
;
Description:
1- COPY: the name of the target Redshift table that the file stored in S3 is loaded into (here clinical_survey)
2- FROM: the S3 address of the file
3- iam_role is a substitute for CREDENTIALS. Note that the IAM role should be defined in the IAM management console, and its trust relationship has to be set up as well so the role can be assumed (that is the hardest part! - see the policy sketch after this list)
4- CSV: uses a comma delimiter
5- IGNOREHEADER 1 is a must here! Otherwise it will throw an error (it skips the first row of my CSV, treating it as a header)
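As a rough illustration of the trust relationship mentioned in point 3: for COPY, the role typically has to trust the Redshift service (and be associated with the cluster). A minimal trust policy sketch, with nothing account-specific in it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "redshift.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}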
Since the resolution has already been provided, I'll not repeat the obvious.
However, in case you receive some further error which you're not able to figure out, simply execute this on your workbench while you're connected to any of the Redshift accounts:
select * from stl_load_errors [where ...];
stl_load_errors contains all the Amazon Redshift load errors in historical fashion; a normal user can view details corresponding to his/her own account, but a superuser has access to all of them.
The details are captured elaborately at:
Amazon STL Load Errors Documentation
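For example, a quick way to see only the most recent failures and the columns that usually matter (these are standard stl_load_errors columns):
select starttime, filename, line_number, colname, err_reason, raw_field_value
from stl_load_errors
order by starttime desc
limit 10;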
A little late to comment, but it can be useful:
You can use an open-source project to copy tables directly from MySQL to Redshift - sqlshift.
It only requires Spark, and if you have YARN then that can also be used.
Benefits: it will automatically decide the distkey and interleaved sortkey using the primary key.
It looks like you are trying to load a local file into a Redshift table.
The CSV file has to be on S3 for the COPY command to work.
If you can extract data from the table to a CSV file, you have one more scripting option. You can use a Python/boto/psycopg2 combo to script your CSV load to Amazon Redshift.
In my MySQL_To_Redshift_Loader I do the following:
Extract data from MySQL into a temp file.
loadConf=[ db_client_dbshell ,'-u', opt.mysql_user,'-p%s' % opt.mysql_pwd,'-D',opt.mysql_db_name, '-h', opt.mysql_db_server]
...
q="""
%s %s
INTO OUTFILE '%s'
FIELDS TERMINATED BY '%s'
ENCLOSED BY '%s'
LINES TERMINATED BY '\r\n';
""" % (in_qry, limit, out_file, opt.mysql_col_delim,opt.mysql_quote)
p1 = Popen(['echo', q], stdout=PIPE,stderr=PIPE,env=env)
p2 = Popen(loadConf, stdin=p1.stdout, stdout=PIPE,stderr=PIPE)
...
Compress and load the data to S3 using the boto Python module and multipart upload.
conn = boto.connect_s3(AWS_ACCESS_KEY_ID,AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket(bucket_name)
k = Key(bucket)
k.key = s3_key_name
k.set_contents_from_file(file_handle, cb=progress, num_cb=20,
reduced_redundancy=use_rr )
Use the psycopg2 COPY command to append the data to the Redshift table.
sql="""
copy %s from '%s'
CREDENTIALS 'aws_access_key_id=%s;aws_secret_access_key=%s'
DELIMITER '%s'
FORMAT CSV %s
%s
%s
%s;""" % (opt.to_table, fn, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,opt.delim,quote,gzip, timeformat, ignoreheader)