Import file using DBeaver - mysql

I currently use MySQL Workbench and I want to move to DBeaver, as it is an all-in-one tool for multiple databases.
However, I use a statement to import a CSV from an Amazon S3 bucket, as follows:
LOAD DATA FROM S3 's3-eu-west-2://csv-files/OCN04.txt'
INTO TABLE OCN
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS
;
This statement does not work in DBeaver.

LOAD DATA INFILE works, but I am guessing the file path is the issue here. I'm guessing that the server only reads files from a specific directory (in MySQL this is typically controlled by the secure_file_priv setting). To find the specific/default directory, try running the above code with simply the name of your file (i.e. 'OCN04.txt'). The error message will show the file path. Save your txt file in that directory and try running your code again.
The above method works for me.
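To confirm that guess, you can ask the server for the allowed directory directly (secure_file_priv is a standard MySQL variable; this is just a quick check, not part of the original answer):
SHOW VARIABLES LIKE 'secure_file_priv';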

In DBeaver you can import data from a file into a table via the GUI.
Right-click the table, select the "Import Data" menu item,
and select the file to import.

Related

Putting a file content into an sql query?

Hey, I have a large database where customers request data that is specific to them. They usually send me the requests in a text or CSV file. I was wondering if there is a way to get SQL to read that file and take the contents and put them into an SQL query. That way I don't have to open up the file and copy and paste everything into an SQL query.
Steve already answered it.
Let me add just a few words:
you cannot use a CSV, text, Excel or any other file format directly in a
query for DML/DDL; you can use a file directly only for export/import.
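For the export direction, MySQL can write a query result straight to a server-side file; a minimal sketch (the table, columns and path here are only illustrative):
SELECT id, name
FROM customers
INTO OUTFILE '/tmp/customers.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';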
No. MySQL is not designed to do this.
You need an intermediate script that can interpret the files and generate the queries you require.
Yes, there is a way to do it: you can import the CSV file into your database and then join it with any query you want.
You can load the CSV file with an SQL statement such as:
LOAD DATA INFILE "/tmp/test.csv"
INTO TABLE test
COLUMNS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
You can use other ways to import the data, see: How to import CSV file to MySQL table.
I tried this SQL solution on Ubuntu 14.04 with MySQL 5.6. For this to work you will have to put the test.csv file in the /tmp directory and do a chmod 755 test.csv. Otherwise MySQL gives "Permission denied" errors. More about this issue at: LOAD DATA INFILE Error Code : 13
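Once loaded, test behaves like any other table, so you can join it against your existing data. A sketch with made-up table and column names (customers, customer_id):
SELECT c.*
FROM customers AS c
JOIN test AS t ON t.customer_id = c.id;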

How can I load 10,000 rows of test.xls file into mysql db table?

When I use the query below, it shows an error.
LOAD DATA INFILE 'd:/test.xls' INTO TABLE karmaasolutions.tbl_candidatedetail (candidate_firstname,candidate_lastname);
My primary key is candidateid and it has the properties shown below.
The test.xls contains data like the following.
I have added rows starting from candidateid 61 because up to 60 there are already candidates in the table.
Please suggest a solution.
Export your Excel spreadsheet to CSV format.
Import the CSV file into mysql using a similar command to the one you are currently trying:
LOAD DATA INFILE 'd:/test.csv'
INTO TABLE karmaasolutions.tbl_candidatedetail
(candidate_firstname,candidate_lastname);
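If the exported CSV is comma-delimited and keeps Excel's header row, a fuller form of the same statement might look like this (the delimiter, quoting and line ending are assumptions - check what your export actually produced):
LOAD DATA INFILE 'd:/test.csv'
INTO TABLE karmaasolutions.tbl_candidatedetail
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(candidate_firstname, candidate_lastname);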
Importing data from Excel (or any other program that can produce a text file) is very simple using the LOAD DATA command from the MySQL command prompt.
1. Save your Excel data as a csv file (in Excel 2007 using Save As).
2. Check the saved file using a text editor such as Notepad to see what it actually looks like, i.e. what delimiter was used etc.
3. Start the MySQL Command Prompt (I'm lazy so I usually do this from the MySQL Query Browser - Tools - MySQL Command Line Client to avoid having to enter username and password etc.).
4. Enter this command (make sure you use straight single quotes (') and double quotes (") if you copy and paste it - some blog platforms convert them into similar but different characters):
LOAD DATA LOCAL INFILE 'C:/temp/yourfile.csv'
INTO TABLE database.table
FIELDS TERMINATED BY ';'
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
(field1, field2);
Done! Very quick and simple once you know it :)
Some notes from my own import – may not apply to you if you run a different language version, MySQL version, Excel version etc…
TERMINATED BY - this is why I included step 2. I thought a csv would default to comma separated, but at least in my case semicolon was the default.
ENCLOSED BY - my data was not enclosed by anything, so I left this as an empty string ''.
LINES TERMINATED BY - at first I tried with only '\n' but had to add the '\r' to get rid of a carriage return character being imported into the database.
Also make sure that if you do not import into the primary key field/column, it has auto increment on; otherwise only the first row will be imported.
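To illustrate that last point, a hedged sketch with a made-up table mydb.mytable whose primary key id has AUTO_INCREMENT: list only the data columns and MySQL will fill in the key itself.
LOAD DATA LOCAL INFILE 'C:/temp/yourfile.csv'
INTO TABLE mydb.mytable
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\r\n'
(field1, field2); -- the AUTO_INCREMENT id column is omitted, so MySQL generates it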

How can I import data from excel to MYSQL?

I tried to import the data from Excel to MySQL by using the following command:
load data local infile "filepath" into table mytab;
But this command works for importing data from a text file, not from a CSV or xlsx file. I want to import the data directly from Excel to MySQL.
Check the following:
see the separator in your CSV file and make sure you provide the correct one in the command.
You need to convert your Excel sheet into a CSV file, or use MySQL for Excel to help with the process.
load data local infile "filepath" into table mytab fields terminated by ';' enclosed by '"' lines terminated by '\r\n';
Should be the command you need, but you should export to csv first.

How to copy csv data file to Amazon RedShift?

I'm trying to migrate some MySQL tables to Amazon Redshift, but I ran into some problems.
The steps are simple:
1. Dump the MySQL table to a csv file
2. Upload the csv file to S3
3. Copy the data file to RedShift
Error occurs in step 3:
The SQL command is:
copy TABLE_A from 's3://ciphor/TABLE_A.csv' CREDENTIALS
'aws_access_key_id=xxxx;aws_secret_access_key=xxxx' delimiter ',' csv;
The error info:
An error occurred when executing the SQL command: copy TABLE_A from
's3://ciphor/TABLE_A.csv' CREDENTIALS
'aws_access_key_id=xxxx;aws_secret_access_key=xxxx ERROR: COPY CSV is
not supported [SQL State=0A000] Execution time: 0.53s 1 statement(s)
failed.
I don't know if there are any limitations on the format of the CSV file, say the delimiters and quotes; I cannot find this in the documentation.
Can anyone help?
The problem was finally resolved by using:
copy TABLE_A from 's3://ciphor/TABLE_A.csv' CREDENTIALS
'aws_access_key_id=xxxx;aws_secret_access_key=xxxx' delimiter ','
removequotes;
More information can be found here http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
Amazon Redshift now supports the CSV option for the COPY command. It's better to use this option to import CSV-formatted data correctly. The format is shown below.
COPY [table-name] FROM 's3://[bucket-name]/[file-path or prefix]'
CREDENTIALS 'aws_access_key_id=xxxx;aws_secret_access_key=xxxx' CSV;
The default delimiter is ( , ) and the default quote character is ( " ). You can also import TSV-formatted data with the CSV and DELIMITER options, like this:
COPY [table-name] FROM 's3://[bucket-name]/[file-path or prefix]'
CREDENTIALS 'aws_access_key_id=xxxx;aws_secret_access_key=xxxx' CSV DELIMITER '\t';
There is a disadvantage to using the old way (DELIMITER and REMOVEQUOTES): REMOVEQUOTES does not support having a newline or a delimiter character within an enclosed field. If the data can include such characters, you should use the CSV option.
See the following link for the details.
http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
If you want to save yourself some code, or you have a very basic use case, you can use Amazon Data Pipeline.
It starts a spot instance and performs the transformation within the Amazon network, and it's a really intuitive tool (but very simple, so you can't do complex things with it).
You can try with this
copy TABLE_A from 's3://ciphor/TABLE_A.csv' CREDENTIALS 'aws_access_key_id=xxxx;aws_secret_access_key=xxxx' csv;
CSV itself means comma-separated values, so there is no need to provide a delimiter with this option. Please refer to the link:
http://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-data-format.html#copy-format
I always use this code:
COPY clinical_survey
FROM 's3://milad-test/clinical_survey.csv'
iam_role 'arn:aws:iam::123456789123:role/miladS3xxx'
CSV
IGNOREHEADER 1
;
Description:
1- COPY is followed by the name of your destination table
2- FROM gives the address of the file in S3
3- iam_role is a substitute for CREDENTIALS. Note that the iam_role should be defined in the IAM management console, and then in the trust menu it should be assigned to the user as well (that is the hardest part!)
4- CSV uses the comma delimiter
5- IGNOREHEADER 1 is a must! Otherwise it will throw an error. (It skips one row of my CSV and treats it as a header.)
Since the resolution has already been provided, I'll not repeat the obvious.
However, in case you receive some other error which you're not able to figure out, simply execute the following on your workbench while you're connected to any of the Redshift accounts:
select * from stl_load_errors [where ...];
stl_load_errors contains all the Amazon Redshift load errors in historical fashion; a normal user can view the details corresponding to his/her own account, but a superuser has access to all of them.
The details are captured elaborately at:
Amazon STL Load Errors Documentation
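A more targeted variant that surfaces the most recent failures (the column names below come from the standard stl_load_errors system table):
SELECT starttime, filename, line_number, colname, err_reason, raw_line
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 10;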
A little late to comment, but it can be useful:
you can use an open source project to copy tables directly from MySQL to Redshift - sqlshift.
It only requires Spark, and if you have YARN then that can also be used.
Benefits: it will automatically decide the distkey and interleaved sortkey using the primary key.
It looks like you are trying to load a local file into a Redshift table.
The CSV file has to be on S3 for the COPY command to work.
If you can extract data from the table to a CSV file, you have one more scripting option. You can use a Python/boto/psycopg2 combo to script your CSV load to Amazon Redshift.
In my MySQL_To_Redshift_Loader I do the following:
Extract data from MySQL into temp file.
loadConf=[ db_client_dbshell ,'-u', opt.mysql_user,'-p%s' % opt.mysql_pwd,'-D',opt.mysql_db_name, '-h', opt.mysql_db_server]
...
q="""
%s %s
INTO OUTFILE '%s'
FIELDS TERMINATED BY '%s'
ENCLOSED BY '%s'
LINES TERMINATED BY '\r\n';
""" % (in_qry, limit, out_file, opt.mysql_col_delim,opt.mysql_quote)
p1 = Popen(['echo', q], stdout=PIPE,stderr=PIPE,env=env)
p2 = Popen(loadConf, stdin=p1.stdout, stdout=PIPE,stderr=PIPE)
...
Compress and load data to S3 using boto Python module and multipart upload.
conn = boto.connect_s3(AWS_ACCESS_KEY_ID,AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket(bucket_name)
k = Key(bucket)
k.key = s3_key_name
k.set_contents_from_file(file_handle, cb=progress, num_cb=20,
reduced_redundancy=use_rr )
Use psycopg2 COPY command to append data to Redshift table.
sql="""
copy %s from '%s'
CREDENTIALS 'aws_access_key_id=%s;aws_secret_access_key=%s'
DELIMITER '%s'
FORMAT CSV %s
%s
%s
%s;""" % (opt.to_table, fn, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,opt.delim,quote,gzip, timeformat, ignoreheader)
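With the placeholders filled in, the generated statement ends up looking roughly like this (the bucket, table name and options are purely illustrative):
COPY my_table FROM 's3://my-bucket/load/data.csv.gz'
CREDENTIALS 'aws_access_key_id=xxxx;aws_secret_access_key=xxxx'
DELIMITER ','
FORMAT CSV
GZIP
TIMEFORMAT 'auto'
IGNOREHEADER 1;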

Is there a way of LOAD DATA INFILE (import) xlsx file into MySQL database table

I know that this is discussed a lot, but I can't find a solution for how to do it.
What I need is to import an Excel file (xls/xlsx) into my database table. There is a button which does that, and the command which is executed is like this:
string cmdText = "LOAD DATA INFILE 'importTest4MoreMore.csv' INTO TABLE management FIELDS TERMINATED BY ',';";
It works great. But I need to import an Excel file, not a CSV. As far as I know, the LOAD DATA command does not support binary files, which xls is.
So what's the solution to that? Please help
Thanks a lot
pepys
.xls will never be importable directly into MySQL. It's a compound OLE file, which means its internal layout is not understandable by mere mortals (or even Bill Gates). .xlsx is basically just a .zip file which contains multiple .xml/.xslt/etc. files. You can probably extract the relevant .xml that contains the actual spreadsheet data, but again - it's not likely to be in a format that's directly importable by MySQL's LOAD DATA INFILE.
The simplest solution is to export the .xls/xlsx to a .csv.
How to import 'xlsx' file into MySQL:
1/ Open your '.xlsx' file in Office Excel, click on the 'Save As' button in the menu and select
'CSV (MS-DOS) (*.csv)'
from 'Save as type' list. Finally click 'Save' button.
2/ Copy or upload the .csv file into your installed MySQL server (a directory path like: '/root/someDirectory/' in Linux servers)
3/ Login to your database:
mysql -u root -pSomePassword
4/ Create and use destination database:
use db1
5/ Create a MySQL table in your destination database (e.g. 'db1') with columns like the ones of the '.csv' file above (a minimal sketch is shown after these steps).
6/ Execute the following command:
LOAD DATA INFILE '/root/someDirectory/file1.csv' INTO TABLE `Table1` FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\r\n' IGNORE 1 LINES;
Please note that the option 'IGNORE 1 LINES' tells MySQL to ignore the first line of the '.csv' file, so it is only needed for '.xlsx' files with a header row. You can remove this option otherwise.
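For step 5, a minimal sketch of a matching table (the names and types are only illustrative; match them to the columns in your '.csv' file):
CREATE TABLE `Table1` (
  col1 VARCHAR(255),
  col2 VARCHAR(255),
  col3 INT
);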
You can load xls or xlsx files with the Data Import tool (MS Excel or MS Excel 2007 format) in dbForge Studio for MySQL. This tool opens Excel files directly; the COM interface is not used, and a command line is supported.