MySQL is awesome! I am currently involved in a major server migration; previously, our small database was hosted on the same server as the client, so we used to do this: SELECT * INTO OUTFILE .... followed by LOAD DATA INFILE ....
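Roughly, the pattern looked like this (the table and file names here are just placeholders):
SELECT * FROM mytable INTO OUTFILE '/tmp/mytable.txt';
LOAD DATA INFILE '/tmp/mytable.txt' INTO TABLE mytable_copy;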
Now we have moved the database to a different server, and SELECT * INTO OUTFILE .... no longer works - understandable, for security reasons I believe.
But, interestingly LOAD DATA INFILE .... can be changed to LOAD DATA LOCAL INFILE .... and bam, it works.
I am not complaining, nor am I expressing disgust towards MySQL. The alternative added 2 lines of extra code and a system call from a .sql script. All I wanted to know is why LOAD DATA LOCAL INFILE works and why there is no such thing as SELECT INTO OUTFILE LOCAL?
I did my homework but couldn't find a direct answer to my questions above. I couldn't find a feature request number at MySQL either. If someone can clear that up, that would be awesome!
Is MariaDB capable of handling this problem?
From the manual: "The SELECT ... INTO OUTFILE statement is intended primarily to let you very quickly dump a table to a text file on the server machine. If you want to create the resulting file on some client host other than the server host, you cannot use SELECT ... INTO OUTFILE. In that case, you should instead use a command such as mysql -e "SELECT ..." > file_name to generate the file on the client host."
http://dev.mysql.com/doc/refman/5.0/en/select.html
An example:
mysql -h my.db.com -u username --password=pass db_name -e 'SELECT foo FROM bar' > /tmp/myfile.txt
You can achieve what you want with the mysql console with the -s (--silent) option passed in.
It's probably a good idea to also pass in the -r (--raw) option so that special characters don't get escaped. You can use this to pipe queries like you're wanting.
mysql -u username -h hostname -p -s -r -e "select concat('this',' ','works')"
EDIT: Also, if you want to remove the column name from your output, just add another -s (mysql -ss -r etc.)
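For example, a sketch that writes a headerless, unescaped, tab-separated result to a file (host, credentials, query and output path are placeholders):
mysql -u username -h hostname -p -ss -r -e "SELECT id, name FROM mytable" > /tmp/mytable.tsv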
The path you give to LOAD DATA INFILE is for the filesystem on the machine where the server is running, not the machine you connect from. LOAD DATA LOCAL INFILE is for the client's machine, but it requires that the server was started with the right settings, otherwise it's not allowed. You can read all about it here: http://dev.mysql.com/doc/refman/5.0/en/load-data-local.html
As for SELECT INTO OUTFILE I'm not sure why there is not a local version, besides it probably being tricky to do over the connection. You can get the same functionality through the mysqldump tool, but not through sending SQL to the server.
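As a sketch of the mysqldump route, run on the server host (its --tab mode relies on SELECT ... INTO OUTFILE internally, so the data files still end up on the server's filesystem; the names and paths below are placeholders):
mysqldump -u username -p --tab=/tmp --fields-terminated-by=',' db_name table_name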
Since I find myself rather regularly looking for this exact problem (in the hope that I missed something before...), I finally decided to take the time and write up a small gist to export MySQL queries as CSV files, kind of like https://stackoverflow.com/a/28168869 but based on PHP and with a couple more options. This was important for my use case because I need to be able to fine-tune the CSV parameters (delimiter, NULL value handling) AND the files need to be actually valid CSV; a simple CONCAT is not sufficient, since it doesn't generate valid CSV files if the values contain line breaks or the CSV delimiter.
Caution: Requires PHP to be installed on the server!
(Can be checked via php -v)
"Install" mysql2csv via
wget https://gist.githubusercontent.com/paslandau/37bf787eab1b84fc7ae679d1823cf401/raw/29a48bb0a43f6750858e1ddec054d3552f3cbc45/mysql2csv -O mysql2csv -q && (sha256sum mysql2csv | cmp <(echo "b109535b29733bd596ecc8608e008732e617e97906f119c66dd7cf6ab2865a65 mysql2csv") || (echo "ERROR comparing hash, Found:" ;sha256sum mysql2csv) ) && chmod +x mysql2csv
(download content of the gist, check checksum and make it executable)
Usage example
./mysql2csv --file="/tmp/result.csv" --query='SELECT 1 as foo, 2 as bar;' --user="username" --password="password"
generates file /tmp/result.csv with content
foo,bar
1,2
help for reference
./mysql2csv --help
Helper command to export data for an arbitrary mysql query into a CSV file.
Especially helpful if the use of "SELECT ... INTO OUTFILE" is not an option, e.g.
because the mysql server is running on a remote host.
Usage example:
./mysql2csv --file="/tmp/result.csv" --query='SELECT 1 as foo, 2 as bar;' --user="username" --password="password"
cat /tmp/result.csv
Options:
-q,--query=name [required]
The query string to extract data from mysql.
-h,--host=name
(Default: 127.0.0.1) The hostname of the mysql server.
-D,--database=name
The default database.
-P,--port=name
(Default: 3306) The port of the mysql server.
-u,--user=name
The username to connect to the mysql server.
-p,--password=name
The password to connect to the mysql server.
-F,--file=name
(Default: php://stdout) The filename to export the query result to ('php://stdout' prints to console).
-L,--delimiter=name
(Default: ,) The CSV delimiter.
-C,--enclosure=name
(Default: ") The CSV enclosure (that is used to enclose values that contain special characters).
-E,--escape=name
(Default: \) The CSV escape character.
-N,--null=name
(Default: \N) The value that is used to replace NULL values in the CSV file.
-H,--header=name
(Default: 1) If '0', the resulting CSV file does not contain headers.
--help
Prints the help for this command.
Using the mysql CLI with the -e option, as Waverly360 suggests, is a good approach, but it might run out of memory and get killed on large results. (I haven't found the reason behind it.)
If that is the case, and you need all records, my solution is: mysqldump + mysqldump2csv:
wget https://raw.githubusercontent.com/jamesmishra/mysqldump-to-csv/master/mysqldump_to_csv.py
mysqldump -u username -p --host=hostname database table | python mysqldump_to_csv.py > table.csv
Re: SELECT * INTO OUTFILE
Check if MySQL has permissions to write a file to the OUTFILE directory on the server.
Try setting the path to /var/lib/mysql-files/filename.csv (MySQL 8). Determine which files directory is yours by typing SHOW VARIABLES LIKE "secure_file_priv"; on the mysql client command line.
See the answer about --secure-file-priv in MySQL here: (...), answered in 2015 by user vhu.
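A minimal sketch, assuming secure_file_priv points at /var/lib/mysql-files and using a made-up table name:
SHOW VARIABLES LIKE "secure_file_priv";
SELECT * FROM mytable
INTO OUTFILE '/var/lib/mysql-files/mytable.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';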
Related
Let's suppose I have large MySql dump which I want to import to a specific database.
I could use
mysql -D bar --one-database < foo.mysql
foo.mysql has a use foo; somewhere.
This command already does most of what I want: it ignores data that belongs to a database other than bar.
I could use a grep -e "^use " foo.mysql to check if the database dump contains a use statement.
But can I do this also during the import, so I do not have to read the dump twice?
Reading while importing example:
< dump.sql tee >(sed -n '/^USE `[^`]*`;$/ p' 1>&2) | mysql ...
The example will import the file dump.sql into mysql while printing the use-statements as they come by:
...
USE `blue-racoon`;
USE `funny-basil`;
USE `purple-fish`;
...
Explanation: If you have a full dump with all databases of the mysql server (the --all-databases option) and you would like to review all SQL USE statements while the file is piped into mysql, you can use tee to duplicate the content on the fly and sed to print a duplicated line only if it is a USE statement.
Then the filtered output is redirected to STDERR for review while the unfiltered output can be imported as normal by mysql.
I hope this helps.
I am attempting to upload a .txt file into the MySQL database I just created.
I was able to load several lines of data into the table using INSERT INTO, but when I tried to utilize LOAD DATA LOCAL INFILE '/pathto/file.txt' INTO TABLE mytable, it first gave me the error that the command is not allowed in my version of MySQL.
So after I read How can I correct MySQL Load Error, I used --local-infile=1 -u mysqlname -p followed by the above command, and I have repeatedly been given a syntax error.
I've tried this to load the .txt file with all sorts of different combinations of the above, and still get one of the two errors.
Below is a screen shot.
This is with ubuntu 15.10 and mysql version 5.6.28-0ubuntu0.15.10.1.
Screen shot of terminal in question
--local-infile is a server and client parameter. It's not valid syntax as part of a statement such as LOAD DATA or INSERT.
You would specify server variables and options either in the appropriate sections of the my.cnf file, or as command line parameters to the MySQL program being executed.
For example, at the OS prompt...
# mysql -h myserverhost -u mysqlname -p --local-infile=1
That option has to be specified for the MySQL server.
If you are connecting as user@localhost, you don't need LOCAL. You can give the MySQL user (whichever OS user the mysql server is running under) read privilege on the file you want to load... chmod ugo+r /mypath/myfile (and read and execute on the directories in the path).
You only need LOCAL if the mysql user isn't @'localhost'.
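As a rough sketch (the path, table, and user are hypothetical, and the server-side FILE privilege and secure_file_priv setting still apply):
chmod ugo+rx /mypath
chmod ugo+r /mypath/myfile
mysql -u mysqlname -p -e "LOAD DATA INFILE '/mypath/myfile' INTO TABLE mydb.mytable"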
I have a 5 GB database that needs to be uploaded through phpMyAdmin, on a shared server where I cannot access the shell. Is there any solution that takes less time to upload? Please help me by providing the steps to upload the SQL file. I have searched through the internet but could not find an answer.
Do not use phpmyadmin.
Assuming you have shell access, upload the file and feed it directly to the mysql command.
Your shell command will look like:
cat file.sql | mysql -uuser -ppassword database
or you can do gzipped file:
zcat file.sql.gz | mysql -uuser -ppassword database
Before doing this, check that:
database connection works (correct database, user and password)
database is empty :)
mysql max packet size is OK (a quick check for this and for disk space is sketched after this list)
you have enough diskspace
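A quick sketch of those last two checks (the user and the data directory path are assumptions):
mysql -u user -p -e "SHOW VARIABLES LIKE 'max_allowed_packet'"
df -h /var/lib/mysql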
* UPDATE *
You said you do not have shell access.
Then you have the following options -
upload the file and contact support; let them do it for you.
feed it remotely; cPanel has a special menu where you can get remote access, and other panels have the same ability too.
in this case the command will be executed on your computer and look like:
cat file.sql | mysql -uroot -phipopodil -hwebsite.com
or for windows:
/path/to/mysql -uroot -phipopodil -hwebsite.com < file.sql
do some "hack" - feed it through crontab, at or via php system() command.
If you choose "hack" option, note following:
php have max_execution_time - even if you set it to zero, there could be some limit "imposed" from hosting.
usually hosts have limited mysql updates per hour.
there could be some ulimit restrictions.
if you feed 5 GB into a shared server, the server will slow down and the administrator will check what you are doing.
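For the crontab/at variant, a hypothetical sketch (assuming the at daemon is available; the file path and credentials are made up) could look like:
echo "mysql -u user -ppassword database < /home/user/file.sql" | at now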
This depends on your database; you tagged the question with 3 different database types: mysql, sql-server, and postgresql. I know MySQL and PostgreSQL have import features, and I'd be surprised if SQL Server didn't as well. You could import the database file via the command line instead of having to use phpMyAdmin.
Incidentally, the phpMyAdmin tool also has an import feature, but that again depends on the format of your database. If it's a compatible SQL file, you could upload it to phpMyAdmin and import it there, but I'd recommend the previous method I mentioned: upload it to your host, then use whatever database tool fits (mysqlimport for MySQL; or, if it's the result of a pg_dump command, you can just run:
psql <dbname> < <yourfile>
ie
psql mydatabase < inputfile.sql
A mysqldump command like the following:
mysqldump -u<username> -p<password> -h<remote_db_host> -T<target_directory> <db_name> --fields-terminated-by=,
will write out two files for each table (one is the schema, the other is CSV table data). To get CSV output you must specify a target directory (with -T). When -T is passed to mysqldump, it writes the data to the filesystem of the server where mysqld is running - NOT the system where the command is issued.
Is there an easy way to dump CSV files from a remote system?
Note: I am familiar with using a simple mysqldump and handling the STDOUT output, but I don't know of a way to get CSV table data that way without doing some substantial parsing. In this case I will use the -X option and dump xml.
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent > my_file.csv
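Note that --batch output is tab-separated; if you really need commas, a naive conversion (which breaks if the values themselves contain tabs or commas) would be something like:
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent | sed 's/\t/,/g' > my_file.csv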
I want to add to codeman's answer. It worked but needed about 30 minutes of tweaking for my needs.
My web server uses CentOS 6/cPanel, and the flags and sequence which codeman used above did not work for me; I had to rearrange and use different flags, etc.
Also, I used this for a local file dump; it's not just useful for remote DBs, because I had too many issues with SELinux and mysql user permissions for SELECT INTO OUTFILE commands, etc.
What worked on my Centos+Cpanel Server
mysql -B -s -uUSERNAME -pPASSWORD < query.sql > /path/to/myfile.txt
Caveats
No Column Names
I can't get column names to appear at the top. I tried adding the flag:
--column-names
but it made no difference. I am still stuck on this one. I currently add them to the file after processing.
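As a workaround sketch (the column names here are placeholders), the header line can instead be prepended in the shell:
{ echo -e "col1\tcol2"; mysql -B -s -uUSERNAME -pPASSWORD < query.sql; } > /path/to/myfile.txt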
Selecting a Database
For some reason, I couldn't include the database name in the commandline. I tried with
-D databasename
in the commandline but I kept getting permission errors, so I ended up using the following at the top of my query.sql:
USE database_name;
On many systems, MySQL runs as a distinct user (such as user "mysql") and your mysqldump will fail if the MySQL user does not have write permissions in the dump directory - it doesn't matter what your own write permissions are in that directory. Changing your directory (at least temporarily) to world-writable (777) will often fix your export problem.
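For example, assuming /path/to/dumps is the -T target directory, something like this works (remember to tighten the permissions again afterwards):
chmod 777 /path/to/dumps
mysqldump -u username -p -T /path/to/dumps db_name table_name --fields-terminated-by=,
chmod 755 /path/to/dumps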
I'm trying to write the results of a query to a file using mysql. I've seen some information on the outfile construct in a few places but it seems that this only writes the file to the machine that MySQL is running on (in this case a remote machine, i.e. the database is not on my local machine).
Alternatively, I've also tried to run the query and grab (copy/paste) the results from the MySQL Workbench results window. This worked for some of the smaller datasets, but the largest of the datasets seems to be too big, causing an out-of-memory exception/bug/crash.
Any help on this matter would be greatly appreciated.
You could try executing the query from your local CLI and redirecting the output to a local file destination:
mysql -u user -ppass -e "select cols from table where cols is not null" > /tmp/output
This is dependent on the SQL client you're using to interact with the database. For example, you could use the mysql command line interface in conjunction with the "tee" operator to output to a local file:
http://dev.mysql.com/doc/refman/5.1/en/mysql-commands.html
tee [file_name], \T [file_name]
Execute the command above before executing the SQL and the result of the query will be output to the file.
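A sketch of an interactive session (the file path and query are arbitrary):
mysql> tee /tmp/query_results.txt
mysql> SELECT * FROM my_schema.my_table;
mysql> notee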
Specifically for MySQL Workbench, here's an article on Execute Query to Text Output. Although I don't see any documentation, there are indications that there should also be an "Export" option under Query, though that is almost certainly version dependent.
You could try this if you want to write a MySQL query result to a file.
This example writes the MySQL query result into a CSV file in comma-separated format:
SELECT id,name,email FROM customers
INTO OUTFILE '/tmp/customers.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
If you are running mysql queries on the command line, suppose you have the list of queries in a text file and you want the output in another text file. Then you can use this. [ test_2 is the database name ]
COMMAND 1
mysql -vv -u root -p test_2 < query.txt > /root/results.txt 2>&1
Where -vv is for the verbose output.
If you use the above statement as
COMMAND 2
mysql -vv -u root -p test_2 < query.txt 2>&1 > /root/results.txt
It will redirect STDERR to the normal location (i.e. the terminal) and STDOUT to the output file, which in my case is results.txt.
The first command executes query.txt until it faces an error and stops there.
That's how the redirection works. You can try
# ls key.pem asdf > /tmp/output_1 2> /tmp/output_2
Here the key.pem file exists and asdf doesn't exist. So when you cat the files you get the following:
# cat /tmp/output_1
key.pem
#cat /tmp/output_2
ls: cannot access asdf: No such file or directory
But if you modify the previous statement with this
ls key.pem asdf > /tmp/output_1 > /tmp/output_2 2>&1
Then you get both the error and the output in output_2:
cat /tmp/output_2
ls: cannot access asdf: No such file or directory
key.pem
mysql -v -c -u root -p < /media/sf_Share/Solution2.sql 2>&1 > /media/sf_Share/results.txt
This worked for me. Since I wanted the comments in my script to also be reflected in the report, I added the -c (--comments) flag.