I'm trying to insert some data from a .tsv file into a MySQL database using awk. When I run this, I just get the mysql client's usage text back at the command line. Any ideas?
Here's the command I am using:
awk '{print "INSERT INTO scores(id, score) VALUES('\''"$1"'\'', "$2");"}' "data.tsv" | mysql -u "user" -p "passw" db
I'm not getting any error messages back, but I check my database and no rows have been inserted.
You can try removing the space between -p and "passw", like this:
| mysql -u "user" -p"passw" db
The other answer already hinted that the -p flag has special, counter-intuitive behavior in the mysql client. If you have a space after it, it makes the mysql client prompt you for a password. The argument following is NOT taken as the password, it's taken as the next argument unrelated to -p.
The following two commands are equivalent:
mysql -u <user> -p <databasename>
mysql -p -u <user> <databasename>
If you want to include the password, you must have no space after the -p:
mysql -u <user> -p<password> <databasename>
To make scripts more clear, I like to use the long option names:
mysql --user=<user> --password=<password> <databasename>
But you shouldn't be using passwords on the command-line anyway, because then anyone who can run ps can see your password. Instead, put user & password into an options file and have the client read it.
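A minimal options file might look like this (the values are placeholders for your own credentials):

[client]
user=myuser
password=mypassword

Then invoke the client with: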
mysql --defaults-file=my.cnf <databasename>
Your awk code is going to output a long series of SQL injection vulnerabilities. I mean, you're trusting that all the content in your .tsv file is safe to insert, won't contain any characters like apostrophes that will do anything unexpected to the SQL syntax. For example, what happens if $1 is "O'Hare"?
Awk doesn't have any function to do string-escapes to protect you from this, nor any feature to do parameterized queries, which is a better method of running safe SQL statements with dynamic values.
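If you must stay with awk, a partial mitigation is to double any single quotes inside each value before interpolating it. A minimal sketch, assuming GNU awk (for the \x27 hex escape, which denotes a single quote) and the options file suggested above; this is still no substitute for real parameterized queries:

awk -F'\t' '{
    gsub(/\x27/, "\x27\x27", $1)   # double any single quotes in the id value
    printf "INSERT INTO scores(id, score) VALUES (\x27%s\x27, %s);\n", $1, $2
}' data.tsv | mysql --defaults-extra-file=my.cnf db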
I have used awk for other tasks for many years, but I wouldn't use it for this task. For example, in Ruby:
require 'mysql2'
require 'csv'
client = Mysql2::Client.new(:host => "localhost", :database => "test", :username => "...")
sql = client.prepare("INSERT INTO scores (id, score) VALUES (?, ?)")
CSV.open('data.tsv', col_sep: "\t", liberal_parsing: true) do |csv|
  csv.each do |row|
    sql.execute(*row)
  end
end
Another alternative for loading TSV files, with much better performance, is mysqlimport --local. But there are some configuration values you need to set to get this to work on a default MySQL instance, and the file must have the same base name as the table (ignoring the .tsv extension).
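The configuration value in question is most likely local_infile, which must be enabled on the server before --local loads are accepted (it is disabled by default on recent MySQL versions). A sketch, run from a privileged account:

-- enable loading of client-side files on the server:
SET GLOBAL local_infile = 1;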
Example: I loaded a .tsv file with four lines of text into a table test.scores:
mysqlimport --local test scores.tsv
test.scores: Records: 4 Deleted: 0 Skipped: 0 Warnings: 0
How can I execute multiple SQL queries in a bash script?
I read these two posts from previous years:
A better way to execute multiple MySQL commands using shell script
How to execute a MySQL command from a shell script?
They brought some clarification, but there is still something I do not understand.
I have multiple queries that delete information about a subject with a given subject_id.
Unfortunately I need to run all of them, since the tables do not cascade deletes.
Is there a way to create a bash script in which I can use a user-supplied variable (for example, read -p 'Subject ID' SUBJECT_ID) as the subject_id in each of the queries?
Do I still have to adjust everything to this:
mysql -h "server-name" -u root "password" "database-name" < "filename.sql"
or is there a way to just run the script and have it pick up the database connection from a .cnf file?
There are two questions above. One is how to get a bash variable into your SQL script. I would do this:
read -p 'Subject ID' SUBJECT_ID
mysql -e "SET #subject = '${SUBJECT_ID}'; source filename.sql;"
Bash will expand ${SUBJECT_ID} into the string before it uses it as an argument to the mysql -e command. So the MySQL variable is assigned the string value of SUBJECT_ID.
This will be tricky if SUBJECT_ID may contain literal single-quote characters! So I suggest using Bash string-replacement syntax to turn each single quote in the value into two single quotes:
mysql -e "SET #subject = '${SUBJECT_ID//'/''}'; source filename.sql;"
Note you must put a semicolon at the end after the filename.
The second question is about specifying the host, user, and password. I would recommend putting these into an options file:
[client]
host=server-name
user=root
password=xyzzy
Then when you invoke the mysql client:
mysql --defaults-extra-file=myoptions.cnf -e '...'
This is a good idea to avoid putting your plaintext password on the command-line.
Read https://dev.mysql.com/doc/refman/8.0/en/option-files.html for more details on option files.
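Putting both pieces together, a minimal sketch of the whole script (myoptions.cnf as above; the table names referenced in filename.sql are made-up examples):

#!/bin/bash
# filename.sql is assumed to reference the session variable, e.g.:
#   DELETE FROM subjects  WHERE subject_id = @subject;
#   DELETE FROM test_runs WHERE subject_id = @subject;
read -p 'Subject ID: ' SUBJECT_ID
ESCAPED=${SUBJECT_ID//\'/\'\'}   # double any single quotes for SQL safety
mysql --defaults-extra-file=myoptions.cnf \
      -e "SET @subject = '${ESCAPED}'; source filename.sql;"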
How can I import multiple files (.csv, .sql, etc.) into a XAMPP MySQL database?
I am using XAMPP and Windows XP.
If I need to type commands at a command prompt, please explain in detail where to find that command prompt screen and so on.
You can execute the SQL files with the MySQL command-line tool, e.g.:
shell> mysql db_name < script.sql
You can load data from a CSV file into a specified table with the LOAD DATA INFILE statement.
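A minimal sketch (the file path, table name, and CSV layout are assumptions for illustration; '\r\n' assumes a Windows-created file):

LOAD DATA INFILE 'C:/data/file.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;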
If you do not have access to the mysql client, then try dbForge Studio for MySQL. The free Express edition lets you execute SQL scripts and import data from CSV files without limitations.
This topic has been covered already, in its parts.
To import a CSV you can check this question, which will lead you to use:
LOAD DATA INFILE 'yourfile.csv' INTO TABLE yourtable;
Or, if you need to update data that is already in the database, you can refer to a question I answered not long ago (and possibly other answers on Stack Overflow).
You may execute this statement at the mysql prompt, started with mysql -u user -p -h localhost -D database (learn how to find the path to mysql.exe in your XAMPP installation using this question), or in some other way, such as from your scripting/programming language of choice together with MySQL connectors/libraries.
You may also use the mysqlimport.exe command (it'll be in the same folder as your mysql binary).
To import a sql file you can take a look at this question. You will essentially just copy the file contents into the mysql prompt, which is usually done with input redirection on the console:
C:\>mysql -u user -p -h localhost -D database -o < yoursqlfile.sql
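Since the question mentions multiple files, here is a sketch for looping over them at the Windows command prompt (paths and credentials are placeholders; inside a .bat file, write %%f instead of %f):

cd C:\xampp\mysql\bin
for %f in (C:\dumps\*.sql) do mysql -u user -ppassword -D database < "%f"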
I hope that, besides answering your question, I have also shown that with thousands of questions on Stack Overflow you are very likely to find the answers to your doubts by searching the question database, possibly faster than by asking a new question of your own.
A mysqldump command like the following:
mysqldump -u<username> -p<password> -h<remote_db_host> -T<target_directory> <db_name> --fields-terminated-by=,
will write out two files for each table (one is the schema, the other is CSV table data). To get CSV output you must specify a target directory (with -T). When -T is passed to mysqldump, it writes the data to the filesystem of the server where mysqld is running - NOT the system where the command is issued.
Is there an easy way to dump CSV files from a remote system ?
Note: I am familiar with using a simple mysqldump and handling the STDOUT output, but I don't know of a way to get CSV table data that way without doing some substantial parsing. In this case I will use the -X option and dump xml.
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent > my_file.csv
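Note that --batch --silent output is tab-separated rather than comma-separated. A naive conversion (assuming GNU sed; it breaks if values themselves contain tabs or commas) is:

mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent | sed 's/\t/,/g' > my_file.csv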
I want to add to codeman's answer. It worked but needed about 30 minutes of tweaking for my needs.
My web server uses CentOS 6/cPanel, and the flags and sequence which codeman used above did not work for me; I had to rearrange them and use different flags, etc.
Also, I used this for a local file dump; it's not just useful for remote DBs, because I had too many issues with SELinux and MySQL user permissions for SELECT INTO OUTFILE commands, etc.
What worked on my Centos+Cpanel Server
mysql -B -s -uUSERNAME -pPASSWORD < query.sql > /path/to/myfile.txt
Caveats
No Column Names
I can't get column names to appear at the top. I tried adding the flag:
--column-names
but it made no difference. I am still stuck on this one. I currently add it to the file after processing.
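A common workaround is to emit the header row as part of the query itself. A sketch, assuming a hypothetical scores(id, score) table:

SELECT 'id', 'score'
UNION ALL
SELECT id, score FROM scores;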
Selecting a Database
For some reason, I couldn't include the database name on the command line. I tried with
-D databasename
on the command line, but I kept getting permission errors, so I ended up using the following at the top of my query.sql:
USE database_name;
On many systems, MySQL runs as a distinct user (such as user "mysql") and your mysqldump will fail if the MySQL user does not have write permissions in the dump directory - it doesn't matter what your own write permissions are in that directory. Changing your directory (at least temporarily) to world-writable (777) will often fix your export problem.
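A sketch of that temporary workaround (the dump directory is a placeholder; restore the original permissions when you are done):

chmod 777 /path/to/dumpdir     # temporarily world-writable so mysqld can write here
mysqldump -u user -p -T /path/to/dumpdir db_name --fields-terminated-by=,
chmod 755 /path/to/dumpdir     # restore sane permissions afterwards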
I am trying to import a mysqldump file via the command line, but continue to get an error. I dumped the file from my other server using:
mysqldump -u XXX -p database_name > database.sql
Then I try to import the file with:
mysql -u XXX -p database_name < database.sql
It loads a small portion and then gets stuck. The error I receive is:
ERROR at line 1153: Unknown command '\''.
I checked that line in the file with:
awk '{ if (NR==1153) print $0 }' database.sql >> line1153.sql
and it happens to be over 1MB in size, just for that line.
Any ideas what might be going on here?
You have binary blobs in your DB, try adding --hex-blob to your mysqldump statement.
You know what's going on - you have an extra single quote in your SQL!
If you have 'awk', you probably have 'vi', which will open your line1153.sql file with ease and allow you to find the value in your database that is causing the problem.
Or... the line is probably large because it contains multiple rows. You could also use the --skip-extended-insert option to mysqldump so that each row gets a separate INSERT statement.
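For reference, a sketch combining the suggestions from this thread, reusing the placeholders from the question:

mysqldump --hex-blob --skip-extended-insert -u XXX -p database_name > database.sql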
Good luck.
I had the same problem because I had Chinese characters in my database. Below is what I found on some Chinese forum, and it worked for me.
mysql -u[USERNAME] -p[PASSWORD] --default-character-set=latin1 [DATABASE_NAME] < [BACKUP_SQL_FILE.sql]
I think you need to use path/to/file.sql instead of path\to\file.sql
Also, database < path/to/file.sql didn't work for me for some reason - I had to run USE database; and then source path/to/file.sql; instead.
If all else fails, use MySQL Workbench to do the import. This solved the same problem for me.
I recently had a similar problem where I had done an sql dump on a Windows machine and tried to install it on a Linux machine. I had a fairly large SQL file and my error was happening at line 3455360. I used the following command to copy all text up to the point where I was getting an error:
sed -n '1, 3455359p' < sourcefile.sql > destinationfile.sql
This copied all the good code into a destination file. I looked at the last few lines of the destination file and saw that it was a complete SQL command (The last line ended with a ';') so I imported the good code and didn't get any errors.
I then looked at the rest of the file, which was about 20 lines. It turns out the export might not have completed, because I saw the following PHP error output at the end of the file:
Array
(
    [type] => 1
    [message] => Maximum execution time of 300 seconds exceeded
    [file] => C:\xampp\htdocs\openemr\phpmyadmin\libraries\Util.class.php
    [line] => 296
)
I removed the offending php code and imported the rest of the database.
I had special characters in table names, like _\, and it gave an error when trying to import those tables.
I fixed it by changing \ to \\ in the dumped SQL.
My table names were like rate_\ and I used this command to repair the dump:
sed 's._\\._\\\\.g' dump.sql > dump2.sql
I didn't replace all backslashes, because I wasn't sure whether there were backslashes elsewhere in the database that should not be replaced.
Special characters in a table name are converted to @-encoded sequences in the corresponding file name.
Read http://dev.mysql.com/doc/refman/5.5/en/identifier-mapping.html for details.
I had the same error:
Unknown command '\▒'.
It appeared when I ran this:
mysql -u root -p trainee < /xx/yy.gz
I followed these answers, but I still did not get the restored database trainee. Then I found that
yy.gz is a compressed archive, so I restored the database after unzipping the file:
mysql -u root -p trainee < /xx/yy.sql
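Alternatively, you can pipe the decompressed stream straight into mysql without creating an intermediate .sql file:

gunzip < /xx/yy.gz | mysql -u root -p trainee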
MySQL is awesome! I am currently involved in a major server migration. Previously, our small database was hosted on the same server as the client, so we used to do this: SELECT * INTO OUTFILE .... LOAD DATA INFILE ....
Now we have moved the database to a different server, and SELECT * INTO OUTFILE .... no longer works - understandably so, for security reasons I believe.
But, interestingly LOAD DATA INFILE .... can be changed to LOAD DATA LOCAL INFILE .... and bam, it works.
I am not complaining nor am I expressing disgust towards MySQL. The alternative added 2 lines of extra code and a system call from a .sql script. All I wanted to know is why LOAD DATA LOCAL INFILE works and why there is no such thing as SELECT INTO OUTFILE LOCAL.
I did my homework and couldn't find a direct answer to the questions above. I couldn't find a feature request at MySQL either. If someone can clear that up, that would be awesome!
Is MariaDB capable of handling this problem?
From the manual: "The SELECT ... INTO OUTFILE statement is intended primarily to let you very quickly dump a table to a text file on the server machine. If you want to create the resulting file on some client host other than the server host, you cannot use SELECT ... INTO OUTFILE. In that case, you should instead use a command such as mysql -e "SELECT ..." > file_name to generate the file on the client host."
http://dev.mysql.com/doc/refman/5.0/en/select.html
An example:
mysql -h my.db.com -u usrname --password=pass db_name -e 'SELECT foo FROM bar' > /tmp/myfile.txt
You can achieve what you want with the mysql console with the -s (--silent) option passed in.
It's probably a good idea to also pass in the -r (--raw) option so that special characters don't get escaped. You can use this to pipe queries like you're wanting.
mysql -u username -h hostname -p -s -r -e "select concat('this',' ','works')"
EDIT: Also, if you want to remove the column name from your output, just add another -s (mysql -ss -r etc.)
The path you give to LOAD DATA INFILE is for the filesystem on the machine where the server is running, not the machine you connect from. LOAD DATA LOCAL INFILE is for the client's machine, but it requires that the server was started with the right settings, otherwise it's not allowed. You can read all about it here: http://dev.mysql.com/doc/refman/5.0/en/load-data-local.html
As for SELECT INTO OUTFILE I'm not sure why there is not a local version, besides it probably being tricky to do over the connection. You can get the same functionality through the mysqldump tool, but not through sending SQL to the server.
Since I find myself rather regularly looking for this exact problem (in the hope that I missed something before...), I finally decided to take the time and write up a small gist to export MySQL queries as CSV files, kinda like https://stackoverflow.com/a/28168869 but based on PHP and with a couple more options. This was important for my use case, because I need to be able to fine-tune the CSV parameters (delimiter, NULL value handling), AND the files need to be actually valid CSV. A simple CONCAT is not sufficient, since it doesn't generate valid CSV files if the values contain line breaks or the CSV delimiter.
Caution: Requires PHP to be installed on the server!
(Can be checked via php -v)
"Install" mysql2csv via
wget https://gist.githubusercontent.com/paslandau/37bf787eab1b84fc7ae679d1823cf401/raw/29a48bb0a43f6750858e1ddec054d3552f3cbc45/mysql2csv -O mysql2csv -q && (sha256sum mysql2csv | cmp <(echo "b109535b29733bd596ecc8608e008732e617e97906f119c66dd7cf6ab2865a65 mysql2csv") || (echo "ERROR comparing hash, Found:" ;sha256sum mysql2csv) ) && chmod +x mysql2csv
(download content of the gist, check checksum and make it executable)
Usage example
./mysql2csv --file="/tmp/result.csv" --query='SELECT 1 as foo, 2 as bar;' --user="username" --password="password"
generates file /tmp/result.csv with content
foo,bar
1,2
help for reference
./mysql2csv --help
Helper command to export data for an arbitrary mysql query into a CSV file.
Especially helpful if the use of "SELECT ... INTO OUTFILE" is not an option, e.g.
because the mysql server is running on a remote host.
Usage example:
./mysql2csv --file="/tmp/result.csv" --query='SELECT 1 as foo, 2 as bar;' --user="username" --password="password"
cat /tmp/result.csv
Options:
-q,--query=name [required]
    The query string to extract data from mysql.
-h,--host=name
    (Default: 127.0.0.1) The hostname of the mysql server.
-D,--database=name
    The default database.
-P,--port=name
    (Default: 3306) The port of the mysql server.
-u,--user=name
    The username to connect to the mysql server.
-p,--password=name
    The password to connect to the mysql server.
-F,--file=name
    (Default: php://stdout) The filename to export the query result to ('php://stdout' prints to console).
-L,--delimiter=name
    (Default: ,) The CSV delimiter.
-C,--enclosure=name
    (Default: ") The CSV enclosure (that is used to enclose values that contain special characters).
-E,--escape=name
    (Default: \) The CSV escape character.
-N,--null=name
    (Default: \N) The value that is used to replace NULL values in the CSV file.
-H,--header=name
    (Default: 1) If '0', the resulting CSV file does not contain headers.
--help
    Prints the help for this command.
Using the mysql CLI with the -e option, as Waverly360 suggests, is a good approach, but it might go out of memory and get killed on large results (I haven't found the reason behind it).
If that is the case, and you need all records, my solution is: mysqldump + mysqldump2csv:
wget https://raw.githubusercontent.com/jamesmishra/mysqldump-to-csv/master/mysqldump_to_csv.py
mysqldump -u username -p --host=hostname database table | python mysqldump_to_csv.py > table.csv
Re: SELECT * INTO OUTFILE
Check if MySQL has permissions to write a file to the OUTFILE directory on the server.
Try setting the path to /var/lib/mysql-files/filename.csv (MySQL 8). Determine which files directory is yours by typing SHOW VARIABLES LIKE "secure_file_priv"; at the mysql client command line.
See the answer about (...) --secure-file-priv in MySQL, answered in 2015 by user vhu.