Not sure if this question is better suited for Server Fault, but I've been messing with Amazon RDS lately and am having trouble granting FILE privileges to my web host's MySQL user.
I'd assume that a simple:
grant file on *.* to 'webuser'@'%';
would work, but it does not, and I can't seem to do it with my 'root' user either. What gives? The reason we use LOAD DATA is that it is extremely fast for doing thousands of inserts at once.
Does anyone know how to remedy this, or do I need to find a different way?
This page, http://docs.amazonwebservices.com/AmazonRDS/latest/DeveloperGuide/index.html?Concepts.DBInstance.html seems to suggest that I need to find a different way around this.
Help?
UPDATE
I'm not trying to import a database -- I just want to use the file load option to insert several hundred thousand rows at a time.
After digging around, this is what we have:
mysql> grant file on *.* to 'devuser'@'%';
ERROR 1045 (28000): Access denied for user 'root'@'%' (using password: YES)
mysql> select User, File_priv, Grant_priv, Super_priv from mysql.user;
+----------+-----------+------------+------------+
| User     | File_priv | Grant_priv | Super_priv |
+----------+-----------+------------+------------+
| rdsadmin | Y         | Y          | Y          |
| root     | N         | Y          | N          |
| devuser  | N         | N          | N          |
+----------+-----------+------------+------------+
You need to use LOAD DATA LOCAL INFILE as the file is not on the MySQL server, but is on the machine you are running the command from.
As per a comment below, you may also need to include the flag:
--local-infile=1
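For instance, a full client-side invocation could look something like this (the hostname, database, table and file path below are placeholders, not values from the question):
# hypothetical example: enable LOCAL INFILE on the client and load a CSV over the network
mysql --local-infile=1 -h mydb.xxxxxx.us-east-1.rds.amazonaws.com -u devuser -p mydatabase \
  -e "LOAD DATA LOCAL INFILE '/path/to/data.csv' INTO TABLE mytable
      FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' IGNORE 1 LINES;"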
For whatever it's worth... you can add the LOCAL keyword to LOAD DATA INFILE instead of using mysqlimport to get around this problem.
LOAD DATA LOCAL INFILE ...
This will work without granting FILE permissions.
I also struggled with this issue, trying to upload .csv data into an AWS RDS instance from my local machine using MySQL Workbench on Windows.
The addition I needed was OPT_LOCAL_INFILE=1 under Connection > Advanced > Others. Note that it has to be in capitals.
I found this answer by PeterMag in AWS Developer Forums.
For further info:
SHOW VARIABLES LIKE 'local_infile'; already returned ON
and the query was:
LOAD DATA LOCAL INFILE 'filepath/file.csv'
INTO TABLE `table_name`
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;
Copying from the answer source referenced above:
Apparently this is a bug in MYSQL Workbench V8.X. In addition to the
configurations shown earlier in this thread, you also need to change
the MYSQL Connection in Workbench as follows:
Go to the Welcome page of MYSQL which displays all your connections
Select Manage Server Connections (the little spanner icon)
Select your connection
Select Advanced tab
In the Others box, add OPT_LOCAL_INFILE=1
Now I can use the LOAD DATA LOCAL INFILE query on MYSQL RDS. It seems
that the File_priv permission is not required.
Pretty sure you can't do it yet, as you don't have the highest level MySQL privileges with RDS. We've only done a little testing, but the easiest way to import a database seems to be to pipe it from the source box, e.g.
mysqldump MYDB | mysql -h rds-amazon-blah.com --user=youruser --pass=thepass
Importing bulk data into Amazon MySQL RDS is possible in two ways. You can choose whichever of the options below is more convenient.
Using the mysqlimport utility:
mysqlimport --local --compress -u <user-name> -p<password> -h <host-address> <database-name> --fields-terminated-by=',' TEST_TABLE.csv
-- Note that the utility inserts the data into TEST_TABLE only (the table name is derived from the file name).
Sending a bulk INSERT SQL file by piping it into the mysql command:
mysql -u <user-name> -p<password> -h <host-address> <database-name> < TEST_TABLE_INSERT.SQL
-- Here the file TEST_TABLE_INSERT.SQL contains a bulk INSERT statement like the one below:
-- insert into TEST_TABLE values('1','test1','2017-09-08'),('2','test2','2017-09-08'),('3','test3','2017-09-08');
I ran into similar issues. I was in fact trying to import a database but the conditions should be the same - I needed to use load data due to the size of some tables, a spotty connection, and the desire for a modest resume functionality.
I agree with chris finne that not specifying the local option can lead to that error. After many fits and starts I found that the mk-parallel-restore tool from Maatkit provided what I needed with some excellent extra features. It might be a great match for your use case.
I have a fresh installation of mariadb-server-10.5 (1:10.5.15-0+deb11u1) on a freshly installed Debian 11.1.
On the old machine, with mysql-server (5.5.9999+default) and Debian 9.6, I created a dump like this:
mysqldump -u root -pSOMEPW --all-databases > all_databases.dump
and I loaded this dump on the new server:
source /path/to/all_databases.dump
The source took a while and did not produce any errors, but it beeped once at the end (no visible error or warning message).
Checking the mysql.user table, it has only 3 entries (root, mysql and mariadb.sys), so I tried to create the users that existed and were used on the old machine with this command:
create user 'testuser'@'localhost' identified by 'pw';
but it results in this error:
ERROR 1396 (HY000): Operation CREATE USER failed for 'testuser'@'localhost'
With a short script checking all the tables of the mysql database, 'testuser' appears in 3 different tables, but as a User only in the db table, twice, like this:
| Host      | Db       | User     | Select_priv
| localhost | somedb   | testuser | Y
| localhost | somedbp2 | testuser | Y
I think that might be causing CREATE USER to fail.
How could I fix this issue without losing the information in the db table?
Thanks.
In general you need to run mysql_upgrade whenever you switch to a more recent MySQL or MariaDB release, or after importing a backup taken from an older major version.
This is especially true for MariaDB 10.4 and later when importing from MySQL or from MariaDB 10.3 or earlier, as the internal privilege tables changed substantially with 10.4.
The mysql.user table was replaced by mysql.global_priv in 10.4, allowing for more fine-grained authentication control, e.g. supporting multiple authentication plugins for a single user.
So now mysql.user is just a VIEW presenting information from mysql.global_priv in a backwards-compatible way. Simple information like user and host name can still be modified via that view directly, as it is an updatable view, but this does not work for the more complex columns.
Commands like CREATE USER now operate directly on the mysql.global_priv table anyway; the errors you are getting are due to that table not being present in your imported dump.
The good news is: mysql_upgrade will take care of the necessary conversion, and after that CREATE USER should work again.
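For example, on the new server the upgrade step is typically just (credentials are placeholders):
# run once after importing the dump taken from the old server
mysql_upgrade -u root -p
# restarting the MariaDB server afterwards is recommended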
See also: https://mariadb.com/kb/en/mysql_upgrade/
See also: https://mariadb.com/kb/en/mysqlglobal_priv-table/
I'm trying to automate a mysql dump of all databases from an Azure Database for MySQL Server. Current size of databases:
mysql> SELECT table_schema "DB Name", Round(Sum(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB"
FROM information_schema.tables GROUP BY table_schema;
+--------------------+---------------+
| DB Name            | DB Size in MB |
+--------------------+---------------+
| db1                |         278.3 |
| db2                |          51.8 |
| information_schema |           0.2 |
| mysql              |           8.9 |
| performance_schema |           0.0 |
| db3                |          43.3 |
| sys                |           0.0 |
+--------------------+---------------+
7 rows in set (31.80 sec)
I have a python script, on a different VM, that calls mysqldump to dump all of these into a file. However, I'm running into an issue with db1. It is being dumped to a file, but very slowly: less than ~4 MB in 30 minutes. Meanwhile, db2 and db3 are dumped almost immediately, in seconds.
I have tried all of the following options and combinations to see if the write speed changes, but it doesn't:
--compress
--lock-tables (true / false)
--skip-lock-tables
--max-allowed-packet (512M)
--quick
--single-transaction
--opt
I'm currently not even using the script, just running the commands in a shell, with the same result.
mysqldump -h <host> -P <port> -u'<user>' -p'<password>' db1 > db1.sql
db1 has ~500 tables.
I understand that it is bigger than db2 and db3 but it's not by that much, and I'm wondering if anyone knows what could be the issue here?
EDIT
After these helpful answers and some Google research showed that the database is most likely fine, I ran a test by duplicating the db1 database on the server into a test database and then deleting tables one by one to decrease its size. At around 50 MB the writes became instant, like for the other databases. This leads me to believe that there is some throttling going on in Azure, because the database itself is fine; we will take it up with their support team. I have also found a lot of posts on Google complaining about Azure database speeds in general.
In the meantime, I changed the script to ignore large databases, and we will try to move the databases to a SQL Server instance provided by Azure, or to a simple VM with a MySQL server on it, to see where we get better performance.
It's possible it's slow on the MySQL Server end, but it seems unlikely. You can open a second shell window, connect to MySQL and use SHOW PROCESSLIST or SHOW ENGINE INNODB STATUS to check for stuck queries or locks.
It's also possible it's having trouble writing the data to db1.sql, if you have very slow storage. But 4 MB in 30 minutes is ridiculous. Make sure you're saving to storage local to the instance you're running mysqldump on. Don't save to remote storage. Also be careful if the storage volume to which you're writing the dump has other heavy I/O traffic saturating it; this could slow down writes.
Another way to test for slow data writes is to try mysqldump ... > /dev/null; if that is fast, it's a pretty good clue that the slowness is the fault of the disk writes.
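For example (reusing the placeholders from the question), timing both variants makes the comparison concrete:
# write to local disk
time mysqldump -h <host> -P <port> -u'<user>' -p'<password>' db1 > db1.sql
# discard the output entirely; if this is fast, the disk writes are the bottleneck,
# and if it is just as slow, suspect the server or the network instead
time mysqldump -h <host> -P <port> -u'<user>' -p'<password>' db1 > /dev/null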
Finally, there's a possibility that the network is causing the slowness. If saving the dump file to /dev/null is still slow, I'd suspect the network.
An answer at https://serverfault.com/questions/233963/mysql-checking-permission-takes-a-long-time suggests that slowness in "checking permissions" might be caused by having too much data in the MySQL grant tables (e.g. mysql.user). If you have thousands of user credentials, this could be the cause. You can try eliminating these entries (and run FLUSH PRIVILEGES afterwards).
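A quick way to check whether the grant tables are unusually large (a sketch, not taken from the linked answer):
-- thousands of rows here can make "checking permissions" noticeably slow
SELECT COUNT(*) FROM mysql.user;
-- after deleting stale entries, reload the grant tables
FLUSH PRIVILEGES;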
Create a backup of your database first. After that, try running mysqlcheck.
For more information, see the mysqlcheck documentation.
I have a 5 GB database that needs to be uploaded to phpMyAdmin, and on a shared server where I cannot access the shell. Is there any solution that takes less time to upload? Please help me by providing the steps to upload the SQL file. I have searched the internet but could not find an answer.
Do not use phpmyadmin.
Assuming you have shell access, upload the file and feed it directly to the mysql command.
Your shell command will look like:
cat file.sql | mysql -uuser -ppassword database
or you can do gzipped file:
zcat file.sql.gz | mysql -uuser -ppassword database
Before doing this, check that (a quick sketch of how to verify these follows the list):
the database connection works (correct database, user and password)
the database is empty :)
the MySQL max packet size (max_allowed_packet) is OK
you have enough disk space
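A rough pre-flight sketch for those checks (user, password and database are placeholders):
mysql -uuser -ppassword database -e "SELECT 1"                        # connection works
mysql -uuser -ppassword database -e "SHOW TABLES"                     # empty if this prints nothing
mysql -uuser -ppassword -e "SHOW VARIABLES LIKE 'max_allowed_packet'" # max packet size
df -h .                                                               # enough disk space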
UPDATE
You said you do not have shell access.
Then you have the following options:
Upload the file and contact support; let them do it for you.
Feed it remotely; cPanel has a special menu where you can get remote access, and other panels have the same ability.
In this case the command is executed on your computer and looks like:
cat file.sql | mysql -uroot -phipopodil -hwebsite.com
or, on Windows:
/path/to/mysql -uroot -phipopodil -hwebsite.com < file.sql
do some "hack" - feed it through crontab, at or via php system() command.
If you choose "hack" option, note following:
php have max_execution_time - even if you set it to zero, there could be some limit "imposed" from hosting.
usually hosts have limited mysql updates per hour.
there could be some ulimit restrictions.
if you execute feeding of 5 GB on shared server, server will slow down and administrator will check what you are doing.
This depends on your database; you tagged the question with 3 different database types: mysql, sql-server, and postgresql. I know MySQL and PostgreSQL have import features, and I'd be surprised if SQL Server didn't as well. You could import the database file via the command line instead of having to use phpMyAdmin.
Incidentally, the phpMyAdmin tool also has an import feature, but that again depends on the format of your database. If it's a compatible SQL file, you could upload it to phpMyAdmin and import it there, but I'd recommend the previous method I mentioned: upload it to your host, then use whatever database tool fits (mysqlimport for MySQL; or, if it's the result of a pg_dump command, you can just run):
psql <dbname> < <yourfile>
i.e.
psql mydatabase < inputfile.sql
I have searched and found this post (http://stackoverflow.com/questions/1814297/cant-load-file-data-in-the-mysql-directory) but it is not working for me.
I am on Ubuntu 12.04 and the MySQL version is 5.5.22-0ubuntu1.
I have logged into MySQL as root and so grants should all be okay:
mysql> show grants;
+---------------------------------------------------------------------+
| Grants for root@localhost                                            |
+---------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION  |
| GRANT PROXY ON ''@'' TO 'root'@'localhost' WITH GRANT OPTION         |
+---------------------------------------------------------------------+
I am trying to insert some data from a text file into a MySQL database and the LOAD_FILE function doesn't seem to work properly
I created a test file with permissions of 777 and copied it to the root of the filesystem (I tried changing the owner/group to root:root and to mysql:mysql, and still no good):
mysql> select load_file('/test.txt');
+------------------------+
| load_file('/test.txt') |
+------------------------+
| NULL                   |
+------------------------+
1 row in set (0.00 sec)
But if I try this:
mysql> select load_file('/etc/hosts');
It works fine. If I copy the test file into /etc it still fails.
Has anyone seen this before, or can you perhaps point me to another way to load the data into the database?
To use load_file, the following conditions must be met (from the documentation); a quick way to check them is sketched after the list:
The file must be located on the server host
You must specify the full path name to the file, and you must have the FILE privilege.
The file must be readable by all and its size less than max_allowed_packet bytes.
If the secure_file_priv system variable is set to a nonempty directory name, the file to be loaded must be located in that directory.
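A quick way to check those conditions from the mysql client (a sketch; the exact output depends on your server configuration):
-- the directory (if any) the server restricts file reads to
SHOW VARIABLES LIKE 'secure_file_priv';
-- the maximum size load_file() will return
SHOW VARIABLES LIKE 'max_allowed_packet';
-- whether the current account has the FILE privilege
SHOW GRANTS;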
If the file contains SQL statements that you want to execute, an easier approach might be to pipe it in:
mysql -u foo -p dbname < filename.sql
I'm not an expert on MySQL, but I've observed that MySQL 5.5 has a problem on Ubuntu.
Even after following the MySQL documentation, LOAD_FILE() didn't work.
There is a service called AppArmor preventing the LOAD_FILE() function from working; I tried stopping that service, but the problem still persisted.
I know this doesn't solve your problem, but at least it will help you find where the problem is.
Consider this one-liner (note, I'm on Ubuntu):
printf "$(cat update_xml.sql)" "$(cat my.xml | sed s/"'"/"\\\'"/g)" | mysql -h myRemoteHost -u me -p***
In update_xml.sql there is:
UPDATE
myTable
SET
myXmlColumn = '%s'
WHERE
...
Adding this for future reference. Probably won't help the OP.
As noted before, AppArmor is to blame. You need to whitelist the paths needed for load_file in the provided profile, which lives at /etc/apparmor.d/usr.sbin.mysqld; see the apparmor.d documentation for the profile syntax. This is the recommended way, as AppArmor has its reasons to be there.
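As a sketch, the profile addition could look like this (the /data/imports directory is a made-up example, not a path from the question):
# excerpt of /etc/apparmor.d/usr.sbin.mysqld -- allow read access to the import directory
  /data/imports/ r,
  /data/imports/** r,
# then reload the profiles
/etc/init.d/apparmor restart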
Alternatives:
This is the unrecommended method. Disable the usr.sbin.mysqld profile so you won't expose all the services. Simply link the profile to /etc/apparmor.d/disable with ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/usr.sbin.mysqld. Reload the profiles with /etc/init.d/apparmor restart. It probably makes sense for a development machine.
This is the highly unrecommended method, if you don't actually need AppArmor. The profiles can be unloaded with /etc/init.d/apparmor teardown. Disable the init script with update-rc.d -f apparmor remove.
All the above stuff requires root privileges, but I skipped the ever repetitive sudo in front of all the commands.
MySQL is awesome! I am currently involved in a major server migration; previously, our small database was hosted on the same server as the client, so we used to do this: SELECT * INTO OUTFILE .... LOAD DATA INFILE ....
Now we have moved the database to a different server and SELECT * INTO OUTFILE .... no longer works; understandable, for security reasons I believe.
But, interestingly, LOAD DATA INFILE .... can be changed to LOAD DATA LOCAL INFILE .... and, bam, it works.
I am not complaining, nor am I expressing disgust towards MySQL. The alternative added 2 lines of extra code and a system call from a .sql script. All I want to know is why LOAD DATA LOCAL INFILE works and why there is no such thing as SELECT INTO OUTFILE LOCAL.
I did my homework but couldn't find a direct answer to the questions above. I couldn't find a feature request at MySQL either. If someone can clear that up, that would be awesome!
Is MariaDB capable of handling this problem?
From the manual: "The SELECT ... INTO OUTFILE statement is intended primarily to let you very quickly dump a table to a text file on the server machine. If you want to create the resulting file on some client host other than the server host, you cannot use SELECT ... INTO OUTFILE. In that case, you should instead use a command such as mysql -e "SELECT ..." > file_name to generate the file on the client host."
http://dev.mysql.com/doc/refman/5.0/en/select.html
An example:
mysql -h my.db.com -u usrname --password=pass db_name -e 'SELECT foo FROM bar' > /tmp/myfile.txt
You can achieve what you want with the mysql console with the -s (--silent) option passed in.
It's probably a good idea to also pass in the -r (--raw) option so that special characters don't get escaped. You can use this to pipe queries like you're wanting.
mysql -u username -h hostname -p -s -r -e "select concat('this',' ','works')"
EDIT: Also, if you want to remove the column name from your output, just add another -s (mysql -ss -r etc.)
The path you give to LOAD DATA INFILE is for the filesystem on the machine where the server is running, not the machine you connect from. LOAD DATA LOCAL INFILE is for the client's machine, but it requires that the server was started with the right settings, otherwise it's not allowed. You can read all about it here: http://dev.mysql.com/doc/refman/5.0/en/load-data-local.html
As for SELECT INTO OUTFILE I'm not sure why there is not a local version, besides it probably being tricky to do over the connection. You can get the same functionality through the mysqldump tool, but not through sending SQL to the server.
Since I find myself rather regularly looking for this exact problem (in the hope that I missed something before...), I finally decided to take the time and write up a small gist to export MySQL queries as CSV files, kinda like https://stackoverflow.com/a/28168869 but based on PHP and with a couple more options. This was important for my use case, because I need to be able to fine-tune the CSV parameters (delimiter, NULL value handling), and the files need to be actually valid CSV; a simple CONCAT is not sufficient, since it doesn't generate valid CSV files if the values contain line breaks or the CSV delimiter.
Caution: Requires PHP to be installed on the server!
(Can be checked via php -v)
"Install" mysql2csv via
wget https://gist.githubusercontent.com/paslandau/37bf787eab1b84fc7ae679d1823cf401/raw/29a48bb0a43f6750858e1ddec054d3552f3cbc45/mysql2csv -O mysql2csv -q && (sha256sum mysql2csv | cmp <(echo "b109535b29733bd596ecc8608e008732e617e97906f119c66dd7cf6ab2865a65 mysql2csv") || (echo "ERROR comparing hash, Found:" ;sha256sum mysql2csv) ) && chmod +x mysql2csv
(download content of the gist, check checksum and make it executable)
Usage example
./mysql2csv --file="/tmp/result.csv" --query='SELECT 1 as foo, 2 as bar;' --user="username" --password="password"
generates file /tmp/result.csv with content
foo,bar
1,2
help for reference
./mysql2csv --help
Helper command to export data for an arbitrary mysql query into a CSV file.
Especially helpful if the use of "SELECT ... INTO OUTFILE" is not an option, e.g.
because the mysql server is running on a remote host.
Usage example:
./mysql2csv --file="/tmp/result.csv" --query='SELECT 1 as foo, 2 as bar;' --user="username" --password="password"
cat /tmp/result.csv
Options:
-q,--query=name [required]
The query string to extract data from mysql.
-h,--host=name
(Default: 127.0.0.1) The hostname of the mysql server.
-D,--database=name
The default database.
-P,--port=name
(Default: 3306) The port of the mysql server.
-u,--user=name
The username to connect to the mysql server.
-p,--password=name
The password to connect to the mysql server.
-F,--file=name
(Default: php://stdout) The filename to export the query result to ('php://stdout' prints to console).
-L,--delimiter=name
(Default: ,) The CSV delimiter.
-C,--enclosure=name
(Default: ") The CSV enclosure (that is used to enclose values that contain special characters).
-E,--escape=name
(Default: \) The CSV escape character.
-N,--null=name
(Default: \N) The value that is used to replace NULL values in the CSV file.
-H,--header=name
(Default: 1) If '0', the resulting CSV file does not contain headers.
--help
Prints the help for this command.
Using the mysql CLI with the -e option, as Waverly360 suggests, is a good approach, but it might run out of memory and get killed on large results (I haven't found the reason behind it).
If that is the case, and you need all the records, my solution is mysqldump + mysqldump-to-csv:
wget https://raw.githubusercontent.com/jamesmishra/mysqldump-to-csv/master/mysqldump_to_csv.py
mysqldump -u username -p --host=hostname database table | python mysqldump_to_csv.py > table.csv
Re: SELECT * INTO OUTFILE
Check if MySQL has permissions to write a file to the OUTFILE directory on the server.
Try setting the path to /var/lib/mysql-files/filename.csv (MySQL 8). Determine which files directory is yours by typing SHOW VARIABLES LIKE "secure_file_priv"; at the mysql client command line.
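For example (table and column names are placeholders):
-- find the directory the server will accept
SHOW VARIABLES LIKE 'secure_file_priv';
-- then point OUTFILE at a file inside it
SELECT id, name FROM mytable
INTO OUTFILE '/var/lib/mysql-files/mytable.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';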
See the related answer here: (...) --secure-file-priv in MySQL, answered in 2015 by user vhu.