As a security-minded person, on the off chance that an SQL injection ever happens I'd like to minimize the damage caused. One such concern is that there are queries that can read and write files on the local file system. This is clearly a major issue in the case of a security breach, and these commands see fairly limited day-to-day use. Optionally, it could be turned on for an isolated period of time when I need to import or export data, but I would like it turned off explicitly at all other times. No amount of googling or skimming the MySQL manual has led me to a specific setting that lets me disable this option.
I know I could easily just revoke the privilege for all users, but I'd like a simpler solution that by default increases my security (at least in this specific case).
Does anyone know of any setting or way to deactivate any file interactions in MySQL?
Thanks!
http://dev.mysql.com/doc/refman/5.6/en/load-data.html says:
Also, to use LOAD DATA INFILE on server files, you must have the FILE privilege.
http://dev.mysql.com/doc/refman/5.6/en/select-into.html says:
The SELECT ... INTO OUTFILE 'file_name' form of SELECT writes the selected rows to a file. The file is created on the server host, so you must have the FILE privilege to use this syntax.
You can also set the secure_file_priv config variable:
By default, this variable is empty. If set to the name of a directory, it limits the effect of the LOAD_FILE() function and the LOAD DATA and SELECT ... INTO OUTFILE statements to work only with files in that directory.
In Percona Server, secure_file_priv has an additional usage:
When used with no argument, the LOAD_FILE() function will always return NULL. The LOAD DATA INFILE and SELECT INTO OUTFILE statements will fail with the following error: “The MySQL server is running with the --secure-file-priv option so it cannot execute this statement”.
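For reference, a minimal sketch of what restricting file operations to a single directory could look like in the server configuration (the directory path here is just an example; secure_file_priv is not a dynamic variable, so it has to go in the config file and the server must be restarted):

[mysqld]
secure_file_priv = /var/lib/mysql-files

You can verify the active value afterwards with SHOW VARIABLES LIKE 'secure_file_priv';.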
This involves MySQL 5.7 running on Windows Server 2016.
I'm working with a TRUNCATE statement in MySQL to reduce the size of a large log file (named "mySite.log"), which resides in:
ProgramData/MySQL/MySQL Server 5.7/Data/
I have researched and implemented the following:
mysql> SET GLOBAL general_log=OFF;
and this was successful.
However, I am trying to ascertain whether the large log file that I see in the directory stated above is in fact the general query log file. It carries the name of the database as the prefix of the file name ("MySite.log"), just as the other files (.bin, .err, .pid) in the same directory do.
Is this large log file actually the general_log file? (If using MySQL Workbench, where would the naming of the log file and storage location be set up? I can't seem to locate that.)
Will the following syntax truncate the log file?
mysql> TRUNCATE TABLE mysql.general_log;
Can 'TRUNCATE TABLE' be used even if the log is stored in a file rather than a database table?
Will 'mysql.general_log' need to be renamed to 'myDatabase.mysite' to match the name of my "MySite.log" file, from above?
Thanks for any leads.
Some interesting manual entries to read about this:
5.4.1 Selecting General Query and Slow Query Log Output Destinations
5.4.3 The General Query Log
You can check how your server is configured with
SHOW GLOBAL VARIABLES LIKE '%log%';
Then look for the value of log_output. This shows whether you're logging to FILE, TABLE, or both.
When it's FILE, check the value of general_log_file. This is where the log file lives in your file system. You can simply remove that file and create a new, empty one (in case you ever want to enable general_log again), then execute FLUSH LOGS;.
When it's TABLE then your TRUNCATE TABLE mysql.general_log; statement is correct.
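Putting that together, a minimal sketch (statements only; deleting or renaming the file itself happens in the file system):

SHOW GLOBAL VARIABLES LIKE 'log_output';
SHOW GLOBAL VARIABLES LIKE 'general_log_file';

-- If log_output = FILE: stop logging, delete or rename the file on disk,
-- then make the server re-open its log files:
SET GLOBAL general_log = OFF;
FLUSH LOGS;

-- If log_output = TABLE:
TRUNCATE TABLE mysql.general_log;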
Regarding your last question (renaming mysql.general_log): just never mess with the tables in the mysql schema. Just don't (unless you know exactly what you're doing). I'm also not sure how you arrived at that idea.
I've got a script to load a text file into an Amazon RDS mysql database. It needs to handle a variable number of columns from the text file and store them as JSON. Here's an example with 5 columns that get stored as JSON:
LOAD DATA LOCAL INFILE '/Applications/FileMaker Pro 14/containers/imports/load1.txt'
INTO TABLE jdataholder (rowdatetime, @v1, @v2, @v3, @v4, @v5)
SET loadno = 1, formatno = 1,
    jdata = JSON_OBJECT('Site', @v1, 'Nutrients', @v2, 'Dissolved_O2', @v3, 'Turbitidy', @v4, 'Nitrogen', @v5);
local_infile is 'ON' on the server. The query works in Sequel Pro. When I try to run it from FileMaker Pro 14's Execute SQL script step (running on OS X 10.12), it doesn't work. I know the connection to the server is working because I can run other queries that don't use the LOAD DATA LOCAL INFILE statement.
The error message I get says:
ODBC ERROR: [Actual][MySQL] Load data local infile forbidden
From other answers on SO and elsewhere it seems the client also needs local_infile enabled, which would explain why it works in one client and not the other. I've tried to do this, but the instructions I've seen all use the terminal. I don't think FileMaker has anything like this - you just enter SQL into a query editor and send it to a remote database. I'm not sure how, or even whether, you can change the client's configuration.
Does anyone know how to enable this on Filemaker? Or, is there something else I can do to make this work?
Could I avoid this if I ran the load data local infile query from a stored procedure? That was my original plan, but the documentation says that the load data infile step has to call literal strings, not variables, so I couldn't think of a way to handle a variable number of columns.
Thanks.
It's possible that FileMaker's Execute SQL step simply does not support this. I would suggest using a plugin that can run terminal commands, then using that plugin to perform the action through FileMaker. There's a free plugin with a function for this, called BaseElements. Below is a link to the documentation for that specific function.
https://baseelementsplugin.zendesk.com/hc/en-us/articles/205350547-BE-ExecuteShellCommand
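As a rough sketch of what the shell command run through that plugin could look like (the mysql binary path, host, and credentials are placeholders, and this assumes the standard mysql command-line client is installed on the client machine - it accepts a --local-infile flag and a -e option for running a single statement):

/usr/local/mysql/bin/mysql --local-infile=1 -h YOUR_RDS_HOST -u USERNAME -pPASSWORD DBNAME \
  -e "LOAD DATA LOCAL INFILE '/Applications/FileMaker Pro 14/containers/imports/load1.txt' INTO TABLE jdataholder (rowdatetime, @v1, @v2, @v3, @v4, @v5) SET loadno = 1, formatno = 1, jdata = JSON_OBJECT('Site', @v1, 'Nutrients', @v2, 'Dissolved_O2', @v3, 'Turbitidy', @v4, 'Nitrogen', @v5)"

Putting the password on the command line is insecure, but an interactive -p prompt won't work from a shell call, so you may prefer an option file instead.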
If you are trying to insert data into a MySQL table, a better way is to use FileMaker ESS (External SQL Sources), which allows you to work with MySQL, Oracle and other supported databases inside FileMaker. This includes being able to import data into the ESS (MySQL) table. You can see a PDF document on ESS below:
https://www.filemaker.com/downloads/documentation/techbrief_intro_ess.pdf
I've been working on this for days, pretty frustrated.
Have a Magento database, about 1 GB with 3 million records - need to make a backup and import it onto my local machine. The local machine is running WAMP on a brand new gaming rig (16 GB RAM). Exported the db fine using phpMyAdmin into a .sql file.
Saw BigDump highly recommended for importing a large db. Also found a link recommending that the export include column names in every INSERT statement ( http://www.atomicsmash.co.uk/blog/import-large-sql-databases/ ). Done.
Start importing. Hours go by (around 3-4). Get an error: Page unavailable, or wrong url! More searching; tried the suggestions (mostly here: http://www.sitehostingtalk.com/f16/bigdump-error-page-unavailable-wrong-url-56939/ ) to drop $linespersession to 500 and add a $delaypersession of 300. Ran again - more hours, same error.
I then re-exported the db into two .sql dumps (one holding all the large tables with over 100K records), repeated the process, same error. So I quit using BigDump.
Next up was the command line! Using Console2 I ran source mydump.sql. 30 hours go by. Then an error:
ERROR 1231 (42000): Variable 'character_set_client' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'collation_connection' can't be set to the value of 'NULL'
More searching, really varied explanations. I tried with the split files from before - run it again, same error.
I can't figure out what would cause both of these errors. I know I got the same error on two different exports. I know there are a few tables in the 1-300,000 row range. I also don't think 30 hours is normal (on a screaming fast machine) for an import of only 1 GB, but I could be wrong.
What other options should I try? Is it the format of the export? Should it be compressed or not? Is there a faster way of importing? Any way of making this go faster?
Thanks!
EDIT
Thanks to some searching and @Bill Karwin's suggestion, here's where I'm at:
Grabbed a new mysqldump using ssh and downloaded it.
Imported the database 10 different times. Each time was MUCH faster (5-10 mins) so that fixed the ridiculous import time.
used command line, >source dump.sql
However, each import from that same dump.sql file produces a different number of records. Of the 3 million records, the imports differ by between 600 and 200,000 records. One of the imports has 12,000 MORE records than the original. I've tried with and without setting foreign_key_checks = 0; I've tried running the same import multiple times with exactly the same settings. Every time the number of rows is different.
I'm also getting these errors now:
ERROR 1231 (42000): Variable 'time_zone' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'sql_mode' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'foreign_key_checks' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'unique_checks' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'character_set_client' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'collation_connection' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'sql_notes' can't be set to the value of 'NULL'
From what I've read, these don't seem that important. There are other warnings too, but I can't determine what they are.
Any ideas?
EDIT: Solution removed here and listed below as a separate post
References:
https://serverfault.com/questions/244725/how-to-is-mysqls-net-buffer-length-config-viewed-and-reset
http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_net_buffer_length
Make phpMyAdmin show exact number of records for InnoDB tables?
Export a large MySQL table as multiple smaller files
https://dba.stackexchange.com/questions/31197/why-max-allowed-packet-is-larger-in-mysqldump-than-mysqld-in-my-cnf
No, that's not a normal restore time, unless you're running MySQL on a 15-year-old computer or you're trying to write the database to a shared volume over a very slow network. I can import a data dump of about that size in about 45 minutes, even on an x-small EC2 instance.
The error about setting variables to NULL appears to be a limitation of BigDump. It's mentioned in the BigDump FAQ. I have never seen those errors from restoring a dump file with the command-line client.
So here are some recommendations:
Make sure your local MySQL data directory is on a locally-attached drive -- not a network drive.
Use the mysql command-line client, not phpMyAdmin or BigDump.
mysql> source mydump.sql
Dump files are mostly a long list of INSERT statements. You can read Speed of INSERT Statements for tips on speeding up INSERT; be sure to read the sub-pages it links to.
For example, when you export the database, check the radio button for "insert multiple rows in every INSERT statement" (this is incompatible with BigDump, but better for performance when you use source in the mysql client); see the sketch just after these recommendations.
Durability settings are recommended for production use, but they come with some performance penalties. It sounds like you're just trying to get a development instance running, so reducing the durability may be worthwhile, at least while you do your import. A good summary of reducing durability is found in MySQL Community Manager Morgan Tocker's blog: Reducing MySQL durability for testing.
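On the multi-row INSERT point, the difference is simply many one-row statements versus fewer multi-row ones - a small illustration with made-up table and column names:

-- one row per INSERT (one statement to parse and execute per row)
INSERT INTO products (sku, name) VALUES ('A1', 'Widget');
INSERT INTO products (sku, name) VALUES ('A2', 'Gadget');

-- several rows per INSERT (considerably faster to load)
INSERT INTO products (sku, name) VALUES ('A1', 'Widget'), ('A2', 'Gadget');

And on the durability point, a minimal sketch of settings that are often relaxed temporarily for a bulk import into InnoDB (set them back, or restart the server, once the import is done):

-- flush the InnoDB log roughly once per second instead of at every commit
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
-- don't sync the binary log on every commit (only matters if binary logging is enabled)
SET GLOBAL sync_binlog = 0;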
Re your new questions and errors:
A lot of people report similar errors when importing a large dump file created by phpMyAdmin or Drupal or other tools.
The most likely cause is that you have some data in the dump file that is larger than max_allowed_packet. This MySQL config setting is the largest size for an individual SQL statement or an individual row of data. When you exceed this in an individual SQL statement, the server aborts that SQL statement, and closes your connection. The mysql client tries to reconnect automatically and resume sourcing the dump file, but there are two side-effects:
Some of your rows of data failed to load.
The session variables that preserve @time_zone and other settings during the import are lost, because they are scoped to the session. When the reconnect happens, you get a new session.
The fix is to increase your max_allowed_packet. The default level is 4MB on MySQL 5.6, and only 1MB on earlier versions. You can find out what your current value for this config is:
mysql> SELECT @@max_allowed_packet;
+----------------------+
| @@max_allowed_packet |
+----------------------+
|              4194304 |
+----------------------+
You can increase it as high as 1GB:
mysql> set global max_allowed_packet = 1024*1024*1024;
Then try the import again:
mysql> source mydump.sql
Also, if you're measuring the size of the tables with a command like SHOW TABLE STATUS or a query against INFORMATION_SCHEMA.TABLES, you should know that the TABLE_ROWS count is only an estimate -- it can be pretty far off, like +/- 10% (or more) of the actual number of rows of the table. The number reported is even likely to change from time to time, even if you haven't changed any data in the table. The only true way to count rows in a table is with SELECT COUNT(*) FROM SomeTable.
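For example ('mydb' and 'SomeTable' are placeholders):

-- estimate only; can be off by 10% or more, and can change between runs
SELECT TABLE_ROWS
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'SomeTable';

-- exact count
SELECT COUNT(*) FROM mydb.SomeTable;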
SOLUTION
For anyone who wanted a step by step:
Using PuTTY, grab a mysqldump of the database (the leading > in the commands below is just the prompt - don't type it - and replace the all-caps placeholders with the appropriate info)
> mysqldump -uUSERNAME -p DATABASENAME > DATABASE_DUMP_FILE_NAME.sql
You'll get a password prompt; type it in and hit enter. Wait till you get a prompt again. If you're using an FTP client, go to the root of your host and you should see your file there; download it.
Locally, get a mysql prompt by navigating to where your mysql.exe file is (there are a few ways of doing this; this is one of them) and typing:
> mysql.exe -u USERNAME -p NEW_DATABASE
Now you're at the mysql prompt. Turn on warnings... just in case:
mysql> \W
Increase max_allowed_packet to a full gigabyte. I've seen references to also changing net_buffer_length, but after 5.1.31 it doesn't seem to be changeable this way (link at bottom).
mysql> SET global max_allowed_packet = 1024*1024*1024;
Now import your sql file:
mysql> source C:\path\to\DATABASE_DUMP_FILE_NAME.sql
If you want to check whether all of the records imported, you can either run SELECT COUNT(*) FROM SomeTable, or:
Go to C:\wamp\apps\phpmyadmin\config.inc.php
At the bottom before the ?> add:
/* Show the exact count of each row */
$cfg['MaxExactCount'] = 2000000;
This is only recommended for a development platform - but it's really handy when you have to scan a bunch of tables/databases. It will probably slow things down with large data sets.
I was going through the documentation for loading data into and out of MySQL tables at dev.mysql.com when I came across the description of the LOCAL option of the LOAD DATA INFILE command. It says that if LOCAL is not used, then if a file name with no leading path components is given, the server looks for the file in the database directory of the default database. Can anyone tell me what is meant by the default database here, and how to set one? Can it be set from within MySQL itself, or through some server directive?
The default database is the one you selected with a USE statement or specified at login time. If you write SELECT * FROM tablename, as opposed to SELECT * FROM databasename.tablename, you are also using the default database.
Edit
Just to make that clear: the default database is not a static thing - it is only defined for a given point in time in a given session, e.g. the point in time and session in which you issue the LOAD DATA INFILE command.
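For illustration, a minimal sketch (database, table, and file names are made up):

USE mydb;                                  -- mydb is now the default database for this session
LOAD DATA INFILE 'data.txt' INTO TABLE t;  -- no leading path component, so the server looks for
                                           -- data.txt in mydb's database directory (datadir/mydb/)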
Generally, a default database can also be specified in a database param under the [client] header in your config (my.ini/my.cnf/etc.), like so:
[client]
database = name_of_default_db
I've got one easy question: say there is a site with a query like:
SELECT id, name, message FROM messages WHERE id = $_GET['q'].
Is there any way to get something updated or deleted in the database (MySQL)? Until now I've never seen an injection that was able to delete or update using a SELECT query - so is it even possible?
Before directly answering the question, it's worth noting that even if all an attacker can do is read data that he shouldn't be able to, that's usually still really bad. Consider that by using JOINs and SELECTing from system tables (like mysql.innodb_table_stats), an attacker who starts with a SELECT injection and no other knowledge of your database can map your schema and then exfiltrate the entirety of the data that you have in MySQL. For the vast majority of databases and applications, that already represents a catastrophic security hole.
But to answer the question directly: there are a few ways that I know of by which injection into a MySQL SELECT can be used to modify data. Fortunately, they all require reasonably unusual circumstances to be possible. All example injections below are given relative to the example injectable query from the question:
SELECT id, name, message FROM messages WHERE id = $_GET['q']
1. "Stacked" or "batched" queries.
The classic injection technique of just putting an entire other statement after the one being injected into. As suggested in another answer here, you could set $_GET['q'] to 1; DELETE FROM users; -- so that the query forms two statements which get executed consecutively, the second of which deletes everything in the users table.
In mitigation
Most MySQL connectors - notably including PHP's (deprecated) mysql_* and (non-deprecated) mysqli_* functions - don't support stacked or batched queries at all, so this kind of attack just plain doesn't work. However, some do - notably including PHP's PDO connector (although the support can be disabled to increase security).
2. Exploiting user-defined functions
Functions can be called from a SELECT, and can alter data. If a data-altering function has been created in the database, you could make the SELECT call it, for instance by passing 0 OR SOME_FUNCTION_NAME() as the value of $_GET['q'].
In mitigation
Most databases don't contain any user-defined functions - let alone data-altering ones - and so offer no opportunity at all to perform this sort of exploit.
3. Writing to files
As described in Muhaimin Dzulfakar's (somewhat presumptuously named) paper Advanced MySQL Exploitation, you can use INTO OUTFILE or INTO DUMPFILE clauses on a MySQL select to dump the result into a file. Since, by using a UNION, any arbitrary result can be SELECTed, this allows writing new files with arbitrary content at any location that the user running mysqld can access. Conceivably this can be exploited not merely to modify data in the MySQL database, but to get shell access to the server on which it is running - for instance, by writing a PHP script to the webroot and then making a request to it, if the MySQL server is co-hosted with a PHP server.
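As a benign illustration of the mechanism (the path is just an example; it has to be somewhere the mysqld user can write, and no file may already exist there):

SELECT 'any attacker-chosen content' INTO OUTFILE '/tmp/attacker_chosen_file.txt';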
In mitigation
Lots of factors reduce the practical exploitability of this otherwise impressive-sounding attack:
MySQL will never let you use INTO OUTFILE or INTO DUMPFILE to overwrite an existing file, nor write to a folder that doesn't exist. This prevents attacks like creating a .ssh folder with a private key in the mysql user's home directory and then SSHing in, or overwriting the mysqld binary itself with a malicious version and waiting for a server restart.
Any halfway decent installation package will set up a special user (typically named mysql) to run mysqld, and give that user only very limited permissions. As such, it shouldn't be able to write to most locations on the file system - and certainly shouldn't ordinarily be able to do things like write to a web application's webroot.
Modern installations of MySQL come with --secure-file-priv set by default, preventing MySQL from writing to anywhere other than a designated data import/export directory and thereby rendering this attack almost completely impotent... unless the owner of the server has deliberately disabled it. Fortunately, nobody would ever just completely disable a security feature like that since that would obviously be - oh wait never mind.
4. Calling the sys_exec() function from lib_mysqludf_sys to run arbitrary shell commands
There's a MySQL extension called lib_mysqludf_sys that - judging from its stars on GitHub and a quick Stack Overflow search - has at least a few hundred users. It adds a function called sys_exec that runs shell commands. As noted in #2, functions can be called from within a SELECT; the implications are hopefully obvious. To quote from the source, this function "can be a security hazard".
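For instance (assuming the extension is actually installed, which it is not by default), an injected value like 0 OR sys_exec('touch /tmp/pwned') turns the example query into arbitrary command execution:

SELECT id, name, message FROM messages WHERE id = 0 OR sys_exec('touch /tmp/pwned')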
In mitigation
Most systems don't have this extension installed.
If you use mysql_query, which doesn't support multiple queries, you cannot directly append a DELETE/UPDATE/INSERT, but it is still possible to modify data under some circumstances. For example, let's say you have the following function:
DELIMITER //
CREATE DEFINER=`root`@`localhost` FUNCTION `testP`()
RETURNS int(11)
LANGUAGE SQL
NOT DETERMINISTIC
MODIFIES SQL DATA
SQL SECURITY DEFINER
COMMENT ''
BEGIN
DELETE FROM test2;
return 1;
END //
Now you can call this function in a SELECT:
SELECT id, name, message FROM messages WHERE id = NULL OR testP()
(id = NULL always evaluates to NULL, which is treated as FALSE, so testP() always gets executed.)
It depends on the DBMS connector you are using. Most of the time your scenario would not be possible, but under certain circumstances it could work. For further details, take a look at chapters 4 and 5 of the Black Hat paper Advanced MySQL Exploitation.
Yes it's possible.
$_GET['q'] would hold 1; DELETE FROM users; --
SELECT id, name, message FROM messages WHERE id = 1; DELETE FROM users; -- anything left of the original query is commented out