MySQL "INSERT" and SQL injection - mysql

I have this simple mysql query:
INSERT INTO table (col1, col2) VALUES ('1', '2')
col1 and col2 are foreign keys into another table, so any value for col1 and col2 must be present in the other table; otherwise the row won't be inserted.
Is there still any risk of SQL injection in this case? If I receive these column values from a PHP POST request, do I still need to bind them before inserting them into the database, or are they already secure because the columns are foreign keys?

Yes. All input from users needs to be sanitized. E.g. if a user sends you a string like '2'); DROP TABLE <table> -- as your second value, it might get executed and give you a surprise. (The string might not work exactly as written, but I think you get the point.)

It's indeed prone to SQL injection: the user could, for example, break your query and learn about your RDBMS and your database schema from the error message, then use that to prepare other attacks on your application.
There are a lot of ways to exploit a SQL injection flaw.

Yes, there is always a risk of injection, even with foreign key constraints. If I know valid col1 and col2 values, I can use them to construct an attack. It is best to always scrub user input and assume the user is trying to hurt your database.

When constructing database queries in PHP, use an interface that allows you to use placeholders for your data that will handle any escaping automatically. For example, your query would look like:
INSERT INTO table (col1, col2) VALUES (:col1, :col2)
Then you can bind against that using the appropriate method in a driver like PDO. If you're disciplined about using placeholders for user data, the chance of a SQL injection bug occurring is very low.
There are a number of ways to properly escape user input, but you must be sure to use them on all user data, no exceptions. A single mistake can be enough to crack your site wide open.
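To make the placeholder pattern concrete, here is a minimal, self-contained sketch. It uses Python's sqlite3 module rather than PHP/PDO so it runs without a MySQL server, and the table name is made up, but the principle (data bound separately from the SQL text) is the same:

```python
import sqlite3

# Self-contained demo: named placeholders keep data out of the SQL text,
# so a malicious value is stored literally instead of being executed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 TEXT)")

malicious = "2'); DROP TABLE t; --"
conn.execute("INSERT INTO t (col1, col2) VALUES (:col1, :col2)",
             {"col1": "1", "col2": malicious})

row = conn.execute("SELECT col1, col2 FROM t").fetchone()
print(row)  # the payload is stored as plain text; table t still exists
```

Had the payload been concatenated into the SQL string instead, it could have altered the statement; bound as a parameter, it is just an ordinary value.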

Related

Insert by reference in MySQL

In Oracle Database (SQL Plus), there is an alternative method to insert values into a table, which my lecturers called "insert by reference". It looks like this:
SQL> INSERT INTO table_name VALUES ('&col_name1', '&col_name2', ...);
Enter value for col_name1: value1
Enter value for col_name2: value2
...
This enables you to use the same command repeatedly (by pressing the up arrow) to enter multiple records into the table; you only need to enter the specific values after executing the command, with no need to go back to each value, erase it, and type in the new one.
So my question is, is there any way to replicate this handy command in MySQL?
This is a feature of SQL*Plus (substitution variables), not of Oracle Database itself: the client prompts for each &variable and splices the value into the statement text before sending it to the server.
You need to find or develop a SQL client for MySQL that can similarly reuse a statement with new values, either directly through SQL (MySQL's prepared-statement feature) or through an API (the C API is just one example).
We cannot recommend 3rd party tools or utilities here on SO, you need to find the one that best suits your needs.
Perhaps use a multi-row insert:
insert into table_name (col_name1, col_name2)
values (value_1_1, value_1_2), (value_2_1, value_2_2) [...]
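As a self-contained illustration of reusing one statement for many rows, here is a sketch in Python with sqlite3 standing in for a MySQL client (table and value names are the placeholders from the answer above):

```python
import sqlite3

# Sketch: a parameterised statement plays the role of the SQL*Plus prompt:
# prepare the INSERT once, then feed it new values on each execution.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name (col_name1 TEXT, col_name2 TEXT)")

stmt = "INSERT INTO table_name (col_name1, col_name2) VALUES (?, ?)"
for values in [("value_1_1", "value_1_2"), ("value_2_1", "value_2_2")]:
    conn.execute(stmt, values)          # one statement, many value sets

# Equivalent in effect to the multi-row INSERT suggested above:
conn.executemany(stmt, [("value_3_1", "value_3_2"), ("value_4_1", "value_4_2")])

print(conn.execute("SELECT COUNT(*) FROM table_name").fetchone()[0])  # 4
```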

Is there any disadvantages of unique column in MYSQL

I'd like to ask a question regarding unique columns in MySQL.
I would like to ask experts which is the better way to approach this problem, and the advantages or disadvantages of each, if any.
Set a varchar column as unique
Do a SQL INSERT IGNORE
If affected rows > 0, proceed with running the code
versus
Leave the varchar column as non-unique
Do a search query to look for an identical value
If no rows are returned by the query, do a SQL INSERT
Proceed with running the code
Neither of the 2 approaches is good.
You don't do INSERT IGNORE, nor do you search first. The searching part is unreliable because it fails under concurrency and compromises integrity. Imagine this scenario: you and I try to insert the same info into the database. We connect at the same time. The code in question determines that there's no such record in the database, for both of us. We both insert the same data. Since your column isn't unique, we end up with 2 identical records, and your integrity is gone.
What you do is set the column to unique, insert and catch the exception in the language of your choice.
MySQL will fail in case of duplicate record, and any proper db driver for MySQL will interpret this as an exception.
Since you haven't mentioned what the language is, it's difficult to move forward with examples.
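Since no language was specified, here is a minimal sketch of the insert-and-catch pattern using Python's sqlite3 (table and column names are made up; in PHP/PDO you would catch a PDOException instead):

```python
import sqlite3

# Sketch of the insert-and-catch pattern: declare the column UNIQUE and let
# the database reject duplicates, rather than SELECTing first (which races).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT UNIQUE)")

def insert_name(name):
    try:
        conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return True                      # new row inserted
    except sqlite3.IntegrityError:       # duplicate: the constraint fired
        return False

print(insert_name("alice"))  # True
print(insert_name("alice"))  # False, second attempt rejected by the database
```

The database enforces uniqueness atomically, so this stays correct under concurrent clients, unlike the search-then-insert approach.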
Defining a column as a unique index has a few advantages. First of all, when you define it as a unique index, MySQL can optimize the index for unique values (same as a primary key): it doesn't have to check whether there are more rows with the same value, so it can use an optimized algorithm for lookups.
You are also assured that there will never be a duplicate entry in your database, instead of handling this in multiple places in your code.
When you don't define it as UNIQUE, you first need to check whether a record exists in your table and then insert, which requires 2 queries (and possibly a full table lock) instead of 1, which decreases performance and is more error prone.
http://dev.mysql.com/doc/refman/5.0/en/constraint-primary-key.html
I'm leaving aside the fact that you would use INSERT IGNORE, which swallows the duplicate-key error when the entry already exists in the database (though you could still use it for high-performance operations in some special cases). A normal INSERT will tell you whether an entry already exists.
Putting a constraint like UNIQUE is better when it comes to query performance and data reliability, but there is also a trade-off when it comes to writing, so it's up to you which you prefer. In your case, since you are already doing an INSERT-if-not-exists query, I guess it's better to just use the constraint.

Manually increment primary key - transaction and race condition

This may not be a real world issue but is more like a learning topic.
Using PHP, MySQL and PDO, I know all about auto_increment and lastInsertId(). Consider that the primary key has no auto_increment attribute and we have to use something like SELECT MAX(id) FROM table to retrieve the last id, increment it manually, and then INSERT INTO table (id) VALUES (:lastIdPlusOne), wrapping the whole code in beginTransaction and commit.
Is this approach safe? If users A and B load this script at the same time, what happens in the end? Will both transactions fail, or will both succeed (for instance, if the last id was 10, A inserts 11 and B inserts 12)?
Note that since I am a PHP & MySQL developer, I am more interested in MySQL's behavior in this case.
If both got the same max, then the one that inserts first will succeed, and other(s) will fail.
To overcome this issue without using auto_increment fields, you may use a BEFORE INSERT trigger that does the job (new.id = max + 1), i.e. the same logic, but in a trigger, so the DB server is the one who controls it.
Not sure though if this is 100% safe in a master-master replication environment in case of a server failure.
This is @eggyal's comment, which I quote here:
You must ensure that you use a locking read to fetch the MAX() in the first (select) query; it will then block until the transaction is committed. However, this is very poor design and should not be used in a production system.
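To make the race concrete, here is a small simulation of the SELECT MAX(id)+1 pattern (Python with sqlite3 for a self-contained demo; the two "sessions" are simulated by interleaving both reads before either write):

```python
import sqlite3

# Sketch of why SELECT MAX(id)+1 races: two sessions that both read before
# either one writes will compute the same "next" id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO t (id) VALUES (10)")

def next_id():
    return conn.execute("SELECT MAX(id) FROM t").fetchone()[0] + 1

id_a = next_id()   # session A reads max=10, computes 11
id_b = next_id()   # session B reads before A writes, also computes 11
conn.execute("INSERT INTO t (id) VALUES (?)", (id_a,))
try:
    conn.execute("INSERT INTO t (id) VALUES (?)", (id_b,))
except sqlite3.IntegrityError:
    print("duplicate id", id_b)   # the second writer fails, as described above
```

A locking read (SELECT ... FOR UPDATE in MySQL/InnoDB) serializes the two readers and avoids the collision, at the cost of blocking.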

MySQL - Split up INSERT in to 2 queries maybe

I have an INSERT query which looks like:
$db->Query("INSERT INTO `surfed` (user, site) VALUES('".$data['id']."', '".$id."')");
Basically I want to insert just like the above query, but if the site has already been submitted by another user I don't want to re-submit the same $id into the site column. Multiple users can view the same site, and every user needs a row for each site they have viewed, which leaves the surfed table with tens of thousands of inserts and dramatically slows down the site.
Is there any way to split up the insert so that if a site is already submitted it won't be submitted again? Maybe there's a way to use UPDATE so that there isn't an overload of inserts?
Thanks,
I guess the easiest way to do it would be setting up a stored procedure which executes a SELECT to check if the user-site-combination is already in the table. If not, you execute the insert statement. If that combination already exist, you're done and don't execute the insert.
Check out the manual on stored procedures
http://dev.mysql.com/doc/refman/5.1/en/create-procedure.html
You need a conditional statement that checks whether the id already exists; if it does, update, otherwise insert.
If you don't need to know whether you actually inserted a line, you can use INSERT IGNORE ....
$db->Query("INSERT IGNORE INTO `surfed` (user, site) VALUES('".$data['id']."', '".$id."')");
But this assumes that you have a unique key defined for the columns.
IGNORE here suppresses the integrity constraint violation error triggered by attempting to add the same unique key twice.
The MySQL Reference Manual on the INSERT syntax has some information on that: http://dev.mysql.com/doc/refman/5.5/en/insert.html
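A self-contained sketch of the same pattern (Python's sqlite3 spells MySQL's INSERT IGNORE as INSERT OR IGNORE; the table mirrors the surfed table from the question, assuming a unique key over (user, site)):

```python
import sqlite3

# Sketch: with a UNIQUE key over (user, site), an "ignore" insert silently
# skips duplicate pairs instead of raising a duplicate-key error.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE surfed (user TEXT, site TEXT, UNIQUE (user, site))")

def record_visit(user, site):
    cur = conn.execute(
        "INSERT OR IGNORE INTO surfed (user, site) VALUES (?, ?)", (user, site))
    return cur.rowcount        # 1 if inserted, 0 if the pair already existed

print(record_visit("42", "example.com"))  # 1
print(record_visit("42", "example.com"))  # 0, duplicate silently ignored
```

Checking rowcount gives the "did I actually insert a line" signal that a plain IGNORE otherwise hides.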

MySQL set a field value for all INSERTS during the current session

I'm developing a system in C that commits updates to a MySQL database. The client isn't always online, so the application saves the SQL commands that would have been executed into a *.sql file when the server is inaccessible.
I'm thinking of adding a BOOLEAN field named late_commit to the tables involved, so I'll know which rows were inserted into the database later, after the connection was restored.
I could alter the logic in the program to include the late_commit field in the insert queries, but I'd rather give it a default value of false and somehow have it set to true only when the .sql file is executed.
I thought of interleaving the inserts with ALTER statements, but this seems a bit clumsy and would perform poorly.
I've never used triggers but from what I see in this SO question they could work. They seem, however, not to be temporary or local to the session, which would interfere with the concurrent inserts from other clients.
Do you have any ideas on how you did/would do this? Not necessarily the exact query(ies) to use, but the technology/approach that would work best.
EDIT:
I think that a solution, if no other comes up, could be the creation of a temporary table with the same structure and a late_commit default to true, insert the data into it, then copy into the main table.
NOTICE:
I've added an answer with some approaches I've found. I'm still looking for the definitive solution though, so if you know how to do it better, please comment or answer. Thank you!
I would make the default false for late_commit and have all normal code ignore its presence. I would then have the code that writes the SQL to file go through a "decorator" that injects the late_commit column, e.g. normal SQL:
insert into table1 (col1, col2) values (val1, val2);
But when written to file:
insert into table1 (late_commit, col1, col2) values (true, val1, val2);
That way only one piece of code needs to know about it. The SQL parsing to work out where to put the extra bits is fairly straightforward.
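A sketch of what such a decorator might look like (hypothetical helper; the regex assumes the simple INSERT INTO tbl (cols) VALUES (vals) shape shown above and would need hardening for real-world SQL):

```python
import re

# Sketch of the "decorator" idea: only the offline-queue writer rewrites the
# SQL to add late_commit; all other code paths leave the statement unchanged.
def add_late_commit(sql):
    return re.sub(
        r"insert into (\S+) \(([^)]*)\) values \(([^)]*)\)",
        r"insert into \1 (late_commit, \2) values (true, \3)",
        sql, flags=re.IGNORECASE)

print(add_late_commit("insert into table1 (col1, col2) values (val1, val2);"))
# insert into table1 (late_commit, col1, col2) values (true, val1, val2);
```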
I thought I'd leave here what I've found so far in order to answer the question.
As stated in the question, I'm preferably looking for a SQL-only approach that avoids injectors.
Prepared statements won't work, as they require a connection to the database, which might not exist in the first place.
IFNULL() SOLUTION:
I'm inclined to use a solution I came up with based on the MySQL IFNULL() function: by setting a session variable I can configure the late_commit column with the same query, whether inserting live (late_commit = false) or replaying later (late_commit = true).
So, using the query:
INSERT INTO `tmp`.`new_table` (`a`,`b`,`late_commit`)
VALUES ('abc','def', IFNULL(@LATECOMMIT, FALSE) );
I can insert the values with the late_commit column set to false thanks to IFNULL(): because the session variable @LATECOMMIT is not defined, the column is set to false.
On the other hand, if we are actually doing an offline commit, we just need to precede all the inserts with:
SET @LATECOMMIT = TRUE;
and then proceed with all the necessary inserts, which in my case span several different tables, but in all of them the late_commit field is set to IFNULL(@LATECOMMIT, FALSE):
INSERT INTO `tmp`.`new_table` (`a`,`b`,`late_commit`)
VALUES ('abc','def', IFNULL(@LATECOMMIT, FALSE) );
I like this solution because the variable is only set for the current session, and it's quite easy to implement (e.g. you just prepend the SET instruction to your .sql file and proceed to execute it).
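A sketch of that replay step (hypothetical helper names; it simply prefixes the queued statements with the SET line, and MySQL user variables take the @ prefix):

```python
# Sketch of preparing the offline queue for replay: every
# IFNULL(@LATECOMMIT, FALSE) in the queued INSERTs will then evaluate to TRUE
# because the session variable is defined once at the top of the batch.
def build_replay_script(statements):
    return "SET @LATECOMMIT = TRUE;\n" + "\n".join(statements)

script = build_replay_script([
    "INSERT INTO `tmp`.`new_table` (`a`,`b`,`late_commit`) "
    "VALUES ('abc','def', IFNULL(@LATECOMMIT, FALSE));",
])
print(script.splitlines()[0])  # SET @LATECOMMIT = TRUE;
```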
TIMEDIFF() or Timestamp subtraction SOLUTION:
When discussing this in chat, @TehShrike also offered me a SQL-only solution, making use of the difference between the time when the query was generated and the time when it is executed.
This works because my rows already carry a clientdate timestamp column, holding the client's local Unix timestamp.
So for this solution all that's needed is to decide what counts as a late_commit (e.g. 60 seconds, one hour, ...) and then write the inserts like this:
INSERT INTO `tmp`.`new_table` (`a`,`b`,`clientdate`,`late_commit`)
VALUES ('abc','def', FROM_UNIXTIME(1313489338),
IF( TIMESTAMPDIFF(SECOND, clientdate, NOW()) > 3600, TRUE, FALSE) );
In this insert, a row generated more than an hour ago (3600 seconds) is considered a late_commit.
If you are not using timestamps you could use DATETIME fields instead, in which case you would probably use the DATEDIFF() or TIMEDIFF() functions.
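The same threshold check can also live in application code when replaying the queue; a sketch of the timestamp-comparison idea above (the function name and the one-hour threshold are illustrative):

```python
import time

# Sketch: compare the client-side timestamp carried with the row against
# "now"; past a chosen threshold, the row is flagged as a late commit.
LATE_THRESHOLD_SECONDS = 3600

def is_late_commit(client_ts, now=None):
    now = time.time() if now is None else now
    return (now - client_ts) > LATE_THRESHOLD_SECONDS

print(is_late_commit(1313489338, now=1313489338 + 10))    # False: live insert
print(is_late_commit(1313489338, now=1313489338 + 7200))  # True: replayed later
```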