Is it possible to sneak an "insert" statement (or anything else that changes the database) into a MySQL "select" statement?
I ask because I'm concerned I've found an injection vulnerability, but it's safeguarded from obvious mayhem like '; drop database; -- by virtue of only being able to run a single statement at a time, no matter how many statements the query has been corrupted to contain. But if the back end is executing something like select bar from foo where param = '$improperly_escaped_input', is there something I can input that will compromise my database?
The vulnerability needs to be corrected, regardless. But if I can show an example of how it can be exploited to screw with the data, fixing it goes way up in the priority queue.
Modification of data is only one aspect of a SQL injection vulnerability. Even with just read permissions, an attacker can elevate their privileges or use a blind SQL injection attack to scrape every last bit of data out of your database.
I can't think of a way off the top of my head that data could be modified inside a select statement... but are you sure that you're only able to run a single statement at a time?
Regardless, the other attack vectors should be enough of a threat to raise the priority on the issue.
EDIT: Data modification is possible in MySQL subqueries:
MySQL permits a subquery to refer to a stored function that has data-modifying side effects such as inserting rows into a table. For example, if f() inserts rows, the following query can modify data:
SELECT ... WHERE x IN (SELECT f() ...);
This behavior is nonstandard (not permitted by the SQL standard). In MySQL, it can produce indeterminate results because f() might be executed a different number of times for different executions of a given query depending on how the optimizer chooses to handle it.
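To make that concrete, here is a minimal sketch of how the quoted behavior could be abused through the vulnerable query from the question. It assumes a stored function with a side effect already exists and is executable by the application's database user; the function f, the audit_log table, and the injected value are all illustrative:

-- A stored function with a data-modifying side effect (illustrative):
DELIMITER //
CREATE FUNCTION f() RETURNS INT MODIFIES SQL DATA
BEGIN
  INSERT INTO audit_log (note) VALUES ('side effect');
  RETURN 1;
END //
DELIMITER ;

-- Supplying this value for $improperly_escaped_input:
--   x' OR 1 IN (SELECT f()) -- 
-- turns the back end's single SELECT into:
SELECT bar FROM foo WHERE param = 'x' OR 1 IN (SELECT f()) -- '
-- which can insert rows into audit_log even though only one statement ran.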
So just before the weekend I made a bit of a catastrophic error: I got distracted and forgot to finish the SQL statement in the code I was working on for my site, saving it without any WHERE clause. The result was that every time a new order was created, every single order in the system had its payment option set to whatever the new order used.
This time I was lucky: I had a fairly recent backup, saw the error immediately (though not until 180,000+ orders had had their payment info changed), and could manually deduce what the payments should have been for the orders made after the backup was created.
Unfortunately I don't have the luxury of a good testing environment, which I know is very bad.
Question: To prevent anything like this from happening again, is there any way to set up our SQL server so that an UPDATE statement missing its WHERE clause entirely is treated as WHERE 0 rather than WHERE 1?
You can set the session variable sql_safe_updates to ON with
SET sql_safe_updates=ON;
Read more about it in the manual:
For beginners, a useful startup option is --safe-updates (or --i-am-a-dummy, which has the same effect). Safe-updates mode is helpful for cases when you might have issued an UPDATE or DELETE statement but forgotten the WHERE clause indicating which rows to modify. Normally, such statements update or delete all rows in the table. With --safe-updates, you can modify rows only by specifying the key values that identify them, or a LIMIT clause, or both. This helps prevent accidents. Safe-updates mode also restricts SELECT statements that produce (or are estimated to produce) very large result sets.
... (much more info in the link provided)
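A quick demonstration of the effect (table and column names are illustrative):

SET sql_safe_updates = ON;

-- Now fails with ERROR 1175 instead of touching every row:
UPDATE orders SET payment_option = 'invoice';

-- Still allowed, because a key column identifies the rows:
UPDATE orders SET payment_option = 'invoice' WHERE order_id = 42;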
There are a few IDEs, such as DBeaver, that warn you when an UPDATE or DELETE statement has no WHERE clause. Ideally you can also use SET sql_safe_updates=ON;, but that is really only comfortable in a test environment; I'm not sure you can enable it in production. The most robust approach in SQL is to take a backup in an automated way, for example using triggers that copy the affected rows, before updating or deleting.
We are currently doing a lot of small queries. We execute a query, read the results, and then execute the next one. Since network requests cost a lot of time, this ping-ponging gets slow very fast.
This is why we want to do multiple queries at once, sending all data that the SQL server must know to it, and only retrieving one result (consisting of multiple result sets).
We found that Qt 5.14.1's QSqlQuery has the nextResult() function, but in the documentation (link) it says:
Some databases may execute all statements at once while others may delay the execution until the result set is actually accessed, [...].
MY QUESTION:
So, does MySQL Server 8.0 delay the execution until the result set is actually accessed? If so, we still have a ping-pong for every query, right? Which would still be very slow.
P.S. Our current solution to get down to one ping-pong is to UNION the different result sets (resulting in a kind of block diagonal matrix with lots and lots of NULL values), and this question is meant to find a better way to do this.
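For reference, the UNION workaround looks roughly like this (tables and columns are illustrative): each sub-select pads the other's columns with NULLs, which is what produces the block diagonal shape:

SELECT id, name, NULL AS total, NULL AS created_at FROM customers
UNION ALL
SELECT NULL, NULL, total, created_at FROM orders;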
Should a statement be reused as many times as possible, or is there a limitation?
If there is a limitation, when is the right time to close it?
Is creating and closing a statement a costly operation?
Creating and closing a statement doesn't really make sense. I believe what you mean is creating and closing a cursor. A cursor is a query that you iterate over the results of. Typically you see them in Stored Procedures and Functions in MySQL. Yes, they have a cost to open and close and you should iterate over the entire set.
Alternatively, you may be talking about prepared statements, such as you might create using the PDO library in PHP. In that case, you can use them as many times as possible, and indeed you should, as this is more efficient.
Every time MySQL receives a statement, it translates that into its own internal logic and creates a query plan. Using prepared statements means it only has to do this once rather than every time you call it.
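In MySQL's own SQL syntax, reusing a prepared statement looks like this (statement name, query, and values are illustrative):

PREPARE get_row FROM 'SELECT bar FROM foo WHERE param = ?';
SET @p = 'alice';
EXECUTE get_row USING @p;  -- parsed once, executed as many times as you like
SET @p = 'bob';
EXECUTE get_row USING @p;
DEALLOCATE PREPARE get_row;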
Finally, you might be trying to ask about a connection, rather than a statement. In that case, again, the answer is yes: you can (and should) use it as many times as you need, as there's a significant performance cost to opening one. That said, don't keep it open longer than you need it, because MySQL has a maximum number of connections it can hold open.
Hopefully one of those will answer your question.
My situation:
MySQL 5.5, but possible to migrate to 5.7
Legacy app is executing single MySQL query to get some data (1-10 rows, 20 columns)
Query can be modified via application configuration
Query is very complex SELECT with multiple JOINS and conditions, it's about 20KB of code
Query is well profiled, index usage fine-tuned; I spent much time on this and see no room for improvement without splitting it into smaller queries
With a traditional app I would split this large query into several smaller ones and use caching to avoid many JOINs, but my legacy app does not allow that. I can use only one query to return results
My plan to improve performance is:
Reduce parsing time. Parsing 20KB of SQL on every request, when only the parameter values change, seems inefficient
I'd like to turn this query into prepared statement and only fill placeholders with data
Query will be parsed once and executed multiple times, should be much faster
Problems/questions:
First of all: does the above solution make sense?
MySQL prepared statements seem to be session-scoped. I can't use them, since I cannot execute any additional code ("init code") to create the statements for each session
The other solution I see is a prepared statement generated inside a procedure or function (see the sketch after this list). But the examples I have seen rely on dynamically generating the query with CONCAT() and executing the prepared statement locally inside the procedure. It seems that this kind of statement will be re-prepared on every procedure call, so it will not save any processing time
Is there any way to declare a server-wide, not session-scoped, prepared statement in MySQL, so that it survives application restarts and server restarts?
If not, is it possible to cache prepared statements declared in functions/procedures?
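For reference, the pattern described in the list above looks roughly like this (names are illustrative; note that EXECUTE ... USING takes user variables, not routine parameters):

DELIMITER //
CREATE PROCEDURE run_big_query(IN p_val VARCHAR(64))
BEGIN
  -- Re-prepared on every call, which is why it saves no parsing time:
  PREPARE stmt FROM 'SELECT bar FROM foo WHERE param = ?';
  SET @v = p_val;
  EXECUTE stmt USING @v;
  DEALLOCATE PREPARE stmt;
END //
DELIMITER ;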
I think the following will achieve your goal...
Put the monster in a Stored Routine.
Arrange to always execute that Stored Routine from the same connection. (This may involve restructuring your client and/or inserting a "web service" in the middle.)
The logic here is that Stored Routines are compiled once per connection. I don't know whether that includes caching the "prepare". Nor do I know whether you should leave the query naked, or artificially prepare & execute.
Suggest you try some timings, plus try some profiling. The latter may give you clues into what I am uncertain about.
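A minimal sketch of the first step (procedure name and parameter are illustrative; the one-line body stands in for the 20KB query):

DELIMITER //
CREATE PROCEDURE monster_report(IN p_param VARCHAR(64))
BEGIN
  -- the 20KB SELECT goes here, with p_param in place of the inlined value
  SELECT bar FROM foo WHERE param = p_param;
END //
DELIMITER ;

CALL monster_report('some-value');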
I am experiencing what appears to be the effects of a race condition in an application I am involved with. Generally, a page responsible for some heavy application logic follows this format:
Select from test and determine if there are rows already matching a clause.
If a matching row already exists, we terminate here, otherwise we proceed with the application logic
Insert into the test table with values that will match our initial select.
Normally, this works fine and limits the action to a single execution. However, under high load and user-abuse where many requests are intentionally sent simultaneously, MySQL allows many instances of the application logic to run, bypassing the restriction from the select clause.
It seems to actually run something like:
select from test
select from test
select from test
(all of which pass the check)
insert into test
insert into test
insert into test
I believe this is done for efficiency reasons, but it has serious ramifications in the context of my application. I have attempted to use Get_Lock() and Release_Lock() but this does not appear to suffice under high load as the race condition still appears to be present. Transactions are also not a possibility as the application logic is very heavy and all tables involved are not transaction-capable.
To anyone familiar with this behavior, is it possible to turn this type of handling off so that MySQL always processes queries in the order in which they are received? Is there another way to make such queries atomic? Any help with this matter would be appreciated, I can't find much documented about this behavior.
The problem here is that you have, as you surmised, a race condition.
The SELECT and the INSERT need to be one atomic unit.
The way you do this is via transactions. You cannot safely make the SELECT, return to PHP, and assume the SELECT's results will reflect the database state when you make the INSERT.
If well-designed transactions (the correct solution) are, as you say, not possible - and I still strongly recommend them - you're going to have to make the final INSERT atomically check whether its assumptions are still true (for example via an INSERT that only succeeds when no matching row exists, a stored procedure, or catching the INSERT's error in the application). If they aren't, it will abort back to your PHP code, which must start the logic over.
By the way, MySQL likely is executing requests in the order they were received. With multiple simultaneous connections it's entirely possible to receive SELECT A, SELECT B, INSERT A, INSERT B. Thus, the only "solution" would be to allow only one connection at a time - and that would kill your scalability dead.
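One way to sketch that atomic check-and-insert in a single statement (table and columns are illustrative); because the existence check and the insert happen in one statement, another connection's writes cannot be interleaved between them the way they can between a separate SELECT and INSERT:

INSERT INTO test (job_id, created_at)
SELECT 42, NOW() FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM test WHERE job_id = 42);

-- If ROW_COUNT() returns 0, another request won the race: abort and restart the logic.
SELECT ROW_COUNT();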
Personally, I would go about the check another way.
Attempt to insert the row. If it fails, then there was already a row there.
In this manner, you check for a duplicate and insert the new row in a single query, eliminating the possibility of races.
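A minimal sketch of that approach, assuming a UNIQUE key on the column(s) that define a duplicate (names are illustrative):

ALTER TABLE test ADD UNIQUE KEY uniq_job (job_id);

-- Fails atomically with ER_DUP_ENTRY (1062) if a matching row already exists:
INSERT INTO test (job_id, created_at) VALUES (42, NOW());

-- Or skip duplicates silently and inspect the affected-row count instead:
INSERT IGNORE INTO test (job_id, created_at) VALUES (42, NOW());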