MySQL subquery, UPDATE embedded into SELECT? - mysql

I am doing a MySQL injection on a site (for educational purposes, I promise hehe). It uses MySQL as its database, and I cannot do "; UPDATE...", so my question is: if I do "OR id=(UPDATE...)" as a subquery, which of course doesn't make sense, will it still execute the UPDATE on the table I choose?

Your success or failure will depend on a number of factors. The first major hurdle you face is whether or not your "friend" was smart enough to pass his database inputs in PHP through mysql_real_escape_string, which will prevent you from sending any commands through his textboxes and other input areas.
http://php.net/manual/en/function.mysql-real-escape-string.php
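To illustrate why unescaped input is dangerous and bound parameters are not, here is a minimal sketch using Python's sqlite3 as a stand-in for MySQL (the table, column, and payload are made up for the demo). A bound parameter is treated as a literal value, so the classic ' OR '1'='1 payload does nothing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "nobody' OR '1'='1"

# Unsafe: string concatenation lets the payload rewrite the WHERE clause,
# so this returns every row despite asking for a user named 'nobody...'.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'"
).fetchall()

# Safe: the bound parameter is sent as data, never parsed as SQL,
# so no row matches the literal string and nothing leaks.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()
```

The same distinction holds in PHP: concatenating user input into a query string is exploitable, while mysql_real_escape_string (or, better, bound parameters via PDO/mysqli) closes this particular hole.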
Your second major hurdle, after determining that mysql_real_escape_string has not been used, is to determine the true name of the table you want to update. I personally never expose my true database names to the web; I use pseudonyms which represent the true names.
If you have succeeded this far you should be able to manipulate the MySQL server in any way you see fit.
Check out this link for more helpful tips. I have never utilized any of these techniques in a manner other than testing my own MySQL servers for vulnerabilities.
http://old.justinshattuck.com/2007/01/18/mysql-injection-cheat-sheet/

Related

When should I close a statement in MySQL

Should a statement be reused as many times as possible, or is there a limitation?
If there is a limitation, when is the right time to close it?
Is creating and closing statement a costly operation?
Creating and closing a statement doesn't really make sense. I believe what you mean is creating and closing a cursor. A cursor is a query that you iterate over the results of. Typically you see them in Stored Procedures and Functions in MySQL. Yes, they have a cost to open and close and you should iterate over the entire set.
Alternatively, you may be talking about prepared statements, such as you might create using the PDO library in PHP. In that case, you can use them as many times as possible, and indeed you should, as this is more efficient.
Every time MySQL receives a statement, it translates that into its own internal logic and creates a query plan. Using prepared statements means it only has to do this once rather than every time you call it.
Finally, you might be trying to ask about a connection, rather than a statement. In that case, again, the answer is yes - you can (and should) use it as many times as you need, as there's a significant performance cost to opening one. Though you don't want to keep it open longer than you need it, because MySQL has a maximum number of connections it can hold open.
Hopefully one of those will answer your question.

Relying on MySQL features vs my script

I've always relied on my PHP programming for most processes which I need to do, that I know can be done via a MySQL query or feature. For example:
I know that MySQL has a FOREIGN KEY feature that helps maintain data integrity but I don't rely on MySQL. I might as well make my scripts do this as it is more flexible; I'm basically using MySQL as STORAGE and my SCRIPTS as the processor.
I would like to keep things that way and put most of the load on my code. I make sure that my scripts are robust enough to check for conflicts, orphaned rows, etc. every time they make changes, and I even have a SYSTEM CHECK routine that runs through all these data-verification processes. So I really try to do everything on the script side, as long as it doesn't significantly impact overall performance (I know MySQL can do some things faster internally; I do use MySQL's COUNT() function, of course).
Of course, any direct changes made to the tables will not trigger routines in my script, but that's a different story. I'm pretty comfortable doing this and I plan to keep doing it until I am convinced otherwise.
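For reference, the database-side FOREIGN KEY enforcement mentioned above looks roughly like this - a sketch using Python's sqlite3 with hypothetical table names (SQLite needs the pragma because its foreign-key enforcement is off by default; MySQL/InnoDB enforces declared foreign keys automatically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE stocks (
    serial TEXT,
    product_id INTEGER REFERENCES products(id)
)""")

conn.execute("INSERT INTO products (id) VALUES (1)")
conn.execute("INSERT INTO stocks VALUES ('SN-001', 1)")  # valid parent row

# Inserting a row that points at a nonexistent product is rejected by the
# database itself, with no application-side orphan check required.
try:
    conn.execute("INSERT INTO stocks VALUES ('SN-002', 99)")
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True
```

This is the integrity guarantee the script-side SYSTEM CHECK routine has to re-implement by hand.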
The only thing that I really have an issue with right now is, checking for duplicates.
My current routine is basically inserting products with serial numbers. I need to make sure that there are no duplicate serial numbers entered into the database.
I can simply rely on MySQL UNIQUE constraint to make sure of this or I can do it script side and this is what I did.
This product routine is a BATCH routine where anything from 1 to 500 products will be entered into the database at one call to the script.
Obviously I check for duplicate entries both in the submitted data and in the database. Here's a chunk of my routine:
for ($i = 1; $i <= $qty; $i++) {
    $serial = $serials_array[$i - 1]; // -1 because arrays start at zero
    // check for duplicates within the submitted data +++++++++++++++
    if (isset($serial_check[$serial])) { // duplicate found!
        exit("stat=err&statMsg=Duplicate serial found in your entry! ($serial)");
    } else {
        $serial_check[$serial] = 1;
    }
    // check for duplicates in the database (note: $serial should be
    // escaped or bound here, or this check is itself injectable)
    if (db_checkRow("inventory_stocks", "WHERE serial='$serial'")) {
        exit("stat=err&statMsg=Serial Number is already used. ($serial)");
    }
    //++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
}
OK so basically it's:
1) Check the submitted data for duplicates by building an array that each submitted serial number is checked against - this is no problem and really fast in PHP, even up to 1000 records.
2) But to check the database for duplicates, I have to call a function I wrote (db_checkRow) which basically issues a SELECT statement for EACH serial submitted to see if there's a hit/duplicate.
So, basically, 500 SELECT statements to check for duplicates vs just the MySQL unique constraint feature.
Does it really matter much??
Another reason I design my software like this is that if I ever need to deploy on a different database, I won't rely too much on database features, so I can port my application with very little tweaking.
It's almost guaranteed that MySQL will be faster at checking duplicates. Unless you are running your PHP on some uber-machine and the MySQL is running on an old wristwatch the index checking will be faster and better optimized than anything you can do via PHP.
Not to mention that your process is fine until someone else (or some other app) starts writing to the db. You can save yourself having to write the duplicate checking code in the first place - and again in the next app - and so on.
You're wrong. You're very, dangerously wrong.
The database has been designed for a specific function, and you will never beat MySQL at enforcing a unique constraint; it has been designed to do exactly that as quickly as possible. You cannot do it quicker or more efficiently in PHP, as you still need to access the database to determine whether the data you're inserting would be a duplicate.
This is easily demonstrated by the fact that you have 500 select statements to enforce a single unique constraint. As your table grows this will get even more ridiculous. What happens when your table hits 2,000 rows? What if you have a new table with a million rows?
Use the database features that have been designed explicitly to make your life easy.
You're also assuming that the only way the database will be accessed is through the application. This is an extremely dangerous assumption that is almost certain to be incorrect as time progresses.
Please read this programmers question, which seems like it's been written just for you. Simply put, “Never do in code what you can get the SQL server to do well for you”. I cannot emphasise this enough.
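The constraint-based approach the answers recommend can be sketched like this (Python's sqlite3 standing in for MySQL, with the question's inventory_stocks table): one INSERT per serial, with the database itself rejecting duplicates, instead of a SELECT-then-INSERT round trip per row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory_stocks (serial TEXT UNIQUE)")

conn.execute("INSERT INTO inventory_stocks VALUES ('SN-001')")

# A second insert of the same serial violates the UNIQUE constraint and
# raises an error - no separate SELECT needed to detect the duplicate.
try:
    conn.execute("INSERT INTO inventory_stocks VALUES ('SN-001')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

Besides being faster (one index lookup per insert, done inside the engine), this stays correct when other apps or manual queries write to the table, which the SELECT-then-INSERT pattern does not.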

How many of you have gone from MySQL to Postgresql? Was it worth it?

I'm thinking about moving from MySQL to Postgres for Rails development and I just want to hear what other developers that made the move have to say about it.
I'm looking for personal experiences, not a MySQL vs Postgres shootout - just the pros and cons that you yourself have arrived at. Stuff that folks might not necessarily think of.
Feel free to explain why you moved in the first place as well.
I made the switch and frankly couldn't be happier. While Postgres lacks a few things MySQL has (INSERT IGNORE, REPLACE, upsert-type features, and LOAD DATA INFILE, for me mainly), the features it does have more than make up for them. Its stored procedures are so much more powerful, and it's far easier to write complex functions and aggregates in Postgres.
Performance-wise, if you're comparing to InnoDB (which is only fair because of MVCC), then it feels at least as fast, possibly faster - we weren't able to do some real measurements here due to some constraints, but there certainly hasn't been a performance issue. The complex queries with several joins are certainly faster, MUCH faster.
I find you're more likely to get the correct answer to your issue from the Postgres community. Everybody and their grandmother has 50 different ways to do something in MySQL. With Postgres, hit up the mailing list and you're likely to get lots of very very good help.
The syntax differences and the like are fairly trivial.
Overall, Postgres feels a lot more "grown-up" to me. I used MySQL for years and I now go out of my way to avoid it.
Oh dear, this could end in tears.
Speaking from personal experience only, we moved from MySQL solely because our production system (Heroku) runs PostgreSQL. We had custom-built-for-MySQL queries which were breaking on PostgreSQL. So I guess the moral of the story here is to run the same DBMS everywhere; otherwise you may run into problems.
We also sometimes need to insert records über-quick. For this, we use PostgreSQL's built-in COPY function, used similarly to this in our app:
query = "COPY users(email) FROM STDIN WITH CSV"
values = users.map do |user|
  # Be wary of the types of the objects here; they matter.
  # For instance, if you set the id to a string it will error.
  %Q{#{user["email"]}}
end.join("\n")
raw_connection.exec(query)
raw_connection.put_copy_data(values)
raw_connection.put_copy_end
This inserts ~500,000 records into the database in just under two minutes, and it takes about the same time even if we add more fields.
Another couple of nice things PostgreSQL has over MySQL:
Full text searching
Geographical querying (PostGIS)
Regular-expression matching: email ~ 'hotmail|gmail' matches, and email !~ 'hotmail|gmail' is the negation; the | indicates an "or". (These are POSIX regex operators, distinct from LIKE.)
In summary: PostgreSQL is like bricks & mortar, where MySQL is Lego. Go with whatever "feels" right to you. This is only my personal opinion.
We switched to PostgreSQL for several reasons in early 2007 (or was it the year before?). The main reasons were:
SQL support - PostgreSQL is much better for complex SQL-queries, for example with lots of joins and aggregates
MySQL's stored procedures didn't feel very mature
MySQL license changes - dual licensed, open source and commercial, a split that made me wonder about the future. With PG's BSD license you can do whatever you want.
Faulty behaviour - when MySQL was counting rows, it sometimes just returned an approximate value, not the actual count.
Constraints behaved a bit oddly, silently inserting truncated/adapted values. See http://use.perl.org/~Smylers/journal/34246
The administrative interface PgAdminIII felt more stable and mature than the MySQL counterpart
PostgreSQL is very solid and crash safe in case of an outage
// John
Haven't made the switch myself, but got bitten a few times by MySQL's lack of transactional schema changes, which Postgres apparently supports.
This would solve those nasty problems you get when you move from your dev environment with SQLite to your MySQL server and realise your migrations screwed up and were left half-done! (No, I didn't do this on a production server, but it did make a mess of our shared testing server!)
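Transactional DDL can be demonstrated even with SQLite (a Python sketch with an invented table name; PostgreSQL behaves the same way for CREATE/ALTER/DROP, while MySQL implicitly commits around DDL statements, which is why half-done migrations get stranded there):

```python
import sqlite3

# isolation_level=None gives manual transaction control (no implicit BEGINs)
conn = sqlite3.connect(":memory:", isolation_level=None)

conn.execute("BEGIN")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY)")
# The new table is visible inside the transaction...
exists_during = conn.execute(
    "SELECT name FROM sqlite_master WHERE name = 'accounts'"
).fetchone() is not None

conn.execute("ROLLBACK")
# ...and gone after the rollback: the schema change was undone atomically.
exists_after = conn.execute(
    "SELECT name FROM sqlite_master WHERE name = 'accounts'"
).fetchone() is not None
```

With transactional DDL, a migration that fails halfway simply rolls back, leaving the schema exactly as it was.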

Is there some kind of "strict performance mode" for MySQL?

I'd like to setup one instance of MySQL to flat-out reject certain types of queries. For instance, any JOINs not using an index should just fail and die and show up on the application stack trace, instead of running slow and showing up on the slow_query_log with no easy way to tie it back to the actual test case that caused it.
Also, I'd like to disallow "*" (as in "SELECT * FROM ...") and have that throw essentially a syntax error. Anything which is questionable or dangerous from a MySQL performance perspective should just cause an error.
Is this possible? Other than hacking up MySQL internals... is there an easy way?
If you really want to control what users/programmers do via SQL, you have to put a layer between MySQL and your code that restricts access, like an ORM that only allows for certain tables to be accessed, and only certain queries. You can then also check to make sure the tables have indexes, etc.
You won't be able to know for sure if a query uses an index or not though. That's decided by the query optimizer layer in the database and the logic can get quite complex.
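That said, one way to approximate this gatekeeping at the application layer is to inspect the query plan before running the query. Here is a sketch using Python's sqlite3 and EXPLAIN QUERY PLAN (MySQL's EXPLAIN exposes similar information; the table, index, and helper name are invented for the demo, and real optimizer output is more varied than this simple check assumes):

```python
import sqlite3

def uses_index(conn, query):
    # EXPLAIN QUERY PLAN rows end with a detail string such as
    # "SEARCH t USING INDEX idx_a (a=?)" or "SCAN t" (full-table scan).
    plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
    return not any("SCAN" in row[-1] for row in plan)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.execute("CREATE INDEX idx_a ON t (a)")

# A gatekeeping layer could refuse to run any query where this is False.
indexed = uses_index(conn, "SELECT * FROM t WHERE a = 1")      # uses idx_a
unindexed = uses_index(conn, "SELECT * FROM t WHERE b = 1")    # full scan
```

This only catches the obvious cases - as noted above, the optimizer's choices depend on statistics and can change as the data grows, so it's a heuristic, not a guarantee.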
Impossible.
What you could do to make things work better is to create views you have optimized yourself and give users access only to those views. Then you can be sure the relevant SELECTs will use indexes.
But they can still destroy performance: just do a crazy JOIN across some views and performance is gone.
As far as I'm aware there's nothing baked into MySQL that provides this functionality, but any answer of "Impossible", or similar, is incorrect. If you really want to do this then you could always download the source and add the functionality yourself, unfortunately this would certainly class as "hacking up the MySQL internals".

retrieval of user-supplied data: any benefit for prepared statements

Prepared statements are good for preventing SQL injection when the user supplies data and we use that data for DB insertion, or even just to structure the query. But is there really any benefit to PDO when I'm retrieving previously inserted user-supplied data from the database?
It sounds to me like the answer is no; it's already in. As long as the query that retrieves it isn't tarnished by user-supplied parameters (e.g. SELECT * FROM table is not tarnished by user-supplied data), it's OK to use anything, even non-PDO, even if the data being retrieved was at some point in the past user-supplied. Any input on this?
My guess is that once people start using PDO in their code, it becomes a matter of uniformity to keep using it for all pieces of their code and never go back to normal mysql (even if something is slightly harder with PDO).
Consistency is a benefit. In fact, it's the main (theoretical) benefit of using PDO. Preventing injection through bound parameters is orthogonal to PDO.