I'd like to set up one instance of MySQL to flat-out reject certain types of queries. For instance, any JOIN not using an index should just fail and die and show up in the application stack trace, instead of running slowly and showing up in the slow query log with no easy way to tie it back to the actual test case that caused it.
Also, I'd like to disallow "*" (as in "SELECT * FROM ...") and have that throw essentially a syntax error. Anything which is questionable or dangerous from a MySQL performance perspective should just cause an error.
Is this possible? Other than hacking up MySQL internals... is there an easy way?
If you really want to control what users/programmers do via SQL, you have to put a layer between MySQL and your code that restricts access, like an ORM that only allows for certain tables to be accessed, and only certain queries. You can then also check to make sure the tables have indexes, etc.
You won't be able to know for sure if a query uses an index or not though. That's decided by the query optimizer layer in the database and the logic can get quite complex.
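If the layer in front of the database wants a rough signal, one option (a sketch only; the table and column names below are made up) is to run EXPLAIN on the statement before executing it and refuse anything whose plan reports a full table scan:

EXPLAIN SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE c.email = 'someone@example.com';

-- A row in the EXPLAIN output with type = 'ALL' and key = NULL means a
-- full table scan; the wrapping layer could reject such a query.
-- Keep in mind this is only the optimizer's estimate, as noted above.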
Impossible.
What you could do to make things work better is to create views optimized by you and give users access only to those views. Then you can be sure the relevant SELECTs will use indexes.
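For example (just a sketch; the database, table, view, and account names here are invented), you could wrap an indexed access path in a view and grant the application account access to the view only:

-- A view that exposes only what the users need, written so it can use
-- an index on orders.created_at (hypothetical schema).
CREATE VIEW shop.recent_orders AS
SELECT id, customer_id, created_at
FROM shop.orders
WHERE created_at > NOW() - INTERVAL 30 DAY;

-- The application account gets the view, not the base table.
GRANT SELECT ON shop.recent_orders TO 'app_user'@'%';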
But they can still destroy performance: just write a crazy JOIN across a few of those views and performance is gone.
As far as I'm aware there's nothing baked into MySQL that provides this functionality, but any answer of "Impossible", or similar, is incorrect. If you really want to do this then you could always download the source and add the functionality yourself, unfortunately this would certainly class as "hacking up the MySQL internals".
It is obvious that executing a database query in a loop has performance issues, but if the query is used as a prepared statement, does that make any difference?
Which is preferable: joining the tables together to get the results, or using a prepared statement in a loop?
Using a join is almost always preferable to looping over a result set to fetch additional results.
Relational database management systems are built for combining related results, and do so very efficiently. Additionally, this will save you many round trips to the database, which can become costly if done excessively, regardless of whether you're using prepared statements or not.
The overhead of prepared statements is probably not going to be the escaping of the inputs; it's going to be the connection (or reconnection) to the database and the act of sending the finalized SQL statement. That interface between your code and the relational database is likely to be the slowest point of the process.
However, for my part, I would generally go for whatever is simplest and most maintainable from the start, and only worry about performance if it actually proves to be slow. Do write the data-grabbing functionality in a separate function or method, though, so that the implementation can change if performance turns out to need optimizing.
At that point you can start optimizing your SQL and use joins or unions as alternatives to multiple prepared statements.
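As a rough illustration (the tables and columns here are invented), the looped version runs one statement per parent row, while the join pulls everything back in a single round trip:

-- Looped approach: prepared once, then executed N times with a different ?
SELECT id, total FROM orders WHERE user_id = ?;

-- Joined approach: one statement, one round trip
SELECT u.id, u.name, o.id AS order_id, o.total
FROM users u
JOIN orders o ON o.user_id = u.id;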
I am developing with CodeIgniter, and when it comes to complicated database queries
I am using
$this->db->query('my complicated query');
and then casting to an array of objects using $query->result();
and so far it has been very good and useful.
Now my question is:
what if I want to create a MySQL view and select from it? Will
$this->db->from('mysql_view')
treat the MySQL view as if it were a table or not?
And if I do that, will there be any difference in performance? Are views faster than a normal query?
What would be best practice with CodeIgniter and a MySQL database when dealing with complicated queries, given that, as I understand it, ActiveRecord is just a query builder and, according to some tests, even a little slower?
Thanks in advance for your advice.
MySQL views are queried the same way as tables. On a side note, you can't have a table and a view share the same name.
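For example (the table and view names here are made up), once the view exists you select from it exactly as you would from a table, so $this->db->from('active_users') behaves the same as $this->db->from('users'):

CREATE VIEW active_users AS
SELECT id, username, email
FROM users
WHERE is_active = 1;

-- Queried like any other table:
SELECT username, email FROM active_users WHERE id = 42;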
It depends on the query you use in the view; views can benefit from internal caching, so in the long run, yes, they can be faster.
Best practice in this case is to use whatever you and your team find easy to work with. I personally stick to $this->db->query(), since I find it easier to extend a simple query of this kind with advanced functionality like sub-queries or other things that are hard and/or impossible to do with the CI query builder. My advice would be to stick to one style of query: if you use ->query(), use it everywhere; if you use the query builder, use it wherever it is possible to achieve the result with it.
I am doing a MySQL injection on a site (for educational purposes, I promise, hehe). It uses MySQL as its database, and I cannot do "; UPDATE...", so my question is: if I do "OR id=(update...)" as a subquery, which of course doesn't make sense, will it still execute the UPDATE on the table I choose?
Your success or failure will depend on a number of factors. The first major hurdle you face is whether or not your "friend" was smart enough to escape his database inputs, for instance with PHP's mysql_real_escape_string, which will prevent you from sending any commands through his text boxes and/or other input areas.
http://php.net/manual/en/function.mysql-real-escape-string.php
Your second major hurdle, after determining that mysql_real_escape_string has not been used, is to determine the true name of the table you want to update. I personally never expose my true database names to the web; I use pseudo-names which represent the true names.
If you have succeeded this far, you should be able to manipulate the MySQL server in any way you see fit.
Check out this link for more helpful tips. I have never utilized any of these techniques in a manner other than testing my own MySQL servers for vulnerabilities.
http://old.justinshattuck.com/2007/01/18/mysql-injection-cheat-sheet/
I'm thinking about moving from MySQL to Postgres for Rails development and I just want to hear what other developers that made the move have to say about it.
I'm looking for personal experiences, not a MySQL vs. Postgres shootout; just the pros and cons that you yourself have arrived at. Stuff that folks might not necessarily think of.
Feel free to explain why you moved in the first place as well.
I made the switch and frankly couldn't be happier. While Postgres lacks a few things MySQL has (INSERT IGNORE, REPLACE, upsert-type statements, and LOAD DATA INFILE, mainly, for me), the features it does have more than make up for it. Its stored procedures are so much more powerful, and it's far easier to write complex functions and aggregates in Postgres.
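As a small taste of the kind of thing that is pleasant to write there (only a sketch, with made-up table and column names), a simple PL/pgSQL function looks like this:

-- Sum the line items for one order; returns NULL if the order has none.
CREATE OR REPLACE FUNCTION order_total(p_order_id integer)
RETURNS numeric AS $$
BEGIN
  RETURN (SELECT SUM(quantity * unit_price)
          FROM order_items
          WHERE order_id = p_order_id);
END;
$$ LANGUAGE plpgsql;

-- Usage:
-- SELECT order_total(123);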
Performance-wise, if you're comparing to InnoDB (which is the only fair comparison because of MVCC), it feels at least as fast, possibly faster; we weren't able to do real measurements here due to some constraints, but there certainly hasn't been a performance issue. Complex queries with several joins are certainly faster, MUCH faster.
I find you're more likely to get the correct answer to your issue from the Postgres community. Everybody and their grandmother has 50 different ways to do something in MySQL; with Postgres, hit up the mailing list and you're likely to get lots of very, very good help.
The syntax differences and the like are fairly trivial.
Overall, Postgres feels a lot more "grown-up" to me. I used MySQL for years and I now go out of my way to avoid it.
Oh dear, this could end in tears.
Speaking from personal experience only, we moved from MySQL solely because our production system (Heroku) runs PostgreSQL. We had custom-built-for-MySQL queries which were breaking on PostgreSQL. So I guess the moral of the story here is to run the same DBMS everywhere, otherwise you may run into problems.
We also sometimes need to insert records über-quick. For this, we use PostgreSQL's built-in COPY, used similarly to this in our app:
query = "COPY users(email) FROM STDIN WITH CSV"
values = users.map! do |user|
# Be wary of the types of the objects here, they matter.
# For instance if you set the id to a string it will error.
%Q{#{user["email"]}}
end.join("\n")
raw_connection.exec(query)
raw_connection.put_copy_data(values)
raw_connection.put_copy_end
This inserts ~500,000 records into the database in just under two minutes, and it takes about the same time if we add more fields.
Another couple of nice things PostgreSQL has over MySQL:
Full text searching
Geographical querying (PostGIS)
Pattern matching with regular expressions: the LIKE-style check becomes email ~ 'hotmail|gmail', and the NOT LIKE equivalent is email !~ 'hotmail|gmail'. The | indicates an or.
In summary: PostgreSQL is like bricks & mortar, where MySQL is Lego. Go with whatever "feels" right to you. This is only my personal opinion.
We switched to PostgreSQL for several reasons in early 2007 (or was it the year before?). The main reasons were:
SQL support - PostgreSQL is much better for complex SQL-queries, for example with lots of joins and aggregates
MySQL's stored procedures didn't feel very mature
MySQL license changes - dual licensed, open source and commercial, a split that made me wonder about the future. With PG's BSD license you can do whatever you want.
Faulty behaviour - when MySQL was counting rows, sometimes it just returned an approximate value, not the actual count.
Constraints behaved a bit oddly, inserting truncated/adapted values. See http://use.perl.org/~Smylers/journal/34246
The administrative interface PgAdminIII felt more stable and mature than the MySQL counterpart
PostgreSQL is very solid and crash safe in case of an outage
// John
Haven't made the switch myself, but got bitten a few times by MySQL's lack of transactional schema changes, which Postgres apparently supports.
This would solve those nasty problems you get when you move from your dev environment with SQLite to your MySQL server and realise your migrations screwed up and were left half-done! (No, I didn't do this on a production server, but it did make a mess of our shared testing server!)
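For instance, in Postgres a schema change can be rolled back together with everything else in the transaction (a minimal sketch; the table and column names are made up):

BEGIN;
ALTER TABLE users ADD COLUMN nickname text;
-- ...a later step of the migration blows up...
ROLLBACK;
-- The nickname column is gone; the half-done migration leaves no trace.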
I want to create a query result page for a simple search, and I don't know: should I use views in my database, or would it be better to write the query in my code with the same syntax I would use to create the view?
What is the better solution for merging 7 tables when I want to build a search module for a site which has lots of users and page loads?
(I'm searching across several tables at the same time.)
You would be better off using a plain query with joins instead of a view. Views in MySQL are not optimized. Be sure to have your tables properly indexed on the fields being used in the joins.
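Something along these lines (the table and column names are invented for illustration), with the join columns indexed:

SELECT p.id, p.title, u.username, c.name AS category
FROM posts p
JOIN users u ON u.id = p.user_id
JOIN categories c ON c.id = p.category_id
WHERE p.title LIKE CONCAT('%', ?, '%');

-- Supporting indexes for the joins (users.id and categories.id are
-- assumed to be primary keys already):
CREATE INDEX idx_posts_user_id ON posts (user_id);
CREATE INDEX idx_posts_category_id ON posts (category_id);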
If you always use all 7 tables, I think you should use views. Be aware that MySQL rewrites your original query when creating the view, so it's always good practice to save your query somewhere else.
Also, remember you can tweak MySQL's query cache variables so that it stores more data, making your queries respond faster. However, I would suggest using some other method for caching, like memcached. The paid version of MySQL supports memcached natively, but I'm sure you can implement it in the application layer no problem.
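For reference, on MySQL versions that still ship the query cache (it was removed in 8.0), the knobs are server variables rather than environment variables; the size below is only a placeholder:

-- Check the current settings
SHOW VARIABLES LIKE 'query_cache%';

-- Give the cache more room (example value, not a recommendation)
SET GLOBAL query_cache_size = 67108864; -- 64 MB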
Good luck!