I found this picture in a document. It shows the queries running on a given SQL database. It looks similar to DBeaver, but I couldn't find a way to get to this screen from DBeaver. Can someone help me figure out what software this is? It would be really helpful for troubleshooting performance-related issues.
That's MySQL's SHOW PROCESSLIST.
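You can get the same view from any MySQL client:

SHOW FULL PROCESSLIST;

-- or, with filtering, via information_schema:
SELECT id, user, db, time, state, info
FROM information_schema.PROCESSLIST
WHERE command <> 'Sleep';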
IN ( SELECT ... ) often optimizes poorly; try to rewrite using a JOIN.
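For example, a sketch with hypothetical orders/refunds tables (not from your screenshot):

SELECT o.id, o.total
FROM orders o
WHERE o.id IN ( SELECT r.order_id FROM refunds r );

-- usually runs better rewritten as a JOIN
-- (DISTINCT keeps the semantics if refunds has several rows per order):
SELECT DISTINCT o.id, o.total
FROM orders o
JOIN refunds r ON r.order_id = o.id;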
This may help:
INDEX(status, Calendar_Date, Date_Received)
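Applied with ALTER TABLE, assuming your table is named t (the real name isn't shown here):

ALTER TABLE t ADD INDEX (status, Calendar_Date, Date_Received);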
For more help we need to see the queries and SHOW CREATE TABLE. You can obfuscate the names, but don't make it too hard to read.
(The TIMESTAMP qualifier is not required.)
Frameworks (e.g., DBeaver) are handy for getting started. But ultimately you need to understand the underlying database.
I am generating some MySQL queries using PHP. In some cases, my code generates duplicate conditions in a query as a security precaution. For example, say I have a table UploadedImages, which contains images uploaded by a user and is connected to the User table via a reference. When a plain user without admin rights queries that table, I forcibly add a WHERE condition so that only images belonging to him are retrieved.
Because of this forceful inclusion, the query I generate sometimes ends up with duplicate WHERE conditions:
SELECT * FROM UploadedImages WHERE
accounts_AccountId = '143' AND
DateUploaded > '2017-10-11 21:42:32' AND
accounts_AccountId = '143'
Should I bother cleaning up this query before running it, or will MariaDB clean it up for me? (I.e., will this query run any slower, and could it return erroneous results if I don't remove the identical duplicate conditions beforehand?)
If your question is "Should I bother cleaning it up?", then yes, you should clean up the code that produces this: the fact that it can include the same clause multiple times suggests the database layer is not well abstracted. A database layer should be rewritable to use a different database provider without changing the code that depends on it, and that does not appear to be the case here.
If your question is "Does adding the same restriction twice slow the query?" then the answer is no, not significantly.
You can answer the question for yourself: run EXPLAIN SELECT ... on both queries. If the output is the same, then the duplicate was cleaned up.
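Using your query, the check would look like this:

EXPLAIN SELECT * FROM UploadedImages WHERE
accounts_AccountId = '143' AND
DateUploaded > '2017-10-11 21:42:32' AND
accounts_AccountId = '143';

EXPLAIN SELECT * FROM UploadedImages WHERE
accounts_AccountId = '143' AND
DateUploaded > '2017-10-11 21:42:32';

-- identical output means the duplicate condition was optimized away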
I have one question. I am working on a MySQL database with a large amount of data, and I am looking to set up the general query log. Which setup will affect performance less: writing the general query log to a separate file, or logging to the mysql.general_log table? Any suggestions?
In my quest for a similar answer I came across this post that I feel explains the options and their impact pretty clearly.
http://www.fromdual.com/general_query_log_vs_mysql_performance
According to the post, having the general log disabled is your best-performing option. Logical, right?
The next best option is writing to a file (non-CSV), and writing to a database table comes trailing in after that.
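For reference, both destinations can be switched at runtime with standard MySQL/MariaDB variables:

-- best performing: leave it off
SET GLOBAL general_log = 'OFF';

-- next best: log to a file
SET GLOBAL log_output = 'FILE';
SET GLOBAL general_log = 'ON';

-- slowest: log to the mysql.general_log table
SET GLOBAL log_output = 'TABLE';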
How do I find the inefficient queries in a MySQL database? I want to do performance tuning on my queries, but I couldn't find where my queries are located. Please suggest where I can find the MySQL queries for my tables.
Thanks
Prabhakaran.R
You can enable the general query log and the slow query log.
Enabling the general query log will log all queries, which might be heavy if you have many reads/writes. With the slow query log, you set a threshold and only queries taking longer than that are logged. After that, you can analyze the log manually or use the available tools (Percona has great ones).
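For example (the 2-second threshold is just an illustration; pick your own):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2; -- log queries taking longer than 2 seconds
SHOW VARIABLES LIKE 'slow_query_log_file'; -- where the log lands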
Have you analyzed your queries with EXPLAIN plans? You may be able to find a query that returns the result set you want with less load on the query engine. Remember to select only the columns you actually need (try to avoid SELECT *), plan your indexes well, and use inner/outer joins in favour of a huge list of WHERE clause filters. There is a before/after sketch below.
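A sketch with made-up tables:

-- instead of an implicit join plus SELECT *:
SELECT * FROM orders, customers
WHERE orders.customer_id = customers.id AND customers.city = 'London';

-- select only what you need and join explicitly:
SELECT o.id, o.total, c.name
FROM orders o
INNER JOIN customers c ON c.id = o.customer_id
WHERE c.city = 'London';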
Good luck.
RP
In addition to what the others said, use pt-query-digest (from percona.com) to summarize the slow log. That will tell you the worst queries.
Performance tuning often involves knowing what indexes to put on tables. My index cookbook is one place to learn how to build an INDEX for a given SELECT.
I'd like to set up one instance of MySQL to flat-out reject certain types of queries. For instance, any JOIN not using an index should just fail and die and show up in the application stack trace, instead of running slow and showing up in the slow_query_log with no easy way to tie it back to the actual test case that caused it.
Also, I'd like to disallow "*" (as in "SELECT * FROM ...") and have that throw essentially a syntax error. Anything which is questionable or dangerous from a MySQL performance perspective should just cause an error.
Is this possible? Other than hacking up MySQL internals... is there an easy way?
If you really want to control what users/programmers do via SQL, you have to put a layer between MySQL and your code that restricts access, like an ORM that only allows for certain tables to be accessed, and only certain queries. You can then also check to make sure the tables have indexes, etc.
You won't be able to know for sure whether a query uses an index, though. That's decided by the query optimizer layer in the database, and the logic can get quite complex.
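The closest built-in switch I know of is sql_safe_updates, and it only covers one narrow case (UPDATE/DELETE with no key in the WHERE clause):

SET SESSION sql_safe_updates = 1;
-- this now fails with error 1175 instead of scanning the whole table
-- (t and col are placeholder names):
UPDATE t SET col = 0;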
Impossible.
What you could do to make things work better is creating views optimized by you and giving the users access only to those views. Then you are sure the relevant SELECTs will use indexes.
But they can still destroy performance: just do a crazy JOIN on some views and performance is gone.
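A sketch of the view approach, with made-up names:

CREATE VIEW v_recent_orders AS
SELECT id, customer_id, total
FROM orders
WHERE order_date >= CURRENT_DATE - INTERVAL 30 DAY;

-- the restricted account gets the view only, never the base table:
GRANT SELECT ON mydb.v_recent_orders TO 'app_user'@'%';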
As far as I'm aware there's nothing baked into MySQL that provides this functionality, but any answer of "Impossible", or similar, is incorrect. If you really want to do this then you could always download the source and add the functionality yourself, unfortunately this would certainly class as "hacking up the MySQL internals".
I am using Joomla 1.5 and VirtueMart 1.1.3.
There is an issue where tmp files of 1.6 GB are created every time a certain query is executed. Is this normal? I think VirtueMart is using a huge join statement to pull the whole products table and several other tables. I found the file that builds the query, but I don't know where to begin optimizing it. Even if I did, VirtueMart seems to use this one file to build all SQL statements, so I could end up breaking something.
You could look at the MySQL slow query log (and/or enable it) to see the particular query taking time and space. With that in hand, you can use MySQL's EXPLAIN functionality to see why the query is slow.
If you're lucky, the VirtueMart developers simply haven't added suitable indexes to their tables, which causes MySQL to have to do things the slow way (i.e. filesort, etc.). If you're unlucky, changing the schema won't help and you'll have to take this up with the VirtueMart developers, or fix it yourself.
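Once the slow log has pinned down the query, something like this shows whether indexes are missing (the table name is a placeholder; use the tables from the logged query):

SHOW INDEX FROM jos_vm_product;

EXPLAIN SELECT ...; -- paste the logged query here
-- join steps with type ALL and key NULL are candidates for new indexes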
In any case, if you find a solution, you probably should let the VirtueMart team know.
Best of luck!