How to throttle or prioritize a query in MySQL

Is there any way to prioritize or throttle a query in MySQL?
I'm a DBA on a server that sees a lot of unoptimized queries come in, and they just destroy the CPU. I'm looking at throttling certain users who hit the database in a poor fashion.
Clarification:
I realize that MySQL has facilities built in to limit the number of queries, connections, etc. But those aren't really the issue; it's that once in a blue moon a user will send an unoptimized query, and I need to time it out or something similar.

http://dev.mysql.com/doc/refman/4.1/en/user-resources.html
Starting from MySQL 4.0.2, you can limit access to the following server resources for individual accounts:
The number of queries that an account can issue per hour
The number of updates that an account can issue per hour
The number of times an account can connect to the server per hour
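For example, these limits are attached to an account with the GRANT ... WITH syntax described on that page (the account name and the numbers here are invented):

GRANT SELECT ON mydb.* TO 'reporting'@'%'
    WITH MAX_QUERIES_PER_HOUR 500
         MAX_UPDATES_PER_HOUR 100
         MAX_CONNECTIONS_PER_HOUR 50;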
Additional Info -
MySQL does not have a query timeout value, or any other built-in way to throttle an individual query. However, it is pretty trivial to write a script that will kill long-running queries. You could even capture the user and query so that you can beat the offending user/programmer later.
Here is an example of one using Perl.
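A rough sketch of the same idea in plain SQL, assuming MySQL 5.1+ where information_schema.PROCESSLIST exists (the 30-second threshold and the excluded user are invented; a cron job or small script would run this and feed the output back to the server):

-- Capture who has been running what for more than 30 seconds,
-- so you know whom to blame later:
SELECT id, user, host, time, info
FROM information_schema.PROCESSLIST
WHERE command = 'Query' AND time > 30;

-- Generate the KILL statements to execute:
SELECT CONCAT('KILL QUERY ', id, ';')
FROM information_schema.PROCESSLIST
WHERE command = 'Query' AND time > 30 AND user <> 'root';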

I know that phpMyAdmin has an area for Max # of Queries per hour, Max # of Updates per hour, Max # of Connections per hour, and Max # of User Connections per user. There are probably other ways to set these values besides phpMyAdmin; the GRANT syntax above is one.

Related

How to get MySQL status variables every second?

I am using MySQL Workbench 6.3 CE. I want to take snapshots of the MySQL status variables.
I want to store the values of the status variables every second while a query is executing.
I can simply show the variables using 'show global status', but I want to run that automatically every second.
You can run a procedure and a query at the same time by having two separate connections. Workbench is a handy tool, but you should learn to use the mysql command-line client, too.
The query is rather simple. INDEX(l_shipdate) is likely to be the best for it.
The real way to speed up the query (assuming that is your ultimate goal) is to build and maintain a "Summary table" of daily or monthly subtotals. Then sum the sums and sum the counts. Avg is (SUM(sums)/SUM(counts)).
More discussion: http://mysql.rjweb.org/doc.php/summarytables
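A minimal sketch of the summary-table idea, assuming a TPC-H-style lineitem table to match the l_shipdate index suggestion above (the summary table itself is invented):

-- Daily subtotals, maintained once per day (cron, EVENT, or a WHILE loop):
CREATE TABLE daily_summary (
    dy        DATE NOT NULL,
    sum_price DECIMAL(20,2) NOT NULL,
    cnt       INT UNSIGNED NOT NULL,
    PRIMARY KEY (dy)
);

INSERT INTO daily_summary (dy, sum_price, cnt)
SELECT l_shipdate, SUM(l_extendedprice), COUNT(*)
FROM lineitem
WHERE l_shipdate = CURDATE() - INTERVAL 1 DAY
GROUP BY l_shipdate;

-- The average over any range: sum the sums, divide by the summed counts.
SELECT SUM(sum_price) / SUM(cnt) AS avg_price
FROM daily_summary
WHERE dy >= '1995-01-01' AND dy < '1996-01-01';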
Be cautious about running (via EVENT or cron) any code that might take longer than the interval time. If it gets behind, it is likely to cascade and bring the server down, or at least slow things down severely. For that reason, I much prefer the WHILE loop, sketched below.
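A minimal sketch of that WHILE-loop approach, assuming MySQL 5.6 (or 5.7 with show_compatibility_56=ON) where the counters live in information_schema.GLOBAL_STATUS; the table and procedure names are invented:

CREATE TABLE status_snapshots (
    captured_at    DATETIME NOT NULL,
    variable_name  VARCHAR(64) NOT NULL,
    variable_value VARCHAR(1024)
);

DELIMITER //
CREATE PROCEDURE snapshot_status(IN seconds INT)
BEGIN
    DECLARE i INT DEFAULT 0;
    WHILE i < seconds DO
        -- One timestamped snapshot of every status variable:
        INSERT INTO status_snapshots
            SELECT NOW(), VARIABLE_NAME, VARIABLE_VALUE
            FROM information_schema.GLOBAL_STATUS;
        DO SLEEP(1);
        SET i = i + 1;
    END WHILE;
END//
DELIMITER ;

-- In one connection: CALL snapshot_status(60);
-- In a second connection, run the query you want to profile.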

MySQL server very high load

I run a website with ~500 real-time visitors, ~50k daily visitors and ~1.3 million total users. I host my server on AWS, where I use several instances of different kinds. When I started the website the different instances cost roughly the same. When the website started to gain users, the RDS instance (MySQL DB) CPU constantly kept hitting the roof; I had to upgrade it several times, and now it has started to take up the main part of the performance and monthly cost (around 95% of ~$2.8k/month). I currently use a database server with 16 vCPUs and 64 GiB of RAM, and I also use Multi-AZ deployment to protect against failures. I wonder if it is normal for the database to be that expensive, or if I have done something terribly wrong?
Database Info
At the moment my database has 40 tables; most of them have ~100k rows, some have ~2 million, and one has 30 million.
I have a system that archives rows older than 21 days, once they are no longer needed.
Website Info
The website mainly use PHP, but also some NodeJS and python.
Most of the functions of the website work like this:
1. Start transaction
2. Insert row
3. Get last inserted id (lastrowid)
4. Do some calculations
5. Update the inserted row
6. Update the user
7. Commit transaction
I also run around 100 bots which poll the database at 10-30 second intervals; they also insert into/update the database sometimes.
Extra
I have done several things to try to lower the load on the database, such as enabling the database cache, using a Redis cache for some queries, trying to remove very slow queries, and upgrading the storage type to "Provisioned IOPS SSD". But nothing seems to help.
These are the changes I have made to the settings parameters:
I have thought about creating a MySQL cluster of several smaller instances, but I don't know if this would help, and I also don't know if it works well with transactions.
If you need any more information, please ask; any help on this issue is greatly appreciated!
In my experience, as soon as you ask the question "how can I scale up performance?" you know you have outgrown RDS (edit: I admit my experience that leads me to this opinion may be outdated).
It sounds like your query load is pretty write-heavy. Lots of inserts and updates. You should increase the innodb_log_file_size if you can on your version of RDS. Otherwise you may have to abandon RDS and move to an EC2 instance where you can tune MySQL more easily.
I would also disable the MySQL query cache. On every insert/update, MySQL has to scan the query cache to see if there are any cached results that need to be purged. This is a waste of time if you have a write-heavy workload. Increasing your query cache to 2.56GB makes it even worse! Set the cache size to 0 and the cache type to 0.
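A sketch of the settings involved (on RDS these go in the DB parameter group rather than my.cnf; innodb_log_file_size is not dynamic and needs a restart, and the 1G value is only an example):

query_cache_size     = 0      # nothing to scan on every write
query_cache_type     = 0      # OFF
innodb_log_file_size = 1G     # bigger redo log for a write-heavy load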
I have no idea what queries you run, or how well you have optimized them. MySQL's optimizer is limited, so it's frequently the case that you can get huge benefits from redesigning SQL queries. That is, changing the query syntax, as well as adding the right indexes.
You should do a query audit to find out which queries are accounting for your high load. A great free tool to do this is https://www.percona.com/doc/percona-toolkit/2.2/pt-query-digest.html, which can give you a report based on your slow query log. Download the RDS slow query log with the http://docs.aws.amazon.com/cli/latest/reference/rds/download-db-log-file-portion.html CLI command.
Set your long_query_time=0, let it run for a while to collect information, then change long_query_time back to the value you normally use. It's important to collect all queries in this log, because you might find that 75% of your load is from queries under 2 seconds, but they are run so frequently that it's a burden on the server.
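For example (on RDS long_query_time is a dynamic parameter; the 2-second value stands in for whatever threshold you normally use):

SET GLOBAL long_query_time = 0;   -- log every query while collecting a sample
-- ... let it run for a while, then feed the slow log to pt-query-digest ...
SET GLOBAL long_query_time = 2;   -- restore your usual threshold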
After you know which queries are accounting for the load, you can make some informed strategy about how to address them:
Query optimization or redesign
More caching in the application
Scale out to more instances
I think the answer is "you're doing something wrong". It is very unlikely you have reached an RDS limitation, although you may be hitting limits on some parts of it.
Start by enabling detailed monitoring. This will give you some OS-level information which should help determine what your limiting factor really is. Look at your slow query logs and database stats - you may have some queries that are causing problems.
Once you understand the problem - which could be bad queries, I/O limits, or something else - then you can address them. RDS allows you to create multiple read replicas, so you can move some of your read load to slaves.
You could also move to Aurora, which should give you better I/O performance. Or use PIOPS (or allocate more disk, which should increase performance). You are using SSD storage, right?
One other suggestion: if your calculations (step 4 above) take a significant amount of time, you might want to look at breaking the work into two or more transactions.
A query_cache_size of more than 50M is bad news. You are writing often -- many times per second per table? That means the QC needs to be scanned many times/second to purge the entries for the table that changed. This is a big load on the system when the QC is 2.5GB!
query_cache_type should be DEMAND if you can justify it being on at all. And in that case, pepper the SELECTs with SQL_CACHE and SQL_NO_CACHE.
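A sketch of what DEMAND mode looks like in practice (table names invented):

-- With query_cache_type = DEMAND, nothing is cached unless asked for:
SELECT SQL_CACHE name FROM countries WHERE id = 42;        -- rarely-changing data
SELECT SQL_NO_CACHE balance FROM accounts WHERE id = 42;   -- hot, write-heavy data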
Since you have the slowlog turned on, look at the output with pt-query-digest. What are the first couple of queries?
Since your typical operation involves writing, I don't see an advantage of using readonly Slaves.
Are the bots running at random times? Or do they all start at the same time? (The latter could cause terrible spikes in CPU, etc.)
How are you "archiving" "old" records? It might be best to use PARTITIONing and "transportable tablespaces". Use PARTITION BY RANGE and 21 partitions (plus a couple of extras).
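A minimal sketch of that layout (table and column names invented; note that the partitioning column must be part of the primary key):

CREATE TABLE events (
    id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    created DATETIME NOT NULL,
    payload TEXT,
    PRIMARY KEY (id, created)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(created)) (
    PARTITION p20180101 VALUES LESS THAN (TO_DAYS('2018-01-02')),
    PARTITION p20180102 VALUES LESS THAN (TO_DAYS('2018-01-03')),
    -- ... one partition per day: 21 live days plus a couple of spares ...
    PARTITION pmax VALUES LESS THAN MAXVALUE
);

-- Retiring the oldest day is then a cheap metadata operation
-- (use EXCHANGE PARTITION first if the rows must be kept):
ALTER TABLE events DROP PARTITION p20180101;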
Your typical transaction seems to work with one row. Can it be modified to work with 10 or 100 all at once? (More than 100 is probably not cost-effective.) SQL is much more efficient in doing lots of rows at once versus lots of queries of one row each. Show us the SQL; we can dig into the details.
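A sketch of the batching idea (tables invented); one statement touching many rows is far cheaper than many statements touching one row each:

-- One multi-row INSERT instead of 100 single-row ones:
INSERT INTO actions (user_id, amount)
VALUES (1, 10.00), (2, 15.50), (3, 7.25);

-- One UPDATE covering many rows at once (multi-table UPDATE syntax):
UPDATE users u
JOIN actions a ON a.user_id = u.id
SET u.balance = u.balance + a.amount
WHERE a.processed = 0;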
It seems strange to insert a new row, then update it, all in one transaction. Can't you completely compute it before doing the insert? Hanging onto the inserted_id for so long probably interferes with others doing the same thing. What is the value of innodb_autoinc_lock_mode?
Do the "users" interact with each other? If so, in what way?

How to limit potential mysql performance issues caused by querying users?

I have some people who need to run queries on my DB;
this is mostly done using Workbench.
The pro of letting them query the DB directly instead of providing them a service is that I don't need to set up a service every time they need different data.
The con, and my worry, is that they may (potentially) launch queries that cause the mysql process to hang...
What's the way (is there one?) to limit the resources that a MySQL user may occupy by querying? (I'm thinking of something like configuring a short query timeout per user... or maybe there's something better.)
Essentially, no.
Some people have invented a "long query killer". It is a moderately simple script that repeatedly does SHOW PROCESSLIST and kills any query that has been running longer than N seconds.

MySQL Connection Limit Advice

I've hit a problem with my MySQL queries and was hoping someone could offer some help/advice.
I'm developing a PHP-based system which combines quite a lot of data in different tabs on one page (tab1 = profile, tab2 = address, tab3 = payments etc.) and as a result, one page can have up to 34/40 MySQL queries pulling from different tables or with different criteria.
The page load became really slow and I asked my web host if they knew what was wrong and they advised it's because of slow MySQL queries (some over 2 seconds). They also said that my MySQL user is only allowed 15 connections at a time.
If my page has 40 queries and only 15 connections are allowed at a time, does this mean they effectively queue and wait for one to complete? If that is the case then I can understand why the page takes a while to load, but I'm not sure of the solution. Is 15 MySQL connections considered a lot, or is this quite a tight restriction by my host (HostMonster)?
Also, if there were 15 users accessing the system at the same time, would those 15 connections be split between them, or is it 15 connections per user logged into the site? I assume they mean per database user, but all the people who access the system use the same database user, so it seems impossible to create a system that several users can access at once?
The whole connections thing has confused me a little.
Thanks in advance for any help!
Have one connection per page. Run the queries in sequence.
Optimize those queries - see EXPLAIN and use indexes (a sketch follows this list).
Perhaps combine queries to reduce the throughput.
BTW 10+ queries per page is excessive IMHO.
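A sketch of the EXPLAIN-and-index point (names invented):

-- See how a slow query executes:
EXPLAIN SELECT * FROM payments WHERE customer_id = 42;
-- If it shows a full table scan, add an index on the filter column:
ALTER TABLE payments ADD INDEX idx_customer (customer_id);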
If you are having a max-connect-errors problem, use this command from the command line:
mysqladmin flush-hosts -uuser -p'password'
This will flush the hosts that MySQL has recorded and rebuild the list. Newer versions (MySQL 5.6+) give you more information about this than previous versions do.
You can set the following to keep the message from appearing again:
max_connect_errors = 10000
15 queries or connections is not a problem at all; on busy databases we have seen a thousand connections and tens of thousands of queries per second.

Perfectly good queries on MySQL taking more than 5 seconds

Lately we are seeing some queries in the MySQL (master) logs, but we have no idea why they show up there:
Queries are select/update table where id = <some integer>.
There is index on id
table size is below 100 000
Rows scanned are in hundreds (sometimes < 100)
Server is running on extremely good hardware
there are no joins involved
We do not see any heavy activity running on database at that time
tables are innodb
The same queries generally don’t even take 50 ms, but sometimes the execution of these queries takes about 4-8 seconds.
One observation: all the similar "non-slow-but-weirdly-taking-high-time" queries take almost the same amount of time for a while, i.e. queries like the one stated at the top will all take about 4.35 seconds, with a variation of 0.05 seconds.
Does the network latency/packets-drop affect mysql query timing?
First, look at what is running and at the connection and open-file counters:
show processlist;
show global status like '%onnect%';
show global status like '%open%';
Is anything backed up? Is it waiting in a queue? Waiting for file handles? What are your max_connections, open-files-limit, and thread_concurrency?
One side question: does the network latency/packets-drop affect mysql query timing?
Yes; the timeout must occur before the query is re-sent by the client.
Do you see these problems locally or over the network? If the latter, then obviously packet drops can affect your performance if you are measuring from the client.
Is it running in a virtual machine? That can affect performance.
Disk problems?
How is serialization set up? Can it be a contention problem by many processes accessing the same row?
You may want to enable the query/slow query logs to see if there is any sort of pattern that causes this.
The MySQL slow log is not a representative source for learning about your slow queries. If something makes the server slow, all queries usually end up in the slow log.
E.g., if you have a slow blocking SELECT on MyISAM, a lot of PK updates will go to the slow log.
You need to look for other slow queries or server problems. What about the load average on this particular machine? Is MySQL being pushed into swap? Other applications? Queries per second?