Slow Update, Delete and Insert Queries under MariaDB - mysql

Our server was updated from Ubuntu 16 to Ubuntu 20 with MariaDB. Unfortunately, the loading time of the website has become slower. Normally MariaDB should be faster than MySQL. I've found that, quite simply, update commands on the website sometimes take about 7 seconds. However, if I enter these update commands directly into the database via phpMyAdmin, they only take 0.0005 ms.
It seems to me that MariaDB has a problem with update commands when they occur frequently. This was never a problem with MySQL. Here's a query example:
UPDATE LOW_PRIORITY users
SET user_video_count = user_video_count + 1
WHERE user_id = 12345
The storage engine is MyISAM.
I have no idea what could be the reason. Do you?
Thank you very much.

It may be something as simple as a SELECT searching for something in users. Note, InnoDB would not suffer this problem.
MyISAM necessarily does a table lock when doing UPDATE, INSERT, or DELETE. (Also ALTER and other DDL statements.) If there are a lot of connections doing any mixture of writes and even SELECTs, the locks can cascade for a surprisingly long time.
The real solution, whether in MariaDB or [especially] in MySQL, is to switch to InnoDB.
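A minimal sketch of the conversion for the table in the question (it rewrites the whole table, so run it in a quiet period):
ALTER TABLE users ENGINE=InnoDB;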
If this is a case of high volume counting of "likes" or "views", then a partial solution (in either Engine) is to put such counters in a separate, parallel, table. This avoids those simple and fast updates fighting with other actions on the main table. In an extremely high traffic area, gathering such increments and applying them in batches is warranted. I don't think your volume needs that radical solution.
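A sketch of such a parallel counter table, with hypothetical names modeled on the column from the question:
CREATE TABLE user_counts (
    user_id INT UNSIGNED NOT NULL PRIMARY KEY,
    user_video_count INT UNSIGNED NOT NULL DEFAULT 0
) ENGINE=InnoDB;

-- increment without touching the wide users table
INSERT INTO user_counts (user_id, user_video_count)
    VALUES (12345, 1)
    ON DUPLICATE KEY UPDATE user_video_count = user_video_count + 1;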
MySQL has all-but-eliminated MyISAM. MariaDB may follow suit in a few years.
To address this:
the same query in phpMyAdmin is really fast
The problem is not with how you run it, but what else happens to be going on at the same time.
(LOW_PRIORITY is a MyISAM-specific kludge that sometimes works.)
MyISAM does "table locking"; InnoDB does "row locking". Hence, Innodb can do a lot of "simultaneous" actions on a table, whereas MyISAM becomes serialized as soon as a write occurs.
More (Now focusing on InnoDB.)
Some other things that may be involved.
If two UPDATEs are trying to modify the same row at the same time, one will have to wait (due to the row locking).
If there is a really large number of things going on, delays can cascade. If 20 connections are actively running at one instant, they are each slowing down the others. Each connection is given a fair share, but that means that they are all slowed down.
SHOW PROCESSLIST to see what is running -- not "Sleep". The process with the highest "Time" (except for system threads) is likely to be the instigator of the fracas.
The slowlog can help in diving deeper. I turn it on with a low enough long_query_time and wait for the 'event' to happen. Then I use pt-query-digest (or mysqldumpslow -s t) to find the slowest queries. With some more effort, one might notice that there were a lot of queries that were "slow" at one instant -- possibly even "point queries" (like UPDATE ... WHERE id = constant) unexpectedly running slower than long_query_time. This indicates too many queries and/or some query that is locking rows unexpectedly. (Note: the "timestamp" of a query is when the query ended; subtract Query_time to get the start.)
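For example, the slowlog can be turned on at runtime; the threshold here is only an illustration:
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 0.5;  -- seconds; low enough to catch the 'event'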
More
innodb_flush_log_at_trx_commit = 2, as you found out, is a good fix when rapidly doing lots of single-query transactions. If the frequency becomes too large for that fix, then my comments above may become necessary.
There won't be much performance difference between =2 and =0.
As for innodb_flush_log_at_timeout: please provide SHOW GLOBAL STATUS LIKE 'Binlog%commits';
As for innodb_lock_wait_timeout... I don't think that changing that will help you. If one of your queries aborts due to that timeout, you should record that it happened and retry the transaction.
It sounds like you are running with autocommit = ON and not using explicit transactions? That's fine (for non-money activity). There are cases where using a transaction can help performance -- such as artificially batching several queries together to avoid some I/O. The drawback is an increased chance of conflicts with other connections. Still, if you are always checking for errors and rerunning the 'transaction', all should be well.
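A sketch of such artificial batching (the second counter column is made up for illustration; on a deadlock or lock-wait error, rerun the whole block):
START TRANSACTION;
UPDATE users SET user_video_count = user_video_count + 1 WHERE user_id = 12345;
UPDATE users SET user_view_count = user_view_count + 1 WHERE user_id = 12345;
COMMIT;  -- one durable log flush covers both updates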
innodb_flush_log_at_trx_commit
When that setting is "1", which is probably what you originally had, each Update did an extra write to disk to assure the data integrity. If the disk is HDD (not SDD), that adds about 10ms to each Update, hence leading to a max of somewhere around 100 updates/second. There are several ways around it.
innodb_flush_log_at_trx_commit = 0 or 2, sacrificing some data integrity.
Artificially combining several Updates into a single transaction, thereby spreading out the 10ms over multiple queries.
Explicitly combining several Updates based on what they are doing and/or which rows they touch. (In really busy systems, this could involve other servers and/or other tables.)
Moving the counter to another table (see above) -- this avoids interference from more time-consuming operations on the main table. (I did not hear a clear example of this, but the slowlog might have pointed out such.)
Switch to SSD drives -- perhaps 10x increase in capacity of Updates.
I suspect the social media giants do all of the above.

As you are using MariaDB, you can use tools like EverSQL to find missing indexes or discover redundant ones (e.g. an index on user_video_count that you don't really need).

First of all, I would like to thank everyone who helped me. I really appreciate that people invest their precious time.
I would like to tell you how I managed to fix the problem with the slow update, insert and delete queries.
I added this value to the my.cnf file:
innodb_flush_log_at_trx_commit = 2
After I restarted the MySQL server, the server load dropped suddenly and the update, insert and delete queries also dropped from about 0.22222 - 0.91922 seconds to 0.000013 seconds under load. Just like it was before with MyISAM and MySQL, and how it should be for such simple indexed updates.
I have to mention that I have set all tables that receive frequent insert or update commands to InnoDB and those with many selects to Aria.
Since we don't handle money transactions, it's not a problem for me if we lose the last second of changes due to
innodb_flush_log_at_trx_commit = 2
I'd go even further: I can also live with it if we lose the last 30 seconds in a failure.
So I have also set:
innodb_flush_log_at_timeout = 30
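For reference, both settings live in the [mysqld] section of my.cnf:
[mysqld]
innodb_flush_log_at_trx_commit = 2
innodb_flush_log_at_timeout = 30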
I'm currently testing
innodb_flush_log_at_trx_commit = 0
But so far, I do not see a significant improvement with
innodb_flush_log_at_timeout = 30
innodb_flush_log_at_trx_commit = 0
instead of
innodb_flush_log_at_timeout = 1 (default)
innodb_flush_log_at_trx_commit = 2
So the main goal was:
innodb_flush_log_at_trx_commit = 2
or
innodb_flush_log_at_trx_commit = 0
Does anyone know why
innodb_flush_log_at_timeout = 30
innodb_flush_log_at_trx_commit = 0
is not faster than just
innodb_flush_log_at_trx_commit = 2
?
I also don't understand why these settings are not more popular, because many websites could see big speed improvements if they don't mind losing a second or more of data.
Thank you very much.

Related

MySQL queries very slow - occasionally

I'm running MariaDB 10.2.31 on Ubuntu 18.04.4 LTS.
On a regular basis I encounter the following conundrum - especially when starting out in the morning, that is when my DEV environment has been idle for the night - but also during the day from time to time.
I have a table (this applies to other tables as well) with approx. 15,000 rows and (amongst others) an index on a VARCHAR column containing on average 5 to 10 characters.
Notably, most columns including this one are GENERATED ALWAYS AS (JSON_EXTRACT(....)) STORED since 99% of my data comes from a REST API as JSON-encoded strings (and conveniently I simply store those in one column and extract everything else).
When running a query on that column with WHERE colname LIKE 'text%' I see query durations of e.g. 0.006 seconds. Nice. When I have the query EXPLAINed, I can see that the index is being used.
However, as I have mentioned, when I start out in the morning, this takes way longer (14 seconds this morning). I know about the query cache and I tried this with query cache turned off (both via SET GLOBAL query_cache_type=OFF and RESET QUERY CACHE). In this case I get consistent times of approx. 0.3 seconds - as expected.
So, what would you recommend I should look into? Is my DB sleeping? Is there such a thing?
There are a few things that could be going on:
1) Cold caches (overnight backup, mysqld restart, or large processing job results in this particular index and table data being evicted from memory).
2) Statistics on the table go stale and the query planner gets confused until you run some queries against the table and the statistics get refreshed. You can force an update using ANALYZE TABLE table_name.
3) Query planner heisenbug. Very common in MySQL 5.7 and later; I have never seen it on MariaDB, so this is rather unlikely.
You can get to the bottom of this by enabling the following in the config:
log_output='FILE'
log_slow_queries=1
log_slow_verbosity='query_plan,explain'
long_query_time=1
Then review what is in the slow log just after you see a slow occurrence. If the logged explain plan looks the same for both slow and fast cases, you have a cold-caches issue. If they are different, you have a table-stats issue, and you need to cron ANALYZE TABLE at the end of the overnight task that reads/writes a lot to that table. If that doesn't help, as a last resort, hard-code an index hint into your query with FORCE INDEX (index_name).
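A sketch of such a cron entry (schema, table, and time are placeholders):
# re-collect statistics right after the nightly batch job finishes
30 5 * * * mysql -e "ANALYZE TABLE mydb.mytable"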
Enable your slow query log with log_slow_verbosity=query_plan,explain and a long_query_time low enough to catch the results. See if it is occasionally using a different (or no) index.
Before you start your next day, look at SHOW GLOBAL STATUS LIKE "innodb_buffer_pool%" and after your query look at the values again. See how many buffer pool reads vs read requests are in this status output to see if all are coming off disk.
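For example:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- Innodb_buffer_pool_reads: reads that had to go to disk
-- Innodb_buffer_pool_read_requests: logical reads; a rising reads/read_requests ratio means the data was not cached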
As @Solarflare mentioned, backups and nightly activity might be purging the InnoDB buffer pool of cached data, reverting it back to disk and making it slow again. As part of your nightly activities you could set innodb_buffer_pool_dump_now=1 to save the hot pages before the scripted activity and innodb_buffer_pool_load_now=1 to restore them afterwards.
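A sketch of that, wrapped around the nightly job:
SET GLOBAL innodb_buffer_pool_dump_now = ON;  -- save the list of hot pages before the heavy activity
-- ... backup / batch job runs here ...
SET GLOBAL innodb_buffer_pool_load_now = ON;  -- reload the saved pages afterwards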
Shout-out and Thank you to everyone giving valuable insight!
From all the tips you guys gave I think I am starting to understand the problem better and beginning to narrow it down:
First thing I found was my default innodb_buffer_pool_size of 134 MB. With the sort and amount of data I'm processing this is ridiculously low - so I was able to increase it.
Very helpful post: https://dba.stackexchange.com/a/27341
And from the docs: https://dev.mysql.com/doc/refman/8.0/en/innodb-buffer-pool-resize.html
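For reference, MariaDB 10.2 can resize the buffer pool online; a sketch of the resize (the value should also be persisted in my.cnf so it survives restarts):
SET GLOBAL innodb_buffer_pool_size = 2 * 1024 * 1024 * 1024;  -- ~2GB; rounded to a multiple of the chunk size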
Now that I have increased it to close to 2GB and am able to monitor its usage and RAM usage in general (cli: cat /proc/meminfo) I realize that my 4GB RAM is in fact on the low side of things. I am nowhere near seeing any unused overhead (buffer usage still at 99% and free RAM around 100MB).
I will start to optimize RAM usage of my daemon next and see where this leads - but this will not free enough RAM altogether.
@danblack mentioned innodb_buffer_pool_dump_now and innodb_buffer_pool_load_now. This is an interesting approach to maybe use whenever the daemon accesses the DB, as I would love to separate my daemon's buffer usage from the front end's (apparently this is not possible!). I will look into this further, but as my daemon is running all the time (not only at night) this might not be feasible.
@Gordan Bobic mentioned "refreshing" tables by using ANALYZE TABLE tableName. I found this to be quite fast and incorporated it into the daemon after each time it does an extensive read/write. This increases daemon run times by a few seconds, but that is no issue at all. And I figure I can't go wrong with it :)
So, in the end I believe my issue to be a combination of things: Too small buffer size, too small RAM, too many read/write operations for that environment (evicting buffered indexes etc.).
Also I will have to learn more about memory allocation etc and optimize this better (large-pages=1 etc).

MySQL server very high load

I run a website with ~500 real-time visitors, ~50k daily visitors and ~1.3 million total users. I host my server on AWS, where I use several instances of different kinds. When I started the website, the different instances cost roughly the same. When the website started to gain users, the RDS instance (MySQL DB) CPU constantly kept hitting the roof; I had to upgrade it several times, and now it has started to take up the main part of the performance and monthly cost (around 95% of ~$2.8k/month). I currently use a database server with 16 vCPUs and 64GiB of RAM, and I also use Multi-AZ Deployment to protect against failures. I wonder if it is normal for the database to be that expensive, or if I have done something terribly wrong?
Database Info
At the moment my database has 40 tables; most of them have ~100k rows, some have ~2 million, and one has 30 million.
I have a system that archives rows older than 21 days when they are not needed anymore.
Website Info
The website mainly uses PHP, but also some NodeJS and Python.
Most of the functions of the website work like this:
Start transaction
Insert row
Get last inserted id (lastrowid)
Do some calculations
Update the inserted row
Update the user
Commit transaction
I also run around 100 bots which poll the database at 10-30 second intervals; they also insert/update the database sometimes.
Extra
I have done several things to try to lower the load on the database, such as enabling the database cache, using a Redis cache for some queries, trying to remove very slow queries, and upgrading the storage type to "Provisioned IOPS SSD". But nothing seems to help.
These are the changes I have made to the settings parameters:
I have thought about creating a MySQL cluster of several smaller instances, but I don't know if this would help, and I also don't know how well it works with transactions.
If you need any more information, please ask; any help on this issue is greatly appreciated!
In my experience, as soon as you ask the question "how can I scale up performance?" you know you have outgrown RDS (edit: I admit my experience that leads me to this opinion may be outdated).
It sounds like your query load is pretty write-heavy. Lots of inserts and updates. You should increase the innodb_log_file_size if you can on your version of RDS. Otherwise you may have to abandon RDS and move to an EC2 instance where you can tune MySQL more easily.
I would also disable the MySQL query cache. On every insert/update, MySQL has to scan the query cache to see if there are any results cached that need to be purged. This is a waste of time if you have a write-heavy workload. Increasing your query cache to 2.56GB makes it even worse! Set the cache size to 0 and the cache type to 0.
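In a my.cnf sketch (on RDS these go into the DB parameter group instead):
query_cache_type = 0
query_cache_size = 0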
I have no idea what queries you run, or how well you have optimized them. MySQL's optimizer is limited, so it's frequently the case that you can get huge benefits from redesigning SQL queries. That is, changing the query syntax, as well as adding the right indexes.
You should do a query audit to find out which queries are accounting for your high load. A great free tool for this is pt-query-digest (https://www.percona.com/doc/percona-toolkit/2.2/pt-query-digest.html), which can give you a report based on your slow query log. Download the RDS slow query log with the download-db-log-file-portion CLI command (http://docs.aws.amazon.com/cli/latest/reference/rds/download-db-log-file-portion.html).
Set your long_query_time=0, let it run for a while to collect information, then change long_query_time back to the value you normally use. It's important to collect all queries in this log, because you might find that 75% of your load is from queries under 2 seconds, but they are run so frequently that it's a burden on the server.
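A sketch of that audit flow (the instance name and log file path are placeholders and vary by setup):
aws rds download-db-log-file-portion --db-instance-identifier mydb-instance \
    --log-file-name slowquery/mysql-slowquery.log --output text > slow.log
pt-query-digest slow.log > digest.txt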
After you know which queries are accounting for the load, you can make some informed strategy about how to address them:
Query optimization or redesign
More caching in the application
Scale out to more instances
I think the answer is "you're doing something wrong". It is very unlikely you have reached an RDS limitation, although you may be hitting limits on some parts of it.
Start by enabling detailed monitoring. This will give you some OS-level information which should help determine what your limiting factor really is. Look at your slow query logs and database stats - you may have some queries that are causing problems.
Once you understand the problem - which could be bad queries, I/O limits, or something else - then you can address them. RDS allows you to create multiple read replicas, so you can move some of your read load to slaves.
You could also move to Aurora, which should give you better I/O performance. Or use PIOPS (or allocate more disk, which should increase performance). You are using SSD storage, right?
One other suggestion - if your calculations (step 4 above) take a significant amount of time, you might want to look at breaking them into two or more transactions.
A query_cache_size of more than 50M is bad news. You are writing often -- many times per second per table? That means the QC needs to be scanned many times/second to purge the entries for the table that changed. This is a big load on the system when the QC is 2.5GB!
query_cache_type should be DEMAND if you can justify it being on at all. And in that case, pepper the SELECTs with SQL_CACHE and SQL_NO_CACHE.
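For example (tables and columns are hypothetical):
SELECT SQL_CACHE name FROM countries WHERE country_id = 5;   -- rarely-changing lookup: worth caching
SELECT SQL_NO_CACHE * FROM events WHERE user_id = 12345;     -- hot, frequently-written table: skip the QC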
Since you have the slowlog turned on, look at the output with pt-query-digest. What are the first couple of queries?
Since your typical operation involves writing, I don't see an advantage of using readonly Slaves.
Are the bots running at random times? Or do they all start at the same time? (The latter could cause terrible spikes in CPU, etc.)
How are you "archiving" "old" records? It might be best to use PARTITIONing and "transportable tablespaces". Use PARTITION BY RANGE and 21 partitions (plus a couple of extras).
Your typical transaction seems to work with one row. Can it be modified to work with 10 or 100 all at once? (More than 100 is probably not cost-effective.) SQL is much more efficient in doing lots of rows at once versus lots of queries of one row each. Show us the SQL; we can dig into the details.
It seems strange to insert a new row, then update it, all in one transaction. Can't you completely compute it before doing the insert? Hanging onto the inserted_id for so long probably interferes with others doing the same thing. What is the value of innodb_autoinc_lock_mode?
Do the "users" interactive with each other? If so, in what way?

Where should I focus: Optimize the query, changing database config or what else?

I took over a project and have 2 MyISAM tables.
table1 with approx. 1M rows, and
table2 with approx. 100K rows.
In the project these tables are accessed often, and at first it seems ok.
After I installed the project on a Windows 8.1 machine for local development, I found that every day, the first time I access the site, my query takes 14 seconds. A bit too much.
Afterwards it takes less than 0.1 seconds.
Now, since on dev this, combined with another query, runs into a timeout exception in PHP, it got me concerned about whether it's recommended to do anything about it or not. On production it seems not to occur (or is hard to reproduce).
I have heard of things like a warm cache or optimizing the query, but don't know what is meant by that.
What do experts like you do in this case?
I had another question set up here trying to see whether I can optimize the query.
Changing to InnoDB doesn't seem to have an impact.
The "first" time you run a query, two things may or may not happen:
Lots of disk I/O may be done to fetch the index blocks and/or data blocks from disk. (If other queries happened to have fetched those blocks, the blocks may be cached already.) (14s vs 0.1s is more than I usually see for this cold/warm cache difference.)
If the "Query cache" was on, the first SELECT and its resultset were stored in the QC. The second call may have found it there and returned the result almost instantly. (Usually this is ~1ms, not the 100ms you mentioned.) The QC can be bypassed for a single query by saying SELECT SQL_NO_CACHE ....
Since it is annoying you daily, you may as well go through the exercise of trying to optimize the query. If the tables are growing daily, it may get slower and slower over time. Note that if production needs to be restarted for any reason, that query may timeout on it. So, yes, try to optimize it.
A million rows is beginning to be "big".
The characteristics of this indicate that you are I/O-bound only initially. So it does not indicate that key_buffer_size and innodb_buffer_pool_size are too low.
If you want to discuss the performance of a particular query, start a new thread and provide SHOW CREATE TABLE and EXPLAIN SELECT ....

Which engine to be used for more than 100 insert query per second

Which engine to be used for more than 100 insert query per second
I read differences and pros and cons of MYISAM and Innodb.
But I am still confused: for 100+ insert queries per second into a table (basically for tracking purposes), which engine should I use?
I referred to What's the difference between MyISAM and InnoDB?
Based on my understanding, for each insert MyISAM will lock the table, and hence InnoDB should be used for row locking.
But on the other hand, MyISAM's performance is said to be 100 times better. So what would be the optimal and correct selection, and why?
Simple code that does one-row INSERTs without any tuning maxes out at about 100 rows per second in any engine, especially InnoDB.
But, it is possible to get 1000 rows per second or even more.
The quick fix for InnoDB is to set innodb_flush_log_at_trx_commit = 2; that will uncork the main thing stopping InnoDB at 100 inserts/second using a commodity spinning disk. Setting innodb_buffer_pool_size to about 70% of available RAM is also important.
If a user is inserting multiple rows into the same table at the same time, then LOAD DATA or a batch Insert (INSERT ... VALUES (...), (...), ...) of 100 rows or more will insert ten times as fast. This applies to any Engine.
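For example, with a hypothetical tracking table:
INSERT INTO tracking (user_id, event_type, created_at) VALUES
    (1, 'view', NOW()),
    (2, 'click', NOW()),
    (3, 'view', NOW());  -- ... 100 or more rows per statement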
MyISAM is not 100 times as fast; it is not even 10 times as fast as InnoDB. Today (5.6 or newer), you would be hard pressed to find a well tuned application that is more than a little faster in MyISAM. You are, or will be, I/O-limited.
As for corruption -- No engine suffers from corruption except during a crash. A power failure may mangle MyISAM indexes, usually recoverably. Moreover, a batch insert could be half done. InnoDB will be clean -- the entire batch is done or none of it is done; no corruption.
ARCHIVE saves disk space, but costs CPU.
MEMORY is often faster because it has no I/O. But you have too much data for that Engine, correct?
MariaDB with TokuDB can probably run faster than anything I describe here; but you have not indicated the need for it.
100 rows inserted per second = 8M/day = 3 Billion/year. Will you be purging the data eventually? Will you be querying the data? Purging: Let's talk about PARTITION. Querying: Let's talk about Summary Tables.
Indexing: Minimize the number of indexes. If you have a 'random' index, such as a UUID, and you have a billion rows, you will be stuck with 100 rows/second, regardless of which Engine and regardless of any tuning. Do I need to explain further?
If this is a queuing system, I say "Don't queue it, just do it."
Bottom line: use InnoDB. Tune it. Use batch inserts. Avoid random indexes. Etc.
You are correct that MyISAM is a faster choice if your operational use case is lots of insertions. But that answer can change drastically based on the kind of use you make of the data. If this is an archival application you might consider the ARCHIVE storage engine. It is best for write-once, read-rarely applications.
You should investigate INSERT DELAYED as it will allow your client programs to fire-and-forget these inserts rather than waiting for completion. This burns RAM in your mysqld process, though. If that style of operation meets your needs, this is a compelling reason to go with MyISAM.
Beware indexes in the target table of your inserts. Maintaining indexes is a big part of the server's insert workload.
Don't forget to look into MariaDB. It's a compatible fork of MySQL with some more advanced storage engines and features.
I have experience with a similar application. In our case, the application scaled up beyond the original insert rate, and the server could not keep up. (It's always good when an application workload grows!) We ended up doing two things, one after the other:
Using a message queuing system, and running just a couple of processes to actually do the inserts. The original clients wrote their logging records to the message queue rather than directly to the database. (Amazon AWS's SQS is an example of such a queuing system).
Reworking the insert process to use LOAD DATA INFILE to load great gobs of log rows at once (a sketch is shown below).
(You probably have figured out that this kind of workload isn't feasible on a cheap shared hosting service or an AWS micro instance.)
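A sketch of that bulk-load step (file layout and table name are assumptions):
LOAD DATA INFILE '/var/tmp/tracking_batch.csv'
    INTO TABLE tracking
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    (user_id, event_type, created_at);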

Why would a simple MySQL update query occasionally take several minutes?

I have a hefty db server with lots of very similar InnoDB databases. A query that I run often simply updates a timestamp on one row in a small table. This takes like 1-2 ms most of the time. Occasionally, at night, probably while backups and maatkit replication tools are running, one or more of these queries may show "Updating" for several minutes. During this time, other queries, including maatkit queries, seem to be proceeding normally, and no other queries seem to be executing. I have been unable to explain or fix this.
We are using MySQL 4.1.22 and Gentoo 2.6.21 on a pair of 4-way Xeons with 16GB of RAM and RAIDed drives for storage. Replication is in place and operating well, with maatkit confirming replication nightly. InnoDB is using most of the RAM and the CPUs are typically 70-80% idle. The table in question has about 100 rows of about 200 bytes each. I've tried with and without an index on the WHERE clause with no discernible change. No unusual log messages have been found (checked system messages and mysql errors).
Has anybody else heard of this? Solved something like this? Any ideas of how to investigate?
When making DML operations, InnoDB places locks on rows and index gaps.
The problem is that it locks all rows examined, not only those affected.
Say, if you run this query:
UPDATE mytable
SET value = 10
WHERE col1 = 1
AND col2 = 2
, the locking will depend on the indexes used for the query:
If an index on col1, col2 was used, then only affected rows will be locked
If an index on col1 alone was used, all rows with col1 = 1 will be locked
If an index on col2 was used, all rows with col2 = 2 will be locked
If no index was used, all rows and index gaps will be locked (including that on the PRIMARY KEY, so that even INSERT to an AUTO_INCREMENT column will lock)
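So, for the UPDATE above, the narrow locking in the first case requires a composite index; a sketch (the index name is arbitrary):
ALTER TABLE mytable ADD INDEX idx_col1_col2 (col1, col2);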
To make things worse, EXPLAIN in MySQL does not work on DML operations, so you'll have to guess which index was used, since the optimizer can pick whichever one it considers best.
So it may be so that your replication tools and updates concurrently lock the records (and as you can see this may happen even if the WHERE conditions do not overlap).
If you can get at the server while this query is hanging, try doing a SHOW INNODB STATUS. Part of the mess of data you get from that is the status of all active connections/queries on InnoDB tables. If your query is hanging because of another transaction, it will be indicated in there. There are samples of the lock data here.
As well, you mention that it seems to happen during backups. Are you using mysqldump for that? It will lock tables while the dump is active so that the dumped data is consistent.
Using some of the information offered in the responses, we continued investigating and found some disturbing behavior on our server. A simple "check table" on any table in any database caused simple update queries to lock in other databases and other tables. I don't have any idea why this would happen, though we could not reproduce it on MySQL v5.1, so we intend to upgrade our database server.
I don't think maatkit's mk-table-checksum does a "check table" but it is having a similar effect. Turning off this script reduced the problems significantly, but we believe that we cannot live without this script.
I'm going to mark this as the answer to my question. Thanks for the help.