I have a single-processor dedicated server with 4GB RAM and a 400MB MySQL database (MyISAM) that has big performance problems.
The database is used by an e-commerce site.
I already tried to tune it using the mysqltuner script, but without good results.
Because the variable settings have been modified several times, I would like to have a basic configuration to start from, and then try to tune it from there.
Try this tool; it always shows good results for performance tuning.
https://tools.percona.com/wizard
For e-commerce, you need InnoDB. If you don't switch, you will be burned badly when a crash occurs at just the wrong instant in a monetary transaction.
Make that change, then
key_buffer_size = 20M
innodb_buffer_pool_size = 1000M
read my blog on moving from MyISAM to InnoDB.
When you then find that things are not working fast enough, do
long_query_time = 1
turn on the slowlog
wait a day
run pt-query-digest to find the worst couple of queries
present them to us for critique. The solution could be as simple as adding a composite index, or maybe reformulating a SELECT.
I have redirected you toward slow queries because you cannot "tune" your way out of bad schema, bad queries, etc.
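For reference, here is a minimal sketch of those steps, using standard MySQL/MariaDB variables (the log file path is only an example; put the same settings in my.cnf so they survive a restart):
SET GLOBAL slow_query_log = ON;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL long_query_time = 1;
After a day of traffic, something like pt-query-digest /var/log/mysql/slow.log will summarize the worst offenders.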
Related
Our server was updated from Ubuntu 16 to Ubuntu 20 with MariaDB. Unfortunately, the loading time of the website has become slower. Normally MariaDB should be faster than MySQL. I've found that, quite simply, UPDATE commands on the website sometimes take about 7 seconds. However, if I enter these UPDATE commands directly into the database via phpMyAdmin, they only take 0.0005 ms.
It seems to me that MariaDB has a problem with UPDATE commands when they occur frequently. This was never a problem with MySQL. Here's a query example:
UPDATE LOW_PRIORITY users
SET user_video_count = user_video_count + 1
WHERE user_id = 12345
The database format is MyISAM.
I have no idea what could be the reason. Do you?
Thank you very much.
It may be something as simple as a SELECT searching for something in users. Note that InnoDB would not suffer from this problem.
MyISAM necessarily does a table lock when doing UPDATE, INSERT, or DELETE. (Also ALTER and other DDL statements.) If there are a lot of connections doing any mixture of writes and even SELECTs, the locks can cascade for a surprisingly long time.
The real solution, whether in MariaDB or [especially] in MySQL, is to switch to InnoDB.
If this is a case of high volume counting of "likes" or "views", then a partial solution (in either Engine) is to put such counters in a separate, parallel, table. This avoids those simple and fast updates fighting with other actions on the main table. In an extremely high traffic area, gathering such increments and applying them in batches is warranted. I don't think your volume needs that radical solution.
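A minimal sketch of such a parallel counter table, reusing the users/user_video_count example from the question (the names are just for illustration):
CREATE TABLE user_counters (
    user_id INT UNSIGNED NOT NULL PRIMARY KEY,
    video_count INT UNSIGNED NOT NULL DEFAULT 0
) ENGINE=InnoDB;
-- The hot, frequent increment now touches only this small table:
INSERT INTO user_counters (user_id, video_count)
    VALUES (12345, 1)
    ON DUPLICATE KEY UPDATE video_count = video_count + 1;
Long-running SELECTs and maintenance on the main users table no longer block these increments.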
MySQL has all-but-eliminated MyISAM. MariaDB may follow suit in a few years.
To address this:
the same query in phpMyAdmin is really fast
The problem is not with how you run it, but what else happens to be going on at the same time.
(LOW_PRIORITY is a MyISAM-specific kludge that sometimes works.)
MyISAM does "table locking"; InnoDB does "row locking". Hence, Innodb can do a lot of "simultaneous" actions on a table, whereas MyISAM becomes serialized as soon as a write occurs.
More (Now focusing on InnoDB.)
Some other things that may be involved:
If two UPDATEs are trying to modify the same row at the same time, one will have to wait (due to the row locking).
If there is a really large number of things going on, delays can cascade. If 20 connections are actively running at one instant, they are each slowing the others down. Each connection gets a fair share, but that means they are all slowed down.
SHOW PROCESSLIST to see what is running -- not "Sleep". The process with the highest "Time" (except for system threads) is likely to be the instigator of the fracas.
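If you want a filtered view, this standard information_schema query lists only the non-sleeping connections, longest-running first:
SELECT id, user, time, state, info
    FROM information_schema.PROCESSLIST
    WHERE command <> 'Sleep'
    ORDER BY time DESC;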
The slowlog can help in diving deeper. I turn it on with a low enough long_query_time and wait for the 'event' to happen. Then I use pt-query-digest (or mysqldumpslow -s t) to find the slowest queries. With some more effort, one might notice that a lot of queries were "slow" at one instant -- possibly even "point queries" (like UPDATE ... WHERE id = constant) unexpectedly running slower than long_query_time. This indicates too many queries and/or some query that is locking rows unexpectedly. (Note: the "timestamp" of a query is when the query ended; subtract Query_time to get the start.) SlowLog
More
innodb_flush_log_at_trx_commit = 2, as you found out, is a good fix when rapidly doing lots of single-query transactions. If the frequency becomes too large for that fix, then my comments above may become necessary.
There won't be much performance difference between =2 and =0.
As for innodb_flush_log_at_timeout: please provide SHOW GLOBAL STATUS LIKE 'Binlog%commits';
As for innodb_lock_wait_timeout... I don't think that changing that will help you. If one of your queries aborts due to that timeout, you should record that it happened and retry the transaction.
It sounds like you are running with autocommit = ON and not using explicit transactions? That's fine (for non-money activity). There are cases where using a transaction can help performance -- such as artificially batching several queries together to avoid some I/O. The drawback is an increased chance of conflicts with other connections. Still, if you are always checking for errors and rerunning the 'transaction', all should be well.
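As a sketch of that artificial batching (the second counter column here is hypothetical; the point is grouping several small writes into one commit, and rerunning the whole transaction if it is rolled back by a deadlock or lock-wait timeout):
START TRANSACTION;
UPDATE users SET user_video_count = user_video_count + 1 WHERE user_id = 12345;
UPDATE users SET user_view_count = user_view_count + 1 WHERE user_id = 12345;  -- hypothetical column
COMMIT;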
innodb_flush_log_at_trx_commit
When that setting is "1", which is probably what you originally had, each Update did an extra write to disk to assure the data integrity. If the disk is HDD (not SDD), that adds about 10ms to each Update, hence leading to a max of somewhere around 100 updates/second. There are several ways around it.
innodb_flush_log_at_trx_commit = 0 or 2, sacrificing some data integrity.
Artificially combining several Updates into a single transaction, thereby spreading out the 10ms over multiple queries.
Explicitly combining several Updates based on what they are doing and/or which rows they touch. (In really busy systems, this could involve other servers and/or other tables.)
Moving the counter to another table (see above) -- this avoids interference from more time-consuming operations on the main table. (I did not hear a clear example of this, but the slowlog might have pointed out such.)
Switch to SSD drives -- perhaps 10x increase in capacity of Updates.
I suspect the social media giants do all of the above.
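To illustrate the batching idea, a sketch with a hypothetical pending_counts staging table: the application appends (user_id, delta) rows to it, and a periodic job applies them in one statement and then clears the table:
UPDATE users AS u
JOIN ( SELECT user_id, SUM(delta) AS delta
       FROM pending_counts
       GROUP BY user_id ) AS p ON p.user_id = u.user_id
SET u.user_video_count = u.user_video_count + p.delta;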
As you are using MariaDB, you can use tools like EverSQL to find missing indexes or discover redundant indexes (e.g. you have an index on user_video_count that you don't really need)
First of all I would like to thank everyone who helped me. I really appreciate that people try to invest their precious time.
I would like to tell you how I managed to fix the problem with the slow update, insert and delete queries.
I added this value to the my.cnf file:
innodb_flush_log_at_trx_commit = 2
After I restarted the MySQL server, the server load dropped suddenly, and the UPDATE, INSERT and DELETE queries also dropped from about 0.22222 - 0.91922 seconds to 0.000013 seconds under load -- just like it was before with MyISAM and MySQL, and how it should be for such simple indexed updates.
I have to mention that I have set all tables that receive frequent INSERT or UPDATE commands to InnoDB, and those with many SELECTs to Aria.
Since we don't handle money transactions, it's not a problem for me if we lose the last second or so of data due to
innodb_flush_log_at_trx_commit = 2
I'd go even further: I can also live with losing the last 30 seconds in a failure.
So I have also set:
innodb_flush_log_at_timeout = 30
I'm currently testing
innodb_flush_log_at_trx_commit = 0
But so far, I do not see a significant improvement with
innodb_flush_log_at_timeout = 30
innodb_flush_log_at_trx_commit = 0
instead of
innodb_flush_log_at_timeout = 1 (default)
innodb_flush_log_at_trx_commit = 2
So the main goal was:
innodb_flush_log_at_trx_commit = 2
or
innodb_flush_log_at_trx_commit = 0
Does anyone know why
innodb_flush_log_at_timeout = 30
innodb_flush_log_at_trx_commit = 0
is not faster than just
innodb_flush_log_at_trx_commit = 2
?
I also don't understand why these settings are not more popular, because many websites could see big speed improvements if they don't mind losing a second or more of data.
Thank you very much.
We are having trouble tuning MySQL 5.6 on AWS and are looking for pointers to solve some performance issues.
We used to have dedicated servers and we could configure them how we like.
Our application uses a lot of temporary tables and we had no performance issues in that regard.
Until we switched to an AWS RDS instance.
Now, many slow queries show up in the logs and it slows down the whole application.
Previously we worked with MySQL 5.4 and now it's 5.6.
Looking through the docs, we discovered some changes regarding the temporary tables default format.
It is InnoDB by default, and we set it back to MyISAM as we were used to, and observed improvements in that regard.
first aspect:
Also, our DB is quite large, we have hundreds of simultaneous accesses to our application, and some tables require real-time computation. Joins and unions are used in those cases.
When developing the application (with MySQL 5.4), we found that by splitting the larger queries into 2 or more steps and using intermediate tables, the overall performance improved.
EXPLAIN showed filesort and temporary-file usage, so we could get rid of those with temporary tables.
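To give an idea of the pattern, here is a simplified, made-up example (the tables and columns are invented for illustration):
CREATE TEMPORARY TABLE tmp_recent_orders AS
    SELECT customer_id, SUM(total) AS total_spent
    FROM orders
    WHERE created_at > NOW() - INTERVAL 30 DAY
    GROUP BY customer_id;
SELECT c.name, t.total_spent
    FROM customers AS c
    JOIN tmp_recent_orders AS t ON t.customer_id = c.customer_id
    ORDER BY t.total_spent DESC
    LIMIT 100;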
Is splitting queries into temporary tables really a good idea?
Edit: We know the conditions under which MEMORY temporary tables are implicitly converted to on-disk tables.
Another thing we are not sure about: what makes a query use temporary files?
We know that using correct indexes is one way to go (correct ordering in the WHERE clause, correct usage of ORDER BY, etc.), but is there anything else we could do?
second aspect:
Regarding some of the settings we are using, we have a few hundred MB for max_heap_table_size and tmp_table_size, so we hoped that our temporary tables could be held in memory.
We also found articles suggesting we look at ReadIOPS and WriteIOPS.
The reads are stable and low, but the writes show unstable and high numbers.
Here is a graph:
The values on the vertical axis are operations/sec.
How can we interpret those numbers?
One thing to know about our application, is that every user action is logged into one big logs table. But it should be once per page load.
third aspect:
How far can we go with those settings so temporary tables can be used in Memory?
For instance, we read some articles explaining that they set a few GB of max_heap_table_size on a dedicated MySQL server with about 12GB of RAM (it sounded like 80% or so).
Is that really something we can try? (Are same settings applicable on RDS?)
Also, can we set innodb_buffer_pool_size at the same value as well?
Note: I can't find the article where I found that info, but I might have confused some parameter names. I'll edit the question if I find the source again.
The server settings are very different from what we used to have (the new servers on AWS are not set by us) and many settings values have been increased, and some decreased.
We fear it's not a good thing…
Here are some noticeable changes:
innodb_buffer_pool_size (increased *6)
innodb_log_buffer_size (decreased /4)
innodb_additional_mem_pool_size (decreased /4)
sort_buffer_size (decreased /2)
myisam_sort_buffer_size (increased *1279) (we only have InnoDB tables -- do we need to touch that?)
read_buffer_size (increased *2)
join_buffer_size (decreased /4)
read_rnd_buffer_size (increased *4)
tmp_table_size + max_heap_table_size (increased *8)
Some changes look weird to us (like myisam_sort_buffer_size); we think that using the same settings would have been better in the first place.
Can we have some pointers on those variables? (Sorry, we can't provide the exact numbers)
Also, is there a good article we could read that sums up a good balance between all those parameters?
Because we are concerned about the temporary tables not fitting in memory, I wrote this query to see what percentage of temporary tables are actually written to disk:
SELECT
    tmp_tables,
    tmp_tables_disk,
    ROUND( (tmp_tables_disk / tmp_tables) * 100, 2 ) AS to_disk_percent
FROM
    (SELECT variable_value AS tmp_tables FROM information_schema.GLOBAL_STATUS WHERE variable_name = 'Created_tmp_tables') AS s1,
    (SELECT variable_value AS tmp_tables_disk FROM information_schema.GLOBAL_STATUS WHERE variable_name = 'Created_tmp_disk_tables') AS s2;
The result is 10~12% (depending on the instance). Is that high?
TL;DR
I tried to add as many details to the question as possible about the current situation; I hope it's not more confusing than anything...
The issues are about writes.
Is there a way to diagnose what causes so many writes? (maybe linked to temporary tables?)
MySQL 5.4 never existed. Perhaps MariaDB 5.4?
Temp tables, even if seemingly not hurting performance, are a clue that things could be improved.
Do you have TEXT columns that you don't need to fetch? (This can force an in-memory temp table to be turned into an on-disk one.)
Do you have TEXT columns that could be a smaller VARCHAR? (Same reason.)
Do you understand that INDEX(a,b) may be better than INDEX(a), INDEX(b)? ("composite" index)
The buffer_pool is very important for performance. Did the value of innodb_buffer_pool_size change? How much data do/did you have? How much RAM do/did you have?
In general, "you cannot tune your way out of a performance problem".
Is this a Data Warehouse application? Building and maintaining Summary tables is often a huge performance boost for DW "reports".
MyISAM only has table locking. This leads to blocking, which leads to all sorts of performance problems, especially slowing down queries. I'm surprised that InnoDB was slower for you. MyISAM is a dead end. By the time you get to 8.0, there will be even stronger reasons to switch.
Let's see one of the slow queries that benefited from splitting up. There are several techniques for making complex queries run faster (at least in InnoDB).
The order of ANDed things in WHERE does not matter. It does matter in composite indexes.
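As a sketch (the table, columns, and query shape are invented for illustration): for a query such as SELECT ... FROM orders WHERE customer_id = 42 AND status = 'shipped' ORDER BY created_at, one composite index usually beats two single-column indexes:
ALTER TABLE orders
    ADD INDEX idx_customer_status_created (customer_id, status, created_at);
The column order inside the index matters (equality columns first, then the ORDER BY column); the order of the ANDed conditions in the WHERE clause does not.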
Changing tmp_table_size rarely helps or hurts. It should, however, be kept under 1% of RAM -- this is to avoid running out of RAM. Swapping is the worst thing to happen.
Graphs - what are the Y-axis units? Where did the graphs come from? They look useless.
"Gb of max_heap_table_size on a dedicated MySQL server with about 12Gb of ram. (sounded like 80% or so)" -- Check that article again. I recommend 120M for that setting on that machine.
When exclusively using MyISAM on a 12GB machine, key_buffer_size should be about 2400M and innodb_buffer_pool_size should be 0.
When exclusively using InnoDB on a 12GB machine, key_buffer_size should be about 20M and innodb_buffer_pool_size should be 8G. Did you change those when you tested InnoDB?
innodb_additional_mem_pool_size -- Unused, and eventually removed.
myisam_sort_buffer_size -- 250M on a 12G machine. If using only InnoDB, the setting can be ignored since that buffer won't be used.
As for the *6 and /4 -- I need to see the actual sizes to judge anything.
"sums up a good balance between all those parameters" -- Two things:
http://mysql.rjweb.org/doc.php/memory
Leave the rest at their defaults.
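Pulling the 12GB-machine numbers above together for the InnoDB-only case, a my.cnf sketch to start from (a starting point, not a definitive recommendation):
[mysqld]
innodb_buffer_pool_size = 8G
key_buffer_size = 20M
tmp_table_size = 120M
max_heap_table_size = 120M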
For more tuning advice, plus Slowlog advice, see http://mysql.rjweb.org/doc.php/mysql_analysis
Could someone help me with a problem I have? It involves MariaDB configuration.
I have a server with a Xeon E5 CPU, 96GB DDR3 RAM, and 1.2TB of SSD storage.
Recently something weird has been happening. Some pages load very slowly, others load instantly. The pages that load slowly include SELECT or INSERT queries.
Most of the tables use MyISAM, but I also have InnoDB.
My my.cnf file is pretty much the default one, and I was wondering what settings I should use.
I am using version 10.0.23-MariaDB.
The site has around 15,000 members, but never more than 1500-2500 online at the same time.
Thank you for any help I get :)
There are far too many possibilities to answer your question without more info. But here are the 'most important' settings:
For 96GB RAM and a mixture of InnoDB and MyISAM:
innodb_buffer_pool_size = 32G
innodb_buffer_pool_instances = 16
key_buffer_size = 9G
The key_buffer does not need to be bigger than the sum of all MyISAM indexes. Reference.
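One way to check that sum (a standard information_schema query):
SELECT ROUND(SUM(index_length) / 1024 / 1024 / 1024, 2) AS myisam_index_gb
    FROM information_schema.TABLES
    WHERE engine = 'MyISAM';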
For more info, turn on the slow log, wait a while, then summarize using pt-query-digest or mysqldumpslow -s t to see the top couple of queries. Then focus on optimizing them in some way. Often it is as simple as devising the optimal composite index.
What is Max_used_connections? If it is really 1500-2500, then you have one set of issues.
Do not set query_cache_size bigger than, say, 100M. That is a known performance killer.
If you tweaked any other 'variables', fess up.
For further critique of your settings, provide me with SHOW VARIABLES and SHOW GLOBAL STATUS.
MyISAM only has "table locking", which can slow things down; converting to InnoDB is likely to help. More discussion.
We're using MySQL 5.5 in our production environment. Is it advisable to turn on the slow query log in production? What is the performance implication of doing so? I referred to the official docs here, but they don't say anything about performance.
I highly recommend turning ON the slow query log in a production environment. No matter how much logging you have done in your dev/QA/pre-prod environments, nothing gives you a better idea of true performance than a production environment. The performance hit, in my experience, has not been significant.
You can improve performance, if it becomes significant, by saving your slow query log on a different disk or disk array.
The long_query_time dynamic variable determines what MySQL writes to the slow query log. If your long_query_time is 2 seconds and you see a whole bunch of queries being logged, you can raise long_query_time with something like SET GLOBAL long_query_time=10, or change it in my.cnf and restart MySQL.
I prefer keeping long_query_time at around 3 seconds to start, seeing what gets logged, and resolving the slowness in a timely manner. After that I drop it down to 2 seconds and keep it there. Logging is one thing, but acting on the logging -- be it resolving the long-running queries or monitoring the I/O with iostat -x or sar -- is vital.
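For reference, the equivalent my.cnf settings (standard option names for 5.5; the file path is only an example):
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 3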
I never found any performance penalties from using the slow query log, so I think it's safe to turn it on. The database just keeps an extra log file where it stores queries exceeding a certain (configurable) runtime. You can analyse the slow queries afterwards, e.g. by running an
EXPLAIN SELECT ... FROM ...
We're running a social networking site that logs every member's action (including visiting other member's pages); this involves a lot of writes to the db. These actions are stored in a MyISAM table and since something is starting to tax the CPU, my first thought was that it's the table locking of MyISAM that is causing this stress on the CPU.
There are only reads and writes, no updates to this table. I think the balance between reads and writes is about 50/50 for this table, would InnoDB therefore be a better option?
If I want to change the table to InnoDB and we don't use foreign key constraints, transactions or fulltext indexes - do I need to worry about anything?
Notwithstanding any benefits / drawbacks of its use, which are discussed in other threads ( MyISAM versus InnoDB ), migration is a nontrivial process.
Consider
Functionally testing all components which talk to the database if possible - different engines have different semantics
Running as much performance testing as you can - some things may improve, others may be much worse. A well-known example is SELECT COUNT(*) on a large table.
Checking that all your code will handle deadlocks gracefully - you can get them without explicit use of transactions
Estimate how much space usage you'll get by converting - test this in a non-production environment.
You will doubtless need to change things in a large software platform; this is ok, but seeing as you (hopefully) have a lot of auto-test coverage, change should be acceptable.
PS: If "Something is starting to tax the CPU", then you should a) Find out what, in a non-production environment, b) Try various options to reduce it, in a non-production environment. You should not blindly start doing major things like changing database engines when you haven't fully analysed the problem.
All performance testing should be done in a non-production environment, with production-like data and on production-grade hardware. Otherwise it is difficult to interpret results correctly.
With regards to other potential migration problems:
1) Space - InnoDB tables often require more disk space, though the Barracuda file format in newer versions of InnoDB has narrowed the difference. You can get a sense of this by converting a recent backup of the tables and comparing the sizes. Use SHOW TABLE STATUS to compare the data length (see the sketch after this list).
2) Full text search - only on MyISAM
3) GIS/Spatial datatypes - only on MyISAM
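For example, rough per-table sizes can be pulled from information_schema before and after the conversion (replace 'your_db' with your schema name):
SELECT table_name, engine,
       ROUND(data_length / 1024 / 1024) AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb
    FROM information_schema.TABLES
    WHERE table_schema = 'your_db'
    ORDER BY data_length DESC;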
On performance, as the other answers and the referenced answer indicate, it depends on your workload. MyISAM is much faster for full table scans. InnoDB tends to be much faster for highly concurrent access. InnoDB can also be much faster if your lookups are based on the primary key.
Another performance issue is that MyISAM can always keep a row count, since it only does table level locking. So, if you're frequently trying to get the row count for a very large table, it may be much slower with InnoDB. Search the Internet if you need a workaround for this, as I've seen several proposed.
Depending on the size of the table(s), you may also need to update your MySQL config file. At the very least, you may want to shift bytes from key_buffer to innodb_buffer_pool_size. You won't get a fair comparison if you leave the database as being optimized for MyISAM. Read up on all the innodb_* configuration properties.
I think it's quite possible that switching to InnoDB would improve performance, but in my experience, you can't really be sure until you try it. If I were you, I would set up a test environment on the same server, convert to InnoDB and run a benchmark.
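The conversion itself is one statement per table (the table name here is a placeholder); since it rewrites the whole table, try it on the test copy first:
ALTER TABLE action_log ENGINE=InnoDB;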
From my experience, MyISAM tables are only useful for full-text indexing, where you need good search performance on large text columns but don't yet need a full-fledged search engine like Solr or Elasticsearch.
If you want to switch to InnoDB but want to keep indexing your text in a MyISAM table, I suggest you take a look at this: http://blog.lavoie.sl/2013/05/converting-myisam-to-innodb-keeping-fulltext.html
Also: InnoDB supports live atomic backups using innobackupex from Percona. This is a godsend when dealing with production servers.