I had a MySQL 5.7 instance on Google Cloud SQL.
Now I have created a MySQL 8 instance.
The configuration is pretty much the same (except that I use 2 CPUs instead of one, with 3.75 GB of RAM).
The default config for MySQL memory usage (innodb_buffer_pool_size, etc.) appears to be the same.
I migrated about half of my applications to this instance. What happens now is that the instance's memory usage climbs above 3.XX GB and the service gets restarted.
This is super annoying, because during that time my applications obviously crash.
It seems like memory usage grows with every SELECT statement, as if everything is being cached.
Here are some of the config values:
+--------------------------+------------+
| Variable_name            | Value      |
+--------------------------+------------+
| key_buffer_size          | 8.00 MB    |
| innodb_buffer_pool_size  | 1408.00 MB |
| innodb_log_buffer_size   | 16.00 MB   |
| sort_buffer_size         | 0.25 MB    |
| read_buffer_size         | 0.125 MB   |
| read_rnd_buffer_size     | 0.25 MB    |
| join_buffer_size         | 0.25 MB    |
| thread_stack             | 0.273 MB   |
| binlog_cache_size        | 0.031 MB   |
| tmp_table_size           | 16.00 MB   |
+--------------------------+------------+
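A rough worst-case estimate I put together (just a sketch: it sums the global buffers plus the per-connection buffers multiplied by max_connections, which overstates real usage but shows how per-connection memory can add up):

-- Back-of-the-envelope worst case: global buffers + per-connection buffers * max_connections.
-- Connections rarely allocate every buffer at full size, so this is an upper bound,
-- not the actual footprint.
SELECT
  ROUND(( @@innodb_buffer_pool_size
        + @@innodb_log_buffer_size
        + @@key_buffer_size ) / 1024 / 1024)                       AS global_buffers_mb,
  ROUND(( @@sort_buffer_size + @@read_buffer_size + @@read_rnd_buffer_size
        + @@join_buffer_size + @@thread_stack + @@binlog_cache_size
        + @@tmp_table_size ) * @@max_connections / 1024 / 1024)    AS per_connection_worst_case_mb;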
This makes Cloud SQL pretty much unusable for me. I need MySQL 8 without it crashing several times a day.
My MariaDB server's disk usage increases every day until the disk fills up, even though the data is actually being inserted on a different disk.
The used disk space is only reclaimed when I stop (or restart) the MariaDB daemon.
MariaDB works fine until the disk fills up.
MariaDB environment:
Version : 10.3.x
OS : Ubuntu 18.04 LTS
Related DB configuration:
+-----------------------------+------------------------+
| Variable_name               | Value                  |
+-----------------------------+------------------------+
| basedir                     | /usr                   |
| innodb_temp_data_file_path  | ibtmp1:12M:autoextend  |
| datadir                     | /mysql_data            |
| innodb_tmpdir               |                        |
| max_tmp_tables              | 32                     |
| slave_load_tmpdir           | /mysql_data/mysql_tmp  |
| tmp_disk_table_size         | 4294967295             |
| tmp_memory_table_size       | 33554432               |
| tmp_table_size              | 33554432               |
| tmpdir                      | /mysql_data/mysql_tmp  |
+-----------------------------+------------------------+
'/mysql_data' is a physically separate disk from '/'.
I understand why '/mysql_data' grows as data is inserted, but I don't understand why usage on the '/' disk increases by about 2 GB every day.
When I stop the MariaDB daemon, the space on '/' is fully reclaimed.
Every day I take a full backup of the database with mysqldump, written to the '/mysql_data' disk.
I searched the MariaDB log, but there are no errors or other noticeable messages.
I think the MariaDB daemon is holding on to space on the system disk ('/'), but I don't know why.
I need your help. How can I solve this?
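What I plan to check next (just a diagnostic sketch, not a fix) is whether any log or temp path still resolves to the '/' filesystem, since a file that mysqld keeps open there would only release its space when the daemon shuts down:

-- Show every path-like setting; anything resolving to '/' rather than
-- /mysql_data is a candidate for the space that is only freed on restart.
SELECT @@datadir, @@tmpdir, @@innodb_tmpdir,
       @@log_error, @@general_log_file, @@slow_query_log_file,
       @@general_log, @@slow_query_log;

On the host side, something like lsof +L1 against the mysqld process (files deleted but still held open) should show exactly which files keep the space reserved until a restart.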
Lately my MySQL 5.5.27 has been performing very poorly. I have changed just about everything in the config to see if it makes a difference, with no luck. Tables are constantly locked, reaching 6-9 locks per table, and my SELECT queries take forever (300-1200 seconds).
I moved everything to Pastebin because it exceeded 30k characters:
http://pastebin.com/bP7jMd97
SYS ACTIVITIES
90% UPDATES AND INSERTS
10% SELECT
My slow query log is backed up. Below is my MySQL info. Please let me know if there is anything I should add that would help.
Server version 5.5.27-log
Protocol version 10
Connection XX.xx.xxx via TCP/IP
TCP port 3306
Uptime: 21 hours 39 min 40 sec
Uptime: 78246 Threads: 125 Questions: 6764445 Slow queries: 25 Opens: 1382 Flush tables: 2 Open tables: 22 Queries per second avg: 86.451
SHOW OPEN TABLES
+----------+---------------+--------+-------------+
| Database | Table | In_use | Name_locked |
+----------+---------------+--------+-------------+
| aridb | ek | 0 | 0 |
| aridb | ey | 0 | 0 |
| aridb | ts | 4 | 0 |
| aridb | tts | 6 | 0 |
| aridb | tg | 0 | 0 |
| aridb | tgle | 2 | 0 |
| aridb | ts | 5 | 0 |
| aridb | tg2 | 1 | 0 |
| aridb | bts | 0 | 0 |
+----------+---------------+--------+-------------+
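In case it helps, this is how I've been checking which sessions hold or wait on those locks (nothing fancy, just the built-in views):

SHOW FULL PROCESSLIST;

-- or filtered down to sessions stuck on locks; the long-running
-- UPDATE/INSERT listed alongside them is usually the holder:
SELECT id, user, db, time, state, info
FROM   information_schema.PROCESSLIST
WHERE  state LIKE '%lock%';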
I've hit a brick wall and need some guidance. Thanks!
From looking through your log, it would seem the problem (as I'm quite sure you're aware) is the huge number of locks present, given the amount of data being updated / selected / inserted, possibly all at the same time.
It is really hard to give performance tips without first knowing a lot of information you don't provide, such as table sizes, schema, hardware, config, topology, etc. SO probably isn't the best place for such a broad question anyway!
I’ll keep my answer as generic as I can, but possible things to look at or try would be:
Run EXPLAIN on the SELECT queries and make sure they find data selectively rather than performing full table scans or reading huge amounts of data
Leave the server to do its inserts and updates, but create a read replica for reporting; that way reporting reads won't hold locks
If you're updating many rows at a time, think about updating in batches with a LIMIT so less data is locked at once (see the sketch after this list)
If you are able to, delay the inserts to relieve pressure
Look at a hardware fix such as solid-state disks for I/O performance, and more memory so that more indexes / data can be held in memory or a larger buffer can be configured
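For the batched-update idea, a minimal sketch (table and column names here are purely hypothetical, not taken from your schema) would be to repeat something like this from the application until no rows are affected:

-- Hypothetical batched update: rerun until ROW_COUNT() returns 0,
-- so each pass locks only a small slice of rows instead of the whole set.
UPDATE my_table
SET    processed = 1
WHERE  processed = 0
LIMIT  10000;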
I found a very slow MySQL query in my web app. The weird thing is that the query is only slow the first time it's executed, despite the fact that the query cache is at its default (query_cache_size = 0), like so:
mysql> show variables like 'query%';
+------------------------------+---------+
| Variable_name | Value |
+------------------------------+---------+
| query_alloc_block_size | 8192 |
| query_cache_limit | 1048576 |
| query_cache_min_res_unit | 4096 |
| query_cache_size | 0 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
| query_prealloc_size | 8192 |
+------------------------------+---------+
The even weirder thing is that this speedup persists even after the MySQL server has been stopped and restarted (I'm using OS X, and perform this restart using the System Preferences pane). The only way I can recreate the poor performance of the initial query is by rebooting the system.
So my question is: how is this happening? Obviously some sort of caching at work, but where? And how does it persist across database restarts? This query is mediated through our web app, which comes via PHP/Apache, but there are no extra bells and whistles, and the curious caching also persists across Apache restarts.
Help?
My wild guess is that the operating system, or the hard disk, is caching the underlying data files.
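One way to sanity-check that guess (a sketch only, and only relevant if the table is InnoDB): a mysqld restart empties MySQL's own buffer pool, but not the OS page cache, so compare these counters around each kind of restart:

-- Innodb_buffer_pool_reads counts pages fetched from outside the buffer pool.
-- After a mysqld-only restart those fetches are served from the OS page cache
-- and stay fast; after a full reboot they hit the physical disk and the first
-- run is slow again.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';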
I have an issue here with how to configure MySQL (MyISAM) properly so that a bulk insert (LOAD DATA INFILE) is performed fast.
There is a 6 GB text file to be imported: 15 million rows, 16 columns (some int, some varchar(255), one varchar(40), one char(1), some datetime, one mediumtext).
Relevant my.cnf settings:
key_buffer = 800M
max_allowed_packet = 160M
thread_cache_size = 80
myisam_sort_buffer_size = 400M
bulk_insert_buffer_size = 400M
delay_key_write = ON
delayed_insert_limit = 10000
There are three indexes: one primary (auto-increment int), one unique int, and one unique varchar(40).
The problem is that after executing the LOAD DATA INFILE command, the first 3 GB of data are imported quickly (judging by the growing size of table.MYD: 5-8 MB/s), but upon crossing the 3020 MB mark the import speed decreases greatly, with table.MYD growing at only 0.5 MB/s. I've noticed that the import slows down once Key_blocks_unused gets drained to zero. This is the output of SHOW STATUS LIKE '%key%'; at the beginning of the import:
mysql> show status like '%key%';
+------------------------+---------+
| Variable_name | Value |
+------------------------+---------+
| Com_preload_keys | 0 |
| Com_show_keys | 0 |
| Handler_read_key | 0 |
| Key_blocks_not_flushed | 57664 |
| Key_blocks_unused | 669364 |
| Key_blocks_used | 57672 |
| Key_read_requests | 7865321 |
| Key_reads | 57672 |
| Key_write_requests | 2170158 |
| Key_writes | 4 |
+------------------------+---------+
10 rows in set (0.00 sec)
and this is how it looks after the 3020 MB mark, i.e. when Key_blocks_unused is down to zero, and that's when the bulk insert gets really slow:
mysql> show status like '%key%';
+------------------------+-----------+
| Variable_name | Value |
+------------------------+-----------+
| Com_preload_keys | 0 |
| Com_show_keys | 0 |
| Handler_read_key | 0 |
| Key_blocks_not_flushed | 727031 |
| Key_blocks_unused | 0 |
| Key_blocks_used | 727036 |
| Key_read_requests | 171275179 |
| Key_reads | 1163091 |
| Key_write_requests | 41181024 |
| Key_writes | 436095 |
+------------------------+-----------+
10 rows in set (0.00 sec)
The problem is pretty clear, to my understanding: indexes are stored in the key cache, but once the cache fills up, index blocks get written to disk one by one, which is slow, and therefore the whole process slows down. If I disable the unique index on the varchar(40) column, so that all the indexes fit into the key cache (I guess Key_blocks_used is the variable directly dependent on key_buffer, isn't it?), the bulk import succeeds.

So I'm curious: how do I make MySQL flush all the used key blocks to disk at once and free up Key_blocks_used? I understand that it might be doing some sorting on the fly, but still, I would expect it to manage some cached RAM-to-disk synchronization so that indexes can be handled even when they don't all fit into the memory cache. So my question is: how do I configure MySQL so that bulk inserting avoids writing to disk on (almost) every index entry, even when the indexes don't all fit into the cache?

Last but not least, delay_key_write is set to 1 for the table in question, though it didn't add any speed-up compared to when it was disabled.
Thanks for any thoughts, ideas, explanations and RTMs in advance! (:
One more little question: how would I calculate how many varchar(40) index entries fit into the cache before Key_blocks_unused drops to 0?
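My rough back-of-the-envelope so far (a sketch only; it ignores B-tree non-leaf blocks and per-entry overhead, so treat the numbers as upper bounds):

-- The key cache is split into blocks of key_cache_block_size (1 KB by default),
-- so key_buffer_size / key_cache_block_size is roughly the initial Key_blocks_unused.
-- Each varchar(40) key entry costs about 40 bytes plus a few bytes of pointer/overhead.
SELECT @@key_buffer_size DIV @@key_cache_block_size              AS total_key_blocks,
       (@@key_buffer_size DIV @@key_cache_block_size)
         * (@@key_cache_block_size DIV (40 + 6))                  AS approx_varchar40_entries;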
P.S. Disabling indexes with $ myisamchk --keys-used=0 -rq /path/to/db/tbl_name and then re-enabling them with $ myisamchk -rq /path/to/db/tbl_name, as described in the MySQL docs, is a known solution that works, but only when bulk-inserting into an empty table. When there is already some data in the table, index uniqueness checking is necessary, so disabling the indexes is not a solution.
When you import data with LOAD DATA INFILE, I think MySQL performs the inserts one by one, and with each insert it tries to update the index file (.MYI) as well. This can slow down your import, as it consumes both I/O and CPU resources for each individual insert.
What you could do is add 4 lines around your import to disable the keys of your table and re-enable them once the inserts are done, and you should see the difference.
LOCK TABLES tableName WRITE;
ALTER TABLE tableName DISABLE KEYS;

-- your insert statements go here

ALTER TABLE tableName ENABLE KEYS;
UNLOCK TABLES;
If you don't want to edit your data file, try using mysqldump to get a proper dump file, and you shouldn't run into this slowness when importing the data.
##Dump the database
mysqldump databaseName > database.sql
##Import the database
mysql databaseName < database.sql
Hope this helps!
I am not sure whether the key_buffer you mention is the same as key_buffer_size.
I had faced a similar problem. Mine was resolved by bumping up the key_buffer_size value to something like 1 GB. Check my question here.
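For reference, a minimal sketch of that change (size it to your own RAM and index size; the 1 GB figure is just what worked for me, not a recommendation):

-- Raise the MyISAM key cache at runtime (requires SUPER); also set
-- key_buffer_size in my.cnf so the change survives a restart.
SET GLOBAL key_buffer_size = 1073741824;  -- 1 GB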