Huge temp table going to disk in MySQL

How can I avoid temporary table creation on disk in MySQL when a table has multiple TEXT columns?
I have set tmp_table_size to 1 GB and raised max_heap_table_size as well.
Still the temporary tables go to disk.
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 70742 |
| Created_tmp_files       | 6     |
| Created_tmp_tables      | 71076 |
+-------------------------+-------+
Thanks :-)
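One documented caveat is worth checking first: internal in-memory temporary tables traditionally cannot hold TEXT or BLOB columns, so any implicit temporary table that includes such a column is created on disk regardless of tmp_table_size / max_heap_table_size. A quick sanity check of the settings and the spill ratio (a sketch; run it on the affected server):
show global variables like 'tmp_table_size';      -- only effective up to max_heap_table_size
show global variables like 'max_heap_table_size';
show global status like 'Created_tmp%';           -- compare disk temp tables to total temp tables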

Related

Why the error: Neither --relay-log nor --relay-log-index were used

I have a MySQL InnoDB cluster of 3 MySQL instances on Docker that keeps failing with the error "group replication not active".
Looking at the error logs, the only thing I can find is this warning:
Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=f3353ca0124a-relay-bin' to avoid this problem
I ran the following query on that particular MySQL instance, 'f3353ca0124a':
show variables like '%relay_log%';
and I find:
+---------------------------+---------------------------------------------+
| Variable_name | Value |
+---------------------------+---------------------------------------------+
| max_relay_log_size | 0 |
| relay_log | f3353ca0124a-relay-bin |
| relay_log_basename | /var/lib/mysql/f3353ca0124a-relay-bin |
| relay_log_index | /var/lib/mysql/f3353ca0124a-relay-bin.index |
| relay_log_info_file | relay-log.info |
| relay_log_info_repository | TABLE |
| relay_log_purge | ON |
| relay_log_recovery | OFF |
| relay_log_space_limit | 0 |
| sync_relay_log | 10000 |
| sync_relay_log_info | 10000 |
+---------------------------+---------------------------------------------+
11 rows in set (0.11 sec)
So I am confused, because it looks to me like the value the error suggests I should set is already set.
Can anyone show me how to fix this error?
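For what it's worth, this warning is emitted when the relay log name is derived from the hostname rather than set explicitly, so it usually goes away once the base name is pinned in the server configuration. A minimal sketch of that in my.cnf (the base name below is illustrative, not taken from the question):
[mysqld]
relay_log       = mysql-relay-bin
relay_log_index = mysql-relay-bin.index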

Selecting all rows (700000) takes a very long time - hours

I use MySQL/MariaDB (Server version: 10.3.20-MariaDB-1:10.3.20+maria~stretch mariadb.org binary distribution).
I have ~700 000 records with these columns:
id
html (mediumtext), with a very large average field length: ~150000
date
+2 other small columns
The html column contains very long text (it's HTML).
Now I need select * from table; to analyse this html, but fetching takes about ~0.03819 s per row (I tested on a smaller part), so: 700000 rows * 0.03819 s = (700000*0.03819)/60/60 = over 7 hours of selecting!
I have 8 cores and 60 GB of RAM. Profiling the query shows that transferring the data takes by far the longest.
How can I speed this up? Is it possible, or is that much data too much for MySQL so that I need MongoDB?
query_cache_limit = 64M
query_cache_size = 1024M
max_allowed_packet = 64M
net_buffer_length = 16384
max_connect_errors = 1000
thread_concurrency = 32
concurrent_insert = 2
read_rnd_buffer_size = 8M
bulk_insert_buffer_size = 8M
query_cache_limit = 64M
query_cache_size = 1024M
query_cache_type = 1
query_prealloc_size = 262144
query_alloc_block_size = 65536
transaction_alloc_block_size = 8192
transaction_prealloc_size = 4096
max_write_lock_count = 16
innodb_buffer_pool_size=30G
innodb_flush_log_at_trx_commit=2
innodb_thread_concurrency=16
innodb_flush_method=O_DIRECT
innodb_read_io_threads = 64
innodb_write_io_threads = 16
innodb_buffer_pool_instances = 20
MariaDB [db]> explain select id, href, html from raw limit 10;
+------+-------------+-------+------+---------------+------+---------+------+--------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+------+---------------+------+---------+------+--------+-------+
| 1 | SIMPLE | raw | ALL | NULL | NULL | NULL | NULL | 658793 | |
+------+-------------+-------+------+---------------+------+---------+------+--------+-------+
1 row in set (0.227 sec)
After playing with indexes:
MariaDB [db]> show index from raw;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| raw | 0 | PRIMARY | 1 | id | A | 658793 | NULL | NULL | | BTREE | | |
| raw | 1 | id | 1 | id | A | 658793 | NULL | NULL | | BTREE | | |
| raw | 1 | href | 1 | href | A | 658793 | NULL | NULL | YES | BTREE | | |
| raw | 1 | date | 1 | date | A | 131758 | NULL | NULL | YES | BTREE | | |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
4 rows in set (3.724 sec)
38 ms to fetch 150 KB from a spinning disk is quite fast.
query_cache_size = 1024M -- This is much too high. Stop at about 50M.
A PRIMARY KEY is a unique index. So, if id is the primary key, do not also say KEY(id).
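Based on the SHOW INDEX output above, that redundant secondary index could simply be dropped (a sketch; check first that nothing references the index by name):
alter table raw drop index id;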
"Is it possible, or is that much data too much for MySQL so that I need MongoDB?"
Assuming you are running at disk speed, you cannot expect any other product to run faster.
What will the client do with 100GB of data in one batch? MySQL will be happy to deliver it, but the client will probably choke to death.
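One practical pattern, assuming the analysis can consume the data incrementally, is to walk the primary key in ranges instead of issuing one giant select * (a sketch; the chunk size of 10000 is arbitrary):
select id, href, html from raw where id > 0 and id <= 10000;
select id, href, html from raw where id > 10000 and id <= 20000;
-- ...keep advancing the id range until no more rows are returned.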

RDS High CPU utilization

I am facing a high CPU utilization issue. Can too many concurrent CREATE TEMPORARY TABLE statements cause high CPU utilization?
Is there any query through which we can capture the queries that are causing the high CPU utilization?
Variables we set:
tmp_table_size = 1G
max_heap_table_size = 1G
innodb_buffer_pool_size = 145G
innodb_buffer_pool_instances = 8
innodb_page_cleaners = 8
Status variables:
mysql> show global status like '%tmp%';
+-------------------------+-----------+
| Variable_name | Value |
+-------------------------+-----------+
| Created_tmp_disk_tables | 60844516 |
| Created_tmp_files | 135751 |
| Created_tmp_tables | 107643364 |
+-------------------------+-----------+
mysql> show global status like '%innodb_buffer%';
+---------------------------------------+--------------------------------------------------+
| Variable_name | Value |
+---------------------------------------+--------------------------------------------------+
| Innodb_buffer_pool_dump_status | Dumping of buffer pool not started |
| Innodb_buffer_pool_load_status | Buffer pool(s) load completed at 170917 19:11:45 |
| Innodb_buffer_pool_resize_status | |
| Innodb_buffer_pool_pages_data | 8935464 |
| Innodb_buffer_pool_bytes_data | 146398642176 |
| Innodb_buffer_pool_pages_dirty | 18824 |
| Innodb_buffer_pool_bytes_dirty | 308412416 |
| Innodb_buffer_pool_pages_flushed | 122454921 |
| Innodb_buffer_pool_pages_free | 188279 |
| Innodb_buffer_pool_pages_misc | 377817 |
| Innodb_buffer_pool_pages_total | 9501560 |
| Innodb_buffer_pool_read_ahead_rnd | 0 |
| Innodb_buffer_pool_read_ahead | 585245 |
| Innodb_buffer_pool_read_ahead_evicted | 14383 |
| Innodb_buffer_pool_read_requests | 304878851665 |
| Innodb_buffer_pool_reads | 10537188 |
| Innodb_buffer_pool_wait_free | 0 |
| Innodb_buffer_pool_write_requests | 14749510186 |
+---------------------------------------+--------------------------------------------------+
Step 1 -
Run show processlist and check whether any process is locking a table; if so, change that table to MyISAM.
Step 2 -
Check the RAM and your database size.
Step 3 -
Run EXPLAIN on complex queries and check whether a filesort is used or a large number of rows is being scanned; remove that either by flattening the table or by keeping to no more than 4 subqueries.
Step 4 -
Use joins efficiently.
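As for the second question (capturing the queries responsible for the CPU), one common starting point is the statement digest summary in performance_schema, assuming it is enabled on the instance (a sketch, not RDS-specific):
select digest_text,
       count_star,
       sum_timer_wait/1000000000000 as total_exec_seconds,
       sum_created_tmp_disk_tables
from performance_schema.events_statements_summary_by_digest
order by sum_timer_wait desc
limit 10;
-- On a CPU-bound server, the statements with the largest total execution time
-- (and the most on-disk temp tables) are usually the ones worth EXPLAINing first.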

Why is information_schema.tables.data_free always 8388608?

Does anyone know why information_schema.tables.data_free for InnoDB tables is always 8388608, no matter how many rows the tables contain?
+--------------+------------+------------+-----------+--------+
| table_schema | table_name | table_rows | data_free | engine |
+--------------+------------+------------+-----------+--------+
| g33v1        | appraise   |          0 |   8388608 | InnoDB |
| g33v1        | areatype   |      12403 |   8388608 | InnoDB |
| g33v1        | atype      |     581982 |   8388608 | InnoDB |
| g33v1        | atype2     |     579700 |   8388608 | InnoDB |
+--------------+------------+------------+-----------+--------+
thanks.
I think it is the maximum temporary-table size allocated for sorting and other operations that need space on the HDD, since the same value appears in other places as well:
mysql> show global variables like '%tmp%';
+----------------+-----------------------+
| Variable_name | Value |
+----------------+-----------------------+
| bdb_tmpdir | /usr/local/mysql/tmp/ |
| max_tmp_tables | 32 |
| tmp_table_size | 8388608 |
| tmpdir | /usr/local/mysql/tmp |
+----------------+-----------------------+
mysql> show global variables like '%myisam%';
+---------------------------------+---------------+
| Variable_name | Value |
+---------------------------------+---------------+
| myisam_data_pointer_size | 4 |
| myisam_max_extra_sort_file_size | 2147483648 |
| myisam_max_sort_file_size | 2147483647 |
| myisam_recover_options | OFF |
| myisam_repair_threads | 1 |
| myisam_sort_buffer_size | 4194304 |
| myisam_stats_method | nulls_unequal |
+---------------------------------+---------------+
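A quick way to test that hypothesis, assuming access to information_schema, is to put the two values side by side (a sketch):
select table_schema, table_name, data_free from information_schema.tables where engine = 'InnoDB' limit 5;
show global variables like 'tmp_table_size';
-- If data_free tracks tmp_table_size on servers with different settings, that
-- supports the explanation above; otherwise it is just a coincidence of defaults.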

Increasing the caching capability of MySQL

My MySQL only serves read requests, so I thought it would be a good idea to make full use of the cache. I am running MySQL in a VM, and it is the only application running inside that VM. I have allocated 2 GB of memory to the VM, which runs 64-bit CentOS. If you think it is already using the maximum memory it can use, I can allocate more to the VM. I am not very good at understanding the MySQL settings or at working out the memory footprint of a process, but I am interested in learning how. Thanks a lot for the help.
Here is some information about my MySQL setup:
mysql> show global variables like "%cache%";
+------------------------------+----------------------+
| Variable_name | Value |
+------------------------------+----------------------+
| binlog_cache_size | 32768 |
| have_query_cache | YES |
| key_cache_age_threshold | 300 |
| key_cache_block_size | 1024 |
| key_cache_division_limit | 100 |
| max_binlog_cache_size | 18446744073709547520 |
| query_cache_limit | 1048576 |
| query_cache_min_res_unit | 4096 |
| query_cache_size | 0 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
| table_cache | 64 |
| thread_cache_size | 0 |
+------------------------------+----------------------+
13 rows in set (0.00 sec)
mysql> show global variables like "%buffer%";
+-------------------------------+---------+
| Variable_name | Value |
+-------------------------------+---------+
| bulk_insert_buffer_size | 8388608 |
| innodb_buffer_pool_awe_mem_mb | 0 |
| innodb_buffer_pool_size | 8388608 |
| innodb_log_buffer_size | 1048576 |
| join_buffer_size | 131072 |
| key_buffer_size | 8384512 |
| myisam_sort_buffer_size | 8388608 |
| net_buffer_length | 16384 |
| preload_buffer_size | 32768 |
| read_buffer_size | 131072 |
| read_rnd_buffer_size | 262144 |
| sort_buffer_size | 2097144 |
+-------------------------------+---------+
12 rows in set (0.00 sec)
mysql> show table status where name="items";
+-------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment |
+-------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
| items | MyISAM | 10 | Dynamic | 42667 | 346 | 14775916 | 281474976710655 | 1970176 | 0 | 341337 | 2009-07-22 13:31:00 | 2010-10-20 15:37:18 | NULL | latin1_swedish_ci | NULL | | |
+-------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
This is the output of my ulimit -a
[sethu#work13 root]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 8191
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 8191
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Please let me know if you need more information.
You can achieve a quick performance boost by enabling the query cache!
Add this to your my.cnf:
query_cache_type=1
query_cache_limit=1M
query_cache_size=32M
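After restarting MySQL with those settings, it is worth confirming the cache is actually being hit; a sketch of the check:
mysql> show global status like 'Qcache%';
-- Qcache_hits rising steadily (relative to Com_select) means reads are being served
-- from the query cache; Qcache_lowmem_prunes growing fast suggests query_cache_size
-- is still too small for the workload.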
For basic statistics and recommendations you can start with the mysqltuner.pl script. Do not apply its recommendations blindly, as they might decrease performance.
MySQLTuner Github page
One-liner to fetch latest version of script and run it:
curl -sSL mysqltuner.pl | perl
Perhaps you could consider using memcached? However, switching the table to InnoDB should give an immediate improvement (as N.B. said in a comment).
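A minimal sketch of that last suggestion, using the items table from the question (converting the engine rewrites the whole table, so do it in a maintenance window, and raise innodb_buffer_pool_size from the 8 MB shown in the buffer settings above first):
mysql> alter table items engine=InnoDB;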