I have an issue with how to configure MySQL (MyISAM) properly so that a bulk insert (LOAD DATA INFILE) performs fast.
There is a 6 GB text file to be imported: 15 million rows, 16 columns (some INT, some VARCHAR(255), one VARCHAR(40), one CHAR(1), some DATETIME, one MEDIUMTEXT).
Relevant my.cnf settings:
key_buffer = 800M
max_allowed_packet = 160M
thread_cache_size = 80
myisam_sort_buffer_size = 400M
bulk_insert_buffer_size = 400M
delay_key_write = ON
delayed_insert_limit = 10000
There are three indexes: one primary (auto-increment INT), one unique INT, and one unique VARCHAR(40).
The problem is that after executing the LOAD DATA INFILE command, the first 3 GB of data are imported quickly (judging by the growing size of table.MYD: 5-8 MB/s), but upon crossing the 3020 MB mark the import speed drops greatly: table.MYD grows at only 0.5 MB/s. I've noticed that the import slows down once Key_blocks_unused is drained to zero. This is the output of SHOW STATUS LIKE '%key%' at the beginning of the import:
mysql> show status like '%key%';
+------------------------+---------+
| Variable_name | Value |
+------------------------+---------+
| Com_preload_keys | 0 |
| Com_show_keys | 0 |
| Handler_read_key | 0 |
| Key_blocks_not_flushed | 57664 |
| Key_blocks_unused | 669364 |
| Key_blocks_used | 57672 |
| Key_read_requests | 7865321 |
| Key_reads | 57672 |
| Key_write_requests | 2170158 |
| Key_writes | 4 |
+------------------------+---------+
10 rows in set (0.00 sec)
And this is how it looks after the 3020 MB mark, i.e. when Key_blocks_unused reaches zero; that's when the bulk insert gets really slow:
mysql> show status like '%key%';
+------------------------+-----------+
| Variable_name | Value |
+------------------------+-----------+
| Com_preload_keys | 0 |
| Com_show_keys | 0 |
| Handler_read_key | 0 |
| Key_blocks_not_flushed | 727031 |
| Key_blocks_unused | 0 |
| Key_blocks_used | 727036 |
| Key_read_requests | 171275179 |
| Key_reads | 1163091 |
| Key_write_requests | 41181024 |
| Key_writes | 436095 |
+------------------------+-----------+
10 rows in set (0.00 sec)
The problem is pretty clear, to my understanding: indexes are stored in the key cache, but once the cache fills up, they get written to disk one by one, which is slow, and therefore the whole process slows down. If I disable the unique index on the VARCHAR(40) column, so that all remaining indexes fit into Key_blocks_used (I guess this is the variable directly dependent on key_buffer, isn't it?), the whole bulk import succeeds.

So I'm curious: how do I make MySQL flush all the Key_blocks_used data to disk at once and free up the cache? I understand it might be doing some sorting on the fly, but it should still be possible to do some cached RAM-disk synchronization and manage indexes even when they don't all fit into the memory cache. So my question is: how do I configure MySQL so that bulk inserting avoids a disk write on (almost) every index entry, even when all the indexes don't fit into the cache?

Last but not least: delay_key_write is set to 1 for this table, though it didn't add any speed-up compared to when it was disabled.
Thanks for any thoughts, ideas, explanations and RTMs in advance ! (:
One more little question: how would I calculate how many VARCHAR(40) index entries fit into the cache before Key_blocks_unused drops to 0?
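A rough estimate can be computed as follows. This is only a sketch: it assumes the default 1024-byte key_cache_block_size, an entry of roughly key length plus a ~6-byte row pointer, and B-tree pages about 2/3 full; per-block bookkeeping overhead and the two INT indexes sharing the same cache are ignored.

```python
# Rough estimate of how many varchar(40) index entries fit in the key cache
# before Key_blocks_unused hits zero.  Assumptions (approximate): default
# key_cache_block_size of 1024 bytes, entry = key length + ~6-byte row
# pointer, B-tree pages about 2/3 full.  Per-block overhead and the two
# INT indexes sharing the same cache are ignored.
key_buffer = 800 * 1024 * 1024            # key_buffer_size = 800M
block_size = 1024                         # key_cache_block_size default
total_blocks = key_buffer // block_size   # upper bound on Key_blocks_unused

entry_size = 40 + 6                       # varchar(40) key + row pointer
usable_per_block = int(block_size * 2 / 3)
entries_per_block = usable_per_block // entry_size
approx_entries = total_blocks * entries_per_block

print(total_blocks)      # 819200
print(approx_entries)    # roughly 11.5 million entries
```

The real block counts in SHOW STATUS (727036 used at saturation) come out somewhat below 819200 because each cache block carries management overhead.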
P.S. Disabling the indexes with $myisamchk --keys-used=0 -rq /path/to/db/tbl_name and then re-enabling them with $myisamchk -rq /path/to/db/tbl_name, as described in the MySQL docs, is a known solution. It works, but only when bulk-inserting into an empty table: when the table already contains data, index uniqueness checking is necessary, so disabling the indexes is not an option.
When you import data with LOAD DATA INFILE, I think MySQL performs the inserts one by one, and with each insert it tries to update the index file (.MYI) as well. This can slow down your import, as it consumes both I/O and CPU resources for every individual insert.
What you could do is add four lines to your import file to disable the keys of your table before the inserts and re-enable them at the end, and you should see the difference.
LOCK TABLES tableName WRITE;
ALTER TABLE tableName DISABLE KEYS;
----
your insert statement from go here..
----
ALTER TABLE tableName ENABLE KEYS;
UNLOCK TABLES;
If you don't want to edit your data file, use mysqldump to get a proper dump file and you shouldn't run into this slowness when importing data.
##Dump the database
mysqldump databaseName > database.sql
##Import the database
mysql databaseName < database.sql
Hope this helps!
I am not sure whether the key_buffer you mention is the same as key_buffer_size.
I faced a similar problem. Mine was resolved by bumping the key_buffer_size value up to something like 1 GB. Check my question here.
Related
Many of us who work on home or pet projects and use databases for storing structured data may encounter performance issues when dumping/restoring data. It can be annoying to sit and wait for yet another dump restore operation for dozens of minutes, or even hours.
I have quite typical machine specs: a 4-core i5-7300, 8 GB RAM, a reasonably fast M.2 drive, and Windows 10 / MySQL 5.7.
The problem was that restoring a ~4.5 GB file took more than 4 hours. That was ridiculous, and I noticed the mysqld process wasn't using even half of the system resources (CPU/memory/disk I/O).
Generally speaking, this post is a summary of related issues, with credits to the other posts listed at the bottom.
I performed a number of experiments with MySQL parameters to speed up dump restore operations:
+--------------------------------+---------+---------+-----------------------+----------------------+
| Parameter                      | Default | Changed | Performance (minutes) | Performance gain (%) |
+--------------------------------+---------+---------+-----------------------+----------------------+
| All default                    | -       | -       | 259 min               | -                    |
| innodb_buffer_pool_size        | 8M      | 4G      | 32 min                | +709%                |
| innodb_buffer_pool_size        | 4G      | 6G      | 32 min                | ~0%                  |
| innodb_log_file_size           | 48M     | 1G      | 11 min                | +190%                |
| innodb_log_file_size           | 1G      | 2G      | 10 min                | +10%                 |
| max_allowed_packet             | 4M      | 128M    | 10 min                | ~0%                  |
| innodb_flush_log_at_trx_commit | 1       | 0       | 9 min 25 sec          | +5%                  |
| innodb_thread_concurrency      | 9       | 0       | 9 min 27 sec          | ~0%                  |
| innodb_doublewrite             | -       | off     | 8 min 5 sec           | +18%                 |
+--------------------------------+---------+---------+-----------------------+----------------------+
Summary (for best dump restore performance):
Set innodb_buffer_pool_size to half of RAM
Set innodb_log_file_size to 1G
Set innodb_flush_log_at_trx_commit to 0
Disabling innodb_doublewrite is recommended only for maximum restore speed; it should stay enabled on production. I also found that changing another related parameter, innodb_flush_method, didn't change performance, but that may be an issue of the Windows platform.
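As a concrete example, the summary above could land in my.ini roughly like this (a sketch: the 4G value assumes the 8 GB machine from this post, so adjust to half of your own RAM, and remember to revert the last two lines after the restore):

```ini
[mysqld]
# Buffer pool at about half of an 8 GB machine's RAM
innodb_buffer_pool_size = 4G
# Larger redo log so the restore doesn't checkpoint constantly
innodb_log_file_size = 1G
# Flush the log about once per second instead of at every commit (restore only)
innodb_flush_log_at_trx_commit = 0
# Only for the duration of the restore; re-enable on production
innodb_doublewrite = 0
```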
If you have a complex structure with a lot of foreign keys, for example, you can try the Bulk Data Loading for InnoDB Tables tricks; the link is listed at the bottom of the page.
As you can see, I tried to increase CPU utilization by setting innodb_thread_concurrency to 0 (and also setting innodb_read_io_threads to its maximum of 64), but the results didn't change; it seems the mysqld process is already quite efficient in a multi-core environment.
Restoring only data (without table structure) also didn't affect performance
I also changed a number of other parameters, but those above are most relevant ones for dump restore operation so far.
It may seem obvious, but a novice question could be: where do I find and set these settings?
On Windows, the my.ini file is located at ProgramData/MySQL/MySQL Server <version>/my.ini. You won't find some settings there (like innodb_doublewrite); that's OK, just add them at the end of the file.
The best way to change settings is to use MySQL Workbench (Server > Options file > InnoDB).
I pay my credits to following posts (and a lot of similar ones), which I found very useful:
https://www.percona.com/blog/2018/02/22/restore-mysql-logical-backup-maximum-speed/
https://www.percona.com/blog/2014/01/28/10-mysql-performance-tuning-settings-after-installation/
https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-bulk-data-loading.html
https://dba.stackexchange.com/questions/86636/when-is-it-safe-to-disable-innodb-doublewrite-buffering
https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html
I have recently started seeing a high count of set-option queries in MySQL. It's around 15k/sec:
mysql> SHOW GLOBAL STATUS LIKE '%set%';
+-------------------+------------+
| Variable_name | Value |
+-------------------+------------+
| Com_reset | 0 |
| Com_set_option | 5472249432 |
| Com_show_charsets | 31 |
| Com_stmt_reset | 0 |
+-------------------+------------+
4 rows in set (0.00 sec)
However, nothing like a SET operation shows up in SHOW PROCESSLIST.
Any idea why?
Thanks
The COM_SET_OPTION count is probably high because the MySQL library you are using issues a SET option command every time it creates a connection to the MySQL server. Quite a few libraries set some basic options every time they open a new connection.
This could also happen if your system does a lot of transactions with MySQL.
However, this is pretty normal. I am using MySQL 5.6.34 with PHP. Refer
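To confirm the rate, remember that Com_* counters are cumulative since server start, so you can sample the counter twice (via SHOW GLOBAL STATUS) and divide the delta by the interval. A tiny sketch with made-up sample values:

```python
# Com_* status counters are cumulative since server start, so the per-second
# rate is the delta between two samples divided by the sampling interval.
def per_second_rate(first_sample, second_sample, interval_sec):
    return (second_sample - first_sample) / interval_sec

# Two readings of Com_set_option taken 10 seconds apart (hypothetical values):
print(per_second_rate(5_472_249_432, 5_472_399_432, 10))  # 15000.0
```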
According to the slow query log, the following query (and similar queries) would occasionally take around 2 s to execute:
INSERT INTO incoming_gprs_data (data,type) VALUES ('3782379837891273|890128398120983891823881abcabc','GT100');
Table structure:
CREATE TABLE `incoming_gprs_data` (
`id` int(200) NOT NULL AUTO_INCREMENT,
`dt` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`data` text NOT NULL,
`type` char(10) NOT NULL,
`test_udp_id` int(20) NOT NULL,
`parse_result` text NOT NULL,
`completed` tinyint(1) NOT NULL,
PRIMARY KEY (`id`),
KEY `completed` (`completed`)
) ENGINE=InnoDB AUTO_INCREMENT=5478246 DEFAULT CHARSET=latin1
Activities related to this table:
Around 200 rows are inserted into this table every second. The incoming data originates from different sources (thus it does not happen in one process, but in multiple processes every second).
A cron process fetches the rows via SELECT * FROM incoming_gprs_data WHERE completed = 0, processes them, and updates them to completed = 1.
Another cron process (running every 15 minutes) deletes the completed rows (i.e. completed = 1) to keep the table slim.
The slow query log does not show any slow SELECT query related to the table.
The table is relatively small: fewer than 200K rows.
We do #2 and #3 because we previously discovered that deleting completed rows took time because the index needs to be rebuilt. Therefore we added the completed flag and perform the deletion less frequently. These changes helped reduce the number of slow queries.
Here are the InnoDB settings we have:
+---------------------------------+------------------------+
| Variable_name | Value |
+---------------------------------+------------------------+
| have_innodb | YES |
| ignore_builtin_innodb | OFF |
| innodb_adaptive_flushing | ON |
| innodb_adaptive_hash_index | ON |
| innodb_additional_mem_pool_size | 8388608 |
| innodb_autoextend_increment | 8 |
| innodb_autoinc_lock_mode | 1 |
| innodb_buffer_pool_instances | 2 |
| innodb_buffer_pool_size | 6442450944 |
| innodb_change_buffering | all |
| innodb_checksums | ON |
| innodb_commit_concurrency | 0 |
| innodb_concurrency_tickets | 500 |
| innodb_data_file_path | ibdata1:10M:autoextend |
| innodb_data_home_dir | |
| innodb_doublewrite | OFF |
| innodb_fast_shutdown | 1 |
| innodb_file_format | Antelope |
| innodb_file_format_check | ON |
| innodb_file_format_max | Antelope |
| innodb_file_per_table | ON |
| innodb_flush_log_at_trx_commit | 2 |
| innodb_flush_method | O_DIRECT |
| innodb_force_load_corrupted | OFF |
| innodb_force_recovery | 0 |
| innodb_io_capacity | 200 |
| innodb_large_prefix | OFF |
| innodb_lock_wait_timeout | 50 |
| innodb_locks_unsafe_for_binlog | OFF |
| innodb_log_buffer_size | 67108864 |
| innodb_log_file_size | 536870912 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
| innodb_max_dirty_pages_pct | 75 |
| innodb_max_purge_lag | 0 |
| innodb_mirrored_log_groups | 1 |
| innodb_old_blocks_pct | 37 |
| innodb_old_blocks_time | 0 |
| innodb_open_files | 300 |
| innodb_purge_batch_size | 20 |
| innodb_purge_threads | 0 |
| innodb_random_read_ahead | OFF |
| innodb_read_ahead_threshold | 56 |
| innodb_read_io_threads | 4 |
| innodb_replication_delay | 0 |
| innodb_rollback_on_timeout | OFF |
| innodb_rollback_segments | 128 |
| innodb_spin_wait_delay | 6 |
| innodb_stats_method | nulls_equal |
| innodb_stats_on_metadata | OFF |
| innodb_stats_sample_pages | 8 |
| innodb_strict_mode | OFF |
| innodb_support_xa | ON |
| innodb_sync_spin_loops | 30 |
| innodb_table_locks | ON |
| innodb_thread_concurrency | 0 |
| innodb_thread_sleep_delay | 10000 |
| innodb_use_native_aio | OFF |
| innodb_use_sys_malloc | ON |
| innodb_version | 1.1.8 |
| innodb_write_io_threads | 4 |
+---------------------------------+------------------------+
We set our innodb_buffer_pool_size to 6 GB after calculating it with the following SQL query:
SELECT CEILING(Total_InnoDB_Bytes*1.6/POWER(1024,3)) RIBPS
FROM (SELECT SUM(data_length+index_length) Total_InnoDB_Bytes
      FROM information_schema.tables
      WHERE engine='InnoDB') A;
It produced a result of 5 GB, and we estimated that our InnoDB tables won't exceed this size.
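The same sizing rule can be sanity-checked outside SQL; a small sketch mirroring the query's arithmetic:

```python
import math

def recommended_buffer_pool_gb(total_innodb_bytes, headroom=1.6):
    # Mirrors the SQL above: CEILING(bytes * 1.6 / POWER(1024,3)).
    # The 1.6 factor leaves headroom for growth and cache structures.
    return math.ceil(total_innodb_bytes * headroom / 1024**3)

# ~3.1 GB of InnoDB data + indexes -> a 5 GB recommendation, as in the post
print(recommended_buffer_pool_gb(int(3.1 * 1024**3)))  # 5
```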
Our primary concern right now is how to speed up the inserts into the table, and what causes the occasional slow insert queries.
As you know, 200 insertions a second is a lot. On an application of this scale, it is worth the trouble of optimizing this data flow.
InnoDB uses database transactions on all insertions. That is, every insert looks like this:
START TRANSACTION;
INSERT something...;
COMMIT;
If you don't specify these transactions, you get autocommit behavior.
The secret to doing insertions at high volume is to do many of them in each transaction, like so:
START TRANSACTION;
INSERT something...;
INSERT something...;
...
INSERT something...;
INSERT something...;
INSERT something...;
COMMIT;
START TRANSACTION;
INSERT something...;
INSERT something...;
...
INSERT something...;
INSERT something...;
INSERT something...;
COMMIT;
START TRANSACTION;
INSERT something...;
INSERT something...;
...
INSERT something...;
INSERT something...;
INSERT something...;
COMMIT;
I have had good success with up to one hundred INSERT commands before each COMMIT.
Do not forget the final COMMIT! Don't ask me how I know to give this advice. :-)
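A minimal sketch of that batching pattern, using Python's sqlite3 as a stand-in for a MySQL connection and a trimmed-down two-column table (with a MySQL driver the placeholders become %s, but the structure is identical):

```python
import sqlite3

BATCH_SIZE = 100  # ~100 INSERTs per COMMIT worked well in practice

def insert_batched(conn, rows):
    # One transaction per batch: BEGIN ... many INSERTs ... COMMIT.
    cur = conn.cursor()
    for i in range(0, len(rows), BATCH_SIZE):
        for data, rtype in rows[i:i + BATCH_SIZE]:
            cur.execute(
                "INSERT INTO incoming_gprs_data (data, type) VALUES (?, ?)",
                (data, rtype))
        conn.commit()  # the final partial batch is committed too -- don't forget it!

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE incoming_gprs_data (data TEXT, type TEXT)")
insert_batched(conn, [("payload%d" % i, "GT100") for i in range(250)])
print(conn.execute("SELECT COUNT(*) FROM incoming_gprs_data").fetchone()[0])  # 250
```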
Another way to do this in MySQL is with multi-row INSERT commands. In your case they might look like this:
INSERT INTO incoming_gprs_data (data,type) VALUES
('3782379837891273|890128398120983891823881abcabc','GT100'),
('3782379837891273|890128398120983891823881abcabd','GT101'),
('3782379837891273|890128398120983891823881abcabe','GT102'),
...
('3782379837891273|890128398120983891823881abcabf','GT103'),
('3782379837891273|890128398120983891823881abcac0','GT104');
A third way, the hardest and the highest performance way, to get a very high insert rate is to store your batches of data in text files, and then use the LOAD DATA INFILE command to put the data into your table. This technique can be very fast indeed, especially if the file can be loaded directly from the file system of your MySQL server.
I suggest you try the transaction stuff first to see if you get the performance you need.
Another thing: if you have a quiet time of day or night, you can delete the completed rows then, rather than every fifteen minutes. In any case, when you read back these rows to process or to delete, you should use a transaction-batch process like this:
done = false /* pseudocode for your programming language */
while not done {
DELETE FROM table WHERE completed = 1 LIMIT 50;
if that query handled zero rows {
done = true
}
}
This will do your deletion operation in reasonably sized transactional batches. Your occasional two-second insertion delay is probably a result of a very large transactional batch on your processing or deletion.
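A runnable version of that pseudocode, with sqlite3 standing in for MySQL. MySQL supports DELETE ... LIMIT n directly; stock SQLite does not, so the LIMIT is pushed into a rowid subquery here.

```python
import sqlite3

def delete_completed_in_batches(conn, batch_size=50):
    # Delete completed rows in reasonably sized transactional batches.
    while True:
        cur = conn.execute(
            "DELETE FROM incoming_gprs_data WHERE rowid IN "
            "(SELECT rowid FROM incoming_gprs_data WHERE completed = 1 LIMIT ?)",
            (batch_size,))
        conn.commit()           # one transaction per batch
        if cur.rowcount == 0:   # zero rows handled -> done
            break

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE incoming_gprs_data (data TEXT, completed INTEGER)")
conn.executemany("INSERT INTO incoming_gprs_data VALUES (?, ?)",
                 [("row%d" % i, i % 2) for i in range(200)])  # 100 completed rows
delete_completed_in_batches(conn)
print(conn.execute(
    "SELECT COUNT(*) FROM incoming_gprs_data WHERE completed = 1").fetchone()[0])  # 0
```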
Around 200 rows are inserted to this table every second. -- One at a time? It is much better to do a single multi-row INSERT.
Slow log query does not indicate any slow SELECT query related to the table. -- Lower long_query_time; the default of 10 seconds is virtually useless.
Cron processes do SELECT * FROM incoming_gprs_data WHERE completed = 0. -- Don't scan the entire table all at once. Walk through the table, preferably via the PRIMARY KEY, doing, say, 1000 rows at a time. More details on chunking.
The index is not "rebuilt"; it is always incrementally updated. (I hope you are not explicitly rebuilding it!)
I assume you have at least 8 GB of RAM? (The buffer pool is my clue; it should be about 70% of available RAM.)
int(200) -- the (200) means nothing; an INT is 4 bytes regardless.
Don't run two cron processes; go ahead and delete on the first pass. The UPDATE to set completed is about as costly as the DELETE.
More
If you cannot "batch" the inserts, can you at least put them in a single transaction (BEGIN ... COMMIT)? Ditto for the DELETEs. For data integrity, there is at least one disk hit per transaction, so doing several operations in a single transaction decreases I/O, thereby speeding up the query. But don't get carried away: if you do a million inserts/deletes/updates in a single transaction, there are other issues.
Another thing that can be done to decrease the I/O overhead: innodb_flush_log_at_trx_commit = 2, which is faster, but less safe than the default of 1. If your "200 inserts/sec" is 200 transactions (such as with autocommit=1), this setting change can make a big difference.
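The "walk through the table via the PRIMARY KEY, 1000 rows at a time" advice above can be sketched with keyset pagination (again with sqlite3 as a stand-in for MySQL; the SQL pattern is the same):

```python
import sqlite3

def walk_unprocessed(conn, chunk=1000):
    # Keyset pagination over the PRIMARY KEY instead of one full-table
    # SELECT * ... WHERE completed = 0: each round resumes after the
    # last id seen, so no chunk is scanned twice.
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, data FROM incoming_gprs_data "
            "WHERE completed = 0 AND id > ? ORDER BY id LIMIT ?",
            (last_id, chunk)).fetchall()
        if not rows:
            break
        yield rows               # process one chunk at a time
        last_id = rows[-1][0]    # resume after the last seen key

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE incoming_gprs_data "
             "(id INTEGER PRIMARY KEY, data TEXT, completed INTEGER)")
conn.executemany("INSERT INTO incoming_gprs_data (data, completed) VALUES (?, ?)",
                 [("row%d" % i, 0) for i in range(2500)])
chunks = list(walk_unprocessed(conn))
print([len(c) for c in chunks])  # [1000, 1000, 500]
```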
You posted in your own answer:
To resolve this, we altered the incoming_gprs_data table to use the MEMORY engine. This table acts like a temporary table that gathers all the incoming data from different sources. A cron then processes this data, inserts it into another table, processed_data_xxx, and finally deletes it. This removed all the slow insert queries.
You should use a message queue for this, not a database. If you have a workflow that processes data and then deletes it, this sounds perfect for a message queue. There are many message queues that can handle 200 entries per second easily.
Instead of a cron job to update and delete records from a database, you could just have an application listening to a topic on the message queue, process an item, and then... nothing. No need to store that item, just move on to the next item from the queue.
We use Apache ActiveMQ at my current company. I know other developers who recommend RabbitMQ as well.
In the end we arrived at a different solution, mainly because the incoming data we are inserting comes from different sources (hence, different processes). Therefore we cannot use multi-row INSERTs or START TRANSACTION and COMMIT here.
To resolve this, we altered the incoming_gprs_data table to use the MEMORY engine. This table acts like a temporary table that gathers all the incoming data from different sources. A cron then processes this data, inserts it into another table, processed_data_xxx, and finally deletes it. This removed all the slow insert queries.
We understand the cons of the MEMORY engine (such as high volatility and the sorting limitations of its hash indexes), but its read and write speed suits this situation.
When inserting the processed data into the processed_data_xxx table, we followed @Ollie Jones's suggestion to use START TRANSACTION and COMMIT instead of autocommitting each insert query.
One solution that is fairly pragmatic is not to do direct inserts, but to write to a Redis queue and then consume it once per second to do batch inserts. These processes require only a few lines of code (in any language).
Something like: in a loop, read all records from the queue and insert them into MySQL. Sleep 100 ms at a time in the loop until the wall clock is one second further along, then start the loop again.
It is very fast and pragmatic, but you lose real-time confirmation of successful inserts into the database. With this method I was able to achieve up to 40k inserts per second on a single machine.
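A sketch of that consumer loop, with queue.Queue standing in for the Redis list and sqlite3 for MySQL; the drain-then-batch-insert shape is the point, not the libraries. In a real deployment this function would run once per second, with the sleep logic described above around it.

```python
import queue
import sqlite3

def drain_and_insert(q, conn):
    # Drain everything queued so far, then insert it as one batch
    # (executemany) inside a single transaction.
    batch = []
    while True:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            break
    if batch:
        conn.executemany(
            "INSERT INTO incoming_gprs_data (data, type) VALUES (?, ?)", batch)
        conn.commit()  # one transaction per drain
    return len(batch)

q = queue.Queue()
for i in range(500):
    q.put(("payload%d" % i, "GT100"))
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE incoming_gprs_data (data TEXT, type TEXT)")
print(drain_and_insert(q, conn))  # 500
```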
I have a MySQL database with a single InnoDB table of about 300 million rows. Up to 10 connected clients send 50-60 read queries per second. Everything was smooth for a few months until recently, when MySQL started stalling while using large amounts of CPU (100%+; the uptime command shows load values like 15, 12, 15). Queries that used to take 500 ms now take many seconds, from tens to hundreds. SHOW PROCESSLIST shows queries hanging in the "Sending data" state.
I'm unable to figure out why and any help is appreciated.
Server
Intel(R) Xeon(R) CPU E5 @ 2.40GHz | 12 CPUs | 32 GB RAM
my.cnf
innodb_file_per_table = 1
tmp-table-size = 32M
max-heap-table-size = 32M
innodb-log-files-in-group = 2
innodb-flush-method = O_DIRECT
innodb-log-file-size = 512M
innodb-buffer-pool-size = 26G
innodb-flush-log-at-trx-commit = 2
innodb-file-per-table = 1
innodb_file_format = barracuda
Table (name: records)
+----------------+------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------+------------+------+-----+---------+----------------+
| id | bigint(20) | NO | PRI | NULL | auto_increment |
| identifier | int(11) | YES | MUL | 0 | |
| timestamp | int(11) | YES | MUL | NULL | |
| rtype | int(5) | YES | MUL | NULL | |
| x1 | int(11) | YES | | NULL | |
| x2 | int(11) | YES | | NULL | |
| net | bigint(20) | YES | | NULL | |
| created_at | datetime | NO | | NULL | |
+----------------+------------+------+-----+---------+----------------+
Indexed and used in the WHERE query:
timestamp (UNIX timestamp as INT)
identifier
rtype (five possible values, 1-5)
Data size
Data_length = ~18 GB
Index_length = ~16 GB
Query
SELECT identifier, timestamp, x1 AS a, x2 AS b, net
FROM records
WHERE
identifier=1010
AND timestamp >=1463111100
AND timestamp <= 1463738400
AND rtype=5
ORDER BY timestamp;
(Returns about 900 rows. Sometimes it completes in less than a second, sometimes it takes tens to hundreds of seconds.)
Query analysis
select_type = SIMPLE
type = index_merge
possible_keys = indeXidentifier, indeXtimestamp, indeXrtype
key = indeXidentifier, indeXrtype
key_len = 4,5
rows = 10641
Extra = Using intersect(indeXidentifier,indeXrtype); Using where
I have two recommendations:
1. Change the column order in your multi-column index. The recommended order is: identifier, rtype, timestamp. An index equality lookup is faster than an index range scan, so the equality columns should appear first.
2. Rewrite your query like this:
select * from(
SELECT identifier, timestamp, x1 AS a, x2 AS b, net
FROM records
WHERE
identifier=1010
AND timestamp >=1463111100
AND timestamp <= 1463738400
AND rtype=5
) t1 ORDER BY timestamp;
This avoids using the index for sorting.
Either of these is optimal for the SELECT in the Question:
INDEX(rtype, identifier, timestamp)
INDEX(identifier, rtype, timestamp)
The principle is to put all the "= constant" parts of the WHERE first, then add one more thing on (the 'range' over timestamp). More cookbook tips.
There is no need to put this in a subquery -- that will only slow things down by building an unnecessary tmp table.
Why did it suddenly go slow? The likely reason is caching: right after adding the new index, less of the data was cached in RAM, and the SELECTs were hitting the disk a lot.
Let's double-check the query plan. Please provide EXPLAIN SELECT .... It should be one line long, indicate that it is using the 3-column index, and not say "intersect", "temporary", or "filesort".
If anything is still amiss, please provide that explain, plus SHOW CREATE TABLE (It is more descriptive than DESCRIBE.)
Another thing to be sure to do: Turn off the Query Cache. Add/change these settings in my.cnf and restart the server:
query_cache_type = OFF
query_cache_size = 0
How are the INSERTs occurring? One row at a time? If they can be batched, even a few dozen at a time, that will help significantly.
Since you are commenting about CPU, it sounds like you have some query that is CPU-bound, not I/O-bound. Do SHOW FULL PROCESSLIST; -- do you see some query with a large "Time"? Is it something you have not mentioned yet?
Please run
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
(The values may lead to a discussion of "thundering herds".)
Lately my MySQL 5.5.27 has been performing very poorly. I have changed just about everything in the config to see if it makes a difference, with no luck. Tables are constantly getting locked up, reaching 6-9 locks per table, and my SELECT queries take forever (300-1200 s).
I moved everything to Pastebin because it exceeded the 30k character limit:
http://pastebin.com/bP7jMd97
SYS ACTIVITIES
90% UPDATES AND INSERTS
10% SELECT
My slow query log is backed up. Below is my MySQL info. Please let me know if there is anything I should add that would help.
Server version 5.5.27-log
Protocol version 10
Connection XX.xx.xxx via TCP/IP
TCP port 3306
Uptime: 21 hours 39 min 40 sec
Uptime: 78246 Threads: 125 Questions: 6764445 Slow queries: 25 Opens: 1382 Flush tables: 2 Open tables: 22 Queries per second avg: 86.451
SHOW OPEN TABLES
+----------+---------------+--------+-------------+
| Database | Table | In_use | Name_locked |
+----------+---------------+--------+-------------+
| aridb | ek | 0 | 0 |
| aridb | ey | 0 | 0 |
| aridb | ts | 4 | 0 |
| aridb | tts | 6 | 0 |
| aridb | tg | 0 | 0 |
| aridb | tgle | 2 | 0 |
| aridb | ts | 5 | 0 |
| aridb | tg2 | 1 | 0 |
| aridb | bts | 0 | 0 |
+----------+---------------+--------+-------------+
I've hit a brick wall and need some guidance. Thanks!
From looking through your log, it would seem the problem (as I'm sure you're aware) is the huge number of locks, given the amount of data being updated/selected/inserted, possibly all at the same time.
It is really hard to give performance tips without first knowing a lot of information you don't provide, such as table sizes, schema, hardware, config, topology, etc. SO probably isn't the best place for such a broad question anyway!
I'll keep my answer as generic as I can, but possible things to look at or try would be:
Run EXPLAIN on the SELECT queries and make sure they find data selectively rather than performing full table scans or reading huge amounts of data.
Leave the server to do its inserts and updates, but create a read replica for reporting; this way reporting reads won't be blocked by locks.
If you're updating many rows at a time, consider updating with a LIMIT to stop so much data being locked at once.
If you are able to, delay the inserts to relieve pressure.
Look at a hardware fix, such as solid-state disks for I/O performance, and more memory so more indexes/data can be held in memory or a larger buffer used.