I have searched a lot of topics, but couldn't find any information about how to find out which query consumes the most CPU time.
I'm using MySQL 8.0 and MySQL Workbench.
I can find the average and total execution time of queries there, but I'm not sure whether that is an indicator of CPU usage. Queries can take longer to execute because of low memory and additional reads from disk.
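For what it's worth: if your server is MySQL 8.0.28 or later, the Performance Schema digest table exposes per-statement CPU time directly (the SUM_CPU_TIME column; it stays 0 on older releases). A sketch of the kind of query I mean, with timer values converted from picoseconds to seconds:

SELECT SCHEMA_NAME, DIGEST_TEXT, COUNT_STAR AS calls,
       SUM_CPU_TIME / 1e12 AS cpu_seconds,
       SUM_TIMER_WAIT / 1e12 AS elapsed_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_CPU_TIME DESC
LIMIT 10;

Comparing cpu_seconds against elapsed_seconds also addresses the second concern: a query stalled on disk reads shows high elapsed time but comparatively little CPU time.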
Suggestions to consider for your my.ini [mysqld] section to improve your instance performance:
thread_cache_size=100 # from 10 to reduce threads_created count of 34,663
innodb_flushing_avg_loops=5 # from 30 to reduce loop delay and reduce innodb_buffer_pool_pages_dirty 1,660 in this STATUS report
max_connections=100 # from 151 to reduce RAM footprint, max_used_connections was 31
innodb_io_capacity=5000 # from 200 stop/start required
If you have innodb_io_capacity_max in your my.ini, REMOVE it to allow a new maximum of 10,000.
innodb_open_files=-1 # from 300 for an auto-calculated value equal to table_open_cache's current value of 2000
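To verify whether changes like the thread_cache_size suggestion above are paying off, watch the underlying counters before and after (standard status variables, nothing version-specific):

SHOW GLOBAL STATUS LIKE 'Threads_created';
SHOW GLOBAL STATUS LIKE 'Connections';
-- thread cache miss rate = Threads_created / Connections;
-- a rate above roughly 1% suggests the cache is too small for the connection churn.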
Congratulations on releasing resources through proper use of the Com_dealloc_sql and Com_stmt_close coding techniques (deallocating and closing prepared statements when done).
Upon increasing the total number of pooled connections from the application to MySQL to 40,000, I am getting the error ER_CANT_CREATE_THREAD.
My MySQL config file looks like this:
innodb_buffer_pool_size=100G # from 128M to use half of your RAM for data/index storage
innodb_buffer_pool_instances=32 # from 1 to reduce mutex contention
innodb_lru_scan_depth=100 # from 1024 to conserve 90% of the CPU cycles used for this function
max_connections=65536
max_prepared_stmt_count=204800
max_connect_errors=100000
thread_cache_size=8192
innodb_log_buffer_size = 256M
query_cache_size = 0
innodb_read_io_threads = 64
innodb_write_io_threads = 64
innodb_thread_concurrency = 0
innodb_io_capacity = 2000
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
max_connections=65536 is unreasonably, perhaps dangerously, high. Lower it to 100.
If 90,000 connections are active at once, MySQL will melt down. It is better to avoid too many connections than to let them stumble over each other. (Consider what would happen if you let 10,000 people into a grocery store at the same time. Traffic would be so clogged up that people might take an hour to buy just one item.)
If your IoT action is a quick connect-insert-disconnect, that should take very few milliseconds.
If the connections came in very smoothly and take 10ms elapsed each, they could handle 90K per minute with only max_connections=15.
Benchmarks have shown that MySQL gets bottlenecked at not much more than the number of CPU cores that you have.
So limiting the number of concurrently active connections with max_connections=100 should be a safe compromise.
Set back_log to a higher number, maybe 1000. (I do not have a feel for what a good number is.) The idea is that delaying the connection is better than letting it in, only to be stalled.
I am confident that MySQL can handle 90K IoT inserts per minute, but only if you take the advice here.
You mentioned "connection pooling". The 100 and 1000 should be moved back into the pooling layer. Or get rid of the layer. (It is hard to say which would be better.)
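For reference, a minimal my.cnf sketch of the two server-side values discussed above (the back_log figure is the rough guess from above, not a tuned number):

[mysqld]
max_connections = 100   # cap concurrent work near the CPU-core bottleneck
back_log = 1000         # queue excess connection attempts instead of turning them away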
Please lower innodb_read_io_threads and innodb_write_io_threads to 4. Those variables should only be increased in environments such as Docker.
If you want more than 4 threads each, make sure there is enough RAM allocated.
My DB has around 15 tables, each with 40 columns and about 10,000 rows.
Most of the columns are VARCHAR, plus some indexes and foreign keys.
Sometimes I need to reconstruct my database (design flaw, working on it), which takes about 40 seconds locally. Now I'm trying to do the same on an AWS RDS MySQL 5.7 instance, but it takes forever, something like 40-50 minutes. The last time I ran this same process it took no more than 5 minutes; still way more than the local 40 seconds, but I was happy with it.
My internet speed is about 35 Mbps download / 5 Mbps upload.
I know it's not fast, but it's consistent, and it hasn't changed since my last rebuild.
I enabled the General Log, but all I can see are the INSERT queries and, occasionally, a "SELECT 1".
I do have some room for improvement in my code, but still, going from 00:00:40 to 00:50:00 suggests that something else is going on.
Any ideas on how to diagnose and find the bottleneck?
Thanks
--
Additional relevant information:
It is a Micro instance from AWS, and all of the relevant monitoring indicators are basically flat: CPU at 4%, Free Storage Space at 20,000 MB, Freeable Memory at 200 MB, Write IOPS at around 2.5. The server runs MySQL 5.7.25 with 1 vCPU, 1 GB of RAM, and 20 GB of SSD. This is the same as 3 months ago when I last rebuilt the database.
SHOW GLOBAL STATUS: https://pastebin.com/jSrAzYZP
SHOW GLOBAL VARIABLES: https://pastebin.com/YxD7dVhR
SHOW ENGINE INNODB STATUS: https://pastebin.com/r5wffB5t
SHOW PROCESSLIST: https://pastebin.com/kWwiyGwf
SELECT * FROM information_schema...: https://pastebin.com/eXGBmetP
I haven't made any big changes to the server configuration, except enabling logs, maxing out max_allowed_packet, and saving logs to file.
In my backend I have a Flask app running; when it receives the API call, it takes a bunch of pickled objects and adds them all to the database (appending the Flask-SQLAlchemy objects to a list, then running db.session.add_all(entries)), trying to get a bulk operation. The code is the same for both localhost and my remote server.
It does get slower in three specific tables, most of them with VARCHAR columns, but nothing different from my last inserts. It seems odd that the problem would be the data or the way the code is structured; at least it doesn't seem reasonable that this would result in a jump from 20 seconds (localhost) to 40 minutes (hosted server), especially when the rest of the tables work mostly the same.
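One thing worth checking in that general log: whether the ORM is sending one INSERT per row or batching rows into a single statement, since over a WAN every statement costs a full network round trip. Illustratively, with a hypothetical table:

-- one round trip per row; a likely culprit for a remote rebuild crawling
INSERT INTO readings (device_id, val) VALUES (1, 'a');
INSERT INTO readings (device_id, val) VALUES (2, 'b');

-- one round trip for many rows; what a true bulk insert sends
INSERT INTO readings (device_id, val) VALUES (1, 'a'), (2, 'b'), (3, 'c');

At 40 VARCHAR columns per row, the difference across thousands of rows is dramatic.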
Enable the slow log, set long_query_time=0, run your code, then put the resulting log through mysqldumpslow.
Establish which queries contribute most to slowness and take it from there.
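A minimal sketch of that workflow (the log path is whatever slow_query_log_file points at on your server):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0;   -- log every statement; applies to new connections
SHOW VARIABLES LIKE 'slow_query_log_file';
-- after the run, from the shell:
--   mysqldumpslow -s t /path/to/slow.log   (aggregated, sorted by total time)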
Compare the config between your old server and your new one.
Also, are they the same version of MySQL? 5.6, 5.7 and 8.0 can produce very different execution plans (with 5.6 usually coming up with the sane one if they differ).
Rate Per Second = RPS
Suggestions to consider for your AWS RDS Parameter group:
thread_cache_size=24 # from 8 to reduce threads_created count
innodb_io_capacity=1900 # from 200 to enable more use of SSD IOPS capacity
read_rnd_buffer_size=128K # from 512K to reduce handler_read_rnd_next RPS of 21
query_cache_size=0 # from 1M since you have QC turned off with query_cache_type=OFF
Determine why Com_flush is running 13 times per hour and get it stopped, to avoid table-open thrashing.
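Assuming MySQL 5.7 or later, where these counters also live in performance_schema, you can compute that rate in one query:

SELECT f.VARIABLE_VALUE / (u.VARIABLE_VALUE / 3600) AS flushes_per_hour
FROM performance_schema.global_status f, performance_schema.global_status u
WHERE f.VARIABLE_NAME = 'Com_flush' AND u.VARIABLE_NAME = 'Uptime';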
I found that after migrating to RDS all my database indexes were gone! They weren't migrated along with the schema and data. Make sure your indexes are there.
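A quick way to check, assuming you can query information_schema (replace your_db with your schema name):

SELECT TABLE_NAME, INDEX_NAME,
       GROUP_CONCAT(COLUMN_NAME ORDER BY SEQ_IN_INDEX) AS cols
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA = 'your_db'
GROUP BY TABLE_NAME, INDEX_NAME;

Run it against both the source server and the RDS instance and diff the output.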
Also, MySQL query cache is OFF by default in RDS. This won't help the performance of your initial query, but it may speed things up in general.
You can set query_cache_type to 1 and define a value for query_cache_size. I also changed thread_cache_size from 8 to 24 and innodb_io_capacity from 200 to 1900; I don't know if that helps you.
Also, creating AWS DB Parameter Groups helped me a lot with configuring and tuning DB variables. You can read more here:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html
I noticed that a MySQL server is at 100% CPU, and the "kernel time" (I'm not sure exactly what it means) is unusually high, at about 70%.
There are many connections on this server (around 400) and some active queries (about 40). Would that explain this behavior? Is there something wrong or this is expected?
Edit:
As suggested by a comment, I checked the 'handler_read%' variables:
show global status like 'handler_read%'. Here are the results:
Handler_read_first 248684
Handler_read_key 3081370400
Handler_read_last 83333
Handler_read_next 3520958058
Handler_read_prev 330
Handler_read_rnd 2210158755
Handler_read_rnd_deleted 60107588
Handler_read_rnd_next 929907565
The complete SHOW STATUS and SHOW VARIABLES results are here:
https://www.dropbox.com/s/98pnd1rzgfp4jtf/server_status.txt?dl=0
https://www.dropbox.com/s/rh0m8np0mosx6tp/server_variables.txt?dl=0
The high values for handler_read_rnd* indicate that your tables are not properly indexed or that your queries are not written to take advantage of the indexes you have.
Due to syscall overhead and context switches, table scans use more CPU.
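If the server has the sys schema (bundled with MySQL 5.7+, installable separately on 5.6), it can point at the offenders directly; a sketch:

SELECT query, db, exec_count, no_index_used_count, rows_examined_avg
FROM sys.statements_with_full_table_scans
ORDER BY no_index_used_count DESC
LIMIT 10;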
Before changing parameters or investing money in hardware, I would suggest optimizing your database:
Activate the slow query log for a limited time (the slow query log can grow very fast); additionally, you might set the parameters log_queries_not_using_indexes and min_examined_row_limit. (A sketch follows this list.)
Analyze the queries in the slow query log with EXPLAIN or EXPLAIN EXTENDED.
If the problem occurs on a production server, replicate the content to a test system first.
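A minimal sketch of the logging settings from the first point:

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL log_queries_not_using_indexes = 'ON';
SET GLOBAL min_examined_row_limit = 1000;  -- skip statements that examine few rows, to keep the log small
-- remember to turn it back off after sampling:
--   SET GLOBAL slow_query_log = 'OFF';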
A number of settings are too high or too low...
tmp_table_size and max_heap_table_size are 16G -- This is disastrous! Each connection might need one or more of these. Lower it to 1% of RAM.
There are a large number of Com_show_fields -- complain to the 3rd party vendor.
Large number for Created_tmp_disk_tables -- this usually means poorly indexed or designed queries.
Select_scan / Com_select = 77% -- Missing lots of indexes?
Threads_running = 229 -- they are probably tripping over each other.
FLUSH STATUS was run recently, so some STATUS values are not useful.
table_open_cache is 256 -- There are some indications that a bigger number would be good. Try 1500.
key_buffer_size is only 1% of RAM; raise it to 20%.
Still, ... High CPU means poor indexes and/or poorly designed queries. Let's see some of them, together with SHOW CREATE TABLE.
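For reference, the concrete changes above as my.cnf lines; the absolute figures assume, say, a 32 GB box and are illustrative only:

tmp_table_size = 320M        # ~1% of RAM (was 16G)
max_heap_table_size = 320M   # keep in step with tmp_table_size
table_open_cache = 1500      # up from 256
key_buffer_size = 6G         # ~20% of RAM for the MyISAM key cache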
In the status output below, the Opened_files count is 95,349, and this value is increasing rapidly.
mysql> show global status like 'open_%';
Open_files = 721
Open_streams = 0
Open_table_definitions = 706
Open_tables = 741
Opened_files = 95349
Opened_table_definitions = 701
Opened_tables = 2851
Also see this:
mysql>show variables like '%open%';
have_openssl = DISABLED
innodb_open_files = 300
open_files_limit = 8502
table_open_cache = 4096
and
max_connections = 300
Is there any relation between Open_files and Opened_files? Will there be any performance issues because of the increasing Opened_files value? This is a server with 8 GB RAM and a 500 GB hard disk, with an Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz processor. It is a dedicated MySQL server.
For the command ulimit -n, the count was 1024.
The server hangs often. Using some online tools, I have already optimized some parameters. What else should be optimized? In what cases will the Opened_files count go down? Should the Opened_files count stay within some limit, and if so, how do I find the appropriate limit for my server? If I am unclear anywhere, please ask me more questions.
Opened_files is a counter of how many times you have opened a table since the last time you restarted mysqld (see status variable Uptime for the number of seconds since last restart).
Open_files is not a counter; it's the current number of open files.
If your Opened_files counter is increasing rapidly, you may be able to gain improvement to performance by increasing the size of the table_open_cache.
For some tips on the performance implications of this variable (and some cautions about setting it too high), see:
http://www.mysqlperformanceblog.com/2009/11/16/table_cache-negative-scalability/ (the problem described there seems to be solved finally in MySQL 5.6)
Re your comments:
You misunderstand the purpose of the counter. It always increases. It counts the number of times a particular operation has occurred since the last restart of mysqld. In this case, opening a file for a table.
Having a high value in a counter isn't necessarily a problem. It could mean simply that your mysqld has been running for many days or weeks without a restart. So you have to look at that number compared to your Uptime (that is, MySQL status variable Uptime, not Linux uptime).
What is more meaningful is the rate of increase of a counter, that is how fast does it grow in a given interval of time. That could indicate that you are re-opening tables rapidly.
Normally, MySQL shouldn't have to re-open tables, because it retains an open table handle for each table. But it can only have a finite number of those. That's what table_open_cache is for. In your case, your MySQL instance can "remember" that it has already opened up to 4096 tables at a time. If you need another table opened, it closes one of the file descriptors and opens the table you requested.
So if you have many thousands of tables (or partitions of tables) and you access a wide variety of them rapidly, you could see a lot of turnover in that table open cache. That would be indicated by the counter Opened_tables increasing rapidly.
Therefore sizing the table_open_cache higher means that MySQL can retain more open table handles, and possibly decrease the rate of turnover.
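A sketch of how to act on that (the rate query uses 5.7+ performance_schema tables; on older versions, run SHOW GLOBAL STATUS and divide by hand):

-- how fast is the table cache churning?
SELECT o.VARIABLE_VALUE / u.VARIABLE_VALUE AS opened_tables_per_second
FROM performance_schema.global_status o, performance_schema.global_status u
WHERE o.VARIABLE_NAME = 'Opened_tables' AND u.VARIABLE_NAME = 'Uptime';

-- if the rate stays high, try a bigger cache (dynamic on recent versions; also check open_files_limit):
SET GLOBAL table_open_cache = 8192;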
So the solution is either to upgrade my hardware (especially RAM) so that I can increase table_open_cache beyond 4096, or to optimize the queries.
I need to improve I/O performance for my database. I'm using the "2xlarge" HW described below & considering upgrading to the "4xlarge" HW (http://aws.amazon.com/ec2/instance-types/). Thanks for the help!
Details:
CPU usage is fine (usually under 30%), and uptime load averages range from 0.5 to 2.0 (though I believe I'm supposed to divide that by the number of CPUs), so that looks okay as well. However, the I/O is bad: iostat shows favorable service times, but the time spent in queue (I suppose this means waiting to access the disk) is far too high. I've configured MySQL to flush to disk every 1 sec instead of at every write, which helps, but not enough. Profiling shows a handful of tables are the culprits for most of the load (both read and write operations). Queries are already indexed and optimized, but the tables are not partitioned. Average MySQL states are: Sending Data at 45%, statistics at 20%, Updating at 15%, Sorting result at 8%.
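For reference, "flush to disk every 1 sec instead of every write" presumably refers to this InnoDB setting, shown here as a my.cnf line (an assumption about the poster's setup):

innodb_flush_log_at_trx_commit = 2   # write the log buffer at each commit, fsync to disk once per second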
Questions:
How much performance will I get by upgrading HW?
Same question, but if I partition the high-load tables?
Machines:
m2.2xlarge
64-bit
4 vCPU
13 ECU
34.2 GB Mem
EBS-Optimized
Network Performance: "Moderate"
m2.4xlarge
64-bit
8 vCPU
26 ECU
68.4 GB Mem
EBS-Optimized
Network Performance: "High"
In my experience, the biggest boost in MySQL performance comes from I/O. You have a lot of RAM. Try setting up a RAM drive and pointing tmpdir at it.
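A minimal sketch of that setup on Linux; the mount point and size are illustrative, so size it to your largest on-disk temporary tables:

# /etc/fstab entry for a 2 GB RAM-backed filesystem:
#   tmpfs  /mnt/mysqltmp  tmpfs  rw,size=2G,mode=1777  0  0
# then: mount /mnt/mysqltmp

# my.cnf, [mysqld] section:
tmpdir = /mnt/mysqltmp

Note that anything in tmpdir is lost on reboot, which is fine: MySQL only keeps transient temp tables and sort files there.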
I have several MySQL servers that are very busy. My settings are below - maybe this can help you tweak your settings.
My Setup is:
- Dual 2.66 GHz CPUs, 8 cores total, with a 6-drive RAID-1E array (1.3 TB).
- InnoDB logs on separate SSD drives.
- tmpdir on a 2 GB tmpfs partition.
- 32 GB of RAM
InnoDB settings:
innodb_thread_concurrency=16
innodb_buffer_pool_size = 22G
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 400M
innodb_log_files_in_group=8
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2 (this is a slave machine; 1 is not required for my purposes)
innodb_flush_method=O_DIRECT
Current Queries per second avg: 5185.650
I am using Percona Server, which in my testing is quite a bit faster than other MySQL builds.