Hi, I recently ran a test on two different Ubuntu servers.
Here are the results:
Staging server:
innodb_flush_log_at_trx_commit = 1: 10,000 inserts ----> 81 seconds
innodb_flush_log_at_trx_commit = 2: 10,000 inserts ----> 61 seconds
Dev server:
innodb_flush_log_at_trx_commit = 1: 10,000 inserts ----> 5 seconds
innodb_flush_log_at_trx_commit = 2: 10,000 inserts ----> 2 seconds
I understand that performance varies with the innodb_flush_log_at_trx_commit setting.
But why is there such a huge difference in performance from one server to the other?
What are the things to consider here?
Here are some of the details I looked at, but nothing stands out as a suspect:
Staging server: Intel(R) Xeon(R) CPU X5355 @ 2.66GHz
processors: 0, 1
MySQL 5.1.61
innodb_buffer_pool_size: 8MB
RAM: 4GB
Dev server: AMD Opteron(tm) Processor 4130 @ 2.60GHz
processors: 0, 1
MySQL 5.0.67
innodb_buffer_pool_size: 8MB
RAM: 4GB
Please help me understand what exactly has led to this huge difference in performance between the two servers.
NOTE: the same script was used in the same way on both servers, and not from remote servers.
Thanks in advance.
Regards,
UDAY
Some questions I'd go through...
Is binary logging turned on with one instance and not the other?
Is the staging server using a networked drive to access the mysql data?
Is the filesystem type the same on both servers (ext3, ext2, etc.)?
Disk activity seems to be the culprit here.
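For the binary logging question, a quick check you can run on each server (these are standard MySQL system variables, though defaults differ between 5.0 and 5.1):

SHOW VARIABLES LIKE 'log_bin';      -- ON means every commit also writes (and possibly syncs) the binlog
SHOW VARIABLES LIKE 'sync_binlog';  -- 1 means the binlog is flushed to disk on every commit
SHOW VARIABLES LIKE 'datadir';      -- then check which device/filesystem this path actually lives on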
Is your staging server a production server? In that case it is probably a concurrency issue. If many users are working on that server, inserts may be slower, especially if the table(s) you are inserting into are being used by others too.
The dev server probably has much less load, with only a few developers using it simultaneously.
Setting the hardware differences aside (which is not an easy assumption to make), the factors that influence query performance are:
Version of the DBMS engine
Query plan and db statistics used
Network latency
Memory allocated to the DBMS engine (Transaction buffer log included)
Process/thread priority of the DBMS engine
I would also suggest checking disk utilization. To get a rough idea of how your disks perform while you are running your test, you can use iostat (on Red Hat based systems it is in the sysstat package).
For example you could try the following:
iostat -xd 1
which produces output similar to this:
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 0.31 0.00 0.05 0.16 2.89 56.67 0.00 2.99 0.62 0.00
Compare the await time on both machines. This metric shows you the average time in milliseconds the disk needs to serve I/O requests; it's a quick way to check whether there are big differences. The %util metric is also interesting: it gives you the percentage of CPU time spent issuing I/O operations to the device, and the higher this value gets, the closer you are to fully saturating the device.
For the other options check man iostat.
Of course, the above only gives you a very basic overview of how your disks perform, so treat it as an additional check on top of the previously mentioned ones to track down your problem.
My DB has around 15 tables, each with 40 columns and 10,000 rows.
Most of it is VARCHAR, with some indexes and foreign keys.
Sometimes I need to reconstruct my database (design flaw, working on it), which takes about 40 seconds locally. Now I'm trying to do the same on an AWS RDS MySQL 5.7 instance, but it takes forever, something like 40-50 minutes. The last time I ran this same process it took no more than 5 minutes, still way more than the local 40 seconds, but I was happy with that.
My internet speed is about 35 Mbps download / 5 Mbps upload.
I know it's not fast, but it's consistent, and it hasn't changed since my last rebuild.
I enabled General Logs, but all I can see are the INSERT queries, occasionally some "SELECT 1".
I do have some room for improvement in my code, but still, going from about 40 seconds to 50 minutes suggests that something else is going on.
Any ideas on how to diagnose and find the bottleneck?
Thanks
--
Additional relevant information:
It is a Micro instance from AWS, and all of the relevant monitoring indicators are basically flat: CPU at 4%, Free Storage Space at 20,000 MB, Freeable Memory at 200 MB, Write IOPS at around 2.5. The server runs MySQL 5.7.25, with 1 vCPU, 1 GB of RAM and 20 GB of SSD. This is the same as 3 months ago when I last rebuilt the database.
SHOW GLOBAL STATUS: https://pastebin.com/jSrAzYZP
SHOW GLOBAL VARIABLES: https://pastebin.com/YxD7dVhR
SHOW ENGINE INNODB STATUS: https://pastebin.com/r5wffB5t
SHOW PROCESS LIST: https://pastebin.com/kWwiyGwf
SELECT * FROM information_schema...: https://pastebin.com/eXGBmetP
I haven't made any big changes to the server configuration, except enabling logs, maxing out max_allowed_packet, and saving logs to file.
In my backend I have a Flask app running; when it receives the API call, it takes a bunch of pickled objects and adds them all to the database (appending the Flask-SQLAlchemy objects to a list) and then runs db.session.add_all(entries), trying to run a bulk operation. The code is the same for both localhost and my remote server.
It does get slower on three specific tables, most of them with VARCHAR columns, but nothing different from my last inserts. It seems odd that the problem would be the data or the way the code is structured; at least it doesn't seem reasonable that this would turn 20 seconds (localhost) into 40 minutes (hosted server), especially when the rest of the tables work mostly the same.
Enable the slow log, set long_query_time=0, run your code, then put the resulting log through mysqldumpslow.
Establish which queries contribute most to slowness and take it from there.
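For example, something along these lines (a sketch; on RDS you would normally set these through the parameter group, and the slow log has to be downloaded or read via the mysql.slow_log table rather than from the local filesystem):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0;   -- log every statement for the duration of the test
-- run the rebuild, then feed the resulting log to mysqldumpslow, e.g. sorted by total time:
--   mysqldumpslow -s t /path/to/slow.log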
Compare the config between your old server and your new one.
Also, are they the same version of MySQL? 5.6, 5.7 and 8.0 can produce very different execution plans (with 5.6 usually coming up with the sane one if they differ).
Rate Per Second = RPS
Suggestions to consider for your AWS RDS Parameters group
thread_cache_size=24 # from 8 to reduce threads_created count
innodb_io_capacity=1900 # from 200 to enable more use of SSD IOPS capacity
read_rnd_buffer_size=128K # from 512K to reduce handler_read_rnd_next RPS of 21
query_cache_size=0 # from 1M since you have the QC turned off with query_cache_type=OFF
Determine why Com_flush is running 13 times per hour and get it stopped, to avoid table open thrashing.
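A quick way to watch for that is to sample the relevant counters a couple of times and compare:

SHOW GLOBAL STATUS LIKE 'Com_flush';       -- number of FLUSH statements since startup
SHOW GLOBAL STATUS LIKE 'Opened_tables';   -- climbing quickly suggests the table cache keeps being emptied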
I found that after migrating to RDS all my database indexes were gone! They weren't migrated along with the schema and data. Make sure your indexes are there.
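A quick way to verify this on the RDS instance (the table and schema names below are placeholders, substitute your own):

SHOW INDEX FROM product;
-- or list every index in the schema at once:
SELECT table_name, index_name,
       GROUP_CONCAT(column_name ORDER BY seq_in_index) AS cols
FROM information_schema.statistics
WHERE table_schema = 'your_database'
GROUP BY table_name, index_name;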
Also, MySQL query cache is OFF by default in RDS. This won't help the performance of your initial query, but it may speed things up in general.
You can set query_cache_type to 1 and define a value for query_cache_size. I also changed thread_cache_size from 8 to 24 and innodb_io_capacity from 200 to 1900; I don't know if that helps you.
Creating AWS DB Parameter Groups also helped me a lot with configuring and tuning DB variables. You can read more here:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html
I am running a Flask app with a MySQL DB.
I have 400,000 records in a table, and there is a query (an INSERT with SELECT) that takes around 3 seconds when run on its own.
But when I load test it by hitting the API with multiple requests at a time (20, 50, or 100 hits at once), the responses for all requests come back at the same time. For example, if 100 concurrent requests take around 3 minutes in total, each individual request starts immediately but only returns after 3 minutes (instead of 3 or 4 seconds).
Also, I tried a 1 GB RAM server, a 4 GB RAM server, and a 32 GB RAM server with 16 CPUs. Here are the results:
# 4GB RAM, 2 CPUS server with only Mysql installed in it
Total time is: 0:05:29.752275 (all 100 requests got a response after ~5 minutes total, not 3 or 4 seconds)
successful: 89
Failed: 11
Tried: 100
# 32 GB RAM, 16 CPUs server with only Mysql installed in it
Total time is: 0:05:17.119773 (all 100 requests got a response after ~5 minutes total, not 3 or 4 seconds)
successful: 86
Failed: 14
Tried: 100
So, as you can see, the 4 GB and 32 GB servers show almost no difference in performance. It seems like something is totally wrong with my setup/configuration/query.
More details:
I ran another script where I hit the DB directly, without any API. This way, even though the server has 4 GB of RAM, the MySQL server dies (segmentation fault) with just 3 concurrent requests.
But then I added a 0.2, 0.3 or 0.5 millisecond delay between each hit/thread, and the results were slightly more meaningful. Without the delay each request took the total time, but with a 0.5 ms delay between hits, each request completed in less than 10 seconds.
Can I do anything so my server returns responses quickly for at least 100 concurrent requests without any gap between them (and is that even necessary)?
Any thoughts on what to do here?
I think the root cause is Flask. Flask is not good at multi-process/multi-threaded serving at all.
I ran into this problem before, then switched to Tornado and used supervisord to keep Tornado running in daemon mode.
Another solution is Gunicorn => https://intellipaat.com/community/12737/how-to-run-flask-with-gunicorn-in-multithreaded-mode
Simply put, "at least 100 concurrent requests without any gap" is not realistic. The user goes to the client, which connects to the database, which takes queries rapidly, but not really simultaneously. That is, in real life queries rarely start simultaneously.
Also, if you have the configuration (MySQL's max_connections) and/or the corresponding setting in the client too high, then you are asking for the "thundering herd" syndrome. It's like being in an over-crowded grocery store and you can't move your cart because all the space is taken.
More specifically, 16 CPUs will stumble over each other vying for resources when you throw 100 queries into the mix "concurrently".
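If you want to see how your configured limit compares to what the server has actually experienced, a couple of quick checks:

SHOW VARIABLES LIKE 'max_connections';            -- the configured ceiling
SHOW GLOBAL STATUS LIKE 'Max_used_connections';   -- the high-water mark since startup
SHOW GLOBAL STATUS LIKE 'Threads_running';        -- how many queries are executing right now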
As for inserting a lot of rows, there are several techniques.
LOAD DATA is very fast.
"Batched INSERT" is fast. This is where a single INSERT carries many rows; I often see a 10x speedup with 100 rows at a time versus single-row inserts (see the sketch after this list).
BEGIN...COMMIT around a bunch of single-row inserts. This avoids some of the "transaction" overhead.
Avoid UNIQUE indexes (other than the PRIMARY KEY) on the table you are loading.
Ping-ponging staging tables: http://mysql.rjweb.org/doc.php/staging_table -- this allows multiple clients to rapidly feed data in.
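A minimal sketch of the second and third techniques (table and column names are placeholders):

-- batched INSERT: many rows in one statement, e.g. 100 at a time
INSERT INTO t (a, b) VALUES (1, 'x'), (2, 'y'), (3, 'z');

-- or wrap single-row inserts in one transaction so the commit cost is paid once
START TRANSACTION;
INSERT INTO t (a, b) VALUES (1, 'x');
INSERT INTO t (a, b) VALUES (2, 'y');
-- ... more single-row inserts ...
COMMIT;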
We have nightly load jobs that write several hundred thousand records to a MySQL reporting database running in Amazon RDS.
The load jobs are taking several hours to complete, but I am having a hard time figuring out where the bottleneck is.
The instance is currently running with General Purpose (SSD) storage. Looking at the CloudWatch metrics, it appears I am averaging less than 50 IOPS for the last week. However, Network Receive Throughput is less than 0.2 MB/sec.
Is there any way to tell from this data whether I am being bottlenecked by network latency (we are currently loading the data from a remote server...this will change eventually) or by Write IOPS?
If IOPS is the bottleneck, I can easily upgrade to Provisioned IOPS. But if network latency is the issue, I will need to redesign our load jobs to load raw data from EC2 instances instead of our remote servers, which will take some time to implement.
Any advice is appreciated.
UPDATE:
More info about my instance: I am using an m3.xlarge instance, provisioned with 500GB of storage. The load jobs are done with the ETL tool from Pentaho. They pull from multiple (remote) source databases and insert into the RDS instance using multiple threads.
You aren't using up much CPU. Your memory is very low. An instance with more memory should be a good win.
You're only doing 50-150 IOPS. That's low; you should get up to 3000 in a burst on standard SSD-level storage. However, if your database is small, that is probably hurting you, since you get 3 IOPS per provisioned GB (a 50 GB volume, for example, has a baseline of only 150 IOPS), so if you are on a 50 GB or smaller database, consider paying for provisioned IOPS.
You might also try Aurora; it speaks mysql, and supposedly has great performance.
If you can spread out your writes, the spikes will be smaller.
A very quick test is to buy provisioned IOPS, but be careful as you may get fewer than you do currently during a burst.
Another quick means to determine your bottleneck is to profile your job execution application with a profiler that understands your database driver. If you're using Java, JProfiler will show the characteristics of your job and its use of the database.
A third is to configure your database driver to print statistics about the database workload. This might inform you that you are issuing far more queries than you would expect.
Your most likely culprit accessing the database remotely is actually round-trip latency. The impact is easy to overlook or underestimate.
If the remote database has, for example, a 75 millisecond round-trip time, you can't possibly execute more than 1000 (milliseconds/sec) / 75 (milliseconds/round trip) = 13.3 queries per second if you're using a single connection. There's no getting around the laws of physics.
The spikes suggest inefficiency in the loading process, where it gathers for a while, then loads for a while, then gathers for a while, then loads for a while.
Separate but related: if you don't have the MySQL client/server compression protocol enabled on the client side, find out how to enable it. (The server always supports compression, but the client has to request it during the initial connection handshake.) This won't fix the core problem, but it should improve the situation somewhat, since less data to physically transfer can mean less time wasted in transit.
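One quick way to see whether your current connection negotiated compression (a session status variable available in the MySQL 5.x line; newer versions report it differently):

SHOW SESSION STATUS LIKE 'Compression';   -- ON if the compressed protocol is in use for this connection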
I'm not an RDS expert and I don't know if my own particular case can shed any light for you. Anyway, I hope it gives you some kind of insight.
I have a db.t1.micro with 200GB provisioned (which gives me 600 IOPS of baseline performance), on General Purpose SSD storage.
The heaviest workload is when I aggregate thousands of records from a pool of around 2.5 million rows drawn from a 10 million row table and another one of 8 million rows. I do this every day. This is what I average (it is steady performance, unlike yours, where I see a pattern of spikes):
Write/ReadIOPS: +600 IOPS
NetworkTrafficReceived/Transmit throughput: < 3,000 Bytes/sec (my queries are relatively short)
Database connections: 15 (workers aggregating on parallel)
Queue depth: 7.5 counts
Read/Write Throughput: 10MB per second
The whole aggregation task takes around 3 hours.
Also check the "10 tips to improve the performance of your app in AWS" slideshare from AWS Summit 2014.
I don't know what else to say since I'm not an expert! Good luck!
In my case it was the number of records. I was writing only 30 records per minute and had Write IOPS of roughly the same, 20 to 30. But this was eating at the CPU, which reduced the CPU credits quite steeply. So I took all the data in that table, moved it to another "historic" table, and cleared all data from the original.
CPU dropped back down to normal, but Write IOPS stayed about the same, which was fine. The problem: indexes. I think that because so many records needed to be indexed on insert, it took far more CPU to do that indexing with that number of rows, even though the only index I had was the primary key.
Moral of my story: the problem is not always where you think it lies. Although I had increased Write IOPS, that was not the root cause; rather, it was the CPU being spent on index maintenance during inserts, which caused the CPU credits to fall.
Not even X-RAY on the Lambda could catch an increased query time. That is when I started to look at the DB directly.
Your Queue Depth graph shows > 2, which clearly indicates that the IOPS is under-provisioned (if queue depth is < 2, then IOPS is over-provisioned).
I think you have used the default AUTOCOMMIT = 1 (autocommit mode). It performs a log flush to disk for every insert and exhausts the IOPS.
So it is better (for performance) to set AUTOCOMMIT = 0 before bulk inserts of data in MySQL, so that the insert queries look like this:
SET AUTOCOMMIT = 0;

-- first 10000 recs
START TRANSACTION;
INSERT INTO SomeTable (column1, column2) VALUES (vala1,valb1),(vala2,valb2) ... (vala10000,valb10000);
COMMIT;

-- next 10000 recs
START TRANSACTION;
INSERT INTO SomeTable (column1, column2) VALUES (vala10001,valb10001),(vala10002,valb10002) ... (vala20000,valb20000);
COMMIT;

-- next 10000 recs
-- ...

SET AUTOCOMMIT = 1;
Using the above approach on a t2.micro, I inserted 300,000 rows in 15 minutes using PHP.
I need to improve I/O performance for my database. I'm using the "2xlarge" HW described below & considering upgrading to the "4xlarge" HW (http://aws.amazon.com/ec2/instance-types/). Thanks for the help!
Details:
CPU usage is fine (usually under 30%), and uptime load averages are anywhere from 0.5 to 2.0 (though I believe I'm supposed to divide that by the number of CPUs), so that looks okay as well. However, the I/O is bad: iostat shows favorable service times, but the time spent in queue (I suppose this means waiting to access the disk) is far too high. I've configured MySQL to flush to disk every second instead of on every write, which helps, but not enough. Profiling shows there are a handful of tables that are the culprits for most of the load (both read and write operations). Queries are already indexed & optimized, but not partitioned. Average MySQL states are: Sending data ~45%, statistics ~20%, Updating ~15%, Sorting result ~8%.
Questions:
How much performance will I get by upgrading HW?
Same question, but if I partition the high-load tables?
Machines:
m2.2xlarge
64-bit
4 vCPU
13 ECU
34.2 GB Mem
EBS-Optimized
Network Performance: "Moderate"
m2.4xlarge
64-bit
6 vCPU
26 ECU
68.4 GB Mem
EBS-Optimized
Network Performance: "High"
In my experience, the biggest boost in MySQL performance comes from I/O. You have a lot of RAM. Try setting up a RAM drive and pointing tmpdir at it.
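Before doing that, it's worth checking whether the server is actually spilling temporary tables to disk; if these counters barely move during your workload, a tmpfs-backed tmpdir won't buy you much:

SHOW VARIABLES LIKE 'tmpdir';
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';   -- implicit temp tables that had to go to disk
SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';        -- all implicit temp tables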
I have several MySQL servers that are very busy. My settings are below - maybe this can help you tweak your settings.
My Setup is:
- Dual 2.66GHz CPUs, 8 cores, with a 6-drive RAID-1E array - 1.3TB.
- InnoDB logs on a separate SSD drive.
- tmpdir is on a 2GB tmpfs partition.
- 32GB of RAM
InnoDB settings:
innodb_thread_concurrency=16
innodb_buffer_pool_size = 22G
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 400M
innodb_log_files_in_group=8
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2 (this is a slave machine - 1 is not required for my purposes)
innodb_flush_method=O_DIRECT
Current Queries per second avg: 5185.650
I am using Percona Server, which is quite a bit faster than other MySQL builds in my testing.
I have a denormalized table product with about 6 million rows (~ 2GB) mainly for lookups. Fields include price, color, unitprice, weight, ...
I have BTREE indexes on color etc. Query conditions are dynamically generated from the Web, such as
select count(*)
from product
where color = 1 and price > 5 and price < 100 and weight > 30 ... etc
and
select *
from product
where color = 2 and price > 35 and unitprice < 110
order by weight
limit 25;
I used to use InnoDB and tried MEMORY tables, then switched to NDB hoping more concurrent queries could be served faster. I have 2 tables with the same schema, indexes, and data; one is InnoDB while the other is NDB. But the results are very disappointing: for the queries mentioned above, InnoDB is something like 50 times faster than NDB - 0.8 seconds vs 40 seconds. For this test I was running only a single SELECT query repeatedly. Both the InnoDB and NDB queries use the same index on color.
I am using mysql-5.1.47 ndb-7.1.5 on a dual Xeon 5506 (8 cores total) with 32GB memory, running CentOS 5. I set up 2 NDB data nodes, one MGM node and one MySQL node on the same box. For each node I allocated about 9GB of memory, and also tried MaxNoOfExecutionThreads=8, LockPagesInMainMemory, LockExecuteThreadToCPU and many other config parameters, but no luck. While NDB was running the query, my peak CPU load was only about 200%, i.e., only 2 out of 8 cores were busy; most of the time it was around 100%. I was using ndbmtd, and verified in the data node log that the LQH threads were indeed spawned.
I also tried EXPLAIN and profiling -- they just show that Sending data consumes most of the time. I also went through some MySQL Cluster tuning documents available online, but they weren't very helpful in my case.
Anybody can shed some light on this? Is there any better way to tune an NDB database? Appreciate it!
You need to pick the right storage engine for your application.
MyISAM -- read frequently / write infrequently. Ideal for data lookups in big tables. Does reasonably well with complex indexes and is quite good for batch reloads.
MEMORY -- good for fast access to relatively small and simple tables.
InnoDB -- good for transaction processing. Also good for a mixed read / write workload.
NDB -- relatively less mature. Good for fault tolerance.
The MySQL server is not inherently multiprocessor software, so adding cores isn't necessarily going to jack up performance. A good host for MySQL is a decent two-core system with plenty of RAM and the fastest disk I/O channels and disks you can afford. Do NOT put your MySQL data files on a networked or shared file system, unless you don't care about query performance.
If you're running on Linux, issue these two commands (on the machine running the MySQL server) to see whether you're burning all your CPU or burning all your disk I/O:
sar -u 1 10
sar -d 1 10
Your application sounds like a candidate for MyISAM. It sounds like you have plenty of hardware. In that case you can build a master server and an automatically replicated slave server, but you may be fine with just one server, which will be easier to maintain.
Edit: It's eight years later and this answer is now basically obsolete.