I need to load a large CSV file (16 GB in total) into AWS RDS (MariaDB).
I tried to load it via LOAD DATA LOCAL INFILE.
It runs very fast in my local environment but very slowly against RDS (about 0.01 MB/sec).
Of course, I loaded the 16 GB file in separate pieces, the connection to the DB is released normally in the code I wrote, and there is nothing unusual when I check the current processes with SHOW PROCESSLIST.
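For reference, each piece is loaded with a statement roughly like this (the endpoint, table, and file names here are placeholders, not my real ones):

mysql --local-infile=1 -h <rds-endpoint> -u admin -p mydb -e "
  LOAD DATA LOCAL INFILE '/data/chunks/part_01.csv'
  INTO TABLE big_table
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
  IGNORE 1 LINES;"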
I would like to know what to check and fix in this situation to improve network performance.
Thank You.
I have a Laravel application running on AWS Elastic Beanstalk, and a MySQL RDS database. On localhost, a query that might take 0.002 seconds takes 0.2 seconds on the live server; that's a 100x hit on performance. This becomes incredibly problematic when I have to load several data points: a simple table that takes half a second to load on localhost now takes 13 seconds to load live.
I figured out the problem is the latency between the instances, so I created an RDS database attached directly to the Elastic Beanstalk application, and voilà! Even faster than localhost... However, having your database tied to your EC2 instance is awful and not scalable.
I have tried everything: reducing the number of queries, trying different queries, even dropping Eloquent and using raw MySQL. That does not solve the problem; the queries are not the problem, the latency is!
I am sure I am doing something wrong. The two instances are in the same time zone. I tried higher-tier databases and the latency does get reduced, but not significantly; Amazon Aurora reduced it from 13 seconds to 3 seconds, which is still unacceptable. So the question is: how do I configure my Elastic Beanstalk + RDS so that it performs well, at least as well as on localhost? How do I reduce that latency?
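One way to confirm it really is per-query round-trip latency rather than execution time is to time a trivial query from the Elastic Beanstalk instance against the RDS endpoint, for example (endpoint and credentials are placeholders):

for i in 1 2 3; do
  ( time mysql -h <rds-endpoint> -u app -p'secret' -e 'SELECT 1;' ) 2>&1 | grep real
done

If SELECT 1 alone takes tens of milliseconds per call, pages that run many small queries will multiply that into seconds.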
Thanks!
I have started using AWS RDS MySQL for my use case.
I am using RDS to store user-uploaded server information from a website.
For testing, I uploaded 1 file with 500 records; it took 20 seconds. I then uploaded 9 files simultaneously with a total of 500 records; that took 55 seconds.
I don't know what the possible reason for this is. I am using a db.t3.large instance.
Which RDS metrics or Performance Insights metrics do I need to look at?
Is this issue due to the lower baseline I/O performance of the gp2 volume?
Check the following metrics:
Write IOPS (Count/Second)
Write Latency (Milliseconds)
Network Receive Throughput
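If you prefer pulling these from the command line instead of the RDS console, something like this should work with the AWS CLI (the instance identifier and time window are placeholders):

aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name WriteIOPS \
  --dimensions Name=DBInstanceIdentifier,Value=my-db-instance \
  --start-time 2024-01-01T00:00:00Z \
  --end-time 2024-01-01T01:00:00Z \
  --period 60 \
  --statistics Average

Repeat with WriteLatency and NetworkReceiveThroughput as the metric name.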
Are you running the operation over the internet or within the VPC? There will be extra network latency if you are going over the internet.
I have a Windows Server with MySQL Database Server installed.
Multiple databases exist on it; among them, database A contains a huge table named 'tlog', about 220 GB in size.
I would like to move over database A to another server for backup purposes.
I know I can do SQL Dump or use MySQL Workbench/SQLyog to do table copy.
But due to limited disk storage on the server (less than 50 GB), an SQL dump is not possible.
The server is also serving other workloads, so CPU and RAM are limited; a table copy that eats up CPU and RAM is not an option either.
Is there any other method to move the huge database A over to another server?
Thanks in advance.
You have a few ways:
Method 1
Dump and compress at the same time: mysqldump ... | gzip > blah.sql.gz
This method is good because chances are the dump will come in under 50 GB: the dump itself is plain text, and you're compressing it on the fly.
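Since local disk is tight, a variation on this is to stream the compressed dump straight to the destination server so nothing large is written locally; a rough sketch, with hosts, credentials, and paths as placeholders:

mysqldump --single-transaction --quick -u root -p databaseA \
  | gzip \
  | ssh backupuser@other-server 'cat > /backups/databaseA.sql.gz'

--single-transaction gives a consistent snapshot for InnoDB tables without locking, and --quick streams rows instead of buffering them in memory, which matters with a 220 GB table.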
Method 2
You can use slave replication; this method will require a dump of the data.
Method 3
You can also use xtrabackup.
Method 4
You can shutdown the database, and rsync the data directory.
Note: You don't actually have to shut down the database. You can instead do multiple rsyncs, and eventually almost nothing will change between passes (unlikely if the database is busy, so do it during a slow period), at which point the data will have synced over.
I've had to use this method with fairly large PostgreSQL databases (1 TB+). It takes a few rsyncs, but hey, that's the cost of zero downtime.
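A rough sketch of the multi-pass approach, assuming a Linux-style data directory (on a Windows source you'd need rsync via Cygwin/WSL or an equivalent tool); paths and hosts are placeholders:

rsync -av /var/lib/mysql/ backupuser@other-server:/var/lib/mysql/
# run it again (and again) until very little changes between passes
rsync -av /var/lib/mysql/ backupuser@other-server:/var/lib/mysql/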
Method 5
If you're in a virtual environment you could:
Clone the disk image.
If you're in AWS you could create an AMI.
You could add another disk and just sync locally; then detach the disk, and re-attach to the new VM.
If you're worried about consuming resources during the dump or transfer, you can use ionice and renice to lower the priority of the dump/transfer process.
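For example, an rsync pass can be launched at low CPU and I/O priority, and optionally bandwidth-limited, roughly like this (the bandwidth cap and paths are just illustrative placeholders):

ionice -c2 -n7 nice -n 19 rsync -av --bwlimit=20000 /var/lib/mysql/ backupuser@other-server:/var/lib/mysql/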
I have a WordPress website running on a VPS that handles about 150 MySQL queries per second.
Occasionally, when we notice a spike in traffic to about 200 MySQL queries per second, HTTPS requests to the site become extremely slow.
The site loads decently over HTTP, but over HTTPS it takes 20+ seconds.
Over the course of an hour or so after the spike, the load times gradually improve and then return to normal.
The server load and memory look fine. There is only a spike in MySQL queries, firewall traffic, and eth0 traffic. There are no MySQL slow queries.
Any help would be appreciated.
Thank You
I think your answer is in the "disk latency" and "disk utilization" charts.
MySQL works well under small loads, when it can cache all the data it needs. But when your results or queries get too big, or you run too many of them, it starts doing a lot of disk I/O. That lets it handle huge loads and very big data sets, but the moment you exceed the memory allocated to MySQL, it has to read and write everything to disk.
This would not be so bad if you were running on a local SSD. But from the device name I can see that you are running on an EBS volume, which is not a real local disk; it is a network-attached drive, so all that traffic also loads your network connection.
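A quick way to check how your data size compares with the memory MySQL is allowed to use (run on the server in question; it only reads information_schema and a variable):

mysql -e "SELECT ROUND(SUM(data_length + index_length)/1024/1024/1024, 1) AS total_gb FROM information_schema.tables;"
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

If total_gb (or at least your hot working set) is much larger than the buffer pool, the disk I/O described above is expected.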
You have several options:
1.) Install mysqltuner, let the server operate for some time, then run it and see what it suggests (a snippet for grabbing and running it follows this list). My guess is that it will suggest increasing your MySQL memory pool, decreasing the number of parallel connections, or restructuring your queries.
2.) Use an EC2 instance type with actual local storage (e.g. m3 or r3) and write to the local SSD. You can RAID several SSDs to make it even faster.
3.) Use an EBS-optimized instance (dedicated EBS network bandwidth) and the correct volume type (some EBS volume types have I/O credits, similar to CPU credits on t-type instances, and when you run out of those, your operations slow to a crawl).
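For option 1, grabbing and running mysqltuner usually looks something like this (it will prompt for MySQL credentials when it runs):

wget http://mysqltuner.pl/ -O mysqltuner.pl
perl mysqltuner.pl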
I have a giant MySQL database with a write-intensive workload.
At the application level, I've created a script that generates SQL files and CSVs with the data; I export this data and load it with a batch job that uses the LOAD DATA INFILE syntax, because I read that this method is faster than normal SQL INSERT statements.
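For context, the batch loader boils down to something like this (paths, table name, and credentials here are placeholders):

for f in /var/batches/*.csv; do
  # note: with server-side LOAD DATA INFILE the file must be readable by mysqld (see secure_file_priv)
  mysql -u loader -p'secret' mydb \
    -e "LOAD DATA INFILE '$f' INTO TABLE events FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';"
done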
I know this is a workaround, and it doesn't let me replicate the data using statement-based logging because it's considered unsafe.
Are there any suggestions for loading huge, constant batches of data without degrading MySQL performance and without using the LOAD DATA INFILE method?
Current server:
Percona Server (MySQL) 5.6
8 SSD disks in RAID 10
128 GB RAM
Buffer pool: 108 GB.