improving MySQL write performance - mysql

I have a giant MySQL database with a write-intensive workload.
At the application level I've created a script that generates SQL files and CSV files with the data; I then export this data and load it with a batch job that uses the LOAD DATA INFILE syntax, because I read that this method is faster than normal SQL INSERT statements.
I know that this is a workaround, and it doesn't allow me to replicate the data using statement-based logging because LOAD DATA INFILE is unsafe for that.
Are there any suggestions for loading huge and constant batches of data without degrading MySQL performance and without using the LOAD DATA INFILE method?
Current server:
Percona MySQL Server 5.6
8 SSD disks in RAID 10
128 GB RAM
Buffer pool: 108 GB
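For comparison, the usual replication-safe alternative to LOAD DATA INFILE is to batch many rows into multi-row INSERT statements inside explicit transactions. A minimal sketch, assuming a hypothetical table named events (the database, table, and values are made up for illustration):

# batch rows into multi-row INSERTs inside one transaction (hypothetical schema)
mysql mydb <<'SQL'
SET autocommit = 0;
START TRANSACTION;
INSERT INTO events (id, created_at, payload) VALUES
    (1, '2024-01-01 00:00:00', 'a'),
    (2, '2024-01-01 00:00:01', 'b'),
    (3, '2024-01-01 00:00:02', 'c');
-- repeat with a few thousand rows per statement before committing
COMMIT;
SQL

In practice the sweet spot is usually a few thousand rows per INSERT and a handful of INSERTs per transaction, so binlog and redo-log overhead is amortized without creating huge transactions.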

Related

Network performance issues in AWS RDS with bulk insert

I need to load a large CSV file (16 GB in total) into AWS RDS (MariaDB).
I tried to load it via LOAD DATA LOCAL INFILE.
It runs very fast in my local environment, but very slowly in RDS (about 0.01 MB/sec).
Of course, I loaded the 16 GB file in separate pieces, the connection to the DB is released normally in the code I wrote, and there is nothing unusual in the current process list when I check it with SHOW PROCESSLIST.
I would like to know what to check and fix in this situation to improve network performance.
Thank you.
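One common way to narrow a problem like this down (an assumption on my part, not something from the post) is to split the file into chunks and time each load, which shows whether throughput degrades over time or stays uniformly slow; the host, credentials, table name, and CSV layout below are placeholders:

# split the CSV into ~1M-line chunks and time each LOAD DATA LOCAL INFILE (hypothetical names)
split -l 1000000 big.csv chunk_
for f in chunk_*; do
  time mysql --local-infile=1 -h myrds.example.com -u app -p mydb \
    -e "LOAD DATA LOCAL INFILE '$f' INTO TABLE t FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'"
done

If every chunk is equally slow, the bottleneck is more likely the network path or the instance class than the statement itself.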

How to import a compressed .csv file from AWS EFS into a MariaDB cluster

We copy compressed .csv files from AWS S3 (U.S.) to AWS EFS (Europe) and need to import them into a MariaDB cluster (Europe); the challenge is where and how best to do the decompression before calling mysqlimport or LOAD DATA INFILE.
Background:
Users (via a browser-based client) will upload large .csv files (<= 2 GB) using a pre-signed URL to AWS S3, to then be imported into our European MariaDB cluster. We copy the compressed files from S3 to AWS EFS (Europe). We use EFS because of its speed (over S3) and because we don't know which load-balanced DB server will handle the LOAD DATA INFILE (EBS is EC2-specific, so it's not fault-tolerant).
Our SysAdmin recommends writing a bash script that decompresses the file on one of the DB servers in the cluster and then does the import using mysqlimport or LOAD DATA INFILE. The concern is that we'd be slowing down the DB server(s) with a decompression task that isn't normally done by a DB server (I/O, CPU, and memory impact affecting online users).
We can't decompress in the U.S. and ship the file decompressed to the AWS Europe region, because of higher transfer times and inter-region transfer costs.
Question:
Is the solution to add one or two servers near the DB cluster to handle decompression (they'd need failover, load balancing, etc. to be robust), or is it OK to use the DB servers for decompression, perhaps after 'beefing them up'? Or are there other options?
If AWS even lets you do LOAD DATA, then:
Compress the CSV file in the US.
Ship the compressed file to Europe.
Decompress the file in Europe.
Run LOAD DATA INFILE.
Note: Steps 1, 2, and 3 do not involve the database. Even if you are using the same VM for the work, the impact on the database should be minimal (mostly chewing up IOPs).
A typical CSV file compresses only about 3:1. Is it worth the effort to save only ~1.3 GB of transfer cost for a 2 GB file?
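Steps 3 and 4 might look roughly like the sketch below; the paths, database, table, and CSV layout are assumptions, and whether you need the LOCAL variant depends on whether the file is visible on the DB server's own filesystem (and on secure_file_priv):

# decompress on the EU side, then load; all names are hypothetical
gunzip -k /efs/incoming/upload-123.csv.gz
mysql -h db-eu.example.com -u loader -p imports -e "
  LOAD DATA INFILE '/efs/incoming/upload-123.csv'
  INTO TABLE uploads
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
  LINES TERMINATED BY '\n'
  IGNORE 1 LINES"

Running gunzip on a separate small box (or under nice/ionice on a DB node) keeps the CPU hit away from query traffic.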

How to dump a large MySQL database faster?

I have a MySQL database that is 4 TB in size, and when I dump it using mysqldump it takes around 2 days to produce the .sql dump.
Can anyone help me speed up this process?
OS: Ubuntu 14
MySQL 5.6
A single database of 4 TB
Hundreds of tables; the average table size is around 100 to 200 GB
Please help if anyone has a solution for this.
I would:
stop the database,
copy the files to a new location,
restart the database,
process the data from the new location (maybe on another machine).
If you are replicating, just stop replication, process, then start replication again.
These methods should improve speed because there are no concurrent processes accessing the database (and none of the locking logic).
On such large databases, I would try not to make dumps at all. Just use the MySQL table files directly if possible.
In any case, 2 days seems like a lot, even for an old machine. Check that you are not swapping, and go through your MySQL configuration for possible problems. In general, try to get a better machine. Computers are cheaper than the time spent optimizing.
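A bare-bones version of the stop/copy/restart approach, assuming a standard Linux layout (the paths, service name, and destination host are placeholders, and the data directory must only be copied while mysqld is fully stopped so the files are consistent):

# cold copy of the data directory to another machine (hypothetical paths/host)
sudo service mysql stop
rsync -a --progress /var/lib/mysql/ backup-host:/data/mysql-copy/
sudo service mysql start

The copied files can then be processed or dumped on the other machine without touching the production server.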

Moving a large MySQL database off a server with limited resources

I have a Windows Server with MySQL Database Server installed.
Multiple databases exist on it; among them, database A contains a huge table named 'tlog', about 220 GB in size.
I would like to move database A over to another server for backup purposes.
I know I can do a SQL dump or use MySQL Workbench/SQLyog to copy the tables.
But due to limited disk storage on the server (less than 50 GB), a SQL dump is not possible.
The server is serving other workloads, so CPU and RAM are limited too. As a result, copying the tables without using up CPU and RAM is not possible.
Is there any other method to move the huge database A over to another server?
Thanks in advance.
You have a few ways:
Method 1
Dump and compress at the same time: mysqldump ... | gzip > blah.sql.gz
This method is good because chances are the compressed dump will come in under 50 GB: the dump itself is plain text (ASCII), and you're compressing it on the fly.
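Spelled out a bit more (the database name, file name, and destination host are placeholders; the ssh streaming variant is my addition, useful when even the compressed dump won't fit on local disk):

# dump and compress on the fly; --single-transaction avoids long locks on InnoDB tables
mysqldump --single-transaction dbA | gzip > dbA.sql.gz

# or stream straight to the other server so nothing is written locally
mysqldump --single-transaction dbA | gzip | ssh backup-host 'gunzip | mysql dbA'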
Method 2
You can use slave replication; this method will require a dump of the data.
Method 3
You can also use xtrabackup.
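For example (the target directory is a placeholder; note that Percona XtraBackup runs on Linux only, so this assumes a Linux source server, and the exact options vary by XtraBackup version):

# physical backup with Percona XtraBackup (2.4-style invocation)
xtrabackup --backup --target-dir=/backups/full
xtrabackup --prepare --target-dir=/backups/full
# copy /backups/full to the new server and use it as that server's data directory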
Method 4
You can shut down the database and rsync the data directory.
Note: You don't actually have to shut down the database; you can instead do multiple rsync passes, and eventually nothing will change between passes (unlikely if the database is busy, so do it during a slow time), which means the data has been fully synced over.
I've had to use this method with fairly large PostgreSQL databases (1 TB+). It takes a few rsync passes, but hey, it's the cost of zero downtime.
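The multi-pass idea looks roughly like this (paths and host are hypothetical and assume a Linux-style data directory):

# first passes while MySQL is running; each pass only transfers what changed
rsync -a /var/lib/mysql/ backup-host:/data/mysql/
rsync -a /var/lib/mysql/ backup-host:/data/mysql/

# final pass during a quiet window (or a brief shutdown) to catch the last deltas
sudo service mysql stop
rsync -a --delete /var/lib/mysql/ backup-host:/data/mysql/
sudo service mysql start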
Method 5
If you're in a virtual environment you could:
Clone the disk image.
If you're in AWS you could create an AMI.
You could add another disk and just sync locally; then detach the disk, and re-attach to the new VM.
If you're worried about consuming resources during the dump or transfer, you can use ionice and renice to lower the priority of the dump/transfer process.
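For instance (the priorities shown are just example values; renice works the same way on an already-running PID):

# run the dump at idle I/O priority and lowest CPU priority
ionice -c3 nice -n19 mysqldump --single-transaction dbA | gzip > dbA.sql.gz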

Do named FIFO pipes use disk writes and reads?

I want to parse the MySQL general log and store that information on another server.
I was wondering whether there would be a performance increase if I had MySQL write its log to a Linux named pipe (FIFO) instead of just moving the log file and then parsing it.
My goal is to remove the hard disk access and increase the performance of the MySQL server.
This is all done on Linux (CentOS).
So does a FIFO use disk access, or is everything done in memory?
If I had MySQL write to a FIFO, and a process running in memory parsed that information and sent it to a different server, would that save on disk writes?
Also, would this be better than storing the MySQL general log in a MySQL table?
I've noticed that INSERT statements can add 0.2 seconds to a script, so I'm wondering whether turning on logging for MySQL is going to add 0.2 seconds to every query that's run.
From the fifo(7) man-page:
A FIFO special file has no contents on the file system.
Whether it is a good idea to use a FIFO in an attempt to increase MySQL performance is another question.
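To make the idea concrete, a minimal sketch of the named-pipe setup (paths and hosts are hypothetical, and note that if nothing is reading the FIFO, mysqld's writes to it will block):

# create the pipe and point the general log at it; no log contents hit local disk
mkfifo /var/log/mysql/general.fifo
chown mysql:mysql /var/log/mysql/general.fifo

# a consumer must stay attached, otherwise the server blocks on the write
cat /var/log/mysql/general.fifo | ssh log-host 'cat >> /var/log/mysql-general.log' &

mysql -e "SET GLOBAL general_log_file = '/var/log/mysql/general.fifo'; SET GLOBAL general_log = 'ON';"

Whether this actually helps depends on whether disk writes for the general log were the bottleneck in the first place.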