Efficient ways/tools for MySQL DB cloning

I have an ERP that requests new e-commerce instances to be created.
These instances are basically clones of a master, on the same server.
So I have the master DB (currently a dump file) and the master code as a .gz archive.
Through various scripts, a clone is created and installed.
Everything else is fast except the DB import from the dump.
Plus, when there is a request for, say, 50 instances simultaneously, it isn't realistic to import all of these databases.
The question is:
Is there any more efficient tool to import a database other than mysql db < file.sql?
Any binary formats?
The only solution I can think of is to use a script to create 100 DB clones in advance, one by one, and then, when 50 new DBs are requested, just rename the existing clones.
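For what it's worth, MySQL has no RENAME DATABASE statement, so the rename step would have to move each clone's tables into a freshly created schema. A rough sketch of that idea, assuming credentials come from ~/.my.cnf and using placeholder database names:
#!/usr/bin/env bash
# Hand out a pre-created clone under the requested name by moving its
# tables, since MySQL has no RENAME DATABASE. Names are placeholders.
OLD_DB=clone_pool_07
NEW_DB=shop_42
mysql -e "CREATE DATABASE \`$NEW_DB\`"
for t in $(mysql -N -e "SHOW TABLES FROM \`$OLD_DB\`"); do
    mysql -e "RENAME TABLE \`$OLD_DB\`.\`$t\` TO \`$NEW_DB\`.\`$t\`"
done
mysql -e "DROP DATABASE \`$OLD_DB\`"
RENAME TABLE across schemas is a fast metadata operation, but note that views, per-database grants, and tables with triggers need extra handling.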

You can clone the database on the fly without exporting to a file and importing it back; this eliminates the need to parse a .sql dump file back into MySQL:
mysqldbcopy --source=root:root@localhost \
--destination=root:root@localhost world:world_clone
https://dev.mysql.com/doc/mysql-utilities/1.5/en/utils-task-clone-db.html
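For the 50-instances-at-once case, the same command can simply be looped. A rough sketch, assuming MySQL Utilities is installed and the master schema is called master_shop (a placeholder):
# Create 50 clones of the master schema in a loop; the credentials and
# the master_shop / shop_clone_* names are placeholders.
for i in $(seq 1 50); do
    mysqldbcopy --source=root:root@localhost \
                --destination=root:root@localhost \
                master_shop:shop_clone_$i
done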

Related

Import data from CSV file to Amazon Web Services RDS MySQL database

I have created a Relational Database (MySQL) hosted on Amazon Web Services. What I would like to do next is import the data in my local CSV files into this database. I would really appreciate it if someone could provide an outline of how to go about it. Thanks!
The easiest and most hands-off approach is to use the MySQL command line. For large loads, consider spinning up a new EC2 instance, installing the MySQL command-line tools, and transferring your file to that machine. Then, after connecting to your database from the command line, you'd do something like:
mysql> LOAD DATA LOCAL INFILE 'C:/upload.csv' INTO TABLE myTable;
There are also options to match your file's format and to skip the header row (plenty more in the docs):
mysql> LOAD DATA LOCAL INFILE 'C:/upload.csv' INTO TABLE myTable FIELDS TERMINATED BY ','
ENCLOSED BY '"' IGNORE 1 LINES;
If you're hesitant to use the command line, download MySQL Workbench. It connects to AWS RDS without a problem.
Closing thoughts:
MySQL LOAD DATA Docs
AWS's Aurora RDS is MySQL-compatible, so the command works there too.
The "LOCAL" flag actually transfers the file from your client machine (where you're running the command) to the DB server. Without LOCAL, the file must already be on the DB server, which you can't arrange in advance with RDS.
It works great on huge files too! I just sent an 8.2 GB file (260 million rows) via this method; it took just over 10 hours from a t2.medium EC2 instance to a db.t2.small Aurora instance.
It's not a solution if you need to watch out for unique keys or read the CSV row by row and change the data before inserting/updating.
I did some digging and found this official AWS documentation on how to import data from any source to MySQL hosted on RDS.
It is a very detailed step-by-step guide and includes an explanation of how to import CSV files.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.AnySource.html
Basically, each table must have its own file. Data for multiple tables cannot be combined in the same file. Give each file the same name as the table it corresponds to. The file extension can be anything you like. For example, if the table name is "sales", the file name could be "sales.csv" or "sales.txt", but not "sales_01.csv".
Whenever possible, order the data by the primary key of the table being loaded. This drastically improves load times and minimizes disk storage requirements.
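The one-file-per-table naming convention is exactly what the mysqlimport client relies on; here is a hedged sketch of loading sales.csv into the sales table (the endpoint, user and database names are placeholders):
# mysqlimport derives the table name ("sales") from the file name.
mysqlimport --local --compress \
    --fields-terminated-by=',' --fields-optionally-enclosed-by='"' \
    --ignore-lines=1 \
    -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p \
    mydatabase sales.csv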
There is another option for importing data into a MySQL database: you can use an external tool, Alooma, which can do the data import for you in real time.
It depends on how large your file is, but if it is under 1 GB, I found that DataGrip imports smaller files without any issues: https://www.jetbrains.com/datagrip/
You get a nice mapping tool and a graphical IDE to play around with. DataGrip is available as a free 30-day trial.
I am experiencing RDS connection dropouts myself with bigger files (> 2 GB). Not sure whether the issue is on the DataGrip or the AWS side.
I think your best bet would be to develop a script in your language of choice to connect to the database and import it.
If your database is internet-accessible then you can run that script locally. If it is in a private subnet then you can either run that script on an EC2 instance with access to the private subnet or on a Lambda function connected to your VPC. You should really only use Lambda if you expect the runtime to be less than 5 minutes or so.
Edit: Note that Lambda only supports a handful of languages:
AWS Lambda supports code written in Node.js (JavaScript), Python, Java (Java 8 compatible), and C# (.NET Core).
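As a minimal shell sketch of the EC2/local variant of that idea (the script just shells out to the mysql client; the endpoint, user, database, file and table names are placeholders):
#!/usr/bin/env bash
# Run the CSV load from a host that can reach the RDS endpoint.
# All names below are placeholders.
mysql --local-infile=1 \
      -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p mydatabase \
      -e "LOAD DATA LOCAL INFILE 'upload.csv' INTO TABLE myTable
          FIELDS TERMINATED BY ',' ENCLOSED BY '\"' IGNORE 1 LINES;"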

MySQL: How to migrate database structure from dev to stage environment with PhpMyAdmin

I am doing constant upgrades to my client's website, and most upgrades include changes to the database structure.
Currently every time I change the structure, I build a seed file and run it when we go to the staging environment.
This is very time consuming and I can only assume there is a better way. Looking through PhpMyAdmin, there is a way to export just the database structure, but when I import this into my staging database it doesn't update the structure.
I tried following this thread, but I can't get anything to work: How do I migrate new MySQL database structure from dev to production website using the command line?
Is there a way to quickly apply database structure changes from one database to another without changing the data?
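For reference, a minimal sketch of the command-line equivalent of PhpMyAdmin's structure-only export, which makes it easy to see what differs between the dev and stage schemas (database names and credentials are placeholders):
mysqldump --no-data --skip-comments -u user -p dev_db > dev_schema.sql
mysqldump --no-data --skip-comments -u user -p stage_db > stage_schema.sql
diff dev_schema.sql stage_schema.sql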

Is it possible to copy a local MySQL database to a remote MySQL database?

Situation: I have 2 servers, one of them currently hosting a live WordPress site, and I want to be able to transfer the site to the other server in case the first server goes down. Transferring the source files is easy; transferring the database is what I need to figure out how to do. Both of the servers are Windows Server 2008.
Is there any easy way to do this?
The simplest way would be to mysqldump the database, transfer the dump using the same mechanism you have for your source files, then import it into MySQL.
Dump the primary database...
mysqldump -u user -p database > c:\somedir\backup.sql
...transfer the sql file...
Import on the failover...
mysql -u user -p database < c:\somedir\backup.sql
Both export and import can easily be scripted in batch files.
The easiest way that I know of is the "Duplicator" plugin. I have used it several times with Apache servers, but as is commented here, it seems it was already running fine on Windows 2008 with IIS 7 three years ago, so I figure it would be even better now.
Duplicator generates two packages: one with the files (where you can exclude uploads if needed) and the other with the database. Once you have the two packages, you need to upload them to your new server and run the installer. Of course you need the new database credentials. In the last step the plugin asks you for the new base URL so it can make the appropriate substitutions throughout the database.

Percona backup and restore

I'm trying to use Percona XtraBackup to back up a MySQL database. When restoring the database, according to the documentation:
rsync -avrP /data/backup/ /var/lib/mysql/
This will copy ibdata1 as well.
What if I want to restore the backup into an existing MySQL instance with some existing databases? Would this corrupt my other databases? Clearly this will overwrite the existing ibdata1.
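For reference, a hedged sketch of the usual way around this: prepare the backup with --export and bring individual tables into the running instance via transportable tablespaces, instead of rsync-ing over the whole data directory. This assumes innodb_file_per_table, MySQL 5.6+ and XtraBackup 2.4+, and that each table already exists in the target with the same definition; all names and paths are placeholders:
# Prepare the backup so per-table .ibd/.cfg files can be exported.
xtrabackup --prepare --export --target-dir=/data/backup

# On the running instance, per table to restore:
mysql -e "ALTER TABLE mydb.mytable DISCARD TABLESPACE"
cp /data/backup/mydb/mytable.ibd /data/backup/mydb/mytable.cfg /var/lib/mysql/mydb/
chown mysql:mysql /var/lib/mysql/mydb/mytable.*
mysql -e "ALTER TABLE mydb.mytable IMPORT TABLESPACE"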
I suppose you have a local HTTP/PHP server, so if you don't need to batch import or export information, I suggest you use a database manager app that can import or export SQL, CSV or TSV files.
I use a web-based admin tool called Adminer and it works great (plus, it's just a single PHP file). It has options to export or import a whole database or just certain tables, and even specific records. Its usage is pretty straightforward.

Duplicating PostgreSQL database on one server to MySQL database on another server

I have a PostgreSQL database with 4-5 tables (some of them have more than 20 million rows). I have to replicate this entire database onto another machine. However, that machine has MySQL on it (and for some reason I cannot install PostgreSQL there).
The database is static and is not updated or refreshed. No need to sync between the databases once replication is done. So basically, I am trying to backup the data.
There is a utility called pg_dump which will dump the contents onto a file. I can zip and ftp this onto the other server. However, I do not have psql on the other machine to reload this into a database. Is there a possibility that mysql might parse and decode this file into a consistent database?
Postgres is version 9.1.9 and mysql is version 5.5.32-0ubuntu0.12.04.1.
Is there any other simple way to do this without installing any services?
Depends on what you consider "simple". Since it's only a small number of tables, the way I'd do it is like this:
dump individual tables with pg_dump -t table_name --column-inserts
edit the individual files, changing the schema definitions to be compatible with MySQL (e.g. using auto_increment instead of serial, etc.; like this, only in reverse: http://www.xach.com/aolserver/mysql-to-postgresql.html)
load the files with the mysql client like you would any other MySQL script (see the sketch after these steps).
If the files are too large for step #2, use the -s and -a arguments to pg_dump to dump the data and the schema separately, then edit only the schema file and load both files in mysql.
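A rough sketch of those steps as commands (database and table names are placeholders):
# One file per table, with portable INSERT statements.
pg_dump -t big_table --column-inserts sourcedb > big_table.sql
# ...edit big_table.sql so the CREATE TABLE part is MySQL-compatible...
mysql -u user -p targetdb < big_table.sql

# If the file is too large to edit, split schema and data:
pg_dump -s -t big_table sourcedb > big_table_schema.sql                  # schema only
pg_dump -a -t big_table --column-inserts sourcedb > big_table_data.sql   # data only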