Sync local MySQL database to online MySQL database - mysql

I just uploaded my database to an online server and my application connected successfully, but then the application became very slow, with about a 5-second delay on every action, and it sometimes stops responding.
To overcome this, I think the solution is to sync my local database dynamically and connect my application to the local database.
My question is: how can I sync the local database to the web server?
I'm using MySQL with ADO.NET as the connector,
without using any third-party software.

Your question isn't very clear, but if I understand correctly, you want to synchronise a MySQL database on a remote server with the database on your local machine.
There are a number of ways to do this efficiently:
1. Use mysqldump to make a backup of the data on your local machine, transfer the dump file to the remote server via FTP/SFTP/SCP, and then restore it on the remote server:
mysqldump -u yourusername -pyourpassword yourdatabasename > yourbackupfilename.sql
(Note there is no space between -p and the password; with a space, the client prompts for a password and treats the next word as a database name.)
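To move the dump file over, scp is the simplest of the three transfer options; for example (the username, host, and path here are placeholders for your own values):
scp yourbackupfilename.sql yourusername@yourremotehost:/tmp/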
Then run the following on the remote server to restore it:
mysql -u yourusername -pyourpassword remotedatabasename < yourbackupfilename.sql
2. Use a MySQL GUI (my favourite is SQLyog (https://www.webyog.com/product/sqlyog), but you have to pay for a license). SQLyog has a really nice feature where you can connect to both your local MySQL installation and the remote server. You can then copy databases from one to the other, all in the GUI. Super-convenient and easy, but method 1 will be faster for large databases.
If I misunderstood, please try to be clearer about what you want to do.

Related

How to set up and connect my database to a NodeJS application on DigitalOcean

I have managed to create a droplet on DigitalOcean and cloned my NodeJS app onto it. Locally, the app connects to a MySQL database, and I wanted to do the same on the live version. Ignorantly, I created a managed database cluster, added one user account, and created one database. Right now I do not know how to import the exported database.sql file into the database, since I am only used to phpMyAdmin.
How can I get this to work and connect to my NodeJS app?
You were using phpMyAdmin as an interactive MySQL client program. It's easy to use, but hard to set up because it's a web app.
Try another MySQL client program. The command-line client, memorably named mysql, is a good choice.
Get a shell on your droplet, then say
sudo apt install -y mysql-client
mysql -u username -p -h databasehostname -D database
mysql> source database.sql
mysql> quit
You'll be prompted for your database password.
That should import your database.
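If you prefer a one-liner over the interactive source command, shell redirection does the same job (same placeholder names as above):
mysql -u username -p -h databasehostname -D database < database.sql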
The mysql command-line program is very useful and well worth your time to learn.
First, make sure your database cluster is not open to the outside world by adding a DB firewall rule in DigitalOcean's managed database settings. You can allow connections from your own droplet's private IP address and from your own public IP address (or VPN, or however you're set up). Once you've done that, you should be able to import your SQL file locally (or from the DO droplet, as long as you have the mysql client installed):
mysql -h [host-provided-do] -P [port-provided-do] -u [username-provided-do] -p [db-name-provided-do] < my-file.sql
The most important thing is to make sure your managed database is not open to the outside world and only allows incoming connections from known IP addresses.
In your NodeJS app, you can then configure the driver to connect over the private network that DO provides.

Missing MySQL users after migration using Plesk Migration Tool

So, I'm not a server expert. I managed to migrate a server using Plesk's migration tool. All Plesk-managed DBs were moved, but I discovered that the DBs and users managed directly through MySQL were not migrated. Can anyone tell me a solution to this?
This is expected behavior: the Plesk Migration Tool migrates only the objects it knows about. Since some of your databases and users are managed through MySQL directly, Plesk knows nothing about them, so they are not transferred.
You should transfer such databases and users manually with mysqldump.
To create a backup of a database with mysqldump, you can use the following command:
MYSQL_PWD=`cat /etc/psa/.psa.shadow` mysqldump -u admin DATABASE_NAME > FILE_NAME.sql
To restore such database run:
MYSQL_PWD=`cat /etc/psa/.psa.shadow` mysql -u admin DATABASE_NAME < FILE_NAME.sql
You will also need the grant information stored in the mysql system database. I do not recommend blindly transferring that database; instead, just re-create the users on the target server.
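For example, a user can be re-created on the target server with CREATE USER and GRANT statements run through the mysql client; the user name, host, and password below are placeholders, and you should adjust the privileges to match what the user had on the source server:
MYSQL_PWD=`cat /etc/psa/.psa.shadow` mysql -u admin -e "CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'newpassword'; GRANT ALL PRIVILEGES ON DATABASE_NAME.* TO 'appuser'@'localhost';"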
Keep in mind that if the MySQL version on the target server is higher than on the source, you will need to run the mysql_upgrade script to update the schema.
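On a Plesk server that would look something like this (run once after the import, using the same admin credentials as above):
MYSQL_PWD=`cat /etc/psa/.psa.shadow` mysql_upgrade -u admin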
Alternatively, you can export/import databases through phpMyAdmin, which ships with Plesk and can be found at Plesk > Tools & Settings > Database Servers.

Backup SQL database from secondary Linux server

I took over a website at a less-than-optimal hosting provider, with no backups yet.
I do have FTP access, and I know the database access parameters the installed web app uses to reach the MySQL server, but I don't have access to the MySQL interface or the underlying server.
I would like to do an automated backup to a Linux server under my control.
I can download all data via FTP, zip it and store it on a backed up storage.
How to do this for the database?
As an initial solution, I installed phpMyAdmin and did a manual backup, but I would like to automate this process.
You can use mysqldump to back up a remote MySQL database.
Suppose your MySQL database is on a host called "dbhost". You can reach that host over the network from your new Linux host.
Run this command on your new Linux host:
$ mysqldump --single-transaction --all-databases --host dbhost > datadump.sql
(You might also need to add the --user and --password options.)
You can automate any command you can run at the command line: put it in a shell script, then invoke the script from cron.
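A minimal sketch of such a script; the host name, credentials, and backup directory below are placeholders for your own values:
#!/bin/sh
# backup-db.sh -- dump all databases from the remote host into a dated, compressed file
BACKUP_DIR=/var/backups/mysql
mkdir -p "$BACKUP_DIR"
mysqldump --single-transaction --all-databases \
    --host dbhost --user backupuser --password=secret \
    | gzip > "$BACKUP_DIR/datadump-$(date +%F).sql.gz"
A crontab entry such as "0 3 * * * /path/to/backup-db.sh" would then run it every night at 03:00.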

MySQL database dump from remote host without temporary file

I'm trying to implement a database backup cron job (other solutions welcome) at my job, but I have a small problem:
I have a large database that takes over 10GB of space, and the current VM doesn't have enough space to hold the dump file that mysqldump creates.
I know I can use mysqldump with a host parameter, but my question is: when doing that, does the file generated by mysqldump end up on the machine running the command, or on the database server?
UPDATE:
I forgot to mention that I'm trying to back up a network of websites; some of them are behind a firewall (requiring VPN access), and some need server hopping to reach the database server.
You can run a shell script from an archive host where you've exchanged password-less SSH keys with the database server. This lets you transfer the dump directly over SSH, without creating any temporary files on the remote database server:
ssh -C myhost.com mysqldump -u my_user --password=bigsecret \
--skip-lock-tables --opt database_name > local_backup_file.sql
Obviously there are ways to secure that password on the command line, but this is a method that could accomplish what you want. One advantage of this method is that it doesn't require the archive host to have access to port 3306 on the remote host.
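One common way to keep the password off the command line is a MySQL option file on the database server; this sketch assumes a ~/.my.cnf readable only by your user (and that no option file exists there yet):
# on the remote host, once:
printf '[mysqldump]\nuser=my_user\npassword=bigsecret\n' > ~/.my.cnf
chmod 600 ~/.my.cnf
# then the dump needs no credentials on the command line:
ssh -C myhost.com mysqldump --skip-lock-tables --opt database_name > local_backup_file.sql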
This guy's version is cool because it also compresses the data on-the-fly before transferring it over the network, and then he uncompresses it before loading it into a local database.
ssh me@remoteserver 'mysqldump -u user -psecret production_database | \
gzip -9' | gzip -d | mysql local_database
But that's why my version uses ssh -C, which enables its own compression algorithm and avoids extra gzip pipes.
Depending on the circumstance it might be a better idea to use MySQL replication. Set up MySQL on your backup server and configure it as a slave of your production database (see http://dev.mysql.com/doc/refman/5.7/en/replication-howto.html). You can then dump the slave database easily.
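Configuring the slave essentially boils down to pointing it at the master and starting replication; the host, credentials, and binary log coordinates below are placeholders (see the linked how-to for obtaining the real ones):
# run on the backup server, once the master has a replication user set up
mysql -e "CHANGE MASTER TO MASTER_HOST='production-db.example.com', \
  MASTER_USER='repl', MASTER_PASSWORD='replsecret', \
  MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4; START SLAVE;"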
An advantage of this approach is that you're not transferring 10GB each time you want a backup; you're only transferring changes to the database as they occur.
You'll need to keep an eye on the replication though, because if it fails your slave database will become stale.
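When you take the dump, you can briefly pause replication so the data is consistent at a single point in time; a sketch (the output path is just an example):
mysql -e "STOP SLAVE SQL_THREAD;"    # pause applying changes from the master
mysqldump --all-databases > slave_backup.sql
mysql -e "START SLAVE SQL_THREAD;"   # resume replication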

Where are the MySQL databases stored (cPanel/WHM)?

I have cPanel & WHM installed on my server.
Is it safe to back up this directory (if I only care about backing up the MySQL databases):
"/var/lib/mysql/"
I don't care about the other MySQL databases that cPanel provide by default. I only care about the MySQL databases that other cPanel users have created and currently own.
I know I could back it up in other ways, but let's say that due to a hard disk drive failure, I cannot access cPanel and WHM.
The only access to the server I have is via SSH (and SFTP).
Okay, so would it be in my best interest to just download everything in "/var/lib/mysql/"?
If not, what other files would I need to back up? Let me guess, just the "/home/" directory?
I hope my description of the issue was clear.
Basically, I need to transfer the MySQL databases from one HDD to another, but the HDD with the MySQL databases has lots of errors and is corrupted (I cannot access cPanel/WHM), and my server provider tells me the HDD has failed.
In advance, I would like to thank you very much for your help.
Even if you did not help, thank you very much for taking your time reading this. It is much appreciated.
You mentioned that you can access the server via SSH but have no access to WHM or cPanel. I guess you have no access to phpMyAdmin(?). I am also guessing that the second HDD is on another server.
Instead of backing up a directory, I would suggest you connect to your server via SSH, make backups with mysqldump, download them locally with SFTP, and then import the database backups on the other HDD/server.
Connect to your server with SSH:
ssh root@xxx.xxx.xxx.xx1
Where xxx.xxx.xxx.xx1 is the IP address of your first server. Give your password when prompted.
Use mysqldump to make a backup of your database(s) on the server.
mysqldump -uroot -p mydatabase1 > mydatabase1.sql
mysqldump -uroot -p mydatabase2 > mydatabase2.sql
...
Type your MySQL password when prompted, and the .sql files (backups of your databases) will be created. I would suggest you don't put the backups in a publicly accessible directory of your server.
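For example, you can keep the dumps in a directory only root can read (the path here is just an example):
mkdir -p /root/db-backups && chmod 700 /root/db-backups
mysqldump -uroot -p mydatabase1 > /root/db-backups/mydatabase1.sql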
If you are on a Unix system, you can type "ls" (or "ll", where it is aliased) to see that the .sql files have been created. Make a note of the directory on your server where the backups are located.
Terminate the SSH session:
exit
Then use your favourite SFTP program to connect to your server, or use the terminal like this:
sftp root@mywebsite.com
Type your password when prompted.
Navigate to the directory where the backups are located and download them using the "get" command:
get mydatabase1.sql
Your mydatabase1.sql backup file will be downloaded to your local machine.
Don't forget to close the session:
exit
Now SFTP to your other HDD to upload the database backups:
sftp root@xxx.xxx.xxx.xx2
where xxx.xxx.xxx.xx2 is the IP address of your other machine. Give password when prompted.
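Once connected, upload the backups with the "put" command:
put mydatabase1.sql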
Don't forget to close the SFTP session:
exit
Now that you have uploaded the databases, you can connect again with SSH to the other HDD/server just like before:
ssh root@xxx.xxx.xxx.xx2
Once connected, create the new database:
mysql -uroot -p -e "create database mydatabase1"
Import the backup to the database:
mysql -uroot -p mydatabase1 < mydatabase1.sql
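You can quickly verify the import by listing the tables (using the same example database name):
mysql -uroot -p -e "SHOW TABLES FROM mydatabase1"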
Now the database backup should be imported on the new server/HDD. I hope this helps.