I want to use mysqldump to back up the db on a weekly basis using a cron job.
I don't want to hardcode the credentials in the shell script.
The MySQL db version is 5.1, so mysql_config_editor is not available.
I am aware of the options file, which I can secure using Linux file permissions of 600.
Is there a way to encrypt the credentials and make them unreadable?
Ask yourself whom you want to protect the file from, and why encryption would help beyond normal file permissions.
If you are going to encrypt the file containing the password, you have to make sure that the legitimate backup process has access to the encryption keys so it can read the password from the file. Then you have to make sure all the other processes don't have access to those keys.
Since this further complicates things, it increases the risk of a leak without adding much security on top of the basic file system security model. So I would recommend sticking with the right ownership and file permissions on the .my.cnf file.
Further reading: http://benlog.com/articles/2012/04/30/encryption-is-not-gravy/
I personally run mysqldump daily as root via cron. In order to break this, an attacker needs to break basic file system privileges before they can access /root/.my.cnf (mode is 600 and owned by root). If an attacker is able to do that, they can probably access the database files directly as well, so an encrypted password file wouldn't have helped here.
You can also set up a dedicated system user for the sole purpose of running mysqldump, as long as the mode on ~/.my.cnf is 600 and the ownership is set to that system user.
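A minimal sketch of that setup, assuming a hypothetical system user and MySQL account both named mysqlbackup (the MySQL account must already exist with the privileges mysqldump needs):

# as root: create a dedicated system user with no interactive login
useradd --system --create-home --shell /usr/sbin/nologin mysqlbackup

# credentials file, readable only by that user
cat > /home/mysqlbackup/.my.cnf <<'EOF'
[client]
user=mysqlbackup
password=examplepassword
EOF
chown mysqlbackup:mysqlbackup /home/mysqlbackup/.my.cnf
chmod 600 /home/mysqlbackup/.my.cnf

# weekly dump from that user's crontab (crontab -u mysqlbackup -e), e.g. Sundays at 03:00:
# 0 3 * * 0 mysqldump --all-databases > /home/mysqlbackup/weekly.sql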
P.S. this is the MySQL backup script I run daily on my machines:
https://gist.github.com/timkuijsten/6067107
MySQL 5.6 addresses this problem.
You can now store an encrypted password for a command-line login using mysql_config_editor.
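A quick sketch of how that looks on 5.6+ (the login path, host, and user names are placeholders):

# store the credentials once; prompts for the password and writes ~/.mylogin.cnf
mysql_config_editor set --login-path=backup --host=localhost --user=backupuser --password

# later, use them without putting a password on the command line
mysqldump --login-path=backup --all-databases > weekly.sql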
On my shared webhost, I have very limited ssh access (only via the imscp instantSSH plugin). I want to set up a script to download my whole mysql database as an sql file, but I can't figure it out.
I can't use mysqldump, so I tried the mysql command, which is available, but it isn't working.
I have to specify username, password, host and database, and my password contains special characters.
Can anyone help me?
If your shared host allows remote MySQL connections, then you can use any MySQL client software to connect to the database and extract whatever information you need. Tools like these are: HeidiSQL, Navicat GUI, etc. (a command-line sketch of this option follows below).
Another way would be the one suggested by Akshay Khale.
A 3rd one would be to use phpMyAdmin (most shared web hosts have this installed by default).
A 4th one would be to create a simple PHP script that runs mysqldump locally and saves the .sql dump file either locally on your shared hosting or remotely via FTP/SFTP or any other protocol. The same script can also email you that file (inline or as an attachment). This kind of automation can be configured using a cron job.
There are multiple ways to achieve this. It all depends on which one suits you best.
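For the first option, if remote connections are allowed, you can also dump the database from your own computer (where mysqldump is available) instead of on the host; a minimal sketch with placeholder host, user, and database names, single-quoting the password so its special characters survive the shell:

mysqldump -h shared-host.example.com -u dbuser -p'p@ss!word' mydatabase > mydatabase.sql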
As the author of the InstantSSH plugin for i-MSCP, I can tell you that you should ask your hosting provider to make the mysqldump command available from your chrooted shell. The host administrator can add any command to the chroot by editing the InstantSSH plugin configuration file.
Is there a simple way to do an automated backup of an entire website on a host like GoDaddy via the command-line?
So far, I know I need to back up all the files in my home directory recursively. I could possibly automate SFTP to connect and issue a get -R * command to get the full file dump, or just use SCP.
The other half of the puzzle is getting all of the tables available, mostly WordPress tables. My guess is that maybe there's a command-line command I could issue which dumps the database contents to a flat file, which I could then also pull via SFTP. If such a command exists, my plan is to use a combination of Telnet and EXPECT scripts to log in to the GoDaddy site, issue some commands, then disconnect back to my local shell.
The end result should be that I have a folder with all of my server content in it, plus the flat file backup of the SQL database from the server. I know there are WordPress backup plugins, but they tend to provide a slew of ZIP files, when all I want is the raw data directly so I can put it in my private SVN server for backup and versioning.
So my question: how do I extract all of the databases on my GoDaddy server via the command-line to a file?
Thank you.
In the end, I found a working solution.
First, I used 2 separate expect scripts.
Telnet into the server, delete old backups, use mysqldump to extract all databases to a flat file via mysqldump -u db_owner -p --all-databases > output.sql, and create a massive tarball of everything (see the sketch below). Logout.
Use SCP to pull the newly created tarball and extract it into a local SVN-controlled working copy folder.
Use a second expect script to login to the server and delete the backup. Logout.
From there, I just manually svn add and svn commit as needed.
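Roughly, the commands behind those steps look like this (paths, file names, and the host are placeholders):

# on the server (step 1), after logging in
rm -f ~/backup/output.sql ~/backup/site-backup.tar.gz           # delete old backups
mysqldump -u db_owner -p --all-databases > ~/backup/output.sql  # dump every database
tar -czf ~/backup/site-backup.tar.gz ~/public_html ~/backup/output.sql

# locally (step 2): pull the tarball and unpack it into the SVN working copy
scp user@myhost.example.com:backup/site-backup.tar.gz .
tar -xzf site-backup.tar.gz -C ~/svn-working-copy/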
I own a machine running third party software. I input data into this software and it stores that data into its own mysql database. I'd like to query the mysql database directly, but I don't know the credentials that the application is using.
I have read and write access for all files in the machine, including the files in the mysql data directory. Theoretically, I should be able to read the data directly from these files (.ibd and .frm files). But practically, I don't know where to start. I'm thinking that these data files are somewhat readable since encrypting them would destroy their index-ability.
Is this feasible? Or would I have to reverse engineer the data file format in order to read it?
Or even better - is there some config file that I can change which would implicitly trust all local connections similar to postgres?
You could read the mysql files directly, but even if they're not encrypted, the column names might be weird and you could have to spend some time reading them.
Another option could be looking for config files from that software that might contain the login/password (very low probability, but who knows?).
And the best would be:
make a backup of the mysql files
in another mysql installation / computer (so you don't break your software), follow the reset mysql password guide
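The usual reset procedure looks roughly like this on older servers; the data directory path and new password are placeholders, and running it on a throwaway instance keeps your original installation untouched:

# start a throwaway instance against the copied data directory, with authentication disabled
mysqld_safe --datadir=/path/to/copied/mysql-data --skip-grant-tables --skip-networking &

# set a new root password (pre-5.7 syntax)
mysql -u root -e "UPDATE mysql.user SET Password=PASSWORD('newpass') WHERE User='root'; FLUSH PRIVILEGES;"

# then shut the instance down, restart it normally, and log in with the new password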
Try accessing it via the command line on the local machine:
shell> mysql db_name
(from MySQL documentation)
From here, you can create yourself an account if you need to connect from other client software.
Or have you already tried that?
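If that local login works and has enough privileges, creating an account for other client software is a one-liner; the user name, password, and database here are placeholders, and the grant can be narrowed to whatever you actually need:

mysql db_name -e "CREATE USER 'me'@'%' IDENTIFIED BY 'examplepassword';
                  GRANT SELECT ON db_name.* TO 'me'@'%';
                  FLUSH PRIVILEGES;"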
If you have root access to the machine that MySQL is running on, then you can reset the MySQL root password by following the procedure at: http://www.cyberciti.biz/tips/recover-mysql-root-password.html. Once you've reset the root password, you can then log in to MySQL as the root MySQL user, access any of the databases, and query them. The only caveat to keep in mind is that changing the MySQL root password could potentially prevent your application from accessing the MySQL database, but that would be surprising, as the application should be designed to connect to the database using a MySQL user account (with limited privileges) other than the root MySQL user.
I am writing a bash script that I plan to execute via cron. In this script, I want to execute a command against a MySQL database, something like this:
$ mysql -u username -ppassword -e 'show databases;'
For clarity and those not familiar with mysql, the "-u" switch accepts the username for accessing the database and the "-p" is for password (space omitted purposely).
I am looking for a good way to keep the username/password handy for use in the script, but in a manner that will also keep this information secure from prying eyes. I have seen strategies that call for the following:
Keep password in a file: pword.txt
chmod 700 pword.txt (remove permissions for all except the file's owner)
Cat pword.txt into a variable in the script when needed for login.
but I don't feel that this is very secure either (something about keeping passwords in the clear makes me queasy).
So how should I go about safeguarding a password that will be used in an automated script on Linux?
One way you can obfuscate the password is to put it into an options file. This is usually located in ~/.my.cnf on UNIX/Linux systems. Here is a simple example showing user and password:
[client]
user=aj
password=mysillypassword
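Lock the file down and the client tools pick it up automatically; a small sketch of the usage:

chmod 600 ~/.my.cnf

# no -u/-p needed now; both clients read the [client] section of ~/.my.cnf
mysql -e 'show databases;'
mysqldump --all-databases > backup.sql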
The only truly safe way to guard your password is to encrypt it. But then you have the problem of safeguarding the encryption key. This problem is turtles all the way down.
When the good people who built OpenSSH tackled this problem, they provided a tool called ssh-agent which will hold onto your credentials and allow you to use them to connect to a server when needed. But even ssh-agent holds a named socket in the filesystem, and anybody who can get access to that socket can act using your credentials.
I think the only two alternatives are
Have a person type a password.
Trust the filesystem.
I'd trust only a local filesystem, not a remote mounted one. But I'd trust it.
Security is hell.
Please see the documentation for some guidelines. An extra step you can take is to restrict the use of the ps command for normal users, if they have permission to access the server.
I'll agree with Norman that you should have someone type the password. If you just supply the -p flag without an accompanying password, it will prompt the user for it.
I have a test database on a separate remote server from my production DB. Every once in a while, I want to try and test things by uploading a copy of my production DB to my testing DB. Unfortunately, the backup file is now half a gig and I'm having trouble transferring it via FTP or SSH. Is there an easy way that I can use the mysql restore command between servers? Also, is there another way to move over large files that I'm not considering? Half a gig doesn't seem that big; I would imagine that people run into this issue frequently.
Thanks!
Are the servers accessible to each other?
If so, you can just pipe the data from one db to another without using a file.
ex: mysqldump [options] | mysql -h test -u username -ppasswd
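If the test server only accepts SSH rather than direct MySQL connections, a similar pipe works over ssh; a sketch with placeholder names, compressing the data in transit:

mysqldump [options] production_db | gzip | ssh user@testserver 'gunzip | mysql -u username -ppasswd test_db'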
0. Please consider whether you really need production data (especially if it contains some sensitive information).
1. The simplest solution is to compress the backup on the source server (usually with gzip), transfer it across the wire, then decompress it on the target server (see the sketch below).
http://www.techiecorner.com/44/how-to-backup-mysql-database-in-command-line-with-compression/
2. If you don't need an exact replica of the production data (e.g. you don't need some application logs, errors, or other technical stuff), you can consider creating a backup, restoring it on the source server under a different DB name, deleting all the unnecessary data, and THEN taking the backup that you will use.
3. Restore the full backup once on a reference server in your Dev environment and then copy the transaction logs only (to replay them on the reference server). Depending on the usage pattern, transaction logs may take a lot less space than the whole database.
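A sketch of option 1, with placeholder host, user, and database names:

# on the production server: dump and compress in one step
mysqldump -u produser -p production_db | gzip > prod_dump.sql.gz

# transfer the (much smaller) file to the test server
scp prod_dump.sql.gz user@testserver:/tmp/

# on the test server: decompress and restore
gunzip < /tmp/prod_dump.sql.gz | mysql -u testuser -p test_db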
MySQL allows you to connect to a remote database server to run SQL commands. Using this feature, we can pipe the output from mysqldump and ask mysql to connect to the remote database server to populate the new database.
mysqldump -u root -prootpass SalesDb | mysql --host=185.32.31.96 -C SalesDb
(Note that there is no space between -p and the password; with a space, the password would be treated as a database name.)
Use an efficient transfer method, rather than ftp.
If you have a dump file created by mysqldump on the test db server, and you update it every so often, I think you could save time (if not disk space) by using rsync to transfer it. Rsync will use ssh and compress data for the transfer, but I think both the local and remote files should/could be uncompressed.
Rsync will only transfer the changed portion of a file.
It may take some time to decide what, precisely, has changed in a dump file, but the transfer should be quick.
I must admit though, I've never done it with a half-gigabyte dump file.
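A sketch of the rsync transfer, with placeholder host and paths:

# -z compresses data in transit; on later runs only the changed parts of the dump are sent
rsync -avz --partial user@production-host:/backups/production_dump.sql /var/backups/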