Linux: Perform MySQL backup to a time-based path from Webmin - mysql

I was trying to set up a scheduled task (cron job) on Ubuntu Server 12.04 to perform a daily backup of all my MySQL databases at midnight.
I have installed the well-known Webmin (a nice web interface for managing web servers).
My issue is: whenever the backup is performed, the files get overwritten!
That means the backups from the day before yesterday are LOST; only yesterday's backup is kept.
I have tried setting a dynamic file path like:
/var/www/mysqlbackups/%d-%m-%y
but I had no success with that :(
Can anybody help me?
Thanks a lot, guys.

MySQL Database Server > Module Config > choose Yes for "Do strftime substitution of backup destinations?"
It works for me! :)

You can use a dynamic path, but first you must enable it in the module config:
System > Filesystem Backup > Module Config > Do strftime substitution of backup destinations? > Yes
(If you are not sure about the placeholders, just click on the text "Do strftime substitution of backup destinations?" in the config and the help will show you.)
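For illustration (using the path from the question, switched to a four-digit year), with that option enabled a destination such as
/var/www/mysqlbackups/%d-%m-%Y
is run through strftime at backup time, so on 13 March 2013 the files would land in
/var/www/mysqlbackups/13-03-2013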

I had the same problem and solved it like this:
From Servers, select MySQL Database Server.
Go to Module Config (top left), and set "Do strftime substitution of backup destinations?" to Yes.
I use the db_%d-%m-%Y.sql format for backups (e.g. db_13-03-2013.sql).

I made a shell script (not for Webmin) and put it in /etc/cron.daily.
The script makes a backup of the database (stored as .gz), then uploads it via SSH (scp) to another server. For authentication I set up SSH keys, so no password is needed.
The backup files have a unique name, so you don't overwrite previous backups.
This is how you can create the filename within the script:
now=`date +%Y%m%d_%H%M`
dst_path=/var/local/backups
filename="$dst_path/$database.$now.sql.gz"
Then you should write a small script that removes all backup files that are older than x days.
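A minimal sketch of such a cleanup, assuming the backups live in /var/local/backups as in the script below and that anything older than 14 days can go (GNU find; adjust the retention to taste):
#!/bin/sh
# remove gzipped SQL backups older than 14 days
find /var/local/backups -name '*.sql.gz' -mtime +14 -delete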
#!/bin/sh
#
# Creates a backup of a MySQL database and uses ssh (scp) to send it to another server
# This script should be called from the crontab
PATH=/usr/sbin:/usr/bin:/sbin:/bin
# MySQL user and password
mysql_cmd=/opt/bitnami/mysql/bin/mysqldump
mysql_usr=user_name
mysql_pass=password
# destination ssh
dst_user=user_name
dst_hostname=192.168.1.1
# Database to backup
database=test
# create timestamp
now=`date +%Y%m%d_%H%M`
# where we store the files
dst_path=/var/local/backups
# backup filename
filename="$dst_path/$database.$now.sql.gz"
dst_filename="$database.$now.sql.gz"
# run backup
$mysql_cmd -u "$mysql_usr" --password="$mysql_pass" "$database" | gzip > "$filename"
# upload to server (scp over ssh, using the key-based login mentioned above)
scp "$filename" "$dst_user@$dst_hostname:$dst_filename"
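Setting up the passwordless SSH the script relies on is a one-time step; a rough sketch, run as the user the cron job executes as (user_name and 192.168.1.1 are the placeholders from the script above):
ssh-keygen -t rsa                      # accept the defaults; use an empty passphrase for unattended use
ssh-copy-id user_name@192.168.1.1      # copies the public key to the destination server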

You can do it in the following way:
Webmin > MySQL Database Server > select database > Backup Database > file path
Under "Other backup options" you can use "Command to run after backup".
Enter something like: mv /file/path/filename.sql /file/path/filename_$(date +"%Y%m%d%H%M%S").sql

Related

typo3: mysql database not useable

Currently I am trying to migrate a TYPO3-based web server to a new machine. (It's my first migration, so please don't judge if I did something wrong.)
What I did so far:
transfer the files to the new machine via wget
create a DB dump with mysqldump
transfer the dump with wget
create the database with mysql: source <dumpfile.sql>
create a user with access to the DB
When I try to connect to the server, TYPO3 doesn't respond.
And when I try to install TYPO3 from scratch and replace the new database with the old one, I also run into internal server errors.
Is there a solution on how to migrate the database correctly?
Yours sincerely,
Sebastian
Mh, this should not be an issue in general.
We often use the following steps:
[SRC] Backup the database: MYSQL_PWD="DBPASS" mysqldump -uDBUSER --opt -e -Q --skip-comments --single-transaction=true DBNAME | gzip > dump.sql.gz
[SRC] Pack the installation and the used core: tar -czf transfer.tar.gz ./typo-webfolder ./typo3_src-VERSION
Transfer both .gz files to new server (wget, scp, ftp etc )
[NEW] Extract the files: tar -xzf transfer.tar.gz
[NEW] Create an empty database, using your favourite tool
[NEW] Import database: gunzip <dump.sql.gz | MYSQL_PWD="DBPASS" mysql -uDBUSER [-hDBHOST] NEWDBNAME
[NEW] Adjust the database credentials in typo3conf/LocalConfiguration.php
[NEW] Recheck the symlinks (typo3_src, typo3, index.php) - see the sketch after these steps
[NEW] Recheck the .htaccess files - maybe you missed packing and transferring them?
[NEW] Create the flag file: touch typo-webfolder/typo3conf/ENABLE_INSTALL_TOOL
[NEW] Open the install tool in a web browser ( http://newdomain.tld/typo3/install ), check the requirements, maybe fix the folder structure and so on, and clear all caches
If necessary, clear the typo3temp folder (it can be repopulated by the system)
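As a rough sketch of what the symlink recheck above usually amounts to (run inside typo-webfolder; typo3_src-VERSION is the placeholder from the packing step, and the exact layout can differ between TYPO3 versions):
cd typo-webfolder
ln -sfn ../typo3_src-VERSION typo3_src   # point typo3_src at the unpacked core
ln -sfn typo3_src/typo3 typo3
ln -sfn typo3_src/index.php index.php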
In our projects we set the DB credentials through AdditionalConfiguration.php based on environment variables (read from a .env file).
So in general there should not be any issues, but without more information it is hard to help you further.
Some things to check:
Proxy/TrustedProxy settings
Domain record settings in the database (sys_domain)
RealURL config with domain-name-based settings
.htaccess canonical rewrite rules based on domain/hostname
Missing required PHP modules, wrong PHP version - check the PHP error log
In general your workflow is usable (don't forget the filesystem: fileadmin/ and typo3conf/ext/), but there are some traps.
Be sure to delete the corresponding caches for every change in the filesystem or database.
If you transfer the database: make sure you always use UTF-8 encoding for everything!
Regarding the filesystem: there could be thumbnails or other resized images (folder _processed_/), but there are also entries in the database for each file and each resized variant.
All extensions and configuration are cached in typo3temp/Code/*; also keep the autoloader files in mind.
In most cases you can do a clean-up in the install tool.
So the first thing should be: start the install tool, do all checks, and remove all temporary information.
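If you prefer the shell over the install tool for that last step, clearing the cached code mentioned above is roughly (a sketch, run from the web root; TYPO3 regenerates these files on the next request):
rm -rf typo3temp/Code/*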

How can I make a MySQL script run automatically whenever the MySQL Server restarts in a Linux environment

I'd like to automatically populate MEMORY tables each time the MySQL server restarts. Is there a way I can set a trigger based on that event? Or a script which is run by either the mysqld or mysqld_safe startup scripts?
Thanks in advance
You can use a startup script on Linux:
Add a script under /etc/init.d, e.g. vi /etc/init.d/start_my_app. You have to make it executable with chmod +x /etc/init.d/start_my_app, and don't forget to add #!/bin/sh at the top of that file.
Inside it, put the complete location of your script, like /var/myscripts/test.php, instead of just start_my_app.
In test.php you can then have your MySQL queries executed.
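A minimal sketch of such a wrapper, using the example path from this answer (whether you invoke the PHP CLI as below or something else depends on what your script actually is):
#!/bin/sh
# /etc/init.d/start_my_app -- run a script that executes MySQL queries at boot
php /var/myscripts/test.php
On Debian/Ubuntu you would then register it with something like sudo update-rc.d start_my_app defaults.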
You can set the command-line option --init-file=file_name so it is applied whenever MySQL starts:
--init-file=file_name
Command-Line Format: --init-file=file_name
Option-File Format: init-file
Read SQL statements from this file at startup. Each statement must be on a single line and should not include comments.
This option is unavailable if MySQL was configured with the --disable-grant-options option.
Source: MySQL developer documentation.
For more detail:
http://dev.mysql.com/doc/refman/5.1/en/server-options.html#option_mysqld_init-file
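A minimal sketch of how this could look for the MEMORY-table case in the question; the file path and table names below are purely illustrative:
# each statement in the init file must be on a single line, with no comments inside the file
echo 'INSERT INTO mydb.mem_lookup SELECT * FROM mydb.disk_lookup;' > /etc/mysql/init-populate.sql
# pass it on the command line...
mysqld --init-file=/etc/mysql/init-populate.sql
# ...or put it in the [mysqld] section of my.cnf so the service picks it up:
#   init-file=/etc/mysql/init-populate.sql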

Unix SSH without password

Hey all, I'm completely new to Unix and I need to write a "shell script" (?) to connect to another machine and run a few SQL queries. How on earth do I do this? I've been browsing a few answers from this and other boards, and if I have found my answer I don't understand it.
I am able to connect manually, enter the password, etc., but I need to automate the process. I don't have access to Perl (as a few answers have suggested) and I am unable to edit the /etc/shadow file. So I assume this has to be done strictly through Unix itself. This is what I am currently using:
X=`vUser='USER-NAME'
vPass='PASSWORD'
vTable='TABLENAME'
vHOST='HOST-NAME'
vPORT=4443
ssh "root@$vHOST"
expect {
"root@$vHOST password:" {
send -s "$vPass\r"
}
}
SQL_Query="select * from $vTable limit 10"
mysql -p$vPass -D$vTable -u$vUser -P$vPORT<<EOF
$SQL_Query
EOF`
echo $X>Output.dat
Please explain all answers in full. I'm trying to learn.
Might not be 100% relevant, but I had to do the same on Linux.
First off, you want to create a new user account on the other server that has SSH access and generate an SSH keypair. Even if you're going to do this as root, keypairs are far superior to standard passwords because they're stronger, and they allow you to log in automatically over SSH.
There's no real way of automating the password entry process (at least, not on Linux), hence the reason SSH keys are required to do this.
You can basically send a chain of commands as a parameter to the ssh tool, like so:
ssh user@host "ls; cat *; yes;"
Hope that helped.
Try this:
Copy your SSH public key to your clipboard (the output of cat ~/.ssh/id_rsa.pub). If you don't have an SSH key pair yet, generate one first (e.g. with ssh-keygen).
Paste your public key into your server's ~/.ssh/authorized_keys file. If it doesn't exist, create it with nano ~/.ssh/authorized_keys and paste the key there.
From your computer, you can then run a script on the server with the following command:
ssh user@server_ip 'bash -s' < local_script.sh
Or if you have a single command to run, then this will do:
ssh user@server_ip "echo Test | tee output.log"
If you don't like SSH asking you for the password all the time, use ssh-agent.
For SQL-specific scripts, you can put all your SQL commands in a single file, say query.sql. Copy query.sql to your server (scp query.sql user@server_ip:~/) and then run:
ssh user@server_ip "mysql -uyourusername -pyourpassword < query.sql | tee output.log"
The output will be saved in output.log.
There is a Linux command, ssh-copy-id, that will do this for you; it is also available for macOS as a Homebrew formula.
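For reference, a typical invocation is just (user and server_ip being the same placeholders as above):
ssh-copy-id user@server_ip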

MySQL tables on external hard drive

I have a large amount of text data I need to import into MySQL. I'm doing this on a MacBook and don't have enough space for it, so I want to store it on an external hard drive (I'm not really concerned about speed at this point; this is just for testing).
What's the best way to do it?
Install MySQL on the external hard drive (is this possible on a Mac?)
Install MySQL on the laptop's hard drive and have the tables on the external (how?)
One simple hack is to create a symbolic link, replacing your current MySQL data directory location with a link pointing to the external disk (Google "symbolic link" for background).
Sample usage: after you shut down MySQL, rename the old MySQL data folder to something else, and create the symbolic link using the ln command like below:
ln -s [EXTERNAL DRIVE PATH] [MYSQL DB FOLDER PATH]
Then move all the previous content of the MySQL data folder to the new location.
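A concrete sketch with made-up paths (the real data directory varies by install; the select @@datadir tip in the next answer shows how to find it, and on macOS the MySQL user may be _mysql rather than mysql):
mkdir /Volumes/ExternalHDD/mysql
mv /usr/local/mysql/data /usr/local/mysql/data.old       # rename the old data folder
ln -s /Volumes/ExternalHDD/mysql /usr/local/mysql/data   # symlink in its place
mv /usr/local/mysql/data.old/* /Volumes/ExternalHDD/mysql/
chown -R mysql:mysql /Volumes/ExternalHDD/mysql          # fix ownership if needed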
Open /etc/mysql/my.cnf and find the value of datadir. Alternatively, you can find it out in the mysql monitor with:
mysql> select @@datadir;
Stop mysql
sudo systemctl stop mysql
Copy the data from there to your external drive
sudo rsync -av /var/lib/mysql /mnt/myHDD/somedir/mysql
Modify the location of the datadir in my.cnf (see the sketch after these steps).
Start mysql again
sudo systemctl start mysql
Verify that everything is still fine and remove the original data dir.
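The my.cnf change from the "Modify the location of the datadir" step is a single line; a sketch assuming the destination used above (adjust the path if the rsync nested an extra mysql/ directory):
# in /etc/mysql/my.cnf (or, on newer installs, a file under /etc/mysql/mysql.conf.d/), [mysqld] section:
datadir = /mnt/myHDD/somedir/mysql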
This page contains a more extensive guide, but all the additional issues it warns about were not relevant for me on my Raspberry Pi, i.e. I skipped them and it worked.
For the second option, a tablespace might do the trick:
http://dev.mysql.com/doc/refman/5.1/en/create-tablespace.html
User user658991's answer is halfway there.
After adding the soft link, you will need to add the following lines to /etc/apparmor.d/usr.sbin.mysqld, beneath the two existing lines for the old mysql folder:
/path/to/mysql/folder/on/the/external/ r,
/path/to/mysql/folder/on/the/external/** rwk,
Without these 2 lines, MySQL fails to start complaining of:
Can't create test file /path/to/mysql/folder/on/the/external/hostname.lower-test
Can't create test file /path/to/mysql/folder/on/the/external/hostname.lower-test
mysqld: Can't change dir to '/path/to/mysql/folder/on/the/external/' (Errcode: 13)
Restart apparmor for the changes to take effect.
sudo invoke-rc.d apparmor restart
With this, MySQL starts normally.

How can I have MySQL write outfiles as a different user?

I'm working with a MySQL query that writes into an outfile. I run this query once every day or two, and I want to be able to remove the outfile without having to resort to su or sudo. The only way I can think of to make that happen is to have the outfile written as owned by someone other than the mysql user. Is this possible?
Edit: I am not redirecting output to a file; I am using the INTO OUTFILE clause of a SELECT query to output to a file.
If it helps:
mysql --version
mysql Ver 14.12 Distrib 5.0.32, for pc-linux-gnu (x86_64) using readline 5.2
The output file is created by the mysqld process, not by your client process. Therefore the output file must be owned by the uid and gid of the mysqld process.
You can avoid having to sudo to access the file if you access it from a process under a uid or gid that can access the file. In other words, if mysqld creates files owned by uid and gid "mysql"/"mysql", then add your own account to group "mysql". Then you should be able to access the file, provided the file's permission mode includes group access.
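Adding your account to the group is a one-liner; a sketch, with youruser as a placeholder (log out and back in for the new group to take effect):
sudo usermod -a -G mysql youruser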
Edit:
You are deleting a file in /tmp, with a directory permission mode of rwxrwxrwt. The sticky bit ('t') means you can remove files only if your uid is the same as the owner of the file, regardless of permissions on the file or the directory.
If you save your output file in another directory that doesn't have the sticky bit set, you should be able to remove the file normally.
Read this excerpt from the man page for sticky(8):
STICKY DIRECTORIES
A directory whose `sticky bit' is set becomes an append-only directory, or, more accurately, a directory in which the deletion of files is restricted. A file in a sticky directory may only be removed or renamed by a user if the user has write permission for the directory and the user is the owner of the file, the owner of the directory, or the super-user. This feature is usefully applied to directories such as /tmp which must be publicly writable but should deny users the license to arbitrarily delete or rename each others' files.
Not using the "SELECT...INTO OUTFILE" syntax, no.
You need to run the query (i.e. the client) as another user and redirect the output. For example, edit your crontab to run the following command whenever you want:
mysql db_schema -e 'SELECT col,... FROM table' > /tmp/outfile.txt
That will create /tmp/outfile.txt as the user whose crontab you've added the command to.
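For completeness, the corresponding crontab entry could look like this (the 2 a.m. schedule is just an example; the credentials would come from that user's ~/.my.cnf or explicit -u/-p options):
0 2 * * * mysql db_schema -e 'SELECT col FROM table' > /tmp/outfile.txt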
I just do
sudo gedit /etc/apparmor.d/usr.sbin.mysqld
and add
/var/www/codeigniter/assets/download/* w,
and
sudo service mysql restart
And that's it - I can easily SELECT ... INTO OUTFILE to any filename in that directory.
If you have another user run the query from cron, it will create the file as that user.