How to make automatic backups of MySQL databases on GoDaddy with Apache servers - mysql

I'm trying to automate a daily backup of a MySQL database on shared hosting with GoDaddy.com, which uses Apache servers.
While researching this I found out about bash scripts.
GoDaddy hosting also lets me set up cron jobs, so I did the following.
My bash script looks something like this (I only masked the sensitive data):
<br>
#/bin/sh<p></p>
<p>mysqldump -h myhost -u myuser -pMypassword databasename > dbbackup.sql<br>
gzip dbbackup.sql<br>
mv dbbackup.sql.gz _db_backups/`date +mysql-BACKUP.sql-%y-%m-%d.gz`<br>
</p>
I configured the cron job to point to this file and execute it every 24 hours.
I have the cron job utility configured to email me a log message every time it runs.
This is that log message:
/var/chroot/home/content/01/3196601/html/_db_backups/backup.sh: line
1: br: No such file or directory
/var/chroot/home/content/01/3196601/html/_db_backups/backup.sh: line
3: p: No such file or directory
/var/chroot/home/content/01/3196601/html/_db_backups/backup.sh: line
4: br: No such file or directory
/var/chroot/home/content/01/3196601/html/_db_backups/backup.sh: line
5: br: No such file or directory
/var/chroot/home/content/01/3196601/html/_db_backups/backup.sh: line
6: /p: No such file or directory
It's like it doesn't understand the language. Should I edit my .htaccess file for this?
Any ideas?

Remove those HTML tags from the bash script; the error messages are all related to them. Your script should be like the following:
#!/bin/sh
mysqldump -h myhost -u myuser -pMypassword databasename > dbbackup.sql
rm -rf dbbackup.sql.gz
gzip dbbackup.sql
mv dbbackup.sql.gz _db_backups/`date +mysql-BACKUP.sql-%y-%m-%d.gz`
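One more thing worth checking: cron does not start the script from the _db_backups directory, so relative names like dbbackup.sql can end up pointing somewhere unexpected. A minimal sketch using absolute paths (the base path is taken from the asker's log output; everything else is illustrative, not the asker's real setup):
#!/bin/sh
# write the dump with absolute paths so it works no matter where cron starts the script
BASE=/var/chroot/home/content/01/3196601/html/_db_backups
mysqldump -h myhost -u myuser -pMypassword databasename > "$BASE/dbbackup.sql"
gzip -f "$BASE/dbbackup.sql"   # -f overwrites an existing archive instead of failing
mv "$BASE/dbbackup.sql.gz" "$BASE/mysql-BACKUP.sql-$(date +%y-%m-%d).gz"
In standard crontab syntax a once-a-day schedule for it would look something like 0 3 * * * /var/chroot/home/content/01/3196601/html/_db_backups/backup.sh, although GoDaddy's cron panel may express the schedule differently.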

Related

(OpenGrok) How can I use the '--symlink' option in OpenGrok?

I'm not sure how to use the --symlink command in OpenGrok, so I'm asking.
OpenGrok's source root folder is '/opengrok/src'.
In this folder, I created a symbolic link file with the following command.
ln -s /home/A/workspace/tmp tmp
And I did indexing with the following command.
java -Djava.util.logging.config.file=/opengrok/etc/logging.properties -jar /opengrok/dist/lib/opengrok.jar -c /usr/local/bin/ctags -s /opengrok/src -d /opengrok/data -P -S -W /opengrok/etc/configuration.xml --symlink /opengrok/src/tmp -U http://localhost:8080/source
When I connect to localhost/source, the tmp file is displayed, but when I click it, the files in tmp are not displayed and the following error message is displayed.
Error: File not found!
The requested resource is not available.
Resource lacks history info. Was remote SCM side up when indexing occurred? Cleanup history cache dir(or just the .gz for the file or db record) and rerun indexer making sure remote side will respond during indexing.
How can I access and view the files in tmp using OpenGrok?

Docker container: /bin/sh: cat: No such file or directory

I'm using the mysql/mysql-server image to create a MySQL server in Docker. Since I want to set up my database (add users, create tables) automatically, I've created a SQL file that does that for me. In order to run that script automatically, I extended the image with this Dockerfile:
FROM mysql/mysql-server:latest
RUN mkdir /scripts
WORKDIR /scripts
COPY ./db_setup.sql .
RUN mysql -u root -p password < cat db_setup.sql
but for some reason, this happens:
/bin/sh: cat: No such file or directory
ERROR: Service 'db' failed to build : The command '/bin/sh -c mysql -u root -p password < cat db_setup.sql' returned a non-zero code: 1
How do I fix this?
You can just remove the cat command from your RUN command:
RUN mysql -u root -p password < db_setup.sql
No such file or directory is returned because the shell treats cat as a file name here: there is no file called cat in the directory set by WORKDIR. You can just redirect the stdin of mysql from the db_setup.sql file directly. Edit: to clarify, the < shell redirection expects the name of the file to use for input.
EDIT 2: Keep in mind your example is a RUN command that attempts to run mysql and create a layer at docker image build time. You may want to have this run by the mysql entrypoint script at runtime instead (e.g. scripts are run from the docker-entrypoint-initdb.d/ directory by the docker-entrypoint.sh script of the official mysql image) or use other features that are documented for the official image.
RUN is a build time command. MySQL isn't running at this point.
If you were/are using a standard image, there is a location for database initialization:
FROM mysql:8.0
COPY db_setup.sql /docker-entrypoint-initdb.d
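For completeness, a hedged sketch of how such an image might be built and started; the image tag, container name, root password variable and port mapping below are illustrative assumptions, not taken from the question:
# build the image from the Dockerfile above and start a container from it
docker build -t my-mysql-initdb .
docker run -d --name db -e MYSQL_ROOT_PASSWORD=password -p 3306:3306 my-mysql-initdb
# db_setup.sql is then executed by the image's entrypoint the first time the data directory is initialized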
The cat command is not present in the mysql/mysql-server:latest image.
Moreover, you only need to provide the file name after the redirection.
RUN mysql -u root -p password < db_setup.sql

Iterating through multiple subdirectories to load large batch data

I'm a newbie to coding and am making a legislative database for use in academic work. I have downloaded the California legislative information into a directory on a partitioned portion of my HD. I loaded the schema into the MySQL DB with no issues and downloaded the data, but I am having problems getting it uploaded. Let's call my workspace home directory home; within that directory are my modules (I have node in there, but I would love to avoid using it until I make an app), my JSON package and settings files, and a subdirectory called pubinfo. This is all set up.
Within the pubinfo directory are my SQL table files and shell scripts for loading the data into MySQL, where I have a DB with tables ready for data insertion, as well as subdirectories for legislative sessions labeled 2001-2019 (2001, 2003, and so on every 2 years). The loadData.sh file is below; the instructions from the California data website said to download these files, unzip them, and then run them in my pubinfo directory...
if [ $# -gt 0 ]; then
  echo Usage: ./loadData.sh
  exit 1
fi
if [ -z "$MYSQL_PWD" ]; then
  read -p "Please enter root password:" MYSQL_PWD
  export MYSQL_PWD=${MYSQL_PWD}
fi
while read lcTable
do
  if [ -e ${lcTable}.dat ]; then
    echo Processing table: ${lcTable}
    if [ -z "$MYSQL_PWD" ]; then
      mysql -uroot -p -Dcapublic -v -v -f < ${lcTable}.sql 2>&1 > ${lcTable}.log
    else
      mysql -uroot -Dcapublic -v -v -f < ${lcTable}.sql 2>&1 > ${lcTable}.log
    fi
  fi
done < "tables_lc.lst"
When run, the output in my zsh terminal is '/usr/local/bin/loadData.sh: line 29: location_code_tbl.sql: No such file or directory'. I should also add that I symlinked the shell file into my PATH so that it could be called globally; I plan to remove the symlink once this is all uploaded. I suppose I could symlink all the SQL table files as well, but I know there has to be an easier way to iterate through the subdirectories while using the SQL tables and files in my main directory. I'm just not familiar with zsh or bash; I had to take a Udemy course just to set up the MySQL DB. Anyway, I was hoping someone would be able to help; if you have any questions that I did not address here, I can answer them. Oh, and if there is any question about my machine: it is a newer MacBook Pro running the most current MySQL version, and my editor is Visual Studio Code in addition to the good old terminal.
Thanks!
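One possible direction, as a rough sketch rather than a tested answer: it assumes the .dat files sit inside each session subdirectory (2001, 2003, ...) while the .sql loader files and tables_lc.lst stay in pubinfo, and that MYSQL_PWD is already exported.
#!/bin/bash
# run from the pubinfo directory
for session in */; do                       # one pass per legislative session folder
  ( cd "$session" || exit
    while read -r lcTable; do
      if [ -e "${lcTable}.dat" ]; then
        echo "Processing ${session%/}/${lcTable}"
        mysql -uroot -Dcapublic -v -v -f < "../${lcTable}.sql" > "${lcTable}.log" 2>&1
      fi
    done < ../tables_lc.lst )
done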

OwnCloud: How to synchronize the FileSystem with the DB

I have to "insert" a lot of files into an owncloud server (8.2).
A user give me a USB key with the files and tell me to copy of all them into his owncloud data files repository.
Do you know if is it possible ?
Is it possible to synchronyze the ownCloud data fileSystem with the ownCloud database?
My environment is Linux CentOS7 (Apache 2.4, mySQL 5.6, php 5.6)
Thanks,
ownCloud ships with a command line utility that allows you to manually trigger some tasks. Among those is the files:scan function, which re-scans a user's file system.
So you can import those files by following these steps:
1. you copy the files into the physical file system of the user(s) inside ownCloud's data folder
2. you run the command line utility to re-scan the files; that takes care of updating the database according to the files found
This is an example for the manual trigger:
sudo -u www-data php occ files:scan <user name>
Here <user name> obviously has to be replaced. Also, the account name the sudo command switches to depends on the Linux distribution and its setup. The command has to be started inside ownCloud's base folder. The command can be called in a loop with different user names; that can be done by means of standard scripting (see the sketch below).
Here is a documentation of the utility: https://doc.owncloud.org/server/8.0/admin_manual/configuration_server/occ_command.html
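The loop over user names mentioned above could be as simple as this sketch (the base folder, the data folder location and the www-data account are assumptions that depend on the installation):
cd /var/www/owncloud                      # assumed ownCloud base folder
for user in $(ls data); do                # one entry per account in the data folder
    sudo -u www-data php occ files:scan "$user"
done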
I just tried this myself on an ownCloud 8.2 installation and succeeded.
Before I could successfully scan my files again as arkascha explained, I needed to change the owner and the group of the new folder to www-data (for Debian OS; for others see the OC docs linked below) and set the permissions of the new directory to 755.
Change owner:
sudo chown -R www-data:www-data <path>
Change permissions:
sudo chmod 755 <path>
where <path> is the path to the newly added directory; it could, for example, look like this: /media/hdd/owncloud/data/<username>/files/<newFolderName>
OC-Docu:
https://doc.owncloud.org/server/9.0/admin_manual/configuration_server/occ_command.html

Automatically Backup MySQL database on linux server

I need a script that automatically makes a backup of a MySql Database. I know there are a lot of posts and scripts out there on this topic already but here is where mine differs.
The script needs to run on the machine hosting the MySql database (It is a linux machine).
The backups must be saved onto the same server that the database is on.
A backup needs to be made every 30 minutes.
When a backup is older than a week it is deleted, unless it is the very first backup created that week; i.e., out of these backups backup_1_12_2010_0-00_Mon.db, backup_1_12_2010_0-30_Mon.db, backup_1_12_2010_1-00_Mon.db ... backup_7_12_2010_23-30_Sun.db etc., only backup_1_12_2010_0-00_Mon.db is kept.
Anyone have anything similar or any ideas where to start?
Answer: a cron job
Description:
Try creating a file something.sh with this:
#!/bin/sh
mysqldump -u root -ppwd --opt db1 > /respaldosql/db1.sql
mysqldump -u root -ppwd --opt db2 > /respaldosql/db2.sql
cd /home/youuser/backupsql/
tar -zcvf backupsql_$(date +%d%m%y).tgz *.sql
find -name '*.tgz' -type f -mtime +2 -exec rm -f {} \;
Give the file adequate permissions:
chmod 700 something.sh
or
sudo chmod 700 something.sh
and then create a cron with
crontab -e
setting it like
0 1 * * * /home/youruser/coolscripts/something.sh
Remember that the numbers or '*' characters have this structure:
Minutes (range 0-59)
Hours (0-23)
Day of month (1-31)
Month (1-12)
Day of the week (0-6, with 0 = Sunday)
Absolute path to script or program to run
You can also use the helper folders available in newer Linux distros, such as /etc/cron.daily, /etc/cron.hourly, /etc/cron.weekly, etc. In this case, you can create a symlink to your script in the chosen folder and the OS will take charge of running it with the promised recurrence (from a powerful comment by @Nick).
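A sketch of that helper-folder approach (the script path is the same assumed one as above; the link name is kept free of dots because run-parts on Debian-based systems typically skips names containing them):
sudo chmod +x /home/youruser/coolscripts/something.sh
sudo ln -s /home/youruser/coolscripts/something.sh /etc/cron.daily/mysql-backup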
Create a shell script like the one below:
#!/bin/bash
mysqldump -u username -p'password' dbname > /my_dir/db_$(date +%m-%d-%Y_%H-%M-%S).sql
find /my_dir -mtime +10 -type f -delete
Replace the username, password and your backup directory (my_dir). Save it in a directory (shell_dir) as filename.sh.
Schedule it to run everyday using crontab -e like:
30 8 * * * /shell_dir/filename.sh
This will run every day at 8:30 AM and back up the database. It also deletes backups older than 10 days. If you don't want that, just delete the last line from the script.
Doing pretty much the same as many people here.
The script needs to run on the machine hosting the MySql database (It is a linux machine).
=> Create a local bash or perl script (or whatever) "myscript" on this machine "A"
The backups must be saved onto the same server that the database is on.
=> in the script "myscript", you can just use mysqldump. From the local backup, you may create a tarball that you send via scp to your remote machine. Finally you can put your backup script into the crontab (crontab -e).
Some hints and functions to get you started, as I won't post my entire script; it does not fully do the trick, but it's not far off:
#!/bin/sh
...
MYSQLDUMP="$(which mysqldump)"
FILE="$LOCAL_TARBALLS/$TARBALL/mysqldump_$db-$SNAPSHOT_DATE.sql"
$MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db > $FILE && $GZIP $GZ_COMPRESSION_LEVEL $FILE
function create_tarball()
{
local tarball_dir=$1
tar -zpcvf $tarball_dir"_"$SNAPSHOT_DATE".tar.gz" $tarball_dir >/dev/null
return $?
}
function send_tarball()
{
local PROTOCOLE_="2"
local IPV_="4"
local PRESERVE_="p"
local COMPRESSED_="C"
local PORT="-P $DESTINATION_PORT"
local EXECMODE="B"
local SRC=$1
local DESTINATION_DIR=$2
local DESTINATION_HOST=$DESTINATION_USER"@"$DESTINATION_MACHINE":"$DESTINATION_DIR
local COMMAND="scp -$PROTOCOLE_$IPV_$PRESERVE_$COMPRESSED_$EXECMODE $PORT $SRC $DESTINATION_HOST &"
echo "remote copy command: "$COMMAND
[[ $REMOTE_COPY_ACTIVATED = "Yes" ]] && eval $COMMAND
}
Then to delete files older than "date", you can look at man find and focus on the mtime and newer options.
Edit: as said earlier, there is no particular benefit in keeping a local backup, except as a temporary file so that the tarball can easily be sent and then deleted once it has been sent.
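Along the lines of the find hint above, a cleanup sketch could look like this (the backup path, the file pattern and the 7-day window are assumptions):
# delete tarballs older than 7 days
find /path/to/backups -name '*.tar.gz' -type f -mtime +7 -delete
# or: keep only files newer than a reference timestamp (GNU touch and find)
touch -d '7 days ago' /tmp/backup-cutoff
find /path/to/backups -name '*.tar.gz' -type f ! -newer /tmp/backup-cutoff -delete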
You can do most of this with a one-line cronjob set to run every 30 minutes:
mysqldump -u<user> -p<pass> <database> > /path/to/dumps/db.$(date +%a.%H:%M).dump
This will create a database dump every 30 minutes, and every week it'll start overwriting the previous week's dumps.
Then have another cronjob that runs once a week that copies the most recent dump to a separate location where you're keeping snapshots.
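That weekly snapshot job could be a single crontab entry along these lines (the paths and the Monday-at-00:05 choice are assumptions; note that % has to be escaped as \% inside a crontab):
# copy Monday's 00:00 dump into a snapshots folder shortly after it is written
5 0 * * 1 cp /path/to/dumps/db.Mon.00:00.dump /path/to/snapshots/db.$(date +\%Y-\%m-\%d).dump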
After a brief reading of the question and the good answers, I would add a few more points. Some of them have been mentioned already.
The backup process can involve the following steps:
Create a backup
Compress the backup file
Encrypt the compressed backup
Send the backup to a cloud (DropBox, OneDrive, GoogleDrive, AmazonS3,...)
Get a notification about results
Setup a schedule to run the backup process periodically
Delete the old backup files
Putting together a script that covers all the backup steps takes some effort and knowledge.
I would like to share a link to an article (I'm one of the writers) which describes the most common ways to back up MySQL databases in some detail:
Bash script
# Backup storage directory
backup_folder=/var/backups
# Notification email address
recipient_email=<username@mail.com>
# MySQL user
user=<user_name>
# MySQL password
password=<password>
# Number of days to store the backup
keep_day=30
sqlfile=$backup_folder/all-database-$(date +%d-%m-%Y_%H-%M-%S).sql
zipfile=$backup_folder/all-database-$(date +%d-%m-%Y_%H-%M-%S).zip
# Create a backup
sudo mysqldump -u $user -p$password --all-databases > $sqlfile
if [ $? == 0 ]; then
echo 'Sql dump created'
else
echo 'mysqldump return non-zero code' | mailx -s 'No backup was created!' $recipient_email
exit
fi
# Compress backup
zip $zipfile $sqlfile
if [ $? == 0 ]; then
echo 'The backup was successfully compressed'
else
echo 'Error compressing backup' | mailx -s 'Backup was not created!' $recipient_email
exit
fi
rm $sqlfile
echo $zipfile | mailx -s 'Backup was successfully created' $recipient_email
# Delete old backups
find $backup_folder -mtime +$keep_day -delete
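The script above covers creating, compressing, notifying about and rotating the backups; the encryption step from the list could be added with something like this (it assumes gpg is installed and that the recipient address, a placeholder here, has a key in the keyring):
# encrypt the archive before it leaves the machine, then drop the plain zip
gpg --encrypt --recipient backups@example.com "$zipfile" && rm "$zipfile"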
Automysqlbackup
sudo apt-get install automysqlbackup
wget https://github.com/sixhop/AutoMySQLBackup/archive/master.zip
mkdir /opt/automysqlbackup
mv master.zip /opt/automysqlbackup/AutoMySQLBackup-master.zip
cd /opt/automysqlbackup
tar -zxvf AutoMySQLBackup-master.zip
./install.sh
sudo nano /etc/automysqlbackup/automysqlbackup.conf
CONFIG_configfile="/etc/automysqlbackup/automysqlbackup.conf"
CONFIG_backup_dir='/var/backup/db'
CONFIG_mysql_dump_username='root'
CONFIG_mysql_dump_password='my_password'
CONFIG_mysql_dump_host='localhost'
CONFIG_db_names=('my_db')
CONFIG_db_exclude=('information_schema')
CONFIG_mail_address='mail@google.com'
CONFIG_rotation_daily=6
CONFIG_rotation_weekly=35
CONFIG_rotation_monthly=150
automysqlbackup /etc/automysqlbackup/automysqlbackup.conf
Third party tools
Hope this is helpful!
My preference is for AutoMySQLBackup, which comes with Debian. It's really easy to use and creates daily backups, which can be configured. It also keeps weekly and monthly backups.
I have had this running for a while and it's super easy to configure and use!
You might consider this open source tool, matiri (https://github.com/AAFC-MBB/matiri), which is a concurrent MySQL backup script with metadata stored in Sqlite3. Features (more than you were asking for...):
Multi-Server: Multiple MySQL servers are supported whether they are co-located on the same or separate physical servers.
Parallel: Each database on the server to be backed up is done separately, in parallel (concurrency settable: default: 3)
Compressed: Each database backup compressed
Checksummed: SHA256 of each compressed backup file stored and the archive of all files
Archived: All database backups tar'ed together into single file
Recorded: Backup information stored in Sqlite3 database
Full disclosure: original matiri author.