Access Design View field descriptions - ms-access

I have an Access database with field descriptions that (theoretically) are visible in Design View. I don't have a copy of Access. I can export the data and schema using mdbtools, but those don't come with the descriptions. Are there ways to programmatically extract those descriptions?

Turns out there is an under-documented mdbtools command that will give metadata for a table: mdb-prop. Here's a shell script that lists out the metadata of every field, adapted from a script whose provenance I have forgotten:
#!/usr/bin/env bash
# Usage: mdb-export-all.sh full-path-to-db
command -v mdb-tables >/dev/null 2>&1 || {
echo >&2 "I require mdb-tables but it's not installed. Aborting.";
exit 1;
}
command -v mdb-prop >/dev/null 2>&1 || {
echo >&2 "I require mdb-prop but it's not installed. Aborting.";
exit 1;
}
fullfilename=$1
filename=$(basename "$fullfilename")
dbname=${filename%.*}
mkdir "$dbname"
IFS=$'\n'
for table in $(mdb-tables -1 "$fullfilename"); do
echo "Check table $table"
# Save a file with all metadata for every field
mdb-prop "$fullfilename" "$table" > "$dbname/$table.txt"
# Save a file with just the descriptions:
cat "$dbname/$table.txt" | grep -E 'name|Description' > "$dbname/info_$table.txt"
done
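For example (hypothetical database and table names), saving the script as mdb-export-all.sh and running it against a database produces a folder named after the database with two files per table:
$ ./mdb-export-all.sh /path/to/mydb.mdb
Check table Customers
$ ls mydb/
Customers.txt  info_Customers.txt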


How can I execute MySQL commands line by line from bash and capture the output?

If there is an alternative to bash, I will appreciate it too.
I have a large dump of MySQL commands (over 10 GB).
When restoring the dump I get a few warnings and occasionally an error. I need to execute those commands and process all warnings and errors, and preferably do it automatically.
mysql --show-warnings
tee logfile.log
source dump.sql
The logfile will contain many lines saying each command was successful, and will display some warnings, particularly about truncated columns. But since the original file has tens of thousands of very large INSERTs, the log is not particularly helpful. Besides, it requires some kind of supervised interaction. (I cannot schedule it in a crontab, for example.)
#!/bin/bash
echo "tee logfile.log" > script.sql
echo "source $1" > script.sql
mysql --show-warnings < script.sql > tmpfile.log 2>&1
cat tmpfile.log >> logfile.log
The tee command doesn't work in this batch environment. I can capture all the warnings, but I cannot figure out which command produced each warning.
So I came up with this small monstrosity:
#!/bin/bash
ERRFILE=$(basename "$0" .sh).err.log
LOGFILE=$(basename "$1" .sql).log
log_action() {
WARN=$(cat)
[ -z "$WARN" ] || echo -e "Line ${1}: ${WARN}\n${2}" >> "$LOGFILE"
}
echo 0 > "$ERRFILE"
log_error() {
ERNO=$(cat "$ERRFILE")
ERR=$(cat)
[ -z "$ERR" ] || echo -e "*** ERROR ***\nLine ${1}: ${ERR}\n${2}" >> "$LOGFILE"
(( ERNO++ ))
echo $ERNO > "$ERRFILE"
}
COUNT=0
COMMAND=''
echo -e "**** BEGIN $(date +%Y-%m-%d\ %H:%M:%S)\n" > "$LOGFILE"
exec 4> >(log_action $COUNT "$COMMAND")
exec 5> >(log_error $COUNT "$COMMAND")
exec 3> >(mysql --show-warnings >&4 2>&5)
while IFS='' read -r LINE || [[ -n "$LINE" ]]
do
(( COUNT++ ))
[ ${#LINE} -eq 0 ] && continue # discard blank lines
[ "${LINE:0:2}" = "--" ] && continue # discard comments
COMMAND+="$LINE" # build command
[ "${LINE: -1}" != ";" ] && continue # if not finnished keep building
echo $COMMAND >&3 # otherwise execute
COMMAND=''
done < "$1"
exec 3>&-
exec 5>&-
exec 4>&-
echo -e "**** END $(date +%Y-%m-%d\ %H:%M:%S)\n" >> "$LOGFILE"
ERRS=$(cat "$ERRFILE")
[ "ERRS" = 0 ] || echo "${ERRS} Errors." >&2
This scans the file at $1 and sends the commands to an open MySQL connection at &3. That part is working fine.
The capture of warnings and errors is not working though.
It only records the first error.
It only records the first warning.
I haven't found a good way to pass the line number $COUNT and the offending command $COMMAND to the recording functions.
The only error appears after the time stamps, and the only warning after the error, which does not match the order in which the script actually ran.
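For reference, a much simpler per-statement sketch (one mysql invocation per command, so session state such as USE or SET is lost between statements) does make the attribution trivial:
#!/bin/bash
# Sketch: run each assembled statement in its own mysql invocation,
# so whatever it prints can be logged next to the line number and command.
LOGFILE=$(basename "$1" .sql).log
COUNT=0
COMMAND=''
while IFS='' read -r LINE || [[ -n "$LINE" ]]
do
    (( COUNT++ ))
    [ ${#LINE} -eq 0 ] && continue        # discard blank lines
    [ "${LINE:0:2}" = "--" ] && continue  # discard comments
    COMMAND+="$LINE"                      # build command
    [ "${LINE: -1}" != ";" ] && continue  # if not finished keep building
    # capture result sets, warnings and errors together
    OUT=$(mysql --show-warnings -e "$COMMAND" 2>&1)
    [ -n "$OUT" ] && printf 'Line %s: %s\n%s\n' "$COUNT" "$OUT" "$COMMAND" >> "$LOGFILE"
    COMMAND=''
done < "$1"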

Adding header to all .csv files in folder and include filename

I'm a command line newbie and I'm trying to figure out how I can add a header to multiple .csv files. The new header should have the following: 'TaxID' and 'filename'
I've tried multiple commands like sed, ed, awk, and echo, but when one worked it only changed the first file it found (even though I used *.csv in my command), and I could only manage this for TaxID.
Can anyone help me to get the filename into the header as well and do this for all my csv files?
(Note, I'm using a Mac)
Thank you!
Here's one way to do it, there are certainly others:
$ for i in *.csv;do echo $i;cp "$i" "$i.bak" && { echo "TaxID,$i"; cat "$i.bak"; } >"$i";done
Here's a sample run:
$ cat file1.csv
1,2
3,4
$ cat file2.csv
a,b
c,d
$ for i in *.csv;do echo $i;cp "$i" "$i.bak" && { echo "TaxID,$i"; cat "$i.bak"; } >"$i";done
file1.csv
file2.csv
$ cat file1.csv.bak
1,2
3,4
$ cat file1.csv
TaxID,file1.csv
1,2
3,4
$ cat file2.csv.bak
a,b
c,d
$ cat file2.csv
TaxID,file2.csv
a,b
c,d
Breaking it down:
$ for i in *.csv; do
This loops over all the files ending in .csv in the current directory. Each will be put in the shell variable i in turn.
echo $i;
This just echoes the current filename so you can see the progress. This can be safely left out.
cp "$i" "$i.bak"
Copy the current file (whose name is in i) to a backup. This is both to preserve the file if something goes awry, and gives subsequent commands something to copy from.
&&
Only run the subsequent commands if the cp succeeds. If you can't make a backup, don't continue.
{
Start a group command.
echo "TaxID,$i";
Output the desired header.
cat "$i.bak";
Output the original file.
}
End the group command.
>"$i";
Redirect the output of the group command (the new header and the contents of the original file) to the original file. This completes one file.
done
Finish the loop over all the files.
For fun, here are a couple of other ways (one of which JRD beat me to), including one using ed!
$ for i in *.csv;do echo $i;perl -p -i.bak -e 'print "TaxID,$ARGV\n" if $. == 1' "$i";done
$ for i in *.csv;do echo $i;echo -e "1i\nTaxID,$i\n.\nw\nq\n" | ed "$i";done
Here is one way in Perl that modifies the files in place, adding a header of TaxID,{filename} and skipping the header if it thinks one already exists.
ls
a.csv b.csv
cat a.csv
1,a.txt
2,b.txt
cat b.csv
3,c.txt
4,d.txt
ls *.csv | xargs -I{} -n 1 \
perl -p -i -e 'print "TaxID,{}\n" if !m#^TaxID# && !$h; $h = 1;' {}
cat a.csv
TaxID,a.csv
1,a.txt
2,b.txt
cat b.csv
TaxID,b.csv
3,c.txt
4,d.txt
You may want to create some backups of your files, or run on a few sample copies before running in earnest.
Explanatory:
List all files in the directory with the .csv extension
ls *.csv
"Pipe" the output of ls command into xargs so the perl command can run for each file. -I{} allows the filename to be subsequently referenced with {}. -n tells xargs to only pass 1 file at a time to perl.
| xargs -I{} -n 1
-p print each line of the input (file)
-i modifying the file in place
-e execute the following code
perl -p -i -e
Perl will implicitly loop over each line of the file and print it (due to -p). Print the header if we have not printed the header already and the current line doesn't already look like a header.
'print "TaxID,{}\n" if !m#^TaxID# && !$h; $h = 1;'
This is replaced with the filename.
{}
All told, in this example the commands to be run would be:
perl -p -i -e 'print "TaxID,{}\n" if !m#^TaxID# && !$h; $h = 1;' a.csv
perl -p -i -e 'print "TaxID,{}\n" if !m#^TaxID# && !$h; $h = 1;' b.csv

What's a Good Way to Commit All MySQL Settings Needed to Get a Django App Running?

I'm in the middle of making my first django app, and I'd like to commit it to git in such a way that someone can clone it down and start working on it with the least amount of trouble. One of the things I needed to do to get things up and running was to create a new db in my local mysql installation and create a new user there. I'd love to let someone clone things down and have that done automatically for them. Is there a good way to do this?
Use mysql-python and write a python script to create the database or alter my shell script to make it suit your needs.
#!/bin/bash
function pre_checks() {
if [[ "$1" -ne 3 ]]; then
echo "Usage: $0 [DATABASE NAME] [USERNAME] [HOST]"
return 1
fi
if ! command -v /usr/bin/mysql >/dev/null 2>&1; then
echo "Mysql is not installed."
return 1
fi
echo -n "Create the database '${2}' and the user '${3}' now? (y/n) "
read ANSWER
case "$ANSWER" in
"y"|"Y")
echo -n "Password for ${3}: "
read -s USER_PW
echo
return 0 ;;
"n"|"N"| *)
echo "Bye."
return 1 ;;
esac
}
function create_db() {
Q1="CREATE DATABASE IF NOT EXISTS ${1} CHARACTER SET utf8;"
Q2="GRANT ALL ON *.* TO '${2}'#'${3}' IDENTIFIED BY '$USER_PW';"
Q3="FLUSH PRIVILEGES;"
Q4="SHOW DATABASES;"
SQL="${Q1} ${Q2} ${Q3} ${Q4}"
echo "Query:"
echo "${SQL}"
echo -n "Run query now? (y/n) "
read ANSWER
case "$ANSWER" in
"y" | "Y" )
/usr/bin/mysql -uroot -p -e "$SQL" || echo "Failure."
;;
"n" | "N" | *)
echo "Bye."
return 1
;;
esac
}
pre_checks "$#" "$1" "$2" && create_db "$1" "$2" "$3"
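Usage is something like this (a sketch; create_db.sh and the argument values are hypothetical names — the script asks for the new user's password and mysql then prompts for the root password):
./create_db.sh mydjangodb django_user localhost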

Easily import a MySQL --tab dump

I have dumped a MySQL database with the --tab option, which creates 2 files per table (a SQL file with the create table and a tab-separated-values file with the data).
Is there an easy way to import this directory of files back into a MySQL server? I can't find the option in mysqlimport.
for sql_file in *.sql; do
table_name=${sql_file%.sql}
mysql -u root database_name < "$sql_file"
echo "LOAD DATA LOCAL INFILE '$table_name.txt' INTO TABLE $table_name" | mysql -u root database_name
done
You can do this several ways - the most direct would be
mysql db < sql_structure_file
This creates the tables. Then do (from mysql client)
LOAD DATA LOCAL INFILE 'tab_delimited_file' INTO TABLE tbl_name
(with appropriate names, delimiters, etc )
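For example (a sketch with a hypothetical table called customers, assuming the --tab dump produced customers.sql and customers.txt):
mysql db < customers.sql
mysql --local-infile=1 db -e "LOAD DATA LOCAL INFILE 'customers.txt' INTO TABLE customers"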
I'm using this bash script which first imports all the sql files to build the tables, then the txt files. The data is loaded using background processes in parallel — basically emulating the multi-thread option in mysqlimport. Usage is like this:
./import_table.sh database_name /path/to/dump/files
SCRIPT:
#!/bin/bash
DIR=$(echo $2 | sed 's/\/$//')
function import_sql() {
mysql $1 < $2;
echo "mysql $1 < '$2'";
}
function import_txt() {
mysqlimport --silent $1 $2;
echo "mysqlimport --silent $1 '$2'";
}
for filename in $DIR/*.sql; do
[ -e "$filename" ] || continue
import_sql $1 $filename &
done
wait
echo 'ALL SQL IMPORTED';
for filename in $DIR/*.txt; do
[ -e "$filename" ] || continue
import_txt $1 $filename &
done
wait
echo 'ALL TXT IMPORTED';

dump all mysql tables into separate files automatically?

I'd like to get dumps of each mysql table into separate files. The manual indicates that the syntax for this is
mysqldump [options] db_name [tbl_name ...]
Which indicates that you know the table names beforehand. I could set up the script so that it knows each table name now, but say I add a new table down the road and forget to update the dump script. Then I'm missing dumps for one or more tables.
Is there a way to automagically dump each existing table into a separate file? Or am I going to have to do some script-fu: query the database, get all the table names, and dump them by name?
If I go the script-fu route, what scripting languages can access a MySQL database?
Here's a script that dumps table data as SQL commands into separate, compressed files. It does not require being on the MySQL server host, doesn't hard-code the password in the script, and is just for a specific db, not all db's on the server:
#!/bin/bash
# dump-tables-mysql.sh
# Descr: Dump MySQL table data into separate SQL files for a specified database.
# Usage: Run without args for usage info.
# Author: @Trutane
# Ref: http://stackoverflow.com/q/3669121/138325
# Notes:
# * Script will prompt for password for db access.
# * Output files are compressed and saved in the current working dir, unless DIR is
# specified on command-line.
[ $# -lt 3 ] && echo "Usage: $(basename $0) <DB_HOST> <DB_USER> <DB_NAME> [<DIR>]" && exit 1
DB_host=$1
DB_user=$2
DB=$3
DIR=$4
[ -n "$DIR" ] || DIR=.
test -d $DIR || mkdir -p $DIR
echo -n "DB password: "
read -s DB_pass
echo
echo "Dumping tables into separate SQL command files for database '$DB' into dir=$DIR"
tbl_count=0
for t in $(mysql -NBA -h $DB_host -u $DB_user -p$DB_pass -D $DB -e 'show tables')
do
echo "DUMPING TABLE: $DB.$t"
mysqldump -h $DB_host -u $DB_user -p$DB_pass $DB $t | gzip > $DIR/$DB.$t.sql.gz
tbl_count=$(( tbl_count + 1 ))
done
echo "$tbl_count tables dumped from database '$DB' into dir=$DIR"
The mysqldump command line program does this for you - although the docs are very unclear about this.
One thing to note is that ~/output/dir has to be writable by the user that owns mysqld. On Mac OS X:
sudo chown -R _mysqld:_mysqld ~/output/dir
mysqldump --user=dbuser --password --tab=~/output/dir dbname
After running the above, you will have one tablename.sql file containing each table's schema (the create table statement) and one tablename.txt file containing its data.
If you want a dump with schema only, add the --no-data flag:
mysqldump --user=dbuser --password --no-data --tab=~/output/dir dbname
You can accomplish this by:
Get the list of databases in MySQL
Dump each database with mysqldump
# Optional variables for a backup script
MYSQL_USER="root"
MYSQL_PASS="something"
BACKUP_DIR=/srv/backup/$(date +%Y-%m-%dT%H_%M_%S);
test -d "$BACKUP_DIR" || mkdir -p "$BACKUP_DIR"
# Get the database list, exclude information_schema
for db in $(mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' | grep -v information_schema)
do
# dump each database in a separate file
mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$db" | gzip > "$BACKUP_DIR/$db.sql.gz"
done
Here is the corresponding import.
#!/bin/bash
# import-files-mysql.sh
# Descr: Import separate SQL files for a specified database.
# Usage: Run without args for usage info.
# Author: Will Rubel
# Notes:
# * Script will prompt for password for db access.
[ $# -lt 3 ] && echo "Usage: $(basename $0) <DB_HOST> <DB_USER> <DB_NAME> [<DIR>]" && exit 1
DB_host=$1
DB_user=$2
DB=$3
DIR=$4
[ -n "$DIR" ] || DIR=.
DIR=$DIR/*
echo -n "DB password: "
read -s DB_pass
echo
echo "Importing separate SQL command files for database '$DB' into '$DB'"
file_count=0
for f in $DIR
do
echo "IMPORTING FILE: $f"
gunzip -c $f | mysql -h $DB_host -u $DB_user -p$DB_pass $DB
(( file_count++ ))
done
echo "$file_count files importing to database '$DB'"
#!/bin/bash
for i in $(mysql -uUSER -pPASSWORD DATABASE -e "show tables;" | grep -v Tables_in_); do mysqldump -uUSER -pPASSWORD DATABASE $i > /backup/dir/$i.sql; done
tar -cjf "backup_mysql_"$(date +'%Y%m%d')".tar.bz2" /backup/dir/*.sql
I recently had the need to back up a big database (more than 250 GB as an uncompressed dump file), and I found the answers to this question really helpful.
I started using @Trutane's approach and it worked like a charm. But I was concerned about dumping tables in different mysql sessions because that could, at some point, lead to an inconsistent backup.
After some research and testing, I have developed a different solution based on gawk. The basic idea is creating a dump of the whole database using mysqldump with --single-transaction=true and then process the output with gawk to produce a different file for every table.
So I can call:
mysqldump --single-transaction=true -u DBUSERNAME -p DBNAME | \
gawk -v 'database=DBNAME' -f 'backup.awk' -
And it produces, in current folder, a bunch of $database.$table.sql files with the schema of every table and $database.$table.sql.gz files with the content of every table. Thanks to the param --single-transaction=true, all the dump happens in a single transaction and data consistency is ensured.
The content of backup.awk is:
# Split mysqldump output in different files, two per table:
# * First file is named $database.$table.sql and it contains the table schema
# * Second file is named $database.$table.sql.gz and it contains the table data
# The 'database' variable is expected to be provided in command-line
BEGIN {
insert=0
filename=sprintf("%s.header.sql", database);
}
# A line starting with "INSERT INTO" activates inserting mode
/INSERT INTO/ { insert=1 }
# A line containing "-- Table structure for table `name-of-table`" finishes inserting mode
# It is also used to detect table name and change file names accordingly
match($0, /-- Table structure for table `(.*)`/, m) {
insert=0;
table=m[1];
filename=sprintf("%s.%s.sql", database, table);
print sprintf("Dumping table %s\n", table);
}
# If in inserting mode, line is piped to a gzipped file,
# if it is not, it is redirected to an uncompressed schema file
{
if (insert == 1) {
output = sprintf("gzip > %s.gz", filename);
print | output
} else {
print > filename;
}
}
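To restore a single table from the resulting files (a sketch; mytable is a hypothetical table name), load the schema file first and then pipe in the gzipped data:
mysql -u DBUSERNAME -p DBNAME < DBNAME.mytable.sql
gunzip -c DBNAME.mytable.sql.gz | mysql -u DBUSERNAME -p DBNAME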
It looks like everybody here forgot about SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0;, which is supposed to speed up the import process ...
#!/bin/bash
MYSQL_USER="USER"
MYSQL_PASS="PASS"
if [ -z "$1" ]
then
echo "Dumping all DB ... in separate files"
for I in $(mysql -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' -s --skip-column-names);
do
echo "SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0;" > "$I.sql"
mysqldump -u $MYSQL_USER --password=$MYSQL_PASS $I >> "$I.sql";
echo "SET autocommit=1;SET unique_checks=1;SET foreign_key_checks=1;commit;" >> "$I.sql"
gzip "$I.sql"
done
echo "END."
else
echo "Dumping $1 ..."
echo "SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0;" > "$1.sql"
mysqldump -u $MYSQL_USER --password=$MYSQL_PASS $1 >> "$1.sql";
echo "SET autocommit=1;SET unique_checks=1;SET foreign_key_checks=1;commit;" >> "$1.sql"
gzip "$1.sql"
fi
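Restoring one of the resulting dumps is then just (a sketch; mydb is a hypothetical database name — the SET statements above are already embedded in the file):
gunzip -c mydb.sql.gz | mysql -u USER --password=PASS mydb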
If you want to dump all tables from all databases, just combine Elias Torres Arroyo's and Trutane's answers:
And if you don't want to give your password on the terminal, just store your password in an extra config file (chmod 0600); see Mysqldump launched by cron and password security
#!/bin/bash
# this file
# a) gets all databases from mysql
# b) gets all tables from all databases in a)
# c) creates subfolders for every database in a)
# d) dumps every table from b) in a single file
# this is a mixture of scripts from Trutane (http://stackoverflow.com/q/3669121/138325)
# and Elias Torres Arroyo (https://stackoverflow.com/a/14711298/8398149)
# usage:
# sk-db.bash parameters
# where parameters are:
# d "dbs to leave"
# t " tables to leave"
# u "user who connects to database"
# h "db host"
# f "/backup/folder"
user='root'
host='localhost'
backup_folder=''
leave_dbs=(information_schema mysql)
leave_tables=()
while getopts ":d:t:u:h:f:" opt; do
case $opt in
d) leave_dbs=( $OPTARG )
;;
t) leave_tables=( $OPTARG )
;;
u) user=$OPTARG
;;
h) host=$OPTARG
;;
f) backup_folder=$OPTARG
;;
\?) echo "Invalid option -$OPTARG" >&2
;;
esac
done
echo '****************************************'
echo "Database Backup with these options"
echo "Host $host"
echo "User $user"
echo "Backup in $backup_folder"
echo '----------------------------------------'
echo "Databases to emit:"
printf "%s\n" "${leave_dbs[#]}"
echo '----------------------------------------'
echo "Tables to emit:"
printf "%s\n" "${leave_tables[#]}"
echo '----------------------------------------'
BACKUP_DIR=$backup_folder/$(date +%Y-%m-%dT%H_%M_%S);
CONFIG_FILE=/root/db-config.cnf
function contains() {
local n=$#
local value=${!n}
for ((i=1;i < $#;i++)) {
if [ "${!i}" == "${value}" ]; then
echo "y"
return 0
fi
}
echo "n"
return 1
}
test -d "$BACKUP_DIR" || mkdir -p "$BACKUP_DIR"
# Get the database list, exclude information_schema
database_count=0
tbl_count=0
for db in $(mysql --defaults-extra-file=$CONFIG_FILE -B -s -u $user -e 'show databases' )
do
if [ $(contains "${leave_dbs[#]}" "$db") == "y" ]; then
echo "leave database $db as requested"
else
# dump each database in a separate file
(( database_count++ ))
DIR=$BACKUP_DIR/$db
[ -n "$DIR" ] || DIR=.
test -d $DIR || mkdir -p $DIR
echo
echo "Dumping tables into separate SQL command files for database '$db' into dir=$DIR"
for t in $(mysql --defaults-extra-file=$CONFIG_FILE -NBA -h $host -u $user -D $db -e 'show tables')
do
if [ $(contains "${leave_tables[#]}" "$db.$t") == "y" ]; then
echo "leave table $db.$t as requested"
else
echo "DUMPING TABLE: $db.$t"
mysqldump --defaults-extra-file=$CONFIG_FILE -h $host -u $user $db $t > $DIR/$db.$t.sql
tbl_count=$(( tbl_count + 1 ))
fi
done
echo "Database $db is finished"
echo '----------------------------------------'
fi
done
echo '----------------------------------------'
echo "Backup completed"
echo '**********************************************'
And also, this helped:
Check if bash array contains value
arrays in bash
named arguments in script
I'm no bash master, but I'd just do it with a bash script. Without hitting MySQL, with knowledge of the data directory and database name, you could just scan for all .frm files (one for every table in that db/directory) to get a list of tables.
I'm sure there are ways to make it slicker and accept arguments or whatnot, but this worked well for me.
tables_in_a_db_to_sql.sh
#!/bin/bash
database="this_is_my_database"
datadir="/var/lib/mysql/"
datadir_escaped="\/var\/lib\/mysql\/"
all_tables=($(ls $datadir$database/*.frm | sed s/"$datadir_escaped$database\/"/""/g | sed s/.frm//g))
for t in "${all_tables[#]}"; do
outfile=$database.$t.sql
echo "-- backing up $t to $outfile"
echo "mysqldump [options] $database $t > $outfile"
# mysqldump [options] $database $t > $outfile
done
Fill in the [options] and desired outfile convention as you need, and uncomment the last mysqldump line.
For Windows Servers, you can use a batch file like so:
set year=%DATE:~10,4%
set day=%DATE:~7,2%
set mnt=%DATE:~4,2%
set hr=%TIME:~0,2%
set min=%TIME:~3,2%
IF %day% LSS 10 SET day=0%day:~1,1%
IF %mnt% LSS 10 SET mnt=0%mnt:~1,1%
IF %hr% LSS 10 SET hr=0%hr:~1,1%
IF %min% LSS 10 SET min=0%min:~1,1%
set backuptime=%year%-%mnt%-%day%-%hr%-%min%
set backupfldr=C:\inetpub\wwwroot\backupfiles\
set datafldr="C:\Program Files\MySQL\MySQL Server 5.5\data"
set zipper="C:\inetpub\wwwroot\backupfiles\zip\7za.exe"
set retaindays=21
:: Switch to the data directory to enumerate the folders
pushd %datafldr%
:: Get all table names and save them in a temp file
mysql --skip-column-names --user=root --password=mypassword mydatabasename -e "show tables" > tables.txt
:: Loop through all tables in temp file so that we can save one backup file per table
for /f "skip=3 delims=|" %%i in (tables.txt) do (
set tablename=%%i
mysqldump --user=root --password=mypassword mydatabasename %%i > "%backupfldr%mydatabasename.%backuptime%.%%i.sql"
)
del tables.txt
:: Zip all files ending in .sql in the folder
%zipper% a -tzip "%backupfldr%backup.mydatabasename.%backuptime%.zip" "%backupfldr%*.sql"
echo "Deleting all the files ending in .sql only"
del "%backupfldr%*.sql"
echo "Deleting zip files older than 21 days now"
Forfiles /p %backupfldr% /m *.zip /d -%retaindays% /c "cmd /c del /q @path"
Then schedule it using Windows Task Scheduler.
Also, if you want to exclude certain tables in your backup, note that you can use a where clause on the "show tables" statement, but the column name depends on your database name.
So for example, if your database name is "blah" then your column name in the "show tables" result set will be "tables_in_blah". Which means you could add a where clause similar to this:
show tables where tables_in_blah <> 'badtable'
or
show tables where tables_in_blah like '%goodtable%'
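So, in the batch file above, the table-listing line could become (using the hypothetical blah and badtable names from this example):
mysql --skip-column-names --user=root --password=mypassword blah -e "show tables where tables_in_blah <> 'badtable'" > tables.txt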
Fill in the path where the backups will be stored. In this
case we create one per day of the week, so we keep 7 days of backups and they are recycled.
The script checks how many databases there are and then how many tables each database has, and creates a file named database_tablename.sql for each table,
which can then be restored.
Regards
#!/bin/bash
USER="root"
MYSQL_PASSWORD="password"
RUTA=/hdd/backup/mysql
diasemana=$(date +\%w)
mkdir -m 7777 $RUTA
mkdir -m 7777 $RUTA/infodb
mkdir -m 7777 $RUTA/$diasemana
mysql -u$USER -p$MYSQL_PASSWORD -e "SHOW DATABASES where \`Database\` <> 'information_schema' and \`Database\` <> 'mysql' and \`Database\` <> 'sys' and \`Database\` <> 'performance_schema';" -N > $RUTA/infodb/db.txt;
for i in $(cat $RUTA/infodb/db.txt);
do
mysql -u$USER -p$MYSQL_PASSWORD -e "USE $i;show tables;" -N >$RUTA/infodb/$i.txt;
for j in $(cat $RUTA/infodb/$i.txt);
do
mysqldump -u$USER -p$MYSQL_PASSWORD $i $j > $RUTA/$diasemana/$i"_"$j".sql";
echo $RUTA/$diasemana/$i"_"$j".sql"
done
done
See the following article by Pauli Marcus:
Howto split a SQL database dump into table-wise files
Splitting a sql file containing a whole database into per-table files
is quite easy: Grep the .sql for any occurrence of DROP TABLE. Generate
the file name from the table name that is included in the DROP TABLE
statement. Echo the output to a file. Here is a little script that
expects a .sql file as input:
#!/bin/bash
file=$1 # the input file
directory="$file-splitted" # the output directory
output="$directory/header" # the first file containing the header
GREP="DROP TABLE" # what we are looking for
mkdir $directory # create the output directory
while read line
do
# if the current line contains the wanted statement
if [ $(echo "$line" | grep -c "$GREP") == "1" ]
then
# extract the file name
myfile=$(echo $line | awk '{print $5}' | sed -e 's/`//g' -e 's/;//g')
# set the new file name
output="$directory/$myfile"
fi
echo "$line" >> $output # write to file
done < $file
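For example (a sketch; split-tables.sh, dump.sql, and the table names are hypothetical), running the script over a full dump creates a dump.sql-splitted directory containing the header file plus one file per table, named after the table:
$ ./split-tables.sh dump.sql
$ ls dump.sql-splitted/
header  customers  orders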