MySQL: dump BLOB into local file

How do I dump the contents of a BLOB to a file? The catch is that the resulting file has to end up on the client, not on the server, and the whole thing should be handled by a shell script.
SELECT ... INTO OUTFILE/DUMPFILE ...
won't work, because it saves the file directly on the server, not on the client.
echo "USE my_db; SELECT my_blob FROM my_table LIMIT 1" | mysql --<connection params> > $OUTFILE
writes garbled data into the local $OUTFILE, presumably because of the client's output formatting.
Is there a way to disable all formatting, so that I get a 1:1 dump into the file?
Any help is greatly appreciated!

You can accomplish this with the MySQL client as long as you use the proper options.
In particular, pass --silent twice to suppress the table formatting and the column name, and use --raw so that no characters are escaped.
Here's an update of the command you tried that should get you on the right track:
mysql --<connection params> \
my_db \
--raw \
--silent \
--silent \
--execute \
"SELECT my_blob FROM my_table LIMIT 1" > $OUTFILE

I had to extract all files out of a table and came up with a simple bash script based on the previous answer.
It's not very smart, but it does the job :)
#!/bin/bash
myUser=user
myPass=secret
myHost=localhost
myDb=database
myCmd="mysql -u$myUser -p$myPass -h$myHost -D$myDb"
# -N suppresses the column-name header line so it is not treated as a row
$myCmd -N -e "SELECT id, file_name FROM files;" |
while read -r id file_name; do
echo "$id $file_name"
$myCmd --raw --silent --silent -e "SELECT file_data FROM files WHERE id=$id LIMIT 1;" > "$file_name"
done

Related

connect to mysql db and execute query and export result to variable - bash script

I want to connect to a MySQL database, execute some queries, and export the result to a variable, and all of this needs to be done entirely from a bash script.
I have a code snippet but it does not work.
#!/bin/bash
BASEDIR=$(dirname $0)
cd $BASEDIR
mysqlUser=n_userdb
mysqlPass=d2FVR0NA3
mysqlDb=n_datadb
result=$(mysql -u $mysqlUser -p$mysqlPass -D $mysqlDb -e "select * from confs limit 1")
echo "${result}" >> a.txt
What's the problem?
The issue was resolved in the chat by using the correct password.
If you further want to get only the data, use mysql with -NB (or --skip-column-names and --batch).
Also, the script needs to quote the variable expansions, or there will be issues with usernames/passwords containing characters that are special to the shell. Additionally, uppercase variable names are usually reserved for system variables.
#!/bin/sh
basedir=$(dirname "$0")
mysqlUser='n_userdb'
mysqlPass='d2FVR0NA3'
mysqlDb='n_datadb'
cd "$basedir" &&
mysql -NB -u "$mysqlUser" -p"$mysqlPass" -D "$mysqlDb" \
-e 'select * from confs limit 1' >a.txt 2>a-err.txt
Ideally though, you'd use a my.cnf file to configure the username and password.
See e.g.
MySQL Utilities - ~/.my.cnf option file
mysql .my.cnf not reading credentials properly?
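A minimal sketch of that approach, reusing the credentials from the question (the file lives in the invoking user's home directory and should be readable only by that user):
cat > ~/.my.cnf <<'EOF'
[client]
user=n_userdb
password=d2FVR0NA3
EOF
chmod 600 ~/.my.cnf
mysql -NB -D n_datadb -e 'select * from confs limit 1' >a.txt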
Do this:
result=$(mysql -u $mysqlUser -p$mysqlPass -D $mysqlDb -e "select * from confs limit 1" | grep '^\|' | tail -1)
The $(...) command substitution captures the whole multi-line table output (header and all), so the hack above greps/tails it down to the interesting part: the last line, which is the data.

HOWTO read mysql SELECT into bash variables, then use those variables for INSERT INTO a different table

I am fairly new to this, so please be patient; my understanding of bash is that I can run
mysql --host=hostname --user=username --password=password -e "SELECT * FROM database.table;"
but I have less than no idea, from the manuals, how to get those results into actual bash variables. Someone mentioned using
read a b c
do while
echo "..${a}..${b}..${c}.."
but I fail to see how that will read them into the variables.
Also, on reading the variables back in, I will be doing something like
#>WGET $a
then log in to mysql again and do something like
LOAD DATA INFILE data.csv INTO thattable ON DUPLICATE UPDATE
I also want to do something like
INSERT INTO thattable WHERE (i just loaded the info) date = today
but because there will be multiple dates, how do I do this? And yes, this all needs to be bash-able; PHP is too slow, and I want to avoid C unless it's the only way.
Thanks, I know this is a lot!
-AW
Option 1
Use "select ... \G", store result in a tmp file and grep for the columns.
For example:
mytmp=$(mktemp /tmp/mytemp.XXXXXX)
mysql --host=hostname --user=username --password=password -e "SELECT * FROM database.table \G;" > $mytmp
column_foo=$( fgrep COLUMN_FOO $mytmp | cut -d ':' -f2-)
column_bar=$( fgrep COLUMN_BAR $mytmp | cut -d ':' -f2-)
echo $column_foo
echo $column_bar
Option 2
If the number of columns is high, store them all in an associative array:
mytmp=$(mktemp /tmp/mytemp.XXXXXX)
mysql --host=hostname --user=username --password=password -e "SELECT * FROM database.table \G;" | xargs -I{} echo {} > $mytmp
declare -A a
while IFS=':' read k s; do a[$k]=$s; done < $mytmp
echo ${a[COLUMN_FOO]}
echo ${a[COLUMN_BAR]}
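If all you need is one row in plain shell variables, a minimal sketch of the read approach mentioned in the question also works (col_a, col_b, col_c are placeholder column names; -N suppresses the header and -B gives tab-separated output):
IFS=$'\t' read -r a b c < <(mysql --host=hostname --user=username --password=password -N -B \
  -e "SELECT col_a, col_b, col_c FROM database.table LIMIT 1")
echo "..${a}..${b}..${c}.."
Note the process substitution: piping mysql into read would set the variables in a subshell, and they would be lost after the loop.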

saving blob field to disk from bash

I have a mysql database with a blob field containing a zip and I need to save it as a file on disk, from bash. I'm doing the following but the end result doesn't read as a zip... Am I doing something wrong or is the file stored not actually a zip (the entry in the database is actually created by a seismological station, so I have no control over it)?
echo "USE database; SELECT blobcolumn FROM table LIMIT 1" | mysql -u root > file.zip
Then I open file.zip with a file editor and remove the first line, which contains the column header, but 'unzip' still doesn't recognize it as a zip file.
For a gzipped blob you can use:
echo "use db; select blob from table where id=blah" | mysql -N --raw -uuser -ppass > mysql.gz
I have not tried this with a zip file.
The proper way to do this would be to use DUMPFILE, otherwise mysql will mess up your data.
mysql -uroot -e "SELECT blobcolumn INTO DUMPFILE '/tmp/file.zip' FROM table LIMIT 1" database
I know this is an old question, but I needed the answer myself, so this is what worked for me.
I found that mysql appends a newline character at the end, which needs to be removed before the correct binary value remains.
echo "USE database; SELECT blobcolumn FROM table LIMIT 1" | mysql -N --raw -u root | head -c -1 > file.zip
You would need to skip the column header, like:
sql="USE database; SELECT blobcolumn FROM table LIMIT 1"
mysql -u root -N <<< "$sql" > file.zip

Backing Up Views with mysqldump

I want to back up only the Views with mysqldump.
Is this possible?
If so, how?
Here's a full command-line example using a variant of the above:
mysql -u username INFORMATION_SCHEMA \
  --skip-column-names --batch \
  -e "select table_name from tables where table_type = 'VIEW'
      and table_schema = 'database'" \
  | xargs mysqldump -u username database \
  > views.sql
This extracts all of the view names via a query to the INFORMATION_SCHEMA database, then pipes them to xargs to formulate a mysqldump command. --skip-column-names and --batch are needed to make the output xargs friendly. This command line might get too long if you have a lot of views, in which case you'd want to add some sort of additional filter to the select (e.g. look for all views starting with a given character).
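For example, restricting the dump to views whose names start with a given prefix (the v prefix here is just a placeholder) might look like:
mysql -u username INFORMATION_SCHEMA \
  --skip-column-names --batch \
  -e "select table_name from tables where table_type = 'VIEW'
      and table_schema = 'database' and table_name like 'v%'" \
  | xargs mysqldump -u username database \
  > views.sql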
Backing up views over multiple databases can be done by just using information_schema:
mysql --skip-column-names --batch -e 'select CONCAT("DROP TABLE IF EXISTS ", TABLE_SCHEMA, ".", TABLE_NAME, "; CREATE OR REPLACE VIEW ", TABLE_SCHEMA, ".", TABLE_NAME, " AS ", VIEW_DEFINITION, "; ") table_name from information_schema.views'
I modified Andomar's excellent answer to allow the database (and other settings) to only be specified once:
#!/bin/bash -e
mysql --skip-column-names --batch -e \
"select table_name from information_schema.views \
where table_schema = database()" $* |
xargs --max-args 1 mysqldump $*
I save this as mysql-dump-views.sh and call it via:
$ mysql-dump-views.sh -u user -ppassword databasename >dumpfile.sql
By backup, I'm assuming you mean just the definition without the data.
It seems that right now mysqldump doesn't distinguish between VIEWs and TABLEs, so perhaps the best thing to do is to either specify the VIEWs explicitly on the mysqldump command line, or figure out the list dynamically beforehand and then pass it down as before.
You can get all the VIEWs in a specific database using this query:
SHOW FULL TABLES WHERE table_type='view';
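Once you have that list, passing the views explicitly to mysqldump is just (view names below are placeholders):
mysqldump -u username database view_one view_two view_three > views.sql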
In terms of answering this question, olliiiver's answer is the best for doing this directly. For my answer I will try to build that into a comprehensive full backup and restore solution.
With the help of the other answers in this question, and a few other resources, I came up with this script for easily replacing the database on my development server with a live copy from the production server on demand. It works on one database at a time, rather than all databases. While I do have a separate script for that, it is not safe to share here as it basically drops and recreates everything except for a select few databases, and your environment may vary.
The script assumes root system and MySQL user on both machines (though that can be changed), working passwordless SSH between servers, and relies on a MySQL password file /root/mysqlroot.cnf on each machine, which looks like this:
[client]
password=YourPasswordHere
File: synctestdb.sh, optionally symlinked to /usr/sbin/synctestdb for ease of use
Usage: synctestdb DBNAME DESTSERVER
Run it from the production server.
Here it is:
#!/bin/bash
if [ "${1}" != "" ] && [ "${1}" != "--help" ] && [ "${2}" != "" ] ; then
DBNAME=${1}
DESTSERVER=${2}
BKDATE=$( date "+%Y-%m-%d" );
SRCHOSTNAME=$( /bin/hostname )
EXPORTPATH=/tmp
EXPORTFILE=/tmp/${SRCHOSTNAME}_sql_${BKDATE}_devsync.sql
CREDSFILE=/root/mysqlroot.cnf
SSHUSER=root
DBEXISTS=$( echo "SHOW DATABASES LIKE '${DBNAME}'" \
| mysql --defaults-extra-file=${CREDSFILE} -NB INFORMATION_SCHEMA )
if [ "${DBEXISTS}" == "${DBNAME}" ] ; then
echo Preparing --ignore-tables parameters for all relevant views
echo
#build --ignore-table parameters list from list of all views in
#relevant database - as mysqldump likes to recreate views as tables
#we pair this with an export of the view definitions later below
SKIPVIEWS=$(mysql --defaults-extra-file=${CREDSFILE} \
-NB \
-e "SELECT \
CONCAT( '--ignore-table=', TABLE_SCHEMA, '.', TABLE_NAME ) AS q \
FROM INFORMATION_SCHEMA.VIEWS \
WHERE TABLE_SCHEMA = '${DBNAME}';" )
if [ "$?" == "0" ] ; then
echo Exporting database ${DBNAME}
echo
mysqldump --defaults-extra-file=${CREDSFILE} ${SKIPVIEWS} \
--add-locks --extended-insert --flush-privileges --no-autocommit \
--routines --triggers --single-transaction --master-data=2 \
--flush-logs --events --quick --databases ${DBNAME} > ${EXPORTFILE} \
|| echo -e "\n\nERROR: ${SRCHOSTNAME} failed to mysqldump ${DBNAME}"
echo Exporting view definitions
echo
mysql --defaults-extra-file=${CREDSFILE} \
--skip-column-names --batch \
-e "SELECT \
CONCAT( \
'DROP TABLE IF EXISTS ', TABLE_SCHEMA, '.', TABLE_NAME, \
'; CREATE OR REPLACE VIEW ', TABLE_SCHEMA, '.', TABLE_NAME, ' AS ', \
VIEW_DEFINITION, '; ') AS TABLE_NAME FROM INFORMATION_SCHEMA.VIEWS \
WHERE TABLE_SCHEMA = '${DBNAME}';" >> ${EXPORTFILE} \
|| echo -e "\n\nERROR: ${SRCHOSTNAME} failed to mysqldump view definitions"
echo Export complete, preparing to transfer export file and import
echo
STATUSMSG="SUCCESS: database ${DBNAME} synced from ${SRCHOSTNAME} to ${DESTSERVER}"
scp \
${EXPORTFILE} \
${SSHUSER}@${DESTSERVER}:${EXPORTPATH}/ \
|| STATUSMSG="ERROR: Failed to SCP file to remote server ${DESTSERVER}"
ssh ${SSHUSER}@${DESTSERVER} \
"mysql --defaults-extra-file=${CREDSFILE} < ${EXPORTFILE}" \
|| STATUSMSG="ERROR: Failed to update remote server ${DESTSERVER}"
ssh ${SSHUSER}@${DESTSERVER} \
"rm ${EXPORTFILE}" \
|| STATUSMSG="ERROR: Failed to remove import file from remote server ${DESTSERVER}"
rm ${EXPORTFILE}
echo ${STATUSMSG}
else
echo "ERROR: could not obtain list of views from INFORMATION_SCHEMA"
fi
else
echo "ERROR: specified database not found, or SQL credentials file not found"
fi
else
echo -e "Usage: synctestdb DBNAME DESTSERVER \nPlease only run this script from the live production server\n"
fi
So far it appears to work, though you may want to tweak it for your purposes. Be sure that wherever your credentials file is, it is set with secure access rights, so that unauthorized users cannot read it!
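For example:
chown root:root /root/mysqlroot.cnf
chmod 600 /root/mysqlroot.cnf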
As it seems to be difficult to export views properly, I adapted olliiiver's answer so that we first drop any tables or views on the target database whose names match valid views on the source (in case they exist), then import all tables, which may erroneously recreate those views as tables, and finally drop those tables and define the views properly.
Basically here is how it works:
verify existence of the database you specified on the command line
use MYSQLDUMP to create a dump file
SCP the dump file from production to the specified test server
issue import commands on the specified test server over SSH and return output
remove the dump file from both servers when complete
issue some reasonable output for most steps along the way
I would stick as closely as possible to the output of mysqldump like the OP asked, since it includes a slew of information about the view that can't be reconstructed with a simple query from the INFORMATION_SCHEMA.
This is how I create a deployment view script from my source database:
SOURCEDB="my_source_db"
mysql $SOURCEDB --skip-column-names -B -e \
"show full tables where table_type = 'view'" \
| awk '{print $1}' \
| xargs -I {} mysqldump $SOURCEDB {} > views.sql
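Applying the generated definitions to a target database is then just a normal import (my_target_db is a placeholder):
mysql my_target_db < views.sql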
Thanks for this - very useful.
One hiccup though - perhaps as I have a slightly convoluted set of views that reference other views etc:
I found that the "definer" user needs to exist and have the right permissions on the target schema, otherwise mysql will not generate the views that reference other views, as it thinks the definitions are insufficient.
In the generated:
/*!50013 DEFINER=<user>@<host> SQL SECURITY DEFINER */
--> ensure <user>@<host> exists on your target instance, or replace this string with a user that does.
Thanks
Thorstein

Dump a mysql database to a plaintext (CSV) backup from the command line

I'd like to avoid mysqldump since that outputs in a form that is only convenient for mysql to read. CSV seems more universal (one file per table is fine). But if there are advantages to mysqldump, I'm all ears. Also, I'd like something I can run from the command line (linux). If that's a mysql script, pointers to how to make such a thing would be helpful.
If you can cope with table-at-a-time, and your data is not binary, use the -B option to the mysql command. With this option it'll generate TSV (tab-separated) files which can be imported into Excel, etc., quite easily:
% echo 'SELECT * FROM table' | mysql -B -uxxx -pyyy database
Alternatively, if you've got direct access to the server's file system, use SELECT INTO OUTFILE which can generate real CSV files:
SELECT * INTO OUTFILE 'table.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM table
In MySQL itself, you can specify CSV output like:
SELECT order_id,product_name,qty
FROM orders
INTO OUTFILE '/tmp/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
From http://www.tech-recipes.com/rx/1475/save-mysql-query-results-into-a-text-or-csv-file/
You can dump a whole database in one go with mysqldump's --tab option. You supply a directory path and it creates one .sql file per table with the DROP TABLE IF EXISTS / CREATE TABLE statements, and a .txt file with the contents, tab separated. To create comma-separated files you could use the following:
mysqldump --password --fields-optionally-enclosed-by='"' --fields-terminated-by=',' --tab /tmp/path_to_dump/ database_name
That path needs to be writable by both the mysql user and the user running the command, so for simplicity I recommend chmod 777 /tmp/path_to_dump/ first.
The select into outfile option wouldn't work for me but the below roundabout way of piping tab-delimited file through SED did:
mysql -uusername -ppassword -e "SELECT * from tablename" dbname | sed 's/\t/","/g;s/^/"/;s/$/"/' > /path/to/file/filename.csv
Here is the simplest command for it
mysql -h<hostname> -u<username> -p<password> -e 'select * from databaseName.tableName' | sed 's/\t/,/g' > output.csv
If there is a comma in a column value then we can generate a .tsv instead of a .csv with the following command:
mysql -h<hostname> -u<username> -p<password> -e 'select * from databaseName.tableName' > output.tsv
If you really need a "backup" then you also need the database schema: table definitions, view definitions, stored procedures and so on. A backup of a database isn't just the data.
The value of the mysqldump format for backup is specifically that it is very EASY to use it to restore mysql databases. A backup that isn't easily restored is far less useful. If you are looking for a method to reliably backup mysql data to so you can restore to a mysql server then I think you should stick with the mysqldump tool.
Mysql is free and runs on many different platforms. Setting up a new mysql server that I can restore to is simple. I am not at all worried about not being able to setup mysql so I can do a restore.
I would be far more worried about a custom backup/restore based on a fragile format like csv/tsv failing. Are you sure that all your quotes, commas, or tabs that are in your data would get escaped correctly and then parsed correctly by your restore tool?
If you are looking for a method to extract the data then see several in the other answers.
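For reference, a minimal dump-and-restore round trip with mysqldump looks something like this (host, user and database names are placeholders):
mysqldump -h dbhost -u backupuser -p mydb > mydb.sql
mysql -h otherhost -u restoreuser -p mydb < mydb.sql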
You can use the script below to get the output as CSV files, one file per table, with headers.
for tn in `mysql --batch --skip-pager --skip-column-names --raw -uuser -ppassword -e"show tables from mydb"`
do
mysql -uuser -ppassword mydb -B -e "select * from \`$tn\`;" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > $tn.csv
done
user is your user name, password is the password (supplied here so you don't have to keep typing it for each table), and mydb is the database name.
Explanation of the script: the first sed expression replaces the tabs with "," so the fields end up enclosed in double quotes and separated by commas; the second inserts a double quote at the beginning of the line, the third a double quote at the end, and the final one takes care of any \n.
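To see what those sed expressions do, you can run them on a single tab-separated line (output shown on the second line; this relies on GNU sed understanding \t):
$ printf '1\tJohn Doe\n' | sed 's/\t/","/g;s/^/"/;s/$/"/'
"1","John Doe"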
If you want to dump the entire database, one tab-separated file per table:
#!/bin/bash
host=hostname
uname=username
pass=password
port=portnr
db=db_name
s3_url=s3://buckera/db_dump/
DATE=`date +%Y%m%d`
rm -rf $DATE
echo 'show tables' | mysql -B -h${host} -u${uname} -p${pass} -P${port} ${db} > tables.txt
awk 'NR>1' tables.txt > tables_new.txt
while IFS= read -r line
do
mkdir -p $DATE/$line
echo "select * from $line" | mysql -B -h"${host}" -u"${uname}" -p"${pass}" -P"${port}" "${db}" > $DATE/$line/dump.tsv
done < tables_new.txt
touch $DATE/$DATE.fin
rm -rf tables_new.txt tables.txt
Check out mk-parallel-dump which is part of the ever-useful maatkit suite of tools. This can dump comma-separated files with the --csv option.
This can do your whole db without specifying individual tables, and you can specify groups of tables in a backupset table.
Note that it also dumps table definitions, views and triggers into separate files. In addition to providing a complete backup in a more universally accessible form, it is also immediately restorable with mk-parallel-restore.
Two line PowerShell answer:
# Store in variable
$Global:csv = (mysql -uroot -p -hlocalhost -Ddatabase_name -B -e "SELECT * FROM some_table") `
| ConvertFrom-Csv -Delimiter "`t"
# Out to csv
$Global:csv | Export-Csv "C:\temp\file.csv" -NoTypeInformation
Boom-bata-boom
-D = the name of your database
-e = query
-B = tab-delimited
There's a slightly simpler way to get all the tables into tab delimited fast:
#!/bin/bash
tablenames=$(mysql your_database -e "show tables;" -B |sed "1d")
IFS=$'\n'
tables=($tablenames)
for table in ${tables[@]}; do
mysql your_database -e "select * from ${table}" -B > "${table}.tsv"
done
Here's a basic python script that does the work! You can choose to export only the headers (column names) or both headers and data.
Just change the database credentials and run the script. It will output all the data to the output folder.
To run the script -
Run: pip install mysql-connector-python
Change database credentials in the "INPUT" section
Run: python filename.py
import mysql.connector
from pathlib import Path
import csv
#========INPUT===========
databaseHost=""
databaseUsername=""
databasePassword=""
databaseName=""
outputDirectory="./WITH-DATA/"
exportTableData=True #MAKING THIS FIELD FALSE WILL STORE ONLY THE TABLE HEADERS (COLUMN NAMES) IN THE CSV FILE
#========INPUT END===========
Path(outputDirectory).mkdir(parents=True, exist_ok=True)
mydb = mysql.connector.connect(
    host=databaseHost,
    user=databaseUsername,
    password=databasePassword
)
mycursor = mydb.cursor()
mycursor.execute("USE "+databaseName)
mycursor.execute("SHOW TABLES")
tables = mycursor.fetchall()
tableNames=[table[0] for table in tables]
print("================================")
print("Total number of tables: "+ str(len(tableNames)))
print(tableNames)
print("================================")
for tableName in tableNames:
    print("================================")
    print("Processing: "+ str(tableName))
    mydb = mysql.connector.connect(
        host=databaseHost,
        user=databaseUsername,
        password=databasePassword
    )
    mycursor = mydb.cursor()
    mycursor.execute("USE "+databaseName)
    if exportTableData:
        mycursor.execute("SELECT * FROM "+tableName)
    else:
        mycursor.execute("SELECT * FROM "+tableName+" LIMIT 1")
    print(mycursor.column_names)
    with open(outputDirectory+tableName+".csv", 'w', newline='') as csvfile:
        csvwriter = csv.writer(csvfile)
        csvwriter.writerow(mycursor.column_names)
        if exportTableData:
            myresult = mycursor.fetchall()
            csvwriter.writerows(myresult)