I have a folder containing around 20,000 PNG images (barcodes) of around 500 KB each, and I want to batch insert them into a MySQL table.
I found this script, but I could not get it to work properly:
#! /bin/bash
dir=/folder/barcodes
ext=png
chmod a+r $dir/*.$ext
mysql -u root -p DBNAME <<eot
USE DBNAME;
drop table if exists t1;
create table t1 (name varchar(128), data mediumblob, PRIMARY KEY(ID));
USE DBNAME;
eot
ls -1 $dir/*.$ext | perl -e 'print "insert into t1(name,data) values ".join(",",map {chop;$f="\"".$_."\""; "($f,load_file($f))"} <>);' | mysql -u root -p DBNAME
My issues are:
1. The table does not get a primary key (I add one afterwards using phpMyAdmin).
2. The pictures are somehow not loaded into the database (all values are NULL).
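For reference, here is a corrected sketch of the script that addresses both issues. It is untested and assumes the MySQL server runs on the same host, that the barcodes directory is readable by the mysqld user and allowed by secure_file_priv (otherwise LOAD_FILE() silently returns NULL), and that max_allowed_packet is larger than the ~500 KB files:

#!/bin/bash
dir=/folder/barcodes
ext=png

# mysqld itself must be able to traverse the directory and read the files,
# or LOAD_FILE() will return NULL for every row.
chmod a+rx "$dir"
chmod a+r "$dir"/*."$ext"

mysql -u root -p DBNAME <<'EOT'
DROP TABLE IF EXISTS t1;
-- Define an id column explicitly; the original CREATE TABLE declared
-- PRIMARY KEY(ID) without ever defining an ID column, so it failed outright.
CREATE TABLE t1 (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  name VARCHAR(128),
  data MEDIUMBLOB,
  PRIMARY KEY (id)
);
EOT

ls -1 "$dir"/*."$ext" \
  | perl -e 'print "INSERT INTO t1 (name, data) VALUES ".join(",", map { chomp; $f = "\"$_\""; "($f, LOAD_FILE($f))" } <>);' \
  | mysql -u root -p DBNAME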
In MySQL, how do I copy data from one table to another in the same database?
I know about INSERT INTO ... SELECT, but it takes forever, and on a live database we can't take that risk.
There are some conditions:
1. table1 is the source table and table1_archives is the destination table.
2. table1_archives already has data, so we can only append.
My attempt:
time mysqldump --log-error=$logfile --complete-insert --insert-ignore \
  --no-create-info --skip-triggers --user=$dbuser --host=$host $dbname table1 \
  --where="created < now()-interval 10 month" > $filename
But the dump contains the name table1, so I can't load it into table1_archives.
Any guidance will be appreciated.
Thanks in advance.
In your output file, you need to change the table name table1 to table1_archives. Unfortunately, mysqldump does not have any option to do this, so you have to do it on the fly using sed, which renames every occurrence of table1 in the output to table1_archives.
Since your column values could also contain the string table1, it is safer to search and replace the name enclosed in backticks.
You can also use gzip to compress the output file.
Here is the command that worked for me:
mysqldump -u USER -h HOST -p --skip-add-drop-table --no-create-info --skip-triggers --compact DB table1 |\
sed -e 's/`table1`/`table1_archives`/' | gzip > filename.sql.gz
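To load the renamed, compressed dump into the archive table afterwards (a sketch; substitute your own credentials and database name):

gunzip < filename.sql.gz | mysql -u USER -h HOST -p DB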
"but it is taking forever to do this"
There is a small trick to avoid this so that the INSERT INTO runs faster (a SQL sketch follows the steps below):
INSERT INTO table1 SELECT * FROM table2;
Trick:
Step 1: drop all secondary indexes from the destination table (table1)
Step 2: execute the query
Step 3: create the indexes again
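A minimal SQL sketch of the trick (the index and column names here are hypothetical):

-- step 1: drop the secondary index on the destination (hypothetical name idx_created)
ALTER TABLE table1 DROP INDEX idx_created;
-- step 2: run the bulk copy without index-maintenance overhead
INSERT INTO table1 SELECT * FROM table2;
-- step 3: rebuild the index in one pass afterwards
ALTER TABLE table1 ADD INDEX idx_created (created);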
How do I copy database1 to database2 from the mysql command line?
I know that mysqldump is one option, or I can do
DROP TABLE IF EXISTS table2;
CREATE TABLE table2 LIKE table1;
INSERT INTO table2 SELECT * FROM table1;
But I don't want to do this manually for each table name. Is this possible?
The key here is "from the mysql command line"
mysql> ...
First create the duplicate database:
CREATE DATABASE database2;
Make sure the user and permissions are all in place and:
mysqldump -u admin -p database1 | mysql -u backup -pPassword database2
You can also refer to the following link for executing this from the mysql shell.
http://dev.mysql.com/doc/refman/5.5/en/mysqldump-copying-to-other-server.html
In a stored procedure, loop over the results of
SELECT table_name FROM information_schema.tables WHERE table_schema = 'sourceDB';
At each iteration, prepare and execute a dynamic SQL statement:
-- for each #tableName in the query above
CREATE TABLE targetDB.#tableName LIKE sourceDB.#tableName;
INSERT INTO targetDB.#tableName SELECT * FROM sourceDB.#tableName;
Sorry, MySQL's stored procedure syntax being a serious pain in the neck, I am too lazy to write the full code right now; a rough sketch follows after the resources.
Resources:
CREATE PROCEDURE
PREPARE and EXECUTE
CURSORS
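For what it's worth, here is a rough, untested sketch of such a procedure (the names sourceDB, targetDB, and copy_all_tables are placeholders):

DELIMITER //
CREATE PROCEDURE copy_all_tables()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE tbl VARCHAR(64);
  -- cursor over every table in the source schema
  DECLARE cur CURSOR FOR
    SELECT table_name FROM information_schema.tables
    WHERE table_schema = 'sourceDB';
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  copy_loop: LOOP
    FETCH cur INTO tbl;
    IF done THEN LEAVE copy_loop; END IF;
    -- build and execute the two statements dynamically for each table
    SET @sql = CONCAT('CREATE TABLE targetDB.`', tbl, '` LIKE sourceDB.`', tbl, '`');
    PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;
    SET @sql = CONCAT('INSERT INTO targetDB.`', tbl, '` SELECT * FROM sourceDB.`', tbl, '`');
    PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;

Then run it with CALL copy_all_tables();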
mysqldump can also be used from the mysql command line,
using system (\!), which executes a system shell command.
Query:
system mysqldump -psecret -uroot -hlocalhost test > test.sql
system mysql -psecret -uroot -hlocalhost < test.sql
Background
We currently dump our database basically like this:
mysqldump --complete-insert --opt --hex-blob --all-databases -u -p
The dump will look something like this:
USE `DB1`
-- Table structure for table `MYTABLE`
DROP TABLE IF EXISTS `MYTABLE`
CREATE TABLE `MYTABLE`
...
INSERT INTO `MYTABLE` ...
-- Table structure for table `NEXTABLE`
...
USE `DB2`
-- Table structure for table `MYTABLE`
DROP TABLE IF EXISTS `MYTABLE`
CREATE TABLE `MYTABLE`
...
INSERT INTO `MYTABLE` ...
-- Table structure for table `NEXTABLE`
Problem
In some recovery scenarios we need to pull a specific table out of the backup. We might do something like this:
cat backup | sed -n -e '/Table structure for table .MYTABLE.$/,/Table structure for table .NEXTABLE.$/p' | mysql -u -p DB2
Because the individual table statements do not qualify the dbspace, in this case the table information for DB1.MYTABLE is going to be extracted, and thus DB2 is going to be populated with the backup from DB1.
Question
Is there a way to get the backup to qualify the dbspace name on each table statement such that the USE statement becomes unnecessary for this scenario? E.g.
USE `DB2`
-- Table structure for table `DB2`.`MYTABLE`
DROP TABLE IF EXISTS `DB2`.`MYTABLE`
CREATE TABLE `DB2`.`MYTABLE`
...
INSERT INTO `DB2`.`MYTABLE` ...
-- Table structure for table `DB2`.`NEXTABLE`
With no answer, and seemingly no way to add the space name to the dump, I was forced to scan the dump differently. Also note that this is a recovery scenario, so we cannot simply change the way we already dumped the database; it is too late at that point.
Since a table name is unique within a space, what I ended up going with was to first isolate that dbspace's section of the dump and then isolate the table.
Use this to restore the table from dump.sql to the same space it came from:
sed -n '/^USE .SPACENAMEHERE.;$/,/^USE .*$/p' dump.sql | sed -E -n '/^(USE .*;|-- Table structure for table .TABLENAMEHERE.)$/,/^-- Table structure for table /p' | mysql -u -p
Substitute SPACENAMEHERE with the dbspace name and TABLENAMEHERE with the table name. Because this usage of sed includes the USE statement in the output, we do not need to specify which database to connect to on the mysql command line. As long as the user has permission to USE that space, it will work. But if you want to insert this into a different dbspace (i.e. a temporary one), then use the following.
Use this to restore the table from dump.sql to a different space (e.g. a temporary one):
sed -n '/^USE .SPACENAMEHERE.;$/,/^USE .*$/p' dump.sql | sed -E -n '/^-- Table structure for table .TABLENAMEHERE.$/,/^-- Table structure for table /p' | mysql -u -p DESTINATIONSPACE
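For example, a hypothetical invocation (adjust credentials) restoring MYTABLE from space DB1 into a scratch space TMPDB:

sed -n '/^USE .DB1.;$/,/^USE .*$/p' dump.sql | sed -E -n '/^-- Table structure for table .MYTABLE.$/,/^-- Table structure for table /p' | mysql -u root -p TMPDB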
I have two databases. I want to dump data from one table in the first database and insert it into a table with a different name in the second database.
So I have DB1 with tables tbl1 and tbl2, and DB2 with tables tbl3 and tbl4. I know that tbl1 and tbl3 have the same structure. How do I copy data from one to the other using the mysqldump command?
I've tried this, but it does not work:
mysqldump --user root --password=password --no-create-info DB1 tbl1 > c:/dump.sql
mysql --user root --password=password DB2 tbl3 < c:/dump.sql
This is not going to work because of the different table name.
If both databases sit on the same server (the same daemon), you can do it directly:
INSERT INTO DB2.tbl3 SELECT * FROM DB1.tbl1;
If tbl1 does not exist in DB2, here is pseudo code for this approach:
# import tbl1 from DB1 into DB2 (still named tbl1)
mysqldump DB1 tbl1 | mysql DB2
# then rename tbl1 in DB2 to tbl3
mysql DB2 -N <<< "rename table tbl1 to tbl3"
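A concrete sketch of the pseudo code above (root credentials assumed; each command prompts for the password separately):

mysqldump -u root -p DB1 tbl1 | mysql -u root -p DB2
mysql -u root -p DB2 -e "RENAME TABLE tbl1 TO tbl3"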
I use this from a Linux shell command line:
mysqldump --user=username --password=xxxx dbname | mysql --host=remotehost.com --user=username --password=xxxx -C dbname
This transfers the whole database from the local host to a remote host.
If you also want to copy the contents of the table, you can do:
CREATE TABLE `new_table_name` LIKE `old_table_name`;
INSERT INTO `new_table_name` SELECT * FROM `old_table_name`;
If you have to copy a table from one database to another, use the following. Note that each identifier must be quoted separately; a single backtick pair around db1.new_table_name would create one oddly named table in the current database.
CREATE TABLE `db1`.`new_table_name` LIKE `db2`.`old_table_name`;
INSERT INTO `db1`.`new_table_name` SELECT * FROM `db2`.`old_table_name`;
This worked for me when dumping a single table and importing it was throwing a syntax error with MariaDB.
By default, mysqldump takes a backup of an entire database. I need to back up a single table in MySQL. Is it possible? How do I restore it?
Dump and restore a single table from .sql
Dump
mysqldump db_name table_name > table_name.sql
Dumping from a remote database
mysqldump -u <db_username> -h <db_host> -p db_name table_name > table_name.sql
For further reference:
http://www.abbeyworkshop.com/howto/lamp/MySQL_Export_Backup/index.html
Restore
mysql -u <user_name> -p db_name
mysql> source <full_path>/table_name.sql
or in one line
mysql -u username -p db_name < /path/to/table_name.sql
Dump and restore a single table from a compressed (.sql.gz) format
Credit: John McGrath
Dump
mysqldump db_name table_name | gzip > table_name.sql.gz
Restore
gunzip < table_name.sql.gz | mysql -u username -p db_name
mysqldump can take one or more tbl_name parameters, so that it only backs up the given tables.
mysqldump -u -p yourdb yourtable > c:\backups\backup.sql
try
for line in $(mysql -u... -p... -AN -e "SHOW TABLES FROM NameDataBase");
do
  mysqldump -u... -p... NameDataBase $line > $line.sql ;
done
$line contains the table names ;)
We can take a MySQL dump of any particular table, with any given condition, like below:
mysqldump -uusername -p -hhost databasename tablename --skip-lock-tables
If we want to add a specific WHERE condition on the table, then we can use the following command:
mysqldump -uusername -p -hhost databasename tablename --where="date=20140501" --skip-lock-tables
You can either use mysqldump from the command line (note: no space between -p and the password, otherwise the next word is parsed as the database name):
mysqldump -u username -ppassword dbname tablename > "path where you want to dump"
You can also use MySQL Workbench:
Go to the left pane > Data Export > select the schema > select the tables, and click Export.
You can easily dump selected tables using the MySQL Workbench tool, either individually or as a group of tables in one dump, and then import it as follows (you can also add host information, if the server is not local, with -h IP.ADDRESS.NUMBER after -u username):
mysql -u root -p databasename < dumpfileFOurTableInOneDump.sql
You can use the commands below.
For a single table's structure alone:
mysqldump -d <database name> <tablename> > <filename.sql>
For a single table's structure with data:
mysqldump <database name> <tablename> > <filename.sql>
Hope it will help.
You can use this code:
This example takes a backup of the sugarcrm database and dumps the output to sugarcrm.sql:
# mysqldump -u root -ptmppassword sugarcrm > sugarcrm.sql
# mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql
The sugarcrm.sql file will contain DROP TABLE, CREATE TABLE, and INSERT commands for all the tables in the sugarcrm database. The following is a partial output of sugarcrm.sql, showing the dump information for the accounts_contacts table:
--
-- Table structure for table accounts_contacts
DROP TABLE IF EXISTS `accounts_contacts`;
SET @saved_cs_client = @@character_set_client;
SET character_set_client = utf8;
CREATE TABLE `accounts_contacts` (
`id` varchar(36) NOT NULL,
`contact_id` varchar(36) default NULL,
`account_id` varchar(36) default NULL,
`date_modified` datetime default NULL,
`deleted` tinyint(1) NOT NULL default '0',
PRIMARY KEY (`id`),
KEY `idx_account_contact` (`account_id`,`contact_id`),
KEY `idx_contid_del_accid` (`contact_id`,`deleted`,`account_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
SET character_set_client = @saved_cs_client;
--
Just use mysqldump -u root database table
or, if using a password, mysqldump -u root -ppass database table (note: no space after -p, otherwise pass is parsed as the database name).
I've come across this and wanted to extend others' answers with our fully working example:
This backs up the schema into its own file, then each database table into its own file.
The date format means you can run this as often as your hard drive space allows.
DATE=`date '+%Y-%m-%d-%H'`
BACKUP_DIR=backups/
DATABASE_NAME=database_name
mysqldump --column-statistics=0 --user=fake --password=secure --host=10.0.0.1 --routines --triggers --single-transaction --no-data --databases ${DATABASE_NAME} | gzip > ${BACKUP_DIR}${DATE}-${DATABASE_NAME}--schema.sql.gz
for table in $(mysql --user=fake --password=secure --host=10.0.0.1 -AN -e "SHOW TABLES FROM ${DATABASE_NAME};");
do
echo ""
echo ""
echo "mysqldump --column-statistics=0 --user=fake --password=secure --host=10.0.0.1 --routines --triggers --single-transaction --databases ${DATABASE_NAME} --tables ${table} | gzip > ${BACKUP_DIR}${DATE}-${DATABASE_NAME}-${table}.sql.gz"
mysqldump --column-statistics=0 --user=fake --password=secure --host=10.0.0.1 --routines --triggers --single-transaction --databases ${DATABASE_NAME} --tables ${table} | gzip > ${BACKUP_DIR}${DATE}-${DATABASE_NAME}-${table}.sql.gz
done
We run this as a bash script on an hourly basis; it actually has hour checks, so we only back up some tables through the day, then all tables at night.
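For instance, a hypothetical crontab entry to run it at the top of every hour (the script path and log location here are assumptions):

0 * * * * /usr/local/bin/mysql-table-backup.sh >> /var/log/mysql-table-backup.log 2>&1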
To keep some space on the drives, the script also runs the following to remove backups older than X days.
# HOW MANY DAYS SHOULD WE KEEP
DAYS_TO_KEEP=25
DAYSAGO=$(date --date="${DAYS_TO_KEEP} days ago" +"%Y-%m-%d-%H")
echo $DAYSAGO
rm -Rf ${BACKUP_DIR}${DAYSAGO}-*
echo "rm -Rf ${BACKUP_DIR}${DAYSAGO}-*"