I have hundreds of SQL files, and I want to restore each one into a database with a different name.
I looked around for a solution, but all I found was advice to concatenate all the files into one SQL file using cat and then restore from the concatenated file.
But I want each file restored into a different database, so I don't think concatenation suits my case.
Here's one solution: alternate USE statements with your SQL files, so that you switch the default database before each database's content. Gather the whole collection together and pipe it into the mysql client.
Example using bash syntax:
(
echo "USE database1;"
cat file1.sql
echo "USE database2;"
cat file2.sql
...
) | mysql
Another solution is to run the mysql client once for each file, and specify the database name as the argument:
mysql database1 < file1.sql
mysql database2 < file2.sql
...
Re your comment:
You can write a loop in bash too.
for file in *.sql
do
db=...
mysql "$db" < "$file"
done
The tricky part above is the "...": deciding which database goes with each input SQL file. You haven't described any way to match them, so I don't know how you would figure that out. But if you can infer the database name somehow from the filename, then you can do this without having to type every file (see the sketch below).
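For instance, if each file happens to be named after its target database (an assumption on my part; adjust to your own naming scheme), the loop could derive the name like this:
for file in *.sql
do
  # hypothetical convention: customers.sql gets restored into a database named "customers"
  db=$(basename "$file" .sql)
  mysqladmin create "$db"      # create the database if it doesn't exist yet
  mysql "$db" < "$file"
done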
I have access to a MySQL database hosted on a remote server. I am attempting to migrate this to a local SQLite database. To do this, I am using this script, as suggested by this question. The usage is
./mysql2sqlite mysqldump-opts db-name | sqlite3 database.sqlite
I tried doing exactly that (with no dump options) and sqlite3 returned an error:
Error: near line 4: near "SET": syntax error
So far, I have found that when I only specify one of my tables in the dump options like so
./mysql2sqlite db-name table-B | sqlite3 database.sqlite
it appears to work fine, but when I specify the first table (let's call it table-A) it returns this error. I'm pretty sure the error comes from the output of mysql2sqlite. The 4th line of the dump file (or rather, the statement that starts on the 4th line) looks like this:
CREATE TABLE "Association_data_interaction" (
"id" int(10) DEFAULT NULL,
...
"Comments" text CHARACTER SET latin1,
...
"Experiment" text CHARACTER SET latin1,
"Methods" text CHARACTER SET latin1,
...
);
With many other rows removed. I don't really know SQL that well, but as far as I can tell, the migration script outputs a dump file with the statements needed to create a new database; it has to translate MySQL's dump syntax into the syntax sqlite3 expects, and it is failing to handle the text fields properly. I know that when I run SHOW COLUMNS in the MySQL database, the Comments, Experiment, and Methods columns are of the text type. What can I do to make sqlite3 accept the database?
Note: I have editing access to the database, but I would much prefer to avoid that if at all possible. I do not believe I have administrative access to the database. Also, if it's relevant, the database has about 1000 tables, most of which have about 10,000 rows and 10-50 columns. I'm not too interested in the performance characteristics of the database; they're currently good enough for me.
That script is buggy; one of the bugs is that it expects a space after the character set name (i.e. before the final comma):
gsub( /(CHARACTER SET|character set) [^ ]+ /, "" )
Replace that line with:
gsub( /(CHARACTER SET|character set) [^ ]+/, "" )
Here's my code.
SELECT *
FROM `accounts`
WHERE NOT name REGEXP '^[[.NUL.]-[.DEL.]]*$'
I want all non-keyboard characters across all tables to be replaced with a space.
I'm hoping that someone can actually do this.
You aren't going to be able to do this easily in SQL.
The most straightforward approach would be to take a logical backup of your database, use sed or perl (or some similar tool to do the string replacement), and then re-import the data.
You should test this by importing the data into a test database (or at least a test schema) to make sure it doesn't harm your data.
Assuming that by "non-keyboard" characters you mean non-printable characters, and that you are on Linux, you can do this with a combination of mysqldump and sed, like so:
# dump your schema and data
mysqldump --single-transaction your_schema > /tmp/your_schema.sql
# copy the dump file and replace all non-printable characters with a space
sed -e 's/[^[:print:]]/ /g' /tmp/your_schema.sql > /tmp/your_schema_test.sql
# create an empty test schema to test the import
mysqladmin create your_schema_test
# import the data into the test schema
mysql -f your_schema_test < /tmp/your_schema_test.sql
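Once you have verified the test import, the same cleaned dump can be loaded into the real schema in the same way (only do this after checking the test data, since the dump will drop and recreate the tables):
# re-import the cleaned data into the original schema
mysql -f your_schema < /tmp/your_schema_test.sql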
I'm not sure if this is a MySQL problem or an SSH problem. However, the issue does not happen when pasting into another terminal program such as nano, or into a bash script.
I have a mysqldump file containing a bunch of lines that look like
INSERT INTO `issues` VALUES (10,'2010-06-21 16:16:08','2010-08-31 11:4...
with a lot of different entries (the lines are really long). I'm trying to paste it into my terminal to insert this data into a remote database, but when I paste, it seems to arrive in pieces, with the chunks inserted out of order. Here is an example post-mangling:
','May 2011',8,45);','April 2011',7,45),(21,'2011-05-09 09:31:28','2011-05-09 09:31:28','2011-05-12 08:48:16','','March 2011',6,45),(20,'2011-04-07 13:45:14','2011-04-07 13:45:14','2011-04-13 16:00:28','','February 2011',5,45),(19,'2011-03-03 13:36:26','2011-03-03 13:36:26','2011-03-10 08:34:19','','December 2010',4,45),(18,'2011-02-01 13:43:16','2011-02-01 13:43:16','2011-02-15 11:22:09','','November 2010',3,45),(17,'2010-12-07 12:04:53','2010-12-07 12:04:53','2010-12-09 10:00:02','','October 2010',2,45),(16,'2010-11-05 13:04:06','2010-11-05 13:04:06','2010-11-15 11:29:29','','September 2010',1,45),(14,'2010-10-05 08:58:27','2010-10-11 13:28:54','2010-10-12 07:21:20','INSERT INTO `issues` VALUES (10,'2010-06-21 16:16:08','2010-08-31 11:44:18','2010-10-11 12:33:46','\0','June 2010',0,45),(11,'2010-06-24 15:13:00','2010-06-24 15:13:22',NULL,'\0','May 2010',0,45),(12,'2010-08-25 12:47:42','2010-08-31 11:44:02','2010-10-11 12:33:59','
Does anyone know what is causing this issue, and a workaround? I've tried assuming it's just a display issue and inserting it anyway, but that's a no-go; it really is out of order. I'm using 10.6.6 Snow Leopard and Apple's Terminal.
Thanks
I had a similar issue pasting into both mysql and bash over ssh. The culprit was Unicode characters that got interpreted as control characters.
Symptoms: When pasting the text, the insert position would jump back to position 1 several times, seemingly at random, without starting a new line, and then new text would be inserted or already-pasted text would be overwritten, resulting in a garbled mess.
Cause: It turned out that the text to paste had some Unicode dash-like characters where plain dashes should have been, and bash and mysql interpreted these as control characters that move the insert column position.
So make sure your text does not contain unwanted characters before pasting.
Test: A good test for me has been to open vi (over ssh or not), go into insert mode (press "i") and paste (press Shift+Insert). It will display the Unicode characters broken up into their byte sequences.
Example: My text to paste started like this:
mysqldump –-opt –-no-create-db
Pasting this into bash or mysql via ssh resulted in:
--no-create-dbopt mysqldump
Pasting this into vi made the bad dash-like characters visible, resulting in:
mysqldump �~#~S-opt �~#~S-no-create-db
That means the first dash of every option was the wrong character. I corrected them, and then everything worked fine:
mysqldump --opt --no-create-db
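If you want to check a file for such characters before pasting, something like the following works for me (a sketch of my own, assuming GNU grep and sed in a UTF-8 locale):
# show any non-ASCII bytes, with line numbers
grep -nP '[^\x00-\x7F]' dump.sql
# replace the en-dash character with a plain hyphen, writing to a new file
sed 's/–/-/g' dump.sql > dump_clean.sql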
I like to do:
ssh hostname mysqldump database | mysql localdatabase
And just do the whole thing in one command.
You can of course add various options to the dump command to skip table drop and creation or other things you don't need.
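For example (a sketch using standard mysqldump flags; pick whichever apply to your situation):
# dump data only, without CREATE TABLE or DROP TABLE statements
ssh hostname mysqldump --single-transaction --no-create-info --skip-add-drop-table database | mysql localdatabase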
What's the easiest way to get the data for a single table, delete a single table or break up the whole dump file into files each containing individual tables? I usually end up doing a lot of vi regex munging, but I bet there are easier ways to do these things with awk/perl, etc. The first page of Google results brings back a bunch of non-working perl scripts.
When I need to pull a single table from an SQL dump, I use a combination of grep, head and tail.
Eg:
grep -n "CREATE TABLE" dump.sql
This then gives you the line numbers for each one, so if your table is on line 200 and the one after is on line 269, I do:
head -n 268 dump.sql > tophalf.sql
tail -n 69 tophalf.sql > yourtable.sql
I would imagine you could extend those principles into a script that splits the whole thing into one file per table.
Anyone want to have a go at it here?
Another bit that might help get a bash loop going:
grep -n "CREATE TABLE " dump.sql | tr ':`(' ' ' | awk '{print $1, $4}'
That gives you a nice list of line numbers and table names like:
200 FooTable
269 BarTable
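Putting those pieces together, here's my own rough sketch of a splitter (not heavily tested; it assumes bash 4+, that table names contain no spaces, and that each table's section runs from its CREATE TABLE line up to the line before the next one):
dump=dump.sql
# build a "line-number table-name" list, as above
mapfile -t entries < <(grep -n "CREATE TABLE " "$dump" | tr ':`(' ' ' | awk '{print $1, $4}')
total=$(( $(wc -l < "$dump") ))
for i in "${!entries[@]}"; do
  start=${entries[$i]%% *}                     # line the CREATE TABLE starts on
  name=${entries[$i]#* }                       # table name
  if (( i + 1 < ${#entries[@]} )); then
    end=$(( ${entries[$((i+1))]%% *} - 1 ))    # stop just before the next table
  else
    end=$total                                 # last table runs to end of file
  fi
  sed -n "${start},${end}p" "$dump" > "${name}.sql"
done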
Save yourself a lot of hassle and use mysqldump -T if you can.
From the documentation:
--tab=path, -T path
Produce tab-separated data files. For each dumped table, mysqldump
creates a tbl_name.sql file that contains the CREATE TABLE statement
that creates the table, and a tbl_name.txt file that contains its
data. The option value is the directory in which to write the files.
By default, the .txt data files are formatted using tab characters
between column values and a newline at the end of each line. The
format can be specified explicitly using the --fields-xxx and
--lines-terminated-by options.
Note This option should be used only when mysqldump is run on the
same machine as the mysqld server. You must have the FILE privilege,
and the server must have permission to write files in the directory
that you specify.
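For example (a sketch; the directory must be writable by the MySQL server, and on newer servers secure_file_priv may restrict where it is allowed to write):
# writes one tbl_name.sql and one tbl_name.txt per table into the given directory
mysqldump --tab=/path/to/dump-dir your_schema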
This shell script will grab the tables you want and pass them to splitted.sql.
It’s capable of understanding regular expressions as I’ve added a sed -r option.
Also MyDumpSplitter can split the dump into individual table dumps.
Maatkit seems quite appropriate for this with mk-parallel-dump and mk-parallel-restore.
I am a bit late to this one, but if it can help anyone: I had to split a huge SQL dump file in order to import the data into another MySQL server.
What I ended up doing was splitting the dump file using the split command.
split -l 1000 import.sql splited_file
The above splits the SQL file every 1000 lines.
Hope this helps someone.