I've got a situation where I need to copy several tables from one SQL Server database to a separate SQL Server database. Both databases are on the same instance. The tables I'm copying each contain at least 4.5 million rows and are upwards of 40GB in size.
I've used BCP before but am not hugely familiar with it, and I haven't been able to find any documentation on whether you can use BCP to copy directly from table to table without writing to a file in between.
Is this possible? If so, how?
EDIT: The reason we're not using a straightforward INSERT is that we have limited space on the log drive on the server, which disappears almost instantly when attempting the INSERT. We did try it, but the query quickly slowed to a snail's pace as the log drive filled up.
from my answer at Table-level backup
I am using bcp.exe to achieve table-level backups
to export:
bcp "select * from [MyDatabase].dbo.Customer " queryout "Customer.bcp" -N -S localhost -T -E
to import:
bcp [MyDatabase].dbo.Customer in "Customer.bcp" -N -S localhost -T -E -b 10000
as you can see, you can export based on any query, so you can even do incremental backups with this.
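For example, an incremental export bounded by a date column might look like this (the ModifiedDate column and the cutoff value are hypothetical):
bcp "select * from [MyDatabase].dbo.Customer where ModifiedDate >= '20240101'" queryout "Customer_incremental.bcp" -N -S localhost -T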
BCP is for dumping to / reading from a file. Use DTS/SSIS to copy from one DB to another.
Here are the BCP docs at MSDN
The SQL Server Import/Export wizard will do the job: just connect twice to the same instance (once for the source database, once for the destination) and copy one table onto the other (empty and already indexed). You might want to tell it to ignore the auto-numeric Id key field if one exists. This approach has worked for me with tables of over 1M records.
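If you end up going back to plain T-SQL instead, one way to work around the log-drive problem mentioned in the question is to copy in small batches, so each transaction stays small and the log can truncate between batches (under SIMPLE recovery or with frequent log backups). A rough sketch only, with hypothetical database, table, and column names:
DECLARE @batch INT = 50000, @last BIGINT = 0, @rows INT = 1;
WHILE @rows > 0
BEGIN
    -- copy the next batch of rows, ordered by an increasing key (Id is hypothetical)
    INSERT INTO TargetDb.dbo.BigTable (Id, Col1, Col2)
    SELECT TOP (@batch) Id, Col1, Col2
    FROM SourceDb.dbo.BigTable
    WHERE Id > @last
    ORDER BY Id;
    SET @rows = @@ROWCOUNT;
    -- remember how far we got so the next batch starts after it
    IF @rows > 0
        SELECT @last = MAX(Id) FROM TargetDb.dbo.BigTable;
END;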
I am in the process of upgrading our AWS Aurora RDS cluster from MySQL 5.7 to MySQL 8.0. During the upgrade the process failed, and AWS recommended we run OPTIMIZE TABLE on a number of DB tables before proceeding again. I was able to generate a list of tables that need to be optimized by running a script from AWS, but there are over 1200 tables that need it. I am not very experienced with SQL scripting, but I have used PowerShell and Bash scripting many times before.
What is the best way to script this, so that I can provide a list of tables in a text file or a DB table and then run OPTIMIZE TABLE on each row or line?
I have written similar scripts in Bash or PowerShell as for-each loops with the tables listed in a text file, but I am not sure how to do a similar process in SQL. Any help is appreciated.
Thanks!
Output a list of tables formatted into OPTIMIZE TABLE statements:
mysql -B -N -e "select concat('optimize table ',table_schema,'.',table_name,';')
from information_schema.tables where table_type='BASE TABLE' AND ..." > myscript.sql
Add any conditions you want where I put ..., to filter the tables you want to list. For example: AND TABLE_SCHEMA='myschema'.
Run that result as an SQL script:
mysql -e "source myscript.sql"
(I omitted options for --host, --user, and --password, or any other options you may need to connect to your instance. I usually put these into ~/.my.cnf anyway.)
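If you would rather stay with the for-each style you already know, roughly the same thing can be done from Bash, assuming a file tables.txt with one schema.table name per line (the file name is hypothetical; connection options omitted as above):
while read -r tbl; do
    # run OPTIMIZE TABLE for each schema.table listed in the file
    mysql -e "OPTIMIZE TABLE ${tbl};"
done < tables.txt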
How do I run these commands in MySQL Workbench to create a logical backup of a BOOK table without using the command line? I put the commands below into Workbench and it shows an error: "mysqldump" is not valid at this position.
mysqldump [arguments] > file-name
mysqldump csit115 BOOK --user csit115 --password
--verbose --lock_tables > book.bak
Physical backups consist of raw copies of the directories and files that store database contents. This type of backup is suitable for large, important databases that need to be recovered quickly when problems occur.
Logical backups save information represented as logical database structure (CREATE DATABASE, CREATE TABLE statements) and content (INSERT statements or delimited-text files). This type of backup is suitable for smaller amounts of data where you might edit the data values or table structure, or recreate the data on a different machine architecture.
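As for the error itself: mysqldump is a separate command-line program, not an SQL statement, so the Workbench SQL editor cannot parse it, which is why it reports "mysqldump" is not valid at this position. If a shell turns out to be acceptable after all, the dump would look roughly like this (using the database and table names from the question):
mysqldump --user=csit115 --password --verbose --lock-tables csit115 BOOK > book.bak
To stay inside Workbench without a shell, its Data Export screen (under the Server menu, if I recall the location correctly) drives mysqldump for you.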
The problem is that one .MYI and one .MYD file from a MySQL database have been accidentally deleted. The only file left intact is the .FRM one. Only one table in the whole database is damaged this way; all the other tables are OK and the database generally works fine, except for the table with the deleted files, which is obviously inaccessible.
There's a full database dump in pure SQL format available.
The question is: how do I re-create these files and this table in a safe and proper manner?
My first idea was to extract the full CREATE TABLE command from the dump and run it on the live database. That's not so easy, as the whole dump file is over 10GB, so any operation on its contents is a real pain. Yes, I know about sed and how to use it, but I consider it the last option to choose.
My second and current idea is to create a copy of this database on an independent server, make a dump of the table in question, and then use the resulting SQL file to create the table again on the production server. I'm not very experienced with MySQL administration tasks (only the basic ones), but this option seems safe and reasonable to me.
Will the second option work as I expect?
Is it the best option, or are there any more recommendable solutions?
Thank you in advance for your help.
The simplest solution is to copy the table whose files you deleted. There's a chance mysqld still has an open file handle to the data files you deleted: on UNIX/Linux/OS X, a file isn't truly gone from disk while some process still holds an open file handle to it.
So you might be able to do this:
mysql> CREATE TABLE mytable_copy LIKE mytable;
mysql> INSERT INTO mytable_copy SELECT * FROM mytable;
If you've restarted MySQL Server since you deleted the files, this won't work. If the server has closed its file handle to the data file, this won't work. If you're on Windows, I have no idea.
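On Linux you can check first whether mysqld still holds the deleted files open (a quick sanity check, assuming lsof is installed):
lsof -c mysqld | grep -i deleted
Files that were deleted but are still held open show up with "(deleted)" after the path; if nothing is listed, the handle is gone and this approach won't work.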
The next simplest solution is to restore your existing 10GB dump file to a temporary instance of MySQL Server, as you said. I'd use MySQL Sandbox, but some people would use a virtual machine or, if you're in an AWS environment, launch a spot EC2 instance or a small RDS instance.
Then dump just the table you need:
mysqldump -h tempserver mydatabase mytable > mytable.sql
Then restore it to your real server.
mysql -h realserver mydatabase < mytable.sql
(I'm omitting the user and password options; I prefer to put those in .my.cnf anyway.)
I'd like to download a copy of a MySQL database (InnoDB) to use it locally. Since the database is growing rapidly, I want to find out a way to speed up this process and save bandwidth.
I'm using this command to copy the database to my local computer (Ubuntu):
ssh myserver 'mysqldump mydatabase --add-drop-database | gzip' | zcat | mysql mydatabase
I've added multiple --ignore-table options to skip tables that don't need to be up to date.
I've already got an (outdated) version of the database, so there is no need to download all the tables (some tables hardly change). I'm thinking of using the checksum of each table and adding unchanged tables to the --ignore-table list.
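For the comparison itself, something like MySQL's CHECKSUM TABLE statement could be run on both sides (the table name here is just an example):
ssh myserver 'mysql -N -B -e "CHECKSUM TABLE mydatabase.mytable"'
mysql -N -B -e "CHECKSUM TABLE mydatabase.mytable"
If the two values match, that table could go onto the --ignore-table list for the next dump. One caveat I'm aware of: the checksum can depend on the row storage format, so it's only a reliable comparison when both servers store the table the same way.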
Since I can't find many examples of using checksums with mysqldump, either I'm brilliant (not very likely) or there is an even better way to download (or better: one-way sync) the database in a smart way.
Database replication is not what I'm looking for, since that requires a binary log. That's a bit overkill.
What's the best way to one-way sync a database, ignoring tables that haven't been changed?
One solution could be using the mysqldump --tab option (delimited mysqldump output), which writes one .sql file with the CREATE TABLE statement and one tab-delimited .txt data file per table.
mkdir /tmp/dbdump
chmod 777 /tmp/dbdump
mysqldump --user=xxx --password=xxx --skip-dump-date --tab=/tmp/dbdump database
Then use rsync with --checksum to send only the changed files to the destination (--skip-dump-date keeps unchanged tables byte-identical between runs, so rsync skips them). Run the create scripts, then load the data with LOAD DATA INFILE.
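Roughly, the transfer and reload could look like this (paths and the table name are examples; connection options omitted):
rsync -a --checksum myserver:/tmp/dbdump/ /tmp/dbdump/
mysql database < /tmp/dbdump/mytable.sql
mysql -e "LOAD DATA INFILE '/tmp/dbdump/mytable.txt' INTO TABLE mytable" database
Note that LOAD DATA INFILE reads the file on the server host, so the local server must be allowed to read from that directory (check secure_file_priv); the default field and line terminators match what --tab writes.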
I have a PostgreSQL database with 4-5 tables (some of which have more than 20 million rows). I have to replicate this entire database onto another machine. However, that machine has MySQL on it (and for some reason I cannot install PostgreSQL there).
The database is static and is not updated or refreshed. There is no need to sync between the databases once the replication is done. So basically, I am trying to back up the data.
There is a utility called pg_dump which will dump the contents to a file. I can zip and FTP this to the other server. However, I do not have psql on the other machine to reload this into a database. Is there any possibility that MySQL might parse and load this file into a consistent database?
Postgres is version 9.1.9 and mysql is version 5.5.32-0ubuntu0.12.04.1.
Is there any other simple way to do this without installing any services?
Depends on what you consider "simple". Since it's only a small number of tables, the way I'd do it is like this:
dump individual tables with pg_dump -t table_name --column-inserts
edit the individual files, changing the schema definitions to be compatible with MySQL (e.g. AUTO_INCREMENT instead of serial, and so on; like this guide, only in reverse: http://www.xach.com/aolserver/mysql-to-postgresql.html)
load the files with the mysql client like you would any other SQL script.
If the files are too large for step #2, use the -s (schema-only) and -a (data-only) arguments to pg_dump to dump the schema and the data separately, then edit only the schema file and load both files into mysql.
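A rough sketch of that split, with hypothetical database and table names:
pg_dump -s -t mytable mydb > mytable_schema.sql
pg_dump -a -t mytable --column-inserts mydb > mytable_data.sql
# hand-edit mytable_schema.sql for MySQL (auto_increment, type names, etc.), then:
mysql mydb < mytable_schema.sql
mysql mydb < mytable_data.sql
With --column-inserts every row becomes a plain INSERT statement that names its columns, which MySQL can usually execute as-is.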