I've tried searching for this but so far I'm only finding results for "exporting the table schema without data," which is exactly the opposite of what I want to do. Is there a way to export data from a SQL table without having the script recreate the table?
Here's the problem I'm trying to solve, in case someone has a better solution: I have two databases, each on a different server; I'll call them the raw database and the analytics database. The raw database is the "real" database: it collects records sent to its server and stores them in a table that uses the transactional InnoDB engine. The analytics database is on an internal LAN; it is meant to mirror the raw database and will be updated periodically so that it matches the raw database. The two are separated like this because we have a program that does some analysis and processing of the data, and we don't want to run it on the live server.
Because the analytics database is just a copy, it doesn't need to be transactional, and I'd like its table to use the MyISAM engine, which I've found to be much faster to import data into and query against. The problem is that when I export the table from the live raw database, the table schema gets exported too, with the engine set to InnoDB, so when I run the script to import the data into the analytics database, it drops the MyISAM table and recreates it as an InnoDB table. I'd like to automate this export/import process, but the generated SQL script changing the table engine from MyISAM to InnoDB is stopping me, and I don't know how to get around it. The only way I know of is to write a program with direct access to the live raw database that runs a query and updates the analytics database with the results, but I'm looking for alternatives to that.
Like this?
mysqldump --no-create-info ...
Use the --no-create-info option:
mysqldump --no-create-info db [table]
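For instance, a minimal sketch of the full round trip (the host, database, and table names are hypothetical):

mysqldump --no-create-info --host=raw-server raw_db records > records_data.sql
mysql --host=analytics-server analytics_db -e "TRUNCATE TABLE records"
mysql --host=analytics-server analytics_db < records_data.sql

With --no-create-info the dump contains only the INSERT statements, so the existing MyISAM table on the analytics side is left untouched; the TRUNCATE simply clears out the old rows before the fresh data is loaded.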
I'd like a snapshot of a live MySQL DB to work with on my development machine. The problem is that the DB is too large, so my thought was to execute:
mysqldump [connection-info-here] --no-autocommit --where="1 limit 1000" mydb > /dump.sql
I think this will give me the first thousand rows of every table in database mydb. I anticipate that the resulting dataset will break a lot of foreign key constraints since some records will be missing. As a result the application I mean to run on the dev machine will fail.
Is there a way to mysqldump a sample of the database while ensuring that all dumped records abide by key constraints? (For instance, if a row carrying a foreign key is dumped, the matching record in the referenced table would also be dumped.)
If that isn't possible, how do you guys deal with this problem?
No, there's no option for mysqldump to dump only rows that match in foreign key relationships. You already know about the --where option, and that won't do it.
I've had the same task as you, to dump a subset of data but only data that is related. For example, for creating a test instance.
I've been using MySQL for many years, I've worked as a MySQL consultant and trainer, and I try to keep up with current tools. I have never heard of any MySQL tool that does this operation.
The only solution I can suggest is to write your own script to dump table by table using SELECT...INTO OUTFILE.
It's sometimes easier to write a custom script just for your specific schema, than for someone to write a general-purpose tool that works for everyone's schema.
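For example, here is a minimal sketch for a hypothetical parent/child pair of tables, where the child dump is restricted to children of the parents that were sampled:

SELECT p.*
FROM parent AS p
WHERE p.id <= 1000
INTO OUTFILE '/tmp/parent.csv'
FIELDS TERMINATED BY ',';

SELECT c.*
FROM child AS c
JOIN parent AS p ON c.parent_id = p.id
WHERE p.id <= 1000
INTO OUTFILE '/tmp/child.csv'
FIELDS TERMINATED BY ',';

The join is what keeps the sample consistent: a child row is only dumped if its parent row was dumped too. The files can then be loaded on the dev machine with LOAD DATA INFILE.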
How I have dealt with this problem in the past is I don't copy data from the live database. I find some other way to create a subset of fake data for testing. It's probably better to create synthetic data anyway, because then you don't risk accidentally using live data in your dev/test environment, in case some of it is private data.
I have two different databases. I have to access data from one database and insert it into another (with some data processing involved; it is not just copying data). Also, the schema is really complex and each table has many rows, so copying the data into the second database's schema is not an option. I have to do this using MySQL Workbench, so I have to do it with SQL queries. Is there a way to create a connection from one database to another and access its data?
While MySQL Workbench can be used to transfer data between servers (e.g. as part of a migration process), it is not useful when you have to process the data first. Instead, you have two other options:
Use a dedicated tool you write yourself to do that (as eddwinpaz mentioned).
Use the capabilities of your server. That is, copy the data to the target server, into a temporary table (using dump and restore). Then use queries to modify the data as you need it. Finally copy it to the target table.
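A minimal sketch of that second approach, with hypothetical host, database, and table names:

mysqldump -h source-host source_db customers > customers.sql
mysql -h target-host target_db < customers.sql

Then, on the target server, run the processing as part of an INSERT ... SELECT and drop the staging copy:

INSERT INTO customers_clean (id, name, country)
SELECT id, TRIM(name), UPPER(country)
FROM customers;

DROP TABLE customers;

The dump/restore step lands the raw rows in a staging table (here simply the dumped customers table), and the INSERT ... SELECT is where the per-row processing happens.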
We are handling a data aggregation project in which several Microsoft SQL Server databases are combined into one MySQL database. All the MSSQL databases have the same schema.
The requirements are :
each MSSQL database can be imported into MySQL independently;
before importing each record into MySQL, we need to validate it against specific criteria via PHP;
each imported MSSQL database can be rolled back; that is, even after it has been imported into MySQL, all of its records can be removed from MySQL again;
we would still like to know which MSSQL database each record imported into MySQL came from.
All import processing will be done with PHP.
We are having difficulty with many aspects of this, and we don't know the best approach to solve our problem.
Your help would be highly appreciated.
PS: each MSSQL database has around 60 tables, and each table can have a few hundred thousand rows.
Don't use PHP as a database administration utility. Any time you build a quick PHP script to transfer records directly from one database to another, you're going to cause yourself a world of hurt when that script becomes required for production operation.
You have a number of problems that you need solved:
You have multiple MSSQL databases with similar if not identical tables.
You have a single MySQL database that you want to merge the data into.
The imported data must be altered in a specific way before being merged.
You want to prevent all duplicate records in your import.
You want to know what database each record originally came from.
The solution?
Analyze the source MSSQL databases and create a merge strategy for them.
Create a database structure on the MySQL database that fits the merge strategy in #1, including all the new key constraints (like unique and foreign keys) required for the consolidation.
At this point you have two options left:
Dump the data from each of the source databases to raw files using your RDBMS administration utility of choice. Alter that data to fit your merge strategy and constraints. Document this, and then merge all of the data into your new database structure.
Use a tool like opendbcopy to map columns from one database to another and run a mass import.
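As a hedged illustration of the structure in step #2, the consolidated table might look something like this (all names are hypothetical):

CREATE TABLE merged_records (
  id INT NOT NULL AUTO_INCREMENT,
  source_db VARCHAR(64) NOT NULL,  -- which MSSQL database this row came from
  source_id INT NOT NULL,          -- the row's primary key in its source database
  -- ... the actual data columns from the shared MSSQL schema go here ...
  PRIMARY KEY (id),
  UNIQUE KEY uq_source (source_db, source_id)  -- rejects duplicate imports
);

The (source_db, source_id) unique key prevents duplicates, records where each row came from, and reduces the rollback requirement to a single DELETE FROM merged_records WHERE source_db = '...'.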
Hope this helps.
I am facing a problem for a task I have to do at work.
I have a MySQL database which holds the information of several of my company's clients, and I have to create a backup/restore procedure to back up and restore that information for any single client. To clarify: if my client A loses his data, I have to be able to recover it while being sure I am not modifying the data of clients B, C, ...
I am not a DB administrator, so I don't know if I can do this using standard mysql tools (such as mysqldump) or any other backup tools (such as Percona Xtrabackup).
To create the backups, my research (and my intuition) led me to this possible solution:
create the restore INSERT statements using the INSERT ... SELECT syntax (http://dev.mysql.com/doc/refman/5.1/en/insert-select.html);
save these INSERTs into an SQL file, either in the proper order or by having the script temporarily disable foreign key checks so that the foreign key constraints are satisfied;
of course, I would do this for all my clients on a daily basis, using one file per client (and day).
Then, if I have to restore the data for a specific client:
I delete all of his remaining data;
I restore the correct data using the SQL file I created during the backup.
This way I believe I can recover the right data for client A without touching the data of client B. Would my solution actually work? Is there a better way to achieve the same result? Or do you need more information about my problem?
Please forgive me if this question is not well-formed; I am new here and this is my first question, so I may be imprecise... thanks anyway for the help.
Note: we will also back up the entire database with mysqldump.
You can use the --where parameter; for example, you could provide a condition like client_id=N. Of course, I am making an assumption here, since you don't provide any information on your schema.
If you have a star schema, you could probably write a small script that backs up all the lookup tables (provided they are adequately small) using the --tables parameter, and use the --where condition for your client data table. For additional performance, you could perhaps partition the table by client_id.
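A minimal sketch of the per-client backup and restore, assuming a hypothetical clients_data table keyed by client_id (note --no-create-info, so the restore replays only INSERTs instead of dropping and recreating the shared table):

mysqldump --no-create-info --where="client_id=42" mydb clients_data > client_42.sql
mysql mydb -e "DELETE FROM clients_data WHERE client_id = 42"
mysql mydb < client_42.sql

The DELETE clears whatever is left of client 42's rows, and replaying the dump restores only that client's data, so the rows of clients B, C, ... are never touched.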
I would like to select data from a second MySQL database in order to migrate data from one server to another.
I'm looking for syntax like
SELECT * FROM username:password@serverip.databaseName.tableName
Is this possible? I would be able to do this in Microsoft SQL Server using linked servers, so I'm assuming it's possible in MySQL as well.
You can create a table using FEDERATED storage engine:
CREATE TABLE tableName (id INT NOT NULL, …)
ENGINE=FEDERATED
CONNECTION='mysql://username:password@serverip/databaseName/tableName'
SELECT *
FROM tableName
Basically, it will serve as a view over the remote tableName.
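With the federated table in place, migrating the data is a plain INSERT ... SELECT; a sketch, assuming a local table localTableName with a compatible structure:

INSERT INTO localTableName
SELECT * FROM tableName;

Keep in mind that every read from tableName goes over the wire to the remote server, so for large tables a dump and restore is usually faster, but for selective migrations the federated approach keeps everything in SQL.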
There are generally two approaches you can take, although neither of them sound like what you're after:
Use replication and set up a master/slave relationship between the two databases.
Simply dump the data (using the command-line mysqldump tool) from the first database and import it into the second.
However, both of these will ultimately migrate all of the data (i.e. not a subset), although you can restrict a dump to specific tables via mysqldump. Additionally, if you use the mysqldump approach and you're not using InnoDB, you'll need to ensure that the source database isn't being written to while the dump is created, so that the dump is internally consistent.
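For the replication option, a minimal sketch of the replica-side setup (classic MySQL replication; the connection values are hypothetical, and the source server must already have binary logging enabled and a unique server_id configured):

CHANGE MASTER TO
  MASTER_HOST = 'source-server',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;

Once started, the replica continuously applies the source's changes, so the second database stays current instead of being refreshed by periodic dumps.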
You can't do this directly, but as someone else alluded to in a comment, you can use mysqldump to export the contents of a table as a SQL script.
At that point you could run the script on the new server to create the table, or if more manipulation of the data is required, import that data into a table with a different name on the new server, then write a query to copy the data from there.