Is there a SQL command to copy many tables with a specific prefix (i.e. yot_) between two MySQL databases? The DB user has access to both of the databases.
There's no SQL statement of any kind that operates on tables using wildcards. You must name tables explicitly.
You can, however, generate the statements by querying the INFORMATION_SCHEMA:
SELECT CONCAT(
'RENAME TABLE my_old_schema.`', TABLE_NAME,
'` TO my_new_schema.`', TABLE_NAME, '`;'
) AS _stmt
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE 'yot\_%'
AND TABLE_SCHEMA='my_old_schema';
That's an example of generating a series of RENAME TABLE statements, which will move the tables from one schema to another, but it demonstrates the technique.
You can make a copy instead of a move with CREATE TABLE new_table LIKE old_table; followed by INSERT INTO new_table SELECT * FROM old_table;, as sketched below.
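If a copy rather than a move is what you need, the same generator pattern works for that too. A minimal sketch, assuming the same schema names and prefix as above (run the generated statements afterwards):
SELECT CONCAT(
'CREATE TABLE my_new_schema.`', TABLE_NAME, '` LIKE my_old_schema.`', TABLE_NAME, '`; ',
'INSERT INTO my_new_schema.`', TABLE_NAME, '` SELECT * FROM my_old_schema.`', TABLE_NAME, '`;'
) AS _stmt
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE 'yot\_%'
AND TABLE_SCHEMA = 'my_old_schema';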
In my project I have two code paths that interact with MySQL during first time setup. The first step is the database structure creation, in here, a user has the ability to pick and choose the features they want - and some tables may not end up being created in the database depending on what the user selects.
In the second part, I need to preload the tables that did get created with some basic data - how could I go about inserting rows into these tables, only if the table exists?
I know of IF NOT EXISTS, but as far as I know that only works when creating tables. I am trying to do something like this:
INSERT INTO table_a ( `key`, `value` ) VALUES ( "", "" ) IF EXISTS table_a;
This is loaded through a file that contains a lot of entries, so letting it throw an error when the table does not exist is not an option.
IF (SELECT COUNT(*) FROM information_schema.tables
    WHERE table_schema = 'databasename' AND table_name = 'tablename') > 0
THEN
    -- your INSERT statement here
END IF;
Use information_schema to check for the existence of the table. Note that an IF ... END IF block like this only runs inside a stored program (a procedure, function, trigger, or event), not as a standalone statement; see the sketch below.
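A minimal sketch of wrapping that check in a stored procedure (the procedure name conditional_seed is a placeholder; the schema name and the table_a columns follow the question):
DELIMITER //
CREATE PROCEDURE conditional_seed()
BEGIN
  -- Only attempt the INSERT when the table is present in this schema
  IF (SELECT COUNT(*) FROM information_schema.tables
      WHERE table_schema = 'databasename' AND table_name = 'table_a') > 0 THEN
    INSERT INTO table_a (`key`, `value`) VALUES ('', '');
  END IF;
END//
DELIMITER ;
CALL conditional_seed();
MySQL resolves table references inside a procedure only when the statement actually runs, so the INSERT is skipped cleanly when the table is absent.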
If you know that a particular table does exist with at least 1 record (or you can create a dummy table with just a single record) then you can do a conditional insert this way without a stored procedure.
INSERT INTO table_a (`key`, `value`)
SELECT "", "" FROM known_table
WHERE EXISTS (SELECT *
              FROM information_schema.TABLES
              WHERE TABLE_SCHEMA = 'your_db_name' AND TABLE_NAME = 'table_a')
LIMIT 1;
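If there is no table you can rely on, the dummy single-row table mentioned above can be created first. A minimal sketch (the name known_table follows the answer; the column and value are arbitrary):
CREATE TABLE IF NOT EXISTS known_table (dummy INT);
-- Make sure the helper table holds exactly one row
INSERT INTO known_table
SELECT 1 FROM DUAL
WHERE NOT EXISTS (SELECT * FROM known_table);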
I have run into trouble when copying my MySQL tables to new ones, excluding the data, using the query:
CREATE TABLE foo SELECT * FROM bar WHERE 1=0.
The tables are copied and the structure and column names are correctly created, but there is a problem with the auto_increment and primary key fields: they are not carried over from the original table (the fields are no longer PKs or AUTO_INCREMENT). I am using MySQL 5.5 and phpMyAdmin (PMA) 3.5.8.2.
I hope someone can help me out.
Thank you SO.
You will probably have to run 2 queries.
CREATE TABLE foo LIKE bar;
ALTER TABLE foo AUTO_INCREMENT = (SELECT `AUTO_INCREMENT` FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'DatabaseName' AND TABLE_NAME = 'bar');
You would have to replace DatabaseName with the name of your database. This is untested, but I think it will give you what you are looking for.
So I tried testing the above query, and the ALTER TABLE statement fails because of the subquery. There might be a better way, but what worked for me was to put the auto-increment value into a variable and then prepare and execute the statement.
For example you would go ahead and create your table first:
CREATE TABLE foo LIKE bar;
Then set your ALTER TABLE statement into a variable
SET @ai = CONCAT("ALTER TABLE foo AUTO_INCREMENT = ", (SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'databasename' AND TABLE_NAME = 'bar'));
Finally, you would prepare and execute the statement.
PREPARE query FROM @ai;
EXECUTE query;
DEALLOCATE PREPARE query;
Other than your columns, the rest of the table structure (indexes, primary keys, triggers, etc.) is not copied by this kind of statement. You either need to run a series of ALTER TABLE statements to add that structure back, or you need to create the table with all the surrounding structure first and then load it with your SELECT, as sketched below.
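As a rough illustration of those two options (pick one; both create foo, and the column name id with its AUTO_INCREMENT primary key is only an assumption about bar):
-- Option 1: copy the columns only, then add the structure back
CREATE TABLE foo SELECT * FROM bar WHERE 1 = 0;
ALTER TABLE foo
  MODIFY id INT NOT NULL AUTO_INCREMENT,
  ADD PRIMARY KEY (id);
-- Option 2: copy the full structure first, then load the data
CREATE TABLE foo LIKE bar;
INSERT INTO foo SELECT * FROM bar;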
I am using MySQL. I have a table called EMP, and now I need to create one more table (EMP_TWO) with the same schema, same columns, and same constraints. How can I do this?
To create a new table based on another table's structure and constraints, use:
CREATE TABLE new_table LIKE old_table;
To copy the data across, if required, use
INSERT INTO new_table SELECT * FROM old_table;
Create table docs
Beware of the notes on the LIKE option:
Use LIKE to create an empty table based on the definition of another table, including any column attributes and indexes defined in the original table:
CREATE TABLE new_table LIKE original_table;
The copy is created using the same version of the table storage format as the original table. The SELECT privilege is required on the original table.
LIKE works only for base tables, not for views.
CREATE TABLE ... LIKE does not preserve any DATA DIRECTORY or INDEX DIRECTORY table options that were specified for the original table, or any foreign key definitions.
If you want to copy only the structure, use:
create table new_tbl like old_tbl;
If you want to copy the structure as well as the data, use:
create table new_tbl select * from old_tbl;
Create table in MySQL that matches another table?
Ans:
CREATE TABLE new_table AS SELECT * FROM old_table;
Why don't you go like this:
CREATE TABLE new_table AS SELECT * FROM Old_Table;
or you can filter the data like this:
CREATE TABLE new_table AS SELECT column1, column2, column3 FROM Old_Table WHERE column1 = Value1;
To have the same constraints in your new table, you first have to create the schema and then load the data. For the schema creation use:
CREATE TABLE new_table LIKE Some_other_Table;
Using the following command on the MySQL 8.0 command line:
mysql> select * into at from af;
the following error is displayed:
ERROR 1327 (42000): Undeclared variable: at
So, to copy just the exact schema without the data, you can use CREATE TABLE with the LIKE statement as follows:
create table EMP_TWO like EMP;
And to copy the table along with its data, use:
create table EMP_TWO select * from EMP;
To copy only the table's data after creating an empty table:
insert into EMP_TWO select * from EMP;
I'm using a MySQL GUI to migrate some sites to a new version of a CMS by selecting certain tables and running the INSERT statement generated from a backup dump into an empty table (the new schema). There are a few columns in the old tables that don't exist in the new one, so the script stops with an error like this:
Script line: 1 Unknown column 'user_id' in 'field list'
Cherry-picking the desired columns to export, or editing the dump file would be too tedious and time consuming. To work around this I'm creating the unused columns as the errors are generated, importing the data by running the query, then dropping the unused columns when I'm done with that table. I've looked at INSERT IGNORE, but this seems to be for ignoring duplicate keys (not what I'm looking for).
Is there any way to perform an INSERT while ignoring columns that don't exist in the target table? I'm looking for something "painless", like some existing SQL functionality.
To clarify, I'm working with a bunch of backup files and importing the data to a local database for testing before moving it to the live server. Example of the kind of solution I'm hoping for:
-- Don't try to insert data from columns that don't exist in "new_table"
INSERT INTO `new_table` {IGNORE UNKNOWN COLUMNS} (`id`, `col1`, `col2`) VALUES
(1, '', ''),
(2, '', '');
If something like this simply doesn't exist, I'm happy to accept that as an answer and continue to use my current workaround.
Your current technique seems practical enough. Just one small change.
Rather than waiting for an error and then creating columns one by one, you can just export the schemas, diff them, and find all the missing columns in all the tables.
That way it would be less work.
Your GUI should be capable of exporting just the schema; otherwise, the following mysqldump switch dumps the table definitions only, which is all you need to find the missing columns:
mysqldump --no-data -uuser -ppassword --databases dbname1 > dbdump1.sql
mysqldump --no-data -uuser -ppassword --databases dbname2 > dbdump2.sql
Diffing dbdump1.sql and dbdump2.sql will give you all the differences between the two databases.
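If you would rather stay inside MySQL, here is a sketch of the same comparison against information_schema (assuming the schema names dbname1 and dbname2 used above); it lists columns present in the first database but missing from the second:
SELECT c1.TABLE_NAME, c1.COLUMN_NAME
FROM information_schema.COLUMNS c1
LEFT JOIN information_schema.COLUMNS c2
  ON  c2.TABLE_SCHEMA = 'dbname2'
  AND c2.TABLE_NAME   = c1.TABLE_NAME
  AND c2.COLUMN_NAME  = c1.COLUMN_NAME
WHERE c1.TABLE_SCHEMA = 'dbname1'
  AND c2.COLUMN_NAME IS NULL;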
You can write a stored function like this:
sf_getcolumns(table_name varchar(100))
that returns a string containing the field list, like:
'field_1,field_2,field_3,...'
Then create a stored procedure
sp_migrate (IN src_db varchar(50), IN target_db varchar(50))
that runs through the tables, gets the field list for each one, and builds a string like
cmd = 'INSERT INTO ' || <target_db>.<table_name> || ' (' || <fields_list> || ') SELECT ' || <fields_list> || ' FROM ' || <src_db>.<table_name>
then executes the string for each table.
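A minimal sketch of that idea, collapsed into one stored procedure that handles a single table per call (the name sp_migrate and the varchar lengths are assumptions; GROUP_CONCAT may need a larger group_concat_max_len for very wide tables):
DELIMITER //
CREATE PROCEDURE sp_migrate(IN src_db VARCHAR(64), IN target_db VARCHAR(64), IN tbl VARCHAR(64))
BEGIN
  -- Comma-separated list of the columns the two tables have in common
  SET @cols = (
    SELECT GROUP_CONCAT(c1.COLUMN_NAME)
    FROM information_schema.COLUMNS c1
    JOIN information_schema.COLUMNS c2
      ON c1.COLUMN_NAME = c2.COLUMN_NAME
    WHERE c1.TABLE_SCHEMA = src_db AND c1.TABLE_NAME = tbl
      AND c2.TABLE_SCHEMA = target_db AND c2.TABLE_NAME = tbl
  );
  -- Build and run: INSERT INTO target (cols) SELECT cols FROM source
  SET @sql = CONCAT(
    'INSERT INTO `', target_db, '`.`', tbl, '` (', @cols, ') ',
    'SELECT ', @cols, ' FROM `', src_db, '`.`', tbl, '`'
  );
  PREPARE stmt FROM @sql;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END//
DELIMITER ;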
Is there any way to perform an INSERT while ignoring columns that don't exist in the target table? I'm looking for something "painless", like some existing SQL functionality.
No, there is no "painless" way to do so.
Instead, you must explicitly handle those columns which do not exist in the final tables. For example, you must remove them from the input stream, drop them after the fact, play dirty tricks (ENGINE=BLACKHOLE plus triggers that INSERT only what you want into the true target schema; sketched below), whatever.
Now, this doesn't necessarily need to be manual -- there are tools (as Devart noted) and ways to query the db catalog to determine column names. However, it's not as easy as simply annotating your INSERT statements.
Perhaps the CMS vendor can supply a reasonable migration script?
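For completeness, a sketch of the BLACKHOLE + trigger trick mentioned above: the dump is loaded into a throwaway "landing" table that matches the old layout and discards its rows, while a BEFORE INSERT trigger forwards only the wanted columns. All table and column names here are illustrative:
-- Landing table with the old layout, including the obsolete column
CREATE TABLE landing_users (
  id INT,
  name VARCHAR(100),
  user_id INT
) ENGINE = BLACKHOLE;
-- Forward only the columns that exist in the real target table
CREATE TRIGGER forward_users
BEFORE INSERT ON landing_users
FOR EACH ROW
  INSERT INTO new_users (id, name) VALUES (NEW.id, NEW.name);
Then point the old dump's INSERT statements at landing_users instead of the real table.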
dbForge Studio for MySQL will give you an opportunity to compare and synchronize data between two databases.
By default data comparison is performed only for the objects with the same names; you can use automatic mapping or map database objects manually. dbForge Studio allows you to customize mapping of tables and columns, so you can compare data of objects with non-equal names, owners, and structure. You may also map columns with different types, however this may result in data truncation, rounding, and errors during synchronization for certain types.
I carefully read all these posts because I have the same challenge. Please review my solution:
I did it in C#, but you can do it in any language.
Check the INSERT statement for column names. If any are missing from your actual table, ADD them as TEXT columns, since TEXT can take anything.
When you have finished inserting into that table, remove the added columns.
Done.
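The same idea in plain SQL, assuming the extra column in the old dump is user_id and the target is new_table (both names are placeholders):
-- Add a permissive column so the old INSERT statements succeed
ALTER TABLE new_table ADD COLUMN user_id TEXT;
-- ... run the INSERT statements from the dump here ...
-- Then remove the column once the import is done
ALTER TABLE new_table DROP COLUMN user_id;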
I found your question interesting.
I knew that there was a way to select the column names from a table in MySQL; it's show columns from tablename. What I'd forgotten was that all of the MySQL table metadata is held in a special database, called "information_schema".
This is the logical solution, but it doesn't work:
mysql> insert into NEW_TABLE (select column_name from information_schema.columns where table_name='NEW_TABLE') values ...
I'll keep looking, though. If it's possible to grab a comma-delimited value from the select column_name query, you might be in business.
Edit:
You can use the select ... from ... into command to generate a one-line CSV, like the following:
mysql> select column_name from information_schema.columns where table_name='NEW_TABLE' into outfile 'my_file' fields terminated by '' lines terminated by ', '
I don't know how to get this to output to the MySQL CLI stdout, though.
If your tables and databases are on the same server and you know that the common columns are compatible, you can easily do this with GROUP_CONCAT, INFORMATION_SCHEMA.COLUMNS, and a prepared statement.
This example creates two similar tables and inserts the data from the common columns in table_a into table_b. Since the two tables have a common primary key column, it is excluded. (Note: the database [table_schema] I am using is called 'test'.)
create table if not exists `table_a` (
`id` int(11) not null auto_increment,
`a` varchar(2) default null,
`b` varchar(2) default null,
`c` varchar(2) default null,
`d` varchar(2) default null,
PRIMARY KEY (`id`)
);
create table if not exists `table_b` (
`id` int(11) not null auto_increment,
`a` varchar(2) default null,
`b` varchar(2) default null,
`c` varchar(2) default null,
PRIMARY KEY (`id`)
);
insert into table_a
(a, b, c, d)
values
('a1', 'b1', 'c1', 'd1'),
('a2', 'b2', 'c2', 'd2'),
('a3', 'b3', 'c3', 'd3');
-- This creates a comma delimited list of common
-- columns in table_a and table_b. It also excludes
-- any columns you don't want from table_a
set @common_columns = (
select
group_concat(column_name)
from
information_schema.columns
where
table_schema = 'test'
and
table_name = 'table_a'
and
column_name not in ('id')
and
column_name in (
select
column_name
from
information_schema.columns
where
table_schema = 'test'
and
table_name = 'table_b'
)
);
set @stmt = concat(
  'insert into table_b (', @common_columns, ') ',
  'select ', @common_columns, ' from table_a;'
);
prepare stmt from @stmt;
execute stmt;
deallocate prepare stmt;
select * from table_b;
The prepared statement ends up looking like this:
insert into table_b (a, b, c)
select a, b, c from table_a;
Don't forget to change the values for table_name and table_schema to match your tables and database (table_schema). If it is useful you could create a stored procedure to do this task.