Copying data from old table to new table in SQL (ORACLE) - mysql

I'm aware that copying data will only copy the data and no constraints; I'll lose my PK and FK etc.
But what if I were to copy the data back into a table that already has its primary and foreign keys? I want to remodel my DB in Workbench but don't want to lose the data I have already put into my tables, so I was thinking of making a copy, deleting the original, remodelling the DB, forward engineering it, and then copying the data back into the tables. Will this work?

Not sure if I got your question right, but from what it looks like you only need a new table with the data to mess with. Have you tried using CREATE TABLE AS SELECT?
http://dev.mysql.com/doc/refman/5.0/en/create-table-select.html
CREATE TABLE t AS SELECT * FROM old_t;
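If the plan is the other direction, i.e. copying the saved data back into remodelled tables that already have their keys, that last step is just an INSERT ... SELECT per table. A minimal sketch, with made-up table and column names, loading parent tables before child tables so the foreign keys resolve (or temporarily disabling FK checks):
SET FOREIGN_KEY_CHECKS = 0;  -- optional, if the load order is awkward
INSERT INTO customers (customer_id, name)
SELECT customer_id, name FROM customers_backup;
INSERT INTO orders (order_id, customer_id, total)
SELECT order_id, customer_id, total FROM orders_backup;
SET FOREIGN_KEY_CHECKS = 1;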

Related

Amazon RDS change collate for mysql database in production without downtime

I saw a solution like this below (a rough SQL sketch of the steps follows the list):
create a new table like your source table.
alter that new table the way you want.
insert your data into the new table.
create indexes etc. as needed on the new table.
rename your old table to something like ..._old or whatever.
rename your new table to the former name of the old one.
copy any missing rows from the _old table to the new one.
Reference for the above solution.
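For context, here is that rough sketch of the steps above for a single table; the table name and the target character set / collation are placeholders:
CREATE TABLE orders_new LIKE orders;
ALTER TABLE orders_new CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
INSERT INTO orders_new SELECT * FROM orders;
RENAME TABLE orders TO orders_old, orders_new TO orders;  -- atomic swap
-- finally, copy across any rows written to orders_old after the INSERT started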
But the above solution might cause data unavailability if there is a huge amount of data added before copying any missing rows from the _old table to the new one.
Is there any better solution than this, using AWS DMS, etc?
I also want to change the collate of all tables present in the database.
Is it possible to get all the data replicated between two RDS DBs continuously, like new data entered in database A gets into database B and vice versa?

Is there any way to import database table schema only

I have two databases. Now I'm trying to import only the table schemas from the first database into the second database. I already exported all the table structures from the first database using phpMyAdmin. But I can't import the table structures into the second database. phpMyAdmin says
#1050 - Table 'XXXXXX' already exists
How can I import only the table structure correctly?
Note: both databases originally had the same tables and every table had the same structure. I have since changed some table structures in the first database, but I can't remember exactly which ones right now. Now I need to merge only the table structures. Also, both databases contain different data sets, and I can't afford any data loss in either database.
Before executing any command I would recommend taking a full database backup, otherwise you may lose a few days of sleep.
Using the ALTER command
Use the ALTER command to modify the table structure. Here's SQL that adds a non-nullable age field to the users table.
ALTER TABLE users ADD age INT(11) NOT NULL;
Re-creating the table
I wouldn't recommend this method because you'll lose data: you drop the old table, then create it with the new schema.
DROP TABLE mytable;
CREATE TABLE mytable (
...
);
Or, if you want to keep the data, you can:
Duplicate the table, or rename it to a different name.
Create a new table with the new schema.
Copy the data from the old table: INSERT INTO newtable SELECT * FROM oldtable
Renaming tables might cause relationship issues. I would recommend using the ALTER command as much as possible. You can also take a look at schema migration (aka code-first migration, database migration).
The main issue is merging the table structures. To identify the differences between the two schemas I use a small tool called SQL Power Architect.
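If you'd rather find the structural differences with plain SQL first, a rough sketch is to compare information_schema between the two databases; db_first and db_second are placeholders for your schema names:
-- columns that exist (or have a different type) in db_first but not in db_second
SELECT table_name, column_name, column_type
FROM information_schema.columns
WHERE table_schema = 'db_first'
AND (table_name, column_name, column_type) NOT IN
    (SELECT table_name, column_name, column_type
     FROM information_schema.columns
     WHERE table_schema = 'db_second');
Run it again with the schema names swapped to see the differences in the other direction.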

Importing new data to master database versus temporary database?

I am designing a MySQL database for a new project. I will be importing 50-60 MB of data on a daily basis.
There will be a main table with a primary key. Then there will be child tables with their own primary key and a foreign key pointing back to the main table.
New data has to be parsed from a giant text file and then some minor manipulations made prior to importing into the master database. The parsing and import operation may involve a significant amount of troubleshooting so I want to import new data into a temporary database and ensure its integrity before adding to the master.
For this reason, I thought initially to parse and import new data into a separate, temporary database each day. In this way, I would be able to inspect the data prior to adding to the master and at the same time I would have each day's data stored as a separate database should I ever need to rebuild the master later on from the individual temporary databases.
I am considering the use of primary keys / foreign keys with the InnoDB engine in order to maintain relational integrity across tables. This means I have to worry about auto-increment ids (primary key) not having any duplicates when I go to import the new data each day.
So, given this situation, what would be best?
Make a copy of the master and import directly into the copy of the master each day. Replace existing master with the new copy.
Import new data into a temporary database each day but change auto-increment start value of the primary keys to be greater than the maximum in the master. Would I then also change the auto-increment values for the primary keys for all tables (main table and its children)?
Import new data into a temporary database each day, not worrying about the primary key values. Find some other way to merge the temporary database with the master without collisions of the primary keys? If using this strategy, how can I update the primary key in the main table for the new data while making sure all the relationships with the child tables remain correct?
I'm not sure this is as complicated as you are making it?
Why not just do this:
Import raw data into temporary table (why does it have to be a separate database?)
Run your transformations/integrity checks on the temporary table.
When the data is good, insert it directly into the master table.
Use auto incrementing ids on the master table that are not dependent on your data being imported. That allows you to have a unique id and the original ids that might have existed in your import.
Add a field to your master table(s) that gives you a record of which import the records came from.
In addition to copying the data to your master table, keep a log that ties back to the data you merged. It helps you back out the data if you find it's wrong/bad and gives you an audit trail. (A rough sketch of this setup follows.)
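Here's a minimal sketch of that idea; every table and column name below is made up for illustration:
-- one row per daily import batch, which doubles as the audit trail
CREATE TABLE import_batches (
    import_id   INT AUTO_INCREMENT PRIMARY KEY,
    imported_at DATETIME NOT NULL,
    source_file VARCHAR(255) NOT NULL
);
-- staging table shaped like the raw file, with no keys to fight with
CREATE TABLE staging_records (
    source_ref VARCHAR(50) NOT NULL,  -- id as it appears in the text file
    payload    TEXT
);
-- once the checks pass, record the batch and move the rows across;
-- master_records.id is AUTO_INCREMENT, so new ids never collide
INSERT INTO import_batches (imported_at, source_file)
VALUES (NOW(), 'daily-feed.txt');
SET @import_id = LAST_INSERT_ID();
INSERT INTO master_records (import_id, source_ref, payload)
SELECT @import_id, source_ref, payload FROM staging_records;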
In the end just set up a sandbox database, write a bunch of stored procedures and test the crap out of it. =)

How to insert a new column in a huge MYSQL Database Table?

I have this table in a MySQL database which has about 10 million records/rows. I want to add a new column to the table. However, a simple add-column query doesn't seem to work well for me.
This is what I have tried:
ALTER TABLE contacts ADD processed INT(11);
I waited for about 5 hours, but nothing happened. Is there any way to insert a new column in such a huge table?
Hope I am clear with my question. Any help would be appreciated.
If it's production:
You should use pt-online-schema-change of Percona Toolkit.
pt-online-schema-change emulates the way that MySQL alters tables internally, but it works on a copy of the table you wish to alter. This means that the original table is not locked, and clients may continue to read and change data in it.
pt-online-schema-change works by creating an empty copy of the table to alter, modifying it as desired, and then copying rows from the original table into the new table. When the copy is complete, it moves away the original table and replaces it with the new one. By default, it also drops the original table.
Or oak-online-alter-table, which is part of the openark kit.
oak-online-alter-table allows for non blocking ALTER TABLE operations, table rebuilds and creating a table's ghost.
Altering tables will be slower, but it doesn't lock tables.
If it's not production and downtime is okay, try this approach:
CREATE TABLE contacts_tmp LIKE contacts;
ALTER TABLE contacts_tmp ADD COLUMN processed INT UNSIGNED NOT NULL;
INSERT INTO contacts_tmp (contact_table_fields) SELECT contact_table_fields FROM contacts;
RENAME TABLE contacts TO contacts_old, contacts_tmp TO contacts;
DROP TABLE contacts_old;

How do I efficiently change a MySQL table structure on a table with millions of entries?

I have a MySQL database that is up to about 17 GB in size and has 38 million entries. At the moment, I need to both increase the size of one column (varchar 40 to varchar 80) and add more columns.
Many of the fields are indexed, including the one that I need to change. It is part of a unique pair that is necessary for the applications to work. When I attempted to just make the change yesterday, the query ran for almost four hours without finishing, at which point I decided to cut the outage short and just bring the service back up.
What is the most efficient way to make changes to something of this size?
Many of these entries are also old, so if there is a good way to shard off old entries while still keeping them available, that might help with this problem by making the table a much more manageable size.
You have some choices.
In any case you should take a backup before you do this stuff.
One possibility is to take your service offline and do it in place, as you have tried. If you do that you should disable key checks and constraints.
ALTER TABLE bigtable DISABLE KEYS;
SET FOREIGN_KEY_CHECKS=0;
ALTER TABLE (whatever);
ALTER TABLE (whatever else);
...
SET FOREIGN_KEY_CHECKS=1;
ALTER TABLE bigtable ENABLE KEYS;
This will allow the ALTER TABLE operation to go faster. It will regenerate the indexes all at once when you do ENABLE KEYS.
Another possibility is to create a new table with the new schema you want, then disable the keys on the new table, then do as #Bader suggested and insert the contents of the old table.
After your new table is built, re-enable the keys on it, then rename the old table to something like "old_bigtable" and rename the new table to "bigtable".
It's possible that you can keep your service online while you're populating the new table. But that might work poorly.
A third possibility is to dump your giant table (to a flat file) and then load it into a new table with the new layout. That is pretty much like the second possibility, except that you get a table backup for free. You can make this go pretty fast with SELECT ... INTO OUTFILE and LOAD DATA INFILE. You'll need to have access to your server machine's file system to do this.
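A rough sketch of that dump-and-reload route; the table, column, and file names are made up for illustration:
SELECT * INTO OUTFILE '/tmp/bigtable_dump.txt'
FIELDS TERMINATED BY '\t'
FROM bigtable;
-- bigtable_new was created beforehand with the new layout, keys disabled
LOAD DATA INFILE '/tmp/bigtable_dump.txt'
INTO TABLE bigtable_new
FIELDS TERMINATED BY '\t'
(id, name, created_at);  -- list the loaded columns if the new table has extra ones
Re-enable the keys on bigtable_new afterwards, then swap the tables with RENAME TABLE as above.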
In all cases, disable, then re-enable, the constraints and keys to get things to go fast.
Create a new table with the new structure you want, under a different name, for example NewTable.
Then insert data into this new table from the old table using the following query:
INSERT INTO NewTable (field1, field2, etc...) SELECT field1, field2, ... FROM OldTable
After this is done, you can drop the old table and rename the new table to the original name
DROP TABLE `OldTable`;
RENAME TABLE `NewTable` TO `OldTable`;
I have tried this approach on a very large table and it's much much faster than altering the table.
With MySQL 5.1, and again with 5.5, certain ALTER statements were enhanced to just modify the structure without rewriting the entire table (http://dev.mysql.com/doc/refman/5.5/en/alter-table.html - search for "in place"). The availability of this varies by the type of change you are making and the engine in use; the most value comes from the InnoDB Plugin. In the case of your specific changes, though, the entire table would be rewritten.
When we encounter these issues, we typically try to leverage replica databases. As long as you are adding and not removing you can run your DDL against the replica first and then schedule a brief outage for promoting the replica to the master role. If you happen to be on RDS this is even one of their suggested uses for their replica instances http://aws.amazon.com/about-aws/whats-new/2012/10/11/amazon-rds-mysql-rr-promotion/.
Some other alternatives include:
Selecting a subset of records out into a new table with the desired structure (use INTO OUTFILE to avoid a table lock). Once that is complete, you can schedule a maintenance window and REPLACE INTO or UPDATE any records that have changed in the origin table since the initial data copy. Once the update is complete, a RENAME TABLE... of both tables wraps the changes up. (A rough sketch follows after this list.)
Using a tool like Percona's pt-online-schema-change: http://www.percona.com/doc/percona-toolkit/2.1/pt-online-schema-change.html. This tool works with triggers so if you already have triggers on the tables you want to change this may not fit your needs.
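For illustration, a very rough sketch of that copy-then-catch-up alternative; bigtable, its columns, and the updated_at change-detection column are all assumptions about your schema:
-- 1. bulk copy into the new structure outside the maintenance window
--    (bigtable_new must keep the same primary key so REPLACE can match rows)
INSERT INTO bigtable_new (id, name, created_at, updated_at)
SELECT id, name, created_at, updated_at FROM bigtable;
-- 2. during the maintenance window, catch up rows changed since the copy began
REPLACE INTO bigtable_new (id, name, created_at, updated_at)
SELECT id, name, created_at, updated_at
FROM bigtable
WHERE updated_at >= '2013-01-15 00:00:00';  -- timestamp of when the bulk copy started
-- 3. atomically swap the tables
RENAME TABLE bigtable TO bigtable_old, bigtable_new TO bigtable;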