Let's say I have table A; it has 3000 rows, and I know the first 2000 rows are corrupt, but I have clean records sitting on another MySQL server. What would be the easiest way to restore those 2000 rows? Thanks so much.
Using Maatkit's mk-table-checksum tool you can determine the differences between the tables of two hosts.
mk-table-sync is used to generate and/or run only the statements necessary to update the corrupted table.
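A hedged sketch of that workflow (host, database, and table names are placeholders, and the option syntax is from the Maatkit docs as I recall them):
# compare the table on the two hosts, then preview and apply the fixes
mk-table-checksum --databases mydb --tables tableA clean_host prod_host
mk-table-sync --print h=clean_host,D=mydb,t=tableA h=prod_host    # show the statements only
mk-table-sync --execute h=clean_host,D=mydb,t=tableA h=prod_host  # make prod match the clean host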
What you want is to copy the table files from the backup server and replace the corrupted files on the production server. (Note that raw file copies like this are only safe for MyISAM tables, with MySQL stopped.)
I have the same tables in two different databases, one in MySQL and the other in SQL Server. I want to run a query to get the data from a MySQL table into a SQL Server table, updating the records on a daily basis.
E.g. I have 200 records in MySQL today; by tomorrow it might be 300. I want to copy the 200 records today and then only the 100 new records tomorrow.
Can anyone help me, please?
Thanks in advance
Probably the best way to manage this is from the SQL Server database. This allows that database to pull the data in every day, rather than having the MySQL database push the data.
The place to start is by linking the servers. Start with the documentation on the subject. Next, set up a job in SQL Server Agent. This job would do the following:
Connect to the MySQL server.
Load the data into a staging table.
Validate the data.
Update or insert the data into the final table.
You can schedule this job to run every day.
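A hedged sketch of the job's core steps, assuming a linked server named MYSQL_LINKED and illustrative table and column names (MERGE needs SQL Server 2008 or later):
-- all object names below are placeholders
TRUNCATE TABLE dbo.staging_table;
-- pull the rows from MySQL through the linked server
INSERT INTO dbo.staging_table (id, col1)
SELECT id, col1
FROM OPENQUERY(MYSQL_LINKED, 'SELECT id, col1 FROM mydb.mytable');
-- update or insert into the final table
MERGE dbo.final_table AS t
USING dbo.staging_table AS s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.col1 = s.col1
WHEN NOT MATCHED THEN INSERT (id, col1) VALUES (s.id, s.col1);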
Note that 200 or 300 records is very small by the standards of databases (unless the records are really, really big).
There is no straightforward way to do this, but you can approach it like this (see the sketch after these steps):
Use mysqldump to create a dump of the table data.
Restore that into a temporary/auxiliary table in your SQL Server.
Perform the update to the main table with a JOIN to that temporary table.
Delete the temporary table.
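A hedged sketch of those steps (all names are placeholders; mysqldump's --compatible=mssql flag smooths some syntax differences, though the dump may still need touch-ups):
# on the MySQL side: dump only the data
mysqldump --compatible=mssql --no-create-info mydb mytable > mytable.sql
-- on the SQL Server side, after loading mytable.sql into dbo.staging_table:
UPDATE t SET t.col1 = s.col1
FROM dbo.main_table AS t
JOIN dbo.staging_table AS s ON s.id = t.id;
INSERT INTO dbo.main_table (id, col1)
SELECT s.id, s.col1
FROM dbo.staging_table AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.main_table WHERE id = s.id);
DROP TABLE dbo.staging_table;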
We are running a service where we have to set up a new database for each new site. The database is exactly the same each time, so we can simply restore from a backup file or clone a sample database (which is created only for cloning purposes; no transactions run there, so there is no worry about corrupting data) on the same server. The database itself contains around 100 tables with some data, and takes around 1-2 minutes to import, which is too slow.
I'm trying to find a way to do it as fast as possible. The first thought that came to mind was to copy the files within the sample database's data dir, but it seems I also need to somehow edit the table lists, or MySQL won't be able to read my new database's tables even though it still shows them there.
You're duplicating the database the wrong way; it will be much faster if you do it properly.
Here is how you duplicate a database:
create database new_database;
create table new_database.table_one select * from source_database.table_one;
create table new_database.table_two select * from source_database.table_two;
create table new_database.table_three select * from source_database.table_three;
...
I just did a performance test, this takes 81 seconds to duplicate 750MB of data across 7 million table rows. Presumably your database is smaller than that?
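One caveat: CREATE TABLE ... SELECT copies the rows but not the indexes, keys, or AUTO_INCREMENT settings. If you need those too, a variant that preserves them:
create table new_database.table_one like source_database.table_one;
insert into new_database.table_one select * from source_database.table_one;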
I don't think you are going to find anything faster. One thing you could do is keep a queue of duplicate databases on standby, ready to be picked up and used at any time. Then you don't need to create a new database at all; you just rename an existing database from the queue (see the sketch below), and have a cron job running to make sure the queue never runs empty.
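There is no reliable RENAME DATABASE in MySQL, but RENAME TABLE can move a standby database's tables into a freshly created schema almost instantly, since it is a metadata-only operation (schema names here are illustrative):
create database new_site_db;
rename table standby_db.table_one to new_site_db.table_one,
             standby_db.table_two to new_site_db.table_two;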
Why is MySQL not able to read the tables, and what did you change in the table lists?
I think there may be a problem with file permissions preventing MySQL from reading the copied files; otherwise it should be fine.
Thanks
Which one is quicker: exporting and importing a database, or creating a database from scratch with all the tables and views?
Assume that the tables are empty, that you have 7-15 tables/views, and that the entire thing is done by code.
Also, does it differ based on whether you are using MSSQL, MySQL, etc.?
For MySQL, the fastest way to move a DB from one place to another is a binary copy. Just taking /var/lib/mysql/databasename/ (the location may vary) from one server to another and restarting MySQL on the new server is enough to move the data.
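A hedged sketch of that copy (paths and the host are placeholders; raw file copies like this are only reliable for MyISAM tables, and mysqld should be stopped, or the tables flushed and locked, on the source first):
# stop mysqld on the source, or run FLUSH TABLES WITH READ LOCK first
rsync -a /var/lib/mysql/databasename/ user@newserver:/var/lib/mysql/databasename/
# then restart MySQL on the new server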
I have no idea about MSSQL.
I perform a full MySQL DB backup twice a day, creating a MySQL dump file.
Some records were accidentally deleted, but by the time I realized the data was missing, several more records had been added.
What's the best way to restore the missing data without losing the newer data as well? Maybe by replacing INSERT with REPLACE in the dump file? Or is there a better way?
Yes, REPLACE is the best solution in terms of minimum effort and clean results.
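A minimal sketch of that approach, assuming the dump uses plain INSERT INTO statements (file and database names are placeholders):
# turn the dump's INSERTs into REPLACEs, then load it back in
sed 's/^INSERT INTO/REPLACE INTO/' backup.sql > restore.sql
mysql mydb < restore.sql
# note: REPLACE also overwrites rows that changed after the backup was taken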
We have a live MySQL database that is 99% INSERTs, around 100 per second. We want to archive the data each day so that we can run queries on it without affecting the main, live database. In addition, once the archive is completed, we want to clear the live database.
What is the best way to do this without (if possible) locking INSERTs? We use INSERT DELAYED for the queries.
http://www.maatkit.org/ has mk-archiver, which:
archives or purges rows from a table to another table and/or a file. It is designed to efficiently “nibble” data in very small chunks without interfering with critical online transaction processing (OLTP) queries. It accomplishes this with a non-backtracking query plan that keeps its place in the table from query to query, so each subsequent query does very little work to find more archivable rows.
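A hedged invocation sketch (the DSNs, WHERE clause, and batch size are placeholders; see the docs for the exact options):
mk-archiver --source h=live_host,D=mydb,t=big_table \
            --dest h=archive_host,D=archive_db,t=big_table \
            --where "created < CURRENT_DATE" --limit 1000 --commit-each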
Another alternative is to simply create a new database table each day. MyISAM does have some advantages for this, since INSERTs to the end of the table don't generally block anyway, and there is a MERGE table type to bring them all back together. A number of websites log their httpd traffic to tables like that.
With MySQL 5.1, there are also partitioned tables that can do much the same.
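A hedged sketch of the daily-partition idea (table, column, and partition names are illustrative):
-- MySQL 5.1+ RANGE partitioning by day; the partitioning column must be
-- part of the primary key
CREATE TABLE events (
  id BIGINT NOT NULL AUTO_INCREMENT,
  created DATETIME NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (id, created)
) PARTITION BY RANGE (TO_DAYS(created)) (
  PARTITION p_day1 VALUES LESS THAN (TO_DAYS('2012-01-02')),
  PARTITION p_day2 VALUES LESS THAN (TO_DAYS('2012-01-03')),
  PARTITION p_max  VALUES LESS THAN MAXVALUE
);
-- after copying a day's rows to the archive, clearing them is a fast
-- metadata operation
ALTER TABLE events DROP PARTITION p_day1;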
I use MySQL partitioned tables and I've achieved great results in all respects.
Sounds like replication is the best solution for this. After the initial sync, the slave gets updates via the binary log, thus not affecting the master DB at all.
See the MySQL manual for more on replication.
mk-archiver is an elegant tool to archive MySQL data:
http://www.maatkit.org/doc/mk-archiver.html
MySQL replication would work perfectly for this.
Master -> the live server.
Slave -> a different server on the same network.
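A minimal sketch of pointing the slave at the master (all values are placeholders; the master also needs binary logging enabled and a replication user):
-- run on the slave
CHANGE MASTER TO
  MASTER_HOST='live-server',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;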
Could you keep two mirrored databases around? Write to one and keep the second as an archive. Switch every, say, 24 hours (or however long you deem appropriate). Into the database that was the archive, insert all of today's activity; then the two databases should match. Use this as the new live DB. Take the archived database and do whatever you want with it. You can back up/extract/read all you want now that it's not being actively written to.
It's kind of like having mirrored RAID, where you can take one drive offline for backup, resync it, then take the other drive out for backup.