I have a table with 7000 rows.
I added a new column to this table.
The table also has a MySQL DATETIME column.
When I updated the table to fill in this new column, it also changed the DATETIME values.
I took an SQL dump just before I did the update, so now I need to use that dump to revert the DATETIME column (and only that column).
How do i do that?
There are a couple ways I can think of to do this off the top of my head.
First is to create another MySQL database and load the dump into that database (make sure it isn't going to load into the first database via a USE command in the dump), and then use the data from that second database to construct the UPDATE queries for the first.
The second, easier, more hackish way, is to open the dump in a text editor, pull out just that table, and find and replace to make update statements for just that column based on primary key instead of inserts. You'd need to be able to find and replace on patterns.
A third way would be to load the dump in an abstract sql tool letting it do the parsing for you, and write new queries from the data in the abstract syntax trees.
A fourth, again hackish, possibility, if this isn't a live system, is to rollback and re-perform the more recent transformations (only if they are simple).
Restore the dump to a second table. Select the ID and datetime from that table. Use those results to update the rows in the original table corresponding to the IDs you got.
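For example, a minimal sketch of that idea, assuming the dump was restored into a separate schema called backup_db and that the table, key and column names (orders, id, created_at) are placeholders:

-- restore the dump into a separate schema first, e.g.: mysql backup_db < dump.sql
UPDATE live_db.orders AS o
JOIN backup_db.orders AS b ON b.id = o.id
SET o.created_at = b.created_at;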
Today I come to you for inspiration, or maybe ideas on how to solve a task without killing my laptop with massive and repetitive code.
I have a CSV file with around 10k records. I also have a database with the respective records in it. Both structures share four fields: destination, countryCode, prefix and cost.
Every time I update the database from this .csv file, I have to check whether a record with the given destination, countryCode and prefix exists, and if so, update the cost. That is pretty easy and it works fine.
But here comes the tricky part: a destination that is present in one .csv file may be gone from the next one, and I need to detect that and delete the now-unused record from the database. What is the most efficient way of handling that kind of situation?
I really wouldn't want to check every record in the database against every row in the .csv file: that sounds like a very bad idea.
I was thinking about a timestamp or just a bool flag telling me whether the record was modified during the last update of the DB, BUT there is also a chance that none of the params within a record change, so there would be no need to touch that record, and it would never get marked as modified.
For that task, I use Python 3 and mysql.connector lib.
Any ideas and advice will be appreciated :)
If you're keeping a timestamp, why do you care whether it's updated even if nothing in the record changed? If the reason is that you want to preserve the date of the last real change, you can add another column holding a timestamp of the last time the record appeared in the CSV, and afterwards delete all records whose value in that column is older than the date of the last CSV import.
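A rough SQL-only sketch of that idea, assuming the table is called rates and using a hypothetical last_seen column (the %s markers are mysql.connector placeholders for the new cost, the import timestamp and the key fields):

ALTER TABLE rates ADD COLUMN last_seen DATETIME NULL;

-- during the import, stamp every row that appears in the current CSV
UPDATE rates
SET cost = %s, last_seen = %s
WHERE destination = %s AND countryCode = %s AND prefix = %s;

-- after the import, anything not stamped was absent from the CSV
DELETE FROM rates WHERE last_seen IS NULL OR last_seen < %s;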
If the .CSV is a replacement for the existing table:
CREATE TABLE new LIKE real;
load the .csv into `new` (Probably use LOAD DATA...)
RENAME TABLE real TO old, new TO real;
DROP TABLE old;
If you have good reason to keep the old table and patch it, then...
load the .csv into a table
add suitable indexes
do one SQL statement to do the deletes (no loop needed); it is probably a multi-table DELETE.
do one SQL statement to update the prices (no loop needed); it is probably a multi-table UPDATE (both are sketched below).
You can probably do the entire task (either way) without touching Python.
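A sketch of that second approach, assuming the live table is called rates, the staging table is called rates_new, and the CSV column order matches the table (all names are placeholders):

CREATE TABLE rates_new LIKE rates;
LOAD DATA LOCAL INFILE 'rates.csv' INTO TABLE rates_new
    FIELDS TERMINATED BY ',' IGNORE 1 LINES;

-- one multi-table DELETE: remove rows that no longer appear in the CSV
DELETE r FROM rates AS r
LEFT JOIN rates_new AS n
       ON n.destination = r.destination
      AND n.countryCode = r.countryCode
      AND n.prefix      = r.prefix
WHERE n.destination IS NULL;

-- one multi-table UPDATE: refresh the cost on the rows that remain
UPDATE rates AS r
JOIN rates_new AS n
       ON n.destination = r.destination
      AND n.countryCode = r.countryCode
      AND n.prefix      = r.prefix
SET r.cost = n.cost;

DROP TABLE rates_new;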
I used the raw data files method (copying the data directory) to create the snapshot for MySQL master-slave replication. After the setup, queries on the partitioned tables return "table xxx doesn't exist", but the other tables work fine.
When I switch to using mysqldump instead, everything works.
Can anyone help me fix this problem?
If the partitioned table did not work but the other tables did, and the mysqldump worked fine, my best guess would be that your partitioned data is not stored in the same place as the rest of your data. Thus, when you used the tar, zip, or rsync method to copy your data directory, you left out the data that made up the partitioned table. You would need to locate where the partitioned data is stored and move it over along with the rest of the data directory.
Based on your comment below, however, you have what is called the famous Schrödinger table. Based on Schrödinger's cat paradox, this is where MySQL thinks that the table exists, because it shows up when you run SHOW TABLES, but does not allow you to query it; as in, it exists but does not exist.
Usually this is a result of not copying over the metadata (as in the ibdata1 file and the ib_logfiles) correctly. One thing you can do to test this, if possible, is to remove the partitioning from the tables and try your rsync again. If you are still getting this error, it has nothing to do with the fact that the table is partitioned; that test would lead me to believe that you did not copy all the data over correctly.
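For what it's worth, that test is a single statement (the table name here is a placeholder):

ALTER TABLE mytable REMOVE PARTITIONING;
-- then re-run the rsync/tar copy and check whether the table becomes queryable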
I'm receiving a MySQL dump file .sql daily from an external server, which I don't have any control of. I created a local database to store all data in the .sql file. I hope I can set up a script to automatically update my local database daily. The sql file I'm receiving daily contains old data that is in the local database already. How can I avoid duplicates of such old data and only insert into the local MySQL server new data? Thank you very much!
You can use a third-party database compare tool, such as those from Red Gate: keep two databases, one current (your "master") and one loaded from the new dump, then run the compare tool between the two versions and apply only the differences to your master.
Use unique constraints on the fields that you want to be unique.
Also, as Danny Beckett mentioned, to avoid errors in the output (which I would prefer to redirect into a file for later analysis, to check that I haven't missed anything in the process), you can use the INSERT IGNORE construct instead of INSERT.
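A minimal sketch, with table and column names as placeholders:

-- make the natural key unique so re-imported rows collide instead of duplicating
ALTER TABLE mytable ADD UNIQUE KEY uq_natural (col_a, col_b);

-- duplicate rows are silently skipped and reported as warnings, not errors
INSERT IGNORE INTO mytable (col_a, col_b, col_c)
VALUES ('a', 'b', 'c');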
You can use a constraint together with the IGNORE keyword.
The second option: you can first insert the data into a temp table, then insert only the difference.
With the second option you can also add a restriction so that you don't have to search for duplicates across all of the records already stored in the database.
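A rough sketch of that temp-table variant, with placeholder names, assuming the daily dump can be loaded into the staging table (for example by restoring it into a separate schema first):

CREATE TEMPORARY TABLE staging LIKE mytable;
-- load the daily dump into staging, then copy over only the rows
-- whose id is not already present in the real table
INSERT INTO mytable (id, col_a, col_b)
SELECT s.id, s.col_a, s.col_b
FROM staging AS s
LEFT JOIN mytable AS m ON m.id = s.id
WHERE m.id IS NULL;
DROP TEMPORARY TABLE staging;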
You need to create a primary key in your table. It should be a unique combination of column values. Using the INSERT query with IGNORE will avoid adding duplicates in this table.
see http://dev.mysql.com/doc/refman/5.5/en/insert.html
If this is a plain vanilla mysqldump file, then normally it includes DROP TABLE IF EXISTS... statements and CREATE TABLE statements, so the tables are recreated when the data is imported. So duplicate data should not be a problem, unless I'm missing something.
I'm trying to manage a database table as efficiently as possible and get rid of old entries that will never be accessed again. Yes, they could probably easily be kept for many years, but I'd just like to get rid of them. I could do this maybe once every month. Would it be more efficient to copy the entries I want to keep into a new table and then simply drop the old table? Or should a query manually delete each entry past the threshold that I set?
I'm using MySQL with JPA/JPQL JEE6 with entity annotations and Java persistence manager.
Thanks
Another solution is to design the table with range or list PARTITIONING, and then you can use ALTER TABLE to drop or truncate old partitions.
This is much quicker than using DELETE, but it may complicate other uses of the table.
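A minimal sketch of what that could look like (table, column and partition names are illustrative):

CREATE TABLE events (
    id      BIGINT NOT NULL,
    created DATE   NOT NULL,
    PRIMARY KEY (id, created)
)
PARTITION BY RANGE (TO_DAYS(created)) (
    PARTITION p2023 VALUES LESS THAN (TO_DAYS('2024-01-01')),
    PARTITION p2024 VALUES LESS THAN (TO_DAYS('2025-01-01')),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);

-- dropping a whole partition is close to a metadata operation, far cheaper than DELETE
ALTER TABLE events DROP PARTITION p2023;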
A single delete query will be the most efficient solution.
Copying data from one database to another can be lengthy if you have a lot of data to keep. It means you have to retrieve all the data with a single query (or multiple, if you want to batch), and issue a lot of insert statements in the other database.
Using JPQL, you can issue a single query to delete all the old entries, something like
DELETE FROM Entity e WHERE e.date < ?1
This will be translated into a single SQL statement, and the database will take care of deleting all the unwanted records.
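Assuming the entity maps to a table called entity with a date column named entity_date (both placeholders), the generated SQL would be roughly:

DELETE FROM entity WHERE entity_date < ?;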
I have a MySQL database with several tables on a live server, and now I would like to migrate this database to another server. Of course, the migration I mean here involves some changes to the database tables, for example: adding some new columns to several tables, adding some new tables, etc.
Now, the only method I can think of is to use a PHP or Python script (the two languages I know), connect the two databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, and the extra column should get the default value 0 for all the old rows. My script still has to dump the data row by row and insert each row into the new database.
Using mysqldump etc. won't work by itself. Here is the detail. For example: I have FOUR old databases, which I'll call 'DB_a', 'DB_b', 'DB_c' and 'DB_d'. The old table A has 28 columns, and I want to add each row of table A to the new database together with a new column holding the database ID 'DB_x' (x indicating which database it comes from). Since I can't tell the source database from a row's content, the only way I can identify them is through some user input parameters.
Are there any tools, or a better method than writing a script yourself? Here, I don't need to worry about concurrent-write problems etc.; the old database will be down (not open to public usage, only for the upgrade) for a while.
Thanks!!
I don't entirely understand your situation with the columns (wouldn't it be more sensible to add any new columns after the migration?), but one of the arguably fastest methods to copy a database across servers is mysqlhotcopy. It can copy MyISAM tables only and has a number of other requirements, but it's awfully fast because it skips the create-dump / import-dump step completely.
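For reference, the invocation is roughly like this (database name, target directory and credentials are placeholders):

mysqlhotcopy --user=root --password=secret mydb /backups/mydb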
Generally when you migrate a database to new servers, you don't apply a bunch of schema changes at the same time, for the reasons that you're running into right now.
MySQL has a dump tool called mysqldump that can be used to easily take a snapshot/backup of a database. The snapshot can then be copied to a new server and installed.
You should figure out all the changes that have been done to your "new" database, and write out a script of all the SQL commands needed to "upgrade" the old database to the new version that you're using (e.g. ALTER TABLE a ADD COLUMN x, etc). After you're sure it's working, take a dump of the old one, copy it over, install it, and then apply your change script.
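For instance, a hypothetical upgrade.sql for this situation might contain something like the following (table and column names are placeholders; the db_source column is one way to carry the 'DB_x' label the question asks about):

-- upgrade.sql: bring a restored copy of an old database up to the new schema
ALTER TABLE A ADD COLUMN x INT NOT NULL DEFAULT 0;
ALTER TABLE A ADD COLUMN db_source CHAR(4) NOT NULL DEFAULT 'DB_a';
-- ...one ALTER (or UPDATE) per structural difference...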
Use mysqldump to dump the data to a file (say output.txt), then load it on the new server with mysql < output.txt. Now the old data is on the new server. Manipulate as necessary.
Sure, there are tools that can help you achieve what you're trying to do. mysqldump is a prime example of such a tool. Just take a glance here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
What you could do is:
1) You make a dump of the current db, using mysqldump (with the --no-data option) to fetch the schema only
2) You alter the schema you have dumped, adding new columns
3) You create your new schema (mysql < dump.sql - just google for mysql backup restore for more help on the syntax)
4) Dump your data using the mysqldump complete-insert option (see link above)
5) Import your data, using mysql < data.sql
This should do the job for you, good luck!
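Put together, those five steps might look roughly like this (database and file names are placeholders; --no-create-info is added to the data dump so it doesn't recreate and overwrite the altered tables):

mysqldump -u root -p --no-data old_db > schema.sql
# edit schema.sql by hand to add the new columns (step 2)
mysql -u root -p new_db < schema.sql
mysqldump -u root -p --no-create-info --complete-insert old_db > data.sql
mysql -u root -p new_db < data.sql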
Adding extra columns can be done on a live database:
ALTER TABLE [table-name] ADD [column-name] MEDIUMINT(8) DEFAULT 0;
MySQL will fill all existing rows with the default value.
So here is what I would do:
1) Make a copy of your old database with the mysqldump command.
2) Run the resulting SQL file against your new database; now you have an exact copy.
3) Write a migration.sql file that will modify your database with ALTER TABLE commands and, for complex conversions, some temporary MySQL procedures.
4) Test your script (if it fails, go back to (2)).
5) If all is OK, then go back to (1) and go live with your new database.
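A small illustration of what migration.sql might contain (everything here is made up for the example):

-- migration.sql
ALTER TABLE customer MODIFY COLUMN phone VARCHAR(32);

-- a temporary procedure for a more complex conversion
DELIMITER //
CREATE PROCEDURE tmp_fix_phone()
BEGIN
    UPDATE customer SET phone = CONCAT('+', phone) WHERE phone NOT LIKE '+%';
END //
DELIMITER ;
CALL tmp_fix_phone();
DROP PROCEDURE tmp_fix_phone;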
These are all valid approaches, but I believe what you want is to write an SQL statement that generates the INSERT statements accounting for the new columns you have.