I have a MySQL server with one database called "Backup".
It has only one table, named "storage".
In the Backup db, the storage table contains about 5 million rows.
Now I wanted to append new rows to the table by using the "source" command in the MySQL command line.
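For reference, the command looks like this (the file name here is a placeholder):
mysql> SOURCE /path/to/new_rows.sql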
What happened is that source loaded all the new rows into the table, but it overwrote the existing entries (it seems it first deleted all the data).
I should mention that the SQL file I want to import comes from another server, where the table has the same name and structure as "storage".
What I want is to append the new entries in the SQL file to the ones in my database; I do not want to overwrite them.
The structure of the two tables is exactly the same. As the name suggests, I use the Backup database for backups, so that from time to time I can back up my data.
Does anyone have an idea how to solve this?
Look in the .sql file you're reading with the SOURCE command, and remove the DROP TABLE and CREATE TABLE statements that appear there. They are the cause of your table being "overwritten": the table is actually being dropped and recreated.
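For example, a dump file produced by mysqldump typically starts with statements along these lines (the exact contents depend on how the dump was made):
DROP TABLE IF EXISTS `storage`;
CREATE TABLE `storage` ( ... );
INSERT INTO `storage` VALUES (...), (...);
Delete the first two statements and keep only the INSERTs, and the rows will be appended to your existing table instead.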
You could also look into using SELECT ... INTO OUTFILE and LOAD DATA INFILE as a faster and less potentially destructive way to get data from one server to the other in a file.
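A minimal sketch of that approach (the file path is just an example, and it must be readable and writable by the MySQL server):
SELECT * FROM storage INTO OUTFILE '/tmp/storage.txt';
-- copy the file to the other server, then run there:
LOAD DATA INFILE '/tmp/storage.txt' INTO TABLE storage;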
I have a website fed from large MySQL tables (more than 50k rows in some of them). Let's call one table "MotherTable". Every night I update the site with a new csv file (produced locally) that has to replace the data in MotherTable.
The way I currently do this (I am not an expert, as you can see) is:
- First, I TRUNCATE the MotherTable table.
- Second, I import the csv file into the empty table, with columns separated by "/" and skipping 1 line.
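Concretely, the two steps look roughly like this (the file path is a placeholder):
TRUNCATE TABLE MotherTable;
LOAD DATA INFILE '/path/to/file.csv' INTO TABLE MotherTable FIELDS TERMINATED BY '/' IGNORE 1 LINES;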
As the csv file is not very small, there are some seconds (or even a minute) during which MotherTable is empty, so web users running SELECTs on this table find nothing.
Obviously, I don't like that. Is there any procedure to update MotherTable in a way that users notice nothing? If not, what would be the quickest way to update the table with the new csv file?
Thank you!
I know how to import a text file into a MySQL database by using the command
LOAD DATA LOCAL INFILE '/home/admin/Desktop/data.txt' INTO TABLE data
The above command writes the records from the file "data.txt" into the MySQL table. What I want is to erase the records from the .txt file once they are stored in the database.
For example: if there are 10 records and at the current point in time 4 of them have been written into the database table, I need those 4 records to be erased from data.txt at the same time. (In a way, the text file acts as a "queue".) How can I accomplish this? Can this be done in Java code, or should a scripting language be used?
Automating this is not too difficult, but it is also not trivial. You'll need something (a program, a script, ...) that can
Read the records from the original file,
Check if they were inserted, and, if they were not, copy them to another file,
Rename or delete the original file, and rename the new file to replace the original one.
There might be better ways of achieving what you want to do, but, that's not something I can comment on without knowing your goal.
I used the "create raw data files" method to set up MySQL master-slave replication. After setup, queries on the partitioned tables return "table xxx doesn't exist", but the other tables work fine.
When I switch to using mysqldump instead, everything works.
Can anyone help me fix this problem?
If the partitioned table did not work but the other tables did, and the mysqldump worked fine, my best guess would be that your partitioned data is not stored in the same place as the rest of your data. Thus, when you used the tar, zip, or rsync method to copy your data directory, you left out the data that makes up the partitioned table. You would need to locate where the partitioned data is stored and move it over along with the rest of the data directory.
Based on your comment below, however, you have what is called the famous Schrödinger table. As in Schrödinger's cat paradox, this is where MySQL thinks that the table exists, because it shows up when you run SHOW TABLES, but does not allow you to query it; it exists and does not exist at the same time.
Usually this is the result of not copying over the metadata (the ibdata1 file and the ib_logfiles) correctly. One thing you can do to test this is, if possible, remove the partitioning from the tables and try your rsync again. If you are still getting this error, it has nothing to do with the fact that the table is partitioned; that result would lead me to believe that you did not copy all the data over correctly.
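If you try that test, the partitioning can be removed with a statement like this (the table name is a placeholder):
ALTER TABLE xxx REMOVE PARTITIONING;
This rebuilds the table without partitions, leaving its rows in place.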
Thanks for viewing this. I need a little bit of help with a project that I am working on using MySQL.
For part of the project I need to load a few things into a MySQL database which I have up and running.
The info that I need, for each column in the Documentation table, is stored in text files on my hard drive.
For example, one column in the documentation table is "ports" so I have a ports.txt file on my computer with a bunch of port numbers and so on.
I tried to run this MySQL statement through phpMyAdmin:
LOAD DATA INFILE 'C:\\ports.txt' INTO TABLE `Documentation` (`ports`)
It ran successfully, so I went on to run the other LOAD DATA I needed, which was
LOAD DATA INFILE 'C:\\vlan.txt' INTO TABLE `Documentation` (`vlans`)
This also completed successfully, but it added all the vlan values as new rows AFTER the last entry in the ports column.
Why did this happen? Is there anything I can do to fix this? Thanks
Why did this happen?
LOAD DATA inserts new rows into the specified table; it doesn't update existing rows.
Is there anything I can do to fix this?
It's important to understand that MySQL doesn't guarantee that tables will be kept in any particular order. So, after your first LOAD, the order in which the data were inserted may be lost and forgotten; therefore, one would typically relate such data prior to importing it (e.g. as columns of the same record within a single CSV file).
You could LOAD your data into temporary tables that each have an AUTO_INCREMENT column and hope that such auto-incremented identifiers remain aligned between the two tables (MySQL makes absolutely no guarantee of this, but in your case you should find that each record is numbered sequentially from 1); once there, you could perform a query along the following lines:
INSERT INTO Documentation SELECT port, vlan FROM t_Ports JOIN t_Vlan USING (id);
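For completeness, the two helper tables might be created and loaded like this (the column types are assumptions):
CREATE TABLE t_Ports (id INT AUTO_INCREMENT PRIMARY KEY, port VARCHAR(64));
CREATE TABLE t_Vlan (id INT AUTO_INCREMENT PRIMARY KEY, vlan VARCHAR(64));
LOAD DATA INFILE 'C:\\ports.txt' INTO TABLE t_Ports (port);
LOAD DATA INFILE 'C:\\vlan.txt' INTO TABLE t_Vlan (vlan);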
I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. The migration I mean here involves some changes to the database tables, for example: adding some new columns to several tables, adding some new tables, etc.
Now, the only method I can think of is to use a PHP/Python script (the two languages I know): connect the two databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, and the extra column gets the default value 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database.
Using mysqldump etc. won't work. Here are the details. For example: I have FOUR old databases, which I'll call 'DB_a', 'DB_b', 'DB_c', and 'DB_d'. The old table A has 28 columns, and I want to add each row of table A to the new database with a new column whose value 'DB_x' indicates which database the row comes from. Since I can't tell the source database from a row's content, the only way to identify it is through some user-supplied parameters.
Are there any tools, or a better method, beyond writing a script yourself? I don't need to worry about concurrent-write problems etc. here; the old database will be down (not open to public use, only for the upgrade) for a while.
Thanks!!
I don't entirely understand your situation with the columns (wouldn't it be more sensible to add any new columns after the migration?), but one of the arguably fastest methods of copying a database across servers is mysqlhotcopy. It copies MyISAM tables only and has a number of other requirements, but it's awfully fast because it skips the create-dump/import-dump step completely.
Generally when you migrate a database to new servers, you don't apply a bunch of schema changes at the same time, for the reasons that you're running into right now.
MySQL has a dump tool called mysqldump that can be used to easily take a snapshot/backup of a database. The snapshot can then be copied to a new server and installed.
You should figure out all the changes that have been done to your "new" database, and write out a script of all the SQL commands needed to "upgrade" the old database to the new version that you're using (e.g. ALTER TABLE a ADD COLUMN x, etc). After you're sure it's working, take a dump of the old one, copy it over, install it, and then apply your change script.
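For example, the source-database column from the question might be added and populated like this (the column name and type are assumptions):
ALTER TABLE A ADD COLUMN source_db VARCHAR(8) NOT NULL DEFAULT '';
-- after importing each old database's rows, tag them accordingly:
UPDATE A SET source_db = 'DB_a' WHERE source_db = '';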
Use mysqldump to dump the data, then load the dump on the new server with mysql < output.sql. Now the old data is on the new server. Manipulate as necessary.
Sure, there are tools that can help you achieve what you're trying to do. mysqldump is a prime example of such a tool. Just take a glance here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
What you could do is:
1) You make a dump of the current db, using mysqldump (with the --no-data option) to fetch the schema only
2) You alter the schema you have dumped, adding new columns
3) You create your new schema (mysql < dump.sql - just google for mysql backup restore for more help on the syntax)
4) You dump your data, using mysqldump's --complete-insert option (see the link above)
5) You import your data, using mysql < data.sql
This should do the job for you, good luck!
Adding extra columns can be done on a live database:
ALTER TABLE [table-name] ADD [column-name] MEDIUMINT(8) DEFAULT 0;
MySQL will fill the new column with the default value for all existing rows.
So here is what I would do:
1) Make a copy of your old database with the mysqldump command.
2) Run the resulting SQL file against your new database; now you have an exact copy.
3) Write a migration.sql file that modifies your database with ALTER TABLE commands and, for complex conversions, some temporary MySQL procedures.
4) Test your script (when it fails, go back to (2)).
5) If all is OK, go back to (1) and go live with your new database.
These are all valid approaches, but I believe what you really want is to write an SQL statement that generates the INSERT statements for you, including the new columns you have.
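A sketch of that idea in MySQL, run against each old database in turn (the column names and the source tag are assumptions):
SELECT CONCAT('INSERT INTO A (col1, col2, source_db) VALUES (', QUOTE(col1), ', ', QUOTE(col2), ', ''DB_a'');') FROM A;
The output is one ready-to-run INSERT statement per row, which you can replay against the new database.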