I have the following issue: on localhost (my computer) I have a table in a database which I use to update the data for a month. Once the data is correct, I need to update the table in the database that resides on the server.
I use Navicat to do the work, but it only transfers the data by deleting the existing table on the server and sending all the data from my localhost again.
The problem is that the table now has almost 300,000 records stored, and it takes too long to transfer the data, leaving the database empty for some time.
Is there any way to only update the data, without deleting the whole table?
Export the local table under a different name, as a mysqldump file or just a CSV; 300k rows is not a big deal. Then upload that second table to the server database and use a query to update table 1 from the data in table 2.
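A minimal sketch of that approach, with hypothetical names (monthly_data as the live server table, monthly_data_staging as the uploaded copy, and id/amount as placeholder columns):

-- dump the local table (rename it in the dump, or load it on the server as a staging table)
-- mysqldump -u youruser -p localdb monthly_data > monthly_data.sql
-- after loading the dump into the server as monthly_data_staging, update the existing rows:
UPDATE monthly_data AS live
JOIN monthly_data_staging AS staged ON staged.id = live.id
SET live.amount = staged.amount;
-- and add the rows that only exist in the staging copy:
INSERT INTO monthly_data (id, amount)
SELECT s.id, s.amount
FROM monthly_data_staging AS s
LEFT JOIN monthly_data AS live ON live.id = s.id
WHERE live.id IS NULL;

This way the server table is never emptied; only the rows that changed or were added get touched.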
Problem: I have an Aurora RDS database that has a table where the data for a certain column was deleted. I have a snapshot of the DB from a few days ago that I want to use to populate the said column with the values from the snapshot. The issue is that certain rows have been deleted from the live DB in the meantime and I don't want to include them again.
I want to mount the snapshot, connect to it, and then SELECT INTO OUTFILE S3 the table that interests me. Then I will LOAD DATA FROM S3 into the live DB, selecting only the column that interests me. But I haven't found information about what happens if the number of rows differs, namely if the snapshot has rows that were deleted from the live DB in the meantime.
Does the import command take the ID column into consideration when doing the import? Should I also import the ID column? I don't want to recreate the rows in question, I only want to populate the existing rows with the values from the column I want from the snapshot.
ALTER TABLE the destination table to add the column you are missing. It will be empty of data for now.
LOAD DATA your export into a different table than the ultimate destination table.
Then do an UPDATE with a JOIN between the destination table and the imported table. In this update, copy the values for the column you're trying to restore.
By using an inner join, it will only match rows that exist in both tables.
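A sketch of that UPDATE step, with hypothetical names: the live table is items with primary key id, the column being restored is description, and the imported snapshot copy is items_snapshot:

UPDATE items AS live
INNER JOIN items_snapshot AS snap ON snap.id = live.id
SET live.description = snap.description;
-- snapshot rows whose id no longer exists in the live table simply find no match, so they are skipped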
I want to query data from two different MySQL databases into a new MySQL database.
I have two databases with a lot of irrelevant data, and I want to create what can be seen as a data warehouse where only relevant data from the two databases is present.
As of now all data gets sent to the two old databases, but I would like scheduled updates so the new database stays current. There is a key shared between the two databases, so ideally I would like all the data to end up in one table, though this is not crucial.
I have done similar work with Logstash and ES, however I do not know how to do it when it comes to MySQL.
The best way to do that is to create an ETL process with Pentaho Data Integration or any other ETL tool. Your sources will be the two different databases; in the transformation step you can apply whatever business logic you need, and then load the result into the new database.
If you create this ETL you can schedule it once a day so that your database will be up to date.
If you want to do this without an ETL tool, the databases must be on the same host. Then you can just prefix the table name with the database name in the query, like SELECT * FROM database.table_name.
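For example, assuming both source databases (db_a and db_b here) sit on the same MySQL server as the new warehouse database, with a shared customer_id key and placeholder column names:

INSERT INTO warehouse.combined (customer_id, col_a, col_b)
SELECT a.customer_id, a.col_a, b.col_b
FROM db_a.orders AS a
JOIN db_b.customers AS b ON b.customer_id = a.customer_id
ON DUPLICATE KEY UPDATE col_a = VALUES(col_a), col_b = VALUES(col_b);
-- the MySQL event scheduler can run this daily if you don't want an external scheduler:
-- CREATE EVENT refresh_warehouse ON SCHEDULE EVERY 1 DAY DO <the INSERT above>;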
I have a MySQL Server which has one database called "Backup".
It only has one table with the name "storage".
In the Backup database, the storage table contains about 5 million rows.
Now I wanted to append new rows to the table by using the "source" command in the SQL command line.
What happened is that source loaded all the new rows into the table, but it overwrote the existing entries (it seems it first deleted all the data).
I should mention that the SQL file I want to import comes from another server, where the table has the same name and structure as "storage".
What I want is to append the new entries from the SQL file to the table in my database. I do not want to overwrite the existing ones.
The structure of the two tables is exactly the same. As the name says, I use the Backup database for backups, so that from time to time I can back up my data.
Does anyone have an idea how to solve this?
Look in the .sql file you're reading with the SOURCE command, and remove the DROP TABLE and CREATE TABLE statements that appear there. They are the cause of your table being overwritten; what's actually happening is that the table is being replaced.
You could also look into using SELECT ... INTO OUTFILE and LOAD DATA INFILE as a faster and less potentially destructive way to get data from one server to the other in a file.
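A sketch of that second route (the file path and delimiters are just placeholders):

-- on the source server: export the rows to a file
SELECT *
INTO OUTFILE '/tmp/storage_export.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n'
FROM storage;
-- copy the file to the backup server, then append the rows without touching existing data;
-- IGNORE skips rows whose primary key already exists instead of aborting
LOAD DATA INFILE '/tmp/storage_export.csv'
IGNORE INTO TABLE storage
FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';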
Thanks for viewing this. I need a little bit of help with a project I am working on with MySQL.
For part of the project I need to load a few things into a MySql database which I have up and running.
The info that I need, for each column in the table Documentation, is stored in text files on my hard drive.
For example, one column in the documentation table is "ports" so I have a ports.txt file on my computer with a bunch of port numbers and so on.
I tried to run this MySQL statement through phpMyAdmin:
LOAD DATA INFILE 'C:\\ports.txt' INTO TABLE `Documentation` (`ports`)
It ran successfully, so I went on to do the other load I needed:
LOAD DATA INFILE 'C:\\vlan.txt' INTO TABLE `Documentation` (`vlans`)
This also completed successfully, but it added all the rows to the vlan column AFTER the last entry to the port column.
Why did this happen? Is there anything I can do to fix this? Thanks
Why did this happen?
LOAD DATA inserts new rows into the specified table; it doesn't update existing rows.
Is there anything I can do to fix this?
It's important to understand that MySQL doesn't guarantee that tables will be kept in any particular order. So, after your first LOAD, the order in which the data were inserted may be lost & forgotten - therefore, one would typically relate such data prior to importing it (e.g. as columns of the same record within a single CSV file).
You could LOAD your data into temporary tables that each have an AUTO_INCREMENT column and hope that such auto-incremented identifiers remain aligned between the two tables (MySQL makes absolutely no guarantee of this, but in your case you should find that each record is numbered sequentially from 1); once there, you could perform a query along the following lines:
INSERT INTO Documentation (ports, vlans) SELECT port, vlan FROM t_Ports JOIN t_Vlan USING (id);
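For completeness, the staging tables could look something like this (the column types are assumptions; the id columns exist only to line the two files up by row order):

CREATE TABLE t_Ports (id INT AUTO_INCREMENT PRIMARY KEY, port VARCHAR(64));
CREATE TABLE t_Vlan (id INT AUTO_INCREMENT PRIMARY KEY, vlan VARCHAR(64));
LOAD DATA INFILE 'C:\\ports.txt' INTO TABLE t_Ports (port);
LOAD DATA INFILE 'C:\\vlan.txt' INTO TABLE t_Vlan (vlan);
-- then run the INSERT ... SELECT above to combine them into Documentation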
I want to copy the data from one MySQL table to another table. The source table contains 30 million records. The SQL connection gets lost when I try to copy the data using the query
INSERT INTO table2 SELECT * FROM table1
Is there any external tool available to do this job from the shell?
Thanks
Sree
The mysql command line tool should be able to handle this just fine.
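A minimal sketch, assuming the database is named mydb and the table has a numeric primary key id (both placeholders):

mysql -u youruser -p mydb -e "INSERT INTO table2 SELECT * FROM table1"

Running the statement from the mysql client on the server itself avoids the connection timeout of a GUI or remote client. If a single statement is still too heavy, the copy can be split into ranges of the primary key:

INSERT INTO table2 SELECT * FROM table1 WHERE id BETWEEN 1 AND 1000000;
INSERT INTO table2 SELECT * FROM table1 WHERE id BETWEEN 1000001 AND 2000000;
-- ...and so on, until all 30 million rows are copied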