It was easy to use phpMyAdmin to pull a list of WordPress comments that were missing an IP address, but now that I've got that data in hand I'm looking for a quick way to insert it back into the table. My guess is the answer will involve uploading a .sql file.
» WordPress ERD
I've currently got my fresh data in an Excel sheet, with columns for comment_ID and comment_author_IP. There are several hundred records, with a variety of IP addresses.
UPDATE:
The winning query:
UPDATE wp_comments x, temp xx
SET x.comment_author_IP = xx.IP
WHERE x.comment_ID = xx.ID;
If you are looking for a simple, manual process:
Use phpMyAdmin to upload your Excel sheet into a new table (it can import Excel files too);
add indexes on the join/foreign-key columns of the newly created table;
then join the new table to the actual table and update the relevant fields.
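Putting the steps together, the whole flow might look something like this (assuming the spreadsheet was imported into a table named `temp` with columns `ID` and `IP`, as in the winning query above):

```sql
-- 1. phpMyAdmin imports the spreadsheet into a new table, e.g.:
--    temp (ID INT, IP VARCHAR(45))

-- 2. Index the join column so the update doesn't table-scan:
ALTER TABLE temp ADD INDEX idx_temp_id (ID);

-- 3. Join the imported table to wp_comments and copy the values over:
UPDATE wp_comments x
JOIN temp xx ON x.comment_ID = xx.ID
SET x.comment_author_IP = xx.IP;

-- 4. Clean up:
DROP TABLE temp;
```

The inner join means only comments that actually have a matching row in `temp` are touched; everything else is left alone.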
Related
Problem: I have an Aurora RDS database that has a table where the data for a certain column was deleted. I have a snapshot of the DB from a few days ago that I want to use to populate the said column with the values from the snapshot. The issue is that certain rows have been deleted from the live DB in the meantime and I don't want to include them again.
I want to mount the snapshot, connect to it and then SELECT INTO OUTFILE S3 the table that interests me. Then I will LOAD DATA FROM S3 into the live DB, selecting only the column that interests me. But I haven't found information about what happens if the number of rows differ, namely if the snapshot has rows that were deleted in the meantime from the live DB.
Does the import command take the ID column into consideration when doing the import? Should I also import the ID column? I don't want to recreate the rows in question, I only want to populate the existing rows with the values from the column I want from the snapshot.
ALTER TABLE the destination table to add the column you are missing. It will be empty of data for now.
LOAD DATA your export into a different table than the ultimate destination table.
Then do an UPDATE with a JOIN between the destination table and the imported table. In this update, copy the values for the column you're trying to restore.
By using an inner join, it will only match rows that exist in both tables.
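A sketch of those steps in Aurora MySQL syntax (the table names, column names, and S3 path here are placeholders, not from the question):

```sql
-- Hypothetical names: dest_table is the live table, restore_stage holds the export.
-- 1. Add the missing column back (empty for now):
ALTER TABLE dest_table ADD COLUMN restored_col VARCHAR(255);

-- 2. Load the snapshot export into a staging table:
CREATE TABLE restore_stage (id BIGINT PRIMARY KEY, restored_col VARCHAR(255));

LOAD DATA FROM S3 's3://my-bucket/export.part_00000'
INTO TABLE restore_stage
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
(id, restored_col);

-- 3. Inner join: only rows present in BOTH tables are updated, so rows
--    deleted from the live DB since the snapshot are simply skipped,
--    and no new rows are ever created:
UPDATE dest_table d
JOIN restore_stage s ON d.id = s.id
SET d.restored_col = s.restored_col;

-- 4. Clean up:
DROP TABLE restore_stage;
```

Because the restore goes through an `UPDATE ... JOIN` rather than a direct import into the destination table, the row counts of the snapshot and the live table never need to match.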
I am not a database engineer, but I have a question about whether something is possible with a MySQL database.
Is it possible to write SQL to get data from several tables and then use that data to update another table?
Also, this job should run on a daily schedule.
The reason why I ask this question is because I am in this situation:
Our IT department maintains a big database, but the tables don't meet our department's business needs (we only have read permission). Our department has a small database (where we have full permissions), in which we can use custom SQL to create some special tables and update them daily.
So, back to the question: is it possible to set up the SQL and schedule it so that it keeps updating our tables?
Thank you so much!!!
Is it possible to write SQL to get the data from several tables and
then use these data (what we get) to update a new table?
Yes, it is possible. You can use an UPDATE ... JOIN construct: gather the data from several tables with a SELECT statement, JOIN with that inline view, and perform the update on your other table.
Example:
UPDATE Your_Table a
JOIN (
    -- SELECT query to get data from multiple other tables
) xxx ON a.some_column = xxx.some_matching_column
SET a.column_c = xxx.column_c;
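As a concrete illustration (the table and column names here are made up, not from the question), rolling source data up into a summary table could look like:

```sql
UPDATE daily_summary ds
JOIN (
    -- Aggregate across one or more source tables you can read:
    SELECT o.customer_id, SUM(o.total) AS order_total
    FROM orders o
    GROUP BY o.customer_id
) src ON ds.customer_id = src.customer_id
SET ds.order_total = src.order_total;
```

The derived table `src` plays the role of the inline view: MySQL materializes the SELECT first, then updates each matching row of the target table from it.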
Also, this work should be scheduled daily
Sure, use the MySQL Event Scheduler.
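A minimal sketch of a daily event (the event name, schedule, and the update statement inside `DO` are placeholders):

```sql
-- The scheduler must be enabled (requires the appropriate privilege):
SET GLOBAL event_scheduler = ON;

CREATE EVENT refresh_department_tables
ON SCHEDULE EVERY 1 DAY
STARTS CURRENT_TIMESTAMP + INTERVAL 1 DAY  -- first run: 24h from now
DO
  UPDATE daily_summary ds
  JOIN (
      SELECT o.customer_id, SUM(o.total) AS order_total
      FROM orders o
      GROUP BY o.customer_id
  ) src ON ds.customer_id = src.customer_id
  SET ds.order_total = src.order_total;
```

Note that your MySQL user needs the EVENT privilege on the schema, and the scheduler setting must survive server restarts (set `event_scheduler=ON` in the config file as well).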
In reviewing many of the answers, I don't see a solution to something I feel should be simple.
I'm attempting to update a couple of fields in my production database on one server from a restored database on another server, because of a loss of data caused by our ERP vendor's update.
Anyway, I have both servers connected in SSMS and just want to run the query below:
USE coll18_production;
GO
USE coll18_test2;
GO
UPDATE coll18_Production.dbo.PERSON
SET W71_ID_CRD_NO = T2.PERSON_USER1, W71_ID_CRD_DATE = T2.PERSON_USER9
FROM coll18_test2.dbo.PERSON as T2
WHERE coll18_Production.dbo.PERSON.ID = T2.ID;
I would think this would be a simple update, but I can't get a query to work across databases on two different servers.
Thanks if anyone can make this simple,
Donald
Okay, thanks for the input. In the essence of time I'm going to do something similar to what cpaccho recommended: create a temp table in my Production database containing the 2 fields that I want to update from. Then I'll connect to my Test2 database that I restored from backup, export those two fields along with the primary key as a csv file, and load that data into the temp table in my production database. Then I'll simply run my update from this temp table into the 2 fields in my production PERSON table where the IDs are equal.
Have a great weekend,
Donald
The problem is that since the databases are on 2 different servers in order to join between them you will need a way for the servers to talk to each other.
The way to do that is through linked servers. Then you can set up your query to join the 2 tables together using 4 part naming (server.DB.Schema.Table) and accomplish your goal. The query will look sort of like this:
UPDATE a
SET a.column = b.column
FROM Server1.DB.Schema.Table1 a
INNER JOIN Server2.DB.Schema.Table2 b
ON a.column = b.column
WHERE a.column = something
You will only need to set up the linked server on one side and the Server name in the query will be the name you give the linked server. The only caveat is that this can be slow because in order to join the tables SQL Server may have to copy the entire table from one server to the other. I would also set up the linked server on the server you are updating (so that you run the update on the same server as the DB you are updating)
How to set up Linked Server Microsoft KB
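For reference, the linked-server setup itself is a couple of stored-procedure calls (the server names below are placeholders for your own):

```sql
-- Register the remote server under the local name 'RESTORESRV':
EXEC sp_addlinkedserver
    @server     = N'RESTORESRV',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'RemoteHostName';  -- actual remote SQL Server instance

-- Map logins (here: connect using the caller's own credentials):
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'RESTORESRV',
    @useself    = N'TRUE';
```

After that, `RESTORESRV.DB.Schema.Table` resolves in queries run on the local server, and the four-part update above works as written.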
A simple, rather hacky way would be to hard copy the table from database to database...
First create a table that contains the changes you want:
USE coll18_test2;
GO
SELECT PERSON_USER1, PERSON_USER9, ID
INTO dbo.MyMrigationOrWhateverNameYouLike
FROM coll18_test2.dbo.PERSON
Then go to SSMS, right-click on the coll18_test2 database --> Tasks --> Generate Scripts, and follow the assistant to generate a script for the newly created table. Don't forget to set, in the advanced options, "Types of data to script" to "Schema and data".
Now that you have your script, just run it in your production database, and make your query based on that table.
UPDATE dbo.PERSON
SET W71_ID_CRD_NO = T2.PERSON_USER1, W71_ID_CRD_DATE = T2.PERSON_USER9
FROM dbo.MyMrigationOrWhateverNameYouLike as T2
WHERE dbo.PERSON.ID = T2.ID;
Finally drop the MyMrigationOrWhateverNameYouLike table and you're done...
I have a question which I'm sure has been asked before, but I don't know the terminology for this question. (Hence, I have tried searching for an answer to this question on this, but no luck.)
I am on SQL Server Management Studio 2008. I have a table that was created by an import of a flat file. At the beginning of every month, I want to update this table with a new version of the given flat file. The headers on the flat file / table will always stay the same, and no previous records will be lost. Data on previous records may change, and new records will be included.
What is the best way for doing this each month? Right now, my solution is to delete the current table and re-create it with an import of the new flat file. (Or, I could run a truncate, and then re-import.)
One of the faster methods would be to drop all indexes, truncate, re-import, and re-create all indexes. Note that with a flat file you could automate using SSIS, or you could use a BULK INSERT for a job schedule. For instance, if the file is in the same location every month and all the delimiters and details are the same, a procedure or TSQL script that BULK INSERTs the file would work when called by a job once a month on a schedule.
BULK INSERT MonthlyTable
FROM 'C:\MonthlyFileDrop\MonthlyFile.txt'
WITH (
FIELDTERMINATOR = ','
,ROWTERMINATOR = '0x0a'
,FIRSTROW=2
)
Another approach (one that I'm not partial to) would be to insert the data into a staging table, work out which rows in the staging table are new or changed relative to the existing table, apply those rows, then re-index the existing table and drop the staging table.
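That staging approach might be sketched like this (the staging table, key, and column names are placeholders):

```sql
-- Load the new monthly file into a staging table with the same layout:
BULK INSERT StageTable
FROM 'C:\MonthlyFileDrop\MonthlyFile.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0a', FIRSTROW = 2);

-- Update existing rows whose data changed this month:
UPDATE m
SET m.SomeColumn = s.SomeColumn
FROM MonthlyTable m
JOIN StageTable s ON s.KeyColumn = m.KeyColumn;

-- Insert rows that are new this month:
INSERT INTO MonthlyTable (KeyColumn, SomeColumn)
SELECT s.KeyColumn, s.SomeColumn
FROM StageTable s
WHERE NOT EXISTS (
    SELECT 1 FROM MonthlyTable m WHERE m.KeyColumn = s.KeyColumn
);

DROP TABLE StageTable;
```

The trade-off versus truncate-and-reload: this preserves the existing table (and anything referencing it) throughout, at the cost of two extra passes over the data.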
I just had my site hacked and they were able to do some damage to my current database.
Anyway, I have a backup from few days ago. The problem is that the current database had a few thousand more posts and threads / users.
I am simply wondering how I could possibly go about merging the two databases?
The two databases have exactly the same structure. I want the backup to overwrite any record it shares with the current database, but I also want to keep the newer records (posts, threads, users, and such) that exist only in the current database.
I know you can merge two tables, if they have the exact structure, but what about two databases that have the same structure?
I assume you have a schema s1 and a schema s2.
To insert all rows of a table in s1 into a table in s2, overwriting existing rows, you can use:
REPLACE INTO s2.table_name
SELECT * FROM s1.table_name;
If you do not want to touch existing rows, use INSERT IGNORE (note that ON DUPLICATE KEY IGNORE is not valid MySQL syntax; the valid clause is ON DUPLICATE KEY UPDATE):
INSERT IGNORE INTO s2.table_name
SELECT * FROM s1.table_name;
There are a few ways to do it:
1.) Use command-line tools like Schema Sync and mysqldiff
2.) Or use SQLyog
Find out more here:
http://blog.webyog.com/2012/10/16/so-how-do-you-sync-your-database-schema/
In my experience ON DUPLICATE KEY IGNORE did not work (it is not valid MySQL syntax). Instead I found
INSERT IGNORE INTO ta2_table
SELECT * FROM ta1_table;
worked like a charm.