I have the same tables in two different databases, one in MySQL and the other in SQL Server. I want to run a query that gets the data from a MySQL table into a SQL Server table so the records are updated on a daily basis.
For example, I have 200 records in MySQL today; by tomorrow it might be 300. I want to transfer the 200 records today and then only the 100 new records tomorrow.
Can anyone help me, please?
Thanks in advance
Probably the best way to manage this is from the SQL Server database. This allows that database to pull the data in every day, rather than having the MySQL database push the data.
The place to start is by linking the servers. Start with the documentation on the subject. Next, set up a job in SQL Server Agent. This job would do the following (a rough sketch appears after the list):
Connect to the MySQL server.
Load the data into a staging table.
Validate the data.
Update or insert the data into the final table.
You can schedule this job to run every day.
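A rough T-SQL sketch of those steps, with purely hypothetical names (a linked server called MYSQL_LINK, a MySQL table mydb.source_table, and SQL Server tables dbo.StagingRecords and dbo.FinalRecords keyed on id):

    -- Load the data into a staging table.
    TRUNCATE TABLE dbo.StagingRecords;
    INSERT INTO dbo.StagingRecords (id, col1, col2)
    SELECT id, col1, col2
    FROM OPENQUERY(MYSQL_LINK, 'SELECT id, col1, col2 FROM mydb.source_table');

    -- Validate the data (example check: reject rows with missing keys).
    DELETE FROM dbo.StagingRecords WHERE id IS NULL;

    -- Update existing rows and insert new ones into the final table.
    MERGE dbo.FinalRecords AS tgt
    USING dbo.StagingRecords AS src
        ON tgt.id = src.id
    WHEN MATCHED THEN
        UPDATE SET tgt.col1 = src.col1, tgt.col2 = src.col2
    WHEN NOT MATCHED THEN
        INSERT (id, col1, col2) VALUES (src.id, src.col1, src.col2);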
Note that 200 or 300 records is very small by the standards of databases (unless the records are really, really big).
There is no straightforward way to do this, but you can approach it as follows:
Use mysqldump to create a dump of the table data.
Restore that dump into a temporary/auxiliary table in your SQL Server database.
Perform the update on the main table with a JOIN against that temporary table (sketched below).
Delete the temporary table.
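A minimal T-SQL sketch of the last two steps, with hypothetical names (dbo.MainTable as the target and dbo.TempImport as the restored dump, both keyed on id):

    -- Update the main table from the restored dump.
    UPDATE m
    SET m.col1 = t.col1,
        m.col2 = t.col2
    FROM dbo.MainTable AS m
    JOIN dbo.TempImport AS t
        ON m.id = t.id;

    -- Delete the temporary table when done.
    DROP TABLE dbo.TempImport;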
I'm fairly new to SSIS and am doing a simple migration of data from a DB2 Server table to a SQL table. In deciding how to perform an update of the data, I elected to use a Staging Table to store all the rows that have changed in some way and need to be updated. My goal is to just do a set based update from that Staging Table to the destination to avoid having to do a row-by-row update.
The problem I am running into (and maybe it isn't a problem, maybe I'm just a newb) is this: does the Staging Table need to be in the same database as the table that needs to be updated?
No, it can be in a different database if you want it to be.
Well, your question makes some sense.
In a general ETL scenario, no: the tables can be in different DBs.
However, in Microsoft SQL Server there is a technique called partition switching for a speedy ETL process. In this technique, you partition your destination table (for example, on a weekly basis) and refresh a whole week's data by switching the complete partition out to a staging table, updating the data there, and switching the staging table back into the partition being modified (sketched below). In this particular case the tables have to be in the same DB.
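A hedged sketch of the switch itself, with hypothetical objects (dbo.FactSales partitioned by week, partition number 12 for the week being refreshed, and dbo.FactSales_Staging as an identically structured staging table on the same filegroup, with a check constraint matching the partition boundary so it can be switched back in):

    -- Switch the target week out of the partitioned table (a metadata-only operation).
    ALTER TABLE dbo.FactSales SWITCH PARTITION 12 TO dbo.FactSales_Staging;

    -- ...update or reload that week's data in dbo.FactSales_Staging here...

    -- Switch the refreshed data back into the same partition.
    ALTER TABLE dbo.FactSales_Staging SWITCH TO dbo.FactSales PARTITION 12;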
I am very new to MySQL and do not have any experience writing stored procedures/functions. I know only basic SQL queries and have worked mainly in Oracle.
Now I have a task to generate reports based on a data model.
Since the data model is a little complex (it is normalized, with more than 18 tables connected by foreign key constraints), we would like to move the older records to another database for reporting purposes.
I would like to know how to move the records from one database (the normalized one with 18 tables) to another reporting server, but store them in denormalized form so it is easier for me to write queries; otherwise I may need to write joins across 18 tables.
Also, moving the records must be done on a daily basis, like a cron job that moves the old records to the reporting server. How do I configure this?
How do I write the stored procedure?
I have a VFP-based application with a directory full of DBFs. I use ODBC in .NET to connect and perform transactions on this database. I want to mirror this data to mySQL running on my webhost.
Notes:
This will be a one-way mirror only. VFP to mySQL
Only inserts and updates must be supported. Deletes don't matter
Not all tables are required. In fact, I would prefer to use a defined SELECT statement to only mirror pseudo-views of the necessary data.
I do not have the luxury of a "timemodified" stamp on any VFP records.
I don't have a ton of data records (maybe a few thousand total), nor do I have a ton of concurrent users on the mySQL side, but I want to be as efficient as possible.
Proposed Strategy for Inserts (doesn't seem that bad; a rough sketch follows the list):
Build temp table in mySQL, insert all primary keys of the VFP table/view I want to mirror
Run "SELECT primaryKey from tempTable not in (SELECT primaryKey from mirroredTable)" on mySQL side to identify missing records
Generate and run the necessary INSERT sql for those records
Blow away the temp table
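A rough sketch of that flow on the mySQL side, with the temp table name and key type as assumptions:

    -- Temp table that receives every primary key currently in the VFP view
    -- (adjust the key type to match the VFP column).
    CREATE TEMPORARY TABLE tempTable (primaryKey INT NOT NULL, PRIMARY KEY (primaryKey));

    -- (the .NET side bulk-inserts the VFP keys into tempTable here)

    -- Keys present in VFP but missing from the mirror; the application then
    -- generates and runs INSERTs for just these records.
    SELECT primaryKey
    FROM tempTable
    WHERE primaryKey NOT IN (SELECT primaryKey FROM mirroredTable);

    -- Blow away the temp table.
    DROP TEMPORARY TABLE tempTable;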
Proposed Strategy for Updates (seems really heavyweight, and probably breaks open queries on the mySQL side when the table is dropped; a sketch follows the list):
Build temp table in mySQL and insert ALL records from VFP table/view I want to mirror
Drop existing mySQL table
Alter tempTable name to new table name
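A hypothetical sketch of the rename step; MySQL's RENAME TABLE can swap both names in one atomic statement, which at least avoids a window where the table does not exist:

    -- mirroredTable is live; mirroredTable_new holds the freshly loaded copy.
    RENAME TABLE mirroredTable TO mirroredTable_old,
                 mirroredTable_new TO mirroredTable;
    DROP TABLE mirroredTable_old;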
These are just the first strategies that come to mind, I'm sure there are more effective ways of doing it (especially the update side).
I'm looking for some alternate strategies here. Any brilliant ideas?
It sounds like you're going for something small, but you might try glancing at some replication design patterns. Microsoft has documented some data replication patterns here and that is a good starting point. My suggestion is to check out the simple Move Copy of Data pattern.
Are your VFP tables in a VFP database (DBC)? If so, you should be able to use triggers on that database to set up the information about what data needs to updated in MySQL.
Let's say I have table A with 3000 rows. I know the first 2000 rows are corrupt, and I have clean records sitting on another MySQL server. What would be the easiest way to restore those 2000 rows? Thanks so much.
Using Maatkit's mk-table-checksum tool you can determine the differences between the tables of two hosts.
mk-table-sync is used to generate and/or run only the statements necessary to update the corrupted table.
What you want is to copy the MySQL data file from the backup server and replace the corrupted file on the production server.
I need to transfer a large number of rows from a SQL Server database to a MySQL db (MyISAM) on a daily basis. I have the MySQL instance set up as a linked server.
I have a simple query which returns the rows that need to be transferred. The number of rows will grow to approximately 40,000 rows, each row has only 2 char(11) columns. This should equate to roughly 0.8MB in size. The execution time of the SELECT query is negligible. I'll be truncating the MySQL table prior to inserting the records each day.
I'm using an INSERT...SELECT query into the MySQL linked server to perform the insert. With 40,000 rows it takes approximately 40 seconds. To see how long it would take to move that number of rows from one MySQL table to another I executed a remote query from MSSQL and it took closer to 1 second.
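For reference, the statement in question might look something like this (the linked server name MYSQL, the remote table, and the column names are guesses rather than the actual setup):

    -- INSERT...SELECT through the linked server.
    INSERT INTO OPENQUERY(MYSQL, 'SELECT col_a, col_b FROM mydb.target_table')
    SELECT col_a, col_b
    FROM dbo.LocalSourceTable;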
I can't see much of what's going on looking at the execution plan in SSMS but it appears as though an INSERT statement is being executed for every one of the rows rather than a single statement to insert all of the rows.
What can I do to improve the performance? Is there some way I can force the rows to be inserted into MySQL in a single statement if that is what's going on?
LOAD DATA INFILE is much faster in MySQL than INSERT. If you can set up your MS SQL server to output a temporary CSV file, you can then pull it into MySQL either with the command-line mysqlimport tool or with LOAD DATA INFILE in a MySQL SQL statement.
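A minimal example of the MySQL side, assuming the SQL Server export landed in /tmp/export.csv and that the target table and its two char(11) columns are named as below (all hypothetical):

    -- Bulk-load the CSV in a single statement instead of one INSERT per row.
    LOAD DATA INFILE '/tmp/export.csv'
    INTO TABLE target_table
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    (col_a, col_b);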
The problem is that the table you are selecting from is on the local server and the table you are inserting into is on the remote server. As such, the linked server is going to have to translate each row into an INSERT INTO Table (Field1, Field2) VALUES ('VALUE1','VALUE2') or similar on the MySQL server. What you could do is keep a checksum for each row on the SQL Server side. Instead of truncating and reinserting the entire table, you can simply delete and reinsert the changed and new records. Unless most of your records change every day, this should cut the amount of data you have to transfer down enormously without having to mess about with exporting and reimporting text files.
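A hedged sketch of the checksum idea on the SQL Server side (dbo.SourceRows, dbo.SyncState, and the column names are hypothetical; HASHBYTES could be used instead of CHECKSUM if collisions are a concern):

    -- Rows that are new or whose checksum has changed since the last push;
    -- only these need to be deleted/reinserted on the MySQL side.
    SELECT s.keycol, s.col_a, s.col_b
    FROM dbo.SourceRows AS s
    LEFT JOIN dbo.SyncState AS st ON st.keycol = s.keycol
    WHERE st.keycol IS NULL
       OR st.row_checksum <> CHECKSUM(s.col_a, s.col_b);

    -- After a successful push, record the new checksums.
    MERGE dbo.SyncState AS st
    USING (SELECT keycol, CHECKSUM(col_a, col_b) AS row_checksum
           FROM dbo.SourceRows) AS s
        ON st.keycol = s.keycol
    WHEN MATCHED THEN UPDATE SET st.row_checksum = s.row_checksum
    WHEN NOT MATCHED THEN INSERT (keycol, row_checksum) VALUES (s.keycol, s.row_checksum);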
I am not sure whether that makes it faster, but a bulk download and upload would be the alternative.
On the mySQL side you could do a LOAD DATA INFILE.
I don't know how to unload it on the SQL Server side, but there is probably something similar.
Dump into a file and then use LOAD DATA INFILE.
Data inserts from a file are much quicker.