Hi, I have around 1,000 million records in a production table and around 1,500 million records in a staging table. I need to compare the staging data with the production data: if there is a new record, insert it, and if an existing record has changed, update the changed columns. What is the best approach for this situation? Please suggest.
If you want to get a copy of a table from one database to another four times a year, one way is to simply truncate and reload the target table. Don't mess around trying to update individual rows; it's often quicker to delete the lot and reload them.
You need to do some analysis and see if there is any field you can use in the source to reliably load a subset of data. For example, if there is a 'created date' on the record, you can use it to load only the data created in the last three months (instead of all of it).
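To make the two options concrete, here are hedged sketches in T-SQL (assuming the target is SQL Server; all table and column names are illustrative, not from the question):

```sql
-- Option 1: full truncate-and-reload
TRUNCATE TABLE dbo.TargetTable;
INSERT INTO dbo.TargetTable
SELECT * FROM SourceDb.dbo.SourceTable;

-- Option 2: subset load, if a reliable CreatedDate column exists
INSERT INTO dbo.TargetTable
SELECT *
FROM   SourceDb.dbo.SourceTable
WHERE  CreatedDate >= DATEADD(month, -3, GETDATE());
```

Option 2 only works if the created date is trustworthy and rows are never modified after creation; otherwise you also need a modified-date column or the full reload.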
If you add some more info then we can be more specific about a solution.
e.g.
- This has to be fast, but we have lots of disk space
- This must be simple to maintain and can take a day to load
Also... I assume the source and target are SQL Server? If so, is it the Enterprise edition?
My requirement is to load data daily from a source table on one server to a destination table on another server. The servers are different platforms: one is SQL Server and the other is Oracle.
What I want is to make the source query fast. When I execute the package today, I should get only the new records instead of all records from the source. Reading the whole table takes a long time, and even using a Lookup transformation to check whether each record exists takes a long time.
Please look into this.
I'm writing the back-end for a web app in Spring and it uses a MySQL database on an AWS RDS instance to keep track of user data. Right now the SQL tables are separated by user groups (just a value in a column), so different groups have different access to data. Whenever a person using the app does a certain operation, we want to back up their part of the database, which can be viewed later, or replace their data in the current branch if they want.
The only way I can figure out how to do this is to create separate copies of every table for each backup and maintain another table to keep track of all their names. This feels very inelegant and labor-intensive.
So far all operations I do on the database are SQL queries from the server, and I would like to stay consistent with that.
Is there a nice way to do what I need?
Why would you want a separate table for each backup? You could have a single table that mirrors the main table but has a few additional fields to record metadata about the change: for example, the person making it, a timestamp, and the type of change (update or delete). Whenever a change is made, simply copy the old value over to this table, and you will then have a complete history of the state of the record over time. You can still enforce the group-based access by keeping that column.
As for doing all this with queries, you will need some for viewing or restoring these archived changes, but the simplest way to maintain the archived records is surely to create triggers on the main tables. If you add BEFORE UPDATE and BEFORE DELETE triggers, these can copy the old version of each record over to the archive (and add the metadata at the same time) each time a record is updated or deleted.
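A minimal sketch of that trigger setup, assuming a main table called `users` with primary key `id` (all names here are illustrative, not from the original post):

```sql
-- Archive table mirrors the main table plus change metadata.
CREATE TABLE users_archive LIKE users;
ALTER TABLE users_archive
  DROP PRIMARY KEY,                         -- allow many versions per id
  MODIFY id INT,                            -- drop AUTO_INCREMENT if present
  ADD COLUMN changed_at   DATETIME NOT NULL,
  ADD COLUMN changed_by   VARCHAR(64) NOT NULL,
  ADD COLUMN change_type  ENUM('update','delete') NOT NULL;

DELIMITER //
CREATE TRIGGER users_bu BEFORE UPDATE ON users
FOR EACH ROW BEGIN
  -- In a BEFORE UPDATE trigger the stored row still holds the old values.
  INSERT INTO users_archive
  SELECT u.*, NOW(), CURRENT_USER(), 'update'
  FROM users u WHERE u.id = OLD.id;
END//
CREATE TRIGGER users_bd BEFORE DELETE ON users
FOR EACH ROW BEGIN
  INSERT INTO users_archive
  SELECT u.*, NOW(), CURRENT_USER(), 'delete'
  FROM users u WHERE u.id = OLD.id;
END//
DELIMITER ;
```

Note that `CURRENT_USER()` records the MySQL account, not the application user; if you need the app user, pass it in via a session variable from the server code.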
I have an Excel file that contains contents from the database when downloaded. Each row is identified by an identifier called id_number. Users can add new rows to the file with a new unique id_number. When it is uploaded, for each Excel row:
When the id_number exists in the database, an update is performed on the database row.
When the id_number does not exist in the database, an insert is performed.
Other than the Excel file, data can be added or updated individually using a page called report.php. Users use this page if they only want to add a single record for an employee, for example.
Ideally, I would like to do an INSERT ... ON DUPLICATE KEY UPDATE for maximum performance. I might also put all of them in a transaction. However, I believe this overall process has some flaws:
Before any adds/updates, validation checks have to be done on all Excel rows against their corresponding database rows, because there are many unique columns in the table. That's why I'll have to do some SELECT statements to ensure the data is valid before performing any add/update. Is this efficient on tables with 500 rows and 69 columns? I could probably get all the data, store it in a PHP array, and do the validation checks on the array, but what happens if someone adds a new row (with an id_number of 5) through report.php, and the Excel file I uploaded also contains a row with id_number 5? That could destroy my validations, because I cannot be sure my data is up to date without performing a lot of SELECT statements.
Suppose the system is in the middle of a transaction adding/updating the data retrieved from the Excel file, and someone on report.php adds a row because all the validations have been satisfied (e.g. no duplicate id_numbers). Suppose at this point the next row to be added from the Excel file and the row being added on report.php have the same id_number. What happens then? I don't have much knowledge of transactions; I think they at least prevent two queries from changing a row at the same time. Is that correct?
I don't really mind these kinds of situations that much. But some files have many rows and it might take a long time to process all of them.
One way I've thought of fixing this: while the Excel file upload is processing, prevent users on report.php from modifying the rows currently held by the Excel file. Is this fine?
What could be the best way to fix these problems? I am using MySQL.
If you really need to allow the user to generate their own unique ID, then you could lock the table in question while you're doing your validation and inserting.
If you acquire a write lock, you can be certain the table isn't changed while you do your work of validating and inserting.
`mysql> LOCK TABLES tbl_name WRITE`
don't forget to
`mysql> UNLOCK TABLES;`
The downside of locking is obvious: the table is locked. If it gets high traffic, then all of that traffic is waiting, and that can lead to all kinds of pain (MySQL running out of connections would be a common one).
That said, I would suggest a different path altogether: let MySQL be the only one that generates unique ids. That is, make sure the database table has an AUTO_INCREMENT unique id (primary key), and have new records in the spreadsheet entered without a unique id. MySQL will then ensure that new records get a unique id, and you don't have to worry about locking; you can validate and insert without fear of a collision.
As to the question about performance with a 500-row, 69-column table: if the PHP server and the MySQL server are reasonably sized and the columns aren't large data types, this amount of data should be readily handled in a fraction of a second. That said, performance can be sabotaged by one bad line of code, so if your code is slow, I would treat that as a separate optimisation problem.
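A hedged sketch of the suggested approach, using an illustrative `employees` table (column names are assumptions, not from the question):

```sql
CREATE TABLE employees (
  id_number INT AUTO_INCREMENT PRIMARY KEY,
  name      VARCHAR(100),
  dept      VARCHAR(50)
);

-- New spreadsheet rows arrive with no id; MySQL assigns one.
INSERT INTO employees (name, dept) VALUES ('A. Smith', 'Sales');

-- Rows that carry an existing id can still be upserted in one statement.
INSERT INTO employees (id_number, name, dept)
VALUES (5, 'A. Smith', 'Marketing')
ON DUPLICATE KEY UPDATE name = VALUES(name), dept = VALUES(dept);
```

With this scheme, the only collision left to worry about is on other unique columns, and `ON DUPLICATE KEY UPDATE` (or a preceding `SELECT`) handles those without a table lock.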
At my office we have a legacy accounting system that stores all of its data in plaintext files (TXT extension) with fixed-width records. Each data file is named e.g., FILESALE.TXT. My goal is to bring this data into our MySQL server for read-only usage by many other programs that can't interface with the legacy software. Each file is essentially one table.
There are about 20 files in total that I need to access, roughly 1 GB of total data. Each line might be 350-400 characters wide and have 30-40 columns. After pulling the data in, no MySQL table is much bigger than 100 MB.
The legacy accounting system can modify any row in the text file, delete old rows (it has a deleted record marker -- 0x7F), and add new rows at any time.
For several years now I have been running a cron job every 5 minutes that:
Checks each data file for last modification time.
If the file is not modified, skip it. Otherwise:
Parses the data file, cleans up any issues (very simple checks only), and spits out a tab-delimited file of the columns I need (some of the columns I just ignore).
Truncates the table and imports the new data into our MySQL server like this:
```sql
START TRANSACTION;
TRUNCATE legacy_sales;
LOAD DATA INFILE '/tmp/filesale.data' INTO TABLE legacy_sales;
COMMIT;
```
The cron script runs each file check and parse in parallel, so the whole updating process doesn't really take very long. The biggest table (changed infrequently) takes ~30 seconds to update, but most of the tables take less than 5 seconds.
This has been working OK, but there are some issues. I guess it messes with database caching, so each time I have to TRUNCATE and LOAD a table, other programs that use the MySQL database are slow at first. Additionally, since I switched to running the updates in parallel, the database can be in a slightly inconsistent state for a few seconds.
This whole process seems horribly inefficient! Is there a better way to approach this problem? Any thoughts on optimizations or procedures that might be worth investigating? Any neat tricks from anyone who faced a similar situation?
Thanks!
Couple of ideas:
If the rows in the text files have a modification timestamp, you could update your script to keep track of when it runs, and then only process the records that have been modified since the last run.
If the rows in the text files have a field that can act as a primary key, you could maintain a fingerprint cache for each row, keyed by that id. Use this to detect when a row changes, and skip unchanged rows. That is, in the loop that reads the text file, calculate the SHA-1 (or whatever) hash of the whole row, and compare that to the hash from your cache. If they match, the row hasn't changed, so skip it. Otherwise, update/insert the MySQL record and store the new hash value in the cache. The cache could be a GDBM file, a memcached server, a fingerprint field in your MySQL tables, whatever. This will leave unchanged rows untouched (and thus still cached) on MySQL.
Perform updates inside a transaction to avoid inconsistencies.
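The fingerprint-cache idea from the second point can be sketched in a few lines of Python. This assumes one record per line with a fixed-width id in the first 10 characters (an assumption for illustration; adjust to the real layout), and uses a plain dict as the cache stand-in for GDBM/memcached:

```python
import hashlib

def changed_rows(lines, cache):
    """Yield (row_id, line) for rows that are new or modified; update cache."""
    for line in lines:
        row_id = line[:10].strip()        # assumed id position in the record
        digest = hashlib.sha1(line.encode("utf-8")).hexdigest()
        if cache.get(row_id) != digest:   # new row, or contents changed
            cache[row_id] = digest
            yield row_id, line
```

Persist the cache between cron runs; on each run, only the yielded rows need an INSERT/UPDATE against MySQL, so unchanged rows (the vast majority every 5 minutes) are never touched.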
Two things come to mind. I won't go into too much detail, but feel free to ask questions:
Build a service that offloads the processing of the file to an application server and then just populates the MySQL table. You can even build in intelligence by checking for duplicate records rather than truncating the entire table.
Offload the processing to another MySQL server and replicate / transfer it over.
I agree with alex's tips. If you can, update only modified fields, and batch the work with transactions and grouped multi-row inserts. An additional benefit of transactions is faster updates.
If you are concerned about downtime, instead of truncating the table, insert into a new table, then rename it.
For improved performance, make sure you have proper indexing on the fields.
Look at database-specific performance tips, such as:
_ delayed inserts in MySQL improve performance
_ caches can be optimized
_ even if you do not have unique rows, you may (or may not) be able to MD5 the rows
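The rename trick can be sketched like this, reusing the table and file names from the question (the `_new`/`_old` suffixes are illustrative):

```sql
-- Load into a fresh copy while readers keep using the live table.
CREATE TABLE legacy_sales_new LIKE legacy_sales;
LOAD DATA INFILE '/tmp/filesale.data' INTO TABLE legacy_sales_new;

-- RENAME TABLE swaps both names in one atomic operation,
-- so readers never see an empty or half-loaded table.
RENAME TABLE legacy_sales TO legacy_sales_old,
             legacy_sales_new TO legacy_sales;
DROP TABLE legacy_sales_old;
```

This also sidesteps the caching complaint: the live table is never truncated, only swapped.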
I have a question. I'm working on loading a very large table with existing data of around 150 million records, which keeps growing by 1 million records daily. A few days back the ETL started failing even after running for 24 hours. In the DFT, we have a source query pulling 1 million records, which is LOOKed UP against the destination table of 150 million records to check for new records. It is failing because the LOOKUP cannot hold data for 150 million records. I have tried changing the LOOKUP to a Merge Join without success. Can you please suggest alternative designs to load the data into the large table successfully? There is no way I can reduce the size of the destination table, and I already have indexes on all required columns. I hope I'm clear in explaining the scenario.
You can try table partitioning to import a large amount of data. Take a look here for an example. Another link that might be useful. Also, check msdn for more information about creating and maintaining partitioned tables.
Separate the journey into two steps.
Create a new data flow task (or a separate package) to stage your source table "as is" in an environment close to your target database.
Alter your existing data flow task to query across both your staged table and the target table to pull out only new (and changed?) records, and process these accordingly within SSIS. This should eliminate the expensive Lookup's round trip to the database per record.
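The second step can be done with one set-based query instead of a per-row Lookup. A hedged sketch (table and column names are illustrative; it assumes a business key to join on, and optionally a checksum column for change detection):

```sql
SELECT s.*
FROM   staging_src  AS s
LEFT JOIN target_table AS t
       ON t.business_key = s.business_key
WHERE  t.business_key IS NULL               -- new rows
   OR  t.row_checksum <> s.row_checksum;    -- changed rows, if you keep a checksum
```

Because both tables now sit on the same server, the database engine can use the existing indexes for one large join rather than 150 million cached lookups or a million round trips.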