I just joined a new office as a database administrator. Here we are using SQL Server merge replication. It surprised me how large three of the major replication tables have grown:
MSmerge_contents
MSmerge_genhistory
MSmerge_tombstone
MSmerge_contents has grown to 64 GB with around 64 billion records, and this is happening because the Expiration Period for subscriptions is set to None.
Now I want to clean up this table. Since we are using the simple recovery model, everything got stuck when I ran a delete query against this table. I have no downtime window in which to stop or pause the replication process.
Can anyone advise me on how to minimize its size or delete half of its data?
You should not delete directly from the merge system tables; that is not supported.
Instead, the proper way to clean up the metadata in the merge system tables is to set your subscription expiration to something other than None; the default is 14 days. Metadata cleanup runs when the Merge Agent runs, which executes sp_mergemetadataretentioncleanup. More information on subscription expiration and metadata cleanup can be found in How Merge Replication Manages Subscription Expiration and Metadata Cleanup.
However, since you most likely have a lot of metadata that needs to be cleaned up, I would gradually reduce the retention period. An explanation of this approach can be found here:
https://blogs.technet.microsoft.com/claudia_silva/2009/06/22/replication-infinite-retention-period-causing-performance-issues/
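For example, a minimal sketch of stepping the retention down gradually (the publication name is hypothetical; cleanup also happens on its own when the Merge Agent runs):

EXEC sp_changemergepublication
    @publication = N'MyMergePublication',
    @property = N'retention',
    @value = N'60';   -- then 30, then 14 on later passes

EXEC sp_mergemetadataretentioncleanup;   -- can also be invoked manually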
Hope this helps.
I came across this problem a few days ago and have been tinkering with and pondering several different approaches, but I cannot seem to find a good answer:
I have two MySQL servers, one master/hot and one slave/archive. All write requests go to the master and shall also (eventually) be replicated/copied to the slave. However, certain data on the master grows "stale" after a while (say a week) and shall then be purged, to keep the master's tables short. This purge should not affect the slave, however. How can I go about achieving this?
Essentially, my master database acts sort of like a "hot" database, where data is fresh and is purged once it goes old. It should contain data that users might need quickly, and thus we want to keep the tables small. My slave, on the other hand, works more like an archive, which should contain all data regardless of "hotness". Queries to the slave don't need to execute quickly, and the slave's data can lag behind a few minutes, but it needs to contain all records since our beginning of time.
My initial thought was to utilize ordinary replication, but can I somehow filter certain queries so they do not affect the slave? I was thinking of creating a purge query, which removes old data from the master but doesn't affect the slave. From reading the MySQL documentation, it seems that this filtering can only be done at the database or table level.
Another thought was to do this via an external application that manually SELECTs data from the master and INSERTs it into the slave, with some clever logic to decide what data to select. This works well for log tables, which only ever add data, but it doesn't work well for tables that represent state, such as user settings. This approach would probably also involve a lot of special cases, as I cannot find a good, consistent way of describing all tables in our database (there are log tables, state tables, config tables and a few which I cannot really categorize).
None of these approaches seems to solve the problem in a simple fashion, but I feel I cannot be the first to have this problem. Any ideas are welcome, and thanks in advance.
If more info is needed, feel free to comment and I'll edit it in.
Just use regular replication. When you delete data on the master, do it in the same session like this:
SET sql_log_bin = 0;   -- stop writing this session's statements to the binary log
DELETE FROM my_table WHERE whatever = true;
SET sql_log_bin = 1;   -- re-enable binary logging for this session
This prevents those statements from being written to the binary log, so they won't be replicated to the slave. (Note that setting sql_log_bin requires the SUPER privilege.)
You can read more about sql_log_bin in the MySQL documentation.
I run a website with ~500 real-time visitors, ~50k daily visitors and ~1.3 million total users. I host my server on AWS, where I use several instances of different kinds. When I started the website, the different instances cost roughly the same. When the website started to gain users, the RDS instance (MySQL DB) CPU constantly kept hitting the roof; I had to upgrade it several times, and now it has come to account for the main part of the performance and of the monthly cost (around 95% of the ~$2.8k/month). I currently use a database server with 16 vCPUs and 64 GiB of RAM, and I also use a Multi-AZ deployment to protect against failures. I wonder if it is normal for the database to be that expensive, or if I have done something terribly wrong?
Database Info
At the moment my database has 40 tables; most of them have ~100k rows, some have ~2 million, and one has 30 million.
I have a system that archives rows older than 21 days once they are no longer needed.
Website Info
The website mainly uses PHP, but also some Node.js and Python.
Most of the functions of the website work like this (a minimal SQL sketch of the pattern follows the list):
Start transaction
Insert row
Get last inserted id (lastrowid)
Do some calculations
Update the inserted row
Update the user
Commit transaction
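As a rough illustration, that pattern looks something like this (the orders and users tables and their columns are hypothetical):

START TRANSACTION;
INSERT INTO orders (user_id, status) VALUES (42, 'pending');
SET @id = LAST_INSERT_ID();                                    -- step 3
-- step 4: application-side calculations happen here
UPDATE orders SET status = 'done' WHERE id = @id;              -- step 5
UPDATE users SET order_count = order_count + 1 WHERE id = 42;  -- step 6
COMMIT;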
I also run around 100 bots which poll the database at 10-30 second intervals; they also insert into and update the database sometimes.
Extra
I have done several things to try to lower the load on the database, such as enabling the database cache, using a Redis cache for some queries, removing very slow queries, and upgrading the storage type to "Provisioned IOPS SSD". But nothing seems to help.
These are the changes I have made to the parameter settings:
I have thought about creating a MySQL cluster of several smaller instances, but I don't know if it would help, and I also don't know if it works well with transactions.
If you need any more information, please ask; any help on this issue is greatly appreciated!
In my experience, as soon as you ask the question "how can I scale up performance?" you know you have outgrown RDS (edit: I admit my experience that leads me to this opinion may be outdated).
It sounds like your query load is pretty write-heavy. Lots of inserts and updates. You should increase the innodb_log_file_size if you can on your version of RDS. Otherwise you may have to abandon RDS and move to an EC2 instance where you can tune MySQL more easily.
I would also disable the MySQL query cache. On every insert/update, MySQL has to scan the query cache to see if there any results cached that need to be purged. This is a waste of time if you have a write-heavy workload. Increasing your query cache to 2.56GB makes it even worse! Set the cache size to 0 and the cache type to 0.
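On a self-managed server that would be the following; on RDS, query_cache_size and query_cache_type live in the DB parameter group rather than being set with SET GLOBAL:

SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = 0;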
I have no idea what queries you run, or how well you have optimized them. MySQL's optimizer is limited, so it's frequently the case that you can get huge benefits from redesigning SQL queries. That is, changing the query syntax, as well as adding the right indexes.
You should do a query audit to find out which queries are accounting for your high load. A great free tool for this is pt-query-digest (https://www.percona.com/doc/percona-toolkit/2.2/pt-query-digest.html), which can give you a report based on your slow query log. You can download the RDS slow query log with the download-db-log-file-portion CLI command (http://docs.aws.amazon.com/cli/latest/reference/rds/download-db-log-file-portion.html).
Set long_query_time=0, let it run for a while to collect information, then change long_query_time back to the value you normally use. It's important to collect all queries in this log, because you might find that 75% of your load comes from queries that run in under 2 seconds but are executed so frequently that they burden the server.
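For example (on RDS these are changed in the DB parameter group; the SET GLOBAL form shown here is for a self-managed server):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0;   -- temporarily log every query
-- ...collect for a while, then restore your usual threshold, e.g.:
SET GLOBAL long_query_time = 2;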
After you know which queries are accounting for the load, you can make some informed strategy about how to address them:
Query optimization or redesign
More caching in the application
Scale out to more instances
I think the answer is "you're doing something wrong". It is very unlikely you have reached an RDS limitation, although you may be hitting limits on some parts of it.
Start by enabling detailed monitoring. This will give you some OS-level information which should help determine what your limiting factor really is. Look at your slow query logs and database stats - you may have some queries that are causing problems.
Once you understand the problem - which could be bad queries, I/O limits, or something else - then you can address them. RDS allows you to create multiple read replicas, so you can move some of your read load to slaves.
You could also move to Aurora, which should give you better I/O performance. Or use PIOPS (or allocate more disk, which should increase performance). You are using SSD storage, right?
One other suggestion: if your calculations (step 4 above) take a significant amount of time, you might want to look at breaking the work into two or more transactions.
A query_cache_size of more than 50M is bad news. You are writing often -- many times per second per table? That means the QC needs to be scanned many times/second to purge the entries for the table that changed. This is a big load on the system when the QC is 2.5GB!
query_cache_type should be DEMAND if you can justify it being on at all. And in that case, pepper the SELECTs with SQL_CACHE and SQL_NO_CACHE.
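For instance, with the cache set to DEMAND, only explicitly marked statements are cached (the tables here are hypothetical):

SELECT SQL_CACHE name FROM settings WHERE user_id = 42;    -- cached
SELECT SQL_NO_CACHE * FROM actions WHERE id > 1000000;     -- never cached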
Since you have the slowlog turned on, look at the output with pt-query-digest. What are the first couple of queries?
Since your typical operation involves writing, I don't see an advantage in using read-only slaves.
Are the bots running at random times? Or do they all start at the same time? (The latter could cause terrible spikes in CPU, etc.)
How are you "archiving" "old" records? It might be best to use PARTITIONing and "transportable tablespaces". Use PARTITION BY RANGE with 21 partitions (plus a couple of extras).
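A minimal sketch of that layout (the events table is hypothetical): one partition per day, so purging a day becomes dropping its partition instead of running a huge DELETE.

CREATE TABLE events (
    id BIGINT NOT NULL AUTO_INCREMENT,
    created DATE NOT NULL,
    payload VARCHAR(255),
    PRIMARY KEY (id, created)   -- the partition key must be part of every unique key
)
PARTITION BY RANGE (TO_DAYS(created)) (
    PARTITION p20170101 VALUES LESS THAN (TO_DAYS('2017-01-02')),
    PARTITION p20170102 VALUES LESS THAN (TO_DAYS('2017-01-03')),
    -- ...one per day, 21 plus a couple of spares...
    PARTITION pmax VALUES LESS THAN MAXVALUE
);

ALTER TABLE events DROP PARTITION p20170101;   -- purge the oldest day instantly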
Your typical transaction seems to work with one row. Can it be modified to work with 10 or 100 all at once? (More than 100 is probably not cost-effective.) SQL is much more efficient in doing lots of rows at once versus lots of queries of one row each. Show us the SQL; we can dig into the details.
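For example, a single multi-row INSERT (hypothetical table) replaces many round trips:

INSERT INTO actions (user_id, kind) VALUES
    (1, 'click'), (2, 'view'), (3, 'click');   -- ...up to ~100 rows per statement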
It seems strange to insert a new row, then update it, all in one transaction. Can't you completely compute it before doing the insert? Hanging onto the inserted_id for so long probably interferes with others doing the same thing. What is the value of innodb_autoinc_lock_mode?
Do the "users" interact with each other? If so, in what way?
This is regarding SQL Server 2008 R2 and SSIS.
I need to update dozens of history tables on one server with new data from production tables on another server.
The two servers are not, and will not be, linked.
Some of the history tables have hundreds of millions of rows, and some of the production tables have tens of millions of rows.
I currently have a process in place for each table that uses the following data flow components:
OLE DB Source to pull the appropriate production data.
Lookup task to check whether the production row's key already exists in the history table, with "Redirect rows to error output" so that unmatched rows continue downstream.
OLE DB Destination to insert the missing rows into the history table.
The process is too slow for the large tables. There has to be a better way. Can someone help?
I know that if the servers were linked, a single set-based query could accomplish the task easily and efficiently, but the servers are not linked.
Segment your problem into smaller problems. That's the only way you're going to solve this.
Let's examine the problems.
You're inserting and/or updating existing data. At a database level, rows are packed into pages. Rarely is it an exact fit, and there's usually some amount of free space left in a page. When you update a row, pretend the Name field went from "bob" to "Robert Michael Stuckenschneider III". That row needs more room to live, and while there's some room left on the page, there's not enough. Other rows might get shuffled down to the next page just to give this one some elbow room. That's going to cause lots of disk activity. Yes, it's inevitable given that you are adding more data, but it's important to understand how your data is going to grow and ensure your database itself is ready for that growth.

Maybe you have some non-clustered indexes on a target table. Disabling/dropping them should improve insert/update performance. If you still have your database and log set to grow at 10% or 1MB or whatever the default values are, the storage engine is going to spend all of its time trying to grow files and won't have time to actually write data.

Take away: ensure your system is poised to receive lots of data. Work with your DBA, LAN and SAN team(s).
You have tens of millions of rows in your OLTP system and hundreds of millions in your archive system. Starting with the OLTP data, you need to identify what does not exist in your historical system. Given your data volumes, I would plan for this package to hit a hiccup in processing at some point and make it "restartable."

I would have a package with a data flow that selects only the business keys from the OLTP that are used to make a match against the target table. Write those keys into a table that lives on the OLTP server (ToBeTransfered). Have a second package that uses a subset of those keys (N rows), joined back to the original table, as the Source. It's wired directly to the Destination, so no lookup is required. That fat data row flows over the network only one time. Then have an Execute SQL Task go in and delete the batch you just sent to the archive server. This batching method allows you to run the second package on multiple servers. The SSIS team describes it better in their paper: We loaded 1TB in 30 minutes.
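A hedged sketch of the Execute SQL Task that trims the staging table after each batch is shipped (the key columns are hypothetical; the second package's source query must select the same TOP N in the same order):

DECLARE @BatchSize int = 50000;

DELETE t
FROM dbo.ToBeTransfered AS t
JOIN (SELECT TOP (@BatchSize) Key1, Key2
      FROM dbo.ToBeTransfered
      ORDER BY Key1, Key2) AS b
    ON b.Key1 = t.Key1 AND b.Key2 = t.Key2;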
Ensure the Lookup uses a query of the form SELECT key1, key2 FROM MyTable. Better yet, can you provide a filter to the lookup, e.g. WHERE ProcessingYear = 2013, since there's no need to waste cache on 2012 if the OLTP only contains 2013 data?
You might need to modify your PacketSize on your Connection Manager and have a network person set up Jumbo frames.
Look at your queries. Are you getting good plans? Are your tables over-indexed? Remember, each index is going to result in an increase in the number of writes performed. If you can dump them and recreate after the processing is completed, you'll think your SAN admins bought you some FusionIO drives. I know I did when I dropped 14 NC indexes from a billion row table that only had 10 total columns.
If you're still having performance issues, establish a theoretical baseline (under ideal conditions that will never occur in the real world, I can push 1GB from A to B in N units of time) and work your way from there to what your actual is. You must have a limiting factor (IO, CPU, Memory or Network). Find the culprit and throw more money at it or restructure the solution until it's no longer the lagging metric.
Step 1. Incremental bulk import of the appropriate production data to the new server.
Ref: Importing Data from a Single Client (or Stream) into a Non-Empty Table
http://msdn.microsoft.com/en-us/library/ms177445(v=sql.105).aspx
Step 2. Use a MERGE statement to identify new/existing records and operate on them.
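A minimal sketch of step 2 (the staging and history table names and columns are hypothetical):

MERGE dbo.History AS h
USING dbo.ProductionStaging AS s
    ON h.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET h.Col1 = s.Col1
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Col1) VALUES (s.Id, s.Col1);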
I realize that it will take a significant amount of disk space on the new server, but the process would run faster.
I am using Microsoft SQL Server 2008 R2. I have copied database A (my production database) to database B (my reporting database) by creating an SSIS package. Both databases are on the same server. I want to run a job so that if any change (data modification, like inserting new rows or updating values of any row in any table) takes place in database A, it also takes place in database B, with the SQL job running and applying the change automatically. I don't want the tables in database B to be dropped and recreated (as that's not our business rule); instead, only the changes should be applied.
Can anyone help me, please? Thanks in advance.
I would suggest that you investigate using replication. Specifically, transactional replication if you need constant updates. Here's a bit from MSDN:
Transactional replication typically starts with a snapshot of the publication database objects and data. As soon as the initial snapshot is taken, subsequent data changes and schema modifications made at the Publisher are usually delivered to the Subscriber as they occur (in near real time). The data changes are applied to the Subscriber in the same order and within the same transaction boundaries as they occurred at the Publisher; therefore, within a publication, transactional consistency is guaranteed.
If you don't need constant updating (which comes at a price in performance, of course), you can consider the alternatives of merge replication or snapshot replication. The MSDN replication documentation is a good place to start examining those alternatives.
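As a very rough sketch of the pieces involved in setting up transactional replication (all names are hypothetical, agent setup is omitted, and in practice this is usually configured through the SSMS wizards):

EXEC sp_replicationdboption @dbname = N'A', @optname = N'publish', @value = N'true';
EXEC sp_addpublication @publication = N'A_to_B', @repl_freq = N'continuous';
EXEC sp_addarticle @publication = N'A_to_B', @article = N'MyTable',
    @source_object = N'MyTable';
EXEC sp_addsubscription @publication = N'A_to_B', @subscriber = N'MYSERVER',
    @destination_db = N'B';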
We have a performance issue with the current transactional replication setup on SQL Server 2008.
When a new snapshot is created and applied to the subscriber, we see network utilization on the publisher and the distributor jump to 99%, and disk queue lengths going to 30.
This is causing application timeouts.
Is there any way we can throttle the replicated data that is being sent over?
Can we restrict the number of rows being replicated?
Are there any switches which can be set on/off to accomplish this?
Thanks!
You have an alternative way to deal with this situation.
When setting up transactional replication on a table that has millions of records, the initial snapshot can take a long time to be delivered to the subscriber. Since SQL Server 2005 there has been an option to create the tables on both the publisher and the subscriber, populate the data yourself, and set up replication on top of that. When you add the subscription with EXEC sp_addsubscription, set @sync_type = 'replication support only'.
Reference article: http://www.mssqltips.com/tip.asp?tip=1117
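A minimal sketch of that call (the publication, subscriber and database names are hypothetical; the subscriber tables must already exist and contain the data):

EXEC sp_addsubscription
    @publication = N'MyPublication',
    @subscriber = N'SUBSCRIBER01',
    @destination_db = N'MyDatabase',
    @sync_type = N'replication support only';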
Our DBA has forced us to break DML code down to run in batches of 50,000 rows at a time with a couple of minutes in between. He adjusts that batch size from time to time, but this way our replicated databases stay healthy.
For batching, everything goes into temp tables, with a new column (call it Ordinal) populated by ROW_NUMBER() and a BatchID computed as Ordinal / 50000. Finally, a loop counts up through the BatchIDs and updates the target table batch by batch. Hard on the devs, easier for the DBAs, and no need to pay more for infrastructure.
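A minimal sketch of that loop (the source and target tables and columns are hypothetical):

SELECT Id, Payload,
       ROW_NUMBER() OVER (ORDER BY Id) AS Ordinal
INTO #Work
FROM dbo.SourceRows;

DECLARE @Batch int = 0;
DECLARE @MaxBatch int = (SELECT MAX(Ordinal) / 50000 FROM #Work);

WHILE @Batch <= @MaxBatch
BEGIN
    UPDATE t
    SET t.Payload = w.Payload
    FROM dbo.TargetTable AS t
    JOIN #Work AS w ON w.Id = t.Id
    WHERE w.Ordinal / 50000 = @Batch;

    SET @Batch += 1;
    WAITFOR DELAY '00:02:00';   -- give replication a couple of minutes to catch up
END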