SSIS's BufferSizeTuning says "Memory pressure was alleviated, buffer manager is not throttling allocations anymore" - sql-server-2008

I have SSIS packages that do a simple data transfer between 2 SQL Servers. There's one parent package and 6 child packages, all built the same way. I have set up the child packages to run as separate processes (ExecuteOutOfProcess=True), and I have enabled logging of the User:BufferSizeTuning event in each child package.
Everything works fine on the DEV server, which is configured much the same as the PROD server. But on the PROD server I'm getting the following two messages from the User:BufferSizeTuning event (taken from the sysssislog table):
Memory pressure was alleviated, buffer manager is not throttling allocations anymore
Buffer manager is throttling allocations to keep in-memory buffers around 199MB
Furthermore, the job on the PROD server usually runs for about 2-3 hours (in some cases 11 hours!), while on DEV it takes roughly 30 minutes. Both servers run SSIS 2008.
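In case it helps reproduce the comparison, a minimal query to pull these events out of the log table looks roughly like this. It assumes the SQL Server log provider writing to the default dbo.sysssislog table; adjust the database and any extra filters (e.g. on executionid) as needed:
-- pull the throttling messages and when they occurred
SELECT starttime, source, message
FROM dbo.sysssislog
WHERE event = 'User:BufferSizeTuning'
ORDER BY starttime;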

Related

SSIS low on virtual memory in debug mode

I run packages in debug mode in Visual Studio, but often get 'low on virtual memory' warnings.
I also often run multiple instances of Visual Studio so that multiple packages can run simultaneously for different purposes.
The machine has 64 GB of memory, and more than 30 GB is free at the time I start getting the following in the output window. Any ideas? (I've tried larger values for DefaultBufferMaxRows with corresponding increases in DefaultBufferSize.)
Information: 0x4004800C at Data Flow Task, SSIS.Pipeline: The buffer manager detected that the system was low on virtual memory, but was unable to swap out any buffers. 32 buffers were considered and 32 were locked. Either not enough memory is available to the pipeline because not enough is installed, other processes are using it, or too many buffers are locked.
Information: 0x4004800F at Data Flow Task: Buffer manager allocated 40 megabyte(s) in 4 physical buffer(s).
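For reference, the second message is consistent with default-sized buffers: 40 MB across 4 physical buffers works out to 10 MB per buffer, which is the default DefaultBufferSize. The pipeline sizes each buffer roughly as

    rows per buffer ≈ min(DefaultBufferMaxRows, DefaultBufferSize / estimated row width)

so raising DefaultBufferMaxRows only changes anything if DefaultBufferSize is raised enough to actually hold that many rows of the estimated width. This is the commonly documented sizing behaviour, not something specific to this package.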

MySQL hanging in Writing to Net

I have a problem where a MySQL thread sometimes gets stuck in the "Writing to net" status.
I have 4 Apache servers (2.4) with requests load-balanced across them, and 1 MySQL server (MariaDB 10). Apache runs PHP 5.6. All Apache servers have the same configuration, and all servers run CentOS 7. SELinux is disabled on the Apache servers for debugging purposes, and there are no problems in the audit logs on the DB server. All servers are virtual and located on the same VMware cluster.
The problem appears only on specific pages and specific queries to the DB.
Usually there are around 100-200 separate queries per page, and most of them take 0.0001-0.0010 s. But then there is one query that takes around 1-2 seconds from the application's point of view, even though the query itself executes much faster (around 0.0045 s).
The problematic query returns around 8984 rows, and when executed from the CLI via a debug script it runs fast, as expected.
The strange thing is that at any given time some Apache servers execute that page quickly and some slowly, and this changes during the day. I also tried removing one Apache server from the cluster and sending the same request to it directly; if the server is not under any load, it usually responds fast.
All servers have enough resources (CPU and RAM), so it is definitely not a load issue. They usually have around 4-10 active Apache workers (prefork) with capacity for 100.
I tried debugging with tcpdump: when requesting the page, I can see the packet flow for the fast queries, then it stops for a while and resumes. I'm not sure whether the problem is on the MySQL server or on the Apache servers.
My guess is that I am hitting some kind of limit, but I have no idea which one.
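For what it's worth, a query along these lines can confirm whether threads really sit in that state while the page is loading. This is only a sketch; it assumes information_schema.PROCESSLIST is available (it is on MariaDB 10), and the 80-character truncation is just for readability:
-- list threads currently stuck sending results to a client
SELECT id, user, host, db, time, state, LEFT(info, 80) AS query_start
FROM information_schema.PROCESSLIST
WHERE state = 'Writing to net'
ORDER BY time DESC;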
The solution is quite odd.
First few more details:
All Apache servers have the same application data (PHP files, images, etc.) mounted from NFS. The NFS share was working fine (low latency, no data corruption).
Solution:
When I was desperate, I went through every possible log. Then I noticed that iptables was dropping some packets from the NFS server. I told myself I should probably fix that, even though it seemed unrelated.
But after I allowed all traffic from the NFS server to my Apache servers, the MySQL "Writing to net" status disappeared and all websites started responding quickly.

Periodic MySql timeout followed by connection spike in ASP.NET website

Every couple of days we have been getting a small number of MySQL timeout errors that correspond with a large spike in CPU and DB connections on our MySQL RDS instance. These are queries that are typically very fast (<5 ms) and that suddenly time out.
At that point, database operations are very slow for a minute or so (likely because new connections are being allocated). The number of new connections often doubles and seems to correspond to the entire connection pool being recycled.
The timeouts do not seem to correspond with heavy database load: the CPU is often under 7% when this happens, spiking up to around 12%.
Once these connections are created, the old connections seem to stay around for several hours.
We have some theories:
An occasional network hiccup between EC2 and RDS
A connection pool recycle (is there such a thing?)
Resource contention on the server that causes all queries to back up (no deadlocks present)
Any help on debugging this would be very much appreciated.
System Details:
Windows 2012 EC2 instances
.NET 4.5
MySql Connector 6.8.3
Entity Framework 6.0.2
MySql.Data.Entities 6.8.3
MySql 5.6.12 (Hosted in Amazon's RDS)
I wanted to put this as a comment not an answer but "...must have 50 reputation to comment..."
Are you maxing out on connections? show variables like 'max_connections'; show processlist; (as the root user; see the status query sketch after these suggestions)
How's your disk I/O? Run iostat -x 5 from the command line and pay special attention to queue sizes and service/wait times. If it's an issue, you can purchase AWS Provisioned IOPS for better reliability and performance.
You can also profile it; I like Jet Profiler, which is simple and low-load.
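For the connection theory specifically, a quick check is to compare the connection high-water mark against the configured limit and to watch the aborted-connection counters around a spike. A minimal sketch using standard MySQL status variables (nothing here is specific to RDS):
-- how close are we to the limit?
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';
-- network hiccups and pool recycling tend to show up in these counters
SHOW GLOBAL STATUS LIKE 'Aborted_clients';
SHOW GLOBAL STATUS LIKE 'Aborted_connects';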

Rails app generating MySQL load though no database access is triggered

I have a Rails 4.x application running on server A and MySQL on server B.
Using ab to load-test my API calls, I notice that the MySQL server shows CPU activity. So I go back to the code and check, but no SQL statements are triggered; to be sure, I also deactivate all before filters, but the MySQL server still shows CPU load.
I went to MySQL and ran
show processlist;
but that also shows no active SQL statements.
Why would there be load on the DB server?
A Rails application initializes connection pools to the configured database on app load, and it also loads basic schema data for each ActiveRecord model defined, to populate runtime mappings from the DB to instances of that model.
These connections/queries will happen as soon as you have loaded the application and are running traffic.
If this is not what is responsible for the activity on your database server, you will need to use other tools to see what is responsible. For example, NewRelic's system monitoring tools are great for snapshotting CPU/memory usage over time correlated to what processes were running. This will help you rule out MySQL itself using resources vs. other things running on the DB server.
According to this article, storage engines like InnoDB may have their own per-thread and/or global memory allocations, which probably accounts for the CPU overhead. If this is a stock (non-tuned) MySQL install, you're probably just seeing baseline CPU activity. The article mentions a number of places to look that might indicate areas that can be tuned to reduce this footprint.
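If it is still unclear what is hitting the server, one low-effort check is to temporarily enable the general query log and watch what arrives while the app boots and while ab runs. This is the standard MySQL/MariaDB general_log switch (it needs the SUPER privilege and logs every statement, so switch it off again afterwards; the file path below is just an example):
-- capture every statement the Rails app sends (example path)
SET GLOBAL general_log_file = '/tmp/mysql-general.log';
SET GLOBAL general_log = 'ON';
-- ... boot the app / run ab, inspect the file, then turn logging back off
SET GLOBAL general_log = 'OFF';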

Quartz JobStoreTX instances disappear on cluster recovery

I have configured two Java WARs with Quartz schedulers (version 2.2.1) that start their jobs via XMLSchedulingDataProcessorPlugin. Both web applications also run in cluster mode (they are deployed on two identical machines), so I enabled the clustering properties for Quartz:
#===========================================================================
# Clustering
#===========================================================================
org.quartz.jobStore.isClustered = true
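# interval is in milliseconds, so 60000 = one cluster check-in per minute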
org.quartz.jobStore.clusterCheckinInterval = 60000
Both applications run on JBoss AS 7.1 configured with Quartz's JobStoreTX. They save their jobs, triggers, and so on in a MySQL database, which is currently configured with Galera (1 virtual IP address, 2 real nodes).
Currently, I am testing the failure of one of the real nodes to verify that jobs keep firing even when a power outage occurs. In those tests I noticed some failures, such as the one described in this Terracotta issue (the patch is not applied in the current version of Quartz).
In my case, I should have 4 Quartz instances in the QRTZ_SCHEDULER_STATE table, even if one of the MySQL nodes restarts. The fact is that sometimes one or two instances are deleted from the table (maybe the ones that do not have any active job), so I am afraid it is possible to lose both instances of an application during cluster recovery.
Has anyone experienced the same? Is there any solution other than restarting JBoss to reload the jobs and triggers?
Thanks in advance.
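For anyone debugging the same behaviour, a small monitoring query against the scheduler-state table shows which instances are still checking in and how stale an entry was before it disappeared. This is only a sketch against the standard Quartz JDBC-store schema with the default QRTZ_ table prefix (LAST_CHECKIN_TIME is stored as epoch milliseconds):
-- which scheduler instances are still checking in?
SELECT SCHED_NAME, INSTANCE_NAME, LAST_CHECKIN_TIME, CHECKIN_INTERVAL
FROM QRTZ_SCHEDULER_STATE
ORDER BY LAST_CHECKIN_TIME DESC;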