SSRS connection errors - sql-server-2008

I'm occasionally getting the error below in SQL Server 2008 R2 Reporting Services. I have around 25 subscriptions that run close to midnight every night, and a couple of times they've all failed with this error. I'm not sure if it's a red herring, but I killed most of the connections last night around 10:00 (90% of the connections to this server are from SSRS, and most of those are to the ReportServer db) and no errors occurred for several hours. This is a relatively new installation, but I didn't tweak anything when I migrated from the old server, so I don't know why this is happening. I might be able to work around it by increasing the max pool size and killing unused connections, but I'd rather not do that.
ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.DataSourceOpenException: , Microsoft.ReportingServices.Diagnostics.Utilities.DataSourceOpenException: Cannot create a connection to data source 'MyDB'. ---> System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
It seems like the problem is that connections are not being reused, but only by SSRS, not by other apps hitting the server. Why would that be?

There are a couple of things to consider, especially since you have data-driven subscriptions.
Stagger the times the subscriptions are scheduled to run so they aren't all competing for resources at the same moment.
Adjust the timeout on the query for the data-driven subscriptions (this is probably your main issue). The report and the subscription each have their own separate timeout settings.
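If those two changes don't resolve it, the pool-size workaround mentioned in the question is a connection-string change on the report's data source. A minimal sketch, assuming a data source that uses integrated security (the server, database, and the value 200 are placeholders):
Data Source=MyServer;Initial Catalog=MyDB;Integrated Security=SSPI;Max Pool Size=200
The default Max Pool Size for SqlClient is 100, so raising it only masks the problem if connections are genuinely not being reused.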

Related

Increase the number of connections on my MySQL server

I have applications that connect to a remote server (MySQL 5.5 on Windows Server 2012). At first I started receiving a "too many connections" message, which I solved by increasing the max_connections value in my.ini to 500. Then I started getting a "can't create new thread" message, so I decreased the timeouts to avoid idle connections holding a socket, which didn't completely work. Now I get odd messages like 'file not found'; as soon as I restart the service the messages stop and everything works correctly.
The problem occurs when the server reaches around 170 simultaneous connections.
Is there some configuration I'm missing? I really don't know what info you need in order to give me a hint to fix this. I mean, there are servers that accept a lot more connections at the same time, right? What am I missing?
RAM and CPU usage on the system don't exceed 35-40% at max connections (170).
Edit: The error occurs in two 'places': when running a query, or on the attempt to connect; it's as if the MySQL service rejects the attempt. VB6 is the language used in the client app (via the ODBC connector). The app opens, executes, and closes the connection.
Note: I have full control over client app and server config.
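For reference, a minimal sketch of the server-side settings described above, as they would appear in my.ini under the [mysqld] section (the timeout values are placeholders, since the question only says the timeouts were decreased):
[mysqld]
max_connections = 500
wait_timeout = 60
interactive_timeout = 60
A restart of the MySQL service (or SET GLOBAL for these dynamic variables) is needed for the changes to take effect.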

Do MySQL connections closed from JDBC stay open for some time?

I get the following error when accessing a MySQL database from JDBC:
java.sql.SQLNonTransientConnectionException: Too many connections
At the same time I am monitoring my connections. I added a counter that counts every open and close. The error occurs when I reach 380 opened and closed connections within 3 minutes.
Is it possible that it takes some time for MySQL to actually close the connection, so that there are still too many open even though I have sent the command to close them?
I am only assuming certain points that might be the reason.
MySQL connections are maintained by the MySQL connection manager, so once a connection is released the manager decides whether to kill that thread or return it to the pool.
In some cases, if a MySQL ResultSet is not closed after retrieving data and the connection is closed at that moment, returning it to the pool can incur some latency.
These are the two points that I think might cause this, but I am not sure whether they are correct.
There could be other reasons that I am not aware of.
Hope it gives you some idea.
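As a minimal sketch of the closing pattern hinted at above (the JDBC URL, credentials, and query are placeholders), try-with-resources closes the ResultSet, Statement, and Connection promptly and in the right order, so nothing is left open behind the counter:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CloseExample {
    public static void main(String[] args) throws SQLException {
        // Assumes MySQL Connector/J is on the classpath; URL and credentials are placeholders.
        String url = "jdbc:mysql://localhost:3306/testdb?user=dbuser&password=dbpassword";
        // try-with-resources closes rs, stmt, and conn in reverse order when the block
        // exits, even if the query throws, so the connection is released immediately.
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
Even with prompt closing, opening and tearing down 380 connections in 3 minutes is expensive; a pooling DataSource that reuses connections avoids paying that cost per query.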

Quartz failure in notifyJobStoreJobComplete method

Scenario:
We have a scheduler which is using JDBC Job Store. Quartz version is 2.1.2.
The job being scheduled also updates a database.
The database is the same for both Quartz and the job itself and is hosted on MySQL Server. Both the application tables and the Quartz tables are stored in the same database.
The connection pools are separate for the application and for Quartz. The application uses Spring for connection pooling, and Quartz is configured to use its own connection pooling via quartz.properties.
Here is the relevant snippet of quartz.properties:
org.quartz.dataSource.qzDS.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.qzDS.URL = jdbc:mysql://localhost:3306/dbname?autoReconnect=true
org.quartz.dataSource.qzDS.user = dbuser
org.quartz.dataSource.qzDS.password = dbpassword
org.quartz.dataSource.qzDS.maxConnections = 30
org.quartz.dataSource.qzDS.validationQuery = select 1
#org.quartz.dataSource.qzDS.minEvictableIdleTimeMillis=21600000
#org.quartz.dataSource.qzDS.timeBetweenEvictionRunsMillis=1800000
#org.quartz.dataSource.qzDS.numTestsPerEviction=-1
#org.quartz.dataSource.qzDS.testWhileIdle=true
org.quartz.dataSource.qzDS.debugUnreturnedConnectionStackTraces=true
org.quartz.dataSource.qzDS.unreturnedConnectionTimeout=120
org.quartz.dataSource.qzDS.initialPoolSize=5
org.quartz.dataSource.qzDS.minPoolSize=5
org.quartz.dataSource.qzDS.maxPoolSize=30
org.quartz.dataSource.qzDS.acquireIncrement=5
org.quartz.dataSource.qzDS.maxIdleTime=120
org.quartz.dataSource.qzDS.validateOnCheckout=true
The database is clustered with master-master replication across two servers, and it is accessed via a virtual IP everywhere in the application and in Quartz.
The scheduler, i.e. Quartz, is also clustered on the same two machines where MySQL is clustered.
The problem:
One of the servers (so far we have only hit the problem on the backup server machine) occasionally throws a database connection error while calling the notifyJobStoreJobComplete method. This causes the job to stay in the BLOCKED state even though the job itself completed successfully, because Quartz was unable to update its status.
Questions:
What can be the cause of the problem?
How can we move the BLOCKED jobs into the WAITING state so that they can at least run at their next scheduled time? Directly editing the QRTZ_SIMPLE_TRIGGERS table would not be a good solution, even if it works.
EDIT: To bump up the question.
the error during notifyJobStoreJobComplete is: org.quartz.impl.jdbcjobstore.JobStoreTX - Failed to override connection auto commit/transaction isolation.
[java] com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 619,082,686 milliseconds ago. The last packet sent successfully to the server was 619,082,686 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
I think the main problem was the communications link failure reported by MySQL, which we solved by increasing 'wait_timeout' to 14 days. Since our maintenance is scheduled every 15 days, we restart each MySQL server in our DB cluster then (we have master-master replication in place). With this approach we haven't had any communications link failures since. In fact, sometimes we don't restart the servers every 15 days and still see no errors (touch wood). :)
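For reference, 14 days is 1,209,600 seconds, so a minimal sketch of that change in my.cnf/my.ini (the [mysqld] section and file location assume a standard MySQL install; interactive_timeout is listed as well because interactive clients use it instead of wait_timeout):
[mysqld]
wait_timeout = 1209600
interactive_timeout = 1209600
The same values can be applied without a restart via SET GLOBAL, but only new connections pick them up.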
As far as Quartz triggers being stuck in the BLOCKED state goes, we updated Quartz to 2.1.4, which possibly has a fix for almost the same problem. Since the Quartz update, we have seen triggers stuck in the BLOCKED state far less frequently.
We are still unable to find a way to get a trigger out of the BLOCKED state without directly modifying the Quartz tables. Whenever we face this problem, we manually remove the entry for the BLOCKED trigger from the qrtz_fired_triggers table, and that solves it. I think the enterprise version of Quartz may expose this feature through a web UI.
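For illustration only, the manual cleanup described above amounts to something like the following statement against the Quartz schema (the trigger and group names are placeholders; identify the stuck row first and back the table up before deleting):
DELETE FROM qrtz_fired_triggers WHERE trigger_name = 'myTrigger' AND trigger_group = 'myGroup';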

SQL Server "network-related or instance-specific error" once a day or so (perplexed!)

We are experiencing the same error as this StackOverflow Q ...
System.Data.SqlClient.SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)
at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
at System.Data.SqlClient.SqlConnection.Open()
at System.Data.Linq.SqlClient.SqlConnectionManager.UseConnection(IConnectionUser user)
at System.Data.Linq.SqlClient.SqlProvider.get_IsSqlCe()
at System.Data.Linq.SqlClient.SqlProvider.InitializeProviderMode()
at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query)
... except that in the referenced StackOverflow Q, they need to restart SQL Server once the error occurs - and we do not. We'll get this error once a day, or once every few days - and all is fine after the error occurs, until the next time it occurs.
This makes us think it's not a "forgot to close connections" issue. We have a moderately busy ASP.NET 4.0 WebForms / SQL Server 2008 R2 app, but we're quite positive we're not exceeding the max number of database connections.
Any thoughts on this problem, or an approach to diagnose?
Thought I would comment on our progress with this.
While none of the SQL Server documentation/articles/blogs mentions that this error can be caused by server busyness, I found a forum posting where a seasoned IT pro named Matt Neerincx states that it can be, as follows:
Possible reasons for this error include:
1. Poor network link from client to server.
2. Server is very busy (meaning high CPU) and cannot respond to new connection attempts.
3. Server is running out of memory (so high memory usage for SQL).
4. The TCP/IP layer on the client is over-saturated with connection attempts, so the TCP/IP layer rejects the connection.
5. The TCP/IP layer on the server side is over-saturated with connection attempts, so the TCP/IP layer rejects new connections.
6. With SQL 2005 SP2 and later there could be a custom login trigger that rejects your connection.
You can increase the connect timeout to potentially alleviate issues #2, #3, #4, #5. Setting a longer connect timeout means the driver will try longer to connect and may eventually succeed.
Determining the root cause of these intermittent failures is unfortunately not super easy to do. What I normally do is start by examining the server environment: is the server constantly running at high CPU, for example? That points to #2. Is the server using a huge amount of memory? That points to #3. You can run SQL Profiler to monitor logins and look for patterns; perhaps every morning at 9 AM there is a flurry of connections, etc.
So we are presently walking down this path - reducing the # of queries that execute at the same time in some of our batch queries, optimizing some of our queries, etc.
Also, in our app connection string, we increased the connection timeout, and set Min Pool Size to 20 (thinking it's good to try to ensure some existing, unused connections for the app to grab, rather than needing to establish a new connection).
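For illustration, both of those are standard SqlClient connection-string keywords; the relevant fragment would look roughly like this (the timeout value is a placeholder, since the post doesn't say what it was raised to):
Connect Timeout=60;Min Pool Size=20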
At this moment, it's been almost 48 hours without receiving the error, which makes us very hopeful.

"Failed Attempt" in MySQL Connection

I am confused about MySQL connections. I have a site that receives heavy traffic during working hours. I use PHP to connect to the MySQL database using persistent connections.
A few weeks back, I increased MySQL's connection limit to 500, which crashed my server, so I put it back to 150.
Now users complain that sometimes they cannot get on the site. I believe this is due to the limited connections.
Can you please advise whether I should use persistent or non-persistent connections? Which sections of MySQL do I need to tune to get optimized connection processing?
I have attached a screenshot that shows 11K Failed Attempts.
http://i.stack.imgur.com/GkxHP.jpg
Thank you so much...
Update Dec 17, 2011
When I asked this question, I changed the connection type to "non-persistent" and everything started working fine. Today I was surprised by the stats from phpMyAdmin. Below are the values given by phpMyAdmin:
max. concurrent connections :: 16
Failed Attempts :: 43k
Please suggest some possible solutions. Which parameter should be optimized to avoid or minimize failed attempts?
High-traffic sites should not use persistent connections. I changed the DB connection from persistent to non-persistent in PHP and the problem was solved!
Thanks for your help.
EDIT:
After changing the connection type to non-persistent, don't forget to increase the number of connections. In my case, I increased them to 500 with the type set to non-persistent, and that solved the issue.
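If it helps to monitor this afterwards, phpMyAdmin's "Failed Attempts" figure is taken from the MySQL status variable Aborted_connects (failed connection attempts), so both the counter and the raised limit can be checked from any MySQL client:
SHOW GLOBAL STATUS LIKE 'Aborted_connects';
SHOW VARIABLES LIKE 'max_connections';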