I have a WCF Data Service that uses a DBML file to generate all the code required for my DataContext. My database is running on the SQL Azure Business tier (so still using the shared model) and I am using the Transient Fault Handling Application Block to wrap all my calls.
The problem I'm seeing is that I'm still getting a number of SqlExceptions around "timeout expired". My retry policy hooks into the Retrying event to log any retries, but I never see anything in my logs except the timeout exception.
From my research it looks like the retry block only retries the query and assumes it has a reliable connection. However, since I'm using a DataContext I don't actually have control over setting up that connection, and since all my existing code is Linq2Sql I don't want to switch it.
Am I missing something simple? There doesn't seem to be any way to tell the DataService that the CurrentDataSource should be a reliable connection, or any way to use the RetryManager to apply a policy to the connection itself.
Here's an example of one of my ServiceOperations:
[WebGet]
public MyTable GetDetailsById(string id)
{
    try
    {
        var detail = retryPolicy.ExecuteAction<MyTable>(() =>
            CurrentDataSource.MyTable
                .Where(l => l.id == id)
                .FirstOrDefault());
        return detail;
    }
    catch (Exception ex)
    {
        Trace.TraceError(ex.Message);
        throw; // rethrow so every code path produces a result
    }
}
Any ideas?
Update: My query doesn't take longer than 30 seconds.
Command timeouts occur when the client does not get a response from the server within x seconds. This is not a transient error and hence is not handled by the retry logic. The general recommendation is to set the command timeout to at least 30 seconds for Azure SQL Database. In addition, if you have long-running queries, you should consider increasing the timeout for those queries beyond your application-wide default.
Documentation on how to set the command timeout can be found here.
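The split between "retry this" and "fail fast" is the crux here: a retry block should only swallow errors it classifies as transient, and a command timeout is not one of them, so the Retrying event never fires. A minimal sketch of that classification logic, in Java with hypothetical names (this is not the actual Transient Fault Handling Application Block API):

```java
import java.util.function.Supplier;

// Hypothetical marker for faults worth retrying (throttling, dropped connections).
class TransientException extends RuntimeException {
    TransientException(String message) { super(message); }
}

class RetryPolicy {
    private final int maxRetries;

    RetryPolicy(int maxRetries) { this.maxRetries = maxRetries; }

    <T> T execute(Supplier<T> action) {
        for (int attempt = 0; ; attempt++) {
            try {
                return action.get();
            } catch (TransientException e) {
                // Transient faults are retried up to the limit. Anything else
                // (e.g. a command timeout surfaced as a plain exception) is not
                // caught here and propagates immediately, which is why it never
                // shows up in the retry log.
                if (attempt >= maxRetries) throw e;
            }
        }
    }
}
```

A policy like this explains the observed behavior: the timeout bypasses the retry loop entirely and lands straight in the caller's catch block.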
Related
We have a web service built using Asp.net Web API. We use NHibernate as our ORM connecting to a MySQL database.
We have a couple of controller methods that do a large number (1,000-3,000) of relatively cheap queries.
We're looking at improving the performance of these controller methods and almost all of the time is spent doing the NHibernate queries so that's where we're focusing our attention.
In the medium term the solutions are things like reducing the number of queries (perhaps by doing fewer larger queries) and/or to parallelize the queries (which would take some work since NHibernate does not have an async api and sessions are single threaded) and things like that.
In the short term we're looking at improving the performance without taking on either of those larger projects.
We've done some performance profiling and were surprised to find that it looks like a lot of the time in each query (over half) is spent opening the connection to MySQL.
It appears that NHibernate is opening a new connection to MySQL for each query and that MySqlConnection.Open() makes two round trips to the database each time a connection is opened (even when the connection is coming from the pool).
Here's a screenshot of one of our performance profiles where you can see these two things:
We're wondering if this is expected or if we're missing something like a misconfiguration/misuse of NHibernate or a way to eliminate the two round trips to the database in MySqlConnection.Open().
I've done some additional digging and found something interesting:
If we add .SetProperty(Environment.ReleaseConnections, "on_close") to the NHibernate configuration then Open() is no longer called and the time it takes to do the query drops by over 40%.
It seems this is not a recommended setting though: http://nhibernate.info/doc/nhibernate-reference/transactions.html#transactions-connection-release
Based on the documentation I expected to get the same behavior (no extra calls to Open()) if I wrapped the reads inside a single NHibernate transaction but I couldn’t get it to work. This is what I did in the controller method:
using (var session = _sessionFactory.OpenSession())
{
    using (var transaction = session.BeginTransaction())
    {
        // controller code
        transaction.Commit();
    }
}
Any ideas on how to get the same behavior using a recommended configuration for NHibernate?
After digging into this a bit more, it turns out there was a mistake in my test implementation; after fixing it, using transactions eliminates the extra calls to Open() as expected.
Not using transactions is considered bad practice, so starting to add them is welcome in any case.
Moreover, as you seem to have found out yourself, the default connection release mode auto currently always translates to AfterTransaction, which with NHibernate (v2 to v4 at least) releases connections after each statement when no transaction is ongoing for the session.
From Connection Release Modes:
Note that with ConnectionReleaseMode.AfterTransaction, if a session is considered to be in auto-commit mode (i.e. no transaction was started) connections will be released after every operation.
So simply wrapping your session usage in transactions should do it. Since that does not seem to be the case for your application, I suspect other issues.
Is your controller code using other sessions? NHibernate explicit transactions apply only to the session from which they were started (or to sessions opened from that session with ISession.GetSession(EntityMode.Poco)).
So you need to handle a transaction for each opened session.
You may use a TransactionScope instead for wrapping many sessions in a single transaction. But each session will still open a dedicated connection. This will in most circumstances promote the transaction to distributed, which has a performance penalty and may fail if your server is not configured to enable it.
You may configure and use a contextual session instead for replacing many sessions per controller action by only one. Of course you can use dependency injection instead for achieving this too.
Notes:
About reducing the number of queries issued by an application, there are some easy to leverage features in NHibernate:
Batching of lazy-loads (batch fetching): configure your lazily loaded entities and collections of entities to load not only themselves, but also a batch of other pending entities (of the same class) or collections (the same collection of other parent entities). Add the batch-size attribute on collections and classes. I have written a detailed explanation of it in this other answer.
Second level cache, which will allow caching of data across http requests. Transactions mandatory for it to work.
Future queries, as proposed by Low.
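As an illustration of the batch-fetching note above, the batch-size attribute goes in the mapping; a hedged sketch, assuming hypothetical Blog/Post entities and a BlogId foreign key:

```xml
<!-- Hypothetical mapping: when one lazy Posts collection is initialized,
     NHibernate also fetches the Posts of up to 9 other loaded Blogs. -->
<class name="Blog" batch-size="10">
  <set name="Posts" lazy="true" batch-size="10">
    <key column="BlogId" />
    <one-to-many class="Post" />
  </set>
</class>
```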
Going parallel for a web API looks to me like a doomed road. Threads are a valuable resource for a web application. The more threads a request uses, the fewer requests the web application will be able to serve in parallel. So going that way will very likely be a major pain for your application's scalability.
The OnClose mode is not recommended because it delays connection release until session closing, which may occur quite late after the last transaction, especially when using contextual sessions. Since your session usage looks very localized, with the session likely closed very soon after the last query, it should not be an issue for your application.
parallelize the queries (which would take some work since NHibernate does not have an async api and sessions are single threaded) and things like that.
You can defer the execution of queries using NHibernate Futures.
The following code (extracted from the reference article) executes a single round trip even though two results are retrieved:
using (var s = sf.OpenSession())
using (var tx = s.BeginTransaction())
{
    var blogs = s.CreateCriteria<Blog>()
        .SetMaxResults(30)
        .Future<Blog>();
    var countOfBlogs = s.CreateCriteria<Blog>()
        .SetProjection(Projections.Count(Projections.Id()))
        .FutureValue<int>();

    Console.WriteLine("Number of blogs: {0}", countOfBlogs.Value);
    foreach (var blog in blogs)
    {
        Console.WriteLine(blog.Title);
    }

    tx.Commit();
}
You can also use NHibernate Batching to reduce the number of queries.
My app works with a MySQL database, using FireDAC components for the connection. Recently I had a network problem; testing shows it loses (from time to time) 4 ping requests. My app returns the error: "[FireDAC][Phys][MySQL] Lost connection to MySQL server during query". Now the question: will setting fdconnection.TFDUpdateOptions.LockWait to True (the default is False) resolve my problem or create new ones?
TFDUpdateOptions.LockWait has no effect on your connection to the database. It determines what happens when a record lock can't be obtained immediately. The documentation says it pretty clearly:
Use the LockWait property to control whether FireDAC should wait while the pessimistic lock is acquired (True), or return the error immediately (False) if the record is already locked. The default value is False.
The LockWait property is used only if LockMode = lmPessimistic.
FireDAC can't wait to get a lock if it loses the connection, as clearly there is no way to either request the lock or determine if it was obtained. Therefore, changing LockWait will not change the lost connection issue, and it may slow many other operations against the data.
The only solution to your lost ping requests is to fix your network connection so it stops dropping packets. Randomly changing options on TFDConnection isn't going to fix networking issues.
Our mysql hoster has a limit of concurrent db connections. As it is rather pricey to expand that limit the following question came up:
Info:
I have a web app (been developed by an external coder- in case you might wonder about this question).
The web app is distributed and installed on many servers (every user installs it on their PC). These satellites send data into a MySQL db. Right now the satellites post into the db directly. To improve security and error handling, I would like to have the satellites post to an XML-RPC endpoint (WordPress API), which then posts into the db.
Question:
Would such an API reduce the number of concurrent connections or not?
(Right now every satellite connects directly, so it is 1 user = 1 connection.)
If 10 satellites are posting to one file, and this file then processes the data and posts it into the db, does that count as one connection? (Or as many connections as there are data sets processed?)
What if the API throttles a little, so that only one post happens at a time? Would this lead to just one connection or not?
Any pointers are well appreciated!
Thank you in advance!
If you want to reduce concurrent connections to the database (because the fact is, creating a connection to the database is "expensive"), you should look into using a ConnectionPool (example with Java).
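The core idea of a connection pool can be sketched in a few lines of Java: a fixed set of reusable connections caps what the server ever sees, no matter how many satellites are posting. PooledConnection below is a stand-in for a real driver connection (e.g. java.sql.Connection); real pools add validation, timeouts, and growth policies.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Stand-in for a real driver connection (e.g. java.sql.Connection).
class PooledConnection {
    final int id;
    PooledConnection(int id) { this.id = id; }
}

class ConnectionPool {
    private final BlockingQueue<PooledConnection> idle;

    ConnectionPool(int size) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(new PooledConnection(i));
        }
    }

    // Blocks until a connection is free, so the database server never sees
    // more than `size` concurrent connections regardless of caller count.
    PooledConnection acquire() {
        try {
            return idle.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }

    // Returning the connection makes it available to the next caller,
    // instead of paying the open/close cost again.
    void release(PooledConnection connection) {
        idle.offer(connection);
    }

    int available() {
        return idle.size();
    }
}
```

With a pool of size 1 in the centralized service, the hoster would count exactly one connection, at the price of requests queuing behind each other.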
How are concurrent database connections counted?
(source: iforce.co.nz)
A connectionless server
Uses a connectionless IPC API (e.g., connectionless datagram socket)
Sessions with concurrent clients can be interleaved.
A connection-oriented server
Uses a connection-oriented IPC API (e.g., stream-mode socket)
Sessions with concurrent clients can only be sequential unless the server is threaded.
(Client-server distributed computing paradigm, N.A.)
Design and Performance Issues
Database connections can become a bottleneck. This can be addressed by using connection pools.
Compiled SQL statements can be re-used by using PreparedStatements instead of Statements. These statements can be parameterized.
Connections are usually not created directly by the servlet but are either created using a factory (DataSource) or obtained from a naming service (JNDI).
It is important to release connections (close them or return them to the connection pool). This should be done in a finally clause (so that it is done in any case). Note that close() also throws an exception!
try
{
    Console.WriteLine("Executing the try statement.");
    throw new NullReferenceException();
}
catch (NullReferenceException e)
{
    Console.WriteLine("{0} Caught exception #1.", e);
}
catch
{
    Console.WriteLine("Caught exception #2.");
}
finally
{
    Console.WriteLine("Executing finally block.");
}
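In Java, the same release-in-any-case guarantee is more idiomatically written with try-with-resources, which calls close() automatically even when the body throws. A minimal sketch with a stand-in resource (FakeConnection is hypothetical; a real example would be a java.sql.Connection obtained from a pool):

```java
// Stand-in for a pooled database connection.
class FakeConnection implements AutoCloseable {
    static int closed = 0;

    @Override
    public void close() {
        closed++; // a real implementation would return the connection to the pool
    }
}

class ReleaseDemo {
    static String query() {
        // try-with-resources calls close() automatically, even if the body
        // throws, so the connection is always released without an explicit
        // finally clause.
        try (FakeConnection connection = new FakeConnection()) {
            return "row";
        }
    }
}
```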
There are various problems when developing interfaces between OO and RDBMS. This is called the "paradigm mismatch". The main problem is that databases use reference by value while OO languages use reference by address. So-called middleware/object persistency framework software tries to ease this.
Dietrich, R. (2012). Web Application Architecture, Server Side Scripting Servlets. Palmerston North: Massey University.
It depends on the way you implement the centralized service.
If the service, after receiving a request, immediately posts the data to MySQL, you may have many connections when there are simultaneous requests. But using connection pooling you can control precisely how many open connections you will have. In the limit, you can have just one open connection. This might cause contention if there are many concurrent requests, as each request has to wait for the connection to be released.
If the service receives requests, stores them some place other than the database, and processes them in chunks, you can also have just one connection. But this case is more complex to implement because you have to control access (reading and writing) to the temporary data buffer.
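That second approach (buffer requests somewhere else, then process them in chunks) can be sketched as follows; the names are illustrative, and the `flushed` list stands in for writes made over the single open database connection:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class ChunkedWriter {
    private final BlockingQueue<String> buffer = new LinkedBlockingQueue<>();
    private final int chunkSize;
    // Stand-in for writes made over the single open database connection.
    private final List<List<String>> flushed = new ArrayList<>();

    ChunkedWriter(int chunkSize) { this.chunkSize = chunkSize; }

    // Satellites call this; nothing touches the database yet.
    void submit(String request) { buffer.add(request); }

    // A single worker drains the buffer and writes it in chunks,
    // so only one database connection is ever needed.
    void flush() {
        List<String> pending = new ArrayList<>();
        buffer.drainTo(pending);
        for (int i = 0; i < pending.size(); i += chunkSize) {
            int end = Math.min(i + chunkSize, pending.size());
            flushed.add(new ArrayList<>(pending.subList(i, end)));
        }
    }

    List<List<String>> writes() { return flushed; }
}
```

The tricky part the answer alludes to is exactly the buffer: submit and flush run on different threads, which is why a concurrent queue (or equivalent locking) is needed.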
I have an application which connects to a MySql database using Delphi's TAdoConnection object. This is a very query intensive application. So I create the connection and keep it open to avoid the high resource expense of open/closing database connections. But obviously problems can arise (database restart, network connection failure, etc). So I have built in code to free my database object, recreate it, and reconnect when queries fail.
I have a common function to connect to the database. The relevant code is this:
try
  AdoConnection.Open;
  Result := AdoConnection.Connected;
except
  Result := False;
  ......
end;
I ran some tests by turning the MySQL database on and off. Everything works fine if the database is off at application startup (i.e. it properly throws an exception). However, if I turn off the database after the application has already successfully connected, subsequent re-connections do not throw exceptions and additionally falsely report True for AdoConnection.Connected. I am sure the connection object had been freed and recreated first.
It seems there is some sort of caching mechanism going on here (most likely at the hardware/driver level - not application level). Anyone have any ideas?
I observed this also.
Ideas for dealing with it:
If you get an error running a query, check whether it's a connection issue; if so, try to reconnect and then run the query again.
If your program uses a timer and waits a while between batches of queries, perform a simple query (maybe "select now()") before each batch; if this fails, try to reconnect.
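Both ideas reduce to the same pattern: treat a failed query as a possibly dropped connection, reconnect, and retry once. A minimal sketch with a stand-in connection object (FlakyDb and the exception type are hypothetical; a real version would inspect the driver's error code):

```java
// Stand-in for a driver connection that can silently drop.
class FlakyDb {
    private boolean connected = false;
    int reconnects = 0;

    void connect() { connected = true; reconnects++; }
    void drop() { connected = false; }

    String query(String sql) {
        if (!connected) throw new IllegalStateException("lost connection");
        return "result:" + sql;
    }
}

class ReconnectingRunner {
    // Run the query; on failure assume the connection dropped,
    // reconnect, and retry exactly once.
    static String run(FlakyDb db, String sql) {
        try {
            return db.query(sql);
        } catch (IllegalStateException e) {
            db.connect();
            return db.query(sql);
        }
    }
}
```

Retrying once (rather than looping) keeps the failure visible if the network is genuinely down, instead of hanging the application.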
I am using a MySQL database with Hibernate and JSP. I select values from the database with Hibernate, prepare a view, and display it using Ajax. I am polling the database every 1 second using a JavaScript timer that calls an Ajax function and returns the new response. This results in an error:
JDBCExceptionReporter:78 - Data source rejected establishment of connection, message from server: "Too many connections".
Help me sort out the problem described above.
Make sure you close the session (and connection) after using it.
Make sure the maximum number of connections configured for MySQL is sufficient.
Use some caching layer. It is insane to hit the database every second for each user.
If you are making a chat application, consider Comet and server-side pub-sub solutions (JMS, for example).