What if I new up some DataContexts, read some data, and then only wrap the SubmitChanges in a TransactionScope?
string conn1 = GetConn1();
string conn2 = GetConn2();

using (DataContext1 dc1 = new DataContext1(conn1))
{
    List<Customer> customers = ReadSomeData(dc1);
    ModifySomeCustomers(customers); // performs local modifications to Customer instances

    using (DataContext2 dc2 = new DataContext2(conn2))
    {
        List<Order> orders = ReadSomeData(dc2);
        ModifySomeOrders(orders); // performs local modifications to Order instances

        using (TransactionScope scope = new TransactionScope())
        {
            dc1.SubmitChanges();
            dc2.SubmitChanges();
            scope.Complete();
        }
    }
}
The first SubmitChanges call is expected to fetch a connection from the pool and enlist that connection in the scope.
MS DTC is enabled, so the second SubmitChanges call is expected to promote the transaction to a distributed one, fetch a connection from the pool, and enlist that connection in the scope.
I'm concerned that ReadSomeData may have left the connection open, so SubmitChanges doesn't fetch a connection from the pool, and therefore doesn't enlist the connection in the scope.
SubmitChanges will participate in the TransactionScope.
If SubmitChanges finds an ambient transaction it enlists to that transaction, otherwise it creates a transaction itself for the lifetime of the SubmitChanges method.
There is an MSDN article about the transaction handling in SubmitChanges.
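To see that both calls really join the same scope, you can inspect the ambient transaction while this runs. A minimal sketch reusing the question's dc1/dc2 (Transaction and TransactionScope come from System.Transactions; DistributedIdentifier stays Guid.Empty until MSDTC promotion actually happens):

using (TransactionScope scope = new TransactionScope())
{
    dc1.SubmitChanges(); // first connection enlists in the ambient (still local) transaction
    Console.WriteLine(Transaction.Current.TransactionInformation.DistributedIdentifier); // Guid.Empty

    dc2.SubmitChanges(); // second connection enlists, which forces promotion to MSDTC
    Console.WriteLine(Transaction.Current.TransactionInformation.DistributedIdentifier); // non-empty once promoted

    scope.Complete(); // both sets of changes commit or roll back together
}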
I'm implementing a RESTful API with Vert.x, using its async JDBC client with MySQL and c3p0 as the connection pool.
My problem is that although the close handler reports success, the actual database connection is neither closed nor reused. The pool gets full in seconds, resulting in: BasicResourcePool:204 - acquire test -- pool is already maxed out. [managed: 20; max: 20]
client.getConnection(connectionAsyncResult -> {
    SQLConnection connection = connectionAsyncResult.result();
    connection.queryWithParams("SELECT * FROM AIRPORTS WHERE ID = ?", new JsonArray().add(id), select -> {
        ResultSet resultSet = select.result();
        Airport $airport = resultSet.getRows()
            .stream()
            .map(Airport::new)
            .findFirst()
            .get();
        asyncResultHandler.handle(Future.succeededFuture($airport));
        connection.close(closeHandler -> {
            if (closeHandler.succeeded()) {
                LOG.debug("Database Connection closed");
            } else if (closeHandler.failed()) {
                LOG.error("Database Connection failed to close!");
            }
        });
    });
});
Any idea what I'm missing?
All the best!
You're probably hitting an exception somewhere in your handler. If that happens, the last lines are never executed, so close() is not called and the connection is never returned to the pool.
You should wrap your code in a try/finally block to guarantee that the connection is returned to the pool.
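For example, applied to the code in the question (this reuses the question's client, Airport, LOG and asyncResultHandler; the failedFuture branch is just one reasonable way to surface the error):

client.getConnection(connectionAsyncResult -> {
    SQLConnection connection = connectionAsyncResult.result();
    connection.queryWithParams("SELECT * FROM AIRPORTS WHERE ID = ?", new JsonArray().add(id), select -> {
        try {
            ResultSet resultSet = select.result();
            Airport airport = resultSet.getRows()
                .stream()
                .map(Airport::new)
                .findFirst()
                .get(); // throws NoSuchElementException when no row matches
            asyncResultHandler.handle(Future.succeededFuture(airport));
        } catch (Exception e) {
            // Any failure (empty result, mapping error, ...) still reaches the caller.
            asyncResultHandler.handle(Future.failedFuture(e));
        } finally {
            // Runs on every path, so the connection always goes back to the pool.
            connection.close(closeHandler -> {
                if (closeHandler.succeeded()) {
                    LOG.debug("Database Connection closed");
                } else {
                    LOG.error("Database Connection failed to close!", closeHandler.cause());
                }
            });
        }
    });
});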
I am trying to create a merge replication using RMO programming, following the example I got from here:
string publisherName = "DataSourceName";
string publicationName = "AdvWorksSalesOrdersMerge";
string publicationDbName = "AdventureWorksDW2008R2";
ReplicationDatabase publicationDb;
MergePublication publication;

// Create a connection to the Publisher.
ServerConnection conn = new ServerConnection(publisherName);

try
{
    // Connect to the Publisher.
    conn.Connect();

    // Enable the database for merge publication.
    publicationDb = new ReplicationDatabase(publicationDbName, conn);
    if (publicationDb.LoadProperties())
    {
        if (!publicationDb.EnabledMergePublishing)
        {
            publicationDb.EnabledMergePublishing = true;
        }
    }
    else
    {
        // Do something here if the database does not exist.
        throw new ApplicationException(String.Format(
            "The {0} database does not exist on {1}.",
            publicationDb, publisherName));
    }

    // Set the required properties for the merge publication.
    publication = new MergePublication();
    publication.ConnectionContext = conn;
    publication.Name = publicationName;
    publication.DatabaseName = publicationDbName;

    // Enable precomputed partitions.
    publication.PartitionGroupsOption = PartitionGroupsOption.True;

    // Specify the Windows account under which the Snapshot Agent job runs.
    // This account will be used for the local connection to the
    // Distributor and all agent connections that use Windows Authentication.
    publication.SnapshotGenerationAgentProcessSecurity.Login = userid;
    publication.SnapshotGenerationAgentProcessSecurity.Password = password;

    // Explicitly set the security mode for the Publisher connection to
    // Windows Authentication (the default).
    publication.SnapshotGenerationAgentPublisherSecurity.WindowsAuthentication = true;

    // Enable Subscribers to request snapshot generation and filtering.
    publication.Attributes |= PublicationAttributes.AllowSubscriberInitiatedSnapshot;
    publication.Attributes |= PublicationAttributes.DynamicFilters;

    // Enable pull and push subscriptions.
    publication.Attributes |= PublicationAttributes.AllowPull;
    publication.Attributes |= PublicationAttributes.AllowPush;

    if (!publication.IsExistingObject)
    {
        // Create the merge publication.
        publication.Create();

        // Create a Snapshot Agent job for the publication.
        publication.CreateSnapshotAgent();
    }
    else
    {
        throw new ApplicationException(String.Format(
            "The {0} publication already exists.", publicationName));
    }
}
catch (Exception ex)
{
    // Implement custom application error handling here.
    throw new Exception(String.Format("The publication {0} could not be created.", publicationName), ex);
}
finally
{
    conn.Disconnect();
}
but at this line:
publicationDb.EnabledTransPublishing = true;
I am getting the error "An exception occurred while executing a Transact-SQL statement or batch."
Please help me out with this problem.
You probably have your answer by now, but for those who might be asking the same question: it's because you are using an Express edition of SQL Server, and a Publisher/Distributor cannot be created in any version of SQL Server Express.
The instance referenced in your code is not a valid instance for publishing, hence the exception is thrown.
Take a look at:
http://msdn.microsoft.com/en-us/library/ms151819(v=sql.105).aspx
and the lines that say:
Microsoft SQL Server 2008 Express can serve as a Subscriber for all types of replication, providing a convenient way to distribute data to client applications that use this edition of SQL Server. When using SQL Server Express in a replication topology, keep the following considerations in mind:
SQL Server Express cannot serve as a Publisher or Distributor. However, merge replication allows changes to be replicated in both directions between a Publisher and Subscriber.
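If you want the failure to be explicit rather than a raw Transact-SQL exception, you can check the edition before touching the replication objects. A minimal sketch using plain ADO.NET rather than RMO (SqlConnection/SqlCommand from System.Data.SqlClient; the connection string is a placeholder; SERVERPROPERTY('EngineEdition') returns 4 for Express):

using (var conn = new SqlConnection(@"Data Source=DataSourceName;Integrated Security=SSPI"))
using (var cmd = new SqlCommand(
    "SELECT CAST(SERVERPROPERTY('Edition') AS nvarchar(128)), CAST(SERVERPROPERTY('EngineEdition') AS int)", conn))
{
    conn.Open();
    using (var rdr = cmd.ExecuteReader())
    {
        rdr.Read();
        string edition = rdr.GetString(0);
        int engineEdition = rdr.GetInt32(1);
        if (engineEdition == 4) // Express Edition
        {
            Console.WriteLine(edition + " cannot act as a Publisher or Distributor.");
        }
        else
        {
            Console.WriteLine(edition + " can be configured for publishing.");
        }
    }
}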
I've got an integration test that looks like this:
using (var tran = Connection.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    try
    {
        // Act.
        var result = controller.Create(something);

        // Assert.
        Assert.IsNotNull(result);
    }
    catch (Exception exc)
    {
        Assert.Fail(exc.Message);
    }
    finally
    {
        tran.Rollback();
        Connection.Close();
    }
}
Now, in that Create method, I end up calling a stored procedure which returns multiple result sets.
Here's the code to call that SP:
var cmd = Database.Connection.CreateCommand();
cmd.CommandType = CommandType.StoredProcedure;
cmd.CommandText = "exec dbo.MySP @SomeParam";
cmd.Parameters.Add(new SqlParameter { Value = "test", ParameterName = "SomeParam" });

using (var rdr = cmd.ExecuteReader()) // <-- exception thrown here
{
    // code to read result sets.
}
I get the following exception:
System.InvalidOperationException: ExecuteReader requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized.
Which I guess makes sense, but I would have thought it would inherit the pending local transaction?
I previously had the above code open a new connection, but it just timed out due to an apparent lock the pending transaction had on certain tables, despite the read uncommitted isolation level.
Basically, I need to be able to have an integration test which:
Opens a transaction
Does some stuff, including saving some records to the DB, then calling another stored procedure which accesses data that includes the newly created records
Rolls back the transaction.
Any ideas guys?
This works:
using (new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions
    {
        IsolationLevel = IsolationLevel.ReadUncommitted
    }))
This doesn't:
using (var tran = Connection.BeginTransaction(IsolationLevel.ReadUncommitted))
{
It must have something to do with the fact that TransactionScope lives outside of the actual connection, so it wraps all connections that are opened inside of it, whereas BeginTransaction starts the transaction on one specific connection.
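If you'd rather stay with Connection.BeginTransaction, the ADO.NET fix for that exception is to hand the pending transaction to the command explicitly. A sketch assuming the command is built on the same connection that began the transaction (how you pass tran down to the code that creates the command is up to you):

using (var tran = Connection.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    var cmd = Connection.CreateCommand();
    cmd.Transaction = tran; // the missing piece the exception complains about
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.CommandText = "dbo.MySP";
    cmd.Parameters.Add(new SqlParameter { ParameterName = "SomeParam", Value = "test" });

    using (var rdr = cmd.ExecuteReader())
    {
        // read the result sets
    }

    tran.Rollback(); // keep the test side-effect free
}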
I'm using NHibernate 3.1.0 with the MySql Connector 6.3.5. As a general rule, my repository methods are wrapped in an NHibernate transaction. However, the service or application code calling the repository methods might also require a transaction scope, hence the mixing of NHibernate transactions with .NET's TransactionScope. A simulated test looks like this:
[Test]
public void CanPerformNestedSave()
{
    using (var scope = new TransactionScope())
    {
        var entity = new AdminUser { Email = "user@test.com", Name = "Test User 1", Password = "123" };

        using (ISession session = OpenSession())
        {
            using (var tx = session.BeginTransaction())
            {
                entity.ModifiedAt = DateTime.Now;
                session.SaveOrUpdate(entity);
                tx.Commit();
            }
        }

        scope.Complete();
    }
}
The test fails with the following error:
NHibernate.TransactionException : Begin failed with SQL exception
----> System.InvalidOperationException : Nested transactions are not supported.
I've scoured the web to find a solution to this scenario and hopefully the community on StackOverflow can help.
I've blogged about this here.
In the blog post, NServiceBus creates the outer TransactionScope for the handlers, and the NHibernate session and transactions are used inside the handler.
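Not taken from the post, but as a rough sketch of the shape that avoids the connector's nested BEGIN (assuming the MySql connector enlists the session's connection in the ambient transaction when it is opened inside the scope), you can let the outer TransactionScope be the only transaction and just flush the session:

[Test]
public void CanPerformNestedSave()
{
    using (var scope = new TransactionScope())
    using (ISession session = OpenSession()) // OpenSession() is the question's own helper
    {
        var entity = new AdminUser { Email = "user@test.com", Name = "Test User 1", Password = "123" };
        entity.ModifiedAt = DateTime.Now;

        session.SaveOrUpdate(entity);
        session.Flush(); // issue the SQL on the enlisted connection instead of
                         // opening a nested NHibernate transaction

        scope.Complete(); // the ambient transaction commits, or rolls back on dispose
    }
}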
There are 19 methods in our DAO layer, and each is some variation of this:
public TicketProp saveTicketProp(TicketProp prop) {
    EntityManager em = this.emf.createEntityManager();
    try {
        em.getTransaction().begin();
        prop = (TicketProp) em.merge(prop);
        em.getTransaction().commit();
        return prop;
    } finally {
        em.close();
    }
}
Meaning: In each method we handle our own transaction and close it in a finally block. We're testing a Jersey app, so our JUnit tests extend JerseyTest. Each test method instantiates a Grizzly container, runs the test, then shuts down the container. EntityManagerFactory is injected by spring. We're using JPA over Hibernate.
I'm monitoring the connections to our MySQL test DB and they're always high. A single test alone drives the MySQL "Max_used_connections" variable up to 38. For fun, I went and commented out all the em.close() calls, and the test still uses 38 connections.
I'm using Hibernate's built-in connection pooling (not for prod use, I know). I still expected some sort of intelligent pooling.
Am I handling the EntityManager wrong? How else can I close connections?
You should close the EntityManagerFactory at the end of your test. From the javadoc of EntityManagerFactory#close():
void javax.persistence.EntityManagerFactory.close()
Close the factory, releasing any resources that it holds. After a factory instance has been closed, all methods invoked on it will throw the IllegalStateException, except for isOpen, which will return false. Once an EntityManagerFactory has been closed, all its entity managers are considered to be in the closed state.
As a side note, you should actually roll back the transaction before closing the EM in the finally clause:
public TicketProp saveTicketProp(TicketProp prop) {
    EntityManager em = this.emf.createEntityManager();
    try {
        em.getTransaction().begin();
        prop = (TicketProp) em.merge(prop);
        em.getTransaction().commit();
        return prop;
    } finally {
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback();
        }
        if (em.isOpen()) {
            em.close();
        }
    }
}
Why do you think that EntityManager.close() always physically closes the underlying connection? That is up to the connection pool (you probably need to configure it and set the maximum number of simultaneously open connections).
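For example, a minimal sketch of capping Hibernate's built-in pool when the EntityManagerFactory is created (the persistence-unit name "ticketsTestPU" is a placeholder; the same hibernate.connection.pool_size key can equally go into persistence.xml or the Spring factory-bean's JPA properties):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class TestEntityManagerFactoryHolder {
    public static EntityManagerFactory create() {
        Map<String, String> props = new HashMap<String, String>();
        // Maximum number of connections the built-in pool keeps open.
        props.put("hibernate.connection.pool_size", "5");
        return Persistence.createEntityManagerFactory("ticketsTestPU", props);
    }
}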