Nested Transactions with MySQL and Entity Framework Core

I'm using MySQL with EF Core via the Pomelo provider. I need to implement the Unit of Work pattern for transactions. I have a service that calls two repository methods, but I am not able to get nested transactions working. This is how my service method looks now:
public void methodA(param)
{
    using (TransactionScope tx = new TransactionScope(TransactionScopeOption.Required))
    {
        repo1.save(data1);
        repo2.save(data2);
        tx.Complete();
    }
}
This is how the save method in repo1 is implemented:
private readonly UserDbContext appDbContext;

public repo1(UserDbContext _appDbContext)
{
    appDbContext = _appDbContext;
}

public void save(User entity)
{
    var dbset = appDbContext.Set<User>().Add(entity);
    appDbContext.SaveChanges();
}
This is how the save method in repo2 is implemented:
private readonly UserDbContext appDbContext;

public repo2(UserDbContext _appDbContext)
{
    appDbContext = _appDbContext;
}

public void save(UserRole entity)
{
    var dbset = appDbContext.Set<UserRole>().Add(entity);
    appDbContext.SaveChanges();
}
I am getting the following error when running the service method:
Error generated for warning 'Microsoft.EntityFrameworkCore.Database.Transaction.AmbientTransactionWarning: An ambient transaction has been detected. The current provider does not support ambient transactions. See http://go.microsoft.com/fwlink/?LinkId=800142'. This exception can be suppressed or logged by passing event ID 'RelationalEventId.AmbientTransactionWarning' to the 'ConfigureWarnings' method in 'DbContext.OnConfiguring' or 'AddDbContext'.
This is how I registered UserDbContext in Startup.cs:
services.AddDbContext<UserDbContext>(options => options.UseLazyLoadingProxies().UseMySql("Server = xxxx; Database = xxx; Uid = xx;ConnectionReset=True;", b => b.MigrationsAssembly("AssemblyName")));
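For reference, the suppression the error message points at would hang off this same registration, roughly like below; as far as I can tell it only silences the warning and does not make the ambient TransactionScope actually work:
// Sketch only: ignore the ambient-transaction warning mentioned in the error message.
// Requires: using Microsoft.EntityFrameworkCore.Diagnostics;
services.AddDbContext<UserDbContext>(options => options
    .UseLazyLoadingProxies()
    .UseMySql("Server = xxxx; Database = xxx; Uid = xx;ConnectionReset=True;",
        b => b.MigrationsAssembly("AssemblyName"))
    .ConfigureWarnings(w => w.Ignore(RelationalEventId.AmbientTransactionWarning)));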
I even tried adding a middleware that starts a transaction at the beginning of the request and commits or rolls back on the response, but I am still not able to manage nested transactions.
This is how my middleware looks:
public class TransactionPerRequestMiddleware
{
    private readonly RequestDelegate next_;

    public TransactionPerRequestMiddleware(RequestDelegate next)
    {
        next_ = next;
    }

    public async Task Invoke(HttpContext context, UserDbContext userDbContext)
    {
        var transaction = userDbContext.Database.BeginTransaction(System.Data.IsolationLevel.ReadCommitted);

        await next_.Invoke(context);

        int statusCode = context.Response.StatusCode;
        if (statusCode == 200 || statusCode == 302)
        {
            transaction.Commit();
        }
        else
        {
            transaction.Rollback();
        }
    }
}
Can anyone help me please?

Related

How do I make a JMS ObjectMessage for a Unit Test?

I'm trying to write a unit test for an MDB. The goal of my test is to make sure that the logic in the MDB can identify the correct type of object in the ObjectMessage and process it. However, I can't figure out how to make an ObjectMessage so I can test it. I keep getting null pointer exceptions.
Here is my unit test:
/**
 * Test of the logic in the MDB
 */
@RunWith(JMockit.class)
@ExtendWith(TimingExtension.class)
class MDBTest
{
    protected MyMDB mdb;

    @BeforeEach
    public void setup() throws NamingException, CreateHeaderException, DatatypeConfigurationException, PropertiesDataException
    {
        mdb = new MyMDB();
    }

    /**
     * Test the processing of the messages by the MDB
     */
    @Test
    void testReceivingMessage() throws JMSException, IOException
    {
        MyFirstObject testMsg = getTestMessage();
        ObjectMessage msg = null;
        Session session = null;

        new MockUp<ObjectMessage>()
        {
            @Mock
            public void $init()
            {
            }

            @Mock
            public Serializable getObject()
            {
                return testMsg;
            }
        };

        new MockUp<Session>()
        {
            @Mock
            public void $init()
            {
            }

            @Mock
            public ObjectMessage createObjectMessage(Serializable object)
            {
                return msg;
            }
        };

        // !!!! Null pointer here on Session !!!!
        ObjectMessage msgToSend = session.createObjectMessage(testMsg);
        mdb.onMessage(msgToSend);

        assertEquals(1, mdb.getNumMyFirstObjectMsgs());
    }

    /**
     * Create a Test Message
     *
     * @return the test message
     * @throws IOException
     */
    protected MyFirstObject getTestMessage() throws IOException
    {
        MyFirstObject myObj = new MyFirstObject();
        myObj.id = 0123;
        myObj.description = "TestMessage";
        return myObj;
    }
}
I feel like I should be able to initialize Session somehow, but I need to do it without using an additional library like Mockrunner.
Any suggestions?
I would try to address this in a different style: provide a mock client that just mocks the right API.
We should mock only the set of functions required for message retrieval and processing, which means we might have to provide a custom implementation for some of the APIs available in the EJB/JMS library. The mock client will have a function to push messages onto a given topic/queue/channel; the message can be a simple String.
A simple implementation might look like this; other methods have been omitted for simplicity.
// JMSClientImpl is an implementation of the Connection interface.
public class MyJmsTestClient extends JMSClientImpl {

    Map<String, String> channelToMessage = new ConcurrentHashMap<>();

    public Map<String, String> getMessageMap() {
        return channelToMessage;
    }

    public void enqueMessage(String channel, String message) {
        channelToMessage.put(channel, message);
    }

    @Override
    public Session createSession() {
        return new MyTestSession(this);
    }
}
// A class that implements some of the methods from the Session interface
public class MyTestSession extends SessionImpl {

    private MyJmsTestClient jmsClient;

    MyTestSession(MyJmsTestClient jmsClient) {
        this.jmsClient = jmsClient;
    }

    // Override the methods that fetch messages from the remote JMS provider.
    // Here you can just return messages from MyJmsTestClient.
    // Also override other necessary methods like ack/nack etc.
    MessageConsumer createConsumer(Destination destination) throws JMSException {
        // returns a test consumer
        return new TestMessageConsumer(jmsClient, destination);
    }
}
A class that implements methods from the MessageConsumer interface:
class TestMessageConsumer extends MessageConsumerImpl {

    private MyJmsTestClient jmsClient;
    private Destination destination;

    TestMessageConsumer(MyJmsTestClient jmsClient, Destination destination) {
        this.jmsClient = jmsClient;
        this.destination = destination;
    }

    Message receive() throws JMSException {
        // return a message from the client
    }
}
There's no straightforward way; you could check whether any library provides an embedded JMS client feature.

How to connect to multiple MySQL databases as per the header in REST API request

I'm creating a multi-tenant Spring Boot / JPA application.
In this application, I want to connect to MySQL databases using a DB name that is sent in the API request as a header.
I checked many multi-tenant project samples online but still can't figure out a solution.
Can anyone suggest a way to do this?
You can use AbstractRoutingDataSource to achieve this. AbstractRoutingDataSource needs to know which actual DataSource to route to (referred to as the Context), which is provided by the determineCurrentLookupKey() method. Using the example from here.
Define Context like:
public enum ClientDatabase {
    CLIENT_A, CLIENT_B
}
Then you need to define the Context Holder, which will be used in determineCurrentLookupKey():
public class ClientDatabaseContextHolder {

    private static ThreadLocal<ClientDatabase> CONTEXT = new ThreadLocal<>();

    public static void set(ClientDatabase clientDatabase) {
        Assert.notNull(clientDatabase, "clientDatabase cannot be null");
        CONTEXT.set(clientDatabase);
    }

    public static ClientDatabase getClientDatabase() {
        return CONTEXT.get();
    }

    public static void clear() {
        CONTEXT.remove();
    }
}
Then you can extend AbstractRoutingDataSource like below:
public class ClientDataSourceRouter extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return ClientDatabaseContextHolder.getClientDatabase();
    }
}
Finally, DataSource bean configuration:
@Bean
public DataSource clientDatasource() {
    Map<Object, Object> targetDataSources = new HashMap<>();
    DataSource clientADatasource = clientADatasource();
    DataSource clientBDatasource = clientBDatasource();
    targetDataSources.put(ClientDatabase.CLIENT_A, clientADatasource);
    targetDataSources.put(ClientDatabase.CLIENT_B, clientBDatasource);

    ClientDataSourceRouter clientRoutingDatasource = new ClientDataSourceRouter();
    clientRoutingDatasource.setTargetDataSources(targetDataSources);
    clientRoutingDatasource.setDefaultTargetDataSource(clientADatasource);
    return clientRoutingDatasource;
}
https://github.com/wmeints/spring-multi-tenant-demo
Following this logic, I was able to solve it. Some of the versions needed to be upgraded, and the code as well.
The Spring Boot version has changed:
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.0.RELEASE</version>
    </parent>
The MySQL version has been removed.
And there are some small changes in MultitenantConfiguration.java:
@Configuration
public class MultitenantConfiguration {

    @Autowired
    private DataSourceProperties properties;

    /**
     * Defines the data source for the application
     * @return
     */
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() {
        File[] files = Paths.get("tenants").toFile().listFiles();
        Map<Object, Object> resolvedDataSources = new HashMap<>();

        if (files != null) {
            for (File propertyFile : files) {
                Properties tenantProperties = new Properties();
                DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create(this.getClass().getClassLoader());

                try {
                    tenantProperties.load(new FileInputStream(propertyFile));
                    String tenantId = tenantProperties.getProperty("name");

                    dataSourceBuilder.driverClassName(properties.getDriverClassName())
                            .url(tenantProperties.getProperty("datasource.url"))
                            .username(tenantProperties.getProperty("datasource.username"))
                            .password(tenantProperties.getProperty("datasource.password"));

                    if (properties.getType() != null) {
                        dataSourceBuilder.type(properties.getType());
                    }

                    resolvedDataSources.put(tenantId, dataSourceBuilder.build());
                } catch (IOException e) {
                    e.printStackTrace();
                    return null;
                }
            }
        }

        // Create the final multi-tenant source.
        // It needs a default database to connect to.
        // Make sure that the default database is actually an empty tenant database.
        // Don't use that for a regular tenant if you want things to be safe!
        MultitenantDataSource dataSource = new MultitenantDataSource();
        dataSource.setDefaultTargetDataSource(defaultDataSource());
        dataSource.setTargetDataSources(resolvedDataSources);

        // Call this to finalize the initialization of the data source.
        dataSource.afterPropertiesSet();
        return dataSource;
    }

    /**
     * Creates the default data source for the application
     * @return
     */
    private DataSource defaultDataSource() {
        DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create(this.getClass().getClassLoader())
                .driverClassName(properties.getDriverClassName())
                .url(properties.getUrl())
                .username(properties.getUsername())
                .password(properties.getPassword());

        if (properties.getType() != null) {
            dataSourceBuilder.type(properties.getType());
        }

        return dataSourceBuilder.build();
    }
}
This change is needed because DataSourceBuilder has been moved to another package and its constructor has changed.
I also changed the MySQL driver class name in application.properties like this:
spring.datasource.driver-class-name=com.mysql.jdbc.Driver

Intercepting explicit interface implementations with Castle Dynamic Proxy

I am having trouble getting Castle Dynamic Proxy to intercept methods that are explicit interface implementations. I read here http://kozmic.pl/category/dynamicproxy/ that it should be possible to do this.
Here are my classes:
internal interface IDomainInterface
{
    string DomainMethod();
}

public class DomainClass : IDomainInterface
{
    string IDomainInterface.DomainMethod()
    {
        return "not intercepted";
    }
}
Here is my interceptor class:
public class DomainClassInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        if (invocation.Method.Name == "DomainMethod")
            invocation.ReturnValue = "intercepted";
        else
            invocation.Proceed();
    }
}
And here is my test (which fails):
[TestMethod]
public void can_intercept_explicit_interface_implementation()
{
    // Create proxy
    var generator = new ProxyGenerator();
    var interceptor = new DomainClassInterceptor();
    var proxy = (IDomainInterface)generator.CreateClassProxy(typeof(DomainClass), interceptor);

    // Invoke proxy method
    var result = proxy.DomainMethod();

    // Check method was intercepted -- fails
    Assert.AreEqual("intercepted", result);
}
In addition to not being able to intercept the explicit interface implementation, it also seems that I am not receiving a notification of a non-proxyable member.
Here is my proxy generation hook (which acts as a spy):
public class DomainClassProxyGenerationHook : IProxyGenerationHook
{
    public int NonProxyableCount;

    public void MethodsInspected() { }

    public void NonProxyableMemberNotification(Type type, MemberInfo memberInfo)
    {
        NonProxyableCount++;
    }

    public bool ShouldInterceptMethod(Type type, MethodInfo methodInfo)
    {
        return true;
    }
}
Here is my test (which again fails):
[TestMethod]
public void receive_notification_of_nonproxyable_explicit_interface_implementation()
{
    // Create proxy with generation hook
    var hook = new DomainClassProxyGenerationHook();
    var options = new ProxyGenerationOptions(hook);
    var generator = new ProxyGenerator();
    var interceptor = new DomainClassInterceptor();
    var proxy = (IDomainInterface)generator.CreateClassProxy(typeof(DomainClass), options, interceptor);

    // Check that non-proxyable member notification was received -- fails
    Assert.IsTrue(hook.NonProxyableCount > 0);
}
Has anyone had success in getting DP to intercept explicit interface implementations? If so, how?
You are creating a class proxy. A class proxy only intercepts virtual methods on the class, and an explicit implementation of an interface method in C# is by definition not virtual (since it's private).
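To illustrate the difference (a sketch of mine, not from the original post): an implicit, virtual implementation of the same method would be interceptable by a plain class proxy, because the generated proxy type can override it.
// Contrast: a plain class proxy can intercept this, because the method is public and virtual.
public class DomainClass : IDomainInterface
{
    public virtual string DomainMethod()
    {
        return "not intercepted";
    }
}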
If you want to intercept methods on the interface, you need to tell DynamicProxy about the interface explicitly:
var proxy = (IDomainInterface)generator.CreateClassProxy(typeof(DomainClass), new Type[] { typeof(IDomainInterface) }, interceptor);
Also, your interface is marked as internal, so make sure it is visible to DynamicProxy (either make the interface public or add an InternalsVisibleToAttribute).
With that your first test will pass, and the method will be intercepted.
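If you would rather keep the interface internal, the attribute route looks roughly like this; "DynamicProxyGenAssembly2" is the assembly into which DynamicProxy emits its proxy types.
// In AssemblyInfo.cs (or any compiled file) of the assembly that declares IDomainInterface.
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]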

StructureMap DBServiceRegistry and MVC-mini-profiler?

If I use this code in each repository class then SQL profiling works, but I want to move that code from each class into the class where StructureMap handles the DB.
Example of a Repository class:
public DB CreateNewContext()
{
    var sqlConnection = new SqlConnection(ConfigurationManager.ConnectionStrings["connection"].ConnectionString);
    var profiledConnection = ProfiledDbConnection.Get(sqlConnection);
    return DataContextUtils.CreateDataContext<DB>(profiledConnection);
}

public SqlRecipeRepository(DB dataContext)
{
    _db = CreateNewContext();
}
Now I want the dataContext variable to be the profiled version and so come from my DBServiceRegistry class.
Here is the DBServiceRegistry class:
public class DBServiceRegistry : Registry
{
    public DBServiceRegistry()
    {
        var sqlConnection = new SqlConnection(ConfigurationManager.ConnectionStrings["GetMeCooking.Data.Properties.Settings.server"].ConnectionString);
        var profiledConnection = ProfiledDbConnection.Get(sqlConnection);

        For<DB>().HybridHttpOrThreadLocalScoped().Use(() => DataContextUtils.CreateDataContext<DB>(profiledConnection));

        // Original method just had this:
        // For<DB>().HybridHttpOrThreadLocalScoped().Use(() => new DB());
    }
}
This code does not cause any errors, but I don't get the SQL profiling. What am I doing wrong?
The comment is correct: by creating the SQL connection outside the For line, you are defeating the scoping.
Far better to encapsulate the whole lot in an anonymous delegate:
using System.Configuration;
using System.Data.SqlClient;
using System.Threading.Tasks;
using StructureMap;
using StructureMap.Configuration.DSL;
using Xunit;

public class DBServiceRegistry : Registry
{
    private string connString = ConfigurationManager.ConnectionStrings["GetMeCooking.Data.Properties.Settings.server"].ConnectionString;

    public DBServiceRegistry()
    {
        For<DB>().HybridHttpOrThreadLocalScoped().Use(
            () =>
            {
                var sqlConnection = new SqlConnection(connString);
                var profiledConnection = new StackExchange.Profiling.Data.ProfiledDbConnection(sqlConnection, MiniProfiler.Current);
                return DataContextUtils.CreateDataContext<DB>(profiledConnection);
            });
    }
}
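One thing worth double-checking (an assumption of mine, not something shown in the question): MiniProfiler.Current is only non-null while a profiling session is running, which with the classic MVC mini-profiler usually means starting and stopping it per request in Global.asax, roughly:
// Sketch: per-request profiler session so MiniProfiler.Current is populated
// when the delegate above runs. Adjust to your MiniProfiler version.
protected void Application_BeginRequest()
{
    StackExchange.Profiling.MiniProfiler.Start();
}

protected void Application_EndRequest()
{
    StackExchange.Profiling.MiniProfiler.Stop();
}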
You can use unit tests to verify that the scope is correct (test syntax is xunit.net)
public class DBRegistryTests : IDisposable
{
    private Container container;

    public DBRegistryTests()
    {
        // Arrange (or test setup)
        container = new Container(new DBServiceRegistry());
    }

    [Fact]
    public void ConnectionsAreSameInThread()
    {
        // Create two connections on the same thread
        var conn1 = container.GetInstance<DB>();
        var conn2 = container.GetInstance<DB>();

        // Assert should be equal because hybrid thread-local is the scope
        // and the test executes on the same thread
        Assert.Equal(conn1, conn2);

        // Other assertions that the connection is profiled
    }

    [Fact]
    public void ConnectionAreNotSameInDifferentThreads()
    {
        var conn1 = container.GetInstance<DB>();

        // Request a second connection from a different thread
        // (for < C# 4.0 use Thread instead of Task)
        var conn2 = new Task<DB>(() => this.container.GetInstance<DB>());
        conn2.Start();
        conn2.Wait();

        // Assert that requests from two different threads
        // are not the same
        Assert.NotEqual(conn1, conn2.Result);
    }

    public void Dispose()
    {
        // Test teardown
        container.Dispose();
    }
}

Asp.Net MVC UnitOfWork and MySQL and Sleeping Connections

I have an MVC web app based on the following architecture:
Asp.Net MVC2, Ninject, Fluent NHibernate, and MySQL, using a unit of work pattern.
Every connection to MySQL leaves a sleeping connection that can be seen as an entry in the SHOW PROCESSLIST query results.
Eventually this spawns enough connections to exceed the app pool limit and crash the web app.
I suspect that the connections are not being disposed correctly.
If this is the case, where and how should this happen?
Here is a snapshot of the code that I am using:
public class UnitOfWork : IUnitOfWork
{
    private readonly ISessionFactory _sessionFactory;
    private readonly ITransaction _transaction;

    public ISession Session { get; private set; }

    public UnitOfWork(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
        Session = _sessionFactory.OpenSession();
        Session.FlushMode = FlushMode.Auto;
        _transaction = Session.BeginTransaction(IsolationLevel.ReadCommitted);
    }

    public void Dispose()
    {
        if (Session != null)
        {
            if (Session.IsOpen)
            {
                Session.Close();
                Session = null;
            }
        }
    }

    public void Commit()
    {
        if (!_transaction.IsActive)
        {
            throw new InvalidOperationException("No active transaction");
        }
        _transaction.Commit();
        Dispose();
    }

    public void Rollback()
    {
        if (_transaction.IsActive)
        {
            _transaction.Rollback();
        }
    }
}

public interface IUnitOfWork : IDisposable
{
    void Commit();
    void Rollback();
}
public class DataService
{
    int WebsiteId = Convert.ToInt32(ConfigurationManager.AppSettings["Id"]);

    private readonly IKeyedRepository<int, Page> pageRepository;
    private readonly IUnitOfWork unitOfWork;

    public PageService Pages { get; private set; }

    public DataService(IKeyedRepository<int, Page> pageRepository,
                       IUnitOfWork unitOfWork)
    {
        this.pageRepository = pageRepository;
        this.unitOfWork = unitOfWork;

        Pages = new PageService(pageRepository);
    }

    public void Commit()
    {
        unitOfWork.Commit();
    }
}
public class PageService
{
    private readonly IKeyedRepository<int, Page> _pageRepository;
    private readonly PageValidator _pageValidation;

    public PageService(IKeyedRepository<int, Page> pageRepository)
    {
        _pageRepository = pageRepository;
        _pageValidation = new PageValidator(pageRepository);
    }

    public IList<Page> All()
    {
        return _pageRepository.All().ToList();
    }

    public Page FindBy(int id)
    {
        return _pageRepository.FindBy(id);
    }
}
Your post does not say in which scope the UoWs are created.
If it is transient, it won't be disposed at all; that is up to you.
In the case of InRequestScope it will be disposed after the GC has collected the HttpContext. But as I told Bob recently on the Ninject mailing list, it is possible to release all objects in the EndRequest event handler of the HttpApplication. I'll add support for this in the next release of Ninject.
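If the unit of work is not already bound per request, a per-request binding would look roughly like this (a sketch: the module name and the CreateSessionFactory helper are mine, and InRequestScope comes from the Ninject web extensions):
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;
using Ninject.Modules;
using Ninject.Web.Common;

public class DataModule : NinjectModule
{
    public override void Load()
    {
        // One session factory for the lifetime of the application.
        Bind<ISessionFactory>()
            .ToMethod(ctx => CreateSessionFactory())
            .InSingletonScope();

        // One unit of work (session + transaction) per web request.
        Bind<IUnitOfWork>()
            .To<UnitOfWork>()
            .InRequestScope();
    }

    // Hypothetical stand-in for your Fluent NHibernate bootstrap.
    private static ISessionFactory CreateSessionFactory()
    {
        return Fluently.Configure()
            .Database(MySQLConfiguration.Standard.ConnectionString("..."))
            .BuildSessionFactory();
    }
}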
I did some investigation into the root cause of this problem. Here is a bit more information and possible solutions:
http://blog.bobcravens.com/2010/11/using-ninject-to-manage-critical-resources/
Enjoy.
Ninject makes no guarantees about when and where your IDisposables will be Disposed.
Read this post from the original Ninject man
I'd also suggest having a look around here; this has come up for various persistence mechanisms and various containers. The key thing is that you need to take control and know when you're hooking in the UoW's commit/rollback/dispose semantics, and not leave it to chance or coincidence (though convention is great).
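To make that concrete, here is a sketch (mine, not from the posts above) of taking explicit control in an MVC action using the IUnitOfWork from the question: commit on success, roll back on failure, and always dispose so the NHibernate session, and with it the MySQL connection, is released deterministically.
public class PagesController : Controller
{
    private readonly IUnitOfWork unitOfWork;
    private readonly PageService pages;

    public PagesController(IUnitOfWork unitOfWork, PageService pages)
    {
        this.unitOfWork = unitOfWork;
        this.pages = pages;
    }

    public ActionResult Index()
    {
        try
        {
            // All() materialises the list, so nothing lazy-loads after disposal.
            var model = pages.All();
            unitOfWork.Commit();
            return View(model);
        }
        catch
        {
            unitOfWork.Rollback();
            throw;
        }
        finally
        {
            // Commit() already disposes in the posted implementation;
            // Dispose() is safe to call again because it checks for an open session.
            unitOfWork.Dispose();
        }
    }
}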