Ordered starting and waiting for containers - JUnit

I have two containers in my tests using @Testcontainers with JUnit 5: a Kafka and a KafkaConnect.
@Container
private final KafkaContainer kafka = new KafkaContainer()
        .withNetwork(network)
        .withNetworkAliases("kafka");
@Container
private final GenericContainer kafkaConnect =
        new GenericContainer("confluentinc/cp-kafka-connect:latest")
                .withEnv("CONNECT_BOOTSTRAP_SERVERS", "kafka:9092")
                .withEnv("CONNECT_REST_PORT", "8083") // withEnv expects String values
                .withNetwork(network)
                ...
When I execute the tests, I get an error because the Kafka Connect service in kafkaConnect is not started correctly (mapped port 8083 is not listening). This is because kafkaConnect is started before kafka, so when the kafka:9092 URL is reached during kafkaConnect's startup, no response is obtained since kafka is not running yet.
Then, I tried to postpone the kafkaConnect start-up in order to wait for kafka and to ensure kafka:9092 is available.
I tried different approaches to do this, but none of them fixed the problem.
I tried to add some configurations:
startupTimeout. As far as I know, this config does not postpone the start operation; it just increases the period during which the container is polled for having started.
.withStartupTimeout(Duration.of(240, SECONDS))
I also tried some configurations for waitingFor, such as timeout-based, which, as expected, produces the same result as withStartupTimeout:
.waitingFor(Wait.defaultWaitStrategy().withStartupTimeout(...))
or port-based, which does not solve my issue because it points to the kafkaConnect container, not kafka:
.waitingFor(Wait.forHttp("http://kafka:9092"))
I have also tried to add a number of startup attempts, but that does not solve the issue either, because kafkaConnect is relaunched several times but always before kafka.
As a solution I removed @Container from the kafkaConnect declaration in order to manage its lifecycle manually, and added an explicit start to the test case, see below:
@Test
void test() {
    kafkaConnect.start();
    ...
}
This ensures kafkaConnect is started after kafka. However, I did not find a way to define the order in the container definitions themselves, by means of strategies, policies or something similar, so as to declare dependencies between containers and avoid imperative, manual lifecycle management.
Is it possible?

import org.junit.rules.RuleChain;

// @Container <-- Remove annotation.
private final KafkaContainer kafka = new KafkaContainer()...;

// @Container <-- Remove annotation.
private final GenericContainer kafkaConnect =
        new GenericContainer("confluentinc/cp-kafka-connect:latest")
                .withEnv("CONNECT_BOOTSTRAP_SERVERS", "kafka:9092")
                ...;

// Start the containers in the correct order. Prevents the
// "Mapped port can only be obtained after the container is started"
// error.
@Rule
public final RuleChain chain =
        RuleChain
                // Started first.
                .outerRule(kafka)
                // Started later.
                .around(kafkaConnect);
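For completeness: newer Testcontainers releases (roughly 1.12 and later; worth verifying against the version in use) let you declare this ordering directly on the container definition with dependsOn, combined with a wait strategy on the Connect REST port, which removes the need for manual lifecycle management. A minimal sketch, reusing the field names from the question (image tag and ports as above):

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.containers.wait.strategy.Wait;

@Container
private final KafkaContainer kafka = new KafkaContainer()
        .withNetwork(network)
        .withNetworkAliases("kafka");

@Container
private final GenericContainer<?> kafkaConnect =
        new GenericContainer<>("confluentinc/cp-kafka-connect:latest")
                .withEnv("CONNECT_BOOTSTRAP_SERVERS", "kafka:9092")
                .withEnv("CONNECT_REST_PORT", "8083")
                .withExposedPorts(8083)
                .withNetwork(network)
                // Declares the dependency: kafka is started before kafkaConnect.
                .dependsOn(kafka)
                // Waits until the Connect REST API answers on the mapped port.
                .waitingFor(Wait.forHttp("/connectors").forPort(8083));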


Spring Data JPA - Pessimistic Locking Not Working

Using: Spring Boot 2.3.3, MySQL 5.7 (currently via Testcontainers), JUnit 5
I have a JpaRepository inside a Spring MVC application with a method annotated @Lock(LockModeType.PESSIMISTIC_WRITE) and, while I do see the SELECT ... FOR UPDATE coming up in the resulting SQL, it doesn't seem to do much of anything.
I'll put the code below, but if I try to spin up multiple threads that make the same call, each thread is able to read the same initial value in question and nothing ever seems to block/wait. And my understanding is that any "additionally" called methods that are also @Transactional (from the org.springframework.transaction namespace) are made part of the original transaction.
I can't figure out what I'm doing wrong. Any assistance would be appreciated, even if it means pointing out that my understanding/expectations are flawed.
Repository
public interface AccountDao extends JpaRepository<Account, Long> {
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    public Optional<Account> findById(Long id);
}
Services
Account Service
@Service
public class AccountServiceImpl implements AccountService {
    @Autowired
    private FeeService feeService;

    @Override
    @Transactional // have also tried this with REQUIRES_NEW, but the same results occur
    public void doTransfer(Long senderId, Long recipientId, TransferDto dto) {
        // do some unrelated stuff
        this.feeService.processFees(recipientId);
    }
}
Fee Service
@Service
public class FeeServiceImpl implements FeeService {
    @Autowired
    private AccountDao accountDao;

    @Override
    @Transactional // have also tried removing this
    public void processFees(Long recipientId) {
        // this next line is actually done through another service with a @Transactional
        // annotation, but even without that annotation it still doesn't work
        // (findById returns an Optional here, so it has to be unwrapped)
        Account systemAccount = this.accountDao.findById(recipientId)
                .orElseThrow(() -> new IllegalStateException("account not found"));
        System.out.println("System account value: " + systemAccount.getFunds());
        systemAccount.addToFunds(5);
        System.out.println("Saving system account value: " + systemAccount.getFunds());
    }
}
Test
public class TheTest {
    // starts a @SpringBootTest with webEnvironment = WebEnvironment.RANDOM_PORT,
    // so it should start up a dedicated servlet container
    // also auto-configures a WebTestClient
    @Test
    @Transactional
    public void testLocking() {
        // inserts a bunch of records to have some users and accounts to test with,
        // done via JPA, hence the need for @Transactional
        // code here to init an ExecutorService and a synchronized list
        // code here to create a series of threads via the ExecutorService that use
        // different user IDs as the sender, but the same ID for the recipient,
        // hence the need for pessimistic locking
    }
}
I can put in the testing code if necessary, but I'm not sure what other details would help.
The resulting output (especially from the System.out.println calls in FeeServiceImpl) shows that the same "system account" value is read across all threads, and the saved value is, therefore, also always the same.
When the application starts up, that value is 0, and all threads read that 0, with no apparent locking or waiting. I can see multiple transactions starting up and committing (I increased the logging level on Hibernate's TransactionImpl), but it doesn't seem to matter.
Hopefully I'm overlooking or doing something dumb, but I can't quite figure out what it is.
Thank you!
Of course, it was something buried that I wasn't expecting.
It turns out my tables had been created using MyISAM instead of InnoDB, which is odd, since that hasn't been the default table engine in MySQL for a long time.
So, here's what I did:
I thought I was using MySQL 8.0. It turns out Testcontainers defaults to an older version (5.7.22 in my case) when the JDBC connection string doesn't explicitly name one. So I fixed that.
This still didn't fix things, as MyISAM was still being used. It turned out this was because I had a legacy dialect setting in my configuration. Updating that to something like MySQL57Dialect corrected it.
This actually also explains the "weird" behaviour I was seeing in my JUnit tests, as values were popping into the DB right away and not rolling back, etc.
I hope this helps someone else in the future!
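For anyone hitting the same default-version surprise: the MySQL version can be pinned explicitly, either on the container or in the Testcontainers JDBC URL. A minimal sketch, assuming the Testcontainers mysql module is on the classpath (the image tag and property values below are illustrative):

import org.testcontainers.containers.MySQLContainer;

// Pin the server version with an explicit image tag instead of relying on
// the Testcontainers default.
MySQLContainer<?> mysql = new MySQLContainer<>("mysql:8.0");

// Alternatively, when using the Testcontainers JDBC driver, put the tag in
// the URL and pick a matching Hibernate dialect, e.g.:
//   spring.datasource.url=jdbc:tc:mysql:8.0:///test
//   spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL8Dialect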

Using @Async inside a transaction in a Spring application

I have a Spring application which updates an entity's details in a MySQL DB using a @Transactional method. Within the same method, I am trying to call, via @Async, an endpoint of another Spring application, which reads the same entity from the MySQL DB and updates the value in Redis storage.
Now the problem is that every time I update some value for the entity, sometimes it's updated in Redis and sometimes it's not.
When I tried to debug, I found that the second application sometimes picks up the old value instead of the updated value when it reads the entity from MySQL.
Can anyone suggest what can be done to avoid this and to make sure that the second application always picks up the updated value of that entity from MySQL?
The answer from M. Deinum is good, but there is still another way to achieve this which may be simpler for your case, depending on the state of your current application.
You could simply wrap the call to the async method in an event that will be processed after your current transaction commits, so you will read the updated entity from the db correctly every time.
It is quite simple to do this, let me show you:
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;

@Transactional
public void doSomething() {
    // application code here

    // this code will still execute async - but only after the
    // outer transaction that surrounds this lambda is completed
    executeAfterTransactionCommits(() -> theOtherServiceWithAsyncMethod.doIt());

    // more business logic here in the same transaction
}

private void executeAfterTransactionCommits(Runnable task) {
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCommit() {
            task.run();
        }
    });
}
Basically what happens here is that we supply an implementation for the current transaction callback, and we only override the afterCommit method - there are other methods there that might be useful, check them out. And to avoid typing the same boilerplate code if you want to use this in other places, or simply to make the method more readable, I extracted that into a helper method.
The solution is not that hard: apparently you want to trigger an update after the data has been written to the database. A @Transactional method only commits after it has finished executing. If an @Async method is called at the end of that method, then depending on the duration of the commit (or the actual REST call), the transaction might have committed or not.
As something outside of your transaction can only see committed data, it might see the updated value (if already committed) or still the old one. This also depends on the isolation level of your transaction, but you generally don't want to use an exclusive lock on the database for performance reasons.
To fix this, the @Async method should not be called from inside the @Transactional method but right after it. That way the data is always committed and the other service will see the updated data.
@Service
public class WrapperService {

    private final TransactionalEntityService service1;
    private final AsyncService service2;

    public WrapperService(TransactionalEntityService service1, AsyncService service2) {
        this.service1 = service1;
        this.service2 = service2;
    }

    public void updateAndSyncEntity(Entity entity) {
        service1.update(entity); // Update in DB first
        service2.sync(entity);   // After commit trigger a sync with remote system
    }
}
This service is non-transactional, and as such service1.update, which presumably is @Transactional, will update the database. When that is done you can trigger the external sync.
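Another option in the same spirit, assuming Spring 4.2 or later, is to publish an application event from the transactional method and handle it with @TransactionalEventListener, which by default fires only after a successful commit. A minimal sketch; the event and listener types here are made up for illustration:

import org.springframework.stereotype.Component;
import org.springframework.transaction.event.TransactionalEventListener;

// Hypothetical event carrying the id of the updated entity.
public class EntityUpdatedEvent {
    private final Long entityId;

    public EntityUpdatedEvent(Long entityId) {
        this.entityId = entityId;
    }

    public Long getEntityId() {
        return entityId;
    }
}

@Component
public class EntityUpdatedListener {

    // Runs only after the surrounding transaction has committed
    // (TransactionPhase.AFTER_COMMIT is the default phase).
    @TransactionalEventListener
    public void onEntityUpdated(EntityUpdatedEvent event) {
        // Safe to trigger the async sync here: the data is committed.
    }
}

Inside the @Transactional method you would then call publisher.publishEvent(new EntityUpdatedEvent(id)) with an injected ApplicationEventPublisher.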

How to use MySQL JDBC in an Akka Actor

Hey, I read these JDBC docs:
https://www.playframework.com/documentation/2.1.0/ScalaDatabase
and this question:
Is it good to put jdbc operations in actors?
Now I have an actor class for my MySQL transactions, and this actor is instantiated several times, whenever a request comes in. So each request instantiates a new actor. Is that safe for the connection pool?
Can I use
val connection = DB.getConnection()
Can the connection object handle async transactions?
Could I just use a singleton to handle the MySQL connection and use it in all instantiated actors? Also, if I want to use Anorm, how do I create an implicit connection object?
Thanks
Your DB.getConnection() should be a Promise[Connection] or a Future[Connection] if you don't want to block the actor (caveats at the end of the answer).
If your DB.getConnection() is synchronous (returning a plain Connection without a wrapping type), your actor will hang while processing the actual message until it gets a connection from the pool. It doesn't matter whether your DB is a singleton or not; in the end it will hit the connection pool.
That being said, you can create actors to handle the messaging and other actors to handle persistence in the database, and put them on different thread dispatchers, giving more threads to the database-intensive ones. This is also suggested in the Play Framework.
Caveats:
If you run futures inside the actor you are not sure of the thread/timing they will run on. I'm assuming you did something along the lines of this (read the comments):
def receive = {
  case aMessage =>
    val aFuture = future(db.getConnection)
    aFuture.map { theConn =>
      // Between acquiring the conn on the previous line and executing the
      // next one a long time may pass, and they may run on different
      // threads - that's why you should rather create an actor that handles
      // this synchronously and let akka do the async part.
      theConn.prepareStatement(someSQL)
      // omitted code...
    }
}
So my suggestion would be:
// Actor A receives,
// actor B processes the db calls (and has multiple instances due to the
// slowness of the db).
class ActorA(routerOfB: ActorRef) extends Actor {
  def receive = {
    case aMessage =>
      routerOfB ! aMessage
  }
}

class ActorB(db: DB) extends Actor {
  def receive = {
    case aMessage =>
      val conn = db.getConnection // this blocks, but we have multiple instances
                                  // and it is enforced to run in the same thread
      val ps = conn.prepareStatement(someSQL)
  }
}
You will need routing: http://doc.akka.io/docs/akka/2.4.1/scala/routing.html
As far as I know, you can't run multiple concurrent queries on a single connection in an RDBMS (I've not seen any resource/reference for async/non-blocking calls for MySQL, even in the C API). To run your queries concurrently, you must have multiple connection instances.
DB.getConnection isn't expensive while you have multiple connection instances. The most expensive part of working with a DB is running the SQL query and waiting for its response.
To make your DB calls async, you should run them on other threads (not in the main thread pool of Akka or Play). Slick does this for you: it manages a thread pool and runs your DB calls on it, so your main threads stay free for processing incoming requests. Then you don't need to wrap your DB calls in actors to be async.
I think you should take a connection from the pool and return it when you are done. If we go with a single connection per actor, what happens if that connection gets disconnected? You may need to reinitialize it.
For transactions you may want to try:
DB.withTransaction { conn => /* do whatever you need with the connection */ }
For a more functional way of database access, I would recommend looking into Slick, which has a nice API that can be integrated with plain actors and, going further, with streams.

Why doesn't EclipseLink's auto commit work with MySQL?

Using the following code:
EntityManager manager = factory.createEntityManager();
manager.setFlushMode(FlushModeType.AUTO);
PhysicalCard card = new PhysicalCard();
card.setIdentifier("012345ABCDEF");
card.setStatus(CardStatusEnum.Assigned);
manager.persist(card);
manager.close();
when the code runs past this line, the "card" record does not appear in the database. However, if I use FlushModeType.COMMIT and a transaction, like this:
EntityManager manager = factory.createEntityManager();
manager.setFlushMode(FlushModeType.COMMIT);
manager.getTransaction().begin();
PhysicalCard card = new PhysicalCard();
card.setIdentifier("012345ABCDEF");
card.setStatus(CardStatusEnum.Assigned);
manager.persist(card);
manager.getTransaction().commit();
manager.close();
it works fine. From EclipseLink's log I can see that the first snippet doesn't issue an INSERT statement while the second one does.
Am I missing something here? I'm using EclipseLink 2.3 and MySQL Connector/J 5.1.
I am assuming that you are using EclipseLink in a Java SE application, or in a Java EE application but with an application managed EntityManager instead of a container managed EntityManager.
In both scenarios, all updates made to the persistence context are flushed only when the transaction associated with the EntityManager commits (using EntityTransaction.commit), or when the EntityManager's persistence context is flushed (using EntityManager.flush). This is the reason why the second code snippet issues the INSERT as it invokes the EntityTransaction's begin and commit methods, while the first doesn't; an invocation of em.persist does not issue an INSERT.
As far as FlushModeType values are concerned, the API documentation states the following:
COMMIT
public static final FlushModeType COMMIT
Flushing to occur at transaction commit. The provider may flush at
other times, but is not required to.
AUTO
public static final FlushModeType AUTO
(Default) Flushing to occur at query execution.
Since no queries are executed in the first case, no flushing occurs, i.e. no INSERT statements corresponding to the persistence of the PhysicalCard entity are issued. It is the explicit commit of the EntityTransaction in the second snippet that results in the INSERT statement being issued.
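Put differently, with an application-managed EntityManager the first snippet needs a transaction even with FlushModeType.AUTO. A minimal sketch along the lines of the answer, reusing the names from the question:

EntityManager manager = factory.createEntityManager();
manager.setFlushMode(FlushModeType.AUTO);
manager.getTransaction().begin();
PhysicalCard card = new PhysicalCard();
card.setIdentifier("012345ABCDEF");
card.setStatus(CardStatusEnum.Assigned);
manager.persist(card);
// AUTO only forces a flush before query execution; since no query runs here,
// the INSERT is issued when the transaction commits.
manager.getTransaction().commit();
manager.close();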

org.hibernate.StaleObjectStateException when using Grails with PostgreSQL

I've written a Grails service with the following code:
EPCGenerationMetadata requestEPCs(String indicatorDigit, FilterValue filterValue,
        PartitionValue partitionValue, String companyPrefix, String itemReference,
        Long quantity) throws IllegalArgumentException, IllegalStateException {
    //... code
    //problematic snippet below
    def serialGenerator
    synchronized(this) {
        log.debug "Generating epcs..."
        serialGenerator = SerialGenerator.findByItemReference(itemReference)
        if(!serialGenerator) {
            serialGenerator = new SerialGenerator(itemReference: itemReference, serialNumber: 0l)
        }
        startingPoint = serialGenerator.serialNumber + 1
        serialGenerator.serialNumber += quantity
        serialGenerator.save(flush: true)
    }
    //code continues...
}
Since a Grails service is a singleton by default, I thought I'd be safe from concurrent inconsistency by adding the synchronized block above. I created a simple client for testing concurrency, as the service is exposed via HTTP invoker. I ran multiple clients at the same time, passing the same itemReference as an argument, and had no problems at all.
However, when I changed the database from MySQL to PostgreSQL 8.4, I couldn't handle concurrent access anymore. When running a single client, everything is fine. However, if I add one more client asking for the same itemReference, I instantly get a StaleObjectStateException:
Exception in thread "main" org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException: Object of class [br.com.app.epcserver.SerialGenerator] with identifier [10]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [br.com.app.epcserver.SerialGenerator#10]
at org.springframework.orm.hibernate3.SessionFactoryUtils.convertHibernateAccessException(SessionFactoryUtils.java:672)
at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:412)
at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:411)
at org.springframework.orm.hibernate3.HibernateTemplate.executeWithNativeSession(HibernateTemplate.java:374)
at org.springframework.orm.hibernate3.HibernateTemplate.flush(HibernateTemplate.java:881)
at org.codehaus.groovy.grails.orm.hibernate.metaclass.SavePersistentMethod$1.doInHibernate(SavePersistentMethod.java:58)
(...)
at br.com.app.EPCGeneratorService.requestEPCs(EPCGeneratorService.groovy:63)
at br.com.app.epcclient.IEPCGenerator$requestEPCs.callCurrent(Unknown Source)
at br.com.app.epcserver.EPCGeneratorService.requestEPCs(EPCGeneratorService.groovy:29)
at br.com.app.epcserver.EPCGeneratorService$$FastClassByCGLIB$$15a2adc2.invoke()
(...)
Caused by: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [br.com.app.epcserver.SerialGenerator#10]
Note: EPCGeneratorService.groovy:63 refers to serialGenerator.save(flush: true).
I don't know what to think, as the only thing that I've changed was the database. I'd appreciate any advice on the matter.
I'm using:
Grails 1.3.3
Postgres 8.4 (postgresql-8.4-702.jdbc4 driver)
JBoss 6.0.0-M4
MySQL:
mysqld Ver 5.1.41 (mysql-connector-java-5.1.13-bin driver)
Thanks in advance!
That's weird; try disabling transactions.
This is indeed strange behavior, but you could try to work around it by using a "select ... for update", via Hibernate's lock method.
Something like this:
def c = SerialGenerator.createCriteria()
serialGenerator = c.get {
    eq "itemReference", itemReference
    lock true
}