Too many connections using Hibernate and MySQL

I am using Hibernate 3 and MySQL Server 5.5 for my web application with Spring 3.0.
I am getting the exception "Too many connections".
My Java file where I create the session is as follows:
public class DBConnection {
    static {
    }

    public Session getSession() {
        Session session = null;
        SessionFactory sessionFactory = null;
        sessionFactory = new Configuration().configure().buildSessionFactory();
        session = sessionFactory.openSession();
        return session;
    }
}
and I call this method wherever I need a session, like so:
Session session = new DBConnection().getSession();
and after
transaction.commit();
I close the session using
session.close();
Please help me solve this problem.
My hibernate.cfg.xml is:
<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD//EN"
    "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
  <session-factory>
    <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
    <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/dbname</property>
    <property name="hibernate.connection.username">root</property>
    <property name="hibernate.connection.password">lax</property>
    <property name="hibernate.connection.pool_size">100</property>
    <property name="show_sql">true</property>
    <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
    <property name="hibernate.hbm2ddl.auto">update</property>
    <property name="hibernate.connection.release_mode">on_close</property>
  </session-factory>
</hibernate-configuration>

This is because you are using a connection pool that is created as soon as you build the SessionFactory, while connections are acquired only when you open a Session. When you close the session, its connection is released back to the pool, but it is not closed: it is held by the pool. You then create another SessionFactory, hence another pool, then open a session, hence a new connection, and so on, until you eventually reach the maximum number of connections the server allows.
What you have to do is use one connection pool (one SessionFactory) and acquire and release connections from that same pool:
public class DBConnection {
    private static SessionFactory factory;

    static {
        factory = new Configuration().configure().buildSessionFactory();
    }

    public Session getSession() {
        return factory.openSession();
    }

    public void doWork() {
        Session session = getSession();
        // do work.
        session.close();
    }

    // Call this during shutdown
    public static void close() {
        factory.close();
    }
}

You create a new SessionFactory every time you need a Session and never close it.
Usually you need to create the session factory only once during startup of your application, and close it during shutdown. For example:
public class DBConnection {
    private static SessionFactory factory;

    static {
        factory = new Configuration().configure().buildSessionFactory();
    }

    public Session getSession() {
        return factory.openSession();
    }

    // Call this during shutdown
    public static void close() {
        factory.close();
    }
}
Also take a look at the contextual sessions pattern.
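For illustration, a minimal sketch of the contextual-sessions approach (this assumes hibernate.cfg.xml also sets <property name="hibernate.current_session_context_class">thread</property>; factory is the single SessionFactory from above):
Session session = factory.getCurrentSession(); // thread-bound; no openSession()/close() pair needed
session.beginTransaction();
// ... do work ...
session.getTransaction().commit(); // flushes, commits and closes the thread-bound session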

I have used
Session session = SessionFactoryUtils.getSession(this.sessionFactory, true);
to get a session. Above the DAO layer, transactions are handled with annotations (@Transactional) in add, edit and delete. Our application uses Shiro for authentication and authorization. Each page access creates at least 5 connections, so subsequent requests keep increasing the MySQL connection count. Thanks to @Venu I found out that
show variables like "max_connections";
shows the default maximum number of connections available. Initially (I guess) it was set to 151. This number would soon be exhausted because another variable, "innodb_open_files", was set to 300. Intuitively I would say that once one table reaches the maximum of 300, cleanup kicks in (much like garbage collection in Java).
One might think that increasing "max_connections" to 300 would solve this error, but as I said earlier, "innodb_open_files" does not trigger cleanup until one table reaches that limit. In our case the user table, which is called frequently, would have to reach that limit before cleanup happens. This means you cannot know beforehand what the perfect number for "max_connections" is.
In the MySQL configuration file "/etc/mysql/my.cnf", max_connections is set as
max_connections = 1000
I did not change the value of "innodb_open_files", which defaults to 300:
mysql> show variables like "innodb_open_files";
During execution of your program you can use
mysql> show processlist;
to find out how many connections are open. I guess all the processes in Sleep state can be deleted in the next cycle of cleanup (but that is only my guess).
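If you want to watch the count change during a load test, a small JDBC sketch like the following can poll the server's Threads_connected status variable (URL and credentials are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectionMonitor {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/dbname", "root", "secret");
             Statement st = con.createStatement()) {
            while (true) {
                try (ResultSet rs = st.executeQuery("SHOW STATUS LIKE 'Threads_connected'")) {
                    if (rs.next()) {
                        System.out.println("Open connections: " + rs.getString(2));
                    }
                }
                Thread.sleep(1000L); // poll once per second
            }
        }
    }
}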
To simulate more than 1000 users executing some URL in your web project, you can use software like JMeter with the number of threads (users) set to 1000. How many users you can simulate also depends on the available RAM in your computer. I was able to simulate more than 3000 connections sending requests at 1-second intervals. A good and sufficient tutorial on how to do this is listed on roseindia.
Now my question. I have not closed the session anywhere. I guess @Transactional was created for that purpose (MySQL confirms that, because all those connections get cleaned up once the limit is reached). Due to a merge problem with detached objects, I had to use OpenSessionInViewFilter. OpenSessionInViewFilter (or OpenSessionInViewInterceptor) keeps the session open a little longer so the merge can happen, which might be the reason so many connections remain open in MySQL (I don't know for sure). Since it works for so many users it seems a good solution, but is there a better solution that works under the default MySQL max connection size, i.e. 151 connections?

I was facing the same issues with Hibernate MySQL connections.
The following two things helped fix my connection problems:
import javax.inject.Singleton;

@Singleton
public class DBConnection {
    private static SessionFactory factory;

    public static Session getSession() {
        if (factory == null) {
            System.out.println("--------New connection----");
            factory = new Configuration().configure("hibernate.cfg.xml")
                    .addAnnotatedClass(SomeClass.class)
                    .buildSessionFactory();
        }
        //System.out.println("--------old connection----");
        return factory.openSession();
    }

    // Call this during shutdown
    public static void close() {
        factory.close();
    }
}
Create a singleton class for getting a DBConnection.
Do not forget to call session.close() when your query is complete.
Also, in "hibernate.cfg.xml", add this line under <session-factory>:
<hibernate-configuration>
  <session-factory>
    ....
    <property name="connection.pool_size">100</property>
    ....
  </session-factory>
</hibernate-configuration>
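As a usage sketch (hypothetical caller code), closing the session in a finally block guarantees its connection goes back to the pool even if the query throws:
Session session = DBConnection.getSession();
try {
    session.beginTransaction();
    // ... queries ...
    session.getTransaction().commit();
} finally {
    session.close(); // returns the connection to the pool
}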

JAVA ISSUE
Please check this answer: https://stackoverflow.com/a/10785770/598424
MYSQL ISSUE
You need to log in as a superuser, then execute:
SET GLOBAL max_connections = 200;
However, this only lasts until the MySQL server restarts. If you want to change it permanently, add the following line under the [mysqld] section of your configuration file:
max_connections = 500
Alternatively:
If you have too many processes lingering around on the database server, you may reach the limit, and no new ones will be allowed to start.
To see if this is the case, take a look at your processes from the MySQL Monitor prompt by typing:
show processlist;
Then you will see the lingering processes. Each has an ID, and you can kill them by process ID right in the MySQL Monitor by typing:
kill 2309344;
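If many of those lingering connections are in the Sleep state, a small JDBC sketch like this can automate the cleanup (run it as a privileged user; URL, credentials and the 300-second threshold are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class KillSleepingConnections {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/", "root", "secret");
             Statement list = con.createStatement();
             Statement kill = con.createStatement();
             ResultSet rs = list.executeQuery("SHOW PROCESSLIST")) {
            while (rs.next()) {
                // kill connections that have been idle for more than 5 minutes
                if ("Sleep".equals(rs.getString("Command")) && rs.getInt("Time") > 300) {
                    kill.execute("KILL " + rs.getLong("Id"));
                }
            }
        }
    }
}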

Related

Spring Data JPA Too many connections

My colleagues and I are working on a Spring Boot project; we work simultaneously and we all connect to the same MySQL database.
The trouble is that after a while some of us can no longer connect, the error being "Too many connections". I've spoken to the DB administrator and he raised the number of max connections to 500 (it was something like 150 before), but we still get the error. How can I fix this? The only configuration properties we use are these:
spring.datasource.url= ...
spring.datasource.username= ...
spring.datasource.password= ...
spring.datasource.hikari.maximum-pool-size=25
Maybe JPA opens a new connection every time it runs a query but doesn't close it? I don't know, I'm clueless here.
EDIT:
I've been asked to show some code regarding the interactions with the database, so here it is:
@Autowired
private EmployeeDAO employeeDAO;
@Autowired
private LogDAO logDAO;
@Autowired
private ContrattoLavoroDAO contrattoLavoroDAO;

@Override
public void deleteEmployeeById(Long idEmployee, String username) {
    contrattoLavoroDAO.deleteContrattoByEmpId(idEmployee);
    employeeDAO.deleteById(idEmployee);
    LogEntity log = new LogEntity();
    LocalDateTime date = LocalDateTime.now();
    log.setData(date);
    log.setUser(username);
    log.setCrud("delete");
    log.setTabella("Employees");
    log.setDescrizione("l'utente " + username + " ha rimosso il dipendente con matricola " + idEmployee);
    logDAO.save(log);
}
and here's the model for a DAO:
public interface ContrattoLavoroDAO extends JpaRepository<ContrattoLavoroEntity, Long> {
    @Modifying
    @Query(value = "DELETE contratto_lavoro, employee_contratto FROM contratto_lavoro" + " INNER JOIN"
            + " employee_contratto ON employee_contratto.id_contratto = contratto_lavoro.id_contratto" + " WHERE"
            + " contratto_lavoro.id_contratto = ?1", nativeQuery = true)
    public void deleteContrattoByEmpId(Long empId);
}
You set the Hikari maximum pool size to 25, which is pretty high for a development environment.
But this shouldn't be a problem, because it's only the maximum, right?
Well, Hikari's documentation says:
minimumIdle
This property controls the minimum number of idle connections that HikariCP tries to maintain in the pool. If the idle connections dip below this value and total connections in the pool are less than maximumPoolSize, HikariCP will make a best effort to add additional connections quickly and efficiently. However, for maximum performance and responsiveness to spike demands, we recommend not setting this value and instead allowing HikariCP to act as a fixed size connection pool. Default: same as maximumPoolSize
maximumPoolSize
This property controls the maximum size that the pool is allowed to reach, including both idle and in-use connections. Basically this value will determine the maximum number of actual connections to the database backend. A reasonable value for this is best determined by your execution environment. When the pool reaches this size, and no idle connections are available, calls to getConnection() will block for up to connectionTimeout milliseconds before timing out. Please read about pool sizing. Default: 10
If I'm reading this correctly, it means each developer on your team opens 25 connections when they start the application, and another 25 when they start an integration test which starts a new application context. If the tests in the suite have different configurations, each set of configurations will have its own set of 25 connections in use.
The quick solution is to reduce the maximumPoolSize significantly for your development environment. I'd recommend 2. This is enough to allow for one normal transaction and one background process.
It will throw exceptions if the application requires more connections, which is probably a good thing since in most cases it shouldn't.
Beyond that, you might want to set minimumIdle to 1 or 0, so an application that is only running because no one shut it down yet doesn't consume shared resources; see the snippet below.
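Concretely, that would look like this in application.properties (the keys are the standard Spring Boot Hikari properties; values as recommended above):
spring.datasource.hikari.maximum-pool-size=2
spring.datasource.hikari.minimum-idle=0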
Mid term, you probably want to get rid of the central development database altogether.
With the availability of Testcontainers, there really isn't a reason anymore not to have a local database for each developer.
As a nice side effect it will ensure that all the schema update scripts work properly.
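For example, a minimal Testcontainers sketch that spins up a throwaway MySQL per developer (this assumes the org.testcontainers:mysql dependency and a local Docker daemon; class and image names are illustrative):
import org.testcontainers.containers.MySQLContainer;

public class LocalDevDb {
    public static void main(String[] args) {
        try (MySQLContainer<?> mysql = new MySQLContainer<>("mysql:8.0")) {
            mysql.start();
            // point spring.datasource.* at these values
            System.out.println(mysql.getJdbcUrl());
            System.out.println(mysql.getUsername());
            System.out.println(mysql.getPassword());
        }
    }
}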

Spring Data JPA - Pessimistic Locking Not Working

Using: Spring Boot 2.3.3, MySQL 5.7 (currently via Testcontainers), JUnit 5
I have a JpaRepository inside a Spring MVC application with a method annotated @Lock(LockModeType.PESSIMISTIC_WRITE) and, while I do see the SELECT ... FOR UPDATE coming up in the resulting SQL, it doesn't seem to do much of anything.
I'll put the code below, but if I try to spin up multiple threads that make the same call, each thread is able to read the same initial value in question and nothing ever seems to block or wait. My understanding is that any "additionally" called methods that are also @Transactional (from the org.springframework.transaction namespace) are made part of the original transaction.
I can't figure out what I'm doing wrong. Any assistance would be appreciated, even if it means pointing out that my understanding/expectations are flawed.
Repository
public interface AccountDao extends JpaRepository<Account, Long> {
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    public Optional<Account> findById(Long id);
}
Services
Account Service
@Service
public class AccountServiceImpl implements AccountService {
    @Autowired
    private FeeService feeService;

    @Override
    @Transactional // have also tried this with REQUIRES_NEW, but the same results occur
    public void doTransfer(Long senderId, Long recipientId, TransferDto dto) {
        // do some unrelated stuff
        this.feeService.processFees(recipientId);
    }
}
Fee Service
@Service
public class FeeServiceImpl implements FeeService {
    @Autowired
    private AccountDao accountDao;

    @Override
    @Transactional // have also tried removing this
    public void processFees(Long recipientId) {
        // this next line is actually done through another service with a @Transactional annotation,
        // but even without that annotation it still doesn't work
        Account systemAccount = this.accountDao.findById(recipientId).get();
        System.out.println("System account value: " + systemAccount.getFunds());
        systemAccount.addToFunds(5);
        System.out.println("Saving system account value: " + systemAccount.getFunds());
    }
}
Test
public class TheTest {
    // starts a @SpringBootTest with webEnvironment = WebEnvironment.RANDOM_PORT,
    // so it should start up a dedicated servlet container
    // also auto-configures a WebTestClient

    @Test
    @Transactional
    public void testLocking() {
        // inserts a bunch of records to have some users and accounts to test with,
        // via JPA, hence the need for @Transactional
        // code here to init an ExecutorService and a synchronized list
        // code here to create a series of threads via the ExecutorService that use different
        // user IDs as the sender, but the same ID for the recipient, hence the need for pessimistic locking
    }
}
I can put in the testing code if necessary, but I'm not sure what other details are needed.
The resulting output (especially from the System.out.println calls in FeeServiceImpl) shows that the same "system account" value is read across all threads, and the saved value is therefore also always the same.
When the application starts up, that value is 0, and all threads read that 0, with no apparent locking or waiting. I can see multiple transactions starting up and committing (I increased the logging level on Hibernate's TransactionImpl), but it doesn't seem to matter.
Hopefully I'm overlooking or doing something dumb, but, I can't quite figure out what it is.
Thank you!
Of course, it was something buried that I wasn't expecting.
It turns out my tables had been created using MyISAM instead of InnoDB, which is odd, since MyISAM hasn't been the default table engine in MySQL for a long time.
So, here's what I did:
I thought I was using MySQL 8.0. It turns out Testcontainers falls back to a default version (5.7.22 in my case) when the JDBC connection string doesn't specifically name one. So I fixed that.
This still didn't fix things, as MyISAM was still being used. It turned out this was because I had a legacy dialect setting in my configuration. Updating it to something like MySQL57Dialect corrected that.
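For reference, a minimal sketch of those two changes in Spring Boot property form (the jdbc:tc: URL is the Testcontainers JDBC shortcut; the exact version tag and dialect class should match whatever you actually run):
spring.datasource.url=jdbc:tc:mysql:8.0:///test
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL8Dialect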
This actually also explains the "weird" behaviour I was seeing in my JUnit tests, as values were popping into the DB right away and not rolling back, etc.
I hope this helps someone else in the future!

org.hibernate.AssertionFailure: null id in entry (don't flush the Session after an exception occurs)

I have a Hibernate and JSF2 application going to the deployment server and suddenly throwing an org.hibernate.AssertionFailure: null id exception. I will provide the stack trace and code immediately, but here are four important points first:
This happens only on the deployment server (JBoss & MySQL running on Windows Server 2008). It does not happen on my development machine (Tomcat and MySQL running on Windows 7 Pro), nor in the staging environment (JBoss and MySQL running on Linux).
Researching this, it seems that people get this error when trying to insert an object. But I get the error when I'm doing a simple query. (Various different queries, actually, as the error pops up on several pages randomly.)
The error hits only every now and then. If I restart JBoss it goes away, but returns some time later. Also, it's not consistent: on some clicks it's there, on others it's not. Even when it hits, a simple refresh of the page comes back fine.
I'm using c3p0 (config below)
Any idea what's going on?
The code details:
This happens on an address object. Here's the full hbm:
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
"-//Hibernate/Hibernate Mapping DTD 3.0//EN"
"http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping package="com.idex.auctions.model">
<class name="Address" table="address" lazy="true">
<id name="addressID" column="AddressID">
<generator class="native"/>
</id>
<property name="street" column="street"/>
<property name="city" column="city"/>
<property name="zip" column="zip"/>
<property name="state" column="state"/>
<property name="region" column="region"/>
<property name="country" column="country"/>
<many-to-one name="user"
class="com.idex.auctions.model.User"
column="userid"
unique="true"
cascade="save-update"/>
</class>
</hibernate-mapping>
The Java class is straightforward:
public class Address implements Serializable {
    private static final long serialVersionUID = 7485582614444496906L;

    private long addressID;
    private String street;
    private String city;
    private String zip;
    private String state;
    private String region;
    private String country;
    private User user;

    public Address() {
    }

    public long getAddressID() {
        return addressID;
    }

    public void setAddressID(long addressID) {
        this.addressID = addressID;
    }

    public String getStreet() {
        return street;
    }

    public void setStreet(String street) {
        this.street = street;
    }

    public String getCity() {
        return city;
    }

    public void setCity(String city) {
        this.city = city;
    }

    public String getZip() {
        return zip;
    }

    public void setZip(String zip) {
        this.zip = zip;
    }

    public String getState() {
        return state;
    }

    public void setState(String state) {
        this.state = state;
    }

    public String getRegion() {
        return region;
    }

    public void setRegion(String region) {
        this.region = region;
    }

    public String getCountry() {
        return country;
    }

    public void setCountry(String country) {
        this.country = country;
    }

    public User getUser() {
        return user;
    }

    public void setUser(User user) {
        this.user = user;
    }
}
The c3p0 configuration:
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.idle_test_period">1000</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.timeout">1800</property>
<property name="hibernate.c3p0.max_statements">0</property>
<property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
The versions used are
hibernate3.jar
c3p0-0.9.1.2.jar
myfaces-api-2.1.4.jar
myfaces-impl-2.1.4.jar
mysql-connector-java-5.1.20-bin.jar
The full stack trace:
org.hibernate.AssertionFailure: null id in com.idex.auctions.model.Address entry
(don't flush the Session after an exception occurs)
org.hibernate.event.def.DefaultFlushEntityEventListener.checkId(
DefaultFlushEntityEventListener.java:78)
org.hibernate.event.def.DefaultFlushEntityEventListener.getValues(
DefaultFlushEntityEventListener.java:187)
org.hibernate.event.def.DefaultFlushEntityEventListener.onFlushEntity(
DefaultFlushEntityEventListener.java:143)
org.hibernate.event.def.AbstractFlushingEventListener.flushEntities(
AbstractFlushingEventListener.java:219)
org.hibernate.event.def.AbstractFlushingEventListener.flushEverythingToExecutions(
AbstractFlushingEventListener.java:99)
org.hibernate.event.def.DefaultAutoFlushEventListener.onAutoFlush(
DefaultAutoFlushEventListener.java:58)
org.hibernate.impl.SessionImpl.autoFlushIfRequired(SessionImpl.java:997)
org.hibernate.impl.SessionImpl.list(SessionImpl.java:1142)
org.hibernate.impl.QueryImpl.list(QueryImpl.java:102)
com.idex.auctions.manager.DatabaseManager.getAllObjects(DatabaseManager.java:464)
com.idex.auctions.ui.NavBean.gotoHome(NavBean.java:40)
sun.reflect.GeneratedMethodAccessor350.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
java.lang.reflect.Method.invoke(Unknown Source)
javax.el.BeanELResolver.invokeMethod(BeanELResolver.java:735)
javax.el.BeanELResolver.invoke(BeanELResolver.java:467)
javax.el.CompositeELResolver.invoke(CompositeELResolver.java:246)
org.apache.el.parser.AstValue.getValue(AstValue.java:159)
org.apache.el.ValueExpressionImpl.getValue(ValueExpressionImpl.java:189)
org.apache.myfaces.view.facelets.el.ContextAwareTagValueExpression.getValue(
ContextAwareTagValueExpression.java:96)
javax.faces.component._DeltaStateHelper.eval(_DeltaStateHelper.java:246)
javax.faces.component.UIOutcomeTarget.getOutcome(UIOutcomeTarget.java:50)
org.apache.myfaces.shared.renderkit.html.HtmlRendererUtils.getOutcomeTargetHref(
HtmlRendererUtils.java:1542)
org.apache.myfaces.shared.renderkit.html.HtmlLinkRendererBase.renderOutcomeLinkStart(
HtmlLinkRendererBase.java:908)
org.apache.myfaces.shared.renderkit.html.HtmlLinkRendererBase.encodeBegin(
HtmlLinkRendererBase.java:143)
javax.faces.component.UIComponentBase.encodeBegin(UIComponentBase.java:502)
javax.faces.component.UIComponent.encodeAll(UIComponent.java:744)
javax.faces.component.UIComponent.encodeAll(UIComponent.java:758)
javax.faces.component.UIComponent.encodeAll(UIComponent.java:758)
org.apache.myfaces.view.facelets.FaceletViewDeclarationLanguage.renderView(
FaceletViewDeclarationLanguage.java:1900)
org.apache.myfaces.application.ViewHandlerImpl.renderView(ViewHandlerImpl.java:285)
com.ocpsoft.pretty.faces.application.PrettyViewHandler.renderView(
PrettyViewHandler.java:163)
javax.faces.application.ViewHandlerWrapper.renderView(ViewHandlerWrapper.java:59)
org.apache.myfaces.tomahawk.application.ResourceViewHandlerWrapper.renderView(
ResourceViewHandlerWrapper.java:93)
com.idex.auctions.ui.CustomViewHandler.renderView(CustomViewHandler.java:98)
org.apache.myfaces.lifecycle.RenderResponseExecutor.execute(RenderResponseExecutor.java:115)
org.apache.myfaces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:241)
javax.faces.webapp.FacesServlet.service(FacesServlet.java:199)
com.ocpsoft.pretty.PrettyFilter.doFilter(PrettyFilter.java:126)
com.ocpsoft.pretty.PrettyFilter.doFilter(PrettyFilter.java:118)
The exception:
org.hibernate.AssertionFailure: null id in entry (don't flush the Session after an exception occurs)
tells us that an exception occurred in the session before the point where the org.hibernate.AssertionFailure is thrown. To be exact, the AssertionFailure is thrown when session.flush() happens, not at the point where the original error occurred.
That is a fact, so a possible conclusion from it is: something could be suppressing the original exception.
So look for other possible points of error: is a save() or saveOrUpdate() perhaps trying to persist an entity with a null field where, in the table, the column is NOT NULL?
TIP:
To help with debugging, try adding session.flush() after every interaction with the Session object (e.g. session.save(obj), session.merge(obj), etc.). This will hopefully cause the org.hibernate.AssertionFailure to happen earlier, closer to where the real problem is taking place. (Of course, after debugging, remove those session.flush() calls.)
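A minimal sketch of that debugging pattern (obj and other names are placeholders; the flushes are strictly temporary):
session.save(obj);
session.flush(); // temporary: forces the INSERT now, so a failure surfaces at this line
session.merge(other);
session.flush(); // temporary: same idea; remove once the failing statement is found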
In my case, the real exception was taking place inside a try/catch block whose catch suppressed the exception (it didn't rethrow or warn me about it).
I would bet on a concurrency issue, but it may occur at different levels:
a Hibernate session may be shared between different users if the classical "open session in view" pattern is not properly implemented
an entity may be shared between two user sessions because of improper Hibernate cache settings
a JDBC connection may be shared between two different Hibernate sessions (less likely)
Apart from these potential sources of trouble, I would remove c3p0 (maybe just rumors...), as your stack already provides a DataSource with connection pooling integrated with the transaction manager.
@asdcjunior has answered correctly: something has happened before the exception is thrown.
In that kind of situation (it happens often in integration tests when you're dealing with a single transaction per test, for example with the @Transactional annotation) I invoke the method:
session.clear()
It helps because all the 'dirty' objects are removed from the current session, so when the next flush is executed the problem does not appear.
Example flow:
insert the assignment entity (a many-to-many relation with a constraint that only a single assignment may exist) -> everything ok
insert the same assignment entity one more time -> everything ok; the controller in this case returns some kind of bad-request error, and under the hood Spring throws an IntegrityViolationException -> in the test everything looks ok
call the repository and execute findAll().size() to check the count of existing assignments, to be sure that we have only a single one -> the mentioned exception is thrown ;/ What happened? A dirty object still exists in the session. Normally the session would be destroyed (the controller returned an error), but here we have further assertions to check against the database, so the solution is an additional session.clear() before the next DB-related method executions.
Example correct flow:
insert the assignment entity
insert the same assignment entity
session.clear()
get the repository and execute findAll().size()
Hope it helps ;)
You are probably hitting some Hibernate bug. (I'd recommend upgrading to at least Hibernate 3.3.2.GA.)
Meanwhile, Hibernate does better when your ID is nullable, so that it can always tell the difference between a new object that has not yet been persisted to the database and one that is already there. Changing the type of addressID from long to Long will probably work around the problem.
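A sketch of that change in the Address class shown above (the wrapper type makes null available to mean "not yet persisted"):
private Long addressID; // was: private long addressID;

public Long getAddressID() {
    return addressID;
}

public void setAddressID(Long addressID) {
    this.addressID = addressID;
}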
The stack trace you provided shows that you are seeing the problem on a query because the query forces buffered writes to be flushed to the database before it executes, and that write is failing, probably with the same insert problem other people are seeing.
I was facing this issue.
I just added a try/catch block, and in the catch block I call session.clear();
Now I can proceed with inserting the rest of the records into the database.
OK, I continued researching, based among other things on other answers in this thread. But in the end, since we were up against a production deadline, I had to take the emergency route. So instead of figuring out Hibernate I did these two things:
Removed a jQuery library I was using to grab focus on one of the forms. I did this because I read somewhere that this type of bug may happen due to a form posting a null value, causing the null id down the line. I suspected the jQuery library might not sit well with PrimeFaces and cause some form to malfunction. Just a hunch.
I removed the Hibernate-implemented relationship I had between user and address (just one required, not one-to-many) and wrote the code myself where needed. Luckily it only affected one page significantly, so it wasn't much work.
The bottom line: we went live and the application has been running for several days without any errors. So this solution may not be pretty -- and I'm not proud of myself -- but I have a running app and a happy client.
Problem flow:
You create a new transient entity instance (here an Address instance).
You persist it to the database (using save, merge or persist on the Hibernate Session / JPA EntityManager).
As the entity identifier is generated by the database, Hibernate has to trigger the database insertion (it flushes the session) to retrieve the generated id.
The insert operation (or any pending unflushed change in the session) triggers an exception.
You catch the exception (without propagating it) and resume the execution flow. At this point your session still contains the unpersisted instance without an id; the problem is that Hibernate still considers the instance managed, but the instance is corrupted, as a managed object must have an id.
You reach the end of your unit of work, and the session is automatically flushed before the current transaction is committed; the flush fails with an assertion failure because the session contains a corrupted instance.
You have many possible ways to mitigate this error:
The simplest: as Hibernate itself says, "don't flush the Session after an exception occurs", i.e. immediately give up and roll back the current transaction after a persistence exception (as sketched below).
Manually evict (JPA: detach) the corrupted instance from the session after catching the error (at point 5; but if the error was triggered by another pending change instead of the entity insert itself, this is useless).
Don't let the database handle the id generation (use UUIDs or a distributed id generation system; in that case the final flush will throw the real error preventing the persistence of the new instance, instead of a Hibernate assertion failure).
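A minimal sketch of the first option with the plain Hibernate API (the session and address variables are placeholders):
Transaction tx = session.beginTransaction();
try {
    session.save(address);
    tx.commit();
} catch (RuntimeException e) {
    tx.rollback(); // give up immediately; do not flush or reuse this session
    throw e;
} finally {
    session.close();
}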
org.hibernate.AssertionFailure: null id in entry (don't flush the Session after an exception occurs)
This just happened to us and I thought I'd add some details for posterity. It turns out we were trying to create an entity with a duplicate field value that violated a unique constraint:
Caused by: org.hibernate.exception.ConstraintViolationException: Duplicate entry '' for key
'Index_schools_name'
This exception, however, was being masked, because Hibernate was trying to commit the session even though the create had failed. Since the create failed, the id was not set, hence the assertion error. In the stack trace we could see that Hibernate was committing:
at org.springframework.orm.hibernate4.HibernateTransactionManager.doCommit
(HibernateTransactionManager.java:480)
It should have been rolling back the session, not committing it. This turned out to be a problem with our rollback configuration. We are using the old XML configs, and the exception path was incorrect:
<prop key="create*">PROPAGATION_REQUIRED,-org.x.y.LocalException</prop>
The LocalException path was wrong, and Hibernate didn't throw an error (or it was buried in the startup log spew). This would probably also be the case if you are using the annotations and don't specify the right exception(s):
// NOTE: the rollbackFor exception should match the throws clause (or be a subclass)
@Transactional(rollbackFor = LocalException.class)
public void create(Entity entity) throws AnotherException {
Once we fixed our Hibernate wiring, we properly saw the "duplicate entry" exception, and the session was properly rolled back and closed.
One additional wrinkle was that when Hibernate threw the AssertionFailure, it was holding a transaction lock in MySQL that then had to be killed by hand. See: https://stackoverflow.com/a/39397836/179850
This happened to me in the following situation:
A new entity is persisted.
The entity is configured with javax.persistence.EntityListeners, and its javax.persistence.PostPersist callback runs.
The PostPersist callback needs some data from the database to send a message via STOMP, so a org.springframework.data.repository.PagingAndSortingRepository query is executed.
Exception.
I fixed it by using the following in the EntityListeners:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;
import java.time.Instant;
...
ThreadPoolTaskScheduler scheduler = ApplicationContextHolder.getContext().getBean(ThreadPoolTaskScheduler.class);
scheduler.schedule(() -> repository.query(), Instant.now());
Where ApplicationContextHolder is defined as:
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;
@Component
public class ApplicationContextHolder implements ApplicationContextAware {
    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        context = applicationContext;
    }

    public static ApplicationContext getContext() {
        return context;
    }
}
In my case the problem was the length parameter of an entity's field. When I tried to save an object with too long a String value in one of its fields, I got this error. The solution was to set the proper value of the "length" parameter in the Hibernate configuration:
<property name="status" type="string" length="150" not-null="false" access="field"/>
It can also be done with the @Length annotation, like this:
@Length(max=150)
private String status;
Hibernate's exception message was very misleading in my case, as was the stack trace. The fastest way to locate where the problem occurs is to step through your code with a debugger and evaluate session.flush() after every save() and saveOrUpdate() call.
This has nothing to do with the query that is being executed; the query just triggers the flush. At that point Hibernate is trying to assign an identifier to the entity and seems to have failed for some reason.
Could you try changing the generator class:
<generator class="identity"/>
and see if that makes a difference? Also, have you made sure that the database you have deployed to has the correct auto-incrementing column set up on the table?
It sounds like your issue is similar to this one.
Another option is changing the generator class from
<generator class="identity" />
to
<generator class="assigned" />
I got the same exception too. In my Hibernate config file I had:
<property name="operateType" type="java.lang.Integer">
    <column name="operate_type" not-null="true" />
</property>
When a null value was passed for this property, the exception
"org.hibernate.AssertionFailure: null id in com.idex.auctions.model.Address entry"
occurred. I think the reason is that Hibernate checks the 'not-null' property, so removing the 'not-null' attribute, or setting it to 'false', will resolve the problem.
Sometimes this happens when the length of a string is greater than what the DB column allows.
A DataIntegrityViolationException translates into this exception, which is weird behavior on Hibernate's part.
So if you have a Column annotation with a length specified on a String field of the entity, and the actual value is longer than that length, the above exception is thrown.
Ref: https://developer.jboss.org/thread/186341?_sscc=t
org.hibernate.AssertionFailure: null id in entry (don't flush the Session after an exception occurs)
You get this error while using the save method if you maintain a version history of user activity and try to set the following values:
setCreatedBy(1);
setModifiedBy(1);
setCreationDate();
setChangeDate();
To solve the resulting error, you need to create the following columns on the table:
Created_By
Modified_By
Creation_Date
Change_Date
If you are getting the same error in an update, changing the update() method to merge() solves the problem; that's it.
I hope this helped you.
I had the same error. In my case, before this exception appeared I had executed a create query that threw an exception. The exception was caught, and the transaction was not rolled back in the catch block. I then used this broken transaction in another operation, and a little later got this same exception. First I set the flush mode to manual:
public final Session getCurrentSession() {
    Session session = sessionFactory.getCurrentSession();
    session.setFlushMode(FlushMode.MANUAL);
    return session;
}
Then I got another exception that explained what had actually happened. Then I added a transaction rollback in the catch block of my create method, and that fixed it for me.
I'm hitting the same error when I call session.getCurrentSession().refresh(entity); it looks more like a bug to me than an issue with my code. I get the error in a unit test when I try to refresh an entity at the beginning of a test, where that entity was created in the test setup (annotated with JUnit's @Before). What is strange is that I create 3 entities of the same class with random data, at the same time and in the same way, in the setup, and I can refresh the first two created, but the refresh fails for the last one. For example, if I create 3 User entities in the test setup, I can refresh user1 and user2, but it fails for user3. I was able to resolve this by adding session.flush() at the end of the code that creates the entity in the setup. I don't get any errors anymore, but I cannot explain why the extra flush is needed. I can also confirm that the entities are actually in the test DB even without the flush, because I can query them in the test, yet the refresh(entity) call still fails.
In my case, I traced the error and found that I had not marked my table's primary key ('ID') as Auto_Increment (AI). Just tick the AI checkbox and it will work.
I don't know if I'm late or not, but my issue here was that I was opening a transaction and committing -> flushing -> closing after the request. However, in between I had an NHibernate save() operation, which does this automatically, and in that case it complained.
This threw the exception:
session.BeginTransaction();
model.save(entity);
session.Transaction.commit();
This solved it for me:
model.save(entity); // this one opens a transaction, saves, and commits/flushes by itself
However, a lot of people say you should use both, e.g. NHibernate: Session.Save and Transaction.Commit.
But for some reason it works for me now without explicit transactions.
Roll back your transaction in the catch block.

Why doesn't EclipseLink's auto commit work with MySQL?

Using the following code:
EntityManager manager = factory.createEntityManager();
manager.setFlushMode(FlushModeType.AUTO);
PhysicalCard card = new PhysicalCard();
card.setIdentifier("012345ABCDEF");
card.setStatus(CardStatusEnum.Assigned);
manager.persist(card);
manager.close();
When the code runs past this line, the "card" record does not appear in the database. However, when using FlushModeType.COMMIT and a transaction, like this:
EntityManager manager = factory.createEntityManager();
manager.setFlushMode(FlushModeType.COMMIT);
manager.getTransaction().begin();
PhysicalCard card = new PhysicalCard();
card.setIdentifier("012345ABCDEF");
card.setStatus(CardStatusEnum.Assigned);
manager.persist(card);
manager.getTransaction().commit();
manager.close();
it works fine. From EclipseLink's log I can see that the first snippet doesn't issue an INSERT statement while the second does.
Did I miss something here? I'm using EclipseLink 2.3 and MySQL Connector/J 5.1.
I am assuming that you are using EclipseLink in a Java SE application, or in a Java EE application but with an application managed EntityManager instead of a container managed EntityManager.
In both scenarios, all updates made to the persistence context are flushed only when the transaction associated with the EntityManager commits (using EntityTransaction.commit), or when the EntityManager's persistence context is flushed (using EntityManager.flush). This is the reason why the second code snippet issues the INSERT as it invokes the EntityTransaction's begin and commit methods, while the first doesn't; an invocation of em.persist does not issue an INSERT.
As far as FlushModeType values are concerned, the API documentation states the following:
COMMIT
public static final FlushModeType COMMIT
Flushing to occur at transaction commit. The provider may flush at
other times, but is not required to.
AUTO
public static final FlushModeType AUTO
(Default) Flushing to occur at query execution.
Since no queries have been executed in the first case, no flushing occurs, i.e. no INSERT statements corresponding to the persistence of the PhysicalCard entity are issued. It is the explicit commit of the EntityTransaction in the second snippet that results in the INSERT statement being issued.
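For completeness, a minimal sketch of forcing the INSERT before the commit via EntityManager.flush (reusing the question's classes; note that flush requires an active transaction):
EntityManager manager = factory.createEntityManager();
manager.getTransaction().begin();
PhysicalCard card = new PhysicalCard();
card.setIdentifier("012345ABCDEF");
card.setStatus(CardStatusEnum.Assigned);
manager.persist(card);
manager.flush(); // the INSERT is issued here, before the commit
manager.getTransaction().commit();
manager.close();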

org.hibernate.StaleObjectStateException when using Grails with PostgreSQL

I've written a Grails service with the following code:
EPCGenerationMetadata requestEPCs(String indicatorDigit, FilterValue filterValue,
        PartitionValue partitionValue, String companyPrefix, String itemReference,
        Long quantity) throws IllegalArgumentException, IllegalStateException {
    //... code
    //problematic snippet below
    def serialGenerator
    synchronized(this) {
        log.debug "Generating epcs..."
        serialGenerator = SerialGenerator.findByItemReference(itemReference)
        if(!serialGenerator) {
            serialGenerator = new SerialGenerator(itemReference: itemReference, serialNumber: 0L)
        }
        startingPoint = serialGenerator.serialNumber + 1
        serialGenerator.serialNumber += quantity
        serialGenerator.save(flush: true)
    }
    //code continues...
}
Since a Grails service is a singleton by default, I thought I'd be safe from concurrent inconsistency by adding the synchronized block above. I've created a simple client for testing concurrency, as the service is exposed via HTTP invoker. I ran multiple clients at the same time, passing the same itemReference as an argument, and had no problems at all.
However, when I changed the database from MySQL to PostgreSQL 8.4, I couldn't handle concurrent access anymore. When running a single client, everything is fine. However, if I add one more client asking for the same itemReference, I instantly get a StaleObjectStateException:
Exception in thread "main" org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException: Object of class [br.com.app.epcserver.SerialGenerator] with identifier [10]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [br.com.app.epcserver.SerialGenerator#10]
at org.springframework.orm.hibernate3.SessionFactoryUtils.convertHibernateAccessException(SessionFactoryUtils.java:672)
at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:412)
at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:411)
at org.springframework.orm.hibernate3.HibernateTemplate.executeWithNativeSession(HibernateTemplate.java:374)
at org.springframework.orm.hibernate3.HibernateTemplate.flush(HibernateTemplate.java:881)
at org.codehaus.groovy.grails.orm.hibernate.metaclass.SavePersistentMethod$1.doInHibernate(SavePersistentMethod.java:58)
(...)
at br.com.app.EPCGeneratorService.requestEPCs(EPCGeneratorService.groovy:63)
at br.com.app.epcclient.IEPCGenerator$requestEPCs.callCurrent(Unknown Source)
at br.com.app.epcserver.EPCGeneratorService.requestEPCs(EPCGeneratorService.groovy:29)
at br.com.app.epcserver.EPCGeneratorService$$FastClassByCGLIB$$15a2adc2.invoke()
(...)
Caused by: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [br.com.app.epcserver.SerialGenerator#10]
Note: EPCGeneratorService.groovy:63 refers to serialGenerator.save(flush: true).
I don't know what to think, as the only thing that I've changed was the database. I'd appreciate any advice on the matter.
I'm using:
Grails 1.3.3
Postgres 8.4 (postgresql-8.4-702.jdbc4 driver)
JBoss 6.0.0-M4
MySQL:
mysqld Ver 5.1.41 (mysql-connector-java-5.1.13-bin driver)
Thanks in advance!
That's weird; try disabling transactions.
This is indeed strange behavior, but you could try to work around it by using a "select ... for update", via Hibernate's lock method.
Something like this:
def c = SerialGenerator.createCriteria()
serialGenerator = c.get {
    eq "itemReference", itemReference
    lock true
}