I am using Spring Boot to build a scheduled-job data processing application. The main logic lives in a scheduled job that takes a batch of records and processes them. I will be running 2 instances of the application, and they must not pick the same record twice. I tried to use pessimistic locking (with SKIP LOCKED) to resolve any record-selection conflicts.
Things are not working as expected: both instances pick the same records, although I expected one instance to lock and process a few records while the other instance skips whatever the first one locked.
Spring Boot version: 2.2.4.RELEASE
Database: MySQL
First I tried using the @Lock and @QueryHints annotations:
@Lock(value = LockModeType.PESSIMISTIC_WRITE) // adds 'FOR UPDATE' to the statement
@QueryHints(value = {@QueryHint(name = "javax.persistence.lock.timeout", value = LockOptions.SKIP_LOCKED + "")})
Page<Transaction> findByStatus(String status, Pageable pageable);
Even with WAIT_FOREVER there is no change in behavior, as if the @QueryHints were totally ignored.
The other option I tried is using NativeQuery:
#Query(value ="select * from transaction t where t.status = ?1 limit ?2 for update SKIP LOCKED",
countQuery="select count(*) from transaction t where t.status = ?1",
nativeQuery = true)
List<Transaction> findByStatusNQ(String status, Integer pageSize);
Same behavior: no locking, and both app instances select the same set of data.
This is the defined entity:
@Entity
public class Transaction {

    @Id
    private Long id;
    private String description;
    private String status;
    private String managedBy;

    @Temporal(TemporalType.TIMESTAMP)
    private Date manageDate;
    ...
}
The caller service component is annotated with @Transactional to enforce creating a new transaction for each execution:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public List<Transaction> updateTrxStatus(String oldStatus, String newStatus) {
    List<Transaction> trxs = this.executeUsingNQ(oldStatus);
    if (trxs.size() > 0) {
        logger.info("Start updating Data");
        trxs.forEach(transaction -> {
            transaction.setStatus(newStatus);
            transaction.setManagedBy(instanceName);
            transaction.setManageDate(new Date(System.currentTimeMillis()));
        });
    } else {
        logger.info("Nothing to process");
    }
    return trxs;
}

@Transactional(propagation = Propagation.REQUIRED)
public List<Transaction> executeUsingNQ(String oldStatus) {
    List<Transaction> trxs = trxRepo.findByStatusNQ(oldStatus, 2);
    return trxs;
}

@Transactional(propagation = Propagation.REQUIRED)
public List<Transaction> executeWithPage(String oldStatus) {
    Pageable firstPageWithTwoElements = PageRequest.of(0, 2);
    Page<Transaction> trxs = trxRepo.findByStatus(oldStatus, firstPageWithTwoElements);
    return trxs.getContent();
}
Hopefully someone can help identify whether there is a coding issue or a missing configuration!
It turns out that the issue was caused by using an incorrect Hibernate dialect with MySQL. The MySQLDialect assumes MyISAM as the default storage engine when creating tables, and MyISAM does not support transactions at all (and therefore no row locks).
The only storage engine that supports transactions is InnoDB, which is selected as the default when using other dialects such as MySQL55Dialect, MySQL57Dialect or MySQL8Dialect.
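For reference, the dialect can be pinned explicitly in application.properties; this is just a minimal example using the standard Spring Boot 2.x property names (pick the dialect class matching your MySQL version):
# Either property makes Hibernate use a dialect whose default storage engine is InnoDB
spring.jpa.database-platform=org.hibernate.dialect.MySQL57Dialect
# or
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL57Dialect
Tables that were already created with the MyISAM engine still have to be converted (ALTER TABLE ... ENGINE=InnoDB) before row locks have any effect.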
Related
I need to update a single column (status) of more than 1000-2000 transactions at once. The value to be updated is the same, e.g. "PROCESSED", for all the rows. I don't want to iterate over my list and update each entity; I'm looking for a bulk update in JPA itself. I also want to avoid explicit use of the entity manager.
Something like this will work with standard Spring Boot JPA:
public interface WidgetRepository extends CrudRepository<Widget, Long> {

    @Override
    List<Widget> findAll();

    @Modifying(clearAutomatically = true, flushAutomatically = true)
    @Query("update Widget w set w.status = :status where w in :widgets")
    void setStatus(@Param("status") String status, @Param("widgets") List<Widget> widgets);
}
Test code:
List<Widget> all = repository.findAll();
List<Widget> updates = new ArrayList<>();
updates.add(all.get(0));
updates.add(all.get(2));
repository.setStatus("status B", updates);
It should do all the updates in a single transaction.
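If loading 1000-2000 entities just to pass them back into the update is undesirable, the same idea can be keyed on ids instead of entities. A minimal sketch, assuming the Transaction entity from the first question above (Long id, String status); the repository and method names here are made up for illustration:
public interface TransactionRepository extends CrudRepository<Transaction, Long> {

    // Bulk-update the status of many rows by primary key without loading the entities.
    // clearAutomatically/flushAutomatically keep the persistence context in sync with the bulk statement.
    @Modifying(clearAutomatically = true, flushAutomatically = true)
    @Query("update Transaction t set t.status = :status where t.id in :ids")
    int setStatusForIds(@Param("status") String status, @Param("ids") List<Long> ids);
}
Called from a transactional method, e.g. setStatusForIds("PROCESSED", ids); for very large id lists it is common to split them into chunks to keep the IN clause manageable.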
I'm having an issue with inserting new rows into my MySQL database. I'm using Spring Boot with Spring Boot Data JPA.
Since MySQL doesn't support sequences, I decided to try and make my own sequence generator table. This is basically what I've done.
I created a sequences table that uses an auto-increment field (used as the ids for my tables).
Created a function, sequence_nextvalue(), which inserts into the sequences table and returns the new auto-incremented id.
I then created triggers on each table that fire before insertion and replace the id field with the result of calling sequence_nextvalue().
So this is working fine when inserting new rows. I'm getting unique ids across all tables. The issue I'm having is with my JPA entities.
@Entity
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public abstract class AbstractBaseClass {

    @Id
    private Integer id = -1;
    ...
}
@Entity
public class ConcreteClass1 extends AbstractBaseClass {
    ...
}

@Entity
public class ConcreteClass2 extends AbstractBaseClass {
    ...
}
I want to be able to query via the abstract base class, so I've placed my @Id column in that class and used @Entity with InheritanceType.TABLE_PER_CLASS. I've also initialized the id to -1 since an id is required to call save() from my Spring CRUD repository.
After calling the save() function of my Spring Data CrudRepository, the -1 id is properly replaced by the MySQL trigger, but the entity returned by save() does not carry the new id; it still has -1. Looking at the SQL logs, no select statement is issued after the insert to fetch the new id; the original entity is simply returned.
Is it possible to force Hibernate to re-select the entity after insertion to get the new id when you're not using @GeneratedValue?
Any help is greatly appreciated.
Just wanted to provide an update on this question. Here is my solution.
Instead of creating MySQL TRIGGERs to replace the id on INSERT, I created a Hibernate IdentifierGenerator which executes a CallableStatement to get and return a new id.
My abstract base class now looks like this.
@Entity
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public abstract class AbstractBaseClass {

    @Id
    @GenericGenerator(name = "MyIdGenerator", strategy = "com.sample.model.CustomIdGenerator")
    @GeneratedValue(generator = "MyIdGenerator")
    private Integer id;
    ...
}
and my generator looks like this.
public class CustomIdGenerator implements IdentifierGenerator {

    private Logger log = LoggerFactory.getLogger(CustomIdGenerator.class);

    private static final String QUERY = "{? = call sequence_nextvalue()}";

    @Override
    public Serializable generate(SessionImplementor session, Object object) throws HibernateException {
        Integer id = null;
        // Borrow the session's JDBC connection and call the stored function.
        Connection connection = session.connection();
        try (CallableStatement statement = connection.prepareCall(QUERY)) {
            statement.registerOutParameter(1, java.sql.Types.INTEGER);
            statement.execute();
            id = statement.getInt(1);
        } catch (SQLException e) {
            log.error("Error getting id", e);
            throw new HibernateException(e);
        }
        return id;
    }
}
And just for reference
The sequences table.
CREATE TABLE sequences (
id INT AUTO_INCREMENT PRIMARY KEY,
thread_id INT NOT NULL,
created DATETIME DEFAULT CURRENT_TIMESTAMP
) ^;
The sequence_nextvalue function
CREATE FUNCTION sequence_nextvalue()
RETURNS INTEGER
NOT DETERMINISTIC
MODIFIES SQL DATA
BEGIN
    DECLARE nextvalue INTEGER;
    INSERT INTO sequences (thread_id) VALUE (CONNECTION_ID());
    -- pick up the id that this connection's insert just generated
    SELECT id INTO nextvalue FROM sequences WHERE thread_id = CONNECTION_ID() ORDER BY id DESC LIMIT 1;
    RETURN nextvalue;
END ^;
I'm using Hibernate on a web project in which I have two different classes (Node, Interface).
Node
@Entity
@Table(name = "NODES")
public class Node {
    //...
    @OneToMany(mappedBy = "node", cascade = {CascadeType.ALL, CascadeType.MERGE, CascadeType.PERSIST}, orphanRemoval = true)
    private Set<Interface> interfaces = new HashSet<>();
    //...
}
Interface
@Entity
@Table(name = "INTERFACES")
public class Interface {
    //...
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "REF_NODE")
    private Node node;
    //...
}
Everything is fine so far.
My question is: how can I add an Interface (child) to a Node (parent) that is already persisted? In other words, if I already have a Node in the database with 2 Interfaces, for example, and I want to add a third one, how do I do it?
My first quick solution is to use native SQL in Hibernate, like this:
public void addInterface(Interface i, Long idNode) {
    // Open session
    Session session = HibernateUtil.getSessionFactory().openSession();
    Transaction tx = null;
    try {
        // Start transaction
        tx = session.beginTransaction();
        // Native SQL in Hibernate
        SQLQuery query = session.createSQLQuery(
            "INSERT INTO INTERFACES (ID_INTERFACE, ALIAS, REF_NODE) VALUES (NULL, :alias, :idNode)");
        query.setParameter("alias", i.getAlias());
        query.setParameter("idNode", idNode);
        // Some other parameters...
        // Execute and commit
        query.executeUpdate();
        tx.commit();
    } catch (Exception e) {
        if (tx != null) tx.rollback();
        throw e;
    } finally {
        session.close();
    }
}
It works, but I don't believe it's the best solution.
NB: I found some topics here on Stack Overflow with almost the same title, but they didn't answer my question.
It can be done in 2 ways.
1st Option:
Interface newInterface = new Interface(....);
newInterface.setNode(node);             // owning side: this is what fills the REF_NODE column
node.getInterfaces().add(newInterface); // inverse side; CascadeType.ALL persists the new child
session.saveOrUpdate(node);
tx.commit();
session.close();
Or
2nd Option:
newInterface.setNode(nodeObject);
session.saveOrUpdate(newInterface);
tx.commit();
session.close();
Of the above two approaches, the 2nd option performs better: the 1st one pulls all the children from the database when you call getInterfaces(), which does not scale well.
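Either way, both sides of the bidirectional association have to be kept in sync, because Interface is the owning side of the relationship (it holds the REF_NODE join column), while Node.interfaces is only the mappedBy side. A small helper on the parent is a common convenience; this is only a sketch, and the addInterface method below is not part of the original mapping:
// Hypothetical helper inside the Node entity
public void addInterface(Interface iface) {
    interfaces.add(iface); // inverse side: keeps the in-memory collection consistent
    iface.setNode(this);   // owning side: this is what actually populates REF_NODE
}
With such a helper, node.addInterface(newInterface) followed by saveOrUpdate(node) covers both options above.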
// Syntax may not be exact; this is partly pseudocode
Let's say you want to add an Interface to the Node with id 1:
Query query = session.createQuery("select n from Node n where n.id = :id"); // HQL
query.setParameter("id", 1L);
Node n = (Node) query.getSingleResult();
Then create a new Interface:
Interface i = new Interface();
Then set the fields to whatever you want, but also set the node:
i.setVariables(..);
i.setNode(n);
Then merge it so the change is persisted:
session.merge(i);
All of this assumes the id generator is configured correctly.
I have a JPA entity like this:
@Entity
@Table(name = "category")
public class Category implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Basic(optional = false)
    @Column(name = "id")
    private Integer id;

    @Basic(optional = false)
    @Column(name = "name")
    private String name;

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "category")
    private Collection<ItemCategory> itemCategoryCollection;
    //...
}
I use MySQL as the underlying database and Hibernate as the JPA provider. "name" is designed as a unique key.
The problem with using the merge method is that the PK is generated by the database, so if the record already exists (the name is already there), Hibernate still tries to insert it, and I get a unique-key constraint violation exception instead of an update. Does anyone have a good practice for handling that? Thank you!
P.S: my workaround is like this:
public void save(Category entity) {
    Category existingEntity = this.find(entity.getName());
    if (existingEntity == null) {
        em.persist(entity);
        // code to commit ...
    } else {
        entity.setId(existingEntity.getId());
        em.merge(entity);
        // code to commit ...
    }
}

public Category find(String categoryName) {
    try {
        return (Category) getEm().createNamedQuery("Category.findByName")
                .setParameter("name", categoryName).getSingleResult();
    } catch (NoResultException e) {
        return null;
    }
}
How can I use em.merge() to insert OR update JPA entities when the primary key is generated by the database?
Whether you're using generated identifiers or not is IMO irrelevant. The problem here is that you want to implement an "upsert" on some unique key other than the PK and JPA doesn't really provide support for that (merge relies on database identity).
So you have AFAIK 2 options.
Either perform an INSERT first and implement some retry mechanism in case of failure because of a unique constraint violation and then find and update the existing record (using a new entity manager).
Or, perform a SELECT first and then insert or update depending on the outcome of the SELECT (this is what you did). This works but is not 100% guaranteed as you can have a race condition between two concurrent threads (they might not find a record for a given categoryName and try to insert in parallel; the slowest thread will fail). If this is unlikely, it might be an acceptable solution.
Update: There might be a 3rd bonus option if you don't mind using a MySQL proprietary feature, see 12.2.5.3. INSERT ... ON DUPLICATE KEY UPDATE Syntax. Never tested with JPA though.
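For completeness, a minimal sketch of that third option with plain JPA and a native query; untested, as noted above, and it relies on the unique key on name described in the question:
// Hypothetical sketch: MySQL's INSERT ... ON DUPLICATE KEY UPDATE issued through a native query.
// Not portable beyond MySQL; duplicate detection relies on the UNIQUE constraint on category.name.
public void upsert(Category entity) {
    em.createNativeQuery(
            "INSERT INTO category (name) VALUES (?) "
          + "ON DUPLICATE KEY UPDATE name = VALUES(name)")
      .setParameter(1, entity.getName())
      .executeUpdate();
}
Note that this bypasses the persistence context, so an already-loaded Category instance will not reflect the change until it is refreshed or re-read.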
I haven't seen this mentioned before, so I'd like to add a possible solution that avoids making multiple queries: versioning.
Normally used as a simple way to check whether a record being updated has gone stale in optimistic locking scenarios, columns annotated with @Version can also be used to check whether a record is persistent (present in the DB) or not.
This all may sound complicated, but it really isn't. What it boils down to is an extra column on the record whose value changes on every update. We define an extra column version in our database like this:
CREATE TABLE example
(
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
version INT, -- <== It really is that simple!
value VARCHAR(255)
);
And mark the corresponding field in our Java class with #Version like this:
#Entity
public class Example {
#Id
#GeneratedValue
private Integer id;
#Version // <-- that's the trick!
private Integer version;
#Column(length=255)
private String value;
}
The @Version annotation will make JPA use this column for optimistic locking by including it as a condition in any update statement, like this:
UPDATE example
SET value = 'Hello, World!'
WHERE id = 23
AND version = 2 -- <-- if version has changed, update won't happen
(JPA does this automatically, no need to write it yourself)
Afterwards it checks whether exactly one record was updated (as expected) or none (in which case the object was stale).
We must make sure nobody can set the version field (that would break optimistic locking), but we can expose a getter on version if we want. We can also use the version field in an isPersistent method that checks whether the record is already in the DB without ever making a query:
@Entity
public class Example {
    // ...

    /** Indicates whether this entity is present in the database. */
    public boolean isPersistent() {
        return version != null;
    }
}
Finally, we can use this method in our insertOrUpdate method:
public void insertOrUpdate(Example example) {
    if (example.isPersistent()) {
        // record is already present in the db
        // update it here
    } else {
        // record is not present in the db
        // insert it here
    }
}
I use Spring Boot + a MySQL 5 database.
There is a periodic service that runs and needs to do the following as one transaction:
Delete records (with a condition)
Insert records
In addition, another service runs select queries and should see a consistent snapshot of the records without interfering with the delete+insert transactions.
I have the following code:
@Service
public class BulkInsert
{
    public static final String DELETE_ALL_ROWS_QUERY = "DELETE FROM GnsEntity where is_synced = true and was_removed = false";

    @Inject
    private EntityManager entityManager;

    @Transactional
    public void save(List<GnsEntity> gnsEntityList)
    {
        Session session = entityManager.unwrap(Session.class);
        Query entity = session.createQuery(DELETE_ALL_ROWS_QUERY);
        entity.executeUpdate();

        for (int i = 0; i < gnsEntityList.size(); ++i)
        {
            try
            {
                session.persist(gnsEntityList.get(i));
            }
            catch (NonUniqueObjectException nonUniEx)
            {
                // duplicate entity in the batch: intentionally ignored
            }
        }
    }
}
In general it seems to work well, though quite often there is a deadlock exception and I have no clue why.
That's why I was wondering whether my code is basically fine.
I get the following errors every now and then:
DEBUG","message":"Creating new transaction with name
[com.ddd.swiss.microservices.gnssynchronizer.BulkInsert.save]:
PROPAGATION_REQUIRED,ISOLATION_DEFAULT","service":"GNSSynchronizer","instanceId":"1","application":"Start","space":"ngampel","class":"org.springframework.orm.jpa.JpaTransactionManager","thread":"pool-3-thread-1","X-B3-TraceId":"5db000bfb3de1a6d49a53edd707419a0","X-B3-SpanId":"49a53edd707419a0"}
{"#timestamp":"2019-10-23T07:27:24.318Z","logLevel":"DEBUG","message":"Opened
new EntityManager
[org.hibernate.jpa.internal.EntityManagerImpl#5a445da1] for JPA
transaction","service":"GNSSynchronizer","instanceId":"1","application":"Start","space":"ngampel","class":"org.springframework.orm.jpa.JpaTransactionManager","thread":"pool-3-thread-1","X-B3-TraceId":"5db000bfb3de1a6d49a53edd707419a0","X-B3-SpanId":"49a53edd707419a0"}
{"#timestamp":"2019-10-23T07:27:24.318Z","logLevel":"DEBUG","message":"begin","service":"GNSSynchronizer","instanceId":"1","application":"Start","space":"ngampel","class":"org.hibernate.engine.transaction.internal.TransactionImpl","thread":"pool-3-thread-1","X-B3-TraceId":"5db000bfb3de1a6d49a53edd707419a0","X-B3-SpanId":"49a53edd707419a0"}
{"#timestamp":"2019-10-23T07:27:24.319Z","logLevel":"DEBUG","message":"Exposing
JPA transaction as JDBC transaction
[org.springframework.orm.jpa.vendor.HibernateJpaDialect$HibernateConnectionHandle#241c36b8]","service":"GNSSynchronizer","instanceId":"1","application":"Start","space":"ngampel","class":"org.springframework.orm.jpa.JpaTransactionManager","thread":"pool-3-thread-1","X-B3-TraceId":"5db000bfb3de1a6d49a53edd707419a0","X-B3-SpanId":"49a53edd707419a0"}
{"#timestamp":"2019-10-23T07:27:24.319Z","logLevel":"DEBUG","message":"delete
from gns_entity where is_synced=1 and
was_removed=0","service":"GNSSynchronizer","instanceId":"1","application":"Start","space":"ngampel","class":"org.hibernate.SQL","thread":"pool-3-thread-1","X-B3-TraceId":"5db000bfb3de1a6d49a53edd707419a0","X-B3-SpanId":"49a53edd707419a0"}
{"#timestamp":"2019-10-23T07:27:25.451Z","logLevel":"DEBUG","message":"could
not execute statement
[n/a]","stackTrace":"com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException:
Deadlock found when trying to get lock; try restarting
transaction\n\tat
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Nativ
Thanks for the help!