Include additional columns in the WHERE clause of a Hibernate/JPA-generated UPDATE query - MySQL

I am using Hibernate/JPA.
When I do an entity.save() or session.update(entity), Hibernate generates a query like this:
update TABLE1 set COL_1=? , COL_2=? , COL_3=? where COL_PK=?
Can I include an additional column in the WHERE clause by means of an annotation on the entity, so that it results in a query like:
update TABLE1 set COL_1=? , COL_2=? , COL_3=? where COL_PK=? **AND COL_3=?**
This is because our DB is sharded based on COL_3, so this column needs to be present in the WHERE clause.
I want to be able to achieve this using session.update(entity) or entity.save() only.

If I understand things correctly, essentially what you are describing is that you want Hibernate to act as if you had a composite primary key, even though your database has a single-column primary key (where you also have a @Version column to perform optimistic locking).
Strictly speaking, there is no need for your Hibernate model to match your DB schema exactly. You can define the entity to have a composite primary key, ensuring that all updates occur based on the combination of the two values. The drawback is that your load operations become slightly more complicated.
Consider the following entity:
@Entity
@Table(name = "test_entity", uniqueConstraints = { @UniqueConstraint(columnNames = { "id" }) })
public class TestEntity implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id", nullable = false, unique = true)
    private Long id;

    @Id
    @Column(name = "col_3", nullable = false)
    private String col_3;

    @Column(name = "value", nullable = true)
    private String value;

    @Version
    @Column(nullable = false)
    private Integer version;

    // ... getters & setters
}
Then you can have the following method (in my case, a simple JUnit test):
@Test
public void test() {
    TestEntity test = new TestEntity();
    test.setCol_3("col_3_value");
    test.setValue("first-value");
    session.persist(test);
    long id = test.getId();
    session.flush();
    session.clear();

    TestEntity loadedTest = (TestEntity) session
            .createCriteria(TestEntity.class)
            .add(Restrictions.eq("id", id))
            .uniqueResult();

    loadedTest.setValue("new-value");
    session.saveOrUpdate(loadedTest);
    session.flush();
}
This generates the following SQL statements (with Hibernate SQL logging enabled):
Hibernate:
call next value for hibernate_sequence
Hibernate:
insert
into
test_entity
(value, version, id, col_3)
values
(?, ?, ?, ?)
Hibernate:
select
this_.id as id1_402_0_,
this_.col_3 as col_2_402_0_,
this_.value as value3_402_0_,
this_.version as version4_402_0_
from
test_entity this_
where
this_.id=?
Hibernate:
update
test_entity
set
value=?,
version=?
where
id=?
and col_3=?
and version=?
As you can see, this makes loading slightly more complicated (I used a Criteria query here), but it satisfies your requirement that update statements always include the column col_3 in the WHERE clause.

The following solution works; however, I recommend you just wrap your saveOrUpdate calls so that you end up with a more natural approach. Mine is fine... but it is a bit hacky.
Solution:
You can create your own annotation and inject the extra condition into Hibernate's generated SQL using a Hibernate interceptor. The steps are the following:
1. Create a class-level annotation:
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface ForcedCondition {
    String columnName() default "";
    String attributeName() default ""; // <-- just in case your DB column name differs from your attribute's name
}
2. Annotate your entity, specifying the DB column name and the entity attribute name:
@ForcedCondition(columnName = "col_3", attributeName = "col_3")
@Entity
@Table(name = "test_entity")
public class TestEntity implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id", nullable = false, unique = true)
    private Long id;

    @Column(name = "col_3", nullable = false)
    private String col_3;

    public String getCol_3() {
        return col_3;
    }

    // ... getters & setters
}
3. Add a Hibernate interceptor and inject the extra condition:
public class ForcedConditionInterceptor extends EmptyInterceptor {

    private boolean forceCondition = false;
    private String columnName;
    private String attributeValue;

    @Override
    public boolean onSave(
            Object entity,
            Serializable id,
            Object[] state,
            String[] propertyNames,
            Type[] types) {
        // If the annotation is present, back up the column name and attribute value
        if (entity.getClass().isAnnotationPresent(ForcedCondition.class)) {
            // Turn on the flag, so the condition is injected later
            forceCondition = true;
            // Extract the values from the annotation
            ForcedCondition annotation = entity.getClass().getAnnotation(ForcedCondition.class);
            columnName = annotation.columnName();
            String attributeName = annotation.attributeName();
            try {
                // Use reflection to get the value (org.apache.commons.beanutils.PropertyUtils)
                attributeValue = String.valueOf(PropertyUtils.getProperty(entity, attributeName));
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException("Cannot read attribute " + attributeName, e);
            }
        }
        return super.onSave(entity, id, state, propertyNames, types);
    }

    @Override
    public String onPrepareStatement(String sql) {
        if (forceCondition) {
            // Inject the extra condition; for better performance try java.util.regex.Pattern
            sql = sql.replace(" where ", " where " + columnName + " = '" + attributeValue.replaceAll("'", "''") + "' AND ");
        }
        return super.onPrepareStatement(sql);
    }
}
After all that, every time you call entity.save() or session.update(entity) on an entity annotated with @ForcedCondition, the generated SQL will be injected with the extra condition you want.
BTW: I haven't tested this code, but it should get you along the way. If I made any mistake, please tell me so I can correct it.

Related

JPA Specification multiple join based on foreign key

I have the following relationships between three objects:
public class ProductEntity {

    @Id
    private int id;

    @OneToMany(mappedBy = "productEntity", fetch = FetchType.LAZY)
    private List<ProductInfoEntity> productInfoEntityList = new ArrayList<>();

    @Column(name = "snippet")
    private String snippet;
}

public class ProductInfoEntity {

    @Id
    private int id;

    @ManyToOne
    @JoinColumn(name = "product_id")
    private ProductEntity productEntity;

    @ManyToOne
    @JoinColumn(name = "support_language_id")
    private SupportLanguageEntity supportLanguageEntity;
}

public class SupportLanguageEntity {

    @Id
    private int id;

    @Column(name = "name")
    private String name;
}
The actual database design mirrors these entities: product_info carries the foreign keys product_id and support_language_id.
Then I'd like to make a specification to query as follows:
select * from product_info
where product_id = 1
and support_language_id = 2;
I am also using the JPA static metamodel for the specification, which means I use ProductEntity_, ProductInfoEntity_, and so on.
Can you please give me full working code for the specification for the query mentioned above?
Thank you guys
To use Specification, your ProductInfoEntityRepository has to extend JpaSpecificationExecutor:
@Repository
public interface ProductInfoEntityRepository
        extends JpaRepository<ProductInfoEntity, Integer>, JpaSpecificationExecutor<ProductInfoEntity> {
}
As far as I understand, you use the JPA static metamodel. So then:
@Autowired
ProductInfoEntityRepository repository;

public List<ProductInfoEntity> findProductInfoEntities(int productId, int languageId) {
    return repository.findAll((root, query, builder) -> {
        Predicate productPredicate = builder.equal(
                root.get(ProductInfoEntity_.productEntity).get(ProductEntity_.id), // or root.get("productEntity").get("id")
                productId);
        Predicate languagePredicate = builder.equal(
                root.get(ProductInfoEntity_.supportLanguageEntity).get(SupportLanguageEntity_.id), // or root.get("supportLanguageEntity").get("id")
                languageId);
        return builder.and(productPredicate, languagePredicate);
    });
}
If you want to make specifications reusable, you should create a utility class containing two static methods, productIdEquals(int) and languageIdEquals(int).
To combine them, use Specifications (Spring Data JPA 1.x) or Specification (since Spring Data JPA 2.0); see the sketch below.
select * from product_info where product_id = 1 and support_language_id = 2;
should work as written. But the only useful column it returns will be comment.
Perhaps you want the rest of the info in all three tables?
SELECT pi.comment, -- list the columns you need
       p.snippet,
       sl.name
FROM product AS p            -- table, plus convenient "alias"
JOIN product_info AS pi      -- another table
     ON p.id = pi.product_id -- explain how the tables are related
JOIN support_language AS sl  -- and another
     ON pi.support_language_id = sl.id -- how related
WHERE p.snippet = 'abc'      -- it is more likely that you will start here
-- The query will figure out the rest.
From there, see if you can work out the obfuscation provided by JPA.

Switch from JsonStringType to JsonBinaryType when the project uses both MySQL and PostgreSQL

I have a problem with a JSON column when it's necessary to switch from PostgreSQL to MariaDB/MySQL.
I use Spring Boot + JPA + Hibernate + hibernate-types-52.
The table I want to map is like this:
CREATE TABLE atable(
...
acolumn JSON,
...
);
OK, it works for PostgreSQL and MariaDB/MySQL.
The problem is when I want to deploy an application that switches easily from one to the other, because the correct hibernate-types-52 implementations for PostgreSQL and MySQL/MariaDB are different.
This works on MySQL/MariaDB:
@Entity
@Table(name = "atable")
@TypeDef(name = "json", typeClass = JsonStringType.class)
public class Atable {
    ...
    @Type(type = "json")
    @Column(name = "acolumn", columnDefinition = "json")
    private JsonNode acolumn;
    ...
}
This works on PostgreSQL:
@Entity
@Table(name = "atable")
@TypeDef(name = "json", typeClass = JsonBinaryType.class)
public class Atable {
    ...
    @Type(type = "json")
    @Column(name = "acolumn", columnDefinition = "json")
    private JsonNode acolumn;
    ...
}
Any kind of solution to switch from JsonBinaryType to JsonStringType (or any other way to solve this) is appreciated.
With the Hypersistence Utils project, you can just use JsonType, which works with PostgreSQL, MySQL, Oracle, SQL Server, and H2.
So, use JsonType instead of JsonBinaryType or JsonStringType:
@Entity
@Table(name = "atable")
@TypeDef(name = "json", typeClass = JsonType.class)
public class Atable {

    @Type(type = "json")
    @Column(name = "acolumn", columnDefinition = "json")
    private JsonNode acolumn;
}
That's it!
There are some crazy things you can do - with the limitation that this only works for specific types and columns.
First, to replace the static @TypeDef with a dynamic mapping:
You can use a HibernatePropertiesCustomizer to add a TypeContributorList:
@Configuration
public class HibernateConfig implements HibernatePropertiesCustomizer {

    // spring.jpa.database-platform holds the Hibernate Dialect class name
    @Value("${spring.jpa.database-platform:}")
    private Class<? extends Dialect> dialectClass;

    @Override
    public void customize(Map<String, Object> hibernateProperties) {
        AbstractHibernateType<Object> jsonType;
        if (dialectClass != null && PostgreSQL92Dialect.class.isAssignableFrom(dialectClass)) {
            jsonType = new JsonBinaryType(Atable.class);
        } else {
            jsonType = new JsonStringType(Atable.class);
        }
        hibernateProperties.put(EntityManagerFactoryBuilderImpl.TYPE_CONTRIBUTORS,
                (TypeContributorList) () -> List.of(
                        (TypeContributor) (TypeContributions typeContributions, ServiceRegistry serviceRegistry) ->
                                typeContributions.contributeType(jsonType, "myType")));
    }
}
So this is limited to Atable.class now, and I have named this custom JSON type 'myType', i.e. you annotate your property with @Type(type = "myType").
I'm using the configured Dialect here, but in my application I'm checking the active profiles for DB-specific profiles.
Also note that TypeContributions.contributeType(BasicType, String...) is deprecated since Hibernate 5.3. I haven't looked into the new mechanism yet.
So that covers the @Type part, but if you want to use Hibernate schema generation, you'll still need the @Column(columnDefinition = "...") bit, so Hibernate knows which column type to use.
This is where it starts feeling a bit yucky. We can register an Integrator to manipulate the mapping metadata:
hibernateProperties.put(EntityManagerFactoryBuilderImpl.INTEGRATOR_PROVIDER,
        (IntegratorProvider) () -> Collections.singletonList(JsonColumnMappingIntegrator.INSTANCE));
As a demo I'm only checking for PostgreSQL and I'm applying the dynamic columnDefinition only to a specific column in a specific entity:
public class JsonColumnMappingIntegrator implements Integrator {

    public static final JsonColumnMappingIntegrator INSTANCE =
            new JsonColumnMappingIntegrator();

    @Override
    public void integrate(
            Metadata metadata,
            SessionFactoryImplementor sessionFactory,
            SessionFactoryServiceRegistry serviceRegistry) {
        Database database = metadata.getDatabase();
        if (PostgreSQL92Dialect.class.isAssignableFrom(database.getDialect().getClass())) {
            Column acolumn =
                    ((Column) metadata.getEntityBinding(Atable.class.getName())
                            .getProperty("acolumn").getColumnIterator().next());
            acolumn.setSqlType("json");
        }
    }

    @Override
    public void disintegrate(SessionFactoryImplementor sessionFactory, SessionFactoryServiceRegistry serviceRegistry) {
    }
}
metadata.getEntityBindings() would give you all entity bindings, over which you can iterate and then iterate over the properties (as sketched below). This seems quite inefficient, though.
I'm also not sure whether you can set things like 'IS JSON' constraints etc., so a custom create script would be better.

How to handle concurrent updates in a Spring Boot Hibernate app? Also need to make the app scalable

Project type: Spring Boot JPA project
Hi,
I have the below REST service, which increments a number in the database.
@RestController
public class IncrementController {

    @Autowired
    MyNumberRepository mynumberRepository;

    @GetMapping(path = "/incrementnumber")
    public String incrementNumber() {
        Optional<MyNumber> mynumber = mynumberRepository.findById(1);
        int i = mynumber.get().getNumber();
        System.out.println("value of no is " + i);
        i = i + 1;
        System.out.println("value of no post increment is " + i);
        mynumber.get().setNumber(i);
        MyNumber entity = new MyNumber();
        entity.setId(1);
        entity.setNumber(i);
        mynumberRepository.save(entity);
        return "done";
    }
}
The entity is as below:
@Entity
@Table(name = "my_number")
public class MyNumber {

    @Id
    private Integer id;

    private Integer number;

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public Integer getNumber() {
        return number;
    }

    public void setNumber(Integer number) {
        this.number = number;
    }
}
Below is the repository:
public interface MyNumberRepository extends JpaRepository<MyNumber, Integer> {
}
The service works well when I call incrementnumber sequentially, but when concurrent threads call the increment service I get inconsistent results. How can I handle this situation?
I also have to deploy the app in multiple places, all connecting to the same DB, i.e. a scalability concern.
Thanks,
Rahul
You must use a pessimistic lock. This will issue a SELECT ... FOR UPDATE and lock the row for the transaction, making it impossible for another transaction to overwrite the row.
public interface MyNumberRepository extends JpaRepository<MyNumber, Integer> {

    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Optional<MyNumber> findById(Integer id);
}
And then you have to make your REST method transactional by adding @Transactional:
@RestController
public class IncrementController {

    @Autowired
    MyNumberRepository mynumberRepository;

    @Transactional
    @GetMapping(path = "/incrementnumber")
    public String incrementNumber() {
        Optional<MyNumber> mynumber = mynumberRepository.findById(1);
        int i = mynumber.get().getNumber();
        System.out.println("value of no is " + i);
        i = i + 1;
        System.out.println("value of no post increment is " + i);
        mynumber.get().setNumber(i);
        MyNumber entity = new MyNumber();
        entity.setId(1);
        entity.setNumber(i);
        mynumberRepository.save(entity);
        return "done";
    }
}
The above solution will work, but I feel you are over-engineering a very simple problem.
My recommendation would be to use a database sequence; your requirement is quite straightforward. In your service you can simply call getNextValue on the sequence and then set the value in the id field. This way you don't have to manage locks, as the database will do that for you (see the sketch below).
In Oracle in particular, sequences are managed in separate transactions, so if your calling code fails with an exception, the value of the sequence will still be incremented. This ensures that multiple threads will not see the same sequence value even in case of exceptions.
Instead of locking the row in a transaction, you could also use an Oracle sequence or MySQL's AUTO_INCREMENT feature, which will prevent any ID from being returned twice.
https://community.oracle.com/thread/4156674
Thread safety of MySql's Select Last_Insert_ID

Saving a large object takes too long in Hibernate

I have an entity with a BLOB column requestData and a TEXT column requestDataText.
These two fields may hold large data. In my example, the BLOB data is around 1.2 MB and the TEXT column holds the text equivalent of that data.
When I try to commit this single entity, it takes around 20 seconds.
DBUtil.beginTransaction();
session.saveOrUpdate(entity);
DBUtil.commitTransaction();
Is there something wrong, or is there a way to shorten this period?
package a.db.entity;
// Generated Feb 22, 2016 11:57:10 AM by Hibernate Tools 3.2.1.GA

/**
 * Foo generated by hbm2java
 */
@Entity
@Table(name = "foo", catalog = "bar")
public class Foo implements java.io.Serializable {

    private Long id;
    private Date reqDate;
    private byte[] requestData;
    private String requestDataText;
    private String functionName;
    private boolean confirmed;
    private boolean processed;
    private boolean errorOnProcess;
    private Date processStartedAt;
    private Date processFinishedAt;
    private String responseText;
    private String processResult;
    private String miscData;

    public Foo() {
    }

    @Id
    @GeneratedValue(strategy = IDENTITY)
    @Column(name = "Id", unique = true, nullable = false)
    public Long getId() {
        return this.id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    ...
}
I just noticed you're starting a transaction and then doing a saveOrUpdate(), which might explain the slowdown, as Hibernate will try to retrieve the row from the DB first (as explained in this other SO answer).
If you know the entity is new, call save(); if you know the entity has to be updated, call update().
Another suggestion, though I'm not sure if it still applies to MySQL: if you intend to update the blobs/clobs, store them in a different table from the one holding the rest of the data. In the past this mix made MySQL run slowly, as it had to resize the 'block' allocated to a row. So have one table with all the attributes and a different table just for the blob/clob (a sketch follows below). This is not the case if the table is read-only.

How to have a field member which is persisted in another schema?

Assume the following (I'm using MySQL):
@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable = "true")
public class TclRequest2 {

    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private long id;

    @Persistent(column = "userid")
    @Column(jdbcType = "INTEGER", length = 11, allowsNull = "false", defaultValue = "1")
    private Member member; // This object's table is in another schema

    // Getters and setters
}
The field member is persisted in another schema. I could solve this by specifying the "catalog" attribute in the Member class's @PersistenceCapable annotation, but that would kill the flexibility of specifying the schema name in the properties file I'm using, since I'm configuring JDO via a properties file.
Thank you.