I have a scenario with a user table and an address table. In my understanding, the address is a value object in domain-driven design. How do I store value objects in a MySQL database? This may sound a bit confusing, but I can't get my head around the idea: value objects are immutable, so how do I store them?
Below are the classes of my two entities.
User.java
@Getter @Setter @NoArgsConstructor
@Entity // This tells Hibernate to make a table out of this class
@Table(name="user")
public class User {

    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @JsonProperty("userid")
    @Column(name="userid")
    private Long user_id;

    @JsonProperty("user_nome")
    private String nome;

    @JsonProperty("user_email")
    @Column(unique = true, nullable = false)
    private String email;

    @JsonProperty("user_cpf")
    private String cpf;

    @JsonProperty("user_telefone")
    private String telefone;

    @JsonProperty("user_celular")
    private String celular;

    @JsonProperty("user_senha")
    private String senha;

    @Column(name="createdAt", columnDefinition="TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP")
    @Temporal(TemporalType.TIMESTAMP)
    @JsonProperty("user_createdAt")
    private Date createdAt;

    @Column(name="updateAt", columnDefinition="TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP")
    @Temporal(TemporalType.TIMESTAMP)
    @JsonProperty("user_updateAt")
    private Date updateAt;

    /* Person p1 = new Person("Tom", "Smith");
       p1.setId(1L);
       p1.setStartDate(new Date(System.currentTimeMillis())); */
}
Address.java
@Getter @Setter @NoArgsConstructor
@Entity // This tells Hibernate to make a table out of this class
@Table(name="address")
public class Address {

    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @JsonProperty("address_id")
    private Long address_id;

    @JsonProperty("address_endereco")
    private String endereco;

    @JsonProperty("address_bairro")
    private String bairro;

    @JsonProperty("address_numero")
    private String numero;

    @JsonProperty("address_complemento")
    private String complemento;

    @JsonProperty("address_cidade")
    private String cidade;

    @OneToOne(fetch=FetchType.LAZY)
    @JoinColumn(name = "userid")
    private User userid;
}
Basically, however you want: you can enforce immutability in the database, but you don't have to. Immutability can be enforced in the database by creating a unique constraint on a combination of the address's values, zip code + house number for example.
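For illustration, a minimal sketch of what such a constraint could look like in JPA; the column and constraint names here are made up for the example, not taken from your entities:

// Hypothetical sketch: enforcing value-object uniqueness with a composite unique constraint.
@Entity
@Table(name = "address",
       uniqueConstraints = @UniqueConstraint(
           name = "uq_address_value",
           columnNames = { "zipcode", "house_number" }))
public class Address {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "zipcode")
    private String zipcode;

    @Column(name = "house_number")
    private String houseNumber;
    // ...
}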
As a database administrator I personally don't like developers enforcing immutability in the database, because I see implementing/enforcing business logic in the database as a category error. What is an immutable value within the domain is, to me, just data that needs to be stored consistently. Database rules are meant to ensure data consistency, and the added complexity of implementing immutability in the database can interfere with that. Let's do a thought experiment:
You ensure that an address value is unique in the database with a constraint that covers all properties, and you store your data. Some time later a customer places an order that happens to have the same address, but he lives at the North Pole. The order is saved but the address isn't, because my server throws an error: the address violates the constraint since it already exists in the US, and the country isn't saved/part of the constraint. Now I have a problem because that orphaned order violates the data model, you complain to me because my server threw an error, and it's up to me to figure out what's wrong with your design decision to apply the abstract concept of immutability outside your domain, and to update the data definition in a production environment in order to fix it.
So I think it's best to acknowledge that by storing data it leaves your domain, and that is a risk your design should take into account. What I'd advise (or silently implement, haha) is adding an ID to the table and recording versions of the same 'immutable value', for traceability, consistency, and the agility to react to unforeseen circumstances. Just like with user and transaction entities ;)
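A rough sketch of that suggestion, with illustrative names (each stored "version" of the value is simply a new row with its own surrogate id and timestamp):

// Hypothetical sketch of the surrogate-id + versioned-record idea; names are illustrative.
@Entity
@Table(name = "address")
public class AddressRecord {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id; // surrogate key, independent of the address "value" itself

    private String zipcode;
    private String houseNumber;

    @Temporal(TemporalType.TIMESTAMP)
    @Column(name = "created_at", updatable = false)
    private Date createdAt; // when this version of the value was stored

    // A "change" to the value is stored as a new row rather than an UPDATE,
    // so earlier versions remain traceable.
}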
I have a table User with columns
user_id, first_name, last_name, created_time (timestamp)
and this User entity class:
@Getter
@Setter
@Table(name="user")
@Entity
public class User {

    @Id
    private Long userId;

    @Column(name="first_name")
    private String firstName;

    @Column(name="last_name")
    private String lastName;

    @Column(name="created_time")
    private Timestamp createdTime;
}
I have a UserRepository interface:
public interface UserRepository extends CrudRepository<User, Long> {
    User findByUserId(Long id);
}
The created_time stored in the database table is 2020-09-08 15:38:13, but when I read the object using Spring Data JPA it is returned as "2020-09-08 21:08:13".
How do I ensure this automatic time zone conversion does not happen?
The root cause of the problem is that Jackson automatically converts the timestamp values to UTC and then serializes them.
To correct this behavior, add the following property to your application.properties, using the same time zone value as your DB server:
spring.jackson.time-zone=Asia/Kolkata
There is an article that explains this problem and also proposes solutions.
You can also have a look at this answer.
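If you'd rather control this per field instead of globally, Jackson's @JsonFormat annotation also takes a time zone; here is a sketch on the createdTime field from above (the pattern and zone are just examples):

// Sketch: per-field alternative to the global spring.jackson.time-zone property.
// import com.fasterxml.jackson.annotation.JsonFormat;
@JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss", timezone = "Asia/Kolkata")
@Column(name="created_time")
private Timestamp createdTime;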
I am very new to Hibernate. I am using Hibernate with JPA. I have an annotated entity class and a table related to that entity class.
@Entity
public class Test implements Serializable {

    @Id
    @GenericGenerator(name="inc", strategy="identity")
    @GeneratedValue(generator="inc")
    private int id;

    private String address; // setter, getter and constructor
}
Saving this entity inserts the data into the DB. But while my application is running, another application is inserting data into the same table. When my application then tries to save, a Duplicate entry '59' for key 'PRIMARY' exception is thrown. So I want to use a generator that generates the id at the database level rather than at the application level, and the generated identifier must be set back on my entity.
Use the table generator strategy or the sequence generator.
You do not have to specify a generator. You can use the default generator and never set the id manually. If the error still occurs, post your merge/persist method.
More information about generators can be found here: https://en.wikibooks.org/wiki/Java_Persistence/Identity_and_Sequencing
@Entity
public class Test implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private int id;

    private String address; // setter, getter and constructor
}
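And, purely as a sketch, the table-generator strategy mentioned above could look something like this (the generator and allocation-table names are made up for the example):

// Sketch of a table-based id generator; names are illustrative, not prescriptive.
@Entity
public class Test implements Serializable {

    @Id
    @TableGenerator(name = "test_gen",
                    table = "id_generator",
                    pkColumnName = "gen_name",
                    valueColumnName = "gen_value",
                    allocationSize = 50)
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "test_gen")
    private int id;

    private String address;
}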
I am working with EclipseLink and JPA 2.0.
These are my two entities:
Feeder entity:
@Entity
@Table(name = "t_feeder")
public class Feeder implements Serializable {

    private static final long serialVersionUID = 1L;

    // other fields ...

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "idAttachedFeederFk")
    private Collection<Port> portCollection;

    // other fields ...
}
Port entity:
@Entity
@Table(name = "t_port")
public class Port implements Serializable {

    // other fields ...

    @JoinColumn(name = "id_attached_feeder_fk", referencedColumnName = "id")
    @ManyToOne
    private Feeder idAttachedFeederFk;

    // other fields ...
}
And this is my code:
Feeder f = new Feeder();
// ... other setup ...
Port p = new Port();
p.setFeeder(f);
save(f); // this is the method that eventually calls persist
The problem is that only the feeder is persisted and not the port. Am I missing something? And especially, on which side exactly should I specify the cascading, given that in my database the port table references the feeder table with a foreign key?
EDIT
This simple piece of code worked fine for me:
public static void main(String[] args) {
    Address a1 = new Address();
    a1.setAddress("madinah 0");

    Employee e1 = new Employee();
    e1.setName("houssem 0");
    e1.setAddressFk(a1);

    saveEmployee(e1);
}
I am not sure why you would expect it to work: you are attempting to save a new instance of Feeder which has no connection whatsoever to the newly created Port.
By adding the cascade to the @OneToMany and calling save(feeder), EclipseLink would, if there were an association:
Insert the record for the Feeder.
Iterate the Port collection and insert the relevant records.
As I have noted however, this new Feeder instance has no Ports associated with it.
With regard to your simple example I assume when you say it works that both the new Address and Employee have been written to the database. This is expected because you have told the Employee about the Address (e1.setAddressFk(a1);) and saved the Employee. Given the presence of the relevant Cascade option then both entities should be written to the database as expected.
Given this it should then be obvious that calling save(port) would work if the necessary cascade option was added to the @ManyToOne side of the relationship.
However, if you want to call save(feeder) then you need to fix the data model. Essentially you should always ensure that the in-memory data model is correct at any given point in time, viz. if the first condition in the snippet below is true then the second condition must also be true.
Port p = new Port();
Feeder f = new Feeder();
p.setFeeder(f);

if (p.getFeeder().equals(f)) {
    // true
}

if (f.isAssociatedWithPort(p)) {
    // bad --> returns false
}
This is obviously best practice anyway, but ensuring the correctness of your in-memory model should mean you do not experience the type of issue you are seeing in a JPA environment.
To ensure the correctness of the in-memory data model you should encapsulate the set/add operations.
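A sketch of what that encapsulation could look like, using the entities from the question (the helper and setter names are illustrative):

// Sketch: keep both sides of the bidirectional association in sync in one place.
public class Feeder implements Serializable {
    // ... existing fields ...

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "idAttachedFeederFk")
    private Collection<Port> portCollection = new ArrayList<>();

    public void addPort(Port port) {
        portCollection.add(port); // the "one" side now knows about the port
        port.setFeeder(this);     // the "many" side carries the foreign key
    }
}

With that in place, feeder.addPort(p) followed by save(feeder) lets the cascade pick up the new Port as well.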
I am getting a list of entities and attempting to add more values to it and have them persist to the database... I am running into some issues doing this... Here is what I have so far...
Provider prov = emf.find(Provider.class, new Long(ID));
This entity has a many to many relationship that I am trying to add to
List<Organization> orgList = new ArrayList<Organization>();
...
orgList = prov.getOrganizationList();
So I now have the list of entities associated with that entity.
I search for some entities to add and I place them in the orgList...
List<Organization> updatedListofOrgss = emf.createNamedQuery("getOrganizationByOrganizationIds").setParameter("organizationIds", AddThese).getResultList();
List<Organization> deleteListOfOrgs = emf.createNamedQuery("getOrganizationByOrganizationIds").setParameter("organizationIds", DeleteThese).getResultList();
orgList.addAll(updatedListofOrgss);
orgList.removeAll(deleteListOfOrgs);
As you can see I also have a list of delete nodes to remove.
I heard somewhere that you don't need to call persist for such an operation and that JPA will persist the changes automatically. Well, it doesn't seem to work that way. Can you persist this way, or do I have to go through the link table entity and add these values that way?
public class Provider implements Serializable {

    @Id
    @Column(name="RESOURCE_ID")
    private long resourceId;

    ...

    @ManyToMany(fetch=FetchType.EAGER)
    @JoinTable(name="DIST_LIST_PERMISSION",
               joinColumns=@JoinColumn(name="RESOURCE_ID"),
               inverseJoinColumns=@JoinColumn(name="ORGANIZATION_ID"))
    private List<Organization> organizationList;

    ... // getters and setters
}
The link table that links together organizations and providers...
public class DistListPermission implements Serializable {

    @Id
    @Column(name="DIST_LIST_PERMISSION_ID")
    private long distListPermissionId;

    @Column(name="ORGANIZATION_ID")
    private BigDecimal organizationId;

    @Column(name="RESOURCE_ID")
    private Long resourceId;
}
The problem is that you are missing a cascade specification on your @ManyToMany annotation. The default cascade type for @ManyToMany is no cascade types, so any changes to the collection are not persisted. You will also need to add an @ElementDependent annotation to ensure that any objects removed from the collection will be deleted from the database. So, you can change your Provider implementation as follows:
@ManyToMany(fetch=FetchType.EAGER, cascade=CascadeType.ALL)
@ElementDependent
@JoinTable(name="DIST_LIST_PERMISSION",
           joinColumns=@JoinColumn(name="RESOURCE_ID"),
           inverseJoinColumns=@JoinColumn(name="ORGANIZATION_ID"))
private List<Organization> organizationList;
Since your Provider class is managed, you should not need to merge the entity; the changes should take effect automatically when the transaction is committed.
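As a rough usage sketch, assuming a resource-local EntityManager named em (the variable names otherwise follow your snippets):

// Sketch: with the cascade in place, modifying the managed entity's collection
// inside a transaction is flushed at commit without an explicit merge().
em.getTransaction().begin();

Provider prov = em.find(Provider.class, Long.valueOf(ID));
List<Organization> orgList = prov.getOrganizationList();
orgList.addAll(updatedListofOrgss);
orgList.removeAll(deleteListOfOrgs);

em.getTransaction().commit(); // the collection changes are written here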
I think this is pretty much the simplest case for mapping a Map (that is, an associative array) of entities.
@Entity
@AccessType("field")
class Member {

    @Id
    protected long id;

    @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY)
    @MapKey(name = "name")
    private Map<String, Preferences> preferences = new HashMap<String, Preferences>();
}
@Entity
@AccessType("field")
class Preferences {

    @ManyToOne Member member;
    @Column String name;
    @Column String value;
}
This looks like it should work, and it does, in HSQL. In MySQL, there are two problems:
First, it insists that there be a table called Members_Preferences, as if this were a many-to-many relationship.
Second, it just doesn't work: since it never populates Members_Preferences, it never retrieves the Preferences.
[My theory is that, since I only use HSQL in in-memory mode, it automatically creates Members_Preferences and never really has to retrieve the preferences map. In any case, either Hibernate has a huge bug in it or I'm doing something wrong.]
And of course, I sweat the problem for hours, post it here, and a minute later...
Anyway, the answer is the mappedBy element of the @OneToMany annotation:
@OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY, mappedBy = "member")
@MapKey(name = "name")
private Map<String, Preferences> preferences = new HashMap<String, Preferences>();
Which makes a certain sense: which field in the many entity points back to the one entity? Even granting that looking for a matching @ManyToOne field would have been too error-prone, I think what they do instead (assuming the existence of a mapping table) is even worse.
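As a usage sketch: because Preferences.member is the owning side in this mapping, both sides need to be set when adding an entry; a small helper on Member (the method name is illustrative, not part of the original code) keeps that in one place:

// Illustrative helper on Member; keeps the map key and the owning side consistent.
public void addPreference(Preferences pref) {
    pref.member = this;                // owning side: must be set or the row is not linked
    preferences.put(pref.name, pref);  // map key must match the @MapKey(name = "name") field
}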