Mapping values from foreign keys in a table - MySQL

I'm using Hibernate for my project, and what I'm essentially trying to do is figure out how to automatically map foreign-key values from my database table onto a data object.
For example, I have a users table with the following columns:
id - INT
username - VARCHAR
password - VARCHAR
email - VARCHAR
firstName - VARCHAR
lastName - VARCHAR
This is fairly straightforward to map as there are no foreign keys involved. The code I have is:
SQLQuery q = session.createSQLQuery("SELECT * FROM users WHERE username=? AND password=?");
q.setString(0, username);
q.setString(1, password);
q.addEntity(User.class);
List<User> users = q.list();
Now suppose I add some foreign keys to my users table, such as:
userlevel_id - INT
department_id - INT
which reference the user level a user belongs to and the user's department. How do I get Hibernate to map the user level name from the userlevel table and the department name from the department table? It won't be of much use if I just store the IDs in the User data object, as I will need to display the values in my views later on. Any help will be greatly appreciated, thanks!

Use an eagerly loaded @ManyToOne mapping on your User object. Create an entity class for each of your Department and UserLevel tables and add them as fields on your User object. The reason this mapping should be eagerly loaded is that there is never a situation where you want to load a User without their Department or UserLevel. Hibernate will then automatically map your User object to the appropriate Department and UserLevel.
If you want to, you can make the relationship bi-directional so you can get all users in a department by selecting the department. Anything you can do in SQL you can map using Hibernate.
Have a read of the documentation and see if that helps.
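As a rough sketch of what that mapping could look like (assuming standard JPA annotations and the userlevel_id / department_id columns from the question; all names are illustrative and should be adjusted to your schema):
@Entity
@Table(name = "users")
public class User {
    // javax.persistence (or jakarta.persistence) imports assumed

    @Id
    private Integer id;

    private String username;
    private String password;
    private String email;
    private String firstName;
    private String lastName;

    // Fetched together with the User, so the names are always available to the views.
    // @ManyToOne is eager by default; the fetch attribute just makes that explicit.
    @ManyToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "userlevel_id")
    private UserLevel userLevel;

    @ManyToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "department_id")
    private Department department;

    // getters and setters omitted
}
Department and UserLevel would then be plain entities with an id and a name field, so user.getDepartment().getName() gives you the value to display.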

Related

Is it possible to implement such a database structure in Spring Boot?

I use Hibernate as the framework for Spring Boot CRUD operations. But I have a problem: our workout information and recipe information come from an external API. This means that I only need to store the recipe and workout IDs in the database. The database structure is as follows.
DB structure (image)
As the data is provided by the API, we have not created corresponding entity tables for workouts and recipes. But then a problem arises: what should be done with the join table in this case? I mean, the premise of @ManyToMany is that two entity tables are needed, and we only have an entity table for users. Even if we use a composite primary key, there is still a need to add user_id as a foreign key in that composite primary key. How can we mark user_id as a foreign key without a workout or recipe entity table?
Your question is quite confusing, but I'll try to provide an answer based on what I understood.
If the data is completely external, you can map an element collection consisting of ids only:
@Entity
public class User {

    // other fields ...

    // IDs of recipes that exist only in the external API
    @ElementCollection
    @CollectionTable(name = "recipe_user")
    private Set<Long> recipeIds;

    // IDs of workouts that exist only in the external API
    @ElementCollection
    @CollectionTable(name = "workout_user")
    private Set<Long> workoutIds;
}
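For illustration, appending an id obtained from the external API could then look something like this (getRecipeIds() and entityManager are hypothetical names, assuming a standard accessor and an injected EntityManager):
// Inside a transaction
User user = entityManager.find(User.class, userId);
user.getRecipeIds().add(externalRecipeId); // id returned by the external API
// On flush, Hibernate inserts a row into the recipe_user collection table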

Does @OneToMany with @JoinTable have a unique constraint in JPA?

I have two tables. The USER entity has:
@OneToMany
@JoinTable(name = "user_roles")
private List<Role> roles;
The Role entity has a User:
@ManyToOne
private User user;
The description of the user_role table is:
Name            Null?     Type
USER_RECORD_ID  NOT NULL  NUMBER(19)
ROLE_RECORD_ID  NOT NULL  NUMBER(19)
NOTE: A user can have multiple roles, and I have already created the roles through a script; they have IDs 10001, 10002, 10003, etc.
In the user_role table I am inserting one user, 800001, with all the roles, so the table looks like:
USER_RECORD_ID ROLE_RECORD_ID
800001 10001
800001 10002
800001 10003
800002 10001 // This record throws a unique constraint error
So if I try to give a predefined role to a new user, it throws this error:
INSERT INTO USER_ROLE(USER_RECORD_ID,ROLE_RECORD_ID) VALUES(800002,10001)
Error report -
SQL Error: ORA-00001: unique constraint (SYSTEM.UK_LPLHY51JOJA1LP4465QK2E0AF) violated
00001. 00000 - "unique constraint (%s.%s) violated"
*Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
For Trusted Oracle configured in DBMS MAC mode, you may see
this message if a duplicate entry exists at a different level.
*Action: Either remove the unique restriction or do not insert the key.
I think the error is caused by the use of @ManyToOne/@OneToMany while the relation you have is @ManyToMany. This is so because, in the example you give, the USER_RECORD_ID with the value 800001 has multiple ROLE_RECORD_IDs, and the ROLE_RECORD_ID with the value 10001 has multiple USER_RECORD_IDs.
Therefore, try using @ManyToMany instead; this should fix your problem.
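For example, the owning side could be mapped along these lines (assuming the user_roles join table and the USER_RECORD_ID / ROLE_RECORD_ID columns from your description):
// In the User entity
@ManyToMany
@JoinTable(name = "user_roles",
        joinColumns = @JoinColumn(name = "USER_RECORD_ID"),
        inverseJoinColumns = @JoinColumn(name = "ROLE_RECORD_ID"))
private List<Role> roles;

// In the Role entity (optional inverse side)
@ManyToMany(mappedBy = "roles")
private List<User> users;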
Here is a reference in case you need it: https://en.wikibooks.org/wiki/Java_Persistence/ManyToMany
You have set up your mappings incorrectly: it seems you intended a bidirectional relationship that uses a relation table, but instead you have set up two independent relationships. The first, User.roles, uses the relation table, but the other side
@ManyToOne
private User user;
is telling JPA to set up a Role-User relation that uses a foreign key in the Role table. This doesn't seem to be the source of your problem, but it will cause you other issues and doesn't match what you are asking for: your Role can only reference a single User, yet you are asking for roles to be assigned to multiple users. Try:
@ManyToMany
@JoinTable(name = "user_roles")
private List<Role> roles;
... and in the Role entity:
@ManyToMany(mappedBy = "roles")
private List<User> users;
Also make sure you drop the database schema and let JPA recreate the database using these new mappings.
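Depending on your setup, that usually means a DDL-generation setting such as hibernate.hbm2ddl.auto=create in persistence.xml, or spring.jpa.hibernate.ddl-auto=create-drop in a Spring Boot project; adjust the property to your own configuration.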

Database Architecture - Keeping field with multiple values or Creating new table

I am trying to figure out the MySQL structure best practices for fields that contain more than one value. Should you ever have a field in a table that holds comma-separated values, or should this logic always live in a separate table?
For instance, I have an Organization and a Post table. Within these tables I have the following fields:
Organization:
ID - Integer
Organization Name - String
Admin - String
Sources - String
Post:
Post_ID - Integer
Title - String
Source - String (Value taken from list of "Sources")
Organization_ID - Integer (FK)
My "Sources" field contains a predefined list of domains that can be individually selected for each "Post". Currently I have a record for an Organization that contains Sources like "wikipedia.org","google.com"
I was wondering if this is the best way to store the values or if "Sources" should be its own table and link to the Organization.
It definitely sounds like you should make a many-to-many relation between organization and source.
I.e., you make a new table which has the IDs from organization and source as foreign keys.

Rails table association with ENUM column

Given the table uploads which holds the relation between 4 different apps and users:
field type
dogTag int (foreign key to dvd)
app enum
uploader int (foreign key to user)
mod string
...
And the table dvds:
dogTag int (primary key)
title string
...
And the table users:
id int (primary key)
...
How can I properly construct a model relations between the dvds table and the uploads table within Rails if it depends on an ENUM column?
With SQL I simply do:
JOIN uploads ON uploads.dogTag = dvds.dogTag
WHERE uploads.app = 'dvd'
But have no idea how to create this relationship in Rails and haven't found a lot of info on this.
Thanks
I don't have much of an idea on how to create a relation with an enum column, but if there are four models in your app that each have one upload model, then you can use a polymorphic association; Ryan has a great RailsCast on this.
Well, you could re-organise your DB to make the enum type a lookup table, or you could create a non-persistent model to implement it.
Thing is, if you had started from the model instead of the database, you wouldn't have gone anywhere near MySQL's enum type, and that's why you are struggling to find much; you've gone at it backwards.
My advice get rid of it...

Normalizing MySQL data

I'm new to MySQL, and just learned about the importance of data normalization. My database has a simple structure:
I have 1 table called users with fields:
userName (string)
userEmail (string)
password (string)
requests (an array of dictionaries in JSON string format)
data (another array of dictionaries in JSON string format)
deviceID (string)
Right now, this is my structure. Being very new to MySQL, I'm really not seeing why my above structure is a bad idea. Why would I need to normalize this and make separate tables? That's the first question: why? (Some have also said not to put JSON in my table. Why or why not?)
The second question is how? With the above structure, how many tables should I have, and what would be in each table?
Edit:
So maybe normalization is not absolutely necessary here, but maybe there's a better way to implement my data field? The data field is an array of dictionaries: each dictionary is just a note item with a few keys (title, author, date, body). What I do now, which I think might be inefficient, is this: every time a user composes a new note, I send that note from my app to PHP to handle. I get the JSON array of dictionaries already part of that user's data, convert it to a PHP array, append the new note to the end of that array, convert the whole thing back to JSON, and put it back in the table as an array of dictionaries. This process is repeated every time a new note is composed. Is there a better way to do this? Maybe a user's data should be a table, with each row being a note, but I'm not really sure how this would work?
The answer to all your questions really depends on what the JSON data is for, and whether you'll ever need to use some property of that data to determine which rows are returned.
If your data truly has no schema, and you're really just using it to store data that will be used by an application that knows how to retrieve the correct row by some other criteria (such as one of the other fields) every time, there's no reason to store it as anything other than exactly as that application expects it (in this case, JSON).
If the JSON data DOES contain some structure that is the same for all entries, and if it's useful to query this data directly from the database, you would want to create one or more tables (or maybe just some more fields) to hold this data.
As a practical example of this, if the data field contains JSON enumerating services for that user in an array, and each service has a unique id, type, and price, you might want a separate table with the following fields (using your own naming conventions):
serviceId (integer)
userName (string)
serviceType (string)
servicePrice (float)
Each service for that user would get its own entry. You could then query for users that have a particular service, which, depending on your needs, could be very useful. In addition to easy querying, indexing certain fields of the separate tables can also make for very quick queries.
Update: Based on your explanation of the data stored, and the way you use it, you probably do want it normalized. Something like the following:
# user table
userId (integer, auto-incrementing)
userName (string)
userEmail (string)
password (string)
deviceID (string)
# note table
noteId (integer, auto-incrementing)
userId (integer, matches user.userId)
noteTime (datetime)
noteData (string, possibly split into separate fields depending on content, such as subject, etc.)
# request table
requestId (integer, auto-incrementing)
userId (integer, matches user.userId)
requestTime (datetime)
requestData (string, again split as needed)
You could then query like so:
# Get a user
SELECT * FROM user WHERE userId = '123';
SELECT * FROM user WHERE userName = 'foo';
# Get all requests for a user
SELECT * FROM request WHERE userId = '123';
# Get a single request
SELECT * FROM request WHERE requestId = '325325';
# Get all notes for a user
SELECT * FROM note WHERE userId = '123';
# Get all notes from last week
SELECT * FROM note WHERE userId = '123' AND noteTime > CURDATE() - INTERVAL 1 WEEK;
# Add a note to user 123
INSERT INTO note (noteId, userId, noteData) VALUES (null, 123, 'This is a note');
Notice how much more you can do with normalized data, and how easy it is. It's trivial to locate, update, append, or delete any specific component.
Normalization is a philosophy. Some people think it fits their database approach, some don't. Many modern database solutions even focus on denormalization to improve speeds.
Normalization often doesn't improve speed. However, it greatly improves the simplicity of accessing and writing data. For example, if you wanted to add a request, you would have to write a completely new JSON field. If it was normalized, you could simply add a row to a table.
In normalization, an "array of dictionaries in JSON string format" is always bad. An array of dictionaries translates to a list of rows, which is a table.
If you're new to databases: NORMALIZE. Denormalization is something for professionals.
A main benefit of normalization is to eliminate redundant data, but since each user's data is unique to that user, there is no benefit to splitting this table and normalizing. Furthermore, since the front-end will employ the dictionaries as JSON objects anyway, undue complication and a decrease in performance would result from trying to decompose this data.
Okay, here is a normalized MySQL data model. Note: you can separate authors and titles into two tables to further reduce data redundancy. You can probably use similar techniques for the "requests" dictionaries:
CREATE TABLE USERS(
UID int NOT NULL AUTO_INCREMENT PRIMARY KEY,
userName varchar(255) UNIQUE,
password varchar(30),
userEmail varchar(255) UNIQUE,
deviceID varchar(255)
) ENGINE=InnoDB;
CREATE TABLE BOOKS(
BKID int NOT NULL AUTO_INCREMENT PRIMARY KEY,
FKUSERS int,
Title varchar(255),
Author varchar(50)
) ENGINE=InnoDB;
ALTER TABLE BOOKS
ADD FOREIGN KEY (FKUSERS)
REFERENCES USERS(UID);
CREATE TABLE NOTES(
ID int NOT NULL AUTO_INCREMENT PRIMARY KEY,
FKUSERS int,
FKBOOKS int,
Date date,
Notes text
) ENGINE=InnoDB;
ALTER TABLE NOTES
ADD FOREIGN KEY BKNO (FKUSERS)
REFERENCES USERS(UID);
ALTER TABLE NOTES
ADD FOREIGN KEY (FKBOOKS)
REFERENCES BOOKS(BKID);
In your case, I would abstract out the class that handles this table and keep the data as it is for now. If in the future the data access patterns change and I need to normalize the data, I can just do so with less impact on the program: I only need to change the class that handles this set of data to query the normalized tables, while returning the data as if the database structure never changed.
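A minimal sketch of that idea in Java (Note and NoteStore are hypothetical names; the same pattern applies in PHP or any other language):
import java.util.List;

// Hypothetical value object for a single note
final class Note {
    String title, author, date, body;
}

// Callers only ever see this interface; whether notes live in a JSON
// column or in a normalized note table is an implementation detail.
interface NoteStore {
    List<Note> notesForUser(long userId);
    void addNote(long userId, Note note);
}
One implementation can parse and rewrite the JSON data column today; a later one can read and insert rows in a normalized note table, and callers never notice the change.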