MySQL seeded table is missing a column

I'm building a ReactJS/Node.js World Cup score prediction web app, and I just stumbled upon an odd problem. Apparently, when I seed my db, one column is missing.
I have a "seed.js" file containing data from every World Cup game. Each game is an object like this:
{
  "gameTime": "2022-11-30T15:00:00Z",
  "homeTeam": "tun",
  "awayTeam": "fra",
  "groupLetter": "d",
},
(I've included a single game for reference; there are 47 more like this one.)
And here's the Prisma schema:
generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["referentialIntegrity"]
}

datasource db {
  provider             = "mysql"
  url                  = env("DATABASE_URL")
  referentialIntegrity = "prisma"
}

model User {
  id        String   @id @default(cuid())
  name      String
  email     String   @unique
  username  String   @unique
  password  String
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  guesses   Guess[]
}

model Game {
  id          String   @id @default(cuid())
  homeTeam    String
  awayTeam    String
  gameTime    DateTime
  groupLetter String
  createdAt   DateTime @default(now())
  updatedAt   DateTime @updatedAt
  guesses     Guess[]

  @@unique([homeTeam, awayTeam, gameTime, groupLetter])
}

model Guess {
  id            String   @id @default(cuid())
  userId        String
  gameId        String
  homeTeamScore Int
  awayTeamScore Int
  user          User     @relation(fields: [userId], references: [id])
  game          Game     @relation(fields: [gameId], references: [id])
  createdAt     DateTime @default(now())
  updatedAt     DateTime @updatedAt

  @@unique([userId, gameId])
}
However, when I seed those games into my db, here's how each game object comes out:
{
  "id": "cladepd2o0011wh7ca47fo06f",
  "homeTeam": "tun",
  "awayTeam": "fra",
  "gameTime": "2022-11-30T15:00:00.000Z",
  "createdAt": "2022-11-12T04:07:07.632Z",
  "updatedAt": "2022-11-12T04:07:07.632Z"
}
There's no "groupLetter" argument! And that's a crucial one because I use it as a filter, so my app shows the games one group at a time. And then, when it tries to fetch games by group letter, I get the following error:
[GET] /games?groupLetter=a
01:12:46:96
PrismaClientValidationError: Unknown arg `groupLetter` in where.groupLetter for type GameWhereInput.
    at Document.validate (/var/task/node_modules/@prisma/client/runtime/index.js:29297:20)
    at serializationFn (/var/task/node_modules/@prisma/client/runtime/index.js:31876:19)
    at runInChildSpan (/var/task/node_modules/@prisma/client/runtime/index.js:25100:12)
    at PrismaClient._executeRequest (/var/task/node_modules/@prisma/client/runtime/index.js:31883:31)
    at async PrismaClient._request (/var/task/node_modules/@prisma/client/runtime/index.js:31812:16)
    at async list (file:///var/task/api/games/index.js:14:23)
    at async bodyParser (/var/task/node_modules/koa-bodyparser/index.js:95:5)
    at async cors (/var/task/node_modules/@koa/cors/index.js:108:16) {
  clientVersion: '4.4.0'
}
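For reference, the list handler in api/games/index.js is doing a filter along these lines (a sketch of the assumed shape, not the actual code):

const games = await prisma.game.findMany({
  where: { groupLetter: "a" },
});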
I already tried to drop the "Game" table via PlanetScale and push it again, to no avail. I just don't understand why "groupLetter" is missing from the prod database.
Oh, locally it's all working fine (the app shows the games sorted by group, exactly how it's supposed to be).
I hope I didn't make it even more confusing than it already is. Can you guys please help me out?

Never mind... I realized, after staying up all night in front of the screen researching and doing a lot of trial and error, that I hadn't rerun npx prisma generate. Problem solved now.
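For anyone who lands here: after changing schema.prisma, the client has to be regenerated (and the schema pushed again) before redeploying. Assuming the usual PlanetScale workflow without migrations, that's roughly:

npx prisma generate   # regenerate the Prisma client from schema.prisma
npx prisma db push    # sync the schema to the database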


Custom fields in Many2Many JoinTable

I have this model with a custom JoinTable:
type Person struct {
    ID        int
    Name      string
    Addresses []Address `gorm:"many2many:person_addresses;"`
}

type Address struct {
    ID   uint
    Name string
}

type PersonAddress struct {
    PersonID  int
    AddressID int
    Home      bool
    CreatedAt time.Time
    DeletedAt gorm.DeletedAt
}
How is it possible to assign a value to the Home field when creating a new Person?
Method 1
From what I can see in the docs, here's a clean way you might currently do this:
DB.SetupJoinTable(&Person{}, "Addresses", &PersonAddress{})

addr1 := Address{Name: "addr1"}
DB.Create(&addr1)
addr2 := Address{Name: "addr2"}
DB.Create(&addr2)

person := Person{Name: "jinzhu"}
DB.Create(&person)

// Add an association with default values (i.e. Home = false)
DB.Model(&person).Association("Addresses").Append(&addr1)

// Add an association with custom values
DB.Create(&PersonAddress{
    PersonID:  person.ID,
    AddressID: int(addr2.ID), // Address.ID is uint, so convert
    Home:      true,
})
Here we're using the actual join table model to insert a row with the values we want.
We can also filter queries for the association:
addr := Address{}
// Query the association with filters on the join table
DB.Where("person_addresses.home = true").
    Model(&person).
    Association("Addresses").
    Find(&addr)
Method 2
Here's a more magical way, by (ab)using the Context to pass values to a BeforeSave hook, in addition to the SetupJoinTable code from above:
func (pa *PersonAddress) BeforeSave(tx *gorm.DB) error {
    home, ok := tx.Statement.Context.Value("home").(bool)
    if ok {
        pa.Home = home
    }
    return nil
}

// ...

DB.WithContext(context.WithValue(context.Background(), "home", true)).
    Model(&person).
    Association("Addresses").
    Append(&addr2)
This method feels icky to me, but it works.
As you can find in the official GORM documentation, you can implement hook methods for each table (struct).
In particular, you can implement BeforeCreate() and/or AfterCreate() methods for your join table, and GORM will call them at the right time; a sketch follows below.
You can do anything inside those methods to achieve your goal.
Here you will find the full documentation.
Enjoy ;)
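A minimal sketch of such a hook on the PersonAddress join table from the question (it requires the SetupJoinTable call from Method 1; the check inside is just an illustration, not something from the docs):

import (
    "errors"

    "gorm.io/gorm"
)

// BeforeCreate runs whenever GORM is about to insert a PersonAddress row.
func (pa *PersonAddress) BeforeCreate(tx *gorm.DB) error {
    // Illustrative only: refuse join rows with a missing foreign key.
    if pa.PersonID == 0 || pa.AddressID == 0 {
        return errors.New("person_addresses: PersonID and AddressID are required")
    }
    return nil
}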

mysql: why would I require AttributeConverter over enum to map a DB column having enum datatype with a JPA entity?

I have a user database table:
CREATE TABLE IF NOT EXISTS `user` (
  `user_id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
  `first_name` VARCHAR(45) NOT NULL,
  `active_status` ENUM('ACTIVE', 'PENDING', 'DEACTIVATED', 'BLOCKED', 'SPAM', 'DELETED') NOT NULL,
  -- other columns (including `unique_id` and `email`) omitted
  UNIQUE INDEX `unique_id_UNIQUE` (`unique_id` ASC),
  UNIQUE INDEX `email_UNIQUE` (`email` ASC),
  PRIMARY KEY (`user_id`))
ENGINE = InnoDB;
I mapped it to a corresponding JPA entity class as:
@Entity
public class User implements OfloyEntity {

    @Id
    @GeneratedValue(strategy = IDENTITY)
    @Column(name = "user_id", unique = true, nullable = false)
    private int userId;

    // other fields

    @Enumerated(EnumType.STRING)
    @Column(name = "active_status", nullable = false, length = 11)
    private UserStatus activeStatus;
As you can see, I have mapped activeStatus to an enum UserStatus to restrict the entries from the persistence layer itself.
public enum UserStatus {
    ACTIVE,
    PENDING,
    DEACTIVATED,
    BLOCKED,
    DELETED,
    SPAM
}
I want to know: is there any drawback to using this approach for implementing a DB enum in the persistence layer? I've gone through multiple articles which recommend using AttributeConverter, but since the values in my enum are very limited and have little chance of modification, I can't relate those articles to my requirement.
Is there something I'm missing, or is there any improvement that can be made to my design?
Articles I went through:
vladmihalcea
thorban, and some other Stack Overflow questions.
Update: After reading the answer from Jens, I decided to implement an AttributeConverter (for the user's gender). And that confused me a little.
Why I decided to use ENUM as the MySQL column type: it restricts the values and requires less space, because MySQL stores the ordinal value of its enum behind the scenes and presents the String value only when asked.
My implementation of gender:
public enum UserGender {

    MALE('M'),
    FEMALE('F'),
    OTHER('O');

    private Character shortName;

    private UserGender(Character shortName) {
        this.shortName = shortName;
    }

    public Character getShortName() {
        return shortName;
    }

    public static UserGender fromShortName(Character shortName) {
        switch (shortName) {
            case 'M': return UserGender.MALE;
            case 'F': return UserGender.FEMALE;
            case 'O': return UserGender.OTHER;
            default:
                throw new UserGenderNotSupportedException(
                        "user gender with shortName : " + shortName + " not supported");
        }
    }
}
converter class:
@Converter(autoApply = true)
public class UserGenderConverter implements AttributeConverter<UserGender, Character> {

    @Override
    public Character convertToDatabaseColumn(UserGender userGender) {
        return userGender.getShortName();
    }

    @Override
    public UserGender convertToEntityAttribute(Character dbGender) {
        return UserGender.fromShortName(dbGender);
    }
}
Now, the major doubts:
1. As per the blogs, using a MySQL ENUM in the DB is evil because adding extra values to the enum column someday would require an ALTER TABLE. But isn't it the same with an AttributeConverter? There we also use a Java enum, which would need to change if new genders are ever required.
2. If I use an AttributeConverter, I would have to document the Java enum (UserGender here) somewhere so that the DBA can understand what F, M, and O stand for. Am I right here?
The articles gave you a rich selection of potential drawbacks. Using @Enumerated(EnumType.STRING) has the following:
It uses a lot of space compared to other options. Note that this means more data needs to be loaded and transferred over the wire, which affects performance as well. We have no idea if this is a problem for you, and you won't know either until you run some performance tests.
It ties the names of the enum values hard to the column values, which can be risky since developers are used to renaming things quickly, and you would need tests with actual legacy data to catch this.
If you don't work with really huge amounts of data, for which updating the column for all rows is an actual problem, I wouldn't sweat it. It's easy enough to introduce an AttributeConverter and update the data when the simple solution actually becomes a problem.
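For scale, that later switch is only a handful of statements. A sketch against the question's user table, assuming single-letter codes in the style of the question's UserGenderConverter (the codes are made up):

ALTER TABLE `user` MODIFY `active_status` VARCHAR(11) NOT NULL;
UPDATE `user` SET `active_status` = 'A' WHERE `active_status` = 'ACTIVE';
-- ...one UPDATE per remaining value...
ALTER TABLE `user` MODIFY `active_status` CHAR(1) NOT NULL;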
Update regarding the updated question:
I don't buy into the argument that anything is "evil" because it might require an ALTER TABLE statement. By this argument, we should abolish relational databases completely because using them requires DDL and evolution of an application will require more of it. Of course, the necessity of a DDL statement makes a deployment a little more complex. But you need to be able to handle this thing anyway.
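For what it's worth, the DDL under discussion is a one-liner; extending the question's ENUM column just restates the value list (the new ARCHIVED value is made up for illustration):

ALTER TABLE `user`
  MODIFY `active_status` ENUM('ACTIVE', 'PENDING', 'DEACTIVATED',
                              'BLOCKED', 'SPAM', 'DELETED', 'ARCHIVED') NOT NULL;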
But it is true that with an AttributeConverter you wouldn't need any DDL in this case, because you'd just put another value in the same column which doesn't have any special constraints except the maximum length of the value. This assumes you don't have a check constraint on the column to limit the legal values.
Do you have to document the relationship between Enum and the value stored in the DB? Depends on your team. Does the DBA even care about the meaning of the data? Does the DBA have access and the skills to understand the Java code? If the DBA needs or wants to know and can't or won't get the information from the source code you have to document it. True.

algorithm verifying data from user between two tables then insert into another table

Greetings. I need to get details from users and validate those details against another table: if the date doesn't match, insert a row into that table, but if it does match, don't insert anything. This has to be done for all the users. These are the domains:
class User {
    String orderNumber
    String dealer
    int UserKm
    String dateUser
    String adviser
    Vehicle vehicle
    String dateCreated
    // This date has to be validated against Appointments.appointmentDate;
    // if it doesn't exist there, a row can be inserted into that table.
    Date appointmentDate
}

class Appointments {
    User user
    Date managementDate
    Date lasDataApointies
    DateNext appointmentDate
    Date NextdAteAppointment
    Date callDate
    String observations
}
def result = User.executeQuery("""select new map(
    mmt.id as id, mmt.orderNumber as orderNumber, mmt.dealer.dealer as dealer,
    mmt.UserKm as UserKm, mmt.dateUser as dateUser, mmt.adviser as adviser,
    mmt.technician as technician, mmt.vehicle.placa as vehicle,
    mmt.dateCreated as dateCreated, mmt.currenKm as currenKm)
    from User as mmt""")

def result1 = result.groupBy { it.vehicle }
List detailsReslt = []

result1?.each { placa, listing ->
    def firsT = listing.first()
    int firstKM = firsT.UserKm
    def lasT = listing.last()
    def lasDataApoint = lasT.id
    int lastKM = lasT.UserKm
    int NextAppointmentKM = lastKM + 5000
    int dayBetweenLastAndNext = lastKM - NextAppointmentKM
    def tiDur = getDifference(firsT.dateUser, lasT.dateUser)
    int dayToInt = tiDur.days
    int restar = firstKM - lastKM
    int kmPerDay = restar.div(dayToInt)
    int nextMaintenaceDays = dayBetweenLastAndNext.div(kmPerDay)
    def nextAppointment = lasT.dateUser + nextMaintenaceDays
    detailsReslt << [placa: placa, nextAppointment: nextAppointment,
                     manageId: lasDataApoint, nextKmUser: NextAppointmentKM]
}

detailsReslt?.each {
    Appointments addUserData = new Appointments()
    addUserData.user = User.findById(it.manageId)
    addUserData.managementDate = null
    addUserData.NextdAteAppointment = null
    addUserData.observations = null
    addUserData.callDate = it.nextAppointment
    addUserData.save(flush: true)
}
println "we now have ${detailsReslt}"
Based on the details, which are not complete, and looking at the code, I can suggest:
There is no need to query into a map; you can simply query the list of users and check all the properties, like user.vehicle. In any case, you need to check each row.
The groupBy { it.vehicle } is not clear, but if needed you can do it using createCriteria with a "groupProperty" projection, as sketched below.
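A rough sketch of that (the vehicle property name is taken from the question):

def vehicles = User.createCriteria().list {
    projections {
        groupProperty('vehicle')
    }
}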
Create two service methods, one for iterating over all users and one for each user:
def validateAppointment(User user) {
    /* your validation logic */
    ....
    if (/* validation passes */) {
        Appointments addUserData = new Appointments()
        ...
    }
}

def validateAppointments() {
    List users = User.list()
    users.each { User user ->
        validateAppointment(user)
    }
}
You can trigger the validateAppointments service from anywhere in the code, or create a scheduled job so it runs automatically, based on your needs.
If your list of users is big, and also for efficiency, you can do bulk updates; take a look at my post about it: https://medium.com/meni-lubetkin/grails-bulk-updates-4d749f24cba1
I would suggest creating a custom validator using a service, something like this:
class User {
    def appointmentService
    ...
    Date appointmentDate

    static constraints = {
        appointmentDate validator: { val, obj ->
            obj.appointmentService.isDateAppointmentValid(obj.appointmentDate)
        }
    }
}
But keep in mind that validation may run more often than you think. It is triggered by the validate() and save() methods as you'd expect (as explained in the user guide (v3.1.15)). So I'm not sure this is the best way to validate `appointmentDate` in your domain; you have to be careful about that.
Hope this helps.

golang -> gorm: How can I use sql.NullInt64 to be int(10) in MySQL?

type Contact struct {
    gorm.Model
    PersonID sql.NullInt64
}

type Person struct {
    gorm.Model
}
I am trying to use gorm with MySQL in the previous code, but I have the following problem.
I want to:
Use sql.NullInt64 to work easily with null values.
Use the base model definition gorm.Model, including the fields ID, CreatedAt, UpdatedAt, DeletedAt.
Add a constraint with Db.Model(&models.Contact{}).AddForeignKey.
My problem:
Person.ID becomes int(10) in MySQL.
Contact.PersonID becomes bigint(20).
MySQL needs the same type for a primary key and a foreign key that references it.
Can somebody help me solve this?
The "magic" on gorm.Model is only the name of the fields, any struct with these fields look like this according to the gorm documentation, at the end of Conventions
For example: Save records having UpdatedAt field will set it to current time.
Or
Delete records having DeletedAt field, it won't be deleted from database, but only set field DeletedAt's value to current time, and the record is not findable when querying, refer Soft Delete
So solve the issue is very easy, this is the code for my case:
package models

import "time"

type Model struct {
    ID        uint `gorm:"primary_key;type:bigint(20) not null auto_increment"`
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt *time.Time `sql:"index"`
}
So now I only need to use it as the base model :)
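A minimal sketch of the end result, assuming gorm v1's AddForeignKey API (which the question mentions) and gorm's default pluralized table name "people":

type Person struct {
    models.Model
}

type Contact struct {
    models.Model
    PersonID sql.NullInt64 // maps to bigint(20), matching Model.ID above
}

// Both columns are now bigint(20), so MySQL accepts the constraint:
db.Model(&models.Contact{}).AddForeignKey("person_id", "people(id)", "RESTRICT", "RESTRICT")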

One to many relationship with gorm in golang doesn't work

I have two tables:
type Person struct {
    ID        int
    FirstName string
    LastName  string
    Functions []Function
}

type Function struct {
    gorm.Model
    Info   string
    Person Person
}
I create the tables like this:
db.AutoMigrate(&models.Person{}, &models.Function{})
I then initialize the database:
user := models.Person{
    FirstName: "Isa",
    LastName:  "istcool",
    Functions: []models.Function{{Info: "Trainer"}, {Info: "CEO"}},
}
db.Create(&user)
Now the problem is that my Person table only has FirstName and LastName columns, and my Function table only has the Info column.
And when I run my GET request, I get people with a functions field that is always null.
Here is a screenshot from my GET request and my db.
To see the code, visit my GitHub repo.
Finally found the answer!!
The problem is in my GET function: I have to use
db.Preload("Functions").Find(&[]models.Person{})
instead of
db.Find(&[]models.Person{})
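For context, a sketch of what the GET handler ends up looking like (the function name and error handling are assumptions, not from the repo):

func getPeople(db *gorm.DB) ([]models.Person, error) {
    var people []models.Person
    // Preload("Functions") runs a second query to fill the association,
    // so the functions field in the JSON response is no longer null.
    if err := db.Preload("Functions").Find(&people).Error; err != nil {
        return nil, err
    }
    return people, nil
}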