I am learning Node.js and I came across a situation which confused me a little. I could not find an answer to it on the internet, so I decided to ask here. Apologies in advance for any mistakes.
So I am using Sequelize as the ORM for MySQL. I created a 1:1 relationship between the User and Cart models as below:
const User = sequelize.define("User", {
  _id: {
    type: DataTypes.INTEGER.UNSIGNED,
    autoIncrement: true,
    allowNull: false,
    primaryKey: true,
  },
  name: DataTypes.STRING,
  email: DataTypes.STRING,
});

const Cart = sequelize.define("Cart", {
  _id: {
    type: DataTypes.INTEGER.UNSIGNED,
    autoIncrement: true,
    allowNull: false,
    primaryKey: true,
  },
});

User.hasOne(Cart);
Cart.belongsTo(User);
This creates the relationship and I can use the methods it generates, but the problem is that at the database level I can create unlimited Cart instances that all reference the same user. How can this be? After searching all day I have some partial ideas but no complete answer.
As I understand it, there is no 1:1 association at the SQL level; it only exists at the ORM level.
Is this right? If not, what am I doing wrong?
Thanks in advance!
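For what it's worth, the hypothesis above matches how Sequelize behaves: hasOne creates a plain foreign key on Cart without a unique constraint, so MySQL happily accepts many carts per user. A minimal sketch of pushing the 1:1 down to the SQL level, assuming Sequelize's default UserId foreign-key name:

// Make the foreign key unique so the database itself rejects a second
// Cart row for the same user. "UserId" is the default name Sequelize
// generates here; adjust it if your schema differs.
User.hasOne(Cart, { foreignKey: { name: "UserId", unique: true } });
Cart.belongsTo(User, { foreignKey: { name: "UserId" } });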
I am building a React Native app with MySQL as the database, and I am using the Sequelize ORM on Node.js. The problem is that I have a table called Like with a column called userId, in which I simply store the ID of the user. But the userId field gets cleared seemingly at random: when I restart the database, when there is an error and I need to restart the database, or when I restart the Android Emulator.
Here is how it looks:
CreateLikeModel.init({
  id: {
    type: DataTypes.INTEGER,
    autoIncrement: true,
    primaryKey: true,
    allowNull: false,
  },
  userId: {
    type: DataTypes.STRING,
    allowNull: true
  },
  food_Name: {
    type: DataTypes.STRING,
    allowNull: false
  },
  food_id: {
    type: DataTypes.INTEGER,
    allowNull: false
  }
},
And this is part of the function that saves the user ID when the user carries out a like:
const user = await CreateCustomer.findOne({ where: { id: req.user.id } });

const likeObj = {
  user_Id: user?.dataValues?.id?.toString(),
  food_Name: food?.dataValues?.food_Name,
  food_id: food.dataValues.id as number
};

// check if this user has liked this food
const checkUser = checkLike.find((single) => single?.dataValues.userId == user.dataValues.id);
if (checkUser) {
  const getLike = await CreateLikeModel.findOne({
    where: { food_Name: food.dataValues.food_Name, userId: user?.dataValues.id }
  });
  await getLike?.destroy(); // delete the like from the record if the user already liked
  return res.status(200).json('unliked');
}
await CreateLikeModel.create({ ...likeObj });
return res.status(200).json('liked');
And I connected to the database like this:
sequelizeDB.sync({ alter: true }).then(() => {
  console.log('connected to database')
})
I tried saving the userId as a string because it was a number before; initially, when I used a number, it usually reset the userId values to 0.
That still didn't solve the problem.
This was not happening before, when I was using user_name instead of userId. What could be causing the issue? For now, I manually re-enter the values in the database whenever they get deleted.
Inside likeObj, which you are using as the creation attributes for your like object, you have your user ID field in snake case as user_Id, while in your model it is defined in camelCase as userId.
I suspect this mismatch is the root of why the data isn't being populated correctly. Also keep in mind that .sync({ alter: true }) can be a destructive operation, as stated in the Sequelize documentation.
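A minimal corrected version of the creation attributes, matching the attribute name defined in the model above:

// The key must match the model attribute exactly (camelCase userId);
// Sequelize silently ignores the unknown user_Id key on create,
// which leaves the column empty.
const likeObj = {
  userId: user?.dataValues?.id?.toString(),
  food_Name: food?.dataValues?.food_Name,
  food_id: food.dataValues.id
};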
So let's say I have a Sequelize model defined with paranoid left at its default of false:
const Country = sequelize.define('Country', {
  name: {
    type: DataTypes.STRING,
    defaultValue: '',
  },
  code: {
    type: DataTypes.STRING,
    defaultValue: '',
  },
  currency: {
    type: DataTypes.STRING,
    defaultValue: '',
  },
  languages: {
    type: DataTypes.STRING,
    defaultValue: '',
  },
  id: {
    type: DataTypes.INTEGER,
    primaryKey: true,
    autoIncrement: true
  },
  createdAt: DataTypes.DATE,
  updatedAt: DataTypes.DATE,
  deletedAt: DataTypes.DATE
});
Now when I invoke Model.destroy() on any record of the Country table, the record is hard deleted. Enabling paranoid: true on the model definition would result in soft deletes.
I want to achieve the opposite: the paranoid flag on the model definition stays false, and I explicitly pass a flag to the Model.destroy() method to soft-delete an entry, while by default all records are hard deleted.
I tried to sift through the documentation in order to find something but couldn't. Would appreciate any help I can get in case I missed something or if there's a workaround.
Why do I need to do this? Some background:
I joined a project with 100+ defined models on which the paranoid flag is not set and so defaults to false. Thankfully, the createdAt, updatedAt and deletedAt timestamps are defined explicitly. But any call to the Model.destroy() function results in a hard delete.
I need to introduce the functionality of a soft delete without changing any model definitions (because that would result in unintended consequences). Again, thankfully, the Model.destroy() method is wrapped in a function which is used in the entire codebase.
I was thinking of introducing an optional flag on this wrapper function which would indicate whether the delete needs to be soft or hard. So the default functionality would be hard delete unless explicitly specified to be a soft delete.
The worst-case solution I can think of is, when a soft delete is required, to replace the destroy call with a raw query that updates the deletedAt timestamp manually. But I am hoping for cleaner solutions than this :)
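For concreteness, that raw fallback might look something like the sketch below (the countries table name and id value are placeholders):

// Hypothetical sketch: stamp deletedAt manually instead of deleting.
// Every read that should exclude these rows would then also need a
// manual "deletedAt IS NULL" condition.
await sequelize.query(
  'UPDATE countries SET deletedAt = NOW() WHERE id = :id',
  { replacements: { id: country.id } }
);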
The simplest solution would be to use the force: false option for a soft delete and force: true for a hard delete:
async function wrappedDestroy(item, isSoftDelete) {
  await item.destroy({ force: !isSoftDelete })
}
Of course, you need to turn on paranoid: true in the model, because it also affects all findAll/findOne queries (I assume you wish to hide all soft-deleted records from findAll/findOne by default).
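For reference, a sketch of how these options interact once paranoid: true is enabled on the model:

// With paranoid: true on the model definition:
await item.destroy();                // soft delete: sets deletedAt, keeps the row
await item.destroy({ force: true }); // hard delete: removes the row entirely

// Soft-deleted rows are excluded from queries by default;
// pass paranoid: false to include them for a specific query.
const everything = await Country.findAll({ paranoid: false });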
I have several models with the fields: createdAt, updatedAt and deletedAt. These are set to the type DATE which results in a timestamp that is accurate to one second. I want these fields to be precise to a millisecond and the way to do that is to set their type to DATE(6) using Sequelize. This is the migration that I am using:
module.exports = {
  up: (queryInterface, Sequelize) => {
    return Promise.all([
      queryInterface.changeColumn('transactions', 'createdAt', {
        type: Sequelize.DATE(6),
        allowNull: true,
      }),
      queryInterface.changeColumn('transactions', 'updatedAt', {
        type: Sequelize.DATE(6),
        allowNull: true,
      }),
      queryInterface.changeColumn('transactions', 'deletedAt', {
        type: Sequelize.DATE(6),
        allowNull: true,
      }),
    ])
  },
  down: (queryInterface, Sequelize) => {
    return Promise.all([
      queryInterface.changeColumn('transactions', 'createdAt', {
        type: Sequelize.DATE,
        allowNull: true,
      }),
      queryInterface.changeColumn('transactions', 'updatedAt', {
        type: Sequelize.DATE,
        allowNull: true,
      }),
      queryInterface.changeColumn('transactions', 'deletedAt', {
        type: Sequelize.DATE,
        allowNull: true,
      }),
    ])
  }
};
The transactions table has 93 million rows, and this migration ran for 10 hours before an internet problem caused it to time out. As such, is there any way to speed it up? This is a MySQL database, the Sequelize version is 5.8.6 and the sequelize-cli version is 4.0.
This migration runs the ALTER TABLE command on the given database table. Under the hood, ALTER TABLE creates a copy of the original table, makes the given changes and copies over the original data. For a large database, this will take time. The problem is that the table gets locked whilst this operation is running so any updates on it will not go through. For this situation, there are two tools available (that I am aware of):
pt-online-schema-change
gh-ost
These tools also create a copy of the table but they don't lock up the table so we can continue updating it whilst the operation runs. They also record any updates to the original table during the operation and update the new table accordingly.
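As an illustration, a pt-online-schema-change invocation for this migration might look roughly like this (mydb is a placeholder database name, connection options are omitted, and Sequelize's DATE(6) corresponds to DATETIME(6) on MySQL):

# Rebuilds the table online and swaps it in; try --dry-run first.
pt-online-schema-change \
  --alter "MODIFY createdAt DATETIME(6) NULL, MODIFY updatedAt DATETIME(6) NULL, MODIFY deletedAt DATETIME(6) NULL" \
  D=mydb,t=transactions \
  --execute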
I am building a new Node.js application with MySQL. I need to use the existing database from the original version of the application, so I am using a MySQL dump file from the old database to create the new database.
I generated the models automatically based on the existing database using sequelize-auto module.
The createdAt field does not exist in the database and timestamps are specifically disabled in all of the models. However, I am still seeing this error when running a query such as models.people.findAll() or models.people.findOne():
"SequelizeDatabaseError: Unknown column 'createdAt' in 'field list'"
I followed the instructions in this other post to disable timestamps globally; however, it did not solve the issue:
Sequelize Unknown column '*.createdAt' in 'field list'
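For reference, the global approach from that post sets timestamps: false in the define option of the Sequelize constructor, so it applies to every model; a sketch with placeholder connection arguments:

const sequelize = new Sequelize('database', 'user', 'password', {
  dialect: 'mysql',
  define: {
    timestamps: false // default applied to every model
  }
});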
Here are the relevant versions:
"mysql": "^2.17.1",
"mysql2": "^1.6.5",
"sequelize": "^5.8.5",
"sequelize-auto": "^0.4.29",
"sequelize-auto-migrations": "^1.0.3"
Here is the people model. Timestamps have been disabled explicitly in this and all other models, and globally as well. Also, I have read that timestamps: false is the default in the latest version of Sequelize, so I am confused as to how this is an issue at all.
module.exports = function(sequelize, DataTypes) {
  return sequelize.define('people', {
    PersonID: {
      type: DataTypes.INTEGER(10).UNSIGNED,
      allowNull: false,
      primaryKey: true
    },
    FirstName: {
      type: DataTypes.STRING(50),
      allowNull: false
    },
    LastName: {
      type: DataTypes.STRING(50),
      allowNull: false
    },
    Username: {
      type: DataTypes.STRING(50),
      allowNull: true
    },
    EmailAddress: {
      type: DataTypes.STRING(255),
      allowNull: true
    }
  }, {
    tableName: 'people',
    timestamps: 'false'
  });
};
I see that you use sequelize-auto-migrations in your dependencies, and I found an open issue in their repo related to your problem.
You may have made a mistake setting the timestamps, or the migrations are executed on the server after the timestamps were disabled, so the created_at columns are being added back to your tables.
I hope this helps you.
I've just started to get into the Sails framework for Node. But it seems like I can't get the unique requirement to work when adding, for example, users to the sails-mysql database. At the moment I can add an unlimited number of new users with the same username and email.
From what I have read it should work, and when I tried with sails-memory this exact code did work. Is there something I have missed?
module.exports = {
  attributes: {
    username: {
      type: 'string',
      required: true,
      unique: true
    },
    firstname: {
      type: 'string',
      required: true
    },
    lastname: {
      type: 'string',
      required: true
    },
    password: {
      type: 'string',
      required: true
    },
    birthdate: {
      type: 'date',
      required: true
    },
    email: {
      type: 'email',
      required: true,
      unique: true
    },
    phonenumber: 'string',

    // Create the user's full name automatically
    fullname: function() {
      return this.firstname + ' ' + this.lastname;
    }
  }
};
As I mentioned above, this does work with the memory storage. And now I have also tried with MongoDB, where it works fine as well.
Got support from Sails.js on twitter: "it uses the db layer- suspect it's an issue with automigrations. Would you try in a new MySQL db?"
This answer did work: with a new db, everything was just working :)
Just to add to this: since Sails uses auto-migrations, if you initially start the server while your model does not mark an attribute as unique, the table is built without the unique (index) switch. If you then change an existing attribute in the model to unique, the table will not be rebuilt on subsequent server starts.
One remedy during development is to set the migrate setting in your model to drop, like this:
module.exports = {
  migrate: 'drop', // drops all your tables and re-creates them; note that you lose the underlying data

  attributes: {
    ...
  }
};
That way, the db would be rebuilt each time you start the server. This would of course drop any existing data as well.
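As a side note, the same setting can live in config/models.js so it applies to every model at once; a sketch of that standard Sails configuration:

// config/models.js, applies to all models:
module.exports.models = {
  // 'drop'  : rebuild all tables on every lift (development only)
  // 'alter' : attempt to auto-migrate while keeping data
  // 'safe'  : never touch the schema (use in production)
  migrate: 'alter'
};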