I'm using Sequelize syntax to generate a composite unique index on two columns in a table, like so:
sequelize.define('Group', {
  code: {
    type: DataTypes.STRING,
    unique: 'GroupCompositeIndex'
  },
  active: {
    type: DataTypes.BOOLEAN,
    unique: 'GroupCompositeIndex'
  }
});
The goal is to have many inactive versions of a group code and to change which one is active, with only one active at a time. I will be changing the active value to null on one record and to 1 on another. I have never constructed a table to work like this. Will it behave the way I expect, precluding there from ever being two active groups with the same code, while permitting which group is active to change?
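If the dialect is MySQL, the behavior hinges on the fact that a UNIQUE index allows any number of NULLs, since each NULL is treated as distinct. A plain-SQL sketch of the constraint this model generates (table and column names assumed):

```sql
-- Hypothetical table matching the model above (names assumed).
CREATE TABLE `Groups` (
  id INT AUTO_INCREMENT PRIMARY KEY,
  code VARCHAR(255),
  active TINYINT(1),
  UNIQUE KEY GroupCompositeIndex (code, active)
);

-- Many inactive rows per code are allowed, because MySQL treats
-- each NULL as distinct in a unique index:
INSERT INTO `Groups` (code, active) VALUES ('G1', NULL);
INSERT INTO `Groups` (code, active) VALUES ('G1', NULL); -- OK

-- But only one active row per code:
INSERT INTO `Groups` (code, active) VALUES ('G1', 1);
INSERT INTO `Groups` (code, active) VALUES ('G1', 1);    -- fails: duplicate key
```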
So let's say I have a Sequelize model defined with paranoid defaulting to false:
const Country = sequelize.define('Country', {
  name: {
    type: DataTypes.STRING,
    defaultValue: '',
  },
  code: {
    type: DataTypes.STRING,
    defaultValue: '',
  },
  currency: {
    type: DataTypes.STRING,
    defaultValue: '',
  },
  languages: {
    type: DataTypes.STRING,
    defaultValue: '',
  },
  id: {
    type: DataTypes.INTEGER,
    primaryKey: true,
    autoIncrement: true
  },
  createdAt: DataTypes.DATE,
  updatedAt: DataTypes.DATE,
  deletedAt: DataTypes.DATE
});
Now when I invoke Model.destroy() on any record of the Country table, the record is hard-deleted. Enabling paranoid: true on the model definition would result in soft deletes.
I want to achieve the opposite: the paranoid flag on the model definition stays false, and we explicitly pass a flag to the Model.destroy() method to soft-delete an entry; by default, all records are hard-deleted.
I tried to sift through the documentation to find something but couldn't. I'd appreciate any help in case I missed something, or a workaround.
Why do I need to do this? Some background:
I joined a project with 100+ defined models on which the paranoid flag is not defined, so it is false by default. Thankfully, the createdAt, updatedAt and deletedAt timestamps are defined explicitly. But any call to Model.destroy() results in a hard delete.
I need to introduce the functionality of a soft delete without changing any model definitions (because that would result in unintended consequences). Again, thankfully, the Model.destroy() method is wrapped in a function that is used throughout the codebase.
I was thinking of introducing an optional flag on this wrapper function which would indicate whether the delete needs to be soft or hard. So the default functionality would be hard delete unless explicitly specified to be a soft delete.
The worst-case solution I can think of is that, when a soft delete is required, I replace the destroy call with a raw query that updates the deletedAt timestamp manually. But I'm hoping to find a cleaner solution than this :)
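That raw-query fallback could be sketched as follows (a sketch only; the Countries table name is assumed from Sequelize's default pluralization of the model above):

```sql
-- Manual soft delete: mark the row instead of removing it.
UPDATE Countries
SET deletedAt = NOW()
WHERE id = 42;
```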
The simplest solution would be to use the force: false option for a soft delete and force: true for a hard delete:
async function wrappedDestroy(item, isSoftDelete) {
  await item.destroy({ force: !isSoftDelete });
}
Of course, you need to turn on paranoid: true in the model because it also affects all findAll/findOne queries (I suppose you wish to hide all soft-deleted records from findAll/findOne by default).
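To illustrate how the wrapper's flag maps onto destroy()'s force option, here is a minimal sketch with a stub in place of a real model instance (no database involved; the stub just records what destroy() was called with):

```javascript
// A sketch of the wrapper, with a stub standing in for a real model instance.
function wrappedDestroy(item, isSoftDelete) {
  return item.destroy({ force: !isSoftDelete });
}

// Stub "item" that records the options destroy() receives.
const calls = [];
const stubItem = {
  destroy(options) {
    calls.push(options);
    return Promise.resolve();
  }
};

wrappedDestroy(stubItem, true);  // soft delete -> { force: false }
wrappedDestroy(stubItem, false); // hard delete -> { force: true }
console.log(calls); // [ { force: false }, { force: true } ]
```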
I think I've done enough research on this subject and I've only got a headache.
Here is what I have done and understood: I have restructured my MySQL database to keep my users' data in different tables, using foreign keys. So far I have concluded that foreign keys are only used for consistency and control; they do not automate anything (for example, to insert data about the same user into two tables I need two separate INSERT statements, and the foreign key does not make this automatic in any way).
Fine. Here is what I want to do: I want to use Sequelize to insert, update and retrieve data from all the related tables at once, and I have absolutely no idea how to do that. For example, when a user registers, I want to insert some user information into table A and, in the same task, insert some other data into table B (like the user's settings in a dedicated table). Same with retrievals: I want to get an object (or array) with all the related data from the different tables matching the criteria I want to find by.
The Sequelize documentation presents everything in a way where each topic depends on the previous one, and Sequelize is pretty bloated with a lot of stuff I do not need. I do not want to use .sync(). I do not want to use migrations. My database structure already exists, and I want Sequelize to attach to it.
Is it possible to insert and retrieve several related rows at the same time, getting/using a single Sequelize command/object? How?
Again, by "related data" I mean data "linked" by sharing the same foreign key.
Is it possible to insert and retrieve several related rows at the same time, getting/using a single Sequelize command/object? How?
Yes. What you need is eager loading.
Look at the following example:
const User = sequelize.define('user', {
  username: Sequelize.STRING,
});

const Address = sequelize.define('add', {
  address: Sequelize.STRING,
});

const Designation = sequelize.define('designation', {
  designation: Sequelize.STRING,
});

User.hasOne(Address);
User.hasMany(Designation);

sequelize.sync({ force: true })
  .then(() => User.create({
    username: 'test123',
    add: {
      address: 'this is dummy address'
    },
    designations: [
      { designation: 'designation1' },
      { designation: 'designation2' },
    ],
  }, { include: [Address, Designation] }))
  .then(user => {
    User.findAll({
      include: [Address, Designation],
    }).then((result) => {
      console.log(result);
    });
  });
In the console.log output, you will get all the data with all the associated models you included in the query.
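For context, the include above makes Sequelize fetch everything in one query; against an existing schema it boils down to SQL along these lines (table and foreign-key column names assumed from Sequelize's defaults):

```sql
-- Rough shape of the query Sequelize generates for the include above.
SELECT u.id, u.username, a.address, d.designation
FROM users u
LEFT OUTER JOIN adds a ON a.userId = u.id
LEFT OUTER JOIN designations d ON d.userId = u.id;
```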
I would like to be able to update my data table like this (see screenshot); each one of the 608 updates represents a date. So basically my jobSpec is like this:
var jobSpec = {
  configuration: {
    load: {
      destinationTable: {
        projectId: projectId,
        datasetId: 'Facebook',
        tableId: tableId
      },
      allowJaggedRows: true,
      writeDisposition: 'WRITE_TRUNCATE',
      schema: {
        fields: [
          {name: 'Page_ID', type: 'STRING'},
          {name: 'Post_ID', type: 'STRING'},
          {name: 'Post_creation_date', type: 'STRING'},
          {name: 'Post_name', type: 'STRING'},
          {name: 'Post_message', type: 'STRING'}
        ]
      }
    }
  }
};
and here is my job:
BigQuery.Jobs.insert(jobSpec, projectId, data);
I tried replacing 'WRITE_TRUNCATE' with 'WRITE_APPEND', but that merges all my updates. I would like to keep track of them as shown in my screenshot.
Thanks!
I'm not sure I fully understood your question, but to create tables like ga_sessions all you have to do is create tables with the same prefix and change some identifying suffix for each one.
For instance, if you go to the BigQuery web UI and create a table called "test_1", and then create another one just like the first but named "test_2", you will see the same result as with ga_sessions (but this time you will see test_(2)).
If you want to use the API, you'd have to do something like:
jobSpec.configuration.load.destinationTable.tableId = 'test_1';
BigQuery.Jobs.insert(jobSpec, projectId, data);

jobSpec.configuration.load.destinationTable.tableId = 'test_2';
BigQuery.Jobs.insert(jobSpec, projectId, data);
So it's not WRITE_APPEND or WRITE_TRUNCATE that you should be changing, but rather the table's name.
This type of partitioning is more "manual", and you are the one responsible for creating the different tables.
BigQuery also offers a more automatic option: a partitioned table. This type of table is a bit different from ga_sessions in that you have just one table; data inserted on, say, April 28 is automatically allocated to that day's partition. If you insert more data the next day, it is automatically allocated to the April 29 partition, and so on.
Later on, to query your data, you can use _PARTITIONTIME to select only the desired day.
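For example, selecting a single day from an ingestion-time partitioned table might look like this (project, dataset and table names assumed):

```sql
#standardSQL
SELECT *
FROM `my_project.Facebook.my_partitioned_table`
WHERE _PARTITIONTIME = TIMESTAMP('2017-04-28');
```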
It's a matter of identifying which one makes more sense for your project.
Is it possible to create a column on a MySQL table using Sequelize that can be initialized when creating a new row, but never updated?
For example, a REST service allows a user to update his profile. He can change any field except his id. I can strip the id from the request on the API route, but that's a little redundant because there are a number of different models that behave similarly. Ideally, I'd like to be able to define a constraint in Sequelize that prevents the id column from being set to anything other than DEFAULT.
Currently, I'm using a setterMethods entry for id that manually throws a ValidationError, but this seems hackish, so I was wondering if there's a cleaner way of doing this. Even worse, this implementation still allows the id to be set when creating a new record; I don't know a way around that, since when Sequelize generates the query it calls setterMethods.id to set the value to DEFAULT.
return sequelize.define('Foo',
  {
    title: {
      type: DataTypes.STRING,
      allowNull: false,
      unique: true,
      validate: {
        notEmpty: true
      }
    }
  },
  {
    setterMethods: {
      id: function (value) {
        if (!this.isNewRecord) {
          throw new sequelize.ValidationError(null, [
            new sequelize.ValidationErrorItem('readonly', 'id may not be set', 'id', value)
          ]);
        }
      }
    }
  }
);
Look at this Sequelize plugin:
https://www.npmjs.com/package/sequelize-noupdate-attributes
It adds support for no update and read-only attributes in Sequelize models.
In your specific case, you could configure the attribute with the following flags:
{
  title: {
    type: DataTypes.STRING,
    allowNull: false,
    unique: true,
    noUpdate: true
  }
}
That will allow the initial set of the title attribute if it is null, and then prevent any further modifications once it is set.
Disclaimer: I'm the plugin author.
I'm sort of scratching my head on this one. Here's the scenario: I'm using Doctrine and YAML schema files.
I have a table User and a table Event
User looks like this:
User:
  columns:
    id:
      type: integer(7)
    email:
      type: string(100)
    display_name:
      type: string(255)
    fb_id:
      type: string(100)
  relations:
    Event:
      type: many
      refClass: UserEvent
Event looks like this:
Event:
  columns:
    id:
      type: integer(7)
    initiator_id:
      type: integer(7)
    loc_latitude:
      type: decimal(11)
    loc_longitude:
      type: decimal(11)
    4sq_id:
      type: integer(11)
  relations:
    User:
      type: one
      local: initiator_id
      foreign: id
    User:
      type: many
      refClass: UserEvent
As you can see, the problem is this: a User (or 'initiator') can start many Events, and an Event can belong to one User ('initiator'). However, an Event can also have many Users who join it, and a User can join many Events.
So Event and User end up being related in two different fashions. How does this work? Is it possible to do it this way or am i missing something?
I think you just need one many-to-many relationship between the two tables. UserEvent will tell you which users have which events (and vice versa), and joining through UserEvent and adding WHERE user.id = event.initiator_id will give you access to a user's initiated events, assuming initiators also belong to their own events.
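A sketch of those queries in SQL (table and column names assumed from the schema above):

```sql
-- Events a given user attends (joined through the UserEvent ref table):
SELECT e.*
FROM event e
JOIN user_event ue ON ue.event_id = e.id
WHERE ue.user_id = :user_id;

-- Of those, the events the same user initiated:
SELECT e.*
FROM event e
JOIN user_event ue ON ue.event_id = e.id
WHERE ue.user_id = :user_id
  AND e.initiator_id = :user_id;
```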
You could just add an event_attendees table with the event id and user id as the two columns. Or is this not the question?