Sequelize findOrCreate not excluding attributes defined in the exclude array - mysql

Following is the code:
Accounts.findOrCreate({
where: {
userName: request.payload.userName
},
attributes: { exclude: ['password','sessionToken'] },
defaults: request.payload
}).spread(function (account, created) {
if (created) {
var account = account.get({
plain: true
});
console.log(account); // has the password and sessionToken fields
return reply(account).code(201);
} else {
return reply("user name already exists").code(422);
}
});
I noticed that Sequelize first fires a SELECT query in which the password field is not present, then it fires an INSERT statement in which the password field is present (and it needs to be there for the insert).
I would just like the password and sessionToken not to be present in the resulting account object. I could of course delete those properties from the object, but I am looking for a more straightforward way.

It seems like you need to delete those fields manually. According to the source code, the findOrCreate method first fires findOne and then goes on to create if an instance was not found. The create method does not accept an attributes parameter, so in that case all fields are returned.
A good solution would be to create an instance method on the Accounts model that returns an object with only the desired attributes.
{
instanceMethods: {
toJson: function() {
let account = {
id: this.get('id'),
userName: this.get('userName')
// and other fields you want to include
};
return account;
}
}
}
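For context, this options object would be passed as the third argument to sequelize.define in Sequelize v3; instanceMethods was removed in v4, where the equivalent is to attach the method to the model's prototype. A rough sketch of both, assuming a sequelize instance is in scope and with the attribute definitions elided:
// Sequelize v3 style: instanceMethods in the define options (third argument)
var Accounts = sequelize.define('Accounts', { /* attributes */ }, {
  instanceMethods: {
    toJson: function () {
      return { id: this.get('id'), userName: this.get('userName') };
    }
  }
});

// Sequelize v4+ style: instanceMethods is gone, attach the method to the prototype instead
Accounts.prototype.toJson = function () {
  return { id: this.get('id'), userName: this.get('userName') };
};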
Then you could simply use the toJson method when returning a raw representation of the object:
Accounts.findOrCreate({ where: { userName: 'username' } }).spread((account, created) => {
return account ? account.toJson() : null;
});

As mentioned by piotrbienias, you can follow his approach; otherwise just delete the unwanted properties like this:
Accounts.findOrCreate({
where: {
userName: request.payload.userName
},
defaults: request.payload
}).spread(function (account, created) {
if (created) {
var account = account.get({
plain: true
});
delete account.password;
delete account.sessionToken;
console.log(account); // now you don't have the password and sessionToken fields
return reply(account).code(201);
} else {
return reply("user name already exists").code(422);
}
});

Related

Sequelize update ignores invalid column names

When I try to update a certain entry in the DB, Sequelize's Model.update() ignores a wrong column name passed to it. Say my table has columns 'id' and 'password'. If I pass an object that has 'id' and 'pwd' to the update function, 'pwd' is simply ignored and 'id' is updated.
Is there a way, through Sequelize, to check if an invalid column name is being passed to the update function?
You can do this by adding a custom instance method to your Model via its prototype, so that you can access this and pass the call along to this.update() if your checks pass.
const MyModel = sequelize.define(
'table_name',
{ ...columns },
{ ...options },
);
// use a regular function (not an arrow function) so that "this" refers to the model instance
MyModel.prototype.validatedUpdate = async function (updates, options) {
// get a list of all the column names from the attributes, this will include VIRTUAL, but you could filter first.
const columnNames = Object.keys(MyModel.attributes);
// the keys from the updates we are trying to make
const updateNames = Object.keys(updates);
// check to see if each of the updates exists in the list of attributes
updateNames.forEach(updateName => {
// throw an Error if we can't find one.
if (!columnNames.some((columnName) => columnName == updateName)) {
throw new Error(`The field ${updateName} does not exist.`);
}
});
// pass it along to the normal update() chain
return this.update(updates, options);
}
module.exports = MyModel;
Then use it like this:
try {
const myInstance = await MyModel.findById(someId);
await myInstance.validatedUpdate({ fake: 'field' });
} catch(err) {
console.log(err); // field does not exist
}
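Side note: on newer Sequelize versions the attribute map is exposed as rawAttributes rather than attributes, so the column lookup above would become something like the line below; adjust to whichever version you are on.
// Sequelize v5+ keeps the attribute definitions on Model.rawAttributes
const columnNames = Object.keys(MyModel.rawAttributes);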

What is the idiomatic, performant way to resolve related objects?

How do you write query resolvers in GraphQL that perform well against a relational database?
Using the example schema from this tutorial, let's say I have a simple database with users and stories. Users can author multiple stories but stories only have one user as their author (for simplicity).
When querying for a user, one might also want to get a list of all stories authored by that user. One possible definition of a GraphQL query to handle that (stolen from the tutorial linked above):
const Query = new GraphQLObjectType({
name: 'Query',
fields: () => ({
user: {
type: User,
args: {
id: {
type: new GraphQLNonNull(GraphQLID)
}
},
resolve(parent, {id}, {db}) {
return db.get(`
SELECT * FROM User WHERE id = $id
`, {$id: id});
}
},
})
});
const User = new GraphQLObjectType({
name: 'User',
fields: () => ({
id: {
type: GraphQLID
},
name: {
type: GraphQLString
},
stories: {
type: new GraphQLList(Story),
resolve(parent, args, {db}) {
return db.all(`
SELECT * FROM Story WHERE author = $user
`, {$user: parent.id});
}
}
})
});
This will work as expected; if I query a specific user, I'll be able to get that user's stories as well if needed. However, this does not perform ideally. It requires two trips to the database, when a single query with a JOIN would have sufficed. The problem is amplified if I query multiple users -- every additional user will result in an additional database query. The problem gets worse exponentially the deeper I traverse my object relationships.
Has this problem been solved? Is there a way to write a query resolver that won't result in inefficient SQL queries being generated?
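For concreteness, the single round trip the question has in mind could look roughly like the sketch below, reusing the db helper and the id argument from the resolver above; the column aliases and the reshaping step are illustrative, not part of the tutorial code.
// Sketch: fetch the user and their stories in one query, then reshape the flat
// rows into the nested object the User type expects.
return db.all(`
  SELECT User.id AS userId, User.name AS userName, Story.*
  FROM User LEFT JOIN Story ON Story.author = User.id
  WHERE User.id = $id
`, {$id: id}).then((rows) => {
  if (rows.length === 0) return null;
  return {
    id: rows[0].userId,
    name: rows[0].userName,
    // rows without a matching story (LEFT JOIN) come back with a null author
    stories: rows.filter((row) => row.author != null)
  };
});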
There are two approaches to this kind of problem.
One approach, used by Facebook, is to queue up the requests that happen within one tick and combine them before sending. This way, instead of doing a request for each user, you can do one request to retrieve information about several users. Dan Schafer wrote a good comment explaining this approach. Facebook released DataLoader, which is an example implementation of this technique.
// Pass this to graphql-js context
const DataLoader = require('dataloader');
const storyLoader = new DataLoader((authorIds) => {
return db.all(
`SELECT * FROM Story WHERE author IN (${authorIds.join(',')})`
).then((rows) => {
// Order rows so they match the order of authorIds
const result = {};
for (const row of rows) {
const existing = result[row.author] || [];
existing.push(row);
result[row.author] = existing;
}
const array = [];
for (const author of authorIds) {
array.push(result[author] || []);
}
return array;
});
});
// Then use dataloader in your type
const User = new GraphQLObjectType({
name: 'User',
fields: () => ({
id: {
type: GraphQLID
},
name: {
type: GraphQLString
},
stories: {
type: new GraphQLList(Story),
resolve(parent, args, {rootValue: {storyLoader}}) {
return storyLoader.load(parent.id);
}
}
})
});
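One practical note on the sketch above: DataLoader caches results per key, so the usual pattern is to construct a fresh loader for every incoming request and pass it through the rootValue (or context), rather than sharing a single loader across requests. Something along these lines, where batchLoadStoriesByAuthor stands in for the batch function shown earlier:
// Build per-request loaders right before executing the query
const rootValue = {
  db: db,
  storyLoader: new DataLoader(batchLoadStoriesByAuthor)
};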
While this doesn't resolve to efficient SQL, it still might be good enough for many use cases and will make things run faster. It's also a good approach for non-relational databases that don't allow JOINs.
Another approach is to use the information about the requested fields in the resolve function and add a JOIN when it is relevant. The resolve context has a fieldASTs field, which holds the parsed AST of the currently resolved part of the query. By looking through the children of that AST (the selectionSet), we can predict whether we need a JOIN. A very simplified and clunky example:
const User = new GraphQLObjectType({
name: 'User',
fields: () => ({
id: {
type: GraphQLID
},
name: {
type: GraphQLString
},
stories: {
type: new GraphQLList(Story),
resolve(parent, args, {rootValue: {storyLoader}}) {
// if stories were pre-fetched use that
if (parent.stories) {
return parent.stories;
} else {
// otherwise request them normally
return db.all(`
SELECT * FROM Story WHERE author = $user
`, {$user: parent.id});
}
}
}
})
});
const Query = new GraphQLObjectType({
name: 'Query',
fields: () => ({
user: {
type: User,
args: {
id: {
type: new GraphQLNonNull(GraphQLID)
}
},
resolve(parent, {id}, {rootValue: {db}, fieldASTs}) {
// find names of all child fields
const childFields = fieldASTs[0].selectionSet.selections.map(
(set) => set.name.value
);
if (childFields.includes('stories')) {
// use join to optimize
return db.all(`
SELECT * FROM User INNER JOIN Story ON User.id = Story.author WHERE User.id = $id
`, {$id: id}).then((rows) => {
if (rows.length > 0) {
return {
id: rows[0].author,
name: rows[0].name,
stories: rows
};
} else {
return db.get(`
SELECT * FROM User WHERE id = $id
`, {$id: id}
);
}
});
} else {
return db.get(`
SELECT * FROM User WHERE id = $id
`, {$id: id}
);
}
}
},
})
});
Note that this could have problems with, e.g., fragments. However, those can be handled too; it's just a matter of inspecting the selection set in more detail, as in the sketch below.
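A rough sketch of what that deeper inspection could look like; this is not an official graphql-js helper, just an illustration that walks the AST node kinds Field, FragmentSpread and InlineFragment, assuming you have access to the map of named fragment definitions from the parsed query:
// Collect child field names from a selection set, following named fragment
// spreads and inline fragments recursively.
function collectChildFieldNames(selectionSet, fragments, names) {
  names = names || new Set();
  if (!selectionSet) return names;
  for (const selection of selectionSet.selections) {
    if (selection.kind === 'Field') {
      names.add(selection.name.value);
    } else if (selection.kind === 'FragmentSpread') {
      const fragment = fragments[selection.name.value];
      if (fragment) collectChildFieldNames(fragment.selectionSet, fragments, names);
    } else if (selection.kind === 'InlineFragment') {
      collectChildFieldNames(selection.selectionSet, fragments, names);
    }
  }
  return names;
}

// then, instead of mapping over fieldASTs[0].selectionSet.selections directly:
// const childFields = collectChildFieldNames(fieldASTs[0].selectionSet, fragments);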
There is currently a PR in graphql-js repository, which will allow writing more complex logic for query optimization, by providing a 'resolve plan' in the context.

Nested collection in models Sails.js [duplicate]

I've got a question regarding associations in Sails.js version 0.10-rc5. I've been building an app in which multiple models are associated with one another, and I've arrived at a point where I need to nest associations somehow.
There are three parts:
First, there's something like a blog post that's written by a user. In the blog post I want to show the associated user's information, like their username. Everything works fine here, until the next step: I'm trying to show the comments which are associated with the post.
The comments are a separate model, called Comment, each of which also has an author (user) associated with it. I can easily show a list of the comments, but when I want to display the user's information associated with a comment, I can't figure out how to populate the comment with the user's information.
In my controller I'm trying to do something like this:
Post
.findOne(req.param('id'))
.populate('user')
.populate('comments') // I want to populate this comment with .populate('user') or something
.exec(function(err, post) {
// Handle errors & render view etc.
});
In my Post's 'show' action I'm trying to display the information like this (simplified):
<ul>
<%- _.each(post.comments, function(comment) { %>
<li>
<%= comment.user.name %>
<%= comment.description %>
</li>
<% }); %>
</ul>
comment.user.name will be undefined though. If I try to just access the 'user' property, like comment.user, it'll show its ID, which tells me it's not automatically populating the user's information onto the comment when I associate the comment with another model.
Anyone have any ideas on how to solve this properly? :)
Thanks in advance!
P.S.
For clarification, this is how I've basically set up the associations in the different models:
// User.js
posts: {
collection: 'post'
},
hours: {
collection: 'hour'
},
comments: {
collection: 'comment'
}
// Post.js
user: {
model: 'user'
},
comments: {
collection: 'comment',
via: 'post'
}
// Comment.js
user: {
model: 'user'
},
post: {
model: 'post'
}
Or you can use the built-in Bluebird promise feature to do it (working on Sails v0.10.5).
See the code below:
var _ = require('lodash');
...
Post
.findOne(req.param('id'))
.populate('user')
.populate('comments')
.then(function(post) {
var commentUsers = User.find({
id: _.pluck(post.comments, 'user')
//_.pluck: Retrieves the value of a 'user' property from all elements in the post.comments collection.
})
.then(function(commentUsers) {
return commentUsers;
});
return [post, commentUsers];
})
.spread(function(post, commentUsers) {
commentUsers = _.indexBy(commentUsers, 'id');
//_.indexBy: Creates an object composed of keys generated from the results of running each element of the collection through the given callback. The corresponding value of each key is the last element responsible for generating the key
post.comments = _.map(post.comments, function(comment) {
comment.user = commentUsers[comment.user];
return comment;
});
res.json(post);
})
.catch(function(err) {
return res.serverError(err);
});
Some explanation:
I'm using Lo-Dash to deal with the arrays. For more details, please refer to the official docs.
Notice the return value inside the first "then" function: the second element of the returned array, commentUsers, is itself a promise. That means it doesn't hold the actual data at the moment it is returned; the "spread" function will wait for the actual values to arrive and then continue with the rest of the work.
At the moment, there's no built-in way to populate nested associations. Your best bet is to use async to do a mapping:
async.auto({
// First get the post
post: function(cb) {
Post
.findOne(req.param('id'))
.populate('user')
.populate('comments')
.exec(cb);
},
// Then all of the comment users, using an "in" query by
// setting "id" criteria to an array of user IDs
commentUsers: ['post', function(cb, results) {
User.find({id: _.pluck(results.post.comments, 'user')}).exec(cb);
}],
// Map the comment users to their comments
map: ['commentUsers', function(cb, results) {
// Index comment users by ID
var commentUsers = _.indexBy(results.commentUsers, 'id');
// Get a plain object version of post & comments
var post = results.post.toObject();
// Map users onto comments
post.comments = post.comments.map(function(comment) {
comment.user = commentUsers[comment.user];
return comment;
});
return cb(null, post);
}]
},
// After all the async magic is finished, return the mapped result
// (or an error if any occurred during the async block)
function finish(err, results) {
if (err) {return res.serverError(err);}
return res.json(results.map);
}
);
It's not as pretty as nested population (which is in the works, but probably not for v0.10), but on the bright side it's actually fairly efficient.
I created an NPM module for this called nested-pop. You can find it at the link below.
https://www.npmjs.com/package/nested-pop
Use it in the following way.
var nestedPop = require('nested-pop');
User.find()
.populate('dogs')
.then(function(users) {
return nestedPop(users, {
dogs: [
'breed'
]
}).then(function(users) {
return users
}).catch(function(err) {
throw err;
});
}).catch(function(err) {
throw err;
});
It's worth saying there's a pull request to add nested population: https://github.com/balderdashy/waterline/pull/1052
The pull request isn't merged at the moment, but you can use it by installing that branch directly with
npm i Atlantis-Software/waterline#deepPopulate
With it you can do something like .populate('user.comments').
Sails v0.11 doesn't support _.pluck and _.indexBy; use sails.util.pluck and sails.util.indexBy instead:
async.auto({
// First get the post
post: function(cb) {
Post
.findOne(req.param('id'))
.populate('user')
.populate('comments')
.exec(cb);
},
// Then all of the comment users, using an "in" query by
// setting "id" criteria to an array of user IDs
commentUsers: ['post', function(cb, results) {
User.find({id:sails.util.pluck(results.post.comments, 'user')}).exec(cb);
}],
// Map the comment users to their comments
map: ['commentUsers', function(cb, results) {
// Index comment users by ID
var commentUsers = sails.util.indexBy(results.commentUsers, 'id');
// Get a plain object version of post & comments
var post = results.post.toObject();
// Map users onto comments
post.comments = post.comments.map(function(comment) {
comment.user = commentUsers[comment.user];
return comment;
});
return cb(null, post);
}]
},
// After all the async magic is finished, return the mapped result
// (or an error if any occurred during the async block)
function finish(err, results) {
if (err) {return res.serverError(err);}
return res.json(results.map);
}
);
You could use the async library, which is very clean and simple to understand. For each comment related to a post you can populate as many fields as you want with dedicated tasks, execute them in parallel, and retrieve the results when all tasks are done. Finally, you only have to return the final result.
Post
.findOne(req.param('id'))
.populate('user')
.populate('comments') // I want to populate this comment with .populate('user') or something
.exec(function (err, post) {
// populate each post in parallel
async.each(post.comments, function (comment, callback) {
// you can populate many elements or only one...
var populateTasks = {
user: function (cb) {
User.findOne({ id: comment.user })
.exec(function (err, result) {
cb(err, result);
});
}
}
async.parallel(populateTasks, function (err, resultSet) {
if (err) { return callback(err); }
comment.user = resultSet.user;
// finish
callback();
});
}, function (err) {// final callback
if (err) { return next(err); }
return res.json(post);
});
});
As of Sails.js 1.0 the "deep populate" pull request is still open, but the following async function solution looks elegant enough IMO:
const post = await Post
.findOne({ id: req.param('id') })
.populate('user')
.populate('comments');
if (post && post.comments.length > 0) {
const ids = post.comments.map(comment => comment.id);
post.comments = await Comment
.find({ id: ids })
.populate('user');
}
Granted this is an old question, but a much simpler solution would be to loop over the comments, replacing each comment's 'user' property (which is an id) with the user's full details, using async/await.
async function getPost(postId){
let post = await Post.findOne(postId).populate('user').populate('comments');
for(let comment of post.comments){
comment.user = await User.findOne({id:comment.user});
}
return post;
}
Hope this helps!
In case anyone is looking to do the same but for multiple posts, here's one
way of doing it:
find all user IDs in posts
query all users in 1 go from DB
update posts with those users
Given that the same user can write multiple comments, we're making sure we reuse those user objects. Also, we're only making one additional query (whereas if we did it for each post separately, that would be multiple queries).
await Post.find()
.populate('comments')
.then(async (posts) => {
// Collect all comment user IDs
const userIDs = posts.reduce((acc, curr) => {
for (const comment of curr.comments) {
acc.add(comment.user);
}
return acc;
}, new Set());
// Get users
const users = await User.find({ id: Array.from(userIDs) });
const usersMap = users.reduce((acc, curr) => {
acc[curr.id] = curr;
return acc;
}, {});
// Assign users to comments
for (const post of posts) {
for (const comment of post.comments) {
if (comment.user) {
const userID = comment.user;
comment.user = usersMap[userID];
}
}
}
return posts;
});

ExtJS 5 model default values in AJAX request

I have a problem. I created a simple model and tried to save a new value through an AJAX request, but parameters which should be empty are sent with a default value. You can see it at the link below. The code isn't set up quite correctly, which is why the failing request can be seen in the browser console (F12). In it I pass a value through the query string as well as through the request payload (I haven't figured out yet how to get rid of that; as I understand it, the payload is used by default). In short, instead of an empty carId the request sends Car-1.
https://fiddle.sencha.com/#fiddle/dsj
How do I fix this behavior so that, if no value is set, an empty value is passed?
You can create a custom proxy class that extends Ext.data.proxy.Ajax and then override the buildRequest method to check for create actions and assign the desired value to the idProperty:
Ext.define('CarProxy', {
extend: 'Ext.data.proxy.Ajax',
alias: 'proxy.carproxy',
type: 'ajax',
idParam: 'carId',
reader: {
type: 'json',
rootProperty: 'data'
},
api: {
create: './createcar.json'
},
writer: {
type: 'form'
},
buildRequest: function(operation) {
var request = this.callParent(arguments);
if (request.getAction() === 'create') {
request.getRecords().forEach(function(record) {
record.set('carId', ''); // assign desired value to id
});
}
return request;
}
});
Thanks to everyone who answered. I want to show my solution:
Add the parameter
identifier: 'custom'
to the model definition, and then create an appropriate identifier whose generate method returns undefined:
Ext.define('Custom', {
extend: 'Ext.data.identifier.Generator',
alias: 'data.identifier.custom',
generate: function() {
return;
}
});
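A usage sketch: the Car model and its fields below are just placeholders, the relevant part is the identifier config pointing at the custom alias defined above.
Ext.define('Car', {
    extend: 'Ext.data.Model',
    idProperty: 'carId',
    identifier: 'custom', // resolves to the 'data.identifier.custom' alias above
    fields: ['carId', 'name']
});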
This is default ExtJS behaviour: if you do not specify an id, one is generated. To avoid that you can add a constructor to your model:
Ext.define("Car", {
[...],
constructor: function() {
this.callParent(arguments);
// check if it is new record
if (this.phantom) {
// delete generated id
delete this.data[this.idProperty];
}
}
});

Create or Update Sequelize

I'm using Sequelize in my Node.js project and I found a problem that I'm having a hard time solving.
Basically I have a cron job that gets an array of objects from a server and then inserts them into my database as objects (in this case, cartoons). But if I already have one of the objects, I have to update it.
Basically I have an array of objects and I could use the bulkCreate() method, but that doesn't solve it once the cron starts again, so I need some sort of update with an upsert: true flag. And the main issue: I must have a callback that fires just once after all these creates or updates. Does anyone have an idea of how I can do that? Iterate over an array of objects, creating or updating each one, and then get a single callback afterwards?
Thanks for the attention.
From the docs, you don't need a where query to perform the update once you have the object. Also, the use of promises should simplify the callbacks:
Implementation
function upsert(values, condition) {
return Model
.findOne({ where: condition })
.then(function(obj) {
// update
if(obj)
return obj.update(values);
// insert
return Model.create(values);
})
}
Usage
upsert({ first_name: 'Taku' }, { id: 1234 }).then(function(result){
res.status(200).send({success: true});
});
Note
This operation is not atomic.
It makes 2 network calls.
This means it is advisable to re-think the approach and probably just update values in one network call (see the sketch below), and either:
Look at the value returned (i.e. rows_affected) and decide what to do.
Return success if the update operation succeeds, since whether the resource exists is not within this service's responsibility.
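A minimal sketch of the first option, reusing the Model/values/condition names from above; Sequelize's update resolves with an array whose first element is the number of affected rows:
function updateOrInsert(values, condition) {
  return Model
    .update(values, { where: condition })
    .then(function (result) {
      var affectedCount = result[0];
      // nothing matched, so the row doesn't exist yet: create it instead
      if (affectedCount === 0) {
        return Model.create(Object.assign({}, condition, values));
      }
    });
}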
You can use upsert
It's way easier.
Implementation details:
MySQL - Implemented as a single query: INSERT values ON DUPLICATE KEY UPDATE values
PostgreSQL - Implemented as a temporary function with exception handling: INSERT EXCEPTION WHEN unique_constraint UPDATE
SQLite - Implemented as two queries: INSERT; UPDATE. This means that the update is executed regardless of whether the row already existed or not.
MSSQL - Implemented as a single query using MERGE and WHEN (NOT) MATCHED THEN
Note that SQLite returns undefined for created, no matter if the row was created or updated. This is because SQLite always runs INSERT OR IGNORE + UPDATE in a single query, so there is no way to know whether the row was inserted or not.
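A minimal usage sketch; the model and column names are placeholders for whatever unique key applies in your schema:
Cartoon
  .upsert({ externalId: 42, title: 'Some title' })
  .then(function (created) {
    // on most dialects created is true when a row was inserted and false when an
    // existing row was updated; on SQLite it is undefined, as noted above
  });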
Update 07/2019: now with async/await
async function updateOrCreate (model, where, newItem) {
// First try to find the record
const foundItem = await model.findOne({where});
if (!foundItem) {
// Item not found, create a new one
const item = await model.create(newItem)
return {item, created: true};
}
// Found an item, update it
const item = await model.update(newItem, {where});
return {item, created: false};
}
I liked the idea of Ataik, but made it a little shorter:
function updateOrCreate (model, where, newItem) {
// First try to find the record
return model
.findOne({where: where})
.then(function (foundItem) {
if (!foundItem) {
// Item not found, create a new one
return model
.create(newItem)
.then(function (item) { return {item: item, created: true}; })
}
// Found an item, update it
return model
.update(newItem, {where: where})
.then(function (item) { return {item: item, created: false} }) ;
});
}
Usage:
updateOrCreate(models.NewsItem, {slug: 'sometitle1'}, {title: 'Hello World'})
.then(function(result) {
result.item; // the model
result.created; // bool, if a new item was created.
});
Optional: add error handling here, but I strongly recommend chaining all promises of one request and having one error handler at the end:
updateOrCreate(models.NewsItem, {slug: 'sometitle1'}, {title: 'Hello World'})
.then(..)
.catch(function(err){});
This might be an old question, but this is what I did:
var updateOrCreate = function (model, where, newItem, onCreate, onUpdate, onError) {
// First try to find the record
model.findOne({where: where}).then(function (foundItem) {
if (!foundItem) {
// Item not found, create a new one
model.create(newItem)
.then(onCreate)
.catch(onError);
} else {
// Found an item, update it
model.update(newItem, {where: where})
.then(onUpdate)
.catch(onError);
;
}
}).catch(onError);
}
updateOrCreate(
models.NewsItem, {title: 'sometitle1'}, {title: 'sometitle'},
function () {
console.log('created');
},
function () {
console.log('updated');
},
console.log);
User.upsert({ a: 'a', b: 'b', username: 'john' })
It will try to find a record matching the hash in the first parameter and update it; if it doesn't find one, a new record will be created.
Here is an example of its usage from the Sequelize tests:
it('works with upsert on id', function() {
return this.User.upsert({ id: 42, username: 'john' }).then(created => {
if (dialect === 'sqlite') {
expect(created).to.be.undefined;
} else {
expect(created).to.be.ok;
}
this.clock.tick(1000);
return this.User.upsert({ id: 42, username: 'doe' });
}).then(created => {
if (dialect === 'sqlite') {
expect(created).to.be.undefined;
} else {
expect(created).not.to.be.ok;
}
return this.User.findByPk(42);
}).then(user => {
expect(user.createdAt).to.be.ok;
expect(user.username).to.equal('doe');
expect(user.updatedAt).to.be.afterTime(user.createdAt);
});
});
Sounds like you want to wrap your Sequelize calls inside an async.each.
This can be done with the custom event emitter.
Assuming your data is in a variable called data:
new Sequelize.Utils.CustomEventEmitter(function(emitter) {
if(data.id){
Model.update(data, {id: data.id })
.success(function(){
emitter.emit('success', data.id );
}).error(function(error){
emitter.emit('error', error );
});
} else {
Model.build(data).save().success(function(d){
emitter.emit('success', d.id );
}).error(function(error){
emitter.emit('error', error );
});
}
}).success(function(data_id){
// Your callback stuff here
}).error(function(error){
// error stuff here
}).run(); // kick off the queries
You can use the findOrCreate and then update methods in Sequelize. Here is a sample with async.js:
async.auto({
getInstance : function(cb) {
Model.findOrCreate({
attribute : value,
...
}).complete(function(err, result) {
if (err) {
cb(null, false);
} else {
cb(null, result);
}
});
},
updateInstance : ['getInstance', function(cb, result) {
if (!result || !result.getInstance) {
cb(null, false);
} else {
result.getInstance.updateAttributes({
attribute : value,
...
}, ['attribute', ...]).complete(function(err, result) {
if (err) {
cb(null, false);
} else {
cb(null, result);
}
});
}
}]
}, function(err, allResults) {
if (err || !allResults || !allResults.updateInstance) {
// job not done
} else {
// job done
}
});
Here is a simple example that either updates deviceID -> pushToken mapping or creates it:
var Promise = require('promise');
var PushToken = require("../models").PushToken;
var createOrUpdatePushToken = function (deviceID, pushToken) {
return new Promise(function (fulfill, reject) {
PushToken
.findOrCreate({
where: {
deviceID: deviceID
}, defaults: {
pushToken: pushToken
}
})
.spread(function (foundOrCreatedPushToken, created) {
if (created) {
fulfill(foundOrCreatedPushToken);
} else {
foundOrCreatedPushToken
.update({
pushToken: pushToken
})
.then(function (updatedPushToken) {
fulfill(updatedPushToken);
})
.catch(function (err) {
reject(err);
});
}
});
});
};
2022 update:
You can use the upsert function:
https://sequelize.org/api/v6/class/src/model.js~model#static-method-upsert
Insert or update a single row. An update will be executed if a row which matches the supplied values on either the primary key or a unique key is found. Note that the unique index must be defined in your sequelize model and not just in the table. Otherwise you may experience a unique constraint violation, because sequelize fails to identify the row that should be updated.
Implementation details:
MySQL - Implemented with ON DUPLICATE KEY UPDATE
PostgreSQL - Implemented with ON CONFLICT DO UPDATE. If the update data contains the PK field, then the PK is selected as the default conflict key. Otherwise the first unique constraint/index will be selected, which can satisfy conflict key requirements.
SQLite - Implemented with ON CONFLICT DO UPDATE
MSSQL - Implemented as a single query using MERGE and WHEN (NOT) MATCHED THEN
Note that Postgres/SQLite return null for created, no matter if the row was created or updated.
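A minimal sketch of the unique-key point for v6; the model and column names are illustrative, and in v6 upsert resolves to an array of the instance and the created flag:
const { Sequelize, DataTypes } = require('sequelize');
const sequelize = new Sequelize('sqlite::memory:');

// Declare the unique index on the model itself so Sequelize can use it to
// identify the row to update.
const Cartoon = sequelize.define('Cartoon', {
  externalId: { type: DataTypes.INTEGER, unique: true },
  title: DataTypes.STRING
});

async function saveCartoon(data) {
  const [cartoon, created] = await Cartoon.upsert(data);
  return { cartoon, created };
}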