I'm trying to use Prisma (ORM) to manage my MySQL database.
When I used MySQL directly, I could call mysql_insert_id() after an INSERT to get the auto_increment id values I had just inserted.
How can I achieve this in Prisma?
The return value of insert is the affected rows, not the indexes.
EDIT
If you use prisma.create(), it does return the object with its new id.
But if you use prisma.createMany(), it returns only the count of affected rows?!
Someone care to explain the design behind this?
You would need to use a raw query to execute the insert statement and then read back the generated id values.
From the documentation:
Use $queryRaw to return actual records.
Use $executeRaw to return a count of affected rows
So to get ids back from a multi-row insert, you would need the $queryRaw method.
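For example, here is a minimal sketch (the Post table, its columns, and the literal values are placeholders, not taken from the question). The insert and the LAST_INSERT_ID() lookup are wrapped in an interactive transaction so both statements run on the same connection:

const firstId = await prisma.$transaction(async (tx) => {
  // Raw multi-row insert; $executeRaw only returns the affected-row count
  await tx.$executeRaw`
    INSERT INTO Post (title, content) VALUES ('a', 'x'), ('b', 'y')
  `;
  // LAST_INSERT_ID() is the auto_increment id of the FIRST inserted row;
  // with the default innodb_autoinc_lock_mode the remaining rows get
  // consecutive ids, so the whole id range can be derived from it
  const rows = await tx.$queryRaw<{ id: bigint }[]>`SELECT LAST_INSERT_ID() AS id`;
  return rows[0].id;
});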
Alternatively, if inserting one record at a time works for you, prisma.create() already returns the created record, so you can use it like this:
const now = new Date();
const createdRecord = await this.prisma.post.create({
  data: {
    title: input.title!,
    content: input.content!,
    created_at: now,
    updated_at: now
  }
})
// now you can access id of created record by createdRecord.id
const id = createdRecord.id
// ...whatever you want with id
Related
I am updating a row in a MySQL database using the UPDATE keyword (in an Express server using mysql2). If the data I am updating with is the same as the data already in the row, it takes the usual amount of time. But if I update the table with different data, it takes much longer. My code is below.
public update = async (
  obj: TUpCred,
): Promise<ResultSetHeader> => {
  const sql = 'UPDATE ?? SET ? WHERE ?';
  const values = [obj.table, obj.data, obj.where];
  const [data] = (await this.connection.query({
    sql,
    values,
  })) as TResultSetHeader;
  return data;
};
This query takes a long time: usually 4-6 seconds, but sometimes even 10 to 15 seconds. The same happens with INSERT queries, but other queries such as SELECT take the normal amount of time.
I have a MySql database, and I'm connecting to it from a .Net app using Dapper. I have the following code:
await connection.ExecuteAsync(
    "DELETE FROM my_data_table WHERE somedata IN (@data)",
    new { data = datalist.Select(a => a.dataitem1).ToArray() },
    trans);
When I do this with more than a single value, I get the following error:
MySqlConnector.MySqlException: 'Operand should contain 1 column(s)'
Is what I'm trying to do possible in MySql / Dapper, or do I have to issue a query per line I wish to delete?
Your original code was almost fine. You just need to remove the parentheses around the parameter. Dapper will insert those for you:
await connection.ExecuteAsync(
    "DELETE FROM my_data_table WHERE somedata IN @data",
    new { data = datalist.Select(a => a.dataitem1).ToArray() },
    trans);
When using felixge's mysql for node.js, how can I ask the result object for the number of returned rows? I have a rather expensive query so I don't want to run a COUNT(*) first, just to then run a query a second time.
If it's a select query, just take the length of the returned array.
connection.query(sql, [var1, var2], function (err, results) {
  numRows = results.length;
});
If it's an update/delete query, the returned dictionary will have an affectedRows variable.
connection.query(sql, [var1, var2], function (err, result) {
  numRows = result.affectedRows;
});
If you're using the examples in the readme, just look at the length property of the rows object (i.e. rows.length).
With mssql version 2.1.2 as of 2015-04-13, the statement:
delete from DeviceAccountLinks
where DeviceAccountId = @deviceAccountId
and DeviceType = @deviceType
produces no result ('undefined').
I have changed the statement to:
delete from DeviceAccountLinks
where DeviceAccountId = @deviceAccountId
and DeviceType = @deviceType;
select @@rowcount "rowCount"
to get the output of: [{rowCount:1}]
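As a rough sketch with the current promise-based mssql API (the parameter values and config object here are made up; the 2.1.2 API from the original answer was callback-based), the trailing SELECT shows up as the result's recordset:

import sql from 'mssql';

const pool = await sql.connect(config); // config assumed to be defined elsewhere
const result = await pool.request()
  .input('deviceAccountId', sql.Int, 42)        // placeholder value
  .input('deviceType', sql.VarChar, 'phone')    // placeholder value
  .query(`delete from DeviceAccountLinks
          where DeviceAccountId = @deviceAccountId
          and DeviceType = @deviceType;
          select @@rowcount "rowCount"`);

// result.recordset comes from the trailing SELECT: [{ rowCount: 1 }]
const rowCount = result.recordset[0].rowCount;

Newer versions of mssql also expose result.rowsAffected, which gives the same information without appending the extra SELECT.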
I have made an application in Node.js that calls an endpoint every minute and gets a JSON array with about 100000 elements. I need to upsert these elements into my database such that if an element doesn't exist, it is inserted with its "Point" column set to 0.
So far I have a cron job and a simple upsert query, but it's very slow:
var q = async.queue(function (data, done) {
  db.query('INSERT INTO stat (`user`, `user2`, `point`) VALUES ' + data.values +
    ' ON DUPLICATE KEY UPDATE point = point + 10', function (err, result) {
    if (err) throw err;
    done();
  });
}, 100000);

// Cron job here: every 1 minute execute the lines below
var values = '';
for (var v = 0; v < stats.length; v++) {
  values = '("JACK", "' + stats[v] + '", 0)';
  q.push({ values: values });
}
How can I do such a task in a very short amount of time? Is using MySQL the wrong decision? I'm open to any other architecture or solution. Note that I have to do this every minute.
I fixed this problem by using a bulk upsert (from the documentation)! I managed to upsert over 24k rows in less than 3 seconds. Basically, I built the query first and then ran it:
INSERT INTO table (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);
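A rough sketch of how the rows from the question could be batched into one statement (assuming the mysql2 package, whose query() expands a nested array bound to ? into (v1, v2, v3), (v1, v2, v3), ...; the table and column names are taken from the question, the connection settings are placeholders):

import mysql from 'mysql2/promise';

async function bulkUpsert(stats: string[]) {
  const connection = await mysql.createConnection({
    host: 'localhost', user: 'root', database: 'test', // placeholder credentials
  });
  // One row per element: [user, user2, point]
  const rows = stats.map((user2) => ['JACK', user2, 0]);
  // Single multi-row INSERT ... ON DUPLICATE KEY UPDATE instead of 100000 queries
  await connection.query(
    'INSERT INTO stat (`user`, `user2`, `point`) VALUES ? ' +
    'ON DUPLICATE KEY UPDATE point = point + 10',
    [rows]
  );
  await connection.end();
}

For very large batches you may still want to chunk the rows into groups of a few thousand so a single statement does not exceed max_allowed_packet.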
Follow up to this question. I have the following code:
string[] names = new[] { "Bob", "bob", "BoB" };
using (MyDataContext dataContext = new MyDataContext())
{
    foreach (var name in names)
    {
        string s = name;
        if (dataContext.Users.SingleOrDefault(u => u.Name.ToUpper() == s.ToUpper()) == null)
            dataContext.Users.InsertOnSubmit(new User { Name = name });
    }
    dataContext.SubmitChanges();
}
...and it inserts all three names ("Bob", "bob" and "BoB"). If this was Linq-to-Objects, it wouldn't.
Can I make it look at the pending changes as well as what's already in the table?
I don't think that would be possible in general. Imagine you made a query like this:
dataContext.Users.InsertOnSubmit(new User { GroupId = 1 });
var groups = dataContext.Groups.Where(grp => grp.Users.Any());
The database knows nothing about the new user yet because the insert hasn't been committed, so the generated SQL query might not return the Group with Id = 1. The only way the DataContext could take the not-yet-submitted insert into account in cases like this would be to fetch the whole Groups table (and possibly more tables, if they are affected by the query) and perform the query on the client, which is of course undesirable. I guess the L2S designers decided it would be counterintuitive if some queries took not-yet-committed inserts into account while others didn't, so they chose to never take them into account.
Why don't you use something like
foreach (var name in names.Distinct(StringComparer.InvariantCultureIgnoreCase))
to filter out duplicate names before hitting the database?
Why don't you try something like this:
foreach (var name in names)
{
    string s = name;
    if (dataContext.Users.SingleOrDefault(u => u.Name.ToUpper() == s.ToUpper()) == null)
    {
        dataContext.Users.InsertOnSubmit(new User { Name = name });
        break;
    }
}
I am sorry, I don't understand LINQ to SQL that well.
But when I look at the code, it seems you are telling it to insert all the records at once (similar to a transaction) using SubmitChanges, while you are checking for their existence in the DB before any of the records have actually been inserted.
EDIT: Try putting SubmitChanges inside the loop; the code should then run as you expect.
You can query the appropriate ChangeSet collection, such as
if (dataContext.Users.
    Union(dataContext.GetChangeSet().Inserts).
    Except(dataContext.GetChangeSet().Deletes).
    SingleOrDefault(u => u.Name.ToUpper() == s.ToUpper()) == null)
This will create a union of the values in the Users table and the pending Inserts, and will exclude pending deletes.
Of course, you might want to create a changeSet variable to prevent multiple calls to the GetChangeSet function, and you may need to cast the objects in those collections to the appropriate type. In the Inserts and Deletes collections, you may want to filter with something like
...GetChangeSet().Inserts.Where(o => o.GetType() == typeof(User)).OfType<User>()...