Order of execution of SQL UPDATE while updating multiple values? - mysql

What is the sequence in which the values (separated by commas) will be updated?
$command = sprintf('UPDATE %s SET rating = ((rating * rating_count + %f) / (rating_count + 1.0)), rating_count = rating_count + 1 WHERE id = %d', $table, $ratingGiven, $id);
I want to make sure that
rating = (rating * rating_count + %f) / (rating_count + 1.0)
is executed before
rating_count=rating_count+1
without firing two SQL commands.
I am not sure whether the assignments in a SET clause are executed in the order in which they appear (separated by commas) in MySQL (or any other DB).

I don't think it will matter here. In standard SQL, UPDATE reads the current row and computes every assignment from the row's original values, not from values assigned earlier in the same SET clause. MySQL is the exception: it generally evaluates single-table SET assignments left to right, so a later assignment can see the new value produced by an earlier one.
In your statement rating is assigned before rating_count, so the rating calculation uses the original value of rating_count either way, and rating_count is incremented afterwards as intended.
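As a quick worked example (the table name, id, and numbers are made up; only the formula comes from the question): if the row currently has rating = 4.0 and rating_count = 2 and the new rating given is 5.0, the statement below sets rating to (4.0 * 2 + 5.0) / 3.0 ≈ 4.33 and rating_count to 3 in a single pass.
UPDATE products
SET rating = ((rating * rating_count + 5.0) / (rating_count + 1.0)),
    rating_count = rating_count + 1
WHERE id = 42;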


Is it OK to select all values in a column of a table without a limit?

I'm trying to get the product IDs of the products used by a customer with this query:
select
v.product_id
from
TableA as v
join
TableB as f
on
v.id = f.id
where
f.product_name = "<some_name>"
and
f.customer_id = "<id>"
Note: product_id is the primary key of TableA.
There could be very many rows matching the criteria; in the worst case the result could include every value of that column in the table.
Is it OK to execute the query without any LIMIT clause? How can I tell whether the query I run is safe (i.e., won't cause OOM-type issues) when the result set is very large?
EDIT
This is the getMySQLConnection part of my code:
String url = "jdbc:mysql://" + host + ":" + port + "/" + dbname;
String clazz = "org.gjt.mm.mysql.Driver";
Driver driver = (Driver) Class.forName(clazz).newInstance();
DriverManager.registerDriver(driver);
return DriverManager.getConnection(url, user, pwd);
Something like a GET request can transfer around 2-8 KB (as far as I remember). What is the maximum amount of data (i.e., result set data) that can be transferred over this connection?
You should not use LIMIT in the query; otherwise you will not get the whole product list.
And the worst case happens only if there is only one customer on your site.
It depends on what the purpose of the query is.
Do you need to evaluate EVERY row that meets the query criteria? If so, then you need to SELECT without a limit.
However, if you do not need every row at once, as in a paginated report, you can page through the results with LIMIT offset, row_count.
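A minimal pagination sketch, reusing the query from the question (the page size of 500 and the ORDER BY column are made-up choices; in MySQL the LIMIT clause takes the offset first, then the row count):
select
v.product_id
from
TableA as v
join
TableB as f
on
v.id = f.id
where
f.product_name = "<some_name>"
and
f.customer_id = "<id>"
order by
v.product_id
limit 0, 500;  -- first page; the next page would use limit 500, 500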

Does the Laravel `increment()` lock the row?

Does calling the Laravel increment() on an Eloquent model lock the row?
For example:
$userPoints = UsersPoints::where('user_id', '=', \Auth::id())->first();
if (isset($userPoints)) {
    $userPoints->increment('points', 5);
}
If this is called from two different places in a race condition, will the second call override the first increment, leaving us with only 5 points? Or will they add up, leaving us with 10 points?
To answer this (helpful for future readers): it depends on the database engine, but for the usual setups the single-statement increment is safe.
MySQL's storage engines (MyISAM, InnoDB, etc.) take the necessary locks while a row is being inserted, updated, or altered, so a single UPDATE statement is applied atomically with respect to other writers (which is the behaviour you want in most cases anyway).
So you can feel comfortable with what you have, because it will work correctly for any number of concurrent calls:
-- this is roughly what the Laravel query builder translates to
UPDATE users SET points = points + 5 WHERE user_id = 1
Calling this twice with a starting value of zero will end up at 10.
The answer is actually a tiny bit different for the specific case with ->increment() in Laravel:
If one calls $user->increment('credits', 1), the following query is executed:
UPDATE `users`
SET `credits` = `credits` + 1
WHERE `id` = 2
This means that the query can be regarded as atomic, since the actual credits amount is retrieved in the query, and not retrieved using a separate SELECT.
So you can execute this query without running any DB::transaction() wrappers or lockForUpdate() calls because it will always increment it correctly.
To show what can go wrong, a BAD query would look like this:
# Assume this retrieves "5" as the amount of credits:
SELECT `credits` FROM `users` WHERE `id` = 2;
# Now, execute the UPDATE statement separately:
UPDATE `users`
SET `credits` = 5 + 1, `users`.`updated_at` = '2022-04-15 23:54:52'
WHERE `id` = 2;
Or the Laravel equivalent (DON'T DO THIS):
$user = User::find(2);
// $user->credits will be 5.
$user->update([
    // Shown as "5 + 1" in the query above, but it would be just "6" of course.
    'credits' => $user->credits + 1
]);
Now, THIS can easily go wrong, since you are 'assigning' the credits value, and that value depends on the moment the SELECT statement took place. Two such requests could update the credits to the same value even though the intention was to increment twice. However, you CAN correct this Laravel code in the following way:
DB::transaction(function () {
    $user = User::query()->lockForUpdate()->find(2);
    $user->update([
        'credits' => $user->credits + 1,
    ]);
});
Now, since the two queries are wrapped in a transaction and the user record with id 2 is locked exclusively by lockForUpdate() (a SELECT ... FOR UPDATE), any second (or third, or n-th) instance of this transaction running in parallel will block on its own locking read until the first transaction completes, so each increment is applied to the value the previous transaction committed.
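For reference, a rough sketch of what that transaction boils down to at the SQL level (the exact statements Laravel emits may differ slightly, e.g. it also sets updated_at):
START TRANSACTION;
-- locking read issued by lockForUpdate(); other locking reads of this row now block
SELECT * FROM `users` WHERE `id` = 2 FOR UPDATE;
-- the application computes credits + 1 (here 5 + 1) and writes it back
UPDATE `users` SET `credits` = 6 WHERE `id` = 2;
COMMIT;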

Select column to update based on value

What I am trying to do is reduce the time needed to aggregate data by producing a roll-up table of sorts. When I insert a record, an after insert trigger is fired which will update the correct row. I would update all of the columns of the roll-up table if I need to, but since there are 25 columns in the table and each insert will only update 2 of them, I would rather be able to dynamically select the columns to update. My current update statement in the after insert trigger looks similar to this:
update peek_at_chu.organization_data_state_log odsl
inner join ( select
lookUpID as org_data_lookup,
i.interval_id,
peek_at_chu.Get_Time_Durration_In_Interval1('s', new.start_time, new.end_time, i.start_time, i.end_time) as time_in_int,
new.phone_state_id
from
(peek_at_chu.interval_info i
join peek_at_chu.interval_step int_s on i.interval_step_id = int_s.interval_step_id)) as usl on odsl.org_date_lookup_id = usl.org_data_lookup
and odsl.interval_id = usl.interval_id
set
total_seconds = total_seconds + usl.time_in_int,
case new.phone_state_id
when 2 then
available_seconds = available_seconds + time_in_int
end;
In this, lookUpID is a variable declared earlier in the trigger. The field that dictates which column of the roll-up table to update is new.phone_state_id. The phone_state_id values are not consecutive (some numbers are skipped in that table), so an update based on column position is out the window unless I create a mapping.
The CASE option throws an error, but I am hoping to use something similar to it instead of 25 IF statements if I can.
You have to update all the columns, but use a conditional to decide whether each one gets a new value or keeps its old value:
set total_seconds = total_seconds + usl.time_in_int,
available_seconds = IF(new.phone_state_id = 2, available_seconds + time_in_int, available_seconds)
Repeat the pattern in the last line for all the other columns that need to be updated conditionally.
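To make that concrete, here is a sketch of the SET clause with two conditionally updated columns (busy_seconds and the phone_state_id value 3 are made up purely to show the shape of the pattern):
set
total_seconds = total_seconds + usl.time_in_int,
available_seconds = IF(new.phone_state_id = 2, available_seconds + usl.time_in_int, available_seconds),
busy_seconds = IF(new.phone_state_id = 3, busy_seconds + usl.time_in_int, busy_seconds);
Every row is still written, but each column whose condition does not match the incoming phone_state_id simply keeps its current value.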

Update mysql cell after fetching related cell value via select?

SQL:
$mysqli->query("UPDATE results
SET result_value = '".$row[0]['logo_value']."'
WHERE logo_id = '".$mysqli->real_escape_string($_GET['logo_id'])."'
AND user_id = '".$user_data[0]['user_id']."'");
The results table also contains result_tries, which I'd like to fetch before doing the update so I can use it to modify result_value... Is there a way to do this in a single shot instead of first doing a SELECT and then doing an UPDATE?
Is this possible?
Basically:
UPDATE results SET result_value = result_value + $row[0][logo_value]
for just a simple addition. You CAN use existing fields of the record being updated as part of the update, so if you want something other than addition, there aren't many limits on what logic you can use instead of just x = x + y.
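For example, a sketch of an UPDATE that reads result_tries in the same statement (the scoring rule is invented purely for illustration, and the placeholders stand for the escaped PHP values from the question):
UPDATE results
SET result_value = result_value + (10 - result_tries)  -- hypothetical: fewer points the more tries it took
WHERE logo_id = '<logo_id>'
AND user_id = '<user_id>';
The point is that any column of the row being updated, such as result_tries, can appear on the right-hand side of a SET expression, so no separate SELECT is needed.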

Renumbering items in a list with SQL queries?

The query:
$consulta = "UPDATE `list`
SET `pos` = $pos
WHERE `id_item` IN (SELECT id_item
FROM lists
WHERE pos = '$item'
ORDER BY pos DESC
LIMIT 1)
AND id_usuario = '$us'
AND id_list = '$id_pl'";
The thing is, this query runs inside a foreach loop and is meant to update the order of the items in a list. Before, I had it like this:
$consulta = "UPDATE `list`
SET `pos` = $pos
WHERE `$pos` = '$item'
AND id_usuario = '$us'
AND id_list = '$id_pl'";
But when I update pos 2 -> 1 and then 1 -> 2, I end up with two rows at pos 2 and none at pos 1...
Is there a solution for this query?
Renumbering the items in a list is tricky. When you renumber the items in the list using multiple separate SQL statements, it is even trickier.
Your inner sub-select statement also is not properly constrained. You need an extra condition such as:
AND id_list = '$id_pl'
There are probably many ways to do this, but the one that may be simplest follows. I'm assuming that:
the unshown foreach loop generates $pos values in the desired sequence (1, 2, ...)
the value of $id_pl is constant for the loop
the foreach loop gives values for $us and $item for each iteration
the combination of $id_pl, $us, and $item uniquely identifies a row in the list table
there aren't more than 100 pos values to worry about
you are able to use an explicit transaction around the statement sequence
The suggested solution has two stages:
Allocate 100 + pos to each row to place it in its new position
Subtract 100 from each pos
This technique avoids any complicated issues about whether rows whose position has already been adjusted are re-read by the same query.
Inside the loop:
foreach ...
...$pos, $item, $us...
UPDATE list
SET pos = $pos + 100
WHERE id_item = '$item'
AND id_usuario = '$us'
AND id_list = '$id_pl'
AND pos < 100
end foreach
UPDATE list
SET pos = pos - 100
WHERE id_list = '$id_pl';
If you don't know the size of the lists, you could assign negative pos values in the loop and convert to positive after the loop, or any of a number of other equivalent mappings. The key is to update the table so that the new pos numbers in the loop are disjoint from the old numbers, and then adjust the new values after the loop.
Alternative techniques create a temporary table that maps the old numbers to the new and then executes a single UPDATE statement that changes the old pos value to the new for all rows in a single operation. This is probably more efficient, especially if the mapping table can be generated as a query, but that depends on whether the renumbering is algorithmic. The technique shown, albeit somewhat clumsy, can be made to work for arbitrary renumberings.
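A rough sketch of that mapping-table approach (table and column names mirror the question; the old/new pairs in the mapping are invented for illustration):
CREATE TEMPORARY TABLE pos_map (old_pos INT, new_pos INT);
INSERT INTO pos_map (old_pos, new_pos) VALUES (1, 2), (2, 1);  -- hypothetical renumbering
UPDATE list l
JOIN pos_map m ON m.old_pos = l.pos
SET l.pos = m.new_pos
WHERE l.id_list = '$id_pl';
DROP TEMPORARY TABLE pos_map;
Because the whole mapping is applied in a single UPDATE, other sessions see either the old numbering or the new one, never a half-renumbered list.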