MySQL: How to eliminate the delay between SELECT and UPDATE

I have a simple set up for assigning opponents in a game.
Basically, if the matchID is zero (the value comes from elsewhere), a new match needs to be created, so the code performs a MySQL SELECT on the record with the highest matchID to see whether someone is waiting for a match.
A player is waiting if the teamB slot is zero (not taken). If teamB has a value, no one is waiting and a new match must be created with this player as team A.
The code is as follows:
if ($matchID == 0)
{
    // Look at the most recent match to see whether a team B slot is free
    $teamBquery = $conn->query("SELECT matchID, teamBID, teamAID FROM challengeMatches ORDER BY matchID DESC LIMIT 1");
    $teamBarray = $teamBquery->fetch(PDO::FETCH_ASSOC);
    $teamBID = $teamBarray['teamBID'];
    $matchID = $teamBarray['matchID'];
    $teamAID = $teamBarray['teamAID'];
    if ($teamBID == 0) {
        // Someone is waiting: fill the open team B slot
        $newChallenge = $conn->query("UPDATE challengeMatches SET managerBID='$managerID', teamBID='$teamID', matchStatus=1 WHERE matchID='$matchID'");
    } else {
        // No one is waiting: create a new match with this player as team A
        $filler = 0;
        $matchID = $matchID + 1;
        $newChallenge = $conn->query("INSERT INTO challengeMatches (matchID, managerAID, managerBID, matchStatus, teamAID, teamBID) VALUES ('', '$managerID', '$filler', '$filler', '$teamID', '$filler')");
    }
}
My concern, as someone pretty inexperienced, is that there will be a delay between selecting the info and updating it, so technically two SELECTs might return the same matchID to be used. And even using matchID+1 as a variable is risky, because it could fall out of sync with the auto-increment matchID created in the database.
Are my fears founded, or is the code so fast that the probability is not worth worrying about?
If I should be worried, what can I do?

You firstly need to decide whether you really need a solution to this: the time gap should be so small that, unless you run a massive site, the likelihood of two requests selecting the same matchID is remote.
However, there are really three areas you can look at to close the gap:
Locking - look at SELECT ... FOR UPDATE or SELECT ... LOCK IN SHARE MODE (see the sketch below)
Transactions - wrap the SELECT and the following UPDATE/INSERT in a single transaction so they behave as one unit
Refactoring - update with a LIMIT first and then select the updated range
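As a rough sketch of the first two points combined, assuming the challengeMatches table and columns from the question and an InnoDB table (row locks need InnoDB); the @-variables are placeholders for the PHP values:
START TRANSACTION;
-- Lock the most recent match row so a concurrent request blocks here until we commit
SELECT matchID, teamBID FROM challengeMatches ORDER BY matchID DESC LIMIT 1 FOR UPDATE;
-- If teamBID = 0, claim the open slot; otherwise INSERT a new match instead
UPDATE challengeMatches SET managerBID = @managerID, teamBID = @teamID, matchStatus = 1 WHERE matchID = @matchID;
COMMIT;
Because the SELECT takes a row lock, a second request running the same code waits at the SELECT until the first transaction commits, so both can never see the same open slot.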

Related

How to update the balance with a random value (MySQL) when a user registers on the website?

For one of my courses, I'm trying to create a banking system website using mysqldb, and to write code that updates a user's balance with a random value during registration, so the balance does not depend on the information the user entered when registering. I want the value to be inserted into the right spot in the table, only if that spot is NULL.
I used the code below:
$cursor = $MySQLdb->prepare("UPDATE users SET Balance=(Select FLOOR(0+ RAND() * 10000)) WHERE Balance=null AND userID=<userID>;")
I hope I was understood.
Thanks in advance
First of all, I would never put a calculation in a query string ;)
Also, don't overcomplicate the rand() function. Take a look at the docs: rand() function docs
Note as well that Balance=null never matches anything in SQL; you have to write Balance IS NULL.
And lastly, think about whether it is a good idea to leave the balance at NULL. Maybe you could use something like 1 instead (only if it is not possible for someone to actually have 1 money!).
Do something like:
$balance_variable = rand(5000, 10000);       // random starting balance
$userID_variable = /*specify it somehow*/;   // the id of the newly registered user
$cursor = $MySQLdb->prepare("UPDATE users SET Balance=? WHERE Balance IS NULL AND userID=?");
$cursor->bind_param('ii', $balance_variable, $userID_variable); // bind both values as integers
$cursor->execute();

Arma 2 DayZ Epoch SQL dead body cleaner

I have been trying to figure out a way to do something like this: Delete all records except the most recent one?
But I have been unable to apply it to my circumstance.
My circumstance:
https://gyazo.com/178b2493e42aa4ec4e1a9ce0cbdb95d3
SELECT * FROM dayz_epoch.character_data;
CharacterID, PlayerUID, InstanceID, Datestamp, LastLogin, Alive, Generation
5 |76561198068668633|11|2016-05-31 18:21:37|2016-06-01 15:58:03|0|1
6 |76561198068668633|11|2016-06-01 15:58:20|2016-10-08 21:30:36|0|2
7 |76561198068668633|11|2016-10-08 21:30:52|2016-10-09 18:59:07|1|3
9 |76561198010759031|11|2016-10-08 21:48:32|2016-10-08 21:53:31|0|2
10|76561198010759031|11|2016-10-08 21:53:55|2016-10-09 19:07:28|1|3
(Look at the image above.) I am currently trying to make a better method for deleting dead bodies from my database for my DayZ Epoch server. I need code to delete rows where Alive = 0 if that same PlayerUID has another row where Alive = 1.
Alternatively, the code could just delete all rows except the most recent one for each PlayerUID. I hope this makes sense; it's hard to explain, and the first link explains it better than I can.
But basically, I want to delete any dead player that now has an alive player with the same PlayerUID. If I were better at coding, I could see many columns I could use, like PlayerUID (a must), Datestamp, Alive, and Generation. I probably only need two of those, one being PlayerUID.
Thanks a bunch.
The easiest to me seems like it would be something like: SORT by PlayerUID AND FOR EACH PlayerUID DELETE ALL EXCEPT(?) newest Datestamp.
This would keep the player stats from their dead body in case they do not create a new character before this script is called.
So basically, you need to be sure that on an insert of a player (or an update of Alive to 1), you remove all previous rows (just in case; normally there should be only one) with the same PlayerUID as the new one.
The easiest way is to create a trigger that runs before the insert (and on UPDATE too, if it is possible to revive a player by updating Alive to 1), using the UID of the new player to run a delete on the table for that specific UID. It is that simple ;)
For the trigger, this should look like this:
CREATE TRIGGER CLEAR_PLAYER
BEFORE INSERT ON dayz_epoch.character_data
FOR EACH ROW
  DELETE FROM dayz_epoch.character_data
  WHERE PlayerUID = NEW.PlayerUID
    AND Alive = 0; -- just in case, but what will happen if there were a row with Alive = 1?
This will be executed before the insert into the table dayz_epoch.character_data (so it won't remove the new row). It removes every row with the same PlayerUID as the inserted row. If you want some extra safety, keep the AND Alive = 0 condition.
Edit:
I haven't written a trigger in a long time, but I use the official docs as a reminder. Take a look if you need.
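If MySQL complains that the trigger cannot modify the table it fires on, the same delete can be run from your cleanup script instead. And for dead bodies that are already in the table (the trigger only affects rows inserted after it is created), a one-off self-join delete along these lines should work, using the table and column names from the question:
DELETE dead
FROM dayz_epoch.character_data AS dead
JOIN dayz_epoch.character_data AS alive
  ON alive.PlayerUID = dead.PlayerUID
 AND alive.Alive = 1
WHERE dead.Alive = 0; -- remove every dead row whose PlayerUID also has a living row
This only touches players who currently have a living character, so the stats on a dead body are kept until that player spawns a new character, as you wanted.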

Using logic within an update and returning updated fields using as few queries as possible

I'm writing a video game in JavaScript on a server that saves info in a MySQL database, and I am trying to build my first effect, attached to a simple healing potion item. To implement the effect I look up a spells table by spell_id and get a field called effect containing the code to execute on my server; I use the eval() function to execute the code in the string. In order to optimize the game I want to run as few queries as possible. For this instance (and I think the answer will help me evaluate other similar effects) I want to update the 'player' table, which contains a stat column like 'health', and add n to it, where n is a decreasing number: 15, then 250 ms later 14, then 13, until n = 1. The net effect is a large jump in health followed by smaller and smaller increases. If the player's health reaches his maximum allowed limit the effect should stop immediately...
but I'd like to do a single UPDATE statement for each increase, rather than a SELECT and an UPDATE every 250 ms to check whether health > max_health and make sure the player's health doesn't go above his max health. So, to digress a bit, I'd like a single update that, given this data
player_id health max_health
========= ====== ==========
1 90 100
will add 15 to health unless (max_health-health) < 15... in this case it should only add 10.
An easier solution might be if I could just return health and max_health after each update. I don't mind doing a final check, in pseudo code:
if health > max_health
update health set health = max health
So if anyone could explain how to return fields after an update that would help.
Or if anyone could show how to use logic within the update that would also help.
Also, If I didn't give enough information I'm sorry I'd be glad to provide more I just didn't want to make the question hard to understand.
UPDATE player
SET health = LEAST(max_health, health + <potion effect>)
WHERE player_id = ...
EDIT
For your other question: normally, I think UPDATE returns the number of affected rows, so if you try to update health when health is already equal to max_health, it should return 0.
I'd know how to do this in PHP, for example, but you said you were using JavaScript... so?
http://dev.mysql.com/doc/refman/5.6/en/update.html
UPDATE returns the number of rows that were actually changed. The
mysql_info() C API function returns the number of rows that were
matched and updated and the number of warnings that occurred during
the UPDATE.
Use the ANSI-standard CASE expression, or the MySQL LEAST() function as in the other answer:
UPDATE player
SET health = CASE WHEN health + [potion] > max_health
                  THEN max_health
                  ELSE health + [potion]
             END
WHERE player_id = [player_id]
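On returning fields after an update: MySQL's UPDATE has no RETURNING clause, so the usual pattern is the UPDATE followed by a cheap primary-key SELECT when you actually need the new values (the 15 and the player_id = 1 below are just placeholder values for the potion step and the player):
UPDATE player
SET health = LEAST(max_health, health + 15)
WHERE player_id = 1; -- affected rows = 0 means the player was already at max_health
SELECT health, max_health
FROM player
WHERE player_id = 1; -- single primary-key lookup to read back the new values
Since the affected-rows count already tells you when nothing changed, you can also stop the 250 ms loop as soon as an UPDATE reports 0 rows, without selecting anything.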

Recursive MySQL trigger which calls the same table and the same trigger

I'm writing a simple forum for a PHP site, and I'm trying to calculate the post counts for each category. A category can belong to another category, with root categories defined as having a NULL parent_category_id. With this architecture a category can have an unlimited number of sub-categories, and the table structure stays fairly simple.
To keep things simple, let's say the categories table has 3 fields: category_id, parent_category_id, post_count. I don't think the rest of the database structure is relevant, so I'll leave it out for now.
Another trigger updates the categories table, causing this trigger to run. What I want is for it to update the post count and then recursively go through each parent category, increasing its post count.
DELIMITER $$
CREATE TRIGGER trg_update_category_category_post_count BEFORE UPDATE ON categories FOR EACH ROW
BEGIN
IF OLD.post_count != NEW.post_count THEN
IF OLD.post_count < NEW.post_count THEN
UPDATE categories SET post_count = post_count + 1 WHERE categories.category_id = NEW.parent_category_id;
ELSEIF OLD.post_count > NEW.post_count THEN
UPDATE categories SET post_count = post_count - 1 WHERE categories.category_id = NEW.parent_category_id;
END IF;
END IF;
END $$
DELIMITER ;
The error I'm getting is:
#1442 - Can't update table 'categories' in stored function/trigger because it is already used by statement which invoked this stored function/trigger.
I figure you could do a COUNT() on each page load to calculate the total posts, but on large forums this will slow things down, as discussed many times on here (e.g. Count posts with php or store in database). Therefore, for future proofing, I'm storing the post count in the table. To go one step further, I thought I'd use triggers to update these counts rather than PHP.
I understand there are limitations in MySQL on running triggers on the same table that's being updated, which is what is causing this error (i.e. to stop an infinite loop), but in this case surely the loop would stop once it reaches a category with a NULL parent_category_id? There must be some kind of solution, whether it's adjusting this trigger or something different entirely. Thanks.
EDIT I appreciate this might not be the best way of doing things, but it is the best thing I can think of. I suppose if you changed a parent's category to another it would mess things up, but this could be fixed by another trigger which re-syncs everything. I'm open to other suggestions on how to solve this problem.
I usually recommend against using triggers unless you really, really need to; recursive triggers are a great way of introducing bugs that are really hard to reproduce, and require developers to understand the side effects of an apparently simple action - "all I did was insert a record into the categories table, and now the whole database has locked up". I've seen this happen several times - nobody did anything wrong or stupid, it's just a risk you run with side effects.
So, I would only resort to triggers once you can prove you need to; rather than relying on the opinion of strangers based on generalities, I'd rig up a test environment, drop in a few million test records, and try to optimize the "calculate posts on page load" solution so it works.
A database design that might help with that is Joe Celko's "nested set" schema - this takes a while to get your head round, but can be very fast for querying.
Only once you know you have a problem that you really can't solve other than by pre-computing the post count would I consider a trigger-based approach. I'd separate out the "post counts" into a separate table; that keeps your design a little cleaner, and should get round the recursive trigger issue.
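To make that last suggestion concrete, here is a rough sketch of what the separate-table variant could look like; the category_post_counts table, the trigger name, and the assumption that posts carries a category_id column are all mine (the question doesn't show that part of the schema):
CREATE TABLE category_post_counts (
  category_id INT PRIMARY KEY,
  post_count  INT NOT NULL DEFAULT 0
);
DELIMITER $$
CREATE TRIGGER trg_post_insert_count AFTER INSERT ON posts
FOR EACH ROW
BEGIN
  DECLARE cat INT;
  SET cat = NEW.category_id;
  -- Walk up the category tree, bumping the counter for every ancestor
  WHILE cat IS NOT NULL DO
    INSERT INTO category_post_counts (category_id, post_count)
      VALUES (cat, 1)
      ON DUPLICATE KEY UPDATE post_count = post_count + 1;
    SET cat = (SELECT parent_category_id FROM categories WHERE category_id = cat);
  END WHILE;
END $$
DELIMITER ;
Since the trigger only writes to category_post_counts and merely reads categories, it never modifies the table its invoking statement is using, so the #1442 error from the question doesn't come up.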
The easiest solution is to fetch all the posts per category and afterwards link them together using a scripting/programming language.
For instance, in PHP:
<?php
// category: id, parent, name
// posts: id, title, message
$sql = "SELECT category.*, COUNT(posts.id) AS count
        FROM category
        LEFT JOIN posts ON posts.cat = category.id
        GROUP BY category.id";
$query = mysql_query($sql);

// Index each category under its parent id (root categories under 0)
$result = array();
while ($row = mysql_fetch_assoc($query)) {
    $parent = $row['parent'] === null ? 0 : $row['parent'];
    $result[$parent][] = $row;
}

recur_count(0);
var_dump($result);

// Adds each subtree's post count onto its parent and returns the total
// number of posts under the given parent id.
function recur_count($parent) {
    global $result;
    $total = 0;
    foreach ($result[$parent] as $id => $o) {
        if (isset($result[$o['id']])) {
            $result[$parent][$id]['count'] += recur_count($o['id']);
        }
        $total += $result[$parent][$id]['count'];
    }
    return $total;
}
OK, so for anyone wondering how I solved this: I used a mixture of both triggers and PHP.
Instead of getting each category to update its parent, I've left it at the following structure: a post updates its thread, and then the thread updates its category with the post count.
I've then used PHP to pull all categories from the database and loop through, adding up each post count value using something like this:
function recursiveCategoryCount($categories)
{
    $count = $categories['category']->post_count;
    if (!is_null($categories['children']))
        foreach ($categories['children'] as $child)
            $count += recursiveCategoryCount($child);
    return $count;
}
At worst, instead of PHP adding up every post on every page load, it only adds up the total category posts (depending on which node of the tree you are in). This should be very efficient, as you're reducing the total calculations from thousands to tens or hundreds depending on your number of categories. I would also recommend running a script every week to recalculate the post counts in case they become out of sync, much like phpBB does. If I run into issues using triggers then I'll move that functionality into the code. Thanks for everyone's suggestions.

Is it better to use database polling or events for the following system?

I'm working on an ordering system that works exactly the way Netflix's service works (see end of this question if you're not familiar with Netflix). I have two approaches and I am unsure which approach is the right one; one relies on database polling and the other is event driven.
The following two approaches assume this simplified schema:
member(id, planId)
plan(id, moviesPerMonthLimit, moviesAtHomeLimit)
wishlist(memberId, movieId, rank, shippedOn, returnedOn)
Polling: I would run the following count queries in wishlist
Count movies shippedThisMonth (where shippedOn IS NOT NULL #memberId)
Count moviesAtHome (where shippedOn IS NOT NULL, and returnedOn IS NULL #memberId)
Count moviesInList (#memberId)
The following function will determine how many movies to ship:
moviesToShip = Min(moviesPerMonthLimit - shippedThisMonth, moviesAtHomeLimit - moviesAtHome, moviesInList)
I will loop through each member, run the counts, and loop through their list as many times as moviesToShip. Seems like a pain in the neck, but it works.
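For what it's worth, the three counts for one member can come back in a single query, roughly like this; the first-of-the-month filter is my own assumption, since the schema above doesn't say how "this month" is tracked, and @memberId stands in for the member being processed:
SELECT
  SUM(shippedOn IS NOT NULL AND shippedOn >= DATE_FORMAT(CURDATE(), '%Y-%m-01')) AS shippedThisMonth,
  SUM(shippedOn IS NOT NULL AND returnedOn IS NULL) AS moviesAtHome,
  COUNT(*) AS moviesInList
FROM wishlist
WHERE memberId = @memberId;
moviesToShip then follows directly from the Min(...) formula above.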
Event Driven: This approach involves adding an extra column, "queuedForShipping", and setting it to 0 or 1 every time an event takes place. I will do the following counts:
Count movies shippedThisMonth (where shippedOn IS NOT NULL #memberId)
Count moviesAtHome (where shippedOn IS NOT NULL, and returnedOn IS NULL #memberId)
Count moviesQueuedForShipping (where queuedForShipping = 1, #memberId)
Instead of using Min, I have to use the following if statements:
If moviesPerMonthLimit > (shippedThisMonth + moviesQueuedForShipping)
AND moviesAtHomeLimit > (moviesAtHome + moviesQueuedForShipping)
If both conditions are true, I will select a row from wishlist where queuedForShipping = 0 and set its queuedForShipping to 1. I will run this function every time someone adds to, deletes from, or reorders their list. When it's time to ship, I would select the member's rows where queuedForShipping = 1. I would also run this when updating shippedOn and returnedOn.
Approach one is simple. It also allows members to mess around with their ranks until someone decides to run the polling, so what to ship is always decided by rank. But people keep telling me polling is bad.
The event driven approach is self-sustaining, but it seems like a waste of time to ping the database with all those counts every time a person changes their list. I would also have to write to the queuedForShipping column. It also means that when a member re-ranks their list and has pending shipments (shippedOn IS NULL, queuedForShipping = 1) I would have to update those rows and re-set queuedForShipping based on the new ranks. (What if someone added 5 movies and then suddenly went to change the order? queuedForShipping would already be set to 1 on the first two movies he or she added.)
Can someone please give me their opinion on the best approach here and the cons/advantages of polling versus event driven?
Netflix is a monthly subscription service where you create a movie list, and your movies are shipped to you based on your service plan limits.
Based on what you described, there's no reason to keep the data "ready to use" (event) when you can create it very easily when needed (poll).
Reasons to cache it:
If you needed to display the next item to the user.
If the detailed data was being removed due to some retention policy.
If the polling queries were too slow.