PHP prevent MySQL race condition

I am trying to prevent users joining at the "same time" from selecting the same available admin.
What I'm doing is:
$conn->beginTransaction();
$sth = $conn->query("SELECT admin, room FROM admins WHERE live = 1 AND available = 1 ORDER BY RAND() LIMIT 1 FOR UPDATE");
$free_admin = $sth->fetch();
if (!empty($free_admin)) {
    $conn->query("UPDATE admins SET available = 0 WHERE room = " . $free_admin['room']);
    // ...
    $conn->commit();
} else {
    $conn->rollBack();
}
Unfortunately it's not really working. When there is high traffic, many users end up selecting the same free admin, which causes an issue.
How can I lock a selected row so that only one user can read and update it before any other user can read it?

In the UPDATE statement, use
WHERE room = ... AND available = 1
After the query, check MySQL's affected-rows count to verify that you actually changed the availability successfully, and if not, restart.
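A minimal sketch of that pattern with PDO, assuming the admins table from the question and a PDO connection in $conn (the claim_free_admin helper and the retry loop are illustrative, not part of the original code):

// Sketch: claim a free admin and rely on the affected-row count to detect
// whether another request grabbed the same row first.
function claim_free_admin(PDO $conn, $max_attempts = 3)
{
    for ($attempt = 0; $attempt < $max_attempts; $attempt++) {
        $row = $conn->query(
            "SELECT admin, room FROM admins
             WHERE live = 1 AND available = 1
             ORDER BY RAND() LIMIT 1"
        )->fetch(PDO::FETCH_ASSOC);

        if (!$row) {
            return null; // no free admin at all
        }

        // The extra "AND available = 1" makes the claim atomic: only one
        // concurrent request can flip the flag for this room.
        $upd = $conn->prepare("UPDATE admins SET available = 0 WHERE room = ? AND available = 1");
        $upd->execute(array($row['room']));

        if ($upd->rowCount() === 1) {
            return $row; // we won the race
        }
        // Someone else claimed this admin between our SELECT and UPDATE; try again.
    }
    return null;
}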

Related

Can't wrap my head around MySQL statement

I have two tables:
cache and main
In cache there are a lot of fields; in main a little less. A UNION is not going to work because of the unequal number of columns.
cache
client - file - target - many other columns
main
client - file - target - few other columns
From cache I would like all columns for the rows where main.target LIKE '%string%', cache.client = main.client and cache.file = main.file.
For these particular records, target, client and file are always the same in main and cache.
I just can't get my head around this, but then again MySQL never was my strongest point.
Thank you very much in advance!
In the end, combining the two SELECT statements with a UNION made things very complicated, for the simple reason that there were countless other queries, some without UNION, that all had to be processed by the same end routine presenting the results. As this was only a one-time query and time wasn't really an issue, in the end I just ran a SELECT on each of the two tables and combined the results by checking whether a certain field was present. If it wasn't, the remaining results had to be fetched from the cache table; if it was, from the main table.
I actually wonder whether this solution is faster, slower or just as fast.
if (!isset($row['current']))
{
    $field = $row['field'];
    $sqlcache = "SELECT * FROM " . $dbtable . " WHERE (client = '$sqlclient' AND file = '$sqlfile' AND field = '$field')";
    $resultcache = $conn->query($sqlcache);
    if (!$resultcache)
    {
        die($conn->error);
    }
    $rowcache = $resultcache->fetch_assoc();
    $currenttarget = $rowcache['current'];
    $context = $rowcache['context'];
    $dirtysource = $rowcache['dirtysource'];
    $stringid = $rowcache['stringid'];
    $limit = $rowcache['maxlength'];
    $locked = $rowcache['locked'];
    $filei = $rowcache['filei'];
}
else
{
    $currenttarget = $row['current'];
    $context = $row['context'];
    $dirtysource = $row['dirtysource'];
    $stringid = $row['stringid'];
    $limit = $row['maxlength'];
    $locked = $row['locked'];
    $filei = $row['filei'];
}
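For reference, the single-query form the question describes would be an INNER JOIN rather than a UNION. This is only a sketch, assuming the cache/main table and column names from the question and a mysqli connection in $conn; the $searchstring variable stands in for the '%string%' value:

// Sketch: all cache columns for rows whose matching main row has a target
// containing the search string. Assumes the schema described in the question.
$searchstring = 'string'; // illustrative
$sqljoin = "SELECT cache.*
              FROM cache
              INNER JOIN main
                      ON main.client = cache.client
                     AND main.file = cache.file
             WHERE main.target LIKE CONCAT('%', ?, '%')";
$stmt = $conn->prepare($sqljoin);
$stmt->bind_param("s", $searchstring);
$stmt->execute();
$resultjoin = $stmt->get_result(); // requires mysqlnd
while ($rowjoin = $resultjoin->fetch_assoc())
{
    // each $rowjoin holds the cache columns for one matching record
}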

phpMyAdmin mysql database has huge traffic

I'm doing a project for which I have made an Android application that communicates with a MySQL database (managed via phpMyAdmin). The Android app sends requests to a PHP file, which in turn communicates with the database. These PHP files are on the same server (000webhost). I have a bit of a problem though: sometimes the connection doesn't work (e.g. the data doesn't come through, or I get a time-out error or something).
I have looked at the data traffic in phpMyAdmin and I think it is abnormally high (I'm not sure). The app is used by about 10 people and the total should be about 1000 queries a day. The queries are simple SELECT and UPDATE actions. Yet when I look at the data traffic (status) in phpMyAdmin, it says there has been 2.2 TiB of data traffic in total, which I think is ridiculous. phpMyAdmin also gives a lot of warnings like "the rate of reading the first index entry is high" and "the rate of opening tables is high". And when I look at the monitor in phpMyAdmin, it shows random peaks of 150 MiB of data sent. Everything is incredibly high.
This is an example of a query I send to the database via PHP:
<?php
$con = mysqli_connect("*************");

$ID = $_POST["ID"];
$naam = $_POST["NAME"];
$inbier = $_POST["INBIER"];
$outbier = $_POST["OUTBIER"];
$pof = $_POST["POF"];
$adjust = $_POST["ADJUST"];

$statement = mysqli_prepare($con, "SELECT * FROM Bierlijst WHERE NAME = ?");
mysqli_stmt_bind_param($statement, "s", $naam);
mysqli_stmt_execute($statement);
mysqli_stmt_store_result($statement);
mysqli_stmt_bind_result($statement, $ID, $naam, $inbier, $outbier, $pof, $adjust);

$response = array();
$response["success"] = false;

while (mysqli_stmt_fetch($statement)) {
    $response["success"] = true;
    $response["NAME"] = $naam;
    $response["INBIER"] = $inbier;
    $response["OUTBIER"] = $outbier;
    $response["POF"] = $pof;
    $response["ADJUST"] = $adjust;
}

echo json_encode($response);
?>
Is it possible that I'm running bad queries that are failing and looping somewhere?
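For what it's worth, one way to keep each request as small as possible is to select only the columns the response needs instead of SELECT * with mysqli_stmt_store_result(), and to close the statement and connection when done. This is only a sketch based on the script above (it will not by itself explain a 2.2 TiB total):

// Sketch: fetch only the needed columns and release resources explicitly.
// Table and column names are taken from the script above.
$statement = mysqli_prepare($con, "SELECT INBIER, OUTBIER, POF, ADJUST FROM Bierlijst WHERE NAME = ? LIMIT 1");
mysqli_stmt_bind_param($statement, "s", $naam);
mysqli_stmt_execute($statement);
mysqli_stmt_bind_result($statement, $inbier, $outbier, $pof, $adjust);

$response = array("success" => false);
if (mysqli_stmt_fetch($statement)) {
    $response["success"] = true;
    $response["NAME"] = $naam;
    $response["INBIER"] = $inbier;
    $response["OUTBIER"] = $outbier;
    $response["POF"] = $pof;
    $response["ADJUST"] = $adjust;
}
mysqli_stmt_close($statement);
mysqli_close($con);
echo json_encode($response);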

Update planned order - two committed modifications, only one saved

I need to update two things on one object: change the quantity (PLAF-GSMNG) and refresh the planned order via the function module 'MD_SET_ACTION_PLAF'.
I have successfully found a way to update each of them separately, but when I execute both solutions, the second modification is not saved to the database.
Do you know how I can change the quantity and set the action on the PLAF (planned order) table?
Do you know another function module to update only the quantity?
Maybe a parameter is missing?
It's as if the second object were locked (SM12 is empty, no sy-subrc = locked) and the modification is not committed.
I tried to:
change the order of the algorithm (refresh first, then change PLAF)
add, remove, and move the COMMIT WORK & COMMIT WORK AND WAIT statements
add DEQUEUE_ALL or DEQUEUE_EMPLAFE
This is the current code:
1) Read the data
lv_plannedorder = '00000000001'.
"Read PLAF data
SELECT SINGLE * FROM PLAF INTO ls_plaf WHERE plnum = lv_plannedorder.
2) Update Quantity data
" Standard configuration for FM MD_PLANNED_ORDER_CHANGE
CLEAR ls_610.
ls_610-nodia = 'X'. " No dialog display
ls_610-bapco = space. " BAPI type. Do not use mode 2 -> the action PLAF-MDACC will be automatically set to APCH by the FM
ls_610-bapix = 'X'. " Run BAPI
ls_610-unlox = 'X'. " Update PLAF
" Customize values
MOVE p_gsmng TO ls_plaf-gsmng. " Change quantity value
MOVE sy-datlo TO ls_plaf-mdacd. " Change by/datetime, because ls_610-bapco <> 2.
MOVE sy-uzeit TO ls_plaf-mdact.
CALL FUNCTION 'MD_PLANNED_ORDER_CHANGE'
EXPORTING
ecm61o = ls_610
eplaf = ls_plaf
EXCEPTIONS
locked = 1
locking_error = 2
OTHERS = 3.
" Already committed on the module function
" sy-subrc = 0
If I look at the PLAF table, I can see that the quantity has been updated. It's working :)
3) Refresh the BOM & change the action (MDACC) and other fields
CLEAR ls_imdcd.
ls_imdcd-pafxl = 'X'.
CALL FUNCTION 'MD_SET_ACTION_PLAF'
EXPORTING
iplnum = lv_plannedorder
iaccto = 'BOME'
iaenkz = 'X'
imdcd = ls_imdcd
EXCEPTIONS
illegal_interface = 1
system_failure = 2
error_message = 3
OTHERS = 4.
IF sy-subrc = 0.
COMMIT WORK.
ENDIF.
If I look at the table, there is no modification (only the modification from part 2 can be found).
Any idea?
Maybe because ls_610-bapco = space?
It should be possible to update the planned order quantity with MD_SET_ACTION_PLAF too, at least SAP Help tells us so. Why don't you use it like that?
Its call for changing the quantity could possibly look like this:
DATA: lt_acct LIKE TABLE OF mdaccto,
      ls_acct LIKE LINE OF lt_acct.

ls_acct-accto = 'BOME'.
APPEND ls_acct TO lt_acct.
ls_acct-accto = 'CPOD'.
APPEND ls_acct TO lt_acct.

is_mdcd-gsmng = 'value'. "updated quantity value

CALL FUNCTION 'MD_SET_ACTION_PLAF'
  EXPORTING
    iplnum            = iv_plnum
    iaenkz            = 'X'
    ivbkz             = 'X'
    imdcd             = is_mdcd "filled with your BOME-related data + new quantity
  TABLES
    tmdaccto          = lt_acct
  EXCEPTIONS
    illegal_interface = 1
    system_failure    = 2
    error_message     = 3.
So there is no need for a separate call to MD_PLANNED_ORDER_CHANGE anymore, and no more problems with the update.
I used the word possibly because I didn't find any example of this FM call on the Web (and the SAP documentation is quite ambiguous), so I propose this solution just as is, without verification.
P.S. Possible actions are listed in the T46AS table, and the possible impact of the imdcd fields on the order can be checked in the MDAC transaction, which is more or less a GUI equivalent of this FM for a single order.

Too many SQL connections error due to long polling

I have designed a coding platform just like Spoj and Codeforces for competitions to be organised in my college on LAN.
I have used long polling there so that any announcements from the Admin can be broadcast to all users with a JavaScript alert message. When anything is posted on the forum, the Admin also gets a notification.
But with just 16 users (including the 1 Admin) accessing the site, the server went down showing "too many SQL connections". I restarted my laptop (the server) and it worked for a while, then went down again, giving the same error message as before.
When I removed both long-poll processes, everything continued smoothly.
Server-side code for long-poll:
include 'dbconnect.php';
$old_ann_id = $_GET['old_ann_id'];

$resultann = mysqli_query($con, "SELECT cmntid FROM announcements ORDER BY cmntid DESC LIMIT 1");
while ($rowann = mysqli_fetch_array($resultann)) {
    $last_ann_id = $rowann['cmntid'];
}

while ($last_ann_id <= $old_ann_id) {
    usleep(10000000);
    clearstatcache();
    $resultann = mysqli_query($con, "SELECT cmntid FROM announcements ORDER BY cmntid DESC LIMIT 1");
    while ($rowann = mysqli_fetch_array($resultann)) {
        $last_ann_id = $rowann['cmntid'];
    }
}

$response = array();
$response['msg'] = 'new';
$response['old_ann_id'] = $last_ann_id;

$resultann = mysqli_query($con, "SELECT announcements FROM announcements WHERE cmntid = $last_ann_id");
while ($rowann = mysqli_fetch_array($resultann)) {
    $response['announcement'] = $rowann['announcements'];
}

echo json_encode($response);
Max connections is defined in MySQL. I think the default is 100 or 151 connections, depending on the MySQL version. You can see the value under "Server variables and settings" in phpMyAdmin (or directly by executing SHOW VARIABLES LIKE "max_connections";).
If that is set to something very low (say 10) and you have (say) 15 users, you will hit the limit rapidly. You are giving each long-polling script its own connection, and that connection probably sits open until the long-polling script ends. You could likely reduce this by having the script disconnect after each time it checks the database and reconnect the next time it checks (i.e. if your long-polling script checks the db every 5 seconds, you probably have well over 4.5 of those 5 seconds where there is a connection to the db that is not being used).
Even with a larger number of connections, if you trigger the AJAX polling multiple times per user, each user could hold several simultaneous connections. This is quite easy to do with a minor bug in your JavaScript.
Possibly worse, if you are using persistent connections you might leave connections open after the user has left the page that calls the long-polling script.
EDIT - update based on your script.
Note I am not sure exactly what your dbconnect.php include is doing. It might be possible to simply call connect/disconnect functions from that include, but in this example code I have just used the mysqli_close and mysqli_connect functions.
<?php
include 'dbconnect.php';
$old_ann_id = $_GET['old_ann_id'];

$resultann = mysqli_query($con, "SELECT MAX(cmntid) AS cmntid FROM announcements");
if ($rowann = mysqli_fetch_array($resultann))
{
    $last_ann_id = $rowann['cmntid'];
}

$timeout = 0;
while ($last_ann_id <= $old_ann_id and $timeout < 6)
{
    $timeout++;
    // close the connection while sleeping so it is not held open unused
    mysqli_close($con);
    usleep(10000000);
    clearstatcache();
    $con = mysqli_connect("myhost", "myuser", "mypassw", "mybd");
    $resultann = mysqli_query($con, "SELECT MAX(cmntid) AS cmntid FROM announcements");
    if ($rowann = mysqli_fetch_array($resultann))
    {
        $last_ann_id = $rowann['cmntid'];
    }
}

if ($last_ann_id > $old_ann_id)
{
    $response = array();
    $response['msg'] = 'new';
    $response['old_ann_id'] = $last_ann_id;
    $resultann = mysqli_query($con, "SELECT cmntid, announcements FROM announcements WHERE cmntid > $old_ann_id ORDER BY cmntid");
    while ($rowann = mysqli_fetch_array($resultann))
    {
        $response['announcement'][] = $rowann['announcements'];
        $response['old_ann_id'] = $rowann['cmntid'];
    }
    mysqli_close($con);
    echo json_encode($response);
}
else
{
    echo "No announcements - resubmit";
}
?>
I have added a counter to the main loop, so it will drop out of the loop after six iterations whether or not anything has been found. This way, even if someone leaves the page, the script will only keep running for a short time afterwards (a minute at most). You will have to amend your JavaScript to catch this response and resubmit the AJAX call.
Also, I have changed the announcement in the response to an array. This way, if there are several announcements while the script is running, all of them will be brought back.

Mysql Server has gone away error on PHP script

I've written a script to batch-process domains and retrieve data on each one. For each batch, it connects to a remote page via cURL and retrieves the required data for 30 domains at a time.
This page typically takes between 2 and 3 minutes to load and return the cURL result; at that point, the details are parsed and placed into an array (the page_rank_tools function).
When running this script via cron, I keep getting the error 'MySQL server has gone away'.
Can anyone tell me if I'm missing something obvious that could be causing this?
// script dies after 4 mins in time for next cron to start
set_time_limit(240);
include('../include_prehead.php');

$sql = "SELECT id, url FROM domains WHERE (provider_id = 9 OR provider_id = 10) AND google_page_rank IS NULL LIMIT 30";
$result = mysql_query($sql);
$row = mysql_fetch_assoc($result);

do {
    $url_list[$row['id']] = $row['url'];
} while ($row = mysql_fetch_assoc($result));

// curl domain information page - typically takes about 3 minutes
$pr = page_rank_tools($url_list);

foreach ($pr AS $p) {
    // each domain
    if (isset($p['google_page_rank']) && isset($p['alexa_rank']) && isset($p['links_in_yahoo']) && isset($p['links_in_google'])) {
        $sql = "UPDATE domains SET google_page_rank = '" . $p['google_page_rank'] . "' , alexa_rank = '" . $p['alexa_rank'] . "' , links_in_yahoo = '" . $p['links_in_yahoo'] . "' , links_in_google = '" . $p['links_in_google'] . "' WHERE id = '" . $p['id'] . "'";
        mysql_query($sql) or die(mysql_error());
    }
}
Thanks
CJ
This happens because the MySQL connection has its own timeout, and while you are parsing your pages it expires. You can try to increase this timeout with
ini_set('mysql.connect_timeout', 300);
ini_set('default_socket_timeout', 300);
(as mentioned in MySQL server has gone away - in exactly 60 seconds)
Or just call mysql_connect() again.
Because the cURL call takes a long time, you could consider reconnecting to your database before entering the update loop.
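A sketch of that suggestion applied to the script above, using the same legacy mysql_* API as the question; the connection credentials are placeholders and the UPDATE is shortened for brevity:

// Sketch: open a fresh connection after the slow cURL step, just before the
// UPDATE loop, since the original connection may have timed out by then.
$pr = page_rank_tools($url_list); // slow step, typically 2-3 minutes

$link = mysql_connect('localhost', 'db_user', 'db_password', true); // true = force a new link
mysql_select_db('db_name', $link);

foreach ($pr AS $p) {
    if (isset($p['google_page_rank'])) { // remaining isset() checks omitted for brevity
        $sql = "UPDATE domains SET google_page_rank = '" . $p['google_page_rank'] . "' WHERE id = '" . $p['id'] . "'";
        mysql_query($sql, $link) or die(mysql_error($link));
    }
}
mysql_close($link);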
There are many reasons why this error occurs. See the list here; it may be something you can fix quite easily:
MySQL Server has gone away