How to prevent a SELECT in MySQL while another process updates the row

I have a job dispatcher in Laravel. Each running job selects a row from MySQL and then updates a status field to mark that the row is being worked on.
But when multiple processes are running, they can select the same rows before the status has been updated.
I tried lockForUpdate() and DB::raw('LOCK TABLES accounts WRITE'), but neither helped:
$lock = Cache::lock('gettingWorker', 5);
$lock->block(6, function () use ($acc) {
    DB::raw('LOCK TABLES accounts WRITE');
    $this->worker = Accounts::getFreeAccount()->lockForUpdate()->firstOrFail();
    $this->worker->updateStatus('WORKING');
    $lock->release();
});
Laravel's atomic locks don't seem to work either. Only a sleep(1) helps in this case, but that's not an option, because thousands of jobs run every hour.

Try starting a transaction and selecting the row with SELECT ... FOR UPDATE. Row locks taken with FOR UPDATE last only as long as the surrounding transaction, and under autocommit that is just until the statement finishes, which is why lockForUpdate() on its own appeared to do nothing. In Laravel, run the lockForUpdate() query and the status update inside one DB::transaction(...) call.
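A minimal sketch of the pattern in plain SQL (the status column and its 'FREE'/'WORKING' values are assumptions based on the question; getFreeAccount() presumably wraps a similar filter):

START TRANSACTION;

-- Concurrent workers block on this row lock, so no two of them
-- can claim the same account.
SELECT id INTO @worker_id
FROM accounts
WHERE status = 'FREE'
ORDER BY id
LIMIT 1
FOR UPDATE;

UPDATE accounts SET status = 'WORKING' WHERE id = @worker_id;

COMMIT;

On MySQL 8.0+ you can also append SKIP LOCKED after FOR UPDATE so that workers skip rows that are already locked instead of waiting for them.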

Related

How does Prisma return the last inserted row reliably, and can I mimic this with a procedure call?

I'm using Prisma in an Azure Function App, so it is serverless with one Prisma client (const prisma = new PrismaClient();), and I don't close the connection after a given function run.
In the beginning I created records with a prisma.myTable.create({... call, where the response was the created row. From that I took the auto-incremented id, which I needed for queries against other tables.
As the project progressed, I was given a MySQL procedure call that creates the proper entry in the table. It removed the need for some variables on my side, but introduced a new problem: reliably getting the created row's id. I call the procedure like this:
const rawQuery = `call EM_ADDMSG('${myVar1}', '${myVar2}', '${myVar3}', NULL, NULL, NULL, '${myVar4}', @id);`;
await prisma.$executeRaw(rawQuery);
We tried two approaches. The first was a session variable, and now I understand why it fails:
const rawId = 'select @id';
const response = await prisma.$queryRaw(rawId);
return response[0]['@id'];
If multiple requests arrive simultaneously, the response sometimes contains the same id. I believe this is expected: I have only one session, so even though I made two inserts, I can still get back the previous @id from that session.
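To illustrate the race on the shared session (the SET lines stand in for what the procedure does with its OUT parameter, and the ids are made up):

-- one pooled connection, shared by two concurrent requests
SET @id = 101;  -- request A: procedure stores its new row id
SET @id = 102;  -- request B: procedure overwrites the same session variable
SELECT @id;     -- request A now reads 102, i.e. request B's id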
The second approach is to track the id in a separate table.
After the insert I read that table and delete the row. I used Prisma's delete() without querying first, because it also returns the deleted row.
At a slow pace this works as well, but with multiple nearly simultaneous requests I run into an error where there is nothing left to delete. (There is another identifier in that table which I use to select the row.) I get the error:
An operation failed because it depends on one or more records that were required but not found. Record to delete does not exist.
It seems the sequence becomes insert, insert, delete, delete, and the second delete fails because the table is already empty.
The safest solution so far was to wrap the insert and the delete in await prisma.$transaction([...]), but then I get the error:
code: 1213, message: "Deadlock found when trying to get lock; try restarting transaction", state: "40001"
At least it is consistent, so there is no id mix-up.
Is there a solution to this problem that keeps the procedure? This is a messaging app, so there is a real chance that some requests arrive at the same time.

Combining 2 Queries with OR operator

I'm trying to insert something into a table and also delete from it at the same time, so my query looks like this:
$query = mysqli_query($connect,"SELECT * FROM inventory_item WHERE status = 'Unserviceable' OR DELETE * FROM inventory_item WHERE status = 'Available")
or die ("Error: Could not fetch rows!");
$count = 0;
I want to insert rows with Unserviceable status and at the same time delete rows with Available status, but it's not working.
I'm not really familiar with queries and am just starting out.
This is not valid SQL syntax.
If you want to issue two queries, one to INSERT and one to DELETE, you can send them as two separate calls to mysqli_query(). There is also mysqli_multi_query(), which allows multiple statements to be sent in a single call; see the PHP manual for details.
Finally, if you want the two separate queries to execute as a single unit (that is, if one of them fails then neither takes effect), you should research database transactions, which allow you to execute multiple queries and commit or roll back the entire set as a unit.
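As a sketch, the transactional version in plain SQL could look like this (the column list in the INSERT is hypothetical, since the question doesn't show the table's schema):

START TRANSACTION;
-- hypothetical columns; adjust to the real inventory_item schema
INSERT INTO inventory_item (item_name, status) VALUES ('example item', 'Unserviceable');
DELETE FROM inventory_item WHERE status = 'Available';
COMMIT;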

SQL query times out when updating using inner join?

This query dies when I try to execute it in PHP code and in phpMyAdmin.
UPDATE Inventory
INNER JOIN InventorySuppliers
ON Inventory.LocalSKU = InventorySuppliers.LocalSKU
SET Inventory.Integer2 = '1'
WHERE InventorySuppliers.SupplierSKU = '2D4027A6'
The error is:
1205 - Lock wait timeout exceeded; try restarting transaction
How can I prevent the lock timeout and/or solve this problem?
I can run this query correctly in Microsoft Access, and the phpMyAdmin database is a copy of that Access database. Increasing the execution time is not an option for me, as that would take too long for a one-record update. Here is the PHP code I'm using:
$data1 = array('Inventory.Integer2'=>$shipping);
$this->db->where('InventorySuppliers.SupplierSKU', $SupplierSKU);
$this->db->update('Inventory inner join InventorySuppliers on Inventory.LocalSKU = InventorySuppliers.LocalSKU', $data1);
$this->db->close();
return ($this->db->affected_rows() > 0);
Issue this command before running your UPDATE:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED
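Both statements have to run on the same connection, because SET SESSION only affects the current session:

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
UPDATE Inventory
INNER JOIN InventorySuppliers
ON Inventory.LocalSKU = InventorySuppliers.LocalSKU
SET Inventory.Integer2 = '1'
WHERE InventorySuppliers.SupplierSKU = '2D4027A6';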
Okay, this was interesting.
As I said, my MySQL database was a copy of an MS Access database. I don't know what happened, but the MySQL tables had no primary keys or indexes, although the original database has them. I tried assigning PKs and indexes, but MySQL returned an error. My final solution was:
1. Delete the table in MySQL.
2. Make sure the PK is assigned in the table structure (in my case, after importing from MS Access, I had to do it again).
3. Check the indexed fields (I have one field indexed) and make sure the index exists.
This did the trick for me; the same query now runs fine.
Thanks to all for the help. I hope these steps help someone in the future.
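For reference, re-creating the kind of keys described above might look like this (the index names are made up; the columns come from the query in the question):

ALTER TABLE Inventory ADD PRIMARY KEY (LocalSKU);
ALTER TABLE InventorySuppliers ADD INDEX idx_is_localsku (LocalSKU);
ALTER TABLE InventorySuppliers ADD INDEX idx_is_suppliersku (SupplierSKU);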

Can a MySQL table contain a table-specific metadata flag?

I have a cronjob that loops through and updates a MySQL table row by row. After the table is 'completed', I would like to execute the cronjob exactly one more time, to perform various cleanup activities.
In execute a cronjob exactly once, thaJeztah states:
It's best to set that value in the mysql database, e.g. needs_cleanup = 1. That way you can always find those records at a later time. Keeping it in the database allows you to recover, for example, if a cron job wasn't executed or failed halfway through the loop. – thaJeztah
I think this would be a good solution if it's possible, as in my case I only need to set the flag once a day. If it is possible, could someone point me to the SQL commands necessary to place a simple binary flag, with values 0/1, in a MySQL table?
UPDATE mytable SET needs_cleanup = 1
does it for all records of mytable. If you need it for a single record, add a WHERE condition, e.g.
UPDATE mytable SET needs_cleanup = 1
WHERE id = 1
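If the flag column doesn't exist yet, add it first; TINYINT(1) with a default of 0 is the usual way to store a 0/1 flag in MySQL:

ALTER TABLE mytable ADD COLUMN needs_cleanup TINYINT(1) NOT NULL DEFAULT 0;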

No data if queries are sent between TRUNCATE and SELECT INTO, using MySQL InnoDB

Using a MySQL DB, I am having trouble with a stored procedure and event timer that I created.
I made an empty table that gets populated with data from another via SELECT INTO.
Prior to populating it, I TRUNCATE the current data. The table is used to track only log entries that occur within 2 months of the current date.
This turns a 350k+ row log table into about 750 rows, which really speeds up reporting queries.
The problem is that if a client sends a query precisely between the TRUNCATE statement and the SELECT INTO statement (which has a high probability, considering the EVENT is set to run every minute), the query returns no rows...
I have looked into locking the table for reads while this PROCEDURE runs, but LOCK TABLES is not allowed in stored procedures.
Can anyone come up with a workaround that (preferably) doesn't require a remodel?
I really need to be pointed in the right direction here.
Thanks,
Max
I'd suggest an alternate approach: instead of truncating the table and then selecting into it, select your new data set into a new table. Then, using a single RENAME TABLE statement, rename the new table to the existing table's name and the existing table to some backup name:
RENAME TABLE existing_table TO backup_table, new_table TO existing_table;
This is a single, atomic operation, so it isn't possible for a client to read from the table after it is emptied but before it is re-populated.
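Put together, one refresh cycle might look like this (the source table name and the date filter are stand-ins for the log table and two-month window described in the question):

CREATE TABLE new_table LIKE existing_table;
INSERT INTO new_table
SELECT * FROM log_table                           -- hypothetical source table
WHERE log_date >= CURDATE() - INTERVAL 2 MONTH;   -- hypothetical filter
RENAME TABLE existing_table TO backup_table, new_table TO existing_table;
DROP TABLE backup_table;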
Alternatively, you could change the TRUNCATE to a DELETE FROM and wrap it in a transaction together with the statement that repopulates the table:
START TRANSACTION;
DELETE FROM YourTable;
INSERT INTO YourTable SELECT ...;
COMMIT;