SQL query times out when updating using inner join? - mysql

This query dies when I try to execute it in PHP code and in phpMyAdmin.
UPDATE Inventory
INNER JOIN InventorySuppliers
ON Inventory.LocalSKU = InventorySuppliers.LocalSKU
SET Inventory.Integer2 = '1'
WHERE InventorySuppliers.SupplierSKU = '2D4027A6'
The error is:
1205 - Lock wait timeout exceeded; try restarting transaction
How can I prevent the lock timeout and/or solve this problem?
I can run this query in Microsoft Access correctly, and the MySQL database behind phpMyAdmin is a copy of that Access database. Increasing the execution time is not an option for me, as that is far too long for a single-record update.
My PHP (CodeIgniter) code is:
$data1 = array('Inventory.Integer2'=>$shipping);
$this->db->where('InventorySuppliers.SupplierSKU', $SupplierSKU);
$this->db->update('Inventory inner join InventorySuppliers on Inventory.LocalSKU = InventorySuppliers.LocalSKU', $data1);
$this->db->close();
return ($this->db->affected_rows() > 0);

Issue this command before running your UPDATE:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED
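For example, in the same session that runs the UPDATE (SET SESSION only affects the connection that issues it), the sequence using the query from the question would look like this:

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

UPDATE Inventory
INNER JOIN InventorySuppliers
ON Inventory.LocalSKU = InventorySuppliers.LocalSKU
SET Inventory.Integer2 = '1'
WHERE InventorySuppliers.SupplierSKU = '2D4027A6';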

Okay... this is interesting.
As I said, my MySQL db was a copy of an MS Access db. I don't know what happened, but the MySQL tables had no primary keys or indexes, although the original db has them. I tried assigning PKs and indexes, but MySQL returned an error. My final solution was:
Delete the table in MySQL.
Make sure the PK is assigned in the table structure (in my case, after importing from MS Access, I had to do it again).
Check the indexed fields (I have one field indexed) and make sure the index exists.
And this did the trick for me; the same query now runs fine.
Thanks to all for the help. Hopefully these steps will help someone in the future.
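As a rough SQL sketch of those last two steps, assuming LocalSKU is the primary key of Inventory and the indexed field in InventorySuppliers (adjust the names to your own schema):

ALTER TABLE Inventory ADD PRIMARY KEY (LocalSKU);
CREATE INDEX idx_localsku ON InventorySuppliers (LocalSKU);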

Related

How to prevent SELECT from MySQL while another process updates the row

I have a job dispatcher with Laravel. Each running job selects data from MySQL and, once it has it, updates a status field in the db to mark that row as in work.
But when multiple processes are running, they can select the same rows before the status is updated.
I tried lockForUpdate(), which didn't help, and DB::raw('LOCK TABLES accounts WRITE') didn't either.
$lock = Cache::lock('gettingWorker', 5);
$lock->block(6, function () use ($acc) {
    DB::raw('LOCK TABLES accounts WRITE');
    $this->worker = Accounts::getFreeAccount()->lockForUpdate()->firstOrFail();
    $this->worker->updateStatus('WORKING');
    $lock->release();
});
Laravel's atomic locks don't seem to work either. Only a sleep(1) helps in this case, but that's no good, because thousands of jobs run every hour.
Try using START TRANSACTION and selecting the data with a SELECT ... FOR UPDATE statement.
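A minimal sketch of that suggestion against the accounts table from the question, assuming an InnoDB table with id and status columns (the column names and the 'FREE' value are guesses):

START TRANSACTION;
-- lock one free row; concurrent transactions block on it until COMMIT
SELECT id INTO @id FROM accounts WHERE status = 'FREE' ORDER BY id LIMIT 1 FOR UPDATE;
-- mark the row as taken while still holding the row lock
UPDATE accounts SET status = 'WORKING' WHERE id = @id;
COMMIT;

On MySQL 8+, appending SKIP LOCKED to the SELECT ... FOR UPDATE lets concurrent workers skip rows that are already locked instead of waiting on them, which suits a job queue.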

MySQL JDBC - Inserted values get deleted immediately

I am using the MySQL Java connector (mysql-connector-java-8.0.11) to access a database.
I open a connection and generate some tables. So far everything works.
But when I try to insert a value into one of the tables, the row gets deleted immediately.
I use the following code:
PreparedStatement stmt = connection.prepareStatement("INSERT INTO database_upgrade (version) VALUES (?)");
stmt.setInt(1, newVersion);
stmt.executeUpdate();
stmt.close();
When I debug into executeUpdate, the insertId and updateCount are returned correctly. But when I look at the database, the entry is missing.
The auto-increment id increased, which indicates that the row was inserted and then deleted, but I don't know why.
Interesting point: if I use a new connection for this insert, everything works perfectly. But when I use the connection from the previous actions, it does not.
Just for clarification:
All previous statements are closed.
The connection is still open.
The previous actions are some "create table" and "alter table" statements.
Can anyone tell me why this happens? I used the same code with MS SQL server and this did not happen.
Thank you!
Thanks to Gord Thompson, I found it. Autocommit was turned off by some previous SQL commands (which I had not double-checked).
Solution: I removed the autocommit-disabling part from my SQL files.
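In SQL terms, the culprit is a line like the one below somewhere in the executed script files. With autocommit off, the INSERT succeeds (hence the returned insertId) but is never committed, so it is rolled back when the transaction ends. A sketch of the two fixes, assuming such a statement exists in the files:

-- the offending statement: remove it, as above...
SET autocommit = 0;
-- ...or keep it and commit explicitly after the inserts
COMMIT;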

MySQL Rollback in node.js

I'm trying to issue the following in MySQL via node.js (using the https://github.com/mysqljs/mysql library).
I'm using the transaction API to roll back when an error happens, but it doesn't appear to roll back. I next tried to simplify the problem and put the following directly into the phpMyAdmin SQL box:
START TRANSACTION;
UPDATE users SET balance = balance + 1 WHERE id_user = 'someuserid';
ROLLBACK WORK;
I was expecting the user balance to remain at its previous value (124), but instead it keeps adding one and shows an updated balance of 125 at the end of this.
Any idea what I'm doing wrong? Could it be that the MySQL db doesn't support transactions? Or is an UPDATE like this not allowed to be rolled back in a transaction?
Ok, problem solved.
For reference for anyone else encountering this: it was because of the table engine being used.
My table was using MyISAM, which DOES NOT support transactions and fails silently: every statement is committed immediately, so ROLLBACK never did anything.
The fix was to change the table engine from MyISAM to InnoDB. I did it via phpMyAdmin, but you could also use the
ALTER TABLE table_name ENGINE=InnoDB; SQL command.
Now the following works perfectly.
START TRANSACTION;
UPDATE users SET balance = 8 WHERE id_user='s10';
UPDATE users SET balance = 9 WHERE id_user='s12';
ROLLBACK;
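If you are unsure which tables still use MyISAM, the engine is recorded in information_schema; a quick check (replace yourdb with your schema name):

SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'yourdb' AND ENGINE = 'MyISAM';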

Access Can't delete records due to lock violations

We had been using a DELETE command in Access 2003 on XP machines for the past few years, and it worked well until we upgraded our systems to Access 2010 and Windows 7.
Please see the error below. Not sure what I am missing. I tried creating a new linked Oracle table, but it didn't work.
I just ran into the same lock error when trying to update a linked SQL Server table via MS Access 2010.
This may no longer be a problem for you since the thread is so old, but hopefully it makes it easier on someone else in the future.
I was able to fix it by changing the ID field in SQL Server from a bigint to an int.
You may also want to make sure that "Default record locking" is set to "No locks" in Access Options --> Client Settings --> Advanced
DefaultRecordLocking screenshot: http://www.tmetrics.net/support/patrick/stackoverflow/defaultrecordlocking.jpg
I had this error with a linked table that had both a primary key and a unique key. When linking the table, Access assumed the unique key was the primary key.
By temporarily disabling or deleting the unique key and refreshing the link using the Linked Table Manager, the problem was resolved.
It appears that the 332 records it can't delete are locked, perhaps by some other process? Is there a stale process running somewhere that is holding a lock on those records?
I had a similar problem: I could delete records manually, but running the query gave me that message.
Even though I was deleting records in the "many" table of a one-to-many relationship, a "key violation" message kept appearing.
I edited the relationship to add Cascade Update and Cascade Delete, and the problem went away.
I am having this problem. I have an Access front end to an Oracle database and am trying to delete records from a linked table. I have not found a solution anywhere on the internet, so here is my "solution".
I converted from DoCmd.RunSQL to DBS.Execute to run my delete query. That got rid of the error message, but not all records were being deleted, so now I execute the delete query in a loop.
' keep deleting until no rows remain; some passes leave locked rows behind
recCount = DLookup("count(*)", "my_table")
Do While recCount > 0
    DBS.Execute "DELETE * FROM my_table", dbSeeChanges
    recCount = DLookup("count(*)", "my_table")
Loop
Sometimes it takes only one pass; other times it takes a few.
I know it's a kludge, but it works.
I was running into this same issue using Access 2016 and an Oracle database. I could append to the Oracle tables just fine, but when I ran the delete query to remove those same records, it would delete some records and report that others were locked. If I looped the query enough times, it would eventually erase all the records.
The solution I found was to set the 'Use Transactions' property of the Access delete query to 'No'; it then started working without any record locks. I don't know if this is a perfect solution, but it is working in my case.
--Update--
The above solution worked for some of my queries, but I still ran into the issue on others, so it helped in some cases but wasn't a complete fix.
What does seem to be working now is a stored procedure in Oracle that deletes the data I need deleted; I call that procedure from Access.
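A minimal PL/SQL sketch of that stored-procedure approach (the procedure name, table, and criteria here are hypothetical; the poster did not share theirs):

CREATE OR REPLACE PROCEDURE delete_old_records AS
BEGIN
    -- deleting inside Oracle avoids Access/ODBC row locking entirely
    DELETE FROM my_table;
    COMMIT;
END delete_old_records;
/

From Access, the procedure can then be invoked with a pass-through query containing BEGIN delete_old_records; END;.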

What is the best way to run a number of sql commands to archive and delete data

I am new to MySQL, so forgive me if the solution to my problem is obvious. I have a MySQL database with a number of tables that I need to archive; several of them have over 1,000,000 rows. I am archiving 11 tables and then deleting from them.
A snippet from my code would be:
set @archive_before_date = '2010-10-01';
create table if not exists archive_prescription as
select * from prescription where prescriptiondate < @archive_before_date;
delete from prescription where prescriptiondate < @archive_before_date;
I am currently using MySQL Query Browser and running the script in the script tab. When I run the script, MySQL Query Browser shows up as "Not Responding" in Task Manager, and MySQL Administrator shows only the first two archive tables. On a database with the same schema but less data, the script works perfectly.
Does anyone have any hints on how best to approach this problem?
Copy the script into a file and save it to disk. Then run:
mysql -u username -p < your_saved_script_name
This will read the commands in from the script and execute them in turn.
This command line could itself be placed in a .bat file on Windows, or an executable .sh file on *nix, and you could place several such mysql ... < filename lines in the same file.
If you want to run this at a certain time, reference the .sh file from a cron entry. If you do that, add an env call somewhere in the .sh file and redirect the output to a file, as cron runs with a different environment than a logged-in user.
If the prescriptiondate field is not indexed, the delete will take a long time and lock the table while it runs, so try adding an index.
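For example (the index name is illustrative):

CREATE INDEX idx_prescriptiondate ON prescription (prescriptiondate);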
Try the command-line mysql tool.
EDIT: added an example:
mysql -uusername -ppassword yourschema < your.sql
I would recommend that you try deleting a fixed number of rows at a time. I haven't used MySQL yet, but here is an example from MS SQL.
DECLARE @DELETE_COUNT INT, @ARCHIVE_DATE DATETIME
SET @ARCHIVE_DATE = '2010-10-01'
SET @DELETE_COUNT = 1000

WHILE (@DELETE_COUNT <> 0)
BEGIN
    DELETE TOP (@DELETE_COUNT)
    FROM MyTable
    OUTPUT DELETED.* INTO MyArchiveTable
    WHERE PrescriptionDate < @ARCHIVE_DATE

    SET @DELETE_COUNT = @@ROWCOUNT
END
This has the advantage of deleting and archiving the data in one statement. If MySQL does not have something similar, your plan is totally reasonable: save the top 1000 rows, then delete them. Please ensure that you have an ORDER BY ... in case someone changes the clustered index on you, and obviously an index on PrescriptionDate. If possible, you may want to create a clustered index on PrescriptionDate, or add PrescriptionDate as the first column of an existing clustered index.
DECLARE @DELETE_COUNT INT, @ARCHIVE_DATE DATETIME
SET @ARCHIVE_DATE = '2010-10-01'
SET @DELETE_COUNT = 1000

WHILE (@DELETE_COUNT <> 0)
BEGIN
    BEGIN TRANSACTION

    INSERT INTO MyArchiveTable
    SELECT TOP (@DELETE_COUNT) * FROM MyTable a
    WHERE a.PrescriptionDate < @ARCHIVE_DATE
    ORDER BY a.PrescriptionDate

    DELETE a FROM MyTable a INNER JOIN
        (SELECT TOP (@DELETE_COUNT) * FROM MyTable
         WHERE PrescriptionDate < @ARCHIVE_DATE ORDER BY PrescriptionDate) b
        ON a.Id = b.Id

    SET @DELETE_COUNT = @@ROWCOUNT
    COMMIT TRANSACTION
END
This also has the added advantage of not rolling back your entire delete if there is a deadlock. If MySQL has TRY/CATCH statements, you may want to use them to trap deadlock errors and ignore them; the WHILE loop should keep trying until it succeeds.
Please note that I have never used MySQL, so my syntax may not be right.
Hope this helps.
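For completeness, a rough MySQL translation of the chunked approach, using the prescription table and variable from the question. MySQL's single-table DELETE accepts ORDER BY and LIMIT, so each pass archives and removes one batch; the outer repeat-until-done loop would live in the calling script or a stored procedure. This is an untested sketch; in practice, order by a unique column so both statements pick exactly the same rows.

set @archive_before_date = '2010-10-01';

-- archive one batch...
insert into archive_prescription
select * from prescription
where prescriptiondate < @archive_before_date
order by prescriptiondate limit 1000;

-- ...then delete the same batch; repeat until no matching rows remain
delete from prescription
where prescriptiondate < @archive_before_date
order by prescriptiondate limit 1000;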