Select Records and Update in One Query - MySQL

There are similar questions asked here, but I was not able to find anything close enough to actually resolve my problem, as they involve multiple tables. So here goes...
I need to select a recordset for processing. To prevent parallel processing from selecting the same records, I want to set a status flag in the record that I can use to exclude those records on subsequent calls, i.e.
SELECT ... WHERE statusflag <> 1 -- (or whatever)
I know I could use a transaction and SELECT FOR UPDATE, spinning through those records updating the flag, but I was hoping to accomplish both tasks (get/update) with one database hit. Is this possible in MySQL?

You need to use cursors in a stored procedure. There are a few tutorials online that will help you.
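One common alternative worth noting (not the cursor approach above, just a sketch) is to claim the rows with a single UPDATE and then read back what was claimed. The jobs table name and the claimed_by column are assumptions added for illustration:

-- Claim a batch of unprocessed rows for this connection.
UPDATE jobs
   SET statusflag = 1, claimed_by = CONNECTION_ID()
 WHERE statusflag <> 1 AND claimed_by IS NULL
 LIMIT 10;

-- Read back only the rows this connection just claimed.
SELECT * FROM jobs
 WHERE statusflag = 1 AND claimed_by = CONNECTION_ID();

It is still two statements, but it avoids looping over rows and keeps parallel workers from grabbing the same records.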

Efficient way to insert record if it does not already exist, if it does exist get the ID and use it as foreign key?

Example tables (not actual database):
In this example, I would have the SecurityCode (unique) and the Time. My current solution involves attempting to add a new Person using the security code, then querying for the ID, then adding a row to the Times table. This is 3 separate statements and could likely be a lot faster. Any advice on how to optimise this?
Thanks.
Edit: I previously forgot to mention that this is normally done in a batch of 30-40 records.
I am also considering using SecurityCode as the foreign key in Times.
I think there are many ways to achieve this; the easiest:
Try using "IF". You only need it for the first step of your statement; the last two are independent of the result of this evaluation.
Plus, save your security code in a variable and you will save one table scan (you already have it).
**Please note it's just pseudo-code.**
IF NOT EXISTS (SELECT 1 FROM Person WHERE SecurityCode = @securityCode) THEN
    Step 1
END IF;
Step 2
Step 3
Can you try it?
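For concreteness, here is a rough MySQL sketch of those three steps for a single record, using WHERE NOT EXISTS in place of the IF; the Person(ID, SecurityCode) and Times(PersonID, Time) column names are assumptions:

SET @securityCode = 'ABC123';

-- Step 1: insert the Person only if the security code is not there yet.
INSERT INTO Person (SecurityCode)
SELECT @securityCode FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM Person WHERE SecurityCode = @securityCode);

-- Steps 2 and 3: look up the ID and insert into Times in one statement.
INSERT INTO Times (PersonID, `Time`)
SELECT ID, NOW() FROM Person WHERE SecurityCode = @securityCode;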
The fastest way seemed to be to batch INSERT IGNORE all security codes, then batch insert all Times with a subquery to select the correct ID from Person.
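A rough sketch of that batched approach (the values and the Times column names are made up for illustration; INSERT IGNORE relies on the unique index on SecurityCode):

-- Batch-insert the codes; duplicates are silently skipped.
INSERT IGNORE INTO Person (SecurityCode)
VALUES ('code1'), ('code2'), ('code3');

-- Batch-insert the Times rows, resolving each PersonID by joining on the code.
INSERT INTO Times (PersonID, `Time`)
SELECT p.ID, batch.t
FROM (
      SELECT 'code1' AS code, '2024-01-01 10:00:00' AS t
      UNION ALL SELECT 'code2', '2024-01-01 10:01:00'
      UNION ALL SELECT 'code3', '2024-01-01 10:02:00'
     ) AS batch
JOIN Person AS p ON p.SecurityCode = batch.code;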

Selecting and updating a row while dealing with race conditions?

We have a table of elements that can be issued to clients. These elements can only ever be given to a client once, and we have situations where many clients could be pulling elements all at the same time. We then need to return data associated with it (so there is an update, and then a select).
The current solution is that a random unissued row is found, updated to issued=true, and its id is stored via LAST_INSERT_ID(); then immediately afterwards a select is made to find where('id = LAST_INSERT_ID()'), which is unique per connection.
Since the change from issued=false to issued=true and the LAST_INSERT_ID() assignment happen in that single statement, the call is small enough not to run into race condition issues.
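In SQL terms, that claim-and-read trick looks roughly like this (a sketch; the elements table with id and issued columns follows the description above):

-- Claim one unissued row and remember its id for this connection.
UPDATE elements
   SET issued = true,
       id = LAST_INSERT_ID(id)
 WHERE issued = false
 LIMIT 1;

-- Fetch the row that was just claimed; LAST_INSERT_ID() is per connection.
SELECT * FROM elements WHERE id = LAST_INSERT_ID();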
But all of this is being done in SQL and feels very hackish. This does not seem like such a rare problem that it would not have been solved with a more Railsy solution. Wrapping it in a transaction might work to prevent double-issues, but then we'd need retry logic in case the transaction failed.
What solution are we not thinking of?
You will want to use database-level locking to avoid race conditions.
One way to do this in MySQL is SELECT FOR UPDATE like this:
SELECT * FROM elements WHERE issued=false LIMIT 1 FOR UPDATE
In ActiveRecord (Rails), this is called pessimistic locking, and an implementation would look like this:
Element.transaction do
  element = Element.lock(true).where(issued: false).first
  element.issued = true
  # ... do other stuff to assign to a given client
  element.save!
end
If that got kicked off more than once at the same time, the 2nd call would be blocked until the first call finished, so by the time it executed, the first record would already be updated to issued=true and the 2nd call would return the next record instead of the same record.
You can read more about SELECT FOR UPDATE in the MySQL documentation on locking reads.
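For comparison, the raw SQL sequence behind that Rails block is roughly the following (a sketch; the literal 1234 stands in for the id returned by the locked SELECT):

START TRANSACTION;

-- Lock one unissued row so no other connection can claim it until we commit.
SELECT id FROM elements WHERE issued=false LIMIT 1 FOR UPDATE;

-- ... assign the element to the client in application code, then mark it issued:
UPDATE elements SET issued=true WHERE id = 1234;

COMMIT;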

MySQL Triggers and performance

I have the following requirement. I have 4 MySQL databases and an application in which the user needs to get the count of records in the tables of each of these databases. The issue is that the count may change every minute or even every second. So whenever the user hovers over a particular UI area, I need to make a call to all these databases and get the counts. I don't think that is the best approach, as these tables contain millions of records and every mouse-over would send a DB call to all of these databases.
Triggers are the one approach I found. Rather than pulling data from the databases every time, whenever any insert/update/delete happens on these tables, a trigger would execute and increment/decrement the count in another table (which contains only the counts for these tables). But I have read that triggers will affect database performance, and also that in some situations a trigger is the only solution.
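Roughly, the counter-maintenance triggers I have in mind would look like this (just a sketch; big_table and row_counts are placeholder names):

-- A small table holding one running count per monitored table.
CREATE TABLE IF NOT EXISTS row_counts (
  table_name VARCHAR(64) PRIMARY KEY,
  row_count  BIGINT NOT NULL
);

-- Keep the count in step with inserts and deletes on the big table.
CREATE TRIGGER big_table_ai AFTER INSERT ON big_table FOR EACH ROW
  UPDATE row_counts SET row_count = row_count + 1 WHERE table_name = 'big_table';

CREATE TRIGGER big_table_ad AFTER DELETE ON big_table FOR EACH ROW
  UPDATE row_counts SET row_count = row_count - 1 WHERE table_name = 'big_table';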
So please guide me: in my situation, are triggers the solution? If they hurt database performance I don't want them. Is there any other, better approach for this problem?
Thanks
What I understood is that you have 4 databases with n tables in each, and when the user hovers over a particular area in your application they should see the number of rows in that table.
I would suggest you use COUNT(*) to return the number of rows in each table in the database. Triggers are used to do something when a particular event like an update, delete or insert occurs in a database. It's not a good idea to rely on triggers to react to user interactions like hovering. If you can tell me which language you are designing the front end in, I can be more specific.
Example:
SELECT COUNT(*) FROM tablename WHERE condition
OR
SELECT SQL_CALC_FOUND_ROWS * FROM tablename
WHERE condition
LIMIT 5;
SELECT FOUND_ROWS();
The second one is used when you want to limit the results but still return the total number of rows found. Hope it helps.
Please don't use COUNT(*). This is inefficient, possibly to the point of causing a full table scan. If you can get to the information schema, this should return the result you need sub-second (for InnoDB, table_rows is only an estimate, but it comes back almost instantly; filter on table_schema as well so you don't match a same-named table in another database):
select table_rows from information_schema.tables where table_name = 'tablename' and table_schema = 'databasename'
If you can't for some reason, and your table has a primary key, try:
SELECT COUNT(field) FROM tablename
...where field is part of the primary key. This will be slower, especially on large tables, but still better than count(*).
Definitely don't use a trigger.

Complex MySQL Delete Query

Current Structure
As you can see, Path can be referenced by multiple tables and by multiple records within those tables.
Points can also be referenced by two different tables.
My Question
I would like to delete a PathType; however, this gets complicated, as a Path may be owned by more than one PathType, so deleting the Path without checking how many references there are to it is out of the question.
Secondly, if this Path's only reference is the PathType I'm trying to delete, then I will want to delete this Path and any records in PathPoints.
Lastly, if there are no other references to a Point from any other records, then it will also need to be deleted, but only if it's not used by any other object.
Attempts So Far
DELETE PathType1.*, Path.*, PathPoints.*, Point.*
FROM PathType1, Path, PathPoints, Point
WHERE PathType1.ID = 1
  AND PathType1.PATH = Path.ID
  AND (SELECT COUNT(*) FROM PathType1 WHERE PathType1.PATH = Path.ID) < 1
  AND (SELECT COUNT(*) FROM PathType2 WHERE PathType2.PATH = Path.ID) = 0
Obviously the above statement goes on, but I don't think this is the right way to go about it, because if one condition fails then nothing is deleted...
I think that maybe it isn't possible to do what I'm attempting in one statement, and I may have to iterate through each section and handle it based on the outcome. Not so efficient, but I don't see any alternative at this time.
I hope this is clear. If you have any more questions or need any clarification then please do not hesitate to ask
First, there is no way I would do this in a query like that even if the database allowed it, which most will not. This is an unmaintainable mess.
The preferred method is to open a transaction, then delete from one table at a time, starting with the bottommost child table, and then commit the transaction. And of course have error handling so the entire transaction is rolled back if one delete fails, to maintain data integrity. If I intended to do this repeatedly, I would do it in a stored proc.
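A rough sketch of that transactional, child-tables-first sequence, using the table names from the question (the PathPoints column names PathID and PointID, and the exact reference checks, are assumptions):

START TRANSACTION;

-- 1. Remove the PathType row itself.
DELETE FROM PathType1 WHERE ID = 1;

-- 2. Remove PathPoints rows belonging to Paths no longer referenced by any PathType.
DELETE pp FROM PathPoints pp
JOIN Path p ON p.ID = pp.PathID
LEFT JOIN PathType1 pt1 ON pt1.PATH = p.ID
LEFT JOIN PathType2 pt2 ON pt2.PATH = p.ID
WHERE pt1.PATH IS NULL AND pt2.PATH IS NULL;

-- 3. Remove the orphaned Paths themselves.
DELETE p FROM Path p
LEFT JOIN PathType1 pt1 ON pt1.PATH = p.ID
LEFT JOIN PathType2 pt2 ON pt2.PATH = p.ID
WHERE pt1.PATH IS NULL AND pt2.PATH IS NULL;

-- 4. Remove Points no longer referenced by PathPoints (add a similar check for the
--    other table that references Point).
DELETE pnt FROM Point pnt
LEFT JOIN PathPoints pp ON pp.PointID = pnt.ID
WHERE pp.PointID IS NULL;

COMMIT;

The error handling (rolling everything back if any statement fails) would sit around this in application code or in the stored procedure.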

Mysql UPDATE before first checking if necessary or just UPDATE?

I'm using MySQL to update a field in a table when a condition is met...
Should I first do a SELECT to see if the condition is met, or should I just run the UPDATE every time, since if the condition is not met nothing happens?
To be concrete, here is my SELECT:
SELECT * FROM forum_subscriptions
WHERE IDTopic=11111 AND IDUser=11111 and status=0
I am checking here whether, on forum topic 11111, I (user ID 1) am subscribed to this topic and my status on the subscription is 0 (which means I haven't yet received an email about a new post in the topic).
So when this is met do:
UPDATE forum_subscriptions SET Status=1 where IDTopic=11111 AND IDUser=1
Now I am wondering: I always do a select here to check whether the user is subscribed to this topic and whether his status shows he has already visited it, so that new posts will not trigger a new email notification. When he visits the page again, the update runs and resets that status, so any new posts will again send him an email.
So the select is made for every user, subscribed or not, to test the subscription. The update is made only when necessary.
Is it better to just use the update on every page? If he is not subscribed to the topic, it will not update anything.
How fast is an update that doesn't change anything? How does it work internally: how does the update find whether there is a matching record, does it select and then update? If so, it would be better to only update, because I would achieve the same thing without any slowdown. If the update is more expensive than the select, I should check first and then update only if necessary.
This is a real-life example, but the logic behind this update/select is really what I am interested in, because I run into this kind of problem quite often.
Thanx
UPDATE: Thanx, both of you, but I do not see from your links whether an UPDATE that matches no rows still takes locks or not. As you gave different answers, I still don't know what to do.
The subscription table really doesn't need to be MyISAM; I could change it to InnoDB because I don't need full-text search on it. Is it a good solution to only use the update and change this small table to InnoDB? Does mixing storage engines have any drawbacks?
You just do the update, with no previous select:
UPDATE forum_subscriptions SET Status=1 where IDTopic=11111 AND IDUser=1
If the conditions are not met, update will do nothing.
This update is very fast if you have an index on Status, IDTopic and IDUser!
An empty update is just as fast as an empty select.
If you do the select first, you will just slow things down for no reason.
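For reference, such an index could be added like this (a sketch; the exact column order is a judgment call, but leading with the columns used in the WHERE clause is typical):

ALTER TABLE forum_subscriptions
  ADD INDEX idx_topic_user_status (IDTopic, IDUser, Status);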
If you want to know how many rows were updated, do a
SELECT ROW_COUNT() as rows_affected
after the update. This will tell you 0 if no rows were updated, or the number of rows updated (or inserted or deleted, if you used those statements).
This function is ultra fast because it just has to fetch one value from memory.
Workarounds for table locking issues
See here: http://dev.mysql.com/doc/refman/5.5/en/table-locking.html
A potential side effect of always running the UPDATE is the locking that needs to be taken to ensure that no other connection modifies these rows.
If the table is MyISAM, a lock will be placed on the entire table during the search.
If the table is InnoDB, locks will be placed on the index records/gaps.
From the Docs:
A locking read, an UPDATE, or a DELETE generally set record locks on every index record that is scanned in the processing of the SQL statement. It does not matter whether there are WHERE conditions in the statement that would exclude the row.