How to implement row level locking - mysql

Row-level locking is not working for me.
I just want to know if I can select the row like
select * from table where folder like '%344443%'
then update the row with
update table set folder = '{"bin":"44456","venv":4366}' where id = 'i-instanceid'

You can't.
The problem is that the UPDATE must scan the entire table to find the row(s) you need to change. In doing so, it locks the entire table.
Don't bury things that you want to search on inside JSON strings. Have them as indexed columns on their own. This should let you lock a single row, and run the Update much faster.
Or look into indexing parts of JSON columns. Support for that is still evolving. What version of MySQL are you using?
Furthermore, why select the id first, then do the update? Can't you simply do the update? If you are actually doing something else in the "transaction", say so. You may need the SELECT. At that point, it would need to be SELECT ... FOR UPDATE.
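For example, on MySQL 5.7+ you could pull the value you search on out of the JSON into an indexed generated column, and then lock just the row you need with SELECT ... FOR UPDATE. A rough sketch, with made-up table and column names since the question only shows fragments:
-- extract the searched value into an indexed generated column
ALTER TABLE instances
  ADD COLUMN bin VARCHAR(20) AS (JSON_UNQUOTE(JSON_EXTRACT(folder, '$.bin'))) STORED,
  ADD INDEX idx_bin (bin);
-- then lock only the matching row inside a transaction
START TRANSACTION;
SELECT * FROM instances WHERE bin = '344443' FOR UPDATE;
UPDATE instances SET folder = '{"bin":"44456","venv":4366}' WHERE id = 'i-instanceid';
COMMIT;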

Related

MySQL Faster Rand

I have a VIEW (view1) that returns random values from another table (Table2) based on values inside Table1.
A trigger is configured to UPDATE a third table (Table3), when values inside Table1 change. The purpose of Table3 is to hold the random values so they don’t get updated all the time.
The problem I have is that each SELECT statement inside the trigger causes the whole row to get new random values for each column being updated. If there are multiple updates, then the whole row gets new random values multiple times. I have adequate hardware, but it's still too slow. Is there a way to reduce this, so that it selects once per row regardless of how many columns are being updated? Maybe hold the values somewhere temporarily and update from that?
Here is sample fiddle. In my real data I have significantly more columns.
Fiddle Example:
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=4fd8bf89135d8babe8c19fc15a565d50
Currently I don’t have indexes on any columns. I’ve read mixed reviews re indexes and updates.
Lastly, while browsing Stack I found a few links to this: https://jan.kneschke.de/projects/mysql/order-by-rand/, but I’m not sure there is a way I can apply it.

Insert random number into table upon new record creation

I would like to store random numbers in one MySql table, randomly retrieve one and insert it into another table column each time a new record is created. I want to delete the retrieved number from the random number table as it is used.
The random numbers are 3 digit, there are 900 of them.
I have read several posts here that describe the problems using unique random numbers and triggering their insertion. I want to use this method as it seems to be reliable while generating few problems.
Can anyone here give me an example of a sql query that will accomplish the above? (If sql query is not the recommended way to do this please feel free to recommend a better method.)
Thank you for any help you can give.
I put together the two suggestions here and tried this trigger and query:
CREATE TRIGGER rand_num
BEFORE INSERT ON uau3h_users
FOR EACH ROW
  INSERT INTO uau3h_users (member_number)
  SELECT random_number FROM uau3h_rand900
  WHERE random_number NOT IN (SELECT member_number FROM uau3h_users)
  ORDER BY random_number
  LIMIT 1;
But it seems that there is already a trigger attached to that table, so the new one caused a conflict; things stopped working until I removed it. Any ideas about how to accomplish the same using another method?
You are only dealing with 900 records, so performance is not a major issue.
If you are doing a single insert into a table, you can do something like the following:
insert into t(rand)
select rand
from rand900
where rand not in (select rand from t)
order by rand()
limit 1
In other words, you don't have to continually delete from one table and move to the other. You can just choose to insert values that don't already exist. If performance is a concern, then indexes will help in this case.
More than likely you need to take a look into triggers. You can do some things, for instance, right after inserting a record into a table. Refer to this link for more details:
http://dev.mysql.com/doc/refman/5.0/en/create-trigger.html
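For example, a rough sketch of such a trigger using the table names from the question; it fills in the new row's member_number directly (and deletes the used number), instead of issuing a second INSERT into the same table, which is what broke the attempt above:
DELIMITER //
CREATE TRIGGER assign_member_number
BEFORE INSERT ON uau3h_users
FOR EACH ROW
BEGIN
  DECLARE num INT;
  -- pick one random unused number
  SELECT random_number INTO num
  FROM uau3h_rand900
  ORDER BY RAND()
  LIMIT 1;
  SET NEW.member_number = num;
  -- remove it so it cannot be handed out again
  DELETE FROM uau3h_rand900 WHERE random_number = num;
END//
DELIMITER ;
Also note that before MySQL 5.7 a table can only have one trigger per timing and event, so if there is already a BEFORE INSERT trigger on uau3h_users you would have to merge this logic into it rather than create a second one.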

How to best check if a SQL table contents have not changed?

Assuming I have the following table named "contacts":
id|name|age
1|John|5
2|Amy|2
3|Eric|6
Is there some easy way to check whether or not this table changes much like how a sha/md5 hash works when getting the checksum for a file on your computer?
So for example, if a new row was added to this table, or if a value was changed within the table, the "hash" or some generated value shows that the table has changed.
If there is no direct mechanism, what is the best way to do this (it could be some arbitrary hash mechanism, as long as the method puts emphasis on performance and minimizing latency)? Could it be applied to multiple tables?
There is no direct mechanism to get that information through SQL.
You could consider adding an additional LastModified column to each row. To know the last time the table was modified, select the maximum value for that column.
You could achieve a similar outcome by using a trigger on the table for INSERT, UPDATE and DELETE, which updates a separate table with the last modified timestamp.
If you want to know whether something has changed, you need something to compare, for example a date. You can add a table with two columns, the table name and a timestamp, and program a trigger for the events on the table you want to monitor, so that the trigger updates the timestamp column of this control table.
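A rough sketch of that idea (names are made up, and you would create similar triggers for UPDATE and DELETE):
CREATE TABLE table_changes (
  table_name    VARCHAR(64) PRIMARY KEY,
  last_modified DATETIME NOT NULL
);
CREATE TRIGGER contacts_after_insert
AFTER INSERT ON contacts
FOR EACH ROW
  REPLACE INTO table_changes (table_name, last_modified)
  VALUES ('contacts', NOW());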
If the table isn't too big, you could take a copy of the entire table. When you want to check for changes, you can then query the old vs. new data.
DROP TABLE IF EXISTS backup_table_name;
CREATE TABLE backup_table_name LIKE table_name;
INSERT INTO backup_table_name SELECT * FROM table_name;
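When you want to check, a query along these lines finds rows that are new or have changed since the backup was taken (using the columns from the example table; swap the two tables to find deleted rows):
SELECT t.*
FROM table_name t
LEFT JOIN backup_table_name b ON b.id = t.id
WHERE b.id IS NULL          -- row was added
   OR b.name <> t.name      -- or a value changed
   OR b.age  <> t.age;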

Optimized SELECT query in MySQL

I have a very large number of rows in my table, table_1. Sometimes I just need to retrieve a particular row.
I assume that, when I use a SELECT query with a WHERE clause, it scans from the very first row until it matches my requirement.
Is there any way to make the query jump to a particular row and then start from that row?
Example:
Suppose there are 50,000,000 rows and the id I want to search for is 53750. What I need is for the search to start from row 50,000, so that it saves the time of scanning the first 49,999 rows.
I don't know the exact term since I am not expert of SQL!
You need to create an index: http://dev.mysql.com/doc/refman/5.1/en/create-index.html
ALTER TABLE table_1 ADD UNIQUE INDEX (id);
The way I understand it, you want to select a row with id 53750. If you have a field named id you could do this:
SELECT * FROM table_1 WHERE id = 53750
Along with indexing the id field. That's the fastest way to do so. As far as I know.
ALTER TABLE table_1 ADD UNIQUE INDEX (<column>);
Would be a great first step if it has not been generated automatically. You can also use:
EXPLAIN <your query here>
To see which kind of query works best in this case. Note that if you change the WHERE clause in the future and a column keeps showing up in it, it is a good idea to put an index on that column as well.
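For example:
EXPLAIN SELECT * FROM table_1 WHERE id = 53750;
The key column of the output shows whether the index on id is actually used.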
Create an index on the column you want to do the SELECT on:
CREATE INDEX index_1 ON table_1 (id);
Then, select the row just like you would before.
But also, please read up on databases, database design and optimization. Your question is full of false assumptions. Don't just copy and paste our answers verbatim. Get educated!
There are several things to know about optimizing SELECT queries, like range and WHERE clause optimization; the documentation is pretty informative about this issue, read the section: Optimizing SELECT Statements. Creating an index on the column you evaluate is very helpful regarding performance too.
One possible solution: you can create a view and then query from the view. Here are details on creating a view and obtaining data from it:
http://www.w3schools.com/sql/sql_view.asp
Now you just split that huge number of rows into many views (i.e. rows 1-10000 in one view, 10001-20000 in another view, and so on),
and then query from the appropriate view.
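For example (a sketch; the ranges are arbitrary, and note that a MySQL view is not materialized, so you still want an index on id):
CREATE VIEW table_1_rows_50001_100000 AS
  SELECT * FROM table_1 WHERE id BETWEEN 50001 AND 100000;
SELECT * FROM table_1_rows_50001_100000 WHERE id = 53750;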
I am pretty sure that any SQL database with a little self-respect does not start looping from the first row to get the desired row. But I am also not sure how they make it work, so I can't give an exact answer.
You could check out what's in your WHERE clause and how the table is indexed. Do you have a proper primary key, e.g. one using a numeric data type? Do you have indexes on other columns that are used in your queries?
There is also a lot to consider when installing the database server, like where to put the data and log files, how much memory to give the server, and setting the growth. There's a lot you can do to tune your server.
You could try splitting your table into partitions.
More about altering tables to add partitions
Selecting from a specific partition
In your case you could create a partition on ID for every 50,000 rows, and when you want to skip the first 50,000 you just select from partition 2. How to do this is explained quite well in the MySQL documentation.
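A rough sketch (this assumes id is part of the primary key, since MySQL requires every unique key to include the partitioning column, and explicit PARTITION selection in a SELECT needs MySQL 5.6+):
ALTER TABLE table_1
PARTITION BY RANGE (id) (
  PARTITION p0 VALUES LESS THAN (50000),
  PARTITION p1 VALUES LESS THAN (100000),
  PARTITION p2 VALUES LESS THAN MAXVALUE
);
-- skip the first 50,000 ids by reading only the second partition
SELECT * FROM table_1 PARTITION (p1) WHERE id = 53750;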
You may try something simple like this one:
SELECT * FROM tblname LIMIT 50000, 1;
I just tried it with phpMyAdmin; the 50,000 is the starting row to look up.
EDIT:
But if I were you I wouldn't use this one, because it leaves records 1-49999 out of the search entirely.

Replicating a "For Each" loop in a MySQL query

I've been using MySQL at work, but I'm still a bit of a noob at more advanced queries, and often find myself writing lengthy queries that I feel (or hope) could be significantly shortened.
I recently ran into a situation where I need to create X number of new entries in a table for each entry in another table. I also need to copy a value from each row in the second table into each row I'm inserting into the first.
To be clear, here's pseudocode for what I'm attempting to do:
For each row in APPS
create new row in TOKENS
set (CURRENT)TOKENS.APP_ID = (CURRENT)APPS.APP_ID
Any help is appreciated, even if it boils down to "this isn't possible."
As a note, the tables only share this one field, and I'll be setting other fields statically or via other methods, so simply copying isn't really an option.
You don't need a loop, you can use a single INSERT command to insert all rows at once:
INSERT INTO TOKENS (APP_ID)
SELECT APP_ID
FROM APPS;
If you want to set other values for that row, simply modify the INSERT list and SELECT clause. For example:
INSERT INTO TOKENS (APP_ID, static_value, calculated_value)
SELECT APP_ID, 'something', CONCAT('calculated-', APP_ID)
FROM APPS;