How to apply a lock at table level in SQL Server - sql-server-2008

I am using SQL Server 2008 R2 and want to apply a lock at table level while selecting data from the table.
Since applying NOLOCK can lead to the dirty-read problem, I want to apply NOLOCK only on tables that contain domain data, not transaction data, i.e. data that changes very infrequently.
Please suggest a way to apply the lock on the domain tables.

You don't need to lock a table while reading (SELECT), since reading always acquires a shared lock on the table or row. The WITH (NOLOCK) table hint just allows you to read uncommitted data as well, that is, rows that have been inserted but not yet committed by other sessions. You can consider setting TRANSACTION ISOLATION LEVEL to READ COMMITTED to make sure that uncommitted data is never read.
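If you really do want to hold a table-level lock while selecting, a minimal T-SQL sketch (DomainTable and DomainKey are placeholder names, not from the question):

set transaction isolation level read committed  -- the default level; never returns uncommitted rows

begin transaction
-- TABLOCK requests a shared lock on the whole table instead of row locks;
-- HOLDLOCK keeps that lock until the transaction ends.
select *
from DomainTable with (tablock, holdlock)
where DomainKey = 'example'
commit transaction

Writers are blocked from modifying DomainTable until the commit, which is usually tolerable only for rarely changed domain data like yours.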

Related

InnoDB MySQL Select Query Locking

I have an isolation level of REPEATABLE READ and I am running a plain SELECT * FROM example query. I read in https://dev.mysql.com/doc/refman/5.7/en/innodb-locks-set.html that SELECT ... FROM queries use consistent reads from a snapshot and therefore set no locks on rows or the table. Does that mean an UPDATE, INSERT, or DELETE initiated after the SELECT but before the SELECT query ends would still be able to run, even though the modification won't show up in the SELECT results?
Yes, you can update/insert/delete while an existing transaction holds a repeatable-read snapshot on the data.
This is implemented by Multi-Version Concurrency Control or MVCC.
It's a fancy way of saying that the RDBMS keeps multiple versions of the same row(s), so that repeatable-read snapshots can continue reading the older version as long as they need to (that is, as long as their transaction snapshot exists).
If a row version exists that was created by a transaction that committed after your transaction started, you shouldn't be able to see that row version. Every row version internally keeps some metadata about the transaction that created it, and every transaction knows how to use this to determine if it should see the row version or not.
Eventually, all transactions that may be interested in the old row versions finish, and the MVCC can "clean up" the obsolete row versions.
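A quick two-session sketch of this behavior (t is a placeholder table with id and val columns):

-- session A
START TRANSACTION;
SELECT * FROM t WHERE id = 1;    -- consistent snapshot is established here

-- session B, while A is still open
UPDATE t SET val = 'new' WHERE id = 1;
COMMIT;                          -- succeeds immediately; A's snapshot does not block it

-- session A again
SELECT * FROM t WHERE id = 1;    -- still sees the old value from its snapshot
COMMIT;                          -- after this, the obsolete row version can be purged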
Basically, yes, this is the case, with one complication.
By default, in REPEATABLE READ a SELECT ... FROM ... does not place any locks on the underlying data and establishes a snapshot.
If another transaction changes the underlying data, those changes are not reflected if the same records are selected again in the scope of the first transaction. So far so good.
However, if your first transaction modifies records that were affected by other committed transactions after the snapshot was established, then the modifications done by those other transactions will also become visible to the first transaction, so your snapshot may not be that consistent after all.
See the first notes section in the Consistent Nonlocking Reads chapter of the MySQL manual for further details of this feature.
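A sketch of that complication (t and its columns are again placeholders):

-- session A
START TRANSACTION;
SELECT val FROM t WHERE id = 1;   -- snapshot established; sees 'old'

-- session B
UPDATE t SET val = 'new' WHERE id = 1;
COMMIT;

-- session A again
SELECT val FROM t WHERE id = 1;                     -- still 'old', from the snapshot
UPDATE t SET val = CONCAT(val, '!') WHERE id = 1;   -- UPDATEs read the latest committed version
SELECT val FROM t WHERE id = 1;                     -- now 'new!': B's change leaked into the snapshot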

Does COUNT(*) wait for row locks in InnoDB?

Does a MySQL InnoDB table wait for write locks even for a query such as SELECT COUNT(*) FROM t?
My situation:
I've got a table with 50,000 rows and many updates (a view count in every row). InnoDB should put a write lock on the updated row. But when I run a query with only COUNT(*) on this table, MySQL could answer it even without waiting for write locks, because no UPDATE will change the number of rows.
Thanks a lot!
No, MySQL doesn't lock InnoDB tables for queries that only read data from tables.
This is only the case for old MyISAM tables, where all readers must wait until the writer is done and vice versa.
For InnoDB tables they implemented multiversion concurrency control.
In MySQL terminology it is called Consistent Nonlocking Reads.
In short: when the reader starts the query, the database takes a snapshot of the database at the point in time when the query was started, and the reader (the query) sees only changes made visible (committed) up to that point in time, but doesn't see changes made by later transactions. This allows readers to read data without locking and waiting for writers, while still keeping ACID.
There are subtle differences depending on the transaction isolation level; you can find a detailed description here: http://dev.mysql.com/doc/refman/5.6/en/set-transaction.html
In short: in READ UNCOMMITTED, READ COMMITTED and REPEATABLE READ modes, all SELECT statements that only read data (SELECTs without FOR UPDATE or LOCK IN SHARE MODE clauses) are performed in a nonlocking fashion.
In SERIALIZABLE mode all transactions are serialized and, depending on the autocommit mode, a SELECT can be blocked when it conflicts with other transactions (when autocommit=true), or is automatically converted to SELECT ... LOCK IN SHARE MODE (when autocommit=false). All the details are explained in the above links.
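A two-session sketch of the nonlocking COUNT(*) (t, id and views are placeholder names):

-- session A
START TRANSACTION;
UPDATE t SET views = views + 1 WHERE id = 42;  -- holds a row lock, not yet committed

-- session B
SELECT COUNT(*) FROM t;                        -- answers immediately: consistent nonlocking read
SELECT COUNT(*) FROM t LOCK IN SHARE MODE;     -- this locking variant would block until A commits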

Efficient way to make read and write operations on the same table simultaneously?

I have a production issue with the following condition.
I have a table with 62 million rows, and a user has attempted to insert 50,000 rows into the table using upload functionality. Meanwhile, another 100 users have attempted to read from the table. Because of this simultaneous read and write activity, the database hung and as a result the page did not load.
We already have sufficient index keys on the columns we use for the read operation.
I have an idea about using views, but my doubt is: if we use views for the read operation, will values from the concurrent write operations get reflected in the views?
Kindly let me know any other possible ways.
This answer applies to Microsoft SQL Server only (not MySQL), which I'm assuming is your RDBMS based on the additional tags (sql-server-2008/-r2).
If you do not care about "dirty reads", you have two options to basically ignore the locks imposed by the insert operations.
At the top of your script add set transaction isolation level read uncommitted, or after each table add with (nolock). They are effectively the same thing, but the former applies to all tables in your query, and the latter only applies to the tables you append it to.
ex:
set transaction isolation level read uncommitted
select *
from mytable
where id between 1 and 100
ex2:
select *
from mytable with(nolock)
where id between 1 and 100
NB: this ONLY helps with select statements.
If you are unsure what a dirty read is, you should read up on them before allowing them into your application.
If this is not an option, then you would likely need to look at creating a snapshot or replicated copy of your database (I prefer replication), and point ALL read operations to that copy of the data.
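One more option worth knowing about, offered here as an assumption about your workload rather than something from the question: SQL Server 2008 supports row versioning, which lets readers see the last committed version of a row instead of blocking on writers, without the dirty reads that NOLOCK allows:

-- MyDatabase is a placeholder; switching this on typically requires no other active connections
alter database MyDatabase set read_committed_snapshot on

After that, ordinary READ COMMITTED selects read the last committed row version from the tempdb-based version store instead of waiting on writers' locks.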

Do transactions prevent other updates for a while, or just hide them?

When doing a transaction in a MySQL db, the docs talk about the ongoing transaction not being able to see any updates made by external sources until it commits. So does this mean that changes CAN be made but the transaction just will not be able to see them, or is it actually impossible to update the db while the transaction is going on?
Because I need it to be impossible for other queries to change anything about certain tables while the transaction is running. Right now I write-lock all those tables, start a transaction for the atomicity, commit, and then unlock. Is this the way to do this?
From my testing it seems that setting the isolation level to SERIALIZABLE accomplishes the same as manual table locking and unlocking? Is this correct?
It's going to depend on the transaction isolation level you have set on your database. You can read more about the levels here. For example, for READ UNCOMMITTED, you can actually read rows that are uncommitted by another transaction. This is usually not what you want to happen.
Locking an entire table is a really extreme choice though, and should probably not be done unless there's no other choice. My recommendation would be to consider the rows you need to lock, and then you can lock those specific rows using a select for update statement.
For example, suppose you have a resources table and a schedules table that contains bookings for those resources. When booking a resource, you have to check the schedules table for the given resource to make sure it's available for the desired time. However, you have to do this in a concurrency-safe way: between the time you check the schedules table for availability and the time you actually insert the row into the schedules table, you want to ensure that some other transaction doesn't book the resource for the same time (or an overlapping time).
You can accomplish this by using a select for update command:
select * from resources where resource_name = 'a' for update;
Assuming you're doing this in a stored procedure, if some other code fires the stored procedure for the same resource, it will block on that statement. This will ensure that resources don't get double booked.
We could also accomplish this by locking the entire resources table. However, there's no need to do that since we're only interested in booking a single resource. So it's good enough to just lock the resource row we care about.
Note that for MySQL, you need to index the columns you use in the for update or it will lock the entire table.
The point to all this is to always consider maximum concurrency. In other words, don't lock more than you need to. Otherwise, you make the application much less scalable and you inhibit concurrency.
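A hedged sketch of the full check-then-book pattern (the schedules columns and literal times are illustrative, not from the question):

START TRANSACTION;

-- Serialize on the resource row; any concurrent booker blocks here:
SELECT * FROM resources WHERE resource_name = 'a' FOR UPDATE;

-- Safe to check availability now; nobody else can pass the FOR UPDATE above:
SELECT COUNT(*) FROM schedules
WHERE resource_name = 'a'
  AND start_time < '2015-01-01 11:00:00'
  AND end_time   > '2015-01-01 10:00:00';

-- If the count is 0 the slot is free, so book it:
INSERT INTO schedules (resource_name, start_time, end_time)
VALUES ('a', '2015-01-01 10:00:00', '2015-01-01 11:00:00');

COMMIT;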

Prevent read when updating the table

In MySQL:
Every minute I empty the table and fill it with new data. I want users to be unable to read the data during the fill process; before or after is OK.
How do I achieve this?
Is a transaction the way?
Assuming you use a transactional engine (usually InnoDB), clear and refill the table in the same transaction; a sketch follows the caveats below.
Be sure that your readers use the READ COMMITTED or higher transaction isolation level (the default is REPEATABLE READ, which is higher).
That way readers will continue to be able to read the old contents of the table during the update.
There are a few things to be careful of:
If the table is so big that it exhausts the rollback area - this is possible if you update the whole of (say) a 1M row table. Of course this is tunable but there are limits
If the transaction fails part way through and gets rolled back - rolling back big transactions is VERY inefficient in InnoDB (it is optimised for commits, not rollbacks)
Be careful of deadlocks and lock wait timeouts, which are more likely if you use big transactions.
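A minimal sketch of the transactional clear-and-refill, assuming the new rows come from a staging table (t and t_staging are placeholder names); note the DELETE rather than TRUNCATE, since TRUNCATE TABLE is not transaction-safe in MySQL:

START TRANSACTION;
DELETE FROM t;                          -- transactional, unlike TRUNCATE
INSERT INTO t SELECT * FROM t_staging;
COMMIT;                                 -- readers atomically switch to the new contents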
You can LOCK your table for the duration of your operation:
http://dev.mysql.com/doc/refman/5.1/en/lock-tables.html
A table lock protects only against inappropriate reads or writes by other sessions. The session holding the lock, even a read lock, can perform table-level operations such as DROP TABLE. Truncate operations are not transaction-safe, so an error occurs if the session attempts one during an active transaction or while holding a table lock.
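A minimal sketch of that approach (t and t_staging are placeholder names):

LOCK TABLES t WRITE, t_staging READ;    -- every table the session touches must be locked
DELETE FROM t;                          -- use DELETE, not TRUNCATE, while holding a table lock
INSERT INTO t SELECT * FROM t_staging;
UNLOCK TABLES;                          -- blocked readers resume and see only the new data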
I don't know enough about the internal row-versioning mechanisms of MySQL (or indeed, whether there is one), but other databases (Oracle, PostgreSQL, and more recently, SQL Server) have invested a lot of effort into allowing writers to not block readers, insofar as readers have access to the version of the rows that existed immediately before the update/write process started. Once the update is committed, that version of the row becomes the one made available to all readers, thereby avoiding the bottleneck that the above behaviour in MySQL would introduce.
This policy ensures that table locking is deadlock free. There are, however, other things you need to be aware of about this policy: If you are using a LOW_PRIORITY WRITE lock for a table, it means only that MySQL waits for this particular lock until there are no other sessions that want a READ lock. When the session has gotten the WRITE lock and is waiting to get the lock for the next table in the lock table list, all other sessions wait for the WRITE lock to be released. If this becomes a serious problem with your application, you should consider converting some of your tables to transaction-safe tables.
You can load your data into a shadow table as slowly as you like, then instantly swap the shadow and actual with RENAME TABLE:
truncate table shadow; # make sure it is clean to start with
insert into shadow .....; # lots of inserts etc against shadow table
rename table active to temp, shadow to active, temp to shadow;
truncate table shadow; # throw away the old active data
The rename statement is atomic. An intermediate name, temp, is used to help swap the names of shadow and active.
This should work with all storage engines.
Rename table - MySQL Manual