I'm using MySQL (InnoDB storage engine). I want to implement row-level implicit locking on an UPDATE statement, so that no other transaction can read or update that row concurrently.
Example:
Transaction1 is executing
"UPDATE Customers
SET City='Hamburg'
WHERE CustomerID=1;"
Then, at the same time, Transaction2 should not be able to read or update the same row, but it should still be able to access other rows.
Any help would be appreciated.
Thank you for your time.
If that UPDATE stands alone, with no other statements around it, it is already atomic.
If, for example, you needed to look at the row before deciding to change it, then it is a little more complex:
BEGIN;
SELECT ... FOR UPDATE;
-- decide what you need to do
UPDATE ...;
COMMIT;
No other connection can change the row(s) SELECTed until the COMMIT.
Other connections can see the rows involved, but they may be seeing the values as they were before the BEGIN. Reads usually don't matter. What usually matters is that everything between BEGIN and COMMIT is "consistent", regardless of what is happening in other connections.
Your connection might be delayed, waiting for another connection to let go of something (such as the SELECT...FOR UPDATE). Some other connection might be delayed. Or there could be a "deadlock" -- when InnoDB decides that waiting will not work.
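For the example in the question, a minimal sketch of that locking-read pattern might look like this (the intermediate read is hypothetical, added only to show why you might inspect the row before changing it):

BEGIN;
SELECT City FROM Customers WHERE CustomerID = 1 FOR UPDATE;  -- row is now locked against writers
-- ... application logic decides on the new value ...
UPDATE Customers SET City = 'Hamburg' WHERE CustomerID = 1;
COMMIT;  -- lock is released here

Until the COMMIT, another transaction issuing its own SELECT ... FOR UPDATE or UPDATE against CustomerID 1 will block, while rows with other IDs stay accessible.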
Which query is faster among these:
DROP TABLE table_Name
TRUNCATE TABLE table_Name
DELETE FROM table_Name
In MySQL, for a table with a significant number of rows, I would suppose that drop is the fastest operation, then truncate, and finally delete.
Rationale:
drop and truncate are DDL operations, as opposed to delete, which is a DML operation; as the number of rows increases, the performance of delete degrades quickly (while DDL operations are less dependent on the underlying dataset size).
in MySQL, truncate under the hood drops and recreates the table, so it cannot be faster than a straight drop.
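As a hedged illustration (table name as in the question), the three statements differ in what they keep and how they execute:

DROP TABLE table_Name;       -- removes rows, table definition, and indexes in one DDL step
TRUNCATE TABLE table_Name;   -- internally drops and recreates the table; keeps the definition
DELETE FROM table_Name;      -- removes rows one at a time as DML; transactional, slowest on large tables

DELETE is also the only one of the three that accepts a WHERE clause and, on InnoDB, can be rolled back.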
I'm working on a database that has one table with 21 million records. Data is loaded once when the database is created and there are no more insert, update or delete operations. A web application accesses the database to make select statements.
It currently takes 25 seconds per request for the server to return a response. However, if multiple clients make simultaneous requests, the response time increases significantly. Is there a way of speeding this process up?
I'm using MyISAM instead of InnoDB, with fixed max rows, and have indexed the searched field.
If no data is being updated/inserted/deleted, then this might be a case where you want to tell the database not to lock the table while you are reading it.
For MySQL this seems to be something along the lines of:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM TABLE_NAME;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
(ref: http://itecsoftware.com/with-nolock-table-hint-equivalent-for-mysql)
More reading in the docs, if it helps:
https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html
The T-SQL equivalent, which may help if you need to google further, is
SELECT * FROM TABLE WITH (nolock)
This may improve performance. As noted in other comments, some good indexing may help, and maybe breaking the table out further (if possible) to spread things around so you aren't accessing all the data if you don't need it.
As a note: locking a table prevents other people from changing data while you are using it. Not locking a table that has a lot of inserts/deletes/updates may cause your selects to return multiple rows of the same data (as it gets moved around on the hard drive), rows with missing columns, and so forth.
Since you've got one table you are selecting against, your requests are all taking turns locking and unlocking the table. If you aren't doing updates, inserts, or deletes, then your data shouldn't change, so you should be OK to forgo the locks.
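Beyond the locking change, it may be worth confirming that the index is actually being used; a sketch, assuming a hypothetical table big_table and searched column lookup_col:

EXPLAIN SELECT * FROM big_table WHERE lookup_col = 'some_value';
-- If the key column in the EXPLAIN output is NULL, no index is being used:
ALTER TABLE big_table ADD INDEX idx_lookup (lookup_col);

With 21 million rows, a query that EXPLAIN shows as a full table scan would account for multi-second response times on its own.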
When should the DELAY_KEY_WRITE attribute be used?
How does it help?
CREATE TABLE (....) DELAY_KEY_WRITE = 1;
Another performance option in MySQL is DELAY_KEY_WRITE. According to the MySQL documentation, the option makes index updates faster because they are not flushed to disk until the table is closed.
Note that this option applies only to MyISAM tables.
You can enable it on a table by table basis using the following SQL statement:
ALTER TABLE sometable DELAY_KEY_WRITE = 1;
This can also be set in the advanced table options in the MySQL Query Browser.
This performance option could be handy if you have to do a lot of updates, because you can delay writing the indexes until tables are closed. So if you make frequent updates to large tables, you may want to check out this option.
Ok, so when does MySQL close tables?
That should have been your next question. It looks as though tables are opened when they are needed, but then added to the table cache. This cache can be flushed manually with FLUSH TABLES; but here's how they are closed automatically according to the docs:
1. When the cache is full and a thread tries to open a table that is not in the cache.
2. When the cache contains more than table_cache entries and a thread is no longer using a table.
3. When FLUSH TABLES; is called.
"If DELAY_KEY_WRITE is enabled, this means that the key buffer for tables with this option are not flushed on every index update, but only when a table is closed. This speeds up writes on keys a lot, but if you use this feature, you should add automatic checking of all MyISAM tables by starting the server with the --myisam-recover option (for example, --myisam-recover=BACKUP,FORCE)."
So if you do use this option you may want to flush your table cache periodically, and make sure you startup using the myisam-recover option.
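A minimal sketch of the whole cycle, with a hypothetical table name and columns:

CREATE TABLE log_entries (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    message VARCHAR(255),
    KEY idx_message (message)
) ENGINE=MyISAM DELAY_KEY_WRITE=1;

-- ... many inserts/updates; index changes accumulate in the key buffer ...

FLUSH TABLES log_entries;  -- forces the delayed index writes to disk

The trade-off is crash safety: if the server dies before the flush, the indexes can be left inconsistent with the data, which is why the docs recommend the --myisam-recover option.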
What is the difference between concurrency control and transaction isolation levels?
I understand each of them clearly, however, I am having some problems relating them to each other. Specifically, I see some overlap in their functions and I'm not sure when one should use one over the other. Or should both be used together?
Also what does it mean to say pessimistic locking with repeatable read? Doesn't repeatable read already imply that all values to be edited will be locked? So why is there still a need for pessimistic locking?
The issue arises because there are two models for concurrency control, which are sometimes mixed by SQL implementations.
locks, as in 2PL (Two Phase Locking)
versions, as in MVCC (Multiversion Concurrency Control)
Pessimistic means rows that are read are locked. Optimistic means rows that are read are not locked.
The classic 2PL implementation of Repeatable Read is always pessimistic. The multiversion implementation of Repeatable Read is optimistic. It does not lock the rows that are read for a SELECT statement and allows other transactions to modify the rows that have been read in a SELECT. Such changes are not visible to the transaction that performed the SELECT, until it is committed.
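In MySQL terms, the difference is visible in the statement itself; a sketch with a hypothetical accounts table, run inside an open transaction:

SELECT balance FROM accounts WHERE id = 1;             -- MVCC consistent read: no row lock (optimistic)
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;  -- locking read: the row is locked (pessimistic)

This is why "pessimistic locking with repeatable read" is not redundant: the isolation level alone does not lock plain reads, so you opt into the lock with FOR UPDATE (or LOCK IN SHARE MODE) when you intend to modify what you read.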
Concurrency control is a general term for any mechanism that handles issues that arise from concurrent connections.
Transaction isolation levels are a mechanism by which MySQL implements concurrency control.
See Consistent Nonlocking Reads for documentation on how MySQL implements REPEATABLE READ without pessimistic locking:
A consistent read does not set any locks on the tables it accesses, and therefore other sessions are free to modify those tables at the same time a consistent read is being performed on the table.
Suppose that you are running in the default REPEATABLE READ isolation level. When you issue a consistent read (that is, an ordinary SELECT statement), InnoDB gives your transaction a timepoint according to which your query sees the database. If another transaction deletes a row and commits after your timepoint was assigned, you do not see the row as having been deleted. Inserts and updates are treated similarly.
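A sketch of that behavior with a hypothetical table t, run from two separate connections:

-- Session 1:
START TRANSACTION;
SELECT c FROM t WHERE id = 1;   -- returns the original value

-- Session 2, meanwhile, modifies and commits the same row:
UPDATE t SET c = 'changed' WHERE id = 1;

-- Session 1 again:
SELECT c FROM t WHERE id = 1;   -- still returns the original value (consistent read)
COMMIT;
SELECT c FROM t WHERE id = 1;   -- now returns 'changed'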
I have a MySQL database that is continually growing.
Every once in a while I OPTIMIZE all the tables. Would this be the sort of thing I should put on a cron job, daily or weekly?
Are there any specific tasks I could set to run automatically that keep the database in top performance?
Thanks
Ben
You can get optimization suggestions for the tables in your database by executing this query:
SELECT * FROM `db_name`.`table_name` PROCEDURE ANALYSE(1, 10);
This will suggest the optimal field type for each column; you then have to ALTER the table so that the optimal field type is used.
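For instance (hypothetical column and suggestion): if ANALYSE reports that an INT column only ever holds small values, you might act on it like this:

ALTER TABLE `table_name` MODIFY `status_col` TINYINT UNSIGNED NOT NULL;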
Also, you can profile your queries in order to make sure that proper indexing has been done on a table.
I suggest you try SQLyog, whose "Calculate Optimal Datatype" and "SQL Profiler" features will definitely help you in optimizing server performance.
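On the cron question: a hedged sketch of periodic maintenance (database and table names are placeholders):

-- Run weekly, e.g. from a script scheduled via cron:
OPTIMIZE TABLE `db_name`.`table_one`, `db_name`.`table_two`;
-- Or optimize every table in the database from the shell:
-- 0 3 * * 0  mysqlcheck --optimize db_name

Weekly is usually enough unless the tables see heavy deletes; OPTIMIZE TABLE can lock the table while it runs, so schedule it during a quiet window.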