From what I've been able to find on the web, MySQL stores statements that alter data in the binary log, which is then read by the slave. What remains unclear is what happens to those statements next. Are they replayed as if they happened on the slave server?
For example, say there is a query with the current time in a condition, like "UPDATE something SET updatedat = NOW()", and due to replication delay the statement reaches the slave a couple of seconds later. Will the values in the table be different?
Or if there is master-master replication, at time 1000 the following query happens on server 1:
UPDATE t SET data = 'old', updatedat = 1000 WHERE updatedat < 1000
At time 1001 on server 2 the following query happens:
UPDATE t SET data = 'new', updatedat = 1001 WHERE updatedat < 1001
Then, when server 2 fetches the replication log from server 1, will the value on server 2 become "old"? If so, is there a way to avoid it?
For example, say there is a query with the current time in a condition, like "UPDATE something SET updatedat = NOW()", and due to replication delay the statement reaches the slave a couple of seconds later. Will the values in the table be different?
No. With row-based replication the slave applies the same row images the master wrote, and with statement-based replication the master records its own clock in the binary log, so NOW() evaluates to the same value when the statement is replayed. Either way, the times match.
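A minimal sketch of the statement-based case, assuming the hypothetical `something` table from the question: the binary log pins the master's clock before the statement, so what the slave effectively replays is:

```sql
-- Timestamp value illustrative only:
SET TIMESTAMP = 1466808000;              -- master's clock at execution time
UPDATE something SET updatedat = NOW();  -- NOW() returns the pinned timestamp
```

This is why replication delay does not shift NOW()-derived values on the slave.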
I'm using MySQL (InnoDB storage engine). I want row-level implicit locking on an UPDATE statement, so that no other transaction can read or update that row concurrently.
Example:
Transaction 1 is executing:
UPDATE Customers
SET City = 'Hamburg'
WHERE CustomerID = 1;
Then, at the same time, Transaction 2 should not be able to read or update the same row, but Transaction 2 should be able to access other rows.
Any help would be appreciated.
Thank you for your time.
If there are no other statements involved with that UPDATE, it is already atomic on its own.
If, for example, you needed to look at the row before deciding to change it, then it is a little more complex:
BEGIN;
SELECT ... FOR UPDATE;
-- decide what you need to do
UPDATE ...;
COMMIT;
No other connection can change the row(s) SELECTed before the COMMIT.
Other connections can see the rows involved, but they may be seeing the values before the BEGIN started. Reads usually don't matter. What usually matters is that everything between BEGIN and COMMIT is "consistent", regardless of what is happening in other connections.
Your connection might be delayed, waiting for another connection to let go of something (such as the SELECT...FOR UPDATE). Some other connection might be delayed. Or there could be a "deadlock" -- when InnoDB decides that waiting will not work.
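A sketch of how this plays out across two connections, using the Customers example from the question (column values illustrative):

```sql
-- Connection 1:
BEGIN;
SELECT City FROM Customers WHERE CustomerID = 1 FOR UPDATE;  -- takes a row lock
UPDATE Customers SET City = 'Hamburg' WHERE CustomerID = 1;

-- Connection 2 (meanwhile):
UPDATE Customers SET City = 'Berlin' WHERE CustomerID = 1;   -- blocks until Connection 1 commits
UPDATE Customers SET City = 'Oslo'   WHERE CustomerID = 2;   -- different row: not blocked

-- Connection 1:
COMMIT;  -- Connection 2's first UPDATE now proceeds
```

Note that only writers (and SELECT ... FOR UPDATE readers) block on the row lock; plain SELECTs keep reading a consistent snapshot.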
I'm working on a database that has one table with 21 million records. Data is loaded once when the database is created and there are no more insert, update or delete operations. A web application accesses the database to make select statements.
It currently takes 25 seconds per request for the server to return a response. However, if multiple clients make simultaneous requests, the response time increases significantly. Is there a way to speed this up?
I'm using MyISAM instead of InnoDB with a fixed max rows setting, and I have an index on the searched field.
If no data is being updated, inserted, or deleted, then this might be a case where you want to tell the database not to lock the table while you are reading it.
For MySQL this is something along the lines of:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM TABLE_NAME;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
(ref: http://itecsoftware.com/with-nolock-table-hint-equivalent-for-mysql)
More reading in the docs, if it helps:
https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html
The T-SQL equivalent, which may help if you need to google further, is
SELECT * FROM TABLE WITH (nolock)
This may improve performance. As noted in other comments, good indexing may help, and perhaps partitioning the table (if possible) so queries don't touch data they don't need.
As a note: locking a table prevents other people from changing data while you are using it. Reading without locks from a table that has a lot of inserts/deletes/updates may cause your selects to return duplicate rows of the same data (as rows get moved around on disk), rows with missing columns, and so forth.
Since you've got one table you are selecting against, your requests are all taking turns locking and unlocking it. If you aren't doing updates, inserts, or deletes, your data shouldn't change, so you should be OK to forgo the locks.
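Beyond the isolation level, a covering index is often the bigger win for a read-only table. A sketch, with hypothetical table and column names:

```sql
-- A composite index covering both the filter column and the selected
-- column lets the query be answered from the index alone.
CREATE INDEX idx_cover ON big_table (search_field, returned_field);

EXPLAIN SELECT returned_field
FROM big_table
WHERE search_field = 'value';
-- "Using index" in the Extra column confirms the index-only read.
```

Since the data never changes, the one-time cost of building such indexes is paid only at load time.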
I'm creating a todo app. I have a status column that receives 1, 2 or 3 (pending, overdue, completed).
Whenever I create a task it is set to pending. The user can mark it as complete. But is there a way to automatically update it to overdue in case it's not completed and due_date is less than today?
You can use the MySQL Event Scheduler.
Prerequisite:
The event scheduler must be turned ON in your MySQL server.
Check whether the event scheduler is ON or OFF:
SELECT @@event_scheduler;
To turn event_scheduler ON run the following query:
SET GLOBAL event_scheduler = ON;
Note: if you restart the MySQL server, the event scheduler status will be reset unless the following is written in the configuration file.
For Windows: in my.ini file write this under [mysqld] section
[mysqld]
event_scheduler=on
For Linux: in my.cnf file
[mysqld]
event_scheduler=on
Event:
CREATE
EVENT `updateStatusEvent`
ON SCHEDULE EVERY 1 DAY STARTS '2016-08-11 00:00:00'
ON COMPLETION NOT PRESERVE
ENABLE
DO
UPDATE your_table SET status_column = 2 WHERE status_column = 1 AND your_time_column < CURDATE();
The event will run for the first time at '2016-08-11 00:00:00', and after that it will repeat at a one-day interval, updating the status of the matching rows. (The status_column = 1 check keeps completed tasks from being marked overdue.)
If your version of MySQL supports it (version >= 5.1.6 if I'm not mistaken) you can use Event Scheduler.
CREATE EVENT check_overdue ON SCHEDULE EVERY 2 HOUR DO
UPDATE mytable SET status = 2 WHERE status = 1 AND due_date < NOW();
Another option is to set up a cron job that runs a PHP or other script.
Either way, you have to query periodically for overdue tasks and mark them as overdue.
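A sketch of the cron variant, with hypothetical database, table, and credential names, calling the mysql CLI directly rather than going through a PHP script:

```
# Crontab entry: every hour, mark pending tasks past their due date as overdue.
0 * * * * mysql -u appuser -p'secret' todo_db -e "UPDATE tasks SET status = 2 WHERE status = 1 AND due_date < NOW();"
```

The advantage over the Event Scheduler is that it works even when the scheduler is disabled on the server; the disadvantage is one more moving part outside the database.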
What is the meaning of "master heartbeat time period" in MySQL server, and how can I configure this variable in my.cnf?
As mentioned here on the MySQL Performance Blog:
MASTER_HEARTBEAT_PERIOD is a value in seconds in the range 0 to 4294967, with a resolution in milliseconds. After the loss of a beat, the slave I/O thread will disconnect and try to connect again.
You can configure it on a slave using syntax also mentioned in that article and in the queries below.
mysql_slave > STOP SLAVE;
mysql_slave > CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD=1;
mysql_slave > START SLAVE;
More information on using CHANGE MASTER can be found on the MySQL documentation site.
MASTER_HEARTBEAT_PERIOD sets the interval in seconds between replication heartbeats. Whenever the master's binary log is updated with an event, the waiting period for the next heartbeat is reset. interval is a decimal value having the range 0 to 4294967 seconds and a resolution in milliseconds; the smallest nonzero value is 0.001. Heartbeats are sent by the master only if there are no unsent events in the binary log file for a period longer than interval.
Setting interval to 0 disables heartbeats altogether. The default value for interval is equal to the value of slave_net_timeout divided by 2.
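On the slave, you can verify the configured period and confirm that heartbeats are actually arriving via the replication status variables (names per MySQL 5.x):

```sql
SHOW GLOBAL STATUS LIKE 'Slave_heartbeat_period';    -- configured interval in seconds
SHOW GLOBAL STATUS LIKE 'Slave_received_heartbeats'; -- count of heartbeats received
```

A steadily increasing heartbeat count during idle periods indicates the master-slave connection is healthy even when no events are flowing.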
Is it possible to restore, from a full backup or a parallel db, only certain records with their original IDs?
Let's say records were deleted from a specific date onward; can those records be restored without restoring the entire table?
So to be clear: let's say I have records 500 - 720 still in a backup or parallel db, but the table has had new records added since the backup, so I don't want to lose those either. I simply want to slot records 500 - 720 back into the current table with their original IDs.
If you have a copy of the db, that's going to be the easiest and quickest way - create a copy of your table with just the rows you need:
CREATE TABLE table2 AS
SELECT * FROM table1
WHERE table1.ID BETWEEN 500 AND 720;
then dump table2 with mysqldump:
mysqldump -u user -p thedatabase table2 > table2_dump.sql
then ship the dump to the main server, load it into a temporary database, and insert the missing records using:
INSERT INTO table1
SELECT *
FROM temp_db.table2;
If you don't have a copy of the db with the missing records, just a backup, then I don't think you can do such a selective restore. If you just have a single dump file of the entire db, then you will have to restore a complete copy to a temporary db, and insert the missing records in a similar manner to the way I've described above, but with a where clause in the insert.
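If the copy is reachable with mysqldump, you can skip the intermediate table entirely by dumping just the ID range (database and user names here are hypothetical):

```shell
# Dump only the missing rows, data only (no CREATE TABLE statement):
mysqldump -u user -p source_db table1 \
  --no-create-info --where="ID BETWEEN 500 AND 720" > missing_rows.sql

# Load into the live database; rows keep their original IDs because the
# dump file contains explicit ID values in its INSERT statements.
mysql -u user -p live_db < missing_rows.sql
```

This only works if IDs 500 - 720 are still absent from the live table; otherwise the inserts will fail on duplicate keys.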