MySQL Event Scheduler does not work but everything is set

My MySQL events never execute.
I am using the latest 5.6.10 release, but the problem also occurred in the 5.5 versions.
I have a one-master, many-slaves configuration, and I want to run the event on the master.
1, In my.ini I have already set
event_scheduler=ON
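The same variable can also be enabled at runtime, without a restart (this is just the standard global setting, nothing specific to this setup):
SET GLOBAL event_scheduler = ON;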
2, Checking this:
show variables like '%event_scheduler%';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| event_scheduler | ON    |
+-----------------+-------+
show processlist\G
Id: 1
User: event_scheduler
Host: localhost
db: NULL
Command: Daemon
Time: 720
State: Waiting on empty queue
Info: NULL
...
3, Next, I have a recurring event on the master server.
It never executes. I also tried changing the start date to a different time.
CREATE EVENT IF NOT EXISTS dbname.event_xyz
ON SCHEDULE EVERY 1 DAY STARTS '2013-02-05 00:15:00'
ON COMPLETION PRESERVE
DISABLE ON SLAVE
COMMENT 'Collect data from ...'
DO CALL dbname._procedurename;
I get no errors when calling the "_procedurename" procedure directly.
4, The creator of the event and the procedure is "root".
5, The output of
show events\G
Db: dbname
Name: event_xyz
Definer: root@localhost
Time zone: SYSTEM
Type: RECURRING
Execute at: NULL
Interval value: 1
Interval field: DAY
Starts: 2013-02-05 00:15:00
Ends: NULL
Status: SLAVESIDE_DISABLED
Originator: 1
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation: utf8_general_ci
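For completeness, the same information can be checked directly from INFORMATION_SCHEMA (using the dbname placeholder from above), for example:
SELECT EVENT_SCHEMA, EVENT_NAME, STATUS, LAST_EXECUTED
FROM INFORMATION_SCHEMA.EVENTS
WHERE EVENT_SCHEMA = 'dbname';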
6, After restarting the master server, I get the following lines in
masterservername.err:
...
Event Scheduler: Purging the queue. 0 events
...
Event Scheduler: Loaded 0 events
...
and
show processlist\G
shows that the event_scheduler thread is again in
State: Waiting on empty queue!
So, my question is: why does the event not run on my master server?
Why does the event scheduler not know anything about the previously defined event?
Am I missing something?

Perhaps this is a bug.
The solution was to remove the
DISABLE ON SLAVE
clause from the event declaration.
After this, the event changed to "ENABLED" status and was fired by the Event Scheduler on the master server.
On the slaves, the state of this event replicated as "SLAVESIDE_DISABLED". The re-created definition is sketched below.
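For reference, the re-created event is the same as in the question, just without the DISABLE ON SLAVE clause (so it defaults to ENABLE on the master); roughly:
DROP EVENT IF EXISTS dbname.event_xyz;
CREATE EVENT dbname.event_xyz
ON SCHEDULE EVERY 1 DAY STARTS '2013-02-05 00:15:00'
ON COMPLETION PRESERVE
COMMENT 'Collect data from ...'
DO CALL dbname._procedurename;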
So, for what purpose does the "DISABLE ON SLAVE" parameter in the CREATE EVENT syntax exist?

This might be bug http://bugs.mysql.com/bug.php?id=67191, fixed in 5.6.14.

Related

MySQL 8 Replication errors

Good morning. I have set up GTID replication between two MySQL 8 databases; they are for a PowerDNS setup. When I make changes on the master I can only get one transaction to replicate, then I am presented with this error:
Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction '0b5041c0-8e71-11ec-a064-00155d14ef09:5' at master log binlog.000001, end_log_pos 2500. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
If I run these commands:
On master database:
mysql> reset master;
On slave database:
mysql> stop slave;
mysql> reset slave;
mysql> reset master;
mysql> start slave;
This clears the error and I am able to sync one more entry, then I am presented with the above error again.
I used this guide to set it up; maybe I missed something in the setup:
https://medium.com/@michael_w_s/basic-setup-of-master-slave-gtid-replication-on-mysql-8-8f39ea29765c
Any help would be greatly appreciated.
Thank you all for taking the time to reply. I have resolved this issue.
My main problem was that I was importing all my data before setting up replication.
As soon as I set up replication first and then imported the data, everything has been fine.
I ran into the same issue. The error isn't actually shown in that output; it will be in your error log, or in one of the other locations the message mentions.
I found the root cause with:
tail /var/log/mysql/error.log
Your log location may be different. In my case it was a permissions-related error, but it could be anything.
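On MySQL 8 the same details can also be read from the table the error message points at, with something like:
SELECT CHANNEL_NAME, WORKER_ID, LAST_ERROR_NUMBER, LAST_ERROR_MESSAGE, LAST_ERROR_TIMESTAMP
FROM performance_schema.replication_applier_status_by_worker;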
I have had the same issue and found that I was using an older-style variable name in the master's my.cnf:
binlog-do-db = mydatabase
In the error log on the replica, I was getting errors because of another database I hadn't specified: 'my_other_database not found'. This implied that it was ignoring the variable and replicating ALL databases.
On MySQL 8, the correct variable name to use is:
binlog_do_db = mydatabase
This solved the issue for me. Just ensure that your master database is listed in the table under "Binlog_Do_DB":
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000014 | 66373612 | mydatabase   |                  |                   |
+------------------+----------+--------------+------------------+-------------------+

SELECT query in User sleep state

The MySQL server is blocking many queries. When I use show processlist, I find a simple query that started 30 days ago:
Command: Query
Time: 2262201
State: User sleep
Info: SELECT `abc` FROM `xxx` WHERE `a`='a' AND `b`='b';
Below is what the MySQL documentation says about the User sleep state:
https://dev.mysql.com/doc/refman/5.7/en/general-thread-states.html
"The thread has invoked a SLEEP() call."
But this query string does not include the SLEEP() function, so what causes this state?
xxx is an InnoDB table; the MySQL version is 5.7.32-log.
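As a side note, a filtered view of the same processlist information can be pulled from INFORMATION_SCHEMA on 5.7, which makes threads stuck this long easier to spot (the 30-day cut-off is only an illustration):
SELECT ID, USER, HOST, DB, COMMAND, TIME, STATE, INFO
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE TIME > 30 * 24 * 60 * 60
ORDER BY TIME DESC;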

Replication not updating all tables

Master db: MySQL on Server 2012
Slave db: MySQL on Win7 XAMPP
DB size: 500 MB
Table count: 42
I had set up the replication successfully; however, it stopped last week and my slave was showing the error Slave_SQL_Running: No. I realised that it was looking at an incorrect log file (00004, whereas it should have been 00006).
I have since sorted this by:
At the MASTER:
SHOW MASTER STATUS;
Copied the values of MASTER_LOG_FILE and MASTER_LOG_POS.
At the SLAVE:
STOP SLAVE;
RESET SLAVE;
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98; (<- example values)
START SLAVE;
SHOW SLAVE STATUS\G
On my master I tested the replication by editing the members table - I changed one of the row values (from 85 to 86), and this successfully replicated to my slave. However, I noticed that my master members table has 70652 members while my slave has only 70056.
I added two new members to my master members table and the total increased by 2 in both tables. However, there still seem to be about 600 rows missing.
What could be the problem? Replication seems to be working, but the totals aren't matching. New members are added to the members table each day, but they aren't being added to my slave members table.
The results of my slave status (from phpMyAdmin) are:
Slave_IO_State Waiting for master to send event
Master_Host xxx.xxx.xxx.xxx
Master_User repl
Master_Port 3306
Connect_Retry 60
Master_Log_File mysql-bin.000006
Read_Master_Log_Pos 787956776
Relay_Log_File mysql-relay-bin.000004
Relay_Log_Pos 624412
Relay_Master_Log_File mysql-bin.000006
Slave_IO_Running Yes
Slave_SQL_Running Yes
Replicate_Do_DB
Replicate_Ignore_DB
Replicate_Do_Table
Replicate_Ignore_Table
Replicate_Wild_Do_Table
Replicate_Wild_Ignore_Table
Last_Errno 0
Last_Error
Skip_Counter 0
Exec_Master_Log_Pos 787956776
Relay_Log_Space 788197
Until_Condition None
Until_Log_File
Until_Log_Pos 0
Master_SSL_Allowed No
Master_SSL_CA_File
Master_SSL_CA_Path
Master_SSL_Cert
Master_SSL_Cipher
Master_SSL_Key
Seconds_Behind_Master 0
Is there something else that I could check or test?
Yes: while replication was stopped, some rows were changed or inserted. After that, you RESET the SLAVE and set MASTER_LOG_POS, so replication can never pick up those old changes.
You have 2 options:
First (a sketch follows below):
Stop replication
Dump the master DB (with the master position)
Restore it on the slave
Set the position, or check it
Start the slave
Second:
Stop the slave
Sync the master DB to the slave DB with Percona Toolkit - pt-table-sync
Start the slave
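A minimal sketch of the first option, assuming the database is called mydb; the log file and position shown are just the example values from the status output above and must be replaced with the coordinates recorded in your own dump:
STOP SLAVE;
-- on the master (shell): mysqldump -u root -p --single-transaction --master-data=2 mydb > mydb.sql
-- on the slave (shell):  mysql -u root -p mydb < mydb.sql
CHANGE MASTER TO
MASTER_LOG_FILE='mysql-bin.000006',
MASTER_LOG_POS=787956776;
START SLAVE;
SHOW SLAVE STATUS\G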

mysql master/slave replication set up but not working

I'm experiencing some trouble setting up MySQL replication between a master and a slave.
I did the setup successfully, but the data doesn't update.
Master : show master status;
[File]: mysql-bin.000033
[Position]: 1757196
[Binlog_Do_DB]: ciel
Master : show processlist;
[User]: slave
[Host]: 92.222.177.xxx:57578 ( right slave ip )
[db]:
[Command]: Binlog Dump
[Time]: 1231
[State]: Has sent all binlog to slave; waiting for binlog to be updated
Slave : show slave status;
[Slave_IO_State]: Waiting for master to send event
[Master_Host]: 46.105.122.xxx
[Master_User]: slave
[Master_Port]: 3306
[Connect_Retry]: 60
[Master_Log_File]: mysql-bin.000033
[Read_Master_Log_Pos]: 1757196
[Relay_Log_File]: mysqld-relay-bin.000006
[Relay_Log_Pos]: 252
[Relay_Master_Log_File]: mysql-bin.000033
[Slave_IO_Running]: Yes
[Slave_SQL_Running]: Yes
[Replicate_Do_DB]: ciel
[Exec_Master_Log_Pos]: 1757196
[Relay_Log_Space]: 409
[Until_Condition]: None
[Master_SSL_Allowed]: No
[Master_SSL_Verify_Server_Cert]: No
[Master_Server_Id]: 1
Slave : show processlist;
[User]: system user
[Host]:
[db]:
[Command]: Connect
[Time]: 1231
[State]: Waiting for master to send event
[Info]:
[Id]: 2
[User]: system user
[Host]:
[db]:
[Command]: Connect
[Time]: 1231
[State]: Slave has read all relay log; waiting for the slave I/O thread to update it
Then, selecting data on the master and the slave:
master: lastmod: 2014-10-26 17:14:55
slave: lastmod: 2014-10-26 15:45:45
I'm feeling lost, because after 8 hours I still haven't found how to set this up correctly.

How is MySQL Uptime's value "computed"?

According to the MySQL documentation, the global status variable Uptime is defined as "The number of seconds that the server has been up."
However, can somebody please explain to me how this value is actually computed? What does it use as a reference, the system time?
I am asking because I just came across a weird situation: when rebooting a VM running MySQL, the ntpd service was terminated, and at startup (since it was not enabled in chkconfig) the time was shifted by +8 hours, as you can see from the following:
15:01:00 hostname shutdown[30383]: shutting down for system reboot
15:01:00 hostname init: Switching to runlevel: 6
...
15:01:06 hostname ntpd[27553]: ntpd exiting on signal 15
15:01:06 hostname syslog-ng[27399]: Termination requested via signal, terminating;
...
23:04:03 hostname kernel: Bootdata ok
The same shift is recorded in the MySQL error logs :
15:01:03 InnoDB: Starting shutdown...
15:01:05 InnoDB: Shutdown completed; log sequence number 2746293826
15:01:06 [Note] /usr/sbin/mysqld: Shutdown complete
15:01:06 mysqld_safe mysqld from pid file /var/lib/mysql/data/hostname.pid ended
23:04:06 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql/data
After we fixed the time by starting ntpd, it seemed that the Uptime had been shifted:
mysql> show global status like 'Uptime';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Uptime        | 18005 |
+---------------+-------+
1 row in set (0.00 sec)
mysql> show global status like 'Uptime_since_flush_status';
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| Uptime_since_flush_status | 18007 |
+---------------------------+-------+
Is this behavior possible, or is it probably related to other factors?
Thank you for your patience and understanding.
It should be very simple: the server records a timestamp when it starts and compares it to the current time, and both of these come from the system clock.
So if you modify the system time, the initial timestamp is not adjusted; the changed clock is simply taken as the current time in that comparison.
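As a rough illustration of that arithmetic (an assumption about the mechanism, not the actual server code): an 8-hour clock correction corresponds to
SELECT 8 * 3600 AS clock_shift_seconds; -- returns 28800
so a start timestamp taken under the wrong clock can leave Uptime looking shifted by roughly that amount once ntpd fixes the time.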