I'd like to set the maximum log file size to 64M. After doing so with innodb_log_file_size=64M, MySQL starts OK, but nothing seems to work properly.
EDIT: and by "properly" I mean not at all. Setting other InnoDB variables doesn't cause any problems.
How should I go about troubleshooting this one?
Make sure MySQL shuts down cleanly, and delete (or move elsewhere) all ib_logfile* files from the MySQL data directory (usually /var/lib/mysql/).
I've tested it and it worked for me. Here's the source of this hint.
InnoDB reports some errors in the SHOW TABLE STATUS Comment field. You'll find other problems in the MySQL error log (hostname.err in the MySQL data directory).
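For example, to check both places mentioned above (your_database and your_table are placeholders, and the path assumes the default datadir):
mysql -e "SHOW TABLE STATUS FROM your_database LIKE 'your_table'\G" | grep -i comment
sudo tail -n 50 /var/lib/mysql/$(hostname).err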
I ran into this problem too, and as per porneL's answer, here are the specific bash steps I used to correct it:
service mysql stop # Stop MySQL
rm /var/lib/mysql/ib_logfile0 # Delete log file 1
rm /var/lib/mysql/ib_logfile1 # Delete log file 2
vim my.cnf # Change innodb_log_file_size = 64M
service mysql start # Start MySQL
I found these specific steps on the MySQL forums.
Before changing innodb_log_file_size, you must flush all remaining transactional data out of the redo logs. To do this, set innodb_fast_shutdown to 0 or 2:
innodb_fast_shutdown = 0 : InnoDB does a slow shutdown, a full purge and an insert buffer merge before shutting down
innodb_fast_shutdown = 2 : InnoDB flushes its logs and shuts down cold, as if MySQL had crashed; no committed transactions are lost, but the crash recovery operation makes the next startup take longer.
In light of this, here is how to handle it:
mysql -ANe"SET GLOBAL innodb_fast_shutdown = 2"
vi /etc/my.cnf # Change innodb_log_file_size = 64M
service mysql stop # Stop MySQL
rm /var/lib/mysql/ib_logfile0 # Delete log file 1
rm /var/lib/mysql/ib_logfile1 # Delete log file 2
service mysql start # Start MySQL
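Once MySQL is back up, you can confirm that the new redo log size took effect (the data directory path assumes the usual /var/lib/mysql):
mysql -ANe"SHOW VARIABLES LIKE 'innodb_log_file_size'"
ls -lh /var/lib/mysql/ib_logfile*    # each file should now be 64M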
Please help!
I set up a master-slave replication based on the GTID mechanism.
The replication works OK, until a mysqld restart happens on slave. Then the mess begins...
After such a restart, I can not restore the replication.
When issuing a "START SLAVE" command, I get the following error message:
ERROR 1794 (HY000) at line 1: Slave is not configured or failed to
initialize properly. You must at least set --server-id to enable
either a master or a slave. Additional error messages can be found in
the MySQL error log.
Needless to say I did set server-id in my.cnf (see below).
In /var/log/mysqld.log file, I found the following error message:
[ERROR] Error creating master info: Multiple replication metadata
repository instances found with data in them. Unable to decide which
is the correct one to choose.
[ERROR] Failed to create or recover replication info repository.
I cannot understand what I have done wrong.
The communication between master and slave is SSL-tunneled through stunnel, but I don't think this is relevant, since everything works fine until a restart.
The only way I have found to re-establish replication (after a mysqld restart) is to manually delete the MySQL data files and then reload the dump taken from the master (I use mysqldump). This is of course unreasonable.
Following are the my.cnf files:
On slave:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Recommended in standard MySQL setup
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
server-id=2
log-bin=mysql-bin
binlog_format=ROW
relay_log=relay-log
skip-slave-start
enforce-gtid-consistency
gtid-mode=ON
log-slave-updates
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
On master:
[mysqld]
server-id=1
log-bin=mysql-bin
binlog_format=ROW
gtid-mode=on
enforce-gtid-consistency
log-slave-updates
innodb_buffer_pool_size = 1G
query_cache_size = 32M
Slave machine: Centos 6.6, mysql 5.6.24.
Master machine: RHEL 6.6, mysql 5.6.10.
Any help would be greatly appreciated!
Thanks
Nadav Blum
On the master:
mysql> reset master;
(This command will clear the master's binary logs and start a new one, so back them up first if you need them.)
When the slave mysqld has started, run the following commands:
mysql> stop slave;
mysql> reset slave;
mysql> change master to master_host='192.168.10.116', master_user='root', master_password='root', master_auto_position=1;
mysql> start slave;
mysql> show slave status \G
Now, if all goes well, you can restart the slave. If it has committed all the transactions, there is no problem; otherwise it will start executing the transactions from your master's binary log. You can check your relay log file.
Well, mystery solved.
Remember how I wrote that the issue has nothing to do with my usage of stunnel as the means of tunneling communication between master and slave?
Well, I was wrong.
The thing is, I used localhost port 3307 as the endpoint for the slave's communication to the master (stunnel listened on this port and forwarded data to the master server's IP). So the "change master" was done via:
change master to master_host='localhost', master_port=3307, master_user='XXX', master_password='XXX', MASTER_AUTO_POSITION = 1;
That "localhost" thing caused the mess. I changed it to "127.0.0.1", and now restarts cause no harm!
Thanks Hitech and Jaydee for your help!
Ran into the same problem yesterday.
An Oracle support doc helped.
For people who don't have Oracle support:
CAUSE
The cause is that both TABLE and FILE replication repository metadata exist at the same time, but only one form should.
SOLUTION
Before setting up replication, remove the files specified by the my.cnf variables relay_log_info_file and master_info_file.
By default their names map to relay-log.info and master.info, and they are located in the datadir. (I had to remove the master.info file.)
And remove any residual configuration by executing:
STOP SLAVE;
SET SQL_LOG_BIN=0;
DELETE FROM mysql.slave_master_info;
DELETE FROM mysql.slave_relay_log_info;
SET SQL_LOG_BIN=1;
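To keep the ambiguity from coming back, you could also pin the repository type explicitly in the slave's my.cnf. This is optional, and TABLE is just one of the two valid choices (FILE is the other):
[mysqld]
master-info-repository=TABLE
relay-log-info-repository=TABLE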
In short: my binary logs aren't being created even though log-bin is set. I'm not sure how to fix it.
I have a MariaDB instance running as a service on Windows that I am attempting to replicate to a MariaDB instance on an Ubuntu machine. I am using MySQL Workbench 6.0 as much as I can to manage everything, and following the instructions from Oracle here for setting up master-slave replication: http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html
I have made it to the fourth chapter, where I allegedly have the master and slave both configured, and I am about to read-lock the master tables for an initial data dump to the slave before I start up replication. So I flushed the tables with read lock and checked the master status:
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
That last line didn't return any binary log information. Checking further, I ran:
SHOW BINARY LOGS;
and an error message confirmed that:
Error Code: 1381. You are not using binary logging
The master config is like this:
[mysqld]
datadir = "C:/mysql/data"
port=3306
sql_mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"
default_storage_engine=innodb
innodb_buffer_pool_size=1535M
innodb_log_file_size=50M
feedback=ON
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
log-bin-index = "C:/mysql/logs/log-bin.index"
log-bin=mysql-bin
server-id=1
innodb_flush_log_at_trx_commit=1
[client]
port=3306
How do I make sure the binary logs are rolling so I can continue with this?
When I try to check the binary logs:
SHOW BINARY LOGS;
I get this error:
ERROR 1381 (HY000): You are not using binary logging.
How to resolve this? Can anybody help?
Set the log-bin variable in your MySQL configuration file, then restart MySQL.
An example my.cnf (on Linux/unix) or my.ini (on Windows) would look like:
[client]
...
[mysqld]
...
log-bin=mysql-bin
---
Once restarted, MySQL automatically creates a new binary log (does so upon every restart).
You may also wish to look at the following variables:
server-id = 1
expire_logs_days = 4
sync_binlog = 1
Read the details in the MySQL documentation. If you're after a replication setup (a primary reason for using binary logs), check out the replication configuration checklist.
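After the restart, a quick way to confirm that binary logging is actually on:
SHOW VARIABLES LIKE 'log_bin';
SHOW BINARY LOGS;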
The line
log-bin=mysql-bin
must be placed above the lines:
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
You will need to activate binary logging at startup
Add the following lines in /etc/my.cnf under the [mysqld] section
[mysqld]
log-bin=mysql-bin
expire-logs-days=7
Then, run this
service mysql restart
The next time you log in to mysql, you will see the binary log listing, and the logs will rotate out after 7 days.
The default location of the binary logs will be /var/lib/mysql or where datadir is defined. If you specify a folder before the binlog name, then that folder is the location.
For example
[mysqld]
log-bin=/var/log/mysql-bin
expire-logs-days=7
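To confirm where the logs actually land, just list them in whichever directory applies (these paths mirror the two examples above):
ls -l /var/lib/mysql/mysql-bin.*   # default datadir case
ls -l /var/log/mysql-bin.*         # when a folder is given before the binlog name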
UPDATE 2012-07-12 02:20 AM EDT
Please restart mysql as follows and tell us if binary logging is on:
service mysql restart --log-bin=mysql-bin
To enable the binary log, start the server with the --log-bin[=base_name] option.
If no base_name value is given, the default name is the value of the pid-file option (which by default is the name of the host machine) followed by -bin.
If the basename is given, the server writes the file in the data directory unless the basename is given with a leading absolute path name to specify a different directory. It is recommended that you specify a basename.
Or you can directly use:
log-bin=mysql-bin
and then restart your mysql service. The binary log file will then be generated. If you are using lampp on a Linux machine, you will find this file in /lampp/var/mysql/mysql-bin.000001.
FWIW, I had the same issue after I tried to set up my.cnf.master and my.cnf.slave files and symlink them to my.cnf for master and slave, respectively. The idea was to be able to switch the machine from master to slave and back easily.
It turned out that mysqld simply did not handle the symlink as expected. Hard-linking the file worked (ln my.cnf.master my.cnf). Careful if you do something like this, as overwriting one of the hard-linked filenames could break the link and create two separate files instead (depending on the method of rewriting employed by the software you use for it).
I've found logging will silently fail to happen even if my.cnf config is right, so you can also try re-creating your log folder.
This may be necessary if the logs are in an odd state. (In my case, I had simply disabled logging in my.cnf and then re-enabled it, but nothing happened, probably because the existing log files were stale.)
Something like this should work:
sudo service mysql stop
sudo mv /var/log/mysql /tmp/mysqlold # or rm -fr if you're brave
sudo mkdir /var/log/mysql
sudo chown -R mysql:mysql /var/log/mysql
sudo service mysql start
Obligatory warning: Obviously, take care when deleting anything on a database server. This will destroy/disrupt/corrupt any replication using this database as master (though you can resume replication as a slave). That said, I believe this should be safe insofar as it doesn't delete the database itself.
I went out of my mind with this issue on a MySQL 5.5 master running Debian. None of the above worked. Finally, I rebooted the server and logging was enabled.
Remove the [mysqld_safe] section header and replace it with [mysqld].
It worked for me.
Recently, I found out that I can maximize MySQL performance if I have good hardware. Since I've been using InnoDB, I added additional configuration to my.ini.
Here are the newly added configuration options:
innodb_data_file_path = ibdata1:10M:autoextend
innodb_buffer_pool_size = 2G
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 256M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 120
Then I restarted all of the services. But when I used my program, an error occurred: "Unknown table engine 'InnoDB'".
What I have tried to solve this problem:
I deleted the log files and then restarted the service, but I still got the error.
Other solutions did not fix my problem.
The InnoDB engine was disabled after adjusting the config.
Removing the borked ib_* log files in the MySQL data dir fixed my issue and allowed me to use a 2G buffer pool for InnoDB:
http://www.turnkeylinux.org/forum/support/20090111/drupal-6-problem-enable-innodb#comment-131
I just retried deleting the log files and restarted the services, and it works! But beware of allotting 2G, because InnoDB might fail to start; use 1G if 2G doesn't work.
I have run into this problem as well. The problem was that I was allocating more memory to InnoDB, via the innodb_buffer_pool_size variable, than the server had. MySQL did not complain in its logs about not being able to allocate the memory.
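If you suspect the same thing, a quick sanity check is to compare the buffer pool you configured against the RAM on the box (the config path here assumes /etc/my.cnf):
free -m                                      # total and free RAM on the server
grep -i innodb_buffer_pool_size /etc/my.cnf  # what you asked InnoDB to allocate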
I tried all of those (and many others) but the one method that worked for me is:
Stop MySql Server
/etc/init.d/mysql stop
Delete the log files
rm ib_logfile0 ib_logfile1
Rename the InnoDB data file (only if nothing else works, since it will be recreated):
mv ibdata1 old_ibdata1
I have these configs in /etc/mysql/my.cnf. Even if you don't specify them, MySQL will use the default values.
[mysqld]
datadir=/data/mysql/data
socket=/var/run/mysqld/mysqld.sock
#Not a must to define the following
innodb_log_file_size=1G
innodb_file_per_table=1
innodb_flush_method=O_DIRECT
innodb_buffer_pool_size=1G
innodb_data_file_path=ibdata1:10M:autoextend
innodb_lock_wait_timeout=18000
Start MySql Server
/etc/init.d/mysql start
Another option, if you mangle your my.cnf file completely, is to replace it with a default config from the MySQL install. For Linux, you have the following options:
/usr/share/mysql/my-huge.cnf
/usr/share/mysql/my-innodb-heavy-4G.cnf
/usr/share/mysql/my-large.cnf
/usr/share/mysql/my-medium.cnf
/usr/share/mysql/my-small.cnf
Here is an example to install it:
#backup original config
mv /etc/my.cnf{,.bak}
#copy new my.cnf from template
cp /usr/share/mysql/my-large.cnf /etc/my.cnf
More information on these options is available at http://dev.mysql.com/doc/mysql/en/option-files.html
I had this issue when restoring from a backup. The problem was that I had slightly different settings in my.ini. So in case someone else gets this issue: be sure to use the same settings (copy my.ini), stop the MySQL service, restore the whole data folder, and then start the MySQL service again.
In MariaDB 10.1, there's an ignore-builtin-innodb option that should be disabled to fix this error.
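If that option has made its way into your config, a minimal sketch of the fix (assuming it was set as a bare flag under [mysqld]) is to comment it out and restart:
[mysqld]
# ignore-builtin-innodb    <- commented out (or set ignore_builtin_innodb=0), then restart MariaDB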
I'm importing a MySQL dump and getting the following error.
$ mysql foo < foo.sql
ERROR 1153 (08S01) at line 96: Got a packet bigger than 'max_allowed_packet' bytes
Apparently there are attachments in the database, which makes for very large inserts.
This is on my local machine, a Mac with MySQL 5 installed from the MySQL package.
Where do I change max_allowed_packet to be able to import the dump?
Is there anything else I should set?
Just running mysql --max_allowed_packet=32M … resulted in the same error.
You probably have to change it for both the client (which you are running to do the import) AND the daemon mysqld that is running and accepting the import.
For the client, you can specify it on the command line:
mysql --max_allowed_packet=100M -u root -p database < dump.sql
Also, change the my.cnf or my.ini file (usually found in /etc/mysql/) under the mysqld section and set:
max_allowed_packet=100M
or you could run these commands in a MySQL console connected to that same server:
set global net_buffer_length=1000000;
set global max_allowed_packet=1000000000;
(Use a very large value for the packet size.)
As michaelpryor said, you have to change it for both the client and the daemon mysqld server.
His solution for the client command-line is good, but the ini files don't always do the trick, depending on configuration.
So, open a terminal, type mysql to get a mysql prompt, and issue these commands:
set global net_buffer_length=1000000;
set global max_allowed_packet=1000000000;
Keep the mysql prompt open, and run your command-line SQL execution on a second terminal.
This can be changed in your my.ini file (on Windows, located in \Program Files\MySQL\MySQL Server) under the server section, for example:
[mysqld]
max_allowed_packet = 10M
Re my.cnf on Mac OS X when using MySQL from the mysql.com dmg package distribution
By default, my.cnf is nowhere to be found.
You need to copy one of /usr/local/mysql/support-files/my*.cnf to /etc/my.cnf and restart mysqld. (Which you can do in the MySQL preference pane if you installed it.)
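For example, picking the medium template (any of the bundled templates would do):
sudo cp /usr/local/mysql/support-files/my-medium.cnf /etc/my.cnf
# then restart mysqld, via the MySQL preference pane or the bundled script:
sudo /usr/local/mysql/support-files/mysql.server restart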
The fix is to increase the MySQL daemon's max_allowed_packet. You can do this on a running daemon by logging in as a user with SUPER privileges and running the following commands.
# mysql -u admin -p
mysql> set global net_buffer_length=1000000;
Query OK, 0 rows affected (0.00 sec)
mysql> set global max_allowed_packet=1000000000;
Query OK, 0 rows affected (0.00 sec)
Then to import your dump:
gunzip < dump.sql.gz | mysql -u admin -p database
In /etc/my.cnf, try changing max_allowed_packet and net_buffer_length to:
max_allowed_packet=100000000
net_buffer_length=1000000
If this is not working, then try changing them to:
max_allowed_packet=100M
net_buffer_length=100K
On CentOS 6, in /etc/my.cnf under the [mysqld] section, the correct syntax is:
[mysqld]
# added to avoid err "Got a packet bigger than 'max_allowed_packet' bytes"
#
net_buffer_length=1000000
max_allowed_packet=1000000000
#
I resolved my issue with this query:
SET GLOBAL max_allowed_packet=1073741824;
and checked max_allowed_packet with this query:
SHOW VARIABLES LIKE 'max_allowed_packet';
Set the max_allowed_packet variable by issuing a command like:
mysql --max_allowed_packet=32M
-u root -p database < dump.sql
Slightly unrelated to your problem, so here's one for Google.
If you didn't mysqldump the SQL, it might be that your SQL is broken.
I just got this error by accidentally having an unclosed string literal in my code. Sloppy fingers happen.
That's a fantastic error message to get for a runaway string, thanks for that MySQL!
Error:
ERROR 1153 (08S01) at line 6772: Got a packet bigger than
'max_allowed_packet' bytes Operation failed with exitcode 1
QUERY:
SET GLOBAL max_allowed_packet=1073741824;
SHOW VARIABLES LIKE 'max_allowed_packet';
max_allowed_packet limits:
Default value (MySQL >= 8.0.3): 67108864
Default value (MySQL <= 8.0.2): 4194304
Minimum value: 1024
Maximum value: 1073741824
Sometimes setting:
max_allowed_packet = 16M
in my.ini does not work.
Try defining it in my.ini as follows:
set-variable = max_allowed_packet = 32M
or
set-variable = max_allowed_packet = 1000000000
Then restart the server:
/etc/init.d/mysql restart
It is a security risk to keep max_allowed_packet at a high value, as an attacker can push bigger packets and crash the system.
So the optimum value of max_allowed_packet should be tuned and tested.
It is better to change it only when required (using SET GLOBAL max_allowed_packet = xxx) than to have it as a permanent part of my.ini or my.cnf.
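For example, raising it just for the import and putting it back afterwards (the restore value here is only a sample; use whatever your server normally runs with):
SET GLOBAL max_allowed_packet = 1073741824; -- raise it just for the import
-- ... run the import ...
SET GLOBAL max_allowed_packet = 67108864;   -- then put it back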
I am working in a shared hosting environment and I have hosted a website based on Drupal. I cannot edit the my.ini or my.cnf file either.
So I deleted all the tables which were related to cache, and that resolved this issue. I am still looking for a proper way to handle this problem.
Edit: Deleting the tables created problems for me, because Drupal expects these tables to exist. So I emptied the contents of these tables instead, which solved the problem.
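For reference, "emptying" the tables just means truncating them. The cache table names below are only examples; the exact set varies per Drupal site:
TRUNCATE TABLE cache;
TRUNCATE TABLE cache_menu;
TRUNCATE TABLE cache_page;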
Set max_allowed_packet to the same value as (or more than) what it was when you created the dump with mysqldump. If you can't do that, make the dump again with a smaller value.
That is, assuming you dumped it with mysqldump. If you used some other tool, you're on your own.
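If you do have to re-dump with a smaller value, mysqldump accepts matching options; the sizes and the foo database name below are just examples:
mysqldump --max_allowed_packet=32M --net_buffer_length=16384 foo > foo.sql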