MySQL - what does skip-locking in my.cnf do?

I'm using MySQL 5.0.67 on RHEL5 and basing my configuration on my-huge.cnf.
I can't find anything in the MySQL manual about the 'skip-locking' line that appears in the config file.
Should it be replaced with 'skip_external_locking', or should I just remove the line entirely, since that is now the default?
MySQL Manual for skip-external-locking
Thanks.

See http://dev.mysql.com/doc/refman/5.0/en/external-locking.html
Quote:
If you run multiple servers that use the same database directory (not recommended), each server must have external locking enabled.
It really comes down to the dangers posed by multiple processes accessing the same data. In many DBMS situations you want to lock the table/row before performing an operation and unlock it afterwards, to prevent possible data corruption.
Edit: see http://dev.mysql.com/doc/refman/4.1/en/news-4-0-3.html
Quote:
Renamed --skip-locking to --skip-external-locking.
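For reference, a minimal sketch of what the relevant my.cnf fragment looks like after that rename (the old spelling is still accepted as a deprecated alias, and external locking is disabled by default anyway, so the line can also simply be removed):

[mysqld]
# old, deprecated spelling:
#skip-locking
# current spelling; disabled external locking is the default,
# so this line is optional:
skip-external-locking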

An additional note for anyone who doesn't follow the link provided by @Jonathan Fingland:
8.7.4. External Locking
This option only applies to MyISAM tables.
As Richard indicated, external locking is disabled by default. You need to enable external locking if you use myisamchk for write operations or if you use myisampack to pack tables.
From the docs:
If you use myisamchk to perform table maintenance operations on MyISAM
tables, you must either ensure that the server is not running, or that
the server has external locking enabled so that it locks table files
as necessary to coordinate with myisamchk for access to the tables.
The same is true for use of myisampack to pack MyISAM tables.
If you use myisamchk for write operations such as repairing or
optimizing tables, or if you use myisampack to pack tables, you must
always ensure that the mysqld server is not using the table. If you
don't stop mysqld, you should at least do a mysqladmin flush-tables
before you run myisamchk. Your tables may become corrupted if the
server and myisamchk access the tables simultaneously.
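To make the doc's advice concrete, here is a minimal sketch of the flush-then-repair sequence (the database name, table name and datadir path are hypothetical):

# make mysqld close and re-open its table files first
mysqladmin -u root -p flush-tables
# then, while the server is idle (or stopped), repair the table:
myisamchk --recover /var/lib/mysql/mydb/mytable.MYI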

Related

MariaDB - online move/archive tables

We have a script that "rotates"/archives the Syslog tables in MySQL. This script:
at the Linux level, renames the MyISAM table files and then compresses them;
then, inside MySQL, renames these tables.
Both steps are "online"; no mysqld restart is required.
Now I have built a new Syslog database in MariaDB (Debian Stretch). The tables use InnoDB rather than MyISAM, and the script fails on its second execution when renaming the tables inside MySQL after moving the files:
ERROR 1050 (42S01): Table 'SystemEvents_1' already exists
A reference to the table is kept somewhere (an internal tablespace system table?), which prevents the rename.
My question:
would it work if I migrated my tables to the Aria engine with transactional=0?
Thanks, Vince
I think it is no longer possible.
I converted my tables to MyISAM (and even to Aria with transactional=0) and got the same error message. I think the best approach is to use mysqldump to export the tables instead of renaming the filesystem files directly. It is less convenient, but mysqldump will always work regardless of the chosen engine.
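A minimal sketch of an engine-agnostic rotation along those lines (the database, table and archive paths are hypothetical; RENAME TABLE itself is atomic and online):

# swap in an empty table first, then archive the old one at leisure
mysql Syslog -e "CREATE TABLE SystemEvents_new LIKE SystemEvents;
                 RENAME TABLE SystemEvents TO SystemEvents_old,
                              SystemEvents_new TO SystemEvents;"
mysqldump Syslog SystemEvents_old | gzip > /archive/SystemEvents-$(date +%F).sql.gz
mysql Syslog -e "DROP TABLE SystemEvents_old;"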

Can MySQL transactions be used with event scheduler?

Can I use MySQL transactions on an InnoDB table inside a MySQL event? Are there any restrictions on the event scheduler?
The following are restrictions on InnoDB tables:
Warning
Do not convert MySQL system tables in the mysql database from MyISAM to InnoDB tables! This is an unsupported operation. If you do this, MySQL does not restart until you restore the old system tables from a backup or re-generate them with the mysql_install_db script.
Warning
It is not a good idea to configure InnoDB to use data files or log files on NFS volumes. Otherwise, the files might be locked by other processes and become unavailable for use by MySQL.
Go to the following link:
restrictions on the event scheduler
Hope it helps!
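To answer the transaction part directly: an event body can contain a transaction, since each event runs in its own session. A minimal sketch, with hypothetical table names:

SET GLOBAL event_scheduler = ON;
DELIMITER //
CREATE EVENT nightly_archive
ON SCHEDULE EVERY 1 DAY
DO
BEGIN
  -- on InnoDB tables, both statements commit or roll back together
  START TRANSACTION;
  INSERT INTO log_archive
    SELECT * FROM log WHERE created_at < NOW() - INTERVAL 30 DAY;
  DELETE FROM log WHERE created_at < NOW() - INTERVAL 30 DAY;
  COMMIT;
END //
DELIMITER ;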

Big Database backup best practice

I maintain a big MySQL database. I need to back it up every night, but the DB is active all the time; there are queries from users.
At the moment I just take the website offline and then run the backup, but this is very bad: the service is down and users don't like it.
What is a good way to back up the data if it changes during the backup?
What is the best practice for this?
I've implemented this scheme using a read-only replication slave of my database server.
MySQL database replication is pretty easy to set up and monitor. You can set it up so that it receives all changes made to your production database, then take the slave offline nightly to make the backup.
The Replication Slave server can be brought up as read-only to ensure that no changes can be made to it directly.
There are other ways of doing this that don't require the replication slave, but in my experience that was a pretty solid way of solving this problem.
Here's a link to the docs on MySQL Replication.
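A minimal sketch of the nightly routine on such a slave (credentials and paths are hypothetical):

# freeze the slave at a consistent point, dump, then resume replication
mysql -e "STOP SLAVE SQL_THREAD;"
mysqldump --all-databases | gzip > /backup/nightly-$(date +%F).sql.gz
mysql -e "START SLAVE SQL_THREAD;"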
If you have a really large (50 GB+, like mine) MyISAM-only MySQL database, you can use locks and rsync. According to the MySQL documentation you can safely copy the raw files while a read lock is active, but you cannot do this with InnoDB.
So if the goal is zero downtime and you have spare disk space, create a script:
rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
Then do the following:
Run FLUSH TABLES;
Run the script
Run FLUSH TABLES WITH READ LOCK;
Run the script again
Run UNLOCK TABLES;
On the first run rsync will copy a lot of data without stopping MySQL. The second run will be very short; it only delays write queries, so it is a real zero-downtime solution.
Then do another rsync from /tmp/mysql/sync to a remote server, compress, keep incremental versions, anything you like.
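Put together, the whole sequence might look like this sketch (paths as above; note that the read lock only lasts as long as the client session that took it, so the second rsync is run from inside that session via the mysql client's "system" command):

#!/bin/sh
# first pass: bulk copy while MySQL keeps serving queries
mysql -u root -p -e "FLUSH TABLES;"
rsync -aP --delete /var/lib/mysql/ /tmp/mysql/sync
# second pass: short delta copy under a global read lock,
# held open for the duration of the copy
mysql -u root -p <<'EOF'
FLUSH TABLES WITH READ LOCK;
system rsync -aP --delete /var/lib/mysql/ /tmp/mysql/sync
UNLOCK TABLES;
EOF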
This partly depends upon whether you use InnoDB or MyISAM. For InnoDB, MySQL has its own commercial solution for this (InnoDB Hot Backup), but there is an open-source alternative from Percona you may want to look at:
http://www.percona.com/doc/percona-xtrabackup/
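With a recent Percona XtraBackup, for instance, a basic hot backup is roughly the following (older releases shipped the same functionality as the innobackupex wrapper; the target directory is hypothetical):

# copy the data files from the running server without blocking it,
# then apply the log so the copy is consistent
xtrabackup --backup --target-dir=/backup/xtra
xtrabackup --prepare --target-dir=/backup/xtra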
What you want to do is called an "online backup". Here's a pointer to a matrix of possible options with more information:
http://www.zmanda.com/blogs/?p=19
It essentially boils down to the storage backend that you are using and how much hardware you have available.

MySQL dump: Is copying MYI files in production safe?

I want to copy some big, frequently used tables from the main server to a local server, in production of course.
Is it safe?
Suggestions and tools are welcome. :)
I'm guessing from your question about .MYI files that all of your tables are MyISAM tables and not InnoDB ones. Each MyISAM table is made up of three files, .frm, .MYD and .MYI, which contain the structure, data and index respectively.
People advise against copying these raw files from running systems but I've found that as long as you're sure nothing's writing to the tables then copying them works fine (I've done this more times than I care to remember).
If you're doing this on a replica then just stop replication on the slave before copying. If it's just a single server or the master I'd recommend running FLUSH TABLES WITH READ LOCK before you start copying files, this will prevent any process from writing to the tables. When you're done release the lock with UNLOCK TABLES.
I would always recommend running a CHECK TABLE on tables you've copied in this manner. I think the command I've used in the past is mysqlcheck --all-tables --fast --auto-repair
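A minimal sketch of the whole procedure (the database, table and destination names are hypothetical; the lock is released as soon as the session holding it disconnects, so keep that mysql session open while copying):

-- in a mysql session, and leave it open:
FLUSH TABLES WITH READ LOCK;
# from a second shell:
cp /var/lib/mysql/mydb/bigtable.frm /var/lib/mysql/mydb/bigtable.MYD \
   /var/lib/mysql/mydb/bigtable.MYI /backup/mydb/
-- back in the mysql session:
UNLOCK TABLES;
# finally, verify the copies on the destination server:
mysqlcheck --fast --auto-repair mydb bigtable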
If you're using LVM on the server then taking an LVM snapshot could be another way of getting hold of a clean snapshot.
If you're going to be doing this regularly, I'd recommend either using replication to keep the local server up to date, or setting up a slave that you can use for taking backups (as it's not the main database, there's no problem with stopping it, dumping tables, etc.).
Yes, it's all right if you:
shut the MySQL server down cleanly before you copy the files, or at least take a global read lock;
take the .frm, .MYI and .MYD files at the same time;
have an identical my.cnf and run the same MySQL version on each system.
So it's basically OK, but not necessarily a good idea.
If you have different versions of MySQL (you can generally only move to a more recent version, not an older one), or are running with a significantly different my.cnf (i.e. any of the full-text indexing parameters differ and you have full-text indexes), you may need to rebuild the table. Rebuilding can be done on the destination server with ALTER TABLE blah ENGINE=MyISAM; this may still be a bit faster than a mysqldump/restore.

How do I backup a MySQL database?

What do I have to consider when backing up a database with millions of entries? Are there any tools (maybe bundled with the MySQL server) that I could use?
Depending on your requirements, there are several options that I have been using myself:
if you don't need hot backups, take down the DB server and back up on the file system level, i.e. using tar, rsync or similar.
if you do need the database server to keep running, you can start with the mysqlhotcopy tool (a Perl script), which locks the tables being backed up and lets you select individual tables and databases.
if you want the backup to be portable, you might want to use mysqldump, which creates SQL scripts to recreate the data, but which is slower than mysqlhotcopy.
if you have a copy of the DB at a certain point in time, you could also just keep the binlogs (starting at that point in time) somewhere safe. This is very easy to do and doesn't interfere with the server's operation, but it might not be the fastest to restore from, and you have to make sure you don't miss part of the logs (see the sketch after this list).
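For the binlog option, restoring is essentially "load the last full copy, then replay the logs". A minimal sketch, with hypothetical binlog file names:

# replay all changes recorded since the base copy was taken
mysqlbinlog mysql-bin.000042 mysql-bin.000043 | mysql -u root -p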
Methods I haven't tried, but that make sense to me:
if you have a filesystem like ZFS or are running on LVM, it might be a good idea to snapshot the database via a filesystem snapshot, because snapshots are very, very quick. Just remember to ensure a consistent state of your DB during the whole operation, e.g. by doing FLUSH TABLES WITH READ LOCK (and of course, don't forget UNLOCK TABLES afterwards); see the sketch below.
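A minimal sketch of that snapshot dance on LVM (the volume group, volume and mount point names are hypothetical; the read lock must be held, i.e. the mysql session kept open, while the snapshot is created):

-- in a mysql session, and leave it open:
FLUSH TABLES WITH READ LOCK;
# from a second shell, create the snapshot:
lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql
-- back in the mysql session:
UNLOCK TABLES;
# mount the snapshot, copy the data off, then drop it
mount /dev/vg0/mysql-snap /mnt/snap
rsync -a /mnt/snap/ /backup/mysql/
umount /mnt/snap && lvremove -f /dev/vg0/mysql-snap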
Additionally:
you can use a master-slave setup to replicate your production server to either a different machine or a second instance on the same machine, and do any of the above to the replicated copy, leaving your production machine alone. Instead of running continuously, you can also fire up the slave at regular intervals, let it catch up on the binlog, and switch it off again.
I think MySQL Cluster and the enterprise-licensed version have more tools, but I have never tried them.
mysqlhotcopy is described misleadingly above: it only works if you use MyISAM, and it's not hot.
The problem with mysqldump is the time it takes to restore the backup (but it can be made hot if all your tables are InnoDB; see --single-transaction).
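For an all-InnoDB server, that looks like this minimal sketch (the output path is hypothetical):

# consistent, non-blocking dump via a single REPEATABLE READ transaction
mysqldump --single-transaction --all-databases | gzip > /backup/full-$(date +%F).sql.gz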
I recommend using a hot backup tool, like what is available in XtraBackup:
http://www.percona.com/docs/wiki/percona-xtrabackup:start
Watch out if you are using mysqldump on large tables with the MyISAM storage engine; it blocks selects while the dump is running on each table, and this can take down busy sites for 5-10 minutes in some cases.
Using InnoDB, by comparison, you get non-blocking backups because of its row-level locking, so this is not such an issue.
If you need to use MyISAM, a common strategy is to replicate to a second MySQL instance and do the mysqldump against the replicated copy instead.
Use the Export tab in phpMyAdmin. phpMyAdmin is a free, easy-to-use web interface for MySQL administration.
I think mysqldump is the proper way of doing it.