MySQL Permanent Lock on Table - mysql

From what I read in the following: http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html, the LOCK TABLES statement is only good for the current session. How can I permanently lock specific tables to make them read-only, for all connection sessions, until I explicitly unlock them?

I don't think you can lock a table like that across sessions. The closest you can get is to revoke the write privileges (INSERT, UPDATE, DELETE, and so on) from the relevant users.
Something like this:
REVOKE INSERT, UPDATE, DELETE, DROP ON database.table FROM 'user'@'host';

For large, time-consuming operations like that, one good option can be to copy the data to a new location, to do your manipulations. This essentially makes a snapshot of the data, leaving the database unhindered (and possibly still accepting reads and writes) while you perform your operation.
Stop MySQL.
Copy the data files to a new location.
Restart MySQL.
Perform your manipulations on the data copy.
Delete the copy (by way of DROP TABLE).

Related

write lock all tables in mysql for a moment

I need to perform some scripted actions, which may take a while (like maybe a minute). At the beginning of these actions, I take some measures from the MySQL DB and it's important that they do not change until the actions are done. The DB has dozens of tables since it belongs to a quite old fashioned but huge CMS, and the CMS users have a dozen options to modify it.
I do not even want to change anything in the DB myself while my script runs; it just needs to be frozen. It's not a dump or an update. But the tables should stay open for reading by everyone, so visitors of the connected homepage don't get errors.
If the database-altering actions that other CMS users may perform in the meantime were simply deferred until the DB is unlocked again, that would be perfect; but if they fail instead, I wouldn't mind.
So I thought at the beginning of the script I lock the tables down with
lock tables first_table write, second_table write, ...;
And after I do
unlock tables
I think that should do exactly what I want. But can I achieve this for all tables of the DB without naming them explicitly, to make this more future-proof?
This does not work for sure:
lock tables (select TABLE_NAME from information_schema.tables
where table_schema='whatever') write;
Another question, if someone can answer this on the fly: would I have to perform the lock/unlock with a different MySQL user than the one used by the CMS? If I understood this right, then yes.
Below is the statement to lock all tables (actually it creates a single global lock):
FLUSH TABLES WITH READ LOCK;
Then release it with:
UNLOCK TABLES;
mysqldump does this, for example, unless you are backing up only transactional tables and use the --single-transaction option.
Read http://dev.mysql.com/doc/refman/5.7/en/flush.html for more details about FLUSH TABLES.
Re your comment:
Yes, this takes a global READ LOCK on all tables. Even your own session cannot write. My apologies for overlooking this requirement of yours.
There is no equivalent global statement to give you a write lock. You'll have to lock tables by name explicitly.
There's no syntax for wildcard table names, nor is there syntax for putting a subquery in the LOCK TABLES statement.
You'll have to get a list of table names and build a dynamic SQL query.
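Building that dynamic statement can be scripted. A minimal sketch in Python (the table names are placeholders; in practice you would fetch the list with SELECT TABLE_NAME FROM information_schema.tables WHERE table_schema = 'whatever'):

```python
def build_lock_statement(tables):
    """Build a single LOCK TABLES statement that write-locks every table.

    LOCK TABLES implicitly releases any locks the session already holds,
    so every table must be named in one statement.
    """
    if not tables:
        raise ValueError("no tables to lock")
    # Backtick-quote each identifier so unusual table names survive.
    clauses = ", ".join("`%s` WRITE" % t.replace("`", "``") for t in tables)
    return "LOCK TABLES %s" % clauses

# Hypothetical table list fetched from information_schema:
print(build_lock_statement(["first_table", "second_table"]))
# → LOCK TABLES `first_table` WRITE, `second_table` WRITE
```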

Mysqldump taking too much time

The mysqldump and upload process is taking too long (~8 hours) to complete.
I am dumping the active database into a mysqldump .tar file of almost 3 GB. When I load it into the new database, it takes 6-8 hours to complete the process (uploading into the new database).
What would be the recommended solution to speed up the process?
If I understand correctly, your main problem is that loading the data into your new database is the step that's taking the most time. Besides reading the link provided by asdf in his comment ("How can I optimize a mysqldump of a large database?"), I suggest a few things:
Use the --disable-keys option; this will add ALTER TABLE your_table DISABLE KEYS before the inserts, and ALTER TABLE your_table ENABLE KEYS after the inserts are done. When I've used this option, insertion time has been about 30% faster.
If possible, use the --delayed-insert option; this will use INSERT DELAYED instead of the "normal" INSERT (note that INSERT DELAYED is deprecated and was removed in MySQL 5.7).
If possible, dump the data of different tables into different files; that way you may upload them concurrently.
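Splitting the dump per table can also be scripted. A sketch that just builds the mysqldump command lines (the database name, table names, and user are hypothetical; you would then run the commands concurrently, e.g. in background shell jobs or a process pool):

```python
def per_table_dump_commands(database, tables, user="mylogin"):
    """Build one mysqldump command per table so the dumps can run concurrently."""
    return [
        "mysqldump --single-transaction -u%s -p %s %s > %s.sql"
        % (user, database, table, table)
        for table in tables
    ]

for cmd in per_table_dump_commands("mydatabase", ["users", "scores"]):
    print(cmd)
```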
Check the reference manual for further information.

How to lock / revoke access in database table from writing new data in It

How can I lock a MySQL database table against writes? I have a game which inserts scores and user names into the database, and I need to lock it for now, so that players can still play but results are not written to the database. My problem is that I can't edit the files over FTP, only the database.
I've tried LOCK TABLES tableName WRITE; but after using this command, data is still being written to the database. Is it possible to do that at all?
Use a Read Lock:
LOCK tables tableName READ;
You can revoke the write privileges (insert, update, delete) from the relevant user.
Here's an example:
revoke insert,update,delete on your_schema.your_table from your_user;

MySQL replication without delete statments

I have been looking for a way to prevent MySQL DELETE statements from being processed by the slave. I'm working on a data warehousing project, and I would like to delete data from the production server after it has been replicated to the slave.
What is the best way to get this done?
Thank you
There are several ways to do this.
Run SET SQL_LOG_BIN=0; for the relevant session on the master before executing your delete. That way it is not written to the binary log
Implement a BEFORE DELETE trigger on the slave to ignore the deletes.
I tend to use approach #1 for statements that I don't want to replicate. It requires SUPER privilege.
I have not tried #2, but it should be possible.
You'll only be able to achieve this with a hack, and it will likely cause problems. MySQL replication isn't designed for this.
Imagine you insert a record in your master, it replicates to the slave. You then delete from the master, but it doesn't delete from the slave. If someone adds a record with the same unique key, there will be a conflict on the slave.
Some alternatives:
If you are looking to make a backup, I would do this by another means. You could do a periodic backup with a cronjob that runs mysqldump, but this assumes you don't want to save EVERY record, only create periodic restore points.
Triggers to update a second, mirror database. This can't cross servers though, you'd have to recreate each table with a different name. Also, the computational cost would be high and restoring from this backup would be difficult.
Don't actually delete anything; instead add a status field that is Active or Disabled, then hide Disabled rows from the users. This has issues as well: for example, ON DELETE CASCADE couldn't be used, so the cascading would have to be done manually in code.
Perhaps if you provide the reason you want this mirror database without deletes, I could give you a more targeted solution.

How to dump a single table in MySQL without locking?

When I run the following command, the output only consists of the create syntax for 'mytable', but none of the data:
mysqldump --single-transaction -umylogin -p mydatabase mytable > dump.sql
If I drop --single-transaction, I get an error as I can't lock the tables.
If I drop 'mytable' from the command (and dump the whole DB), it looks like it's creating the INSERT statements, but the entire DB is huge.
Is there a way I can dump the table -- schema & data -- without having to lock the table?
(I also tried INTO OUTFILE, but lacked access for that as well.)
The answer might depend on the database engine that you are using for your tables. InnoDB has some support for non-locking backups. Given your comments about permissions, you might lack the permissions required for that.
The best option that comes to mind would be to create a duplicate table without the indexes. Copy all of the data from the table you would like to back up over to the new table. If you write the copy query so that it pages through the data, you can control the duration of the locks. I have found that 10,000 rows per iteration is usually pretty quick. Once you have this query, you can just keep running it until all rows are copied.
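A sketch of the paged copy in Python (the table names are hypothetical, and this assumes an auto-increment id column to page on, which keeps each INSERT ... SELECT short-lived):

```python
def paged_copy_statements(src, dest, max_id, batch=10000):
    """Build INSERT ... SELECT statements that copy `src` into `dest`
    in id ranges of `batch` rows, so no single statement holds locks long."""
    stmts = []
    lo = 0
    while lo < max_id:
        hi = lo + batch
        stmts.append(
            "INSERT INTO %s SELECT * FROM %s WHERE id > %d AND id <= %d"
            % (dest, src, lo, hi)
        )
        lo = hi
    return stmts

for s in paged_copy_statements("mytable", "mytable_copy", 25000):
    print(s)
```

Each statement can be run (and committed) on its own, so writers are only blocked for the duration of one small batch rather than the whole copy.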
Now that you have a duplicate, you can either drop it, truncate it, or keep it around and try to update it with the latest data and leverage it as a backup source.
Jacob