MySQL custom storage path

I'm working on a server with almost no free space on it, so I attached an NFS volume to it. Now I would like MySQL to store tables or, even better, entire databases on the shared volume. Is there a way to do this?
Thanks a lot!

I don't think that is a good idea. You will have performance and consistency issues if the storage is on NFS. Consider adding a secondary local disk, mounting it, and hosting your database files on it.
Having said that, you can change the database location in MySQL. It's pretty simple. Have a look at this web entry. MySQL is pretty flexible with this kind of thing.
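For reference, relocating the data directory usually comes down to stopping MySQL, copying the files, and updating datadir in my.cnf. A minimal sketch, assuming a Debian-style layout and a hypothetical mount point /mnt/newdisk (adjust paths to your setup; AppArmor/SELinux rules may also need updating):

    # stop MySQL so the files are in a consistent state
    sudo service mysql stop
    # copy the existing data directory to the new location (hypothetical path)
    sudo cp -a /var/lib/mysql /mnt/newdisk/mysql
    sudo chown -R mysql:mysql /mnt/newdisk/mysql
    # then set, under [mysqld] in my.cnf:
    #   datadir = /mnt/newdisk/mysql
    sudo service mysql start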

Related

How can I use transparent data encryption with MySQL?

I want to enable Transparent Data Encryption (TDE) on MySQL. I don't mind if the entire DB is encrypted (as opposed to a few columns, rows, or tables). I am using this for a study, so I am looking for something that is open and free. I found zNcrypt, but it's a commercial product. They are essentially using eCryptfs, which is open source, but I couldn't find a way to configure it properly for MySQL.
Any pointers on using eCryptfs with MySQL or any other solution for enabling TDE with MySQL would be very helpful. Thanks!
I see this question is relatively old, but just in case:
eCryptfs can be considered a filesystem, so you should just need to mount it and then point your MySQL datadir at the mounted directory. The only drawback is that it doesn't seem to support O_DIRECT, but I don't think MySQL uses that by default, does it?
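In case it helps, here is a rough sketch of that approach, assuming the ecryptfs-utils package is installed and using a hypothetical directory /var/lib/mysql-crypt for the encrypted mount (the mount command prompts interactively for a passphrase and cipher):

    # stop MySQL before touching the data directory
    sudo service mysql stop
    sudo mkdir -p /var/lib/mysql-crypt
    # overlay-mount eCryptfs on the directory (same path as lower and upper layer)
    sudo mount -t ecryptfs /var/lib/mysql-crypt /var/lib/mysql-crypt
    # copy the existing data in and fix ownership
    sudo cp -a /var/lib/mysql/. /var/lib/mysql-crypt/
    sudo chown -R mysql:mysql /var/lib/mysql-crypt
    # set datadir = /var/lib/mysql-crypt under [mysqld] in my.cnf, then
    sudo service mysql start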

Is it wrong to backup a MySQL DB through copying its data files (.frm, .MYD, etc.)?

I read somewhere that it is, so I was wondering.
Don't do that. The .frm/.MYD/.MYI files may be much larger than the actual data, and a copy taken from a running server may be inconsistent, which can cause crashes and be very hard to repair or recover from.
Use mysqldump to transfer a MySQL database.
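For example, a basic dump-and-restore looks something like this (database and user names are hypothetical):

    # on the source server: dump one database, including its CREATE DATABASE statement
    mysqldump -u backupuser -p --databases mydb > mydb.sql
    # on the target server: load it back in
    mysql -u backupuser -p < mydb.sql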
It might or might not work, depending on the state of the data.
It should work if you stop the database completely and restart it after the backup, but then you have downtime.
A slightly more detailed response at the asker's request.
Some details of the dangers of a straight file copy:
If the database is live, it might change some of the files before others, so the copies you take may be inconsistent with each other. If the database is offline, this method is probably reliable.
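If you do go the offline route, a cold copy is simple enough; a sketch, with a hypothetical backup path:

    # cold backup: only safe because the server is stopped first
    sudo service mysql stop
    sudo cp -a /var/lib/mysql /backup/mysql-$(date +%F)
    sudo service mysql start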
Advantages of using the documented methods:
Should work on future versions of the DBMS
Should work consistently across underlying engines
Always gives a consistent, snapshot-like copy

Change session storage from file to MySQL in MediaWiki

I am in the process of moving our company's MediaWiki from a single server to a clustered environment. The existing file-based session storage was fine with the single server, but clearly not for the cluster.
To address this I'm looking to use one of our existing MySQL database servers to handle session management, but the only article I've come across is for a new MediaWiki installation.
I set $wgSessionHandler in LocalSettings.php, but that had no effect.
Anyone have advice/experience with this?
This might not be the answer you're looking for, but I was just facing this issue myself.
After trying to Do The Right Thing™ for some hours, I finally gave in and just put the sessions on shared storage.
So, if you can afford it performance-wise, and have some shared storage available or can easily create some, I can only recommend pointing PHP's session.save_path at shared storage and saving yourself the trouble.
It's the easy way out. ;-)
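In concrete terms, that is roughly the following (paths are hypothetical, and the php.ini location varies by distro):

    # create a session directory on storage that every web node can see
    sudo mkdir -p /mnt/shared/php-sessions
    sudo chown www-data:www-data /mnt/shared/php-sessions
    sudo chmod 770 /mnt/shared/php-sessions
    # point PHP at it and restart the web server
    #   session.save_path = "/mnt/shared/php-sessions"   (in php.ini)
    sudo service apache2 restart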

How to keep databases synchronized between hosting account and a local testing server?

I have several databases hosted on a shared server, and a local testing server which I use for development.
I would like to keep both set of databases somewhat synchronized (more or less daily).
So far, my ideas to solve the problem seem very clumsy. Anyway, for reference, here is what I have considered so far:
Make a database dump from the online databases, trash the local databases, and recreate them from the dump. It's a lot of work and requires a lot of download time (which guarantees I won't do it as often as I would like it to be done).
Write a small web service to access the new data, and write a small application locally to communicate with said web service, download the newest data, and update the local databases.
Both solutions sound like a lot of work for a problem that is probably already solved a zillion times over. Or maybe it's even an existing feature which I completely overlooked.
Is there an easy way to keep databases more or less in synch? Ideally something that I can set up once, schedule and forget about.
I am using MySQL 5 (MyISAM) databases on both servers.
=============
Edit: I had a look at replication, but it seems that I can't go that route because the shared hosting does not give me enough control over the server itself (I have most permissions on my databases, but not on the MySQL server itself).
I only need to keep the data synchronized, nothing else. Is there another solution that doesn't require full control over the server?
Edit 2:
Sorry, I forgot to mention I am running on a LAMP stack on the shared server, so Windows-only solutions won't work.
I am surprised to see that there is no obvious off-the-shelf solution for this problem.
Have you considered replication? It's not to be trifled with but may be what you want. See here for more details... http://dev.mysql.com/doc/refman/5.0/en/replication-configuration.html
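If replication is ruled out by the shared host, the first option from the question can at least be automated. A rough sketch of a nightly cron job, assuming the host allows remote MySQL connections (hostnames, credentials, and database names are hypothetical):

    #!/bin/sh
    # pull a fresh dump from the shared host and load it into the local server
    mysqldump -h mysql.sharedhost.example -u remoteuser -pREMOTEPASS \
        --databases mydb | mysql -u localuser -pLOCALPASS

    # crontab entry to run it every night at 03:00:
    #   0 3 * * * /usr/local/bin/sync-db.sh >> /var/log/sync-db.log 2>&1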
Take a look at the Microsoft Sync Framework - you will need to code in .NET, but it can resolve your issues.
http://msdn.microsoft.com/en-in/sync/default(en-us).aspx
Here is a sample for SQL Server, but it can be adapted to MySQL as well using the ADO.NET provider for MySQL.
http://code.msdn.microsoft.com/sync/Release/ProjectReleases.aspx?ReleaseId=4835
For this to work you will need additional tables in your MySQL database for change tracking and anchors (keeping track of the last synchronization), but you won't need full control as long as you can access the DB.
Replication would have been simpler :), but this might just work in your case.

How to bring files in a filesystem into/out of a MySQL DB?

The application that I am working on generates files dynamically as it is used. This makes backup and synchronization between staging, development, and production a really big challenge. One way we might get a smooth solution (if feasible) is to have a script that, at the moment of backing up the database, can also back up the dynamically generated files inside the database, and at restore time can bring those files out of the database and back into the filesystem.
I am wondering if there are any available (paid or free) applications that could be used as scripts to make this happen.
Basically, if I have:
/usr/share/appname/server/dynamicdir
/usr/share/appname/server/otherdir/etc/resource.file
Then the script would take the examples above and put them into the MySQL database.
Please let me know if you need more information.
Do you mean that the application is storing files as blobs in the MySQL database, and/or creating lots of temporary tables? Or that you just want temporary files - themselves unrelated to a database - to be stored in MySQL as a backup?
I'm not sure that trying to use MySQL as a net-new intermediary for backups of files is a good idea. If the app already uses it, that's one thing; if not, MySQL isn't the right tool here.
Anyway, if you are interested in capturing a filesystem at a point in time, the answer is to use LVM snapshots. You would likely have to rebuild your server to get your filesystems onto LVM, and have enough free storage there for as many snapshots as you think you'd need.
I would recommend having a new mount point just for this app's temporary files. If your MySQL tables are using InnoDB, a simple script that runs mysqldump --single-transaction in the background and then takes the LVM snapshot could get the two synced to within less than a second.
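Something along these lines, assuming a hypothetical LVM volume /dev/vg0/appdata that holds the generated files (names and sizes are placeholders):

    # start a consistent SQL dump in the background (InnoDB only)
    mysqldump --single-transaction -u backupuser -p'SECRET' appdb > /backup/appdb.sql &
    DUMP_PID=$!
    # snapshot the filesystem holding the generated files
    sudo lvcreate --size 1G --snapshot --name appdata-snap /dev/vg0/appdata
    wait $DUMP_PID
    # archive the snapshot alongside the SQL dump, then clean up
    sudo mkdir -p /mnt/appdata-snap
    sudo mount -o ro /dev/vg0/appdata-snap /mnt/appdata-snap
    tar czf /backup/appdata.tar.gz -C /mnt/appdata-snap .
    sudo umount /mnt/appdata-snap
    sudo lvremove -f /dev/vg0/appdata-snap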
This should be trivial to accomplish using PHP, Perl, Python, etc. Are you looking for someone to write this for you?