I want to enable Transparent Data Encryption (TDE) on MySQL. I don't mind if the entire database is encrypted (as opposed to a few columns, rows, or tables). I am using this for a study, so I am looking for something that is open and free. I found zNcrypt, but it's a commercial product. It essentially uses eCryptfs, which is open source, but I couldn't find a way to configure it properly for MySQL.
Any pointers on using eCryptfs with MySQL or any other solution for enabling TDE with MySQL would be very helpful. Thanks!
I see this question is relatively old, but just in case:
eCryptfs can be considered a filesystem, so you should just need to mount it and then point your MySQL datadir to the mounted directory. The only drawback is that it doesn't seem to support O_DIRECT, but I don't think MySQL uses it, does it?
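For reference, a rough sketch of what that could look like (the paths are just placeholders; the mount command prompts interactively for the passphrase and cipher options):

    # mount an eCryptfs layer over the directory that will hold the data files
    sudo mount -t ecryptfs /var/lib/mysql-enc /var/lib/mysql-enc

    # then point MySQL at it in my.cnf:
    #   [mysqld]
    #   datadir = /var/lib/mysql-enc

    sudo service mysql restart

Everything written below that directory is then encrypted on disk, while MySQL itself just sees ordinary files.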
I'm working on a server with almost no free space on it, so I attached an NFS volume to it. Now I would like MySQL to store tables, or even better, entire databases, on the shared volume. Is there a way to do this?
Thanks a lot!
I don't think that is a good idea. You will have performance and consistency issues if the storage is on NFS. Consider adding a secondary local disk, mounting it, and hosting your database files on it.
Having said that, you can change the database location in MySQL. It's pretty simple. Have a look at this web entry. MySQL is pretty flexible with this kind of thing.
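Roughly, moving the datadir goes like this (paths are examples; on Ubuntu you may also need to update the AppArmor profile for mysqld):

    sudo service mysql stop
    sudo rsync -a /var/lib/mysql/ /mnt/disk2/mysql/

    # edit /etc/mysql/my.cnf:
    #   [mysqld]
    #   datadir = /mnt/disk2/mysql

    sudo service mysql start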
I'm not sure if this fits Stack Overflow exactly; however, as I'm looking for some code rather than a tool, I think it does.
I'm looking for a way to replicate/synchronize different database systems, in this case MySQL and MongoDB. We are running both for different purposes. We started with a MySQL database and added MongoDB later on for special applications. There's data we would like to have in both databases, where we want constraints in MySQL and, respectively, DBRefs in MongoDB. For example: we need a user record in MySQL, but also in MongoDB, for references between tables and objects respectively. At the moment we have a cron job which dumps the MySQL data and imports it into MongoDB. Although it works quite well, that's not the solution we would like to have.
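Roughly, the current cron job does something like this (table and field names are simplified examples):

    # export from MySQL as TSV, then re-import into MongoDB
    mysql -N --batch -e "SELECT id, name, email FROM users" mydb > /tmp/users.tsv
    mongoimport --db mydb --collection users --type tsv \
        --fields id,name,email --file /tmp/users.tsv --drop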
I think for the moment one-way replication would be enough (MySQL -> MongoDB). The important part is that the replication works in "real time", much like MySQL master -> slave replication works.
Are there already any solutions for this problem, or does anyone have ideas on how to achieve this?
Thanks!
SymmetricDS is open-source, Java-based, web-enabled, database-independent data synchronization/replication software that might do the trick with a few tweaks. It has an extension point called IDataLoaderFilter which you could use to implement a MongodbDataLoader.
This would help with one-way database replication. It might be a little more difficult to synchronize from MongoDB to a relational database, but the SymmetricDS team would be very helpful in trying to find a solution.
What you're looking for is called EAI (Enterprise Application Integration). There are a lot of commercial tools around, but under the provided link you'll also find a couple of OSS solutions. The basis of EAI is that you have data sources and data sinks, and the EAI framework offers tools to build custom pumps between the two.
I suggest either using a DB trigger to start the synchronization or sending a trigger signal from your applications. Note that there is no turnkey solution, since synchronization can become arbitrarily complex (for example, how do you make sure that all rows are copied?).
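A rough sketch of the trigger approach (table and column names are made up): the trigger writes into a changelog table, and a separate job polls that table and pushes the rows to the sink.

    # register changes in an outbox table via a trigger
    mysql mydb -e "
        CREATE TABLE IF NOT EXISTS user_changes (
            id INT AUTO_INCREMENT PRIMARY KEY,
            user_id INT NOT NULL,
            changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
        CREATE TRIGGER users_after_update AFTER UPDATE ON users
            FOR EACH ROW INSERT INTO user_changes (user_id) VALUES (NEW.id);"

    # a small daemon then drains user_changes and writes the rows to MongoDB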
As far as I can see, you need to develop some sort of "control program" that has the drivers for each DBMS and runs as a daemon. The daemon should react to a trigger or use a very small recheck interval to keep the DBs synchronized.
Technically, you could set up a process which parses the binary log of the MySQL server and replicates the relevant SQL queries. I've never done such a thing with a different database as a slave, but maybe it is worth a shot?
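If you want to experiment with that, mysqlbinlog can decode the log into SQL text that a custom process could filter and replay (file names and host are examples; this assumes statement-based binary logging):

    # decode a local binary log into SQL text
    mysqlbinlog /var/log/mysql/mysql-bin.000001 > decoded.sql

    # or read it from the master over the network, like a slave would
    mysqlbinlog --read-from-remote-server --host=master.example.com \
        --user=repl -p mysql-bin.000001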
I need a DBMS, but do not know which to choose.
Basically, the application makes many INSERTs/UPDATEs, but also many SELECTs. The SELECTs are mostly very simple, on one field only.
I am using MySQL + InnoDB at the moment, but as the database is growing, I need the best solution. The table can grow indefinitely and is currently around 2 GiB.
EDIT:
It will run on Linux, and perhaps occasionally on FreeBSD.
No user management is needed; all processes currently connect as root. Typically there are many simultaneous accesses (currently 83 threads, according to mysqladmin).
Access will be from C++, but PHP access is also needed.
phpMyAdmin statistics:
select: 42.57%
insert: 7.97%
update: 49.45%
EDIT2:
After some thought, and the answers here, I believe that I can't use MySQL because its client library is GPL.
Is there any alternative that doesn't hurt performance (much)?
I think you have plenty of options.
You can continue to use MySQL. YouTube has used it fairly successfully.
PostgreSQL (Free, Open Source, pretty good performance, reliable)
Oracle (NOT free, but has good support for very large databases)
If it's very simple queries, could it be done well with a key/value store?
According to this, the maximum database size on Linux 2.4+ (ext3) is 4TB. So I think you are safe to stick with MySQL+InnoDB if performance is adequate.
I would think MySQL is an excellent choice from what you've stated. Oracle isn't free, and has overhead from all the security and enterprise-level features that MySQL doesn't. You want support for multiple languages. MySQL can scale well (I believe Flickr is a good example). Most databases are accessible from most languages: e.g. Perl, Java, and C all have driver-based APIs (DBI, JDBC, and ODBC respectively). IIRC PHP has one very similar to DBI. Also, starting with a relational database does allow you some wiggle room for the future, e.g. joins and aggregation.
One piece of advice I would give is: make sure whatever you choose is ACID compliant. Also, you might take the time to look at Postgres and see if there is something about it that meets your needs as well as, or better than, MySQL.
I need to run some tests for a potential migration from MySQL to PostgreSQL.
It would be easier to test if it were possible to use PostgreSQL as a slave for my MySQL master.
Is it possible?
Thanks in advance
No.
You can build something yourself using triggers and an external process to send the data over, but it's fairly difficult since MySQL has rather limited support for triggers.
For your scenario you're likely better off doing periodic dumps of the data. The best way is often to migrate the schema manually and then send your data over as CSV. The "mysqldump --compatible" approach usually doesn't work well enough.
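A minimal sketch of that kind of transfer, assuming the table already exists on the Postgres side (names are placeholders; escaping and encoding may need extra care):

    # export one table from MySQL as tab-separated values
    mysql -N --batch -e "SELECT * FROM users" mydb > users.tsv

    # load it into the pre-created table on the Postgres side
    psql -d mydb -c "\copy users FROM 'users.tsv' WITH (NULL 'NULL')"

The NULL option is there because the mysql client prints NULLs as the literal string NULL in batch mode.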
It is possible. Sort of. Maybe.
One solution that supposedly supports MySQL -> PostgreSQL migration is Continuent's open-source Tungsten Replicator.
You can see some instructions on how to implement this "Heterogeneous Replication" here (although the method they suggest, using tungsten-installer, is deprecated and you might be better off using tpm like so).
The thing is, while there are plenty of resources indicating Tungsten really did support this at one point, officially it seems they no longer do. This means that if you try to use the most recent Tungsten Replicator version (3.*), you'll quickly find that some files needed for Postgres are missing.
If, on the other hand, you try to download an older version, say 2.2.1, none of those errors appear, and all the files seem to be present, which leaves some room for optimism.
Personally, I must admit I haven't been able to get 2.2.1 to work either, but this probably has more to do with my lack of experience using Tungsten Replicator in general, and not with Postgres support. Also, in my case the real-time element wasn't as important, so we just ended up going with a cron job running pgloader.
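In case it helps, the pgloader route can be as simple as this (connection strings are placeholders), which is essentially what our cron job ended up running:

    # one-shot copy of a MySQL database into Postgres
    pgloader mysql://user:pass@mysqlhost/mydb \
             postgresql://user:pass@pghost/mydb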
So, if real-time replication from MySQL to Postgres is something you must have, I'd recommend at least trying out Tungsten Replicator before you start implementing a solution of your own. However, if real-time isn't an absolute requirement, there are probably simpler ways.
(Also, you might want to have a look at SymmetricDS which claims to do something similar, though I haven't personally looked into it.)
I don't think so; master-slave replication is only possible between instances of the same database system.
You could configure MySQL to use the PostgreSQL SQL mode, and you could also make a dump ready to import into PostgreSQL by using --compatible in mysqldump.
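For reference, that looks roughly like this (note that both the POSTGRESQL sql_mode and --compatible=postgresql only exist in MySQL 5.x and were removed in 8.0, and the resulting dump usually still needs manual fixes):

    # old MySQL (5.x) has a dedicated PostgreSQL compatibility sql_mode
    mysql -e "SET GLOBAL sql_mode = 'POSTGRESQL'"

    # dump in a (roughly) PostgreSQL-friendly format
    mysqldump --compatible=postgresql mydb > mydb.sql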
SymmetricDS does support MySQL to Postgres replication. There is an open source version available as well as a professional version which provides a web based interface.
A few months ago I asked a question regarding how to change the database location at runtime, but I haven't received a solution to this problem yet.
I need to create more than 32,000 databases in MySQL. After creating 32,000 databases in the default MySQL data directory, I want to change the data directory to another location. I am planning to do this through Java code.
But before that, can anyone tell me if this is possible?
I really need to implement this as a requirement. Please help me out with this.
I am sorry if I am unclear anywhere in this description; let me know if you need any more information.
That sounds difficult. I don't understand why 32,000 - is MySQL refusing to create more databases than that? What error do you get? It seems arbitrary - maybe this is a config variable that can be changed?
A few possibilities: you could run more than one MySQL server, each with a different data directory. I don't think it would help, but you might also look into the NDB storage engine; it can handle tablespaces, which just might let you store data in multiple locations.
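The multiple-server idea amounts to running extra mysqld instances, each with its own datadir, port, and socket (paths and port are examples; the second datadir has to be initialized first, e.g. with mysql_install_db or mysqld --initialize, and a helper like mysqld_multi can manage the instances):

    # start a second instance against its own data directory
    mysqld --datadir=/data/mysql2 --port=3307 \
        --socket=/var/run/mysqld/mysqld2.sock &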
You can create several partitions, join them using LVM (on Linux), and mount the resulting volume at the DATADIR path. You can also use soft links to databases moved to other folders/partitions.
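The soft-link trick looks roughly like this (paths are examples; stop the server first, and mind AppArmor/SELinux):

    sudo service mysql stop
    sudo mv /var/lib/mysql/bigdb /mnt/disk2/bigdb
    sudo ln -s /mnt/disk2/bigdb /var/lib/mysql/bigdb
    sudo chown -h mysql:mysql /var/lib/mysql/bigdb
    sudo service mysql start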
Unfortunately, MySQL supports only one DATADIR, but it seems to me that InnoDB tables can be placed on a separate path. Could you check this?