Storing MySQL credentials in a MySQL database

This is a similar question to "Storing MS SQL Server credentials in a MySQL Database"
So, in theory, imagine I have one MySQL server. I have a "master" database, and then X number of other generic databases. What I'm looking for is a way for an app (for argument's sake, let's say a web app running on PHP) to first access the master database. This database then needs to tell the app which database to connect to, handing over the credentials, username, and so on in the process.
What is the best way to go about this?
The ideas I have so far:
1. Store the credentials for all the other databases in the master database. These credentials would of course be encrypted in some way, probably AES. The app would get the encrypted credentials, decrypt them, and connect.
2. Store the credentials elsewhere, maybe on a completely separate server. When the master database is accessed, it returns some sort of token, which can be used to access the credential storage. Again, encrypted with AES.
3. Use some sort of existing system that I'm not aware of to do exactly this.
4. Don't do this at all, and come up with a completely different approach.
To give a little example: "master" would contain a list of clients. Each client would have its own separate database, with its own permissions, etc.

I've had no reason to do this kind of thing myself, but your first two ideas sound good to me, and (as long as you include the server address) they aren't even necessarily separate ideas: you could have some clients on the same server as the master and some elsewhere, and the client logic won't need to care. The only issue I can see is keeping the data in the "master" schema synced with the server's security data. Also, I wouldn't bother keeping database permissions in the master schema, since I would expect all clients to have the same permissions, just scoped to their own schema. If you have "permissions" (settings) that limit what specific clients can do (perhaps limited by contract or features paid for), I think it would be much easier to keep those in that client's schema, but somewhere their db user cannot change the data.
Edit: It is a decent idea to have separate database users in this kind of situation; it lets you worry less about queries from one client inadvertently (or maliciously) modifying another's data (each client account should only have permissions on its own schema). It would probably also be a good idea to keep the code for the "master" coordination (and connection) somewhat segregated from the client code base, to prevent accidentally leaking access to that database into the client code; even if the credentials are encrypted, you probably don't want clients to have any more access to your connection info than necessary.
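To make the OP's idea #1 concrete, here is a minimal PHP sketch of the fetch-decrypt-connect flow. The schema (a master.clients table with db_host, db_name, db_user, db_pass_enc, and iv columns) and the key handling are assumptions for illustration only; in practice the AES key should live outside the web root, or in a secrets manager:

    <?php
    // Hypothetical directory table: master.clients(client_id, db_host, db_name, db_user, db_pass_enc, iv)
    $master = new PDO('mysql:host=master-db;dbname=master', 'master_user', getenv('MASTER_DB_PASS'));
    $master->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $stmt = $master->prepare('SELECT db_host, db_name, db_user, db_pass_enc, iv FROM clients WHERE client_id = ?');
    $stmt->execute([$clientId]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    // Decrypt the stored password; the key deliberately lives outside the database.
    $key = getenv('CREDENTIAL_AES_KEY');
    $password = openssl_decrypt($row['db_pass_enc'], 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $row['iv']);

    // Connect to the client's own database with the decrypted credentials.
    $client = new PDO("mysql:host={$row['db_host']};dbname={$row['db_name']}", $row['db_user'], $password);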

I did something like this not long ago. It sounds like you're trying to build some kind of one-database-per-tenant multi-tenant system.
Storing encrypted credentials in a directory database is fine, since there's really no fundamentally different way to do it. At some point, you need to worry about storing some secret (your encryption key) no matter what you do.
In my use case, I was able to get away with a setup where the directory just mapped tenants to db-hosts. The database name and credentials for each tenant were derived from the tenant's identifier (a string). So something like the following, given a tenant identifier $tenantId:
    $host   = $directory->hostFor($tenantId);         // whatever the directory says ($directory is a hypothetical lookup)
    $dbname = 'db_' . $tenantId;
    $dbuser = $tenantId;
    $dbpass = sha1('some secret string' . $tenantId);
From a security standpoint, this is no better (actually a bit worse) than storing AES-encrypted credentials in the directory database, since if someone owns your app server, they can learn everything either way. But it's pretty good, and easy to implement.
This is also nice because you can extend the idea a bit and get rid of the directory server entirely: just write some function that maps your tenant id to one of N database hosts. That works great until you add or remove db servers, and then you need to handle shuffling things around. See how memcache handles this, for example.
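A minimal sketch of such a mapping function, assuming a fixed list of hosts (the host names are placeholders) and 64-bit PHP, where crc32() is non-negative. Like memcache's naive modulo scheme, adding or removing a host remaps most tenants, which is exactly the reshuffling problem mentioned above:

    <?php
    // Map a tenant id to one of N database hosts by hashing it.
    function dbHostForTenant(string $tenantId, array $hosts): string {
        // crc32() gives a stable integer hash of the tenant id.
        return $hosts[crc32($tenantId) % count($hosts)];
    }

    $hosts = ['db1.example.com', 'db2.example.com', 'db3.example.com'];
    $host  = dbHostForTenant('acme_corp', $hosts);  // always the same host for a given tenant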

You can use HashiCorp Vault to do this in a much more systematic way; in fact, this is a strong use case for it.
Percona has already written a great blog post about it.
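For a flavor of what that looks like, here is a hedged PHP sketch against Vault's HTTP API, using its database secrets engine to issue short-lived MySQL credentials on demand. The Vault address, token handling, and role name ('my-app') are assumptions for illustration:

    <?php
    // Ask Vault's database secrets engine for dynamic MySQL credentials.
    $ch = curl_init('https://vault.example.com:8200/v1/database/creds/my-app');
    curl_setopt($ch, CURLOPT_HTTPHEADER, ['X-Vault-Token: ' . getenv('VAULT_TOKEN')]);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = json_decode(curl_exec($ch), true);
    curl_close($ch);

    // Vault returns a freshly generated username/password tied to a lease.
    $creds = $response['data'];
    $pdo = new PDO('mysql:host=db.example.com;dbname=app', $creds['username'], $creds['password']);

Because the credentials are dynamic and expire with their lease, there is no long-lived password to store or encrypt at all.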

Related

Is it possible to store client data in separate databases?

We have a client who is determined to keep their data in our cloud VM separate from other clients' data. That is, we have a centralized MySQL database where we store all of our client data and access it depending on the client id, etc. The clients are now requesting that their data be separated from one another, meaning that if the database is hacked, the hacker can't jump from one user's data to see another's. I have never heard of this type of functionality, especially for MySQL databases (you can create users and grant them access to tables, but not to specific rows in a table), as far as I know. Possibly this is a feature of Azure databases or something.
Has anyone encountered something like this request/solution?
Thanks
I did work for a notification service. We stored each client's data in a separate schema, but on the same MySQL instance. The reason was to keep PII (Personally Identifiable Information) separate, so that on any given application request, it was not possible to accidentally read data for another client.
The application first connected to a special schema that stored a table listing all the client schemas, plus the username & password for each client schema. The app read this table to find the entry for one specific client, then opened a new connection using that username & password.
It added a little bit of overhead to every session to do this two-step connection, but it wasn't too much.
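A minimal PHP sketch of that two-step connection, assuming a directory table along the lines described (the names client_schemas, schema_name, db_user, and db_pass are illustrative):

    <?php
    // Step 1: connect to the special directory schema and look up the client's entry.
    $dir = new PDO('mysql:host=db;dbname=directory', 'directory_user', getenv('DIRECTORY_DB_PASS'));
    $stmt = $dir->prepare('SELECT schema_name, db_user, db_pass FROM client_schemas WHERE client_id = ?');
    $stmt->execute([$clientId]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    // Step 2: open a second connection scoped to that client's schema only.
    $client = new PDO("mysql:host=db;dbname={$row['schema_name']}", $row['db_user'], $row['db_pass']);

In practice the stored passwords would be encrypted, as discussed in the answers above.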
I'm not sure how this eliminates the possibility of being hacked. That's still a risk. If an attacker hacks the primary database, why couldn't they also hack the specific client's database?

Preventing access to databases on a self-hosted MySQL server

Our application uses an SQL database for storing data which mustn't be modified by the user.
For now we are using a local SQLite db which is encrypted via SQLCipher and gets decrypted on application start with a private key set by us. This way the user can't modify any data without knowing this key, or even load the database in his favourite db browser.
We now want to allow the database to be on a MySQL server. But as far as I understand, an equivalent way of securing the data isn't possible, especially because we want the user to be able to host his own server (the same way he used his "own" local SQLite file). I understand there is now so-called "at rest" encryption for InnoDB in MySQL, but this seems to be completely transparent to the user: when the user connects to the db, he doesn't have to enter a key for it to be decrypted; that happens automatically for him in the background.
Is there a way to allow the user to use his own MySQL server but prevent him from modifying any database we create on it? Or is this only possible with a server we host ourselves?
Let me first give a short comment regarding the method you have used until now.
I think the concept was wrong in the first place, because it is not secure. The decryption key has to be in the application, because otherwise your users would not be able to open the database. As soon as the application runs, a user could extract that key from RAM using well-known methods/tools.
In contrast, when using a server in a locked room, you have real safety, provided that the server software does not have bugs which allow users to attack it.
Thus, the answer to your question is:
Yes, it is wise to upgrade to MySQL.
Use one server for all users, located physically in a place where normal users have no access.
No, do not try to encrypt the MySQL table files on disk if your only concern is that users must not be able to change the data.
Instead, assign access privileges to your central database and tables properly. If normal users have only the SELECT privilege on all tables, they will have no chance to modify any data over the network, but can read all of it. As far as I have understood, this is what you want.
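As a concrete illustration of that privilege setup (the account, host, and schema names are placeholders), the read-only grant in MySQL looks something like this, shown here as a one-off admin-side PHP snippet:

    <?php
    // Run once as an administrator: create a read-only account for normal users.
    $admin = new PDO('mysql:host=central-db', 'admin', getenv('ADMIN_DB_PASS'));
    $admin->exec("CREATE USER 'app_reader'@'%' IDENTIFIED BY 'choose-a-strong-password'");
    $admin->exec("GRANT SELECT ON appdata.* TO 'app_reader'@'%'");  // SELECT only: no INSERT/UPDATE/DELETE
    // The application ships with the app_reader credentials; users can read but never modify.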

How to store sensitive data of different clients in SQL Server?

I work at a small company and I am trying to figure out a solution for storing sensitive data of multiple clients in Microsoft SQL server. Actually, I feel like this is a general database question and it is not specific to MSSQL.
Until now we have been using a proprietary database where the client data is stored as db files (flat files) in the client’s root directories in the file system. So the operating system permissions guarantee that the application used by client X can never fetch data from client Y’s database. Please note that there is no database server/instance/engine here…
However, for my project I want to use an SQL database. But the security folks are expressing concerns about putting data of different clients in a single database.
One option is to create separate database instances for different clients. However, I am not sure if this idea is scalable.
So my questions are:
1) Is there any mechanism in MSSQL that enables you to store databases ‘separately’ in different files used by the SQL server?
2) Let’s say I have only one database instance where I have databases of client X and client Y. How can I make sure that client X’s requests can never (accidentally) get misdirected to client Y’s database? I do not want to rely on some parameter in my code to determine which database to fetch from! :)
So, is there any solid authentication scheme to guarantee that my queries could not be misdirected to fetch from an incorrect client table?
I think this is a very common problem and there has to be a good solution for this. What are other companies doing?
Please let me know if there are any good articles to read up on this.
Different databases are always stored in different files in SQL Server so you don't even have to do anything special for this. However, NTFS permissions will not help you in this case as the clients aren't ever accessing the files directly on disk.
One possible solution in SQL Server is to create separate sets of Windows user IDs and map those to separate SQL logins for each customer. You could then grant each login access only to the appropriate databases. For example, if you were hosting web sites for client X and client Y, you would set up the connection string(s) in the web.config for client X's web site to use the appropriate login(s) for client X's database, and vice versa for client Y. This guarantees that, barring a hard-coded login, the code from client X's site will never access client Y's database.
You can have 32,000 databases on a single instance of SQL Server, and having separate databases enables a number of improved serviceability scenarios (such as restoring a single customer's DB after a data problem without affecting all of your other customers).
http://technet.microsoft.com/en-us/library/ms143432.aspx
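The web.config idea translates to other stacks as well; here is a hedged PHP equivalent, where each client's site is deployed with a config file holding only that client's login (the file layout and names are illustrative):

    <?php
    // clientX/config.php -- deployed only with client X's web site
    return [
        'dsn'      => 'mysql:host=db;dbname=clientX',
        'user'     => 'clientX_login',                 // this login has rights on client X's DB only
        'password' => getenv('CLIENTX_DB_PASS'),
    ];

    <?php
    // bootstrap.php -- so the site's code can only ever reach its own database
    $cfg = require __DIR__ . '/config.php';
    $pdo = new PDO($cfg['dsn'], $cfg['user'], $cfg['password']);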

CRUD Admins: Why not use MySQL users for auth/acl instead of User/Group tables?

In several frameworks (symfony/Django), you have admin generators that usually control access via a User table (which assigns a user to a specified Group table).
I'm curious, why not simply use MySQL's actual users (with select/read/write access already baked in) instead?
Another good reason that hasn't been listed is the fact that MySQL usernames/passwords are stored in clear text in config files. There may be a vulnerability in your code that allows a user to read a text file, which would then give a hacker immediate access without having to break a password hash. Having your database remotely accessible is a serious security hazard and is prohibited by PCI-DSS.
Another good reason is that in order to add new accounts or change your password, your web application would need ROOT access, which is among the worst things you could do. In many databases (including MySQL) this makes it very easy for a hacker to turn a SQL injection vulnerability into full remote code execution (like uploading a .php file).
I would presume one reason would be that many ISPs provide you with only one user account (without extra cost) for your MySQL database, and thus such an approach wouldn't work, as everyone would have identical privileges.
The magic here being lowest common denominator and easy deployment as far and wide as possible, with minimum requirements in server administration.
I'd imagine most people are a little leery of giving their application's MySQL user the ability to create new MySQL users and grant them privileges, particularly in a shared hosting environment. Handling it yourself is not that difficult: it keeps everything within one database table, and you can define any permissions you like.

Keep database information secure

There's this interesting problem I cannot solve myself. I will be very glad if you can help me.
Here it is:
There are many client applications that send data records to one MySQL server.
A few data records are not very important, but the whole database is. (You can imagine it is the Facebook DB. :))
Is there any way to ensure that
- data from the DB won't be used by anyone but its true owner, and
- the DB will preserve essential features such as sorting, etc.,
assuming that an attacker can mysteriously gain full access to the server?
You can't simply encrypt the data client-side and store it encrypted, since the client application is widely distributed and an attacker can extract the key from it.
Maybe adding some layers between the application and the DB, or combining client- and server-side encryption methods (using MySQL's built-in methods), will help?
As long as the database needs to start up and run unattended, you can't hide the keys from a compromised root account (= 'mysterious full access'). Anywhere the database could possibly store the master key(s), root will also have access. No amount of business layers or combination of client-server encryption will ever circumvent this simple fact. You can obfuscate it all you like, but if the prize is worth it, then root can get it.
One alternative is to require a manually assisted start-up process, i.e. a human enters the master key password (or a hardware module PIN) during server boot, but this is extremely hard to maintain in the real world; it requires a highly trusted employee to be on pager duty to log in and start the database whenever there is downtime.
Solutions like TPM offer protection against physical loss of the server, but not against a compromised root.
Your root account is as important as the database master key(s), so you must protect it with the same care as the keys. This means setting up operating procedures, screening who has access to root, rotating the root password, and so on. The moment someone gains 'mysteriously full access', the game is pretty much lost.
I pretty much agree with Remus Rusanu's answer.
Maintaining good security is hard, but you can always pay attention to what you do. Whenever you access sensitive information, carefully verify your query and make sure it cannot be spoofed or exploited to gain access to information which shouldn't be accessible by a given client.
If you can rule out physical access to the box by the attacker, then there are several things you can do to harden your security. First of all, I'd configure ssh to only allow connections from a specific IP or IP range (and of course no root login). You can also do that on your firewall. This would mean that the weakest link is your server (the application which receives data/requests from clients; this could be the web server and whatever scripts you use). Now you "just" have to make sure that no one can exploit your server. There are a lot more things you could do to harden your system, but I think it would be more appropriate to ask about them on ServerFault.
If you're worried about physical access to the machine, there isn't really much you can do, and most of it has already been mentioned in Remus' answer.
There's also another option. This is by far the most ineffective method from a speed and ease-of-development viewpoint, but it would partly protect you from any kind of attack on your server (including a physical one). It's actually quite simple, but a bit hard to implement: only store encrypted data in the database and handle all encryption/decryption client-side, using JavaScript or Flash. Only the client will have the key, and data will always be transferred over the wire and stored in encrypted form. The biggest drawback is that once the client forgets the key, there is no way back; the data is inaccessible.
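The answer proposes doing this in the browser with JavaScript or Flash; for consistency with the other snippets here, this hedged PHP sketch shows the same encrypt-before-store idea, where the key is derived from a passphrase the user supplies and is never persisted server-side:

    <?php
    // Encrypt-before-store: the key comes from the user and is never saved anywhere.
    $key = hash('sha256', $userSuppliedPassphrase, true);   // derive a 256-bit key
    $iv  = random_bytes(16);
    $ciphertext = openssl_encrypt($plaintextRecord, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);

    // Store only the ciphertext and IV; the database never sees the plaintext or the key.
    $stmt = $pdo->prepare('INSERT INTO records (user_id, iv, payload) VALUES (?, ?, ?)');
    $stmt->execute([$userId, $iv, $ciphertext]);

Note that the server can then no longer sort or filter on the payload, which is part of the trade-off.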
Of course, it's all a matter of time, money, and effort; with enough of those, anything can be broken.
I've no idea if such a thing exists in MySQL, but row-level security in Oracle enables you to define access rights at the row level in the database. That means that, regardless of what tool is used to access the data, the user only ever sees the selection determined by his/her credentials.
So if my username/role is only allowed to see data limited by some WHERE clause, that clause can be appended to each and every SELECT that reaches the database, regardless of whether it comes from a web app, a SQL querying tool, or whatever.
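MySQL has no built-in row-level security, but a common approximation is a SQL SECURITY DEFINER view that filters on the connecting account, with clients granted access to the view only. A sketch, assuming a tenant column on the base table and one MySQL account per client (all names are illustrative), shown as an admin-side PHP snippet for consistency:

    <?php
    // Run once as an administrator: emulate row-level security with a filtered view.
    $admin = new PDO('mysql:host=db;dbname=app', 'admin', getenv('ADMIN_DB_PASS'));

    // USER() returns the connecting account even inside a DEFINER view,
    // so each client account sees only its own rows.
    $admin->exec("CREATE SQL SECURITY DEFINER VIEW my_orders AS
                  SELECT * FROM orders
                  WHERE tenant = SUBSTRING_INDEX(USER(), '@', 1)");
    $admin->exec("GRANT SELECT ON app.my_orders TO 'client_x'@'%'");  // the view only, never the base table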
I would use a second layer and a firewall between them.
So you have: firewall --- web server --- firewall --- 2nd-layer server --- firewall --- DB.
It would be wise to use different platforms between the layers; it all depends on how important the data is.
In any case, the web server should have no direct access to the DB.
About preserving sorting: if you use a file-encryption mechanism, it will only protect you from hard drive theft.
If you encrypt the data itself, and if you do it smartly (storing the keys in a separate place), you will not lose lookups, since you can search for the encrypted entry instead of the real one; but now you have another thing to protect...
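To illustrate the "search for the encrypted entry" trick: if the encryption is deterministic (the same plaintext always yields the same ciphertext), equality lookups still work on encrypted columns. A sketch, assuming a users table with an email_enc column; note that a deterministic mode like ECB deliberately leaks which rows share a value, and that range queries or ORDER BY on the plaintext are still lost:

    <?php
    // Deterministic encryption: same input -> same ciphertext, so WHERE ... = works.
    $key = getenv('COLUMN_AES_KEY');   // stored in a separate place, as the answer suggests
    $needle = openssl_encrypt('alice@example.com', 'aes-256-ecb', $key, OPENSSL_RAW_DATA);

    $stmt = $pdo->prepare('SELECT * FROM users WHERE email_enc = ?');
    $stmt->execute([$needle]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);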