How to protect a MySQL database from anyone

I have launched my project with a hosting company, but I am worried about how to protect my MySQL database from them.
My question is: how can I protect my database so the hosting company can't access my database / data?

Here's a relevant rule of IT security:
"If a bad guy has unrestricted physical access to your computer, it's not your computer anymore."
http://technet.microsoft.com/en-us/library/cc722487.aspx

If you don't trust your hosting company, it's time to get a new one. There's little you can do to prevent someone with physical access to a server from getting at what's on it.

I think you will just have to trust them. There is no way to fully protect the database, because the hosting company has access to almost all levels of your application. They could even inject code that fetches all of your data at some layer of the application.
The hosting company is only one of the threats. You should also think about XSS, CSRF, network-level data sniffing, and so on...

As others have answered, there really is no way to protect your data from the hosting company. They own the server, which gives them access to all databases on it.
Depending on your data you could encrypt all of it, but that's overkill and not a practical solution unless your data is truly sensitive. In that case I would recommend getting a server of your own and building it to support your needs.
You could check out Rackspace and set up one of their servers, but again, if it's not physically in your possession they could potentially get on it and see what's there. I think it's less likely, though, since you would be setting up your own VM or server through them.

I guess there is nothing you can do to prevent this on the hosting company's side; configuring your own server may be the only option.

I wholeheartedly endorse the general rule that Jay puts forward.
However, in certain environments it might be a good idea to take extra steps to ensure that your data is somewhat more protected, given the rule that it really is someone else's computer.
Try to encrypt data that does not need to be acted upon using public keys, and keep the private keys off the server. This is trivial to overcome if someone else can change the code and ensure that unencrypted copies are kept in parallel.
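For illustration, here is a minimal sketch of that pattern using the Python cryptography package; the key file path and the field being protected are hypothetical, and real systems use hybrid encryption since RSA can only encrypt small payloads:

```python
# Sketch: encrypt a field with the server-held PUBLIC key; the private
# key needed to decrypt never touches the server. Path is hypothetical.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

with open("server_public.pem", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())

ciphertext = public_key.encrypt(
    b"alice@example.com",  # a short sensitive field; RSA caps payload size
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)
# Store `ciphertext` in a BLOB/VARBINARY column; decryption happens
# off the server, wherever the private key lives.
```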
So use something like Tripwire to verify that your code has not been changed. Again, Tripwire can be reconfigured by someone with physical access, so this is not foolproof, but it can help.
Good luck; you are tackling an intractable problem, which is, of course, the most fun kind.
-FT


Host a MySQL Server

I am making a JavaFX program and need to use a small MySQL database. Currently I am hosting one on my computer, but I can't access it from other computers on other networks. I need the MySQL server to be accessible from anywhere. How do I host one that does that? Thanks in advance; all help is welcome.
Well, you have a few options, depending on how important this MySQL database is to you, how you intend to connect to it from outside, and what you want to do with it.
The naive implementation would involve opening your firewall and forwarding all incoming traffic on whatever port you have configured MySQL for (3306 by default) to the IP address of your server. If you do this you absolutely must secure your database with a password!!! You'll also need to keep the server's public IP address handy so you know how to find it when you go out.
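As a rough sketch of that "secure it with a password" step, create a dedicated, least-privilege account instead of exposing root; all account names, addresses, and passwords below are placeholders:

```python
# Sketch: create a password-protected, least-privilege MySQL account that
# may only connect from one known remote address. Names are placeholders.
import mysql.connector  # pip install mysql-connector-python

admin = mysql.connector.connect(host="localhost", user="root",
                                password="local-admin-password")
cur = admin.cursor()
cur.execute("CREATE USER 'javafx_app'@'203.0.113.7' "
            "IDENTIFIED BY 'a-long-random-password'")
cur.execute("GRANT SELECT, INSERT, UPDATE, DELETE "
            "ON appdb.* TO 'javafx_app'@'203.0.113.7'")
admin.close()
```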
Use Amazon AWS, Google Compute Engine, Google App Engine, or some other cloud platform to host a MySQL instance. All the big players tend to host pretty capable RDBMS solutions. The advantage here is that you're not exposing your home computer to malice, and you are connecting into an ecosystem that will answer a lot of other questions for you as they come up along the way (e.g., how do you ensure redundancy? Backups? Scale your network for traffic?). There are a ton of other advantages too. It's the cloud... dude...
Use a SaaS DB service such as Firebase (Note: We are leaving MySQL and SQL database territory with Firebase)
If you plan to let other parties access your MySQL instance to make use of your data, you might also want to consider implementing a REST API (or a SOAP API, if you hate the future) which acts as an abstraction layer to interact with and provide the data from your database in a consistent and reliable format.
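To give a sense of what that abstraction layer looks like, here is a bare-bones sketch using Flask and mysql-connector-python; the table and column names are invented for the example:

```python
# Sketch: a tiny REST layer in front of MySQL so outside parties never
# talk to the database directly. Table/column names are invented.
import mysql.connector
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/items/<int:item_id>")
def get_item(item_id):
    db = mysql.connector.connect(host="localhost", user="api",
                                 password="api-password", database="appdb")
    cur = db.cursor(dictionary=True)
    # Parameterized query: callers can only ever fetch what we expose.
    cur.execute("SELECT id, name, price FROM items WHERE id = %s", (item_id,))
    row = cur.fetchone()
    db.close()
    return (jsonify(row), 200) if row else (jsonify(error="not found"), 404)

if __name__ == "__main__":
    app.run()
```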
That's the best answer I can give with the details afforded; look around, though, because the options in this arena are nearly limitless depending on how and what you're trying to do.
You should be able to access your machine from your LAN pretty easily, unless there are firewall rules preventing connections to it. Another option: many cloud hosting providers have a free tier you can sign up for to bring up a test MySQL instance, for example OpenShift.

Mail, website, forum under one database

I've tried to find an answer to my question but couldn't find the right one yet (I'd be glad if you pointed me to one). I'm a newbie when it comes to running services (websites, forums, wikis, email); I'm mostly experimenting.
I have a couple of websites (mainly WordPress), a mail server, a forum, wikis, and file sharing (ownCloud) hosted on one server.
Until now, every time I installed a new service I would create a new MySQL database, just as the install READMEs advise. I would now like to connect some of the services together, mainly through a unified user database.
What is the best way to do it? Is having multiple databases, versus one DB, heavier on my server's CPU? Is it secure? Is it easy to administer?
If CPU load isn't an issue with multiple DBs, is it possible to create a user database and link it to the databases of the services I'd like to connect?
Having multiple applications (forum, wiki, ...) access the same database is not likely to have any effect on CPU usage, but there are other drawbacks:
Table names used by the applications might conflict (many of them will have a "session" or "posts" table). Some web apps can prefix their table names with a string, like "wp_session" and "wp_posts", to get around such conflicts.
Yes, it's less secure. When one of the applications has a security hole and someone manages to access its database, the data of all applications is compromised.
Multiple databases are likely to be easier to manage when doing application upgrades, backups, and when removing or adding applications to the mix.
Accidentally break the one shared database, and you break all the apps.
To get the applications to use the same authentication database, it's usually not enough to point them at the same database: they're likely to use different schemas for storing user information (different columns in the auth tables), different hashing for password storage, and so on.
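To make the hashing mismatch concrete, here is a small sketch using the Python passlib package (the two schemes are named for illustration): two apps can share a users table and still be unable to verify each other's passwords.

```python
# Sketch: the same password stored by two apps yields incompatible hashes.
# Requires passlib (and the bcrypt backend package for the bcrypt scheme).
from passlib.context import CryptContext

ctx = CryptContext(schemes=["bcrypt", "phpass"])

wp_hash = ctx.hash("hunter2", scheme="phpass")     # WordPress-style hash
forum_hash = ctx.hash("hunter2", scheme="bcrypt")  # typical forum hash

print(ctx.identify(wp_hash))     # -> 'phpass'
print(ctx.identify(forum_hash))  # -> 'bcrypt'
# An app that only understands one scheme cannot verify the other's rows,
# even though both live in the same table.
```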
The question is quite broad, and the specific answer depends a lot on the actual applications you're using. The best approach in general is probably to pick applications which support a protocol such as OpenID or OAuth, or an authentication backend such as an LDAP directory or PAM (Pluggable Authentication Modules). These methods let you keep a single user database managed in a single place, though all the apps need to work with the same backend. In any case, it's likely to be quite a learning experience to get it running smoothly.

How does database tiering work?

The only good reference I can find on the internet is this whitepaper, which explains what database tiering is, but not how it works:
The concept behind database tiering is the seamless co-existence of multiple (legacy and new) database technologies to best solve a business problem.
But how is it implemented? How does it work?
Any links regarding this would also be helpful. Thanks.
I think the idea of that document is that you put "cheap" databases in front of the "expensive" databases to reduce costs.
For example, let's assume you have an "expensive" DB: something like Oracle, DB2, or even MSSQL (more realistically, it's probably a legacy DB system that isn't well supported any more, or that needs specialized resources to maintain); a database engine that costs a lot to purchase and maintain. (Arguably these are not expensive when you take all factors into consideration, but let's use them for the example.)
Now, if you suddenly get famous and your server starts to get overloaded, what do you do? Buy a bigger server and migrate all your data to it? That could be incredibly expensive.
With the tiering solution you put several "cheap" databases in front of your "expensive" database to take the brunt of the work. So your web servers (or app servers) talk to a bunch of MySQL servers, for example, instead of directly to your expensive server, and those MySQL servers handle the majority of the calls. For example, they could serve all read-only calls completely on their own and only pass write calls back to the main database server. The MySQL servers are then kept in sync via standard replication practices.
Using methods like this you could, in theory, scale your expensive server out across dozens, if not hundreds, of "cheap" database servers and handle a much higher load.
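Here is a toy sketch of that read/write split at the application layer; hostnames and credentials are placeholders, and production setups usually lean on a proxy or the driver's replica support instead:

```python
# Sketch: route SELECTs to cheap replicas, everything else to the primary.
import random
import mysql.connector

PRIMARY = {"host": "primary.db.internal", "user": "app", "password": "..."}
REPLICAS = [{"host": "replica1.db.internal", "user": "app", "password": "..."},
            {"host": "replica2.db.internal", "user": "app", "password": "..."}]

def run_query(sql, params=()):
    # Very crude routing: read-only statements go to a random replica.
    is_read = sql.lstrip().upper().startswith("SELECT")
    target = random.choice(REPLICAS) if is_read else PRIMARY
    conn = mysql.connector.connect(database="appdb", **target)
    cur = conn.cursor()
    cur.execute(sql, params)
    rows = cur.fetchall() if is_read else None
    conn.commit()
    conn.close()
    return rows
```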
Database tiering is just a specific style of tiering. There are also application tiering and service tiering. It's a form of scalability.
What exactly are you asking? This question is rather vague.
This is a PDF from a course at Ohio State. What it discusses is a bit over my head, but hopefully you might understand it better.

How do I create a safe local development environment?

I'm currently doing web development with another developer on a centralized development server. In the past this has worked all right, as we have two separate projects and rarely conflict. Now, however, we are adding a third (possible) developer into the mix. This is clearly going to create problems with other developers' changes affecting my work and vice versa. To solve this, I'm thinking the best solution would be to create a virtual machine to distribute between the developers for local use. The problem I have is with the database.
Given that we all develop on laptops, simply keeping a local copy of the live data is plain stupid.
I've considered sanitizing the data, but I can't really figure out how to replace the real data with data that would be representative of what people actually enter, without repeating the same information over and over again, e.g. everyone's address becoming 123 Testing Lane, Test Town, WA, 99999 or something. Is this really something to be concerned about? Are there tools to help with this sort of thing? I'm using MySQL. Ideally, if I sanitized the DB it should be done by a script that I can run regularly. If I do this I'd also need a way to reduce the size of the DB itself. (I figure I could select all the records created after x, and whack them and all the corresponding records in related tables, so that isn't really a big deal.)
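There are tools for exactly this. As a sketch of the "script you can run regularly", the Python Faker package generates varied, plausible values; the table and column names below are invented for the example:

```python
# Sketch: overwrite personal fields with plausible fake values, then trim
# the copy down to size. Table/column names are invented for the example.
import mysql.connector
from faker import Faker  # pip install Faker

fake = Faker()
db = mysql.connector.connect(host="localhost", user="dev",
                             password="...", database="dev_copy")
read_cur = db.cursor()
write_cur = db.cursor()

read_cur.execute("SELECT id FROM customers")
for (cid,) in read_cur.fetchall():
    write_cur.execute(
        "UPDATE customers SET name=%s, email=%s, address=%s WHERE id=%s",
        (fake.name(), fake.email(), fake.address().replace("\n", ", "), cid))

# The size-reduction step mentioned above: drop records after a cutoff.
write_cur.execute("DELETE FROM customers WHERE created_at > %s",
                  ("2009-01-01",))
db.commit()
db.close()
```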
The second solution I've thought of is to encrypt the hard drive of the VM, but I'm unsure how practical this is in terms of speed, and also in the event of a lost or stolen laptop. If I do this, should the VM disk file itself be encrypted, or should the encryption happen inside the VM? (I'm assuming the latter, as it would be portable and doesn't require the devs to have any encryption capability in their OS of choice.)
The third is to create a copy of the database for each developer on our development server; they would then be responsible for keeping its schema in sync with the canonical DB by means of migration scripts or what have you. This solution seems the simplest, but doesn't really scale as more developers are added.
How do you deal with this problem?
Use fake data -- invest in a data generator if you must, but please don't use real data in a development environment, especially if it's possible that access to it may be compromised. I'm more familiar with tools for MS SQL, but googling for "MySQL data generator" brought up EMS SqlManager and Datanamic.
As tvanfosson mentioned, use fake data instead of live data. Doing so will not only keep the live data safe but also let you test different scenarios, such as international names and the like.
As for how to distribute your DB, your schema and creation scripts really should be in source control, so each developer can create a local copy of the database as they see fit.
You could set up a fixtures (seed data) system. You provide the data once and it gets put into the DB as many times as you need. It can be held in source control so that the fixtures are used and updated by all developers.
I think auto-generators are usually a bad idea: it is hard for them to generate information that could pass for real. Fixtures let you craft that information yourself and know it is what you are looking for. You can also push the bounds of your validators with fixtures.
It may take a bit of time to set up the first time around, but I think you will get much higher-quality test data out of it.
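A minimal sketch of such a fixture loader, with the file layout and table invented for illustration; the JSON file lives in source control alongside the schema:

```python
# Sketch: load hand-written seed data into a developer database from a
# known starting state. Paths and table names are invented.
import json
import mysql.connector

with open("fixtures/users.json") as f:  # e.g. [{"name": "...", "email": "..."}]
    users = json.load(f)

db = mysql.connector.connect(host="localhost", user="dev",
                             password="...", database="dev_db")
cur = db.cursor()
cur.execute("TRUNCATE TABLE users")  # reset to a known state first
for u in users:
    cur.execute("INSERT INTO users (name, email) VALUES (%s, %s)",
                (u["name"], u["email"]))
db.commit()
db.close()
```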
Regards,
Justin

Keep database information secure

There's this interesting problem I cannot solve myself. I'd be very glad if you could help me.
Here it is:
There are many client applications that send data records to one MySQL server.
A few individual records are not very important, but the whole database is. (You can imagine it is the Facebook DB :) )
Is there any way to ensure that
data from the DB won't be used by anyone but its true owner
the DB will preserve essential features such as sorting etc.
assuming that the attacker can mysteriously gain full access to the server?
You can't simply encrypt the data client-side and store it encrypted, since the client application is widely distributed and an attacker can extract the key from it.
Maybe adding some layers between the application and the DB, or combining client- and server-side encryption methods (using MySQL's built-in functions), would help?
As long as the database needs to start up and run unattended, you can't hide the keys from a compromised root account (= 'mysterious full access'). Anywhere the database could possibly store the master key(s), root will also have access. No amount of business layers or combination of client- and server-side encryption will ever circumvent this simple fact. You can obfuscate to your heart's content, but if the prize is worth it, root can get it.
One alternative is to require a manually assisted start-up process, i.e. a human enters the master key password during server boot (or a hardware module PIN), but this is extremely hard to maintain in the real world: it requires a highly trusted employee to be on pager duty to log in and start the database whenever there is downtime.
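A sketch of what that assisted start-up can look like, with illustrative parameters (the salt and iteration count would be chosen per installation): the operator's passphrase is turned into a key that exists only in the process's memory.

```python
# Sketch: derive the master key from an operator-supplied passphrase at
# boot, so nothing usable is ever stored on disk. Parameters illustrative.
import getpass
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

passphrase = getpass.getpass("Master key passphrase: ").encode()
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                 salt=b"per-install-salt", iterations=600_000)
master_key = kdf.derive(passphrase)  # lives only in RAM for this process
# ... use master_key to unwrap the database encryption keys ...
```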
Solutions like TPM offer protection against physical loss of the server, but not against a compromised root.
Your root account is as important as the database master key(s), so you must protect root with the same care as the keys. This means setting up operating procedures, screening who has access to root, rotating the root password, and so on. The moment someone gains 'mysteriously full access', the game is pretty much lost.
I pretty much agree with Remus Rusanu's answer.
Maintaining good security is hard, but you can always pay attention to what you do. Whenever you access sensitive information, carefully verify your query and make sure it cannot be spoofed or exploited to reveal information which shouldn't be accessible to the given client.
If you can rule out physical access to the box by the attacker, there are several things you can do to harden your security. First of all, I'd configure SSH to allow connections only from a specific IP or IP range (and of course no root login); you can also enforce that on your firewall. That makes the weakest link your server itself (the application which receives data/requests from clients; this could be the web server and whatever scripts you use). Now you "just" have to make sure that no one can exploit your server. There are a lot more things you could do to harden your system, but I think it would be more appropriate to ask about those on ServerFault.
If you're worried about physical access to the machine, there isn't really much you can do, and most of it has already been mentioned in Remus's answer.
There's also another option. This is by far the least efficient method from a speed and ease-of-development viewpoint, but it would partly protect you from any kind of attack on your server (including physical). It's actually quite simple, but a bit hard to implement: only store encrypted data in the database and handle all encryption/decryption client-side, using JavaScript or Flash. Only the client has the key, and data is always transferred over the wire and stored in encrypted form. The biggest drawback is that once the client forgets the key there's no way back; the data is inaccessible.
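The answer suggests JavaScript or Flash in the browser; for brevity, here is the same idea sketched in Python with the cryptography package's Fernet (symmetric) scheme:

```python
# Sketch: the client encrypts before sending, so the server only ever
# sees ciphertext. The key never leaves the client.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # generated and kept client-side only
f = Fernet(key)

ciphertext = f.encrypt(b"my private note")  # this is what MySQL stores
# ... ciphertext travels over the wire, is stored, and comes back ...
plaintext = f.decrypt(ciphertext)  # possible only where the key lives
# As noted above: lose the key and the data is gone for good.
```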
Of course it's all a matter of time, money, and effort: with enough of each, anything can be broken.
I've no idea if such a thing exists in MySQL, but row-level security in Oracle (Virtual Private Database) lets you define access rights at the row level IN the database: regardless of what tool is used to access the data, a user only ever sees the selection their credentials allow.
So if my username/role is only allowed to see data limited by some WHERE clause, that clause can be appended to each and every SELECT that reaches the database, whether it comes from a web app, a SQL querying tool, or whatever.
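MySQL has no such feature built in, but the effect can be approximated in an application layer that appends the credential-derived WHERE clause to every read. A rough sketch, with invented identifiers that, crucially, come from application code and never from user input:

```python
# Sketch: every read is funneled through a helper that scopes the query
# to the caller's rows. Table/column names are invented; they must come
# from application code, never user input (the value is parameterized).
def scoped_select(cursor, columns, table, owner_id):
    sql = f"SELECT {columns} FROM {table} WHERE owner_id = %s"
    cursor.execute(sql, (owner_id,))
    return cursor.fetchall()

# rows = scoped_select(cur, "id, title", "documents", current_user_id)
```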
I would use a second layer and a firewall between them.
So you have: firewall --- web server --- firewall --- second-layer server --- firewall --- DB.
It would be wise to use different platforms between the layers; it all depends on how important the data is.
In any case, the web server should have no direct access to the DB.
About preserving sorting: if you use a file-encryption mechanism, it will only protect you from hard-drive theft.
If you encrypt the data itself, and you do it smartly (storing the keys in a separate place), you will not lose lookups, since you search for the encrypted entry rather than the real one (true sorting on the plaintext is harder and needs an order-preserving scheme). But now you have another thing to protect...
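One way to keep those lookups working is a "blind index": alongside the encrypted value you store a keyed HMAC of the plaintext and search on that. A sketch with the Python standard library; the key shown is a placeholder and, as said above, must itself be protected:

```python
# Sketch: store HMAC(plaintext) next to the ciphertext so equality
# searches hit the index, never the plaintext. Key is a placeholder.
import hashlib
import hmac

INDEX_KEY = b"kept-anywhere-but-the-db-server"  # placeholder

def blind_index(value: str) -> str:
    return hmac.new(INDEX_KEY, value.encode(), hashlib.sha256).hexdigest()

# INSERT: store (encrypt(email), blind_index(email)) in two columns.
# LOOKUP: SELECT ... WHERE email_index = %s, passing blind_index(query).
# This preserves exact-match search; ordering by the real value would
# need an order-preserving scheme, which leaks more information.
```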