How to store cross application values - configuration

I have a couple of values that I need to access from multiple programs. The values are in key-value format, e.g.:
phone: 733 209 2309
ip address: 111.111.121.111
max-clients: 340
Is there a service or protocol that is designed for storing this sort of thing? I am thinking about using a MySQL server and creating a table called configuration; however, if there is a protocol better suited to the task, I would love to learn about it. I want a solution that works well across different programming languages.
The number of records should be less than 5000. The values will not normally change (so fast or simultaneous writes are not really necessary), but fast reads are important.

Depending on your needs:
multiple simultaneous writers
millions of entries
complex triggers
you have options ranging from gzipped flat files to PostgreSQL; in between you will find SQLite (maybe the best choice for your needs!), Berkeley DB, OpenOffice, MySQL, etc.

For now I have decided to go with a MySQL database. Most languages already support MySQL, and it responds quickly to requests over a local network. It still seems like overkill, but I don't know of a better option.
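For anyone landing here later, this is roughly what that looks like with PDO (a minimal sketch; the host, credentials, and table/column names are my own placeholders):

    <?php
    // A minimal sketch, assuming a MySQL server on the local network.
    // Host, credentials, and the table/column names are illustrative.
    $pdo = new PDO('mysql:host=db.local;dbname=shared', 'reader', 'secret');

    // One-time setup: a simple key-value table.
    $pdo->exec('CREATE TABLE IF NOT EXISTS configuration (
        name  VARCHAR(64) PRIMARY KEY,
        value VARCHAR(255) NOT NULL
    )');

    // Fast reads: look up a single value by key.
    $stmt = $pdo->prepare('SELECT value FROM configuration WHERE name = ?');
    $stmt->execute(['max-clients']);
    $maxClients = $stmt->fetchColumn(); // "340"

Since PDO also has a SQLite driver, swapping 'mysql:...' for 'sqlite:/path/to/config.db' later is mostly a one-line change, and most other languages have equally thin drivers for both engines.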

Related

MySQL "auto"-encrypt per session [duplicate]

I am trying to improve the security of a MySQL database that will contain sensitive data. I am struggling to get to grips with some terminology. Can somebody let me know if I have understood the situation correctly:
Encryption at rest - it seems like I can enable this at a table level. All data in the table is encrypted using a key. If somebody got hold of a backup file or gained physical access to the server, the data would be protected. This assumes, of course, that the key is stored elsewhere.
AES_ENCRYPT - when inserting/updating data into my table I can use AES_ENCRYPT('data', 'password'); when querying the data via a SELECT I use AES_DECRYPT.
Assuming I was just using encryption at rest, do I need to do anything different in my PHP code to query the data? Does my PHP code need to send the key to the database via my PDO request? Or can I use my normal code for querying the database, with the decryption handled automatically?
Or have I misunderstood what encryption at rest does, and do I need to use AES_ENCRYPT instead/as well?
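For reference, my understanding of the AES_ENCRYPT route is something like this (a sketch, not my actual code; the table and key handling are placeholders):

    <?php
    // A sketch, not real code: table name, column, and key handling
    // are placeholders.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $key = getenv('APP_AES_KEY'); // key kept out of the database

    // INSERT: the server encrypts before storing.
    $stmt = $pdo->prepare('INSERT INTO patients (name) VALUES (AES_ENCRYPT(?, ?))');
    $stmt->execute(['Alice', $key]);

    // SELECT: the server decrypts on the way out.
    $stmt = $pdo->prepare('SELECT AES_DECRYPT(name, ?) AS name FROM patients');
    $stmt->execute([$key]);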
Encryption at rest
Encryption at rest protects the data in the database while it is not being used, accessed, or updated. Encryption in transit is things like TLS, where the data (from the database) is protected as it travels from server to server to browser and back. TLS is perfectly good in most situations if it's handled carefully and approached with an attitude that you need to do more than the bare minimum to actually make it realistically secure.
A typical example: people put a TLS certificate from Let's Encrypt on their domain and think that suddenly all their stuff is safe, but they don't encrypt their sessions or their cookies, leaving a massive potential hole in their defences.
Do not use MySQL's built-in encryption system.
I cannot stress this enough; the built-in encryption system in MySQL is not suitable for actual secure data protection.
Please read my answer to a very similar question here as to the details (I don't want to simply copy/paste).
Ok, then, because you insist.... here:
I have always understood NOT to use MySQL's built-in encryption functionality, because the point of encrypting data at rest (in the SQL) is that if the server is compromised, the data is not at [as much] risk.
The problem with the MySQL built-in functionality is that it doesn't apply while the data is passed to and from the "at rest" state, so the plaintext of any data can be recorded in MySQL logs (and elsewhere on the storage system) before/as it is encrypted; query lookups are not encrypted either, so from numerous lookups and their count results you can deduce column values. You can read more about this here.
Regarding encryption, you should use some tried and tested library like defuse/php-encryption.
From what I've read in my own research on this topic, the link provided by Magnus to defuse/php-encryption is one of the best ways of preventing MySQL ever causing you to compromise your data, by never letting the MySQL program/server ever see the plaintext value of your data.
-- Answer as posted May 7th 2017.
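For context, the defuse/php-encryption flow looks roughly like this (a sketch; where you store the key is an assumption, per the key-server advice further down):

    <?php
    // A minimal sketch using defuse/php-encryption
    // (composer require defuse/php-encryption).
    require 'vendor/autoload.php';

    use Defuse\Crypto\Crypto;
    use Defuse\Crypto\Key;

    // Generate once; store the ASCII key OUTSIDE the database and web root
    // (e.g. on a separate key server).
    $key   = Key::createNewRandomKey();
    $ascii = $key->saveToAsciiSafeString();

    // Encrypt in PHP, so MySQL (and its logs) only ever see ciphertext.
    $key        = Key::loadFromAsciiSafeString($ascii);
    $ciphertext = Crypto::encrypt('sensitive value', $key);
    // ... INSERT $ciphertext via PDO as an ordinary string ...
    $plaintext  = Crypto::decrypt($ciphertext, $key);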
Also Bill Karwin's answer to the same question gives some valuable additional insights:
+1 to Martin's answer, but I'll add some info for what it's worth.
MySQL 5.7 has implemented encryption at rest for InnoDB tablespaces (https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html).
MySQL 8.0 will reportedly also implement encryption at rest for InnoDB redo log and undo log files (https://dev.mysql.com/doc/refman/8.0/en/innodb-tablespace-encryption.html).
This still leaves unencrypted the query logs and the binary log. We'll have to wait for some future version of MySQL for that.
Why does it take so long? The head of security engineering for MySQL said at a birds-of-a-feather session at the Percona Live conference last month [April 2017] that they are being very careful to implement encryption right. This means implementing features for encryption, but also key security, key rotation, and other usage. It's very complex to get this right, and they don't want to implement something that will become deprecated and make everyone's encrypted databases invalid.
-- Answer as posted May 7th 2017.
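For reference, InnoDB tablespace encryption is enabled per table; a sketch, assuming MySQL 5.7+ with a keyring plugin (e.g. early-plugin-load=keyring_file.so in my.cnf) already configured, and with illustrative names:

    <?php
    // Sketch only: requires MySQL 5.7+ with a keyring plugin loaded and
    // innodb_file_per_table enabled; table names are illustrative.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'admin', 'pass');

    $pdo->exec("CREATE TABLE secrets (id INT PRIMARY KEY, payload TEXT) ENCRYPTION='Y'");
    $pdo->exec("ALTER TABLE patients ENCRYPTION='Y'"); // encrypt an existing table

Note that this decryption is transparent to clients: normal PDO queries need no key and no code changes, which answers the original question above. The caveats about logs and query lookups in Martin's answer still apply.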
Closing Point:
Security is complex. If you want to do it properly and have confidence in your protective onion skins, then you need to do a lot of things (see bullets below); but the first thing you need to do is:
Define Who you are protecting against
Seriously. You need different strategies against someone who wants to steal your plaintext names and addresses, versus someone who wants to take over your server, versus someone who simply wants to trash the data just because. It is a myth that you can protect against everyone all of the time; conceptually this is impossible*, so you need to define the most likely aggressors and then work out how best to mitigate their advances.
Specifically for MySQL, some clear recommendations:
Keep the SQL and the PHP on the same server. Do not allow remote access to the MySQL data.
Exclude external access to the SQL (so it's localhost only)
Obfuscate your table names and column names; if someone breaks into your data and sees HDTBJ^BTUETHNUYT under the column username, then they know that this garble is probably a username, so they have a very good start in trying to break your encryption.
IMPORTANT: Really lock down your table access; set up lots of MySQL users, each with only the bare minimum privileges to do what they need. You want a user that can only read, and only certain tables, and users that can write to certain tables but have no access to others (see the sketch after this list). It's separation of concerns, so that if any one user on the MySQL server is compromised, you've not automatically lost every piece of data in there.
Use PHP encryption services. Store encryption keys in a completely separate place; for example, have another server you use solely for backup that you access solely for reaching out to grab the encryption keys. That way, if your PHP/MySQL server is compromised, you have some room to cut off and lock down the key server so that you can limit the damage. If the key server also has backups then really you're not too badly compromised (situation dependent).
Set up lots of watchers and email notifiers to tell you exactly when certain processes are running and which server users (not people but programs) are doing what, so you can see when an unexpected process starts to run at 5am to try and measure the size of the MySQL tables. WTF?
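Regarding the least-privilege users above, a sketch (user names, tables, and passwords are all illustrative):

    <?php
    // Sketch: one MySQL user per purpose, each with minimal privileges.
    $admin = new PDO('mysql:host=localhost', 'root', 'rootpass');

    // A reader that can only SELECT, and only from one table.
    $admin->exec("CREATE USER 'app_read'@'localhost' IDENTIFIED BY 'readpass'");
    $admin->exec("GRANT SELECT ON app.articles TO 'app_read'@'localhost'");

    // A writer for the audit log, with no access to anything else.
    $admin->exec("CREATE USER 'app_log'@'localhost' IDENTIFIED BY 'logpass'");
    $admin->exec("GRANT INSERT ON app.audit_log TO 'app_log'@'localhost'");

Binding the users to 'localhost' also reinforces the no-remote-access point above.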
There is a lot of potential for your MySQL AES_ENCRYPT'ed data to be "sniffed" even when it is not at rest in the DB; and if the website gets compromised (or worse, the PHP code is insecure), timing attacks can work out data contents by timing query lookups and data packet returns.
Security is a black hole; at some point or another you're going to think "Sod this, I've done enough". No one ever has total security, some very dedicated organisations have enough security. You need to work out how far you're willing to walk before you've gone the distance.
* Why impossible? Because to protect your data from all threats, all of the time, it would need to be unreadable, unusable, like a hash. A hash is protected from everyone, all of the time. But a hash can never be un-hashed.

Does it make sense to encrypt every value in MySQL?

I currently have a MySQL database without built-in database encryption. I am aware that encryption is available, but it's not available on AWS RDS for the instance size I'm working with.
Instead, I plan to utilize AWS KMS (basically standard hashing encryption) to hash every single value before entering it into the database. I am working with sensitive data that needs to be HIPAA compliant.
My question is: by hashing the values, this essentially renders querying useless, right? Additionally, if that's the case, what would be the difference between hashing every value (first name, last name, DOB, etc.) vs. treating the entire row as a single JSON string and then hashing that (and storing it in a single column)?
If anyone has experience encrypting on the application level with HIPAA/sensitive data and storing it in MySQL, I'd appreciate any suggestions!
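Worth noting: AWS KMS provides encryption and key management, not hashing. If you encrypt values in the application rather than hashing them, equality lookups can still work if you store a deterministic keyed hash (a "blind index") next to the ciphertext. A sketch using PHP built-ins (all names and the key handling are illustrative; in practice the keys would come from KMS):

    <?php
    // Sketch: application-level encryption plus a "blind index" for
    // equality lookups. Everything here is illustrative.
    $encKey   = sodium_crypto_secretbox_keygen(); // data-encryption key
    $indexKey = random_bytes(32);                 // separate key for the index

    function sealValue(string $plain, string $key): string {
        $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
        return $nonce . sodium_crypto_secretbox($plain, $nonce, $key);
    }

    $lastName   = 'Doe';
    $ciphertext = sealValue($lastName, $encKey);             // randomized, not searchable
    $blindIndex = hash_hmac('sha256', $lastName, $indexKey); // deterministic, searchable

    // Store both columns. WHERE last_name_idx = ? still supports exact
    // matches, while range queries (e.g. DOB between two dates) do not work.

This also illustrates the per-column vs. whole-row-JSON question: per-column ciphertext plus blind indexes preserves some querying, while one blob per row forces you to decrypt everything in the application.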
While I've worked on a few HIPAA projects in the past, I'm in no way an expert. HIPAA has a lot of components you need to take into account, so take the following as non-HIPAA-specific.
I would consider operating your own relational DB server with full-disk and database encryption, or (if you're able to just work with JSON strings anyway) using a NoSQL DB like DynamoDB.
The last project I worked on kept data in an encrypted relational DB and locked it down (we hired security engineers for that); however, on the application level we didn't encrypt anything.
I would try to avoid encrypting on the application level if possible, as it leads to added complexity.
Lastly, you might find this link useful
https://d0.awsstatic.com/whitepapers/compliance/AWS_HIPAA_Compliance_Whitepaper.pdf
as well as this tool for managing PHI with dynamoDB
https://github.com/awslabs/aws-dynamodb-encryption-java
I work as a DB encryption consultant and in your case I recommend using a Column-based encryption solution. That way you will be able to select which columns contain sensitive information, define column-specific enc/dec and access control policies and of course have different keys for each column.
Since you are using MySQL, you may want to check out MyDiamo; there is a trial license for the solution. I have deployed it on a number of occasions where clients were specifically targeting HIPAA compliance (a KMS solution is indeed needed to be fully compliant). The solution's security agent resides in the DB engine, and a CLI helps you manage it.

SQLite faster than MySQL?

I want to set up a teamspeak 3 server. I can choose between SQLite and MySQL as database. Well I usually tend to "do not use SQLite in production". But on the other hand, it's a teamspeak server. Well okay, just let me google this... I found this:
Speed
SQLite3 is much faster than MySQL database. It's because file database is always faster than unix socket. When I requested edit of channel it took about 0.5-1 sec on MySQL database (127.0.0.1) and almost instantly (0.1 sec) on SQLite 3. [...]
http://forum.teamspeak.com/showthread.php/77126-SQLite-vs-MySQL-Answer-is-here
I don't want to start a SQLite vs MySQL debate. I just want to ask: Is his argument even valid? I can't imagine it's true what he says. But unfortunately I'm not expert enough to answer this question myself.
Maybe the TeamSpeak devs have some major differences in their DB architecture between SQLite and MySQL which would explain a huge difference in speed (I can't imagine this).
At First, Access Time Will Appear Faster in SQLite
The access time for SQLite will appear faster at first instance, but this is with a small number of users online. SQLite uses a very simplistic access algorithm; it's fast but does not handle concurrency.
As the database starts to grow and the amount of simultaneous access increases, it will start to suffer. The way servers handle multiple requests is completely different: way more complex, and optimized for high concurrency. For example, SQLite will lock the whole database if an update is going on, and queue the orders.
RDBMSs Do a Lot of Extra Work That Makes Them More Scalable
MySQL, for example, even with a single user will create an access queue, lock tables partially instead of allowing only one user at a time, and perform other pretty complex tasks in order to make sure the database is still accessible for any other simultaneous access.
This will make a single-user connection slower, but it pays off in the future, when hundreds of users are online; in that case, the simple
"LOCK THE WHOLE DATABASE AND EXECUTE A SINGLE QUERY EACH TIME"
procedure of SQLite will hog the server.
SQLite is made for simplicity and self-contained database applications.
If you are expecting to have around 10 simultaneous writers to the database at a time, SQLite may perform well, but you won't want a 100-user application that constantly writes and reads data using SQLite. It wasn't designed for such a scenario, and it will trash resources.
Considering your TeamSpeak scenario, you are likely to be OK with SQLite; it is OK even for some businesses, and some websites need databases that are read-only except when adding new content.
For this kind of use, SQLite is a cheap, easy-to-implement, self-contained, perfect solution that will get the job done.
The relevant difference is that SQLite uses a much simpler locking algorithm (a simple global database lock).
Using fine-grained locking (as MySQL and most other DB servers do) is much more complex, and slower if there is only a single database user, but required if you want to allow more concurrency.
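To make the locking point concrete, here's a sketch using PHP's PDO SQLite driver (the file path and table are arbitrary):

    <?php
    // Sketch: two connections to the same SQLite file.
    $a = new PDO('sqlite:/tmp/demo.db');
    $b = new PDO('sqlite:/tmp/demo.db');
    $b->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $b->setAttribute(PDO::ATTR_TIMEOUT, 1); // give up after 1s instead of blocking

    $a->exec('CREATE TABLE IF NOT EXISTS t (x INT)');
    $a->exec('BEGIN IMMEDIATE'); // takes the database-wide write lock
    $a->exec('INSERT INTO t VALUES (1)');

    try {
        $b->exec('INSERT INTO t VALUES (2)'); // writing a different table fails too
    } catch (PDOException $e) {
        echo $e->getMessage(); // "database is locked"
    }
    $a->exec('COMMIT');

Newer SQLite versions soften this with WAL mode (PRAGMA journal_mode=WAL), which lets readers proceed during a write, but there is still only ever one writer at a time.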
I have not personally tested SQLite vs MySQL, but it is easy to find examples on the web that say the opposite (for instance). You do ask a question that is not quite so religious: is that argument valid?
First, the essence of the argument is somewhat specious. A Unix socket would be used to communicate to a database server. A "file database" seems to refer to the fact that communication is through a compiled-in interface. In the terminology of SQLite, it is server-less. Most databases store data in files, so the terminology "file database" is a little misleading.
Performance of a database involves multiple factors, such as:
Communication of query to the database.
Speed of compilation (ability to store pre-compiled queries is a plus here).
Speed of processing.
Ability to handle complex processing.
Compiler optimizations and execution engine algorithms.
Communication of results back to the application.
Having the interface be compiled-in affects the first and last of these. There is nothing that prevents a server-less database from excelling at the rest. However, database servers are typically millions of lines of code -- much larger than SQLite. A lot of this supports extra functionality. Some of it supports improved optimizations and better algorithms.
As with most performance questions, the answer is to test the systems yourself on your data in your environment. Being server-less is not an automatic performance gain. Having a server doesn't make a database "better". They are different applications designed for different optimization points.
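A minimal harness for such a test might look like this (the DSNs, credentials, and query are placeholders for your actual workload):

    <?php
    // Sketch: time the same prepared statement against both engines.
    function bench(PDO $db, string $sql, int $n = 1000): float {
        $stmt = $db->prepare($sql);
        $t0 = microtime(true);
        for ($i = 0; $i < $n; $i++) {
            $stmt->execute();
            $stmt->fetchAll();
        }
        return microtime(true) - $t0;
    }

    $sqlite = new PDO('sqlite:/tmp/ts3.db');
    $mysql  = new PDO('mysql:host=127.0.0.1;dbname=ts3', 'user', 'pass');
    $sql    = 'SELECT * FROM channels WHERE id = 42'; // stand-in for a real query

    printf("sqlite: %.3fs  mysql: %.3fs\n", bench($sqlite, $sql), bench($mysql, $sql));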
In short:
For local application databases, single-user applications, and little simple projects keeping small data, SQLite is the winner.
For networked database applications, multiple users and concurrency, load balancing, growing data management, security and role-based authentication, big projects, and widely used services, you should choose MySQL.
As for your question: I do not know much about TeamSpeak servers and what kind of data they actually need to keep in the database, but if it just needs a local DBMS and doesn't need to process lots of concurrent access, SQLite would be my choice.

Splitting a mysql database for security

I have used sql (mostly mysql) for years but not to a professional standard, so I'm looking for a shove in the right direction.
I am currently designing a web app that will collect users' names/addresses/emails etc. in one set of tables, as well as other personal information in another set of tables. These would most naturally reside in one database, but I've been considering splitting them: the user contact information in one database on a separate server, and all the other information in another database/server, the theory being that a hacker would have to break both systems to get anything very useful.
I've done searches off and on for a few weeks and haven't found this type of design discussed much so far. Is this generally done? Is it overkill? Is there a design method to approach it, or will I have to roll it all on my own?
I did find Is splitting databases a legitimate security measure? which I guess is saying that this approach is likely overkill.
I tend to think this is overkill.
Please check my answer on this question: Sharing users between 2 databases
Keep in mind to address database design and data access security issues separately. Data access security should not lead you to illogical choices in database design.
IMHO that seems wrong. By splitting data across two DBs you will only increase complexity without a reasonable security benefit.
I think this is where data encryption can be used. Generate an encryption key based on the user's credentials and encrypt/decrypt sensitive data on the user's requests. Since private data must be shown only to that user, everything should be OK.
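If you go that route, the per-user key can be derived from the password at login with a KDF, so it never needs to be stored server-side. A sketch with libsodium (the parameters and variable names are illustrative):

    <?php
    // Sketch: derive a per-user key from the password at login. The salt is
    // stored with the user record; the derived key lives only in the session.
    $salt = random_bytes(SODIUM_CRYPTO_PWHASH_SALTBYTES);
    $key  = sodium_crypto_pwhash(
        SODIUM_CRYPTO_SECRETBOX_KEYBYTES,
        $userPassword, // supplied at login, never stored
        $salt,
        SODIUM_CRYPTO_PWHASH_OPSLIMIT_INTERACTIVE,
        SODIUM_CRYPTO_PWHASH_MEMLIMIT_INTERACTIVE
    );
    // Encrypt/decrypt that user's private rows with $key
    // (e.g. via sodium_crypto_secretbox).

The trade-off: if the user forgets the password, the data is unrecoverable unless you also escrow a wrapped copy of the key somewhere.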
Here's an approach I used before:
Server1: DB
Server2: SC
DB is in a network domain that is accessible by the public, but cannot access SC
SC is in a network domain that is not accessible by the public, but can access DB
DB is where you store all pertinent information, including the 'really important stuff'.
At a specified interval (I used 5 seconds), SC checks DB for new records in any table it wants to monitor (via a job or scheduled task) and encrypts the important information.
Although I was using SQL Server 2005 and was able to work in two domains (a private (internal) one and a public one for client access), and what I just shared was a stripped-down, simplified version (with as many MSSQL-exclusive parts removed as possible), with some effort I think it would be possible to recreate something similar in MySQL, especially if you can host your two databases on separate physical machines.
While many will also think this is overkill, this idea has been implemented. It costs more and requires more work when it's data-reporting time, but the clients were pleased.

Setting up multiple MySQL databases with scalability options

I need to set up a MySQL environment that will support adding many unique databases over time (thousands, actually).
I assume that at some point I will need to start adding MySQL servers, and would like my environment to be prepared for the case beforehand, to make the transition to a 2nd, 3rd, 100th server easy.
And just to make it interesting, it would be very convenient if the solution were modeled so the application that queries the databases sends all queries to a single address and receives a result. It should be unaware of the number and location of the servers. The database name is unique and can be used to figure out which server holds the database.
I've done some research, and MySQL Proxy pops out as the main candidate, but I haven't been able to find anything specific about making it perform as described above.
Anyone?
Great question. I know of several companies that have done this (Facebook jumps out as the biggest). None are happy, but alternatives kind of suck, too.
More things for you to consider -- what happens when some of these databases or servers fail? What happens when you need to do a cross-database query (and you will, even if you don't think so right now).
Here's the FriendFeed solution: http://bret.appspot.com/entry/how-friendfeed-uses-mysql
It's a bit "back-asswards" since they are basically using MySQL as a glorified key-value store. I am not sure why they don't just cut out the middleman and use something like BerkeleyDB for storing their objects. Connection management, maybe? Seems like the MySQL overhead would be too high a price to pay for something that could be added pretty easily (famous last words).
What you are really looking for (I think) is a distributed shared-nothing database. Several have been built on top of open-source technologies like MySQL and PostgreSQL, but none are available for free. If you are in the buying mood, check out these companies: Greenplum, AsterData, Netezza, Vertica.
There are also a large number of distributed key-value storage solutions out there. For lack of a better reference, here's a starting point: http://www.metabrew.com/article/anti-rdbms-a-list-of-distributed-key-value-stores/
Your problem sounds similar to one we faced: we were acting as a white-label, and each client needed their own separate database. Assuming this concept parallels yours, what we did was leverage a "master" database that stores the hostname and database name for each client (which could be cached in the application tier). The server the client was accessing could then dynamically shift its data source to the required database. This allowed us to scale up to thousands of client databases scattered across servers.
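A sketch of that pattern (the registry table, hosts, and credentials are illustrative):

    <?php
    // Sketch: resolve a client's database to its host via a small "master"
    // registry, then connect directly. Cache the mapping in the app tier.
    function connectForClient(PDO $master, string $clientDb): PDO {
        $stmt = $master->prepare('SELECT hostname FROM client_dbs WHERE db_name = ?');
        $stmt->execute([$clientDb]);
        $host = $stmt->fetchColumn();
        return new PDO("mysql:host=$host;dbname=$clientDb", 'app', 'secret');
    }

    $master = new PDO('mysql:host=registry.internal;dbname=registry', 'app', 'secret');
    $db = connectForClient($master, 'client_4711'); // hypothetical client database

Adding a second, third, or hundredth server then means provisioning it and updating rows in the registry; the application never hard-codes hosts.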