Splitting a MySQL database for security

I have used SQL (mostly MySQL) for years, but not to a professional standard, so I'm looking for a shove in the right direction.
I am currently designing a web app that will collect users' names/addresses/emails etc. in one set of tables, as well as other personal information in another set of tables. These would most naturally reside in one database, but I've been considering putting the user contact information in one database on a separate server and all the other information in another database/server, the theory being that a hacker would have to break both systems to get anything very useful.
I've done searches off and on for a few weeks and haven't found this type of design discussed much so far. Is this generally done? Is it overkill? Is there a design method to approach it, or will I have to roll it all on my own?
I did find "Is splitting databases a legitimate security measure?", which I guess is saying that this approach is likely overkill.

I tend to think this is overkill.
Please check my answer on this question: Sharing users between 2 databases
Keep in mind to address database design and data access security as separate issues. Data access security should not lead you to illogical choices in database design.

IMHO that seems to be wrong. By splitting data across two databases you will only increase complexity without any reasonable security benefit.
I think this is where data encryption can be used. Generate an encryption key based on the user's credentials and encrypt/decrypt the sensitive data per user request. Since private data must be shown only to that user, everything should be OK.
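A minimal MySQL sketch of that idea (table and column names are invented; AES_ENCRYPT, AES_DECRYPT and SHA2 are built-in MySQL functions). The key is derived from credentials the user supplies at login and is never stored server-side:

    -- Derive a per-user key from credentials supplied at login.
    SET @user_key = SHA2(CONCAT(@per_user_salt, @user_password), 256);

    -- Encrypt on write:
    UPDATE user_private
       SET ssn_enc = AES_ENCRYPT('123-45-6789', @user_key)
     WHERE user_id = 42;

    -- Decrypt on read (a wrong key typically yields NULL, not data):
    SELECT CAST(AES_DECRYPT(ssn_enc, @user_key) AS CHAR) AS ssn
      FROM user_private
     WHERE user_id = 42;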

Here's an approach I used before:
Server1: DB
Server2: SC
DB is in a network domain that is accessible by the public, but cannot access SC
SC is in a network domain that is not accessible by the public, but can access DB
DB is where you store all pertinent information, including the 'really important stuff'.
At a specified interval (I used 5 seconds), SC checks DB for any new records in any table it monitors (via a job or scheduled task) and encrypts the important information.
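A rough sketch of that polling step, assuming MySQL on both machines and invented names. SC runs this against DB on its schedule (e.g. from cron), flagging rows it has already processed; the key lives only on SC:

    -- @sc_key is known only to SC; DB never persists it.
    UPDATE customer_secrets
       SET card_number  = AES_ENCRYPT(card_number, @sc_key),
           is_encrypted = 1
     WHERE is_encrypted = 0;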
I was utilizing SQL Server 2005 and was able to work across two domains (a private/internal one, and a public one for client access), and what I just shared is a stripped-down, simplified version with as much MSSQL-exclusive functionality removed as possible. Still, with some effort I think it would be possible to recreate something similar in MySQL, especially if you can host your two databases on separate physical machines.
While many will also think this is overkill, this idea has been implemented. It costs more and requires more work when it's data-reporting time, but the clients were pleased.

Related

Best database model for SaaS application (1 db per account vs 1 db for everyone)

Quick question: I'm developing SaaS software (an ERP).
I designed it with one database per account for these reasons:
I do a lot of customisation, and need to add specific table columns for each account.
It's easier to manage DB backups (and reload data!).
It's less risky: sometimes I need to run SQL queries on a table, and in case of an error with a bad query (update/delete...), only one customer is affected instead of all of them.
Bad point: I'm ending up with hundreds of databases...
I'm hiring a company to manage my servers, and they said that it's better to have only one database with a few tables, putting all data in the same tables with an id_account column. I'm very, very surprised by these words, so I'm wondering... what are your ideas?
Thanks !
Frederic
In the current environment I am working in, we handle millions of records from numerous clients. Our solution is to use schemas to segregate each individual client. A schema allows you to partition your clients into separate virtual databases while inside a single DB. Each schema will have an exact copy of the tables from your application.
The upside:
Segregated client data
data from a single client can be easily backed up, exported or deleted
Programming is still the same, but you have to select the schema before db calls
Moving clients to another db or standalone server is a lot easier
adding specific tables per client is easier (see below)
single instance of the database running
tuning the db affects all tenants
The downside:
Unless you manage your shared schema properly, you may duplicate data
Migrations are repeated for every schema
You have to remember to select the schema before db calls
I'm hard pressed to add many negatives... I guess I may be biased.
Adding specific tables: why would you add client-specific tables if this is SaaS and not custom software? Better to use a Postgres DB with an hstore field and store as much searchable data as you like.
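To make the schema-per-tenant idea concrete, here is a hedged PostgreSQL sketch (tenant and table names invented; hstore requires its extension):

    CREATE EXTENSION IF NOT EXISTS hstore;

    CREATE SCHEMA client_acme;
    CREATE TABLE client_acme.orders (
        id     serial PRIMARY KEY,
        total  numeric(10,2),
        extras hstore            -- free-form, searchable per-client fields
    );

    -- "Select the schema before db calls":
    SET search_path TO client_acme;
    SELECT id, total, extras -> 'delivery_window' FROM orders;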
Schemas are ideal for multi-tenant databases.
A lot of what I am telling you depends on your software stack, the capabilities of your developers and the backend db you selected (all of which you neglected to mention)
Your hardware guys should not decide your software architecture. If they do, you are likely shooting yourself in the foot before you even get out of the gate. Get a good senior software architect; the grief they save you will likely save your business.
I hope this helps...
Bonne Chance

When is MS Access better than a web app backed by an RDBMS?

I haven't used Access since high school, years ago.
What kind of problem does it solve well, or even better than a web app backed by a real RDBMS?
Is it still actively developed? Or is it pretty dead to MS already?
What are its biggest limitations?
Update:
What resource shall one use to learn how to develop a MS Access solution for small business?
Thanks
First and foremost, Access IS a real RDBMS. What it isn't is a client-server RDBMS.
The only implications of this are that there is a throttle on the number of simultaneous connections and the security of the data needs careful thought.
Amongst other things, Access is also an IDE that uses VBA as its language.
This means that in Access you can write Front End apps that link to either a SQL Server back end, an Access back end, or a SharePoint back end. So it is one very versatile cookie.
Its limitations are:
Security: if you are using an Access Back End, take note that it doesn't have the built-in security of a client-server database. In any app, security is a function of the cost and the requisite secrecy of the data.
Number of simultaneous connections: if you are not careful, Access will struggle with more than 10 people trying to update data simultaneously. You can extend that, but you need to know what you are doing to guarantee results. To put a number on it, let's say 50 simultaneous connections.
Like most databases, it is liable to corruption.
NOTE: when referring to Access as a database, you should really be referring to the "database engine", JET or ACE, depending on the version and, for Access 2007+, dictated by the file format that you use. In other words, if you are storing data in Access tables, you are using either JET or ACE. However, if you are using LINKED TABLES, that are in, for example, SQL Server, then you are not, strictly speaking, using JET or ACE security for those tables.
Access SQL doesn't allow you to write stored procedures (you can write functions in VBA), in the sense that Access SQL only allows declarative statements as opposed to procedural ones (e.g., control-flow statements). You can introduce some "procedural code" using VBA functions, but this is very different from using SQL statements (a minimal illustration follows this list).
You backup the file itself. You can write code to do this at the click of a button.
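As an illustration of the stored-procedure point above (names invented): the closest Access SQL gets is a saved parameter query, which is still a single declarative statement with no control flow around it.

    PARAMETERS [MinOrderDate] DateTime;
    SELECT CustomerID, OrderDate, TotalDue
    FROM Orders
    WHERE OrderDate >= [MinOrderDate];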
Security is always a function of cost. If you have data that is worth more than 100,000 US$ (either in loss of rights or legal liabilities if it is stolen and you have not shown due diligence in protecting it), then Access is probably not the answer. 100,000 is an arbitrary figure. The precise figure will depend on whether the data is insurable and the consequences of it being lost or stolen.
I.e., if the value of the data is the driving concern, then definitely don't use Access as a Back End. Whether you use it as a Front End is a matter of budget. For US$5000 I have written apps that are still running 10 years later. They now need to port the back end to SQL Server because the volume of sensitive data has grown.
Access, when used within the above constraints AND when used by a professional Access developer (rather than some disgruntled fool who thinks he should be using "cooler" technologies), will produce very sophisticated, sturdy and reliable applications at a 10th of the cost of other systems. In such scenarios, Access is a total NO BRAINER.
Anything else will cost more, take longer and will only be as good as the person who writes the code and designs the UI.
I have an application (the first one I ever built in Access) that has run without problems for 10 years. We have extended it massively. I have moved into ASP.NET MVC, but Access is where I hail from and I have seen it work well.
So in summary: the number of users is relevant and the value or liabilities implicit in the data are the other deciding factor.
If the number of simultaneous users is low and the value/implicit liabilities of the data is low, then the choice is definitely Access.
However, get yourself a good developer.
EDITS/CLARIFICATIONS:
The above answer, like all answers, was written in haste in the middle of a working day. Some statements were a bit glib and generic and not written with a suitable degree of precision... However, when the comments made by others are reasonable, the author of the answer should edit the post and clarify.
1/
Access is a holy trinity. It is an IDE for writing forms and reports and functions to use in your queries. It "includes" a database engine (JET/ACE). It provides a Visual Interface onto the database engine that allows you to design queries, set up relationships between tables, etc.
It is usually referred to in its many roles as just Access, but precision does help to learn Access and get the most out of it.
2/
Access can't use stored procedures in the sense that Access SQL can only use declarative statements rather than procedural ones (e.g., control-flow statements). There is a reason, I have always thought, for calling them stored PROCEDURES.
3/
Not every Access app costs exactly 100,000. Nor is the budget of an Access app equal to the value of the data. That is obvious. The idea I was trying to convey was that if the data is worth more than a sum that can be reasonably insured, then don't use Access. Is that figure 100,000? According to Luke Chung and Clint Covington, ex program manager for Access, yes, but don't take their word for it. It really just means "a lot of money".
I have written an app for Medical Charities that still runs 10 years later after an initial budget of 5000. They have probably invested another 20,000 over the years. That kind of app is the Access sweet spot.
It all depends really; I will give you a quick example that happened to me recently. At work they needed a small system to capture some records from a group of about 15 users and pass about 15% of those records to another team of about 5 or so to do additional tasks on those records. This was a one-off project that was going to last about 4 months.
The official IT solution was, of course, a web app with a SQL Server backend coming in at about £60,000. As they had no SQL Server space available and the budget was very small, I decided to go with an unbound Access database using JET to store the data.
In this example Access/JET was the right choice; now if this had been a long-term system to support 500 users, of course the web app would be the way to go. It's horses for courses at the end of the day, and people should not let their prejudices affect their business decisions.
Ah. Never. Full stop. Too many limitations in general. Backups are problematic and stability CAN be problematic. Especially if you compare Access (a file-share database) against a web app, you are in for a world of pain in pretty much every scenario.
Access is usable for small single-place DB stuff (loading data before moving it off to a SQL Server) or as a front end for SQL Server (i.e. Access not actually storing any data). The latter is also pretty much the direction MS is taking Access: a front-end technology.
My knowledge is quite old now, but it always used to be very good for reports - very quick, powerful, and much easier than, e.g. Crystal Reports.
If you just want to hack something out quickly, it's probably a bit easier to do at least some kinds of applications with Access than a web front end with a SQL (or whatever) backend. It is still being developed (Access 2010 was released within the last month or two, if memory serves).
I haven't used the new version to say for sure, but the last time I did any looking, it seemed like new editions were mostly updating the look to go along with the latest version of office, cleaning up semi-obvious problems and bugs, but not a whole lot more than that. I wouldn't say it's dead, but I don't see much to indicate that it's really one of Microsoft's top priorities either.
Trying to pin down its biggest limitations is hard. The "JET Red" storage engine it's based around doesn't scale well at all -- but it was never really intended to. Its basic design is intended to conflate the application with the data being stored, so it's relatively difficult to just treat it as raw data to be used for other purposes. I don't know if it's still the case, but at least at one time, the database format was also fairly fragile -- file corruption was semi-common, and in most cases about the only hope of recovery was a backup file (which meant, at best, losing everything that had happened since the last backup -- and some forms of corruption weren't immediately obvious, so corrupt backups sometimes happened as well).
It comes down to this: if one of the Wizards built into access can produce exactly what you want, or at least something really close, and you only ever need to support a few users with the result, it might be a reasonable choice in a few situations. If that doesn't (all) apply, there are almost certain to be better alternatives.
The Jet database engine used by Access is considered deprecated by Microsoft, though it is still supported. The limits of an .mdb database and the newer .accdb type are described here.
Even SQL Server Express would be better in almost every case.
Someone with very limited knowledge of RDBMS/programming can still throw a quick front end together in Access (ideally using an external database); that's really the main use for it.

database synchronization - MS Access

I have an issue at the moment where multiple (same-schema) Access 2003 databases are used on laptops.
I need to find an automated way to synchronize the data into a central Access database.
Data on the laptops is only appended to, so update/delete operations won't be an issue.
Which tools will allow me to do this easily?
What factors will affect the decision on the best tool or solution?
It is possible to use the Jet replication built into Access, but I will warn you, it is quite flaky. It will also mess up your PK on whatever tables you do it on because it picks random signed integers to try and avoid key collisions, so you might end up with -1243482392912 as your next PK on a given record. That's a PITA to type in if you're doing any kind of lookup on it (like a customer ID, order number, etc.) You can't automate Access synchronization (maybe you can fake something like it by using VBA, but still, that will only run when the database is opened).
The way I would recommend is to use SQL Server 2005/2008 on your "central" database and use SQL Server Express Editions as the back-end on your "remote" databases, then use linked tables in Access to connect to these SSEE databases and replication to sync them. Set up either merge replication or snapshot replication with your "central" database as the publisher and your SSEE databases as subscribers. Unlike Access Jet replication, you can control the PK numbering but for you, this won't be an issue as your subscribers will not be pushing changes.
Besides the scalability that SQL server would bring, you can also automate this using the Windows Synchronization manager (if you have synchronized folders, that's the annoying little box that pops up and syncs them when you logon/logoff), and set it up so that it synchronizes at a given interval, on startup, shutdown, or at a time of day, and/or when computer is idle, or only synchronizes on demand. Even if Access isn't run for a month, its data set can be updated every time your users connect to the network. Very cool stuff.
Access Replication can be awkward, and as you only require append queries with some checking, it would probably be best to write something yourself. If the data collected by each laptop cannot overlap, this may not be too difficult.
You will need to consider the primary keys. It may be best to incorporate the user or laptop name in the key to ensure that records relate correctly.
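A hedged sketch of that roll-your-own approach in SQL (all names invented): each laptop's rows carry the laptop name, and the central table only appends combinations it has not seen yet, so the composite key prevents collisions.

    -- laptop_upload is a staging copy of one laptop's table.
    INSERT INTO central.readings (laptop_name, local_id, reading, taken_at)
    SELECT l.laptop_name, l.local_id, l.reading, l.taken_at
    FROM   laptop_upload AS l
    LEFT JOIN central.readings AS c
           ON c.laptop_name = l.laptop_name
          AND c.local_id    = l.local_id
    WHERE  c.local_id IS NULL;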
The answers in this thread are filled with misinformation about Jet Replication from people who obviously haven't used it and are just repeating things they've heard, or are attributing problems to Jet Replication that actually reflect application design errors.
"It is possible to use the Jet replication built into Access, but I will warn you, it is quite flaky."
Jet Replication is not flaky. It is perfectly reliable when used properly, just like any other complex tool. It is true that certain things that cause no problems in a non-replicated database can lead to issues when replicated, but that stands to reason because of the nature of what replication by any database engine entails.
"It will also mess up your PK on whatever tables you do it on because it picks random signed integers to try and avoid key collisions, so you might end up with -1243482392912 as your next PK on a given record. That's a PITA to type in if you're doing any kind of lookup on it (like a customer ID, order number, etc.)"
Surrogate Autonumber PKs should never be exposed to users in the first place. They are meaningless numbers used for joining records behind the scenes, and if you're exposing them to users IT'S AN ERROR IN YOUR APPLICATION DESIGN.
If you do need sequence numbers, you'll have to roll your own and deal with the issue of how to prevent collisions between your replicas. But that's an issue for replication in any database engine. SQL Server offers the capability of allocating blocks of sequence numbers for individual replicas at the database engine level and that's a really nice feature, but it comes at the cost of increased administrative overhead from maintaining multiple SQL Server instances (with all the security and performance issues that entails). In Jet Replication, you'd have to do this in code, but that's hardly a complicated issue.
Another alternative would be to use a compound PK, where one column indicates the source replica.
But this is not some flaw in the Replication implementation of Jet -- it's an issue for any replication scenario with a need for meaningful sequence numbers.
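For illustration, a compound PK along those lines might look like this (names invented):

    CREATE TABLE orders (
        replica_id  VARCHAR(16) NOT NULL,  -- which replica created the row
        seq_no      INTEGER     NOT NULL,  -- that replica's own sequence
        customer_id INTEGER     NOT NULL,
        PRIMARY KEY (replica_id, seq_no)   -- no cross-replica collisions
    );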
"You can't automate Access synchronization (maybe you can fake something like it by using VBA, but still, that will only run when the database is opened)."
This is patently untrue. If you install the Jet synchronizer you can schedule synchs (direct, indirect or Internet synchs). Even without it, you could schedule a VBScript to run periodically and do the synchronization. Those are just two methods of accomplishing automated Jet synchronization without needing to open your Access application.
A quote from MS documentation: "Use Jet and Replication Objects"
JRO is really not the best way to manage Jet Replication. For one, it has only one function in it that DAO itself lacks, i.e., the ability to initiate an indirect synch in code. But if you're going to add a dependency to your app (JRO requires a reference, or can be used via late binding), you might as well add a dependency on a truly useful library for controlling Jet Replication, and that's the TSI Synchronizer, created by Michael Kaplan, once the world's foremost expert on Jet Replication (who has since moved on to internationalization as his area of concentration). It gives you full programmatic control of almost all the replication functionality that Jet exposes, including scheduling synchs, initiating all kinds of synchronization, and the much-needed MoveReplica command (the only legal way to move or rename a replica without breaking replication).
JRO is one of the ugly stepchildren of Microsoft's aborted ADO-Everywhere campaign. Its purpose is to provide Jet-specific functionality to supplement what is supported in ADO itself. If you're not using ADO (and you shouldn't be in an Access app with a Jet back end), then you don't really want to use JRO. As I said above, it adds only one function that isn't already available in DAO (i.e., initiating an indirect synch). I can't help but think that Microsoft was being spiteful by creating a standalone library for Jet-specific functionality and then purposefully leaving out all the incredibly useful functions that they could have supported had they chosen to.
Now that I've disposed of the erroneous assertions in the answers offered above, here's my recommendation:
Because you have an append-only infrastructure, do what @Remou has recommended and set up something to manually send the new records wherever they need to go. And he's right that you still have to deal with the PK issue, just as you would if you used Jet Replication. This is necessitated by the requirement to add new records in multiple locations, and is common to all replication/synchronization applications.
But one caveat: if the add-only scenario changes in the future, you'll be hosed and have to start from scratch or write a whole lot of hairy code to manage deletes and updates (this is not easy -- trust me, I've done it!). One advantage of just using Jet Replication (even though it's most valuable for two-way synchronizations, i.e., edits in multiple locations) is that it will handle the add-only scenario without any problems, and then easily handle full merge replication should it become a requirement in the future.
Last of all, a good place to start with Jet Replication is the Jet Replication Wiki. The Resources, Best Practices and Things Not to Believe pages are probably the best places to start.
You should read up on Access Database Replication, as there is some information out there.
But I think that in order for it to work correctly with your application, you will have to roll out a custom-made solution using the methods and properties available for that end.
Use Jet and Replication Objects (JRO) if you require programmatic control over the exchange of data and design information among members of the replica set in Microsoft Access databases (.mdb files only). For example, you can use JRO to write a procedure that automatically synchronizes a user's replica with the rest of the set when the user opens the database. To replicate a database programmatically, the database must be closed.
If your database was created with Microsoft Access 97 or earlier, you must use Data Access Objects (DAO) to programmatically replicate and synchronize it.
You can create and maintain a replicated database in previous versions of Microsoft Access by using DAO methods and properties. Use DAO if you require programmatic control over the exchange of data and design information among members of the replica set. For example, you can use DAO to write a procedure that automatically synchronizes a user's replica with the rest of the set when the user opens the database.
You can use the following methods and properties to create and maintain a replicated database:
MakeReplica method
Synchronize method
ConflictTable property
DesignMasterID property
KeepLocal property
Replicable property
ReplicaID property
ReplicationConflictFunction property
Microsoft Jet provides these additional methods and properties for creating and maintaining partial replicas (replicas that contain a subset of the records in a full replica):
ReplicaFilter property
PartialReplica property
PopulatePartial method
You should definitely read the Synchronizing Data part of the documentation.
I used replication in Access 2000 for years, until forced to upgrade to Access 2007 (when it went away). The most problematic issue we ran into, at the enterprise level, was managing the CONFLICTS. If not managed in a timely manner, or if there are too many, users get frustrated and the data becomes unreliable.
Replication did work well when our remote sites were not always connected to the internet. This allowed them to work with their data, and synchronize when they could. At least twice daily.
We installed a separate database on the remote computers that managed the synchronization, so the user only had to click an icon on their desktop to invoke the synchronization.
The user had a separate button to push/pull feeds via a designated FTP file that would update from the legacy systems.
This process worked quite well, as we had 30 of these "nodes" working around the country, managing their data and updating to the FTP servers.
If you are seriously considering this path, let me know and I can send you my documentation.
You can write your own synchronization software that connects to the laptop, selects the diff from its DB, and inserts it into the master.
How easy this operation will be depends on your data schema (if you have many tables with FKs... you will need to do it smartly).
I think it will be most efficient if you write it yourself.
Automating this kind of behavior is called replication, and Access apparently supports that, but I've never seen it implemented.
As I guess the laptop is not connected to the main DB most of the time, it is not a good idea anyway (to replicate data).
If you do look for a 3rd-party tool, look for something that can easily diff the tables before copying, and that can of course do it incrementally.
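If you do roll your own incremental diff, a high-water-mark scheme is the usual shape. A sketch in generic SQL, assuming invented names and that both databases are reachable from one connection:

    -- Copy only rows added since the last sync for this laptop...
    INSERT INTO central.records (laptop_id, local_id, payload)
    SELECT @laptop_id, r.local_id, r.payload
    FROM   laptop.records AS r
    WHERE  r.local_id > (SELECT last_synced_id
                         FROM   central.sync_state
                         WHERE  laptop_id = @laptop_id);

    -- ...then advance the mark.
    UPDATE central.sync_state
       SET last_synced_id = (SELECT MAX(local_id) FROM laptop.records)
     WHERE laptop_id = @laptop_id;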
FWIW:
Autonumbers. I agree with David - they should never be exposed. To remove that temptation, I use a Random autonumber.
Replication. I used this extensively some years back, with scheduled syncs, and using GUIDs as the PK. I repeatedly found that any hiccups over the network corrupted the replicas, with the result that I had to salvage data, and re-issue replicas. Painful!

Setting up multiple MySQL databases with scalability options

I need to set up a MySQL environment that will support adding many unique databases over time (thousands, actually).
I assume that at some point I will need to start adding MySQL servers, and would like my environment to be prepared for the case beforehand, to make the transition to a 2nd, 3rd, 100th server easy.
And just to make it interesting, it would be very convenient if the solution were modeled so that the application that queries the databases sends all queries to a single address and receives a result. It should be unaware of the number and location of the servers. The database name is unique and can be used to figure out which server holds that database.
I've done some research, and MySQL Proxy pops out as the main candidate, but I haven't been able to find anything specific about making it perform as described above.
Anyone?
Great question. I know of several companies that have done this (Facebook jumps out as the biggest). None are happy, but alternatives kind of suck, too.
More things for you to consider -- what happens when some of these databases or servers fail? What happens when you need to do a cross-database query (and you will, even if you don't think so right now).
Here's the FriendFeed solution: http://bret.appspot.com/entry/how-friendfeed-uses-mysql
It's a bit "back-asswards" since they are basically using MySQL as a glorified key-value store. I am not sure why they don't just cut out the middleman and use something like BerkeleyDB for storing their objects. Connection management, maybe? Seems like the MySQL overhead would be too high a price to pay for something that could be added pretty easily (famous last words).
What you are really looking for (I think) is a distributed shared-nothing database. Several have been built on top of open-source technologies like MySQL and PostgreSQL, but none are available for free. If you are in the buying mood, check out these companies: Greenplum, AsterData, Netezza, Vertica.
There is also a large number of various distributed key-value storage solutions out there. For lack of a better reference, here's a starting point: http://www.metabrew.com/article/anti-rdbms-a-list-of-distributed-key-value-stores/ .
Your problem sounds similar to one we faced - that you are acting as a white-label, and that each client needs to have their own separate database. Assuming this concept parallels yours, what we did was leverage a "master" database that stored the hostname and database name for the client (which could be cached in the application tier). The server the client was accessing could then dynamically shift its datasource to the required database. This allowed us to scale up to thousands of client databases, scattered across servers.
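A minimal sketch of such a "master" routing table (names invented); the application queries it once, caches the answer, and then opens its connection against the right server:

    CREATE TABLE client_routing (
        client_db VARCHAR(64)  PRIMARY KEY,           -- unique database name
        host      VARCHAR(255) NOT NULL,              -- server that holds it
        port      INT          NOT NULL DEFAULT 3306
    );

    -- At connect time:
    SELECT host, port FROM client_routing WHERE client_db = 'client_04217';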

Keep database information secure

There's this interesting problem I cannot solve myself. I will be very glad if you help me.
Here it is:
there are many client applications that send data records to one MySQL server.
A few data records are not very important, but the whole database is. (You can imagine it is the facebook DB :) )
Is there any way to ensure that
data from the DB won't be used by anyone but its true owner
the DB will preserve essential features such as sorting etc.
assuming that an attacker can mysteriously gain full access to the server?
You can't simply encrypt data client-side and store it encrypted, since the client application is widespread and an attacker can get the key from it.
Maybe adding some layers between the application and the DB, or combining client- and server-side encryption methods (using MySQL's built-in functions), will help?
As long as the database needs to start up and run unattended, you can't hide the keys from a compromised root account (= 'mysterious full access'). Anywhere the database could possibly store the master key(s), root will also have access. No amount of business layers or combination of client-server encryption will ever circumvent this simple fact. You can obfuscate it all you like, but if the prize is worth it, then root can get it.
One alternative is to require a manually assisted start-up process, i.e. a human enters the master key password during server boot (or a hardware module PIN), but this is extremely hard to maintain in the real world; it requires a highly trusted employee to be on pager call to log in and start the database whenever there is downtime.
Solutions like TPM offer protection against physical loss of the server, but not against a compromised root.
Your root is as important as the database master key(s), so you must protect your root with the same care as the keys. This means setting up operating procedures, screening who has access to root, rotating the root password and so on and so forth. The moment someone gains 'mysteriously full access' the game is pretty much lost.
I pretty much agree with Remus Rusanu's answer.
Maintaining good security is hard, but you can always pay attention to what you do. Whenever you access sensitive information, carefully verify your query and make sure it cannot be spoofed or exploited to gain access to information which shouldn't be accessible by a given client.
If you can rule out physical access to the box by the attacker, then there are several things you can do to harden your security. First of all, I'd configure SSH to only allow connections from a specific IP or IP range (and of course no root access). You can also do that on your firewall. This would mean that the weakest link is your server (the application which receives data/requests from clients; it could be the web server and whatever scripts you use). Now you "just" have to make sure that no one can exploit your server. There are a lot more things you could do to harden your system, but I think it would be more appropriate to ask on ServerFault.
If you're worried about physical access to the PC, there isn't really much you can do, and most of it has already been mentioned in Remus' answer.
There's also another option. This is by far the most ineffective method from a speed and ease-of-development viewpoint, but it would partly protect you from any kind of attack on your server (including physical ones). It's actually quite simple, but a bit hard to implement: only store encrypted data in the database and handle all encryption/decryption client-side using JavaScript or Flash. Only the client will have the key, and data will always be transferred over the wire and stored in encrypted form. The biggest drawback is that once the client forgets the key, there's no way back; the data is inaccessible.
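The storage side of that approach is deliberately simple, which is the point: the server only ever holds opaque ciphertext. A sketch with invented names:

    CREATE TABLE user_vault (
        user_id    INT  NOT NULL,
        item_id    INT  NOT NULL,
        ciphertext BLOB NOT NULL,  -- encrypted client-side; no key on server
        PRIMARY KEY (user_id, item_id)
    );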
Of course it's all a matter of time, money and effort -- with enough of these, anything can be broken.
I've no idea if such a thing exists in MySQL, but row-level security in Oracle enables you to define access rights at row level IN the database: that means, regardless of what tool is being used to access the data, the user only ever sees the same selection as determined by his/her credentials.
So if my username/role is only allowed to see data limited by some WHERE clause, that clause can be appended to each and every SELECT that hits the database, regardless of whether it comes from a web app, a SQL querying tool, or whatever.
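MySQL has no built-in equivalent, but you can approximate the "appended WHERE clause" with a view keyed to the connecting account, assuming each client connects as its own MySQL user. A hedged sketch (table and column names invented):

    -- The view silently appends the filter to every SELECT;
    -- USER() evaluates to the connecting client at query time.
    CREATE VIEW my_records AS
    SELECT id, payload
    FROM   all_records
    WHERE  owner = SUBSTRING_INDEX(USER(), '@', 1);

    -- Grant clients access to the view only, never the base table.
    GRANT SELECT ON mydb.my_records TO 'some_client'@'%';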
I would use a 2nd layer and a firewall between them.
So you have: firewall ---- web server --- firewall --- 2nd-layer server --- firewall --- DB.
It would be wise to use different platforms between layers; it all depends on how important the data is.
Anyway, the web server should have no direct access to the DB.
About preserving sorting: if you use a file-encryption mechanism, it will only protect you from hard-drive theft.
If you encrypt the data itself, and you do it smartly (storing the keys in a separate place), you will not lose lookups, since you will search for the encrypted entry rather than the real one -- but now you have another thing to protect...
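A hedged MySQL illustration of that last point (names invented): with a deterministic function such as AES_ENCRYPT under a fixed key, you search for the ciphertext of the value rather than the value itself, so exact-match lookups keep working even though the stored data is unreadable.

    -- @app_key is fetched from the separate key store at runtime.
    SELECT user_id
      FROM user_private
     WHERE email_enc = AES_ENCRYPT('alice@example.com', @app_key);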