Can splitting .MDB files into segments help with stability?

Is this a realistic solution to the problems associated with larger .mdb files:
split the large .mdb file into smaller .mdb files
have one 'central' .mdb containing links to the tables in the smaller .mdb files
How easy would it be to make this change to an .mdb backed VB application?
Could the changes to the database be done so that there are no changes required to the front-end application?

Edit Start
The short answer is "No, it won't solve the problems of a large database."
You might be able to overcome the DB size limitation (~2GB) by using this trick, but I've never tested it.
Typically, with large MS Access databases, you run into problems with speed and data corruption.
Speed
Is it going to help with speed? You still have the same amount of data to query and search through, and the same algorithm. So all you are doing is adding the overhead of having to open up multiple files per query. So I would expect it to be slower.
You might be able to speed it up by reducing the time it takes to get the information off of disk. You can do this in a few ways:
faster drives
put the MDB on a RAID (anecdotally, RAID 1+0 may be faster)
split the MDB up (as you suggest) into multiple MDBs, and put them on separate drives (maybe even separate controllers); see the linking sketch after this list.
(how well this would work in practice vs. theory, I can't tell you - if I was doing that much work, I'd still choose to switch DB engines)
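For what it's worth, the linking side of the split is simple. A minimal VBA sketch, assuming hypothetical backend paths and table names:

    ' Attach the Orders table from one of the smaller backend files
    ' to the central MDB as a linked table (repeat per table/backend).
    DoCmd.TransferDatabase acLink, "Microsoft Access", _
        "C:\Data\Backend1.mdb", acTable, "Orders", "Orders"

Because the front end only ever sees the linked table names, the application should not need changes as long as those names stay the same.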
Data Corruption
MS Access has a well-deserved reputation for data corruption. To be fair, I haven't had it happen to me for some time. This may be because I've learned not to use it for anything big; or it may be because MS has put a lot of work into trying to solve these problems; or, more likely, a combination of both.
The prime culprits in data corruption are:
Hardware: e.g., cosmic rays, electrical interference, iffy drives, iffy memory and iffy CPUs - I suspect MS Access does not have as good error handling/correcting as other databases do.
Networks: lots of collisions on a saturated network can confuse MS Access and convince it to scramble important records; as can sub-optimally implemented network protocols. TCP/IP is good, but it's not invincible.
Software: As I said, MS has done a lot of work on MS Access over the years; if you are not up to date on your patches (MS Office and OS), get up to date. Problems typically happen when you hit extremes like the 2GB limit (some bugs are hard to test and won't manifest themselves except at the edge cases, which makes them less likely to have been seen or corrected, unless reported by a motivated user to MS).
All this is exacerbated with larger databases, because larger databases typically have more users and more workstations accessing them. Altogether, the larger database and the number of users multiply to provide more opportunity for corruption to happen.
Edit End
Your best bet would be to switch to something like MS SQL Server. You could start by migrating your data over, and then linking one MDB to it. You get the stability of SQL Server and most (if not all) of your code should still work.
Once you've done that, you can then start migrating your VB app(s) over to use SQL Server instead.
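If you do take that route, re-pointing the front end at SQL Server is again just a matter of linked tables, this time over ODBC. A rough DAO sketch (the server, database and table names here are invented):

    ' Link a SQL Server table into the MDB via ODBC using DAO.
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    Set tdf = db.CreateTableDef("Customers")   ' local name the app already uses
    tdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;" & _
                  "DATABASE=MyDb;Trusted_Connection=Yes"
    tdf.SourceTableName = "dbo.Customers"
    db.TableDefs.Append tdf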

If you have more data than fits in a single MDB then you should get a different database engine.
One main issue that you should consider is that you can't enforce referential integrity between tables stored in different MDBs. That should be a show-stopper for any actual database.
If it's not, then you probably don't have a proper schema designed in the first place.

For reasons more adequately explained by CodeSlave, the answer is no, and you should switch to a proper relational database.
I'd like to add that this does not have to be SQL Server. Quite possibly the reason why you are reluctant to do this is one of cost, SQL Server being quite expensive to obtain and deploy if you are not in an educational or charitable organisation (when it's remarkably cheap and then usually a complete no-brainer).
I've recently had extremely good results moving an Access system from MDB to MySQL. At least 95% of the code functioned without modification, and of the remaining 5% most was straightforward, with only a few limited areas where significant effort was required. If you have sloppy code (not closing connections or releasing objects) then you'll need to fix that, but generally I was remarkably surprised how painless this approach was. Certainly, if the reason you are reluctant to move to a database backend is one of cost, I would highly recommend that you not attempt to manipulate .mdb files and instead go for the more robust database solution.

Hmm, well, if the data is going through this central DB then there is still going to be a bottleneck in there. The only reason I can think of why you would do this is to get around the size limit of an Access .mdb file.
Having said that, if the business functions can be split off into separate applications, then that might be a good option, with a central DB containing all the linked tables for reporting purposes. I have used this before to good effect.

Related

What is the best way to prevent Access database bloat

Intro:
I am creating an Access database system that will be rolled out with multi-user functionality.
But as I am creating this database in Access 2000 (old school, I know) there are quite a lot of bugs and random mysterious problems that occur when my database gets past 40-60MB.
My question:
Has anyone got a good solution to how I can shrink this down or to prevent the bloat?
Details:
I am using many local tables combined with SQL Tables and my front-end links to a back-end SQL Server.
I have already tried compact and repair, but it only ever shrinks it to about 15MB, and after the user has used the database a few times the bloat quickly expands to over 50-60MB!
Let me know if more detail is needed but that is the rough outline of my problem.
Many Thanks!
Here are some ideas for you to follow.
You said you also have a lot of local tables. Split the local tables off into yet another Access database. So you'll have 2 back-ends (1 SQL Server & 1 Access), and the front end.
Create a batch file that opens your local tables backend database with the /compact option. So, it will look something like this:
"C:\Prog...\Microsoft...\Officexx\ C:\ProjectX_backend.mdb /compact"
Then run this batch file on a daily basis using scheduled tasks. Your frontend should never need compacting unless you edit it in any way.
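If a batch file is awkward in your environment, the same compaction can be driven from VBA. A sketch, run from a different database (a file can't compact itself while open); the paths are hypothetical:

    ' Compact the closed backend into a temporary file, then swap it in.
    Dim src As String, tmp As String
    src = "C:\Data\ProjectX_backend.mdb"
    tmp = "C:\Data\ProjectX_backend_compact.mdb"
    DBEngine.CompactDatabase src, tmp
    Kill src            ' remove the bloated original
    Name tmp As src     ' rename the compacted copy into place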
If you are stuck with 2000, which has quite a bad reputation, then you have to dig down into your application and find out what creates the bloat. The most common reason is bulk inserts followed by deletes. Other reasons are the use of OLE Object fields, and programmatic changes to form and other objects. You really have to go through your application and find the specific cause.
An mdb file that is only connected to a backend server and does not make changes to local objects should not grow.
As for your random issues, besides some lack of stability in the 2000 version, you should look into bad RAM in the computers, bad hard drives, and broken network controllers if your mdb file is shared on the network.

When is MS Access better than a web app backed by an RDBMS?

I haven't used Access since high school, years ago.
What kind of problem does it solve well, or even better than a web app backed by a real RDBMS?
Is it still actively developed? Or is it pretty dead to MS already?
What are its biggest limitations?
Update:
What resources should one use to learn how to develop an MS Access solution for a small business?
Thanks
First and foremost, Access IS a real RDBMS. What it isn't is a client-server RDBMS.
The only implications of this are that there is a throttle on the number of simultaneous connections and the security of the data needs careful thought.
Amongst other things, Access is also an IDE that uses VBA as its language.
This means that in Access you can write Front End apps that link to either a SQL Server back end, an Access back end, or a SharePoint back end. So it is one very versatile cookie.
Its limitations are:
Security: if you are using an Access Back End, take note that it doesn't have the built in security of a client server database. In any app, security is a function of the cost and the requisite secrecy of the data.
Number of simultaneous connections. If you are not careful, Access will struggle with more than 10 people trying to update data simultaneously. You can extend that, but you need to know what you are doing to guarantee results. To put a number to it, let's say 50 simultaneous connections.
Like most databases, it is liable to corruption.
NOTE: when referring to Access as a database, you should really be referring to the "database engine", JET or ACE, depending on the version and, for Access 2007+, dictated by the file format that you use. In other words, if you are storing data in Access tables, you are using either JET or ACE. However, if you are using LINKED TABLES, that are in, for example, SQL Server, then you are not, strictly speaking, using JET or ACE security for those tables.
Access SQL doesn't allow you to write stored procedures (you can write functions in VBA), in the sense that Access SQL only allows imperative statements as opposed to procedural statements (e.g., control-flow statements). You can introduce some "procedural code" using VBA functions, but this is very different from using SQL statements.
You back up the file itself. You can write code to do this at the click of a button.
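As a rough sketch of that button (the paths and control name are hypothetical, and the file being copied should not be in use):

    ' Copy the backend file to a timestamped backup.
    Private Sub cmdBackup_Click()
        Dim src As String, dst As String
        src = "C:\Data\MyApp.mdb"
        dst = "C:\Backups\MyApp_" & Format(Now, "yyyymmdd_hhnnss") & ".mdb"
        FileCopy src, dst
    End Sub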
Security is always a function of cost. If you have data that is worth more than 100,000 US$ (either in loss of rights or legal liabilities if it is stolen and you have not shown due diligence in protecting it), then Access is probably not the answer. 100,000 is an arbitrary figure. The precise figure will depend on whether the data is insurable and the consequences of it being lost or stolen.
I.e., if the value of the data is the driving concern, then definitely don't use Access as a Back End. Whether you use it as a Front End is a matter of budget. For US$5,000 I have written apps that are still running 10 years later. They now need to port the back end to SQL Server because the volume of sensitive data has grown.
Access, when used within the above constraints AND when used by a professional Access developer (rather than some disgruntled fool who thinks he should be using "cooler" technologies), will produce very sophisticated, sturdy and reliable applications at a 10th of the cost of other systems. In such scenarios, Access is a total NO BRAINER.
Anything else will cost more, take longer and will only be as good as the person who writes the code and designs the UI.
I have an application (the first one I ever built in Access) that has run without problems for 10 years. We have extended it massively. I have moved into ASP.NET MVC, but Access is where I hail from and I have seen it work well.
So in summary: the number of users is relevant and the value or liabilities implicit in the data are the other deciding factor.
If the number of simultaneous users is low and the value/implicit liabilities of the data is low, then the choice is definitely Access.
However, get yourself a good developer.
EDITS/CLARIFICATIONS:
The above answer, like all answers, was written in haste in the middle of a working day. Some statements were a bit glib and generic and not written with a suitable degree of precision... However, when the comments made by others are reasonable, the author of the answer should edit the post and clarify.
1/
Access is a holy trinity. It is an IDE for writing forms and reports and functions to use in your queries. It "includes" a database engine (JET/ACE). It provides a Visual Interface onto the database engine that allows you to design queries, set up relationships between tables, etc.
It is usually referred to in its many roles as just Access, but precision does help when learning Access and getting the most out of it.
2/
Access can't use stored procedures, in the sense that Access SQL can only use imperative statements rather than procedural ones (e.g., control-flow statements). There is a reason, I have always thought, for calling them stored PROCEDURES.
3/
Not every Access app costs exactly 100,000. Nor is the budget of an Access app equal to the value of the data. That is obvious. The idea I was trying to convey was that if the data is worth more than a sum that can be reasonably insured, then don't use Access. Is that figure 100,000? According to Luke Chung and Clint Covington, ex program manager for Access, yes, but don't take their word for it. It really just means "a lot of money".
I have written an app for Medical Charities that still runs 10 years later after an initial budget of 5000. They have probably invested another 20,000 over the years. That kind of app is the Access sweet spot.
It all depends really. I will give you a quick example that happened to me recently. At work they needed a small system to capture some records from a group of about 15 users and pass about 15% of those records to another team of about 5 or so to do additional tasks on those records. This was a one-off project that was going to last about 4 months.
The official IT solution was, of course, a web app with a SQL Server backend, coming in at about £60,000. As they had no SQL Server space available and the budget was very small, I decided to go with an unbound Access database using Jet to store the data.
In this example Access/Jet was the right choice; now, if this had been a long-term system to support 500 users, of course the web app would be the way to go. It's horses for courses at the end of the day, and people should not let their prejudices affect their business decisions.
Ah. Never. Point: too many limitations in general. Backups are problematic, and stability CAN be problematic. Especially if you compare Access (a file-share database) against a web app, you are in for a world of pain in pretty much every scenario.
Access is usable for small single-place DB stuff (loading data before moving it off to a SQL Server) or as a front end for SQL Server (i.e., Access not actually storing any data). The latter is also pretty much the direction MS is taking Access: a front-end technology.
My knowledge is quite old now, but it always used to be very good for reports - very quick, powerful, and much easier than, e.g. Crystal Reports.
If you just want to hack something out quickly, it's probably a bit easier to do at least some kinds of applications with Access than a web front end with a SQL (or whatever) backend. It is still being developed (Access 2010 was released within the last month or two, if memory serves).
I haven't used the new version to say for sure, but the last time I did any looking, it seemed like new editions were mostly updating the look to go along with the latest version of office, cleaning up semi-obvious problems and bugs, but not a whole lot more than that. I wouldn't say it's dead, but I don't see much to indicate that it's really one of Microsoft's top priorities either.
Trying to pin down its biggest limitations is hard. The "JET Red" storage engine it's based around doesn't scale well at all -- but it was never really intended to. Its basic design is intended to conflate the application with the data being stored, so it's relatively difficult to just treat it as raw data to be used for other purposes. I don't know if it's still the case, but at least at one time, the database format was also fairly fragile -- file corruption was semi-common, and in most cases about the only hope of recovery was a backup file (which meant, at best, losing everything that had happened since the last backup -- and some forms of corruption weren't immediately obvious, so corrupt backups sometimes happened as well).
It comes down to this: if one of the Wizards built into access can produce exactly what you want, or at least something really close, and you only ever need to support a few users with the result, it might be a reasonable choice in a few situations. If that doesn't (all) apply, there are almost certain to be better alternatives.
The Jet database engine used by Access is considered deprecated by Microsoft, though it is still supported. The limits of an .mdb database and the newer .accdb type are documented by Microsoft.
Even SQL Server Express would be better in almost every case.
Someone with very limited knowledge of RDBMSs/programming can still throw a quick frontend together in Access (ideally using an external database); that's really the main use for it.

What database systems should a startup company consider?

Right now I'm developing the prototype of a web application that aggregates a large number of text entries from a large number of users. This data must be frequently displayed back and often updated. At the moment I store the content inside a MySQL database and use the NHibernate ORM layer to interact with the DB. I've got tables defined for users, roles, submissions, tags, notifications and so on. I like this solution because it works well and my code looks nice and sane, but I'm also worried about how MySQL will perform once the size of our database reaches a significant level. I feel that it may struggle to perform join operations fast enough.
This has made me think about non-relational database systems such as MongoDB, CouchDB, Cassandra or Hadoop. Unfortunately I have no experience with any of them. I've read some good reviews of MongoDB and it looks interesting. I'm happy to spend the time and learn if one turns out to be the way to go. I'd much appreciate anyone offering points or issues to consider when going with a non-relational DBMS.
The other answers here have focused mainly on the technical aspects, but I think there are important points to be made that focus on the startup company aspect of things:
Availability of talent. MySQL is very common, and you will probably find it easier (and more importantly, cheaper) to find developers for it, compared to the more rarefied database systems. This larger developer base will also mean more tutorials, a more active support community, etc.
Ease of development. Again, because MySQL is so common, you will find it is the db of choice for a great many systems / services. This common ground may make any external integration a little easier.
You are preparing for a situation that may never exist, and one that is manageable if it does. Very few businesses (never mind startups) come close to MySQL's limits, and with all due respect (and I am just guessing here), the likelihood that your startup will ever hit the sort of data throughput needed to cripple a properly structured, well-resourced MySQL db is almost zero.
Basically, don't spend your time ( == money) worrying about which db to use, as MySQL can handle a lot of data, is well proven and well supported.
Going back to the technical side of things... Something that will have a far greater impact on the speed of your app than the choice of db is how efficiently data can be cached. An effective cache can have dramatic effects on reducing db load and speeding up the general responsiveness of an app. I would spend your time investigating caching solutions and making sure you are developing your app in such a way that it can make the best use of those solutions.
FYI, my caching solution of choice is memcached.
So far no one has mentioned PostgreSQL as an alternative to MySQL on the relational side. Be aware that the MySQL libs are pure GPL, not LGPL. That might force you to release your code if you link to them, although maybe someone with more legal experience could tell you the implications better. On the other hand, linking to a MySQL library is not the same as just connecting to the server and issuing commands; you can do that with closed source.
PostgreSQL is usually the best free replacement for Oracle, and the BSD license should be more business-friendly.
Since you prefer a non-relational database, consider that the transition will be more dramatic. If you ever need to customize your database, you should also consider the license type factor.
There are three things that really have a deep impact on which database is your best choice, and you do not mention them:
The size of your data or if you need to store files within your database.
A huge number of reads and very few (even restricted) writes. In that case, more than a database, you need a directory such as LDAP.
The importance of data distribution and/or replication. Most relational databases can be replicated more or less well, but because of their concept/design they do not handle data distribution as well... but will you handle so much data that it does not fit into one server, or have access rights that need special separate/extra servers?
However, most people will go for a non-relational database just because they do not like learning SQL.
What do you think is a significant amount of data? MySQL, and basically most relational database engines, can handle rather large amounts of data with proper indexes and a sane database schema.
Why don't you try out how MySQL behaves with a bigger amount of data in your setup? Make some scripts that generate realistic data into a MySQL test database, generate some load on the system, and see if it is fast enough.
Only when it is not fast enough should you start considering optimizing the database and changing to a different database engine.
Be careful with NHibernate; it is easy to make a solution that is nice and easy to code with but has bad performance with large amounts of data. For example, whether to use lazy or eager fetching with associations should be carefully considered. I don't mean that you shouldn't use NHibernate, but make sure that you understand how NHibernate works, for example what the "n + 1 selects" problem means.
Measure, don't assume.
Relational databases and NoSQL databases can both scale enormously, if the application is written right in each case, and if the system it runs on is properly tuned.
So, if you have a use case for NoSQL, code to it. Or, if you're more comfortable with relational, code to that. Then, measure how well it performs and how it scales, and if it's OK, go with it, if not, analyse why.
Only once you understand your performance problem should you go searching for exotic technology, unless you're comfortable with that technology or want to try it for some other reason.
I'd suggest you try out each db and pick the one that makes it easiest to develop your application. Go to http://try.mongodb.org to try MongoDB with a simple tutorial. Don't worry as much about speed since at the beginning developer time is more valuable than the CPU time.
I know that many MongoDB users have been able to ditch their ORM and their caching layer. Mongo's data model is much closer to the objects you work with than relational tables, so you can usually just directly store your objects as-is, even if they contain lists of nested objects, such as a blog post with comments. Also, because Mongo is fast enough for most sites as-is, you can avoid dealing with the complexities of caching and generally deliver a more real-time site. For example, Wordnik.com reported 250,000 reads/sec and 100,000 inserts/sec with a 1.2TB / 5 billion object DB.
There are a few ways to connect to MongoDB from .Net, but I don't have enough experience with that platform to know which is best:
NoRM: http://wiki.github.com/atheken/NoRM/
MongoDB-CSharp: http://github.com/samus/mongodb-csharp
Simple-MongoDB: http://code.google.com/p/simple-mongodb/
Disclaimer: I work for 10gen on MongoDB so I am a bit biased.

Will Access support 35-40 users writing to an Access database?

We are looking to have about 35-40 people writing to an Access database via script on a shared drive. The metrics break down to them needing to write about 3-7 times an hour. Would Access support this without going ape on me?
Yes, I would love to do this as a SQL Server database, but that means going through massive amounts of red tape, meetings, paperwork, etc., that I would prefer not to bother with.
Could you not make them go with the free edition of SQL Server Express without the red tape?
In answer to your question, though: I've seen Access cause big problems in environments with this many users, although that was pre-2007. I dunno how much it has changed.
If it were me, I'd avoid Access at all cost.
Could it? Yes. If you are very careful and perform locking and ensure that nobody steps on anybody else. Access is really not designed for any form of concurrency. I know of one place that managed to make it work in a very concurrent environment, but that environment basically logged everything and if the DB clobbered itself, it'd restore from the last backup and replay against the Access file automatically, so that the failures were transparent. I would not recommend following that course of action...
Should you do it? No. Is there any reason that you cannot use something like PostgreSQL or MySQL?
Yes, it would work. No, it's not a good idea.
Access would be able to handle the load, as long as those 35-40 people aren't all trying to access the database at once. It'll quickly bog down when you start having more than a couple of concurrent users, particularly if those users are all trying to update something.
The problem is that it isn't safe. You need to have the entire database file accessible on a network share, where all users will be able to write to it. You'll have multiple instances of Access trying to read and modify the file at the same time, and unless you are very careful with locking, it's quite possible for the database to become damaged or corrupt.
You'll also never be able to add any kind of access control beyond basic file permissions. You might not need it now, but internal databases often end up needing to be exposed to the wider world somehow.
It's not worth it. There are plenty of real RDBMS systems out there, for free, that are designed to handle this kind of thing. Why spend time trying to make Access work in such an environment, when you could just install SQL Server Express and be done with it? It has limitations, but if you're seriously considering Access, you're never going to be anywhere near those. Or use MySQL, PostgreSQL, Firebird...
I would avoid Access too. Have you ever thought about SQL CE? It should handle multiple users better, and it is a file just like Access.
7 * 40 = 280 writes per hour.
280 / 60 ≈ 4.7 per minute.
If your script is light, and if you don't read results too often, maybe...
Of course, I don't recommend you try. Meeting time! ;)
If the connections are opened only as long as needed to run the scripts, and you use transactions and have some retry logic built in when there's a conflict, there really oughtn't be too much of an issue.
If your script takes 1 second to do its update (that's a pretty long time in computer/database terms, of course), and there are 280 updates per hour, then if you were lucky enough that no two users simultaneously ran their scripts, you would still have 3,320 seconds each hour when the database was not open.
I don't see an issue, assuming that you know how to properly manage your connections and manage your Jet transactions.
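To make "transactions and retry logic" concrete, here is a hedged DAO sketch; the share path, SQL and retry parameters are all made up:

    ' Try an update up to five times, rolling back and backing off on a conflict.
    Dim ws As DAO.Workspace, db As DAO.Database
    Dim attempts As Integer, t As Single
    Set ws = DBEngine.Workspaces(0)
    Set db = ws.OpenDatabase("\\server\share\data.mdb")
    For attempts = 1 To 5
        On Error Resume Next
        ws.BeginTrans
        db.Execute "UPDATE Jobs SET Status = 'Done' WHERE ID = 42", dbFailOnError
        If Err.Number = 0 Then
            ws.CommitTrans
            On Error GoTo 0
            Exit For
        End If
        ws.Rollback                ' conflict: undo, pause, then retry
        Err.Clear
        On Error GoTo 0
        t = Timer
        Do While Timer < t + 0.5   ' crude half-second back-off
            DoEvents
        Loop
    Next
    db.Close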
That volume is not a problem for Access so long as it's on a stable LAN or very high speed WAN. Wireless connections are also a bad idea.
I have several clients which are adding about 200K to 300K transactions per year into the systems. So that's about 1000 per work day. That's using both an Access front end and back end.
That said one of them will be upsizing shortly to SQL Server. I fired the other client when they hired a PHB (Dilbert's pointy haired boss.)
It's iffy. The first time the database crashes, you'll wish you had gone with SQL Server Express. And it will crash, eventually.
In my previous job we had a product with an Access database backend. We had some clients with 25 users. We refused clients who had 40 potential users because we knew from experience that the database would corrupt itself on a regular basis, and performance would be unacceptable.
The day we went to SQL Server Express, the performance of the application doubled, and the problems with crashing and corruption virtually disappeared.

Why should I care about compacting an MS Access .mdb file?

We distribute an application that uses an MS Access .mdb file. Somebody has noticed that after opening the file in MS Access the file size shrinks a lot. That suggests that the file is a good candidate for compacting, but we don't supply the means for our users to do that.
So, my question is, does it matter? Do we care? What bad things can happen if our users never compact the database?
In addition to making your database smaller, it'll recompute the indexes on your tables and defragment your tables which can make access faster. It'll also find any inconsistencies that should never happen in your database, but might, due to bugs or crashes in Access.
It's not totally without risk though -- a bug in Access 2007 would occasionally delete your database during the process.
So it's generally a good thing to do, but pair it with a good backup routine. With the backup in place, you can also recover from any 'unrecoverable' compact and repair problems with a minimum of data loss.
Make sure you compact and repair the database regularly, especially if the database application experiences frequent record updates, deletions and insertions. Not only will this keep the size of the database file down to the minimum - which will help speed up database operations and network communications - it performs database housekeeping, too, which is of even greater benefit to the stability of your data. But before you compact the database, make sure that you make a backup of the file, just in case something goes wrong with the compaction.
Jet compacts a database to reorganize the content within the file so that each 4 KB "page" (2KB for Access 95/97) of space allotted for data, tables, or indexes is located in a contiguous area. Jet recovers the space from records marked as deleted and rewrites the records in each table in primary key order, like a clustered index. This will make your db's read/write ops faster.
Jet also updates the table statistics during compaction. This includes identifying the number of records in each table, which will allow Jet to use the most optimal method to scan for records, either by using the indexes or by using a full table scan when there are few records. After compaction, run each stored query so that Jet re-optimizes it using these updated table statistics, which can improve query performance.
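That post-compaction step could look something like the following sketch (an illustration only; parameterized queries would need their parameters supplied):

    ' Run each saved SELECT query so Jet recompiles it against fresh statistics.
    ' (Names starting with "~" are hidden temporary queries, so skip them.)
    Dim db As DAO.Database, qdf As DAO.QueryDef, rs As DAO.Recordset
    Set db = CurrentDb
    For Each qdf In db.QueryDefs
        If Left(qdf.Name, 1) <> "~" And qdf.Type = dbQSelect Then
            Set rs = qdf.OpenRecordset(dbOpenSnapshot)
            rs.Close
        End If
    Next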
Access 2000, 2002, 2003 and 2007 combine the compaction with a repair operation if it's needed. The repair process:
1 - Cleans up incomplete transactions
2 - Compares data in system tables with data in actual tables, queries and indexes and repairs the mistakes
3 - Repairs very simple data structure mistakes, such as lost pointers to multi-page records (which isn't always successful and is why "repair" doesn't always work to save a corrupted Access database)
4 - Replaces missing information about a VBA project's structure
5 - Replaces missing information needed to open a form, report and module
6 - Repairs simple object structure mistakes in forms, reports, and modules
The bad things that can happen if the users never compact/repair the db is that it will become slow due to bloat, and it may become unstable - meaning corrupted.
Compacting an Access database (also known as a MS JET database) is a bit like defragmenting a hard drive. Access (or, more accurately, the MS JET database engine) isn't very good with re-using space - so when a record is updated, inserted, or deleted, the space is not always reclaimed - instead, new space is added to the end of the database file and used instead.
A general rule of thumb is that if your [Access] database will be written to (updated, changed, or added to), you should allow for compacting - otherwise it will grow in size (much more than just the data you've added, too).
So, to answer your question(s):
Yes, it does matter (unless your database is read-only).
You should care (unless you don't care about your users' disk space).
If you don't compact an Access database, over time it will grow much, much, much larger than the data inside it would suggest, slowing down performance and increasing the possibilities of errors and corruption. (As a file-based database, Access database files are notorious for corruption, especially when accessed over a network.)
This article on How to Compact Microsoft Access Database Through ADO will give you a good starting point if you decide to add this functionality to your app.
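The gist of that approach is a single JRO call. A sketch with hypothetical paths (note the destination file must not already exist):

    ' Compact app.mdb into a new file using JRO, late bound.
    Dim je As Object
    Set je = CreateObject("JRO.JetEngine")
    je.CompactDatabase _
        "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\app.mdb", _
        "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\app_compact.mdb"

You would then delete the original and rename the compacted copy into place.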
I would offer the users a method for compacting the database. I've seen databases grow to 600+ megabytes when compacting will reduce to 60-80.
To echo Nate:
In older versions, I've had it corrupt databases - so a good backup regime is essential. I wouldn't code anything into your app to do that automatically. However, if a customer finds that their database is running really slow, your tech support people could talk them through it if need be (with appropriate backups of course).
If their database is getting to be so large that the compaction starts to become a necessity, though, maybe it's time to move to MS-SQL.
I've found that Access database files almost always get corrupted over time. Compacting and repairing them helps hold that off for a while.
Well, it really matters! MDB files keep increasing in size each time you manipulate their data, until they reach an unbearable size. But you don't have to supply a compacting method through your interface. You can add the following code to your mdb file to have it compacted each time the file is closed:
Application.SetOption "Auto Compact", True
I would also highly recommend looking into VistaDB (http://www.vistadb.net/) or SQL Compact (http://www.microsoft.com/sql/editions/compact/) for your application. These might not be the right fit for your app... but are def worth a look.
If you don't offer your users a way to compact the database and the raw size isn't an issue to begin with, then don't bother.