Growing Access Frontend: Should I be concerned? - ms-access

I've read opinions across the internet that say if you design your MS Access front end properly, it shouldn't shrink too much when you do a compact. I've got one front end I'm using that is typically around 15 MB when compacted, but grows to 20-25 MB while I'm working on it! Is this something I should be concerned about?

There is a distinction between development and production use.
During development, bloat should be expected -- you're churning the data pages in your front end as you revise forms, reports, modules, etc., so data pages are frequently discarded. There is nothing wrong with this. During development, you should compact regularly, and occasionally decompile (not often -- I tend to do it maybe once a day during heavy development, and/or immediately before distributing a new front end into production use).
During production use, a properly designed front end should not bloat much. Yes, when you supply a compiled and compacted front end, it will grow some during use, but after a while that growth should top off. You shouldn't be concerned about that, though, as front ends are fungible: if something goes wrong with one, you just replace it with a new one.
The most common reason people encounter bloat in front ends is that they design them incorrectly, keeping temporary data in the front end (e.g., a table that has data appended to it and then deleted). Temp data belongs in a temp file. All of my apps have a tmp.mdb that is distributed along with the front end and stored in the same folder as the front end, and all temporary data is stored there. I generally never bother to compact temp files.
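To make that pattern concrete, here is a minimal sketch of the temp-file approach, assuming a tmp.mdb kept next to the front end; the file and table names (tmp.mdb, tblImportScratch) are illustrative, not part of the original answer.

```vba
' Minimal sketch: keep churned temp data in its own file so the front end stays lean.
' tmp.mdb and tblImportScratch are illustrative names.
Public Function GetTempDb() As DAO.Database
    Dim strPath As String
    strPath = CurrentProject.Path & "\tmp.mdb"

    ' Create the temp database the first time it is needed.
    If Len(Dir(strPath)) = 0 Then
        Dim dbTmp As DAO.Database
        Set dbTmp = DBEngine.CreateDatabase(strPath, dbLangGeneral)
        dbTmp.Execute "CREATE TABLE tblImportScratch (ID LONG, Payload TEXT(255))"
        dbTmp.Close
    End If

    ' All appends and deletes then bloat tmp.mdb, not the front end.
    Set GetTempDb = DBEngine.OpenDatabase(strPath)
End Function
```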
Other sources of bloat might include:
Design changes to forms/reports made in code (which cause the same bloat as a human developer making the same changes). This is almost always a design error, in my opinion.
Changes to saved QueryDefs in the app. This one is less significant, as the amount of bloat is quite small compared to other types of bloat. However, if it is being done thousands of times in a session, it could theoretically reach the level of significance. There are a few good reasons to edit saved QueryDefs at runtime, but not very many, so while I wouldn't call doing this a design error, it is a red flag that the code should be checked to make sure the same thing can't be accomplished efficiently without editing the saved QueryDef.
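For illustration, here is a hedged sketch of the two patterns: rewriting a saved QueryDef at runtime (the kind of edit that churns data pages) versus opening a parameterised query that never changes. The object names (qryReport, qryReportParam, prmFrom, tblOrders) are assumptions for the example.

```vba
' Sketch only; object names are illustrative.
Public Sub RunReportQuery(ByVal dtmFrom As Date)
    Dim qdf As DAO.QueryDef
    Dim rs As DAO.Recordset

    ' Pattern 1: rewriting the stored SQL -- every save discards data pages.
    Set qdf = CurrentDb.QueryDefs("qryReport")
    qdf.SQL = "SELECT * FROM tblOrders WHERE OrderDate >= #" & _
              Format(dtmFrom, "yyyy-mm-dd") & "#"

    ' Pattern 2: the same result via a parameter; the saved QueryDef is untouched.
    Set qdf = CurrentDb.QueryDefs("qryReportParam")
    qdf.Parameters("prmFrom").Value = dtmFrom
    Set rs = qdf.OpenRecordset()
End Sub
```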

As you are adding reports and so on, I do not think you should be concerned. I suggest that you decompile* fairly regularly when you are working on code, forms and reports.
* http://wiki.lessthandot.com/index.php/Decompile

Growing Front-End? Too stupid to be true, but it works. My database is used (through the cloud) by several companies, and therefore the application can hardly ever be closed for compacting (the last one to leave puts the lights out: compacts the database). My customers need to be online in the database all of the time. In less than one week the front end used to grow from 16 MB to over 2 GB! This was scaring the hell out of me.
Solution: In the file explorer, simply right-click the front-end database, click 'Properties' and check the 'Read-only' box.
Access will try to write the enlarged front end, but the read-only flag stops it and nothing crashes. Again: just too simple to be true!
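If you prefer to set the flag from code rather than through Explorer, a one-line VBA equivalent is sketched below; this is only an illustrative alternative to the tip above, not part of it, and it should be run while the front-end file is not in use.

```vba
' Illustrative equivalent of ticking the 'Read-only' box in Explorer.
' Run it against a front-end copy that is not currently open.
Public Sub MakeFrontEndReadOnly(ByVal strFrontEndPath As String)
    SetAttr strFrontEndPath, vbReadOnly
End Sub
```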
Best regards, Jaap Schokker, miniPLEX B.V., Wageningen, Holland

Related

Inspect large linked database in Access

This sounds like quite a simple and straightforward question, but I tried to search online and could not find an answer to my problem. I would like to view the linked database from Access, but the database is too large and every step takes forever to load the data. I wonder if there is a better way to inspect the data tables? Sorry if this has been asked somewhere else, I am a bit new to Access.
Well, you have the program part, often called the front end (FE).
Then you have linked tables to the data file, often called the back end (BE).
So, I can't say there's necessarily going to be much difference compared to just looking at the list of linked tables in the nav pane (FE).
Or, you can fire up Access and open the BE file directly. At that point, you will again see the "list" of tables in the nav pane. About the only difference is that, as a general rule, you can't make changes to the table structure(s) from the FE.
But, other than that, the performance should not be much different. Of course, if you are on a network and the BE is in some folder on the server, then your network connection can and will affect performance.
So, in that case, what one often does is simply copy the BE from the server folder to a local folder. You can then open + use + play + consume that database (BE) 100% locally on your computer, without a network between you and the data file. This will of course run MUCH faster, and thus let you look at the tables and open them to see the data inside.
So, all in all? Copy the BE to a local folder. You'll be working on a copy of the data (that's safe - you can't mess up production data), and performance-wise you'll find that any performance considerations are largely eliminated.
And for development and testing? Often we take the BE and place it on our local computer (say a laptop) and work with that BE locally. Depending on how the FE (the program/software part) is set up, it will often have some option to re-link, so you can point the FE at a different BE; a small relink sketch follows.
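A minimal relink sketch, assuming all linked tables are plain Access (file) links and should point at the same back-end file; the path is whatever local copy you made.

```vba
' Sketch: point all (file-based) linked tables at a different back-end file.
Public Sub RelinkTables(ByVal strNewBackEnd As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Len(tdf.Connect) > 0 Then          ' only linked tables have a Connect string
            tdf.Connect = ";DATABASE=" & strNewBackEnd
            tdf.RefreshLink
        End If
    Next tdf
End Sub
```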
Just keep in mind that if you make changes to the BE, and you want such changes from that copy to appear or be made on the production BE, you have to make notes, since there's not really an automated way to send changes (say, new tables, or changes to table designs) to the production BE. And of course, one has to be VERY careful. You can make changes to the tables such as renaming them or changing field names - that will for sure break the FE program part. In most cases you can of course add new fields/columns to existing tables, and that in most cases should not break your software.
But, from a performance point of view? I am somewhat perplexed that you note performance issues and problems. Perhaps there is some VPN between the FE and BE - and that does not work well at all. You in general require a good solid network connection - a LAN (not a VPN/WAN) - between the FE and BE. If a VPN (WAN) is to be adopted, then in most cases the BE needs to be migrated to SQL Server; the FE (program) part can then use linked tables to SQL Server rather than a file-based BE.
So, while the above should make sense, the performance issue you are dealing with, as you describe it here, is somewhat perplexing - it does not quite make a whole lot of sense.

MS-Access split database with some shared physical tables

Edit. I have been told that my question is too broad in scope and is likely to result in opinion-based answers. I disagree but anyway, in an attempt to get this question accepted as valid, here is a synopsis:
Is it possible to have a split MS Access database where some of the tables are physically common across several back-ends? By physically common, I mean that some of the physical tables are shared. This would allow admin users to update fields in relatively data-stable tables in one back end instance so that the updates are seen across all back end instances. At the same time, most of the back end tables would remain separate, so that changes to that data would apply only to that specific back end instance.
Edit ends
I am about to try splitting a development Access database and I am confident that it will be a straightforward process. However, further down the line, I would like to implement a split with links between some of the back-end tables or even so that some of the back-end tables are shared. I have tried to find information on the viability of this but so far, all I can find is help on redirecting a front end to different back-end data and help on creating different front-ends to view self-contained back-end data.
My future scenario is this:
I want a few different sets of back end data; one full nationwide set, others restricted to data imported from source A, source B etc. All of these would be production data options available to the user and the structure would be identical for all of them. While the table and query structure is identical, the way the data is presented, within some of the form/report fields, differs from one source to another and any attempt to present data from all sources together would confuse users. I have thought about translating the various representations into a common format but that would lose some information detail.
I also want a production front end plus at least one development/test front end. The former should allow the user to attach to any of the back-end production data sets and the dev/test front ends should allow attachment to anything, with the constraint that any dev front end should match the structure of a matching dev back end. It is possible that multiple dev front-end/back-end pairs will be required, depending on simultaneous structural trials. Again, though it might involve careful version control, I am fairly confident that it will work easily enough.
So, my problem: I would like some of the back-end physical tables to be shared between all of the production back-end data sets. This is because a few of the tables are very structurally stable and their data will be common to all production versions and altered only by admin users. I want to allow admin users to amend/add/delete data in these stable production tables just once, with their updates shared across all production back ends. At the worst, admin users would have to make such amendments in each of the back end data sets which obviously introduces the likelihood of mismatches between the various back end data sets - coffee anyone? Now where was I?
I suppose I could write something to update data across all the back end tables but that isn't ideal, though not worst case.
I could add to some of the tables a "dataset" field and extend my forms, queries, reports etc to take the dataset into account, thereby just having a single production data set but that just feels cheap and not very robust; moreover, it would probably degrade performance.
Is there any way, given the circumstances I describe above, that I can have the back-end data share a few physical tables? Not all, just a few of them?
I hope I have described the problem well enough (possibly with too much detail) so that someone who has had this problem in the past can point me to a solution.
The answer is no. A table belongs to one file.
You can create a link to that table from one or several other files. These will normally be front ends, but you can create a link in a back-end file as well, though that makes little sense, as you would have to open the back end to read the linked table, which is not something you normally do with a back-end file.
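As a hedged sketch of that linking, each file that needs the shared table just creates a link to the one physical copy; the file and table names (Common_BE.accdb, tblLookups) are illustrative.

```vba
' Sketch: link the shared lookup table from a common back-end file.
Public Sub LinkSharedTable()
    DoCmd.TransferDatabase acLink, "Microsoft Access", _
        "\\server\share\Common_BE.accdb", acTable, "tblLookups", "tblLookups"
End Sub
```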

MS Access antiquated? Anything new in 2011?

Our company has a database of 17,000 entries. We have used MS Access for over 10 years for our various mailings. Is there something new and better out there? I'm not a techie, so keep that in mind when answering. Our problems with Access are:
- no record of what was deleted,
- will not turn up a name in a search if caps or punctuation is not entered exactly,
- is complicated for us to understand the de-duping process,
- we'd like a more nimble program that we can access from more than one dedicated computer.
The only applications I know of that are comparable to Access are FileMaker Pro and the database component of the Open Office suite (Base). FM Pro is a full-fledged product and gets good marks for ease of use from non-technical users, while Base is much less robust and not nearly as easy for creating an application.
All of the answers recommending different databases really completely miss the point here -- the original question is about a data store and application builder, not just the data store.
To the specific problems:
Problem #1: no record of what was deleted
This is a design error, not a flaw in Access. There is no database that really keeps a record of what's deleted unless someone programs logging of deleted data.
But backing up a bit, if you are asking this question it suggests that you've got people deleting things that shouldn't be deleted. There are two solutions:
Regular backups. That would mean you could restore data from the last backup and likely recover most of the lost data. You need regular backups with any database, so this is not really something that is specific to Access.
Design your database so records are never deleted, just marked deleted and then hidden in data entry forms and reports, etc. This is much more complicated, but is very often the preferred solution, as it preserves all the data; a minimal sketch of this approach follows.
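A minimal sketch of the mark-as-deleted approach, assuming an IsDeleted Yes/No field on the table; field and object names are illustrative.

```vba
' Sketch: soft delete -- flip a flag instead of removing the row.
Public Sub SoftDeleteContact(ByVal lngContactID As Long)
    CurrentDb.Execute _
        "UPDATE tblContacts SET IsDeleted = True WHERE ContactID = " & lngContactID, _
        dbFailOnError
End Sub

' Forms and reports are then bound to a query such as:
'   SELECT * FROM tblContacts WHERE IsDeleted = False;
```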
Problem #2: will not turn up a name in a search if caps or punctuation is not entered exactly
There are two parts to this, one of which is understandable, and the other of which makes no sense.
Punctuation -- databases are stupid. They can't tell that Mr. and Mister are the same thing, for instance. The solution is that, for all data that needs to be entered in a regularized fashion, you use every possible method to ensure that the user can only enter valid choices. The most common control for this is a dropdown list (i.e., "combo box"), which limits the choices the user has to the ones offered in the list; it ensures that all the data in the field conforms to a finite set of choices (a short combo-box sketch follows this list). There are other ways of maintaining data regularity, and one of those involves normalization. That process avoids repeatedly storing, say, a company name in multiple records -- instead you'd store the companies in a different table and just link your records to a single company record (usually done with a combo box, too). There are other controls that can be used to help ensure regularity of data entry, but that's the easiest.
Capitalization -- this one makes no sense to me, as Access/Jet/ACE is completely case-insensitive. You'll have to explain more if you're looking for a solution to whatever problem you're encountering, as I can't conceive of a situation where you'd actually fail to find data because of differences in capitalization.
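Here is a small sketch of that combo-box approach, assuming a lookup table of titles; the control and table names (cboTitle, tblTitles) are illustrative.

```vba
' Sketch: limit entry to values held in a lookup table.
Private Sub Form_Load()
    With Me.cboTitle
        .RowSourceType = "Table/Query"
        .RowSource = "SELECT TitleID, TitleText FROM tblTitles ORDER BY TitleText;"
        .LimitToList = True           ' reject anything not in the list
        .ColumnCount = 2
        .BoundColumn = 1
        .ColumnWidths = "0cm;3cm"     ' hide the key, show the text
    End With
End Sub
```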
Problem #3: is complicated for us to understand the de-duping process
De-duping is a complicated process, because it's almost impossible for the computer to figure out which record among the candidates is the best one to keep. So, you want to make sure your database is designed so that it is impossible to accidentally introduce duplicate records. Indexing can help with this in certain kinds of situations, but when mailing lists are involved, you're dealing with people data, which is almost impossible to model in a way that gives you a unique natural key for eliminating duplicates (this, too, is a very complicated topic).
So, you basically have to have a data entry process that checks the new record against the existing data and informs the user if there's a duplicate (or near match). I do this all the time in my apps where the users enter people -- I use an unbound form where they type in the information that is the bare minimum to create a new record (usually some combination of lastname, firstname, company and email), and then I present a list of possible matches. I do strict and loose matching and rank by closeness of the match, with the closer matches at the top of the list.
Then the user has to decide if there's a match, and is offered the opportunity to create the duplicate anyway (it's possible to have two people with the same name at the same company, of course), or to abandon adding the new record and go instead to one of the existing records that was presented as a possible duplicate.
This leaves it up to the user to read what's onscreen and make the decision about what is and isn't a duplicate. But it maximizes the possibility of the user knowing about the dupes and never accidentally creating a duplicate record.
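A hedged sketch of that "check before you add" step, counting strict and looser matches with domain functions; the table and field names (tblPeople, LastName, FirstName) are illustrative, and a real version would present the matching rows rather than just a count.

```vba
' Sketch: count possible duplicates before creating a new person record.
Public Function PossibleDuplicates(ByVal strLast As String, _
                                   ByVal strFirst As String) As Long
    Dim strWhere As String

    ' Strict match: same last name and first name.
    strWhere = "LastName = '" & Replace(strLast, "'", "''") & "' AND " & _
               "FirstName = '" & Replace(strFirst, "'", "''") & "'"
    PossibleDuplicates = DCount("*", "tblPeople", strWhere)

    ' Loose match: same last name, first name merely starts the same.
    If PossibleDuplicates = 0 Then
        strWhere = "LastName = '" & Replace(strLast, "'", "''") & "' AND " & _
                   "FirstName LIKE '" & Replace(Left(strFirst, 3), "'", "''") & "*'"
        PossibleDuplicates = DCount("*", "tblPeople", strWhere)
    End If
End Function
```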
Problem #4: We'd like a more nimble program that we can access from more than one dedicated computer.
This one confuses me. Access is multi-user out of the box (and has been from the very beginning, nearly 20 years ago). There is no limitation whatsoever to a single computer. There are things you have to do to make it work, such as splitting your database into two parts, one part with just the data tables, and the other part with your forms, reports and such (plus links to the data tables in the other file). Then you keep the back-end data file on one of the computers, which acts as a server, and give a copy of the front end (the reports, forms, etc.) to each user. This works very well, actually, and can easily support a couple of dozen users (or more, depending on what they are doing and how well your database is designed).
Basically, after all of this, I would tend to second #mwolfe02's answer, and agree with him that what you need is not a new database, but a database consultant who can design for you an application that will help you manage your mailing lists (and other needs) without you needing to get too deep into the weeds learning Access (or FileMaker or whatever). While it might seem more expensive up front, the end result should be a big productivity boost for all your users, as well as an application that produces better output, because the data is cleaner and better maintained thanks to the improved data entry systems.
So, basically, you either need to spend money upfront on somebody with technical expertise who would design something that allows you to do better work (and more efficiently), or you need to invest time in upping your own technical skills. None of the alternatives to Access are going to resolve any of the issues you've raised without significant investment in interface design to further the goals you have (cleaner data, easier to find information, etc.).
At the risk of sounding snide, what you are really looking for is a consultant.
In the hands of a capable programmer, all of your issues with Access are easily handled. The problems you are having are not the result of using the wrong tool, but using that tool less than optimally.
Actually, if you are not a techie then Access is already the best tool for you. You will not find a more non-techie friendly way to build a data application from bottom to top.
That said, I'd say you have three options at this point:
Hire a competent database consultant to improve your application
Find commercial off-the-shelf (COTS) software that does what you need (I'm sure there are plenty of products to handle mailings; you'll need to research)
Learn about database normalization and building proper MS Access applications
If you can find a good program that does what you want then #2 above will maximize your Return on Investment (ROI). One caveat is that you'll need to convert all of your existing data, which may not be easy or even possible. Make sure you investigate that before you buy anything.
While it may be the most expensive option up-front, hiring a competent database consultant is probably your best option if you need a truly custom solution.
SQL Server sounds like a viable alternative for your scenario. If cost is a concern, you can always use SQL Server Express, which is free. Full-blown SQL Server provides a lot more functionality that might not be needed right away. Express is a lot simpler, as the set of features provided with it is much smaller. With either version, though, you will have a centralized store for your data and the ability to have all transactions recorded in the transaction log. Also, both have the ability to import data from an Access database.
The newest version of SQL Server is 2008 R2.
You probably want to take a look at modern databases. If you're into Microsoft-based products, start with SQL Server Express.
EDIT: However, since I understand that you're not a programmer yourself, you'd probably be better off having someone experienced look into your technical problem more deeply, like the other answer suggests.
It sounds like you may want to consider a front-end for your existing Access data store. Microsoft has yet to replace Access per se, but they do have a new tool that is a lot lower on the programming totem pole than some other options. Check out Visual Studio Lightswitch - http://www.microsoft.com/visualstudio/en-us/lightswitch.
It's fairly new (still in beta) but showing potential. With it, just as with any Visual Studio project, you can connect to an MS Access datasource and design a front-end to interact with it. The plus-side here is programming requirements are much lower than with straight-up Visual Studio (read: Wizards).
Given that replacing your Access DB will require some front-end programming, you may look into VistaDB. It should allow your front end to be created in .NET, with an XCopy database on the back end and no server required. One plus is that it retains SQL Server syntax, so if you do decide to move to SQL Server you'll be one step ahead.
(Since you're not a techie and may not understand my previous statement, you might pass my answer on to the consultant/programmer/database guy who is going to do the work for you.)
http://www.vistadb.net/

When is MS Access better than a web app backed by an RDBMS?

I haven't used Access since high school, years ago.
What kind of problem does it solve well, or even better than a web app backed by a real RDBMS?
Is it still actively developed? Or is it pretty dead to MS already?
What are its biggest limitations?
Update:
What resource should one use to learn how to develop an MS Access solution for small business?
Thanks
First and foremost, Access IS a real RDBMS. What it isn't is a client-server RDBMS.
The only implications of this are that there is a throttle on the number of simultaneous connections and the security of the data needs careful thought.
Amongst other things, Access is also an IDE that uses VBA as its language.
This means that in Access you can write Front End apps that link to either a SQL Server back end, an Access back end, or a SharePoint back end. So it is one very versatile cookie.
Its limitations are:
Security: if you are using an Access back end, take note that it doesn't have the built-in security of a client-server database. In any app, security is a function of the cost and the requisite secrecy of the data.
Number of simultaneous connections: if you are not careful, Access will struggle with more than 10 people trying to update data simultaneously. You can extend that, but you need to know what you are doing to guarantee results. To put a number on it, let's say 50 simultaneous connections.
Like most databases, it is liable to corruption.
NOTE: when referring to Access as a database, you should really be referring to the "database engine", JET or ACE, depending on the version and, for Access 2007+, on the file format that you use. In other words, if you are storing data in Access tables, you are using either JET or ACE. However, if you are using LINKED TABLES that live in, for example, SQL Server, then you are not, strictly speaking, using JET or ACE security for those tables.
Access SQL doesn't allow you to write stored procedures (you can write functions in VBA), in the sense that Access SQL only allows single declarative statements, not procedural constructs (e.g., control-flow statements). You can introduce some "procedural code" using VBA functions, but this is very different from using SQL statements.
You back up the file itself. You can write code to do this at the click of a button; a minimal sketch follows.
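A minimal "one-click backup" sketch, assuming the back-end file is not held open by other users while it is copied; the paths are illustrative.

```vba
' Sketch: copy the back-end file to a timestamped backup.
Public Sub BackUpBackEnd()
    Dim strSource As String, strTarget As String
    strSource = "\\server\data\MyApp_BE.accdb"
    strTarget = "\\server\data\Backups\MyApp_BE_" & _
                Format(Now(), "yyyymmdd_hhnnss") & ".accdb"
    FileCopy strSource, strTarget
End Sub
```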
Security is always a function of cost. If you have data that is worth more than 100,000 US$ (either in loss of rights or legal liabilities if it is stolen and you have not shown due diligence in protecting it), then Access is probably not the answer. 100,000 is an arbitrary figure. The precise figure will depend on whether the data is insurable and the consequences of it being lost or stolen.
I.e., if the value of the data is the driving concern, then definitely don't use Access as a back end. Whether you use it as a front end is a matter of budget. For US$5,000 I have written apps that are still running 10 years later. They now need to port the back end to SQL Server because the volume of sensitive data has grown.
Access, when used within the above constraints AND when used by a professional Access developer (rather than some disgruntled fool who thinks he should be using "cooler" technologies), will produce very sophisticated, sturdy and reliable applications at a 10th of the cost of other systems. In such scenarios, Access is a total NO BRAINER.
Anything else will cost more, take longer and will only be as good as the person who writes the code and designs the UI.
I have an application (the first one I ever built in Access) that has run without problems for 10 years. We have extended it massively. I have moved into ASP.NET MVC, but Access is where I hail from and I have seen it work well.
So in summary: the number of users is relevant and the value or liabilities implicit in the data are the other deciding factor.
If the number of simultaneous users is low and the value/implicit liabilities of the data is low, then the choice is definitely Access.
However, get yourself a good developer.
EDITS/CLARIFICATIONS:
The above answer, like all answers, was written in haste in the middle of a working day. Some statements were a bit glib and generic and not written with a suitable degree of precision... However, when the comments made by others are reasonable, the author of the answer should edit the post and clarify.
1/
Access is a holy trinity. It is an IDE for writing forms and reports and functions to use in your queries. It "includes" a database engine (JET/ACE). It provides a Visual Interface onto the database engine that allows you to design queries, set up relationships between tables, etc.
It is usually referred to in its many roles as just Access, but precision does help when learning Access and getting the most out of it.
2/
Access can't use stored procedures, in the sense that Access SQL can only run single declarative statements rather than procedural ones (e.g., control-flow statements). There is a reason, I have always thought, for calling them stored PROCEDURES.
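A small sketch of the VBA-function substitute mentioned above: procedural logic lives in a public function that a saved query can call. The function and field names are illustrative.

```vba
' Sketch: procedural logic exposed to Access SQL via a VBA function.
Public Function AgeBand(ByVal varAge As Variant) As String
    Select Case Nz(varAge, -1)
        Case Is < 0:  AgeBand = "Unknown"
        Case Is < 18: AgeBand = "Minor"
        Case Is < 65: AgeBand = "Adult"
        Case Else:    AgeBand = "Senior"
    End Select
End Function

' Called from a query:
'   SELECT PersonID, AgeBand([Age]) AS Band FROM tblPeople;
```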
3/
Not every Access app costs exactly 100,000. Nor is the budget of an Access app equal to the value of the data. That is obvious. The idea I was trying to convey was that if the data is worth more than a sum that can be reasonably insured, then don't use Access. Is that figure 100,000? According to Luke Chung and Clint Covington, ex program manager for Access, yes, but don't take their word for it. It really just means "a lot of money".
I have written an app for Medical Charities that still runs 10 years later after an initial budget of 5000. They have probably invested another 20,000 over the years. That kind of app is the Access sweet spot.
It all depends, really. I will give you a quick example that happened to me recently. At work, they needed a small system to capture some records from a group of about 15 users and pass about 15% of those records to another team of about 5 or so to do additional tasks on those records. This was a one-off project that was going to last about 4 months.
The official IT solution was, of course, a web app with a SQL Server back end coming in at about £60,000. As they had no SQL Server space available and the budget was very small, I decided to go with an unbound Access database using JET to store the data.
In this example Access/JET was the right choice; now, if this had been a long-term system to support 500 users, of course the web app would be the way to go. It's horses for courses at the end of the day, and people should not let their prejudices affect their business decisions.
Ah. Never. Point. Too many limitations in general. Backups are problematic, and stability CAN be problematic. Especially if you compare Access (a file-share database) against a web app, you are in for a world of pain in pretty much every scenario.
Access is usable for small, single-place DB stuff (loading data before moving it off to a SQL Server) or as a front end for SQL Server (i.e., Access not actually storing any data). The latter is also pretty much the direction MS is taking Access in - a front-end technology.
My knowledge is quite old now, but it always used to be very good for reports - very quick, powerful, and much easier than, e.g. Crystal Reports.
If you just want to hack something out quickly, it's probably a bit easier to do at least some kinds of applications with Access than with a web front end and a SQL (or whatever) backend. It is still being developed (Access 2010 was released within the last month or two, if memory serves).
I haven't used the new version to say for sure, but the last time I did any looking, it seemed like new editions were mostly updating the look to go along with the latest version of office, cleaning up semi-obvious problems and bugs, but not a whole lot more than that. I wouldn't say it's dead, but I don't see much to indicate that it's really one of Microsoft's top priorities either.
Trying to pin down its biggest limitations is hard. The "JET Red" storage engine it's based around doesn't scale well at all -- but it was never really intended to. Its basic design conflates the application with the data being stored, so it's relatively difficult to just treat the data as raw data to be used for other purposes. I don't know if it's still the case, but at least at one time the database format was also fairly fragile -- file corruption was semi-common, and in most cases about the only hope of recovery was a backup file (which meant, at best, losing everything that had happened since the last backup -- and some forms of corruption weren't immediately obvious, so corrupt backups sometimes happened as well).
It comes down to this: if one of the wizards built into Access can produce exactly what you want, or at least something really close, and you only ever need to support a few users with the result, it might be a reasonable choice in a few situations. If that doesn't (all) apply, there are almost certain to be better alternatives.
The Jet database engine used by Access is considered deprecated by Microsoft, though it is still supported. The limits of an .mdb database and the newer .accdb type are described here.
Even SQL Server Express would be better in almost every case.
Someone with very limited knowledge of RDBMSs/programming can still throw a quick front end together in Access (ideally using an external database); that's really the main use for it.

Can splitting .MDB files into segments help with stability?

Is this a realistic solution to the problems associated with larger .mdb files:
- split the large .mdb file into smaller .mdb files
- have one 'central' .mdb containing links to the tables in the smaller .mdb files
How easy would it be to make this change to an .mdb backed VB application?
Could the changes to the database be done so that there are no changes required to the front-end application?
Edit Start
The short answer is "No, it won't solve the problems of a large database."
You might be able to overcome the DB size limitation (~2GB) by using this trick, but I've never tested it.
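For what it's worth, the layout the question describes would look roughly like this (untested, as noted): a central MDB whose only job is to hold links to the tables in the segment files. File and table names are illustrative.

```vba
' Sketch: central MDB holding links to tables that live in smaller segment MDBs.
Public Sub LinkSegments()
    DoCmd.TransferDatabase acLink, "Microsoft Access", _
        "C:\Data\Orders_Segment.mdb", acTable, "tblOrders", "tblOrders"
    DoCmd.TransferDatabase acLink, "Microsoft Access", _
        "C:\Data\Customers_Segment.mdb", acTable, "tblCustomers", "tblCustomers"
End Sub
```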
Typically, with large MS Access databases, you run into problems with speed and data corruption.
Speed
Is it going to help with speed? You still have the same amount of data to query and search through, and the same algorithm. So all you are doing is adding the overhead of having to open up multiple files per query. So I would expect it to be slower.
You might be able to speed it up by reducing the time that it takes to get the information off of disk. You can do this in a few ways:
faster drives
put the MDB on a RAID (anecdotally, RAID 1+0 may be faster)
split the MDB up (as you suggest) into multiple MDBs, and put them on separate drives (maybe even separate controllers).
(how well this would work in practice vs. theory, I can't tell you - if I was doing that much work, I'd still choose to switch DB engines)
Data Corruption
MS Access has a well-deserved reputation for data corruption. To be fair, I haven't had it happen to me for some time. This may be because I've learned not to use it for anything big, or it may be because MS has put a lot of work into trying to solve these problems, or, more likely, a combination of both.
The prime culprits in data corruption are:
Hardware: e.g., cosmic rays, electrical interference, iffy drives, iffy memory and iffy CPUs - I suspect MS Access does not have error handling/correcting as good as other databases do.
Networks: lots of collisions on a saturated network can confuse MS Access and convince it to scramble important records; as can sub-optimally implemented network protocols. TCP/IP is good, but it's not invincible.
Software: As I said, MS has done a lot of work on MS Access over the years; if you are not up to date on your patches (MS Office and OS), get up to date. Problems typically happen when you hit extremes like the 2 GB limit (some bugs are hard to test and won't manifest themselves except at the edge cases, which makes them less likely to have been seen or corrected unless reported to MS by a motivated user).
All this is exacerbated with larger databases, because larger databases typically have more users and more workstations accessing them. Altogether, the larger database and the larger number of users multiply to provide more opportunity for corruption to happen.
Edit End
Your best bet would be to switch to something like MS SQL Server. You could start by migrating your data over and then linking your MDB to it. You get the stability of SQL Server, and most (if not all) of your code should still work.
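As a hedged sketch of that first step, the front end can link to SQL Server with a DSN-less ODBC connect string instead of a file path; the server, database and table names here are illustrative.

```vba
' Sketch: create an ODBC linked table pointing at SQL Server.
Public Sub LinkToSqlServer()
    Const cstrConnect As String = _
        "ODBC;DRIVER={SQL Server};SERVER=MYSERVER;DATABASE=MyAppData;Trusted_Connection=Yes"
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb
    Set tdf = db.CreateTableDef("tblOrders")
    tdf.Connect = cstrConnect
    tdf.SourceTableName = "dbo.tblOrders"
    db.TableDefs.Append tdf
End Sub
```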
Once you've done that, you can then start migrating your VB app(s) over to use SQL Server instead.
If you have more data than fits in a single MDB then you should get a different database engine.
One main issue that you should consider is that you can't enforce referential integrity between tables stored in different MDBs. That should be a show-stopper for any actual database.
If it's not, then you probably don't have a proper schema designed in the first place.
For reasons more adequately explained by CodeSlave, the answer is no, and you should switch to a proper relational database.
I'd like to add that this does not have to be SQL Server. Quite possibly the reason why you are reluctant to do this is one of cost, SQL Server being quite expensive to obtain and deploy if you are not in an educational or charitable organisation (when it's remarkably cheap and then usually a complete no-brainer).
I've recently had extremely good results moving an Access system from MDB to MySQL. At least 95% of the code functioned without modification, and of the remaining 5% most was straightforward with only a few limited areas where significant effort was required. If you have sloppy code (not closing connections or releasing objects) then you'll need to fix these, but generally I was remarkably surprised how painless this approach was. Certainly I would highly recommend that if the reason you are reluctant to move to a database backend is one of cost then you should not attempt to manipulate .mdb files and go instead for the more robust database solution.
Hmm, well, if the data is going through this central DB then there is still going to be a bottleneck in there. The only reason I can think of why you would do this is to get around the size limit of an Access .mdb file.
Having said that, if the business functions can be split off into separate applications, then that might be a good option, with a central DB containing all the linked tables for reporting purposes. I have used this before to good effect.