When is MS Access better than a web app backed by an RDBMS?

I haven't used Access since high school, years ago.
What kind of problem does it solve well, or even better than a web app backed by a real RDBMS?
Is it still actively developed? Or is it pretty dead to MS already?
What are its biggest limitations?
Update:
What resources should one use to learn how to develop an MS Access solution for a small business?
Thanks

First and foremost, Access IS a real RDBMS. What it isn't is a client-server RDBMS.
The only implications of this are that there is a throttle on the number of simultaneous connections and the security of the data needs careful thought.
Amongst other things, Access is also an IDE that uses VBA as its language.
This means that in Access you can write Front End apps that link to either a SQL Server back end, an Access back end, or a SharePoint back end. So it is one very versatile cookie.
Its limitations are:
Security: if you are using an Access Back End, take note that it doesn't have the built-in security of a client-server database. In any app, security is a function of the cost and the requisite secrecy of the data.
Number of simultaneous connections: if you are not careful, Access will struggle with more than 10 people trying to update data simultaneously. You can extend that, but you need to know what you are doing to guarantee results. To put a number to it, let's say 50 simultaneous connections.
Like most databases, it is liable to corruption.
NOTE: when referring to Access as a database, you should really be referring to the "database engine", JET or ACE, depending on the version and, for Access 2007+, dictated by the file format that you use. In other words, if you are storing data in Access tables, you are using either JET or ACE. However, if you are using LINKED TABLES that live in, for example, SQL Server, then you are not, strictly speaking, using JET or ACE for those tables.
Access SQL doesn't allow you to write stored procedures (you can write functions in VBA), in the sense that Access SQL only allows single declarative statements as opposed to procedural blocks (e.g., control-flow statements). You can introduce some "procedural code" using VBA functions, but this is very different to using SQL statements.
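To make that concrete, a minimal sketch (table, field and function names are invented; note that Jet SQL has no comment syntax, so the -- annotations here are exposition only):

    -- An Access saved query is a single declarative statement,
    -- but it can call a function defined in a VBA module (NetPrice here):
    SELECT OrderID, NetPrice([Amount], [TaxRate]) AS Net
    FROM Orders
    WHERE OrderDate >= #1/1/2010#;

    -- There is no procedural block syntax (BEGIN ... IF ... END) as in
    -- server RDBMSs; any control flow has to live in the VBA function.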
You back up the file itself. You can write code to do this at the click of a button.
Security is always a function of cost. If you have data that is worth more than US$100,000 (either in loss of rights or legal liabilities if it is stolen and you have not shown due diligence in protecting it), then Access is probably not the answer. 100,000 is an arbitrary figure; the precise figure will depend on whether the data is insurable and the consequences of it being lost or stolen.
I.e., if the value of the data is the driving concern, then definitely don't use Access as a Back End. Whether you use it as a Front End is a matter of budget. For US$5,000 I have written apps that are still running 10 years later. They now need to port the back end to SQL Server because the volume of sensitive data has grown.
Access, when used within the above constraints AND when used by a professional Access developer (rather than some disgruntled fool who thinks he should be using "cooler" technologies), will produce very sophisticated, sturdy and reliable applications at a tenth of the cost of other systems. In such scenarios, Access is a total NO BRAINER.
Anything else will cost more, take longer and will only be as good as the person who writes the code and designs the UI.
I have an application (the first one I ever built in Access) that has run without problems for 10 years. We have extended it massively. I have moved into ASP.NET MVC, but Access is where I hail from and I have seen it work well.
So in summary: the number of users is relevant and the value or liabilities implicit in the data are the other deciding factor.
If the number of simultaneous users is low and the value/implicit liabilities of the data is low, then the choice is definitely Access.
However, get yourself a good developer.
EDITS/CLARIFICATIONS:
The above answer, like all answers, was written in haste in the middle of a working day. Some statements were a bit glib and generic and not written with a suitable degree of precision... However, when the comments made by others are reasonable, the author of the answer should edit the post and clarify.
1/
Access is a holy trinity. It is an IDE for writing forms and reports and functions to use in your queries. It "includes" a database engine (JET/ACE). It provides a Visual Interface onto the database engine that allows you to design queries, set up relationships between tables, etc.
It is usually referred to in its many roles as just Access, but precision does help to learn Access and get the most out of it.
2/
Access can't use stored procedures in the sense that Access SQL can only use single declarative statements rather than procedural ones (e.g., control-flow statements). There is a reason, I have always thought, for calling them stored PROCEDURES.
3/
Not every Access app costs exactly 100,000. Nor is the budget of an Access app equal to the value of the data; that is obvious. The idea I was trying to convey was that if the data is worth more than a sum that can be reasonably insured, then don't use Access. Is that figure 100,000? According to Luke Chung and Clint Covington, ex-program manager for Access, yes, but don't take their word for it. It really just means "a lot of money".
I have written an app for Medical Charities that still runs 10 years later after an initial budget of 5000. They have probably invested another 20,000 over the years. That kind of app is the Access sweet spot.

It all depends, really. I will give you a quick example that happened to me recently. At work they needed a small system to capture some records from a group of about 15 users and pass about 15% of those records to another team of about 5 or so to do additional tasks on those records. This was a one-off project that was going to last about 4 months.
The official IT solution was of course a web app with a SQL Server backend coming in at about £60,000. As they had no SQL Server space available and the budget was very small, I decided to go with an unbound Access database using JET to store the data.
In this example Access/JET was the right choice; now if this had been a long-term system to support 500 users, of course the web app would be the way to go. It's horses for courses at the end of the day, and people should not let their prejudices affect their business decisions.

Ah. Never. Point: too many limitations in general. Backups are problematic, stability CAN be problematic. Especially if you compare Access (a file-share database) against a web app, you are in for a world of pain in pretty much every scenario.
Access is usable for small single-place db stuff (loading data before moving it off to a SQL Server) or as a front end for SQL Server (i.e. Access not actually storing any data). The latter is also pretty much the direction MS is taking Access: a front end technology.

My knowledge is quite old now, but it always used to be very good for reports - very quick, powerful, and much easier than, e.g. Crystal Reports.

If you just want to hack something out quickly, it's probably a bit easier to do at least some kinds of applications with Access than with a web front end and a SQL (or whatever) backend. It is still being developed (Access 2010 was released within the last month or two, if memory serves).
I haven't used the new version to say for sure, but the last time I did any looking, it seemed like new editions were mostly updating the look to go along with the latest version of office, cleaning up semi-obvious problems and bugs, but not a whole lot more than that. I wouldn't say it's dead, but I don't see much to indicate that it's really one of Microsoft's top priorities either.
Trying to pin down its biggest limitations is hard. The "JET Red" storage engine it's based around doesn't scale well at all -- but it was never really intended to. Its basic design is intended to conflate the application with the data being stored, so it's relatively difficult to just treat it as raw data to be used for other purposes. I don't know if it's still the case, but at least at one time, the database format was also fairly fragile -- file corruption was semi-common, and in most cases about the only hope of recovery was a backup file (which meant, at best, losing everything that had happened since the last backup -- and some forms of corruption weren't immediately obvious, so corrupt backups sometimes happened as well).
It comes down to this: if one of the Wizards built into Access can produce exactly what you want, or at least something really close, and you only ever need to support a few users with the result, it might be a reasonable choice in a few situations. If that doesn't (all) apply, there are almost certain to be better alternatives.

The Jet database engine used by Access is considered deprecated by Microsoft, though it is still supported. The limits of an .mdb database and the newer .accdb type are documented by Microsoft.
Even SQL Server Express would be better in almost every case.
Someone with very limited knowledge of RDBMSs/programming can still throw a quick frontend together in Access (ideally using an external database); that's really the main use for it.


MySQL stored procedures: to use them or not to use them

We are at the beginning of a new project, and we are really wondering if we should use stored procedures in MySQL or not.
We would use the stored procedures only to insert and update business model entities. There are several tables which represent a model entity, and we would abstract that behind those insert/update stored procedures.
On the other hand, we could do those inserts and updates from the Model layer, not in MySQL but in PHP.
In your experience, which is the best option? What are the advantages and disadvantages of each approach? Which is the fastest in terms of high performance?
PS: It is a web project with mostly reads, and high performance is the most important requisite.
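For reference, a minimal sketch of the kind of insert wrapper the question describes (MySQL syntax; the table, columns and procedure name are invented for illustration):

    -- A thin wrapper that abstracts inserting one business entity.
    DELIMITER //
    CREATE PROCEDURE insert_customer(IN p_name VARCHAR(100), IN p_email VARCHAR(100))
    BEGIN
        INSERT INTO customers (name, email) VALUES (p_name, p_email);
    END //
    DELIMITER ;

    -- The PHP layer then issues one short statement per operation:
    CALL insert_customer('Ada Lovelace', 'ada@example.com');

The answers below weigh this approach against doing the same inserts and updates from the PHP model layer.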
Unlike actual programming language code, they:
are not portable (every db has its own version of PL/SQL. Sometimes different versions of the same database are incompatible - I've seen it!)
are not easily testable - you need a real (dev) database instance to test them, and thus unit testing their code as part of a build is virtually impossible
are not easily updatable/releasable - you must drop/create them, i.e. modify the production db to release them
do not have library support (why write code when someone else has?)
are not easily integratable with other technologies (try calling a web service from them)
use a language about as primitive as Fortran, and are thus inelegant and laborious for getting useful coding done, so it is difficult to express business logic, even though typically that is their primary purpose
do not offer debugging/tracing/message-logging etc. (some dbs may support this - I haven't seen it though)
lack a decent IDE to help with syntax and linking to other existing procedures (e.g. like Eclipse does for Java)
require people skilled in coding them, who are rarer and more expensive than app coders
offer "high performance" that is a myth, because they execute on the database server and thus usually increase the db server load, so using them will usually reduce your maximum transaction throughput
cannot efficiently share constants (normally solved by creating a table and querying it from within your procedure - very inefficient)
etc.
If you have a very database-specific action (eg an in-transaction action to maintain db integrity), or keep your procedures very atomic and simple, perhaps you might consider them.
Caution is advised when specifying "high performance" up front. It often leads to poor choices at the expense of good design and it will bite you much sooner than you think.
Use stored procedures at your own peril (from someone who's been there and never wants to go back). My recommendation is to avoid them like the plague.
Unlike programming code, they:
render SQL injection attacks almost impossible (unless you are constructing and executing dynamic SQL from within your procedures)
require far less data to be sent over the IPC as part of the callout
enable the database to cache plans and result sets far better (this is admittedly not so effective with MySQL due to its internal caching structures)
are easily testable in isolation (i.e. not as part of JUnit tests)
are portable in the sense that they allow you to use db-specific features, abstracted away behind a procedure name (in code you are stuck with generic SQL-type stuff)
are almost never slower than SQL called from code
But, as Bohemian says, there are plenty of cons as well (this is just by way of offering another perspective). You'll have to perhaps benchmark before you decide what's best for you.
As for performance, they have the potential to be really performant in a future MySQL version (under SQL Server or Oracle, they are a real treat!). Yet, for all the rest... they totally blow away the competition. I'll summarize:
Security: you can give your app the EXECUTE right only, and everything is fine. Your SPs will insert, update, select..., with no possible leak of any sort. It means global control over your model, and enforced data security. (See the sketch after this list.)
Security 2: I know it's rare, but sometimes PHP code leaks out from the server (i.e. becomes visible to the public). If it includes your queries, possible attackers know your model. This is pretty odd, but I wanted to mention it anyway.
Task force: yes, creating efficient SQL SPs requires some specific resources, sometimes more expensive ones. But if you think you don't need those resources just because you're embedding your queries in your client... you're going to have serious problems. I'd mention the analogy with web development: it's good to separate the view from the rest, because your designer can work on their own technology while the programmers focus on programming the business layer.
Encapsulating the business layer: using stored procedures totally isolates the business logic where it belongs: the damn database.
Quickly testable: one command line under your shell and your code is tested.
Independence from the client technology: if tomorrow you'd like to switch from PHP to something else, no problem. OK, just storing the SQL in a separate file would do the trick too, that's right. Also, good point in the comments: if you decide to switch SQL engines, you'd have a lot of work to do; but you have to have a good reason to do that anyway, because for big projects and big companies that rarely happens (due to the cost and HR management, mostly).
Enforcing agile 3+-tier development: if your database is not on the same server as your client code, you may have several client servers but only one database server. In that case, you don't have to upgrade any of your PHP servers when you need to change the SQL-related code.
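To make the Security point above concrete, a sketch (the account and schema names are invented):

    -- The web app's MySQL account can only execute the defined procedures;
    -- it gets no direct SELECT/INSERT/UPDATE/DELETE rights on the tables.
    GRANT EXECUTE ON mydb.* TO 'webapp'@'localhost';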
OK, I think that's the most important thing I had to say on the subject. I have developed in both spirits (SP vs client) and I really, really love the SP style. I just wish MySQL had a real IDE for them, because right now the tooling is painfully limited.
Stored procedures are good to use because they keep your queries organized and allow you to perform a batch at once. Stored procedures are normally quick in execution because they are pre-compiled, unlike queries that are compiled on every run. This has a significant impact in situations where the database is on a remote server; if the queries are in a PHP script, there are multiple communications between the application and the database server - the query is sent, executed, and the result sent back. If you use stored procedures instead, you only need to send a small CALL statement rather than big, complicated queries.
It might take a while to adapt to programming stored procedures, because they have their own language and syntax. But once you are used to it, you'll see that your code is really clean.
In terms of performance, there might not be any significant gain whether you use stored procedures or not.
I will give my opinion, though my thoughts may not be directly related to the question:
As with many issues, the choice between stored procedures and an application-layer-driven solution relies on questions that will drive the overall effort:
What you want to get.
Are you trying to do batch operations or on-line operations? Are they completely transactional? How recurrent are those operations? How heavy is the expected workload for the database?
What you have in order to get it.
What kind of database technology do you have? What kind of infrastructure? Is your team fully trained in the database technology? Is your team better capable of building a database-agnostic solution?
Time to get it.
No secrets about that.
Architecture.
Is your solution required to be distributed across several locations? Is your solution required to use remote communications? Does your solution work on several database servers, or possibly on a cluster-based architecture?
Maintenance.
How much is the application required to change? Do you have personnel specifically trained to maintain the solution?
Change management.
Do you foresee your database technology changing in the short, medium, or long term? Do you foresee having to migrate the solution frequently?
Cost.
How much will it cost to implement the solution using one strategy or the other?
The whole of those points will drive the answer, so you have to consider each of them when deciding whether or not to use any given strategy. There are cases where stored procedures are better than application-layer-managed queries, and others where conducting queries from an application-layer-based solution is best.
Using stored procedures tends to be more appropriate when:
Your database technology isn't expected to change in the short term.
Your database technology can handle parallelized operations, table partitions or any other strategy for dividing the workload across several processors, memory and resources (clustering, grid).
Your database technology is fully integrated with the stored procedure definition language, that is, support is inside the database engine.
You have a development team who aren't afraid of using a procedural (3rd-generation) language to get a result.
The operations you want to achieve are built in or supported inside the database (exporting XML data, managing data integrity and coherence appropriately with triggers, scheduled operations, etc.).
Portability isn't an important issue and you do not foresee a technology change in the short term in your organization; indeed, it may not even be desirable. Generally, portability is seen as a milestone by application-driven and layer-oriented developers. From my point of view, portability isn't an issue when your application isn't required to be deployed on several platforms, less so when there is no reason for making a technology change, or when the effort of migrating all the organizational data is higher than the benefit of the change. What you can win by using an application-layer-driven approach (portability) you can lose in performance and in value obtained from your database (why spend thousands of dollars on a Ferrari that you'll drive at no more than 60 miles/hr?).
Performance is an issue. First: in several cases, you can achieve better results with a single stored procedure call than with multiple requests for data from another application. Moreover, some features you need may be built into your database, and using them is less expensive in terms of workload. When you use an application-layer-driven solution, you have to take into account the costs associated with making database connections, making calls to the database, network traffic, and data wrapping (i.e., with either Java or .NET, there is an implicit cost in JDBC/ADO.NET calls, as you have to wrap your data into objects that represent the database data, so instantiation has an associated cost in processing, memory, and network when data comes from and goes to the outside).
Using application-layer-driven solutions tends to be more appropriate when:
Portability is an important issue.
The application will be deployed to several locations with only one or a few database repositories.
Your application will use heavy business-oriented rules that need to be agnostic of the underlying database technology.
You have in mind changing technology providers based on market tendencies and budget.
Your database isn't fully integrated with the stored procedure language that calls to the database.
Your database capabilities are limited and your requirements go beyond what you can achieve with your database technology.
Your application can support the penalty inherent in external calls, is more transaction-based with business-specific rules, and has to abstract the database model into a business model for the users.
Parallelizing database operations isn't important, or your database has no parallelization capabilities.
You have a development team which is not well trained in the database technology and is more productive using an application-driven technology.
Hope this helps anyone asking himself/herself what is better to use.
I would recommend you don't use stored procedures:
Their language in MySQL is very crappy
There is no way to send arrays, lists, or other types of data structure into a stored procedure
A stored procedure can never change its interface; MySQL permits neither named nor optional parameters
It makes deploying new versions of your application more complicated - say you have 10x application servers and 2 databases; which do you update first?
Your developers all need to learn and understand the stored procedure language - which is very crap (as I mentioned before)
Instead, I recommend creating a layer/library and putting all your queries in there.
You can:
Update this library and ship it on your app servers with your app
Have rich data types, such as arrays, structures, etc., passed around
Unit test this library, instead of the stored procedures.
On performance:
Using stored procedures will decrease the performance of your application developers, which is the main thing you care about.
It is extremely difficult to identify performance problems within a complicated stored procedure (it is much easier for plain queries).
You can submit a query batch in a single chunk over the wire (if the CLIENT_MULTI_STATEMENTS flag is enabled), which means you don't incur any extra latency by avoiding stored procedures.
Application-side code generally scales better than database-side code.
If your database is complex - not a forum type with responses, but true warehousing - SPs will definitely benefit you. You can put all your business logic in there, and not a single developer has to care about it; they just call your SPs. I have been doing this; joining over 15 tables is not fun, and you cannot explain it to a new developer.
Developers also don't have access to the DB - great! Leave that to the database designers and maintainers. If you decide that the table structure is going to change, you can hide this behind your interface. n-Tier, remember??
High performance and relational DBs are not things that go together, not even with MySQL; InnoDB is slow, and MyISAM should have been thrown out of the window by now. If you need performance with a web app, you need a proper cache: memcache or others.
In your case, because you mentioned 'web', I would not use stored procedures; if it were a data warehouse I would definitely consider them (we use SPs for our warehouse).
Tip:
Since you mentioned a web project, ever thought about a NoSQL sort of solution? Also, if you need a fast DB, why not use PostgreSQL? (Trying to advocate here...)
I used to use MySQL, and my understanding of SQL was poor at best. I then spent a fair amount of time using SQL Server; I keep a clear separation between the data layer and the application layer, and I currently look after a server with 0.5 terabytes.
I have at times felt frustrated not using an ORM, because development with an ORM is really quick, and with stored procedures it is much slower. I think much of our work could have been sped up by using an ORM.
When your application reaches critical mass, however, ORM performance will suffer; a well-written stored procedure will give you your results faster.
As an example of performance: I collect 10 different types of data in an application, convert them to XML, and process that in the stored procedure, so I have one call to the database rather than 10.
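A sketch of that pattern (shown in SQL Server syntax, which is what this answer describes; the procedure, table and column names are invented):

    -- The app sends one XML document instead of ten separate calls.
    CREATE PROCEDURE dbo.SaveBatch @payload XML
    AS
    BEGIN
        INSERT INTO dbo.Readings (SensorId, Value)
        SELECT r.value('@sensorId', 'INT'),
               r.value('@value', 'DECIMAL(10,2)')
        FROM @payload.nodes('/batch/reading') AS t(r);
    END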
SQL is really good at dealing with sets of data. One thing that frustrates me is seeing someone pull data from SQL in raw form and use application code to loop over the results and format and group them; this really is bad practice.
My advice is to learn and understand SQL well enough, and your applications will really benefit.
There is lots of info here to confuse people. Software development is evolutionary; what we did 20 years ago isn't best practice now. Back in the day, with classic client-server, you wouldn't dream of anything but SPs.
It is absolutely horses for courses. If you are a big organisation, you will use multi-tier, and probably SPs, but you will care little about them because a dedicated team will be sorting them out.
The opposite is where I find myself, trying to quickly knock up a web app solution that fleshes out business requirements: it was super fast to leave the developer (remote to me) to knock up the pages and SQL queries while I defined the DB structure.
However, complexity is growing, and without an easy way to provide APIs, I am starting to use SPs to contain the business logic. I think it is working well and is sensible: I control this because I can build logic and provide a simple result set for my offshore developer to build a front end around.
Should my software prove a phenomenal success, then more separation of concerns will occur and different implementations of n-tier will come about, but for now SPs are perfect.
You should know all the tool sets available to you, and matching them wisely to the problem is the place to start. Unless you are building an enterprise system from the outset, fast and simple is best.
I would recommend that you stay away from DB-specific stored procedures.
I've been through a lot of projects where they suddenly want to switch DB platform, and the code inside an SP is usually not very portable = extra work and possible errors.
Stored procedure development also requires the developer to have direct access to the SQL engine, whereas a normal connection can be changed by anyone on the project with code access only.
Regarding your Model/layer/tier idea: yes, stick with that.
Website calls Business layer (BL)
BL calls Data layer (DL)
DL calls whatever storage (SQL, XML, Webservice, Sockets, Textfiles etc.)
This way you can maintain the logic level between tiers. IF and ONLY IF the DL calls seem to be very slow, you can start to fiddle around with stored procedures, but maintain the original non-SP code somewhere in case you suddenly need to transfer the DB to a whole new platform. With all the cloud hosting in the business, you never know what's going to be the next DB platform...
I keep a close eye on Amazon AWS for the very same reason.
I think there is a lot of misinformation floating around about database-stored queries.
I would recommend using MySQL stored procedures if you're doing many static queries for data manipulation, especially if you're moving things from one table to another (i.e. moving from a live table to a historical table for whatever reason). There are drawbacks, of course, in that you'll have to keep a separate log of changes to them (you could in theory make a table that just holds changes to the stored procedures, which the DBAs update). If you have many different applications interfacing with the database - say a desktop program written in C# and a web program in PHP - it might be more beneficial to have some of your procedures stored in the database, as they are platform-independent.
This website has some interesting information on it you may find useful.
https://www.sitepoint.com/stored-procedures-mysql-php/
As always, build in a sandbox first, and test.
Try updating 100,000,000 records on a live system from a framework, and let me know how it goes. For small apps, SPs are not a must, but for large, serious systems they are a real asset.

MS Access antiquated? Anything new in 2011?

Our company has a database of 17,000 entries. We have used MS Access for over 10 years for our various mailings. Is there something new and better out there? I'm not a techie, so keep that in mind when answering. Our problems with Access are:
- no record of what was deleted,
- will not turn up a name in a search if caps or punctuation are not entered exactly,
- the de-duping process is complicated for us to understand,
- we'd like a more nimble program that we can access from more than one dedicated computer.
The only applications I know of that are comparable to Access are FileMaker Pro and the database component of the OpenOffice suite. FM Pro is a full-fledged product and gets good marks for ease of use from non-technical users, while Base is much less robust and is not nearly as easy for creating an application.
All of the answers recommending different databases really completely miss the point here -- the original question is about a data store and application builder, not just the data store.
To the specific problems:
PROBLEM 1: no record of what was deleted
This is a design error, not a flaw in Access. There is no database that really keeps a record of what's deleted unless someone programs logging of deleted data.
But backing up a bit, if you are asking this question it suggests that you've got people deleting things that shouldn't be deleted. There are two solutions:
regular backups. That would mean you could restore data from the last backup and likely recover most of the lost data. You need regular backups with any database, so this is not really something that is specific to Access.
design your database so records are never deleted, just marked deleted and then hidden in data entry forms and reports, etc. This is much more complicated, but is very often the preferred solution, as it preserves all the data (a sketch follows below).
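A minimal sketch of that soft-delete design (Access SQL; table and field names are invented, and the -- annotations are exposition only since Jet SQL has no comment syntax):

    -- One-time schema change: add a Yes/No flag.
    ALTER TABLE Contacts ADD COLUMN IsDeleted YESNO;

    -- "Deleting" a record becomes an update...
    UPDATE Contacts SET IsDeleted = True WHERE ContactID = 42;

    -- ...and forms and reports are based on a query that hides flagged rows:
    SELECT * FROM Contacts WHERE IsDeleted = False;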
Problem #2: will not turn up a name in a search if caps or punctuation are not entered exactly
There are two parts to this, one of which is understandable, and the other of which makes no sense.
punctuation -- databases are stupid. They can't tell that Mr. and Mister are the same thing, for instance. The solution is that for all data that needs to be entered in a regularized fashion, you use all possible methods to ensure that the user can only enter valid choices. The most common control for this is a dropdown list (i.e., "combo box"), which limits the choices the user has to the ones offered in the list. It ensures that all the data in the field conforms to a finite set of choices. There are other ways of maintaining data regularity, and one of those involves normalization. That process avoids the issue of repeatedly storing, say, a company name in multiple records -- instead you'd store the companies in a different table and just link your records to a single company record (usually done with a combo box, too, as sketched below). There are other controls that can be used to help ensure regularity of data entry, but that's the easiest.
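A sketch of that normalization step in Access DDL (table, field and constraint names are invented; the -- annotations are exposition only):

    -- Each company is stored once, in its own table...
    CREATE TABLE Companies (
        CompanyID COUNTER CONSTRAINT PK_Companies PRIMARY KEY,
        CompanyName TEXT(100)
    );

    -- ...and each contact just points at one company record.
    ALTER TABLE Contacts ADD COLUMN CompanyID LONG
        CONSTRAINT FK_Contacts_Companies REFERENCES Companies (CompanyID);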
capitalization -- this one makes no sense to me, as Access/Jet/ACE is completely case-insensitive. You'll have to explain more if you're looking for a solution to whatever problem you're encountering, as I can't conceive of a situation where you'd actually fail to find data because of differences in capitalization.
Problem #3: the de-duping process is complicated for us to understand
De-duping is a complicated process, because it's almost impossible for the computer to figure out which record among the candidates is the best one to keep. So, you want to make sure your database is designed so that it is impossible to accidentally introduce duplicate records. Indexing can help with this in certain kinds of situations, but when mailing lists are involved, you're dealing with people data which is almost impossible to model in a way where you have a unique natural key that will allow you to eliminate duplicates (this, too, is a very complicated topic).
So, you basically have to have a data entry process that checks the new record against the existing data and informs the user if there's a duplicate (or a near match). I do this all the time in my apps where the users enter people -- I use an unbound form where they type in the bare minimum of information to create a new record (usually some combination of lastname, firstname, company and email), and then I present a list of possible matches. I do strict and loose matching and rank by closeness of the match, with the closer matches at the top of the list (a sketch of such a ranked search follows below).
Then the user has to decide if there's a match, and is offered the opportunity to create the duplicate anyway (it's possible to have two people with the same name at the same company, of course), or instead to abandon adding the new record and go to one of the existing records that was presented as a possible duplicate.
This leaves it up to the user to read what's onscreen and make the decision about what is and isn't a duplicate. But it maximizes the possibility of the user knowing about the dupes and never accidentally creating a duplicate record.
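One way to sketch the strict-then-loose ranking as a single Access parameter query (field names are invented; with DAO the LIKE wildcard is *, with ADO it is %):

    PARAMETERS pLast Text ( 255 ), pFirst Text ( 255 );
    SELECT ContactID, LastName, FirstName, 1 AS MatchRank
    FROM Contacts
    WHERE LastName = pLast AND FirstName = pFirst
    UNION ALL
    SELECT ContactID, LastName, FirstName, 2 AS MatchRank
    FROM Contacts
    WHERE LastName LIKE pLast & '*'
      AND NOT (LastName = pLast AND FirstName = pFirst)
    ORDER BY MatchRank, LastName;

A real implementation would add looser passes (Soundex-style or partial matches) with higher rank numbers.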
Problem #4: we'd like a more nimble program that we can access from more than one dedicated computer.
This one confuses me. Access is multi-user out of the box (and has been from the very beginning, nearly 20 years ago). There is no limitation whatsoever to a single computer. There are things you have to do to make it work, such as splitting your database into two parts: one part with just the data tables, and the other part with your forms and reports and such (and links to the data tables in the other file). Then you keep the back end data file on one of the computers acting as a server, and give a copy of the front end (the reports, forms, etc.) to each user. This works very well, actually, and can easily support a couple of dozen users (or more, depending on what they are doing and how well your database is designed).
Basically, after all of this, I would tend to second #mwolfe02's answer, and agree with him that what you need is not a new database, but a database consultant who can design for you an application that will help you manage your mailing lists (and other needs) without you needing to get too deep into the weeds learning Access (or FileMaker or whatever). While it might seem more expensive up front, the end result should be a big productivity boost for all your users, as well as an application that will produce better output (because the data is cleaner and maintained better because of the improved data entry systems).
So, basically, you either need to spend money upfront on somebody with technical expertise who would design something that allows you to do better work (and more efficiently), or you need to invest time in upping your own technical skills. None of the alternatives to Access are going to resolve any of the issues you've raised without significant investment in interface design to further the goals you have (cleaner data, easier to find information, etc.).
At the risk of sounding snide, what you are really looking for is a consultant.
In the hands of a capable programmer, all of your issues with Access are easily handled. The problems you are having are not the result of using the wrong tool, but using that tool less than optimally.
Actually, if you are not a techie then Access is already the best tool for you. You will not find a more non-techie friendly way to build a data application from bottom to top.
That said, I'd say you have three options at this point:
Hire a competent database consultant to improve your application
Find commercial off-the-shelf (COTS) software that does what you need (I'm sure there are plenty of products to handle mailings; you'll need to research)
Learn about database normalization and building proper MS Access applications
If you can find a good program that does what you want then #2 above will maximize your Return on Investment (ROI). One caveat is that you'll need to convert all of your existing data, which may not be easy or even possible. Make sure you investigate that before you buy anything.
While it may be the most expensive option up-front, hiring a competent database consultant is probably your best option if you need a truly custom solution.
SQL Server sounds like a viable alternative for your scenario. If cost is a concern, you can always use SQL Server Express, which is free. Full-blown SQL Server provides a lot more functionality that might not be needed right away; Express is a lot simpler, as the number of features provided with it is much smaller. With either version, though, you will have a centralized store for your data and the ability to have all transactions recorded in the transaction log. Also, both can import data from an Access database.
The newest version of SQL Server is 2008 R2.
You probably want to take a look at modern databases. If you're into Microsoft-based products, start with SQL Server Express
EDIT: However, since I understand that you're not a programmer yourself, you'd probably be better off having someone experienced look into your technical problem more deeply, like the other answer suggests.
It sounds like you may want to consider a front-end for your existing Access data store. Microsoft has yet to replace Access per se, but they do have a new tool that sits a lot lower on the programming totem pole than some other options. Check out Visual Studio LightSwitch - http://www.microsoft.com/visualstudio/en-us/lightswitch.
It's fairly new (still in beta) but shows potential. With it, just as with any Visual Studio project, you can connect to an MS Access data source and design a front-end to interact with it. The plus side here is that the programming requirements are much lower than with straight-up Visual Studio (read: wizards).
Given that replacing your Access DB will require some front-end programming, you might look into VistaDB. It should allow your front end to be created in .NET with an XCopy database on the backend, without requiring a server. One plus is that it retains SQL Server syntax, so if you do decide to move to SQL Server you'll be one step ahead.
(Since you're not a techie and may not understand my previous statement, you might pass my answer on to the consultant/programmer/database guy who is going to do the work for you.)
http://www.vistadb.net/

Synchronizing MS Access database file

I am developing a database with about 10 tables in it. Basically it will be used in 2 or 3 distant geographical locations (let's call them A,B and C). The desired work flow will be as follows:
A, B and C should always have the same database. So when A makes any changes he should be able to send those changes over to B and C. Emailing the entire mdb file doesn't make sense, since it's 15+ MB in size. So I would like to send only the new records and changes to B and C. The changes B and C make should also be reflected to the other respective parties. How can I do this?
I have a few ideas in mind but don't know how to implement them.
Solution 'A' - export only the data tables into an xls file and email that. But importing the tables into the mdb file could be a bit complex, right? And the xls file will also keep getting bigger over time.
Solution 'B' - try to extract just the changes and email only the new parts (but how to extract just those?).
Solution 'C' - find some way of syncing all users onto the same database (storage) location. I was thinking of a front/back end splitting solution, storing the tables on a shared drive on the parent company's server (which is also overseas). But the network connection between locations is very slow, and I don't know how much bandwidth is needed for this.
Any recommendations would be most welcome!
In regard to sources for information on replication, start with my Jet Replication Wiki.
But I would never recommend Jet replication for your scenario. The only environment where I currently recommend it (and I've been doing replicated apps since 1997 and still have several in production use) is for supporting laptop users who have to work with live data in the field disconnected from any network, and return to the home office and synch direct with the mother ship.
The easiest solutions with an Access application would be hosting the app on Windows Terminal Server/Citrix, where the users would run it over a Remote Desktop Connection, or using SharePoint. The Terminal Server/Citrix solution has no accommodation for disconnected users, but SharePoint can accommodate offline usage and synch changes when connected. Access 2010 and SharePoint 2010 provide a host of new features, including better schema design, the equivalent of triggers, and greatly improved performance for large SharePoint lists, so it's a no-brainer to me that if you choose SharePoint you'd want to use A2010 and SharePoint 2010.
While it's possible to do what you want with Jet Replication, it requires a lot of setup on the server and client ends, and is relatively fragile (not in terms of data integrity if you're using indirect replication (as you should), but in terms of network reliability) -- there are too many moving parts and too many failure points.
Windows Terminal Server/Citrix is by far the simplest, with the fewest moving parts and completely centralized administration, and works very well for a relatively small investment.
SharePoint is more complicated than WTS/Citrix, but is less complex and more centralized than a Jet Replication solution.
If it were me, I'd probably go with WTS/Citrix if there was no need for disconnected usage, but I'd be salivating over trying out A2010/SharePoint 2010. If there was a need for disconnected usage, then I'd definitely go the SharePoint route.
You want to use "Jet Replication". See
MSDN Search for jro at http://social.msdn.microsoft.com/Search/en-US?query=jro&ac=8
MSDN Search for access replication at http://social.msdn.microsoft.com/Search/en-US?query=access%20replication&ac=3
It's been some time since I did it, but the indirect method of replication worked well for me in a similar situation.
It takes something to set up. The documentation used to be appalling for it, but I found articles written by Michael Kaplan (aka Michka) that walked me through how to do it.
If your final environment is going to be fairly stable, then use Access the whole way. If not, then I'd urge you to take HansUp's advice and go with SQL Server or SharePoint.
Do note: if you're working in Access 2007 or later, replication is not directly supported, and you'll have to roll your own bits and pieces. If you're using an earlier version, you'll be fine, but allow time for some head-scratching.

Proper way to program a Microsoft Access Backend Database in a Multiuser Environment

There is a prevailing opinion that regards Access as an unreliable backend database for concurrent use, especially for more than 20 concurrent users, due to the tendency of the database to become corrupted.
There is a minority opinion that says an Access database backend is perfectly stable and performant, provided that:
Your network has no problems, and
You write your program correctly.
My question is very specific: what does "Write your program correctly" mean? What are the requirements that you have to follow in order to prevent the database from being corrupted?
Edit: To be clear: The database is already split. Assume less than 25 users. I'm not interested in performance considerations, only database stability.
If you're looking for a great example of what programming practices you need to avoid, number one on the list is generally NOT running a split database. Number two is not placing the front end on each computer.
For example, the above poster had all kinds of problems, but you can darn well bet that their failing was either that they didn't have the database split, or they weren't placing the software (front end) on each computer.
As for the person having to resort to some weird locking mechanism: that's kind of strange and not required. Access (actually the JET data engine, now called ACE) has had a row-locking feature built in since Office 2000 came out.
I've been deploying applications written in Access commercially for about 12 years now. In all those years I have had one corruption occur from ONE customer.
Keep in mind that before Microsoft started pushing and selling SQL Server, they rated the JET database engine for about 50 users. While my clients don't have problems, in 9 out of 10 cases when someone has a problem, you find that number one on the list is that they failed to split the database, or they're not installing the front end part on each computer.
As for coding techniques or tips? Any design in which a reduced number of records is loaded into the form is a great start. In other words, you never want to simply throw up a form attached to a large table without restricting the records to be loaded into the form. This is probably the number one tip I can give here.
For example, it makes no sense to load an instant teller machine with everybody's account number, and THEN ask the user what account number to work on. In fact I asked an 80-year-old grandmother if this idea made any sense, and even she could figure that out. It makes far more sense to ask the user what account to work on, and then simply load in the one customer.
The same concept applies to a split database on a network. If you ask the user for the customer account number, and THEN open the form to the one record with a where clause, then even with 100,000 records in the back end, the form load time will be near instant, because only ONE RECORD will be dragged from the customers table down the network wire. A sketch of this follows below.
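The idea as a minimal sketch (Access SQL; table and field names are invented): bind the form to a parameter query that returns one row, rather than to the whole table, so only that row crosses the wire.

    PARAMETERS pAccountID Long;
    SELECT *
    FROM Customers
    WHERE AccountID = pAccountID;

The VBA equivalent is opening the form with a where clause, e.g. DoCmd.OpenForm "frmCustomer", , , "AccountID=" & lngID (form and variable names likewise invented).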
Also keep in mind that there are a good number of commercial applications in the marketplace, such as Simply Accounting, that use a JET back end (you can actually open Simply Accounting files with MS Access; they renamed the extensions to hide this fact, but it is an Access mdb file).
Some of my clients have 3-5 users with headsets on, running my reservation software all day long. Many have booked more than 40,000 customers, and in a 10-year period NONE of them have had a problem. (The one corruption example above was actually on a single-user system, believe it or not.)
So, I never had one service call due to the reliability of my Access products. On the other hand, this application only has 160 forms and about 30,000 lines of code. It has about 65 highly related and normalized tables (relations enforced, and also cascade deletes).
So there's no particular programming approach needed here for multi-user applications, the exception being good designs that reduce bandwidth requirements.
At the end of the day, it turns out that good applications are ones that do not load unnecessary records into a form. It also turns out that when you design your applications this way, then when you move your back end to SQL Server, you find this approach means very little work is needed to make your Access front end work great with a SQL Server back end.
At last count, I believe the estimate is close to 100 million Access users around the world. Access is by far the most popular desktop database engine out there, and for the most part users have trouble-free operation.
The only people who have operational problems on networks are those who have not split the database and have not placed the front end on each computer.
The only compelling answers so far seem to be to reduce network traffic, and make sure your hardware cannot fail.
I find these answers unsatisfactory for a number of reasons.
The network traffic position is contradictory. If the database can only handle a certain amount of network traffic, then people need sensible guidelines to gauge this, so they can intelligently choose a database that is appropriate.
Blaming Access database crashes on hardware failures is not a defensible position. Users will (rightly) claim that their other software doesn't suffer from these kinds of problems.
Access database corruption is not an imaginary problem. The people who regularly suggest that 5 to 20 users is the upper practical limit for Access applications are speaking from experience.
Also see the Corrupt Microsoft Access MDBs FAQ, which I've compiled over the years based on newsgroup postings and which predates Allen's page. That said, my clients have had very few corruptions over the years, have never lost data, and have never had to restore from backup.
I'm not sure what "write your program correctly" means in this context. I've read a few postings indicating this, but it's more about the implementation aspects. As Albert has pointed out, you have to split the database and give each user their own copy of the FE MDB/MDE. You can't access a backend MDB over a wireless network card, as they are too unstable. Same with a WAN, unless the WAN is very fast/wide and very stable. Beyond that, we suggest upsizing to SQL Server or using Terminal Services/Citrix.
I have several clients running 20 to 25 users all day long in the system. One MDB has 120 tables while another has 160. A few tables have 600,000 to 800,000 records. One client had 4 or 5 corruptions in five or seven years. We figured out the cause of all but two of those, and they were hardware-related in one way or another. At least one of these apps should've been upsized to SQL Server; however, that was cancelled on me by a Dilbert's PHB (Pointy-Haired Boss).
With very good code (wrapped in transactions with rollbacks), we had a call center with over 100 very active users at a time, back in Access 97 days.
Another app had a VB5 front end with Access Jet on portables that used RAS (yes, the old dial-up days) to reach a SQL Server 6 database - 250 concurrent users.
People using the wizard to link a form directly to a table, where the form is used to make edits... might be a problem.
Uncompleted transactions (e.g. a recordset that does not get closed properly) and a break in the network connection for any reason while a database is open (I have seen the power-saving features of a NIC cause corruption) are my number one causes.
I don't believe the number of users is a limitation of the MS Access Jet engine.
My understanding is that the Jet engine queues up concurrent maintenance transactions and applies them one at a time (like a printer queue does with print jobs). Via ODBC connectivity, and with an intelligent user application that manages recordset sizes, locks records open for edit, and only holds DB connections long enough to retrieve a record or save a record, this puts little strain on the Jet engine.
I look at an mdb file as a table. You can conceivably have hundreds of these in one database, or more. The SQL queries against these tables are random-access, and the naming convention of the mdb files lets the SQL built in the application program decide which table (mdb file) to access. MS Access databases can run to tens, hundreds or thousands of gigabytes this way and run smoothly. Proper indexing, and normalizing the data to prevent storing redundant data, also help.
I've never run into a database crash or concurrency issue with MS Access, ODBC, and a Win32 Perl GUI driving the application. I use no MS Access objects other than tables, indexes, and perhaps views/queries. And yes, I store the database on a dedicated PC and install my application software on each workstation PC.

Is MS Access (JET) suitable for multiuser access?

I have a product designed to be a desktop product using an MS Access file as its DB.
Now, some users need to install it on a few PCs (let's say 2 or 3) and SHARE the database.
I thought to place the MS Access file in a shared folder and access it from each PC, but... is the JET engine designed for multi-user access?
Any tips or things to be aware of when doing this?
EDIT:
The app is a .NET one, using the database purely as storage (not using Access as the frontend).
There is so much misinformation in the answers in this thread that I don't know where to start. I just spent 4 points in reputation voting down the answers with misleading and wrong information in them.
the Jet database engine (which is all that's involved here, as the OP clarified with an edit) is by default multi-user -- it was built from the ground up to be that way.
sharing a Jet data store is very reliable when the network is not substandard. This means not a WAN and not wireless, because the bandwidth has to be sufficient for Jet to maintain the LDB file (for multi-user locking), which means a ping by your local PC's instance of the Jet database engine once per second (with default settings), and because Jet can't recover from a dropped connection (which is quite common in a wireless environment).
the situation where Access falls down is when a front-end Access application MDB is shared (which is not the case for this poster). The reason it fails is because you're sharing things that can't be reliably shared and have no reason to be shared. Because of the way Access objects are stored in an MDB file (the entire Access project is stored in a single BLOB field in one record in one of the system tables), it's very prone to corruption if multiple users open it. In my estimation, sharing an Access front end (or an unsplit MDB with the tables and forms/reports/etc. all in one MDB) is the source for 99.99% of corruptions of Access/Jet files.
My basic answer to the OP's question is that, yes, Jet would be a great data store for an app of that size. However, if there's any possibility at all for the user population to grow above 25, then it might be better to start off from scratch with a database engine that is more robust at higher user populations.
It's perfectly feasible to do this; but you MUST split the database into a front end (with forms, queries, code) and a back end (data only). Every user has to have the front end on their own computer, linking to the shared back end.
It will be slow as Jet generates a ton of network traffic. Microsoft is also gradually deprecating Access as a development tool. Access 2007, for instance, has a far less sophisticated security model than Access 2003.
As a long time Access developer I am gradually moving away from Access.
Don't do it... the Jet database claims to be able to support multiple users, but the file could EASILY become locked by a user or admin, leaving all of your users unable to use the database. And it is incredibly easy to use the upsizing wizard to convert your Access file to a SQL Express database instead.
...and SQL Express is free. Your upgrade path from there to a full instance of SQL Server or some other commercial database is simple.
With 2 or 3 users on a reliable local network you should be fine, as long as you back the network drive up often.
Avoid any bit/bool fields in your tables - Jet has some nasty corruption issues with concurrent access to them.
Also bear in mind that all locking in Access is optimistic: you will get dirty reads occasionally.
MS Access is designed for small office scenarios like this: non-critical light office use that you can set up with the minimum of programming.
Expect the data file to get corrupted every now and then - back up regularly.
The ACE/Jet engine is a great piece of software but, while it was designed to support multiple users, actually supporting multiple users in practice is not one of its strong points. The last straw for me was when they removed user-level security (ULS) from the engine. I suppose I can imagine a simple database situation where all users have the same privileges (i.e. admin access to all database objects), but IMO that is not supporting multiple users well, as compared with, say, MS SQL Server.
Yes, it supports access by multiple (that is, a small, workgroup-sized, number) of users over a network file share. However, the file share architecture is simply not ideal for supporting simultaneous writing to a file by multiple users. A client/server database system (SQL Server, etc.) generally provides better performance, security, and reliability.
As a sysadmin, please don't use Access for anything multi-user. Do what Jeff Fritz suggests and use a database that is designed for multi-user access. You may think that your little app is only going to be shared between a few people, but I guarantee you that it'll have a hundred users and fifty new features by the end of the year. And if those are all Access, rather than VB/SQL Express, your Ops people will break into your house one night and slit your throat.
Access isn't a client-server app, and provides very little in the way of backup/restore, or any automation whatsoever. Not to mention the interface and the DB are very tightly coupled... so if you ever want to turn this into a web app, or make any serious changes, your world will be filled with pain.
It's been done so many times by so many generic software engineers, and we've seen .mdb files go corrupt in multi-user situations. If so many experienced specialist Access developers can get it right, as I'm inclined to believe, then we generalists must be doing something wrong, and that something must be fairly fundamental yet non-obvious for so many of us to run away from the thing screaming 'Never again!' So if you consider yourself an experienced specialist Access developer (or you know how to find one), then go for it. But if you are a generalist or casual user looking for a lightweight back end, then I suggest you look elsewhere (SQL Server is good, IMO).
If your users can wait twice as long for an application with half of the features they want, then don't use Access.
Jet does not have the sophisticated lock logic required to support multi-user scenarios. You can get away with using it if your application is mostly reads and low-contention.
I've seen websites support many users, but I would recommend SQL Express unless you have a compelling reason to choose Jet.
I can tell you from painful experience that Jet 3/3.5 was not reliable. I saw it crash frequently under light load, and when there were crashes you risked data corruption. It used to be extremely sensitive to any power problems, to any client crashing against it (even the UI linked to the mdb), and to any LAN problems. More recent versions of Jet might be better, but switching to SQL Server is clearly the way to go, in my opinion, for anything other than trivial data entry with a small number of users. SQL Express is free and you don't really lose anything, especially if your UI is in .NET rather than Access.
EDIT: Microsoft doesn't think you should rely on Jet 4 either.
from: http://support.microsoft.com/kb/303528
Microsoft Jet is not intended for use with high-stress server applications, high-concurrency server applications, or 24 hours a day, seven days a week server applications. This includes server applications, such as Web applications, commerce applications, transactional applications, and messaging server applications. For these types of applications, the best solution is to switch to a true client/server-based database system, such as Microsoft Data Engine (MSDE) or Microsoft SQL Server. When you use Microsoft Jet in high-stress applications such as Microsoft Internet Information Server (IIS), you may experience any one of the following problems:
Database corruption
Stability issues, such as IIS crashing or locking up
Sudden failure or persistent failure of the driver to connect to a valid database that requires re-starting the IIS service
Just check whether the db lock file (.ldb) is there or not. If it is there, somebody is accessing the file; if it is not there, nobody is currently accessing it and you may proceed. Otherwise, wait until the .ldb file no longer exists.
If you use a Terminal Server, the performance is really good. We have several solutions with up to 50 users on one Access mdb. Development is really fast and deployment is easy.
Problems:
everybody can copy the data mdb
no access rights
limited stored procedures
optimizing (compact and repair) is only possible when nobody is using the database
2 GB size limit!