I am currently using VB6 to connect to an MS Access database via DAO, and I'm experiencing a very noticeable slowdown when a second user connects to the database.
Here are the steps to reproduce:
Open the database from Computer A by logging into the software.
Add records to the database via the software (takes about 0.4 seconds).
A second user logs into the software from Computer B, i.e., this opens the database and displays today's transactions, but the user does nothing else.
On Computer A, repeat the operation of adding records; the operation now takes approximately 6 seconds.
Further info…
the operation continues to take approximately 6 seconds, even after Computer B logs out of the software
if you close and reopen the application on Computer A, the operation returns to taking only 0.4 seconds to execute!
Any help would be greatly appreciated!
Thanks!
That is the way MS Access works. While it kind of supports multiple users, and kind of supports placing the DB on a file share so multiple PCs can access it, it does neither really well. And if you are doing both (multi-user and over a network to a file share), then I feel your pain.
The answer is to run the Upsizing Wizard and convert this to an MS SQL Server instance. MS SQL Server Express Edition is a good choice to replace Access in this case. Note that you can still keep all of the code, reports, etc. you have in Access; only the data needs to be moved.
Just to be clear on the differences: with MS Access, when you read data from the database, all of the data required to perform your query is read from a file by your program; no server-side processing is done. If that data resides on a network, you are pulling it across your network. If there are multiple users, you have the additional overhead of locking: each user's program/process effectively dialogs with the other users' programs/processes via file I/O (writing lock info into the networked lock file or files). And if the network I/O times out or has other issues, those files can become corrupted.
In SQL Server, it is the SQL Server engine that manages the data requests and only returns the data required. It also manages the locks and can detect when a client has disconnected or timed out to clean up, which reduces issues caused by multiple users on a network.
We had this problem with our VB3 / Jet DB 2.5 application when we transitioned to using newer file servers.
The problem is "opportunistic locking" : http://support.microsoft.com/kb/296264?wa=wsignin1.0
Albert is probably describing the same thing; the server will permit one client exclusive access to a file, but when another chimes in, this exclusive access will "thrash" between them, causing delays as the client holding the oplock flushes all its local cache to the server before the other client can access the file.
This may also be why you're getting good performance with one client - if it takes an oplock, it can cache all the data locally.
This can also cause some nasty corruption if one of your clients has a power failure or drops off the network, because this flushing of the local cache to the server can be interrupted.
You used to be able to disable this (on the client - so you need to service ALL the clients) on Windows 2000 and XP as per the article, but after Vista SP2 it seems to be impossible.
The comments about not using Access / Jet DB as a multi-user database are essentially correct - it's not a good architectural choice, especially in light of the above. DAO is also an obsolete library, even in the obsolete VB6. ADODB is a better choice for VB6 and should allow you some measure of database independence, depending on how your app is written.
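For example, a minimal ADODB sketch in VB6; the UNC path and table name are hypothetical, and a project reference to Microsoft ActiveX Data Objects is assumed. The database-independence point is that, in principle, only the connection string changes if you later move to a server engine.

    ' Minimal ADODB sketch - hypothetical path and table name.
    Dim cn As ADODB.Connection
    Dim rs As ADODB.Recordset

    Set cn = New ADODB.Connection
    ' To move to SQL Server later, largely only this string changes.
    cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
            "Data Source=\\server\share\backend.mdb;"

    Set rs = New ADODB.Recordset
    rs.Open "SELECT * FROM tblTransactions", cn, adOpenKeyset, adLockOptimistic
    ' ... work with rs ...
    rs.Close
    cn.Close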
Since, as you pointed out, you get decent performance with one user on the system, your application is clearly not pulling too much data over the network, and we can't blame network speed here.
What is actually occurring is that the Windows file-sharing system is switching from single-user file-share mode into multi-user file-share mode. This switching of file modes causes a significant delay, and it also means that the second (or additional) user has to work out and set up locks on the file.
To remove this noticeable delay, simply open what we call a persistent connection at the start of your application. A persistent connection is simply something that forces the network connection to remain open at all times, so the significant delay of switching between the two file-share modes is eliminated. You will then find that performance with two users is about the same as with one (assuming one user is idle and not adding network load). So at application startup, open a back-end table into a global variable and KEEP that table open at all times.
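A minimal sketch of such a persistent connection in VB6/DAO, assuming a hypothetical back-end path and table name:

    ' Module-level globals hold the back end open for the life of the app.
    Public gDb As DAO.Database
    Public gRs As DAO.Recordset

    Public Sub OpenPersistentConnection()
        ' Call once at startup; do not close these until the app exits.
        Set gDb = DBEngine.OpenDatabase("\\server\share\backend.mdb")
        ' Keeping any recordset open also keeps the .ldb lock file open.
        Set gRs = gDb.OpenRecordset("tblSettings", dbOpenTable)
    End Sub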
Background
We use MS Access to manage some of our work, but oftentimes our operations are in highly remote locations where cell or satellite signal is the most reliable form of connectivity. But the service isn't fantastic: coverage will drop, more often than not, in the middle of an update.
Setup
The current setup has a back-end file, stored on a cloud-based server, which holds only the tables and a few simple routines, and a front-end file stored on the user's machine. To make updating and usage of the system feasible at all, we had to create copies of the tables on the front end so that the user could run updates effectively, and then provide a sync button that essentially appends the information from the local table to the cloud table. Otherwise the user would have to wait for the server to respond with each entry, which was extremely slow.
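The sync step described is essentially a batch append. A minimal sketch in Access VBA, with hypothetical table names (tblLocal is the local staging table, tblCloud the linked back-end table):

    Public Sub SyncToCloud()
        Dim db As DAO.Database
        Set db = CurrentDb
        ' Append all local rows to the linked back-end table in one batch,
        ' rather than writing each entry individually over the slow link.
        db.Execute "INSERT INTO tblCloud SELECT * FROM tblLocal", dbFailOnError
        ' Clear the local staging table only after the append succeeds.
        db.Execute "DELETE FROM tblLocal", dbFailOnError
    End Sub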
Problem
However, nearly any time they run this process, it stops mid-update and corrupts the file. So we switched to a simple Excel export that they can email to the main office; someone at the office imports the file to update their local tables and syncs to the cloud tables in the back-end file.
General Notes
I believe we've narrowed it down to an issue between MS Access and poor internet connectivity, because all systems work when the internet connection is even just reasonably decent. Are there any workarounds available that will resolve this issue?
I have an MS Access system on a network with 15 users. The front end is installed on each user's C:\ drive and the back end on a mapped drive X:. The front end is about 8 MB, the back end around 25 MB.
Since day 1, one user constantly (every 30 minutes at best), and some other users occasionally, get a "network interrupted" error. Apart from being quite annoying to the users, this causes a temporarily masked/hidden issue where update queries run without error on two tables but do not actually update/insert data.
A compact and repair resolves the issue, but it is not feasible to run daily, as users have the system open throughout the day. This is such a headache that I've had to write code to check that the data has actually been written after each query runs, to detect whether the issue is present.
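A check of that sort can be as simple as re-reading the row after the update. A minimal sketch in Access VBA, with hypothetical table and field names:

    Public Function UpdateWasApplied(jobId As Long, newStatus As String) As Boolean
        Dim db As DAO.Database
        Set db = CurrentDb
        db.Execute "UPDATE tblJobs SET Status = '" & newStatus & _
                   "' WHERE JobID = " & jobId, dbFailOnError
        ' Re-read the row to confirm the write actually landed.
        UpdateWasApplied = (Nz(DLookup("Status", "tblJobs", _
                               "JobID = " & jobId), "") = newStatus)
    End Function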
Both myself and IT are third parties to the business, and we are in the difficult opposing positions of "it's your network" versus "it's your database". Thankfully it's all calm and peaceful, but it's not getting a solution for the client.
I've installed MS Access FE/BE systems on over a hundred networks over the last 10 years and have only ever seen this same issue on a peer-to-peer network. I'm aware that Access is very picky about network stability, but I'm faced with users who don't believe there is a problem with the network, because their email works and the internet radio doesn't drop out.
What I'm hoping to get assistance with here is either a tool/method that can test a network for stability/robustness with MS Access, and prove one of us right or wrong, or perhaps some advice on how I could move forward on this deadlock.
Thanks
I have seen a similar situation with damaged cables. A client of mine had mice that chewed through part of a cable, causing intermittent interruptions. In another case, a cubicle wall was sitting on top of the network cable (poor cable installation) and causing a short.
To bypass Access's need for a constant network connection, I have my systems create local temporary tables for any view, and a local, one-record table for any detail form that is being actively edited. Once the user hits 'save', it runs the update query, and once that's done, no active connection with the server is needed again. This lets me run much faster Access systems and eliminated the need for stable wireless or Ethernet. It does require quite a bit of structural change at first - you will have to insert code to create the local temporary tables in the FE file, plus an update sequence in the forms' AfterUpdate events - but the time it has saved me and my users has been tremendous.
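A minimal sketch of that pattern in Access VBA, with hypothetical table names (tblOrders is the linked server table, tmpOrder the local one-record editing table):

    Public Sub LoadRecordLocally(orderId As Long)
        Dim db As DAO.Database
        Set db = CurrentDb
        db.Execute "DELETE FROM tmpOrder", dbFailOnError
        ' Pull just the one record being edited into the local table.
        db.Execute "INSERT INTO tmpOrder SELECT * FROM tblOrders " & _
                   "WHERE OrderID = " & orderId, dbFailOnError
    End Sub

    Public Sub SaveRecordToServer()
        Dim db As DAO.Database
        Set db = CurrentDb
        ' Push the edited values back in one short-lived operation, so no
        ' persistent server connection is held while the user edits.
        db.Execute "UPDATE tblOrders INNER JOIN tmpOrder " & _
                   "ON tblOrders.OrderID = tmpOrder.OrderID " & _
                   "SET tblOrders.Status = tmpOrder.Status", dbFailOnError
    End Sub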
To put it in perspective, I have 1200+ users in the same Access database in a given week (sometimes 400+ in a day), and since they only 'pull' data from the server to make local table copies, there are only a handful of connections at any one time. My users can now dock and undock from their desks without needing to close the database.
I am developing an application which uses an MS Access database (.mdb, not my decision) as its back end. Recently I came across someone suggesting that using the Jet engine over a WAN is not really a good idea, with a high risk of data corruption. Since my application should be doing just that (connecting to a database on a NAS (EDIT: not a NAS, a shared network drive)), I got worried. Is it really that risky? If so, is there any workaround, or is an MS Access database just unusable for that kind of application?
EDIT
The front end is a .NET Windows desktop application in C# (WPF). The system does not have many users, 10 at most. Most of the time they will approach the database from the LAN, and 99% of the writing to the database will be done within the LAN (from the company's premises). However, there are some cases where they will connect to the NAS (EDIT: not a NAS, a shared network drive) from outside the company via the network (from home).
If you have a 100 Mb/s fibre, it will be OK, but if your line is, say, an xDSL line, it is generally an absolute no-no.
Convince the powers that be to move the backend to a server engine like SQL Server where the Express version is free.
The scenario you describe is not a good fit for having an Access database as the back-end. The WAN users could very well find the application slow, but the NAS is the real cause for concern regarding corruption, and that would affect both LAN and WAN users.
Many (most?) NAS devices run on Linux and use Samba to provide Windows file-sharing services. The Access Database Engine apparently uses some low-level features of "real" Windows file sharing that Samba does not always fully implement (ref: here).
In fact, the only time I've seen repeated corruption problems with a shared Access back-end (and a properly distributed front-end) was when a client moved their file shares from an older Windows server to a newer NAS device. The Access application continued to work for the most part, but every few months they would find that the primary keys of some tables would disappear after they did a Compact and Repair on the back-end database file. That never happened while their file share was on the Windows server.
Splitting a front end from a back end removes the majority of the corruption risk. Of course, with Access there's always some possibility, and if you're looking for something that reduces the risk to close to nil, you might want to consider SQL Server or MySQL. However, using Access is fine as long as you take proper precautions.
For example, you might want to look into record locking on tables that will be edited, to prevent multiple simultaneous writes. Backing up your DB on a regular basis is always good, too.
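In DAO, for instance, pessimistic locking is a recordset setting. A minimal sketch, shown in VB6/DAO for brevity (the front end here is C#, but the Jet-level locking behaviour is the same), with a hypothetical path, table, and key:

    Dim db As DAO.Database
    Dim rs As DAO.Recordset

    Set db = DBEngine.OpenDatabase("\\server\share\backend.mdb")
    Set rs = db.OpenRecordset("tblInvoices", dbOpenDynaset)
    rs.LockEdits = True          ' Pessimistic: lock taken at .Edit, released at .Update
    rs.FindFirst "InvoiceID = 42"
    If Not rs.NoMatch Then
        rs.Edit                  ' Other users cannot edit this record/page now
        rs!Status = "Paid"
        rs.Update
    End If
    rs.Close
    db.Close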
If I use MS Access as the back end of client-server type software, and the database file is sent between client and server, will it create any problems in further database handling, transfer speed, or performance compared to SQL Server?
In my experience there are four major differences between MS Access MDB files and SQL Server in a small LAN-based environment (where "small" means 20 users or fewer, with no more than 10 concurrent user sessions):
1. Security. Use of an Access MDB file requires that the client have direct access to the MDB file. This architecture can't be truly secure if you need to limit data access for some users. Access user-level security can be cracked. You can use file-level or file-share-level security in the OS if that satisfies your security requirements.
2. MDB files are subject to corruption as a result of network errors. The only time I've seen a SQL Server database become corrupted was as a result of hardware failure on the server.
3. The practical upper limit for an MDB file is around 25 users, and Access is sensitive to high transaction volume for inserts, updates, and deletes.
4. In most cases with Access you'll need to have all users sign out of the database to make any changes to the structure of the tables. This is much less convenient than using DDL scripts in SQL Server. If you decide to go with Access, I'd recommend getting a copy of LDBView so you can tell who you'll have to kick out of the database each time you make a routine change to the data structure.
There is a case to be made for a back-end MDB file if the user audience is small and the simplicity of deployment is appealing to the client organization. But if you are starting a new project, the advantages of a SQL Server back end should be carefully considered. If you have a large user audience, then SQL Server is strongly recommended.
It is unlikely that you will have a problem with transfer speed when using an MDB file with an up-to-date version of MS Access and well configured LAN.
If you use MS Access as your back-end database, it isn't a client-server solution. Jet databases (the kind MS Access creates) are file-based, not client-server.
If the bandwidth between the client and the DB is high (like another server on the same network) then it shouldn't pose any major performance problems related to transfer speed. However, if you were connecting over a slow WAN link to the DB from the client, it definitely could introduce a performance bottleneck.
I have an article on using Access over a network, and especially over a WAN, here:
http://www.kallal.ca/Wan/Wans.html
Some good answers here already. But something that is often overlooked is that there are scenarios where a lightweight MDB gives you much more performance than a heavyweight SQL Server. For example, if multi-user access is not so important but you have to do a lot of batch processing on your data, using MDB files can be much faster. On the other hand, if you have a lot of classical OLTP processing with many users, you can benefit from a real client/server database.
I have a website (www.soltrago.com) where I use an .mdb Microsoft Access database to retrieve data when a page loads. I use a DSN-less connection to connect to the database. My question is: how many simultaneous connections can I have to my webpage? That is, how many people per second can view my webpage? Thanks!
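For reference, a DSN-less connection simply embeds the provider and file path in the connection string rather than using a registered ODBC Data Source Name. A minimal sketch, shown in VB6 syntax with a hypothetical path (in classic ASP the same string works with a late-bound CreateObject("ADODB.Connection")):

    Dim cn As ADODB.Connection
    Set cn = New ADODB.Connection
    ' DSN-less: no ODBC DSN is registered on the machine; the provider
    ' and the .mdb path live entirely in the connection string.
    cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
            "Data Source=C:\inetpub\data\site.mdb;"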
There is no single answer to this question.
For instance, I could say... 25, and it could be true, in the sense that in some cases, you can run 25 simultaneous users against the database.
Or I could say 150, and it could be true.
The problem is, I could also say 75, and it would not be true, basically because the way you're using the database has serious performance problems.
Or I could say 2, and it wouldn't be true either, because every connection you make locks the same data, and thus you end up serializing every access because every other user has to wait for the first one to complete his transaction and thus unlock the data.
How many users is a function of the upper limit of the database engine and of the way you're using the database. The page I linked to in my comment says the upper limit is 255. I can't vouch for that, but it sounds plausible, simply because Access isn't meant to be a multi-user database. Sure, it handles it, but it's not meant to serve thousands of users.
Your best bet is actually to get some kind of load tester application and see when your application either starts having serious performance problems, or perhaps even just crashes.
Other than that, nobody can tell you the right answer.
I wonder why you don't just use SQL Server Express Edition instead - a far more scalable engine, but still free.
(edit)
As an added bonus; when your site "takes off" and you need more grunt (bigger, more CPUs, more memory, more failover, clustering, etc), you just buy a bigger box and a SQL Server license and you're set to go; you don't have that luxury with mdb.
Your web app probably has just one connection to the MS Access file. The number of web pages that can be served is different.
Your title and question do not match.
MS Access is not a database engine in the server sense. You do not prepare a query, submit it to an engine, and get a result (say, per web page) - the approach that scales well because it's all stateless.
In this case, it's basically a structured file (.mdb) recognised by Jet (the engine used by msaccess.exe), so your web app holds the file open from when it starts.