I have an MS Access system on a network with 15 users. The front end is installed on each user's C:\ drive and the BE on a mapped drive X:. The front end is about 8 MB, the back end around 25 MB.
Since day 1, one user constantly (every 30 minutes at best) and some other users occasionally get a "network interrupted" error. Apart from being quite annoying to the users, this causes a temporarily masked/hidden issue where update queries run without error on 2 tables but do not actually update/insert data.
A compact and repair resolves the issue, but it is not feasible to run one daily as users have the system open throughout the day. This is such a headache that I've had to write code to check, after each query runs, that the data has actually been written, in order to detect whether the issue is present.
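For illustration, here is a minimal VBA sketch of that kind of post-write check (the table, field and key values are hypothetical, not from the actual system):

    ' Run the update, then read the row back to confirm the write landed.
    ' tblOrders, Status and OrderID are illustrative names only.
    Dim db As DAO.Database
    Set db = CurrentDb
    db.Execute "UPDATE tblOrders SET Status = 'Posted' " & _
               "WHERE OrderID = 123", dbFailOnError
    If Nz(DLookup("Status", "tblOrders", "OrderID = 123"), "") <> "Posted" Then
        MsgBox "Write verification failed - the silent-update issue is present."
    End If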
Both myself and IT are third parties to the business, and we are in the difficult opposing positions of "it's your network" and "it's your database". Thankfully it's all calm and peaceful, but it's not getting a solution for the client.
I've installed MS Access FE/BE systems on over a hundred networks over the last 10 years and have only ever seen the same issue on a peer-to-peer network. I'm aware that Access is very picky about network stability, but I am faced with users who don't believe there is a problem with the network, as their email works and the internet radio doesn't drop out.
What I'm hoping to get assistance with here is either a tool/method that can test a network for the stability/robustness MS Access needs and prove one of us right or wrong, or perhaps some advice on how I could move forward from this deadlock.
Thanks
I have seen a similar instance with damaged cables. A client of mine had mice that chewed through part of the cable, causing an intermittent interruption. Also, in another case, a cubicle wall was on top of the network cable (poor cable installation) and causing a short.
In order to bypass Access's need for a constant network connection, I have my systems create local temporary tables for any view, and a local, 1-record table for any detail form that is being actively edited. Once the user hits 'save' it runs the update query, and once that is done, no active connection with the server is needed again. This lets me run much faster Access systems and eliminated the need for stable wireless or Ethernet. It does require quite a bit of structural change at first - you have to insert code to create the local temporary tables in the FE file, and also code an UPDATE sequence in the forms' AfterUpdate events - but the time it has saved me and my users has been tremendous.
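A minimal sketch of that pull/push pattern (the table, field and form names here are hypothetical):

    ' Pull one record from the linked back end into a local temp table.
    ' tblCustomers, tmpCustomer, CustomerID etc. are illustrative names.
    Sub PullRecordLocal(lngID As Long)
        CurrentDb.Execute "DELETE FROM tmpCustomer", dbFailOnError
        CurrentDb.Execute "INSERT INTO tmpCustomer " & _
            "SELECT * FROM tblCustomers WHERE CustomerID = " & lngID, _
            dbFailOnError
    End Sub

    ' Called from the form's save/AfterUpdate sequence: push the edited
    ' local record back to the server in one short-lived operation.
    Sub PushRecordBack()
        CurrentDb.Execute "UPDATE tblCustomers AS t " & _
            "INNER JOIN tmpCustomer AS s ON t.CustomerID = s.CustomerID " & _
            "SET t.CustName = s.CustName, t.Phone = s.Phone", dbFailOnError
    End Sub

The detail form is bound to the local tmpCustomer table, so no server connection is held while the user types.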
To put it in perspective, I have 1200+ users in the same Access database in a given week (sometimes 400+ in a day), and since they only 'pull' data from the server to make local table copies, there are only a handful of connections at any one time. My users can now dock and undock from their desks without needing to close the database.
I am in a new role at a company where they are using MS Access with a MySQL database running on a server that is physically in our office, behind our private network. I have been hired to develop an entirely new application to bring the company up to modern standards. As we move features/modules to the new Angular/NodeJs app I am writing, users still need to use the UI provided by MS Access against the new production database, which will be on AWS Lightsail.
However, when I change the MS Access configuration - the ODBC connections - to point to the AWS Lightsail MySQL DB, everything (reports especially) in the MS Access UI becomes slower than when it was pointed to the in-office, in-network MySQL DB.
I am going to the "Linked Table Manager" and changing the "Connection String".
Somewhere I read that I should make sure SSLMODE is disabled to remove any performance issues.
    DSN=AWS_Dev;DATABASE=ECSDataTables;PORT=3306;SERVER=IP_ADDRESS;SSLMODE=DISABLED;
I went through the normal "ODBC Data Source Administrator" in Windows and added the MySQL AWS host, user/pass as normal.
I have done extensive research and have found several sources, but none are really helping.
I have been asked not to spend too much time trying to fix/optimize anything in MS Access, as my focus should be on the new application, but it's hard to believe that a simple switch of MySQL databases can have such an impact. In the new Angular/NodeJs application everything runs very fast, so I know it's not the AWS MySQL DB itself.
Am I missing something - are there any configurations I should be doing in MS Access? I have not used VB in about a decade, so I am hoping something can be done without the need for too much technical background in this matter.
Thank You.
Well, the issue is that your local area network (LAN) is about 10 times or more faster than your internet connection.
Your low-cost office network is very likely a 1 gigabit network (100BaseT is rare).
However, your high-speed internet connection is likely, say, 10 Mbit. So you are going from 1000 to 10 - that is 100 times slower. So 3 seconds now becomes 300 seconds.
I mean, with such a slow connection speed, no surprise should exist here.
What you can do for any report that is a complex join of client-side SQL is convert the SQL query to a server-side view and link to that view. Now use that view as the base source for the report. The existing VBA filters that you always use (right???) to launch a report will then only pull the data the report needs down the network pipe. Access reports (and forms) only pull down what you ask for - not the whole table. So any filter you have (use the WHERE clause of the OpenReport command) will be respected. In short, you either have to pull less data, or find a connection with a speed rating similar to your local area network (and such high-speed internet is rare).
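A minimal sketch of that launch pattern (the report and field names are hypothetical): if rptSales is bound to a linked server-side view, the WhereCondition argument of OpenReport means only the matching rows come down the wire:

    ' rptSales is bound to a linked MySQL view; only rows matching the
    ' WHERE clause are pulled across the network.
    Dim lngCustomerID As Long
    lngCustomerID = 1234    ' illustrative value
    DoCmd.OpenReport "rptSales", acViewPreview, , _
        "InvoiceDate >= #01/01/2020# AND CustomerID = " & lngCustomerID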
The LAN vs WAN concept and speed issue is outlined in this article:
http://www.kallal.ca//Wan/Wans.html
While the above article is very old, internet speeds are about 10x faster today - but so is the LAN, which has gone from 100BaseT to 1 gigabit, so the ratio still holds.
So, things are slower because you are working with a VASTLY slower connection speed. Slower is slower!!!
Edit
While, as noted, Access will only pull what you ask for, the case where the Access client does a poor job is SQL queries that involve multiple tables - often the client will mess up what it sends server side. As noted, the solution in this case is to adopt server-side views. This means you move the client-side query that drives the report into a view, and link to that view. You won't gain much performance for a single-table query, but for any report based on complex multi-table joins, using a view forces the SQL and join work to occur on the SQL server side, and this can result in huge performance gains.
Well, this is a case where limited knowledge just produces worse results than expected.
Over the years, top DBAs have come to just "hate" MS Access... they see only problems, issues, you name it... the closing sentence is always "switch to a real database engine".
Well, this has created the faulty assumption that MS SQL, MySQL, Oracle, PostgreSQL and the rest of the database engines are some kind of "magic pill"... you just switch the BE to one of the above DBEs and all your problems get resolved... just like that.
DBE -- Database Engine (if you would like to call it something else, feel free)
WRONG
MS Access follows a different philosophy from those DBEs, and it does its job damn well given all its shortcomings and the major fact that it is a file-based DBE.
Switching to another DBE will give amazing performance IF and ONLY IF you respect the fact that you are no longer working with Access... just don't treat e.g. MySQL as your file repository, and DON'T just link the tables and expect everything to go well...
Want to keep blaming Access? Just switch over to another platform (.NET, PHP, JS, Java... make your pick)... and build a small application that pulls ALL of your data in a single go like you do with Access. It will certainly crash or go Not Responding...
So stop blaming Access... start reading up on how to make the most of the two worlds, and I am pretty sure the results will amaze you... but again I must stress that this is not a "magic pill" solution... it involves a LOT of work: planning, data manipulation, normalization, code changes and, above all, a change of philosophy...
I would recommend starting the journey by picking up this book: https://www.amazon.com/Microsoft-Access-Developers-Guide-Server/dp/0672319446 (I don't want complaints that it's old and covers MS SQL... just read first and complain later)
Also take a look at an old benchmark-like video I made some years ago: https://www.linkedin.com/posts/tsgiannis_a-small-demo-of-connecting-ms-access-fe-to-activity-6392696633531858944-dsuU
Last but not least... years ago I ran some tests to see what the "magic pill" would do to my company's applications... the simplest test of all:
A simple table with a few fields but around 8 million records... just display it.
Access BE (local) --> it would run in 1-2 seconds... that's fast
Access BE (network share) --> it would run in a few seconds... not so fast, but usable
MSSQL BE (linked table) --> sometimes it got the results, sometimes it wouldn't... slow... really slow... like you make a coffee and go for a small walk
MySQL BE (linked table) --> it never finished... timed out or "Not Responding"
PostgreSQL BE (linked table) --> it never finished... timed out or "Not Responding"
So stop blaming Access...start working and get amazed....
My team and I utilize MS Access databases across a network that disconnects frequently. Whenever a disconnect happens, there's a cascade of failure messages in Access and any records mid-entry are lost.
We know what's causing this, but it's beyond the level of my authority to fix. It's related to Windows 10 re-mapping the network drive whenever there's a group policy update, causing it to 'lose' the network drive for a split-second; long enough to disconnect the database.
As resolving the network disconnects will involve the IT department escalating it to the national level (Government computer system), I need a fix "now" so my form files don't generate a dozen errors and need to be restarted every time this happens.
What settings or code could I use to harden the forms files against network disconnects?
Edit: To answer questions
The data is kept in a separate file from the forms, allowing multiple people to work on the database at the same time.
I believe it's pointing to a drive letter for where the data file is. I don't know how to set up a server address location. My method of connecting was to browse to the file.
I am currently using VB6 to connect to an MS Access DB using DAO, and I'm experiencing a very noticeable speed reduction when a 2nd user connects to the database.
Here are the steps to reproduce:
Open the Database from computer A by logging into the software
Add records to the database via the software (takes about .4 seconds)
A second user logs into the software (Computer B), i.e. this opens the database and displays today's transactions, but the user does nothing else
On Computer A, repeat the operation of adding records; now the operation takes approximately 6 seconds
Further info…
the operation continues to take approx 6 seconds, even after Computer B logs out of the software
if you close and reopen the application from Computer A the operation returns to taking only .4 seconds to execute!
Any help would be greatly appreciated!
Thanks!
That is the way MS Access works. While it kind of supports multiple users, and kind of supports placing the DB on a file share so multiple PCs can access it, it does neither really well. And if you are doing both (multi-user and over a network to a file share), then I feel your pain.
The answer is to run the upsizing wizard and convert this to an MS SQL Server instance. MS SQL Server Express edition is a good choice to replace Access in this case. Note that you can still keep all of the code, reports, etc. you have in Access; only the data needs to be moved.
Just to be clear on the differences: in MS Access, when you read data from the database, all of the data required to perform your query is read from a file by your program; no server-side processing is done. If that data resides on a network, you are pulling it across your network. If there are multiple users, you have the additional overhead of locking. Each user's program/process effectively dialogs with the other users' programs/processes via file I/O (writing lock info into the networked file or files). And if the network I/O times out or has other issues, those files can become corrupted.
In SQL Server, it is the SQL Server engine that manages the data requests and only returns the data required. It also manages the locks and can detect when a client has disconnected or timed out to clean up, which reduces issues caused by multiple users on a network.
We had this problem with our VB3 / Jet DB 2.5 application when we transitioned to using newer file servers.
The problem is "opportunistic locking": http://support.microsoft.com/kb/296264?wa=wsignin1.0
Albert is probably describing the same thing; the server will permit one client exclusive access to a file, but when another chimes in, this exclusive access will "thrash" between them, causing delays as the client with the oplock flushes all its local cache to the server before the other client can access the file.
This may also be why you're getting good performance with one client - if it takes an oplock, it can cache all the data locally.
This can also cause some nasty corruption if one of your clients has a power failure or drops off the network, because this flushing of the local cache to the server can be interrupted.
You used to be able to disable this (on the client - so you need to service ALL the clients) on Windows 2000 and XP as per the article, but after Vista SP2 it seems to be impossible.
The comments about not using Access / JetDB as a multi-user database are essentially correct - it's not a good architectural choice, especially in light of the above. DAO is also an obsolete library, even in the obsolete VB6. ADODB is a better choice for VB6, and should allow you some measure of database independence depending on how your app is written.
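For what it's worth, here is a minimal VB6/ADO sketch of opening a Jet MDB via ADO instead of DAO (the path and table names are illustrative; it requires a project reference to Microsoft ActiveX Data Objects 2.x):

    ' Open the MDB via ADO/OLE DB rather than DAO.
    Dim cn As ADODB.Connection
    Dim rs As ADODB.Recordset
    Set cn = New ADODB.Connection
    cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
            "Data Source=X:\Data\backend.mdb"
    Set rs = New ADODB.Recordset
    rs.Open "SELECT * FROM tblOrders WHERE OrderID = 123", cn, _
            adOpenKeyset, adLockOptimistic
    ' ... work with rs ...
    rs.Close
    cn.Close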
Since, as you pointed out, you get decent performance with one user on the system, your application by nature is obviously not pulling too much data over the network, and we can't blame network speed here.
In fact, what is occurring is that the Windows file share system is switching from single-user file share mode into multi-user file share mode. This switching of file modes causes a significant delay. It also means that the 2nd (or later) user has to figure out and set up locks on the file.
To remove this noticeable delay, simply open what we call a persistent connection at the start of your application. A persistent connection is simply something that forces the network connection to remain open at all times, so the significant delay of switching between the two file share modes is eliminated. You will now find that performance with two users is the same as with one (assuming the second user is idle and not increasing network load). So at application startup time, open a back-end table into a global variable and KEEP that table open at all times.
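A minimal sketch of that persistent connection for the VB6/DAO case (the path and table name are illustrative):

    ' Open once at application startup and keep open for the life of
    ' the app, so the share never flips back to single-user mode.
    Public gDB As DAO.Database
    Public gRS As DAO.Recordset

    Public Sub OpenPersistentConnection()
        Set gDB = DBEngine.OpenDatabase("X:\Data\backend.mdb")
        Set gRS = gDB.OpenRecordset("tblSettings", dbOpenTable)
        ' Do NOT close gRS or gDB until the application shuts down.
    End Sub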
I've been asked for a quick turnaround on this. The group I'm assisting has a .MDB database used by offsite workers who don't have internet all the time. Thus, way back, the team implemented an Access DB which allows for synchronization.
As their team grew bigger they started running into the following issues:
Remote syncing - when a user tries to sync from a worksite, more often than not the database will crash, either due to loss of wireless signal, the program timing out, or the inspector manually shutting it down because of the time taken (i.e., 30 or more minutes)
Multiple syncers - we are unable to have multiple people sync at one time (there are currently 34 users in 3 different territories). If someone is syncing and another person tries to sync at the same time, the second user ends up with an error message, and they have to shut down their DB and try to sync at a later time.
Incomplete syncs - sometimes when a worker syncs his/her DB, not all the line items copy over to the master file, which can cause confusion during review.
Are there any workarounds or items I can look into to resolve these?
I have little resources and time so anything involving a new server might not work.
Thanks
It sounds as though you are mainly adding new data from different field operatives, rather than everyone updating existing data. If that's the case then that's good, and you could try the following:
Ensure all the tables use "Replication ID" (GUID) primary keys, as this will ensure no two operatives create conflicting records.
The synchronisation process should then be amended to take a snapshot of the table/tables to a .txt file on the operative's machine, and then this file is transferred back to the source machine.
Then, at the end of the day or more often if required, the master copy should be set up to import the new data from all the text files it has received. As there will be no conflicting primary keys you should be OK; just remember to insert only those records whose primary key is not already in the table, as sketched below.
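A minimal sketch of that master-side import (the import spec, table and field names are hypothetical):

    ' Import one operative's snapshot, then append only rows whose
    ' primary key is not already in the master table.
    Sub ImportOperativeFile(strFile As String)
        DoCmd.TransferText acImportDelim, "JobsImportSpec", _
            "tmpIncoming", strFile, True
        CurrentDb.Execute "INSERT INTO tblJobs " & _
            "SELECT i.* FROM tmpIncoming AS i " & _
            "LEFT JOIN tblJobs AS j ON i.JobID = j.JobID " & _
            "WHERE j.JobID IS NULL", dbFailOnError
        CurrentDb.Execute "DELETE FROM tmpIncoming", dbFailOnError
    End Sub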
Hope all that makes sense : )
There is a prevailing opinion that regards Access as an unreliable back-end database for concurrent use, especially for more than 20 concurrent users, due to its tendency to become corrupted.
There is a minority opinion that says an Access database backend is perfectly stable and performant, provided that:
Your network has no problems, and
You write your program correctly.
My question is very specific: what does "Write your program correctly" mean? What are the requirements that you have to follow in order to prevent the database from being corrupted?
Edit: To be clear: The database is already split. Assume less than 25 users. I'm not interested in performance considerations, only database stability.
If you're looking for a great example of what programming practices you need to avoid, number one on the list is generally NOT running a split database. Number two is not placing the front end on each computer.
For example, the above poster had all kinds of problems, but you can darn well bet that their failing was either that they didn't have the database split, or they weren't placing the software (front end) on each computer.
As for the person having to resort to some weird locking mechanism, that's kind of strange and not required. Access (actually the JET data engine, now called ACE) has had a row-locking feature built in since Office 2000 came out.
I've been deploying commercially written Access applications for about 12 years now. In all those years I've had one corruption occur, from ONE customer.
Keep in mind that before Microsoft started pushing and selling SQL Server, they rated the JET database engine for about 50 users. While my clients don't have problems, in 9 out of 10 cases when someone does have a problem, you find that number one on the list is that they failed to split the database, or they're not installing the front-end part on each computer.
As for coding techniques or tips? Any design in which a reduced number of records is loaded into the form is a great start. In other words, you never want to simply throw up a form attached to a large table without restricting the records to be loaded into the form. This is probably the number one tip I can give here.
For example, it makes no sense to load up an instant teller machine with everybody's account number, and THEN ask the user which account number to work on. In fact I asked an 80-year-old grandmother if this idea made any sense, and even she could figure that out. It makes far more sense to ask the user which account to work on, and then simply load in the one customer.
The same concept applies to a split database on a network. If you ask the user for the customer account number, and THEN open up the form to the one record with a WHERE clause, then even with 100,000 records in the back end, the form load time will be near instant, because only ONE RECORD will be dragged from the customers table down the network wire.
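A minimal sketch of that pattern (the form, table and field names are hypothetical):

    ' Ask for the account first, then load ONLY that record.
    Dim strAcct As String
    strAcct = InputBox("Account number?")
    DoCmd.OpenForm "frmCustomer", acNormal, , _
        "AccountNumber = '" & strAcct & "'"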
Also keep in mind that there are a good number of commercial applications in the marketplace, such as Simply Accounting, that use a JET back end (you can actually open Simply Accounting files with MS Access; they renamed the extension to hide this fact, but it is an Access mdb file).
Some of my clients have 3-5 users with headsets on, running my reservation software all day long. Many have booked more than 40,000 customers, and in a 10-year period NONE of them have had a problem. (The one corruption example above was actually on a single-user system, believe it or not.)
So I have never had one service call due to the reliability of my Access products. On the other hand, this application only has 160 forms and about 30,000 lines of code. It has about 65 highly related and normalized tables (relations enforced, and also cascade deletes).
So there’s no particular programming approach needed here for multi user applications, the exception being good designs that reduce bandwidth requirements.
At the end of the day, it turns out that good applications are ones that do not load unnecessary records into a form. It also turns out that when you design your applications this way, switching the back end to SQL Server requires very little work to make your Access front end work great with a SQL Server back end.
At last count, I think the estimate is close to 100 million Access users around the world. Access is by far the most popular desktop database engine out there, and for the most part users have trouble-free operation.
The only people who have operational problems on networks are those who have not split the database and not placed the front end on each computer.
The only compelling answers so far seem to be to reduce network traffic, and make sure your hardware cannot fail.
I find these answers unsatisfactory for a number of reasons.
The network traffic position is contradictory. If the database can only handle a certain amount of network traffic, then people need sensible guidelines to gauge this, so they can intelligently choose a database that is appropriate.
Blaming Access database crashes on hardware failures is not a defensible position. Users will (rightly) claim that their other software doesn't suffer from these kinds of problems.
Access database corruption is not an imaginary problem. The people who regularly suggest that 5 to 20 users is the upper practical limit for Access applications are speaking from experience.
Also see the Corrupt Microsoft Access MDBs FAQ, which I've compiled over the years based on newsgroup postings and which predates Allen's page. That said, my clients have had very few corruptions over the years, and have never lost data nor had to restore from backup.
I'm not sure what "write your program correctly" means in this context. I've read a few postings indicating this, but it's more about the implementation aspects. As Albert has pointed out, you have to split the database and give each user their own copy of the FE MDB/MDE. You can't access a back-end MDB over a wireless network card, as wireless is too unstable. Same with a WAN, unless the WAN is very fast/wide and very stable. Beyond that, we suggest upsizing to SQL Server or using Terminal Services/Citrix.
I have several clients running 20 to 25 users all day long in the system. One MDB has 120 tables while another has 160. A few tables have 600,000 to 800,000 records. One client had 4 or 5 corruptions in five to seven years; we figured out the cause of all but two of those, and they were hardware-related in one way or another. At least one of these apps should've been upsized to SQL Server, but that was cancelled on me by a Dilbert-style PHB (Pointy-Haired Boss).
Very good code matters (wrapped in transactions with rollbacks - a sketch follows these notes): we had a call center with over 100 very active users at a time back in the Access 97 days.
Another had a VB5 front end and Access Jet on portables that used RAS (yes, the old dial-up days) to reach a SQL Server 6 database - 250 concurrent users.
People using the wizard to link a form directly to a table where the form is used to make edits ... might be a problem.
Uncompleted transactions (e.g. a recordset that does not get closed properly) and a break in the network connection for any reason while a database is open (I have seen the power-saving features of a NIC cause corruption) are my number one causes.
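A minimal sketch of the transaction wrapping mentioned above (the table and field names are hypothetical):

    ' Wrap the edit in a transaction and always close the recordset,
    ' so a dropped connection can't leave a half-written update.
    Sub SafeUpdate()
        Dim ws As DAO.Workspace
        Dim rs As DAO.Recordset
        Set ws = DBEngine.Workspaces(0)
        ws.BeginTrans
        On Error GoTo Fail
        Set rs = ws.Databases(0).OpenRecordset( _
            "SELECT * FROM tblCalls WHERE CallID = 123", dbOpenDynaset)
        rs.Edit
        rs!Status = "Closed"
        rs.Update
        rs.Close              ' close recordsets explicitly, every time
        ws.CommitTrans
        Exit Sub
    Fail:
        ws.Rollback           ' nothing is half-written on failure
    End Sub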
I don't believe the number of users is a limitation of the MS Access Jet engine.
My understanding is that the Jet engine queues up concurrent maintenance transactions and applies them one at a time (much like a printer queue handles print jobs). Via ODBC connectivity and an intelligent application program that manages recordset sizes, locks records open for edit, and only holds DB connections long enough to retrieve or save a record, you put little strain on the Jet engine.

I look at an mdb file as a set of tables. You can conceivably have hundreds of these in one database, or more. SQL queries access these tables randomly, and the naming convention of the mdb files lets the SQL query built in the application program determine which table (mdb file) to access. MS Access databases can be tens, hundreds or thousands of gigabytes this way and run smoothly. Proper indexing, and normalizing data to prevent storing redundant data, also help.

I've never run into a database crash or concurrency issue with MS Access driven via ODBC from a Win32 Perl GUI application. I use no MS Access objects other than tables, indexes, and perhaps views/queries. And yes, I store the database on a dedicated PC and install my application software on each workstation PC.