Improving Performance in an MS Access Database - ms-access

I have an MS Access 2003 database with more than 60 tables and 120,000 records. This DB runs on the network and is split into a back end (BE) and a front end (FE). At most two or three users use this DB at the same time.
I want to improve the performance, as it is currently quite slow. Which approach will improve performance: using FE/BE, or putting the whole DB in a shared folder (without FE/BE) and using it from there?

Under no circumstances should you run an unsplit MS Access app in a multiuser environment. If your app is slow, the article posted by vulkanino is a step on the way to making it faster. You should make sure that all your tables have suitable indexes, that queries use those indexes as far as possible, that forms are based on queries rather than tables, and that complex forms are carefully organised. For example, subform controls can have their forms loaded on demand rather than all at once.
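As a minimal sketch of the indexing point, with hypothetical table and column names (Orders, CustomerID, OrderDate); Jet accepts this DDL when run from a query in SQL view or via CurrentDb.Execute:
-- Index the columns your queries join and filter on
-- (names here are illustrative only).
CREATE INDEX idxOrdersCustomerID ON Orders (CustomerID);
CREATE INDEX idxOrdersOrderDate ON Orders (OrderDate);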

From my experience, none of your solutions will work well. You should use SQL Server (full or Express) as your back end, then connect with ODBC from the front end. After that, look carefully at your indexes and try to move the heaviest queries into stored procedures or views in SQL Server. This will preserve your work on forms, reports and VBA code.
Of course, this scenario assumes a lot about your network (do you have an always-on server PC?), about your workload and the purpose of your DB.
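As a rough illustration of the stored-procedure idea, here is a hypothetical T-SQL sketch; the procedure, table and column names are invented for the example. Access can then call it through a pass-through query, e.g. EXEC dbo.uspOrdersByRegion @Region = N'West';
-- Hypothetical: a heavy multi-table report query moved server side.
CREATE PROCEDURE dbo.uspOrdersByRegion
    @Region nvarchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT c.CustomerName, o.OrderDate, o.TotalAmount
    FROM dbo.Orders AS o
    INNER JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
    WHERE c.Region = @Region;
END;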

Related

MS Access changing linked table to AWS MySQL Db slows down forms/reports

I am new to a role at a company where they are using MS Access with a MySQL DB running on a server that's physically in our office, behind our private network. I have been hired to develop an entirely new application to bring the company up to modern standards. As we move features/modules to the new Angular/NodeJs app I am writing, users still need to use the UI provided by MS Access against the new production database, which will be on AWS Lightsail.
However, when I change the MS Access configuration's ODBC connections to point to the AWS Lightsail MySQL DB, everything (reports especially) in the MS Access UI becomes slower than when it was pointed at the in-office, in-network MySQL DB.
I am going to the "Linked Table Manager" and changing the "Connection String".
Somewhere I read I should make sure SSLMODE is disabled to remove any performance issues.
DSN=AWS_Dev;DATABASE=ECSDataTables;PORT=3306;SERVER=IP_ADDRESS;SSLMODE=DISABLED;
I went through the normal "ODBC Data Source Administrator" in Windows and added the MySQL AWS host, user/pass as normal.
I have done extensive research and have found several sources, but none are really helping.
I have been asked not to spend too much time trying to fix/optimize anything in MS Access, as my focus should be on the new application, but it's hard to believe that simply switching the MySQL database can have such an impact. In the new Angular/NodeJs application everything runs very fast, so I know it's not the AWS MySQL DB itself.
Am I missing something? Are there any configurations I should be doing in MS Access? I have not used VB in about a decade, so I am hoping something can be done without the need for too much technical background in this matter.
Thank You.
Well, the issue is that your local area network (LAN) is about 10 times or more faster than your internet connection.
Your low-cost office network is very likely a 1 gigabit network (100BaseT is rare).
However, your high-speed internet connection is likely around, say, 10 Mbit. So you are going from 1000 to 10: that is 100 times slower. So 3 seconds now becomes 300 seconds.
I mean, with such a slower connection speed, no surprise should exist here.
What you can do for any report that is a complex join of client-side SQL is convert the SQL query to a server-side view and link to that view. Now use that view as the base record source for the report. And of course the existing VBA filters that you always use (right???) to launch a report will now only pull the data they need down the network pipe. Access reports (and forms) only pull down what you ask for, not the whole table. So any filter you have (use the WHERE clause of the OpenReport command) will be respected. So you either have to pull less data, or simply find something with a similar speed rating to your local area network (and such high-speed internet is rare).
The LAN vs WAN concept and speed issue is outlined in this article:
http://www.kallal.ca//Wan/Wans.html
While the above article is very old, internet speeds are about 10x faster today, but so is the local area network, which has gone from 100BaseT to 1 gigabit.
So, things are slower because you are working with a VASTLY slower connection speed. Slower is slower!!!
Edit
While, as noted, Access will only pull what you ask for, the case where the Access client does a poor job is SQL queries that involve multiple tables: often the client will mess up what it sends server side. As noted, the solution in this case is to adopt views server side. This means you move the client-side query that drives the report into a view, and link to that view. You won't gain much performance for a single-table query, but for any report based on complex multi-table joins, using a view will force the SQL and join work to occur on the server side, and this can result in huge performance gains.
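A minimal sketch of that view approach in MySQL syntax, with invented table and column names; you would link this view in Access and make it the report's record source, still filtering via OpenReport's WHERE argument:
-- Hypothetical: the report's multi-table join now runs on the MySQL server.
CREATE VIEW vw_invoice_report AS
SELECT i.invoice_id, i.invoice_date, c.customer_name,
       SUM(d.qty * d.price) AS total
FROM invoices AS i
JOIN customers AS c ON c.customer_id = i.customer_id
JOIN invoice_details AS d ON d.invoice_id = i.invoice_id
GROUP BY i.invoice_id, i.invoice_date, c.customer_name;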
Well, this is a case where limited knowledge just produces worse results than the expected ones.
Over the years top DBAs have just "hated" MS Access... they see only problems, issues, you name it... the closing sentence is always "switch to a real database engine".
Well, this has created the faulty assumption that MsSQL, MySQL, Oracle, PostGreSQL and the rest of the database engines are somewhat of a "magic pill"... you just switch the BE to one of the above DBEs and all your problems get resolved... just like that.
DBE -- Database Engine (if you would like to call it something else, feel free)
WRONG
MS Access follows a different philosophy from the other DBEs, and it does its job damn well given all its shortcomings and the major fact that it is a file-based DBE.
Switching to another DBE will give amazing performance IF and ONLY IF you respect the fact that you are no longer working with Access... just don't treat e.g. MySQL as your file repository, and DON'T just link the tables and expect everything to go well...
Want to keep blaming Access? Just switch over to another platform (.NET, PHP, JS, Java... make your pick)... and build a small application that pulls ALL of your data in a single go like you do with Access. It will certainly crash or go "Not Responding"...
So stop blaming Access... start reading up on how to make the most of the two worlds, and I am pretty sure the results will amaze you... but again I must stress that this is not a "magic pill" solution... it involves a LOT of work... planning, data manipulation, normalization, code changes and above all a change of philosophy...
I would recommend starting the journey by picking up this book: https://www.amazon.com/Microsoft-Access-Developers-Guide-Server/dp/0672319446 (I don't want complaints about it being old and about MsSQL... just read first and complain later)
Also take a look at an old benchmark-like video I made some years ago: https://www.linkedin.com/posts/tsgiannis_a-small-demo-of-connecting-ms-access-fe-to-activity-6392696633531858944-dsuU
Last but not least... years ago I ran some tests to see what the "magic pill" would do to my company's applications... the simplest test of all:
A simple table with a few fields but around 8 million records... just display it.
Access BE (local) --> it would run in 1-2 seconds... that's fast
Access BE (network share) --> it would run in a few seconds... not so fast, but usable
MSSQL BE (linked table) --> sometimes it would get the results, sometimes it wouldn't... slow... really slow... like you make a coffee and go for a small walk
MySQL BE (linked table) --> it never finished... timeout or "Not Responding"
PostGreSQL BE (linked table) --> it never finished... timeout or "Not Responding"
So stop blaming Access... start working and get amazed...

SQLite3 database per customer

Scenario:
Building a commercial app consisting of a RESTful backend in Symfony2 and a frontend in AngularJS.
This app will never be used by many customers (if I get to sell 100, that would be fantastic; hopefully many more, but in any case it won't be massive).
I want a multi-tenant structure for the database, with one schema per customer (they store sensitive information about their customers).
I'm aware of the problems with updating schemas, but I will have to live with that.
Today I have a MySQL demo database that I clone each time a new customer purchases the app.
There is no relationship between my customers, so I don't need to query multiple shards for any query.
A single customer may use the app from several devices at a time, but there won't be massive write operations on the DB.
My question
Trying to set up some functional tests for the backend API, I read about having a dedicated SQLite database for loading test data, which seems to be a good idea.
However, I wonder whether it's also a good idea to switch from MySQL to SQLite3 as the main database for the application, and whether it's common practice to have one dedicated SQLite3 database PER CLIENT. I've never used SQLite, and I have no idea whether the process of updating a schema and replicating the changes across all the databases works the same way as for other RDBMSs.
Is this a correct scenario for SQLite?
Any suggestion (aka tutorial) on how to achieve this?
[I wonder] if it's a common practice to have one dedicated SQLite3 database PER CLIENT
Only if the database is deployed along with the application, like on a phone. Otherwise I've never heard of such a thing.
I've never used SQLite and I have no idea if the process of updating a schema and replicate the changes in all the databases is done in the same way as for other RDBMS
SQLite is a SQL database and responds to ALTER TABLE and the like. As for updating all the schemas, you'll have to re-run the update against each database.
Schema synching is usually handled by an outside utility, usually your ORM will have something. Some are server agnostic, some only support specific servers. There are also dedicated database change management tools such as Sqitch.
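As a concrete sketch of what that re-run means for the per-client setup: a migration would be one SQL script applied to every tenant file in turn (for example by looping sqlite3 "$f" < migrate.sql over the databases from a deploy script). Table and column names here are invented:
-- Hypothetical migration, executed once per tenant database.
ALTER TABLE customers ADD COLUMN phone TEXT;
CREATE INDEX IF NOT EXISTS idx_customers_phone ON customers (phone);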
However I wonder if it's also a good idea to switch from MySQL to SQLite3 database as my main database support for the application, and
SQLite's main advantage is not requiring you to install and run a server. That makes sense for quick projects or where you have to deploy the database alongside the application, like a phone app. For a server-based application there's no problem having a database server, and SQLite's very restricted set of SQL features becomes a disadvantage. It will also likely run slower than a server database for anything but the simplest queries.
Trying to set some functional tests for the backend API I read about having a dedicated sqlite database for loading testing data, which seems to be good idea.
Under no circumstances should you test with a different database than the production database. Databases do not all implement SQL the same way (MySQL is particularly bad about this), and your tests will not reflect reality. Running a MySQL instance for testing is not much work.
This separate schema thing claims three advantages...
Extensibility (you can add fields whenever you like)
Security (a query cannot accidentally show data for the wrong tenant)
Parallel Scaling (you can potentially split each schema onto a different server)
What they're proposing is equivalent to having a separate, customized copy of the code for every tenant. You wouldn't do that; it's obviously a maintenance nightmare. Code at least has the advantage of version control systems with branching and merging. I know of only one database change management tool that supports branching: Sqitch.
Let's imagine you've made a custom change to tenant 5's schema. Now you have a general schema change you'd like to apply to all of them. What if the change to 5 conflicts with this? What if the change to 5 requires special data migration different from everybody else? Now let's imagine you've made custom changes to ten schemas. A hundred. A thousand? Nightmare.
Different schemas will require different queries. The application will have to know which schema each tenant is using, and there will have to be some sort of schema version map for you to maintain. And every possible query for every possible schema will have to be maintained in the application code. Nightmare.
Yes, putting each tenant in a separate schema is more secure, but that only protects against writing bad queries or including a query builder (which is a bad idea anyway). There are better ways to mitigate the problem, such as the view filter suggested in the docs. There are many other ways an attacker can access tenant data that this doesn't address: gaining a database connection, gaining access to the filesystem, sniffing network traffic. I don't see the small security gain being worth the maintenance nightmare.
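For illustration, a minimal sketch of that view-filter idea in a shared schema (PostgreSQL syntax; the table, column and setting names are invented). The application sets the tenant once per connection with SET app.tenant_id = '42'; and only ever queries the view:
-- Hypothetical: every tenant query goes through a view pinned to the
-- current tenant, so code cannot accidentally select another tenant's rows.
CREATE VIEW tenant_orders AS
SELECT *
FROM orders
WHERE tenant_id = current_setting('app.tenant_id')::int;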
As for scaling, the article is ten years out of date. There are far, far better ways to achieve parallel scaling than to coarsely put schemas on different servers. There are entire databases dedicated to this idea. Fortunately, you don't need any of this! Scaling won't be a problem for you until you have tens of thousands to millions of tenants. The idea of front-loading your design with a schema maintenance nightmare for a hypothetical big parallel scaling problem is putting the cart so far before the horse, it's already at the pub having a pint.
If you want to use a relational database I would recommend PostgreSQL. It has a very rich SQL implementation, it's fast and scales well, and it has something that renders this whole idea of separate schemas moot: a built-in JSON type. This can be used to implement the "extensibility" mentioned in the article. Each table can have a meta column using the JSON type into which you can throw any extra data you like. The application does not need special queries; the meta column is always there. PostgreSQL's JSON operators make working with the metadata very easy and efficient.
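A minimal sketch of that meta-column pattern (PostgreSQL; table and field names are invented for the example):
-- Hypothetical: per-tenant extensibility without per-tenant schemas.
CREATE TABLE customers (
    id   serial PRIMARY KEY,
    name text NOT NULL,
    meta jsonb NOT NULL DEFAULT '{}'
);
-- One tenant stores an extra field without any schema change...
UPDATE customers SET meta = meta || '{"loyalty_tier": "gold"}' WHERE id = 1;
-- ...and it is queried with the ordinary JSON operators.
SELECT name FROM customers WHERE meta->>'loyalty_tier' = 'gold';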
You could also look into a NoSQL database. There are plenty to choose from and many support custom schemas and parallel scaling. However, it's likely you will have to change your choice of framework to use one that supports NoSQL.

What is the best way to prevent Access database bloat

Intro:
I am creating an Access database system that will be rolled out with multi-user functionality.
But as I am creating this database in Access 2000 (old school, I know), there are quite a lot of bugs and random mysterious problems that occur when my database gets past 40-60MB.
My question:
Has anyone got a good solution for how I can shrink this down, or for preventing the bloat?
Details:
I am using many local tables combined with SQL tables, and my front end links to a SQL Server back end.
I have already tried compact and repair, but it only ever shrinks the file to about 15MB, and after the user has used the database a few times the bloat quickly grows back to over 50-60MB!
Let me know if more detail is needed but that is the rough outline of my problem.
Many Thanks!
Here are some ideas for you to follow.
You said you also have a lot of local tables. Split the local tables off into yet another Access database, so you'll have two back ends (one SQL Server and one Access) plus the front end.
Create a batch file that opens your local-tables back-end database with the /compact option. It will look something like this:
"C:\Prog...\Microsoft...\Officexx\msaccess.exe" "C:\ProjectX_backend.mdb" /compact
Then run this batch file on a daily basis using Scheduled Tasks. Your front end should never need compacting unless you edit it in some way.
If you are stuck with 2000, which has quite a bad reputation, then you have to dig down into your application and find out what creates the bloat. The most common cause is bulk inserts followed by deletes. Other causes are the use of OLE Object fields, or programmatic changes to forms and other objects. You really have to go through your application and find the specific cause.
An MDB file that is only connected to a back-end server and does not make changes to local objects should not grow.
As for your random issues, besides some lack of stability in the 2000 version, you should look into bad RAM in the computers, bad hard drives, and broken network controllers if your MDB file is shared on the network.

Database data migration

I'm fishing for knowledge on this one, as I know a way I could get what I'm after but I'm wondering if it's the best approach. Due to project creep, the database of what was once a very simple application became an overcomplicated monster, and while it worked, administration wasn't exactly easy.
I've got the opportunity now to essentially start again, although only piece by piece, gradually migrating functions from the old application to the new one. This requires that I keep the new database synchronised with the old one, which thankfully only needs to be one-way, as no data will be created on the new database that needs to be migrated back.
The options considered so far are SSIS and a Quartz-triggered Windows service using plain old C# ADO.NET. I've decided SSIS is probably a bad idea, as it can be a nightmare with upserts, requiring temporary tables to be created followed by a merge, and the schema differences are pretty extensive, so the SSIS logic would be a headache. The ADO.NET approach is the direction I'm leaning in, as data readers, bulk inserts and LINQ should do the job nicely. However, considering how many people this must have affected before me, I'm thinking there must be a better way. What approach do you use?
To get a bit more specific the details are:
SQL Server 2008 R2
~2 million rows over roughly 30 tables -> ~20 tables
Databases will each be on completely separate database servers with different credentials
Many thanks
Let's suppose this scenario:
S_old: your 'old' server
S_new: your 'new' server
db_old: your 'old' database
db_new: your 'new' database
A solution may be a nightly backup of db_old on S_old restored onto S_new as db_old_bk. Then, on S_new, you have access to both databases (db_old_bk and db_new). At this point you can do the upserts easily with the T-SQL MERGE command (we are talking about 20 tables). Is this what you are looking for?
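A hypothetical T-SQL sketch of one such upsert, with invented table and column names; since both databases sit on the same server, the restored copy can be addressed with a three-part name:
-- Hypothetical: upsert one table from the restored copy into the new schema.
MERGE db_new.dbo.Customers AS tgt
USING db_old_bk.dbo.Customers AS src
    ON tgt.CustomerID = src.CustomerID
WHEN MATCHED THEN
    UPDATE SET tgt.Name = src.Name, tgt.Email = src.Email
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Name, Email)
    VALUES (src.CustomerID, src.Name, src.Email);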

MS Access freezing

My Microsoft Access 2007 is freezing on me. Could it be the 700 queries?
Yes, it's on a network, but only 2-3 people are accessing the backend at a time.
I have tried compact and repair, and also yelling at it. Nothing worked.
From my experience I can assure you that up to 15 people working simultaneously with a back-end MDB should be fine, with no visible freezing.
You can explore the following:
How big is your MDB? If it is too big, consider splitting it into smaller parts and putting all your historical and rarely accessed data into a separate file -- you can easily re-bind all the tables in the front end to multiple back-end files.
Check your queries and VBA code. Use optimistic locking everywhere you can and avoid locking tables for reading purposes!
Check your network connectivity and hard drive throughput. Is your server trying to perform a virus scan every time you update your MDB? Maybe an update is running?
If nothing helps -- try installing MS SQL Express, quickly upsize your tables and re-bind them to your front end using an ODBC connection -- you won't need to re-write your queries (as long as they are written in agnostic SQL, without DISTINCTROW etc.).
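To illustrate the "agnostic SQL" point with a hypothetical query: Jet's DISTINCTROW keyword, as in SELECT DISTINCTROW c.* FROM Customers AS c INNER JOIN Orders AS o ON o.CustomerID = c.ID, has no SQL Server equivalent, so such a query would need a portable rewrite, for example with EXISTS:
-- Hypothetical rewrite that runs unchanged on both Jet and SQL Server.
SELECT c.*
FROM Customers AS c
WHERE EXISTS (SELECT 1 FROM Orders AS o WHERE o.CustomerID = c.ID);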
It sounds like a locking issue. The Jet engine was not designed for multi-user access, and does not handle it very well at all. You should consider upgrading to SQL Server, which handles it much better. The Express version of SQL Server is free, and MS Access has an upsizing wizard that will do all the hard work for you.