Concurrent Users in MS Access (2007)

I am supposed to make our MS Access application work in parallel. There will only ever be at most 3 people needing concurrent access, so from what I have read this should not be too much of a problem traffic-wise.
Mostly we will all need to work on the same table (well, it's actually 3 tables, but in Access you can always open the sub-tables directly by clicking on the +).
I am having a hard time finding information on how to do this, so any pointers to good articles would be welcome.
Also, I would like to be able to see who changed what, so I need to implement some sort of logging.
At the moment the database sits on a server somewhere: we download it (and note that it is in use), make changes, and upload it back. It's a stone-age solution and I need to change it ASAP.
Any help is greatly appreciated!

The easiest way is to stick the mdb/accdb file on a network drive and have people open it from there, rather than copying it locally first. 3 concurrent users probably won't crash it too often, but make sure you take regular backups.
As for logging: it's easy enough to audit changes made via forms, but not so much for edits made directly in the tables. Have a look at this thread: http://forums.devarticles.com/microsoft-access-development-49/creating-audit-trail-of-all-edits-to-database-22382.html
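To sketch the form-based approach: in each data-entry form's BeforeUpdate event, compare every bound control's old and new value and write any differences to a log table. This is only a minimal sketch, and the AuditLog table and its field names are my own invention - create whatever suits you:

' Minimal audit sketch. Assumes an AuditLog table with text fields
' TableName, FieldName, OldValue, NewValue, ChangedBy and a Date/Time
' field ChangedAt (all hypothetical names).
Private Sub Form_BeforeUpdate(Cancel As Integer)
    Dim ctl As Control
    For Each ctl In Me.Controls
        ' Only bound text boxes here; extend for combo boxes etc.
        If ctl.ControlType = acTextBox Then
            If Nz(ctl.Value, "") <> Nz(ctl.OldValue, "") Then
                CurrentDb.Execute _
                    "INSERT INTO AuditLog (TableName, FieldName, OldValue, " & _
                    "NewValue, ChangedBy, ChangedAt) VALUES ('" & _
                    Me.RecordSource & "', '" & ctl.Name & "', '" & _
                    Replace(Nz(ctl.OldValue, ""), "'", "''") & "', '" & _
                    Replace(Nz(ctl.Value, ""), "'", "''") & "', '" & _
                    Environ$("USERNAME") & "', Now())", dbFailOnError
            End If
        End If
    Next ctl
End Sub

Since every edit then funnels through the forms, this also gives you the "who changed what" history you asked about, as long as users don't open the tables directly.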

Related

Inspect large linked database in Access

This sounds like a quite simple and straightforward question, but I tried to search online and could not find an answer to my problem. I would like to view the linked database from Access but the database is too large and every step takes forever to load the data. I wonder if there is a better way to inspect the data tables? Sorry if this has been asked somewhere else, I am a bit new to Access.
Well, you have the program part, often called the front end (FE).
Then you have linked tables to the data file, often called the back end (BE).
So I can't say there's necessarily going to be much difference from just looking at the list of linked tables in the nav pane of the FE.
Or, you can fire up Access and open the BE file directly. At that point, you will again see the list of tables in the nav pane. About the only difference here is that, as a general rule, you can't make changes to the table structure(s) from the FE.
Other than that, the performance should not be much different. Of course, if you are on a network and the BE is in some folder on a server, then your network connection can and will affect performance.
So, in that case, what one often does is simply copy the BE from the server folder to a local folder. You can then open and use that database (BE) 100% locally on your computer, without a network between you and the data file. This will of course run MUCH faster, and thus let you open the tables and look at the data inside them.
So, all in all? Copy the BE to a local folder. You'll be working on a copy of the data (that's safe - you can't mess up production data), and performance-wise any concerns should be pretty much eliminated.
And for development and testing? Often we take the BE and place it on our local computer (say, a laptop) and work with that BE locally. Depending on how the FE (the program/software part) is set up, it will often have an option to re-link, so you can point the FE at a different BE - something like the sketch below.
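If the FE doesn't already have a re-link feature, a few lines of DAO will do it. A minimal sketch only (the procedure name and the example path are hypothetical):

' Re-point every linked Jet/ACE table at a different BE file.
Public Sub RelinkTables(strNewBackEnd As String)
    Dim tdf As DAO.TableDef
    For Each tdf In CurrentDb.TableDefs
        ' Linked file tables have a ";DATABASE=..." connect string;
        ' skip local tables (empty Connect) and ODBC links.
        If Len(tdf.Connect) > 0 And Left$(tdf.Connect, 5) <> "ODBC;" Then
            tdf.Connect = ";DATABASE=" & strNewBackEnd
            tdf.RefreshLink
        End If
    Next tdf
End Sub

' e.g. RelinkTables "C:\Dev\MyData_backend.mdb"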
Just keep in mind that if you make changes to that copy of the BE, and you want those changes to appear in the production BE, you'll have to make notes, since there's not really an automated way to send design changes (say, new tables, or changes to table designs) to the production BE. And of course, one has to be VERY careful. Changes such as renaming tables or field names will for sure break the FE program part. In most cases you can add new fields/columns to existing tables, and that should not break your software.
But from a performance point of view? I am somewhat perplexed that you note performance issues. Perhaps there is a VPN between the FE and BE - that does not work well at all. You generally require a good solid network connection, a LAN (not a VPN/WAN), between the FE and BE. If a VPN (WAN) is to be adopted, then in most cases the BE needs to be migrated to SQL Server; the FE (program part) can then use linked tables to SQL Server rather than a file-based BE.
So, while the above should make sense, the performance problems you describe are somewhat perplexing - for simply opening and inspecting tables, they do not quite add up.

What is the best way to prevent Access database bloat

Intro:
I am creating an Access database system that will be rolled out with multi-user functionality.
But as I am creating this database in Access 2000 (old school, I know), there are quite a lot of bugs and random mysterious problems that occur when my database gets past 40-60MB.
My question:
Has anyone got a good solution for shrinking this down, or for preventing the bloat in the first place?
Details:
I am using many local tables combined with SQL Tables and my front-end links to a back-end SQL Server.
I have already tried compact and repair but it only ever shrinks it to about 15MB, and after the user has used the database a few times the bloat quickly expands to over 50-60MB!
Let me know if more detail is needed but that is the rough outline of my problem.
Many Thanks!
Here are some ideas for you to follow.
You said you also have a lot of local tables. Split the local tables off into yet another Access database. So you'll have 2 back-ends (1 SQL Server & 1 Access), and the front end.
Create a batch file that opens your local-tables back-end database with the /compact option. So, it will look something like this:
"C:\Prog...\Microsoft...\Officexx\MSACCESS.EXE" "C:\ProjectX_backend.mdb" /compact
Then run this batch file on a daily basis using scheduled tasks. Your frontend should never need compacting unless you edit it in any way.
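If you'd rather not depend on a batch file, the same compaction can be done from VBA inside the front end. A minimal sketch, reusing the hypothetical back-end path from the batch example above (note that CompactDatabase refuses to run against a file that is open anywhere, which is one more reason the local tables must live in their own BE):

' Compact the local BE with DAO's DBEngine.CompactDatabase.
Public Sub CompactLocalBackEnd()
    Const SRC As String = "C:\ProjectX_backend.mdb"
    Const TMP As String = "C:\ProjectX_backend_compacted.mdb"

    ' CompactDatabase writes a compacted copy to a new file...
    DBEngine.CompactDatabase SRC, TMP

    ' ...so swap the compacted copy in over the original.
    Kill SRC
    Name TMP As SRC
End Sub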
If you are stuck with 2000, which has a quite bad reputation, then you have to dig down into your application and find out what creates the bloat. The most common cause is bulk inserts followed by deletes. Other causes are the use of OLE Object fields, and programmatic changes to forms and other objects. You really have to go through your application and find the specific cause.
An mdb file that is only connected to a back-end server and does not make changes to local objects should not grow.
As for your random issues, besides some lack of stability in the 2000 version, you should look into bad RAM in the computers, bad hard drives, and broken network controllers if your mdb file is shared on the network.

Synchronizing MS Access database file

I am developing a database with about 10 tables in it. Basically it will be used in 2 or 3 distant geographical locations (let's call them A, B and C). The desired work flow will be as follows:
A, B and C should always have the same database. So when A makes any changes he should be able to send those changes over to B and C. Emailing the entire mdb file doesn't make sense since it's 15+ MB in size, so I would like to send only the new and changed records to B and C. The changes B and C make should also be reflected back to the other respective parties. How can I do this?
I have a few ideas in mind but don't know how to implement them.
Solution 'A' - export the data tables only into an xls file and email that. But importing the tables back into the mdb file could be a bit complex, right? And the xls file will also become bigger and bigger with time.
Solution 'B' - try to extract just the changes and email only the new parts (but how to extract just those?).
Solution 'C' - find some way of syncing all users onto the same database (storage) location. I was thinking of a front-end/back-end split, storing the tables on a shared drive on the parent company's server (which is also overseas). But the network connection between locations is very slow, and I don't know how much bandwidth would be needed for this.
Any recommendations would be most welcome!
In regard to sources for information on replication, start with my Jet Replication Wiki.
But I would never recommend Jet replication for your scenario. The only environment where I currently recommend it (and I've been doing replicated apps since 1997 and still have several in production use) is for supporting laptop users who have to work with live data in the field disconnected from any network, and return to the home office and synch direct with the mother ship.
The easiest solutions with an Access application would be hosting the app on Windows Terminal Server/Citrix, with the users running it over a Remote Desktop Connection, or using Sharepoint. The Terminal Server/Citrix solution has no accommodation for disconnected users, but Sharepoint can accommodate offline usage and synch changes when connected. Access 2010 and Sharepoint 2010 provide a host of new features, including better schema design, the equivalent of triggers, and greatly improved performance for large Sharepoint lists, so it's a no-brainer to me that if you choose Sharepoint you'd want to use A2010 and Sharepoint 2010.
While it's possible to do what you want with Jet Replication, it requires a lot of setup on the server and client ends, and is relatively fragile (not in terms of data integrity if you're using indirect replication (as you should), but in terms of network reliability) -- there are too many moving parts and too many failure points.
Windows Terminal Server/Citrix is by far the simplest, with the fewest moving parts and completely centralized administration, and works very well for a relatively small investment.
Sharepoint is more complicated than WTS/Citrix, but is less complex and more centralized than a Jet Replication solution.
If it were me, I'd probably go with WTS/Citrix if there was no need for disconnected usage, but I'd be salivating over trying out A2010/Sharepoint 2010. If there was a need for disconnected usage, then I'd definitely go the Sharepoint route.
You want to use "Jet Replication". See
MSDN Search for jro at http://social.msdn.microsoft.com/Search/en-US?query=jro&ac=8
MSDN Search for access replication at http://social.msdn.microsoft.com/Search/en-US?query=access%20replication&ac=3
It's been some time since I did it, but the indirect method of replication worked well for me in a similar situation.
It takes something to set up. The documentation used to be appalling for it, but I found articles written by Michael Kaplan (aka Michka) that walked me through how to do it.
If your final environment is going to be fairly stable, then use Access the whole way. If not, then I'd urge you to take HansUp's advice and go with SQL Server or SharePoint.
Do note: if you're working in Access 2007 or later, replication is not directly supported, and you'll have to roll your own bits and pieces (a starting point for extracting just the changes, per your solution 'B', is sketched below). If you're using an earlier version, you'll be fine, but allow time for some head-scratching.
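For the roll-your-own route, the usual trick is a LastModified Date/Time field on every table, kept current by your forms or code. You can then copy just the recently changed rows into a small transfer database and email that instead of the whole mdb. A minimal sketch only - the table, field, and path names are all assumptions, and the empty transfer mdb must already exist:

' Export rows changed since the last sync into a transfer database.
Public Sub ExportChangesSince(dtLastSync As Date)
    CurrentDb.Execute _
        "SELECT * INTO Orders_Changes IN 'C:\Sync\changes.mdb' " & _
        "FROM Orders WHERE LastModified > #" & _
        Format$(dtLastSync, "yyyy-mm-dd hh:nn:ss") & "#", dbFailOnError
End Sub

On the receiving end you would import the transfer table and apply the rows to the real table; handling deletes and edit conflicts between sites is where the real work (and the head-scratching) lives.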

How do I create a safe local development environment?

I'm currently doing web development with another developer on a centralized development server. In the past this has worked alright, as we have two separate projects we are working on and rarely conflict. Now, however, we are adding a third (possible) developer into the mix. This is clearly going to create problems with other developers changes affecting my work and vice versa. To solve this problem, I'm thinking the best solution would be to create a virtual machine to distribute between the developers for local use. The problem I have is when it comes to the database.
Given that we all develop on laptops, simply keeping a local copy of the live data is plain stupid.
I've considered sanitizing the data, but I can't really figure out how to replace the real data with data that would be representative of what people actually enter, without repeating the same information over and over again, e.g. everyone's address becomes 123 Testing Lane, Test Town, WA, 99999 or something. Is this really something to be concerned about? Are there tools to help with this sort of thing? I'm using MySQL. Ideally, if I sanitized the db, it would be done by a script that I can run regularly. If I do this, I'd also need a way to reduce the size of the db itself. (I figure I could select all the records created after date X and delete them, along with all the records in corresponding tables, so that isn't really a big deal.)
The second solution I've thought of is to encrypt the hard drive of the vm, but I'm unsure of how practical this is in terms of speed and also in the event of a lost/stolen laptop. If I do this, should the vm hard drive file itself be encrypted or should it be encrypted in the vm? (I'm assuming the latter as it would be portable and doesn't require the devs to have any sort of encryption capability on their OS of choice.)
The third is to create a copy of the database on our development server for each developer, who is then responsible for keeping its schema in sync with the canonical db by means of migration scripts or what have you. This solution seems to be the simplest, but doesn't really scale as more developers are added.
How do you deal with this problem?
Use fake data -- invest in a data generator if you must, but please don't use real data in a development environment, especially if it's possible that access to it may be compromised. I'm more familiar with tools for MS SQL, but googling for "MySQL data generator" brought up EMS SqlManager and Datanamic.
As tvanfosson mentioned, use fake data instead of live. Doing so will not only keep the live data safe but also allow you to test different scenarios, such as international names and such.
As for how to distribute your DB, your schema and creation scripts really should be in source control, so each developer can create a local copy of the database as they see fit.
You could set up a fixtures (seed data) system. You provide the data once and it gets put into the db as many times as you need. That could be held in source control so that the fixtures are used/updated by all users.
I think that auto-generators are usually a bad idea. It is hard for them to generate information that could be real. Fixtures would allow you to make this information and know that it is what you are looking for. You could also push the bounds of your validators by using fixtures.
It may take a bit of time to set up the first time around, but I think you will get a much higher quality of data that is put in for testing.

Will Access support 35-40 users writing to an Access database

We are looking to have about 35-40 people writing to an Access database via script on a shared drive. The metrics break down to them needing to write about 3-7 times an hour. Would Access support this without going ape on me?
Yes, I would love to use SQL Server for this, but that means going through massive amounts of red tape, meetings, paperwork, etc. that I would prefer not to bother with.
Could you not make them go with the free edition of SQL Server Express without the red tape?
In answer to your question, though, I've seen Access give big problems in environments with this many users, although that was pre 2007. I dunno how much it has changed.
If it were me, I'd avoid Access at all cost.
Could it? Yes - if you are very careful, perform locking, and ensure that nobody steps on anybody else. Access is really not designed for any form of concurrency. I know of one place that managed to make it work in a very concurrent environment, but that environment basically logged everything, and if the DB clobbered itself, it would restore from the last backup and replay the log against the Access file automatically, so that the failures were transparent. I would not recommend following that course of action...
Should you do it? No. Is there any reason that you cannot use something like PostgreSQL or MySQL?
Yes, it would work. No, it's not a good idea.
Access would be able to handle the load, as long as those 35-40 people aren't all trying to access the database at once. It'll quickly bog down when you start having more than a couple of concurrent users, particularly if those users are all trying to update something.
The problem is that it isn't safe. You need to have the entire database file accessible on a network share, where all users will be able to write to it. You'll have multiple instances of Access trying to read and modify the file at the same time, and unless you are very careful with locking, it's quite possible for the database to become damaged or corrupt.
You'll also never be able to add any kind of access control beyond basic file permissions. You might not need it now, but internal databases often end up needing to be exposed to the wider world somehow.
It's not worth it. There are plenty of real RDBMS systems out there, for free, that are designed to handle this kind of thing. Why spend time trying to make Access work in such an environment, when you could just install SQL Server Express and be done with it? It has limitations, but if you're seriously considering Access, you're never going to be anywhere near those. Or use MySQL, PostgreSQL, Firebird...
I would avoid Access too. Have you ever thought about SQL Server CE? It should handle multiple users better, and it is file-based just like Access.
7 * 40 = 280 writes per hour.
280 / 60 ≈ 4.7 per minute.
If your script is light, and if you don't read results too often, maybe...
Of course, I don't recommend you try it. Meetings time! ;)
If the connections are opened only as long as needed to run the scripts, and you use transactions and have some retry logic built in when there's a conflict, there really oughtn't be too much of an issue.
If your script takes 1 second to do its update (that's a pretty long time in computer/database terms, of course), and there are 280 updates per hour, if you were lucky enough that no two users simultaneously ran their scripts, you would still have 3,320 seconds when the database was not open.
I don't see an issue, assuming that you know how to properly manage your connections and manage your Jet transactions.
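To illustrate the transactions-plus-retry idea: wrap each script's update in a DAO workspace transaction and retry a few times when you hit a lock conflict. A rough sketch only - the retry count is arbitrary, and a real version would pause between attempts and check for the specific lock error (e.g. 3218, "Could not update; currently locked") rather than retrying on any failure:

Public Function ExecuteWithRetry(strSQL As String) As Boolean
    Dim ws As DAO.Workspace
    Dim db As DAO.Database
    Dim intTry As Integer

    Set ws = DBEngine.Workspaces(0)
    Set db = CurrentDb

    For intTry = 1 To 5
        On Error Resume Next
        Err.Clear
        ws.BeginTrans
        db.Execute strSQL, dbFailOnError
        If Err.Number = 0 Then
            ws.CommitTrans          ' success: commit and stop retrying
            ExecuteWithRetry = True
            Exit Function
        End If
        ws.Rollback                 ' conflict: undo and try again
        On Error GoTo 0
    Next intTry
End Function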
That volume is not a problem for Access so long as it's on a stable LAN or a very high speed WAN. Wireless connections, however, are a bad idea.
I have several clients which are adding about 200K to 300K transactions per year into the systems. So that's about 1000 per work day. That's using both an Access front end and back end.
That said one of them will be upsizing shortly to SQL Server. I fired the other client when they hired a PHB (Dilbert's pointy haired boss.)
It's iffy. The first time the database crashes, you'll wish you'd gone with SQL Server Express. And it will crash, eventually.
In my previous job we had a product with an Access database backend. We had some clients with 25 users. We refused clients who had 40 potential users because we knew from experience that the database would corrupt itself on a regular basis, and performance would be unacceptable.
The day we went to SQL Server Express, the performance of the application doubled, and the problems with crashing and corruption virtually disappeared.