Is there a way to use the class objects in a separate MS Access database without importing the class module and all its dependencies?
I have a database project that is fully up and running, and I would like to use some of its custom classes and data in a separate project. I would rather not import the class modules and their dependent queries, or link to their dependent tables.
No.
Looked around and found a variety of answers, but nothing recent that really compares these options pro and con. So I thought I'd ask the community to weigh in on which route you prefer and why.
Background
This is what we have:
Common set of Access modules & classes used in numerous protocol databases (Access 2010 *.accdb split front/back-ends)
Front-ends link to back-end database tables & code (linked dbs)
Back-ends contain protocol specific data & code
Common module/class database shouldn't be directly edited by users
Knowns
Add-ins & db reference databases:
Require re-distribution each time they are changed (even if no code is changed within them).
Must be edited within their own IDE rather than the IDE of the protocol database (or you'll lose your edits, since that database doesn't own the common code)
Questions
How should the common module/class database be connected to the protocol databases?
linked database just like back-ends are
attach it as a reference in the IDE (Tools > Reference)
create an add-in and add it as a reference
How would you do it and why?
What are the pros/cons?
Which option would maximize performance?
I have several variations of the following code lying around; I think this one is most applicable to your situation (it pulls modules and forms from another database, overwriting existing ones, as soon as the database starts).
Public Sub ImportModules()
    ' Path to the database holding the master copies of the shared objects.
    Dim ImportDbLocation As String
    ImportDbLocation = CurrentProject.Path & "\ModuleDb.accdb"
    ' The Objects table in that database lists each object to pull (name and type).
    Dim ObjectsToImport As DAO.Recordset
    Set ObjectsToImport = CurrentDb.OpenRecordset( _
        "SELECT * FROM Objects IN """ & ImportDbLocation & """")
    Do While Not ObjectsToImport.EOF
        ' Delete the local copy first so the import replaces it;
        ' ignore the error if the object doesn't exist yet.
        On Error Resume Next
        DoCmd.DeleteObject ObjectsToImport!ObjectType, ObjectsToImport!ObjectName
        On Error GoTo 0
        ' Pull a fresh copy from the module database.
        DoCmd.TransferDatabase acImport, "Microsoft Access", ImportDbLocation, _
            ObjectsToImport!ObjectType, ObjectsToImport!ObjectName, ObjectsToImport!ObjectName
        ObjectsToImport.MoveNext
    Loop
    ObjectsToImport.Close
End Sub
This code is triggered from the AutoExec macro in the front-end databases.
The database referred to as ImportDbLocation is the database containing all modules and forms I want to import. It contains a single table named Objects. This table has two columns, one named ObjectName containing the names of all objects that should be pulled, and one named ObjectType, which is a lookup field that corresponds with the acObjectType enum (some irrelevant objects removed).
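For illustration, here is a minimal sketch of how that Objects table could be created and seeded inside ModuleDb.accdb; the table and column names match the code above, but the DDL and sample rows are my own assumption (I use a plain Long column holding the AcObjectType values rather than a lookup field):

    ' Run once inside ModuleDb.accdb; ObjectType holds AcObjectType values
    ' (acQuery = 1, acForm = 2, acModule = 5, and so on).
    CurrentDb.Execute "CREATE TABLE Objects (ObjectName TEXT(64), ObjectType LONG)"
    CurrentDb.Execute "INSERT INTO Objects (ObjectName, ObjectType) VALUES ('basCommon', 5)" ' a module
    CurrentDb.Execute "INSERT INTO Objects (ObjectName, ObjectType) VALUES ('frmMain', 2)"   ' a form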
Advantages:
You get a fresh copy of all modules in the database on every start, so any local changes are simply overwritten and can't drift from the master.
You can add a third column to the Objects table to filter objects for a specific database, selectively pushing some objects to some front-ends and different ones to others (and a fourth column to specify the objects' names in the ModuleDb file, so you can keep multiple variants of the same form for different front-ends); see the sketch after this list.
You have all code in the front-end, so no weirdness with external files and references
You can also use this code to pull any other type of objects you want (in my case mainly queries and forms)
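As an illustration of the third-column idea above, the SELECT in ImportModules could be filtered per front-end; the TargetDb column is a hypothetical example, here matched against each front-end's file name:

    Set ObjectsToImport = CurrentDb.OpenRecordset( _
        "SELECT * FROM Objects IN """ & ImportDbLocation & """ " & _
        "WHERE TargetDb = """ & CurrentProject.Name & """")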
Disadvantages:
Load time increases (normally by a tiny bit, but depending on how many objects are imported, it might be long)
Modules are visible and readable for end-users (technically also modifiable, but any changes are undone on database load)
You can't modify the module doing the import this way
You NEED proper security settings, else users will get spammed with security popups on every database open
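One way to avoid those popups is to pre-trust the front-end's folder. A minimal sketch, assuming Access 2010 (hence the "14.0" registry key), permission to write to HKCU, and a location name of my own choosing:

    ' Registers the front-end's folder as a Trusted Location for Access 2010.
    Public Sub AddTrustedLocation()
        Dim keyPath As String
        keyPath = "HKCU\Software\Microsoft\Office\14.0\Access\Security\Trusted Locations\FrontEndFolder\"
        With CreateObject("WScript.Shell")
            .RegWrite keyPath & "Path", CurrentProject.Path & "\", "REG_SZ"
            .RegWrite keyPath & "AllowSubfolders", 1, "REG_DWORD"
        End With
    End Sub

(Code that trusts a folder has to run from somewhere already trusted, such as a small installer script, so this is typically done outside the database itself.)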
I also have a variant lying around that asynchronously pushes the modules from a separate thread to the database when opened, and I have another implementation that allows me to selectively move modules to front-ends after doing modifications.
I can share them if needed, but I haven't yet implemented the asynchronous one in a production environment (it's still a work in progress; it should save on load time and can push all modules).
First, a big "thank you" to @Erik and @Gustav for your input.
Solution
The solution opted for in my use case was to use a common reference library database that houses modules and classes.
While I considered @Erik's solution, in the end copying the modules and classes into each protocol database felt too much like sending out copies that could then diverge from the master modules/classes. That was part of the reason to shift to a single reference database in the first place: to avoid propagating copies into the other databases, so there is a single code source for all of them.
Caveat: If there are performance issues with the reference library solution, classes & modules may have to be pulled from the common library database à la @Erik's solution.
Database as Reference Library
The common modules and classes are contained within a database (*.accdb). Whenever the code is desired in another database, a reference is added like referencing any other code library (VB IDE - Tools > References). The only difference is that you Browse... to the library and make sure you select Microsoft Access Databases (*.accdb) in the search filter.
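If you'd rather attach the reference programmatically than through the dialog, Access exposes a References collection. A minimal sketch, assuming a hypothetical network path for the library:

    ' Attaches the common library database as a VBA reference,
    ' equivalent to Tools > References > Browse...
    Public Sub AttachCommonLibrary()
        Dim libPath As String
        libPath = "\\Server\Share\CommonLib.accdb" ' hypothetical location
        Dim ref As Access.Reference
        For Each ref In Application.References
            If ref.FullPath = libPath Then Exit Sub ' already attached
        Next ref
        Application.References.AddFromFile libPath
    End Sub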
If the reference library database is kept in a common location, there shouldn't be issues with re-referencing, although re-connecting the reference is easily done (same as connecting in the first place).
I've also separated out common version control and development modules into similar separate databases that can be referenced when desired during development and de-referenced when a given database goes to production.
Development "Gotchas"
Actually these are more considerations/things to remember than true "gotchas":
- Edit library classes & modules in the library database
If you edit a class or module from the database that references the library database, you will lose the edits as soon as you close the database you were working in.
You can try out edits from the database you're working in - the code will run, but the changes won't be saved.
I often have Notepad++ or OneNote open to copy & paste over code I'm testing to the library database. That way I can continue working on other areas and have a set of changes to update the library database.
- Compile the library database before using its components in the referencing database
This is more a workflow issue: make sure you save & compile in the library database, and only open the referencing database after you've completed your changes and closed the library database.
Resources
Here are some links which proved helpful:
Using a centralized vba module in multiple access databases
Using VBA Code Libraries in Access Database Applications
Referencing VBA Projects as Libraries
Demonstration Applications & VBA Code Libraries
Classes in VBA
Using Database Library Files in Your Access Application
Object Oriented VBA: Design Patterns: Simple Factory
Object Oriented VBA: Static Classes
The 6th reference (available only via the Internet Archive) has been particularly helpful in sorting out how to properly instantiate classes. In the end I opted to create a single Factory class (static class) with multiple functions (one per class - e.g. NewClassABC()) that instantiate the classes within the common reference library database.
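As a rough sketch of that factory approach (the class and project names here are placeholders, not the actual code): a class module in the library has its Instancing property set to 2 - PublicNotCreatable, so referencing databases can declare the type but not New it, and a public function in the library hands out instances:

    ' In the library database, in a standard module (the author used a
    ' static Factory class; a plain module shows the same idea):
    Public Function NewClassABC() As ClassABC
        Set NewClassABC = New ClassABC
    End Function

    ' In a referencing protocol database, assuming the library's VBA
    ' project is named CommonLib:
    Public Sub Demo()
        Dim abc As CommonLib.ClassABC
        Set abc = NewClassABC()
    End Sub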
So far, so good.
The reference library database has the code in one place and other protocol databases can be updated to it when desired. No dealing with multiple copies and versions of the same module or class.
Hope this helps others looking for a "common" library type solution.
We have a large number of access databases that contain a number of common functions. We need to make a small change to one of the functions, and are looking for a way to automate this via scripting, etc. Is there any way to get at the VBA code inside an Access database, and make a change to it?
I have used ADOX before from PS or VBS to be able to list tables, views and queries, but it does not expose the VBA code in its API.
If the code changes are always in a module with a consistent name, then you can simply remove the existing module and reimport the new version in all the projects you need to apply the change to.
http://www.cpearson.com/excel/VBE.aspx
The above link posts information on how to import modules into a VBA project. So it's a simple case of looping through your multiple databases, removing the module, and importing the latest version.
If the code changes are located in modules with different names, you can open each database and loop through its modules to search for the relevant string. The functions you'll need are listed here (depending on the nature of the change, it'll most likely be some combination of Find and ReplaceLine):
http://msdn.microsoft.com/en-us/library/office/aa223124%28v=office.11%29.aspx
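Putting the remove-and-reimport idea into VBA, a minimal sketch; the database paths and module name are hypothetical, and "Trust access to the VBA project object model" must be enabled on the machine running it:

    Public Sub ReplaceModuleInDatabases()
        Dim dbPaths As Variant
        dbPaths = Array("C:\Databases\App1.accdb", "C:\Databases\App2.accdb")
        Dim app As Object ' Access.Application, late-bound
        Dim i As Long
        For i = LBound(dbPaths) To UBound(dbPaths)
            Set app = CreateObject("Access.Application")
            app.OpenCurrentDatabase dbPaths(i)
            With app.VBE.ActiveVBProject.VBComponents
                .Remove .Item("CommonLib")           ' drop the stale module
                .Import "C:\Databases\CommonLib.bas" ' import the new version
            End With
            app.CloseCurrentDatabase
            app.Quit
        Next i
    End Sub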
This question is going to be a purely organizational question about SSIS project best practice for medium sized imports.
So I have source database which is continuously being enriched with new data. Then I have a staging database in which I sometimes load the data from the source database so I can work on a copy of the source database and migrate the current system. I am actually using a SSIS Visual Studio project to import this data.
My issue is that I have realised the actual design of my project is not really optimal, and now I would like to move this project to SQL Server so I can schedule the import instead of running the Visual Studio project manually. That means the actual project needs to be cleaned up and optimized.
So basically, for each table, the process is simple: truncate table, extract from source and load into destination. And I have about 200 tables. Extractions cannot be parallelized as the source database only accepts one connection at a time. So how would you design such a project?
I read in the Microsoft documentation that they recommend using one Data Flow per package, but managing 200 different packages seems quite impossible, especially since I will have to chain them for the scheduled import. On the other hand, a single package with 200 Data Flows seems unmanageable too...
Edit 21/11:
The first approach I wanted to use when starting this project was to extract my tables automatically by iterating over a list of table names. This could have worked out well if my source and destination tables had all the same schema object names, but since the source and destination databases are from different vendors (BTrieve and Oracle), they also have different naming restrictions. For example, BTrieve does not reserve names and allows names longer than 30 characters, which Oracle does not. So that is how I ended up manually creating 200 data flows with semi-automatic column mapping (most were automatic).
When generating the CREATE TABLE queries for the destination database, I created a reusable C# library containing the methods to generate the new schema object names, just in case the methodology could be automated. If there were a custom tool that could generate the packages using an external .NET library, that might do the trick.
Have you looked into BIDS Helper's BIML (Business Intelligence Markup Language) as a package generation tool? I've used it to create multiple packages that all follow the same basic truncate-extract-load pattern. If you need slightly more cleverness than what's built into BIML, there's BimlScript, which adds the ability to embed C# code into the processing.
From your problem description, I believe you'd be able to write one BIML file and have that generate two hundred individual packages. You could probably use it to generate one package with two hundred data flow tasks, but I've never tried pushing SSIS that hard.
You can basically create 10 child packages, each having 20 data flow tasks, and create a master package which triggers those child packages. Using parent-to-child configuration, create a single XML configuration file. Define precedence constraints in the master package so the child packages execute serially. This way, maintainability will be better than with 200 packages or a single package with 200 data flow tasks.
Following link may be useful to you.
Single SSIS Package for Staging Process
Hope this helps!
My colleague and I decided to break the job in two: she designed the GUI in MS Access and I designed the classes, also in MS Access. Now the problem is that we need to combine those files into a single file. How can I do that? Any help is highly appreciated.
Assuming that the classes you implemented are modules in an Access database, you can import one user's work into the other user's database (it doesn't matter whether you're moving the GUI or the code) with the import function. Up to and including Access 2003, you would right-click on the list in the database window and choose Import... from the context menu.
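The same import can also be scripted with TransferDatabase; a minimal sketch, with a hypothetical path and class name:

    ' Pulls the class module clsCustomer from the classes database into
    ' the current (GUI) database; path and names are hypothetical.
    DoCmd.TransferDatabase acImport, "Microsoft Access", _
        "C:\Work\ClassesDb.accdb", acModule, "clsCustomer", "clsCustomer"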
Access 2003.
I have two versions of an MS Access based application (data is in SQL Server, queries, forms, etc are in MS Access)
Are there any tools or techniques for merging the two versions? As a simple example, if I import all queries into DB_PRIMARY from DB_SECONDARY, queries whose names aren't duplicated will come in without a problem, but if a query already exists, the imported one will have "1" appended to its name (which makes sense). However, in the simplest case, I would rather just import and overwrite any existing objects.
Are there any tools or better techniques for managing this process?
Jakkwylde's suggestion is simple and effective. However, if your database includes a huge number of objects to reconcile, or if this is something you anticipate needing to do repeatedly, look at Access' undocumented SaveAsText and LoadFromText methods.
They have been discussed several times here on Stack Overflow. Start with this Google search:
http://www.google.com/search?q=site%3Astackoverflow.com+saveastext+loadfromtext
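A minimal sketch of the round trip (both methods are undocumented, and the object and file names here are hypothetical); these lines can be run from the Immediate window of each database in turn:

    ' In DB_SECONDARY: dump the query's definition to a text file.
    Application.SaveAsText acQuery, "qryParts", "C:\Merge\qryParts.txt"

    ' In DB_PRIMARY: load the definition back in under the same name,
    ' avoiding the "qryParts1" renaming you get from a normal import.
    Application.LoadFromText acQuery, "qryParts", "C:\Merge\qryParts.txt"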
Why not just use the import feature built into Access? From there you can review the collisions in object names, where an integer is tacked onto the end of the imported object's name, for manual review and reconciliation (e.g. look for a table named Parts and one named Parts1 and determine what action, if any, is necessary).
If that's not sufficient, there are a number of software vendors (like Altova DatabaseSpy) with data comparison tools that are more robust and have operations to determine differences in schemas and data.
Open the second database in code as a DATABASE object, loop through the different types of objects, and IMPORT them one by one.
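A minimal sketch of that approach for queries, using DAO; the path is a hypothetical example:

    Public Sub ImportQueriesFromSecondary()
        Dim secondaryPath As String
        secondaryPath = "C:\Databases\DB_SECONDARY.accdb" ' hypothetical path
        Dim db As DAO.Database
        Set db = DBEngine.OpenDatabase(secondaryPath)
        Dim qdf As DAO.QueryDef
        For Each qdf In db.QueryDefs
            If Left$(qdf.Name, 1) <> "~" Then ' skip hidden system queries
                On Error Resume Next ' ignore the error if no local copy exists
                DoCmd.DeleteObject acQuery, qdf.Name
                On Error GoTo 0
                DoCmd.TransferDatabase acImport, "Microsoft Access", _
                    secondaryPath, acQuery, qdf.Name, qdf.Name
            End If
        Next qdf
        db.Close
    End Sub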
What version of MS Access are you using?