Table load: VBA vs built-in function - ms-access

I have a rather odd thing happening when I import a table into Access.
I have tried doing this in three ways...
Firstly, I import the table (from a text file) using the built-in functionality, and having defined (and saved) the Import Specification, everything works tickety-boo.
Next, I re-run the upload from a macro that references the Import Specification file I saved above.
The problem is that some of the Import Specification settings do not hold when I do it this way. In particular, I have requested that one field be indexed (No Duplicates), yet when the table loads via the macro, that field's Indexed property reverts to the default of "No".
I also tried loading the table using VBA (referencing the same Import Specification file), but I encounter the same problem.
Does anyone know how to maintain the integrity of the indexing when doing automated loads based on spec files?

One of the easiest ways to control an upload (if you have to change the upload scenario dynamically) is to use custom VBA code to create and fill the table with data from the source. The reason, in my opinion, is that the import/export specification system in Access works, but it is weak and offers only fairly low-level tools for use from VBA, because it was mainly built for manual import/export through the Access interface.
PS: by custom VBA code, I mean code that completely avoids the built-in import features. Again, this only makes sense if your import/export scenario is dynamic and needs to make choices mid-process. If it is static and can be created using the standard features, then just build it by hand with the step-by-step wizard and call it from your VBA code without any changes.
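To illustrate what "completely avoids the built-in import features" could look like, here is a minimal sketch, assuming a comma-delimited text file with two columns; the table, field, and path names are all invented. Because the code creates the unique index itself, no import spec can reset it:

Public Sub CustomImport()
    ' Build the target table and its no-duplicates index ourselves,
    ' so the index cannot be lost by any import-spec behaviour.
    CurrentDb.Execute "CREATE TABLE tblImport (ID LONG, ItemName TEXT(50))"
    CurrentDb.Execute "CREATE UNIQUE INDEX idxID ON tblImport (ID)"

    Dim f As Integer: f = FreeFile
    Dim txt As String, parts() As String
    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset("tblImport")

    Open "C:\Data\input.txt" For Input As #f    ' assumed source file
    Do While Not EOF(f)
        Line Input #f, txt
        parts = Split(txt, ",")
        rs.AddNew
        rs!ID = CLng(parts(0))
        rs!ItemName = parts(1)
        rs.Update
    Loop
    Close #f
    rs.Close
End Sub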

Related

Bulk insert CSV file into Access DataBase through powershell

We are trying to import a CSV file into an Access database via PowerShell. My input file is 1 GB in size, and iterating through each row and using an insert command is getting difficult. Any quick suggestions here are highly appreciated.
Thanks!!
As expressed by #AlbertD.Kallal - what is the reason to use PowerShell at all? ... I simply made an assumption that you sought something that would run automatically, daily, unattended, as that is a typical reason.
If that is the case, then it really breaks down into two parts:
make the import work manually in Access, and then set up that import to fire automatically upon start/open of the Access file (AutoExec).
just use PowerShell to start/open the Access file daily (or whenever...).
Access is not designed to be open full time and run unattended, so this is the typical approach for using it in that mode.
OK, now having stated there is no need for PowerShell: there are cases in which the IT folks are already using PowerShell to automate processes, so it is not "bad" to consider PowerShell - especially if it is already in use.
I only wanted to point out that PowerShell will not help performance-wise - and will probably be slower.
Say you have (or had) to schedule an import to occur every 15 minutes or whatever?
Then I suggest setting up a VBA routine in a standard code module in Access to do the import. Your PowerShell or Windows script then launches Access and calls that import routine. So, the first step is to set up that routine in Access - even if you use some kind of batch system for scheduling that import routine to run.
So, you use the Windows Task Scheduler.
It would: launch Access, run the VBA sub, shut down Access.
Using the Windows scheduler is quite robust. We don't need (or want) to keep Access running; we only launch it, run the import, and then shut Access down.
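As a sketch of that routine (the spec name, table names, and paths are all assumptions), the scheduled task would run something like MSACCESS.EXE "C:\Data\Import.accdb" /x mcrRunImport, where mcrRunImport is a macro whose single RunCode action calls the function below:

' Standard-module routine that the scheduled launch ends up calling.
Public Function RunScheduledImport()
    On Error GoTo fail
    ' "MyImportSpec" and the paths are assumptions - use your saved spec.
    DoCmd.TransferText acImportDelim, "MyImportSpec", "tblStaging", _
        "C:\Data\incoming.csv", True
    DoCmd.Quit acQuitSaveNone       ' shut Access down so the task never lingers
    Exit Function
fail:
    Debug.Print Err.Description     ' or log to a table/file before quitting
    DoCmd.Quit acQuitSaveNone
End Function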
Next up, if the import process is "huge" or rather large, then on startup a temp accDB file can be created, and we import into that. We can then take the imported table and send its rows to the production data table (often the column names are different, etc.). It is of course also much safer to import into that temp table, and better yet, we can delete the temp file afterwards - thus we never suffer bloating or file-size problems (no need to compact + repair).
So, the first thing to do is manually import the csv file using the Access UI. This ALSO allows you to create + set up an import spec. That import spec can thus remember the data types (currency, or often date/time columns).
Once we have the import working and the import spec created?
Then we can write code to do the same steps, and THEN take the imported table and put that data into the production data table.
It is not clear whether you "stage" the imported csv into that temp table and then process that table into the real production data table, but I do suggest doing this.
(It is too dangerous to try to import directly into the production data.)
You also don't share right now what kind of pre-processing, or what additional code, is required after you do the import of that csv (but, still, we assume now that such imports will go into a new temp table).
So, I would assume the steps are:
we import the csv file using the built-in import ability of Access
we then send this data to the production table, perhaps with some code processing each row before the temp table's rows go to the production table.
once we have done the import, we dump + delete the temp accDB file we used for the import, and thus we eliminate the huge data-bloat issue.
for the next run, we create that temp file afresh, and thus each time we start out with a nice empty database file.
So, the first question - and you can create a blank new database for this test: can you import the csv file using Access? You want to do this, since such imports are VERY fast and VERY high speed. Even if the imported format is not 100% as you want, you need to confirm whether the Access UI can import the csv file. If it can, then we can adopt VBA commands to do the same thing; there is no use writing code if a simple csv import via the Access UI can't be used.
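Putting the pieces above together, a minimal sketch of the temp-file flow (all paths, table names, and the spec name are assumptions; ImportTemplate.accdb is a prepared empty database that already holds the saved import spec, since a saved spec lives in the database that performs the import):

Public Sub ImportViaTempFile()
    Dim tmpPath As String
    tmpPath = CurrentProject.Path & "\ImportScratch.accdb"
    If Dir(tmpPath) <> "" Then Kill tmpPath

    ' Start each run from a fresh copy of the template, so the scratch
    ' file is always empty and bloat never accumulates.
    FileCopy CurrentProject.Path & "\ImportTemplate.accdb", tmpPath

    ' Run the csv import inside the scratch file via a second Access instance.
    Dim app As New Access.Application
    app.OpenCurrentDatabase tmpPath
    app.DoCmd.TransferText acImportDelim, "MyImportSpec", "tblStaging", _
        "C:\Data\incoming.csv", True
    app.CloseCurrentDatabase
    app.Quit

    ' Append the staged rows into production, then discard the scratch file.
    CurrentDb.Execute "INSERT INTO tblProduction SELECT * FROM tblStaging " & _
        "IN '" & tmpPath & "'", dbFailOnError
    Kill tmpPath
End Sub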

Use metadata within SSIS package

I have the following problem:
We have a case where we need to import several dozen, if not hundreds, of uniquely structured source files with SSIS - a perfect opportunity to use BIML. We do not have metadata for the files as is, so I need to get it - though not manually, file by file.
So I thought: easy - build a table with file paths, use BIML to create a package with the source connection, let SSIS identify the metadata (maybe not 100% correctly), and use that metadata (column name, data type, length, etc.) to initially populate a metadata table for further use.
But there seems to be no way to achieve this. While I can view the metadata and even paste it to the clipboard in SSDT, I cannot get it into the pipeline. I tried a script component as well (not my best skill); I can get everything BUT the column name in ProcessInput, and I can't create an output.
So: is there any known way of achieving this? I've Googled for several hours already to no avail.

Access 2010: linked database, reference, or add-in?

Looked around and found a variety of answers, but nothing recent that really compares these options pro and con. So I thought I'd ask the community to weigh in on which route you prefer and why.
Background
This is what we have:
Common set of Access modules & classes used in numerous protocol databases (Access 2010 *.accdb split front/back-ends)
Front-ends link to back-end database tables & code (linked dbs)
Back-ends contain protocol specific data & code
Common module/class database shouldn't be directly edited by users
Knowns
Add-ins & db reference databases:
Require re-distribution each time they are changed (even if no code is changed within them).
Must be edited within their own IDE rather than the IDE of the protocol database (or you'll lose your edits, since the protocol database doesn't own the common code).
Questions
How should the common module/class database be connected to the protocol databases?
linked database just like back-ends are
attach it as a reference in the IDE (Tools > References)
create an add-in and add it as a reference
How would you do it and why?
What are the pros/cons?
Which option would maximize performance?
I have several variations of the following code lying around; I think this one is most applicable to your situation (it pulls modules and forms from a database, overwriting existing ones, as soon as the database is started).
Public Sub ImportModules()
    Dim ImportDbLocation As String: ImportDbLocation = CurrentProject.Path & "\ModuleDb.accdb"
    Dim ObjectsToImport As DAO.Recordset
    ' The Objects table in ModuleDb.accdb lists every object to pull over.
    Set ObjectsToImport = CurrentDb.OpenRecordset("SELECT * FROM Objects IN """ & ImportDbLocation & """")
    Do While Not ObjectsToImport.EOF
        ' Delete any existing copy first; ignore the error if it doesn't exist yet.
        On Error Resume Next
        DoCmd.DeleteObject ObjectsToImport!ObjectType, ObjectsToImport!ObjectName
        On Error GoTo 0
        ' Pull a fresh copy from the module database.
        DoCmd.TransferDatabase acImport, "Microsoft Access", ImportDbLocation, _
            ObjectsToImport!ObjectType, ObjectsToImport!ObjectName, ObjectsToImport!ObjectName
        ObjectsToImport.MoveNext
    Loop
    ObjectsToImport.Close
End Sub
This code is triggered from the AutoExec macro in the front-end databases.
The database referred to by ImportDbLocation is the database containing all the modules and forms I want to import. It contains a single table named Objects. This table has two columns: one named ObjectName, containing the names of all objects that should be pulled, and one named ObjectType, a lookup field that corresponds to the acObjectType enum (with some irrelevant object types removed).
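For reference, a hypothetical one-time setup of that table, run inside ModuleDb.accdb; ObjectType holds the numeric acObjectType values (acForm = 2, acModule = 5, and so on):

' Hypothetical setup - the real table could equally be built in the UI,
' with ObjectType as a lookup field, as described above.
CurrentDb.Execute "CREATE TABLE Objects (ObjectName TEXT(64), ObjectType LONG)"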
Advantages:
You get a fresh copy of all modules in the database on every start, so any local modifications are overwritten and become irrelevant.
You can add a third column to the Objects table to filter objects for a specific database, thus selectively pushing some objects to some front-ends and other objects to others (and a fourth column to specify the name of the object in the ModulesDb file, to use multiple variants of the same form for different front-ends).
You have all code in the front-end, so no weirdness with external files and references.
You can also use this code to pull any other type of object you want (in my case, mainly queries and forms).
Disadvantages:
Load time increases (normally by a tiny bit, but depending on how many objects are imported, it might be long).
Modules are visible and readable by end-users (technically also modifiable, but any changes are undone on database load).
You can't modify the module doing the import this way.
You NEED proper security settings, or users will get spammed with security popups every time the database opens.
I also have a variant lying around that asynchronously pushes the modules from a separate thread to the database when it is opened, and another implementation that allows me to selectively move modules to front-ends after making modifications.
I can share them if needed, but I haven't yet implemented the asynchronous one in a production environment (it is still a work in progress; it should save on load time and can push all modules).
First, a big "thank you" to #Erik and #Gustav for your inputs.
Solution
The solution opted for in my use case was to use a common reference library database that houses modules and classes.
While I considered #Erik's solution, in the end copying the modules and classes into each protocol database felt too much like sending out copies that could then diverge from the master modules/classes. Avoiding that propagation into the other databases - so there would be a single code source - was one reason to shift to the single reference database in the first place.
Caveat: If there are performance issues with the reference-library solution, classes & modules may have to be pulled from the common library database à la #Erik's solution.
Database as Reference Library
The common modules and classes are contained within a database (*.accdb). Whenever the code is desired in another database, a reference is added like referencing any other code library (VB IDE - Tools > References). The only difference is that you Browse... to the library and make sure you select Microsoft Access Databases (*.accdb) in the search filter.
If the reference library database is kept in a common location, there shouldn't be issues with re-referencing, although re-connecting the reference is easily done (same as connecting in the first place).
I've also separated out common version control and development modules into similar separate databases that can be referenced when desired during development and de-referenced when a given database goes to production.
Development "Gotchas"
Actually these are more considerations/things to remember than true "gotchas":
- Edit library classes & modules in the library database
If you edit a class or module from a database that references the library database, you will lose the edits as soon as you close the database you were working in.
You can try out edits from the database you're working in - the code will run, but it won't save.
I often have Notepad++ or OneNote open to copy & paste the code I'm testing over to the library database. That way I can continue working on other areas and still have a set of changes with which to update the library database.
- Compile the library database before using its components in the referencing database
This is more a workflow issue - make sure you save & compile in the library database, and open your referencing database only after you've completed your changes and closed the library database.
Resources
Here are some links which proved helpful:
Using a centralized vba module in multiple access databases
Using VBA Code Libraries in Access Database Applications
Referencing VBA Projects as Libraries
Demonstration Applications & VBA Code Libraries
Classes in VBA
Using Database Library Files in Your Access Application
Object Oriented VBA: Design Patterns: Simple Factory
Object Oriented VBA: Static Classes
The 6th reference (available only via the Internet Archive) has been particularly helpful in sorting out how to properly instantiate classes. In the end, I opted to create a single static Factory class with multiple functions (one per class - e.g. NewClassABC()) that instantiate the classes within the common reference library database.
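As a sketch of that pattern (the class and function names here are invented): in the library database, the class module's Instancing property is set to 2 - PublicNotCreatable, so referencing databases can declare the type but cannot New it themselves; a public factory function inside the library does the New on their behalf:

' In the library database - clsWidget's Instancing = 2 (PublicNotCreatable).
Public Function NewClsWidget() As clsWidget
    Set NewClsWidget = New clsWidget   ' New is legal inside the owning project
End Function

' In a referencing protocol database:
'     Dim w As clsWidget
'     Set w = NewClsWidget()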
So far, so good.
The reference library database keeps the code in one place, and the other protocol databases can be updated from it when desired. No dealing with multiple copies and versions of the same module or class.
Hope this helps others looking for a "common" library type solution.

How to update a function in multiple Access databases (MDB, ACCDB)

We have a large number of access databases that contain a number of common functions. We need to make a small change to one of the functions, and are looking for a way to automate this via scripting, etc. Is there any way to get at the VBA code inside an Access database, and make a change to it?
I have used ADOX before from PS or VBS to be able to list tables, views and queries, but it does not expose the VBA code in its API.
~bp
If the code changes are always in a module with a consistent name, then you can simply remove the existing module and re-import the new version in all the projects to which you need to apply the change.
http://www.cpearson.com/excel/VBE.aspx
The above link has information on how to import modules into a VBA project, so it's a simple case of looping through your multiple databases, removing the module, and importing the latest version.
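A minimal sketch of that loop, assuming the shared code lives in a module named modCommon and the updated version has been exported to modCommon.bas (both names invented). It needs "Trust access to the VBA project object model" enabled on the machine, plus a reference to Microsoft Visual Basic for Applications Extensibility 5.3 if you want early binding:

Public Sub UpdateModuleInDatabases(dbPaths As Collection)
    Dim app As New Access.Application
    Dim p As Variant
    For Each p In dbPaths
        app.OpenCurrentDatabase CStr(p)
        With app.VBE.ActiveVBProject.VBComponents
            On Error Resume Next            ' ignore "not found" on first run
            .Remove .Item("modCommon")      ' drop the stale copy
            On Error GoTo 0
            .Import "C:\Code\modCommon.bas" ' pull in the updated version
        End With
        app.CloseCurrentDatabase
    Next p
    app.Quit
End Sub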
If the code changes are located in modules with different names, you can open each database and loop through the modules to search for the relevant string. The functions you'll need are listed here (depending on the style of change, it will most likely be some combination of Find and ReplaceLine):
http://msdn.microsoft.com/en-us/library/office/aa223124%28v=office.11%29.aspx

How to combine classes of two MS Access applications

My colleague and I decided to split the job in two: she designs the GUI in MS Access, and I design the classes, also in MS Access. Now the problem is that we need to combine those files into a single file. How can I do that? Any help is highly appreciated.
Assuming that the classes you implemented are modules in an Access database, you can import one user's work into the other user's database (it doesn't matter whether you're moving the GUI or the code) with the import function. Up to and including Access 2003, you would right-click the list in the database window and choose Import... from the context menu.
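The same transfer can also be scripted; a minimal sketch (file path and class name are assumptions), run inside the GUI database to pull one class module over from the classes file:

' Import clsCustomer from the colleague's database into the current one.
DoCmd.TransferDatabase acImport, "Microsoft Access", _
    "C:\Work\Classes.accdb", acModule, "clsCustomer", "clsCustomer"

Repeat the call (or loop over a list of names) for each class module you need to merge.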