Passing DAO.Workspace as a parameter - Bad idea? - ms-access

In David Fenton's answer to the question "MS Access (Jet) transactions, workspaces" he states:
I would also say, never pass the workspace -- only pass the objects you've created with the workspace.
Is passing the workspace a bad idea? If so, why?
Additional info:
My intent here is to execute INSERT/UPDATE/DELETE statements from within multiple Subs/Functions but wrap them all in a single transaction.
The base functionality is wrapped up in a class module. I want to be able to isolate multiple instances of the class module, which means I can't just assume my transactions are executing against the default workspace.
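For illustration, here is a rough sketch of the kind of class module I mean (the class name, workspace name, and SQL are placeholders); each instance creates its own workspace so that its transaction is isolated from the default workspace and from other instances:
' Class module: clsBatch (hypothetical name)
Private m_wrk As DAO.Workspace
Private m_db As DAO.Database

Private Sub Class_Initialize()
    ' Each instance gets its own session, so its transaction cannot
    ' collide with the default workspace or with another instance.
    ' The workspace name is arbitrary; it is not appended to the Workspaces collection.
    Set m_wrk = DBEngine.CreateWorkspace("IsolatedBatch", "admin", "")
    Set m_db = m_wrk.OpenDatabase(CurrentDb.Name)
    m_wrk.BeginTrans
End Sub

' Called from the various Subs/Functions that do the actual work.
Public Sub ExecuteSql(ByVal strSql As String)
    m_db.Execute strSql, dbFailOnError
End Sub

Public Sub Commit()
    m_wrk.CommitTrans
End Sub

Public Sub Cancel()
    m_wrk.Rollback
End Sub

Private Sub Class_Terminate()
    On Error Resume Next
    m_db.Close
    m_wrk.Close
End Sub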

Related

Data Migrating document for Couchbase (i.e. changing existing field type)?

I am coming from an object-relational database background. I understand Couchbase is schema-less, but data migration will still happen as the application develops.
In SQL we have management tools to alter tables, or I can write a migration script in SQL to migrate from the version 1 table to the version 2 table.
But with documents, say we have a JSON document UserProfile:
UserProfile
{
    "Owner": "Rich guy!",
    "Car": "Cool car"
}
We might want to add a last-visit field and allow the user to have multiple cars, so the new, updated document will look as follows:
UserProfile
{
    "Owner": "Rich guy!",
    "Car": ["Cool car", "Another car"],
    "LastVisit": "2015-09-29"
}
But for easier maintenance, I want all other UserProfile documents to follow the same format, with the "Car" field as an array.
From my experience in SQL, I could write a migration script that supports migrating between different versions of a table: from the version 1 table up to the version 2...N table.
So how should I write such migration code? Will I really just have to write an app (executable) using the Couchbase SDK to migrate all the documents each time?
What would be a good way to do a migration like this?
Essentially, your problem breaks down into two parts:
Finding all the documents that need to be updated.
Retrieving and updating said documents.
You can do this in one of two ways: using a view that gives you the document ids, or using a DCP stream to get all the documents from the bucket. The view only gives you the ids of the documents, so you basically iterate over all the ids, and then retrieve, update and store each one using regular key-value methods. The DCP protocol, on the other hand, gives you the actual documents.
The advantage of using a view is that it's very simple to implement, works with any language SDK, and it lets you write your own logic around the process to make it more robust and safe. The disadvantage is having to build a view just for this, and also that if the data keeps changing, you must retrieve the ENTIRE view result at once, because if you try to page over the view with offsets, the ordering of results can change, thus giving you an inconsistent snapshot of the data.
The advantage of using DCP to stream all documents is that you're guaranteed to get a consistent snapshot of your data even if it's constantly changing, and also that you get the whole document directly as part of the stream, so you don't need to retrieve it separately - just update and store back to the database. The disadvantage is that it's currently only implemented in the Java SDK and is considered an experimental feature. See this blog for a simple implementation.
The third - and most convenient for an SQL user - way to do this is through the N1QL query language introduced in Couchbase 4. It has the same data manipulation commands as you would expect in SQL, so you could basically issue a command along the lines of UPDATE myBucket SET prop = {'field': 'value'} WHERE condition = 'something'. The advantage of this is pretty clear: it both finds and updates the documents all at once, without writing a single line of program code. The disadvantage is that the DML commands are considered "beta" in the 4.0 release of Couchbase, and that if the data set is too large, then it might not actually work due to timing out at some point. And of course, the fact that you need Couchbase 4.0 in the first place.
I don't know of any official tool currently to help with data model migrations, but there are some helpful code snippets depending on the SDK you use (see e.g. bulk updates in java).
For now you will have to write your own script. The basic process is as follows:
Make sure all your documents have a model_version attribute that you increment after each migration.
Before a migration, update your application code so it can handle both the old and the new model_version, and so that new documents are written in the new model.
Write a script that iterates through all the old-model documents in your bucket (you need a view that emits the document key), makes the update you want, increments model_version, and saves the document back.
In a high-concurrency environment it's important to have good error handling and monitoring. You could, for example, have a view that counts how many documents are in each model_version.
You can use Couchmove, which is a Java migration tool that works like Flyway DB.
You can execute N1QL queries with this tool to migrate your documents and keep track of your changes.
If I understood correctly, the crux here is getting and then updating every Couchbase document. This can be done with a view, provided that you understand that views are only 'eventually consistent' (unlike read/write actions, which are strongly consistent).
If (at migration time) no new documents are added to your bucket, then your view would be up to date and should return the entire set of documents to be migrated. Easy.
On the other hand, if new documents continue to be written into your bucket, and these documents need to be migrated, then you will have to run your migration code continually to catch all these new docs (since the view won't return them until it is updated, a few seconds later).
In this 2nd scenario, while migration is happening, your bucket will contain a heterogeneous collection of docs: some that have been migrated already, some that are about to be migrated and some that your view has not 'seen' yet (because they were recently added) and would only be migrated once you re-run the migration code.
To make the migration process efficient, you'll need to find a way to differentiate between already-migrated items and yet-to-be-migrated items. You can add a field to each doc with its 'version number' and update it during the migration. Your view should be defined to only select documents with older 'version number' and ignore already-migrated items.
I suggest you read more about couchbase views - here and on their site.
Regarding your migration: There are two aspects here: (1) getting the list of document ids that need to be updated and (2) the actual update.
The actual update is simple: you retrieve the doc and save it again with the new format. There's no explicit schema. Where once you added a column in SQL and populated it, you now just add a field to the JSON doc (in all the docs). All migrated docs should have this field. Side note: things get a little more complicated if (while you're migrating) the document can be updated by another process. This requires special handling (read about CAS if that's the case).
Getting all the relevant doc keys requires that you define a view and query it. That's beyond the scope of this answer (and is very well documented). Once you have all the keys, you simply iterate over them one by one and update them.
With N1QL, Couchbase provides the same schema migration capabilities as you have in an RDBMS or object-relational database. For the example in your question, you can place the following query in a migration script:
UPDATE UserProfile
SET Car = TO_ARRAY(Car),
LastVisit = NOW_STR();
This will migrate all the documents in your bucket to your new schema. Note that update statements in Couchbase provide document-level atomicity, not statement-level atomicity. But since this update is idempotent (repeatable), you can run it multiple times if you run into errors. Note: similar to the last paragraph of David's answer above.

MySQL Alternative to Trigger on Select

For a project I am currently working on, I am required to use triggers to restrict users' access to a database. However, since there are no triggers on SELECT statements, I need to find some alternative method for adding restrictions before a SELECT statement is executed. Are there any alternatives to "Triggers on SELECT"? If so, what are they?
Note: This is an assignment, thus I cannot add the restrictions at application level since the point of the assignment is to add restrictions at the DB Level.
I have also read all the other posts that I can find on this; please don't close this post pointing me to some other related posts unless they offer several alternatives I can choose from. I'm not saying those posts are of no use or that the solutions are not proper; however, I would like to explore all of the possible solutions.
One solution is to change the table to a view, such that querying the view invokes a stored function, and the stored function logs access to the table.
Here's a blog by the great Roland Bouman, describing the process in detail:
http://rpbouman.blogspot.com/2005/08/mysql-create-dirty-little-tricker-for.html

Transaction in Enterprise library

How do I handle a transaction within a scope using the Enterprise Library? I have 3 stored procedures which I need to execute in a single scope. I don't want to use the System.Transactions namespace.
You can call the BeginTransaction method on a connection object to get a DbTransaction object. Then use the Entlib Database object's overloads that take a DbTransaction. However, it's a giant pain to manage. You'll need to create and close at least one connection manually rather than relying on Entlib to do the right thing, and you'll have to pass the DbTransaction object around to everything that needs it.
TransactionScope really is the right answer here. If you've got some blocking scenario that really prevents you from using it that isn't some brain-dead corporate policy, I'd love to know what it is.

Calling function in one MDB from another MDB

We have developed a consolidation function that will be used by other processes and want to position the function in its own MDB (call it "remote") so that it can be referenced and called from "caller.mdb" when it's needed. The function is designed to return an array and works great when called directly from within "remote." However, with "remote" properly referenced in the "caller" VBA project, when "caller" makes the call the function returns errors. We get a variety of errors such as
3078: Jet cannot find the input table or query
QUESTION. Within "remote", how does one properly set references to the db and its local objects (e.g. one table and several queries including INSERT and UPDATE queries)? CurrentDB is apparently not the answer; we have also experimented with the AccessObject and CodeData objects. "Remote" and "caller" currently reside on the same drive, so that wouldn't seem to be the problem.
Instead of CurrentDb you could use CodeDb, which points to the mdb currently executing the code.
Set db = CodeDb
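For example, a function in "remote" might look something like this (the query names qryUpdateStaging and qryConsolidated are made up); everything is resolved through CodeDb, so it keeps working when called from "caller":
Public Function ConsolidateData() As Variant
    Dim db As DAO.Database
    Dim rs As DAO.Recordset

    ' CodeDb points at the mdb containing this code ("remote"),
    ' even when the function is invoked from "caller.mdb".
    Set db = CodeDb
    db.Execute "qryUpdateStaging", dbFailOnError          ' a local UPDATE query
    Set rs = db.OpenRecordset("qryConsolidated", dbOpenSnapshot)
    If Not rs.EOF Then
        rs.MoveLast
        rs.MoveFirst
        ConsolidateData = rs.GetRows(rs.RecordCount)      ' return the results as an array
    End If
    rs.Close
End Function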
The way Access itself does this (with all the wizards, which are all programmed in Access), is to use Application.Run. It does mean the code you're calling has to be a function, though it doesn't matter what it returns. Application.Run requires no references, just a path:
Application.Run("MyCodeDatabase.MyFunction")
Obviously, if the code database is not in the path that Access searches (which includes its own app folders, the app-specific folders in the user's profile, and the folder where your main application front end is stored), you'll need to specify the full path.
Application.Run() is a function that returns a value, but it is typed as variant. This may or may not work with your array. It's not clear from the object browser whether or not the arguments are passed ByVal or ByRef, but if they are ByRef (which is what I'd expect), you might just pass the array in and let the function work on it and then use it after the code in the remote database has completed.
On the other hand, the arguments are probably variants, so there's not much difference between that approach and just using the structure returned by Application.Run().
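As a rough sketch of what the calling side might look like (remote and ConsolidateData are placeholder names, and the 2-D array shape is only an assumption):
' In caller.mdb - no reference to remote.mdb is set.
Public Sub ConsolidateViaRun()
    Dim results As Variant

    ' Application.Run returns a Variant, which here is assumed to
    ' carry back the array built by the function in remote.mdb.
    results = Application.Run("remote.ConsolidateData")

    If IsArray(results) Then
        Debug.Print "Rows returned: " & UBound(results, 2) + 1   ' assumes a GetRows-style 2-D array
    End If
End Sub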
Marcand gave you the answer to your immediate question. There are other problems and irritations when it comes to using add-ins or referenced Access databases. See my Add-in Tips, Hints and Gotchas page.
There are a number of differences and nuances to calling forms and functions through a reference in another MDB or ADP. I have run into issues in both situations, and what you are referring to as the "remote" database, I refer to as a central library.
At my Tips and Tricks page at http://www.mooresw.com/tips.php, I have pages devoted to programmatically changing references, getting Access to search for the referenced file instead of having a broken reference, and calling forms through a reference.
Programmatically changing references is needed when you publish the database from the development environment to the user or production environment. When working in the development folder, it's fine for the program to have a reference to the central library directly, but we wouldn't want code that 20 users are running tying up the central library in our development area. (An MDB file opened through a reference gets locked just as though your users were opening it directly.)
The situation of running a form in a central library (or "remote" database) where there are no links or tables can be tricky. In that situation I've chosen to open a connection to the "caller.mdb" using ADO code with a Jet connection string in the open event of the forms. Doing so provides the ability for the code in the form (or functions in the library) to gain access to the tables and queries in the calling mdb.
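One way that Open-event code might look (this is only a sketch; it assumes the Jet 4.0 OLE DB provider and a reference to the ADO library):
Private Sub Form_Open(Cancel As Integer)
    Dim cn As ADODB.Connection

    ' CurrentProject.FullName is the front end the user actually has open
    ' (caller.mdb), not the library database this form lives in.
    Set cn = New ADODB.Connection
    cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
            "Data Source=" & CurrentProject.FullName

    ' ...use cn to read the caller's tables and queries...

    cn.Close
End Sub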
For further information, see my pages at the tips link above, and in particular, see:
http://www.mooresw.com/call_a_form_in_another_MDB_through_a_reference.php
which I believe is most relevant to your situation.

Difference between DBEngine.BeginTrans and DBEngine.Workspaces(0).BeginTrans

In Access, what is the difference between these two statements?
DBEngine.BeginTrans
and
DBEngine.Workspaces(0).BeginTrans
The documentation for both leads to the same place.
Have a look here: DAO Workspace
And then here: DAO Workspace: Opening a Separate Transaction Space
(The links are for MFC, but they're applicable to whatever you're coding in.)
DBEngine.Workspaces(0) is the default workspace. Other workspaces can be created, which let you work with separate sessions; the idea is that BeginTrans and EndTrans apply to the whole workspace, but if you need to do stuff outside that transaction, you can create another workspace and use it independently of your transactions in the first workspace.
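A small sketch of that idea (the table names and SQL are invented): work done through the second workspace is not affected when the first workspace rolls back.
Dim wrkDefault As DAO.Workspace
Dim wrkSide As DAO.Workspace
Dim dbMain As DAO.Database
Dim dbSide As DAO.Database

Set wrkDefault = DBEngine.Workspaces(0)                      ' the default session
Set wrkSide = DBEngine.CreateWorkspace("side", "admin", "")  ' an independent session
Set dbMain = CurrentDb
Set dbSide = wrkSide.OpenDatabase(CurrentDb.Name)

wrkDefault.BeginTrans
dbMain.Execute "UPDATE Orders SET Posted = True", dbFailOnError
dbSide.Execute "INSERT INTO AuditLog (Msg) VALUES ('posting attempted')", dbFailOnError
wrkDefault.Rollback    ' the Orders update is undone; the AuditLog row is kept

dbSide.Close
wrkSide.Close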
Personally, I never had occasion to use more than one workspace when doing DAO in VBA. * shrug *
My own answer:
It appears that DBEngine.BeginTrans and DBEngine.Workspaces(0).BeginTrans do the same thing because this code works (see below). "Workspaces" is the default member of DBEngine.
Dim db As Database
Set db = CurrentDb
DBEngine.BeginTrans
db.Execute "Update Table1 SET CITY='Newark'"
DBEngine.Workspaces(0).Rollback
In the Access application interface, you can only have one database container open at a time. In VBA code, you can open multiple database instances within a Workspace. See the help file documentation for the Workspace.OpenDatabase method (or http://msdn.microsoft.com/en-us/library/bb243164.aspx) for an example where more than one Database is opened in one Workspace.
One would infer that when transactions are supported by all of the underlying Databases that are open in a Workspace, the BeginTrans method of the Workspace would apply across all of the databases. I suspect there be dragons there, but I'm sure it would work with two MDBs inside of one Workspace. When there's only one database open in the Workspace, Workspace.BeginTrans and Database.BeginTrans are indeed the same.
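For illustration, a sketch of that two-MDB case (the paths and table names are made up); the single BeginTrans/CommitTrans pair on the workspace covers the Execute calls against both databases:
Dim wrk As DAO.Workspace
Dim dbSales As DAO.Database
Dim dbArchive As DAO.Database

Set wrk = DBEngine.Workspaces(0)
Set dbSales = wrk.OpenDatabase("C:\Data\Sales.mdb")
Set dbArchive = wrk.OpenDatabase("C:\Data\Archive.mdb")

wrk.BeginTrans
dbSales.Execute "DELETE FROM Orders WHERE OrderDate < #1/1/2000#", dbFailOnError
dbArchive.Execute "INSERT INTO OldOrders (OrderID) VALUES (42)", dbFailOnError
wrk.CommitTrans   ' one transaction spanning both databases

dbArchive.Close
dbSales.Close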
As Spock once said, a difference which makes no difference is no difference...