Access Continuous Form with Linked Table - How to Avoid Hitting Database Server for Every Row in Form? - ms-access

I'm migrating the data from an Access database to SQL Server via the SQL Server Migration Assistant (SSMA). The Access application will continue to be used with the local tables converted to linked tables.
One continuous form hangs for 15-30 seconds while it loads. It displays approximately 2000 records. When I looked in SQL Server Profiler to see what it was doing, it was making a separate call to the backend database for each record in the form. So the delay when the form opens is caused by the 2000-odd separate calls to the database.
This is amazingly inefficient. Is there any way to get Access to make a single call to the backend database and retrieve all the records at once?
I don't know if this is relevant but the Record Source for the form is a view in the SQL Server backend database, which is linked to via an Access linked table (so, hopefully, Access just sees it as a table, not a view). I needed an Instead Of trigger on the view in SQL Server, and a unique index on the linked table in Access, to allow the records to be updated via the form.
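For illustration, the trigger and the index statement are along these lines (object and column names here are simplified stand-ins for my real ones):

-- on SQL Server: INSTEAD OF trigger so updates through the view reach the base table
CREATE TRIGGER dbo.trg_myView_Update ON dbo.myView
INSTEAD OF UPDATE AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET t.AgentName = i.AgentName
    FROM dbo.myTbl AS t
    INNER JOIN inserted AS i ON i.ID = t.ID;
END

-- run from within Access: a pseudo unique index on the linked "table"
CREATE UNIQUE INDEX uix_ID ON dbo_myView (ID);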

If the act of opening that continuous form really does generate ~2000 separate SQL queries (one for every row in the view) then that is unusual behaviour for Access interacting with a SQL Server linked "table". Under normal circumstances what takes place is:
Access submits a single query to return all of the Primary Key values for all rows in the table/view. This query may be filtered and/or sorted by other columns based on the Filter and Order By properties of the form. This gives Access a list of the key values for every row that might be displayed in the form, in the order in which they will appear.
Access then creates a SQL prepared statement using sp_prepexec to retrieve entire rows from the table/view ten (10) rows at a time. The first call looks something like this...
declare @p1 int
set @p1=4
exec sp_prepexec @p1 output,N'@P1 int,@P2 int,@P3 int,@P4 int,@P5 int,@P6 int,@P7 int,@P8 int,@P9 int,@P10 int',N'SELECT "ID","AgentName" FROM "dbo"."myTbl" WHERE "ID" = @P1 OR "ID" = @P2 OR "ID" = @P3 OR "ID" = @P4 OR "ID" = @P5 OR "ID" = @P6 OR "ID" = @P7 OR "ID" = @P8 OR "ID" = @P9 OR "ID" = @P10',358,359,360,361,362,363,364,365,366,367
select @p1
...and each subsequent call uses sp_execute, something like this
exec sp_execute 4,368,369,370,371,372,373,374,375,376,377
Access repeats those calls until it has retrieved enough rows to fill the current page of continuous forms. It then displays those forms immediately.
Once the forms have been displayed, Access will "pre-fetch" a couple more batches of rows (10 rows each) in anticipation of the user hitting PgDn or starting to scroll down.
If the user clicks the "Last Record" button in the record navigator, Access again uses sp_prepexec and sp_execute to request enough 10-row batches to fill the last page of the form, and possibly pre-fetch another couple of batches in case the user decides to hit PgUp or start scrolling up.
So in your case if Access really is causing SQL Server to run individual queries for every single row in the view then there may be something particular about your SQL View that is causing it. You could test that by creating an Access linked table to a single SQL Table or a simple one-table SQL View, then use SQL Server Profiler to check if opening that linked table causes the same behaviour.

It turned out the problem was two aggregate fields: one field's Control Source was =Count(ID) and the other's was =Sum(Total_Qty).
Clearing the control sources of those two fields allowed the form to open quickly. SQL Server Profiler shows it calling sp_execute, as Gord Thompson described, to retrieve seven batches of 10 rows at a time. Much quicker than making 2000 calls to retrieve one row at a time.
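If the totals are still needed, they could be fetched with a single aggregate query against the backend instead (a sketch; dbo.myView stands in for the actual view name):

SELECT COUNT(ID) AS RecordCount, SUM(Total_Qty) AS Total_Qty_Sum
FROM dbo.myView;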

I've come across the same problem again but this time with a different cause. I'm including it here for completeness, to help anyone in a similar situation:
This time the underlying query was hanging and SQL Server Profiler showed the same behaviour as before, with Access making separate calls to the SQL Server database to bring back one record at a time, for every record in the query.
The cause turned out to be the ORDER BY clause in the query. I guess Access had to pull back all the records in the linked table from SQL Server before it could order them. Makes sense when I think about it, although I don't know why Access doesn't just pull all the records through at once instead of fetching them one at a time.

I would try setting the Recordset Type to Snapshot (on the Data tab of the form's property sheet, and/or the property sheet of the query you are using as the form's Record Source).

Related

SQL Server rows not editable for Access after Insert

I have this problem: I'm using a SQL Server 2008 R2 backend and an MS Access 2000 frontend, where some tables are connected via ODBC.
The structure is as follows (all tables on SQL Server):
Import (not connected to Access)
Products (connected via ODBC to Access)
Pricing (connected via ODBC to Access)
I want to fill the Pricing table automatically with some data from Products and Import. This is supposed to run as a SQL Agent job with a T-SQL script.
I want to insert the data from "Products" with the following command:
INSERT INTO Pricing (Productnr, Manufacturernr)
SELECT Productnr, Manufacturernr
FROM Products
WHERE Valid = 1
  AND Productnr NOT IN (SELECT Productnr FROM Pricing);
Right after that, the inserted rows are locked for Access; I can't change anything. If I execute SQL queries with SQL Server Management Studio, or if I start queries as SQL Agent jobs, everything works fine.
Why are the rows locked in MS Access after the query ran (even though it finished successfully)? And how can I unlock them, or make it unlock them right after the query/job has run?
Thanks
When SQL Server inserts new rows, those new rows are in fact exclusively locked to prevent other transactions from reading or manipulating them - that's by design, and it's a good thing! And it's something you cannot change - you cannot insert without those locks.
You can unlock them by committing the transaction that they're being inserted under - once they're committed to SQL Server, you can access them again normally.
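In practical terms, that means making sure the T-SQL script the Agent job runs commits its transaction instead of leaving it open; a sketch using the insert from the question:

BEGIN TRANSACTION;

INSERT INTO Pricing (Productnr, Manufacturernr)
SELECT Productnr, Manufacturernr
FROM Products
WHERE Valid = 1
  AND Productnr NOT IN (SELECT Productnr FROM Pricing);

COMMIT TRANSACTION;  -- releases the exclusive locks on the newly inserted rows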
The error message I get says that the dataset has been changed by another user and that if I save it, I would undo the other user's changes (and it asks me to copy my changes to the clipboard).
This is different from "locked", and completely normal.
If you have a ODBC linked table (or form based on the table) open, and change data in the backend, Access doesn't know about the change.
You need to do a full requery (Shift+F9) in Access to reload the data, afterwards all records can be edited again.
Got the solution for my problem now.
I had to add a timestamp (rowversion) column to the Pricing table, so Access could recognize the change.
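Something along these lines (the column name is just the one I chose):

ALTER TABLE Pricing ADD RowVer rowversion;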
Access loads the data into the front end when the table is first accessed. If something in the backend changes the data, you need Access to refresh it first, before you can edit it from the front end (or see the changes).
Do this (in Access) by closing and reopening the table, switching to the table and pressing Shift+F9 as Andre suggested, or programmatically using a Requery statement. You must requery, not refresh, for it to release the locks and register the changes made in SQL.

How to run an append query from a data macro in MS Access?

I have 2 tables: one is a local table named 'Client' and the other is a linked table to a MySQL DB on my web server named 'ClientSql'.
When I insert data into the Client table, I want it to be inserted into ClientSql too. I tried it with a data macro (After Insert), but it shows me an error saying
It's not possible in linked tables.
I have tried creating an append query, and it works, but only if I execute it manually. It is essentially this (column names here are placeholders for my actual fields):
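INSERT INTO ClientSql (ClientID, ClientName)
SELECT ClientID, ClientName
FROM Client;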
My question is: is it possible to call it from a data macro? If it is possible, can you show me how? If not, can you point me to a solution?
I'm not sure why you should be able to do it manually, but not via a macro. According to this link
http://dev.mysql.com/doc/connector-odbc/en/connector-odbc-examples-tools-with-access-linked-tables.html
you should be able to do it either way.
Another thought is to eliminate the local Access Client table and have the Access program update the MySQL table directly. However, if another program is accessing the Client table at the same time, this could become tricky due to multi-user and locking situations.

SSIS OLE DB conditional "insert"

I have no idea whether this can be done or not, but basically, I have the following data flow:
1. Extracts the data from an XML file (works fine)
2. Simply splits the records based on an enclosed condition (works fine)
3. A derived column object I had to add due to some character set issues (there might be better methods, but it works)
Now "Step 4" is where I'm running into a scenario where I'd only like to insert the values that have a corresponding match in my database, for instance, the XML has about 6000 records, and from those, I have maybe 10 of them that I need to match back against and insert them instead of inserting all 6000 of them and doing the compare after the fact (which I could also do, but was hoping there'd be another method). I was thinking that I might be able to perform a sql insert command within the OLE DB DESTINATION object where the ID value in the file matches, but that's what I'm not 100% clear on or if it's even possible for that matter. Should I simply go the temp table route and scrub the data after the fact, or can I do this directly in the destination piece? Any suggestions would be greatly appreciated.
EDIT
Thanks to the last comment from billinkc, I managed to get a bit closer: I can identify the matches and use that result set, but somehow it seems to be running the data flow twice, which is strange. I took the Lookup object out to see whether it was causing it, and it seems to be the case. Any reason why it would run this entire flow twice with the addition of the Lookup? I should have a total of 8 matches, which I confirmed with the data viewer output, but then it seems to run a second time over the same file.
Is there a reason you can't use a Lookup transformation to find the existing records? Configure it so that it routes non-matching records to the no-match output, and then connect only the "match found" connector to the "Navigator Staging Manager Funds" destination.
I believe that answers what you've asked, but I wonder if you're expressing the right desire. My assumption is the Lookup would go against the existing destination, and so the Lookup returns the ID 10 for a row. All of the out-of-the-box destinations in SSIS only perform inserts, so a row that found a match would now get doubled. As you are looking for existing rows, that usually implies you'd want to perform an update to an existing row. If that's the case, there is a specially designed transformation, the OLE DB Command: it is the component that allows for updates. There is a performance problem with that component: it issues a single update statement per row flowing through it. For 10 rows, I think it'd be fine. Otherwise, the pattern you'd use is to write all the new rows (inserts) into your destination table and write all of your changed rows (updates) into a second, staging-type table. After the data flow is complete, use an Execute SQL Task to perform a set-based update statement, as sketched below.
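A sketch of that set-based update (table and column names here are assumptions, not from the question):

UPDATE d
SET d.SomeValue = s.SomeValue
FROM dbo.DestinationTable AS d
INNER JOIN dbo.StagingUpdates AS s
    ON s.ID = d.ID;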
There are third party options that handle combined upserts. I know Pragmatic Works has an option and there are probably others on the tasks and components site.

SQL Server: unique key for batch loads

I am working on a data warehousing project where several systems are loading data into a staging area for subsequent processing. Each table has a "loadId" column which is a foreign key against the "loads" table, which contains information such as the time of the load, the user account, etc.
Currently, the source system calls a stored procedure to get a new loadId, adds the loadId to each row that will be inserted, and then calls another sproc to indicate that the load is finished.
My question is: is there any way to avoid having to pass the loadId back to the source system? For example, I was imagining that I could get some sort of connection ID from SQL Server that I could use to look up the relevant loadId in the loads table. But I am not sure whether SQL Server has a variable that is unique to a connection.
Does anyone know?
Thanks,
I assume the source systems are writing/committing the inserts into your source tables, and multiple loads are NOT running at the same time...
If so, have the source load call a stored proc, newLoadStarting(), prior to starting the load. This stored proc will update the loads table (create a new row, record the start time).
Put a trigger on the staging tables that gets max(loadID) from the loads table and inserts it as the current load ID.
For completeness you could add an endLoading() proc which sets an end date and deactivates that particular load.
If you are running multiple loads at the same time in the same tables...stop doing that...it's not very productive.
A local temp table (one pound sign: #temp) is unique to the session; dump the ID in there, then select from it.
BTW, this will only work if you use the same connection.
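A minimal sketch of that idea (names invented):

CREATE TABLE #currentLoad (loadId int NOT NULL);

INSERT INTO #currentLoad (loadId) VALUES (42);  -- stash the new loadId once

SELECT loadId FROM #currentLoad;  -- visible only on this connection/session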
In the end, I went for the following solution "pattern", pretty similar to what Markus was suggesting:
I created a table with a loadId column, default null (plus some other audit info like createdDate and createdByUser);
I created a view on the table that hides the loadId and audit columns, and only shows rows where loadId is null;
The source systems load (and view) their data through the view, not the table;
When they are done, the source system calls a "sp__loadFinished" procedure, which puts the right value in the loadId column and does some other logging (number of rows received, date called, etc). I generate this from a template as it is repetitive.
Because loadId now has a value for all those rows, it is no longer visible to the source system and it can start another load if required.
I also arrange for each source system to have its own schema, which is the only thing it can see and is its default on logon. The view and the sproc are in this schema, but the underlying table is in a "staging" schema containing data across all the sources. I ensure there are no collisions through a naming convention.
Works like a charm, including the one case where a load can only be complete if two tables have been updated.
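A rough sketch of the pattern (all object names here are illustrative, not the real ones):

-- staging table, shared across all sources
CREATE TABLE staging.SomeFeed (
    SomeValue     int      NOT NULL,
    loadId        int      NULL,                        -- stays NULL until the load is finished
    createdDate   datetime NOT NULL DEFAULT GETDATE(),
    createdByUser sysname  NOT NULL DEFAULT SUSER_SNAME()
);
GO

-- what the source system sees: only its own unfinished rows, no audit columns
CREATE VIEW source1.SomeFeed AS
    SELECT SomeValue FROM staging.SomeFeed WHERE loadId IS NULL;
GO

-- called by the source system when it has finished loading
-- (assumes staging.loads has an identity loadId plus audit columns)
CREATE PROCEDURE source1.sp__loadFinished AS
BEGIN
    DECLARE @loadId int;
    INSERT INTO staging.loads (loadDate, loadedBy)
        VALUES (GETDATE(), SUSER_SNAME());
    SET @loadId = SCOPE_IDENTITY();
    UPDATE staging.SomeFeed SET loadId = @loadId WHERE loadId IS NULL;
END
GO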

solution needed - 2 users running a program

So I've developed this Access 2007 application with a couple of forms, a lot of VBA code, and a bunch of tables.
Now the business wants to run this off a network drive (call it G:\ for example). My current solution (which I've already implemented) is to have a table similar to:
| Setting  | Value |
|----------|-------|
| Updating | 1     |
| UpdateBy | User1 |
So let me give you some context. When the application runs, there is a button called "update" which updates the local table from a remote server so we can apply filtering. When two people (user1, user2) launch the application and one of them clicks update, the Updating field is set to true and UpdateBy is set to their name.
So when user number 2 tries to update, the application checks whether the Updating field is true; if it is, it gives them a message (to user two, not to user one).
It works beautifully right now, but here is the problem: let's say user1 is updating and closes his program (or task-kills it), or the power fails. Then the application shuts down with the Updating field still set to true. Now, no matter who launches it, they cannot update because it's "already updating".
Can you guys think of a solution to this? Maybe a workaround?
Consider a different locking strategy. In the click event of your "update" button, you can first open a recordset based on your tblUpdateStatus (the table where you've been writing UpdateBy) with dbDenyWrite + dbDenyRead options.
Dim db As DAO.Database, rst As DAO.Recordset
Set db = CurrentDb
Set rst = db.OpenRecordset("tblUpdateStatus", _
    dbOpenTable, dbDenyWrite + dbDenyRead)
Then do your other operations for the "update" button. Afterward, close and release the recordset ... which releases the tblUpdateStatus lock.
Trap the error when a user is unable to open the recordset (because another user has the table locked), give them a message to try later and exit your click event subroutine.
With this approach, when user1 locks tblUpdateStatus but exits Access uncleanly, her lock on tblUpdateStatus is released. You may not even need to update tblUpdateStatus unless you want to record which user has it locked.
See Create and Use Flexible AutoNumber Fields (from the Access Cookbook) for more details about using a recordset with dbDenyWrite + dbDenyRead.
Please do not run an Access application with more than one user when you have not split the database. It will cause endless trouble. The data portion (back-end) should be put on the server, and the code and forms (front-end) should be put on each user's desktop.
More information: http://support.microsoft.com/kb/162522
Read my article on why you split here:
http://www.members.shaw.ca/AlbertKallal/Articles/split/index.htm
In the above, I don't just tell you to do this, but I tell you WHY you split.
It should help you a lot in terms of users tripping over each other.