I needed to find the minimum revenue from a table tbl_Revenue. I found two methods to do that:
Method 1
Dim MinRevenueSQL As String
Dim rsMinRev As DAO.Recordset
Dim MinRev As Variant

MinRevenueSQL = "SELECT Min(tbl_Revenue.Revenue_Value) AS MinRevenue FROM tbl_Revenue WHERE (((tbl_Revenue.Division_ID)=20) AND ((tbl_Revenue.Period_Type)='Annual'));"
Set rsMinRev = CurrentDb.OpenRecordset(MinRevenueSQL)
MinRev = rsMinRev!MinRevenue
Method 2
MinRev2 = DMin("Revenue_Value", "tbl_Revenue", "(((tbl_Revenue.Division_ID)=20) AND ((tbl_Revenue.Period_Type)='Annual'))")
I have the following questions:
Which one of them is computationally more efficient? Is there a big difference in computational efficiency if, instead of the tbl_Revenue table, there is a SELECT statement using joins?
Is there a problem with the accuracy of the DMin function? (By accuracy I mean: are there any pitfalls I need to be aware of before using DMin?)
I suspect that the answer may vary depending on your situation.
In a single-user situation, @transistor1's testing method will give you a good answer for an isolated lookup.
But on a database that is shared over a network, if you have already done Set db = CurrentDb, then the SELECT method should be faster, since it does not require opening a second connection to the database, which is slow.
In the same way, it is more efficient to Set db = CurrentDb once and reuse that db everywhere.
In situations where I want to make sure I have the best speed, I declare Public db As DAO.Database when opening the app. Then, in every procedure where it is required, I use
If db Is Nothing Then Set db = CurrentDb
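A minimal sketch of that pattern (the routine name is just a placeholder):

' In a standard module, available app-wide
Public db As DAO.Database

Sub SomeRoutine()
    ' Reuse the single Database object instead of hitting CurrentDb again
    If db Is Nothing Then Set db = CurrentDb
    ' ... db.OpenRecordset(...), db.Execute(...), etc.
End Sub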
In your specific code, you are running it once, so it doesn't make much of a difference. If it's in a loop or a query and you are combining hundreds or thousands of iterations, then you will run into issues.
If performance over thousands of iterations is important to you, I would write something like the following:
Sub runDMin()
    Dim x As Single, i As Long, MinRev2 As Variant
    x = Timer
    For i = 1 To 10000
        MinRev2 = DMin("Revenue_Value", "tbl_Revenue", "(((tbl_Revenue.Division_ID)=20) AND ((tbl_Revenue.Period_Type)='Annual'))")
    Next
    Debug.Print "Total runtime seconds: " & Timer - x
End Sub
Then implement the same for the DAO query, replacing the MinRev2 part. Run them both several times and take an average. Try your best to simulate the conditions they will be run under; for example, if you will be changing the parameters within each query, do the same, because that will most likely affect the performance of both methods. I have done something similar with DAO and ADO in Access and was surprised to find that under my conditions, DAO was running faster (this was a few years ago, so perhaps things have changed since then).
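A sketch of the DAO counterpart, reusing one Database object per the note above (the SQL is a simplified form of the query from the question):

Sub runDAOMin()
    Dim db As DAO.Database, rs As DAO.Recordset
    Dim x As Single, i As Long, MinRev As Variant
    Set db = CurrentDb
    x = Timer
    For i = 1 To 10000
        Set rs = db.OpenRecordset("SELECT Min(Revenue_Value) AS MinRevenue FROM tbl_Revenue WHERE Division_ID = 20 AND Period_Type = 'Annual';")
        MinRev = rs!MinRevenue
        rs.Close
    Next
    Debug.Print "Total runtime seconds: " & Timer - x
End Sub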
There is definitely a difference when it comes to using DMin in a query to get a minimum from a foreign table. From the Access docs:
Tip: Although you can use the DMin function to find the minimum value from a field in a foreign table, it may be more efficient to create a query that contains the fields that you need from both tables, and base your form or report on that query.
However, this is slightly different than your situation, in which you are running both from a VBA method.
I have tended to believe (maybe erroneously because I don't have any evidence) that the domain functions (DMin, DMax, etc.) are slower than using SQL. Perhaps if you run the code above you could let us know how it turns out.
If you write the DMin call correctly, there are no accuracy issues that I am aware of. Have you heard that there were? Essentially, the call should be: DMin("<Field Name>", "<Table Name>", "<Where Clause>")
Good luck!
Related
I know it is possible to call a query by name, as follows:
DoCmd.OpenQuery "yourQueryName", acViewNormal, acEdit
OR
CurrentDb.OpenRecordset("yourQueryName")
But is it possible to call them by number, like Sheets in Excel?
Something like:
CurrentDb.OpenRecordset Queries(1)
or any other way possible?
Note: I want to do that because my queries are named in Japanese, and I would like to avoid the hard way of reading them in VBA.
You can certainly execute queries using the QueryDefs collection. However, referring to them by position is a dangerous thing to do, since the position may change and you might be opening the wrong query.
To do so, you can use CurrentDb.QueryDefs(1).OpenRecordset
Note that Access also makes temporary/internal queries available through the QueryDefs collection. Their names start with ~, and there are often many of them in a large database.
(in my general development database, the first real query is currently QueryDefs(20))
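If you do go by position, it may help to list the collection first and skip the temporary ones, along these lines:

Dim qdf As DAO.QueryDef
For Each qdf In CurrentDb.QueryDefs
    If Left$(qdf.Name, 1) <> "~" Then
        ' Print the name of each real (non-temporary) query
        Debug.Print qdf.Name
    End If
Next qdf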
Say I use a value, Me!txtUsername.Value, throughout an event and pass it to many functions.
Is it more efficient or best practice to:
A) Set The Value Into A Variable
Dim username As String
username = Me!txtUsername.Value
OR
B) Use It Explicitly Through The Event
iAmAFunction(Me!txtUsername.Value)
OR
C) There is a negligible difference and it is simply preference?
I think the faster way to retrieve the same value multiple times would be to assign it to a variable the first time and read it from the variable thereafter.
But I suspect you would need a fairly extreme edge case to actually notice a difference. So I'll say the correct answer is C - negligible difference.
Personally, I wouldn't be concerned about a performance difference with this. I would simply prefer to repeatedly type and read strUsername instead of Me!txtUsername.Value.
My gut reaction is this is a micro-optimization which is seldom worth worrying about.
Bit of a theoretical question here.
I have made a database search interface for an ASP.NET website. I am using LINQ to SQL for the BL operations. I have been looking at different ways to implement efficient paging, and I think I have the best method now, but I am not sure how great the difference in performance really is and was wondering if any experts have explanations or advice to offer?
METHOD 1: The traditional method I have seen in a lot of tutorials uses pure LINQ to SQL and seems to create one method to get the data and another method that returns the count for the pager. (I guess these could be combined into a single method.) Essentially, an IQueryable is created that represents the whole query, and IQueryable.Count() and IQueryable.Skip().Take() are then performed on it.
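In code, Method 1 looks roughly like this (a sketch; db.Batches, startDate, pageIndex, and pageSize are assumptions, not names from your project):

' The IQueryable is only a description of the query at this point
Dim query = From b In db.Batches _
            Where b.bDate >= startDate _
            Order By b.bDate _
            Select b
' First round trip: translated to a COUNT query for the pager
Dim total = query.Count()
' Second round trip: translated to a paged SELECT
Dim page = query.Skip(pageIndex * pageSize).Take(pageSize).ToList()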
I saw some websites criticising this method because it causes two queries to be evaluated, which apparently is not as efficient as using a stored procedure...
Since I am using full-text search anyway, I needed to write an SP for my search, so in light of the previous comments I decided to do the paging and counting in the SP. So I got:
METHOD 2: A call to the stored procedure from the BL. In the SP, the WHERE clause is assembled according to the fields specified by the user and a dynamic query is created. The results of the dynamic query are inserted into a table variable with a temporary identity key, on which I perform a COUNT(*) and a SELECT WHERE (temp_ID >= x AND temp_ID < y).
It looks to me like those two methods are in principle performing the same operations...
I was wondering whether Method 2 actually is more efficient than Method 1 (regardless of the fact that full-text search is not available in LINQ to SQL...). And why? And by how much?
In my understanding, the SP requires the query to be generated only once, so that should be more efficient. But what other benefits are there?
Are there any other ways to perform efficient paging?
I finally got around to doing some limited benchmarks on this. I've only tested this on a database with 500 entries, because that is what I have to hand so far.
In one case I used a dynamic SQL query with
SELECT *, ROW_NUMBER() OVER(...) AS RN ... FROM ... WHERE RN BETWEEN @PageSize * @PageCount AND @PageSize * (@PageCount + 1)
in the other I used the exact same query, but without the ROW_NUMBER() and WHERE ... clauses, and did a
db.StoredProcedure.ToList().Skip(PageSize * PageCount).Take(PageSize);
in the method.
I tried returning result sets of 10 and 100 items, and as far as I can tell the difference in the time taken is negligible: 0.90s versus 0.89s.
I also tried adding count methods, as you would if you wanted to build a pager. In the stored procedure this seems to add a very slight overhead (going from 0.89s to 0.92s) from performing a second SELECT on the full set of results. That would probably increase with the size of the dataset.
I added a second call to the LINQ to SQL query with a .Count() on it, as you would if you used the two methods required for ASP.NET paging, and that didn't seem to affect execution speed at all.
These tests probably aren't very meaningful given the small amount of data, but that's the kind of dataset I work with at the moment. You'd probably expect a performance hit in LINQ to SQL as the datasets being evaluated become larger...
Hello all.
I need to run Replace([column], [old], [new]) in a query executing on an Access 2003 DB. I know of all the equivalent stuff I could use in SQL Server, and believe me I would love to, but I don't have that option now. I'm trying to write a query where all the non-digit chars are stripped out of a column, i.e. '(111) 111-1111' simply becomes '1111111111'. I could also write an awesome custom VBA function and execute the query using it, but once again, such functions can't be used through JET. Any ideas?
Thanks for the replies, guys. OK, let me clarify the situation. I'm running a .NET web application. This app uses an Access 2003 DB. I'm trying to do an upgrade where I incorporate a type of search page. This page executes a query like: SELECT * FROM [table] WHERE Replace([telnumber], '-', '') LIKE '1234567890'. The problem is that many records in the [telnumber] column have non-digit chars in them, for instance '(123) 123-1234'. These I need to filter out before I do the comparison. The query using a built-in VBA function executes fine when I run it in a testing environment IN ACCESS, but when I run it from my web app, it throws an exception stating something like "Replace function not found". Any ideas?
Based on the sample query from your comment, I wonder if it could be "good enough" to rewrite your match pattern using wildcards to account for the possible non-digit characters?
SELECT * FROM [table] WHERE telnumber LIKE '*123*456*7890'
Your question is a little unclear, but Access does allow you to use VBA functions in queries. It is perfectly legal in Access to do this:
SELECT replace(mycolumn,'x','y') FROM myTable
It may not perform as well as a query without such functions embedded, but it will work.
Also, if it is a one-off query and you aren't concerned about locking a bunch of rows from other users who are working in the system, you can also get away with just opening the table and doing a find-and-replace with Ctrl-H.
As JohnFx already said, using VBA functions (no matter if built in or written by yourself) should work.
If you can't get it to work with the VBA function in the query (for whatever reason), maybe doing it all in code would be an option?
If it's a one-time action and/or not performance-critical, you could just load the whole table into a Recordset, loop through it, and do the replacing separately for each row.
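A rough sketch of that approach, assuming DAO, the [table]/[telnumber] names from the question, and a hypothetical helper that strips non-digit characters:

Function DigitsOnly(ByVal s As String) As String
    Dim i As Long, ch As String
    For i = 1 To Len(s)
        ch = Mid$(s, i, 1)
        ' The Like "#" pattern matches a single digit
        If ch Like "#" Then DigitsOnly = DigitsOnly & ch
    Next i
End Function

Sub StripTelNumbers()
    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset("SELECT telnumber FROM [table]")
    Do While Not rs.EOF
        rs.Edit
        rs!telnumber = DigitsOnly(Nz(rs!telnumber, ""))
        rs.Update
        rs.MoveNext
    Loop
    rs.Close
End Sub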
EDIT:
Okay, it's a completely different thing when you query an Access database from a .NET application.
In this case it's not possible to use any built-in or self-written VBA functions, because .NET doesn't know them. No way.
So, what other options do we have?
If I understood you correctly, this is not a one-time action...you need to do this replacing stuff every time someone uses your search page, correct?
In this case I would do something completely different.
Even if doing the replace in the query worked, performance-wise it's not the best option, because it would likely slow down your database.
If you don't write that often to your database, but do a lot of reads (which seems to be the case according to your description), I would do the following:
Add a column "TelNumberSearch" to your table
Every time you save a record, save the phone number in the "TelNumber" column, do the replacing on it, and save the stripped number in the "TelNumberSearch" column
--> When you do a search, you already have the TelNumberSearch column with all the stripped numbers...no need to strip them again for every single search. And you still have the column with the original number (with the non-digit chars) for display purposes.
Of course you need to fill the new column once, but this is a one-time action, so looping through the records and doing a separate replace for each one would be okay in this case.
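If you run that fill from within Access itself (where JET can see VBA functions), it could even be a single statement; DigitsOnly here is the hypothetical helper sketched in the earlier answer:

' One-time fill of the new search column
CurrentDb.Execute _
    "UPDATE [table] SET TelNumberSearch = DigitsOnly(TelNumber)", dbFailOnError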
I have been doing a lot of reading but not coming up with any good answers on LINQ to SQL caching...I guess the best way to ask my question is to just ask it.
I have a jQuery script calling a WCF service based on info that the script is getting from the first two cells of each row of a table. Basically it's looping through the table, calling the service with info from the table cells, and updating the row based on info returned from the service.
The service itself is running a query based on the info from the client basically in the form of:
Dim b = From r In db.batches _
Where r.TotalDeposit = amount _
And r.bDate > startDate AndAlso r.bDate < endDate _
Select r
Using Firebug I noticed that each response was taking anywhere between 125ms and 3s. I did some research and came across an article about caching LINQ objects and applied it to my project. I was able to return stuff like the count of the object (b.Count) as a response in a page and noticed that it was caching, so I thought I was cooking with grease...however, when I tried running the above query against the cached object, the times became a consistent 700ms, which is too long.
I read somewhere that LINQ caches automatically so I did the following:
Dim cachedBatch = From d In db.batches _
                  Select d
Dim t As List(Of batch) = (From r In cachedBatch _
                           Where r.TotalDeposit = amount _
                           And r.bDate > startDate AndAlso r.bDate < endDate _
                           Select r).ToList()
Return t
Now the query runs at a consistent 120-140ms response time...what gives??? I'm assuming it's caching, since running the query against the DB takes a little while (< 35,000 records).
My question I guess then is, should I be trying to cache LINQ objects? Is there a good way to do so if I'm missing the mark?
As usual, thanks!!!
DO NOT USE the code in that linked article. I don't know what that person was smoking, but the code basically reads the entire contents of a table and chucks it in a memory cache. I can't think of a much worse option for a non-trivial table (and 35,000 records is definitely non-trivial).
Linq to SQL does not cache queries. Linq to SQL tracks specific entities retrieved by queries, using their primary keys. What this means is that if you:
Query the DataContext for some entities;
Change those entities (but don't call SubmitChanges yet);
Run another query that retrieves the same entities.
Then the results of step 3 will be the same entities you retrieved in step 1, with the changes you made in step 2 - in other words, you get back the existing entities that LINQ is already tracking, not stale copies from the database. But it still has to actually execute the query in order to know which entities to load; change tracking is not a performance optimization.
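A tiny illustration of that identity-map behaviour, using the batch entity from the question (the ID field is an assumption):

Dim b1 = db.batches.First(Function(b) b.ID = 1)
b1.TotalDeposit = 99D   ' change it, but don't SubmitChanges yet
' The query still runs against the database, but the returned row maps
' back to the SAME tracked object, pending change included
Dim b2 = db.batches.First(Function(b) b.ID = 1)
Debug.Assert(Object.ReferenceEquals(b1, b2))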
If your database query is taking more than about 100 ms then the problem is almost certainly on the database side. You probably don't have the appropriate indexes on the columns that you are querying on. If you want to cache instead of dealing with the DB perf issue then you need to cache the results of specific queries, which you would do by keying them to the parameters used to create the query. For example (C#):
IEnumerable<Batch> GetBatches(DateTime startDate, DateTime endDate,
                              Decimal amount)
{
    // Key the cache entry to the query parameters
    string cacheKey = string.Format("GetBatches-{0}-{1}-{2}",
                                    startDate, endDate, amount);
    // The cache indexer returns object, so cast it back to the result type
    var results = Cache[cacheKey] as IEnumerable<Batch>;
    if (results != null)
    {
        return results;
    }
    // Cache miss: run the query and materialize it before caching
    results = <LINQ QUERY HERE>.ToList();
    Cache.Add(cacheKey, results, ...);
    return results;
}
This is fine as long as the results can't be changed while the item is in the cache, or if you don't care about getting stale results. If this is an issue, then it starts to become a lot more complicated, and I won't get into all of the subtleties here.
The bottom line is, "caching" every single record in a table is not caching at all; it's turning an efficient relational database (SQL Server) into a sloppy, inefficient in-memory database (a generic list in a cache). Don't cache tables; cache queries if you need to, and before you even decide to do that, try to solve the performance issue in the database itself.
For the record, I should also note that someone seems to have implemented a form of caching based on the IQueryable<T> itself. I haven't tested this method, and I'm not sure how much easier it would be to use in practice than the above (you still have to specifically choose to use it; it's not automatic), but I'm listing it as a possible alternative.