How do I track down slow MS Access Queries? - ms-access

I run a website for a non-profit, and the server crashed last week. I moved the code over to another server (hosted by Network Solutions). It has a Microsoft Access back end. Now, all the users are experiencing delays. I think it might be because certain queries need to be optimized (?). Is there a way to figure out which queries might be slow? (the website has ~150 pages)
Here's a very specific example. On the old server, this query was super fast:
SELECT ACCNo, clubid, First_Name, Last_Name, State
FROM tblAccOne
WHERE clubid > 0 (GR_L+GR_ClubPts+GR_VisitorPts+GR_CurRegPts+GR_CurNatPts+GR_TransPts) >= 6000 AND
PlatinumAward = False
ORDER BY last_name, first_name
However, it runs like a DOG on the new server until I remove "clubid > 0" at which point it works great. Goes from taking 30 seconds to less than one sec.

So Access is a front-end Windows app, and really isn't even a candidate to serve as a back-end db; you would go to SQL Server to sit behind a web server.
But in any case, the syntax you posted is invalid: immediately after
WHERE clubid > 0
there needs to be an AND or an OR before
(GR_L+GR_ClubPts+GR_VisitorPts............
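For clarity, here is the query with the presumably dropped operator restored (assuming AND was intended - only you can confirm that against the old server's code):
SELECT ACCNo, clubid, First_Name, Last_Name, State
FROM tblAccOne
WHERE clubid > 0
  AND (GR_L+GR_ClubPts+GR_VisitorPts+GR_CurRegPts+GR_CurNatPts+GR_TransPts) >= 6000
  AND PlatinumAward = False
ORDER BY last_name, first_name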

Related

How to get number of select queries made to a specific database for a duration (not whole mysql instance)?

I have a website running in a shared hosting environment, and I would like to know how many SELECT queries are being run against my database.
When I run the following query, I suspect the result is for the whole MySQL instance that the shared hosting runs on, because the difference between results taken a minute apart is huge - definitely not what my website alone would generate.
show global status like "Com_select"
How could I get the number of select queries made in a duration only for my database?
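If your host runs MySQL 5.6+ with performance_schema enabled (an assumption - many shared hosts disable it), one approach is to aggregate the per-schema statement digests, which are tracked per database rather than per instance:
SELECT SCHEMA_NAME, SUM(COUNT_STAR) AS select_count
FROM performance_schema.events_statements_summary_by_digest
WHERE SCHEMA_NAME = 'your_database' -- replace with your schema name
  AND DIGEST_TEXT LIKE 'SELECT%'
GROUP BY SCHEMA_NAME;
The counters accumulate from the time the summary table was last truncated, so sample twice and subtract to get the count for a given duration.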

MySQL Connection Limit Advice

I've hit a problem with my MySQL queries and was hoping someone could offer some help/advice.
I'm developing a PHP-based system which combines quite a lot of data in different tabs on one page (tab1 = profile, tab2 = address, tab3 = payments, etc.), and as a result one page can have up to 34-40 MySQL queries pulling from different tables or with different criteria.
The page load became really slow and I asked my web host if they knew what was wrong and they advised it's because of slow MySQL queries (some over 2 seconds). They also said that my MySQL user is only allowed 15 connections at a time.
If my page has 40 queries and only 15 connections are allowed at a time, does this mean they effectively queue and wait for one to complete? If that's the case then I can understand why the page takes a while to load, but I'm not sure of the solution. Is 15 connections considered a lot, or is this quite a tight restriction by my host (HostMonster)?
Also, if there were 15 users accessing the system at the same time, would those 15 connections be split between them, or is it 15 connections per user logged into the site? I assume they mean per database user, but everyone who accesses the system uses the same database user, so it seems impossible to build a system that several users can access at once.
The whole connections thing has confused me a little.
Thanks in advance for any help!
Have one connection per page and run the queries in sequence over it.
Optimize those queries - see EXPLAIN and use indexes (a sketch follows below).
Perhaps combine queries to reduce the number of round trips.
BTW 10+ queries per page is excessive IMHO.
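A minimal sketch of the EXPLAIN-then-index workflow (the payments table and customer_id column are hypothetical stand-ins for one of your tab queries):
-- 1. Ask MySQL how it plans to execute the slow query
EXPLAIN SELECT * FROM payments WHERE customer_id = 42;
-- If the "key" column is NULL and "rows" is large, no index is being used.
-- 2. Add an index on the filtered column and re-check the plan
CREATE INDEX idx_payments_customer ON payments (customer_id);
EXPLAIN SELECT * FROM payments WHERE customer_id = 42;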
If you are hitting the max connect errors limit, use this command from the command line:
mysqladmin flush-hosts -uuser -p'password'
This flushes the hosts that MySQL has recorded and rebuilds the list. MySQL 5.6 gives you more information about this than previous versions do.
You can set the following (under [mysqld] in my.cnf)
max_connect_errors = 10000
to stop the message from appearing again.
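The same setting can also be changed at runtime without a restart - a sketch, assuming your account has the SUPER privilege (shared hosts rarely grant it):
SET GLOBAL max_connect_errors = 10000;
-- verify the new value:
SHOW VARIABLES LIKE 'max_connect_errors';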
15 queries or connections is not a problem at all; on busy databases we have seen a thousand connections and tens of thousands of queries per second.

Kill SQL Server transactions after some minutes

Is there any way to configure SQL Server 2008 to kill a transaction if it has neither been canceled nor committed for some time? (Say power, the network connection, or whatever gets cut to the computer holding it open.)
Either so it happens automatically after some defined rule-set, or by making and calling a command-line application that queries the SQL Server for active transactions plus the time they have been running, and then instructs SQL Server to close down those that are "frozen".
To quote Gail Shaw from here:
SQL Server does not time queries out, the connecting application (in this case query analyser) does.
Whichever tech you're using to connect (ADO, etc.) will probably have a connection timeout and an execution timeout property that you can change in your calling code. Defaults are usually 30 secs.
You could potentially wrap something like this in a loop that kills each offending spid:
select datediff(second, last_batch, getdate()) as secs_running, *
from sys.sysprocesses
where hostname != ''
and open_tran = 1
There would probably be many opinions on how to best find which processes are "safe" to kill, and I would certainly be a little worried about automatically doing such a thing based upon an arbitrary timespan. I'm also not sure that any data changes done in the process are guaranteed to be rolled-back.
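For illustration, here is a minimal sketch of such a loop (the 10-minute threshold is arbitrary, and the caveats above apply; KILL normally triggers a rollback of the open transaction, but the rollback itself can take time):
DECLARE @spid int, @cmd nvarchar(20);
DECLARE offenders CURSOR LOCAL FAST_FORWARD FOR
    SELECT spid
    FROM sys.sysprocesses
    WHERE hostname != ''
      AND open_tran = 1
      AND datediff(second, last_batch, getdate()) > 600; -- 10 minutes, arbitrary
OPEN offenders;
FETCH NEXT FROM offenders INTO @spid;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = N'KILL ' + CAST(@spid AS nvarchar(10)); -- KILL only accepts a constant, hence dynamic SQL
    EXEC (@cmd);
    FETCH NEXT FROM offenders INTO @spid;
END
CLOSE offenders;
DEALLOCATE offenders;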

Intermittent slow query when using WHERE clause

I'm using SQL Server 2008 and just recently started having an intermittent problem while querying a database.
At least once a day I'm having timeouts with many of our applications because of a slow query. There is no particular time this happens; sometimes in the morning, sometimes afternoon. Every time I begin troubleshooting the problem, it fixes itself within minutes.
Normally I use this query:
SELECT Name FROM Demographics WHERE Name IS NOT NULL
and it runs in < 1 second. However, during these "problem times" the query will take around 3 minutes. Once the query goes through, I can run it again and it works just fine (almost instantly).
Also, while the query above is running, I can use this:
SELECT Name FROM Demographics
and it runs perfectly. No delay. The only difference is the WHERE clause. So, where do I begin troubleshooting? What tools should I be using to find the cause?
Thanks in advance.
The first thing to do is to look at the execution plan of the query. To do this, open a query window in Management Studio, and then choose Include Actual Execution Plan in the Query menu. Run your query, then go to the Execution Plan tab and save the plan.
When you see the performance problem, repeat these steps. Then, load both execution plans, and compare them to see what is different. If there are differences, they will probably point you in the right direction to find the problem.
Look to see if you are being blocked by another process during the trouble periods.
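One quick check during a trouble period is to look for a non-zero blocking_session_id while the slow query is stuck - a sketch using the standard DMVs:
SELECT r.session_id,
       r.blocking_session_id, -- the spid holding the lock, if any
       r.wait_type,
       r.wait_time,
       t.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;
If your SELECT with the WHERE clause shows up here, it is waiting on locks held by another session, which would also explain why the problem clears up on its own within minutes.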

Query Execution time in Management Studio & profiler. What does it measure?

I have my production SQL Server in a remote data center (and the web servers are located in the same data center). During development we observed that one particular view takes a long time to execute (about 60-80 secs) on our local development SQL Server, and we were OK with it. It was promoted to production, and when I run the same query against the production DB (which is in the data center) from my local Management Studio, the query takes about 7 minutes 17 secs to run (shown in the bottom right corner of Management Studio). When I ran a Profiler trace, the duration it reported for that query was 437101 - which I first read as microseconds, but which is actually about 437101 milliseconds, matching the 7:17 that Management Studio shows. My DBA says that in prod the view takes just about 60 to 80 seconds, though I see different numbers from Profiler and Management Studio. Can someone tell me what these durations mean in Profiler and Management Studio?
My guess: duration between sending the last request byte and receiving the last response byte from the server. The client statistics were as follows:
Client Processing time: 90393
Total Execution time: 92221
Wait time on server replies: 1828
My best guess on what "duration" on the profiler means is "the time taken by SQL Server (optimization engine to parse the query,generate the query plan or use the existing query plan + fetch records from different pages) to generate the result set which excludes the time taken by data to travel over the wire to the client"
Edit: I find that both these times are about the same (Management Studio vs Profiler). How do they relate to the times I see in the client statistics?
Can someone throw more light on these?
If I'm understanding your question correctly, you are first questioning the difference between the Duration reported by Profiler and the statistics presented in SSMS (either in lower right-hand corner for general time and/or by SET STATISTICS TIME ON). In addition to that, you seem to be unconvinced of the production DBA's comment that the view is executing in the expected duration of ~60 seconds.
First, from Books Online, the statistics that SSMS would report back via SET STATISTICS TIME ON:
"Displays the number of milliseconds
required to parse, compile, and
execute each statement."
You're spot-on for this. As for Duration in Profiler, it is described as:
"The duration (in microseconds) of the
event."
From where I sit, these two should be functionally equivalent (and, as I'm sure you noticed, Profiler will report in microseconds if you're going against SQL 2005 or later). I say this because the "event" in this case (regarding Duration in Profiler) is the execution of the select, which includes delivery to the client; this is consistent in both cases.
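For reference, a minimal way to see those SSMS-side numbers for the view (vw_YourView is a placeholder for the actual view name):
SET STATISTICS TIME ON;
SELECT * FROM vw_YourView;
SET STATISTICS TIME OFF;
-- The Messages tab then reports parse and compile time plus:
--   SQL Server Execution Times: CPU time = ... ms, elapsed time = ... ms.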
It seems you suspect that geography is the culprit to the long duration when executing the query remotely. This very well may be. You can test for this by executing the select on the view in one query window then spawning another query window and reviewing the wait type on the query:
select
a.session_id
,a.start_time
,a.status
,a.command
,db_name(a.database_id) as database_name
,a.blocking_session_id
,a.wait_type
,a.wait_time
,a.cpu_time
,a.total_elapsed_time
,b.text
from sys.dm_exec_requests a
cross apply sys.dm_exec_sql_text(a.sql_handle) b
where a.session_id != @@SPID;
I would suspect that you would see something like ASYNC_NETWORK_IO as the wait type if geography is the problem - otherwise, check what comes of this. If you're Profiling your remote execution of the query, the Duration will reflect the time statistics you see in SSMS. HOWEVER, if you're using Profiler and finding that the duration of this query, when executed from one of the web servers that sits in the same data center as the SQL Server, is still 7 minutes, then the DBA is a big, fat liar :). I would use Profiler to record queries that take longer than 1 minute, filter for your view, and take the average to see if you're on target for performance.
Because there are no other answers posted, I'm concerned that I'm way off base here - but it's late and I'm new to this so I thought I'd give it a go!
I was struggling with that until I found this...
http://blog.sqlauthority.com/2009/10/01/sql-server-sql-server-management-studio-and-client-statistics/
Also, if you open the Property tab for your query you may find some magical "Elapsed Time" that may give you some execution time...
Hope it helps...
Try with this:
DECLARE @time AS DATETIME = CURRENT_TIMESTAMP
-- Your Query
SELECT CAST(DATEDIFF(SECOND, @time, CURRENT_TIMESTAMP) AS VARCHAR)
     + ','
     + CAST(DATEDIFF(MICROSECOND, @time, CURRENT_TIMESTAMP) AS VARCHAR)
     AS 'Execution Time'
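One caveat on the snippet above: DATETIME is only accurate to about 3 ms, so the MICROSECOND figure is mostly noise. On SQL Server 2008 you can use DATETIME2 with SYSDATETIME() instead for genuine sub-millisecond resolution:
DECLARE @t AS DATETIME2 = SYSDATETIME()
-- Your Query
SELECT DATEDIFF(MICROSECOND, @t, SYSDATETIME()) AS 'Execution Time (microseconds)'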