I need to take the last 50 records from a user statistics table (the table has more than one million records). If I run the query below, I get a very slow response and the IIS7 pool's memory usage jumps to 800 MB. I thought the query would be executed on SQL Server, not in the application itself. How can I optimize this query?
user.Statistics.OrderByDescending(p => p.DateStamp).Take(50);
This query will be executed on the server. My guess is that the query is slow because you don't have an appropriate index on the DateStamp column.
I would strongly recommend getting a copy of LINQPad (if you don't already have one), executing this query in it, and looking at the T-SQL being sent to the server (which LINQPad lets you see). Then take that T-SQL and look at the query execution plan in SSMS. I would bet a table scan is being done instead of an index seek.
With the appropriate index in place, this query should execute in a second or two at most, even with 10 million rows in the table.
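For example, assuming the entity maps to a table along the lines of `UserStatistics` with `UserId` and `DateStamp` columns (names here are guesses at your schema), an index like this lets SQL Server satisfy the `ORDER BY DateStamp DESC` / top-50 with an index seek instead of a scan and sort:

```sql
-- Hypothetical table/column names; adjust to match your actual schema.
-- The descending key matches OrderByDescending(p => p.DateStamp), so the
-- newest 50 rows for a user can be read straight off the index.
CREATE INDEX IX_UserStatistics_UserId_DateStamp
    ON UserStatistics (UserId, DateStamp DESC);
```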
Related
Below is a SQL query that takes ~47 seconds to run against a SQL Server 2008 R2 database the first time.
The second time, it completes in 2-3 seconds. I don't know what the issue is; I am new to SQL Server. Please help me. Thanks in advance.
SELECT DISTINCT(SectionName) , Name, DispalyName, TypeVal
FROM ReportData
WHERE FileVersion = 1
ORDER BY SectionName;
In the above query, the ReportData table contains 2,154,514 rows, but the query returns only 241 rows, probably because of the DISTINCT keyword.
The query execution plan gets cached when you run it the first time. When you run the same query a second time, therefore, it uses that plan, which removes the overhead in creating a new plan. This reduces the time to execute the query and return results.
You can read up on this on MSDN.
Secondly, SQL Server provides a buffer cache to reduce the impact of having to read from disk every time. When a query runs, the data it reads is stored in the buffer cache. If you run the same query again, I believe it will read from the buffer cache rather than going to disk, which reduces the execution time considerably.
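If you want to confirm that caching explains the difference, on a development server you can flush both caches and re-run the query to reproduce the cold 47-second timing:

```sql
-- Development server only: flushing these caches slows down every
-- other query on the instance until the caches warm up again.
CHECKPOINT;                -- write dirty pages to disk first
DBCC DROPCLEANBUFFERS;     -- empty the buffer cache (forces cold disk reads)
DBCC FREEPROCCACHE;        -- discard cached execution plans
```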
On a side note, I think 47 seconds may be a reasonable time given that you have more than 2 million records in the table, and your query does filtering and ordering as well. You might want to index the columns that you use for filtering or grouping.
e.g. You can create an index on SectionName which you specify in your DISTINCT clause like so:
CREATE INDEX idx_SectionName on ReportData(SectionName)
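Since the query also filters on FileVersion and selects several other columns, a covering index may help more than an index on SectionName alone. A sketch, keyed to the query above (verify column types and test on your own data):

```sql
-- Filters on FileVersion, supports the ORDER BY on SectionName, and
-- INCLUDEs the remaining selected columns so no key lookups are needed.
CREATE INDEX idx_FileVersion_SectionName
    ON ReportData (FileVersion, SectionName)
    INCLUDE (Name, DispalyName, TypeVal);
```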
I have a huge database with more than 250 tables, and different types of queries are run against it. The database has grown over the years, and now I need to optimise the database and the queries. I have already applied optimisation techniques such as indexing.
My problem is: how do I log each query that runs on the database, along with its execution time? Then I can analyse which queries take how many seconds and optimise them.
I know a MySQL trigger would seem ideal for this, but I don't know how to write a trigger that covers the whole database and logs each query to a table with its execution time. I want to log all the CRUD operations that occur in the database.
How can I get it done ?
Use the MySQL slow query log. This lets you capture only the queries that are slow, instead of logging and analysing all of them. You just need to set the long_query_time parameter to something like 1 or 2 seconds.
You can also set long_query_time = 0 to see all SQL queries!
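Assuming you have privileges to change server settings, the slow query log can be enabled at runtime without a restart, for example:

```sql
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;   -- log anything slower than 1 second
-- The file path below is just an example; pick one your server can write to.
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
```

Note that a `SET GLOBAL long_query_time` only takes effect for new connections; existing sessions keep their old value. Once you have a log, a tool like mysqldumpslow can summarize it by query pattern.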
I have a MySQL (InnoDB) database that contains tables with rows count between 1 000 000 and 50 000 000.
At night there is aggregating job which counts some information and writes them to reporting tables.
The first job execution is very fast: every query takes between 100 ms and 1 s.
After that, almost every single query is very slow.
The example query is:
SELECT count(*) FROM tableA
JOIN tableB ON tableA.id = tableB.tableA_id
The execution plan for that query shows that indexes will be used for both tables.
Importantly, CPU, I/O, and memory usage are all very low.
MySQL server version: 5.5.28 with default setup (just installed on windows 7 developer computer).
It is difficult to tell from the information provided. I am assuming you have already run EXPLAIN and the like. In a previous experience of mine, a query suddenly slowed down and I realized that a certain field had suddenly been populated with a huge amount of data. Instead of count(*), maybe try count(tableA.id).
See if this helps or provide more information to debug.
Maybe, it's not really the query but the writing to reporting tables, which is slow.
I would try two things:
Measure the performance of the inserts or updates of your reporting tables
Reorder your jobs: move a slow one to the front to see whether the first job is fast because of what it does, or whether whichever job runs first is always fast
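On MySQL 5.5 you can use session profiling to see where the time goes within one job step. A sketch, with a hypothetical reporting insert standing in for your nightly job:

```sql
SET profiling = 1;   -- session-level profiling (available in MySQL 5.5)

-- Hypothetical step mirroring the aggregating job: the table name
-- report_counts is made up for illustration.
INSERT INTO report_counts (cnt)
SELECT count(*) FROM tableA
JOIN tableB ON tableA.id = tableB.tableA_id;

SHOW PROFILES;       -- lists each statement with its wall-clock duration
```

If the SELECT part is fast in isolation but the INSERT step is slow, the reporting tables (their indexes, or lock contention on them) are the place to look.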
While working with MySQL and some really "performance-greedy" queries, I noticed that such a query could take 2 or 3 minutes to compute, but if I retry it immediately after it finishes the first time, it takes only a few seconds. Does MySQL store something like "the last x queries"?
The short answer is yes: there is a query cache.
The query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again. The query cache is shared among sessions, so a result set generated by one client can be sent in response to the same query issued by another client.
from here
The execution plan for the query will be calculated and re-used. The data can be cached, so subsequent executions will be faster.
Yes, depending on how the MySQL Server is configured, it may be using the query cache. This stores the results of identical queries until a certain limit (which you can set if you control the server) has been reached. Read http://dev.mysql.com/doc/refman/5.1/en/query-cache.html to find out more about how to tune your query cache to speed up your application if it issues many identical queries.
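You can check whether the query cache is enabled on your server, and how it is being used, with:

```sql
SHOW VARIABLES LIKE 'query_cache%';  -- is it on, and how big is it?
SHOW STATUS LIKE 'Qcache%';          -- hits, inserts, and low-memory prunes
```

A growing `Qcache_hits` counter between your two runs would confirm the second result came from the cache.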
I am trying to profile two different queries that do the same thing to find which one is faster. For testing, I have put SQL_NO_CACHE into both queries to prevent the query cache from messing up the timing.
Query A is consistently 50ms.
Query B is 100ms the first time it is run and 10ms if I run it a second time shortly after.
Why is Query B faster the second time? The query cache should not be speeding up the queries. Could it be that the first run of Query B loads the data from disk into memory, so the second run works on in-memory data and is faster? Is there a way to test this? I tried to test it myself by doing a SELECT * from the table before running Query B, but it still exhibited the same behavior. Is SQL_NO_CACHE perhaps not disabling the query cache?
Query B looks something like this:
SELECT SQL_NO_CACHE foo,bar FROM table1 LEFT JOIN table2 ON table1.foo=table2.foo WHERE bar=1
Depending on the storage engine you're using, yes: the data is most probably being loaded from a data cache, not the query cache.
MyISAM provides no storage engine level caching for data, and only caches indexes. However, the operating system often serves up data from its own caches which may well be speeding up your query execution.
You can try benchmarking the query in a realistic scenario: log that specific query (along with its execution time) every time it's executed.
Depending on the size of your indexes and your table type, it may be that the indexes are not in memory the first time the query is run. MySQL pulls them into memory on that first run, causing a significant slowdown. The next time, most of what MySQL needs may already be in memory, resulting in the performance gain.
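One rough way to check this is to compare cache-miss counters before and after each run:

```sql
-- Key_reads = index blocks read from disk; Key_read_requests = total lookups.
-- If Key_reads grows during the first run but not the second, the index
-- was cold the first time.
SHOW GLOBAL STATUS LIKE 'Key_read%';                 -- MyISAM key cache
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'; -- InnoDB buffer pool
```

The same logic applies to InnoDB: `Innodb_buffer_pool_reads` counts reads that had to go to disk, versus `Innodb_buffer_pool_read_requests` for all logical reads.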
Is your app making a connection and doing the authentication handshake on the first query? If so, the second query already has an open, authenticated connection to execute on. Try running it a third time and see whether the second and third runs take about the same time.