I have a database that was migrated from SQL Server 2008 to 2016, but query performance on 2016 is much lower than on 2008. I think the cause may be the cardinality estimator (CE) tied to the compatibility level. I have set the compatibility level to 130, but the issue remains the same. I don't want to force specific query execution plans; any other suggestions to improve query performance?
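For context, the compatibility-level change and the cardinality-estimator knobs look like this. This is only a sketch: 'MyDb' is a placeholder, and the USE HINT form needs 2016 SP1 or later.

-- Check the current compatibility level
SELECT name, compatibility_level FROM sys.databases WHERE name = 'MyDb';

-- Keep level 130 but revert just the cardinality estimator, database-wide
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Or test a single suspect query against the legacy CE (2016 SP1 and later)
-- SELECT ... OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION'));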
I've got a SQL query that uses FETCH and OFFSET to select specific rows, but I'm concerned about the performance and cost of running these queries against tables with lots of rows. Looking at FETCH and OFFSET in SQL Server 2012, it seems to perform badly judging by http://sqlblogcasts.com/blogs/sqlandthelike/archive/2010/11/10/denali-paging-is-it-win-win.aspx, where the memory used was 44 MB for the 10,000-row table.
Questions:
1. Should I worry about the performance of this method in SQL Server 2014 as the table grows?
2. Have they done anything since to improve it?
3. Is there an alternative way of doing this that is considered better?
Hi, I know this is an old question, but if you read http://social.technet.microsoft.com/wiki/contents/articles/23811.paging-a-query-with-sql-server.aspx you will find an excellent analysis of using FETCH and OFFSET vs ROW_NUMBER, and of how it was improved in SQL Server 2012 SP1.
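For illustration, here are the two paging styles side by side. This is only a sketch against a hypothetical dbo.Products table, returning rows 51-60 ordered by the key:

-- OFFSET/FETCH (SQL Server 2012 and later)
SELECT Id, Name
FROM dbo.Products
ORDER BY Id
OFFSET 50 ROWS FETCH NEXT 10 ROWS ONLY;

-- ROW_NUMBER equivalent, the pre-2012 idiom
SELECT Id, Name
FROM (
    SELECT Id, Name, ROW_NUMBER() OVER (ORDER BY Id) AS rn
    FROM dbo.Products
) AS numbered
WHERE rn BETWEEN 51 AND 60;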
I wish to analyze poor MySQL query performance that occurred at different timestamps.
In Oracle, I used to use a SQLT (SQLTXPLAIN) report whenever I needed to analyze poor query performance before and after a major version upgrade, or a query with drastic execution variance in the same environment under similar server load. It shows why a SQL statement is not performing as expected and provides the crucial information (performance history for the SQL statement, DB parameters, state of the CBO stats, changes to histograms, index comparison including state, execution plan analysis, and so on) to actually find root causes before trying to fix the performance issue.
Sample Oracle SQLT report for reference: http://carlossierradotnet1.files.wordpress.com/2013/04/sqlt_s60032_main.pdf
Can I produce a similar report for a MySQL query that usually executes with its best execution time but occasionally performs terribly on the same server?
Update:
Capturing long-running queries and analyzing a query's execution plan doesn't resolve the problem I mentioned in the question. The query itself doesn't have any problem: it normally runs without issue (in less than a second), but rarely it performs terribly (and sometimes hangs). I am pretty sure we could pinpoint the root cause if we had the performance history for the SQL statement, the state of the CBO stats, changes to histograms, an index comparison including state, execution plan analysis, and so on for the same query at its best and worst timestamps.
I don't have a particular query here, as this can happen with any complex query running in a big batch. I'd like to describe a scenario which might be helpful in understanding the problem:
QUERY BATCH#1 - Complex Query#1; Query#2;....Query#10....Complex Query#N;
Situation#1:
All queries are perfectly tuned and run smoothly when executed one by one at a MySQL prompt.
Situation#2:
QUERY BATCH#1 ran smoothly every night on 01/05, 02/05, 04/05, and 06/05/2014, but Query#2 took an unacceptable execution time only on 03/05, and Query#10 took an unacceptable execution time on 05/05/2014. Please note that the issue happened with different queries, while resource usage (CPU, RAM, network I/O, storage I/O, etc.) and the number of DB connections were the same every day.
Is there any way to check why the optimizer took an unacceptable time for Query#2 and Query#10 only on 03/05 and 05/05 respectively? Nowadays QUERY BATCH#1 runs smoothly every day, without any changes having been applied to the database/system/application.
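There is no direct SQLT equivalent for MySQL that I know of, but assuming MySQL 5.6+ with performance_schema enabled, the statement digest tables keep a running history per normalized query that can be inspected after a bad night. A sketch (the timer columns are in picoseconds, hence the divisor):

-- Worst offenders by maximum single execution time
SELECT DIGEST_TEXT,
       COUNT_STAR,
       AVG_TIMER_WAIT / 1000000000000 AS avg_sec,
       MAX_TIMER_WAIT / 1000000000000 AS max_sec
FROM performance_schema.events_statements_summary_by_digest
ORDER BY MAX_TIMER_WAIT DESC
LIMIT 10;

-- Pair it with the slow query log to capture which statement misbehaved and when
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;

This at least records what ran slowly and when, even if it doesn't by itself explain why the optimizer changed its mind on a given night.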
I'm migrating my data warehouse from SQL Server 2005 to SQL Server 2008. There is a large performance decrease on table updates; the inserts work great.
I'm using the same SSIS package in both environments, but the updates on 2008 are still slow.
I've run a full UPDATE STATISTICS on all tables. The process uses a temp table. I've dropped all indexes (except the one needed for the update), but none of these measures helped. I also wrote an update statement that mimics what SSIS is doing, and it runs as fast as expected.
The update process uses a data flow task (there are other things in the task, like inserting into a processed table to know what data was used in the update).
This is a brand new database with nothing else running on it. Any suggestions?
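For reference, the numbers below are the kind of output STATISTICS IO/TIME produces; a sketch of the wrapping:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- the UPDATE that mimics the SSIS data flow runs here
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;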
Captured STATISTICS IO/TIME:
2005: CPU = 0, Reads = 150
2008: CPU = 1,700, Reads = 33,000
Database RAM:
2005: 40 GB total / 18 GB SQL Server
2008: 128 GB total / 110 GB SQL Server
The problem was found in the execution plan: the 2008 plan was using different tables to build the update statement. Background: since we use indexed views, which don't allow any other access to the underlying tables while they are being queried, we built smaller/leaner tables for the indexed views to use instead of our dimension tables, to keep the dimensions available to users. The optimizer was choosing those leaner tables rather than the ones we specified in the query.
When I originally captured the execution plans, I used the wrong query, one that did not exercise this behavior. That made all the difference.
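In case anyone hits the same thing: automatic indexed-view matching only happens on Enterprise Edition, and the EXPAND VIEWS query hint forces the optimizer back to the base tables named in the statement. A sketch with made-up table names:

-- Prevent the optimizer from substituting indexed views for the base tables
UPDATE f
SET    f.Amount = s.Amount
FROM   dbo.FactSales AS f
JOIN   #staging AS s ON s.Id = f.Id
OPTION (EXPAND VIEWS);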
Thanks!
Can anyone explain to me why there is a dramatic difference in performance between MySQL and SQL Server for this simple select statement?
SELECT email FROM Users WHERE id = 1
Currently the database has just one table with 3 users. MySQL takes on average 0.0003 seconds, while SQL Server takes 0.05 seconds. Is this normal, or is the SQL Server instance not configured properly?
EDIT:
Both tables have the same structure, the primary key is set to id, and the MySQL engine type is InnoDB.
I tried the query with WITH (NOLOCK), but the result is the same.
Are the servers at the same level of power? Hardware makes a difference too. And are roughly the same number of people accessing each database at the same time? Are any other applications using the same hardware? (Databases in general should not share servers with other applications.)
Personally, I wouldn't worry about this kind of difference. If you want to see which performs better, add millions of records to the database and then test your queries. Databases in general all perform well with simple queries on tiny tables, even badly designed or incorrectly configured ones. To know whether you will have a performance problem, you need to test with large amounts of data and many simultaneous users, on hardware similar to what you will have in production.
The issue with diagnosing low-cost queries is that the fixed costs may swamp the variable costs. Not that I'm an MS fanboy, but I'm more familiar with MS SQL, so I'll address that primarily.
MS SQL probably has more overhead for optimization and query parsing, which adds a fixed cost to the query: deciding whether to use an index, looking at statistics, etc. MS SQL also logs a lot about the query plan when it executes, and stores a lot of data for future optimization, which adds overhead.
This is all helpful when a query takes a long time, but when benchmarking a single quick query, it shows up as a slower result.
There are several factors that might affect that benchmark but the most significant is probably the way MySQL caches queries.
When you run a query, MySQL will cache the text of the query and the result. When the same query is issued again it will simply return the result from cache and not actually run the query.
Another important factor is that the SQL Server metric is total elapsed time, not just the time it takes to seek to that record or pull it from cache. In SQL Server, turning on SET STATISTICS TIME will break the time down a little more, but you're still not really comparing like for like.
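To get closer to like for like, bypass MySQL's cache and break down SQL Server's timing; a sketch (SQL_NO_CACHE applies to the old query cache, which was removed in MySQL 8.0):

-- MySQL: force the query to actually execute instead of hitting the cache
SELECT SQL_NO_CACHE email FROM Users WHERE id = 1;

-- SQL Server: separate parse/compile time from execution time
SET STATISTICS TIME ON;
SELECT email FROM Users WHERE id = 1;
SET STATISTICS TIME OFF;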
Finally, I'm not sure what the goal of this benchmarking is since that is an overly simplistic query. Are you comparing the platforms for a new project? What are your criteria for selection?
I have a problem with SQL Server 2008 Reporting Services: a report is sometimes too slow to render (it takes more than 30 minutes), although when I take the query and execute it in SQL Server Management Studio it doesn't take more than 25 seconds.
The query returns a large result set (about 5,000 rows), which I use to draw a pie chart in the report. I tried optimizing the query so that it returns only 4 rows, but the report was slow again.
What confuses me is that sometimes the report (with different input) is as fast as the query (about 30 seconds). I thought it might be because of a low number of users, so I tried viewing it at the same time as some colleagues, but the reports were still fast. I tried changing the configuration, but had no luck.
I've been searching for a solution to this problem for more than two months, so if anyone could help me with this I would be very thankful.
If you have access to the ReportServer SQL database, execute the following query (or similar) against the ExecutionLog view:
SELECT TimeStart, TimeEnd, TimeDataRetrieval, TimeProcessing, TimeRendering, Status, ReportID FROM ExecutionLog
This will give you a good breakdown of your report executions (across different parameters).
Pay close attention to TimeRendering, TimeProcessing, and TimeDataRetrieval: a high value in any of these columns will show you where your bottleneck is.
One problem I have experienced in the past: when you return a fairly large dataset to the report (5,000 rows is large enough for this scenario) and then use the built-in SSRS filtering, rendering is very slow, and this shows up as a very high TimeRendering value.
As much work as possible should be done at the database layer; grouping and filtering do not perform well with large amounts of data when performed in the SSRS report itself.
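As a sketch of what that means in practice (table, column, and parameter names are made up), push the filter and the grouping into the dataset query and map the variable to a report parameter, rather than using an SSRS filter on a 5,000-row dataset:

-- Dataset query: the database does the grouping and filtering
SELECT Category, SUM(Amount) AS Total
FROM dbo.SalesDetail
WHERE Region = @Region   -- mapped to a report parameter
GROUP BY Category;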