I am trying to calculate some performance metrics for a number of SQL queries. I have found the benchmarking queries for MS SQL Server and I would like the same queries for MySQL Workbench (Windows environment). The queries I am using are the following:
-- Elapsed time and CPU time
SET STATISTICS TIME ON;
-- <query to measure>
SET STATISTICS TIME OFF;
-- RAM
SELECT
    physical_memory_in_use_kb AS Phy_Memory_usedby_Sqlserver_KB,
    virtual_address_space_committed_kb / 1024 AS Total_Memory_UsedBySQLServer_MB
FROM sys.dm_os_process_memory;
--HD
EXEC sp_MSforeachtable N'EXEC sp_spaceused [?]';
Could you please help me to convert the above queries to MySQL?
I would prefer not to use the reports provided by the Performance Schema in MySQL Workbench, since I want to compare the results with those of the MS SQL Server queries above.
Thank you in advance
As Jeff pointed out, you can't get exactly those features in MySQL. Nor can you even get close. But, here are some useful things:
Turn on the slow query log with long_query_time set to 0. Then look in the slow log for a variety of information about the query.
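A minimal sketch of turning it on at runtime (assuming you have the privilege to set global variables; the same settings can go in my.cnf):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0;  -- log every statement, not just slow ones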
Use performance_schema (which needs "turning on")
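For example, something along these lines pulls recent statement timings (a sketch; consumer and table names vary a bit across versions, and TIMER_WAIT is in picoseconds):

UPDATE performance_schema.setup_consumers
SET ENABLED = 'YES'
WHERE NAME LIKE 'events_statements%';

SELECT SQL_TEXT, TIMER_WAIT / 1e12 AS seconds
FROM performance_schema.events_statements_history
ORDER BY EVENT_ID DESC
LIMIT 10;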
For simple timing, either do it in the client, or do
SELECT SYSDATE(6), ..., SYSDATE(6) FROM ...;
(Unlike NOW(), SYSDATE() is evaluated at the moment it executes rather than being fixed at statement start, so the difference between the first and last values approximates the elapsed time.)
Memory for a specific query is not available. (However, MariaDB has something like that.) Usually a query takes a relatively fixed, and small, amount of memory. Most of what it uses is shared -- various caches.
EXPLAIN SELECT ...
EXPLAIN FORMAT=JSON SELECT ...
Optimizer trace. This needs to be "turned on", and you need to fetch the results.
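Roughly like this (the trace is read back from INFORMATION_SCHEMA):

SET optimizer_trace = 'enabled=on';
SELECT ...;  -- the query you want traced
SELECT * FROM information_schema.OPTIMIZER_TRACE;
SET optimizer_trace = 'enabled=off';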
Related
In oracle sql plus, while doing performance testing, I would do
set autotrace traceonly
which would display the query plan and statistics without printing the actual results. Is there anything equivalent in mysql?
No, there's no equivalent available in MySQL, at least not in the community edition.
MySQL does not implement the kind of "instrumentation" that Oracle has in its code; so there's no equivalent to an event 10046 trace.
You can preface your SELECT statement with the EXPLAIN keyword, and that will produce output with information about the execution plan that MySQL would use to run the statement, but that's just an estimate, and not a monitoring of the actual execution.
You can also enable the slow query log on the server, to capture SQL statements that take longer than long_query_time seconds to execute, but that really only identifies the long running queries. That would give you the SQL text, along with elapsed time and a count of rows examined.
To get the query plan, just add EXPLAIN to the beginning of a SELECT query.
EXPLAIN SELECT * FROM table
It also estimates the number of rows to be read, if that is the kind of statistics you're talking about.
Can anyone explain to me why there is a dramatic difference in performance between MySQL and SQL Server for this simple select statement?
SELECT email from Users WHERE id=1
Currently the database has just one table with 3 users. The MySQL time is on average 0.0003 while SQL Server's is 0.05. Is this normal, or is the MSSQL server not configured properly?
EDIT:
Both tables have the same structure, primary key is set to id, MySQL engine type is InnoDB.
I tried the query with WITH(NOLOCK) but the result is the same.
Are the servers of comparable power? Hardware makes a difference, too. And are roughly the same number of people accessing the db at the same time? Are any other applications using the same hardware? (Databases in general should not share servers with other applications.)
Personally, I wouldn't worry about this type of difference. If you want to see which performs better, add millions of records to the database and then test queries. Databases in general all perform well with simple queries on tiny tables, even badly designed or incorrectly set-up ones. To know whether you will have a performance problem, you need to test with large amounts of data and many simultaneous users, on hardware similar to what you will have in production.
The issue with diagnosing low-cost queries is that the fixed cost may swamp the variable costs. Not that I'm an MS fanboy, but I'm more familiar with MS SQL, so I'll address that, primarily.
MS SQL probably has more overhead for optimization and query parsing, which adds a fixed cost to the query when deciding whether to use the index, looking at statistics, etc. MS SQL also logs a lot of information about the query plan when it executes, and stores a lot of data for future optimization, all of which adds overhead.
This is all helpful when the query takes a long time, but when benchmarking a single cheap query, it shows up as a slower result.
There are several factors that might affect that benchmark but the most significant is probably the way MySQL caches queries.
When you run a query, MySQL will cache the text of the query and the result. When the same query is issued again it will simply return the result from cache and not actually run the query.
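To take the cache out of the equation while benchmarking (on versions that still have the query cache; it was removed in MySQL 8.0), you can add SQL_NO_CACHE:

SELECT SQL_NO_CACHE email FROM Users WHERE id = 1;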
Another important factor is that the SQL Server metric is the total elapsed time, not just the time it takes to seek to that record or pull it from cache. In SQL Server, turning on SET STATISTICS TIME will break it down a little more, but you're still not really comparing like for like.
Finally, I'm not sure what the goal of this benchmarking is since that is an overly simplistic query. Are you comparing the platforms for a new project? What are your criteria for selection?
I have MySQL in use on a production server for a PHP webshop application.
Sometimes it works very slowly, so I am going to change the indexes on several tables.
But before that, I have to take some kind of "snapshot" of current performance (several times per day). After that, I will change the indexes and create a new "performance snapshot". Then I will make some more changes in the database and take another "performance snapshot".
How can I make that "performance snapshot"? Is it possible to use some kind of tool, or to check some logs, or...?
Any help with how to do that would be appreciated.
Thank you in advance!
If you want to buy a commercial product, there is the MySQL Query Analyzer
Otherwise, you could use the SQL Profiler which is already included with MySQL.
The SQL Profiler is built into the database server and can be dynamically enabled/disabled via the MySQL client utility. To begin profiling one or more SQL queries, simply issue the following command:
mysql> set profiling=1;
Thereafter, the server records the duration of each query you run; the sketch below shows how to display them.
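For example:

mysql> SHOW PROFILES;               -- recent statements with their total durations
mysql> SHOW PROFILE FOR QUERY 1;    -- stage-by-stage breakdown for query #1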
Slow query log and queries not using indexes
Query cache hit rate
InnoDB monitor
And of course your database's hard-disk I/O, memory usage, ... (see the sketch below)
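A rough sketch of statements you could run for each snapshot (status variable names vary by version, and the query cache is gone in MySQL 8.0):

SHOW GLOBAL STATUS LIKE 'Qcache%';        -- query cache hits vs. inserts
SHOW GLOBAL STATUS LIKE 'Innodb_data%';   -- InnoDB disk I/O counters
SHOW ENGINE INNODB STATUS\G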
Can anyone suggest a good MySQL optimization tool that helps find the bottlenecks in a long query and hence helps with optimization? I am looking for a query profiler.
Thanks...
Well, you mean query optimization? I find EXPLAIN <query> excellent at giving hints as to where the bottlenecks are, after which you can redefine your indexes & ...
UPDATE1: You could check out - MySQL optimization tools
UPDATE2: After digging through my code, I see that I used to do two things for query optimization.
Turn on Slow Query Log - MySQL can record expensive SQL queries in the slow query log. You can define your threshold in seconds using the parameter long_query_time.
mysqldumpslow command - After logging is turned on, you can analyze the log contents using the mysqldumpslow command:
mysqldumpslow /path/to/your/mysql-slow-queries.log -t 10
This will show you the top 10 performance killers. For each statement in the output you can see the number of identical calls, the execution time in seconds, the rows affected, and the statement itself.
I have found EXPLAIN SELECT very useful in MySQL because it gives information on how the SQL will be executed and the opportunity to analyze, e.g., missing indexes you should add in order to improve the response, BEFORE running the query itself and analyzing stats.
My question is: in databases like MS SQL, Firebird, and Ingres, is there a similar command available?
In Firebird we have PLAN, but it is very weak, because many times one has to run very long queries just to spot a simple mistake.
Best regards,
Mauro H. Leggieri
In Oracle:
EXPLAIN PLAN FOR SELECT …
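Note that EXPLAIN PLAN FOR only populates the plan table; to display the result you would typically follow it with:

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);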
In PostgreSQL:
EXPLAIN SELECT …
In SQL Server:
SET SHOWPLAN_XML ON
GO
SELECT …
GO
For MS SQL Server you can use
SET SHOWPLAN_TEXT ON and SET SHOWPLAN_TEXT OFF
This will prevent queries from actually being executed, but it will return the query plan.
For oracle you can use
SET AUTOTRACE ON or EXPLAIN PLAN
(I don't know about firebird or ingres)
In Oracle we have
EXPLAIN PLAN FOR <sql>
http://www.adp-gmbh.ch/ora/explainplan.html
In MS SQL Server you can get a text or XML version of the execution plan.
SET SHOWPLAN_XML ON|OFF
SET SHOWPLAN_TEXT ON|OFF
However, these are best viewed using the visual tools in SQL Server Management Studio or TOAD.
http://msdn.microsoft.com/en-us/library/ms176058.aspx
Something else that is quite handy is
SET STATISTICS IO ON|OFF
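It reports scan counts and logical/physical reads per table, and follows the same pattern:

SET STATISTICS IO ON;
SELECT ...;   -- your query
SET STATISTICS IO OFF;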
For Ingres, the following will give you the final plan chosen with estimates as to the number of rows, disk IOs and CPU cycles:
set qep
To get the plan without executing the SELECT, also add
set optimizeonly
To re-enable query execution:
set nooptimizeonly
To get the actual statistics for the executed query, to compare with the output from "set qep":
set trace point qe90
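Putting those together, a session might look roughly like this (a sketch; exact client syntax may vary). First view the plan without executing:

set optimizeonly
set qep
SELECT ...

Then re-enable execution and collect the actual statistics:

set nooptimizeonly
set trace point qe90
SELECT ...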
See http://docs.ingres.com/Ingres/9.2/SQL%20Reference%20Guide/set.htm for more information on the above.
MS SQL has a utility in Management Studio called Display Execution Plan (Estimated and Actual) when executing a query. It can also display statistics for the query (run time, number of rows, traffic, etc.).
For Ingres, see also these resources:
Example of Reading and Interpreting a Query Execution Plan (QEP) [pdf]
A brief case study that demonstrates analysis and interpretation of a QEP
Getting Ingres Qep LockTrace Using JDBC
The Query Execution Plan (QEP)