Improving performance of MySQL stored function

I have a stored function I ported from SQL Server that is super slow in MySQL. Since my application needs to support both SQL Server and MySQL (via ODBC drivers), I pretty much require this function. The stored function is normally used in the WHERE clause, but it is slow even when it appears only in the SELECT clause.
Are there any tricks for improving the performance of stored functions? Or any helpful tools that point out potential problems or slow points in a stored function?
They are complex functions. So while one statement in the function is relatively quick, everything put together is slow.
Also, views are used inside these functions. I don't know how much that affects things. The views themselves seem to run in a reasonable time.
I know I am not giving much specifics, but I am more looking for a performance tool or some high level stored function performance tips.
I did see various posts online advising people not to use functions in WHERE clauses, but I am sort of stuck, since I want to go back and forth between SQL Server and MySQL with the same executable, and the functions are too complex to embed directly into the SQL in my application.
Edit, based on Gruber's answer, to clarify things with an example:
Here is an example query:
SELECT count(*) FROM inv USE INDEX(inv_invsku) WHERE invsku >= 'PBM116-01' AND WHMSLayer_IS_LOC_ACCESSIBLE( '', 'PFCSYS ', invloc ) > 0 ORDER BY invsku;
If I get rid of the call to IS_LOC_ACCESSIBLE, it is considerably faster. IS_LOC_ACCESSIBLE is just one of 3 such functions. It has a bunch of IF statements, queries of other tables and views, etc. That is why I call it "complex", because of all that extra logic and conditional paths.

You can try using a query profiler, for instance the one included in dbForge Studio, and try to figure out exactly what the bottlenecks are.
Maybe your tables aren't properly indexed?
You can sometimes achieve good improvements in MySQL (and SQL Server) by moving things around in your queries, creating identical output but changing the internal execution path. For instance, try moving the function call out of the WHERE clause: return its result as a column from an inner SELECT, and apply the WHERE condition to that column in an outer query.
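For example, the query from the question could be rewritten along these lines (a sketch only; whether it actually helps depends on how MySQL materializes the derived table):

SELECT COUNT(*)
FROM (
    -- inner query filters on the indexed column and outputs the function result
    SELECT invsku,
           WHMSLayer_IS_LOC_ACCESSIBLE('', 'PFCSYS ', invloc) AS accessible
    FROM inv
    WHERE invsku >= 'PBM116-01'
) AS t
WHERE t.accessible > 0;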

Related

How to efficiently reuse MySQL queries that are called from an API

I am making a website and have a series of sql statements that are used over and over. I'm wondering if there's any way to optimize this process (in terms of performance) using views, procedures, or something else that I don't know about. The backend works like so:
The frontend makes a request to www.api.com/{page}/{user} for the page and user that it needs data for
The backend receives the request and executes a pre-written prepared SQL statement, simply passing in the user's name (each page returns the same amount/type of data; the only difference is which user's data we need to get)
The backend converts the result into JSON and passes it to the frontend
The MySQL query ends up looking like SELECT * FROM ... WHERE user = :user for each page. Because it's essentially the same query being run over and over, is there any way to optimize this for performance using the various features of MySQL?
Views are syntactic sugar -- no performance gain.
Stored procedures are handy when you can bundle several things together. However, you can do similar stuff with an application subroutine. One difference is that the SP is all performed on the server, thereby possibly avoiding some network lag between the client and server.
Within an SP, there is PREPARE and EXECUTE. This provides only a small performance improvement.
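As a minimal sketch of PREPARE/EXECUTE inside a stored procedure (page_data is a hypothetical table name standing in for the one elided in the question):

DELIMITER //
CREATE PROCEDURE get_page_data(IN p_user VARCHAR(50))
BEGIN
    -- the statement is prepared from a string and executed
    -- with the parameter bound at run time
    SET @sql = 'SELECT * FROM page_data WHERE user = ?';
    PREPARE stmt FROM @sql;
    SET @u = p_user;
    EXECUTE stmt USING @u;
    DEALLOCATE PREPARE stmt;
END //
DELIMITER ;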
The best help (in your one example) is to have INDEX(user) on that table.
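For instance (again using the hypothetical page_data table):

ALTER TABLE page_data ADD INDEX idx_user (user);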
Will you be performing a query more than a thousand times a second? If so, we need to dig deeper into more of the moving parts of the app.
"Premature optimization" comes to mind for the simple example given.

Using MySQL server-side prepared statements to improve app performance

My situation:
MySQL 5.5, but possible to migrate to 5.7
Legacy app is executing a single MySQL query to get some data (1-10 rows, 20 columns)
Query can be modified via application configuration
Query is very complex SELECT with multiple JOINS and conditions, it's about 20KB of code
Query is well profiled, index usage fine-tuned; I spent much time on this and see no room for improvement without splitting it into smaller queries
With a traditional app I would split this large query into several smaller ones and use caching to avoid many JOINs, but my legacy app does not allow that. I can use only one query to return results
My plan to improve performance is:
Reduce parsing time. Parsing 20KB of SQL on every request, while only parameter values change, seems inefficient
I'd like to turn this query into a prepared statement and only fill the placeholders with data
Query will be parsed once and executed multiple times, which should be much faster
Problems/questions:
First of all: does the above solution make sense?
MySQL prepared statements seem to be session-related. I can't use that, since I cannot execute any additional code ("init code") to create the statements for each session
The other solution I see is to use a prepared statement generated inside a procedure or function. But the examples I saw rely on dynamically generating queries using CONCAT() and executing the prepared statement locally inside the procedure. It seems that this kind of statement will be re-prepared on every procedure call, so it will not save any processing time
Is there any way to declare a server-wide, not session-related, prepared statement in MySQL, so that it survives application restarts and server restarts?
If not, is it possible to cache prepared statements declared in functions/procedures?
I think the following will achieve your goal...
Put the monster in a Stored Routine.
Arrange to always execute that Stored Routine from the same connection. (This may involve restructuring your client and/or inserting a "web service" in the middle.)
The logic here is that Stored Routines are compiled once per connection. I don't know whether that includes caching the "prepare". Nor do I know whether you should leave the query naked, or artificially prepare & execute.
Suggest you try some timings, plus try some profiling. The latter may give you clues into what I am uncertain about.
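A sketch of that arrangement, with a trivial stand-in for the 20KB query (run_report and report_rows are hypothetical names):

DELIMITER //
CREATE PROCEDURE run_report(IN p_id INT)
BEGIN
    -- the real body would be the 20KB SELECT, with its configurable
    -- literals replaced by procedure parameters
    SELECT * FROM report_rows WHERE id = p_id;
END //
DELIMITER ;

-- the client then keeps one connection open and repeatedly calls:
CALL run_report(42);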

Impact of (server-side) Prepared Statements on a MySQL 5.5+ Query Plan

So, since cursory googling doesn't reveal anything enlightening:
How does MySQL generate a query plan for a Prepared Statement such as the server-side ones implemented in Connector/J for JDBC? Specifically, does it generate it at the time that the SQL statement is compiled and then reuse it with every execution regardless of the parameters or will it actually adjust the plan in the same manner that would be achieved with issuing each SQL query separately?
If it does happen to be "smart" about it, an explanation of how it does this would be great (e.g. variable peeking)
In almost all cases, the query plan is built when you execute the statement. In MySQL (unlike competing products), building the plan is very fast, so you don't really need to worry about whether it is cached in any way.
Also, by building the plan as needed, different values in the query can lead to different query plans, hence faster execution.
(In the extreme, I have seen one statement, with different constants, have 6 different query plans.)
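One way to observe this yourself is to compare EXPLAIN output for the same statement with different constants (using the inv table from the first question as a stand-in):

EXPLAIN SELECT * FROM inv WHERE invsku >= 'A';
-- likely a full table scan: the range matches most of the table
EXPLAIN SELECT * FROM inv WHERE invsku >= 'ZZZ';
-- likely a range scan on the invsku index: few rows match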

MySQL performance of database access in a loop

It is obvious that executing database queries in a loop has performance issues. But if the query is used as a prepared statement, does it make any difference?
Which is preferable: joining the tables together to get the results, or using a prepared statement in a loop?
Using a join would almost always be preferred over looping over a result set to get additional results.
Relational database management systems are built for combining related results, and do so very efficiently. Additionally, this will save you many round trips to the database, which can become costly if used excessively, regardless of whether you're using prepared statements or not.
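For example, instead of running one prepared statement per row in application code, a single join returns everything in one round trip (orders and order_items are hypothetical tables):

-- loop version: one round trip per order
--   SELECT * FROM order_items WHERE order_id = ?
-- join version: one round trip total
SELECT o.order_id, o.created_at, i.sku, i.qty
FROM orders AS o
JOIN order_items AS i ON i.order_id = o.order_id
WHERE o.customer_id = 42;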
The overhead of prepared statements is probably not going to be the escaping of the inputs; it's going to be the connection to the database, or reconnection, or the act of sending the finalized SQL statement. That interface between your code and the relational database is likely to be the slow point of the process more than anything else.
However, for my part, I would generally go for whatever is simplest and most maintainable from the start, and only worry about performance if the performance actually shows itself to be slow. Write the data-grabbing functionality in a separate function or method, though, so that the implementation can change if the performance proves to need optimization.
At that point you can then start optimizing your sql, and use joins or unions as alternatives to multiple prepared statements.

Functions within Stored Procedure - SQL 2008

I have a SQL query which returns 30,000+ records, with 15 columns. I am passing an NVARCHAR(50) parameter to the stored procedure.
At the moment I am using a stored procedure to get the data from the database.
As there are 30,000+ records to be fetched and it's taking time, what would your suggestions be?
Do I get any performance benefit if I use functions within the stored procedure (to get individual columns based on the parameter I am passing)?
Please let me know, if you need more info on the same.
Thank you
I wouldn't use functions unless there is no other way to get your data.
Since SQL Server 2005 you have extra functionality in stored procedures, such as WITH (common table expressions) and CROSS APPLY, which eases certain restrictions from previous versions of SQL Server that could previously only be worked around with UDFs.
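For instance, a per-row computation that might otherwise be hidden in a scalar UDF can be expressed with CROSS APPLY (Orders and OrderItems are hypothetical tables):

SELECT o.OrderId, t.TotalQty
FROM Orders AS o
CROSS APPLY (
    -- correlated subquery evaluated once per outer row
    SELECT SUM(i.Qty) AS TotalQty
    FROM OrderItems AS i
    WHERE i.OrderId = o.OrderId
) AS t;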
In terms of performance, the stored procedure will generally be quicker, but it depends on how optimized your query is and/or how the tables have been designed. Maybe you could give us an example of what you are trying to achieve.
Functions would probably not be the way to go. 30,000 rows isn't that many, depending on how complex the query is. You would be better off focusing on optimising the SQL in the proc, or on checking that your indexing is set up correctly.