MySQL performance of database access in a loop

It is obvious that executing database queries in a loop has performance issues, but if the query is used as a prepared statement, does it make any difference?
Which is preferable: joining the tables together and getting the results in one query, or using a prepared statement in a loop?

Using a join is almost always preferable to looping over a result set to get additional results.
Relational database management systems are built for combining related results, and do so very efficiently. Additionally, this saves you many round trips to the database, which can become costly if used excessively, regardless of whether you're using prepared statements or not.

The overhead of prepared statements is probably not going to be the escaping of the inputs; it's going to be the connection to the database (or reconnection), or the act of sending the finalized SQL statement. That interface between your code and the relational database is likely to be the slowest point of the process.
However, for my part, I would generally go for whatever is simplest and most maintainable from the start, and only worry about performance if it actually shows itself to be slow. Write the data-grabbing functionality in a separate function or method, though, so that the implementation can change if performance proves to need optimization.
At that point you can start optimizing your SQL, using joins or unions as alternatives to multiple prepared statements.
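As a rough illustration of the round-trip point, here is a minimal sketch using Python's sqlite3 module as a stand-in for MySQL (the schema and names are invented for this example): the looped "N+1" version and the single JOIN produce exactly the same rows, but the JOIN does it in one query.

```python
import sqlite3

# Hypothetical schema: authors and their books (names invented for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books   (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO books   VALUES (1, 1, 'A1'), (2, 1, 'A2'), (3, 2, 'B1');
""")

# N+1 pattern: one extra query (and round trip) per author.
pairs_loop = []
for author_id, name in conn.execute(
        "SELECT id, name FROM authors ORDER BY id").fetchall():
    for (title,) in conn.execute(
            "SELECT title FROM books WHERE author_id = ? ORDER BY id",
            (author_id,)):
        pairs_loop.append((name, title))

# Single JOIN: one round trip, same result set.
pairs_join = list(conn.execute("""
    SELECT a.name, b.title
    FROM authors a JOIN books b ON b.author_id = a.id
    ORDER BY a.id, b.id
"""))

assert pairs_loop == pairs_join  # identical rows, one round trip instead of N+1
```

With a real MySQL server over a network, each iteration of the loop also pays network latency, so the gap in wall-clock time is usually far larger than this in-process demo suggests.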


DB query: parameterised values or appended values?

I would like to know which approach is better in terms of performance and security. I'm using a MySQL database and the values are of type varchar.
Should I directly concatenate the values to form a query string and execute the query, or
should I use parameterized queries?
Thanks in advance
You should care about security first and performance second. If you concatenate query parameters, your security is at risk, and the concatenation may also break your query statement; this is not a good approach.
So you should parameterize your query instead of concatenating, as a security measure; processing the query string this way takes very little time.
I think security is really the only major issue here. If you don't build your dynamic query using a prepared statement, then you leave open the possibility of SQL injection, especially if some of the inputs into the query contain unsanitized data coming from the outside world. If you do use prepared statements, you greatly minimize the chance of this happening.
Regarding performance, there would be some overhead in building and executing a statement, but I would wager that the penalty for using a statement over a raw string is fairly small. The bigger issue for your performance would probably be the query itself, and how you have tuned your schema.
So, my vote is for using the statement, under the assumption that performance is a minor factor to consider here.
Using parameterized queries is the most recommended solution. It prevents the threat of SQL injection, and it also gives you the flexibility of updating the query as and when required without exposing it.
You can use a constant query string during basic development, but where production code is concerned you should use parameterized queries, and prefer prepared statements over plain string statements. Using prepared statements in Java JDBC adds both performance and security benefits.
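To make the injection risk concrete, here is a minimal sketch using Python's sqlite3 module as a stand-in for MySQL (the table and payload are invented): the concatenated query is subverted by a crafted value, while the parameterized version treats that same value as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

malicious = "x' OR '1'='1"  # classic injection payload

# Concatenation: the payload becomes part of the SQL and matches every row.
rows_concat = list(conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'"))

# Parameterized: the payload is bound as a plain value and matches nothing.
rows_param = list(conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)))

assert len(rows_concat) == 2  # injection succeeded: every user returned
assert rows_param == []       # injection neutralized
```

The same pattern applies to MySQL drivers (PDO, JDBC, Connector/Python): placeholders keep the query text and the data in separate channels.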

Using MySQL server side prepared statement to improve app performance

My situation:
MySQL 5.5, but possible to migrate to 5.7
Legacy app is executing single MySQL query to get some data (1-10 rows, 20 columns)
Query can be modified via application configuration
Query is very complex SELECT with multiple JOINS and conditions, it's about 20KB of code
Query is well profiled, index usage fine-tuned; I spent much time on this and see no room for improvement without splitting it into smaller queries
With a traditional app I would split this large query into several smaller ones and use caching to avoid many JOINs, but my legacy app does not allow that. I can use only one query to return results
My plan to improve performance is:
Reduce parsing time. Parsing 20KB of SQL on every request, while only parameter values change, seems ineffective
I'd like to turn this query into prepared statement and only fill placeholders with data
Query will be parsed once and executed multiple times, should be much faster
Problems/questions:
First of all: does the above solution make sense?
MySQL prepared statements seem to be session-related. I can't use that, since I cannot execute any additional code ("init code") to create statements for each session
Another solution I see is to use a prepared statement generated inside a procedure or function. But the examples I saw rely on dynamically generating queries using CONCAT() and executing the prepared statement locally inside the procedure. It seems that this kind of statement will be prepared on every procedure call, so it will not save any processing time
Is there any way to declare a server-wide, not session-related, prepared statement in MySQL, so that it survives application restart and server restart?
If not, is it possible to cache prepared statements declared in functions/procedures?
I think the following will achieve your goal...
Put the monster in a Stored Routine.
Arrange to always execute that Stored Routine from the same connection. (This may involve restructuring your client and/or inserting a "web service" in the middle.)
The logic here is that Stored Routines are compiled once per connection. I don't know whether that includes caching the "prepare". Nor do I know whether you should leave the query naked, or artificially prepare & execute.
Suggest you try some timings, plus try some profiling. The latter may give you clues into what I am uncertain about.
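For reference, the session-scoped mechanism under discussion looks like this (table and column names are invented for illustration). The statement lives only as long as the connection that issued PREPARE, which is why the suggestion above centers on always executing from the same connection:

```sql
-- Session-scoped server-side prepared statement (illustrative names).
PREPARE big_query FROM
  'SELECT t1.id, t2.val
     FROM t1 JOIN t2 ON t2.t1_id = t1.id
    WHERE t1.status = ? AND t2.created > ?';

SET @status = 'active', @since = '2017-01-01';
EXECUTE big_query USING @status, @since;

-- Later, in the SAME session (the statement is gone once the connection closes):
DEALLOCATE PREPARE big_query;
```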

Impact of (server-side) Prepared Statements on a MySQL 5.5+ Query Plan

So, since cursory googling doesn't reveal anything enlightening:
How does MySQL generate a query plan for a Prepared Statement such as the server-side ones implemented in Connector/J for JDBC? Specifically, does it generate it at the time that the SQL statement is compiled and then reuse it with every execution regardless of the parameters or will it actually adjust the plan in the same manner that would be achieved with issuing each SQL query separately?
If it does happen to be "smart" about it, an explanation of how it does this would be great (e.g., bind-variable peeking)
In almost all cases, the query plan is built when you execute the statement. In MySQL (unlike competing products), building the plan is very fast, so you don't really need to worry about whether it is cached in any way.
Also, by building the plan as needed, different values in the query can lead to different query plans, and hence faster execution.
(In the extreme, I have seen one statement, with different constants, have 6 different query plans.)

improving performance of mysql stored function

I have a stored function I ported from SQL Server that is super slow in MySQL. Since my application needs to support both SQL Server and MySQL (via ODBC drivers), I sort of require this function. The stored function is normally used in the WHERE clause, but it is slow even when it appears only in the SELECT clause.
Are there any tricks for improving the performance of functions? Maybe any helpful tools that point out potential stored function problems or slow points?
They are complex functions. So while one statement in the function is relatively quick, everything put together is slow.
Also, views are used inside these functions. Don't know how much that impacts things. The views themselves seem to run in a reasonable time.
I know I am not giving much specifics, but I am more looking for a performance tool or some high level stored function performance tips.
I did see various posts online advising people not to use functions in WHERE clauses, but I am sort of stuck, since I want to go back and forth between SQL Server and MySQL with the same executable, and the functions are too complex to embed directly into the SQL in my application.
Edit based on Gruber answer, to clarify things with an example:
Here is an example query:
SELECT count(*) FROM inv USE INDEX(inv_invsku) WHERE invsku >= 'PBM116-01' AND WHMSLayer_IS_LOC_ACCESSIBLE( '', 'PFCSYS ', invloc ) > 0 ORDER BY invsku;
If I get rid of the call to IS_LOC_ACCESSIBLE, it is considerably faster. IS_LOC_ACCESSIBLE is just one of 3 such functions. It has a bunch of IF statements, queries of other tables and views, etc. That is why I call it "complex", because of all that extra logic and conditional paths.
You can try using a query profiler, for instance the one included in dbForge Studio, and try to figure out exactly what the bottlenecks are.
Maybe your tables aren't properly indexed?
You can sometimes achieve good improvements in MySQL (and SQL Server) by moving things around in your queries, creating identical output but changing the internal execution path. For instance, try removing the complex WHERE clause: produce the complex code's result as a column of a wrapped SELECT statement, and apply the WHERE to that output subsequently.
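As a sketch of that restructuring, here is a runnable stand-in using Python's sqlite3 module, with create_function playing the role of the stored function (the function body and its single argument are invented; the question's real IS_LOC_ACCESSIBLE takes three arguments and contains much more logic):

```python
import sqlite3

# Stand-in for a complex stored function, registered as a SQL function.
def is_loc_accessible(loc):
    return 1 if loc and loc.startswith("A") else 0

conn = sqlite3.connect(":memory:")
conn.create_function("IS_LOC_ACCESSIBLE", 1, is_loc_accessible)
conn.execute("CREATE TABLE inv (invsku TEXT, invloc TEXT)")
conn.executemany("INSERT INTO inv VALUES (?, ?)",
                 [("PBM116-01", "A1"), ("PBM116-02", "B1"), ("PBM116-03", "A2")])

# Original shape: function called directly in the WHERE clause.
direct = conn.execute("""
    SELECT COUNT(*) FROM inv
    WHERE invsku >= 'PBM116-01' AND IS_LOC_ACCESSIBLE(invloc) > 0
""").fetchone()[0]

# Restructured: compute the function's output as a column of an inner
# SELECT, then filter on that computed column in the outer query.
wrapped = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT invsku, IS_LOC_ACCESSIBLE(invloc) AS accessible
        FROM inv WHERE invsku >= 'PBM116-01'
    ) WHERE accessible > 0
""").fetchone()[0]

assert direct == wrapped == 2  # identical result, different execution path
```

Whether the rewrite actually helps depends on the optimizer; the point is that the two forms return identical rows, so you can profile both and keep the faster one.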

MySQL prepared statements vs simple queries performance

I made a few tests. First I tested MySQL prepared statements with $pdo->prepare() and $insert_sth->execute() for 10k inserts (with named parameters, if it matters), and it took 301s.
After that I made simple insert queries, one per row, for the same 10k inserts, and it took 303s.
So I would like to know: do prepared statements really give performance benefits? My tests didn't show it. Or do I have to optimize my prepared-statement version to make it faster?
I can give my source code if it's needed.
I prefer prepared statements for security rather than performance (I'm not sure if they're faster), for example to avoid SQL injection.
INSERTs are most likely IO-bound, since they're generally not very complex in terms of SQL - just a list of columns and data to put in them. Thus, what you use to perform the queries isn't as significant in the run time as the amount of data that you're stuffing into the database, how fast you can get the data to the DB server, and how fast the DB server can store it.
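Since the bottleneck is I/O rather than parsing, batching usually matters far more than prepared-versus-plain. A minimal sketch using Python's sqlite3 module as a stand-in for MySQL: one prepared INSERT executed for every row inside a single transaction, instead of paying a commit (and, with a networked server, a round trip) per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
rows = [(i, "val%d" % i) for i in range(10_000)]

# One prepared statement, executed for all rows inside a single
# transaction: the per-statement parse cost is tiny compared to the
# cost of committing (flushing to disk) 10k separate times.
with conn:  # opens a transaction, commits once on exit
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
assert count == 10_000
```

With MySQL specifically, multi-row INSERT ... VALUES (...), (...) statements and wrapping batches in explicit transactions are the usual levers; both reduce the I/O and round-trip costs that dominate insert workloads.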