Long query retrieval time for a simple query in Laravel - MySQL

I wrote a query in Laravel which is:
$policy = DB::table('policies')
    ->join('customers', 'policies.customer_id', '=', 'customers.id')
    ->join('cities', 'customers.city_id', '=', 'cities.id')
    ->join('policy_motors', 'policies.id', '=', 'policy_motors.policy_id')
    ->join('vehicle_makes', 'policy_motors.vehicle_make', '=', 'vehicle_makes.id')
    ->join('vehicle_models', 'policy_motors.vehicle_model', '=', 'vehicle_models.id')
    ->select('policies.policy_number', 'policies.insurance_premium', 'policies.commission',
        'policies.effective_start_date', 'policies.effective_end_date',
        'customers.name_en', 'customers.address1', 'customers.address2',
        'cities.name_en', 'policy_motors.policy_type',
        'vehicle_makes.name_en', 'vehicle_models.name_en')
    ->where('policies.policy_number', '=', 'DB202017036583')
    ->first();
This query worked perfectly on my Mac. However, when my colleague ran the same query on his Windows machine, it was taking forever. So he wrote one himself, that is:
$policy = Policy::with('customer', 'motor', 'user')
    ->where('policy_number', 'RK202117017053')
    ->first();
His query worked perfectly on both his Windows machine and my Mac.
Questions:
1. Although my query selects only the required columns, it takes forever. His query, which pulls all the columns of the joined tables, executes faster. Why is that happening?
2. What difference does it make to run a query on different machines? Should the time difference really be that significant?

Although my query selects only the required columns, it takes forever. His query, which pulls all the columns of the joined tables, executes faster. Why is that happening?
Even though your query selects only a few columns, it performs a chain of joins; if the joined columns don't have proper indexes, each join degenerates into a scan and the query takes a very long time to execute.
His query is faster because of the way Laravel does eager loading. Eager loading does not fold everything into one query with joins; instead, Laravel runs a separate, simple query per relationship and then stitches the results together in memory using collections. In short, your query does all the work inside one large joined query, while your colleague's runs several small queries and merges the results in PHP.
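Concretely, the eager-loaded version executes something roughly like the SQL below. This is a sketch: the exact statements Laravel generates depend on how the customer, motor and user relationships are defined, and the users table name is an assumption based on the 'user' relationship.
-- One simple query fetches the parent row:
SELECT * FROM policies WHERE policy_number = 'RK202117017053' LIMIT 1;
-- Then one query per eager-loaded relationship, keyed on ids taken
-- from the row(s) already fetched (42/7/3 are placeholder ids):
SELECT * FROM customers WHERE id IN (42);
SELECT * FROM policy_motors WHERE policy_id IN (7);
SELECT * FROM users WHERE id IN (3);
Each statement touches a single table through a primary or foreign key, so the optimizer never has to pick a plan for a five-table join.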
What difference does it make to run a query on different machines? Should the time difference really be that significant?
There can also be a difference when the queries run against a local database. SQL queries consume RAM and CPU for searching and joining, so if your PC is low on resources for whatever reason, it will take more time than a machine in good condition. If the SQL server is in the cloud, though, there shouldn't be any difference in execution time between client machines.

The reason the second query is faster is that it uses eager loading: it loads the relationships in separate queries rather than one big join.
Take a look at this link:
Eager Loading

Related

Debugging a MySQL query permanently affects the execution time

I am trying to debug a simple but very slow-running MySQL query on a table with a JOIN to a very large table (13m rows); the large table has multiple indexes.
The join is very basic: just a join from ID on the small table to foreign_ID on the big table.
This query used to run fast, but a lot of new data has been added since then. It previously took 30 ms to run; it now takes 5 minutes.
On live, I tried repairing the large table by using an ALTER command to convert it to InnoDB, but this made no difference.
So to debug the query, I ran EXPLAIN and tried removing the joins, etc., until the query ran very quickly again.
The join types started out as ALL, eq_ref, ref and ref.
Then, as I re-enabled the joins while trying to find a way to make the query performant, I found that the ORIGINAL query now runs quickly again.
The only thing that has changed is the query execution plan.
The join types are now range, eq_ref, eq_ref and ref.
What happened? Why is MySQL now treating this same query differently to how it did before?
And how can I make my live server do this too? And how can I stop this from happening again in the future?
EDIT: MySQL version on prod and locally is 5.7
You seem to be falling foul of a query planner bug that frequently manifests on MySQL 5.7 and later. What happens is that the query planner decides on the wrong execution plan (indexes, join order), which results in the same query on the same data set sometimes running quickly (with the correct execution plan) and sometimes slowly (with the wrong execution plan, often resulting in a full table scan). I have seen this happen on every MySQL 5.7 and 8.0 deployment I have worked on. On MySQL 5.6 and earlier and on MariaDB, this sort of behaviour from the query planner can only be provoked by having an unusually large number of indexes on a table (10+). So if you have a lot of indexes on one of the tables involved, it may be worth trying to reduce their number.
Apart from keeping the number of indexes on each table as low as you reasonably can, you have two options to address this:
1) When you identify queries that encounter this bug, constrain them using index hints (USE/FORCE INDEX (index_name)) and, if necessary, STRAIGHT_JOIN to force the JOIN ordering; see the sketch after this list.
2) Switch to MariaDB which doesn't seem to suffer from this problem.
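For illustration, a pinned-down version of such a query could look like the sketch below. The table, column and index names are made up, since the original query isn't shown:
-- Fix the join order (tables are joined in FROM-clause order)
-- and force the known-good index on the big table.
SELECT STRAIGHT_JOIN s.id, b.some_column
FROM small_table AS s
JOIN big_table AS b FORCE INDEX (idx_foreign_id)
    ON b.foreign_id = s.id
WHERE s.created_at >= '2021-01-01';
FORCE INDEX makes the planner treat a table scan as prohibitively expensive, so together the two hints pin down the fast plan that EXPLAIN showed.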

I am running a large number of SELECT queries at once and it takes seconds to run. Why isn't it quicker due to MySQL caching?

I have a table with 70 rows. For learning/testing purposes I wrote out a query for each row. So I wrote:
SELECT * FROM MyTable WHERE id="id1";
SELECT * FROM MyTable WHERE id="id2";
/*etc*/
SELECT * FROM MyTable WHERE id="id70";
And ran them in Sequel Pro. All of the queries together took a total of 5 seconds. This seems like a really long time, since I had read that MySQL has a feature called the MySQL Query Cache. If the query cache is this slow, it seems pretty useless, and I might as well write my own layer of query caching between the database layer and the frontend.
Is it correct that the MySQL query cache is this slow? Or do I need to activate something or fix something to get it to work?
Per the cache documentation, it maps the text of a SELECT statement to the returned result. Since all of those statements are different, the results won't be cached until each one has been executed once. Does it take just as long the second time?
5 seconds seems slow even without the cache for a normal case, though. How big is the table? Is id the primary key? If it is not the PK, then the server is reading every row and returning only the ones that meet the criteria you asked for.
Edit - Since you're using a hosted solution, are you running the query from something on the host network, or across the internet? If it's across the internet, then the problem is almost certainly network latency rather than execution time. Especially running the queries individually, since you'll incur transit time for each select.
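As a quick check, you can see whether the query cache is enabled and being hit at all (these variables exist through MySQL 5.7; the query cache was removed in 8.0), and you can collapse the 70 round trips into one statement, which eliminates the per-query transit time:
-- Is the cache on, and is anything hitting it?
SHOW VARIABLES LIKE 'query_cache%';
SHOW STATUS LIKE 'Qcache%';
-- One round trip instead of seventy:
SELECT * FROM MyTable WHERE id IN ('id1', 'id2', /* ... */ 'id70');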
If you query just based on primary key, you might as well use the memcached interface.
https://dev.mysql.com/doc/refman/5.6/en/innodb-memcached.html

Why is it that when the same query is executed twice in MySQL, it returns two very different response times?

My question is as follows:
Why, if I run the same query twice in the MySQL shell, do I get two very different response times (i.e., the second run takes a much shorter time than the first)?
And what can I do to prevent this from happening?
Thank you very much in advance.
This is most likely down to query and/or result caching. If you run a query once, MySQL caches its result, and the relevant index and data pages for those tables stay in memory, so any subsequent identical query is vastly faster than the original.
This could be due to 1. query caching being turned on, or 2. a difference in the performance state of the system on which the query is executed.
With query caching, if you run a query once, MySQL stores the result keyed on the query text and fetches it directly when the identical query is issued again, so the execution cost disappears on repeated runs. Query caching can be turned off, but that is not a good idea.
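If the goal is consistent timings while benchmarking, you can bypass the query cache per statement or flush it entirely. A sketch with a hypothetical table name (valid on MySQL 5.6/5.7, where the query cache still exists):
-- Skip the query cache for this one statement:
SELECT SQL_NO_CACHE * FROM my_table WHERE id = 42;
-- Or discard all cached result sets:
RESET QUERY CACHE;
Note that this only removes the query-cache effect; a second run will still be somewhat faster because the InnoDB buffer pool already holds the relevant pages in memory.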

Optimizing MySQL queries/database

I have two tables, TABLE A and TABLE B.
TABLE A contains 1 million (1,000,000) records and 4 fields, while TABLE B contains 60,000 records and 3 fields.
I am running a query which joins these two tables and uses a WHERE clause to find specific products, e.g. WHERE product LIKE '%Bags%' or product LIKE 'Bags%', etc.
When I run the query directly in phpMyAdmin, it returns records in around 1 or 2 seconds. But when it runs on the website, it sometimes takes 9 or 10 seconds according to the MySQL slow query log. My website's response was in fact very slow at times, and upon investigating I found MySQL was the cause, which is how I learned about the slow query log.
The slow query log consists of all SQL statements that took more than long_query_time seconds to execute and required at least min_examined_row_limit rows to be examined.
So according to that log, the "query_time" for the above query was 13 seconds, and in some cases the "query_time" even exceeded 50 seconds.
Both my tables use PRIMARY KEYs as well as INDEXES. So I want to know how I can optimize them further, or whether there is any way to optimize my MySQL settings in general.
This slowness of the website doesn't happen all the time, but sometimes (maybe once a week), and it lasts for around 1 or 2 minutes. The site gets a decent amount of traffic and there are many other queries too; the one posted above is just an example.
Thanks
For all things MySQL and performance related, check out http://www.mysqlperformanceblog.com/
Check your queries with EXPLAIN; see here and here for info on how to use EXPLAIN as a query diagnostic tool.
It's not enough to just have indexes. Are you indexing the fields searched in the WHERE clause, and also the fields used in ORDER BY, GROUP BY and HAVING clauses, as well as in JOINs? If you have grouped fields in a single composite index, that index won't be hit unless your query filters on those fields together. If you group fields in an index, make sure the index will actually be used by your query (EXPLAIN is your friend).
That said, it could be many other things as well: a poorly configured MySQL server, a poorly tuned server, a bad schema. But your queries and your indexes are a good place to start your investigation.
Here is a nice summary of performance best practices from Jay Pipes of MySQL.
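As a concrete starting point, here is what that EXPLAIN-and-index loop looks like, with assumed table and column names (products.product standing in for whatever field the WHERE clause filters on):
-- See how MySQL plans the query; type=ALL means a full table scan.
EXPLAIN SELECT p.*
FROM products AS p
JOIN categories AS c ON c.id = p.category_id
WHERE p.product LIKE 'Bags%';
-- Index the filtered column. A prefix match ('Bags%') can use this
-- index; a leading wildcard ('%Bags%') cannot.
ALTER TABLE products ADD INDEX idx_product (product);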
A LIKE '%Bags%' query cannot be optimized using ordinary indexes, because the leading wildcard prevents index use.
The only way to improve performance here is to use FULLTEXT indexes or an external search engine such as Sphinx.
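A minimal sketch of the FULLTEXT route, again with assumed names (on MySQL 5.5 and earlier FULLTEXT requires MyISAM; InnoDB supports it from 5.6):
-- Build a full-text index on the searched column:
ALTER TABLE products ADD FULLTEXT INDEX ft_product (product);
-- Word-based search that uses the index instead of scanning every row:
SELECT * FROM products WHERE MATCH(product) AGAINST('Bags');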
It may be because other queries run at the moment the page of your website is refreshed. If, for example, your website runs 8-10 queries per page refresh, it will take more time than running a single query in phpMyAdmin. And if it takes 1-1.5 minutes to execute, it may not be a query problem but a problem with the server's speed as well.
You can also use a MATCH() ... AGAINST() statement to optimize this type of search query.
Otherwise, you are already using PRIMARY KEYs, INDEXES and JOINs, so there is no need to worry about other things.
Thanks.
There are many ways to optimize Databases and queries. My method is the following.
Look at the DB Schema and see if it makes sense
Most often, databases have bad designs and are not normalized. This can greatly affect the speed of your database. As a general rule, learn the 3 normal forms and apply them at all times. Deliberately breaking some of those rules to make the database faster is called denormalization.
What I suggest is to stick to the 3rd normal form unless you are a DBA (which means you know what you're doing and why). Denormalization is usually done at a later stage, not during design.
Only query what you really need
Filter as much as possible
Your Where Clause is the most important part for optimization.
Select only the fields you need
Never use "Select *" -- Specify only the fields you need; it will be faster and will use less bandwidth.
Be careful with joins
Joins are expensive in terms of time. Make sure that you use all the keys that relate the two tables together and don't join to unused tables -- always try to join on indexed fields. The join type is important as well (INNER, OUTER,... ).
Optimize queries and stored procedures (Most Run First)
Queries are very fast. Generally, you can retrieve many records in less than a second, even with joins, sorting and calculations. As a rule of thumb, if your query takes longer than a second, you can probably optimize it.
Start with the Queries that are most often used as well as the Queries that take the most time to execute.
Add, remove or modify indexes
If your query does full table scans, indexes and proper filtering can solve what is normally a very time-consuming process. All primary keys need indexes because they make joins faster. This also means that all tables need a primary key. You can also add indexes on fields you often use for filtering in your WHERE clauses.
You especially want to use indexes on integers, booleans and other numeric fields. On the other hand, you probably don't want to index BLOBs or very long strings (or only via prefix indexes).
Be careful with adding indexes because they need to be maintained by the database. If you do many updates on that field, maintaining indexes might take more time than it saves.
In the Internet world, read-only tables are very common. When a table is read-only, you can add indexes with less negative impact because indexes don't need to be maintained (or only rarely need maintenance).
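A short sketch of the add/inspect/drop cycle, with hypothetical names:
-- Add an index on a field frequently used for filtering:
CREATE INDEX idx_orders_status ON orders (status);
-- See which indexes a table currently has:
SHOW INDEX FROM orders;
-- Drop an index that writes pay for but no query uses:
DROP INDEX idx_orders_status ON orders;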
Move Queries to Stored Procedures (SP)
Stored Procedures are usually better and faster than queries for the following reasons:
Stored Procedures are compiled (SQL Code is not), making them faster than SQL code.
SPs don't use as much bandwidth because you can do many queries in one SP; intermediate results stay on the server, and only the final results are returned.
Stored Procedures are run on the server, which is typically faster.
Calculations in code (VB, Java, C++, ...) are not as fast as SP in most cases.
It keeps your DB access code separate from your presentation layer, which makes it easier to maintain (3-tier model).
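A minimal MySQL sketch (procedure, table and column names are hypothetical):
DELIMITER //
CREATE PROCEDURE get_products_by_prefix(IN p_prefix VARCHAR(64))
BEGIN
    -- One round trip from the application; filtering stays on the server.
    SELECT id, product, price
    FROM products
    WHERE product LIKE CONCAT(p_prefix, '%');
END //
DELIMITER ;
CALL get_products_by_prefix('Bags');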
Remove unneeded Views
Views are a special type of query -- they are not tables. They are logical, not physical, so every time you run SELECT * FROM MyView, you run the query that defines the view as well as your query on the view.
If you always need the same information, views could be good.
If you have to filter the View, it's like running a query on a query -- it's slower.
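For illustration (hypothetical names):
-- The view stores the query, not the data:
CREATE VIEW active_products AS
SELECT id, product, price FROM products WHERE active = 1;
-- This runs the view's defining query plus the outer filter:
SELECT * FROM active_products WHERE product LIKE 'Bags%';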
Tune DB settings
You can tune the DB in many ways. Update statistics used by the optimizer, run optimization options, make the DB read-only, etc... That takes a broader knowledge of the DB you work with and is mostly done by the DBA.
Use Query Analysers
In many databases there is a tool for running and optimizing queries. SQL Server has a tool called the Query Analyzer, which is very useful for optimizing. You can write queries, execute them and, more importantly, see the execution plan. You use the execution plan to understand what SQL Server does with your query.

MySQL - Concurrent SELECTS - one client waits for another?

I have the following scenario:
I have a database with a particular MyISAM table of about 4 million rows. I use stored procedures (MySQL version 5.1), and one in particular to search through these rows on various criteria. This table has several indexes on it, and the queries through this stored procedure are normally very fast (<1s). Basically, I use a prepared statement and create and execute some dynamic SQL in this search SP. After executing the prepared statement, I perform "DEALLOCATE PREPARE stmt;"
Most of the queries run in under a second (I use LIMIT to get just 15 rows at any time). However, there are some rare queries which take longer to run (say 2-3s). I have optimized the searched table as far as I can.
I have developed a web application and I can run and see the results of the fast queries in under a second on my development machine.
However, if I open two browser instances and do a simultaneous search against the development machine, one with the longer-running query and the other with the faster query, the results are returned at the same time. That is, it seems as if the fast query waits for the slower query to finish before returning its results, i.e. both queries take 2-3 seconds...
Is there a reason for this? I thought that MyISAM handles SELECTs independently of one another, and this is currently not the behaviour I am experiencing...
Thanks in advance!
Tim
This is just due to you doing it from the same machine; if the searches were coming from two different machines, they would run at the same time. Would you really want one person to be able to bog down your MySQL server just by opening a bunch of browser windows and hitting refresh?
That is right. MyISAM uses table-level locking: every query locks the entire table for its duration. Concurrent SELECTs can share a read lock, but any write in between blocks subsequent reads; the justification is that this achieves "a very high read throughput". Switching to InnoDB, which uses row-level locking, will allow fully concurrent reads.
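If you want to try the storage-engine route, the conversion is a single statement; note the table is rebuilt, which can take a while on 4 million rows (the table name here is assumed):
-- Check the current engine:
SHOW TABLE STATUS LIKE 'search_table';
-- Convert to InnoDB for row-level locking and concurrent reads:
ALTER TABLE search_table ENGINE=InnoDB;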