Laravel performance difference: just get() vs select() and get() - MySQL

I have a project (Laravel 5.4) where I need to improve performance as much as I can.
So I was wondering what the performance difference is between:
$model->get()
The get method takes all columns ('created_at', 'updated_at', etc.), so the select should be faster.
$model->select('many columns to select')->get();
The select method is an additional query, so it takes more time; maybe just get is faster?
I wanted to know if select and get is better in all cases, or are there moments where just get is better?

The only difference between Model::get() and Model::select(['f1', 'f2'])->get() is the generated query:
// Model::get()
SELECT * FROM table
// Model::select(['f1', 'f2'])->get()
SELECT f1, f2 FROM table
Both run the database query ONCE and prepare the collection of model instances for you. select simply instructs Eloquent to select only the fields you need. The performance gain is almost negligible, and it can even be worse. Read about it here: Is it bad for performance to select all columns?
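To make the point concrete, here is a minimal sketch in Python with sqlite3 standing in for MySQL/Eloquent (table and column names are made up for illustration): both forms are a single query, and the only difference is how many columns come back per row.

```python
import sqlite3

# In-memory database standing in for the MySQL table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, created_at TEXT, updated_at TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ann', '2020-01-01', '2020-01-02')")

# Model::get() runs one query for every column...
all_cols = conn.execute("SELECT * FROM users").fetchall()
# ...Model::select(['id', 'name'])->get() runs one query for two columns.
some_cols = conn.execute("SELECT id, name FROM users").fetchall()

print(len(all_cols[0]))   # 4 columns per row
print(len(some_cols[0]))  # 2 columns per row
```

Either way there is exactly one round trip; select() only narrows what is transferred per row.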

Related

Which one is better (more select or more records) for performance in mysql?

Which of the items below is better in terms of performance in MySQL:
one select over more records in one table
more selects over fewer records in one table
(range of records >= 50000)
There is overhead in performing a query: sending the query to the server, parsing the query, optimizing, gathering results, sending them back, etc. So it is almost always better (faster) to do more work with fewer queries.
Please provide some concrete examples if you wish to discuss further.
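As an illustration of "more work with fewer queries", here is a small Python/sqlite3 sketch (the table is hypothetical): one query with an IN list returns the same rows as several single-row queries, but the per-query overhead of parsing, planning, and the round trip is paid once instead of once per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, f"v{i}") for i in range(1, 4)])

# One query doing more work: a single round trip, one parse, one plan.
one_query = conn.execute("SELECT id, val FROM t WHERE id IN (1, 2, 3) ORDER BY id").fetchall()

# More queries doing less work each: three round trips, three parses, three plans.
many_queries = [conn.execute("SELECT id, val FROM t WHERE id = ?", (i,)).fetchone()
                for i in range(1, 4)]

print(one_query == many_queries)  # same rows, but 3x the per-query overhead
```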

Should you always do a COUNT(*) before a SELECT * to determine if there are any rows?

In MySQL, is it generally a good idea to always do a COUNT(*) first to determine if you should do a SELECT * to actually fetch the rows, or is it better to just do the SELECT * directly and then check if it returned any rows?
Unless you lock the table(s) in question, doing a SELECT COUNT(*) first is useless. Consider:
Process 1:
SELECT COUNT(*) FROM T;
Process 2:
INSERT INTO T
Process 1:
...now doing something based on the obsolete count retrieved before...
Of course, locking a table is not a very good idea in a server environment.
It depends on whether you need the number, but in MySQL in particular there's SQL_CALC_FOUND_ROWS, IIRC. Look up the docs.
Always use SELECT [field1, field2 | *] FROM .... The SELECT COUNT(*) will just bloat your code, add additional transport and data overhead, and generally be unmaintainable.
The former is 2 queries, the latter is 1 query. Each query needs to talk to the database server. Do the math.
The answer, as in many questions of this kind, is "it depends". What you shouldn't do is perform those two queries when you do not have an index on the table. In general, performing just a COUNT is a waste of IO time, so it is only an option if it saves you more IO time than it costs in MOST cases.
In some cases some DB driver implementations may not return the count of actually selected rows for a SELECT statement that itself returns the records. The COUNT(*) issued beforehand is useful when you need to know the precise size of the resulting recordset before you select the actual data.
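The "just SELECT and check what came back" approach from the answers above can be sketched in Python/sqlite3 (hypothetical table); one query both fetches the data and tells you whether any rows exist, with no window for another process to change the count in between.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'x')")

# One query: select, then branch on what came back.
rows = conn.execute("SELECT * FROM t WHERE name = 'x'").fetchall()
if rows:
    print(f"got {len(rows)} row(s)")   # handle the data
else:
    print("no rows")                   # handle the empty case

# A COUNT(*)-first approach needs a second query, and between the two
# queries another process may insert or delete rows, making the count stale.
```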

MySQL SELECT query and performance

I was wondering if there is a performance gain between a SELECT query with a not very specific WHERE clause and another SELECT query with a more specific WHERE clause.
For instance is the query:
SELECT * FROM table1 WHERE first_name='Georges';
slower than this one:
SELECT * FROM table1 WHERE first_name='Georges' AND nickname='Gigi';
In other words, is there a time factor that is linked to the precision of the WHERE clause?
I'm not sure I'm being very clear, or even whether my question takes into account all the components involved in a database query (MySQL in my case).
My question is related to the Django framework because I would like to cache an evaluated queryset and, on a later request, take back this cached evaluated queryset, filter it further, and evaluate it again.
There is no hard and fast rule about this.
There can be either an increase or decrease in performance by adding more conditions to the WHERE clause, as it depends on, among other things, the:
indexing
schema
data quantity
data cardinality
statistics
intelligence of the query engine
You need to test with your data set and determine what will perform the best.
The MySQL server must compare all columns in your WHERE clause (if they are all joined by AND).
So if you don't have an index on the nickname column, the second query will be slightly slower.
Here you can read how column indexes work (with examples similar to your question): http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html
I think it is difficult to answer this question; too many aspects (e.g. indexes) are involved. I would say that the first query is faster than the second one, but I can't say for sure.
If this is crucial for you, why don't you run a simulation (e.g. run 1,000,000 queries) and check the time?
Yes, it can be slower. It will all depend on the indexes you have and the data distribution.
Check the link Understanding the Query Execution Plan for information on how to know what MySQL is going to do when executing your query.
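A small Python/sqlite3 sketch of the thread's point (table and data are made up): the extra AND condition can only shrink the result set, but whether evaluating it is cheaper or more expensive depends on what indexes exist.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (first_name TEXT, nickname TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [("Georges", "Gigi"), ("Georges", "Jo"), ("Anna", "Gigi")])

broad = conn.execute(
    "SELECT * FROM table1 WHERE first_name = 'Georges'").fetchall()
narrow = conn.execute(
    "SELECT * FROM table1 WHERE first_name = 'Georges' AND nickname = 'Gigi'").fetchall()

# The extra AND condition can only narrow the result set...
print(set(narrow) <= set(broad))  # True

# ...but the cost of evaluating it depends on indexing. With a composite
# index, the narrower query can be answered from the index alone:
conn.execute("CREATE INDEX idx_name_nick ON table1 (first_name, nickname)")
```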

benchmarks between multiple mysql queries and one single complex query

I was wondering what's the speed difference between multiple MySQL queries and one single complex query? Is there a difference? Does anyone have any benchmarks or tips?
Example
SELECT *,
       (SELECT COUNT(DISTINCT stuff) FROM stuff WHERE stuff.id = id) AS stuff,
       (SELECT SUM(morestuff) FROM morestuff WHERE morestuff.id = id) AS morestuff,
       (SELECT COUNT(alotmorestuff) FROM alotmorestuff WHERE alotmorestuff.id = id) AS alotmorestuff
FROM inventory, blah
WHERE id = id
vs single select queries for each.
Well, your complex query actually isn't complex. It's a bunch of independent selects mashed together, which may or may not work; I see no reason there should be any noticeable difference. You may save a bit if you've got a high-latency connection to your DB.

IN Criteria performance for update operation in MySQL

I have read that creating a temporary table is best if the number of parameters passed in the IN criteria is large. This is for SELECT queries. Does this hold true for UPDATE queries as well? I have an UPDATE query which uses 3 table joins (INNER JOINs) and passes 1000 parameters in the IN criteria, and this query runs in a loop 200 or more times. What is the best approach to execute this query?
IN operations are usually slow. Passing 1000 parameters to any query sounds awful. If you can avoid that, do it. Now, I'd really have a go with the temp table. You can even play with the indexing of the table; I mean, instead of just putting values in it, play with indexes that would help you optimize your searches.
On the other hand, inserting with indexes is slower than inserting without indexes, so go for an empirical test there. Now, here is what I think is a must: bear in mind that when using the other table you don't need to use the IN clause, because you can use the EXISTS clause, which usually results in better performance. E.g.:
select * from yourTable yt
where exists (
    select * from yourTempTable ytt
    where yt.id = ytt.id
)
I don't know your query or data, but that should give you an idea of how to do it. Note the inner select * is as fast as select aSingleField, as the database engine optimizes it.
Those are all my thoughts. But remember, to be 100% sure of what is best for your problem, there is nothing like performing both tests and timing them :) Hope this helps.
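The temp-table-plus-EXISTS rewrite above can be sketched in Python/sqlite3 (table names reuse the answer's hypothetical yourTable/yourTempTable); both forms return the same rows, and the temp table gets its own index via the PRIMARY KEY.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourTable (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO yourTable VALUES (?, ?)",
                 [(i, f"v{i}") for i in range(10)])

# The long IN list (imagine 1000 literal parameters here instead of 3).
wanted = [2, 5, 7]
placeholders = ",".join("?" * len(wanted))
in_rows = conn.execute(
    f"SELECT * FROM yourTable WHERE id IN ({placeholders})", wanted).fetchall()

# The temp-table + EXISTS alternative: load the ids once, index them.
conn.execute("CREATE TEMP TABLE yourTempTable (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO yourTempTable VALUES (?)", [(i,) for i in wanted])
exists_rows = conn.execute("""
    SELECT * FROM yourTable yt
    WHERE EXISTS (SELECT 1 FROM yourTempTable ytt WHERE yt.id = ytt.id)
""").fetchall()

print(in_rows == exists_rows)  # same result set
```

The same temp table can then be joined against in an UPDATE, which avoids re-sending the 1000 parameters on each of the 200 loop iterations.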