What difference does it make which column SQL COUNT() is run on? - mysql

Firstly, this is not asking "In SQL, what's the difference between count(column) and count(*)?".
Say I have a users table with a primary key user_id and another field logged_in which describes if the user is logged in right now.
Is there a difference between running
SELECT COUNT(user_id) FROM users WHERE logged_in=1
and
SELECT COUNT(logged_in) FROM users WHERE logged_in=1
to see how many users are marked as logged in? Maybe a difference with indexes?
I'm running MySQL if there are DB-specific nuances to this.

In MySQL, the COUNT function will not count NULL expressions, so the results of your two queries may be different. As mentioned in the comments and Remus' answer, this is a general rule for SQL and part of the spec.
For example, consider this data:
user_id | logged_in
--------+----------
1       | 1
NULL    | 1
SELECT COUNT(user_id) on this table will return 1, but SELECT COUNT(logged_in) will return 2.
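A minimal sketch to reproduce this, using a hypothetical users_demo table (user_id is deliberately left nullable here, unlike a real primary key):
CREATE TABLE users_demo (user_id INT NULL, logged_in TINYINT NOT NULL);
INSERT INTO users_demo VALUES (1, 1), (NULL, 1);
SELECT COUNT(user_id) FROM users_demo WHERE logged_in = 1;   -- returns 1
SELECT COUNT(logged_in) FROM users_demo WHERE logged_in = 1; -- returns 2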
As a practical matter, the results from the example in the question ought to always be the same, as long as the table is properly constructed (a primary key column can never be NULL), but the indexes used and the query plans may differ even though the results do not. Additionally, if that's a simplified example, counting on different columns may change the results as well.
See also this question: MySQL COUNT() and nulls

For the record: the two queries return different results. As the spec says:
Returns a count of the number of non-NULL values of expr in the rows
retrieved by a SELECT statement.
You may argue that, given the logged_in=1 condition, the NULL logged_in rows are filtered out anyway, and that user_id, being the primary key of users, will not contain NULLs. While this may be true, it does not change the fundamental fact that the queries are different. You are asking the query optimizer to make all the logical deductions above; they may be obvious to you, but to the optimizer they may not be.
Now, assuming that the results are in practice always identical between the two, the answer is simple: don't run such a query in production (and I mean either of them). It's a scan, no matter how you slice it, and logged_in has too low a cardinality to matter. Keep a counter, update it at each log-in and each log-out event. It will drift over time; refresh it as often as needed (once a day, once an hour).
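A minimal sketch of such a counter, assuming a hypothetical single-row table named logged_in_counter:
CREATE TABLE logged_in_counter (cnt INT UNSIGNED NOT NULL);
INSERT INTO logged_in_counter VALUES (0);
UPDATE logged_in_counter SET cnt = cnt + 1; -- run at each log-in event
UPDATE logged_in_counter SET cnt = cnt - 1; -- run at each log-out event
UPDATE logged_in_counter SET cnt = (SELECT COUNT(*) FROM users WHERE logged_in = 1); -- periodic refresh to correct drift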
As for the question itself: SELECT COUNT(somefield) FROM sometable can use a narrow index on somefield, resulting in less IO. The recommendation is to use * because this gives the optimizer room to use any index it sees fit (this will vary from product to product, though, depending on how smart a query optimizer we are dealing with; YMMV). But as you start adding WHERE clauses, the possible alternatives (= indexes to use) quickly vanish.
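To see which plan you actually get, EXPLAIN is the usual tool; a sketch against the users table from the question:
EXPLAIN SELECT COUNT(*) FROM users WHERE logged_in = 1;
-- the key column of the output shows which index, if any, the optimizer chose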

Related

Faster counts with mysql by sampling table

I'm looking for a way I can get a count for records meeting a condition but my problem is the table is billions of records long and a basic count(*) is not possible as it times out.
I thought that maybe it would be possible to sample the table by doing something like selecting 1/4th of the records. I believe that older records will be more likely to match so I'd need a method which accounts for this (perhaps random sorting).
Is it possible or reasonable to query a certain percent of rows in mysql? And is this the smartest way to go about solving this problem?
The query I currently have which doesn't work is pretty simple:
SELECT count(*) FROM table_name WHERE deleted_at IS NOT NULL
SHOW TABLE STATUS will 'instantly' give an approximate Row count. (There is an equivalent SELECT ... FROM information_schema.tables.) However, this may be significantly far off.
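For reference, a sketch of that information_schema equivalent (the schema name mydb is an assumption, and TABLE_ROWS is only an estimate for InnoDB):
SELECT TABLE_ROWS
FROM information_schema.tables
WHERE table_schema = 'mydb' AND table_name = 'table_name';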
A count(*) that scans an index on some column (even one in the PRIMARY KEY) will be faster than scanning the whole table, because the index is smaller. But this still may not be fast enough.
There is no way to "sample". Or at least no way that is reliably better than SHOW TABLE STATUS. EXPLAIN SELECT ... with some simple query will do an estimate; again, not necessarily any better.
Please describe what kind of data you have; there may be some other tricks we can use.
See also Random. There may be a technique that will help you "sample". Be aware that all techniques are subject to various factors of how the data was generated and whether there has been "churn" on the table.
Can you periodically run the full COUNT(*) and save it somewhere? And then maintain the count after that?
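A sketch of that approach, using a hypothetical saved_counts table (refresh it off-peak, then increment the saved value from application code as rows are soft-deleted):
CREATE TABLE saved_counts (name VARCHAR(64) PRIMARY KEY, cnt BIGINT NOT NULL);
REPLACE INTO saved_counts
SELECT 'deleted_rows', COUNT(*) FROM table_name WHERE deleted_at IS NOT NULL;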
I assume you don't have this case. (Else the solution is trivial: SELECT MAX(id) would give the row count directly.)
AUTO_INCREMENT id
Never DELETEd or REPLACEd or INSERT IGNOREd or ROLLBACKd any rows
Add an index on the deleted_at column to improve execution time, and count id where it is set, as sketched below.
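A sketch of that suggestion, with a hypothetical index name (note that adding an index to a table with billions of rows is itself a slow operation):
ALTER TABLE table_name ADD INDEX idx_deleted_at (deleted_at);
SELECT COUNT(id) FROM table_name WHERE deleted_at IS NOT NULL;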

Can I use index in MySQL in this way? [duplicate]

If I have a query like:
Select EmployeeId
From Employee
Where EmployeeTypeId IN (1,2,3)
and I have an index on the EmployeeTypeId field, does SQL server still use that index?
Yeah, that's right. If your Employee table has 10,000 records, and only 5 records have EmployeeTypeId in (1,2,3), then it will most likely use the index to fetch the records. However, if it finds that 9,000 records have the EmployeeTypeId in (1,2,3), then it would most likely just do a table scan to get the corresponding EmployeeIds, as it's faster just to run through the whole table than to go to each branch of the index tree and look at the records individually.
SQL Server does a lot of stuff to try and optimize how the queries run. However, sometimes it doesn't get the right answer. If you know that SQL Server isn't using the index, by looking at the execution plan in query analyzer, you can tell the query engine to use a specific index with the following change to your query.
SELECT EmployeeId FROM Employee WITH (INDEX(Index_EmployeeTypeId)) WHERE EmployeeTypeId IN (1,2,3)
Assuming the index you have on the EmployeeTypeId field is named Index_EmployeeTypeId.
Usually it would, unless the IN clause covers too much of the table, and then it will do a table scan. Best way to find out in your specific case would be to run it in the query analyzer, and check out the execution plan.
Unless technology has improved in ways I can't imagine of late, the "IN" query shown will produce a result that's effectively the OR-ing of three result sets, one for each of the values in the "IN" list. The IN clause becomes an equality condition for each of the list and will use an index if appropriate. In the case of unique IDs and a large enough table then I'd expect the optimiser to use an index.
If the items in the list were to be non-unique however, and I guess in the example that a "TypeId" is a foreign key, then I'm more interested in the distribution. I'm wondering if the optimiser will check the stats for each value in the list? Say it checks the first value and finds it's in 20% of the rows (of a large enough table to matter). It'll probably table scan. But will the same query plan be used for the other two, even if they're unique?
It's probably moot - something like an Employee table is likely to be small enough that it will stay cached in memory and you probably wouldn't notice a difference between that and indexed retrieval anyway.
And lastly, while I'm preaching: beware a subquery in the IN clause. It's often a quick way to get something working and (for me at least) can be a good way to express the requirement, but it's almost always better restated as a join. Your optimiser may be smart enough to spot this, but then again it may not. If you don't currently performance-check against production data volumes, do so - in these days of cost-based optimisation you can't be certain of the query plan until you have a full load and representative statistics. If you can't, then be prepared for surprises in production...
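A sketch of that restatement, using a hypothetical EmployeeType lookup table (EmployeeTypeId is assumed to be its primary key, so the join introduces no duplicates):
-- subquery in the IN clause:
SELECT EmployeeId FROM Employee
WHERE EmployeeTypeId IN (SELECT EmployeeTypeId FROM EmployeeType WHERE IsActive = 1)
-- the same requirement restated as a join:
SELECT e.EmployeeId FROM Employee e
INNER JOIN EmployeeType t ON t.EmployeeTypeId = e.EmployeeTypeId
WHERE t.IsActive = 1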
So there's the potential for an "IN" clause to run a table scan, but the optimizer will try and work out the best way to deal with it?
Whether an index is used doesn't so much depend on the type of query as on the type and distribution of data in the table(s), how up-to-date your table statistics are, and the actual datatype of the column.
The other posters are correct that an index will be used over a table scan if:
The query won't access more than a certain percentage of the rows indexed (say ~10%, though this varies between DBMSs).
Alternatively, if there are a lot of rows, but relatively few unique values in the column, it also may be faster to do a table scan.
The other variable that might not be that obvious is making sure that the datatypes of the values being compared are the same. In PostgreSQL, I don't think that indexes will be used if you're filtering on a float but your column is made up of ints. There are also some operators that don't support index use (again, in PostgreSQL, the ILIKE operator is like this).
As noted though, always check the query analyser when in doubt and your DBMS's documentation is your friend.
@Mike: Thanks for the detailed analysis. There are definitely some interesting points you make there. The example I posted is somewhat trivial, but the basis of the question came from using NHibernate.
With NHibernate, you can write a clause like this:
int[] employeeIds = new int[] { 1, 5, 23463, 32523 };
NHibernateSession.CreateCriteria(typeof(Employee))
    .Add(Restrictions.InG("EmployeeId", employeeIds))
    .List<Employee>();
NHibernate then generates a query which looks like
select * from employee where employeeid in (1, 5, 23463, 32523)
So as you and others have pointed out, it looks like there are going to be times where an index will be used or a table scan will happen, but you can't really determine that until runtime.
SELECT EmployeeId FROM Employee USE INDEX (EmployeeTypeId)
This query will search using the index you have created (MySQL's USE INDEX hint takes the index name). It works for me; please give it a try.

Efficiency of query using NOT IN()?

I have a query that runs on my server:
DELETE FROM pairing WHERE id NOT IN (SELECT f.id FROM info f)
It takes two different tables, pairing and info and says to DELETE all entries from pairing whenever the id of that pairing is not in info.
I've run into an issue on the server where this is beginning to take too long to execute, and I believe it has to do with the efficiency (or lack of constraints in the SELECT statement).
However, I took a look at the MySQL slow_log and the number of compared entries is actually LOWER than it should be. From my understanding, this should be O(mn) time where m is the number of entries in pairing and n is the number of entries in info. The number of entries in pairing is 26,868 and in info is 34,976.
This should add up to 939,735,168 comparisons. But the slow_log is saying there are only 543,916,401: almost half the amount.
I was wondering if someone could please explain to me how the efficiency of this specific query works. I realize the fact that it's performing quicker than I think it should is a blessing in this case, but I still need to understand where the optimization comes from so that I can further improve upon it.
I haven't used the slow query log much (at all), but isn't it possible that the difference can just be chalked up to simple... can't think of the word. Basically, 939,735,168 is the theoretical worst-case scenario, where the query literally checks every single row, only finding the one it needs last. Realistically, with a roughly even distribution (and no use of indexing), a check of a row in pairing will on average compare against half the rows in info.
It looks like your real world performance is only 15% off (worse) than what would be expected from the "average comparisons".
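To make the arithmetic explicit: half of 939,735,168 is 469,867,584 expected comparisons, and the observed 543,916,401 / 469,867,584 ≈ 1.16, i.e. roughly 15% more comparisons than the average case predicts.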
Edit: Actually, "worse than expected" should be expected when you have rows in pairing that are not in info: those rows must be compared against every row in info before the NOT IN can be confirmed, which skews the number of comparisons upward.
...which is still not great. If you have id indexed in both tables, something like this should work a lot faster.
DELETE pairing
FROM pairing LEFT JOIN info ON pairing.id = info.id
WHERE info.id IS NULL
;
This should take advantage of an index on id to make the comparisons needed something like O(N log M).
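If the ids are not already indexed, a sketch of adding the indexes (the names are hypothetical; if id is a table's primary key it is already indexed):
ALTER TABLE pairing ADD INDEX idx_pairing_id (id);
ALTER TABLE info ADD INDEX idx_info_id (id);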

Indeterminate sort ordering - MySQL vs Postgres

I have the following query
SELECT *
FROM table_1
INNER JOIN table_2 ON table_1.orders = table_2.orders
ORDER BY table_2.purchasetime;
The result order of the above query is indeterminate, i.e. it can change between executions when the purchasetime values are equal, as per the MySQL manual itself. To overcome this we add a sort on a unique column and combine it with the regular sort ordering.
The customer does not want to see different results on different page refreshes, so we have put in the above fix specifically for MySQL, which is otherwise unnecessary and needs extra compound indexes for both ASC and DESC.
I am not sure whether the same is applicable for Postgres. So far I have not been able to reproduce the scenario. I would appreciate it if someone could answer this for Postgres or point me in the right direction.
Edit 1: The sort column is indexed. So, assuming the disk data has no ordering, might a constant ordering still be possible with Postgres in the case of an index (a btree data structure)?
No, it will not be different in PostgreSQL (or, in fact, in any other relational database that I know of).
See http://www.postgresql.org/docs/9.4/static/queries-order.html :
After a query has produced an output table (after the select list has been processed) it can optionally be sorted. If sorting is not chosen, the rows will be returned in an unspecified order. The actual order in that case will depend on the scan and join plan types and the order on disk, but it must not be relied on. A particular output ordering can only be guaranteed if the sort step is explicitly chosen.
Even if by accident you manage to find a PostgreSQL version and index that will guarantee the order in all the test you run, please don't rely on it. Any database upgrade, data change or a change in the Maya calendar or the phase of the moon can suddenly upset your sorting order. And debugging it then is a true and terrible pain in the neck.
Your concern seems to be that order by table_2.purchasetime is indeterminate when there are multiple rows with the same value.
To fix this -- in any database or really any computer language -- you need a stable sort. You can turn any sort into a stable sort by adding a unique key. So, adding a unique column (typically an id of some sort) fixes this in both MySQL and Postgres (and any other database).
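A sketch of the deterministic version of the query above, assuming table_2 has a unique id column:
SELECT *
FROM table_1
INNER JOIN table_2 ON table_1.orders = table_2.orders
ORDER BY table_2.purchasetime, table_2.id;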
I should note that instability in sorts can be a very subtle problem, one that only shows up under certain circumstances. So, you could run the same query many times and it is fine. Then you insert or delete a record (perhaps even one not chosen by the query) and the order changes.

The process order of SQL order by, group by, distinct and aggregation function?

Query like:
SELECT DISTINCT max(age), area FROM T_USER GROUP BY area ORDER BY area;
So, what is the processing order of ORDER BY, GROUP BY, DISTINCT and the aggregation function?
Maybe a different order will produce the same result but with different performance. I want to merge multiple results; I have the SQL and have parsed it, so I want to know the order that standard SQL uses.
This is bigger than just group by/aggregation/order by. You want to have a sense of how a query engine creates a result set. At a high level, that means creating an execution plan, retrieving data from the table into the query's working set, manipulating the data to match the requested result set, and then returning the result set back to the caller. For very simple queries, or queries that are well matched to the table design (or table schemas that are well-designed for the queries you'll need to run), this can mean streaming data from a table or index directly back to the caller. More often, it means thinking at a more detailed level, where you roughly follow these steps:
1. Look at the query to determine which tables will be needed.
2. Look at joins and subqueries, to determine which of those tables depend on other tables.
3. Look at the conditions on the joins and in the WHERE clause, in conjunction with indexes, to determine how much of each table will be needed and how much work it will take to extract the needed portions (i.e., how well the query matches up with your indexes or the table as stored on disk).
4. Based on the information collected in steps 1 through 3, figure out the most efficient way to retrieve the data needed for the select list, regardless of the order in which tables are included in the query and regardless of any ORDER BY clause. For this step, "most efficient" is defined as the method that keeps the working set as small as possible for as long as possible.
5. Begin to iterate over the records indicated by step 4. If there is a GROUP BY clause, each record has to be checked against the existing discovered groups before the engine can determine whether or not a new row should be generated in the working set. Often, the most efficient way to do this is for the query engine to conduct an effective ORDER BY step here, such that all the potential rows for the results are materialized into the working set, which is then ordered by the columns in the GROUP BY clause and condensed so that duplicate rows are removed. Aggregate function results for each group are updated as the records for that group are discovered.
6. Once all of the indicated records are materialized, such that the results of any aggregate functions are known, HAVING clauses can be evaluated.
7. Now, finally, the ORDER BY can be factored in as well.
8. The records remaining in the working set are returned to the caller.
And as complicated as that was, it's only the beginning. It doesn't begin to account for windowing functions, common table expressions, cross apply, pivot, and on and on. It is, however, hopefully enough to give you a sense of the kind of work the database engine needs to do.
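As a rough illustration (a conceptual sketch of the logical evaluation order, not the physical plan any engine literally executes), here is the query from the question annotated step by step:
SELECT DISTINCT    -- 5. remove duplicate result rows
       MAX(age),   -- 3. compute the aggregate within each group
       area
FROM T_USER        -- 1. identify the source table (a WHERE filter, if present, would apply here)
GROUP BY area      -- 2. form one group per area
ORDER BY area;     -- 6. sort the final result
-- (4. a HAVING clause, if present, would be evaluated after aggregation and before DISTINCT)
In this particular query the DISTINCT is actually redundant, since GROUP BY area already yields one row per area.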