mysql select distinct *

I'd rather post an image here, but it says that I don't have enough reputation to do it.
I tried to find something similar to my question, but there are too many "distinct" and "group by" results to find anything useful... searching for "distinct *" gives the same results as "distinct".
Here is a link to an xlsx file with a table of the source example data and a table with the desired result.
The source example data is a simplified result of some complex query.
The question is this:
I'd like to apply some grouping to this select result, which is what "select distinct * from table_below" gives.
But this variant is not very good for performance reasons. My original, non-simplified table has about 4000 rows and 10 columns, so "select distinct *" takes 20 seconds to give the needed result.
To be clear, I want to group by the 4th column, but within every "column_id", as shown in the attached xlsx file.
Thanks in advance

It is better to write out the column names. Could you try this?
SELECT DISTINCT coming_id, spare_id, spare_sum, product_id
FROM Table
ORDER BY coming_id, spare_id, product_id
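If the deduplication itself is what is slow, a composite index over the selected columns can also help, since MySQL may then resolve the DISTINCT (or an equivalent GROUP BY) from the index alone instead of scanning and sorting the whole table. A minimal sketch, assuming the rows live in a real table called table_below as in the question, using the column names from the answer above, and with idx_dedup as a purely illustrative index name:
-- illustrative composite index over the columns being deduplicated
ALTER TABLE table_below
ADD INDEX idx_dedup (coming_id, spare_id, spare_sum, product_id);
-- equivalent grouped form of the DISTINCT query
SELECT coming_id, spare_id, spare_sum, product_id
FROM table_below
GROUP BY coming_id, spare_id, spare_sum, product_id
ORDER BY coming_id, spare_id, product_id;
Both forms return the same rows; the index is what should make either of them cheaper.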

Related

What is the fastest way to count the total number of rows in MySQL

I need to know the fastest way to count the view rows for each product. I tried to run the query by joining two tables, 'product_db' and 'product_views', but it took about a minute to complete.
Here is my code:
select *,
count(product_views.vwr_id) as product_viewer
from product_db
inner join product_views on product_db.id=product_views.vwr_cid
where product_id='$pid' order by id desc
Where '$pid' is a product id.
This is my product_views table.
I need to include a viewer-count column in my result, but it takes a very long time to load. I also tried counting in a separate query, but had no luck. Could you guys suggest a more brilliant way?
Regards,
It sounds like your query is slow, not the counting. Two things you could try:
Make sure the product_id field has an index on it.
If product_id is a numeric field, remove the single quotes around the value. In other words, change where product_id='$pid' to where product_id=$pid. MySQL could be converting the product_id field to a string for the comparison and ignoring the index even if it does exist.
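A minimal sketch of both suggestions, with the caveat that the index names here are purely illustrative and that it is a guess that product_id is a numeric column on product_db:
-- index the join column and the filter column
ALTER TABLE product_views ADD INDEX idx_vwr_cid (vwr_cid);
ALTER TABLE product_db ADD INDEX idx_product_id (product_id);
-- numeric comparison without quotes, plus an explicit GROUP BY for the aggregate
SELECT product_db.id, COUNT(product_views.vwr_id) AS product_viewer
FROM product_db
INNER JOIN product_views ON product_db.id = product_views.vwr_cid
WHERE product_db.product_id = $pid
GROUP BY product_db.id
ORDER BY product_db.id DESC;
The GROUP BY is not in the original query, but it is needed once you mix COUNT() with non-aggregated columns.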

MySql Explain ignoring the unique index in a particular query

I started looking into indexes in depth for the first time and began by analyzing our db's users table. I searched SO for a similar question but wasn't able to frame my search well, I guess.
I was going through a particular concept, and this first observation left me wondering about the difference between these EXPLAINs [the difference: the first query uses 'a%' while the second uses 'ab%']
[Total number of rows in users table = 9193]:
1) explain select * from users where email_address like 'a%';
(Actual matching rows = 1240)
2) explain select * from users where email_address like 'ab%';
(Actual matching rows = 109)
The index looks like this:
My question:
Why is the index totally ignored in the first query? Does MySQL think it is a better idea not to use the index in case 1? If so, why?
If the estimated number of matching rows, based on the statistics MySQL collects about the distribution of the values, is above a certain ratio of the total rows (typically 1/11 of the total), MySQL deems it more efficient to simply scan the whole table, reading the disk pages sequentially, rather than use the index and jump around the disk pages in random order.
You could try your luck with this query, which may use the index:
where email_address between 'a' and 'az'
Although doing the full scan may actually be faster.
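For what it's worth, BETWEEN 'a' AND 'az' would miss addresses that sort after 'az' (for example one starting with 'azz'); an open-ended range covers everything that starts with 'a'. A sketch of that variant, using the table and column names from the question:
-- every string starting with 'a' is >= 'a' and < 'b'
explain select * from users
where email_address >= 'a' and email_address < 'b';
Whether MySQL actually picks the index for this still depends on the same selectivity estimate described above.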
This is not a direct answer to your question, but I still want to point it out (in case you don't already know):
Try:
explain select email_address from users where email_address like 'a%';
explain select email_address from users where email_address like 'ab%';
MySQL would now use indexes in both the queries above since the columns of interest are directly available from the index.
Probably in the case where you do a "select *", index access is more costly, since the optimizer has to go through the index records, find the row ids, and then go back to the table to retrieve the other column values.
But in the query above, where you only do a "select email_address", the optimizer knows all the information desired is available right from the index, and hence it would use the index irrespective of the 30% rule.
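One way to check this is to compare the Extra column of the EXPLAIN output: when the query can be satisfied entirely from the index, MySQL reports "Using index" there. A sketch, assuming the unique index from the question is on email_address:
-- Extra should show "Using index" if the covering index is used
explain select email_address from users where email_address like 'a%';
-- no "Using index" here, because the other columns must be fetched from the table
explain select * from users where email_address like 'a%';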
Experts, please correct me if I am wrong.

Getting count from complicated mysql statement

I'm trying to get the count of unique questions from the following MySQL statement, but every time I try to add count(q.id) as questionCount, the statement only returns one result. I'm obviously doing something wrong, but I can't figure out what it is.
http://www.sqlfiddle.com/#!2/34906/58
Hope somebody can help.
Steve
Just edit the 2nd line of your query to this:
select
count(distinct FinalQA.QUESTION_ID) from.....
It appears you want the total questions "stamped" on every row... for example you are auto-generating a test and want it to show "Out of 5 questions" in the output. To simplify this, since you KNOW you want 5 questions via your WHERE clause, I would slightly adjust it to...
select
FinalQA.*
from
( select
5 as TotalQuestionsOffered,
QWithAllAnswers.*,
... rest of query ) FinalQA
where
FinalQA.ARankSeq <= FinalQA.TotalQuestionsOffered
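If you would rather compute the total than hard-code the 5, one option is to attach a one-row aggregate with a cross join. A minimal sketch on a hypothetical questions table with an id column, since the real table names are in the fiddle:
select q.*, t.TotalQuestionsOffered
from questions q
cross join
( select count(distinct id) as TotalQuestionsOffered
from questions ) t
The subquery runs once, and its single row is stamped onto every row of the outer select.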

MYSQL - create single record out of similar rows, choose values of greatest length for most columns

Here is my case, I have a database table with below fields:
name
place_code
email
phone
address
details
estd
others
and example data
If you look at the above example table, the first three records are about xyz and place code 1020.
I want to create a single record for these three records based on
substring(name,1,4)
place_code
(I am lucky here: all the similar records satisfy this condition, and it is unique in the table.)
For the other columns, take the value from whichever record has the greatest column length. For example, for the above 3 records, email should be test#te.com, phone should be 657890, and details should be "testdetails".
This should be done for the whole table. (Some groups have a single record and some have at most 10 records.)
Any help with a query that gets me the desired result?
Answer
Someone posted the answer below and then deleted it, but it looks like a good solution:
SELECT max(name),
place_code,
max(email),
max(phone),
max(address),
max(details),
max(estd),
max(others)
FROM table_x
GROUP BY substring(name,1,4),place_code
Please let me know if you guys see any issues with it.
Thank you all,
Kiran
You need the awesome GROUP_CONCAT aggregate function.
SELECT place_code,
substring(name,1,4) name,
GROUP_CONCAT(email),
GROUP_CONCAT(phone),
GROUP_CONCAT(details)
FROM table
GROUP BY place_code, substring(name,1,4)
It has options allowing you to control things like the order of items in the string and the separators. See http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat
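If you specifically want the single longest value per group rather than all values strung together, one common trick is to order the GROUP_CONCAT by length and keep only the first element with SUBSTRING_INDEX. A sketch against the table_x name used above, shown for three of the columns:
SELECT substring(name,1,4) AS name_prefix,
place_code,
SUBSTRING_INDEX(GROUP_CONCAT(email ORDER BY CHAR_LENGTH(email) DESC SEPARATOR ','), ',', 1) AS email,
SUBSTRING_INDEX(GROUP_CONCAT(phone ORDER BY CHAR_LENGTH(phone) DESC SEPARATOR ','), ',', 1) AS phone,
SUBSTRING_INDEX(GROUP_CONCAT(details ORDER BY CHAR_LENGTH(details) DESC SEPARATOR ','), ',', 1) AS details
FROM table_x
GROUP BY substring(name,1,4), place_code
This assumes the values never contain the ',' separator; if they might, pick a separator character that cannot occur in the data.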

SQL count without duplication in statement

I have an SQL query that selects a bunch of data. I would also like to get the number of records selected by the query (before I limit it). All the examples I have seen of the count statement duplicate the select. My select statement is about 50 lines long, and I would rather not duplicate it.
Thanks
Your question would be easier to answer if you could give us an example SQL statement; however, from what you have said so far, the following should be correct:
Select Columns, Count(Distinct Value) From Table Where x=y Group By Columns
Yes.
http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_found-rows
It isn't really possible to get the number of rows that a query would return without running it or a version of it.
There's SQL_CALC_FOUND_ROWS, which lets you put a LIMIT clause on the statement and then, through a subsequent call to FOUND_ROWS(), get the total number of rows it would have found had there not been a LIMIT clause, but it's expensive.
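A minimal sketch of that pattern, with the column list, table name, and condition standing in for your 50-line select:
-- first statement: normal query with a LIMIT, plus the SQL_CALC_FOUND_ROWS modifier
SELECT SQL_CALC_FOUND_ROWS col1, col2
FROM your_table
WHERE x = y
LIMIT 10;
-- second statement: total number of rows the first one would have matched without the LIMIT
SELECT FOUND_ROWS();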
Thanks everyone, I was just trying out SQL_CALC_FOUND_ROWS, and you're damn right, Nick, it is expensive. I think I'll just create a separate query. Thanks.