I have a long table with following columns:
Id (serial), FirstName (varchar(15)), LastName (varchar(15)), StartDate (date), EndDate (date)
and the entries are like:
* 1, Amar, XoXo, 2009-07-01, 2014-05-23.
* 2, Madhujita, Mami, 2009-03-11, 2014-06-24.
* 3, Akbak, Ladar, 2000-04-12, 2009-01-01.
* 4, Abhashuk, Genjin, 2005-06-03, 2005-09-09.
* 5, Sinra, Iao, 2014-01-01, 2014-04-06
and so on till 500 members.
How can I find out which two people have spent the most time together, and for how many days?
For example, for the given data, Amar and Madhujita have spent the most time together.
Can this be done in a single query?
Thank You.
Assuming there is one row per person, this is a question of getting the overlap between two spans. The following query uses MySQL syntax to do this (MySQL supports least(), greatest() and limit). This can be done in almost any database, but the exact syntax might vary:
select lt1.firstname, lt1.lastname, lt2.firstname, lt2.lastname,
greatest(0, datediff(least(lt1.EndDate, lt2.EndDate), greatest(lt1.StartDate, lt2.StartDate))) as overlap
from LongTable lt1 cross join
LongTable lt2
where lt1.id <> lt2.id
order by overlap desc
limit 1;
Here is a SQL Fiddle demonstrating it.
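As a side note (not part of the original answer): with lt1.id <> lt2.id every pair shows up twice, once in each order. If that matters, a small variation of the same query keeps each pair only once:
select lt1.firstname, lt1.lastname, lt2.firstname, lt2.lastname,
       greatest(0, datediff(least(lt1.EndDate, lt2.EndDate),
                            greatest(lt1.StartDate, lt2.StartDate))) as overlap
from LongTable lt1 cross join
     LongTable lt2
where lt1.id < lt2.id   -- each unordered pair appears exactly once
order by overlap desc
limit 1;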
Thanks Gordon, I followed your trail and ended up writing a query myself, with a better understanding:
select x.firstname as firstname1, x.lastname as surname1, y.firstname as firstname2, y.lastname as surname2,
       datediff(least(x.endDate, y.endDate), greatest(x.startDate, y.startDate)) as overlap
from staff x inner join staff y
  on x.startDate <= y.endDate and y.startDate <= x.endDate
 and (x.firstname != y.firstname or x.lastname != y.lastname)
order by overlap desc limit 1;
I have two tables viz. customers and estimate_size.
customers
estimate_size
I want to get following result:
customers under $1000 23
customers between $1000-$10000 45
customers between $10000-25000 23
etc.
I am using following SQL:
SELECT `e`.`estimate_type`, COUNT(*) AS 'Total Customers'
FROM `customers` AS `c`
INNER JOIN `estimate_size` AS `e` ON `c`.`customer_estimate` = `e`.`estimate_value`
GROUP BY `e`.`estimate_value`
ORDER BY `e`.`estimate_type`
But I get wrong results:
£10,000-25,000 3071
£1000-10,000 3071
£25,000-50,000 3071
Over £50,000 3071
Under £1000 3071
What is wrong here?
What is wrong is that you can't store a WHERE clause as a string in a column and then expect the database to recognize it and somehow apply it. Your join only tells the database "this number here (customer_estimate) is equal to that string there ("customer_estimate > 50000")" - it isn't; those two values will never be equal.
What you need to do is have your estimates table look like this:
estimate_id, estimate_type, min_value, max_value
1, "under 1000", 0, 1000
2, "1000 to 10000", 1001, 10000
3, "over 10000", 10001, 9999999999
And then a query that looks like this:
SELECT
    e.estimate_type,
    COUNT(*)
FROM
    customers c
INNER JOIN
    estimate_size e
ON
    c.customer_estimate BETWEEN e.min_value AND e.max_value
GROUP BY
    e.estimate_type
If you do want to persist with things as they are now, you're probably going to have to get really involved in cutting that string up and parsing it into a min/max so you can use a query like the one above. It's not worth it (too fragile); I'd change the table to keep life simple.
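For completeness, here is a minimal sketch of what the reworked estimate_size table could look like; the column names follow the layout above, while the exact types and range boundaries are my assumptions:
CREATE TABLE estimate_size (
    estimate_id   INT PRIMARY KEY,
    estimate_type VARCHAR(50),
    min_value     DECIMAL(12,2),
    max_value     DECIMAL(12,2)
);

INSERT INTO estimate_size (estimate_id, estimate_type, min_value, max_value) VALUES
    (1, 'Under £1000',     0,     1000),
    (2, '£1000-10,000',    1001,  10000),
    (3, '£10,000-25,000',  10001, 25000),
    (4, '£25,000-50,000',  25001, 50000),
    (5, 'Over £50,000',    50001, 9999999999);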
I have a table 33_PROBLEM with columns ROOT and ROOT_3 (the root cubed). Then I have a simple procedure that inserts data; let's begin with ROOT from -10000 to 10000, which means ROOT_3 from -10000^3 to 10000^3.
Question is simple:
How can I get all triplet combinations of ROOT_3 values that add up to a given number?
Said a different way:
I want to find A, B, C such that A^3 + B^3 + C^3 = number_given.
Here is an example for the target number 33:
SELECT T1.r1,
T2.r2,
T3.r3
FROM (SELECT root_3 AS R1
FROM `33_problem`) AS T1,
(SELECT root_3 AS R2
FROM `33_problem`) AS T2,
(SELECT root_3 AS R3
FROM `33_problem`) AS T3
WHERE T1.r1 + T2.r2 + T3.r3 = 33
It works well ... on a small number of rows. This query produces (COUNT(*))^3 rows, which for 20000 input rows comes to 8e+12 rows !! ... RIP server ...
What is the right way to solve this one?
(I got the idea from https://www.youtube.com/watch?v=wymmCdLdPvM and I hope that, once someone comes up with some answers, I will understand better how SQL works and how queries and databases should be designed to perform well even on big data.)
1) You could try to only select ordered triplets, such that R1 <= R2 <= R3.
2) If you have duplicates, select distinct:
-- the ordering conditions have to live in the outer WHERE:
-- a derived table cannot reference columns of another derived table
SELECT T1.R1
      ,T2.R2
      ,T3.R3
FROM (
      SELECT DISTINCT ROOT_3 AS R1
      FROM `33_PROBLEM`
     ) AS T1
    ,(
      SELECT DISTINCT ROOT_3 AS R2
      FROM `33_PROBLEM`
     ) AS T2
    ,(
      SELECT DISTINCT ROOT_3 AS R3
      FROM `33_PROBLEM`
     ) AS T3
WHERE T2.R2 >= T1.R1
  AND T3.R3 >= T2.R2
  AND T1.R1 + T2.R2 + T3.R3 = 33
I tried looking from -10000 to 10000 and there isn't a solution. Then I watched the YouTube video, and they say they have already tried up to 10^14 and still found no solution.
I did it with Python code, though, when I tried -10000 to 10000, and optimized the search for the C value: first I take the sum of A cubed and B cubed, subtract that from 33, and take the cube root of the result to try to find C in one hit. This optimizes it a little, because then you don't have to loop through all possible values of C.
Since there is no solution up to 10^14, I don't think I can find one: just searching -10000 to 10000 took my computer over 2 hours. If I looked up to 10^14 it would take millions of years or something crazy.
You could work with a single table of integers, then do a "self join" using a "cross join".
For R3, you only need to check -ROUND(POW(t1.root_3 + t2.root_3 - 33, 1/3)). This should significantly speed things up. Also, to make this work, be sure the argument to POW() is not negative (MySQL cannot raise a negative number to a fractional power).
SELECT t1.root_3 AS r1,
       t2.root_3 AS r2,
       -ROUND(POW(t1.root_3 + t2.root_3 - 33, 1/3)) AS r3
FROM `33_PROBLEM` AS t1
JOIN `33_PROBLEM` AS t2
WHERE t1.root_3 > 0
  AND t2.root_3 >= 33 - t1.root_3   -- keeps the POW() argument non-negative
  AND POW(ROUND(POW(t1.root_3 + t2.root_3 - 33, 1/3)), 3) = t1.root_3 + t2.root_3 - 33
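An alternative sketch, not from the answer above and assuming ROOT_3 is indexed: skip the cube roots entirely and let a third self-join look up the missing cube directly, which keeps the work at roughly n^2 index lookups instead of n^3 combinations:
SELECT t1.root AS a, t2.root AS b, t3.root AS c
FROM `33_PROBLEM` AS t1
JOIN `33_PROBLEM` AS t2 ON t2.root_3 >= t1.root_3
JOIN `33_PROBLEM` AS t3 ON t3.root_3 = 33 - t1.root_3 - t2.root_3   -- indexed equality lookup
WHERE t3.root_3 >= t2.root_3;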
I just want to know how to fetch the rows that come after the first 10 rows matching on the flash column.
I want to fetch all rows with category = 'today'. Is it possible?
For Example
SELECT * FROM news WHERE category='today' AND flash='true' limit 60
Is this what you're looking for (it's an official solution shown in the MySQL SELECT Syntax)?
SELECT * FROM news
WHERE flash='true'
LIMIT 10, 18446744073709551615;
Update:
After reading your comments, maybe this is what you're looking for:
SELECT *
FROM (SELECT *
FROM `news`
WHERE (`flash` = 'true')
LIMIT 10, 18446744073709551615) `after_flash`
WHERE `after_flash`.`category` = 'today';
Do you mean fetch 60 records from positions 11-70?
SELECT * FROM news WHERE category='today' AND flash='true' limit 10,60
You may want to add an ORDER BY clause as well, so that the offset is applied to a deterministic ordering.
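For example (assuming the news table has an id column to order by, which is an assumption on my part):
SELECT * FROM news
WHERE category = 'today' AND flash = 'true'
ORDER BY id          -- any stable column works; without it the skipped rows are arbitrary
LIMIT 10, 60;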
I want to count from the row with the least value to the row with a specific value.
For example,
Name / Point
--------------------
Pikachu / 7
Voltorb / 1
Abra / 4
Sunflora / 3
Squirtle / 8
Snorlax / 12
I want to count to the 7, so I get the returned result of '4' (counting the rows with values 1, 3, 4, 7)
I know I should use count() or mysql_num_rows() but I can't think of the specifics.
Thanks.
I think you want this :
select count(*) from mytable where Point<=7;
Count(*) counts all rows in a set.
If you're working with MySQL, then you could ORDER BY Point:
SELECT count(*) FROM mytable WHERE Point <= 7 ORDER BY Point ASC
If you want to know all about ORDER BY, check out the w3schools page: http://www.w3schools.com/sql/sql_orderby.asp
Just in case you want a separate count for each distinct Point value instead:
SELECT count(*) FROM mytable WHERE Point <= 7 GROUP BY Point
This may help you count the rows that fall within a range of values:
select count(*) from mytable where Point >= least_value and Point <= max_value
Let's say I have a list of values, like this:
id value
----------
A 53
B 23
C 12
D 72
E 21
F 16
..
I need the top 10 percent of this list - I tried:
SELECT id, value
FROM list
ORDER BY value DESC
LIMIT COUNT(*) / 10
But this doesn't work. The problem is that I don't know the number of records before I run the query. Any ideas?
Best answer I found:
SELECT *
FROM (
    SELECT list.*, @counter := @counter + 1 AS counter
    FROM (SELECT @counter := 0) AS initvar, list
    ORDER BY value DESC
) AS X
WHERE counter <= (10/100 * @counter)
ORDER BY value DESC;
Change the 10 to get a different percentage.
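If you're on MySQL 8.0 or later, a window function avoids the user-variable trick entirely; a sketch of the same idea (not from the original answer):
SELECT id, value
FROM (
    SELECT id, value,
           NTILE(10) OVER (ORDER BY value DESC) AS decile   -- split rows into 10 equal buckets
    FROM list
) AS ranked
WHERE decile = 1;   -- bucket 1 = the top ~10 percent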
In case you are doing this in an out-of-order or random situation, I've started using the following style:
SELECT id, value FROM list HAVING RAND() > 0.9
If you need it to be random but controllable you can use a seed (example with PHP):
SELECT id, value FROM list HAVING RAND($seed) > 0.9
Lastly, if this is the sort of thing you need full control over, you can add a column that holds a random value assigned whenever a row is inserted, and then query using that:
SELECT id, value FROM list HAVING `rand_column` BETWEEN 0.8 AND 0.9
Since this does not require sorting or ORDER BY, it is O(n) rather than O(n log n).
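A minimal sketch of that setup, assuming the list table from the question (rand_column is whatever name you choose):
ALTER TABLE list ADD COLUMN rand_column DOUBLE;

-- backfill existing rows with a random value
UPDATE list SET rand_column = RAND();

-- on MySQL 8.0.13+ an expression default can assign one to new rows automatically
ALTER TABLE list MODIFY COLUMN rand_column DOUBLE DEFAULT (RAND());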
You can also try this:
SET @amount = (SELECT FLOOR(COUNT(*) / 10) FROM list);
PREPARE STMT FROM 'SELECT * FROM list ORDER BY value DESC LIMIT ?';
EXECUTE STMT USING @amount;
LIMIT not accepting a variable directly is the MySQL bug described here: http://bugs.mysql.com/bug.php?id=19795
Hope it'll help.
I realize this is VERY old, but it still pops up as the top result when you Google "SQL limit by percent", so I'll try to save you some time. This is pretty simple to do these days if you're on SQL Server (TOP ... PERCENT and NEWID() below are T-SQL, not MySQL). The following would give the OP the results they need:
SELECT TOP 10 PERCENT
id,
value
FROM list
ORDER BY value DESC
To get a quick and dirty random 10 percent of your table, the following would suffice:
SELECT TOP 10 PERCENT
id,
value
FROM list
ORDER BY NEWID()
I have an alternative which hasn't been mentioned in the other answers: if you access the database from a language with full access to the MySQL API (i.e. not the MySQL CLI), you can launch the query, ask how many rows it will return, and then break out of the fetch loop once you have read enough.
E.g. in Python:
...
maxnum = cursor.execute(query)   # MySQLdb returns the number of selected rows
for num, row in enumerate(cursor):
    if num > .1 * maxnum:        # stop once 10% of the rows have been read
        break
    do_stuff(row)
This works only with mysql_store_result(), not with mysql_use_result(), as the latter requires that you always accept all needed rows.
OTOH, the traffic for my solution might be too high - all rows have to be transferred.