Hi, this query is driving me crazy. I need to select the maximum value produced by it; I made a temporary table and then took the maximum from that, but I would like to know if the same result can be obtained from a single query, without temporary tables.
I have 2 tables: payroll (NOMINAS), aliased N, and accrued earnings (DEVENGADOS), aliased D.
From the payroll table I use payrolls 201314 through 201320,
and from the accrued table I need cod_empleado, cod_tiponomina, cod_nomina and devengado; the employee's wages are accrued over 14-day periods.
The help I need is to get the maximum of the salary calculation done with SUM():
Thanks for the help.
SELECT MAX((SUM(d.devengado) / (COUNT(d.devengado)*14))*30) salario_30dias
FROM devengados d
JOIN nominas n
ON n.cod_nomina = d.cod_nomina
WHERE d.cod_empleado = 564
AND d.cod_tiponomina = 1
AND d.cod_nomina BETWEEN 201314 AND 201320
AND d.devengado > 0
GROUP BY YEAR(n.fecha_cierre), MONTH(n.fecha_cierre);
MAX() and SUM() are always evaluated per group, according to the GROUP BY clause, so you cannot nest one aggregate directly inside another in a single SELECT.
You can nest two SELECTs to get the maximum of the different sums:
SELECT MAX(sum) FROM
(SELECT SUM(column) AS sum FROM table GROUP BY crit1) t
But a better way would be to sort the sums and pick the first one, which achieves the same as a surrounding MIN/MAX (it does not require nesting selects):
SELECT SUM(column) AS sum FROM table GROUP BY crit1 ORDER BY sum DESC LIMIT 0,1
(for MIN you would need to sort ASC)
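Applied to the query in the question, a minimal sketch of that approach (assuming the same devengados/nominas schema and filters shown above) would drop the outer MAX() and instead sort the per-month results:
SELECT (SUM(d.devengado) / (COUNT(d.devengado) * 14)) * 30 AS salario_30dias
FROM devengados d
JOIN nominas n ON n.cod_nomina = d.cod_nomina
WHERE d.cod_empleado = 564
  AND d.cod_tiponomina = 1
  AND d.cod_nomina BETWEEN 201314 AND 201320
  AND d.devengado > 0
GROUP BY YEAR(n.fecha_cierre), MONTH(n.fecha_cierre)
ORDER BY salario_30dias DESC   -- the largest monthly value comes first
LIMIT 1;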
Related
I have a very simple query like this for my event_prizes table:
SELECT id, prize FROM event_prizes WHERE event_prizes.event_id = x;
Instead of getting the individual prize amounts, I need to show the total prize amount for every 50 rows in this query. How can I use the SUM() function to calculate the total value of every 50 rows?
Assuming you have a column with the row number (how to get one is DB-dependent),
you could try grouping by floor(your_row_num/50):
SELECT floor(your_row_num/50), sum(prize )
FROM event_prizes WHERE event_prizes.event_id = x
GROUP BY floor(your_row_num/50);
If you are on MySQL 8 you can use ROW_NUMBER() (see the sketch below); otherwise use a user variable and increment it.
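For instance, on MySQL 8+ a sketch using ROW_NUMBER() could look like this (assuming the table's id column defines the order in which you want to bucket the rows, and x is the event id placeholder from the question):
SELECT FLOOR((rn - 1) / 50) AS bucket, SUM(prize) AS total_prize
FROM (
    SELECT prize,
           ROW_NUMBER() OVER (ORDER BY id) AS rn   -- gap-free row number for this event
    FROM event_prizes
    WHERE event_id = x
) numbered
GROUP BY FLOOR((rn - 1) / 50);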
I have this SQL schema: http://sqlfiddle.com/#!9/eb34d
In particular these are the relevant columns for this question:
ut_id,ob_punti
I need to get the average of the TOP n (where n is 4) values of "ob_punti" for each user (ut_id)
This query returns the AVG of all values of ob_punti grouped by ut_id:
SELECT ut_id, SUM(ob_punti), AVG(ob_punti) as coefficiente
FROM vw_obiettivi_2015
GROUP BY ut_id ORDER BY ob_punti DESC
But I can't figure out how to get the AVG for only the TOP 4 values.
Can you please help?
This will give the SUM and AVG of the top 4. You may replace 4 with n to get the top n.
select ut_id, SUM(ob_punti), AVG(ob_punti) from (
  select @rank := if(@prev_cat = ut_id, @rank + 1, 1) as rank,
         ut_id, ob_punti,
         @prev_cat := ut_id
  from Table1, (select @rank := 0, @prev_cat := "") t
  order by ut_id, ob_punti desc
) temp
where temp.rank <= 4
group by ut_id;
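If you are on MySQL 8+, a window-function version of the same idea avoids the session variables; this is only a sketch assuming the Table1 layout used above (ut_id, ob_punti):
SELECT ut_id, SUM(ob_punti), AVG(ob_punti)
FROM (
    SELECT ut_id, ob_punti,
           ROW_NUMBER() OVER (PARTITION BY ut_id ORDER BY ob_punti DESC) AS rn
    FROM Table1
) ranked
WHERE rn <= 4        -- keep only each user's top 4 values
GROUP BY ut_id;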
This is not exactly related to the question asked; I am adding it because someone might benefit from it.
I got a HackerEarth problem asking for a MySQL query to fetch the top 10 records based on the average product quantity in stock.
SELECT productName, AVG(quantityInStock)
FROM products
GROUP BY productName
ORDER BY AVG(quantityInStock) DESC
LIMIT 10
Note: if someone can improve the above query, feel free to modify it.
I have my SQL table like this:
**CLIENTS:**
id
country
I want to echo a table with all the countries I have, with the percentage for each.
For example, if I have 2 Canadians and 1 French in my table, I want:
1 - Canada - 66%
2 - France - 33%
What I tried:
SELECT country FROM `mytable` GROUP BY `Country`;
It works, but how do I get the percentage for each?
Thanks.
You can use a subquery:
SELECT
country,
COUNT(id) * 100 / (SELECT COUNT(id) FROM `mytable`) AS `something`
FROM
`mytable`
GROUP BY
`Country`;
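With the example data from the question (2 Canadian rows and 1 French row), the result should look roughly like this:
country   something
Canada    66.6667
France    33.3333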
You don't specify a flavor of SQL, but years ago Microsoft posted their suggested solution:
select au_id
,(convert(numeric(5,2),count(title_id))
/(Select convert(numeric(5,2),count(title_id)) from titleauthor)) * 100
AS "Percentage Of Total Titles"
from titleauthor group by au_id
To calculate the percentage of total records contained within a group, divide the number of records aggregated in the group by the total number of records in the table, and then multiply the result by 100. This is exactly what the preceding query does. These points explain the query in greater detail:
1. The inner nested query returns the total number of records in the TitleAuthor table: [ Select convert(numeric(5,2),count(title_id)) from titleauthor ]
2. COUNT(title_id) in the outer GROUP BY query returns the number of titles written by a specific author.
3. The value returned in step 2 is divided by the value returned in step 1, and the result is multiplied by 100 to compute and display the percentage of the total number of titles written by each author.
4. The nested SELECT is executed once for each row returned by the outer GROUP BY query.
5. The CONVERT function is used to cast the values returned by the COUNT aggregate function to the numeric data type with a precision of 5 and a scale of 2 to achieve the required level of precision.
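On MySQL 8+ the same percent-of-total can also be computed without the nested SELECT, by taking a window sum over the grouped counts. This is just a sketch against the mytable/country columns from the question (the alias percentage is illustrative):
SELECT country,
       COUNT(id) * 100 / SUM(COUNT(id)) OVER () AS percentage   -- window total across all groups
FROM mytable
GROUP BY country;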
SELECT A.horse, A.datum,
       Sum(IIf([prev_place] = ---)) AS cum_show_ct,
       Count(prev_place) AS cum_race_ct,
       (cum_show_ct / cum_race_ct) AS cum_show_pct
INTO SHOW_PCT
FROM (SELECT PLACE_2.horse, PLACE_2.a_id, PLACE_2.place, PLACE_2.datum, b.place AS prev_place
      FROM PLACE_2 INNER JOIN PLACE_2 AS b ON PLACE_2.horse = b.horse
      WHERE (PLACE_2.datum > b.datum)
      ORDER BY PLACE_2.horse, PLACE_2.datum) AS A
GROUP BY A.horse, A.datum;
So this expression created the table in the link below.
http://postimg.org/image/ke08u94i3/
What this did was calculate a horse's show percentage in past races, up to the day of that race. This show percentage is simply the number of times a horse has showed (finished in 1st, 2nd, or 3rd place) divided by the cumulative number of races it has run in up to the day of the race (cum_show/cum_count). I also have links below to the PLACE_2 table, where most of the data for the calculation comes from.
Place_2 Table Part 1:
http://postimg.org/image/68jy808vh/
Place_2 Table Part 2(it has many columns):
http://postimg.org/image/exmvc1c0l/
Place 2 Table Part 3
http://postimg.org/image/cbaf0u7sd/
I would like to modify the above script so that, every time the finish column in place_2 = --- (i.e. the horse didn't finish the race), the script produces a cumulative non-finish percentage (cum_nonfinish/cum_count_of_races) up to the day of the race, rather than the cumulative show percentage it previously calculated from the place column.
Thank you so much,
SELECT original.*, @cumulative := @cumulative + cum_show_pct
from (
  your select here
) original, (SELECT @cumulative := 0) sess_var
The query above shows a common approach to summing up row values using a session variable.
You can wrap the @cumulative := @cumulative + cum_show_pct in any required IF condition, for example as in the sketch below.
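For instance, if the inner select exposes the prev_place column and non-finishes are marked with --- as described in the question, a sketch of such a wrapped condition (counting cumulative non-finishes instead of summing percentages) might be:
SELECT original.*,
       @cumulative := @cumulative + IF(original.prev_place = '---', 1, 0) AS cum_nonfinish_ct
FROM (
  your select here
) original, (SELECT @cumulative := 0) sess_var;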
I have this table,
person_id int(10) pk
points int(6) index
other columns not very important
I have this random function which is very fast on a table with 10M rows:
SELECT person_id
FROM persons AS r1 JOIN
(SELECT (RAND() *
(SELECT MAX(person_id)
FROM persons)) AS id)
AS r2
WHERE r1.person_id >= r2.id
ORDER BY r1.person_id ASC
LIMIT 1
This is all great but now I wish to show only people with points > 0. Example table:
PERSON_ID POINTS
1 4
2 6
3 0
4 3
When I append AND points > 0 to the WHERE clause, person_id 3 can't be selected, so a gap is created: when the random value lands on person_id 3, person_id 4 is selected instead. This gives person 4 a bigger chance of being chosen. Does anyone have suggestions on how to adjust the query so that it works with the WHERE clause and gives all rows the same chance of being selected?
Info about the table: the person_id's are contiguous, with no gaps. About 90% of the rows will have 0 points. I want to make the query work for both points = 0 and points > 0.
Before someone says "just use ORDER BY RAND()": that is not a solution for tables with more than a few hundred thousand rows.
Bonus question: would it be possible to select x random rows in one query, so I don't have to call this query several times when I want more random rows?
Important note: performance is key. With 10M+ rows the query may not take much longer than the current query, which takes 0.0005 seconds; I prefer to stay under 0.05 seconds.
Last note: if you think the query can never be this fast with the above requirements, but another solution is possible (like fetching 100 rows and showing x random ones that have more than 0 points), please say so :)
I really appreciate your help; all help is welcome :)
You could generate in-line, gap-free ids for the records that you really want to work with, and then generate the random selector using the total number of records available.
Try this (props to the chosen answer here for the row_number generator):
SELECT r1.*
FROM
   (SELECT person_id,
           @curRow := @curRow + 1 AS row_num   -- gap-free position among rows with points > 0
    FROM persons AS p,
         (SELECT @curRow := 0) r0
    WHERE points > 0) r1
 , (SELECT COUNT(1) * RAND() AS id
    FROM persons
    WHERE points > 0) r2
WHERE r1.row_num >= r2.id   -- compare against the dense row number, not person_id
ORDER BY r1.row_num ASC
LIMIT 1;
You can mess with it in this sqlfiddle.
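The key point of this design is that row_num is dense (1..N over just the rows with points > 0), so COUNT(1) * RAND() over the same filtered set, matched against row_num, gives every qualifying row the same chance of being picked regardless of gaps in person_id. The trade-off is that the inner subquery has to renumber every matching row on each call, so with roughly 10% of 10M rows qualifying this will likely be slower than the original 0.0005-second lookup, but it removes the bias the gaps introduced.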