Is there an equivalent of the Oracle SQL DECODE function in N1QL?
I.e. a function which lets you output one value or another, based on a conditional check:
select sum(decode(type = 'car', 1, 0)) carCount from mybucket
Thanks.
DECODE will be available in the next release. In the meantime you can use a CASE expression:
SELECT SUM(CASE WHEN type = 'car' THEN 1 ELSE 0 END) carCount
FROM mybucket;
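For reference, when DECODE does land it will presumably follow the Oracle-style signature DECODE(expression, search, result [, default]) rather than taking a boolean condition, so the query from the question would look something like this (a sketch only, not verified against a released build):
SELECT SUM(DECODE(type, 'car', 1, 0)) AS carCount
FROM mybucket;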
Related
I need some help over here.
I'm working on report generation using MySQL. One of the columns I created is called TOTALSUM, and it is produced by an operation using a CASE clause. When I try to use an ORDER BY, it accepts this TOTALSUM, but if I include it inside an IF or CASE clause, it doesn't accept it anymore.
SUM(CASE WHEN COL1 = 01 THEN COL2 ELSE 0 END) + SUM(CASE WHEN COL1 = 03 THEN -COL2 ELSE 0 END) AS TOTALSUM,
If I do something like this, it works:
ORDER BY TOTALSUM
But if I do something like this, it doesn't work, giving the following error: #42S22 Reference 'TOTALSUM' not supported (reference to group function)
ORDER BY IF(:INPUTVALUE = "X",TOTALSUM,ITEMCODE)
Is there any way to make this work?
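The thread has no accepted fix, but one common workaround, sketched here as an assumption rather than a verified answer, is to compute TOTALSUM in a derived table so the outer ORDER BY sees it as an ordinary column instead of a reference to a group function (the table name and GROUP BY column below are placeholders, since the full original query is not shown):
SELECT t.*
FROM (
    SELECT ITEMCODE,
           SUM(CASE WHEN COL1 = 01 THEN COL2 ELSE 0 END) +
           SUM(CASE WHEN COL1 = 03 THEN -COL2 ELSE 0 END) AS TOTALSUM
    FROM mytable            -- hypothetical table name
    GROUP BY ITEMCODE
) AS t
ORDER BY IF(:INPUTVALUE = "X", t.TOTALSUM, t.ITEMCODE);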
The scenario is that I have two columns: one is Quantity and the other is Type. What I am trying to do is check the type: if it is "rec", take all the values from Quantity and add them, and if the type is "issue", get only those fields whose type is receiving and add them all, on the basis of the item ID. The SQL query I have written is here:
SELECT f.`Itemm_ID`, ABS(SUM(f.`Quantity`)) AS recieving, TYPE,
(CASE
WHEN f.`Type` = 'issue'
THEN ABS(SUM(f.`Quantity`))
END)
FROM stock_journal AS f
WHERE f.`Itemm_ID`='1'
Now the thing is, everything is working fine except the CASE statement, which is returning null.
Please help me resolve my issue. Thank you.
It seems that you need:
SELECT f.`Itemm_ID`,
ABS(SUM(f.`Quantity`)) AS recieving,
TYPE,
ABS(SUM(CASE WHEN f.`Type` = 'issue'
THEN f.`Quantity`
ELSE 0
END))
FROM stock_journal AS f
WHERE f.`Itemm_ID`='1'
PS. Can f.Quantity be negative? If not, then ABS() is unnecessary. If it can, then ABS() should probably wrap the inner f.Quantity, not the whole SUM().
PPS. TYPE in the output is formally incorrect (it conflicts with ONLY_FULL_GROUP_BY); I'd recommend wrapping it with ANY_VALUE().
I didn't get your recommendation of wrapping TYPE with ANY_VALUE(); can you please elaborate?
I mean that (maybe, I'm not sure) you need:
SELECT f.`Itemm_ID`,
SUM(ABS(f.`Quantity`)) AS recieving,
TYPE,
SUM(CASE WHEN f.`Type` = 'issue'
THEN ABS(f.`Quantity`)
ELSE 0
END)
FROM stock_journal AS f
WHERE f.`Itemm_ID`='1'
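To make the ANY_VALUE() suggestion concrete: a minimal sketch, assuming MySQL 5.7+ where ANY_VALUE() is available, simply wraps the non-aggregated column so ONLY_FULL_GROUP_BY no longer complains (the issued alias is added here just for readability):
SELECT f.`Itemm_ID`,
       SUM(ABS(f.`Quantity`)) AS recieving,
       ANY_VALUE(f.`Type`) AS `Type`,
       SUM(CASE WHEN f.`Type` = 'issue'
                THEN ABS(f.`Quantity`)
                ELSE 0
           END) AS issued
FROM stock_journal AS f
WHERE f.`Itemm_ID` = '1'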
Have you checked the syntax for CASE? I think you are missing the ELSE part in the query.
E.g.:
SELECT OrderID, Quantity,
CASE
WHEN Quantity > 30 THEN "The quantity is greater than 30"
WHEN Quantity = 30 THEN "The quantity is 30"
ELSE "The quantity is under 30"
END
FROM OrderDetails;
Check here for the syntax.
Hi, I was looking for a MySQL query result like this:
As you can see, there are some values of a similar kind (e.g. BV and BR, or C5 and C7). How can I combine them into one common value, let's say B or C, and group by that in SQL?
I have the following query:
SELECT
type,
sum(case when status ='valid' then 1 else 0 end) valid_jobs,
sum(case when status ='non-valid' then 1 else 0 end) non_valid_jobs,
sum(case when status IS NULL then 1 else 0 end) null_jobs
from
main_table
where
SUBSTRING_INDEX(CAST(CAST(from_unixtime(date_generated) AS DATE) AS CHAR), '-',2) REGEXP '^2016'
group by type
Thanks in advance guys.
The outcome will look like:
Just use an expression that evaluates the value of the type column, and returns the desired result.
What's not clear from the question is the "mapping" from type to the value you want returned in the first column. It looks like we might be looking at just the first character of the value in the type column:
SUBSTR(type,1,1)
If the "mapping" is more involved, then we could use a CASE expression. For example:
CASE
WHEN type IN ('BV','BR','BT','ZB') THEN 'B'
WHEN type IN ('C5','C7') THEN 'C'
WHEN ... THEN ...
ELSE type
END
We'd use that as the first expression in the SELECT list (replacing the reference to the type column in the original query), and in the GROUP BY clause.
On an (unrelated) performance note, we'd prefer conditions in the WHERE clause to be on bare columns. That allows MySQL to make use of an (efficient) range scan operation on an appropriate index.
With this condition:
WHERE SUBSTRING_INDEX(CAST(CAST(FROM_UNIXTIME( t.date_generated ) AS DATE) AS CHAR), '-',2)
REGEXP '^2016'
We're forcing MySQL to evaluate the expression on the left side for every row in the table, and only then is the value returned by the expression compared against the pattern.
If what we're really trying to do is get date_generated values in 2016, assuming that date_generated is an INTEGER column storing a 32-bit Unix-style number of seconds since the epoch (1970-01-01)...
We can do something like this:
WHERE t.date_generated >= UNIX_TIMESTAMP('2016-01-01')
AND t.date_generated < UNIX_TIMESTAMP('2017-01-01')
MySQL will see that as a range operation on the values in the date_generated column. And with that, MySQL can make effective use of an index that has date_generated as a leading column.
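For that range predicate to pay off there needs to be a suitable index; a minimal sketch, assuming no such index exists yet and using a made-up index name:
CREATE INDEX ix_main_table_date_generated ON main_table (date_generated);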
Just replace expr with the expression that returns the values you want in the first column:
SELECT expr
, SUM(IF( t.status = 'valid' ,1,0)) AS valid_jobs
, SUM(IF( t.status = 'non-valid' ,1,0)) AS non_valid_jobs
, SUM(IF( t.status IS NULL ,1,0)) AS null_jobs
FROM main_table t
WHERE t.date_generated >= UNIX_TIMESTAMP('2016-01-01')
AND t.date_generated < UNIX_TIMESTAMP('2017-01-01')
GROUP BY expr
EDIT
To guarantee that rows are returned in a particular sequence, add an ORDER BY clause, e.g.
ORDER BY 1
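Putting the pieces together, here is a sketch that substitutes the SUBSTR(type,1,1) mapping for expr (type_group is just an arbitrary label; swap in the CASE expression instead if the mapping is more involved):
SELECT SUBSTR(t.type,1,1) AS type_group
     , SUM(IF( t.status = 'valid' ,1,0)) AS valid_jobs
     , SUM(IF( t.status = 'non-valid' ,1,0)) AS non_valid_jobs
     , SUM(IF( t.status IS NULL ,1,0)) AS null_jobs
FROM main_table t
WHERE t.date_generated >= UNIX_TIMESTAMP('2016-01-01')
  AND t.date_generated < UNIX_TIMESTAMP('2017-01-01')
GROUP BY SUBSTR(t.type,1,1)
ORDER BY 1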
Try this:
SELECT
LEFT(type,1) AS type,
sum(case when status ='valid' then 1 else 0 end) valid_jobs,
sum(case when status ='non-valid' then 1 else 0 end) non_valid_jobs,
sum(case when status IS NULL then 1 else 0 end) null_jobs
FROM
main_table
WHERE
SUBSTRING_INDEX(CAST(CAST(from_unixtime(date_generated) AS DATE) AS CHAR), '-',2) REGEXP '^2016'
GROUP BY
LEFT(type,1)  -- group on the truncated value; a bare "type" here would resolve to the original column, not the alias
I have written this query to get my data, and all the data is fine.
I have one column which has either Pass or Fail. I want to calculate the percentage of bookings that failed, and output it as a single value.
I will have to write another query to show that one number.
For example: in the data below, I have 4 bookings, out of which 2 failed, so 50% is the failure rate. I am omitting some columns in the display, but they can be seen in the query.
That's an aggregation over all records and simple math:
select count(case when decision = 'Fail' then 1 end) / count(*) * 100
from (<your query here>) results;
Explanation: COUNT(something) counts non-null values. case when decision = 'Fail' then 1 end is 1 (i.e. not null) for failures and null otherwise, since null is the default for no match in CASE/WHEN; you could just as well write else null end explicitly.
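With the sample data described in the question (4 bookings, 2 of them failed), this works out to 2 / 4 * 100 = 50, i.e. a 50% failure rate.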
Modify your original condition to the following. Notice that there is no need to wrap your query in a subquery.
CONCAT(FORMAT((100 * SUM(CASE WHEN trip_rating.rating <= 3 AND
(TIMESTAMPDIFF(MINUTE,booking.pick_up_time,booking_activity.activity_time) -
ROUND(booking_tracking_detail.google_adjusted_duration_driver_coming/60)) /
TIMESTAMPDIFF(MINUTE,booking.pick_up_time,booking_activity.activity_time)*100 >= 15
THEN 1
ELSE 0
END) / COUNT(*)), 2), '%') AS failureRate
This will also format your failure rate in the format 50.0%, with a percentage sign.
I am migrating a database from MySQL to Postgres. The migration itself was OK, following the Postgres documentation.
Right now, I'm fixing our MySQL-specific queries.
In some point, we have now something like this:
select(%(
SUM(CASE WHEN income THEN value ELSE 0 END) AS rents,
SUM(CASE WHEN not income THEN value ELSE 0 END) AS expenses
))
In MySQL, it was sum(if(incomes, value, 0)) etc., and it was working as expected.
With PG, it returns a string instead of a numeric.
I already checked the database and the data type is correct.
What can I do, besides casting with to_d or to_f?
EDIT: the complete query:
SELECT
SUM(CASE WHEN income THEN value ELSE 0 END) AS rents,
SUM(CASE WHEN not income THEN value ELSE 0 END) AS expenses
FROM "transactions"
WHERE "transactions"."type" IN ('Transaction')
AND "transactions"."user_id" = 1
AND "transactions"."paid" = 't'
AND (transactions.date between '2013-09-01' and '2013-09-30')
LIMIT 1
As far as I know, using .to_f, .to_i or whatever is the answer; the Rails Postgres adapter seems adamant that everything is a String unless it's an ActiveRecord model.
See: connection.select_value only returns strings in postgres with pg gem
I don't particularly approve of this, but it is, as the saying goes, 'working as intended'.