I'm having a serious brain fart over this, but essentially I have a table that looks similar to this:
+ id + staff_id + location       + date       + dismiss_boolean +
+----+----------+----------------+------------+-----------------+
+ 1  + 22       + Bedfordshire   + 2011-11-01 + 0               +
+ 2  + 22       + Hertfordshire  + 2011-11-02 + 1               +
+ 3  + 16       + Bedfordshire   + 2011-12-01 + 0               +
+ 4  + 17       + Bedfordshire   + 2011-11-22 + 0               +
+ 5  + 77       + Hertfordshire  + 2011-11-01 + 1               +
+ 6  + 77       + Cambridgeshire + 2011-11-01 + 1               +
What I'm after is (in a single query):
If the row exists, i.e. there is a row where staff_id = 6 and location = Bedfordshire, then UPDATE the row, but only if its date field is older than X date.
Otherwise, if the row doesn't exist (there isn't a row where staff_id = 6 and location = Bedfordshire), then INSERT the data as a new row.
Usually you would use:
INSERT....ON DUPLICATE KEY UPDATE...
But you can't, IIRC, use a WHERE clause in the UPDATE part when using ON DUPLICATE KEY. And, again, you can't put UNIQUE indexes on the location and staff_id fields due to duplicates.
So I'm after a query along the lines of:
IF(
(SELECT COUNT(*) FROM `notifications` WHERE `staff_id` = '6' AND `location` = 'Bedfordshire') > 0
, UPDATE `notifications` SET `dismiss_boolean` = '1', `date` = '2011-12-10' WHERE `staff_id` = '6' AND `location` = 'Bedfordshire' AND `date` < '2011-12-10'
, INSERT INTO `notifications` (`staff_id`, `location`, `date`, `dismiss_boolean`) VALUES ('6', 'Bedfordshire', '2011-12-10', '1')
)
But that throws syntax errors, and it's an incorrect use of the IF() function from what I remember.
So has anyone got any ideas how I can accomplish this? The only solution I can think of is to query the table before updating or inserting the data, but as said, ideally I want to do this in a single query.
Any help will be appreciated, as I've been rattling around this problem for most of the day and I've yet to come up trumps searching Google/Stack Overflow.
I am not sure if this is still relevant, but just for reference:
INSERT INTO notifications
  (staff_id, location, `date`, dismiss_boolean)
VALUES ('6', 'Bedfordshire', '2011-12-10', '1')
ON DUPLICATE KEY UPDATE
  -- only overwrite when the stored date is older than the incoming one
  dismiss_boolean = IF(`date` < VALUES(`date`),
                       VALUES(dismiss_boolean),
                       dismiss_boolean),
  `date`          = IF(`date` < VALUES(`date`),
                       VALUES(`date`),
                       `date`);
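Note that ON DUPLICATE KEY UPDATE only fires when the inserted row would violate a PRIMARY KEY or UNIQUE index, so this approach assumes a composite unique key on (staff_id, location). A minimal sketch of adding one (assuming any existing duplicate staff_id/location pairs have been cleaned up first, and using uq_staff_location purely as a placeholder name):
-- Assumption: duplicate (staff_id, location) pairs have already been removed.
-- This composite unique key is what makes ON DUPLICATE KEY UPDATE fire.
ALTER TABLE notifications
  ADD UNIQUE KEY uq_staff_location (staff_id, location);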
Related
I have this query:
SELECT name, SUM(count_1 + count_2 + count_3 + count_4 + count_5 + count_6) AS Total
FROM my_table
Is there a way to add these values (count_1 + count_2 + count_3 + count_4 + count_5 + count_6, and so on) more efficiently? MySQL keeps crashing for me when I add a huge number of fields.
Regardless of whether the DB design is right or wrong, if you use an aggregation function you should use GROUP BY:
SELECT name, SUM(count_1 + count_2 + count_3 + count_4 + count_5 + count_6) AS Total
FROM my_table
GROUP BY name
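One caveat, in case any of the count_* columns can be NULL: a NULL in any column makes the whole expression NULL for that row, so the row contributes nothing to the SUM. Wrapping each column in COALESCE guards against that; a sketch, assuming the same table and column names:
SELECT name,
       SUM(COALESCE(count_1, 0) + COALESCE(count_2, 0) + COALESCE(count_3, 0)
           + COALESCE(count_4, 0) + COALESCE(count_5, 0) + COALESCE(count_6, 0)) AS Total
FROM my_table
GROUP BY name;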
I have a table with these fields: country_code, short_name, currency_unit, a2010, a2011, a2012, a2013, a2014, a2015. The a2010-a2015 fields are of type double.
How do I make a query which orders the results by average of fields a2010-a2015, keeping in mind that these fields might have NULL value?
I tried this code and it did not work (it returns an error saying there is something wrong in the ORDER BY part; the error mentioned column names and GROUP BY). The logic is: ORDER BY (A / B), where A is the sum of the non-NULL fields and B is the count of the non-NULL fields.
Any ideas?
(If it matters, the code is going to be used in a BigInsights environment.)
SELECT country_code, short_name, currency_unit, a2010, a2011, a2012,
a2013, a2014, a2015
FROM my_schema.my_table
WHERE Indicator_Code = 'SE.PRM.TENR'
ORDER BY
(
(
Coalesce(a2010,0) + Coalesce(a2011,0) + Coalesce(a2012,0)
+Coalesce(a2013,0) + Coalesce(a2014,0) + Coalesce(a2015,0)
)
/
(
COUNT(Coalesce(a2010)) + COUNT(Coalesce(a2011)) + COUNT(Coalesce(a2012))
+ COUNT(Coalesce(a2013)) + COUNT(Coalesce(a2014)) +
COUNT(Coalesce(a2015))
)
) DESC;
Use MySQL's IFNULL:
IFNULL(expression_1, expression_2)
In your query:
IFNULL(
(
COUNT(Coalesce(a2010)) + COUNT(Coalesce(a2011)) + COUNT(Coalesce(a2012))
+ COUNT(Coalesce(a2013)) + COUNT(Coalesce(a2014)) +
COUNT(Coalesce(a2015))
),
1
)
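Putting it together: the original error comes from using COUNT(), an aggregate function, in ORDER BY without a GROUP BY. One way around it is to count the non-NULL fields per row with CASE expressions instead of COUNT(), and to guard the division with NULLIF/IFNULL as suggested above. A sketch under those assumptions (not tested on BigInsights):
SELECT country_code, short_name, currency_unit,
       a2010, a2011, a2012, a2013, a2014, a2015
FROM my_schema.my_table
WHERE Indicator_Code = 'SE.PRM.TENR'
ORDER BY
    (COALESCE(a2010, 0) + COALESCE(a2011, 0) + COALESCE(a2012, 0)
     + COALESCE(a2013, 0) + COALESCE(a2014, 0) + COALESCE(a2015, 0))
    /
    -- per-row count of non-NULL fields; NULLIF/IFNULL avoid division by zero
    IFNULL(NULLIF(
        CASE WHEN a2010 IS NOT NULL THEN 1 ELSE 0 END
      + CASE WHEN a2011 IS NOT NULL THEN 1 ELSE 0 END
      + CASE WHEN a2012 IS NOT NULL THEN 1 ELSE 0 END
      + CASE WHEN a2013 IS NOT NULL THEN 1 ELSE 0 END
      + CASE WHEN a2014 IS NOT NULL THEN 1 ELSE 0 END
      + CASE WHEN a2015 IS NOT NULL THEN 1 ELSE 0 END, 0), 1) DESC;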
I'm trying to make a calculated value show up in another column in another table.
Can someone please explain why this doesn't work?
CREATE TABLE #Medition (ID int,AVG decimal(18,4))
INSERT INTO #Medition (ID, AVG)
SELECT ID, SUM(125Hz + 250Hz + 500Hz + 750Hz + 1000Hz + 1500Hz + 2000Hz + 3000Hz + 4000Hz + 6000Hz + 8000Hz)/11 AS AVG FROM tonvarden
UPDATE matningar SET matningar.tonmedelvarde =
#Medition.AVG FROM matningar INNER JOIN #Medition ON matningar.ID =#Medition.ID
DROP TABLE #Medition
I am getting this error:
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INSERT INTO #Medition (ID, AVG) SELECT ID, SUM(125Hz + 250Hz + 500Hz + 750Hz + ' at line 2
No need to create a temporary table to do this.
UPDATE matningar a
join tonvarden b on a.ID = b.ID
set a.tonmedelvarde = (`125Hz` + `250Hz` + `500Hz` + `750Hz` + `1000Hz` +
`1500Hz` + `2000Hz` + `3000Hz` + `4000Hz` + `6000Hz` +
`8000Hz`)/11;
If you would like to update matningar whenever a new row is inserted into tonvarden, then you can create the following trigger:
create trigger update_matningar before insert on tonvarden
for each row
update matningar
set tonmedelvarde =
(new.`125Hz` + new.`250Hz` + new.`500Hz` + new.`750Hz`
+ new.`1000Hz` + new.`1500Hz` + new.`2000Hz`
+ new.`3000Hz` + new.`4000Hz` + new.`6000Hz`
+ new.`8000Hz`)/11
where id = new.id;
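If rows in tonvarden can also be changed after they are inserted, a matching update trigger would keep matningar in sync; a sketch along the same lines (the trigger name is just illustrative):
create trigger update_matningar_on_update before update on tonvarden
for each row
  -- recompute the average whenever a tonvarden row changes
  update matningar
  set tonmedelvarde =
      (new.`125Hz` + new.`250Hz` + new.`500Hz` + new.`750Hz`
       + new.`1000Hz` + new.`1500Hz` + new.`2000Hz`
       + new.`3000Hz` + new.`4000Hz` + new.`6000Hz`
       + new.`8000Hz`)/11
  where id = new.id;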
In one of my tables, the date column values are like below:
date
25052008112228
26052008062717
The table name is transaction.
I tried using the queries below, but they throw errors:
select * from transaction where date between '2012-01-06' and '2012-06-30'
select * from transaction where date between '2012/01/06' and '2012/06/30'
Please give me a solution.
The problem is that the [date] column doesn't contain a date in a format that will be automatically converted to an appropriate datetime value - it doesn't even contain a supported format value. So you're left shredding the text using string operations:
declare #Transactions table (TDate char(14))
insert into #Transactions (TDate) values
('25052008112228'),
('26052008062717')
select CONVERT(datetime,
SUBSTRING(TDate,5,4) + '-' +
SUBSTRING(TDate,3,2) + '-' +
SUBSTRING(TDate,1,2) + 'T' +
SUBSTRING(TDate,9,2) + ':' +
SUBSTRING(TDate,11,2) + ':' +
SUBSTRING(TDate,13,2))
from
#Transactions
Results:
2008-05-25 11:22:28.000
2008-05-26 06:27:17.000
You could wrap the CONVERT/SUBSTRING operations into a UDF if you need to perform this kind of conversion often. Of course, the ideal solution would be to change the column definition to store a genuine datetime value; almost all datetime issues arise when people treat dates as text.
(Note, I've renamed both the table and the column, since using reserved words is usually a bad idea)
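For reference, a minimal sketch of what such a UDF might look like (the name dbo.ParseDdMmYyyyHhMmSs is purely illustrative):
CREATE FUNCTION dbo.ParseDdMmYyyyHhMmSs (@TDate char(14))
RETURNS datetime
AS
BEGIN
    -- Rearrange ddMMyyyyHHmmss into an ISO-8601 string, then convert it.
    RETURN CONVERT(datetime,
        SUBSTRING(@TDate,5,4) + '-' +
        SUBSTRING(@TDate,3,2) + '-' +
        SUBSTRING(@TDate,1,2) + 'T' +
        SUBSTRING(@TDate,9,2) + ':' +
        SUBSTRING(@TDate,11,2) + ':' +
        SUBSTRING(@TDate,13,2));
END;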
Your query could be something like:
;with converted as (
select *,CONVERT(datetime,
SUBSTRING([Date],5,4) + '-' +
SUBSTRING([Date],3,2) + '-' +
SUBSTRING([Date],1,2) + 'T' +
SUBSTRING([Date],9,2) + ':' +
SUBSTRING([Date],11,2) + ':' +
SUBSTRING([Date],13,2)) as GenuineDate
from [Transaction]
)
select * from converted where GenuineDate between '20120106' and '20120630'
(Note that I've also changed the date literals in the final query to a safe format.)
-- asp time stamp
select * from [transaction] where
cast(SUBSTRING([date],5,4) + '-' + SUBSTRING([date],3,2) + '-' +
SUBSTRING([date],1,2) + ' ' + SUBSTRING([date],9,2) +
':' + SUBSTRING([date],11,2) + ':' +
SUBSTRING([date],13,2) as datetime)
between '2008-05-26' and '2012-01-06'
-- unix epoch time
select * from [transaction] where [date]
between DATEDIFF( SECOND, '01-01-1970 00:00:00', '2012-01-06' )
and DATEDIFF( SECOND, '01-01-1970 00:00:00', '2012-06-30')
Below is what I have
+----+--------+--------+
+ id + field1 + field2 +
+----+--------+--------+
+ 1  + 1      +        +
+ 1  + 23     +        +
+ 1  +        + 1      +
+ 1  +        + 33     +
+ 2  + 55     +        +
+ 2  +        + 2      +
+ 2  +        + 23     +
+----+--------+--------+
What I want is
+----+--------+--------+
+ id + field1 + field2 +
+----+--------+--------+
+ 1  + 23     + 33     +
+ 2  + 55     + 23     +
+----+--------+--------+
I want to combine the rows (keeping the greatest value) and show the data for each user in one row, instead of the multiple rows I have in the table.
Any idea how to do it?
Note: I don't have any row that has data for all fields. There is only one value per row, and two or more rows per user.
I tried this:
SELECT id, GROUP_CONCAT(MAX(field1)), GROUP_CONCAT(MAX(field2)) from myTable
GROUP BY id;
but it's giving this error:
Invalid use of group function
data at sqlfiddle
This question is a bit more advanced than my earlier question about showing data in one row (from multiple rows).
This simple query should do the trick.
SELECT id, MAX(field1), MAX(field2)
FROM myTable
GROUP BY id;
It groups all the rows with the same id and selects the maximum value within each group for each column.
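For reference, the Invalid use of group function error in the original attempt comes from nesting one aggregate inside another (GROUP_CONCAT(MAX(...))). If the goal were instead to list every non-NULL value per id rather than only the largest, a plain GROUP_CONCAT with no MAX inside would work; a sketch:
-- Lists all non-NULL values per id, comma-separated, instead of only the maximum.
SELECT id, GROUP_CONCAT(field1), GROUP_CONCAT(field2)
FROM myTable
GROUP BY id;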