TIMESTAMPDIFF() parameters error - MySQL

I am using MySQL version 5.6 on SQL Fiddle, and I'm trying to use the TIMESTAMPDIFF() function to find the difference between the minimum value in the first column and the largest value in the second column of a table named Task, as follows:
select TimeStampDiff(month, , max(Task.End_Date), min(Task.Start_Date));
but when I run this code I get the following error:
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ' months(max(End_Date)), months(min(Start_Date)))' at line 1
Is it that TIMESTAMPDIFF() does not accept an aggregate function as a parameter? And how can I solve this problem?
Here's my complete fiddle.

TIMESTAMPDIFF() does accept aggregate functions as arguments; the error is a plain syntax error in the argument list (and the query also needs a FROM clause). Pass the unit, then the earlier date (MIN(Start_Date)), then the later date (MAX(End_Date)) to get a positive difference. Try:
mysql> DROP TABLE IF EXISTS `Task`;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> CREATE TABLE IF NOT EXISTS `Task` (
-> `ID` INT NOT NULL,
-> `Pro_ID` INT NOT NULL,
-> `Start_Date` DATE,
-> `End_Date` DATE,
-> `Description` VARCHAR(255),
-> PRIMARY KEY (`ID`)
-> );
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO `Task` VALUES
-> (1, 1, '2017-01-01', '2017-02-01', "no-Description-yet"),
-> (2, 1, '2017-01-01', '2017-02-01', "no-Description-yet"),
-> (3, 1, '2017-01-01', '2017-06-01', "no-Description-yet"),
-> (4, 2, '2017-01-01', '2017-03-01', "no-Description-yet"),
-> (5, 3, '2017-01-01', '2017-02-01', "no-Description-yet"),
-> (6, 4, '2017-01-01', '2017-03-01', "no-Description-yet");
Query OK, 6 rows affected (0.02 sec)
Records: 6 Duplicates: 0 Warnings: 0
mysql> SELECT
-> `Pro_ID`,
-> TIMESTAMPDIFF(MONTH, MIN(`Start_Date`), MAX(`End_Date`)) `MONTH_DIFF`
-> FROM `Task`
-> GROUP BY `Pro_ID`;
+--------+------------+
| Pro_ID | MONTH_DIFF |
+--------+------------+
|      1 |          5 |
|      2 |          2 |
|      3 |          1 |
|      4 |          2 |
+--------+------------+
4 rows in set (0.00 sec)
Example db-fiddle.
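If a single overall figure is wanted instead of one row per Pro_ID, the same pattern works without the GROUP BY; a minimal sketch against the same Task table (untested):
SELECT TIMESTAMPDIFF(MONTH, MIN(`Start_Date`), MAX(`End_Date`)) `MONTH_DIFF`
FROM `Task`;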

Related

MySQL INSERT INTO ... SELECT: non-matching records still trigger str_to_date errors

I'm seeing weird behavior with a MySQL INSERT ... SELECT where I need to convert dt_int from TABLE2 into the datetime column dt in TABLE1.
The table structure is
TABLE1
PK INT(11) -- auto increment
dt datetime
TABLE2
PK INT(11) -- auto increment
dt_int INT(11)
I have an INSERT ... SELECT query like this:
INSERT INTO TABLE1(dt)
(
SELECT str_to_date(dt_int, '%Y%m%d')
FROM TABLE2
WHERE str_to_date(dt_int, '%Y%m%d') IS NOT NULL
)
It works fine if all the dates in the table are valid. However, if the table contains data like this:
TABLE2
PK | dt_int
1  | 20201209
2  | 20202020
it would hit Error Code 1411: Incorrect datetime value '20202020' for function str_to_date.
The inner SELECT statement returns only the valid dates, but the INSERT still tries to convert the dates that were filtered out. Why is this happening? Is there anything I can do?
[Edited]
The MySQL version is 5.7 and the engine is InnoDB. It is currently hosted in a Windows environment.
Instead of str_to_date(), you can build the date arithmetically and keep only the rows where formatting the result back to %Y%m%d reproduces dt_int; invalid values such as 20202020 are filtered out before any conversion can raise an error:
INSERT INTO table1
SELECT PK, CONCAT(dt_int DIV 10000, '-1-1')
+ INTERVAL dt_int MOD 10000 DIV 100 - 1 MONTH
+ INTERVAL dt_int MOD 100 - 1 DAY
FROM table2
WHERE 0 + DATE_FORMAT(CONCAT(dt_int DIV 10000, '-1-1')
+ INTERVAL dt_int MOD 10000 DIV 100 - 1 MONTH
+ INTERVAL dt_int MOD 100 - 1 DAY, '%Y%m%d') = dt_int;
fiddle
You can set/unset the SQL_MODE like this; then it works:
SELECT REPLACE(@@SQL_MODE, ',', '\n');
SET @@SQL_MODE = REPLACE(@@SQL_MODE, ',STRICT_TRANS_TABLES', '');
INSERT INTO table2(dt) SELECT * FROM (
SELECT DATE(str_to_date( dt_int , '%Y%m%d'))
FROM table1
WHERE date(str_to_date( dt_int, '%Y%m%d')) is not null
) as x;
see also SQL_MODE
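If you prefer not to leave the relaxed mode in place, one option is to save the current mode and restore it afterwards; a sketch (session scope only, the variable name @old_sql_mode is arbitrary):
SET @old_sql_mode = @@SESSION.SQL_MODE;
SET @@SESSION.SQL_MODE = REPLACE(@@SESSION.SQL_MODE, ',STRICT_TRANS_TABLES', '');
-- run the INSERT ... SELECT here
SET @@SESSION.SQL_MODE = @old_sql_mode;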
Sample
Create Tables
mysql> CREATE TABLE `table1` (
-> `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
-> `dt_int` int(11) DEFAULT NULL,
-> PRIMARY KEY (`id`)
-> ) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.04 sec)
mysql> INSERT INTO `table1` (`id`, `dt_int`)
-> VALUES
-> (1, 20200202),
-> (2, 20202020);
Query OK, 2 rows affected (0.01 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> CREATE TABLE `table2` (
-> `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
-> `dt` date DEFAULT NULL,
-> PRIMARY KEY (`id`)
-> ) ENGINE=InnoDB AUTO_INCREMENT=23 DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.06 sec)
Show Data
mysql> SELECT DATE(str_to_date( dt_int , '%Y%m%d'))
-> FROM table1
-> WHERE date(str_to_date( dt_int, '%Y%m%d')) is not null;
+---------------------------------------+
| DATE(str_to_date( dt_int , '%Y%m%d')) |
+---------------------------------------+
| 2020-02-02                            |
+---------------------------------------+
1 row in set, 2 warnings (0.00 sec)
Run Query with Error
mysql> INSERT INTO table2(dt)
-> SELECT DATE(str_to_date( dt_int , '%Y%m%d'))
-> FROM table1
-> WHERE date(str_to_date( dt_int, '%Y%m%d')) is not null;
ERROR 1411 (HY000): Incorrect datetime value: '20202020' for function str_to_date
mysql>
Get SQL_MODE and remove STRICT_TRANS_TABLES
mysql>
mysql> SELECT REPLACE(@@SQL_MODE, ',', '\n');
+-------------------------------------------------------------------------------------------------------------------------------------------+
| REPLACE(@@SQL_MODE, ',', '\n') |
+-------------------------------------------------------------------------------------------------------------------------------------------+
| ONLY_FULL_GROUP_BY
STRICT_TRANS_TABLES
NO_ZERO_IN_DATE
NO_ZERO_DATE
ERROR_FOR_DIVISION_BY_ZERO
NO_AUTO_CREATE_USER
NO_ENGINE_SUBSTITUTION |
+-------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> SET @@SQL_MODE = REPLACE(@@SQL_MODE, ',STRICT_TRANS_TABLES', '');
Query OK, 0 rows affected, 1 warning (0.00 sec)
Run Query without Error
mysql> INSERT INTO table2(dt) SELECT * FROM (
-> SELECT DATE(str_to_date( dt_int , '%Y%m%d'))
-> FROM table1
-> WHERE date(str_to_date( dt_int, '%Y%m%d')) is not null
-> ) as x;
Query OK, 1 row affected, 2 warnings (0.00 sec)
Records: 1 Duplicates: 0 Warnings: 2
mysql>

How to extract unique nested variable names out of one string variable?

Case
In our MySQL database the data is stored in combined JSON strings like this:
| ID | DATA |
| 100 | {var1str: "sometxt", var2double: 0,01, var3integer: 1, var4str: "another text"} |
| 101 | {var3integer: 5, var2double: 2,05, var1str: "txt", var4str: "more text"} |
Problem
Most of the DATA fields hold over 2500 variables. The order of variables in the DATA string is random (as shown in the example above). Right now we only know how to extract data with the following query:
select
ID,
json_extract(DATA, '$.var1str'),
json_extract(DATA, '$.var2double')
FROM table
With this query, only the values of var1str and var2double are returned. The values of variables 3 and 4 are ignored, and there is no overview of which variables are hiding in the DATA fields.
With almost 60,000 entries and over 3,000 possible unique variable names, I would like to create a query that loops through all 60,000 DATA fields and extracts every unique variable name found in them.
Solution?
The query I am looking for would give the following result:
var1str
var2double
var3integer
var4str
My knowledge of MySQL is very limited. Any direction toward this solution is much appreciated.
What version of MySQL are you using?
From MySQL 8.0.4 onwards the JSON_TABLE function is supported and can be useful in this case.
mysql> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 8.0.11    |
+-----------+
1 row in set (0.00 sec)
mysql> DROP TABLE IF EXISTS `table`;
Query OK, 0 rows affected (0.09 sec)
mysql> CREATE TABLE IF NOT EXISTS `table` (
-> `ID` BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
-> `DATA` JSON NOT NULL
-> ) AUTO_INCREMENT=100;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO `table`
-> (`DATA`)
-> VALUES
-> ('{"var1str": "sometxt", "var2double": 0.01, "var3integer": 1, "var4str": "another text"}'),
-> ('{"var3integer": 5, "var2double": 2.05, "var1str": "txt", "var4str": "more text"}');
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> SELECT
-> DISTINCT `der`.`key`
-> FROM
-> `table`,
-> JSON_TABLE(
-> JSON_KEYS(`DATA`), '$[*]'
-> COLUMNS(
-> `key` VARCHAR(64) PATH "$"
-> )
-> ) `der`;
+-------------+
| key |
+-------------+
| var1str     |
| var4str     |
| var2double  |
| var3integer |
+-------------+
4 rows in set (0.01 sec)
Be aware of Bug #90610: ERROR 1142 (42000) when using JSON_TABLE.
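Once the key names are known, each one can be read back with an explicit JSON path; a minimal sketch against the same sample table (JSON_UNQUOTE just strips the quotes from string values):
SELECT
`ID`,
JSON_UNQUOTE(JSON_EXTRACT(`DATA`, '$.var1str')) `var1str`,
JSON_EXTRACT(`DATA`, '$.var2double') `var2double`
FROM `table`;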

Modify a SQL table to condense similar rows while summing up a column

Here is a question that is the general gist of what I am trying to do:
Sum values from multiple rows into one row
However, I am seeking further functionality: permanently modifying the table in question so that it looks like the result of the SELECT statement suggested in that other thread.
So the table:
Sales
--------------------------------------
account   product   qty   amount
--------------------------------------
01010     bottle    10    200
01010     bottle    20    100
01010     bottle    5     10
11111     can       50    200
11111     can       25    150
...would be permanently modified to look like this
Sales
--------------------------------------
account   product   qty   amount
--------------------------------------
01010     bottle    35    310
11111     can       75    350
As answered in the link, a SELECT with SUM and GROUP BY can show me what the table needs to look like, but how do I actually apply those changes to the sales table?
Edit: This query will be run every time a new batch of sales is added into the system. It's intended to clean up the sales table after new records have been added.
Alternative approach
New records in sales are inserted from a different table using something like this:
"INSERT INTO sales
SELECT account, product, qty, amount
FROM new_sales;"
If there is a way to take care of the summation during that previous INSERT, instead of adding duplicate rows in the first place, that would also be acceptable. Keep in mind, this solution would still need to work for new records that don't have existing duplicate rows in sales.
EDIT: for posterity
General response seems to be that my initial approach is not possible, short of creating a temp_sales table with a CREATE ... SELECT, purging sales completely, copying the contents of temp_sales into the cleared sales table, and truncating temp_sales for future use.
The accepted solution uses the "Alternative approach" that I had also alluded to.
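For reference, a minimal sketch of that temp-table approach, using the column names from the question (assumes nothing else writes to sales while it runs):
CREATE TABLE temp_sales AS
SELECT account, product, SUM(qty) AS qty, SUM(amount) AS amount
FROM sales
GROUP BY account, product;
TRUNCATE TABLE sales;
INSERT INTO sales (account, product, qty, amount)
SELECT account, product, qty, amount FROM temp_sales;
TRUNCATE TABLE temp_sales; -- or DROP TABLE temp_sales if it is not reused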
Assuming new_sales is truncated after sales has been updated and then starts to refill, you could use INSERT ... ON DUPLICATE KEY UPDATE, for example:
MariaDB [sandbox]> drop table if exists t,t1;
Query OK, 0 rows affected (0.20 sec)
MariaDB [sandbox]>
MariaDB [sandbox]> create table t
-> (account varchar(5), product varchar(20), qty int default 0, amount int default 0);
Query OK, 0 rows affected (0.16 sec)
MariaDB [sandbox]> create table t1
-> (account varchar(5), product varchar(20), qty int default 0, amount int default 0);
Query OK, 0 rows affected (0.24 sec)
MariaDB [sandbox]>
MariaDB [sandbox]> alter table t
-> add unique key k1(account,product);
Query OK, 0 rows affected (0.15 sec)
Records: 0 Duplicates: 0 Warnings: 0
MariaDB [sandbox]>
MariaDB [sandbox]> truncate table t1;
Query OK, 0 rows affected (0.23 sec)
MariaDB [sandbox]> insert into t1 values
-> ('01010' , 'bottle' , 10 , 200),
-> ('01010' , 'bottle' , 20 , 100),
-> ('01010' , 'bottle' , 5 , 10),
-> ('11111' , 'can' , 50 , 200),
-> ('11111' , 'can' , 25 , 150);
Query OK, 5 rows affected (0.02 sec)
Records: 5 Duplicates: 0 Warnings: 0
MariaDB [sandbox]>
MariaDB [sandbox]> truncate table t;
Query OK, 0 rows affected (0.28 sec)
MariaDB [sandbox]> insert into t
-> select account,product,t1qty,t1amount
-> from
-> (
-> select t1.account,t1.product,sum(t1.qty) t1qty,sum(t1.amount) t1amount from t1 group by t1.account,t1.product
-> ) s
-> on duplicate key
-> update qty = t.qty + t1qty, amount = t.amount + t1amount;
Query OK, 2 rows affected (0.02 sec)
Records: 2 Duplicates: 0 Warnings: 0
MariaDB [sandbox]>
MariaDB [sandbox]> truncate table t1;
Query OK, 0 rows affected (0.32 sec)
MariaDB [sandbox]> insert into t1 values
-> ('01010' , 'bottle' , 10 , 200),
-> ('01011' , 'bottle' , 20 , 100),
-> ('01011' , 'bottle' , 5 , 10),
-> ('11111' , 'can' , 50 , 200),
-> ('11111' , 'can' , 25 , 150);
Query OK, 5 rows affected (0.02 sec)
Records: 5 Duplicates: 0 Warnings: 0
MariaDB [sandbox]>
MariaDB [sandbox]> insert into t
-> select account,product,t1qty,t1amount
-> from
-> (
-> select t1.account,t1.product,sum(t1.qty) t1qty,sum(t1.amount) t1amount from t1 group by t1.account,t1.product
-> ) s
-> on duplicate key
-> update qty = t.qty + t1qty, amount = t.amount + t1amount;
Query OK, 5 rows affected (0.02 sec)
Records: 3 Duplicates: 2 Warnings: 0
MariaDB [sandbox]>
MariaDB [sandbox]>
MariaDB [sandbox]> select * from t;
+---------+---------+------+--------+
| account | product | qty | amount |
+---------+---------+------+--------+
| 01010   | bottle  |   45 |    510 |
| 11111   | can     |  150 |    700 |
| 01011   | bottle  |   25 |    110 |
+---------+---------+------+--------+
3 rows in set (0.00 sec)
MariaDB [sandbox]>
You can create a table from a select statement.
So you could do something like:
create table sales_sum as
select
account,
product,
sum(qty) as qty,
sum(amount) as amount
from
sales
group by
account,
product
That creates a table with the right structure, and it'll insert the records that you want to have. Of course you can adapt the query or the table name.
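If the goal is to replace sales itself rather than keep a second table, one possible follow-up (assuming no concurrent writers, and that the summed columns were given explicit aliases as above so the column names match) is to swap the tables and drop the old one:
RENAME TABLE sales TO sales_old, sales_sum TO sales;
DROP TABLE sales_old;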
This approach does what an ETL tool can do, but you need to run the entire script every time:
-- Old table
CREATE TABLE yourtable (
  `state` varchar(2),
  `month` varchar(7),
  `ID` int,
  `sales` int
)
;
INSERT INTO yourtable (`state`, `month`, `ID`, `sales`)
VALUES ('FL', 'June', 0001, '12000'),
('FL', 'June', 0001, '6000'),
('FL', 'June', 0001, '3000'),
('FL', 'July', 0001, '6000'),
('FL', 'July', 0001, '4000'),
('TX', 'January', 0050, '1000'),
('MI', 'April', 0032, '5000'),
('MI', 'April', 0032, '8000'),
('CA', 'April', 0032, '2000');
SELECT
state,
month,
id,
SUM(sales) Total
FROM yourtable
GROUP BY state,
month,
id;
-- Creating new table from old table
CREATE TABLE yourtable1 (
  `state` varchar(2),
  `month` varchar(7),
  `ID` int,
  `sales` int
)
;
-- Inserting aggregation logic
INSERT INTO yourtable1 (state, month, id, sales)
SELECT
state,
month,
id,
SUM(sales)
FROM yourtable
GROUP BY state,
month,
id;
-- Fetching records
SELECT
*
FROM yourtable1;

STR_TO_DATE giving an error instead of null

I am taking data from a CSV file and throwing it all into a temporary table, so everything is in string format.
So even the date fields are strings, and I need to convert them from string to date. All dates are in this format: 28/02/2013.
I used STR_TO_DATE for this, but I am having a problem.
Here is a snippet of my code.
INSERT INTO `invoice` (`DueDate`)
SELECT
STR_TO_DATE(`DueDate`, '%d/%m/%Y')
FROM `upload_invoice`
There are of course more fields than this, but I am concentrating on the field that doesn't work.
Using this command, if a date is invalid it should insert a NULL, but instead of a NULL it generates an error:
#1411 - Incorrect datetime value: '' for function str_to_date
I understand what the error means: it is getting an empty string instead of a properly formatted date. But after reading the documentation, it should not throw an error; it should insert a NULL.
However, if I use the SELECT statement without the INSERT, it works.
I could use the following expression, which actually works up to a point:
IF(`DueDate`!='',STR_TO_DATE(`DueDate`,'%d/%m/%Y'),null) as `DueDate`
So STR_TO_DATE doesn't run if the field is empty. This works, but it can't catch a date that is not valid, e.g. a value like ASDFADFAS.
So then I tried
IF(TO_DAY(STR_TO_DATE(`DueDate`,'%d/%m/%Y') IS NOT NULL),STR_TO_DATE(`DueDate`,'%d/%m/%Y'),null) as `DueDate`
But this still gives the #1411 error on the if statement.
So my question is: why isn't STR_TO_DATE returning NULL on an incorrect date? I should not be getting the #1411 error.
This is not an exact duplicate of the other question, and that question did not have a satisfactory answer. I solved this a while ago and have added my solution, which is better than the one given in the other post, so I think this should stay.
An option you can try:
mysql> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 5.7.19    |
+-----------+
1 row in set (0.00 sec)
mysql> DROP TABLE IF EXISTS `upload_invoice`, `invoice`;
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE TABLE IF NOT EXISTS `invoice` (
-> `id` BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
-> `DueDate` DATE
-> );
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE TABLE IF NOT EXISTS `upload_invoice` (
-> `DueDate` VARCHAR(10)
-> );
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO `upload_invoice`
-> (`DueDate`)
-> VALUES
-> ('ASDFADFAS'), (NULL), (''),
-> ('28/02/2001'), ('30/02/2001');
Query OK, 5 rows affected (0.01 sec)
Records: 5 Duplicates: 0 Warnings: 0
mysql> INSERT INTO `invoice`
-> SELECT
-> NULL,
-> IF(`DueDate` REGEXP '[[:digit:]]{2}/[[:digit:]]{2}/[[:digit:]]{4}' AND
-> UNIX_TIMESTAMP(
-> STR_TO_DATE(`DueDate`, '%d/%m/%Y')
-> ) > 0,
-> STR_TO_DATE(`DueDate`, '%d/%m/%Y'),
-> NULL)
-> FROM `upload_invoice`;
Query OK, 5 rows affected (0.00 sec)
Records: 5 Duplicates: 0 Warnings: 0
mysql> SELECT `id`, `DueDate`
-> FROM `invoice`;
+----+------------+
| id | DueDate |
+----+------------+
|  1 | NULL       |
|  2 | NULL       |
|  3 | NULL       |
|  4 | 2001-02-28 |
|  5 | NULL       |
+----+------------+
5 rows in set (0.00 sec)
See db-fiddle.
I forgot I posted this question, but I solved this problem a while ago like this:
IF(`{$date}`!='',STR_TO_DATE(`{$date}`,'%d/%m/%Y'),null) as `{$date}`
Because that line is long and confusing, I wrapped it in a function like this:
protected function strDate($date){
return "IF(`{$date}`!='',STR_TO_DATE(`{$date}`,'%d/%m/%Y'),null) as `{$date}`";
}
INSERT INTO `invoice` (`DueDate`)
SELECT
{$this->strDate('DueDate')}
FROM `upload_invoice`
I really forgot I posted this question. It seems like an eternity away, but this is how I solved the issue.
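For what it's worth, the empty-string guard can also be written directly in SQL with NULLIF, since STR_TO_DATE(NULL, ...) simply returns NULL; genuinely malformed strings such as ASDFADFAS would still need something like the REGEXP check from the earlier answer. A sketch:
INSERT INTO `invoice` (`DueDate`)
SELECT STR_TO_DATE(NULLIF(`DueDate`, ''), '%d/%m/%Y')
FROM `upload_invoice`;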

Should OFFSET be specified at the end of a SQL query?

SELECT * FROM CUSTOMERS LIMIT 5 OFFSET 0
Assume CUSTOMERS is a table of customer details. The above query works fine, but if I specify OFFSET anywhere other than at the end of the query, I get an error.
I created a table with the following details.
The table name is sms_view.
Query:
SELECT SMS FROM sms_view WHERE read=2 LIMIT 5 OFFSET 0;
Result is: (screenshot omitted)
The above result is as expected, and it is based on the read value: the rows are first filtered by read, and OFFSET and LIMIT are then applied to that filtered set, which gives the result shown above.
But my requirement is the opposite: OFFSET and LIMIT should apply to the entire table first, and the read filter should then apply to that limited set.
Expected result: (screenshot omitted)
I need a query that produces the expected result.
Yes, it should be at the end. See https://dev.mysql.com/doc/refman/5.7/en/select.html
SELECT * FROM CUSTOMERS
ORDER BY somecolumn -- important to get consistent results
LIMIT 5 OFFSET 0
Another way to do the same thing is:
SELECT * FROM CUSTOMERS
ORDER BY somecolumn
LIMIT 0, 5
or in this case (as the offset is 0):
SELECT * FROM CUSTOMERS
ORDER BY somecolumn
LIMIT 5
MariaDB [sandbox]> Drop table if exists sms_view;
Query OK, 0 rows affected (0.10 sec)
MariaDB [sandbox]> create table sms_view(SMS int,db_id int, `read` int);
Query OK, 0 rows affected (0.28 sec)
MariaDB [sandbox]> insert into sms_view values
-> (1, 2, 3) ,
-> (2, 2, 3),
-> (3, 2, 2) ,
-> (4, 2, 2) ,
-> (5, 2, 2) ,
-> (6, 2, 2) ,
-> (7, 2, 2) ,
-> (8, 2, 2) ,
-> (9, 2, 2) ,
-> (10, 2, 2);
Query OK, 10 rows affected (0.04 sec)
Records: 10 Duplicates: 0 Warnings: 0
MariaDB [sandbox]>
MariaDB [sandbox]> select sms from
-> (
-> SELECT * FROM sms_view LIMIT 5 OFFSET 0
-> ) s
-> WHERE `read` = 2;
+------+
| sms |
+------+
|    3 |
|    4 |
|    5 |
+------+
3 rows in set (0.00 sec)
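As the first answer notes, an ORDER BY inside the derived table keeps the LIMIT/OFFSET window deterministic; a small variation of the query above (assuming SMS is the column to order by):
SELECT sms FROM
(
SELECT * FROM sms_view ORDER BY SMS LIMIT 5 OFFSET 0
) s
WHERE `read` = 2;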