I'm using Play Framework 2.1.2, the MySQL JDBC connector, and Scala 2.10. The following query is giving me a problem:
DB.withConnection { implicit connection =>
  SQL("""SELECT SUM(r.dayFrequency)
         FROM relationships AS r
         WHERE r.id = {id}
         AND (r.date BETWEEN {from} AND {to})""").on(
    'id -> id,
    'from -> from,
    'to -> to).as(scalar[Int](bigDecimalToInt).single)
}
It raises this exception:
Execution exception[[RuntimeException: UnexpectedNullableFound(ColumnName(.SUM(r.dayFrequency),Some(SUM(r.dayFrequency))))]]
The console logs the following query:
SELECT SUM(r.dayFrequency)
FROM relationships AS r
WHERE r.id = 26180
AND
(r.date BETWEEN 2014-08-04 12:00:00.0 AND 2014-08-04 12:00:00.0)
If I run this query in MySQL Workbench it returns NULL, which confirms the exception. But with this change to the query it works:
(r.date BETWEEN '2014-08-04' AND '2014-08-04')
For the conversion of Joda DateTime, I use this piece of code: Joda DateTime Field on Play Framework 2.0's Anorm
The frequency and date fields look like this:
date DATE NOT NULL,
dayFrequency INT
Can anyone help with this problem? It seems that something is wrong with the conversion.
EDIT after the first answer below:
From the view I receive date strings like 2014-08-04, and I convert them into Joda DateTime in my controller to compare them to others and to use them in MySQL queries, like this:
private def clientDateStringToTimestamp(date: String) = {
  val Array(year, month, day) = date.split("-")
  // fix the time of day at noon so only the date part varies
  new DateTime(year.toInt, month.toInt, day.toInt, 12, 0, 0).getMillis()
}
new DateTime(clientDateStringToTimestamp("2014-08-04"))
For the MySQL queries I want to compare only the date part not the time part.
So I did a simple experiment in MySQL:
mysql> create table t (v int);
Query OK, 0 rows affected (0.01 sec)
mysql> insert into t values (null);
Query OK, 1 row affected (0.00 sec)
mysql> select sum(v) from t;
+--------+
| sum(v) |
+--------+
| NULL |
+--------+
1 row in set (0.01 sec)
mysql> insert into t values (1);
Query OK, 1 row affected (0.00 sec)
mysql> select sum(v) from t;
+--------+
| sum(v) |
+--------+
| 1 |
+--------+
1 row in set (0.00 sec)
mysql> update t set v = NULL;
Query OK, 1 row affected (0.00 sec)
Rows matched: 2 Changed: 1 Warnings: 0
mysql> select sum(v) from t;
+--------+
| sum(v) |
+--------+
| NULL |
+--------+
1 row in set (0.00 sec)
So this tells us that summing only NULLs gives NULL, but summing NULLs together with numbers gives a number.
I suspect that your first query (r.date BETWEEN 2014-08-04 12:00:00.0 AND 2014-08-04 12:00:00.0) matches only rows with NULL dayFrequency values, whereas the second query (r.date BETWEEN '2014-08-04' AND '2014-08-04'), whose bounds are 12 hours earlier, matches at least one non-NULL frequency. So, since NULL is possible, you will have to use scalar[Option[Int]] for the sum and then turn it into 0 with getOrElse. A better way, if you can, is to make the dayFrequency column in the database NOT NULL DEFAULT 0. Then it will give you a 0, and you can sum away.
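If you would rather handle it on the SQL side instead, COALESCE can turn a NULL sum into 0 before it ever reaches Anorm (an alternative sketch, not something the question uses):
SELECT COALESCE(SUM(r.dayFrequency), 0)
FROM relationships AS r
WHERE r.id = {id}
AND (r.date BETWEEN {from} AND {to})
With that, the existing scalar[Int] parser keeps working, because the query can no longer return NULL.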
Also related, direct support for Joda temporal types in Anorm: https://github.com/playframework/playframework/commit/bdbbbe90822a6fb150c7044e68b33e2e52a7323d
I want rows whose updated_at is more than 3 hours in the past to be shown first, but MySQL seems to be completely ignoring the ORDER BY clause. Any idea why?
Edit: as pointed out by Sebastian, this only occurs in certain timezones, like GMT+5 or GMT+8.
mysql> SET time_zone='+08:00';
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE DATABASE test1; USE test1;
Query OK, 1 row affected (0.01 sec)
Database changed
mysql> CREATE TABLE `boxes` (
-> `box_id` int unsigned NOT NULL AUTO_INCREMENT,
-> `updated_at` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,
-> PRIMARY KEY (`box_id`)
-> ) ENGINE=InnoDB;
Query OK, 0 rows affected (0.01 sec)
mysql> INSERT INTO `boxes` (`box_id`, `updated_at`) VALUES
-> (1, '2020-08-22 05:25:35'),
-> (2, '2020-08-26 18:49:05'),
-> (3, '2020-08-23 03:28:30'),
-> (4, '2020-08-23 03:32:55');
Query OK, 4 rows affected (0.00 sec)
Records: 4 Duplicates: 0 Warnings: 0
mysql> SELECT NOW();
+---------------------+
| NOW() |
+---------------------+
| 2020-08-26 20:49:59 |
+---------------------+
1 row in set (0.00 sec)
mysql> SELECT b.box_id, updated_at, (b.updated_at < NOW() - INTERVAL 3 HOUR) AS more_than_3hr
-> FROM boxes b
-> ORDER BY more_than_3hr DESC;
+--------+---------------------+---------------+
| box_id | updated_at | more_than_3hr |
+--------+---------------------+---------------+
| 1 | 2020-08-22 05:25:35 | 1 |
| 2 | 2020-08-26 18:49:05 | 0 | <--- WHY IS THIS HERE???
| 3 | 2020-08-23 03:28:30 | 1 |
| 4 | 2020-08-23 03:32:55 | 1 |
+--------+---------------------+---------------+
4 rows in set (0.00 sec)
Expectation: the rows with "1" should show up first.
Actual results: ORDER BY is ignored, and the resultset is sorted by primary key
I have a hunch it has something to do with MySQL storing timestamps as UTC and displaying them in the current timezone. My current timezone is GMT+8. However, it still doesn't make sense -- I am sorting the results based on the aliased expression, and the expression's value is clearly shown in the resultset.
MySQL version 8.0.21.
I also tried moving the expression to the ORDER BY clause, and the results are the same.
I don't know exactly why, but the comparison is done against the wrong time zone behind the scenes, so the displayed values are correct while the comparisons themselves are invalid (for specific time zones).
When you query a TIMESTAMP value, MySQL converts the UTC value back to
your connection’s time zone. Note that this conversion does not take
place for other temporal data types such as DATETIME.
https://www.mysqltutorial.org/mysql-timestamp.aspx/
Changing the column type from TIMESTAMP to DATETIME fixes the problem.
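For the boxes table above, that change would look roughly like this (a sketch; keep or drop the ON UPDATE attribute as you need, and note that, as far as I know, the stored UTC values are converted using the current session time zone):
ALTER TABLE boxes
  MODIFY updated_at DATETIME NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP;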
Another solution may be converting the comparison result to a number, for example with FORMAT():
SELECT b.box_id, updated_at, FORMAT((b.updated_at < NOW() - INTERVAL 3 HOUR),0) AS more_than_3hr
FROM boxes b
ORDER BY more_than_3hr DESC;
From the documentation:
https://dev.mysql.com/doc/refman/8.0/en/user-variables.html
HAVING, GROUP BY, and ORDER BY, when referring to a variable that is assigned a value in the select expression list do not work as expected because the expression is evaluated on the client and thus can use stale column values from a previous row.
Basically, you can't rely on a name you created with AS in your sorting.
The solution is to repeat the verbose expression you aliased with AS in the ORDER BY clause. Yeah, it's verbose. 🤷♂️ It is what it is.
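Spelled out for the query in the question, that means repeating the expression instead of the alias:
SELECT b.box_id, updated_at, (b.updated_at < NOW() - INTERVAL 3 HOUR) AS more_than_3hr
FROM boxes b
ORDER BY (b.updated_at < NOW() - INTERVAL 3 HOUR) DESC;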
I am running the following query on my WordPress website:
$error = $wpdb->update( $table_name,
array( 'value' => $update_value ),
array( 'lead_id' => $lead_id,
'field_number' => 31.3 ),
array( '%s' ),
array( '%d', '%f' )
);
It is intended to update Gravity Forms entries after the user has initially submitted the form. The query runs fine for 24 fields but returns 0 for two of them.
I have so far tried the following troubleshooting steps:
Storing the result as $error and running var_dump($error); after the query, it returns 0.
Running var_dump( $wpdb->last_query ); immediately after the query to copy/paste the resulting SQL string into phpMyAdmin, which also reports 0 rows affected.
Manually selecting the row in phpMyAdmin using:
SELECT * FROM `table_name` WHERE `field_number` = 31.3
This also returns no rows. However, I know that there are rows which match, as I can see them in the table.
Manually selecting another field which updates as expected using the same query as above - this worked fine.
Changed the $where_format from float to string. No resulting change. Upon checking the db fields, the field_number field is stored as a float.
Used $wpdb->prepare to run a prepared query. Still no movement.
Prepare statement as follows:
$error = $wpdb->query(
$wpdb->prepare(
"
UPDATE %s
SET value = %s
WHERE lead_id = %d AND field_number = %f
",
$table_name, $update_value, $lead_id, 31.3
)
);
Which, when var_dumped gives the following result:
string '
UPDATE prefix_rg_lead_detail
SET `value` = 'a:3:{i:0;s:9:\"Liverpool\";i:1;s:10:\"Manchester\";i:2;s:5:\"Leeds\";}'
WHERE `lead_id` = 4 AND `field_number` = 31.300000
' (length=188)
I'm at my wits' end now, as I've tried everything I can possibly think of and still can't get the two fields to update.
Your problem is indeed with the FLOAT type. It is an imprecise type, which is what is causing this. Although the database displays the value 31.3, internally it is most likely stored as something like 31.30000000000001, which is why the WHERE condition is not matching. Take a look at the documentation here: Problems with Floating-Point Values.
So let's get to the tests:
create table test (
n float
);
insert into test values (31.3);
mysql> select * from test;
+------+
| n |
+------+
| 31.3 |
+------+
1 row in set (0.17 sec)
Running a SELECT against it with the literal 31.3 returns nothing:
mysql> select * from test where n=31.3;
Empty set (0.00 sec)
There are a few ways you can solve it without changing the column type:
1- Using the ROUND(field,number_of_decimals) function
mysql> select * from test where round(n,2)=31.3;
+------+
| n |
+------+
| 31.3 |
+------+
1 row in set (0.00 sec)
2- Casting it as DECIMAL type
mysql> select * from test where cast(n as decimal(5,2))=31.3;
+------+
| n |
+------+
| 31.3 |
+------+
1 row in set (0.00 sec)
So in order for your update to work with this data, you have to use one of those options in your UPDATE command, like:
$wpdb->query( $wpdb->prepare("UPDATE %s
SET value = %s
WHERE lead_id = %d
AND round(field_number,2) = %f ",
$table_name,
$update_value,
$lead_id,
31.3 )
);
My recommendation is that you change the field type. AFAIK there is no situation where your plugin may break because of this change. What may happen is rounding: if you try to insert a value like 31.696 into a DECIMAL(6,2) field, it will become 31.67. The other difference is that the value will be formatted with the number of decimals you chose, so 31.3 will start to appear as 31.30. You can change it like this:
alter table yourTableName modify field_name DECIMAL(6,2);
Here are some tests of that explanation:
mysql> alter table test modify column n decimal(10,2);
Query OK, 1 row affected, 1 warning (0.90 sec)
Records: 1 Duplicates: 0 Warnings: 1
mysql> select * from test;
+-------+
| n |
+-------+
| 31.30 |
+-------+
1 row in set (0.00 sec)
mysql> select * from test where n=31.3;
+-------+
| n |
+-------+
| 31.30 |
+-------+
1 row in set (0.00 sec)
And to show the rounding:
mysql> insert into test values (31.696);
Query OK, 1 row affected, 1 warning (0.01 sec)
mysql> select * from test;
+-------+
| n |
+-------+
| 31.30 |
| 31.67 |
+-------+
2 rows in set (0.01 sec)
I am taking data from a CSV file and throwing it all into a temporary table, so everything is in string format.
So even the date fields are in string format, and I need to convert the dates from strings to dates. All dates are in this format: 28/02/2013.
I used STR_TO_DATE for this, but I am having a problem.
Here is a snippet of my code.
INSERT INTO `invoice` (`DueDate`)
SELECT
STR_TO_DATE(`DueDate`,'%d/%m/%Y')
FROM `upload_invoice`
There are of course more fields than this, but I am concentrating on the field that doesn't work.
Using this command, if a date is invalid it should put in a NULL, but instead of a NULL being inserted, it generates an error:
#1411 - Incorrect datetime value: '' for function str_to_date
I understand what the error means: it is getting an empty field instead of a properly formatted date. But after reading the documentation, it should not be throwing an error; it should be inserting a NULL.
However if I use the SELECT statement without the INSERT it works.
I could use the following line, which actually works to a point:
IF(`DueDate`!='',STR_TO_DATE(`DueDate`,'%d/%m/%Y'),null) as `DueDate`
With it, STR_TO_DATE doesn't run if the field is empty. Now this works, but it can't catch a date which is not valid, e.g. if a date was ASDFADFAS.
So then I tried
IF(TO_DAYS(STR_TO_DATE(`DueDate`,'%d/%m/%Y')) IS NOT NULL,STR_TO_DATE(`DueDate`,'%d/%m/%Y'),null) as `DueDate`
But this still gives the #1411 error on the if statement.
So my question is: why isn't STR_TO_DATE returning NULL on an incorrect date? I should not be getting the #1411 error.
This is not an exact duplicate of the other question, and there was no satisfactory answer there. I solved this a while ago and I have added my solution, which is actually better than the one given in the other post, so I think this should stay.
An option you can try:
mysql> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 5.7.19 |
+-----------+
1 row in set (0.00 sec)
mysql> DROP TABLE IF EXISTS `upload_invoice`, `invoice`;
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE TABLE IF NOT EXISTS `invoice` (
-> `id` BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
-> `DueDate` DATE
-> );
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE TABLE IF NOT EXISTS `upload_invoice` (
-> `DueDate` VARCHAR(10)
-> );
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO `upload_invoice`
-> (`DueDate`)
-> VALUES
-> ('ASDFADFAS'), (NULL), (''),
-> ('28/02/2001'), ('30/02/2001');
Query OK, 5 rows affected (0.01 sec)
Records: 5 Duplicates: 0 Warnings: 0
mysql> INSERT INTO `invoice`
-> SELECT
-> NULL,
-> IF(`DueDate` REGEXP '[[:digit:]]{2}/[[:digit:]]{2}/[[:digit:]]{4}' AND
-> UNIX_TIMESTAMP(
-> STR_TO_DATE(`DueDate`, '%d/%m/%Y')
-> ) > 0,
-> STR_TO_DATE(`DueDate`, '%d/%m/%Y'),
-> NULL)
-> FROM `upload_invoice`;
Query OK, 5 rows affected (0.00 sec)
Records: 5 Duplicates: 0 Warnings: 0
mysql> SELECT `id`, `DueDate`
-> FROM `invoice`;
+----+------------+
| id | DueDate |
+----+------------+
| 1 | NULL |
| 2 | NULL |
| 3 | NULL |
| 4 | 2001-02-28 |
| 5 | NULL |
+----+------------+
5 rows in set (0.00 sec)
See db-fiddle.
I forgot I posted this question, but I solved the problem a while ago like this:
IF(`{$date}`!='',STR_TO_DATE(`{$date}`,'%d/%m/%Y'),null) as `{$date}`
Because that line is long and confusing, I made a function like this:
protected function strDate($date){
return "IF(`{$date}`!='',STR_TO_DATE(`{$date}`,'%d/%m/%Y'),null) as `{$date}`";
}
INSERT INTO `invoice` (`DueDate`)
SELECT
{$this->strDate('DueDate')}
FROM `upload_invoice`
I really forgot I posted this question. It seems like an eternity away, but this is how I solved the issue.
I am using a MySQL database to store a range of inputs that feed a model. I have a number of different dates that are stored as TIMESTAMP. However, some of the values can be hundreds of years in the future. When I look in the DB, they are stored as '0000-00-00 00:00:00' when the actual timestamp should be something like '2850-12-01 00:00:00'.
While searching on Google, I noticed that the maximum value is sometime in 2038. Has anyone found a work-around for longer-dated TIMESTAMPs?
You can convert them to DATETIME; it will store what you want. Compare:
MariaDB [test]> create table t (t timestamp, d datetime);
Query OK, 0 rows affected (0.59 sec)
MariaDB [test]> insert into t values ('2850-12-01 00:00:00','2850-12-01 00:00:00');
Query OK, 1 row affected, 1 warning (0.08 sec)
MariaDB [test]> select * from t;
+---------------------+---------------------+
| t | d |
+---------------------+---------------------+
| 0000-00-00 00:00:00 | 2850-12-01 00:00:00 |
+---------------------+---------------------+
1 row in set (0.00 sec)
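If you go that route, the fix is an ALTER on each affected column (the table and column names below are placeholders, not from the question). Bear in mind that values already saved as '0000-00-00 00:00:00' were truncated on insert and cannot be recovered by the type change; they will need to be re-inserted afterwards.
ALTER TABLE your_table MODIFY your_date_column DATETIME;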
I have a database that has an array of data stored in a JSON column. I need to find all rows that have a null value at a particular position in the JSON array. While pulling out the data with JSON_EXTRACT seemed trivial, none of my comparisons to null have worked; all of them claim the value is not null.
Here is the example code that should work as far as I can tell:
SELECT JSON_EXTRACT(`COLUMNS_HEADERS`, '$[1]') , (JSON_EXTRACT(`COLUMNS_HEADERS`, '$[1]') is null)
FROM ate.readings_columns_new;
The first few rows of my results table look like this:
null | 0
"INTERNALTEMPERATURE" | 0
"INPUT_VOLTAGE" | 0
null | 0
null | 0
"AH1" | 0
I have tried every comparison I can think of, and they all result in a 0:
(JSON_EXTRACT(`COLUMNS_HEADERS`, '$[1]') is null)
(JSON_EXTRACT(`COLUMNS_HEADERS`, '$[1]') <=> null)
ISNULL(JSON_EXTRACT(`COLUMNS_HEADERS`, '$[1]'))
(JSON_EXTRACT(`COLUMNS_HEADERS`, '$[1]') <=> 'null')
Is there some key to comparing null values pulled from a JSON_EXTRACT?
SELECT
JSON_EXTRACT(`COLUMNS_HEADERS`, '$[1]'),
(JSON_EXTRACT(`COLUMNS_HEADERS`, '$[1]') = CAST('null' AS JSON))
FROM ate.readings_columns_new;
or
SELECT
JSON_EXTRACT(`COLUMNS_HEADERS`, '$[1]'),
(JSON_TYPE(JSON_EXTRACT(`COLUMNS_HEADERS`, '$[1]')) = 'NULL')
FROM ate.readings_columns_new;
See the docs for JSON_TYPE.
A bit of a belated answer, but I just hit this problem and couldn't find anything reasonably documented. The solution I ended up using was the JSON_TYPE function, as 'abl' pointed out above.
The trick is to compare with the string 'NULL', not the bare null or NULL.
As a test, throw the following into a mysql prompt and play around with the values
(if using phpMyAdmin, don't forget to check 'show this query here again' and 'retain query box' - the universe is frustrating enough without losing edits...)
set @a='{"a":3,"b":null}';
select if(json_type(json_extract(@a,'$.b')) = 'NULL',1,0);
I ended up with the following.
mysql> set @a='{"a":3,"b":null}';
Query OK, 0 rows affected (0.00 sec)
mysql> select if(json_type(json_extract(@a,'$.b')) = 'NULL',1,0);
+----------------------------------------------------+
| if(json_type(json_extract(@a,'$.b')) = 'NULL',1,0) |
+----------------------------------------------------+
| 1 |
+----------------------------------------------------+
1 row in set (0.00 sec)
mysql> set @a='{"a":3,"b":1}';
Query OK, 0 rows affected (0.00 sec)
mysql> select if(json_type(json_extract(@a,'$.b')) = 'NULL',1,0);
+----------------------------------------------------+
| if(json_type(json_extract(@a,'$.b')) = 'NULL',1,0) |
+----------------------------------------------------+
| 0 |
+----------------------------------------------------+
1 row in set (0.00 sec)
And here are the bare bones of a stored procedure - which is what I needed it for - using IF ... THEN statements rather than the IF() function:
drop procedure if exists test;
delimiter $$
create procedure test(in x json)
begin
  if json_type(json_extract(x,'$.b')) = 'NULL' then
    select 1;
  else
    select 0;
  end if;
end$$
delimiter ;
mysql> call test('{"a":3,"b":1}');
+---+
| 0 |
+---+
| 0 |
+---+
1 row in set (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
mysql> call test('{"a":3,"b":null}');
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Well, I had a suspicion, and I found a workaround that confirms that a JSON null value is not the same as a MySQL NULL value.
I tried various ways to produce a comparable null, but the only one that worked was to extract a JSON null from a literal array, just like the value I'm attempting to check against:
SELECT JSON_EXTRACT(`COLUMNS_HEADERS`, '$[1]') , (JSON_EXTRACT(`COLUMNS_HEADERS`, '$[1]') = JSON_EXTRACT('[null]', '$[0]'))
FROM ate.readings_columns_new;
This seems like bad form, but was the only way I could get a value that evaluated as equal to the null values in my array.
Another trick is MySQL's NULLIF function:
SELECT COLUMNS_HEADERS->>"$[1]", NULLIF(COLUMNS_HEADERS->>"$[1]",'null') IS NULL
FROM ate.readings_columns_new;
(I'm also using ->>, which is shorthand for JSON_UNQUOTE(JSON_EXTRACT()).)
That way, querying a column containing {"id":1}, {"id":2}, {"id":null} and {"name":4} for the JSON path $.id will return 1, 2, NULL, NULL instead of 1, 2, null, NULL.
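A quick way to try the trick at a prompt, using a session variable instead of a column (so JSON_UNQUOTE(JSON_EXTRACT()) stands in for ->>, which only applies to column identifiers):
SET @j = '{"id": null}';
SELECT NULLIF(JSON_UNQUOTE(JSON_EXTRACT(@j, '$.id')), 'null') IS NULL;  -- returns 1
SET @j = '{"id": 2}';
SELECT NULLIF(JSON_UNQUOTE(JSON_EXTRACT(@j, '$.id')), 'null') IS NULL;  -- returns 0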