Getting MySQL 5.7 to show milliseconds, not microseconds - mysql

The MySQL 5.7 docs seem to imply that a column of type DATETIME(3) will store and display a datetime value with exactly three fractional-second digits (millisecond precision).
Link to docs
The relevant example from the docs:
mysql> CREATE TABLE fractest( c1 TIME(2), c2 DATETIME(2), c3 TIMESTAMP(2) );
mysql> INSERT INTO fractest VALUES
('17:51:04.777', '2018-09-08 17:51:04.777', '2018-09-08 17:51:04.777');
mysql> SELECT * FROM fractest;
+-------------+------------------------+------------------------+
| c1 | c2 | c3 |
+-------------+------------------------+------------------------+
| 17:51:04.78 | 2018-09-08 17:51:04.78 | 2018-09-08 17:51:04.78 |
+-------------+------------------------+------------------------+
In that example, c2 rounds to and displays exactly two fractional digits (.78).
When I try the same thing with a DATETIME(3) column, MySQL correctly truncates to three fractional-second digits, but still formats the value to six places. So the last three digits are always zero, yet it keeps displaying all the way down to the microsecond place.
+----------------------------+
| timestamp |
+----------------------------+
| 2018-04-12 14:08:19.296000 |
| 2018-04-13 14:08:22.312000 |
| 2018-04-14 14:08:25.914000 |
+----------------------------+
How can I replicate the behavior from the MySQL docs? If the field is DATETIME(3), I'd like it to only display to the third fractional second place:
+-------------------------+
| timestamp |
+-------------------------+
| 2018-04-12 14:08:19.296 |
| 2018-04-13 14:08:22.312 |
| 2018-04-14 14:08:25.914 |
+-------------------------+
I'd also prefer to have it be the default behavior, rather than having to call a formatting function on timestamp on every select.

This turned out to be a side effect of using the mycli MySQL command-line tool. It does not occur with the standard mysql client.
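For anyone who does want millisecond formatting regardless of which client is used, a minimal client-side sketch in Python (assuming the rows are fetched as datetime objects, e.g. via a MySQL connector) looks like this:

```python
from datetime import datetime

# A fetched DATETIME(3) value; the microsecond field holds the millisecond part.
ts = datetime(2018, 4, 12, 14, 8, 19, 296000)

# strftime's %f always emits six digits; dropping the last three
# leaves exactly the millisecond precision the column actually stores.
formatted = ts.strftime('%Y-%m-%d %H:%M:%S.%f')[:-3]
print(formatted)  # 2018-04-12 14:08:19.296
```

The same trimming idea works in any host language; only the `%f`-always-six-digits detail is Python-specific.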

Related

Mysql "ERROR 1292 (22007): Truncated incorrect time value: '2355:46:39.000000'" during insert

I am currently computing the time difference between one datetime column and the lagged variable of another datetime column from a smaller table.
Afterwards the result is inserted into a bigger, final table.
This is part of a procedure where I have a few smaller tables as CSV files; for each of them the lag difference has to be computed and loaded into the final table (the final table is roughly 20GB and the 12 smaller tables are roughly 2.5GB each).
I have done the separate inserting without the lagged variable several times before and everything was fine.
However, on table 6 of 12, somewhere along the way I now get the following error and I cannot figure out why:
ERROR 1292 (22007): Truncated incorrect time value: '2355:46:39.000000'
I can provide a test example which worked for the rest of the tables:
DROP TABLE IF EXISTS single_test;
CREATE TABLE single_test(
medallion VARCHAR(64),
hack_license VARCHAR(64),
pickup_datetime DATETIME,
dropoff_datetime DATETIME,
id INT NOT NULL,
PRIMARY KEY (id)
);
INSERT INTO single_test VALUES
('a', '1' , '2013-01-06 00:18:35','2013-01-06 02:10:33',1),
('a', '1' , '2013-01-06 02:40:58','2013-01-06 03:40:01',2),
('b', '1' , '2013-01-06 04:07:21','2013-01-06 05:00:41',3),
('c', '1' , '2013-01-07 13:12:08','2013-01-07 13:32:27',4),
('a', '2', '2013-01-06 03:50:30','2013-01-06 04:22:13',5),
('a', '2', '2013-01-06 04:41:23','2013-01-06 04:57:04',6),
('d', '2', '2013-01-07 12:22:56','2013-01-07 13:02:14',7),
('d', '3', '2013-01-07 13:03:24','2013-01-07 15:47:31',8)
;
CREATE TABLE final_test(
medallion VARCHAR(64),
hack_license VARCHAR(64),
pickup_datetime DATETIME,
dropoff_datetime DATETIME,
id INT NOT NULL,
delta VARCHAR(20),
current_dropoff DATETIME,
current_hack VARCHAR(64),
PRIMARY KEY (id)
);
SET @quot = '000-00-00 19:19:19';
SET @current_hack = '';
INSERT INTO final_test
SELECT medallion, hack_license, pickup_datetime, dropoff_datetime, id,
IF(@current_hack = hack_license, TIMEDIFF(pickup_datetime, @quot), NULL) AS delta,
@quot := dropoff_datetime AS current_dropoff, @current_hack := hack_license
FROM single_test ORDER BY hack_license, pickup_datetime;
The result looks something like this:
SELECT * FROM final_test;
+-----------+--------------+---------------------+---------------------+----+-----------------+---------------------+--------------+
| medallion | hack_license | pickup_datetime | dropoff_datetime | id | delta | current_dropoff | current_hack |
+-----------+--------------+---------------------+---------------------+----+-----------------+---------------------+--------------+
| a | 1 | 2013-01-06 00:18:35 | 2013-01-06 02:10:33 | 1 | NULL | 2013-01-06 02:10:33 | 1 |
| a | 1 | 2013-01-06 02:40:58 | 2013-01-06 03:40:01 | 2 | 00:30:25.000000 | 2013-01-06 03:40:01 | 1 |
| b | 1 | 2013-01-06 04:07:21 | 2013-01-06 05:00:41 | 3 | 00:27:20.000000 | 2013-01-06 05:00:41 | 1 |
| c | 1 | 2013-01-07 13:12:08 | 2013-01-07 13:32:27 | 4 | 32:11:27.000000 | 2013-01-07 13:32:27 | 1 |
| a | 2 | 2013-01-06 03:50:30 | 2013-01-06 04:22:13 | 5 | NULL | 2013-01-06 04:22:13 | 2 |
| a | 2 | 2013-01-06 04:41:23 | 2013-01-06 04:57:04 | 6 | 00:19:10.000000 | 2013-01-06 04:57:04 | 2 |
| d | 2 | 2013-01-07 12:22:56 | 2013-01-07 13:02:14 | 7 | 31:25:52.000000 | 2013-01-07 13:02:14 | 2 |
| d | 3 | 2013-01-07 13:03:24 | 2013-01-07 15:47:31 | 8 | NULL | 2013-01-07 15:47:31 | 3 |
+-----------+--------------+---------------------+---------------------+----+-----------------+---------------------+--------------+
8 rows in set (0,00 sec)
In contrast, the error message does not make much sense to me, since I would expect TIMEDIFF to truncate any invalid input:
# Extremely Large difference
SELECT TIMEDIFF("2013-01-01 19:00:00","1900-01-01 19:00:00");
+-------------------------------------------------------+
| TIMEDIFF("2013-01-01 19:00:00","1900-01-01 19:00:00") |
+-------------------------------------------------------+
| 838:59:59 |
+-------------------------------------------------------+
1 row in set, 1 warning (0,00 sec)
# Invalid/unrealistic datetime format due to too-high/too-low values
SELECT TIMEDIFF("2013-01-01 19:00:00","000-00-00 19:19:19");
+------------------------------------------------------+
| TIMEDIFF("2013-01-01 19:00:00","000-00-00 19:19:19") |
+------------------------------------------------------+
| 838:59:59 |
+------------------------------------------------------+
1 row in set, 1 warning (0,00 sec)
# Invalid/unrealistic datetime format due to a character in the value
SELECT TIMEDIFF("2013-01-01 19:00:00","000-00-00T 19:19:19");
+-------------------------------------------------------+
| TIMEDIFF("2013-01-01 19:00:00","000-00-00T 19:19:19") |
+-------------------------------------------------------+
| NULL |
+-------------------------------------------------------+
1 row in set, 1 warning (0,00 sec)
I am working with MySQL 5.7.
I have also searched the smaller tables for stray alphabetic characters but found nothing.
Best Regards
PS: I am aware of this SO thread, but it didn't provide any help Error Code: 1292. Truncated incorrect time value
The issue can be reproduced with the following script:
create table test(
tdiff varchar(20)
);
set @dt1 = '1900-01-01 19:00:00';
set @dt2 = '2013-01-01 19:00:00';
select TIMEDIFF(@dt2, @dt1);
insert into test (tdiff) select TIMEDIFF(@dt2, @dt1);
While the SELECT statement returns 838:59:59, the INSERT statement with the same expression will raise an error:
Error: ER_TRUNCATED_WRONG_VALUE: Truncated incorrect time value:
'990552:00:00'
You will have similar problems with queries like
insert into test (tdiff) select cast('abc' as char(2));
or
insert into test (tdiff) select '9999-12-31' + interval 1 day;
while the corresponding SELECT statements would return ab and NULL without errors.
The reason for the errors is the STRICT_TRANS_TABLES mode. One can argue whether that behavior makes sense, but I doubt it will be changed.
So what can you do?
1. Use INSERT IGNORE
insert ignore into test (tdiff) select TIMEDIFF(@dt2, @dt1);
Using IGNORE after INSERT converts those errors to warnings. This seems to be the simplest way.
2. Disable STRICT_TRANS_TABLES mode
You can disable the STRICT_TRANS_TABLES mode just for one statement:
set @old_sql_mode = @@sql_mode;
set session sql_mode = replace(@@sql_mode, 'STRICT_TRANS_TABLES', '');
<your INSERT statement here>;
set session sql_mode = @old_sql_mode;
3. Use a conditional expression
Since the valid TIME range is from -838:59:59 to +838:59:59, we can check whether the absolute difference in hours is less than 839; otherwise return some other value:
insert into test (tdiff) select
  case when abs(timestampdiff(hour, @dt2, @dt1)) < 839
    then TIMEDIFF(@dt2, @dt1)
    else 'out of range'
  end;
4. Save seconds instead of time
This would be my preferred solution. Use TIMESTAMPDIFF() to get the difference in seconds:
insert into test (tdiff) select timestampdiff(second, @dt1, @dt2);
Note that TIMESTAMPDIFF() uses the opposite parameter order to TIMEDIFF(): the smaller DATETIME value should come first if you want a positive result.
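As a sanity check on why the TIME result overflows: the 113-year span in the reproduction script works out to exactly the value quoted in the error message. A quick Python sketch (arithmetic only, not MySQL itself):

```python
from datetime import datetime

dt1 = datetime(1900, 1, 1, 19, 0, 0)
dt2 = datetime(2013, 1, 1, 19, 0, 0)

seconds = int((dt2 - dt1).total_seconds())
hours = seconds // 3600

print(hours)    # 990552 -- the '990552:00:00' from the error message
print(seconds)  # 3565987200 -- what timestampdiff(second, ...) would store
```

So the true difference is 990552 hours, far beyond the 838:59:59 cap that TIME can represent, which is what the strict-mode INSERT trips over.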
From the Official MySQL docs:
The DATETIME type is used for values that contain both date and time
parts. MySQL retrieves and displays DATETIME values in 'YYYY-MM-DD
hh:mm:ss' format. The supported range is '1000-01-01 00:00:00' to
'9999-12-31 23:59:59'.
2355:46:39.000000 is clearly outside the supported range of 00:00:00 - 23:59:59.
+-------------------------------------------------------+
| TIMEDIFF("2013-01-01 19:00:00","1900-01-01 19:00:00") |
+-------------------------------------------------------+
| 838:59:59                                             |
+-------------------------------------------------------+
If your expected result from this query is 00:00:00, not 838:59:59, try this instead:
TIMEDIFF(TIME("2013-01-01 19:00:00"),TIME("1900-01-01 19:00:00"));
Source: https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_time

How should I construct a database to store a lot of SHA1 data

I'm having trouble constructing a database to store a lot of SHA1 data and efficiently return results.
I will admit SQL is not my strongest skill, but as an exercise I am trying to use the data from https://haveibeenpwned.com/Passwords, a site which returns results pretty quickly.
This is my data:
mysql> describe pwnd;
+----------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| pwndpass | binary(20) | NO | | NULL | |
+----------+------------------+------+-----+---------+----------------+
mysql> select id, hex(pwndpass) from pwnd order by id desc limit 10;
+-----------+------------------------------------------+
| id | hex(pwndpass) |
+-----------+------------------------------------------+
| 306259512 | FFFFFFFEE791CBAC0F6305CAF0CEE06BBE131160 |
| 306259511 | FFFFFFF8A0382AA9C8D9536EFBA77F261815334D |
| 306259510 | FFFFFFF1A63ACC70BEA924C5DBABEE4B9B18C82D |
| 306259509 | FFFFFFE3C3C05FCB0B211FD0C23404F75E397E8F |
| 306259508 | FFFFFFD691D669D3364161E05538A6E81E80B7A3 |
| 306259507 | FFFFFFCC6BD39537AB7398B59CEC917C66A496EB |
| 306259506 | FFFFFFBFAD0B653BDAC698485C6D105F3C3682B2 |
| 306259505 | FFFFFFBBFC923A29A3B4931B63684CAAE48EAC4F |
| 306259504 | FFFFFFB58E389A0FB9A27D153798956187B1B786 |
| 306259503 | FFFFFFB54953F45EA030FF13619B930C96A9C0E3 |
+-----------+------------------------------------------+
10 rows in set (0.01 sec)
My question relates to quickly finding entries, as a lookup currently takes over 6 minutes:
mysql> select hex(pwndpass) from pwnd where hex(pwndpass) = '0000000A1D4B746FAA3FD526FF6D5BC8052FDB38';
+------------------------------------------+
| hex(pwndpass) |
+------------------------------------------+
| 0000000A1D4B746FAA3FD526FF6D5BC8052FDB38 |
+------------------------------------------+
1 row in set (6 min 31.82 sec)
Do I have the correct data types? Searching for how to store SHA1 data, a BINARY(20) field is what's advised, but I'm not sure how to optimize it for searching.
My MySQL install is a clean TurnKey VM (https://www.turnkeylinux.org/mysql); I have not adjusted any settings other than giving the VM more disk space.
The two most obvious tips are:
Create an index on the column.
Don't convert every single row to hexadecimal on every search:
select hex(pwndpass)
from pwnd
where hex(pwndpass) = '0000000A1D4B746FAA3FD526FF6D5BC8052FDB38';
-- ^^^ This is forcing MySQL to convert every hash stored from binary to hexadecimal
-- so it can determine whether there's a match
In fact, you don't even need hexadecimal at all, save for display purposes:
select id, hex(pwndpass) -- This is fine, will just convert matching rows
from pwnd
where pwndpass = ?
... where ? is a placeholder that, in your client language, corresponds to a binary string.
If you need to run the query right in the command line, you can also use a hexadecimal literal:
select id, hex(pwndpass) -- This is fine, will just convert matching rows
from pwnd
where pwndpass = 0x0000000A1D4B746FAA3FD526FF6D5BC8052FDB38
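To make the placeholder idea concrete, here is a small Python sketch (the variable names are illustrative, not from the question) of producing the raw 20-byte value a BINARY(20) column expects, instead of a 40-character hex string:

```python
import hashlib

password = 'password'  # example input only

# .digest() yields the raw 20 bytes -- this is what you bind to the ? placeholder.
digest = hashlib.sha1(password.encode('utf-8')).digest()

# .hex() is only needed for display, mirroring HEX(pwndpass) in SQL.
hex_form = digest.hex().upper()

print(len(digest))  # 20 -- matches BINARY(20)
print(hex_form)     # 5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8
```

Binding `digest` directly lets the indexed `pwndpass = ?` comparison run on binary values, with no per-row HEX() conversion.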

Truncate column names in SELECT (MySQL client)

When I'm looking into new databases to explore what is there, usually I get tables with long column names but short contents, like:
mysql> select * from Seat limit 2;
+---------+---------------------+---------------+------------------+--------------+---------------+--------------+-------------+--------------+-------------+---------+---------+----------+------------+---------------+------------------+-----------+-------------+---------------+-----------------+---------------------+-------------------+-----------------+
| seat_id | seat_created | seat_event_id | seat_category_id | seat_user_id | seat_order_id | seat_item_id | seat_row_nr | seat_zone_id | seat_pmp_id | seat_nr | seat_ts | seat_sid | seat_price | seat_discount | seat_discount_id | seat_code | seat_status | seat_sales_id | seat_checked_by | seat_checked_date | seat_old_order_id | seat_old_status |
+---------+---------------------+---------------+------------------+--------------+---------------+--------------+-------------+--------------+-------------+---------+---------+----------+------------+---------------+------------------+-----------+-------------+---------------+-----------------+---------------------+-------------------+-----------------+
| 4897 | 2016-09-01 00:05:54 | 330 | 331 | NULL | NULL | NULL | 0 | NULL | NULL | 0 | NULL | NULL | NULL | 0.00 | NULL | NULL | free | NULL | NULL | 0000-00-00 00:00:00 | NULL | NULL |
| 4898 | 2016-09-01 00:05:54 | 330 | 331 | NULL | NULL | NULL | 0 | NULL | NULL | 0 | NULL | NULL | NULL | 0.00 | NULL | NULL | free | NULL | NULL | 0000-00-00 00:00:00 | NULL | NULL |
+---------+---------------------+---------------+------------------+--------------+---------------+--------------+-------------+--------------+-------------+---------+---------+----------+------------+---------------+------------------+-----------+-------------+---------------+-----------------+---------------------+-------------------+-----------------+
Since the header is longer than the contents of each row, the output is misaligned and hard to read, especially when you are searching for little clues like fields that aren't being used.
Is there any way to tell the mysql client to truncate column names automatically, for example to a maximum of 10 characters? The first 10 characters are usually enough to know which column they refer to.
Of course I could establish column aliases for that with AS, but if there are too many columns and you want to do a fast exploration, that would take too long for each table.
Another solution would be to tell mysql to remove a prefix such as seat_ from each column name (of course, the prefix to strip would change per table).
I don't think there's any way to do that automatically. Some options are:
1) Use a graphical UI such as PhpMyAdmin to view the table contents. These typically allow you to adjust column widths.
2) End the query with \G instead of ;:
mysql> SELECT * FROM seat LIMIT 2\G
This will display the columns vertically instead of horizontally:
seat_id: 4897
seat_created: 2016-09-01 00:05:54
seat_event_id: 330
...
I often use the latter for tables with lots of columns because reading the horizontal format can be difficult, especially when it wraps around on the terminal.
3) Use the less pager in a mode that doesn't wrap lines. You can then scroll left and right with the arrow keys.
mysql> pager less -S
See How to better display MySQL table on Terminal
You can skip the column names completely by running the MySQL client with the -N or --skip-column-names option. Then the width of your columns will be determined by the widest data, not the column name. But there would be no row for the column names.
You can also use column aliases to set your own column names, but you'd have to enter these yourself manually.

Disable scientific notation in MySQL command-line client?

I have a MySQL table with many numeric columns (some INT, some FLOAT). I would like to query it with the MySQL command-line client (specifically, mysql Ver 14.14 Distrib 5.1.41, for debian-linux-gnu (x86_64) using readline 6.1), like so:
SELECT * FROM table WHERE foo;
Unfortunately, if the value of any numeric field exceeds 10^6, this client displays the result in scientific notation, which makes reading the results difficult.
I could correct the problem by FORMAT-ing each of the fields in my query, but there are many of them and many tables I would like to query. Instead I'm hoping to find a client variable or flag I can set to disable scientific notation for all queries.
I have not been able to find one in the --help or the man page, nor searching Google or this site. Instead all I find are discussions of preserving/removing scientific notation when using <insert-programming-language>'s MySQL API.
Thank you for any tips.
::edit::
Here's an example table ...
mysql> desc foo;
+--------------+-------------+------+-----+-------------------+
| Field | Type | Null | Key | Default |
+--------------+-------------+------+-----+-------------------+
| date | date | NO | PRI | NULL |
| name | varchar(20) | NO | PRI | NULL |
| val | float | NO | | NULL |
| last_updated | timestamp | NO | | CURRENT_TIMESTAMP |
+--------------+-------------+------+-----+-------------------+
and some example values ...
mysql> select * from foo where date='20120207';
+------------+--------+--------------+---------------------+
| date | name | val | last_updated |
+------------+--------+--------------+---------------------+
| 2012-02-07 | A | 88779.5 | 2012-02-07 13:38:14 |
| 2012-02-07 | B | 1.00254e+06 | 2012-02-07 13:38:14 |
| 2012-02-07 | C | 78706.5 | 2012-02-07 13:38:15 |
+------------+--------+--------------+---------------------+
Now, the actual values I loaded into the third field are:
88779.5, 1002539.25, 78706.5390625
and they can be seen exactly if I manipulate the value:
mysql> select date, name, ROUND(val, 10), last_updated from foo where ...
+------------+---+--------------------+---------------------+
| 2012-02-07 | A |   88779.5000000000 | 2012-02-07 13:38:14 |
| 2012-02-07 | B | 1002539.2500000000 | 2012-02-07 13:38:14 |
| 2012-02-07 | C |   78706.5390625000 | 2012-02-07 13:38:15 |
+------------+---+--------------------+---------------------+
Something in the client seems to be enforcing that I only be allowed to see six significant figures, even though there are more in the table.
If a query such as
mysql> select ROUND(*, 2) from foo ...
were possible, that would be great! Otherwise I can't really take the time to individually wrap 100 column names in "ROUND()" whenever I need to inspect some data.
Interestingly, I occasionally use a phpMyAdmin interface to browse the contents of some of these tables, and that interface also has this 6 significant figure limitation. So it's not limited to just the CLI.
Well, after reading the documentation more thoroughly, I still can't see any reason why a client would limit itself to displaying only 6 sig figs from a FLOAT (especially when the table itself is definitely storing more).
Nonetheless, an acceptable solution (for this weary user) is to change all my tables to use DECIMAL(16,4) instead of FLOAT. Unfortunately, this makes all my numbers show up with 4 decimal places (even if they're all '0'). But at least all numbers have the same width now, and my client never displays them in scientific notation or limits the number of sig figs in its output.
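The six-figure display is a formatting choice, not a storage limit: MySQL's FLOAT is a 4-byte IEEE single, and a value like 78706.5390625 round-trips through that format exactly. A short Python sketch illustrating both halves of this:

```python
import struct

val = 78706.5390625

# Round-trip through a 4-byte IEEE single -- the storage format of MySQL FLOAT.
stored = struct.unpack('<f', struct.pack('<f', val))[0]
print(stored == val)        # True: the column really does hold the exact value

# Formatting to six significant figures reproduces the client's display.
print(f'{val:.6g}')         # 78706.5
print(f'{1002539.25:.6g}')  # 1.00254e+06, just like the query output above
```

This is why ROUND(val, 10) recovers all the digits: the data was never lost, only rounded for display.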
Wouldn't the CAST function allow you to request that the values of a certain field are returned as DECIMAL? Not an expert and haven't tried it, but that would be the first thing I'd try.
I know this is old, but this helped me: I used a view.
create view foo2 as select date, name, ROUND(val, 10) val, last_updated from foo
Then just do your queries on foo2. This also works in phpMyAdmin.

Set cumulative value when faced with overflowing Int16

I have cumulative input values that start life as smallints.
I read these values from an Access database and aggregate them into a MySQL database.
So I'm faced with input values of type smallint that are cumulative, thus always increasing.
 Input    Required output
---------------------------------
     0              0
 10000          10000
 32000          32000
-31536          34000    // overflow in the input
-11536          54000
  8464          74000
I process these values by inserting the raw data into a BLACKHOLE table; in the trigger on that table I transform the data before inserting it into the actual table.
I know how to store the previous input and output, or if there is none, how to select the latest (and highest) inserted value.
But what's the easiest/fastest way to deal with the overflow, so I get the correct output.
Given a table named test with a primary key called id and a column named value, just do this:
SELECT
    id,
    test.value,
    (SELECT SUM(value) FROM test AS a WHERE a.id <= test.id) AS output
FROM test;
This would be the output:
------------------------
| id | value | output |
------------------------
| 1 | 10000 | 10000 |
| 2 | 32000 | 42000 |
| 3 | -31536 | 10464 |
| 4 | -11536 | -1072 |
| 5 | 8464 | 7392 |
------------------------
Hope this helps.
If it doesn't work, just convert your data to INT (or BIGINT for lots of data). It does not hurt, and memory is cheap these days.
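If the goal is the "Required output" column from the question (unwrapping the 16-bit overflow, rather than summing the raw readings), the standard trick is to accumulate successive differences modulo 65536. Here is a Python sketch of that idea; a trigger could mirror it in SQL, e.g. with MOD(diff + 65536, 65536), since this assumes each true increment is smaller than 65536:

```python
def unwrap_int16(values):
    """Rebuild an always-increasing series from SMALLINT readings that wrap."""
    total, prev, out = 0, None, []
    for v in values:
        if prev is not None:
            # Taking the difference modulo 2**16 cancels the wrap-around,
            # provided each real step is below 65536.
            total += (v - prev) % 65536
        prev = v
        out.append(total)
    return out

raw = [0, 10000, 32000, -31536, -11536, 8464]
print(unwrap_int16(raw))  # [0, 10000, 32000, 34000, 54000, 74000]
```

This reproduces the question's expected output exactly, including the row where the raw input overflows to -31536.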