My primary key is an integer.
I get the error #1062 - Duplicate entry '4294967295' for key 'PRIMARY' on next insert.
Reason: the maximum value of the column type has obviously been reached.
Is there any way I can find all tables whose integer columns have reached, or are close to reaching, their maximum values, so I can avoid such errors?
Query:
SELECT IF(MAX(id) < 4294967295, 'true', 'false') AS insert_flag
FROM Table;
Note: the query above checks whether the maximum id value is less
than the INT UNSIGNED limit (4294967295). It returns true or false in the
insert_flag column, which you can check in your code before inserting.
If you are talking about id as an AUTO_INCREMENT column, then you can get this information for all tables with SHOW TABLE STATUS (run HELP SHOW TABLE STATUS for more details).
e.g.
mysql> show table status \G
*************************** 1. row ***************************
Name: test_table
Engine: InnoDB
Version: 10
Row_format: Compact
Rows: 0
Avg_row_length: 0
Data_length: 16384
Max_data_length: 0
Index_length: 65536
Data_free: 0
Auto_increment: 2
Create_time: 2014-11-07 13:17:16
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment:
1 row in set (0.00 sec)
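The same Auto_increment counters can be pulled for every table at once from information_schema, which is easier to filter than the SHOW TABLE STATUS output (a sketch; adjust the excluded schemas to taste):

```sql
-- Auto_increment counters for all user tables, highest first
SELECT TABLE_SCHEMA, TABLE_NAME, AUTO_INCREMENT
FROM information_schema.TABLES
WHERE AUTO_INCREMENT IS NOT NULL
  AND TABLE_SCHEMA NOT IN ('information_schema','mysql','performance_schema')
ORDER BY AUTO_INCREMENT DESC;
```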
Here's a way to check all the auto-increment columns to see how close they are to their maximum:
SELECT c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME,
  (@max_auto_increment_size := CASE
    WHEN c.COLUMN_TYPE LIKE 'tinyint% unsigned' THEN 255
    WHEN c.COLUMN_TYPE LIKE 'tinyint%' THEN 127
    WHEN c.COLUMN_TYPE LIKE 'smallint% unsigned' THEN 65535
    WHEN c.COLUMN_TYPE LIKE 'smallint%' THEN 32767
    WHEN c.COLUMN_TYPE LIKE 'mediumint% unsigned' THEN 16777215
    WHEN c.COLUMN_TYPE LIKE 'mediumint%' THEN 8388607
    WHEN c.COLUMN_TYPE LIKE 'int% unsigned' THEN 4294967295
    WHEN c.COLUMN_TYPE LIKE 'int%' THEN 2147483647
    WHEN c.COLUMN_TYPE LIKE 'bigint% unsigned' THEN 18446744073709551615
    WHEN c.COLUMN_TYPE LIKE 'bigint%' THEN 9223372036854775807
    ELSE 127
  END) AS MAX_AUTO_INCREMENT_SIZE,
  ROUND(@max_auto_increment_size - t.AUTO_INCREMENT) AS Headroom,
  (CASE
    WHEN ROUND(@max_auto_increment_size - t.AUTO_INCREMENT) < 1000 THEN 'CRITICAL'
    WHEN ROUND(@max_auto_increment_size - t.AUTO_INCREMENT) < 10000 THEN 'LOW'
    ELSE 'OK'
  END) AS Remark
FROM information_schema.columns c
JOIN information_schema.tables t
  ON (t.TABLE_SCHEMA = c.TABLE_SCHEMA AND t.TABLE_NAME = c.TABLE_NAME)
WHERE c.TABLE_SCHEMA NOT IN ('information_schema','mysql','performance_schema')
  AND c.EXTRA = 'auto_increment'
ORDER BY c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME;
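If a raw headroom number is hard to read at a glance, the same idea can be expressed as a percentage used. A minimal sketch, assuming all your auto-increment columns are INT UNSIGNED (swap in the constant for your actual column type):

```sql
-- Percentage of the INT UNSIGNED key space already consumed, per table
SELECT TABLE_SCHEMA, TABLE_NAME, AUTO_INCREMENT,
       ROUND(100 * AUTO_INCREMENT / 4294967295, 2) AS pct_used
FROM information_schema.TABLES
WHERE AUTO_INCREMENT IS NOT NULL
ORDER BY pct_used DESC;
```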
The figures reported by MySQL's COUNT(*) and by information_schema.TABLES are totally different.
mysql> SELECT * FROM information_schema.TABLES WHERE TABLE_NAME = 'my_table'\G
*************************** 1. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: my_db
TABLE_NAME: my_table
TABLE_TYPE: BASE TABLE
ENGINE: InnoDB
VERSION: 10
ROW_FORMAT: Compact
TABLE_ROWS: 31016698
AVG_ROW_LENGTH: 399
DATA_LENGTH: 12378439680
MAX_DATA_LENGTH: 0
INDEX_LENGTH: 4863262720
DATA_FREE: 5242880
AUTO_INCREMENT: NULL
CREATE_TIME: 2016-06-14 18:54:24
UPDATE_TIME: NULL
CHECK_TIME: NULL
TABLE_COLLATION: utf8_general_ci
CHECKSUM: NULL
CREATE_OPTIONS:
TABLE_COMMENT:
1 row in set (0.00 sec)
mysql> select count(*) from my_table;
+----------+
| count(*) |
+----------+
| 46406095 |
+----------+
1 row in set (27.45 sec)
Note that there are 31,016,698 rows according to information_schema; COUNT(*), however, reports 46,406,095 rows.
Now which one can be trusted? And why are these stats so different?
I'm using MySQL server v5.6.30.
The count in that metadata, similar to the output of SHOW TABLE STATUS, cannot be trusted. It's often off by a factor of 100 or more, either over or under.
The reason is that the storage engine does not know how many rows are in the table without actually counting them. Under heavy load there can be a lot of contention on the primary key index, which makes pinning down an exact value an expensive computation.
This approximation is computed based on the total data length divided by the average row length. It's rarely even close to what it should be unless your records are all about the same length and you haven't been deleting a lot of them.
The only value that can be truly trusted is COUNT(*) but that operation can take a long time to complete, so be warned.
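If your application needs an exact count frequently, one common workaround (a sketch only; the table and trigger names here are hypothetical) is to maintain a counter table with triggers, paying a small cost on each write instead of a full scan on each count:

```sql
-- Hypothetical counter table kept in sync by triggers
CREATE TABLE my_table_rowcount (cnt BIGINT UNSIGNED NOT NULL);
INSERT INTO my_table_rowcount SELECT COUNT(*) FROM my_table;

CREATE TRIGGER my_table_count_ins AFTER INSERT ON my_table
  FOR EACH ROW UPDATE my_table_rowcount SET cnt = cnt + 1;
CREATE TRIGGER my_table_count_del AFTER DELETE ON my_table
  FOR EACH ROW UPDATE my_table_rowcount SET cnt = cnt - 1;

-- Reading the count is now a single-row lookup
SELECT cnt FROM my_table_rowcount;
```

Be aware that the single counter row becomes a point of contention under heavily concurrent writes.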
I am trying to optimize my database by adjusting indices.
SHOW INDEXES FROM my_table
outputs
Table ... Key_name ... Column_name ... Cardinality ...
---------------------------------------------------------------------
my_table ... idx_field1 ... field1 ... 1 ...
while
SELECT field1 FROM my_table PROCEDURE ANALYSE()\G
outputs
*************************** 1. row ***************************
Field_name: my_db.my_table.field1
Min_value: ow
Max_value: rt
Min_length: 2
Max_length: 2
Empties_or_zeros: 0
Nulls: 0
Avg_value_or_avg_length: 2.0000
Std: NULL
Optimal_fieldtype: ENUM('ow','rt') NOT NULL
1 row in set (0.26 sec)
i.e., the reported cardinality (1) is not equal to the number of unique values (2). Why?
PS. I did perform
analyze table my_table
before running the queries.
The "cardinality" in SHOW INDEXES is an approximation. ANALYSE() gets the exact value because it is derived from an exhaustive scan of the table.
The former is used for deciding how to optimize a query. Generally, a low cardinality (whether 1 or 2) implies that an index on that field is not worth using.
Where are you headed with this question?
The following query returns 0.000 when I expected it to return 0.
SELECT IFNULL(TRUNCATE(NULL, 3), 0) FROM DUAL
Why is that?
Breaking it apart works as expected and as described in the TRUNCATE function documentation and the IFNULL docs:
SELECT TRUNCATE(NULL, 3) FROM DUAL
returns null.
SELECT IFNULL(null, 0) FROM DUAL
This returns 0. So why do I get 0.000 when nesting them?
The type of TRUNCATE(NULL,n) is DOUBLE. This can be seen by running mysql with the --column-type-info option:
$ mysql -u root --column-type-info testdb
mysql> SELECT(TRUNCATE(NULL,3));
Field 1: `(TRUNCATE(NULL,3))`
Catalog: `def`
Database: ``
Table: ``
Org_table: ``
Type: DOUBLE
Collation: binary (63)
Length: 20
Max_length: 0
Decimals: 3
Flags: BINARY NUM
+--------------------+
| (TRUNCATE(NULL,3)) |
+--------------------+
| NULL |
+--------------------+
1 row in set (0,00 sec)
According to the IFNULL documentation page:
The default result value of IFNULL(expr1,expr2) is the more “general” of the two expressions, in the order STRING, REAL, or INTEGER
Therefore your result is 0.000, the 0 as DOUBLE truncated to 3 decimal places.
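If you want the plain integer 0 back, a sketch of one workaround is to cast the NULL to an integer type before IFNULL sees it:

```sql
-- CAST makes the NULL integer-typed, so IFNULL yields 0 instead of 0.000
SELECT IFNULL(CAST(TRUNCATE(NULL, 3) AS SIGNED), 0);
```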
Your expectation is wrong. TRUNCATE(NULL, 3) returns a value typed with three decimal places. Although the value is NULL, NULL still has a type associated with it. That type is integer by default, but this is not a default situation: here the type comes from TRUNCATE().
So the 0 is converted to a value with three decimal places.
EDIT:
To understand what I mean, consider this code:
create table t as
select truncate(NULL, 3) as x;
describe t;
You will see that the column has a precision of "3". The NULL value is not typeless. You can see this on SQL Fiddle.
I cannot understand the purpose of this option.
The signed TINYINT data type can store integer values between -128 and 127.
mysql> create table b (i tinyint(1));
mysql> insert into b values (42);
mysql> select * from b;
+------+
| i |
+------+
| 42 |
+------+
Data-wise, tinyint(1), tinyint(2), tinyint(3), etc. are all exactly the same. They all cover the range -128 to 127 for SIGNED, or 0 to 255 for UNSIGNED. As other answers noted, the number in parentheses is merely a display-width hint.
You might want to note, though, that application-wise things may look different. Here, tinyint(1) can take on a special meaning. For example, Connector/J (the Java connector) treats tinyint(1) as a boolean and, instead of returning a numerical result to the application, converts values to true and false. This can be changed via the tinyInt1isBit=false connection parameter.
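The special status of tinyint(1) also shows up on the server side: BOOL and BOOLEAN are simply aliases for TINYINT(1), which you can verify yourself:

```sql
-- BOOL/BOOLEAN are aliases for TINYINT(1)
CREATE TABLE opts (flag BOOLEAN);
SHOW CREATE TABLE opts;  -- the column definition comes back as `flag` tinyint(1)
```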
A tinyint(1) can hold numbers in the range -128 to 127, due to the datatype being 8 bits (1 byte) - obviously an unsigned tinyint can hold values 0-255.
It will silently truncate out of range values:
mysql> create table a
-> (
-> ttt tinyint(1)
-> );
Query OK, 0 rows affected (0.01 sec)
mysql> insert into a values ( 127 );
Query OK, 1 row affected (0.00 sec)
mysql> insert into a values ( -128 );
Query OK, 1 row affected (0.00 sec)
mysql> insert into a values ( 128 );
Query OK, 1 row affected, 1 warning (0.00 sec)
mysql> insert into a values ( -129 );
Query OK, 1 row affected, 1 warning (0.00 sec)
mysql> select * from a;
+------+
| ttt |
+------+
| 127 |
| -128 |
| 127 |
| -128 |
+------+
4 rows in set (0.00 sec)
mysql>
... unless you change the sql_mode or change the server config:
mysql> set sql_mode=STRICT_ALL_TABLES;
Query OK, 0 rows affected (0.00 sec)
mysql> insert into a values ( -129 );
ERROR 1264 (22003): Out of range value for column 'ttt' at row 1
mysql>
The value used in the DDL for the datatype (eg: tinyint(1)) is, as you suspected, the display width. However, it is optional and clients don't have to use it. The standard MySQL client doesn't use it, for example.
https://dev.mysql.com/doc/refman/5.1/en/integer-types.html
https://dev.mysql.com/doc/refman/5.0/en/numeric-type-overview.html
MySql: Tinyint (2) vs tinyint(1) - what is the difference?
The length parameter for numeric data types only affects the display width, not the range of values that can be stored.
TINYINT -128 to 127 (or 0-255 unsigned)
SMALLINT -32768 to 32767 (or 0-65535 unsigned)
MEDIUMINT -8388608 to 8388607 (or 0-16777215 unsigned)
INT -2147483648 to 2147483647 (or 0-4294967295 unsigned)
BIGINT -9223372036854775808 to 9223372036854775807 (or 0-18446744073709551615 unsigned)
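The display width only becomes visible when combined with ZEROFILL; otherwise most clients ignore it entirely. A small demonstration:

```sql
-- The (4) pads the displayed value with leading zeros; the stored value is unchanged
CREATE TABLE w (a INT(4) ZEROFILL);
INSERT INTO w VALUES (42);
SELECT a FROM w;  -- displays 0042
```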
I have a table with a row count of 48,769,914. The problem is the bogus information reported when querying the database, e.g. the data_length. Any ideas on how to correct this misbehavior?
mysql> show table status like "events"\G
*************************** 1. row ***************************
Name: events
Engine: InnoDB
Version: 10
Row_format: Compact
Rows: 0
Avg_row_length: 0
Data_length: 16384
Max_data_length: 0
Index_length: 32768
Data_free: 7405043712
Auto_increment: 59816602
Create_time: 2012-06-05 05:12:37
Update_time: NULL
Check_time: NULL
Collation: utf8_general_ci
Checksum: NULL
Create_options:
Comment:
1 row in set (0.88 sec)
exact count:
mysql> select count(id) from events;
+-----------+
| count(id) |
+-----------+
| 48769914 |
+-----------+
1 row in set (5 min 37.67 sec)
Update: The status information makes it look like the table is empty: zero rows, zero average row length, and basically no data. How can I get MySQL to show correct estimates for this table?
The InnoDB row count is not precise, because InnoDB does not keep track of the record count internally; it can only estimate it from the amount of space allocated in the tablespace.
See the InnoDB restrictions section in the manual for more information.
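Running ANALYZE TABLE refreshes the statistics; the counts remain estimates, but they usually land much closer to reality:

```sql
-- Recompute index and table statistics, then re-check the estimates
ANALYZE TABLE events;
SHOW TABLE STATUS LIKE 'events'\G
```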
InnoDB doesn't store a row count in the table status, so that figure isn't bogus, just an estimate. If you need the exact number, run a SELECT COUNT(*) query at the point where you need the row count.