Strange problem I can't seem to get my head around. I have a table in a MySQL database with the following structure...
CREATE TABLE IF NOT EXISTS `tblbaseprices` (
`base_id` bigint(11) NOT NULL auto_increment,
`base_size` int(10) NOT NULL default '0',
`base_label` varchar(250) default NULL,
`base_price_1a` float default NULL,
`base_price_2a` float default NULL,
`base_price_3a` float default NULL,
`base_price_1b` float default NULL,
`base_price_2b` float default NULL,
`base_price_3b` float default NULL,
`site_id` int(11) default NULL,
PRIMARY KEY (`base_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=134 ;
The last base_id I have in there is 132. I assume a couple of records have been deleted, so auto_increment is set to 134, as you can see above. I am trying to run the following SQL statement, and when I do, I get the error "Duplicate entry '2147483647' for key 1".
INSERT INTO tblbaseprices (site_id, base_size, base_price_1a, base_price_2a, base_price_3a, base_price_4a) VALUES ('', '', '', '', '', '')
Does anybody have any ideas?
Many thanks!
2^31 − 1 = 2,147,483,647
The number 2,147,483,647 is ... the maximum value for a 32-bit signed integer in computing
2147483647 is the largest value a signed INT can hold in MySQL. Just change the column type from INT to BIGINT.
With your code I got this error: Unknown column 'base_price_4a' in 'field list'.
It may mean that you are actually inserting into a different table (perhaps in another schema), and that table has an INT primary key whose AUTO_INCREMENT has already reached 2147483647.
You've hit the 32-bit integer limit, which prevents the auto-increment counter from advancing. Switching your PK to BIGINT, which has a far larger range, should fix the issue.
Also, if your PK is never going to be negative, making it UNSIGNED doubles the positive range.
Try changing the auto_increment column to bigint instead of int, then the max value would be '9223372036854775807' or even '18446744073709551615' if you make it unsigned (no values below 0).
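If you go that route, a minimal sketch against the question's table might look like this (assuming the auto-increment column is the one hitting the limit; adjust the table and column names as needed):
-- Widen the auto-increment PK; BIGINT UNSIGNED tops out at 18446744073709551615
ALTER TABLE tblbaseprices
MODIFY COLUMN base_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;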
Change the table's AUTO_INCREMENT value back to the last id in the column so numbering continues where it left off.
Be sure you do not remove the auto_increment attribute itself, otherwise it will continue to produce the error.
You're inserting empty strings into numerical columns. As far as I can see, you're also inserting into a column that does not exist in the schema. My guess is this has something to do with your error.
This can also be a signed vs. unsigned issue:
alter table tblbaseprices
modify column site_id int(10) unsigned NOT NULL;
Reference: http://dev.mysql.com/doc/refman/5.0/en/numeric-type-overview.html
Make sure any foreign key column (in this case it could be site_id) is unsigned as well.
The error could also be caused by a trigger.
A signed INT holds at most 10 digits, so int(11) gains nothing over int(10).
There is no need to allow negative values for an ID.
Be consistent and use the same data type for the primary key wherever it is referenced.
Today I got the 'Duplicate entry 2147483647 for key' error.
I think it appeared when I tried to insert a record into the database from phpMyAdmin. While typing, I also entered a value into the key field that was either lower than the current next auto-index or something like 99999999999999, and that caused the next auto-index to be set to the maximum.
Anyway, the error occurred because the next auto-index for that table was 2147483647.
My table was empty, so I fixed it with this query:
ALTER TABLE table_name AUTO_INCREMENT = 0
If your table contains data, replace 0 with your maximum key plus 1.
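For example, a sketch using the table from this question (the last base_id was 132, so the next free key is 133):
SELECT MAX(base_id) + 1 FROM tblbaseprices; -- find the next free key
ALTER TABLE tblbaseprices AUTO_INCREMENT = 133; -- use the value returned above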
It's a schema issue. Check phpMyAdmin > your DB > Structure: your primary key should be set to BIGINT, not just INT.
A good explanation of that is here: http://realtechtalk.com/Duplicate_entry_2147483647_for_key_PRIMARY_MySQL_Error_Solution-2015-articles
Essentially you are trying to insert a value larger than the maximum an INT supports, which is literally the number given to you in the error.
If you are importing data, then one of the fields contains a value larger than an INT can hold. You could also modify the column to be a BIGINT, which would take care of the issue as well (of course at the cost of extra disk space).
A common reason is that you are using some script that generates large or random numbers. You should add a check to make sure the value is at or below the maximum INT size of 2147483647 and you'll be good to go.
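As a rough sketch of such a check before an import (staging_import and its id column are made-up names, purely for illustration):
-- List any rows whose id would overflow a signed INT
SELECT * FROM staging_import WHERE id > 2147483647;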
Duplicate entry '57147-2147483647' for key 'app_user' [ INSERT INTO user_lookup (user_id, app_id, app_user_id, special_offers, ip_address) VALUES ('2426569', '57147', '4009116545', 1, 1854489853) ]
Looking at integer values, it seems that setting the UNSIGNED attribute overrides the field display length.
Traditionally, MySQL translates the BOOLEAN alias to TINYINT(1).
According to the inter-webs, as of MySQL 8.0.17, display width specifications for integer data types have been deprecated. There are two exceptions to this, one of which is TINYINT(1).
However, there appears to be a bug (known or unknown, I don't know) where, when I set UNSIGNED on any TINYINT column, the display length is dropped.
Steps to reproduce:
Create a table with a field intended to be used as a BOOLEAN:
CREATE TABLE users (
id int unsigned NOT NULL AUTO_INCREMENT,
user_name varchar(50) NOT NULL,
password varchar(255) NOT NULL,
is_active tinyint(1) NOT NULL DEFAULT '1',
PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
Observe that the display length on TINYINT(1) is in fact set.
Alter the table to make is_active an UNSIGNED value:
ALTER TABLE users
CHANGE COLUMN is_active is_active TINYINT(1) UNSIGNED NOT NULL DEFAULT '1' ;
Observe that TINYINT no longer has a display length.
Expected result:
I argue that the correct field attribute for a "BOOLEAN" would be unsigned, since the options are 0 or 1, never negative. Therefore I would expect the UNSIGNED behavior for TINYINT(1) to be identical to the signed behavior, with the field display length set / retained.
Question:
Has anyone else encountered this behavior? Any ideas on a workaround? Right now I am sticking with signed TINYINTs...
I have a MySQL database table with more than 34M rows (and growing).
CREATE TABLE `sensordata` (
`userID` varchar(45) DEFAULT NULL,
`instrumentID` varchar(10) DEFAULT NULL,
`utcDateTime` datetime DEFAULT NULL,
`dateTime` datetime DEFAULT NULL,
`data` varchar(200) DEFAULT NULL,
`dataState` varchar(45) NOT NULL DEFAULT 'Original',
`gps` varchar(45) DEFAULT NULL,
`location` varchar(45) DEFAULT NULL,
`speed` varchar(20) NOT NULL DEFAULT '0',
`unitID` varchar(5) NOT NULL DEFAULT '1',
`parameterID` varchar(5) NOT NULL DEFAULT '1',
`originalData` varchar(200) DEFAULT NULL,
`comments` varchar(45) DEFAULT NULL,
`channelHashcode` varchar(12) DEFAULT NULL,
`settingHashcode` varchar(12) DEFAULT NULL,
`status` varchar(7) DEFAULT 'Offline',
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`),
UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=98772 DEFAULT CHARSET=utf8
I access this table from multiple threads (at least 400 threads) every minute to insert data into the table.
As the table was growing, it was getting slower to read and write the data. One SELECT query used to take about 25 seconds, so I added a unique index:
UNIQUE INDEX idx_userInsDate ( userID,instrumentID,utcDateTime)
This reduced the read time from 25 seconds to a few milliseconds, but it has increased the insert time because the index has to be updated for each record.
Also, if I run SELECT queries from multiple threads at the same time, the queries take too long to return the data.
This is an example query
Select dateTime from sensordata WHERE userID = 'someUserID' AND instrumentID = 'someInstrumentID' AND dateTime between 'startDate' AND 'endDate' order by dateTime asc;
Can someone help me improve the table schema or add an effective index to improve the performance, please?
Thank you in advance
A PRIMARY KEY is a UNIQUE key. Toss the redundant UNIQUE(id)!
Is id referenced by any other tables? If not, then get rid of it altogether. Instead have just
PRIMARY KEY ( userID, instrumentID, utcDateTime)
That is, if that triple is guaranteed to be unique. You mentioned DST -- use the datatype TIMESTAMP instead of DATETIME. Doing that, you can convert to DATETIME if needed, thereby eliminating one of the columns.
That one index (the PK) takes virtually no space since it is "clustered" with the data in InnoDB.
Your table is awfully fat with all those VARCHARs. For example, status can be reduced to a 1-byte ENUM. Others can be normalized. Things like speed can be either a 4-byte FLOAT or some smaller DECIMAL, depending on how much range and precision you need.
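A rough sketch of that kind of narrowing (the ENUM values and the sizes here are guesses; pick ones that match your real data):
ALTER TABLE sensordata
MODIFY status ENUM('Offline','Online') NOT NULL DEFAULT 'Offline', -- 1 byte instead of a VARCHAR
MODIFY speed FLOAT NOT NULL DEFAULT 0,                              -- 4 bytes
MODIFY unitID SMALLINT UNSIGNED NOT NULL DEFAULT 1,                 -- 2 bytes, max 65535
MODIFY parameterID SMALLINT UNSIGNED NOT NULL DEFAULT 1;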
With 34M wide rows, you have probably recently exceeded the cacheability of the RAM you have. By making the row narrower, you will postpone that overflow.
Why attack the indexes? Every UNIQUE (including PRIMARY) index is checked before allowing the row to be inserted. By getting it down to 1 index, that minimizes the cost there. (InnoDB really needs a PRIMARY KEY.)
INT is 4 bytes. Do you have a billion instruments? Maybe instrumentID could be SMALLINT UNSIGNED, which is 2 bytes, with a max of 64K? Think about all the other IDs.
You have 400 INSERTs/minute, correct? That is not bad. If you get to 400/second, we need to have a different talk.
("Fill factor" is not tunable in MySQL because it does not make much difference.)
How much RAM do you have? What is the setting for innodb_buffer_pool_size? Optimal is somewhere around 70% of available RAM.
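For example (assuming roughly 16 GB of RAM; the variable is only dynamic in MySQL 5.7+, so on earlier versions change it in my.cnf and restart):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';    -- current value, in bytes
SET GLOBAL innodb_buffer_pool_size = 11811160064; -- about 11 GB, roughly 70% of 16 GB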
Let's see your main queries; there may be other issues to address.
It's not the indexes at fault here. It's your data types. As the size of the data on disk grows, the speed of all operations decreases. Indexes can certainly help speed up selects - provided your data is properly structured - but it appears that it isn't.
CREATE TABLE `sensordata` (
`userID` int, /* shouldn't this have a foreign key constraint? */
`instrumentID` int,
`utcDateTime` datetime DEFAULT NULL,
`dateTime` datetime DEFAULT NULL,
/* what exactly are you putting here? Are you sure it's not causing any redundancy? */
`data` varchar(200) DEFAULT NULL,
/* your states will be a finite number of elements. They can be represented by constants in your code or a set of values in a related table */
`dataState` int,
/* what's this? Sounds like what you are saving in location */
`gps` varchar(45) DEFAULT NULL,
`location` point,
`speed` float,
`unitID` int DEFAULT '1',
/* as above */
`parameterID` int NOT NULL DEFAULT '1',
/* are you sure this is different from data? */
`originalData` varchar(200) DEFAULT NULL,
`comments` varchar(45) DEFAULT NULL,
`channelHashcode` varchar(12) DEFAULT NULL,
`settingHashcode` varchar(12) DEFAULT NULL,
/* as above; and isn't this the same as dataState? */
`status` int,
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`),
UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=98772 DEFAULT CHARSET=utf8
1st of all: Avoid varchars for indexes and especially IDs. Each character position in the varchar generates its own index entry internally!
2nd: Your select uses dateTime, but your index is built on utcDateTime. It will only use the userID and instrumentID parts and ignore the utcDateTime part.
Advice: Change your data types for the IDs and change your index to match the query (dateTime, not utcDateTime).
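A sketch of an index shaped like the example SELECT above (column names taken from the question):
ALTER TABLE sensordata
ADD INDEX idx_user_inst_datetime (userID, instrumentID, dateTime);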
Using an index decreases your insert performance; unfortunately, there is no such thing as a fill factor for indexes in MySQL right now. So the best thing you can do is keep the indexes as small as possible.
Another approach on heavily loaded databases with random access would be: write to an unindexed table, read from an indexed one. At a given time, build the indexes and swap the tables (may require a third table for the index creation while leaving the other ones untouched in between).
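The swap itself can be done atomically with RENAME TABLE (sensordata_staging and sensordata_old are hypothetical names for the unindexed copy and the retired copy):
RENAME TABLE sensordata TO sensordata_old,
             sensordata_staging TO sensordata;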
I need to add multiple records to a MySQL database. I tried it with multiple queries and it's working fine, but it's not efficient. So I tried it with just one query like below,
INSERT INTO data (block, length, width, rows) VALUES
("BlockA", "200", "10", "20"),
("BlockB", "330", "8", "24"),
("BlockC", "430", "7", "36")
ON DUPLICATE KEY UPDATE
block=VALUES(block),
length=VALUES(length),
width=VALUES(width),
rows=VALUES(rows)
But it always updates the table (the columns are block_id, block, length, width, rows).
Should I make any changes to the query, such as adding block_id as well? block_id is the primary key. Any help would be appreciated.
I've run your query without any problem. Are you sure you don't have other keys defined on the data table? Also make sure you have AUTO_INCREMENT set for the id field; without auto_increment, the query always updates the existing row.
***** Updated **********
Sorry, I misread your question. Yes, with only one auto_increment key, your query will always insert new rows instead of updating existing ones (because the primary key is the only way to detect an 'existing' / duplicate row), and since the key is auto_increment, there is never a duplicate if the primary key is not given in the insert query.
I think what you want to achieve is different; you might want to set up a composite unique key on all the other fields (i.e. block, length, width, rows).
By the way, I've set up an SQL fiddle for you.
http://sqlfiddle.com/#!2/e7216/1
The syntax to add the unique key:
CREATE TABLE `data` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`block` varchar(10) DEFAULT NULL,
`length` int(11) DEFAULT NULL,
`width` int(11) DEFAULT NULL,
`rows` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `uniqueme` (`block`,`length`,`width`,`rows`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
I am using a VARCHAR as my primary key. I want to auto-increment it (base 62: lower/upper case letters and numbers). However, the code below fails (for obvious reasons):
CREATE TABLE IF NOT EXISTS `campaign` (
`account_id` BIGINT(20) NOT NULL,
`type` SMALLINT(5) NOT NULL,
`id` VARCHAR(16) NOT NULL AUTO_INCREMENT PRIMARY KEY
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
however, this works:
CREATE TABLE IF NOT EXISTS `campaign` (
`account_id` BIGINT(20) NOT NULL,
`type` SMALLINT(5) NOT NULL,
`id` VARCHAR(16) NOT NULL PRIMARY KEY
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
What is the best way to keep track of the incrementation of 'id' myself (since auto_increment doesn't work)? Do I need to make another table that contains the current iteration of the ID? Or is there a better way to do this?
EDIT: I want to clarify that I know that using an INT as an auto_increment primary key is the logical way to go. This question is in response to some previous dialogue I saw. Thanks
You have to use an INT field and translate it to whatever format you want at select time.
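For instance, a sketch of the translate-at-select-time idea using MySQL's built-in CONV() (it only goes up to base 36; true base 62 would need application code), assuming id has been made an INT auto-increment column as suggested:
SELECT id, CONV(id, 10, 36) AS display_id  -- display_id is just an illustrative alias
FROM campaign;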
An example of a solution to your problem:
Create a file holding a unique number and then increment it with a function.
The filename can be the prefix, and the file content represents the number.
When you need a new id for a record, invoke the function.
Example
// Read the last number from the file named after the prefix, increment it, write it back.
// (Uses java.nio.file.Files/Path and java.io.IOException.)
String generateID(String A_PREFIX) throws IOException {
    Path file = Path.of(A_PREFIX);
    int idValue = Integer.parseInt(Files.readString(file).trim()) + 1;
    Files.writeString(file, String.valueOf(idValue));
    return A_PREFIX + idValue;
}
where A_PREFIX is the file name which you use to generate the id for the field.
Or just create a sequence and maintain the PK field using the sequence's nextval function to generate the primary key value. And if performance is an issue, use a cache on the sequence.
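Note that stock MySQL has no sequences; MariaDB 10.3+ does. A rough sketch of that idea there (campaign_id_seq is a made-up name):
CREATE SEQUENCE campaign_id_seq CACHE 1000;
INSERT INTO campaign (account_id, type, id)
VALUES (1, 1, CAST(NEXT VALUE FOR campaign_id_seq AS CHAR));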
But as others have stated, this is sub-optimal; if your primary key is just a numbered sequence, then it's better to use INT and auto-increment.
I don't see a use case where a PK has to auto-increment but also be a VARCHAR data type; it doesn't make sense.
Assuming that for reasons external to the database, you do need that varchar column, and it needs to autoIncrement, then how about creating a trigger that grabs the existing autoIncrement value and uses Convert() to convert that value into a VarChar, dropping the VarChar into the field of interest. As mentioned in a previous answer, you could concatenate the table-name with the new varChar value, if there is some advantage to that.
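A very rough sketch of that idea, using a separate one-row counter table instead of reading the table's own auto-increment value (campaign_seq is a made-up name, and this ignores concurrency concerns):
CREATE TABLE campaign_seq (n BIGINT NOT NULL);
INSERT INTO campaign_seq VALUES (0);
DELIMITER //
CREATE TRIGGER campaign_bi BEFORE INSERT ON campaign FOR EACH ROW
BEGIN
  UPDATE campaign_seq SET n = n + 1;
  SET NEW.id = CAST((SELECT n FROM campaign_seq) AS CHAR);
END//
DELIMITER ;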
I inserted some info into my MySQL database and I got the error listed below. What does this mean and how can I fix it?
1 row(s) inserted.
Inserted row id: 1
Warning: #1265 Data truncated for column 'summary' at row 1
Here is my MySQL table structure below.
CREATE TABLE mem_articles (
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
member_id INT UNSIGNED NOT NULL,
title VARCHAR(255) NOT NULL,
summary VARCHAR(255) DEFAULT NULL,
content LONGTEXT NOT NULL,
date_created DATETIME NOT NULL,
date_updated DATETIME DEFAULT NULL,
PRIMARY KEY (id)
);
I think it means that the number of characters you attempted to insert into the summary column exceeded 255; perhaps you should alter it to be TEXT instead of VARCHAR(255).
http://dev.mysql.com/doc/refman/5.1/en/alter-table.html
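A minimal sketch of that change, using the table from the question:
ALTER TABLE mem_articles MODIFY summary TEXT;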
It means that the data was "truncated", which in MySQL terminology means it was either truncated or changed into something totally different if it was incompatible with the type.
This behaviour sucks; if you don't want it, use
SET SQL_MODE='TRADITIONAL'
Then it will behave like a sensible database (unfortunately this will probably break your entire code base if it's an existing application)
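With that set, an oversized insert fails loudly instead of being silently truncated; a sketch against the question's table (REPEAT() just fabricates a 300-character string):
SET SQL_MODE = 'TRADITIONAL';
INSERT INTO mem_articles (member_id, title, summary, content, date_created)
VALUES (1, 'test', REPEAT('x', 300), 'test content', NOW());
-- fails with something like: ERROR 1406: Data too long for column 'summary' at row 1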
I would suggest setting the type to "longtext" or something larger.