I have a VARCHAR field in a MySQL table like so -
CREATE TABLE `desc` (
`pk` varchar(10) NOT NULL UNIQUE,
...
);
The values in the pk field are of the form (xx0000001, xx0000002, ...). But when I insert them, the values in the pk field get truncated to (xx1, xx2, ...).
How can I prevent this?
UPDATE: Adding the INSERT statement
INSERT INTO `desc` (pk) VALUES ("xx0000001");
It could be that the viewer you are using to look at the values is displaying them incorrectly because it is trying to interpret the string as a number, or MySQL may be interpreting your numbers as hexadecimal or something similarly strange.
What happens if you do
INSERT INTO `desc` (pk) VALUES ("xx0000099");
Does it come back as xx99, or some other value?
It looks like you are referencing different tables in your two statements: text and desc?
Possibly somewhere in your program logic the value is being interpreted as a hexadecimal or octal number?
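If some layer in the pipeline splits off the numeric suffix and round-trips it through an integer, the leading zeros are lost, which would produce exactly the pattern described. A minimal Python sketch of that failure mode (the split logic here is hypothetical, not taken from the question):

```python
def roundtrip_suffix(value: str) -> str:
    """Hypothetical buggy handling: split an 'xx0000001'-style key into
    a prefix and a numeric suffix, then rebuild it via int()."""
    prefix, digits = value[:2], value[2:]
    return prefix + str(int(digits))  # int() silently drops leading zeros

print(roundtrip_suffix("xx0000001"))  # -> xx1
print(roundtrip_suffix("xx0000099"))  # -> xx99
```

If the truncated values follow this pattern exactly (xx1, xx99), look for any code that parses the key as a number before writing or displaying it.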
Related
I had a table with 3 columns and 3600K rows, using MySQL as a key-value store.
The first column, id, was VARCHAR(8) and set as the primary key. The 2nd and 3rd columns were MEDIUMTEXT. Calling SELECT * FROM table WHERE id=00000 took MySQL around 54 sec to 3 minutes.
For testing, I created a table with VARCHAR(8)-VARCHAR(5)-VARCHAR(5) columns, with data randomly generated from numpy.random.randint. SELECT took 3 sec without a primary key. With the same random data in a VARCHAR(8)-MEDIUMTEXT-MEDIUMTEXT table, SELECT took 15 sec without a primary key. (Note: in the second test, the 2nd and 3rd columns actually contained very short text like '65535', but were created as MEDIUMTEXT.)
My question is: how can I achieve similar performance on my real data? (or, is it impossible?)
If you use
SELECT * FROM `table` WHERE id=00000
instead of
SELECT * FROM `table` WHERE id='00000'
you are looking for all strings that are equal to the integer 0, so MySQL will have to check all rows, because '0', '0000' and even ' 0' will all be cast to the integer 0. So your primary key on id will not help, and you will end up with a slow full table scan. Even if you don't store values that way, MySQL doesn't know that.
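The core of the problem can be seen in miniature in Python: each of these distinct strings converts to the same integer, so an equality test against the number 0 cannot be answered by an index over the distinct string values (a sketch of the principle, not MySQL's actual casting code):

```python
candidates = ["0", "0000", " 0"]

# Every one of these distinct strings converts to the same integer,
# so comparing the column to the number 0 matches all of them.
conversions = {s: int(s) for s in candidates}
print(conversions)  # every value is 0
```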
The best option is, as all comments and answers pointed out, to change the datatype to int:
alter table `table` modify id int;
This will only work if your ids cast as integers are unique (so you don't have e.g. '0' and '00' in your table).
If you have any foreign keys that reference id, you have to drop them first and, before recreating them, change the datatype in the other columns too.
If you store your values in a known format (e.g. no leading zeros, or zero-padded up to a length of 8), the second-best option is to use that exact format in your query and include the quotes so the value is not cast to an integer. If you e.g. always pad with zeros to 8 digits, use
SELECT * FROM `table` WHERE id='00000000';
If you never add leading zeros, still include the quotes:
SELECT * FROM `table` WHERE id='0';
With both options, MySQL can use your primary key and you will get your result in milliseconds.
If your id column contains only numbers, define it as int; int will give you better performance (it is faster).
Make the column in your table (the one defined as the key) an integer and retry. First check performance by running a test within your DB (Workbench or a simple command line). You should get a better result.
Then, and only if needed (I doubt it, though), modify your Python to convert from integer to string (and/or vice versa) when referencing the key column.
I have a mysql table:
id int
user_id int
date datetime
What happens is, if I insert a varchar string into user_id, MySQL inserts the row with that field set to 0.
Can I prevent this insert in MySQL? If the value is a string, I want it rejected rather than converted to 0 and inserted.
If the int column(s) are not nullable, something like this might work as a filter (depending on settings, MySQL might just convert the NULL to 0):
INSERT INTO theTable(intField)
VALUES (IF(CAST(CAST('[thevalue]' AS SIGNED) AS CHAR) = '[thevalue]', '[thevalue]', NULL))
;
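The double-CAST filter above is a round-trip check: convert the text to an integer and back, and accept it only if it comes out unchanged. The same check can be done application-side before the INSERT; a Python sketch of the equivalent logic:

```python
def is_clean_int(text: str) -> bool:
    """Accept only strings that survive an int round-trip unchanged,
    mirroring the CAST(CAST(x AS SIGNED) AS CHAR) = x filter."""
    try:
        return str(int(text)) == text
    except ValueError:
        return False

print(is_clean_int("123"))  # True
print(is_clean_int("abc"))  # False
print(is_clean_int("007"))  # False: leading zeros do not round-trip
```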
If you have an opportunity to process and validate values before they're inserted in the database, do it there. Don't think of these checks as "manual", that's absurd. They're necessary, and it's a standard practice.
MySQL's philosophy is "do what I mean", which often leads to trouble like this. If you supply a string for an integer field, it will do its best to convert it, and in the case of invalid numbers it will simply cast to 0, as that is the closest it can get. It also quietly truncates string fields that are too long; the assumption is that if you cared about the additional data, you would have made your column bigger.
This is the polar opposite of many other databases that are extremely picky about the type of data that can be inserted.
I've run into a strange situation with SQL Server 2008 R2. I have a large table with too many columns (over 100). Two of the columns are defined as varchar, but inserting text data into these columns via an INSERT-VALUES statement generates an invalid cast exception.
CREATE TABLE TooManyColumns (
...
, column_A varchar(20)
, column_B varchar(10)
, ...
)
INSERT INTO TooManyColumns (..., column_A, column_B, ...)
VALUES (..., 'text-A1', 'text-B1', ...)
, (..., 'text-A2', 'text-B2', ...)
Results in these errors
Conversion failed when converting the varchar value 'text-A1' to data type int.
Conversion failed when converting the varchar value 'text-B1' to data type int.
Conversion failed when converting the varchar value 'text-A2' to data type int.
Conversion failed when converting the varchar value 'text-B2' to data type int.
I've verified the position of the values against the position of the columns. Further, changing the text values into numbers, or into text implicitly convertible to numbers, fixes the error and inserts the numeric values into the expected columns.
What should I look for to troubleshoot this?
I've already looked for constraints on the two columns but could not find any - so either I'm looking in the wrong place or they do not exist. The SSMS object explorer states that the two columns are defined as varchar(20) and varchar(10). Using SSMS tools to script the table's schema to a query window also confirms this.
Anything else I should check?
Your question is 'What should I look for to troubleshoot this?', so this answers that question; it does not necessarily solve your problem.
I would do this:
select * into TooManyColumns_2 from TooManyColumns
truncate table TooManyColumns
update TooManyColumns_2
set -- your troubled columns with text data that is not convertible to int
insert TooManyColumns select * from TooManyColumns_2
If the INSERT ... SELECT is successful, then the problem is likely the column ordering in your inserts, because this proves that those columns can take text data. If it fails, then report back for further troubleshooting tips.
I have 2 columns in my table: a varchar(8) and an int.
I want to auto-increment the int column and when I do, I want to copy the value into the varchar(8) column, but pad it with 0's until it is 8 characters long, so for example, if the int column was incremented to 3, the varchar(8) column would contain '00000003'.
My two questions are: what happens when the varchar(8) column gets to '99999999'? I don't want to have duplicates.
How would I do this in MySQL?
If my values can be between 00000000 and 99999999, how many values can I have before I run out?
This is my alternative approach to just creating a random 8-character string and checking MySQL for duplicates. I thought this was a better approach and would allow for a greater number of values.
Because your formatted column depends upon, and is derivable from, the id column, your table design violates 3NF.
Either create a view that has your derived column in it (see this in sqlfiddle):
CREATE VIEW myview AS
SELECT *, substring(cast(100000000 + id AS CHAR(9)), 2) AS formatted_id
FROM mytable
or just start your auto-increment at 10000000, then it will always be 8 digits long:
ALTER TABLE mytable AUTO_INCREMENT = 10000000;
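The substring trick in the view is just fixed-width padding: adding 10^8 forces the number to 9 digits, and dropping the first character leaves an 8-digit, zero-padded string. The same arithmetic in Python, as a sketch of the principle:

```python
def formatted_id(id_: int) -> str:
    """Add 10^8 so the number is always 9 digits, then drop the leading '1';
    same idea as substring(cast(100000000 + id AS CHAR(9)), 2)."""
    return str(100000000 + id_)[1:]

print(formatted_id(3))         # -> 00000003
print(formatted_id(99999999))  # -> 99999999
```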
Simple: if the column is unique, it will throw an exception telling you that the value already exists. If it is not unique, then after 99999999 you'll get an error message that the value is truncated.
As alternatives, why not use INT AUTO_INCREMENT, or a custom ID with a combination of date/time, e.g.
YYMMDD-00000
This allows a maximum of 99999 records per day, and the counter resets on the next day.
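On the capacity question: the 8-digit, zero-padded range 00000000 to 99999999 holds 10^8 = 100,000,000 distinct values, and the 5-digit daily counter in the YYMMDD-00000 scheme has 10^5 = 100,000 possible values (00000 through 99999) per day. A quick check of the arithmetic:

```python
# Distinct 8-digit zero-padded strings: 00000000 .. 99999999
eight_digit = 10 ** 8
print(eight_digit)  # -> 100000000

# Distinct 5-digit daily counters in the YYMMDD-00000 scheme: 00000 .. 99999
daily = 10 ** 5
print(daily)  # -> 100000
```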
I just can't understand why my database (MySQL) is behaving like this! My console shows that the record is created properly (please notice the remote_id value):
Tweet Create (0.3ms)
INSERT INTO `tweets` (`remote_id`, `text`, `user_id`, `twitter_account_id`)
VALUES (12325438258, 'jamaica', 1, 1)
But when I check the record, it shows that the remote_id is 2147483647 instead of the provided value (12325438258 in the example above)...
This table has many entries, but this field always ends up as 2147483647... It was supposed to hold a unique id (which I guarantee is being generated properly).
That's because you're using the INT numeric type, which has a maximum of 2147483647; use BIGINT instead.
Source: http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html
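The reported value sits just past the signed 32-bit maximum, which is why every row ends up pinned at 2147483647 (out-of-range values get clamped to the column's maximum); BIGINT's signed range, up to 9223372036854775807, covers it easily. A quick check of the boundaries:

```python
INT_MAX = 2**31 - 1      # MySQL signed INT maximum: 2147483647
BIGINT_MAX = 2**63 - 1   # MySQL signed BIGINT maximum

remote_id = 12325438258  # the value from the INSERT above
print(remote_id > INT_MAX)      # True: overflows INT, clamped to 2147483647
print(remote_id <= BIGINT_MAX)  # True: fits comfortably in BIGINT
```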
My guess is that the value you are trying to insert is too large for the column. That number is suspicious, in that it is the maximum value of a 32-bit signed integer.
Is the column type INT? To store that value you should make it a BIGINT. See Numeric Types from the MySQL manual.
As is obvious, you used a value larger than the column's type can hold, so you need a larger type like BIGINT (and in the application use long or int64).