Avoid inserting a varchar string into an INT field in MySQL - mysql

I have a mysql table:
id int
user_id int
date datetime
What happens is, if I insert a varchar string into user_id, MySQL inserts the row with that field set to 0.
Can I prevent this insert in MySQL? If the value is a non-numeric string, I want it rejected rather than converted to 0 and inserted.

If the int column(s) are not nullable, something like this might work as a filter (depending on settings, MySQL might just convert the NULL to 0):
INSERT INTO theTable(intField)
VALUES (IF(CAST(CAST('[thevalue]' AS SIGNED) AS CHAR) = '[thevalue]', '[thevalue]', NULL))
;

If you have an opportunity to process and validate values before they're inserted into the database, do it there. Don't think of these checks as "manual", that's absurd. They're necessary, and they're standard practice.
MySQL's philosophy is "do what I mean", which often leads to trouble like this. If you supply a string for an integer field, it will do its best to convert it, and in the case of an invalid number it simply casts to 0, as that's the closest it can get. It also quietly truncates string values that are too long, for example, as if you cared about the additional data you would've made your column bigger.
This is the polar opposite of many other databases that are extremely picky about the type of data that can be inserted.
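For completeness: MySQL's strict SQL mode makes the server behave like those pickier databases and reject the bad value outright instead of coercing it. A minimal sketch, using the table and column names from the question:

```sql
-- With strict mode on, an invalid integer raises an error instead of inserting 0
SET SESSION sql_mode = 'STRICT_ALL_TABLES';

INSERT INTO theTable (user_id) VALUES ('not a number');
-- ERROR 1366 (HY000): Incorrect integer value: 'not a number' for column 'user_id' at row 1
```

Note that strict mode also changes other behaviors (e.g. over-long strings are rejected instead of truncated), so test existing application code before enabling it globally.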

Related

How does MySQL treat comparing a non-string value to an indexed VARCHAR column?

Lately I discovered a performance issue in the following use case.
I originally had a table "MyTable" with an indexed INT column "MyCode".
After a while I needed to change the table structure, converting the "MyCode" column to VARCHAR (the index on the column was preserved):
ALTER TABLE MyTable CHANGE MyCode MyCode VARCHAR(250) DEFAULT NULL
I then experienced unexpected latency; queries were being performed like:
SELECT * FROM MyTable where MyCode = 1234
This query completely ignored the index on the MyCode VARCHAR column; my impression was that it was full-scanning the table.
Converting the query to
SELECT * FROM MyTable where MyCode = "1234"
brought performance back to optimal, leveraging the VARCHAR index.
So the question is: how to explain this, and how does MySQL actually treat indexing here? Or is there some DB setting to be changed to avoid this?
int_col = 1234 -- no problem; same type
char_col = "1234" -- no problem; same type
int_col = "1234" -- string is converted to number, then no problem
char_col = 1234 -- the string in every row must be converted to a number -- tedious
In the 4th case, the index is useless, so the Optimizer looks for some other way to perform the query. This is likely to lead to a "full table scan".
The main exception involves a "covering index", which is only slightly faster -- involving a "full index scan".
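The difference is visible in the query plan. A sketch, using the table and column names from the question (the exact plan depends on your data and index statistics):

```sql
-- MyCode is an indexed VARCHAR column
EXPLAIN SELECT * FROM MyTable WHERE MyCode = 1234;   -- type: ALL (full table scan)
EXPLAIN SELECT * FROM MyTable WHERE MyCode = '1234'; -- type: ref (index on MyCode is used)
```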
I accepted Rick James's answer because he got the point.
But I'd like to add more info after doing some testing.
The case in the question is: how does MySQL actually compare two values when the filtered column is of VARCHAR type and the value provided in the filter is not a string?
In that case you lose the opportunity to leverage the index on the VARCHAR column, with a dramatic loss of performance in a query that should otherwise be immediate and simple.
The explanation is that when MySQL is given a value whose type differs from
VARCHAR, it performs a full table scan, and for every record it effectively performs CAST(varcharcol AS providedvaluetype) and compares the result with the provided value.
E.g.
having a VARCHAR column named "code" and filtering with
SELECT * FROM table WHERE code=1234
will full-scan every record, just like doing
SELECT * FROM table WHERE CAST(code as UNSIGNED)=1234
Notice that if you test it against 0,
SELECT * FROM table WHERE CAST(code as UNSIGNED)=0
you'll get back ALL records whose string has no numeric meaning for MySQL's CAST function, since those all cast to 0.
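The coercion rule can be checked directly in a sketch like the one below: MySQL converts any leading digits and ignores the rest (emitting a truncation warning), and a string with no leading digits converts to 0.

```sql
SELECT CAST('abc' AS UNSIGNED);   -- 0 (no leading digits)
SELECT CAST('12ab' AS UNSIGNED);  -- 12 (leading digits only, with a warning)
SELECT CAST('1234' AS UNSIGNED);  -- 1234
```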

mySQL valid bit - alternatives?

Currently, I have a mySQL table with columns that looks something like this:
run_date DATE
name VARCHAR(10)
load INTEGER
sys_time TIME
rec_time TIME
valid TINYINT
The column valid is essentially a valid bit, 1 if this row is the latest value for this (run_date,name) pair, and 0 if not. To make insertions simpler, I wrote a stored procedure that first runs an UPDATE table_name SET valid = 0 WHERE run_date = X AND name = Y command, then inserts the new row.
The table reads are in such a way that I usually use only the valid = 1 rows, but I can't discard the invalid rows. Obviously, this schema also has no primary key.
Is there a better way to structure this data or the valid bit, so that I can speed up both inserts and searches? A bunch of indexes on different orders of columns gets large.
In all of the suggestions below, get rid of valid and the UPDATE of it. That is not scalable.
Plan A: At SELECT time, use 'groupwise max' code to locate the latest run_date, hence the "valid" entry.
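Plan A might look like the sketch below. The table name readings is assumed, as is the idea that sys_time orders rows within a (run_date, name) pair; substitute whichever column actually identifies the latest row in your data.

```sql
-- Groupwise-max sketch: fetch only the latest row per (run_date, name)
SELECT t.*
FROM readings AS t
JOIN (
    SELECT run_date, name, MAX(sys_time) AS latest_time
    FROM readings
    GROUP BY run_date, name
) AS latest
  ON  latest.run_date    = t.run_date
  AND latest.name        = t.name
  AND latest.latest_time = t.sys_time;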
Plan B: Have two tables and change both when inserting: history, with PRIMARY KEY(name, run_date) and a simple INSERT statement; current, with PRIMARY KEY(name) and INSERT ... ON DUPLICATE KEY UPDATE. The "usual" SELECTs need only touch current.
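Plan B, sketched with the columns from the question (the sample values are made up; note that load is a reserved word in MySQL and needs backticks):

```sql
CREATE TABLE history (
    name     VARCHAR(10) NOT NULL,
    run_date DATE        NOT NULL,
    `load`   INTEGER,
    sys_time TIME,
    rec_time TIME,
    PRIMARY KEY (name, run_date)
);

CREATE TABLE current (
    name     VARCHAR(10) NOT NULL,
    run_date DATE,
    `load`   INTEGER,
    sys_time TIME,
    rec_time TIME,
    PRIMARY KEY (name)
);

-- On each insert, write to both tables:
INSERT INTO history (name, run_date, `load`, sys_time, rec_time)
VALUES ('abc', '2024-01-01', 42, '01:02:03', '01:02:04');

INSERT INTO current (name, run_date, `load`, sys_time, rec_time)
VALUES ('abc', '2024-01-01', 42, '01:02:03', '01:02:04')
ON DUPLICATE KEY UPDATE
    run_date = VALUES(run_date),
    `load`   = VALUES(`load`),
    sys_time = VALUES(sys_time),
    rec_time = VALUES(rec_time);
```

The "usual" reads then become a plain primary-key lookup against current, with no valid flag to maintain.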
Another issue: TIME is limited to 838:59:59 and is intended to mean 'time of day', not 'elapsed time'. For the latter, use INT UNSIGNED (or some variant of INT). For formatting, you can use sec_to_time(). For example, sec_to_time(3601) -> 01:00:01.

MySQL used ALL Type while searching on Primary Key

My table schema:
My above table has ~10 lakh (1 million) rows. When I use EXPLAIN, it shows:
From this, type shows ALL, Extra shows Using where, and rows is not O(1). But for a search on the primary key, shouldn't type be const and rows be O(1)? I can't figure out the issue, which is slowing down the queries.
Your id field is varchar while you pass the value you are looking for as a number.
This means that mysql has to perform an implicit data conversion and will not be able to use the index for looking up the value:
For comparisons of a string column with a number, MySQL cannot use an
index on the column to look up the value quickly. If str_col is an
indexed string column, the index cannot be used when performing the
lookup in the following statement:
SELECT * FROM tbl_name WHERE str_col=1;
The reason for this is that
there are many different strings that may convert to the value 1, such
as '1', ' 1', or '1a'.
Either convert your id field to number or pass the value as string.
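Both fixes can be sketched as follows, assuming the table is tbl_name and the stored id values are all digits (so the conversion is safe):

```sql
-- Option 1: pass the value as a string so the index on the VARCHAR column is usable
SELECT * FROM tbl_name WHERE id = '123456';

-- Option 2: convert the column to a numeric type
ALTER TABLE tbl_name MODIFY id BIGINT NOT NULL;
```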
As your id column is varchar, you need to provide a string while searching.
Try id = '123456'
Reason:
Since you are comparing a varchar column to an int, MySQL must convert the value in every row to a number before matching it with 123456 (int), so it cannot use the index.

Inserting string data into varchar column causes invalid cast to int

I've run into a strange situation with SQL Server 2008 R2. I have a large table with too many columns (over 100). Two of the columns are defined as varchar, but inserting text data into these columns via an INSERT-VALUES statement generates an invalid cast exception.
CREATE TABLE TooManyColumns (
...
, column_A varchar(20)
, column_B varchar(10)
, ...
)
INSERT INTO TooManyColumns (..., column_A, column_B, ...)
VALUES (..., 'text-A1', 'text-B1', ...)
, (..., 'text-A2', 'text-B2', ...)
Results in these errors
Conversion failed when converting the varchar value 'text-A1' to data type int.
Conversion failed when converting the varchar value 'text-B1' to data type int.
Conversion failed when converting the varchar value 'text-A2' to data type int.
Conversion failed when converting the varchar value 'text-B2' to data type int.
I've verified the position of the values against the position of the columns. Further, changing the text values into numbers or into text implicitly convertible to numbers, fixes the error and inserts the numeric values into the expected columns.
What should I look for to troubleshoot this?
I've already looked for constraints on the two columns but could not find any - so either I'm looking in the wrong place or they do not exist. The SSMS object explorer states that the two columns are defined as varchar(20) and varchar(10). Using SSMS tools to script the table's schema to a query window also confirms this.
Anything else I should check?
Your question is 'What should I look for to troubleshoot this?', so this answers that question; it does not necessarily solve your problem.
I would do this:
select * into TooManyColumns_2 from TooManyColumns;
truncate table TooManyColumns;
update TooManyColumns_2
set column_A = 'text-A', column_B = 'text-B'; -- your troubled columns, with text data that is not convertible to int
insert into TooManyColumns select * from TooManyColumns_2;
If the INSERT ... SELECT is successful, then the problem is likely the column ordering in your inserts, because this proves that these columns can take text data. If that fails, then report back for further troubleshooting tips.

Why do values in the row I insert not match the values in the insert query?

I just can't understand why my database (MySQL) is behaving like this! My console shows that the record is created properly (please notice the "remote_id" value):
Tweet Create (0.3ms)
INSERT INTO `tweets` (`remote_id`, `text`, `user_id`, `twitter_account_id`)
VALUES (12325438258, 'jamaica', 1, 1)
But when I check the record, it shows that the remote_id is 2147483647 instead of the provided value (12325438258 in the example above)...
This table has many entries, but this field is always written as 2147483647... It was supposed to be filled with a unique id (which I guarantee you is being generated properly).
That's because you're using the INT numeric type, which has a maximum value of 2147483647; use BIGINT instead.
Source: http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html
My guess is that the value you are trying to insert is too large for the column. That number is suspicious in that it is the max value of a 32 bit signed integer.
Is the column type INT? To store that value you should make it a BIGINT. See Numeric Types from the MySQL manual.
As is obvious, the value is too large for the column's type, so you need to use a larger type like BIGINT (and in the application use a long / int64).
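A minimal fix, assuming the tweets table shown in the question: widen the column so the full id fits.

```sql
-- Signed INT tops out at 2147483647; out-of-range values are clamped to that
-- maximum (or rejected in strict mode). BIGINT comfortably holds 12325438258.
ALTER TABLE tweets MODIFY remote_id BIGINT;
```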