I've run into a strange situation with SQL Server 2008 R2. I have a large table with too many columns (over 100). Two of the columns are defined as varchar, but inserting text data into these columns via an INSERT-VALUES statement generates an invalid cast exception.
CREATE TABLE TooManyColumns (
...
, column_A varchar(20)
, column_B varchar(10)
, ...
)
INSERT INTO TooManyColumns (..., column_A, column_B, ...)
VALUES (..., 'text-A1', 'text-B1', ...)
, (..., 'text-A2', 'text-B2', ...)
Results in these errors:
Conversion failed when converting the varchar value 'text-A1' to data type int.
Conversion failed when converting the varchar value 'text-B1' to data type int.
Conversion failed when converting the varchar value 'text-A2' to data type int.
Conversion failed when converting the varchar value 'text-B2' to data type int.
I've verified the position of the values against the position of the columns. Further, changing the text values into numbers or into text implicitly convertible to numbers, fixes the error and inserts the numeric values into the expected columns.
What should I look for to troubleshoot this?
I've already looked for constraints on the two columns but could not find any - so either I'm looking in the wrong place or they do not exist. The SSMS object explorer states that the two columns are defined as varchar(20) and varchar(10). Using SSMS tools to script the table's schema to a query window also confirms this.
Anything else I should check?
Your question is 'What should I look for to troubleshoot this?', so this answers that question; it does not necessarily solve your problem.
I would do this:
select * into TooManyColumns_2 from TooManyColumns;

truncate table TooManyColumns;

update TooManyColumns_2
set -- your troubled columns, replacing text data that is not convertible to int

insert into TooManyColumns select * from TooManyColumns_2;
If the INSERT ... SELECT succeeds, the problem is likely the column ordering in your original INSERT, because this proves the columns can take text data. If it fails, report back for further troubleshooting tips.
Related
I have a table full of traffic accident data with column headers such as 'Vehicle_Manoeuvre' which contains integers for example 13 represents the vehicle manoeuvre which caused the accident was 'overtaking moving vehicle'.
I know the mappings from integers to text as I have a (quite large) excel file with this data.
An example of what I want to know is percentage of the accidents involved this type of manoeuvre but I don't want to have to open the excel file and find the mappings of integers to text every time I write a query.
I could manually change the integers of all the columns (write a query with all the possible mappings of each column, add them as new columns, then delete the original columns) but this would take a long time.
Is it possible to create some type of variable (like an array with the first column as integers and the second column as the mapped text) that SQL could use to understand how the text relates to the integers, allowing me to write a query like this:
SELECT COUNT(Vehicle_Manoeuvre) FROM traffictable WHERE Vehicle_Manoeuvre='overtaking moving vehicle';
rather than:
SELECT COUNT(Vehicle_Manoeuvre) FROM traffictable WHERE Vehicle_Manoeuvre=13;
even though the data in the table is still in integer form?
You would do this with a Manoeuvres reference table:
create table Manoeuvres (
ManoeuvreId int primary key,
Name varchar(255) unique
);
insert into Manoeuvres(ManoeuvreId, Name)
values (13, 'Overtaking');
You might even have such a table already, if you know that 13 has a special meaning.
Then use a join:
SELECT COUNT(*)
FROM traffictable tt JOIN
Manoeuvres m
ON tt.Vehicle_Manoeuvre = m.ManoeuvreId
WHERE m.Name = 'Overtaking';
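As a quick sanity check, the reference-table join can be sketched with Python's sqlite3 (table names match the answer; the sample rows are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Manoeuvres (ManoeuvreId INTEGER PRIMARY KEY, Name TEXT UNIQUE);
INSERT INTO Manoeuvres VALUES (13, 'Overtaking');

CREATE TABLE traffictable (Vehicle_Manoeuvre INTEGER);
INSERT INTO traffictable VALUES (13), (13), (7);  -- two overtaking rows, one other
""")

# The join lets the WHERE clause use the readable name instead of 13.
count = conn.execute("""
    SELECT COUNT(*)
    FROM traffictable tt
    JOIN Manoeuvres m ON tt.Vehicle_Manoeuvre = m.ManoeuvreId
    WHERE m.Name = 'Overtaking'
""").fetchone()[0]
print(count)  # 2
```

The integer codes stay in the fact table; only the query text changes.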
I have a mysql table:
id int
user_id int
date datetime
What happens is, if I insert a varchar string into user_id, MySQL inserts the row with that field set to 0.
Can I prevent this insert in MySQL? If the value is a string, I want it rejected rather than converted to 0 and inserted.
If the int column(s) are not nullable, something like this might work as a filter (depending on settings, MySQL might just convert the NULL to 0):
INSERT INTO theTable(intField)
VALUES (IF(CAST(CAST('[thevalue]' AS SIGNED) AS CHAR) = '[thevalue]', '[thevalue]', NULL))
;
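The IF(CAST(CAST(...))) trick above is a round-trip check: a value passes only if converting it to an integer and back yields the original string. A sketch of the same logic in Python (note MySQL's CAST would coerce '12abc' to 12 rather than error, unlike Python's int(), but the round-trip comparison rejects it either way):

```python
def coerces_cleanly(value: str) -> bool:
    """Round-trip check mirroring CAST(CAST(value AS SIGNED) AS CHAR) = value:
    the string survives the int -> str round trip unchanged only if it is a
    clean integer literal (no trailing letters, no leading zeros, no spaces)."""
    try:
        return str(int(value)) == value
    except ValueError:
        return False

print(coerces_cleanly("123"))    # True
print(coerces_cleanly("12abc"))  # False -- MySQL would quietly coerce this to 12
print(coerces_cleanly("abc"))    # False -- MySQL would quietly coerce this to 0
```

Running this kind of check in the application before the INSERT avoids relying on the database's coercion behavior at all.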
If you have an opportunity to process and validate values before they're inserted in the database, do it there. Don't think of these checks as "manual", that's absurd. They're necessary, and it's a standard practice.
MySQL's philosophy is "do what I mean", which often leads to trouble like this. If you supply a string for an integer field it will do its best to convert it, and for invalid numbers it will simply cast to 0, as that's the closest it can get. It also quietly truncates string fields that are too long, for example, on the theory that if you cared about the extra data you would've made your column bigger.
This is the polar opposite of many other databases that are extremely picky about the type of data that can be inserted.
I am developing an application which uses external datasources. My application supports multiple databases (MySQL, MSSQL, Teradata, Oracle, DB2, etc.). When I create a datasource, I allow the user to assign a primary key (pk) to the datasource. I am not checking whether the user-selected column is actually a primary key in the database. I just want that, while retrieving data from the database, records which have a null/blank value in the user-selected primary key get dropped. I have created a filter supporting all the other databases except DB2 and Teradata.
Sample Query for other databases:
Select * from MY_TABLE where PK_COLUMN IS NOT NULL and PK_COLUMN !='';
Select * from MY_TABLE where PK_COLUMN IS NOT NULL AND cast(PK_COLUMN as varchar) !=''
DB2 and Teradata:
The PK_COLUMN != '' and cast(PK_COLUMN as varchar) != '' conditions give errors for the int datatype in DB2 and Teradata because:
- columns with int type cannot be given the above-mentioned conditions, and we also cannot cast int columns to varchar directly in DB2 and Teradata.
I want to create a query to drop null/blank value from the database provided table name and user pk column name as string. (I do not know the pk_column_type while creating the query so the query should be uniform to support all datatypes)
NOTE: The pk here is not actual pk, it is just a dummy pk assigned by my application user. So this can be a normal column and thus can have null/blank values.
I have created a query as:
Select * from MY_TABLE where PK_COLUMN IS NOT NULL AND cast(cast(PK_COLUMN as char) as varchar) !=''
My Question:
Will this solution(double casting) support all datatypes in DB2 and Teradata?
If not, can I come up with a better solution?
Of course you can cast an INT to a VARCHAR in both Teradata and DB2, but you have to specify the length of a VARCHAR, there's no default length.
Casting to a CHAR without a length defaults to CHAR(1) in Standard SQL, which might cause some "string truncation" error.
You need to cast to a VARCHAR(n) where n is the maximum length based on the DBMS.
Plus there's no != operator in standard SQL; this should be <> instead.
Finally, there's a fundamental difference between an empty string and a NULL (except in Oracle); one or more blanks might also have a meaning and would be filtered out when compared to ''.
And what is an empty INT supposed to be? If your query worked, zero would be cast to '0', which is not equal to '', so the row would pass the filter anyway.
You should simply use IS NOT NULL, and add the empty-string check only for character columns (plus an option for the user to decide whether an empty string should be treated as NULL).
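Since the application already knows which column the user picked, one way to follow this advice is to branch on column metadata when building the query. A hypothetical sketch (the blank_filter helper is an assumption, and you should verify TRIM support on your specific DB2/Teradata versions):

```python
def blank_filter(table: str, column: str, is_character: bool,
                 treat_blank_as_null: bool = True) -> str:
    """Build a filtering query: drop NULLs for every type, and -- for
    character columns only -- optionally drop empty/blank strings too."""
    clause = f"SELECT * FROM {table} WHERE {column} IS NOT NULL"
    if is_character and treat_blank_as_null:
        clause += f" AND TRIM({column}) <> ''"
    return clause

print(blank_filter("MY_TABLE", "PK_COLUMN", is_character=True))
# SELECT * FROM MY_TABLE WHERE PK_COLUMN IS NOT NULL AND TRIM(PK_COLUMN) <> ''
print(blank_filter("MY_TABLE", "PK_COLUMN", is_character=False))
# SELECT * FROM MY_TABLE WHERE PK_COLUMN IS NOT NULL
```

This avoids the double-cast entirely: numeric columns never get a string comparison, so the query stays valid on every supported database.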
I have a VARCHAR field in a MySQL table like so -
CREATE TABLE `desc` (
`pk` varchar(10) NOT NULL UNIQUE,
...
);
The value in pk field is of the type - (xx0000001, xx0000002, ...). But when I insert these into my table the values in pk field get truncated to (xx1, xx2, ...).
How to prevent this?
UPDATE: Adding the INSERT statement
INSERT INTO `desc` (pk) VALUES ('xx0000001');
It could be that the viewer you are using to LOOK at the values is displaying the info incorrectly because it is trying to interpret that string as a number, or that MySQL may be interpreting your numbers as hexadecimal or something strange.
What happens if you do
INSERT INTO desc (pk) VALUES ("xx0000099");
Does it come back as xx99? or some other value?
Looks like you are referencing different tables in your two statements, text and desc?
Possibly somewhere along your program logic the value is interpreted as a hexadecimal or octal number?
I just can't understand why my database (MySQL) is behaving like this! My console shows that the record is created properly (please notice the "remote_id" value):
Tweet Create (0.3ms)
INSERT INTO `tweets` (`remote_id`, `text`, `user_id`, `twitter_account_id`)
VALUES (12325438258, 'jamaica', 1, 1)
But when I check the record, it shows that the remote_id is 2147483647 instead of the provided value (12325438258 in the example above)...
This table has many entries, and this field always ends up as 2147483647... It was supposed to hold a unique id (which I guarantee is being generated properly).
That's because you're using the INT numeric type, which has a maximum of 2147483647; use BIGINT instead.
Source: http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html
My guess is that the value you are trying to insert is too large for the column. That number is suspicious in that it is the max value of a 32 bit signed integer.
Is the column type INT? To store that value you should make it a BIGINT. See Numeric Types from the MySQL manual.
Obviously the value is too large for the column's type, so you need a larger type like BIGINT (and in the application, use long / int64).
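A quick arithmetic check of the diagnosis in the answers above (pure Python, nothing database-specific):

```python
# MySQL's signed INT tops out at 2^31 - 1; out-of-range inserts are
# clamped to that maximum, which is exactly the mystery value.
INT_MAX = 2**31 - 1
BIGINT_MAX = 2**63 - 1

print(INT_MAX)                   # 2147483647 -- the value stored in every row
print(12325438258 > INT_MAX)     # True  -- the tweet id doesn't fit in INT
print(12325438258 <= BIGINT_MAX) # True  -- it fits comfortably in BIGINT
```

Seeing 2147483647 repeated across rows is a reliable signature of this clamping: every distinct oversized id collapses to the same maximum.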