Inserting multiple values in one column in SQL Server - sql-server-2008

This is my SQL query:
update employee
set StaticIp='(59.90.187.91),( 117.218.1.147)'
where EmpId=1001
Error message:
Msg 8152, Level 16, State 14, Line 1
String or binary data would be truncated.
The statement has been terminated.

This means that the value you want to save will not fit in that column and would be truncated.
There is no way around this other than increasing the size of the column.
You can see the current size of your column by running:
sp_columns @table_name = 'employee', @column_name = 'StaticIp'
To change the size of your column, run this command:
ALTER TABLE employee
ALTER COLUMN StaticIp NVARCHAR(MAX)
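Before widening the column, it can help to check the longest value you actually need to store; jumping straight to NVARCHAR(MAX) is often more than necessary. A minimal sketch against the employee table from the question:

```sql
-- Longest StaticIp value already stored in the table
SELECT MAX(LEN(StaticIp)) AS LongestExisting
FROM employee;

-- Length of the new value from the question: 31 characters,
-- so e.g. NVARCHAR(100) would be plenty without resorting to MAX
SELECT LEN('(59.90.187.91),( 117.218.1.147)') AS NewValueLength;
```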

This message occurs when the data you try to insert is longer than the column allows.
Run the following query:
sp_help employee
Check the length of the StaticIp column; the value you are entering must not exceed it.

Storing comma-separated values in a column is bad practice.
You're getting the "String or binary data would be truncated" error because the StaticIp column's length is less than that of the value you're trying to update with.
Resolution: increase the length of the StaticIp column, e.g. StaticIp NVARCHAR(255).
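Following up on that point: rather than widening the column to hold a comma-separated list, a normalized design stores one IP per row in a child table. A sketch under assumed names (the EmployeeStaticIp table does not exist in the question's schema):

```sql
-- Hypothetical child table: one row per (employee, IP) pair
CREATE TABLE EmployeeStaticIp
(
    EmpId    INT         NOT NULL,  -- references employee.EmpId
    StaticIp VARCHAR(45) NOT NULL,  -- 45 chars also fits IPv6
    CONSTRAINT PK_EmployeeStaticIp PRIMARY KEY (EmpId, StaticIp)
);

INSERT INTO EmployeeStaticIp (EmpId, StaticIp)
VALUES (1001, '59.90.187.91'),
       (1001, '117.218.1.147');
```

Retrieving an employee's addresses is then a plain WHERE EmpId = 1001, with no string parsing or truncation risk.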

Related

MySQL Error Code 1265: Data truncated for column after increasing varchar column size

I have a varchar column in a table that was varchar(1000) and was increased to varchar(6000). After making this change, I get the error below when trying to update a specific row in the table.
This row has a string currently with length of 647 characters (no special characters in there, just alphanumeric and $ symbol in the string).
If I try to update it like:
update TradeEntries set DataValue = 'test' where ID = 16632;
I get the error:
Error Code: 1265. Data truncated for column 'DataValue' at row 1
If I try:
delete from TradeEntries where ID = 16632;
I also get the same error:
Error Code: 1265. Data truncated for column 'DataValue' at row 1
Do you know what could be wrong and how I can fix this? I can't edit or delete this row anymore. The current value in this row for DataValue is:
orderNo$TRUE,eoOrderIdorderNo$TRUE,eoOrderId$TRUE,orderStatusId,oldOrderId$TRUE,chainOrderNo$TRUE,equityOptionInd,orderTypeCode,accountType$TRUE,repID,status,tradeAction,lplAcct,acctName,securityID,quantity,stopPrice,conditions$TRUE,timeInForce,acctType,orderType,clientName,orderDate,canEdit,canEditAction$TRUE,canCancel,canCancelAction$TRUE,totalRecords,accountID,clientID,originCode$TRUE,securityNo,actionCode$TRUE,updateSource$TRUE,errorResponse$TRUE,closingTriggerPrice$TRUE,orderStatusId,oldOrderId$TRUE,chainOrderNo$TRUE,equityOptionInd,orderTypeCode,accountType$TRUE,repID,status,tradeAction,lplAcct,acctName,securityID,quantity,stopPrice,conditions$TRUE,timeInForce,acctType,orderType,clientName,orderDate,canEdit,canEditAction$TRUE,canCancel,canCancelAction$TRUE,totalRecords,accountID,clientID,originCode$TRUE,securityNo,actionCode$TRUE,updateSource$TRUE,errorResponse$TRUE,closingTriggerPrice$TRUE
This turned out to be caused by a trigger that was affected by the larger-column change: the table the trigger inserts into still had the old, smaller column. Adjusting that column's length resolved the error.
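When an UPDATE or DELETE fails with error 1265 even though the target column is wide enough, listing the table's triggers is a quick way to find the real culprit. A sketch (the audit-table name in the ALTER is purely hypothetical):

```sql
-- List all triggers defined on the failing table
SHOW TRIGGERS WHERE `Table` = 'TradeEntries';

-- If a trigger copies DataValue into another table, that target
-- column must be widened to match the new size as well:
ALTER TABLE TradeEntriesAudit MODIFY DataValue VARCHAR(6000);
```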

Truncating a BINARY column in MySQL using ALTER TABLE

I have a MySQL table t with over 150 million rows. One of the columns (c) was a VARCHAR(64) containing a 64-digit hexadecimal number. To save space and make things faster, I wanted to decode the hex and turn it into a BINARY(32) column.
My plan was to use three queries:
ALTER TABLE t CHANGE c c BINARY(64) NOT NULL;
UPDATE t SET c=UNHEX(c);
ALTER TABLE t CHANGE c c BINARY(32) NOT NULL;
The first two worked perfectly, but on the third query I'm getting the error:
#1265 - Data truncated for column 'c' at row 1
I understand that I am truncating data, that's exactly what I want. I want to get rid of the 32 0x00 bytes at the end of the BINARY(64) to turn it into a BINARY(32).
Things I've tried:
UPDATE t SET c=LEFT(c, 32); did not seem to do anything at all.
Using ALTER IGNORE TABLE gives me a syntax error.
To get around the #1265 - Data truncated for column ... error you must remove the STRICT_TRANS_TABLES flag from the global sql_mode variable.
The query SHOW VARIABLES LIKE 'sql_mode'; gave me:
STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
So I ran this query:
SET GLOBAL sql_mode = 'NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
For safety, I will re-enable strict mode after I'm done truncating columns.
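Note that SET GLOBAL only affects connections opened after the change; the current session keeps the mode it started with. For a one-off migration it may be safer to relax only your own session and leave the global setting untouched, e.g.:

```sql
-- Relax strict mode for this session only
SET SESSION sql_mode = 'NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';

-- The truncating ALTER now produces warnings instead of error 1265
ALTER TABLE t CHANGE c c BINARY(32) NOT NULL;

-- Restore the session to the (still strict) global default
SET SESSION sql_mode = DEFAULT;
```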

Unexpected error 1054 in SQL

After setting up the table, I remembered that I needed to add one column and assign the same value to every row. So I wrote the queries in Workbench like this:
ALTER TABLE sthebc3_cle ADD COLUMN Species char(30) AFTER Genome_ACC;
SET SQL_SAFE_UPDATES=0;
UPDATE cle.sthebc3_cle
SET Species='StHe';
But it reported an error like:
Error 1054: Unknown column 'Species' in 'field list'
I checked the table after the ALTER; the new column "Species" was indeed added and its values are NULL.
How can this error occur?
You need to make sure which default database the current session is using; the UPDATE may be running against a different schema than the ALTER did.
If you want to be 100% sure which database you are using, always qualify names as databasename.tablename.
For the table
ALTER TABLE cle.sthebc3_cle ADD COLUMN Species char(30) AFTER Genome_ACC;
SET SQL_SAFE_UPDATES=0;
UPDATE cle.sthebc3_cle
SET Species='StHe';
For your view, try this:
USE cle;
CREATE VIEW all_cle AS
(SELECT Species, Genome_ACC, CLE_start, CLE_end, CLE_domain
FROM nameofdatabase.cowpea_cle)
UNION
(SELECT Species, Genome_ACC, CLE_start, CLE_end, CLE_domain
FROM nameofdatabase.sthebc3_cle);
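Since the root cause here is which default database the session resolves unqualified names against, it is worth checking that first. A minimal sketch:

```sql
-- Shows the session's current default database (NULL if none is selected)
SELECT DATABASE();

-- Switch the default so unqualified table names resolve to cle
USE cle;
```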

Inserting datetime value into sql server table column

I'm attempting to insert a datetime value ('2013-08-30 19:05:00') into a SQL Server table column of type smalldatetime, and the value stays NULL after the insert.
I'm doing the same thing to six other columns of the exact same type. Why is this only occurring on one column? I've triple-checked that the column names are correct. Any ideas?
Assuming the situation is as you describe:
CREATE TABLE T
(
S SMALLDATETIME NULL
)
INSERT INTO T
VALUES('2013-08-30 19:05:00')
SELECT *
FROM T /*Returns NULL*/
There are only two ways I can think of that this can happen.
1) That is an ambiguous datetime format. Under the wrong session options it won't cast correctly, and if certain additional options are OFF it will return NULL rather than raise an error, e.g.:
SET LANGUAGE Italian;
SET ansi_warnings OFF;
SET arithabort OFF;
INSERT INTO T
VALUES('2013-08-30 19:05:00')
SELECT *
FROM T /*NULL inserted*/
2) You may have omitted the column in an INSTEAD OF trigger, or have an AFTER trigger that sets the value back to NULL.
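Whatever the cause turns out to be, the ambiguity in case 1 can be avoided entirely by writing the literal in ISO 8601 format with a T separator, which SQL Server interprets the same way under every language setting. A sketch using the example table above:

```sql
SET LANGUAGE Italian;  -- even under a non-default language setting

INSERT INTO T
VALUES ('2013-08-30T19:05:00');  -- unambiguous ISO 8601 literal

SELECT * FROM T;  -- the datetime is inserted, not NULL
```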

Changing collation on a column with multipage rows

We're in the process of changing the collation of our database.
We've run into a problem, when I try to alter one of the columns (with the datatype varchar(max)) I get the following error:
Cannot create a row of size 8083 which is greater than the allowable maximum row size of 8060.
If I check the size of the largest value:
select top 1 LEN(Document) as l1,* from GroupDocument where LEN(document) > 8000 order by LEN(document) desc
I get a length of 39431, which would be roughly 10 pages.
I assume this is why I can't change the collation. I haven't run into this problem with the other columns. Any help would be appreciated.
I guess one solution would be to copy all the content of the table to another table, change the collation, and then move it back again. But I'd rather not do that if possible.
EDIT:
Tried the following:
create table temptable (id int, document nvarchar(max))
insert into temptable (id, document) select GroupDocumentID, Document from GroupDocument
alter table GroupDocument drop column Document
alter table temptable alter column document nvarchar(max)
ALTER TABLE [GroupDocument] add [Document] ntext COLLATE Finnish_Swedish_CI_AS NULL
update GroupDocument set Document = (select temptable.document from temptable where temptable.id = GroupDocument.GroupDocumentID)
Still the same problem.
The row causing the problem has a varchar of 7996 bytes; that plus some int columns makes it a boundary case, I guess.
I solved it with a combination of steps. First, forcing large values to be stored out of row:
EXEC sp_tableoption 'dbo.GroupDocument',
'large value types out of row', 1
Then cleaning the table:
dbcc cleantable('ExamDoc', 'groupdocument', 0)
And finally dropping and rebuilding the indexes on the table.
That solved the problem! :D
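If you want to confirm that the sp_tableoption change actually took effect before running the expensive DBCC CLEANTABLE step, the setting is visible in sys.tables. A sketch:

```sql
-- 1 means varchar(max)/nvarchar(max) values are pushed off-row
SELECT name, large_value_types_out_of_row
FROM sys.tables
WHERE name = 'GroupDocument';
```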