I'm trying out the BCP utility on SQL Server 2008 Express. I don't think that what I'm trying to do could be more trivial, but still I'm getting a primary key violation when trying to insert two rows into an empty table.
Here is the table DDL:
CREATE TABLE [dbo].[BOOKS](
[BOOK_ID] [numeric](18, 0) NOT NULL,
[BOOK_DESCRIPTION] [varchar](200) NULL,
CONSTRAINT [BOOKS PK] PRIMARY KEY CLUSTERED
(
[BOOK_ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
Here is the BCP format file:
10.0
2
1 SQLNUMERIC 0 3 "\t" 1 BOOK_ID ""
2 SQLCHAR 0 0 "\r\n" 2 BOOK_DESCRIPTION Modern_Spanish_CI_AS
and here is my input file:
101 BOOK_ABC_001<CR><LF>
102 BOOK_ABC_002<CR><LF>
finally here is the command I run:
bcp Database.dbo.BOOKS in books.txt -T -f BOOKS-format.fmt
and here is the error I get:
Starting copy...
SQLState = 23000, NativeError = 2627
Error = [Microsoft][SQL Server Native Client 10.0][SQL Server]Violation of PRIMARY KEY constraint 'BOOKS PK'. Cannot insert duplicate key in object 'dbo.BOOKS'.
SQLState = 01000, NativeError = 3621
Warning = [Microsoft][SQL Server Native Client 10.0][SQL Server]The statement has been terminated.
BCP copy in failed
Now, BCP succeeds if I use an input file with a single line. In this case, the BOOK_ID column gets assigned a value of 0. So it seems that the first field in my input file is being ignored, and 0 is being used as the value for BOOK_ID for all the rows, which would explain the PK violation error.
So the question is, what is wrong in my format or input files that causes the first column to be ignored?
Thanks.
I've never seen a primary key column with a DECIMAL datatype, and I'm not sure decimals work; I've always used integers.
BUT what I think the problem is: the PK column doesn't have IDENTITY set, so it isn't auto-incrementing when a new row is added. In your table creation code, replace:
[BOOK_ID] [numeric](18, 0) NOT NULL,
with
[BOOK_ID] [int] IDENTITY(1,1) NOT NULL,
Cheers
Related
I have a problem: I need to restore a SQL file, but when I try to do this with mysql -u user -p --database test < file.sql I get this error: ERROR 1062 (23000) at line 50: Duplicate entry '0' for key 'PRIMARY'
My first attribute is AUTO_INCREMENT, NOT NULL, and PRIMARY.
I have searched, and the problem is that in my SQL file my primary key has no value, just empty quotes. For example, INSERT INTO log VALUES ('','app1','name','hello') - as you can see, my first value is only empty quotes. How can I import this SQL file without that value? I have a lot of lines in my file.
Definition of the table
CREATE TABLE `log` (
`id_log` int(11) NOT NULL AUTO_INCREMENT,
`application` varchar(20) NOT NULL,
`module` text NOT NULL,
`action` text NOT NULL,
PRIMARY KEY (`id_log`)
) ENGINE=InnoDB AUTO_INCREMENT=646 DEFAULT CHARSET=latin1;
You just need to rewrite your queries.
The query should look like this:
INSERT INTO log(application, module, action) VALUES ('app1','name','hello');
This will insert the remaining values into the row and let MySQL auto-increment the id_log column.
I would assume MySQL is trying to cast '' to an int, since it is an AUTO_INCREMENT field.
It casts it to 0, so the first entry is fine, but the second one already exists and you get the error.
You will have to replace the '' with actual, unique integer values, or remove it altogether and add a column list.
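One mechanical way to apply that rewrite across a large dump file is a small script. This is just a sketch: it assumes every affected line starts exactly with INSERT INTO log VALUES ('', as in the example above, and the function name is my own.

```python
# Prefix as it appears in the dump, and its replacement with an explicit
# column list so MySQL generates id_log via AUTO_INCREMENT.
OLD = "INSERT INTO log VALUES ('',"
NEW = "INSERT INTO log (application, module, action) VALUES ("

def fix_dump_line(line: str) -> str:
    """Drop the empty id_log value from one INSERT statement by
    switching it to an explicit column list."""
    return line.replace(OLD, NEW, 1)

# Applied over the whole dump file:
#   with open("file.sql") as src, open("fixed.sql", "w") as dst:
#       for line in src:
#           dst.write(fix_dump_line(line))
```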
Per https://learn.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-data-loading-guidance?view=sql-server-2017, we're making some optimizations for a bulk load into a clustered columnstore index (CCI), and whenever we attempt the insert into the CCI, we get the following:
Location: columnset.cpp:3707
Expression: !pColBinding->IsLobAccessor()
SPID: 55
Process ID: 1988
Msg 3624, Level 20, State 1, Line 3
A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a Hotfix from Technical Support.
Msg 596, Level 21, State 1, Line 0
Cannot continue the execution because the session is in the kill state.
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
There is no data corruption--DBCC CHECKDB runs without errors. Inserting a small number of rows succeeds, but it fails when we try over 1000 (we haven't tried to figure out the exact number where failure occurs, but we also tried over a million). We are running SQL Server 2017, 14.0.3223.3.
How to reproduce the problem:
Step 1: Create a sample staging table
CREATE TABLE [dbo].[Data](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Description] [varchar](50) NOT NULL,
[JSON] [nvarchar](max) NOT NULL,
CONSTRAINT [PK_Data] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[Data] WITH CHECK ADD CONSTRAINT [CK_Data] CHECK ((isjson([JSON])=(1)))
GO
ALTER TABLE [dbo].[Data] CHECK CONSTRAINT [CK_Data]
GO
Step 2: Fill the staging table with sample data (our JSON column is over 100KB)
DECLARE @i INT = 1
WHILE (@i < 1000)
BEGIN
    INSERT INTO Data
    SELECT 'Test' AS Description, BulkColumn AS JSON
    FROM OPENROWSET (BULK 'C:\Temp\JSON.json', SINGLE_CLOB) AS J
    SET @i = @i + 1
END
Step 3: Create a sample target table and CCI
CREATE TABLE [dbo].[DataCCI](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Description] [varchar](50) NOT NULL,
[JSON] [nvarchar](max) NOT NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[DataCCI] WITH CHECK ADD CONSTRAINT [CK_DataCCI] CHECK ((isjson([JSON])=(1)))
GO
ALTER TABLE [dbo].[DataCCI] CHECK CONSTRAINT [CK_DataCCI]
GO
CREATE CLUSTERED COLUMNSTORE INDEX [cci] ON [dbo].[DataCCI] WITH (DROP_EXISTING = OFF, COMPRESSION_DELAY = 0) ON [PRIMARY]
GO
Step 4: Bulk load from sample staging to CCI
INSERT INTO DataCCI WITH (TABLOCK)
SELECT Description, JSON FROM Data
What am I missing? Is there a better way to do this or a workaround?
Thank you.
I was able to work around this issue by removing the constraints from the target table.
Cheers!
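For reference, a sketch of that workaround against the repro tables above (the constraint name comes from Step 3):

```sql
-- Drop the ISJSON check constraint from the target table before the bulk load
ALTER TABLE [dbo].[DataCCI] DROP CONSTRAINT [CK_DataCCI];
GO
-- The bulk load then completes without hitting the assertion
INSERT INTO DataCCI WITH (TABLOCK)
SELECT Description, JSON FROM Data;
```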
Currently, I create a table like this:
CREATE TABLE [dbo].[AM_Module](
[ModuleID] [int] IDENTITY(1,1) NOT NULL,
[ModuleName] [nvarchar](100) NULL,
[ParentID] [int] NULL,
CONSTRAINT [PK_AM_ModuleID] PRIMARY KEY CLUSTERED
(
[ModuleID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[AM_Module] WITH CHECK ADD CONSTRAINT [FK_AM_Module_AM_Module_ParentID] FOREIGN KEY([ParentID])
REFERENCES [dbo].[AM_Module] ([ModuleID])
GO
ALTER TABLE [dbo].[AM_Module] CHECK CONSTRAINT [FK_AM_Module_AM_Module_ParentID]
GO
where ParentID contains the parent's ModuleID.
And I have 2 rows like this:
ModuleID ModuleName ParentID
1 ParentName NULL
2 ChildName 1
Now, in EF, I run this LINQ query:
var q2 = from a in unitOfWork.RepositoryAsync<AM_Module>().Queryable() where a.ParentID == 1 select a;
var w2 = q2.ToList();
When I check with SQL Profiler, I see the query is translated like this:
SELECT
[Extent1].[ModuleID] AS [ModuleID],
[Extent1].[ModuleName] AS [ModuleName],
[Extent1].[ParentID] AS [ParentID],
FROM [dbo].[AM_Module] AS [Extent1]
WHERE 1 = [Extent1].[ParentID]
The SQL returns 1 row.
But when I check w2[0].AM_Module1 (the relationship with itself), it already contains the parent entity, including its ModuleName.
Why does this happen? The SQL returned only one row. How does EF know the parent's data?
Please advise.
This first statement failed because ASC cannot be used in a hash index:
CREATE TABLE [Gabe2a_ENRONb].[dbo].[FTSindexMO] (
[sID] [int] NOT NULL,
[wordPos] [int] NOT NULL,
[wordID] [int] NOT NULL,
[charPos] [int] NOT NULL,
CONSTRAINT [FTSindexMO] PRIMARY KEY
NONCLUSTERED HASH ([sID] asc, [wordPos] asc) WITH(BUCKET_COUNT = 100)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY )
but when I fix the CREATE statement, I get an error that [FTSindexMO] already exists:
CREATE TABLE [Gabe2a_ENRONb].[dbo].[FTSindexMO] (
[sID] [int] NOT NULL,
[wordPos] [int] NOT NULL,
[wordID] [int] NOT NULL,
[charPos] [int] NOT NULL,
CONSTRAINT [FTSindexMO] PRIMARY KEY
NONCLUSTERED HASH ([sID], [wordPos]) WITH(BUCKET_COUNT = 100)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY )
Msg 2714, Level 16, State 5, Line 74
There is already an object named 'FTSindexMO' in the database.
Msg 1750, Level 16, State 0, Line 74
Could not create constraint or index. See previous errors.
but I cannot drop the table:
drop table [Gabe2a_ENRONb].[dbo].[FTSindexMO]
Msg 3701, Level 11, State 5, Line 72
Cannot drop the table 'Gabe2a_ENRONb.dbo.FTSindexMO', because it does not exist or you do not have permission.
That name is not in sys.objects.
That table name is not displayed in SSMS (and I did refresh).
If I create another table with proper syntax, then I can delete it.
What is interesting is that if I use the proper syntax twice, the error message is not the same: it does not include the constraint error.
I had a problem a while ago with a regular table that got corrupt, and I was able to delete it from the Object Explorer Details view, but this table is not listed there either.
In your example T-SQL statements, you are trying to create a constraint with the same name as that of the table, i.e. 'FTSindexMO'. Hence, you are getting the error message "There is already an object named 'FTSindexMO' in the database." You cannot create a constraint with the same name as the table, because constraint names and table names share the same object namespace. This is the same behavior for disk-based tables and memory-optimized tables. You would need to use a different name for the constraint.
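For example, renaming the constraint (PK_FTSindexMO is just an illustrative choice) avoids the collision:

```sql
CREATE TABLE [Gabe2a_ENRONb].[dbo].[FTSindexMO] (
    [sID] [int] NOT NULL,
    [wordPos] [int] NOT NULL,
    [wordID] [int] NOT NULL,
    [charPos] [int] NOT NULL,
    -- Constraint name differs from the table name, so no object-name conflict
    CONSTRAINT [PK_FTSindexMO] PRIMARY KEY
        NONCLUSTERED HASH ([sID], [wordPos]) WITH (BUCKET_COUNT = 100)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
```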
Thanks & Regards, Pooja Harjani, Sr. Program Manager, SQL Server, Microsoft.
I have an MSSQL queries file (.sql), and now I need to convert it to MySQL queries.
Please help me. The script looks like this:
CREATE TABLE [dbo].[Artist](
[ArtistId] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](120) NULL,
PRIMARY KEY CLUSTERED
(
[ArtistId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
If you want to convert the DDL by hand, then you can do this by building up rules on a case-by-case basis, e.g. as follows:
- [] needs to be replaced with backticks
- IDENTITY(1,1) can be replaced with AUTO_INCREMENT
- Most of the ANSI options and device settings can be ignored (these seem to be present only because the table has been rescripted)
- w.r.t. dbo, MySQL doesn't implement schemas in the same way as SQL Server: you will either need to separate schemas into databases, drop the schema, or mangle the schema name into the table name (e.g. as a prefix)
This will leave you with something like the following:
CREATE TABLE `Artist` (
`ArtistId` int NOT NULL AUTO_INCREMENT,
`Name` nvarchar(120) NULL,
PRIMARY KEY (`ArtistId`)
);
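The hand rules above can also be sketched as a small script. This is a rough, rule-based sketch that only covers the patterns in this one example (the function name and regexes are my own, and real migrations should use a proper tool):

```python
import re

def mssql_ddl_to_mysql(ddl: str) -> str:
    """Rough sketch of the hand-conversion rules above; it only
    handles the patterns in this example, not general T-SQL."""
    out = ddl
    # Rule 1: [identifier] -> `identifier`
    out = re.sub(r"\[([^\]]+)\]", r"`\1`", out)
    # ...but type names must not be quoted; undo it for the types seen here
    out = re.sub(r"`(int|nvarchar)`", r"\1", out)
    # Rule 2: IDENTITY(1,1) -> AUTO_INCREMENT
    out = out.replace("IDENTITY(1,1)", "AUTO_INCREMENT")
    # Rule 3: the rescripted index options and filegroup clauses can be dropped
    out = re.sub(r"WITH \([^)]*\)", "", out)
    out = out.replace("ON `PRIMARY`", "")
    # Rule 4: drop the dbo schema prefix; MySQL also has no CLUSTERED keyword
    out = out.replace("`dbo`.", "").replace("PRIMARY KEY CLUSTERED", "PRIMARY KEY")
    return out

print(mssql_ddl_to_mysql(
    "CREATE TABLE [dbo].[Artist]([ArtistId] [int] IDENTITY(1,1) NOT NULL)"))
```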
However, it is usually much easier to do this migration with a migration tool - search for the section on How to Transition from SQL Server to MySQL