Per https://learn.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-data-loading-guidance?view=sql-server-2017, we're making some optimizations to a bulk load operation into a clustered columnstore index (CCI), and whenever we attempt the insert into the CCI, we get the following:
Location: columnset.cpp:3707
Expression: !pColBinding->IsLobAccessor()
SPID: 55
Process ID: 1988
Msg 3624, Level 20, State 1, Line 3
A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a Hotfix from Technical Support.
Msg 596, Level 21, State 1, Line 0
Cannot continue the execution because the session is in the kill state.
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
There is no data corruption: DBCC CHECKDB runs without errors. Inserting a small number of rows succeeds, but the insert fails once we go over 1,000 rows (we haven't pinned down the exact threshold at which it starts failing; it also fails with over a million). We are running SQL Server 2017, build 14.0.3223.3.
How to reproduce the problem:
Step 1: Create a sample staging table
CREATE TABLE [dbo].[Data](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Description] [varchar](50) NOT NULL,
[JSON] [nvarchar](max) NOT NULL,
CONSTRAINT [PK_Data] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[Data] WITH CHECK ADD CONSTRAINT [CK_Data] CHECK ((isjson([JSON])=(1)))
GO
ALTER TABLE [dbo].[Data] CHECK CONSTRAINT [CK_Data]
GO
Step 2: Fill the staging table with sample data (our JSON column is over 100KB)
DECLARE @i INT = 1
WHILE (@i < 1000)
BEGIN
INSERT INTO Data (Description, JSON)
SELECT 'Test' AS Description, BulkColumn AS JSON
FROM OPENROWSET (BULK 'C:\Temp\JSON.json', SINGLE_CLOB) AS J
SET @i = @i + 1
END
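For reference, a set-based alternative that reads the file only once (a sketch; it assumes the same sample file, and sys.all_objects is used purely as a convenient row source with more than 999 rows):
-- Read the sample document once into a variable
DECLARE @json NVARCHAR(MAX) =
(SELECT BulkColumn FROM OPENROWSET (BULK 'C:\Temp\JSON.json', SINGLE_CLOB) AS J)
-- Insert 999 copies using a catalog view as a row source
INSERT INTO Data (Description, JSON)
SELECT TOP (999) 'Test', @json
FROM sys.all_objects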
Step 3: Create a sample target table and CCI
CREATE TABLE [dbo].[DataCCI](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Description] [varchar](50) NOT NULL,
[JSON] [nvarchar](max) NOT NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[DataCCI] WITH CHECK ADD CONSTRAINT [CK_DataCCI] CHECK ((isjson([JSON])=(1)))
GO
ALTER TABLE [dbo].[DataCCI] CHECK CONSTRAINT [CK_DataCCI]
GO
CREATE CLUSTERED COLUMNSTORE INDEX [cci] ON [dbo].[DataCCI] WITH (DROP_EXISTING = OFF, COMPRESSION_DELAY = 0) ON [PRIMARY]
GO
Step 4: Bulk load from sample staging to CCI
INSERT INTO DataCCI WITH (TABLOCK)
SELECT Description, JSON FROM Data
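For what it's worth, whether the TABLOCK insert takes the bulk-load path can be checked after a successful smaller load by inspecting the rowgroups (a diagnostic sketch; compressed rowgroups rather than open delta stores indicate the bulk path):
SELECT state_desc, total_rows, trim_reason_desc
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = OBJECT_ID('dbo.DataCCI')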
What am I missing? Is there a better way to do this or a workaround?
Thank you.
I was able to work around this issue by removing the constraints from the target table.
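For anyone else who hits this, a minimal sketch of that workaround (drop the CHECK constraint, run the bulk load, then re-add it; WITH CHECK makes SQL Server re-validate the loaded rows):
ALTER TABLE [dbo].[DataCCI] DROP CONSTRAINT [CK_DataCCI]
GO
INSERT INTO DataCCI WITH (TABLOCK)
SELECT Description, JSON FROM Data
GO
ALTER TABLE [dbo].[DataCCI] WITH CHECK ADD CONSTRAINT [CK_DataCCI] CHECK ((isjson([JSON])=(1)))
GO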
Cheers!
This first statement failed with an error saying that ASC cannot be used:
CREATE TABLE [Gabe2a_ENRONb].[dbo].[FTSindexMO] (
[sID] [int] NOT NULL,
[wordPos] [int] NOT NULL,
[wordID] [int] NOT NULL,
[charPos] [int] NOT NULL,
CONSTRAINT [FTSindexMO] PRIMARY KEY
NONCLUSTERED HASH ([sID] asc, [wordPos] asc) WITH(BUCKET_COUNT = 100)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY )
but when I fix the CREATE, I get an error that [FTSindexMO] already exists:
CREATE TABLE [Gabe2a_ENRONb].[dbo].[FTSindexMO] (
[sID] [int] NOT NULL,
[wordPos] [int] NOT NULL,
[wordID] [int] NOT NULL,
[charPos] [int] NOT NULL,
CONSTRAINT [FTSindexMO] PRIMARY KEY
NONCLUSTERED HASH ([sID], [wordPos]) WITH(BUCKET_COUNT = 100)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY )
Msg 2714, Level 16, State 5, Line 74
There is already an object named 'FTSindexMO' in the database.
Msg 1750, Level 16, State 0, Line 74
Could not create constraint or index. See previous errors.
but I cannot drop the table:
drop table [Gabe2a_ENRONb].[dbo].[FTSindexMO]
Msg 3701, Level 11, State 5, Line 72
Cannot drop the table 'Gabe2a_ENRONb.dbo.FTSindexMO', because it does not exist or you do not have permission.
That name is not in sys.objects.
That table name is not displayed in SSMS (and I did refresh).
If I create another table with proper syntax, then I can delete it.
What is interesting is that if I use the proper syntax twice, the error message is not the same; it does not include the constraint error.
I had a problem a while ago with a regular table that got corrupted, and I was able to delete it from the Object Explorer Details view, but this table is not listed there either.
In your example T-SQL statements, you are trying to create a constraint with the same name as the table, i.e. 'FTSindexMO'. Hence you are getting the error message "There is already an object named 'FTSindexMO' in the database." You cannot create a constraint with the same name as its table. This is the same behavior for disk-based tables and memory-optimized tables. You would need to use a different name for the constraint.
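For example, giving the constraint its own name (PK_FTSindexMO below is just an illustration; any name other than the table's works) lets the CREATE succeed:
CREATE TABLE [Gabe2a_ENRONb].[dbo].[FTSindexMO] (
[sID] [int] NOT NULL,
[wordPos] [int] NOT NULL,
[wordID] [int] NOT NULL,
[charPos] [int] NOT NULL,
CONSTRAINT [PK_FTSindexMO] PRIMARY KEY
NONCLUSTERED HASH ([sID], [wordPos]) WITH (BUCKET_COUNT = 100)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)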
Thanks & Regards, Pooja Harjani, Sr. Program Manager, SQL Server, Microsoft.
I'm trying to generate the scripts for my DB in SQL Server 2008, and I'm able to do that. The generated script is:
USE [Cab_Booking]
GO
/****** Object: Table [dbo].[User] Script Date: 05/19/2013 10:33:05 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[User]') AND type in (N'U'))
BEGIN
CREATE TABLE [dbo].[User](
[U_Id] [int] IDENTITY(1,1) NOT NULL,
[UserName] [nvarchar](50) NOT NULL,
[Password] [nvarchar](50) NOT NULL,
add column new int not null, -- this is my attempt at adding the column
CONSTRAINT [PK_User] PRIMARY KEY CLUSTERED
(
[U_Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
END
GO
What should I do if I need to add a new column to my table through scripts?
I know this sounds easy, but I don't know what I'm missing.
Thanks.
Right-click the table name in SQL Server Management Studio and select "Design". Then add a column using the designer, but don't save. Right-click anywhere on the table designer and select "Generate Change Script". Now you have the script required to add the new column to the table. This method also works for removing columns, changing data types, etc.
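Alternatively, you can script the change directly with ALTER TABLE. A sketch (the column name New, its type, and the default constraint name DF_User_New are placeholders; the DEFAULT is needed because a NOT NULL column cannot be added to a table that already has rows without one):
IF NOT EXISTS (SELECT * FROM sys.columns
WHERE object_id = OBJECT_ID(N'[dbo].[User]') AND name = N'New')
BEGIN
ALTER TABLE [dbo].[User]
ADD [New] int NOT NULL CONSTRAINT [DF_User_New] DEFAULT (0)
END
GO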
I need to import data from multiple distributed databases (around 70) into a single table. How can this be done with SSIS 2008?
Assuming that you can run the same query against each of the 70 source servers, you can use a ForEach Loop with a single Data Flow Task. The source connection manager's ConnectionString should be an expression using the loop variables.
Here's an example reading the INFORMATION_SCHEMA.COLUMNS view from multiple DBs. I created the following tables on my local instance:
CREATE TABLE [MultiDbDemo].[SourceConnections](
[DatabaseKey] [int] IDENTITY(1,1) NOT NULL,
[ServerName] [varchar](50) NOT NULL,
[DatabaseName] [varchar](50) NOT NULL,
CONSTRAINT [PK_SourceConnections] PRIMARY KEY CLUSTERED
(
[DatabaseKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [MultiDbDemo].[SourceColumns](
[ColumnKey] [int] IDENTITY(1,1) NOT NULL,
[ServerName] [varchar](50) NOT NULL,
[DatabaseName] [varchar](50) NOT NULL,
[SchemaName] [varchar](50) NOT NULL,
[TableName] [varchar](50) NOT NULL,
[ColumnName] [varchar](50) NOT NULL,
CONSTRAINT [PK_SourceColumns] PRIMARY KEY CLUSTERED
(
[ColumnKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
The package's control flow is an Execute SQL Task (SQL_GetSourceList) followed by a ForEach Loop container that wraps the Data Flow Task.
The Source_AdoDotNet connection manager's ConnectionString property is set to an expression built from the loop variables.
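An expression along these lines does the job (a sketch; it assumes the loop's variable mappings populate User::ServerName and User::DatabaseName, and the exact string depends on your provider):
"Data Source=" + @[User::ServerName] + ";Initial Catalog=" + @[User::DatabaseName] + ";Integrated Security=SSPI;"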
SQL_GetSourceList's SQLStatement property is SELECT ServerName, DatabaseName FROM MultiDbDemo.SourceConnections, and the ResultSet is mapped to the User::SourceList variable.
The ForEach Loop container uses the Foreach ADO Enumerator. Note that its ADO object source variable is set to the User::SourceList variable populated in the SQL_GetSourceList task, and the loop's variable mappings fill the per-iteration server and database variables.
The data flow reads from ADO_SRC_SourceInfo, an ADO.NET source that uses the Source_AdoDotNet connection manager and issues the following query:
SELECT LEFT(TABLE_SCHEMA, 50) AS SchemaName,
LEFT(TABLE_NAME, 50) AS TableName,
LEFT(COLUMN_NAME, 50) AS ColumnName
FROM INFORMATION_SCHEMA.COLUMNS
The net effect of all this is that, for each database listed in the SourceConnections table, we execute this query and save the results in the SourceColumns table.
You will still need 70 destination components. Simply specify the same table in all of them.
I have an MSSQL query file (.sql), and I need to convert it to MySQL queries.
Please help me. The script looks like this:
CREATE TABLE [dbo].[Artist](
[ArtistId] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](120) NULL,
PRIMARY KEY CLUSTERED
(
[ArtistId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
If you want to convert the DDL by hand, you can build up rules on a case-by-case basis, e.g. as follows:
[] need to be replaced with backticks
IDENTITY(1,1) can be replaced with AUTO_INCREMENT
Most of the ANSI options and device settings can be ignored (these seem to be present only because the table has been rescripted)
The CLUSTERED keyword in the primary key definition must be dropped, since MySQL does not support it
w.r.t. dbo, MySQL doesn't implement schemas in the same way as SQL Server: you will either need to separate schemas into databases, drop the schema, or mangle the schema name into the table name (e.g. as a prefix)
This will leave you with something like the following:
CREATE TABLE `Artist`(
`ArtistId` int NOT NULL AUTO_INCREMENT,
`Name` nvarchar(120) NULL,
PRIMARY KEY
(
`ArtistId` ASC
)
);
However, it is usually much easier to do this migration with a migration tool - search for the section on How to Transition from SQL Server to MySQL
I'm trying out the BCP utility on SQL Server 2008 Express. I don't think that what I'm trying to do could be more trivial, but still I'm getting a primary key violation when trying to insert two rows into an empty table.
Here is the table DDL:
CREATE TABLE [dbo].[BOOKS](
[BOOK_ID] [numeric](18, 0) NOT NULL,
[BOOK_DESCRIPTION] [varchar](200) NULL,
CONSTRAINT [BOOKS PK] PRIMARY KEY CLUSTERED
(
[BOOK_ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
Here is the BCP format file:
10.0
2
1 SQLNUMERIC 0 3 "\t" 1 BOOK_ID ""
2 SQLCHAR 0 0 "\r\n" 2 BOOK_DESCRIPTION Modern_Spanish_CI_AS
and here is my input file:
101 BOOK_ABC_001<CR><LF>
102 BOOK_ABC_002<CR><LF>
Finally, here is the command I run:
bcp Database.dbo.BOOKS in books.txt -T -f BOOKS-format.fmt
and here is the error I get:
Starting copy...
SQLState = 23000, NativeError = 2627
Error = [Microsoft][SQL Server Native Client 10.0][SQL Server]Violation of PRIMARY KEY constraint 'BOOKS PK'. Cannot insert duplicate key in object 'dbo.BOOKS'.
SQLState = 01000, NativeError = 3621
Warning = [Microsoft][SQL Server Native Client 10.0][SQL Server]The statement has been terminated.
BCP copy in failed
Now, BCP succeeds if I use an input file with a single line. In this case, the BOOK_ID column gets assigned a value of 0. So it seems that the first field in my input file is being ignored, and 0 is being used as the value for BOOK_ID for all the rows, which would explain the PK violation error.
So the question is, what is wrong in my format or input files that causes the first column to be ignored?
Thanks.
I've never seen a primary key column with datatype DEC; I'm not sure decimals work. I've always used integers.
But I think the problem is that the PK column doesn't have IDENTITY set, so it isn't auto-incrementing when a new row is added. In your table creation code, replace:
[BOOK_ID] [numeric](18, 0) NOT NULL,
with
[BOOK_ID] [int] IDENTITY(1,1) NOT NULL,
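If you go the IDENTITY route, the format file must also change so that bcp skips the first field in the data file. A sketch (setting the server column order to 0 tells bcp to discard that field; SQLCHAR is used since the data file is character data):
10.0
2
1       SQLCHAR       0       0       "\t"      0     BOOK_ID            ""
2       SQLCHAR       0       0       "\r\n"    2     BOOK_DESCRIPTION   Modern_Spanish_CI_AS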
Cheers