I am using SQL Server 2008 and want to import my database to Azure. I have a (*.bak) file. Is there any workaround to restore my database to Azure without changing my database structure?
I tried SQLAzureMW, but it gives me this error:
'Filegroup reference and partitioning scheme' is not supported in this version of SQL Server.
I searched for the Filegroup keyword in the scripts, but it isn't there.
I also tried the Azure Silverlight Management Tool, but it gives me the same error.
I get the above error while running this script:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [Mohsin].[Supplier](
[SuppID] [int] NOT NULL,
[Name] [varchar](50) NULL,
[street] [varchar](30) NULL,
[City] [varchar](20) NULL,
[State] [varchar](20) NULL,
[County] [varchar](30) NULL,
[PostalCode] [varchar](25) NULL,
[Phone] [varchar](17) NULL,
[Fax] [varchar](17) NULL,
[Active] [bit] NULL,
CONSTRAINT [PK_Supplier] PRIMARY KEY CLUSTERED
(
[SuppID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
Make sure there isn't a reference to an index or other object indicating that it should go on the PRIMARY filegroup. Search for ON [PRIMARY] or ON PRIMARY, or just the word PRIMARY.
The SQL Azure Migration Wizard is usually pretty good about pointing out exactly what it doesn't like.
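If the wizard still flags this table, one manual workaround is to strip the filegroup clauses from the generated script before running it against Azure. This is a minimal sketch based on the Supplier table above; check the remaining index options against your target Azure version, as some of those may need to go as well:
-- same table as above, with ON [PRIMARY] removed from both the constraint and the table
CREATE TABLE [Mohsin].[Supplier](
    [SuppID] [int] NOT NULL,
    [Name] [varchar](50) NULL,
    [street] [varchar](30) NULL,
    [City] [varchar](20) NULL,
    [State] [varchar](20) NULL,
    [County] [varchar](30) NULL,
    [PostalCode] [varchar](25) NULL,
    [Phone] [varchar](17) NULL,
    [Fax] [varchar](17) NULL,
    [Active] [bit] NULL,
    CONSTRAINT [PK_Supplier] PRIMARY KEY CLUSTERED ([SuppID] ASC)
)
GO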
Related
I have seen this question asked quite a few times, and most of those cases end with a logical explanation. My table doesn't seem to be anywhere near the maximum row size.
My dev server is SQL Server 2008 Express Edition.
This is my table definition. I have one varchar(max) column and the rest of my columns should be tiny. The "Notes" field didn't contain much text, only a few characters.
CREATE TABLE [dbo].[PegBoard](
[ID] [int] IDENTITY(1,1) NOT NULL,
[OwnersCorporationID] [int] NOT NULL,
[PegNumber] [int] NOT NULL,
[DepositRequired] [bit] NOT NULL,
[KeyRegister] [bit] NOT NULL,
[Notes] [varchar](max) NULL,
[Locked] [bit] NOT NULL,
[NonLoanable] [bit] NULL,
[DepositAmount] [decimal](10, 2) NULL,
[LastReviewedDate] [datetime] NULL,
CONSTRAINT [PK_PegBoard] PRIMARY KEY NONCLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
CONSTRAINT [PegBoard_PN_Cnst] UNIQUE NONCLUSTERED
(
[PegNumber] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
When I insert a row, I receive the following message:
Error:System.Data.SqlClient.SqlException: Cannot create a row of size
8066 which is greater than the allowable maximum row size of 8060.
The statement has been terminated.
I have seen a similar warning when I added new columns to the table, but it didn't seem to cause a failure until now.
Any ideas what could cause this problem and what I might try to fix it?
Thanks in Advance
David
Update
Here is the structure of my Audit Table if that helps.
The trigger itself is quite complicated, as it was generated from some code I found that handles all tables automatically.
CREATE TABLE [dbo].[Audit](
[AuditID] [int] IDENTITY(1,1) NOT NULL,
[Type] [char](1) NULL,
[TableName] [varchar](128) NULL,
[PrimaryKeyField] [varchar](1000) NULL,
[PrimaryKeyValue] [varchar](1000) NULL,
[FieldName] [varchar](128) NULL,
[OldValue] [varchar](1000) NULL,
[NewValue] [varchar](1000) NULL,
[UpdateDate] [datetime] NULL,
[UserName] [varchar](128) NULL
) ON [PRIMARY]
It is a common misunderstanding that using VARCHAR(MAX) is a good idea in all cases... If you really have to deal with strings larger than 8,000 bytes, you could consider VARBINARY(MAX) or XML.
Read this: https://www.simple-talk.com/sql/database-administration/whats-the-point-of-using-varchar(n)-anymore/
But it may work: read this: why row insert above 8053 bytes not giving error when it should because max allowed row limit is 8060
Another problem with VARCHAR(MAX) is that in some statements the implicitly used data type is the "normal" varchar, and you need extra casts:
Read this: https://stackoverflow.com/a/33031838/5089204
Conclusion: if you do not expect really large text, it's better to use a VARCHAR(xx) definition.
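Here is a minimal sketch of the implicit-typing pitfall mentioned above; REPLICATE is used only to build a long string, and its documented behaviour is to cap the result at 8,000 bytes unless the input is already a MAX type:
DECLARE @s varchar(max)
-- the literal 'x' is plain varchar, so the result is silently capped at 8000
SET @s = REPLICATE('x', 10000)
SELECT LEN(@s)   -- 8000
-- casting the input to varchar(max) first keeps the full length
SET @s = REPLICATE(CAST('x' AS varchar(max)), 10000)
SELECT LEN(@s)   -- 10000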
I'm trying to generate the scripts for my DB in SQL Server 2008, and I'm able to do that. The generated script is:
USE [Cab_Booking]
GO
/****** Object: Table [dbo].[User] Script Date: 05/19/2013 10:33:05 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[User]') AND type in (N'U'))
BEGIN
CREATE TABLE [dbo].[User](
[U_Id] [int] IDENTITY(1,1) NOT NULL,
[UserName] [nvarchar](50) NOT NULL,
[Password] [nvarchar](50) NOT NULL,
add column new int not null,
CONSTRAINT [PK_User] PRIMARY KEY CLUSTERED
(
[U_Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
END
GO
What should I do if I need to add a new column to my table through scripts?
I know this sounds easy... but I don't know what I am missing.
Thanks.
Right click on the table name in SQL Server Management Studio and select "Design". Then add a column using the designer, but don't save. Right click anywhere on the table designer and select "Generate Change Script". Now you have the script required to add the new column to the table. This method also works for removing columns, changing data types, etc.
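For reference, the change script SSMS generates for adding a column typically boils down to an ALTER TABLE statement. Here is a sketch against the [dbo].[User] table above; the column name NewColumn and its default constraint are placeholders, not part of the original question:
IF NOT EXISTS (SELECT * FROM sys.columns
               WHERE object_id = OBJECT_ID(N'[dbo].[User]') AND name = N'NewColumn')
BEGIN
    -- a NOT NULL column added to a table that already has rows needs a default
    ALTER TABLE [dbo].[User]
        ADD [NewColumn] [int] NOT NULL
            CONSTRAINT [DF_User_NewColumn] DEFAULT (0)
END
GO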
I need to import the data from multiple distributed databases (around 70) into a single table. How is this possible through SSIS 2008?
Assuming that you can run the same query against each of the 70 source servers, you can use a ForEach Loop with a single Data Flow Task. The source connection manager's ConnectionString should be an expression using the loop variables.
Here's an example reading the INFORMATION_SCHEMA.COLUMNS view from multiple DBs. I created the following tables on my local instance:
CREATE TABLE [MultiDbDemo].[SourceConnections](
[DatabaseKey] [int] IDENTITY(1,1) NOT NULL,
[ServerName] [varchar](50) NOT NULL,
[DatabaseName] [varchar](50) NOT NULL,
CONSTRAINT [PK_SourceConnections] PRIMARY KEY CLUSTERED
(
[DatabaseKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [MultiDbDemo].[SourceColumns](
[ColumnKey] [int] IDENTITY(1,1) NOT NULL,
[ServerName] [varchar](50) NOT NULL,
[DatabaseName] [varchar](50) NOT NULL,
[SchemaName] [varchar](50) NOT NULL,
[TableName] [varchar](50) NOT NULL,
[ColumnName] [varchar](50) NOT NULL,
CONSTRAINT [PK_SourceColumns] PRIMARY KEY CLUSTERED
(
[ColumnKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
This is the control flow for the SSIS package:
The Source_AdoDotNet connection manager's ConnectionString property is set to the following expression:
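The screenshot of that expression is not reproduced here. As a sketch, assuming the ForEach Loop maps each row of User::SourceList into variables named User::ServerName and User::DatabaseName (those variable names are an assumption, and the security settings should be adjusted to your environment), the expression would look something like:
"Data Source=" + @[User::ServerName]
    + ";Initial Catalog=" + @[User::DatabaseName]
    + ";Integrated Security=True;"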
SQL_GetSourceList's SQLStatement property is SELECT ServerName, DatabaseName FROM MultiDbDemo.SourceConnections, and the ResultSet is mapped to the User::SourceList variable.
The ForEach Loop task is configured thusly:
Note that the ADO object source variable is set to the User::SourceList variable populated in the SQL_GetSourceList task.
And the data flow looks like this:
ADO_SRC_SourceInfo is configured thusly:
The net effect of all this is that, for each database listed in the SourceConnections table, we execute the query SELECT LEFT(TABLE_SCHEMA, 50) AS SchemaName, LEFT(TABLE_NAME, 50) AS TableName, LEFT(COLUMN_NAME, 50) AS ColumnName FROM INFORMATION_SCHEMA.COLUMNS and save the results in the SourceColumns table.
You will still need 70 destination components. Simply specify the same table in all of them.
I have an MSSQL queries file (.sql), and now I need to convert it to MySQL queries.
Please help me. The script looks like this:
CREATE TABLE [dbo].[Artist](
[ArtistId] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](120) NULL,
PRIMARY KEY CLUSTERED
(
[ArtistId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
If you want to convert the DDL by hand, then you can do this by building up rules on a case by case basis, e.g. as follows:
The square brackets [] need to be replaced with backticks
IDENTITY(1,1) can be replaced with AUTO_INCREMENT
Most of the ANSI options and device settings can be ignored (these seem to be present only because the table has been rescripted)
w.r.t. dbo, MySQL doesn't implement schemas in the same way as SQL Server - you will either need to separate schemas into databases, or drop the schema, or mangle the schema name into the table name (e.g. as a prefix)
This will leave you with something like the following:
CREATE TABLE `Artist`(
    `ArtistId` int NOT NULL AUTO_INCREMENT,
    `Name` nvarchar(120) NULL,
    PRIMARY KEY (`ArtistId` ASC)
);
Fiddle here
However, it is usually much easier to do this migration with a migration tool - search for the section on How to Transition from SQL Server to MySQL
I have a simple stored procedure for INSERT. Lately I have been getting unusual errors, and this one doesn't seem like it should be one.
I get the following error when I try to compile the stored procedure:
Msg 137, Level 15, State 2, Procedure usp_AddArticleCategory, Line 15
Must declare the scalar variable "@ArtcileCategoryActive".
This is my code:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- =============================================
CREATE PROCEDURE [dbo].[usp_AddArticleCategory]
@ArticleCategoryName nvarchar(200),
@LangID int,
@ArticleCategoryActive bit,
@Type nvarchar(100)
AS
SET NOCOUNT ON;
BEGIN
INSERT INTO art_Category (ArticleCategoryName,[LangID],ArtcileCategoryActive,[Type])
VALUES (@ArticleCategoryName, @LangID, @ArtcileCategoryActive, @Type)
END
Table structure:
CREATE TABLE [dbo].[art_Category](
[ArticleCategoryID] [int] IDENTITY(1,1) NOT NULL,
[ArticleCategoryName] [nvarchar](200) NULL,
[LangID] [int] NULL,
[ArtcileCategoryActive] [bit] NULL,
[Type] [nvarchar](100) NULL,
CONSTRAINT [PK_art_Category] PRIMARY KEY CLUSTERED
(
[ArticleCategoryID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
I compared the table columns and data types, but I am not sure why this error keeps coming up.
I'd appreciate help with this.
You're declaring:
ArticleCategoryActive
and using:
ArtcileCategoryActive
Spot the difference? :)
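To spell it out: the fix is to make the VALUES list use the parameter name exactly as it was declared, while the table column keeps its (misspelled) name. A sketch of the corrected INSERT:
-- parameter is @ArticleCategoryActive; the column is still named ArtcileCategoryActive
INSERT INTO art_Category (ArticleCategoryName, [LangID], ArtcileCategoryActive, [Type])
VALUES (@ArticleCategoryName, @LangID, @ArticleCategoryActive, @Type)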