Noob to SQLAlchemy. :/
Using Python 2.7 with SQLAlchemy 0.9.2 on Windows 7.
I receive the following message:
Traceback (most recent call last):
File "test.py", line 46, in <module>
result = Session().query(Table_Enum_country).get(1)
File "c:\Python27\lib\site-packages\sqlalchemy-0.9.2-py2.7.egg\sqlalchemy\orm\query.py", line 786, in get
mapper = self._only_full_mapper_zero("get")
File "c:\Python27\lib\site-packages\sqlalchemy-0.9.2-py2.7.egg\sqlalchemy\orm\query.py", line 322, in _only_full_mapper_zero
"a single mapped class." % methname)
sqlalchemy.exc.InvalidRequestError: get() can only be used against a single mapped class.
I have a SQL Server 2012 table defined as follows:
CREATE TABLE [dbo].[enum_countries](
[created_at] [datetime] NULL,
[updated_at] [datetime] NULL,
[updated_by_id] [int] NULL,
[created_by_id] [int] NULL,
[name] [nvarchar](256) NULL,
[description] [nvarchar](256) NULL,
[_enabled] [bit] NULL,
[nationality] [nvarchar](50) NULL,
[id] [int] IDENTITY(1,1) NOT NULL,
[code] [nvarchar](10) NULL,
CONSTRAINT [PK__enum_countries__4CA06362] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]
My Python code is as follows:
import logging
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("sqlalchemy").setLevel(logging.WARN)
from sqlalchemy import create_engine
from sqlalchemy import MetaData
from sqlalchemy import Table
from sqlalchemy.orm import sessionmaker
# Initialize SQLAlchemy Environment
db = create_engine("mssql://user:password@localhost/localDB")
Session = sessionmaker(bind=db)
metadata = MetaData()
# Grab ALL tables in one hit.
# metadata.reflect(bind=db)
Table_Enum_country = Table(u'enum_countries', metadata, autoload=True, autoload_with=db)
result = Session().query(Table_Enum_country).get(1)
print result
What am I doing wrong? Why can't I simply 'get()' the row?
If there is a better way of doing this, I am all eyes :)
Thanks in advance.
...Lyall
get() is for mapped classes, not for plain Table objects; for a Table you should use its select() method and fetch the row from the result:
row = Session().execute(Table_Enum_country.select().where(Table_Enum_country.columns.id == 1)).fetchone()
Alternatively, map the table to a class:
from sqlalchemy.orm import mapper

class EnumCountries(object):
    pass

mapper(EnumCountries, Table_Enum_country)
instance = Session().query(EnumCountries).get(1)
Per https://learn.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-data-loading-guidance?view=sql-server-2017, we're making some optimizations for a bulk load operation into a clustered columnstore index (CCI) and, whenever we attempt the insert into the CCI, we get the following:
Location: columnset.cpp:3707
Expression: !pColBinding->IsLobAccessor()
SPID: 55
Process ID: 1988
Msg 3624, Level 20, State 1, Line 3
A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a Hotfix from Technical Support.
Msg 596, Level 21, State 1, Line 0
Cannot continue the execution because the session is in the kill state.
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
There is no data corruption--DBCC CHECKDB runs without errors. Inserting a small number of rows succeeds, but it fails when we try over 1000 (we haven't tried to figure out the exact number where failure occurs, but we also tried over a million). We are running SQL Server 2017, 14.0.3223.3.
How to reproduce the problem:
Step 1: Create a sample staging table
CREATE TABLE [dbo].[Data](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Description] [varchar](50) NOT NULL,
[JSON] [nvarchar](max) NOT NULL,
CONSTRAINT [PK_Data] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[Data] WITH CHECK ADD CONSTRAINT [CK_Data] CHECK ((isjson([JSON])=(1)))
GO
ALTER TABLE [dbo].[Data] CHECK CONSTRAINT [CK_Data]
GO
Step 2: Fill the staging table with sample data (our JSON column is over 100KB)
DECLARE @i INT = 1
WHILE (@i < 1000)
BEGIN
    INSERT INTO Data
    SELECT 'Test' AS Description, BulkColumn AS JSON
    FROM OPENROWSET (BULK 'C:\Temp\JSON.json', SINGLE_CLOB) AS J
    SET @i = @i + 1
END
Step 3: Create a sample target table and CCI
CREATE TABLE [dbo].[DataCCI](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Description] [varchar](50) NOT NULL,
[JSON] [nvarchar](max) NOT NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[DataCCI] WITH CHECK ADD CONSTRAINT [CK_DataCCI] CHECK ((isjson([JSON])=(1)))
GO
ALTER TABLE [dbo].[DataCCI] CHECK CONSTRAINT [CK_DataCCI]
GO
CREATE CLUSTERED COLUMNSTORE INDEX [cci] ON [dbo].[DataCCI] WITH (DROP_EXISTING = OFF, COMPRESSION_DELAY = 0) ON [PRIMARY]
GO
Step 4: Bulk load from sample staging to CCI
INSERT INTO DataCCI WITH (TABLOCK)
SELECT Description, JSON FROM Data
What am I missing? Is there a better way to do this or a workaround?
Thank you.
I was able to work around this issue by removing the constraints from the target table.
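Against the sample schema above, that looks roughly like this (a sketch; re-creating the constraint afterwards is optional, and WITH CHECK will revalidate every row):
ALTER TABLE [dbo].[DataCCI] DROP CONSTRAINT [CK_DataCCI]
GO
INSERT INTO DataCCI WITH (TABLOCK)
SELECT Description, JSON FROM Data
GO
-- Optionally put the constraint back once the load has finished.
ALTER TABLE [dbo].[DataCCI] WITH CHECK ADD CONSTRAINT [CK_DataCCI] CHECK ((isjson([JSON])=(1)))
GO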
Cheers!
I have seen this question asked quite a few times and most of them end with a logical explanation. My table doesn't seem to be anywhere near the maximum row size.
My Dev Server is SQL 2008 Express Edition
This is my table definition. I have one varchar(max) column and the rest of my columns should be tiny. The "Notes" field didn't contain much text, only a few characters.
CREATE TABLE [dbo].[PegBoard](
[ID] [int] IDENTITY(1,1) NOT NULL,
[OwnersCorporationID] [int] NOT NULL,
[PegNumber] [int] NOT NULL,
[DepositRequired] [bit] NOT NULL,
[KeyRegister] [bit] NOT NULL,
[Notes] [varchar](max) NULL,
[Locked] [bit] NOT NULL,
[NonLoanable] [bit] NULL,
[DepositAmount] [decimal](10, 2) NULL,
[LastReviewedDate] [datetime] NULL,
CONSTRAINT [PK_PegBoard] PRIMARY KEY NONCLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
CONSTRAINT [PegBoard_PN_Cnst] UNIQUE NONCLUSTERED
(
[PegNumber] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
When I insert a row I receive the following message.
Error:System.Data.SqlClient.SqlException: Cannot create a row of size
8066 which is greater than the allowable maximum row size of 8060.
The statement has been terminated.
I have seen a similar warning when I added new columns to the table but it didn't seem to cause a failure until now.
Any ideas what could cause this problem and what I might try to fix it.
Thanks in Advance
David
Update
Here is the structure of my Audit Table if that helps.
The trigger itself is quite complicated, as it was generated from some code I found that handles all tables automatically.
CREATE TABLE [dbo].[Audit](
[AuditID] [int] IDENTITY(1,1) NOT NULL,
[Type] [char](1) NULL,
[TableName] [varchar](128) NULL,
[PrimaryKeyField] [varchar](1000) NULL,
[PrimaryKeyValue] [varchar](1000) NULL,
[FieldName] [varchar](128) NULL,
[OldValue] [varchar](1000) NULL,
[NewValue] [varchar](1000) NULL,
[UpdateDate] [datetime] NULL,
[UserName] [varchar](128) NULL
) ON [PRIMARY]
It is a common misunderstanding that using VARCHAR(MAX) is a good idea in all cases... If you really have to deal with strings larger than 8000 bytes, you could think about VARBINARY(MAX) or XML.
Read this: https://www.simple-talk.com/sql/database-administration/whats-the-point-of-using-varchar(n)-anymore/
But it may still work. Read this: why row insert above 8053 bytes not giving error when it should because max allowed row limit is 8060
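If you want to see how close your existing rows actually get to that limit, sys.dm_db_index_physical_stats reports the largest record per index; a sketch (the size columns are only populated in SAMPLED or DETAILED mode):
SELECT index_id, index_type_desc, max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.PegBoard'), NULL, NULL, 'DETAILED')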
Another problem with VARCHAR(MAX) is that in some statements the implicitly used data type is the "normal" VARCHAR and you need extra casts:
Read this: https://stackoverflow.com/a/33031838/5089204
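A quick way to see that implicit-cast effect is REPLICATE, which returns the data type of its first argument (a small illustration of the gotcha, not taken from the linked answer):
SELECT LEN(REPLICATE('x', 10000))                       -- returns 8000: the literal is a plain varchar
SELECT LEN(REPLICATE(CAST('x' AS VARCHAR(MAX)), 10000)) -- returns 10000, as expected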
Conclusion: if you do not expect really large text, it's better to use a VARCHAR(n) definition.
I'm trying to translate a LINQ to SQL from C# to F#. This is what I've come up with:
let get (roles : List<string>) =
query {
for t in db.Tabs do
join r in db.Regions on (t.Id = r.FkTab)
join l in db.Links on (r.Id = l.FkRegion)
where (roles.Contains(l.Role) || roles.Contains("Administrator"))
groupValBy (t,r,l) (t.Id, t.Label) into tt
select {
Label = snd tt.Key;
Regions = query {
for xr in db.Regions do
join xl in db.Links on (xr.Id = xl.FkRegion)
where (xr.FkTab.Equals (fst tt.Key) && (roles.Contains(xl.Role) || roles.Contains("Administrator")))
groupValBy (xr, xl) (xr.Id, xr.Label) into rr
select {
Label = snd rr.Key;
Links = query {
for yl in db.Links do
where (yl.FkRegion.Equals (fst rr.Key) && (roles.Contains(yl.Role) || roles.Contains("Administrator")))
select {
Label = yl.Label;
Url = yl.Url;
Role = yl.Role
}
} |> Seq.toList
}
} |> Seq.toList
}
} |> Seq.toList
Instead of classes I've used records, here they are:
type Link = {Url : string; Role : string; Label : string}
type Region = {Label : string; Links : List<Link>}
type Tab = {Label : string; Regions : List<Region> }
Unfortunately I'm getting an exception at runtime:
A first chance exception of type 'System.NotSupportedException' occurred in System.Data.Linq.dll
The message is: System.NotSupportedException: Query operator 'AsQueryable' not supported.
I'm very green at F#; what am I missing? Perhaps I cannot use records in LINQ to SQL, or cannot nest queries in that manner?
Thanks
EDIT:
db is the database context:
let db = SqlConnection.GetDataContext()
If you wish to try this at home, here is the script to create the three tables needed:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Links](
[id] [int] IDENTITY(1,1) NOT NULL,
[fkRegion] [int] NOT NULL,
[label] [varchar](50) NOT NULL,
[role] [varchar](50) NOT NULL,
[url] [varchar](100) NOT NULL,
CONSTRAINT [PK_Links] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object: Table [dbo].[Regions] Script Date: 22/08/2015 12:06:49 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Regions](
[id] [int] IDENTITY(1,1) NOT NULL,
[fkTab] [int] NOT NULL,
[label] [varchar](50) NOT NULL,
CONSTRAINT [PK_Regions] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
/****** Object: Table [dbo].[Tabs] Script Date: 22/08/2015 12:06:49 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Tabs](
[id] [int] IDENTITY(1,1) NOT NULL,
[label] [varchar](50) NOT NULL,
CONSTRAINT [PK_Tabs] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[Links] WITH CHECK ADD CONSTRAINT [FK_Links_Regions] FOREIGN KEY([fkRegion])
REFERENCES [dbo].[Regions] ([id])
GO
ALTER TABLE [dbo].[Links] CHECK CONSTRAINT [FK_Links_Regions]
GO
ALTER TABLE [dbo].[Regions] WITH CHECK ADD CONSTRAINT [FK_Regions_Tabs] FOREIGN KEY([fkTab])
REFERENCES [dbo].[Tabs] ([id])
GO
ALTER TABLE [dbo].[Regions] CHECK CONSTRAINT [FK_Regions_Tabs]
GO
EDIT2:
The issue seems to be with the result of the query. I tried changing the record fields to IEnumerable and removing the Seq.toList calls. However, whenever I try any operation on the enumerable, the same exception pops up, e.g. Model.Count() (where Model is an IEnumerable). Help?
I am using SQL Server 2008 and want to import my DB to Azure. I have a (*.bak) file. Is there any workaround to restore my DB to Azure without changing my DB structure?
I tried SQLAzureMW, but it gives me this error:
'Filegroup reference and partitioning scheme' is not supported in this version of SQL Server.
I searched for the Filegroup keyword in the scripts, but it isn't there.
I also tried the Azure Silverlight Management Tool, but it gives me the same error.
While running this script I am getting the above error.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [Mohsin].[Supplier](
[SuppID] [int] NOT NULL,
[Name] [varchar](50) NULL,
[street] [varchar](30) NULL,
[City] [varchar](20) NULL,
[State] [varchar](20) NULL,
[County] [varchar](30) NULL,
[PostalCode] [varchar](25) NULL,
[Phone] [varchar](17) NULL,
[Fax] [varchar](17) NULL,
[Active] [bit] NULL,
CONSTRAINT [PK_Supplier] PRIMARY KEY CLUSTERED
(
[SuppID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
Make sure there isn't a reference to an index or other object indicating that it should go on the PRIMARY filegroup. Search for ON [PRIMARY] or ON PRIMARY, or just the word PRIMARY.
The SQL Azure Migration Wizard is usually pretty good about pointing out exactly what it doesn't like.
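If you'd rather interrogate the source database than search the generated script, a catalog query along these lines (a sketch using the standard sys views) shows which data space each index is stored on:
SELECT o.name AS table_name, i.name AS index_name, ds.name AS data_space_name
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
JOIN sys.data_spaces AS ds ON ds.data_space_id = i.data_space_id
WHERE o.is_ms_shipped = 0
ORDER BY o.name, i.index_id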
I need to import data from multiple distributed databases (around 70) into a single target table. How is this possible with SSIS 2008?
Assuming that you can run the same query against each of the 70 source servers, you can use a ForEach Loop with a single Data Flow Task. The source connection manager's ConnectionString should be an expression using the loop variables.
Here's an example reading the INFORMATION_SCHEMA.COLUMNS view from multiple DBs. I created the following tables on my local instance:
CREATE TABLE [MultiDbDemo].[SourceConnections](
[DatabaseKey] [int] IDENTITY(1,1) NOT NULL,
[ServerName] [varchar](50) NOT NULL,
[DatabaseName] [varchar](50) NOT NULL,
CONSTRAINT [PK_SourceConnections] PRIMARY KEY CLUSTERED
(
[DatabaseKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [MultiDbDemo].[SourceColumns](
[ColumnKey] [int] IDENTITY(1,1) NOT NULL,
[ServerName] [varchar](50) NOT NULL,
[DatabaseName] [varchar](50) NOT NULL,
[SchemaName] [varchar](50) NOT NULL,
[TableName] [varchar](50) NOT NULL,
[ColumnName] [varchar](50) NOT NULL,
CONSTRAINT [PK_SourceColumns] PRIMARY KEY CLUSTERED
(
[ColumnKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
The control flow for the SSIS package is the SQL_GetSourceList Execute SQL task followed by a ForEach Loop container holding the Data Flow task.
The Source_AdoDotNet connection manager's ConnectionString property is set to an expression built from the loop variables.
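It splices the loop variables into an ADO.NET connection string; a plausible form (the variable names are assumed, not taken from the original package) is:
"Data Source=" + @[User::ServerName] + ";Initial Catalog=" + @[User::DatabaseName] + ";Integrated Security=True;"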
SQL_GetSourceList's SQLStatement property is SELECT ServerName, DatabaseName FROM MultiDbDemo.SourceConnections, and the ResultSet is mapped to the User::SourceList variable.
The ForEach Loop container uses the Foreach ADO Enumerator. Note that the ADO object source variable is set to the User::SourceList variable populated in the SQL_GetSourceList task, and the variable mappings assign each row's ServerName and DatabaseName to the package variables referenced by the connection string expression.
The data flow itself is just the ADO.NET source (ADO_SRC_SourceInfo) feeding the destination for the SourceColumns table.
ADO_SRC_SourceInfo is configured to run its query against whichever source database the loop is currently pointing at.
The net effect of all this is that, for each database listed in the SourceConnections table, we execute the query SELECT LEFT(TABLE_SCHEMA, 50) AS SchemaName, LEFT(TABLE_NAME, 50) AS TableName, LEFT(COLUMN_NAME, 50) AS ColumnName FROM INFORMATION_SCHEMA.COLUMNS and save the results in the SourceColumns table.
You will still need 70 destination components. Simply specify the same table in all of them.