I have created a model using declarative_base() in SQLAlchemy, as shown below.
class SchemaOnInstance(Base):
    __tablename__ = 'schema_on_instance'
    __table_args__ = {
        'extend_existing': True,
        'schema': 'SFOPT_TEST_SCHEMA'
    }
    id = Column(Integer, primary_key=True, autoincrement=True)
    created_on = Column(Time, nullable=True)
    name = Column(String(100), nullable=True)
    is_default = Column(String(50), nullable=True)
    is_current = Column(String(50), nullable=True)
    database_name = Column(String(200), nullable=True)
    owner = Column(String(100), nullable=True)
    comment = Column(Text, nullable=True)
    options = Column(String(100), nullable=True)
    retention_time = Column(Integer, nullable=True)
    instance_id = Column(Integer, nullable=True)

    def __repr__(self):
        return "<SchemaOnInstance({})>".format(self.id)
Then I migrated the model to a Snowflake database.
The model has a field id, declared with primary_key=True and autoincrement=True. When I try to insert data into the table schema_on_instance from the Snowflake console, I have to provide id, or else the insert fails with an error.
Query (executed successfully, where id is provided) -
INSERT INTO "SFOPT_TEST_SCHEMA".schema_on_instance (id, created_on, name, is_default, is_current, database_name, owner, comment, options, retention_time, instance_id)
VALUES (1, NULL, 'Some Name', 'N', 'N', 'DEMO_DB', NULL, 'Some comment', NULL, 1, 1);
Query (executed successfully, where the column id is omitted entirely) -
INSERT INTO "SFOPT_TEST_SCHEMA".schema_on_instance (created_on, name, is_default, is_current, database_name, owner, comment, options, retention_time, instance_id)
VALUES (NULL, 'Some Name', 'N', 'N', 'DEMO_DB', NULL, 'Some comment', NULL, 1, 1);
Query (execution failed, where id is provided as NULL) -
INSERT INTO "SFOPT_TEST_SCHEMA".schema_on_instance (id, created_on, name, is_default, is_current, database_name, owner, comment, options, retention_time, instance_id)
VALUES (NULL, NULL, 'Some Name', 'N', 'N', 'DEMO_DB', NULL, 'Some comment', NULL, 1, 1);
and it returned an error -
NULL result in a non-nullable column
The following method's job is to insert the data into the database table described above.
def dump_schema(self):
    session = self.Session()
    schema_obj = []
    for each_schema in self.schema:
        schema_obj.append(SchemaOnInstance(
            created_on=each_schema[0], name=each_schema[1],
            is_default=each_schema[2], is_current=each_schema[3],
            database_name=each_schema[4], owner=each_schema[5],
            comment=each_schema[6], options=each_schema[7],
            retention_time=each_schema[8], instance_id=each_schema[9]))
    session.add_all(schema_obj)
    try:
        session.commit()
    except Exception as identifier:
        logging.error(identifier)
error from SQLAlchemy -
2020-11-23 08:01:02,215 :: ERROR :: dump_schema :: 95 :: (snowflake.connector.errors.ProgrammingError) 100072 (22000): 01987501-0b18-b6aa-0000-d5e500083d26: NULL result in a non-nullable column
[SQL: INSERT INTO "SFOPT_TEST_SCHEMA".schema_on_instance (id, created_on, name, is_default, is_current, database_name, owner, comment, options, retention_time, instance_id) VALUES (%(id)s, %(created_on)s, %(name)s, %(is_default)s, %(is_current)s, %(database_name)s, %(owner)s, %(comment)s, %(options)s, %(retention_time)s, %(instance_id)s)]
[parameters: {'id': None, 'created_on': datetime.datetime(2020, 11, 23, 0, 0, 58, 29000, tzinfo=<DstTzInfo 'America/Los_Angeles' PST-1 day, 16:00:00 STD>), 'name': 'INFORMATION_SCHEMA', 'is_default': 'N', 'is_current': 'N', 'database_name': 'DEMO_DB', 'owner': '', 'comment': 'Views describing the contents of schemas in this database', 'options': '', 'retention_time': '1', 'instance_id': 1}]
If we look at the query formed in the error returned by SQLAlchemy, it includes the column id, whose value is rendered as None. How can I form the query without including the column id and its value?
My end goal is to insert data into a Snowflake database table using SQLAlchemy, and to have Snowflake auto-increment the value of id.
How should I get rid of this error?
I think you need to include a Sequence when defining the table in order to get this to work: SQLAlchemy Auto-increment Behavior
A Sequence is a standalone object in Snowflake that needs to be created before the table is created and is then referenced in the CREATE TABLE statement: CREATE SEQUENCE
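As a rough sketch of what that model change might look like (the sequence name is illustrative; SQLite stands in for Snowflake here, since backends without sequence support simply fall back to their own autoincrement, while the Snowflake dialect renders the sequence in CREATE TABLE):

```python
from sqlalchemy import Column, Integer, Sequence, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class SchemaOnInstance(Base):
    __tablename__ = 'schema_on_instance'
    # Attaching a Sequence lets the backend generate id values
    # instead of SQLAlchemy sending id explicitly.
    id = Column(Integer, Sequence('schema_on_instance_id_seq'), primary_key=True)
    name = Column(String(100), nullable=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# No id supplied -- the database assigns it.
session.add(SchemaOnInstance(name='Some Name'))
session.commit()
print(session.query(SchemaOnInstance).first().id)  # 1
```

The rest of the columns are omitted for brevity; the point is only that the Sequence (or an equivalent IDENTITY/AUTOINCREMENT default) moves id generation to the database side.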
Related
I'm trying to set up a self-referential many-to-many relationship via an association object (since I need to store a value on the relationship).
I have searched all over Stack Exchange, Google and the SQLAlchemy documentation, but I haven't found a single similar instance.
The model definitions are (Python3.8):
class Recipe(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    base_id = db.Column(db.Integer, db.ForeignKey('color.id'))
    ingredients = db.relationship('Color',
                                  backref='recipe',
                                  cascade='save-update')
    quantity = db.Column(db.Integer)

    def __init__(self, ingredient, quantity):
        print(f'{ingredient=}\n{quantity=}')
        self.ingredients = ingredient
        self.quantity = quantity

    def __repr__(self):
        return f'Recipe({self.ingredients} x{self.quantity})'

class Color(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    medium = db.Column(db.String(2), nullable=False)
    name = db.Column(db.String(50), nullable=False, unique=True)
    pure = db.Column(db.Boolean, default=False, nullable=False)
    _recipe = db.relationship(Recipe,
                              primaryjoin=id == Recipe.base_id,
                              join_depth=1)
    recipe = association_proxy('_recipe', 'ingredients',
                               creator=lambda _: _)
    swatch = db.Column(db.String(25), nullable=False, unique=True)

    def __init__(self, medium, name, pure, *, recipe=None, swatch):
        self.medium = medium.upper()
        self.name = name
        self.pure = pure
        self.swatch = swatch
        if self.pure:
            self.recipe = [Recipe(self, 1)]
        else:
            for ingredient, quantity in recipe:
                self.recipe.append(Recipe(ingredient, quantity))

    def __repr__(self):
        return f'Color({self.name})'
I am using a SQLite database.
I would expect to have a column for each attribute in both tables, but for some reason, this is the only table creation SQL generated:
CREATE TABLE color (
    id INTEGER NOT NULL,
    medium VARCHAR(2) NOT NULL,
    name VARCHAR(50) NOT NULL,
    pure BOOLEAN NOT NULL,
    swatch VARCHAR(25) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE (name),
    CHECK (pure IN (0, 1)),
    UNIQUE (swatch)
)

CREATE TABLE recipe (
    id INTEGER NOT NULL,
    base_id INTEGER,
    quantity INTEGER,
    PRIMARY KEY (id),
    FOREIGN KEY(base_id) REFERENCES color (id)
)
I am receiving no errors. If I use the CLI to manually input records, all defined columns save, though not entirely correctly. I think that may be due to the missing columns, and will post that as a separate question later if needed.
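For comparison, the usual shape for a self-referential many-to-many with a value on the relationship gives the association table two foreign keys into the same table, one per side. A minimal sketch in plain SQLAlchemy (not Flask-SQLAlchemy; all names are illustrative and the other Color fields are omitted):

```python
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker

Base = declarative_base()

class Recipe(Base):
    """Association object: one ingredient line of a mixed color."""
    __tablename__ = 'recipe'
    # Two foreign keys into the same table: the mixed color and its ingredient.
    base_id = sa.Column(sa.Integer, sa.ForeignKey('color.id'), primary_key=True)
    ingredient_id = sa.Column(sa.Integer, sa.ForeignKey('color.id'), primary_key=True)
    quantity = sa.Column(sa.Integer)
    # foreign_keys disambiguates which of the two FKs this relationship follows.
    ingredient = relationship('Color', foreign_keys=[ingredient_id])

class Color(Base):
    __tablename__ = 'color'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(50), nullable=False, unique=True)
    recipe = relationship(Recipe, foreign_keys=[Recipe.base_id], backref='base')

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

red, blue, purple = Color(name='red'), Color(name='blue'), Color(name='purple')
purple.recipe = [Recipe(ingredient=red, quantity=1),
                 Recipe(ingredient=blue, quantity=1)]
session.add_all([red, blue, purple])
session.commit()
```

With both relationships given explicit foreign_keys, both `color` and `recipe` tables get all their columns, and quantity lives on the association row.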
I've got a query that normally looks like this:
def get_models_with_children(ids):
    query = (MyModel.query
             .filter(MyModel.id.in_(ids))
             .join(Child, Child.parent_id == MyModel.id)
             .group_by(MyModel.id)
             .having(func.count(Child.id) > 0))
    return query.all()
Sometimes, I want to actually retrieve the count, as well. I can make that happen easily enough:
def get_models_with_children(ids, return_count):
    query = MyModel.query
    if return_count:
        query = query.add_columns(func.count(Child.id).label("child_count"))
    query = (query.filter(MyModel.id.in_(ids))
             .join(Child, Child.parent_id == MyModel.id)
             .group_by(MyModel.id)
             .having(func.count(Child.id) > 0))
    return query.all()
This works fine, but now, instead of a List[MyModel] coming back, I've got a differently shaped result with MyModel and child_count keys. If I want the MyModel's id, I do result[0].id if I didn't add the count, and result[0].MyModel.id if I did.
Is there any way I can flatten the result, so that the thing that's returned looks like a MyModel with an extra child_count column?
def do_stuff_with_models():
    result = get_models_with_children([1, 2, 3], True)
    for r in result:
        # can't do this, but I want to:
        print(r.id)
        print(r.child_count)
        # instead I have to do this:
        print(r.MyModel.id)
        print(r.child_count)
sqlalchemy.util.KeyedTuple is the type* of that differently shaped result with MyModel and child_count keys:
Result rows returned by Query that contain multiple
ORM entities and/or column expressions make use of this
class to return rows.
You can effectively flatten them by explicitly specifying the columns for your query. Here follows a complete example (tested on SQLAlchemy==1.3.12).
Plain table column attribute
Models:
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class User(Base):
    __tablename__ = 'user'
    user_id = sa.Column(sa.Integer, sa.Sequence('user_id_seq'), primary_key=True)
    username = sa.Column(sa.String(80), unique=True, nullable=False)

    def __repr__(self):
        return f'User({self.user_id!r}, {self.username!r})'

class Token(Base):
    __tablename__ = 'token'
    token_id = sa.Column(sa.Integer, sa.Sequence('token_id_seq'), primary_key=True)
    user_id = sa.Column(sa.Integer, sa.ForeignKey('user.user_id'), nullable=False)
    user = sa.orm.relationship('User')
    value = sa.Column(sa.String(120), nullable=False)

    def __repr__(self):
        return f'Token({self.user.username!r}, {self.value!r})'
Connect and fill some data:
engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sa.orm.sessionmaker(bind=engine)
session = Session()
user1 = User(username='joe')
user2 = User(username='john')
token1 = Token(user=user1, value='q1w2e3r4t56')
session.add_all([user1, user2, token1])
session.commit()
Now, let's define the "virtual" column as whether user has a token:
query = session.query(User)
exists = (
    sa.exists()
    .where(User.user_id == Token.user_id)
    .correlate(User)
    .label("has_token")
)
query = query.add_columns(exists)
query.all() # [(User(1, 'joe'), True), (User(2, 'john'), False)]
It's the undesired shape. And here's how to flatten it:
query = session.query(*[getattr(User, n) for n in User.__table__.columns.keys()])
query = query.add_columns(exists)
query.all() # [(1, 'joe', True), (2, 'john', False)]
It's also possible to define columns for an existing query, given that you know the model:
query = session.query(User)
# later down the line
query = query.with_entities(*[
getattr(User, n) for n in User.__table__.columns.keys()])
query = query.add_columns(exists)
query.all() # [(1, 'joe', True), (2, 'john', False)]
Column bundle
The same can be achieved with sqlalchemy.orm.Bundle and passing single_entity to it.
bundle = sa.orm.Bundle(
    'UserBundle', User.user_id, User.username, exists, single_entity=True)
query = session.query(bundle)
query.all() # [(1, 'joe', True), (2, 'john', False)]
Issue with relationship attribute
With complex models it gets complicated. It's possible to inspect the model (mapped class) attributes with sqlalchemy.orm.mapper.Mapper.attrs and take class_attribute:
# replace
[getattr(User, n) for n in User.__table__.columns.keys()]
# with
[mp.class_attribute for mp in sa.inspect(User).attrs]
But in this case relationship attributes turn into their target tables in the FROM clause of the query without an ON clause, effectively producing a cartesian product. The "joins" would have to be defined manually, so it's not a good solution. See this answer and a SQLAlchemy user group discussion.
Query expression attribute
I myself ended up using query expressions, because of the issues with relationships in existing code. It's possible to get away with minimal modification of the model, using query-time SQL expressions as mapped attributes.
User.has_tokens = sa.orm.query_expression()
...
query = query.options(sa.orm.with_expression(User.has_tokens, exists))
query.all() # [User(1, 'joe'), User(2, 'john')]
[u.has_tokens for u in query.all()] # [True, False]
* Actually it's a generated-on-the-fly sqlalchemy.util._collections.result class with an MRO of sqlalchemy.util._collections.result, sqlalchemy.util._collections._LW, sqlalchemy.util._collections.AbstractKeyedTuple, tuple, object, but those are details. More details on how the class is created with lightweight_named_tuple are available in this answer.
I am trying to insert some data into a MySQL table using Node.js. The data are dynamic, so before inserting, how can I check the field type? I am facing an issue when trying to insert a date string into a DATE field.
for (var i = 0; i < insertdata.length; i++) {
    var post = [];
    post = insertdata[i];
    var querydd = connection.query('INSERT INTO ' + req.body.table_name + ' SET ?', post, function(err, result) {
    });
}
The data I am trying to insert is as follows:
{ category: 'sd',
  book_id: '56353',
  author_book: 'Sir Alex Ferguson',
  book_title: 'Leading',
  price: '11',
  publication: 'abc',
  publication_date: '12-10-2015' }
{ category: 'df',
  book_id: '73638',
  author_book: 'Eric Smith',
  book_title: 'How Google Works',
  price: '110',
  publication: 'abcdd',
  publication_date: '17-10-2016' }
{ category: 'ffs',
  book_id: '37364',
  author_book: 'William Shakespeare',
  book_title: 'The Merchant of Venice',
  price: '200',
  publication: 'sgre',
  publication_date: '2017-10-2016' }
'12-10-2015' is a string. Try converting it to a Date in yyyy-mm-dd form (e.g. via new Date()) before inserting.
Also, for (var i = 0; i < insertdata.length; i++) is typically not correct code for inserting multiple rows: you don't control the execution flow, e.g. what do you do if some query fails? Read up on async or promises.
'INSERT INTO ' + req.body.table_name + ' SET ?' with post is dangerous code (see SQL injection). You must be careful with input data. The better way is to use placeholders, like insert into mytable set field = ?, field2 = ?, passing the data separately.
You can read the field types from the database's metadata tables.
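On the date point specifically: the sample rows use day-first strings such as '12-10-2015', which need explicit parsing before they can go into a DATE column. The question's code is Node.js, but the conversion is the same idea in any language; sketched here in Python with an illustrative helper name:

```python
from datetime import datetime

def to_sql_date(value):
    """Parse a day-first 'dd-mm-yyyy' string into a date object
    suitable for a DATE column (illustrative helper)."""
    return datetime.strptime(value, '%d-%m-%Y').date()

print(to_sql_date('12-10-2015'))  # 2015-10-12
```

Parsing explicitly also surfaces malformed inputs (like the '2017-10-2016' row above) as errors instead of silently storing garbage.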
I want to add comments to the table and columns created of a model.
I tried the doc parameter of the Column class, like the following:
class Notice(db.Model):
    __tablename__ = "tb_notice"
    __table_args__ = {'mysql_engine': 'MyISAM'}
    seqno = db.Column(db.Integer, primary_key=True, autoincrement=True, doc="seqno")
    title = db.Column(db.String(200), nullable=False, doc="notice title")
    detail = db.Column(db.TEXT, nullable=True, doc="notice detail")
But it didn't work: the comments weren't added to the SQL creation statement. I also wonder how to add a comment to the table itself.
comment is only supported in version 1.2 and later:
New in version 1.2:
http://docs.sqlalchemy.org/en/latest/core/metadata.html#sqlalchemy.schema.Column.params.comment
According to the documentation of the doc parameter:
doc¶ – optional String that can be used by the ORM or similar to document attributes on the Python side. This attribute does not render SQL comments; use the Column.comment parameter for this purpose.
And the comment parameter:
comment¶ – Optional string that will render an SQL comment on table creation.
Please note that the comment is added in version 1.2 of SQlAlchemy
And to add a comment to the table, you just pass an additional comment key (per the Table class documentation) in your __table_args__ dictionary, which was also added in version 1.2.
The code would be something like this:
class Notice(db.Model):
    __tablename__ = "tb_notice"
    __table_args__ = {
        'mysql_engine': 'MyISAM',
        'comment': 'Notice table'
    }
    seqno = db.Column(db.Integer, primary_key=True, autoincrement=True, doc="seqno",
                      comment='Integer representing the sequence number')
    title = db.Column(db.String(200), nullable=False, doc="notice title",
                      comment='Title of the notice, represented as a string')
    detail = db.Column(db.TEXT, nullable=True, doc="notice detail",
                       comment='Notice detail description')
The doc attribute acts as the docstring of the mapped attribute:
print(Notice.title.__doc__)
will output:
notice title
Now the corresponding SQL table creation statement would be:
CREATE TABLE `tb_notice` (
    `seqno` int(11) NOT NULL COMMENT 'Integer representing the sequence number',
    `title` varchar(200) NOT NULL COMMENT 'Title of the notice, represented as a string',
    `detail` text COMMENT 'Notice detail description'
) ENGINE=MyISAM DEFAULT CHARSET=utf32 COMMENT='Notice table';
You can see that comments were added correctly to both the table and the columns.
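If you want to verify the rendered comments without connecting to a MySQL server, one way (assuming SQLAlchemy 1.2+; the table and columns mirror the example above, built with Core rather than Flask-SQLAlchemy) is to compile the CREATE TABLE statement against the MySQL dialect:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects import mysql
from sqlalchemy.schema import CreateTable

metadata = MetaData()
tb_notice = Table(
    'tb_notice', metadata,
    Column('seqno', Integer, primary_key=True, autoincrement=True,
           comment='Integer representing the sequence number'),
    Column('title', String(200), nullable=False,
           comment='Title of the notice, represented as a string'),
    comment='Notice table',
    mysql_engine='MyISAM',
)

# Render the DDL for the MySQL dialect and inspect the COMMENT clauses.
ddl = str(CreateTable(tb_notice).compile(dialect=mysql.dialect()))
print(ddl)
```

The printed DDL contains both the per-column COMMENT clauses and the table-level COMMENT, so you can confirm the comments before running a real migration.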
In version 1.2 and later you can do:
class Notice(db.Model):
    __tablename__ = "tb_notice"
    __table_args__ = {
        'mysql_engine': 'MyISAM',
        'comment': 'yeah comment'
    }
I am trying to map a legacy MySQL database with a fresh DataMapper model.
MySQL schema:
CREATE TABLE IF NOT EXISTS `backer` (
    `backer_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
    `secret` varchar(16) NOT NULL,
    `email` varchar(255) DEFAULT NULL,
    `status` enum('pending','ready') NOT NULL DEFAULT 'pending', # relevant bit
    PRIMARY KEY (`backer_id`),
    UNIQUE KEY `backer_id` (`secret`),
    KEY `email` (`email`,`status`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=8166 ;
DataMapper model:
class Backer
  include DataMapper::Resource
  storage_names[:default] = "backer"

  property :id, Serial, :field => "backer_id"
  property :secret, String, :field => "secret"
  property :email, String, :field => "email"
  property :status, Enum[:pending, :ready], :field => "status", :default => "pending"

  has n, :items, :child_key => "backer_id"
end

DataMapper.finalize
With most attributes, I can persist:
b = Backer.first
b.first_name = "Daniel"
b.save!
# Backer.first.first_name == "Daniel"
It persists fine.
But when I do it with an enum:
b = Backer.first
b.status = :pending
b.save!
# b.status == :pending
# Backer.first.status != :pending
# Backer.first.update!(:status => :pending) ...
Updates/saves to an enum field don't seem to persist.
SQL looks typically like:
1.9.3p327 :019 > Backer.last.update(:status => :ready)
~ (0.000349) SELECT `backer_id`, `secret`, `email`, `status` FROM `backer` ORDER BY `backer_id` DESC LIMIT 1
~ (0.000190) UPDATE `backer` SET `status` = 2 WHERE `backer_id` = 8166
=> true
But when I view the object (e.g. Backer.last), the status is not changed.
UPDATE
I've verified that the enumerated status field in the MySQL database is being persisted. However, the enumerated status property on the DataMapper class doesn't reflect this at all (it is always nil).
What's happening here?
Thanks #morbusg: that's one solution.
The other was to change the Enum to a String on my end, which has worked fine for my purposes.