Setting a different default value for a column - SQLAlchemy

How do I generate a different default value for a column in SQLAlchemy model? In the following example, I am getting the same default value for every new instance of the model object.
import random, string

def randomword():
    length = 10
    return ''.join(random.choice(string.lowercase) for i in range(length))

class ModelFoo(AppBase):
    temp = Column("temp", String, default=randomword())

default=randomword() is wrong: the function is called once when the class body is evaluated, so its return value becomes a constant default. Pass the callable itself if you want a different value generated on every insert:
import random, string
from sqlalchemy import create_engine, Column, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()
engine = create_engine('sqlite:///foo.db')
Session = sessionmaker(bind=engine)
sess = Session()

def randomword():
    return ''.join(random.choice(string.lowercase) for i in xrange(10))

class Foo(Base):
    __tablename__ = 'foo'
    key = Column(String, primary_key=True, default=randomword)

Base.metadata.create_all(engine)
Demo:
>>> sess.add(Foo())
>>> sess.add(Foo())
>>> sess.add(Foo())
>>> sess.flush()
>>> [foo.key for foo in sess.query(Foo)]
[u'aerpkwsaqx', u'cxnjlgrshh', u'dszcgrbfxn']

default=randomword will solve the issue.
Not useful for your case, but there is another kind of default called server_default which lives in the database itself. So even if you insert rows manually, outside of SQLAlchemy, server_default still gets applied.
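As a minimal sketch (the model, column name and the 'new' value below are only illustrative), server_default puts the DEFAULT clause into the CREATE TABLE DDL:
from sqlalchemy import Column, Integer, String, text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Widget(Base):
    __tablename__ = 'widget'  # illustrative name
    id = Column(Integer, primary_key=True)
    # renders DEFAULT 'new' in the CREATE TABLE statement, so the default
    # also applies to rows inserted outside of SQLAlchemy
    status = Column(String, server_default=text("'new'"))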

Related

How to use Enum with schema in SQLAlchemy?

I am trying to create a table inside a schema using SQLAlchemy. It has a column of type Enum. Following is the code:
import enum
import sqlalchemy
from sqlalchemy import Column, Text, Enum
from sqlalchemy.schema import CreateSchema
import sqlalchemy_utils
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class T(enum.Enum):
    X = 1
    Y = 2

ET = Enum(T, inherit_schema=True)
#ET = Enum(T, schema="schema1")  # This works

class A(Base):
    __tablename__ = 'a'
    c1 = Column(Text, primary_key=True, nullable=False)
    c2 = Column(Text, nullable=False)
    c3 = Column(ET)

engine = sqlalchemy.create_engine("postgresql://postgres:mypass@172.17.0.2/mydb")
engine.execute(CreateSchema('schema1'))
schema_engine = engine.execution_options(schema_translate_map={None: "schema1"})
Base.metadata.create_all(schema_engine)
This fails at the "create_all" line with the following error:
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DuplicateObject)
type "t" already exists [SQL: "CREATE TYPE schema1.t AS ENUM ('X',
'Y')"] (Background on this error at: http://sqlalche.me/e/f405)
I am using this pattern because I will have multiple schemas inside which the same table has to be created.
You get this error because there is a bug in the SQLAlchemy version you are using.
I would suggest using a virtual environment and installing the latest stable release of SQLAlchemy.

SQLAlchemy 0.9.4 filtering for a group

I am using SQLAlchemy 0.9.4 with Python 3.4.1 and MySQL on a CentOS Server. I am trying to filter by seeing if a certain value in a column is any of multiple values. For example, if x in [1, 2, 3, 4, 5] I would like the value to be selected. How could I go about doing that?
Use the in_() operator in the filter expression. Working code is below, but please go through the SQLAlchemy documentation.
from sqlalchemy import create_engine, Table, Column, Integer
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite:///:memory:', echo=True)
session = sessionmaker(bind=engine)()
Base = declarative_base(engine)

class MyTable(Base):
    __tablename__ = 'my_table'
    id = Column(Integer, primary_key=True)
    x = Column(Integer)

Base.metadata.create_all(engine)

# this is the query
qry = session.query(MyTable).filter(MyTable.x.in_([1, 2, 3, 4, 5]))
result = qry.all()
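For reference, with echo=True the query above should render as a SQL IN clause along the lines of SELECT ... FROM my_table WHERE my_table.x IN (?, ?, ?, ?, ?), with the values 1 through 5 passed as bound parameters.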

Unique Sequential Number to column

I need to create a sequence, but in a generic way, without using the Sequence class.
USN = Column(Integer, nullable=False, default=nextusn, server_onupdate=nextusn)
This nextusn function needs to generate the func.max(table.USN) value over the rows of the model.
I tried using this:
class nextusn(expression.FunctionElement):
    type = Numeric()
    name = 'nextusn'

@compiles(nextusn)
def default_nextusn(element, compiler, **kw):
    return select(func.max(element.table.c.USN)).first()[0] + 1
but in this context the element does not know element.table. Is there a way to resolve this?
This is a little tricky, for these reasons:
Your SELECT MAX() will return NULL if the table is empty; you should use COALESCE to produce a default "seed" value. See below.
The whole approach of inserting rows with SELECT MAX is not at all safe for concurrent use - so you need to make sure only one INSERT statement at a time invokes on the table, or you may get constraint violations (you should definitely have a constraint of some kind on this column).
From the SQLAlchemy perspective, you need your custom element to be aware of the actual Column element. We can achieve this either by assigning the "nextusn()" function to the Column after the fact, or, as shown below, with a more sophisticated approach using events.
I don't understand what you're going for with "server_onupdate=nextusn". "server_onupdate" in SQLAlchemy doesn't actually run any SQL for you; it is a placeholder if, for example, you created a trigger. Also, the "SELECT MAX(id) FROM table" thing is an INSERT pattern, so I'm not sure you mean for anything to be happening here on an UPDATE.
The @compiles extension needs to return a string, running the select() there through compiler.process(). See below.
Example:
from sqlalchemy import Column, Integer, create_engine, select, func, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.sql.expression import ColumnElement
from sqlalchemy.schema import ColumnDefault
from sqlalchemy.ext.compiler import compiles
from sqlalchemy import event

class nextusn_default(ColumnDefault):
    "Container for a nextusn() element."

    def __init__(self):
        super(nextusn_default, self).__init__(None)

@event.listens_for(nextusn_default, "after_parent_attach")
def set_nextusn_parent(default_element, parent_column):
    """Listen for when nextusn_default() is associated with a Column,
    assign a nextusn().
    """
    assert isinstance(parent_column, Column)
    default_element.arg = nextusn(parent_column)

class nextusn(ColumnElement):
    """Represent "SELECT MAX(col) + 1 FROM TABLE"."""

    def __init__(self, column):
        self.column = column

@compiles(nextusn)
def compile_nextusn(element, compiler, **kw):
    return compiler.process(
        select([
            func.coalesce(func.max(element.column), 0) + 1
        ]).as_scalar()
    )

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'

    id = Column(Integer, default=nextusn_default(), primary_key=True)
    data = Column(String)

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

# will normally pre-execute the default so that we know the PK value
# result.inserted_primary_key will be available
e.execute(A.__table__.insert(), data='single row')

# will run the default expression inline within the INSERT
e.execute(A.__table__.insert(), [{"data": "multirow1"}, {"data": "multirow2"}])

# will also run the default expression inline within the INSERT,
# result.inserted_primary_key will not be available
e.execute(A.__table__.insert(inline=True), data='single inline row')

Example using BLOB in SQLAlchemy

Does anybody have example on how to use BLOB in SQLAlchemy?
from sqlalchemy import *
from sqlalchemy.orm import mapper, sessionmaker
import os

engine = create_engine('sqlite://', echo=True)
metadata = MetaData(engine)

sample = Table(
    'sample', metadata,
    Column('id', Integer, primary_key=True),
    Column('lob', Binary),
)

class Sample(object):
    def __init__(self, lob):
        self.lob = lob

mapper(Sample, sample)
metadata.create_all()

session = sessionmaker(engine)()

# Creating new object
blob = os.urandom(100000)
obj = Sample(lob=blob)
session.add(obj)
session.commit()
obj_id = obj.id
session.expunge_all()

# Retrieving existing object
obj = session.query(Sample).get(obj_id)
assert obj.lob == blob

from sqlalchemy import *
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from struct import *

_DeclarativeBase = declarative_base()

class MyTable(_DeclarativeBase):
    __tablename__ = 'mytable'
    id = Column(Integer, Sequence('my_table_id_seq'), primary_key=True)
    my_blob = Column(BLOB)

DB_NAME = 'sqlite:///C:/BlobbingTest.db'
db = create_engine(DB_NAME)
#self.__db.echo = True
_DeclarativeBase.metadata.create_all(db)

Session = sessionmaker(bind=db)
session = Session()

session.add(MyTable(my_blob=pack('H', 365)))
l = [n + 1 for n in xrange(10)]
session.add(MyTable(my_blob=pack('H' * len(l), *l)))
session.commit()

query = session.query(MyTable)
for mt in query.all():
    print unpack('H' * (len(mt.my_blob) / 2), mt.my_blob)
Why don't you use LargeBinary?
Extract from: https://docs.sqlalchemy.org/en/13/core/type_basics.html#sqlalchemy.types.LargeBinary
class sqlalchemy.types.LargeBinary(length=None)
A type for large binary byte data.
The LargeBinary type corresponds to a large and/or unlengthed binary type for the target platform, such as BLOB on MySQL and BYTEA for PostgreSQL. It also handles the necessary conversions for the DBAPI.
I believe this might assist you.
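For example, a minimal sketch using LargeBinary with the declarative style (the table and column names and the in-memory SQLite engine are only for illustration):
from sqlalchemy import Column, Integer, LargeBinary, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Document(Base):
    __tablename__ = 'document'  # illustrative name
    id = Column(Integer, primary_key=True)
    # LargeBinary corresponds to BLOB on MySQL and BYTEA on PostgreSQL
    data = Column(LargeBinary)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Document(data=b'\x00\x01\x02'))
session.commit()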
From the documentation, the MySQL dialect's BLOB type seems the way to go: http://docs.sqlalchemy.org/en/latest/dialects/mysql.html
class sqlalchemy.dialects.mysql.BLOB(length=None)
Bases: sqlalchemy.types.LargeBinary
The SQL BLOB type.
__init__(length=None)
Construct a LargeBinary type.
Parameters: length – optional, a length for the column for use in DDL statements, for those BLOB types that accept a length (i.e. MySQL). It does not produce a lengthed BINARY/VARBINARY type - use the BINARY/VARBINARY types specifically for those. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued.
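For what it's worth, a minimal sketch of declaring a column with the MySQL-specific BLOB type (the table name and the length value below are illustrative choices, not something the documentation prescribes):
from sqlalchemy import Column, Integer
from sqlalchemy.dialects.mysql import BLOB
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Attachment(Base):
    __tablename__ = 'attachment'  # illustrative name
    id = Column(Integer, primary_key=True)
    # length only affects DDL on dialects that accept a length for BLOB
    payload = Column(BLOB(length=2 ** 16))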

Random ids in sqlalchemy (pylons)

I'm using Pylons and SQLAlchemy and I was wondering how I could have some random ids as the primary_key.
The best way is to use randomly generated UUIDs:
import uuid
id = uuid.uuid4()
uuid datatypes are available natively in some databases such as PostgreSQL (SQLAlchemy has a native PG uuid datatype for this purpose - in 0.5 it's called sqlalchemy.databases.postgres.PGUuid). You should also be able to store a uuid in any 16-byte CHAR field (though I haven't tried this specifically on MySQL or others).
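For reference, a minimal sketch of the native PostgreSQL uuid type as it is exposed in later SQLAlchemy versions under sqlalchemy.dialects.postgresql (the model and table name here are illustrative):
import uuid
from sqlalchemy import Column
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = 'item'  # illustrative name
    # as_uuid=True round-trips Python uuid.UUID objects
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)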
I use this pattern and it works pretty well. source
from sqlalchemy import types
from sqlalchemy.databases.mysql import MSBinary
from sqlalchemy.schema import Column
import uuid

class UUID(types.TypeDecorator):
    impl = MSBinary

    def __init__(self):
        self.impl.length = 16
        types.TypeDecorator.__init__(self, length=self.impl.length)

    def process_bind_param(self, value, dialect=None):
        if value and isinstance(value, uuid.UUID):
            return value.bytes
        elif value and not isinstance(value, uuid.UUID):
            raise ValueError, 'value %s is not a valid uuid.UUID' % value
        else:
            return None

    def process_result_value(self, value, dialect=None):
        if value:
            return uuid.UUID(bytes=value)
        else:
            return None

    def is_mutable(self):
        return False

id_column_name = "id"

def id_column():
    import uuid
    return Column(id_column_name, UUID(), primary_key=True, default=uuid.uuid4)

# usage
my_table = Table('test', metadata, id_column(), Column('parent_id', UUID(), ForeignKey(table_parent.c.id)))
Though zzzeek, I believe, is the author of SQLAlchemy, so if this is wrong he would know, and I would listen to him.
Or with ORM mapping:
import uuid
from sqlalchemy import Column, Integer, String, Boolean
from sqlalchemy.ext.declarative import declarative_base

def uuid_gen():
    return str(uuid.uuid4())

Base = declarative_base()

class Device(Base):
    __tablename__ = 'device'  # illustrative name; declarative requires a table name
    id = Column(String, primary_key=True, default=uuid_gen)
This stores the id as a string, providing better database compatibility. However, you lose the database's ability to store and use the uuid more optimally.
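As a usage sketch (the in-memory SQLite engine is only for illustration):
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

device = Device()
session.add(device)
session.commit()
# device.id is now a 36-character string such as
# 'f47ac10b-58cc-4372-a567-0e02b2c3d479' (value shown is illustrative)
print(device.id)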