I am trying to create a table inside a schema using SQLAlchemy. It has a column of type Enum. Following is the code
import enum
import sqlalchemy
from sqlalchemy import Column, Text, Enum
from sqlalchemy.schema import CreateSchema
import sqlalchemy_utils
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class T(enum.Enum):
    X = 1
    Y = 2

ET = Enum(T, inherit_schema=True)
#ET = Enum(T, schema="schema1") # This works

class A(Base):
    __tablename__ = 'a'
    c1 = Column(Text, primary_key=True, nullable=False)
    c2 = Column(Text, nullable=False)
    c3 = Column(ET)
engine = sqlalchemy.create_engine("postgresql://postgres:mypass@172.17.0.2/mydb")
engine.execute(CreateSchema('schema1'))
schema_engine = engine.execution_options(schema_translate_map = { None: "schema1" } )
Base.metadata.create_all(schema_engine)
This fails at the "create_all" line with the following error
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DuplicateObject)
type "t" already exists [SQL: "CREATE TYPE schema1.t AS ENUM ('X',
'Y')"] (Background on this error at: http://sqlalche.me/e/f405)
I am using this pattern because I will have multiple schemas inside which the same table has to be created.
You get this error because of a bug in the SQLAlchemy version you are using.
I suggest creating a virtual environment and installing the latest stable release of SQLAlchemy.
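For the multi-schema use case, here is a minimal sketch of the same pattern against a recent release (the connection URL and schema names are placeholders, and I'm assuming the fix is present in the version you install):

import enum
import sqlalchemy
from sqlalchemy import Column, Text, Enum
from sqlalchemy.schema import CreateSchema
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class T(enum.Enum):
    X = 1
    Y = 2

class A(Base):
    __tablename__ = 'a'
    c1 = Column(Text, primary_key=True, nullable=False)
    c2 = Column(Text, nullable=False)
    c3 = Column(Enum(T, inherit_schema=True))

engine = sqlalchemy.create_engine("postgresql://postgres:mypass@172.17.0.2/mydb")

# Create each schema, then create the same tables inside it by translating
# the "no schema" default to that schema; inherit_schema=True should place
# the enum type in the translated schema as well.
for schema in ("schema1", "schema2"):
    with engine.begin() as conn:
        conn.execute(CreateSchema(schema))
    schema_engine = engine.execution_options(schema_translate_map={None: schema})
    Base.metadata.create_all(schema_engine)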
I'd like to define a unique JSON column via SQLAlchemy on Postgres. The naive approach did not work. This:
values = db.Column(db.JSON(), nullable=False, unique=True)
led to this:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) data type json has no default operator class for access method "btree"
Any ideas?
Create a new column that will receive the json md5 hash:
hash_values = db.Column(db.String(32), default="")
Declare the combination of the json field and the hash as unique:
__table_args__ = (db.UniqueConstraint('values', 'hash_values'),)
Putting it together:
import json
import hashlib
class Register(db.Model):
    __tablename__ = 'register'
    __table_args__ = (
        db.UniqueConstraint('values', 'hash_values'),
    )
    id = db.Column(db.Integer, primary_key=True)  # surrogate primary key
    values = db.Column(db.JSON, default="{}")
    hash_values = db.Column(db.String(32), default="")

    def __init__(self, values):
        self.values = values
        self.hash_values = hashlib.md5(
            json.dumps(
                values,
                sort_keys=True
            ).encode("utf-8")
        ).hexdigest()
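A quick sanity check (it never touches the database) that the hash is insensitive to key order thanks to sort_keys=True:

r1 = Register({"a": 1, "b": 2})
r2 = Register({"b": 2, "a": 1})
assert r1.hash_values == r2.hash_values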
I don't know whether you are importing JSON from SQLAlchemy as follows:
from sqlalchemy.types import JSON
I think using SQLAlchemy's JSON type should work. You could try something like this:
values = db.Column(JSON, nullable=False, unique=True)
Remember that the base types.JSON provides keyed index operations, integer index operations and path index operations.
For more information, see the SQLAlchemy type documentation.
Hope it works for you.
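As a hypothetical illustration of those keyed index operations, assuming the Register model from the answer above and a reasonably recent SQLAlchemy (the key name "a" is made up for the example):

# Filter on an integer value stored under the "a" key of the JSON column.
db.session.query(Register).filter(Register.values["a"].as_integer() == 1).all()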
I wrote a general dbhandler module that can take data containers and upload them to a MySQL database, and is independent of the DB structure. Now I want to add a default option, or at least the possibility, to write the data into an SQLite DB instead. Structure-wise this is related to this question. The package looks like this:
dbhandler\
dbhandler.py
models\
meta.py
default\
default_DB_map.py
default_DB.cfg
default_DB.cfg is the config file that describes the database for the dbhandler script. default_DB_map.py contains a mapped class for each table of the DB, each inheriting from BASE:
from sqlalchemy import BigInteger, Column, Integer, String, Float, DateTime
from sqlalchemy import Date, Enum
from ..meta import BASE
class db_info(BASE):
    __tablename__ = "info"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    project = Column(String)
    manufacturer = Column(String)
    ...

class db_probe(BASE):
    __tablename__ = "probe"
    probeid = Column(Integer, primary_key=True)
    id = Column(Integer)
    paraX = Column(String)
    ...
In meta.py I initialize the declarative_base object:
from sqlalchemy.ext.declarative import declarative_base
BASE = declarative_base()
And finally, I import BASE within dbhandler.py and create the engine and session:
"DBHandler module"
...
import sqlalchemy
from sqlalchemy.orm import sessionmaker
from models import meta #pylint: disable=E0401
....
class DBHandler(object):
    """Database handling

    Methods:
      - get_dict: returns table row
      - add_item: adds dict to DB table
      - get_table_keys: gets list of all DB table keys
      - get_values: returns all values of key in DB table
      - check_for_value: checks if value is in DB table or not
      - upload: uploads data container to DB
      - get_dbt: returns DBTable object
    """

    def __init__(self, db_cfg=None):
        """Load credentials, DB structure and name of DB map from cfg file,
        create DB session. Create DBTable object to get table names of DB
        from cfg file, import table classes and get name of primary keys.

        Args:
          - db_cfg (yaml): contains infos about DB structure and location
            of DB credentials.

        Misc:
          - cred = {"host" : "...",
                    "database" : "...",
                    "user" : "...",
                    "passwd" : "..."}
        """
        ...
        db_cfg = self.load_cfg(db_cfg)
        if db_cfg["engine"] == "sqlite":
            engine = sqlalchemy.create_engine("sqlite:///mySQlite.db")
            meta.BASE.metadata.create_all(engine)
            session = sessionmaker(bind=engine)
            self.session = session()
        elif db_cfg["engine"] == "mysql+mysqlconnector":
            cred = self.load_cred(db_cfg["credentials"])
            engine = sqlalchemy.create_engine(db_cfg["engine"]
                                              + "://"
                                              + cred["user"] + ":"
                                              + cred["passwd"] + "@"
                                              + cred["host"] + ":"
                                              + "3306" + "/"
                                              + cred["database"])
            session = sessionmaker(bind=engine)
            self.session = session()
        else:
            self.log.warning("Unknown engine in DB cfg...")

        # here I'm importing the table classes stated in the config file
        self.dbt = DBTable(map_file=db_cfg["map"],
                           table_dict=db_cfg["tables"],
                           cr_dict=db_cfg["cross-reference"])
I'm obviously doing something wrong within the if db_cfg["engine"] == "sqlite": branch, but I can't figure out what.
The script works just fine with the MySQL engine. When I initialize the handler object I get an empty mySQlite.db file.
Adding something with that session yields:
(sqlite3.OperationalError) no such table: info....
I can, however, use something like sqlalchemy.inspect on a table object without any errors. So I have the correct table objects at hand, but they are somehow not connected to the base?
For SQLite, apparently the import of the table classes needs to happen before the DB is created.
# here I'm importing the table classes stated in the config file
self.dbt = DBTable(map_file=db_cfg["map"],
                   table_dict=db_cfg["tables"],
                   cr_dict=db_cfg["cross-reference"])
(which is done via pydoc.locate, by the way) has to happen before
engine = sqlalchemy.create_engine("sqlite:///mySQlite.db")
meta.BASE.metadata.create_all(engine)
session = sessionmaker(bind=engine)
self.session = session()
is called. I thought this was not important since I imported BASE at the beginning and since it works just fine when using a different engine.
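A standalone sketch of the ordering that works with SQLite (the module path in the import is hypothetical; in the real code the classes are loaded via pydoc.locate):

import sqlalchemy
from sqlalchemy.orm import sessionmaker
from models import meta

# Importing the module that defines the mapped classes registers their
# tables on meta.BASE.metadata. If this import happens only after
# create_all(), the metadata is still empty and the SQLite file is
# created without any tables.
from models.default import default_DB_map  # noqa: F401  (imported for its side effects)

engine = sqlalchemy.create_engine("sqlite:///mySQlite.db")
meta.BASE.metadata.create_all(engine)  # now "info", "probe", ... are created
session = sessionmaker(bind=engine)()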
I am using SQLAlchemy 0.9.4 with Python 3.4.1 and MySQL on a CentOS Server. I am trying to filter by seeing if a certain value in a column is any of multiple values. For example, if x in [1, 2, 3, 4, 5] I would like the value to be selected. How could I go about doing that?
Use the in_ operator in the filter expression. Working code is below, but please also go through the SQLAlchemy documentation.
from sqlalchemy import create_engine, Table, Column, Integer
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
engine = create_engine('sqlite:///:memory:', echo=True)
session = sessionmaker(bind=engine)()
Base = declarative_base(engine)
class MyTable(Base):
    __tablename__ = 'my_table'
    id = Column(Integer, primary_key=True)
    x = Column(Integer)
Base.metadata.create_all(engine)
# this is the query
qry = session.query(MyTable).filter(MyTable.x.in_([1,2,3,4,5]))
result = qry.all()
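A quick demo on top of that (the inserted values are arbitrary):

# Insert x = 0..9, then keep only the rows whose x is in the list.
session.add_all([MyTable(x=n) for n in range(10)])
session.commit()
print([row.x for row in session.query(MyTable).filter(MyTable.x.in_([1, 2, 3, 4, 5]))])
# -> [1, 2, 3, 4, 5]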
How do I generate a different default value for a column in SQLAlchemy model? In the following example, I am getting the same default value for every new instance of the model object.
import random, string
def randomword():
    length = 10
    return ''.join(random.choice(string.lowercase) for i in range(length))

class ModelFoo(AppBase):
    temp = Column("temp", String, default=randomword())
default=randomword() is wrong: the function is called once when the class is defined, so its result becomes a constant and every row gets the same value. Pass the callable itself if you want a different value generated on each insert:
import random, string
from sqlalchemy import create_engine, Column, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
Base = declarative_base()
engine = create_engine('sqlite:///foo.db')
Session = sessionmaker(bind=engine)
sess = Session()
def randomword():
    return ''.join(random.choice(string.lowercase) for i in xrange(10))

class Foo(Base):
    __tablename__ = 'foo'
    key = Column(String, primary_key=True, default=randomword)
Base.metadata.create_all(engine)
Demo:
>>> sess.add(Foo())
>>> sess.add(Foo())
>>> sess.add(Foo())
>>> sess.flush()
>>> [foo.key for foo in sess.query(Foo)]
[u'aerpkwsaqx', u'cxnjlgrshh', u'dszcgrbfxn']
default=randomword will solve the issue.
Not useful in your case, but there is another kind of default called server_default, which lives in the database itself. So even if you insert rows manually, server_default still gets applied.
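A small sketch of server_default (the table and column names are just examples):

from sqlalchemy import Column, DateTime, Integer, func
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)
    # DEFAULT now() ends up in the CREATE TABLE DDL, so plain SQL inserts
    # that omit the column still get a timestamp.
    created = Column(DateTime, server_default=func.now())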
Does anybody have example on how to use BLOB in SQLAlchemy?
from sqlalchemy import *
from sqlalchemy.orm import mapper, sessionmaker
import os
engine = create_engine('sqlite://', echo=True)
metadata = MetaData(engine)
sample = Table(
    'sample', metadata,
    Column('id', Integer, primary_key=True),
    Column('lob', Binary),
)

class Sample(object):
    def __init__(self, lob):
        self.lob = lob
mapper(Sample, sample)
metadata.create_all()
session = sessionmaker(engine)()
# Creating new object
blob = os.urandom(100000)
obj = Sample(lob=blob)
session.add(obj)
session.commit()
obj_id = obj.id
session.expunge_all()
# Retrieving existing object
obj = session.query(Sample).get(obj_id)
assert obj.lob==blob
from sqlalchemy import *
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from struct import *
_DeclarativeBase = declarative_base()
class MyTable(_DeclarativeBase):
    __tablename__ = 'mytable'
    id = Column(Integer, Sequence('my_table_id_seq'), primary_key=True)
    my_blob = Column(BLOB)
DB_NAME = 'sqlite:///C:/BlobbingTest.db'
db = create_engine(DB_NAME)
#self.__db.echo = True
_DeclarativeBase.metadata.create_all(db)
Session = sessionmaker(bind=db)
session = Session()
session.add(MyTable(my_blob=pack('H', 365)))
l = [n + 1 for n in xrange(10)]
session.add(MyTable(my_blob=pack('H'*len(l), *l)))
session.commit()
query = session.query(MyTable)
for mt in query.all():
    print unpack('H'*(len(mt.my_blob)/2), mt.my_blob)
Why don't you use LargeBinary?
Extract from: https://docs.sqlalchemy.org/en/13/core/type_basics.html#sqlalchemy.types.LargeBinary
class sqlalchemy.types.LargeBinary(length=None)
A type for large binary byte data.
The LargeBinary type corresponds to a large and/or unlengthed binary type for the target platform, such as BLOB on MySQL and BYTEA for PostgreSQL. It also handles the necessary conversions for the DBAPI.
I believe this might assist you.
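A minimal sketch with LargeBinary (in-memory SQLite; the names are just examples):

from sqlalchemy import Column, Integer, LargeBinary, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Document(Base):
    __tablename__ = 'document'
    id = Column(Integer, primary_key=True)
    # Rendered as BLOB on MySQL and BYTEA on PostgreSQL.
    payload = Column(LargeBinary)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Document(payload=b'\x00\x01 some raw bytes'))
session.commit()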
From the documentation, BLOB seems the way to go: http://docs.sqlalchemy.org/en/latest/dialects/mysql.html
class sqlalchemy.dialects.mysql.BLOB(length=None)
Bases: sqlalchemy.types.LargeBinary
The SQL BLOB type.
__init__(length=None)
Construct a LargeBinary type.
Parameters: length – optional, a length for the column for use in DDL
statements, for those BLOB types that accept a length (i.e. MySQL). It
does not produce a lengthed BINARY/VARBINARY type - use the
BINARY/VARBINARY types specifically for those. May be safely omitted
if no CREATE TABLE will be issued. Certain databases may require a
length for use in DDL, and will raise an exception when the CREATE
TABLE DDL is issued.
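A hedged sketch of the MySQL-specific types quoted above, for use when you know you are targeting MySQL (the table and column names are examples):

from sqlalchemy import Column, Integer
from sqlalchemy.dialects.mysql import BLOB, MEDIUMBLOB
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Attachment(Base):
    __tablename__ = 'attachment'
    id = Column(Integer, primary_key=True)
    data = Column(BLOB)            # plain MySQL BLOB
    big_data = Column(MEDIUMBLOB)  # larger MySQL variant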