How to add a string into a class? - function

Let's say I have a Patron class that has the instance variables: name, patron_id and borroweds (borrowed books). This pretty much is a class for a virtual library. If one of my functions requires me to take a book_id number (which is a string) and to "reshelve" this book, how would I add the string to my class? This is what I have:
class Patron:
    name = ""
    patron_id = ""
    borroweds = list()

    # the class constructor
    def __init__(self, name, patron_id, borroweds):
        self.name = name
        self.patron_id = patron_id
        self.borroweds = borroweds

    def __str__(self):
        s = ("Patron(" + str(self.name) + "," + str(self.patron_id) + ","
             + str(self.borroweds) + ")")
        return s

    def __repr__(self):
        return str(self)

    def return_book(self, library, book, book_id):
        print("Can you please reshelve this book?" + book_id)
The last function is what I need some help with.

Well, if that library is an instance of some Library class and it provides a method for "reshelving" a book, then you should be able to just say:
def return_book(self, library, book, book_id):
    print("Can you please reshelve this book?" + book_id)
    library.shelve_book(book)
If you're asking how you are supposed to write your Library class or how to write the shelve_book method, well, that really depends on how you're storing those books.
For example, if you store them as a list of book objects or something like
self.books = []
Then you could say
class Library(object):
    def shelve_book(self, book):
        self.books.append(book)
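To make the pieces fit together, here is a minimal, self-contained sketch; the Library class, its books list and the sample data are assumptions for illustration, not part of the original question:

class Library(object):
    def __init__(self):
        # assumed storage: a plain list of book objects
        self.books = []

    def shelve_book(self, book):
        # put the returned book back on the (virtual) shelf
        self.books.append(book)


class Patron(object):
    def __init__(self, name, patron_id, borroweds):
        self.name = name
        self.patron_id = patron_id
        self.borroweds = borroweds

    def return_book(self, library, book, book_id):
        print("Can you please reshelve this book?" + book_id)
        library.shelve_book(book)


library = Library()
patron = Patron("Ada", "P001", ["B042"])
patron.return_book(library, "some book object", "B042")
print(library.books)  # ['some book object']

The point is that the book_id string is only used in the printed message, while the book itself is handed to the library for reshelving.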


Parent instance is not bound to a Session; lazy load operation of attribute cannot proceed

I'm following the models-with-relationships tutorial on the SQLModel site and adapting it to my needs. I have a character model which looks like this:
class CharacterBase(SQLModel):
    name: str
    birthdate: Optional[date]
    sex: str
    height_metric: Optional[condecimal(max_digits=5, decimal_places=2)]
    weight_metric: Optional[condecimal(max_digits=5, decimal_places=2)]
    status: str
    status_date: Optional[date]
    status_cause: Optional[str]


class Character(CharacterBase, table=True):
    character_id: Optional[int] = Field(default=None, primary_key=True)
    aliases: Optional[List["Alias"]] = Relationship(back_populates="character")
    species: List["Species"] = Relationship(back_populates="character")
    occupations: List["Occupation"] = Relationship(back_populates="character")
    creation_date: datetime = Field(default=datetime.utcnow())
    update_date: datetime = Field(default=datetime.utcnow())


class AliasBase(SQLModel):
    alias: str


class Alias(AliasBase, table=True):
    alias_id: Optional[int] = Field(default=None, primary_key=True)
    character_id: Optional[int] = Field(
        default=None, foreign_key="character.character_id"
    )
    character: Optional[Character] = Relationship(back_populates="aliases")
As you can see, the model has an aliases field which allows the user to add different names to a single character. However, when I fetch the data I don't get any relationship values. According to the tutorial, that's because including them could lead to infinite recursion, which makes sense. The site suggests creating a separate model for data reading, so that's what I did:
class CharacterRead(CharacterBase):
    character_id: int
    aliases: Optional[List["AliasBase"]]
The aliases field uses AliasBase since I only care about the alias itself, not its id or the character related to it. However, when I do the API call I get the error sqlalchemy.orm.exc.DetachedInstanceError: Parent instance <Character at 0x2661c723e40> is not bound to a Session; lazy load operation of attribute 'aliases' cannot proceed, which I find weird since all the fetching is done within a session.
@router.get("/{id}", response_model=models.CharacterRead)
def get_character_by_id(id: int):
    with Session(engine) as session:
        character = session.exec(
            select(models.Character).where(models.Character.character_id == id)
        ).one()
        return character
How can I fix this issue?
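For context (this is not from the original thread): the error typically appears because the aliases relationship is only lazy-loaded when the response model is serialized, after the session has already closed. One commonly suggested fix, sketched here under that assumption, is to load the relationship eagerly while the session is still open:

from sqlalchemy.orm import selectinload
from sqlmodel import Session, select


@router.get("/{id}", response_model=models.CharacterRead)
def get_character_by_id(id: int):
    with Session(engine) as session:
        character = session.exec(
            select(models.Character)
            .where(models.Character.character_id == id)
            # eagerly load aliases so they are available after the session closes
            .options(selectinload(models.Character.aliases))
        ).one()
        return character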

In SqlAlchemy how to share function between different table ORM

I wrote some table ORM classes in SQLAlchemy, and I want to share common functionality between them. This is one of my table ORM classes:
class RetailCampaignTemp(Base, Serialize, PrimaryField):
    __tablename__ = 'retail_campaign_temp'

    clicks = Column(INTEGER(11))
    id = Column(INTEGER(11), primary_key=True)
Here are my custom functions:
class Serialize(object):
    def Serialize(self):
        return {c: getattr(self, c) for c in inspect(self).attrs.keys()}


class PrimaryField(object):
    @declared_attr
    def GetPrimaryField(cls):
        yield from (column for column in cls.columns if column.primary_key)
When I call RetailCampaignTemp.GetPrimaryField() it says 'generator' object is not callable. What does this mean?
The culprit here is the @declared_attr decorator, which creates a class property, not a class method - that is, the expression RetailCampaignTemp.GetPrimaryField is already running the function body and returning the resulting generator, and RetailCampaignTemp.GetPrimaryField() is trying to call this generator.
This decorator is designed to be used to dynamically create SQLAlchemy mapping/table declarations, e.g. by returning a relationship. From your example code it doesn't look like this is the case, so is there any reason you're not just using Python's built-in @classmethod instead? This would make your call RetailCampaignTemp.GetPrimaryField() valid.
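A minimal sketch of that suggestion; note that iterating cls.__table__.columns (rather than cls.columns, which a declarative class does not expose) is an assumption about what the original code intended:

from sqlalchemy import inspect


class Serialize(object):
    def Serialize(self):
        # map each mapped attribute name to its current value
        return {c: getattr(self, c) for c in inspect(self).attrs.keys()}


class PrimaryField(object):
    @classmethod
    def GetPrimaryField(cls):
        # yield every column that is part of the primary key
        yield from (column for column in cls.__table__.columns if column.primary_key)

With that change, list(RetailCampaignTemp.GetPrimaryField()) should return the id column.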

sqlalchemy: how to block updates on a specific column

I have a declarative mapping:
class User(base):
    username = Column(Unicode(30), unique=True)
How can I tell SQLAlchemy that this attribute may not be modified?
The workaround I came up with is kind of hacky:
from werkzeug.utils import cached_property
# a regular @property works, too


class User(base):
    _username = Column('username', Unicode(30), unique=True)

    @cached_property
    def username(self):
        return self._username

    def __init__(self, username, **kw):
        super(User, self).__init__(**kw)
        self._username = username
Doing this on the database column permission level will not work because not all databases support that.
You can use the validates SQLAlchemy feature.
from sqlalchemy.orm import validates

...

class User(base):
    ...

    @validates('username')
    def validates_username(self, key, value):
        if self.username:  # Field already exists
            raise ValueError('Username cannot be modified.')
        return value
reference: https://docs.sqlalchemy.org/en/13/orm/mapped_attributes.html#simple-validators
I can suggest the following ways to protect a column from modification.
The first is using a hook that fires whenever any attribute is set.
With this approach every column in every table of the Base declarative will be hooked, so you need to store somewhere whether a column can be modified or not. For example, you could subclass sqlalchemy.Column to add such an attribute and then check that attribute in the hook:
class Column(sqlalchemy.Column):
    def __init__(self, *args, **kwargs):
        self.readonly = kwargs.pop("readonly", False)
        super(Column, self).__init__(*args, **kwargs)


# noinspection PyUnusedLocal
@event.listens_for(Base, 'attribute_instrument')
def configure_listener(class_, key, inst):
    """This event is called whenever an attribute on a class is instrumented"""
    if not hasattr(inst.property, 'columns'):
        return

    # noinspection PyUnusedLocal
    @event.listens_for(inst, "set", retval=True)
    def set_column_value(instance, value, oldvalue, initiator):
        """This event is called whenever a "set" occurs on that instrumented attribute"""
        logging.info("%s: %s -> %s" % (inst.property.columns[0], oldvalue, value))
        column = inst.property.columns[0]
        # check here whether the column may be modified; if not, raise an error
        if column.readonly:
            raise RuntimeError("Column %s can't be changed!" % column.name)
        return value
To hook specific attributes you can do it the following way (adding an attribute to the column is not required):
# standard decorator style
@event.listens_for(SomeClass.some_attribute, 'set')
def receive_set(target, value, oldvalue, initiator):
    "listen for the 'set' event"

    # ... (event handling logic) ...
Here is a guide about SQLAlchemy events.
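For completeness, a minimal self-contained sketch of such a per-attribute hook applied to the username column from the question (SQLAlchemy 1.4+ is assumed, and the rule "first assignment allowed, later changes blocked" is an illustrative choice):

from sqlalchemy import Column, Integer, Unicode, event
from sqlalchemy.orm import declarative_base
from sqlalchemy.orm.attributes import NO_VALUE

Base = declarative_base()


class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    username = Column(Unicode(30), unique=True)


@event.listens_for(User.username, 'set', retval=True, active_history=True)
def block_username_update(target, value, oldvalue, initiator):
    # NO_VALUE means the attribute was never set or loaded, so the first
    # assignment goes through; any later change to a different value is blocked
    if oldvalue is not NO_VALUE and oldvalue is not None and oldvalue != value:
        raise ValueError("username cannot be modified once set")
    return value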
The second way that I can suggest is using a standard Python property or SQLAlchemy hybrid_property, as you have shown in your question, but this approach results in the code growing.
P.S. I suppose the compact way is to add an attribute to the column and hook all set events.
A slight correction to @AlexQueue's answer.
@validates('username')
def validates_username(self, key, value):
    if self.username and self.username != value:  # Field already exists
        raise ValueError('Username cannot be modified.')
    return value
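A quick usage sketch of that corrected validator; the User model is assumed to be the declarative mapping from the question, with validates_username defined on it:

user = User(username='alice')

user.username = 'alice'    # re-assigning the same value is allowed
try:
    user.username = 'bob'  # changing it to a different value raises
except ValueError as exc:
    print(exc)             # Username cannot be modified.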

Making a unique validator with Colander and SQLAlchemy

All I'm trying to do is a simple blog website using Pyramid and SQLAlchemy. The form module I have chosen is Deform, which uses Colander. For now I have two fields in my form: name and url. The url is created by transliterating the name field, but that's beside the point. I don't want to have two articles with the same url, so I think I need to somehow make a validator with Colander. But the problem is that the validator runs per field, not per model record. I mean, if I made a validator for the url field, I wouldn't have information in my method about the other fields, such as id or name, so I couldn't perform the validation.
For now I have this couple of lines, which took me two hours to write =)
from slugify import slugify


def convertUrl(val):
    return slugify(val) if val else val


class ArticleForm(colander.MappingSchema):
    name = colander.SchemaNode(colander.String())
    url = colander.SchemaNode(colander.String(),
                              preparer=convertUrl)
Actually, I thought I should perform such validation at the model level, i.e. in the SQLAlchemy model, but of course the rules below don't do that, because they exist mainly for generating the SQL (CREATE TABLE):
class Article(TBase, Base):
    """ The SQLAlchemy declarative model class for a Article object. """
    __tablename__ = 'article'

    id = Column(Integer, primary_key=True)
    name = Column(Text, unique=True)
    url = Column(Text, unique=True)
Actually, my question refers neither to Deform nor to Colander; this validation must be performed at the SQLAlchemy level. Here's what I've come to:
@validates('url')
def validate_url_unique(self, key, value):
    check_unique = DBSession.query(Article)\
        .filter(and_(Article.url == value, Article.id != self.id)).first()
    if check_unique:
        # Doesn't work
        raise ValueError('Something went wrong')
        # Doesn't work either:
        # assert not check_unique
    return value
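Not an answer from the original thread, but worth noting: Colander also supports a schema-wide validator that receives the whole mapping at once, which sidesteps the per-field limitation described above. A minimal sketch, where ArticleForm, Article and DBSession are the names from the question and everything else is an assumption:

import colander


def unique_url_validator(form, values):
    # the whole deserialized mapping is available here, not just one field
    existing = DBSession.query(Article)\
        .filter(Article.url == values['url']).first()
    if existing:
        raise colander.Invalid(form, 'An article with this url already exists.')


schema = ArticleForm(validator=unique_url_validator)

When editing an existing article you would also need to exclude its own id from the query, as the @validates attempt above does.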

"Class already has a primary mapper defined" error with SQLAlchemy

Back in October 2010, I posted this question to the SQLAlchemy user list.
At the time, I just used the clear_mappers workaround mentioned in the message, and didn't try to figure out what the problem was. That was very naughty of me. Today I ran into this bug again, and decided to construct a minimal example, which appears below. Michael also addressed what is probably the same issue back in 2006. I decided to follow up here, to give Michael a break from my dumb questions.
So, the upshot appears to be that for a given class definition, you can't have more than one mapper defined. In my case I have the Pheno class declared in module scope (I assume that is top level scope here) and each time make_tables runs, it tries to define another mapper.
Mike wrote "Based on the description of the problem above, you need to ensure your Python classes are declared in the same scope as your mappers. The error message you're getting suggests that 'Pheno' is declared at the module level." That would take care of the problem, but how do I manage that, without altering my current structure? What other options do I have, if any? Apparently mapper doesn't have an option like "if the mapper is already defined, exit without doing anything", which would take care of it nicely. I guess I could define a wrapper function, but that would be pretty ugly.
from sqlalchemy import *
from sqlalchemy.orm import *


def make_pheno_table(meta, schema, name='pheno'):
    pheno_table = Table(
        name, meta,
        Column('patientid', String(60), primary_key=True),
        schema=schema,
    )
    return pheno_table


class Pheno(object):
    def __init__(self, patientid):
        self.patientid = patientid


def make_tables(schema):
    from sqlalchemy import MetaData
    meta = MetaData()
    pheno_table = make_pheno_table(meta, schema)
    mapper(Pheno, pheno_table)
    table_dict = {'metadata': meta, 'pheno_table': pheno_table}
    return table_dict


table_dict = make_tables('foo')
table_dict = make_tables('bar')
Error message follows. Tested with SQLAlchemy 0.6.3-3 on Debian squeeze.
$ python test.py
Traceback (most recent call last):
  File "test.py", line 25, in <module>
    table_dict = make_tables('bar')
  File "test.py", line 20, in make_tables
    mapper(Pheno, pheno_table)
  File "/usr/lib/python2.6/dist-packages/sqlalchemy/orm/__init__.py", line 818, in mapper
    return Mapper(class_, local_table, *args, **params)
  File "/usr/lib/python2.6/dist-packages/sqlalchemy/orm/mapper.py", line 209, in __init__
    self._configure_class_instrumentation()
  File "/usr/lib/python2.6/dist-packages/sqlalchemy/orm/mapper.py", line 381, in _configure_class_instrumentation
    self.class_)
sqlalchemy.exc.ArgumentError: Class '<class '__main__.Pheno'>' already has a primary mapper defined. Use non_primary=True to create a non primary Mapper. clear_mappers() will remove *all* current mappers from all classes.
EDIT: Per the documentation in SQLAlchemy: The mapper() API, I could replace mapper(Pheno, pheno_table) above with
from sqlalchemy.orm.exc import UnmappedClassError

try:
    class_mapper(Pheno)
except UnmappedClassError:
    mapper(Pheno, pheno_table)
If a mapper is not defined for Pheno, it throws an UnmappedClassError. This at least doesn't return an error in my test script, but I haven't checked if it actually works. Comments?
EDIT2: Per Denis's suggestion, the following works:
class Tables(object):
    def make_tables(self, schema):
        class Pheno(object):
            def __init__(self, patientid):
                self.patientid = patientid

        from sqlalchemy import MetaData
        from sqlalchemy.orm.exc import UnmappedClassError
        meta = MetaData()
        pheno_table = make_pheno_table(meta, schema)
        mapper(Pheno, pheno_table)
        table_dict = {'metadata': meta, 'pheno_table': pheno_table, 'Pheno': Pheno}
        return table_dict


table_dict = Tables().make_tables('foo')
table_dict = Tables().make_tables('bar')
However, the superficially similar
# does not work
class Tables(object):
    class Pheno(object):
        def __init__(self, patientid):
            self.patientid = patientid

    def make_tables(self, schema):
        from sqlalchemy import MetaData
        from sqlalchemy.orm.exc import UnmappedClassError
        meta = MetaData()
        pheno_table = make_pheno_table(meta, schema)
        mapper(self.Pheno, pheno_table)
        table_dict = {'metadata': meta, 'pheno_table': pheno_table, 'Pheno': self.Pheno}
        return table_dict


table_dict = Tables().make_tables('foo')
table_dict = Tables().make_tables('bar')
does not. I get the same error message as before.
I don't really understand the scoping issues well enough to say why.
Isn't the Pheno class in both cases in some kind of local scope?
You are trying to map the same class Pheno to 2 different tables. SQLAlchemy allows only one primary mapper for each class, so that it knows which table to use for session.query(Pheno). It's not clear from your question what you wish to get, so I can't propose a solution. There are 2 obvious options:
define a separate class to map to the second table,
create a non-primary mapper for the second table by passing the non_primary=True parameter, and pass it (the value returned by the mapper() function) to session.query() instead of the class (see the sketch after the update below).
Update: to define a separate class for each table you can put its definition into make_tables():
def make_tables(schema):
    from sqlalchemy import MetaData
    meta = MetaData()
    pheno_table = make_pheno_table(meta, schema)

    class Pheno(object):
        def __init__(self, patientid):
            self.patientid = patientid

    mapper(Pheno, pheno_table)

    table_dict = {'metadata': meta,
                  'pheno_class': Pheno,
                  'pheno_table': pheno_table}
    return table_dict
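A minimal sketch of the second option, the non-primary mapper; this is not from the original answer, and it assumes the make_pheno_table helper from the question, an existing engine, and the classic (pre-2.0) mapper() API in which non_primary is still available:

from sqlalchemy import MetaData
from sqlalchemy.orm import mapper, sessionmaker

meta = MetaData()
foo_table = make_pheno_table(meta, 'foo')
bar_table = make_pheno_table(meta, 'bar')

# the first mapper remains the primary mapper for Pheno
mapper(Pheno, foo_table)
# the second table gets a non-primary mapper instead of a new class
bar_mapper = mapper(Pheno, bar_table, non_primary=True)

session = sessionmaker(bind=engine)()       # 'engine' is assumed to exist
print(session.query(bar_mapper).all())      # queries the 'bar' table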
Maybe I didn't quite understand what you want, but this recipe creates identical columns in tables with different __tablename__ values:
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class TBase(object):
    """Base class is a 'mixin'.

    Guidelines for declarative mixins is at:
    http://www.sqlalchemy.org/docs/orm/extensions/declarative.html#mixin-classes
    """
    id = Column(Integer, primary_key=True)
    data = Column(String(50))

    def __repr__(self):
        return "%s(data=%r)" % (
            self.__class__.__name__, self.data
        )


class T1Foo(TBase, Base):
    __tablename__ = 't1'


class T2Foo(TBase, Base):
    __tablename__ = 't2'


engine = create_engine('sqlite:///foo.db', echo=True)

Base.metadata.create_all(engine)

sess = sessionmaker(engine)()

sess.add_all([T1Foo(data='t1'), T1Foo(data='t2'), T2Foo(data='t3'),
              T1Foo(data='t4')])

print sess.query(T1Foo).all()
print sess.query(T2Foo).all()

sess.commit()
More info in the SQLAlchemy examples.