I'm finding my way into Scala 2.11 with Slick 2.1.0. I have two entities, persons and users, where a user extends a person. How can I define a projection in Users that fetches the Person as part of the User every time I fetch a user entity?
Here are the entity classes
import scala.slick.driver.MySQLDriver.simple._

case class Person(
  id: Option[Long],
  name: String,
  createdAt: Option[java.sql.Timestamp],
  deletedAt: Option[java.sql.Timestamp])

class Persons(tag: Tag) extends Table[Person](tag, "persons") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name", O.NotNull)
  def createdAt = column[java.sql.Timestamp]("created_at")
  def deletedAt = column[java.sql.Timestamp]("deleted_at")

  def * = (
    id.?,
    name,
    createdAt.?,
    deletedAt.?) <> (Person.tupled, Person.unapply)
}
and
import scala.slick.driver.MySQLDriver.simple._

case class User(
  id: Option[Long],
  personId: Long,
  active: Boolean,
  createdAt: Option[java.sql.Timestamp],
  modifiedAt: Option[java.sql.Timestamp],
  deletedAt: Option[java.sql.Timestamp])

class Users(tag: Tag) extends Table[User](tag, "users") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def personId = column[Long]("person_id", O.NotNull)
  def active = column[Boolean]("active")
  def createdAt = column[java.sql.Timestamp]("created_at")
  def modifiedAt = column[java.sql.Timestamp]("modified_at")
  def deletedAt = column[java.sql.Timestamp]("deleted_at")

  def * = (
    id.?,
    personId,
    active,
    createdAt.?,
    modifiedAt.?,
    deletedAt.?) <> (User.tupled, User.unapply)

  // the foreign key must point at the Persons table, not Users
  def person = foreignKey(
    "user_person_fk",
    personId,
    TableQuery[Persons])(_.id)
}
Now, when I query using
val users = TableQuery[Users]

def loadAll: Option[List[Any]] = {
  db withDynSession {
    val query = for {
      u <- users
      p <- u.person
    } yield (u, p)
    val result = query.list map {
      case (u, p) => Map("user" -> u, "person" -> p)
    }
    Option(result)
  }
}
I get a list of maps:
List(Map("user" -> User, "person" -> Person), ...)
Is there a way I can use a projection or map to get the person as part of the User?
Instead of mapping your tables using <>, you can transform them after executing the query into whatever shape you like and then map them to case classes using .map.
HOWEVER: This is not recommended practice. Best practice with Slick is to keep foreign keys in your case classes and associate case classes only using tuples (or equivalent named association classes as EndeNeu suggests). Having user embed Person directly takes away some flexibility from you. To quote the Slick manual:
From: http://slick.typesafe.com/doc/2.1.0/orm-to-slick.html#relationships
case class Address( … )
case class Person( …, address: Address )
The problem is that this hard-codes that to exist, a Person requires an Address. It cannot be loaded without it. This doesn't fit Slick's philosophy of giving you fine-grained control over what you load exactly. With Slick it is advised to map one table to a tuple or case class without them having object references to related objects. Instead you can write a function that joins two tables and returns them as a tuple or association case class instance, providing an association externally, not strongly tied to one of the classes.
val tupledJoin: Query[(People,Addresses),(Person,Address), Seq]
= people join addresses on (_.addressId === _.id)
case class PersonWithAddress(person: Person, address: Address)
val caseClassJoinResults = tupledJoin.run map PersonWithAddress.tupled
Also be aware that unlike with some ORMs, Slick makes it very easy to write queries with finer granularity than whole rows for better performance. See
http://slick.typesafe.com/doc/2.1.0/orm-to-slick.html#query-granularity
I'm working on a project using Flask and a PostgreSQL database, with SQLAlchemy.
I have Group objects which have a list of User IDs who are members of the group. For some reason, when I try to add an ID to a group, it will not save properly.
If I try members.append(user_id), it doesn't seem to work at all. However, if I try members += [user_id], the ID shows up in the view listing all the groups, but if I restart the server, the added values are gone. The initial values, however, are still there.
Related code:
Adding group to the database initially:
db = SQLAlchemy(app)
# ...
g = Group(request.form['name'], user_id)
db.session.add(g)
db.session.commit()
The Group class:
from flask.ext.sqlalchemy import SQLAlchemy
from sqlalchemy.dialects.postgresql import ARRAY

class Group(db.Model):
    __tablename__ = "groups"
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(128))
    leader = db.Column(db.Integer)
    # list of the members in the group based on user id
    members = db.Column(ARRAY(db.Integer))

    def __init__(self, name, leader):
        self.name = name
        self.leader = leader
        self.members = [leader]

    def __repr__(self):
        return "Name: {}, Leader: {}, Members: {}".format(self.name, self.leader, self.members)

    def add_user(self, user_id):
        self.members += [user_id]
My test function for updating the Group:
def add_2_to_group():
    g = Group.query.all()[0]
    g.add_user(2)
    db.session.commit()
    return redirect(url_for('show_groups'))
Thanks for any help!
As you have mentioned, the ARRAY datatype in SQLAlchemy behaves as immutable: in-place changes such as append are not tracked, so they never reach the database once the value has been initialised.
To solve this, create a MutableList class:
from sqlalchemy.ext.mutable import Mutable

class MutableList(Mutable, list):
    def append(self, value):
        list.append(self, value)
        self.changed()  # notify the ORM that the value has mutated

    @classmethod
    def coerce(cls, key, value):
        if not isinstance(value, MutableList):
            if isinstance(value, list):
                return MutableList(value)
            return Mutable.coerce(key, value)
        else:
            return value
This snippet extends list with SQLAlchemy's mutation tracking, so appends are reported to the ORM. Now you can use the class above to create a mutable array type like:
class Group(db.Model):
    ...
    members = db.Column(MutableList.as_mutable(ARRAY(db.Integer)))
    ...
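With the column declared this way, in-place appends are detected and persisted. A minimal check (a sketch assuming the Group model and db session from the question) might be:

g = Group.query.first()
g.members.append(2)   # MutableList.append() calls self.changed(), marking the attribute dirty
db.session.commit()   # the UPDATE now includes the members column

db.session.expire(g)  # force a reload from the database
assert 2 in g.members

Note that only append is overridden above, so the question's members += [user_id] may still go unnoticed; either call append directly or override extend and __iadd__ on MutableList as well.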
You can use the flag_modified function to mark the property as having changed. In this example, you could change your add_user method to:
from sqlalchemy.orm.attributes import flag_modified

# ~~~

def add_user(self, user_id):
    self.members += [user_id]
    flag_modified(self, 'members')
To anyone in the future: it turns out that arrays through SQLAlchemy behave as immutable, so once they're initialized in the database, in-place changes won't be saved. There's probably a way to do this, but there are better ways to do what we're trying to do.
This is a hacky solution, but what you can do is:
Store the existing array temporarily
Set the column value to None
Set the column value to the existing temporary array
For example:
g = Group.query.all()[0]
temp_array = g.members
g.members = None
db.session.commit()
db.session.refresh(g)
g.members = temp_array
db.session.commit()
In my case it was solved by creating a new list object and assigning it to the attribute, instead of updating the existing list in place. Because a new reference is assigned, SQLAlchemy detects the change and persists it.
Here in the model, for the question table:
optional_id = sa.Column(sa.ARRAY(sa.Integer), nullable=True)
And in the views:
option_list = list(question.optional_id if question.optional_id else [])
if option_list:
    question.optional_id.clear()
    option_list.append(obj.id)
    question.optional_id = option_list
else:
    question.optional_id = [obj.id]
I'm trying to perform a GraphQL query using Django and Graphene. To query one single object using the id I did the following:
{
  samples(id: "U2FtcGxlU2V0VHlwZToxMjYw") {
    edges {
      node {
        name
      }
    }
  }
}
And it just works fine. The problem arises when I try to query with more than one id, like the following:
{
  samples(id_In: "U2FtcGxlU2V0VHlwZToxMjYw, U2FtcGxlU2V0VHlwZToxMjYx") {
    edges {
      node {
        name
      }
    }
  }
}
In the latter case I got the following error:
argument should be a bytes-like object or ASCII string, not 'list'
And this is a sketch of how the Type and the Query are defined in graphene-django:
class SampleType(DjangoObjectType):
    class Meta:
        model = Sample
        filter_fields = {
            'id': ['exact', 'in'],
        }
        interfaces = (graphene.relay.Node,)

class Query(object):
    samples = DjangoFilterConnectionField(SampleType)

    def resolve_sample_sets(self, info, **kwargs):
        return Sample.objects.all()
GlobalIDMultipleChoiceFilter from graphene-django kinda solves this issue, if you put "in" in the field name. You can create filters like:
from django_filters import FilterSet
from graphene_django.filter import GlobalIDMultipleChoiceFilter

class BookFilter(FilterSet):
    author = GlobalIDMultipleChoiceFilter()
and use it by
{
  books(author: ["<GlobalID1>", "<GlobalID2>"]) {
    edges {
      node {
        name
      }
    }
  }
}
Still not perfect, but the need for custom code is minimized.
You can easily use a FilterSet; just put this with your nodes:
class ReportFileFilter(FilterSet):
    id = GlobalIDMultipleChoiceFilter()
Then in your query just use -
class Query(graphene.ObjectType):
    all_report_files = DjangoFilterConnectionField(ReportFileNode, filterset_class=ReportFileFilter)
This is for relay implementation of graphql django.
None of the existing answers seemed to work for me as they were presented; however, with some slight changes I managed to resolve my problem as follows.
You can create a custom FilterSet class for your object type and filter the field by using the GlobalIDMultipleChoiceFilter. For example:
from django_filters import FilterSet
from graphene_django.filter import GlobalIDFilter, GlobalIDMultipleChoiceFilter

class SampleFilter(FilterSet):
    id = GlobalIDFilter()
    id__in = GlobalIDMultipleChoiceFilter(field_name="id")

    class Meta:
        model = Sample
        fields = (
            "id__in",
            "id",
        )
Something I came across is that you cannot have filter_fields defined with this approach. Instead, you have to rely on the custom FilterSet class exclusively, making your object type effectively look like this:
from graphene import relay
from graphene_django import DjangoObjectType

class SampleType(DjangoObjectType):
    class Meta:
        model = Sample
        filterset_class = SampleFilter
        interfaces = (relay.Node,)
I had trouble implementing the 'in' filter as well; it appears to be misimplemented in graphene-django right now and does not work as expected. Here are the steps to make it work:
Remove the 'in' filter from your filter_fields
Add an input value to your DjangoFilterConnectionField entitled 'id__in' and make it a list of IDs
Rename your resolver to match the 'samples' field.
Handle filtering by 'id__in' in your resolver for the field. For you this will look as follows:
from base64 import b64decode

def get_pk_from_node_id(node_id: str):
    """Gets pk from node_id"""
    model_with_pk = b64decode(node_id).decode('utf-8')
    model_name, pk = model_with_pk.split(":")
    return pk

class SampleType(DjangoObjectType):
    class Meta:
        model = Sample
        filter_fields = {
            'id': ['exact'],
        }
        interfaces = (graphene.relay.Node,)

class Query(object):
    samples = DjangoFilterConnectionField(SampleType, id__in=graphene.List(graphene.ID))

    def resolve_samples(self, info, **kwargs):
        # filter_field for 'in' seems to not work, this hack works
        id__in = kwargs.get('id__in')
        if id__in:
            node_ids = kwargs.pop('id__in')
            pk_list = [get_pk_from_node_id(node_id) for node_id in node_ids]
            return Sample._default_manager.filter(id__in=pk_list)
        return Sample._default_manager.all()
This will allow you to call the filter with the following API. Note the use of an actual array in the signature (I think this is a better API than sending a comma-separated string of values). This solution still allows you to add other filters to the request, and they will chain together correctly.
{
  samples(id_In: ["U2FtcGxlU2V0VHlwZToxMjYw", "U2FtcGxlU2V0VHlwZToxMjYx"]) {
    edges {
      node {
        name
      }
    }
  }
}
Another way is to tell graphene_django's Relay filter to also deal with a list. This filter is registered in a mixin in graphene_django and applied to any filter you define.
So here is my solution:
from django.db.models import AutoField, ForeignKey, OneToOneField
from graphene_django.filter.filterset import (
    GlobalIDFilter,
    GrapheneFilterSetMixin,
)
from graphql_relay import from_global_id

class CustomGlobalIDFilter(GlobalIDFilter):
    """Allow __in lookup for IDs"""

    def filter(self, qs, value):
        if isinstance(value, list):
            value_lst = [from_global_id(v)[1] for v in value]
            return super(GlobalIDFilter, self).filter(qs, value_lst)
        else:
            return super().filter(qs, value)

# Fix the mixin defaults
GrapheneFilterSetMixin.FILTER_DEFAULTS.update({
    AutoField: {"filter_class": CustomGlobalIDFilter},
    OneToOneField: {"filter_class": CustomGlobalIDFilter},
    ForeignKey: {"filter_class": CustomGlobalIDFilter},
})
I'm using the SQLAlchemy association-object pattern (http://docs.sqlalchemy.org/en/rel_1_1/orm/basic_relationships.html#association-object) for three model classes.
The basic relationship: on the left side, one User can belong to multiple Organizations. I'm storing extra User-Organization data in the association-object class. Then the association-object class maps a many-to-one to the Organization.
From the SQLAlchemy point of view, the relationship works fine. The problem is that testing this with factory_boy has proven difficult and always results in RecursionError: maximum recursion depth exceeded.
Below are the three models for the association-object relationship, where User is the parent and Organization is the child:
class MemberOrgsAssoc(Model):
    """The left side of the relationship maps a User as a one-to-many to
    Organizations. User-Organization relevant data is stored in
    this association-object table. Then, there is a one-to-many from
    this association-object table to the Organization table."""

    __tablename__ = 'member_orgs'
    member_id = Column(db.Integer, db.ForeignKey("users.id"), primary_key=True)
    org_id = Column(db.Integer, db.ForeignKey("organizations.id"), primary_key=True)
    manager_id = Column(db.Integer, db.ForeignKey("users.id"))
    org_title = Column(db.Unicode(50))

    organization = relationship("Organization", back_populates="members")
    member = relationship("User", back_populates="organizations",
                          foreign_keys=[member_id])
    manager = relationship("User", back_populates="subordinates",
                           foreign_keys=[manager_id])

class User(SurrogatePK, Model):
    """A user of the app."""

    __tablename__ = 'users'
    username = Column(db.Unicode(80), unique=True, nullable=False)
    organizations = relationship("MemberOrgsAssoc", back_populates="member",
                                 primaryjoin="member_orgs.c.member_id == User.id",
                                 lazy="dynamic")
    subordinates = relationship("MemberOrgsAssoc", back_populates="manager",
                                primaryjoin="member_orgs.c.manager_id == User.id",
                                lazy="dynamic")

class Organization(SurrogatePK, Model):
    """An organization that Users may belong to."""

    __tablename__ = 'organizations'
    name = Column(db.Unicode(128), nullable=False)
    members = relationship("MemberOrgsAssoc", back_populates="organization")
So all the above SQLAlchemy model classes and relationships seem to work as intended for now.
Below are the three factory-boy classes I'm attempting to make work.
MemberOrgs association-object factory:
class MemberOrgsAssocFactory(BaseFactory):
    """Association-object table Factory"""

    class Meta:
        """Factory config"""
        model = MemberOrgsAssoc

    member_id = factory.SubFactory('tests.factories.UserFactory')
    org_id = factory.SubFactory('tests.factories.OrganizationFactory')
    manager_id = factory.SubFactory('tests.factories.UserFactory')
    org_title = Sequence(lambda n: 'CEO{0}'.format(n))
    organization = factory.SubFactory('tests.factories.OrganizationFactory')
    member = factory.SubFactory('tests.factories.UserFactory')
    manager = factory.SubFactory('tests.factories.UserFactory')

class UserFactory(BaseFactory):
    """User factory."""

    class Meta:
        """Factory configuration."""
        model = User

    username = Sequence(lambda n: 'user{0}'.format(n))
    organizations = factory.List(
        [factory.SubFactory('tests.factories.MemberOrgsAssocFactory')])
    subordinates = factory.List(
        [factory.SubFactory('tests.factories.MemberOrgsAssocFactory')])

class OrganizationFactory(BaseFactory):
    """Company factory"""

    class Meta:
        """Factory config"""
        model = Organization

    id = Sequence(lambda n: '{0}'.format(n))
    name = Sequence(lambda n: 'company{0}'.format(n))
    members = factory.List(
        [factory.SubFactory('tests.factories.MemberOrgsAssocFactory')])
Finally, I need to make a user for the tests, so below is a pytest fixture that creates a User. This is where the tests fail, due to RecursionError: maximum recursion depth exceeded.
@pytest.fixture(scope='function')
def user(db):
    """A user for the unit tests.

    setup reference: https://github.com/FactoryBoy/factory_boy/issues/101
    # how to handle self referential foreign key relation in factory boy
    # https://github.com/FactoryBoy/factory_boy/issues/173
    """
    user = UserFactory(
        organizations__0=None,
        subordinates__0=None,
    )
    a = MemberOrgsAssocFactory(
        is_org_admin=True,
        is_default_org=True,
        is_active=True,
    )
    a.organization = OrganizationFactory()
    user.organizations.append(a)
    db.session.commit()
    return user
Error message:
E RecursionError: maximum recursion depth exceeded
!!! Recursion detected (same locals & position)
More or less resolved this, though it's a bit fragile overall. The recursion comes from the circular SubFactory references (UserFactory to MemberOrgsAssocFactory and back), so those cycles have to be broken explicitly when instantiating the factories, and the required pattern laid out in the SQLAlchemy docs must be followed carefully:
""" EXAMPLE USE:
# create User object, append an Organization object via association
p = User()
a = MemberOrgsAssoc(extra_data="some data")
a.organization = Organization()
p.organizations.append(a)
# iterate through Organization objects via association, including association attributes:
for assoc in p.organizations:
print(assoc.extra_data)
print(assoc.child)
"""
The changes below to the pytest fixture resolved the RecursionError issue and got it working:
@pytest.fixture(scope='function')
def user(db):
    """A user for the tests."""
    user = UserFactory(
        organizations='',  # override the SubFactory lists to break the factory cycle
        subordinates=''
    )
    a = MemberOrgsAssocFactory(
        member_id=None,
        org_id=None,
        manager_id=None,
        is_org_admin=True,
        is_default_org=True,
        is_active=True,
        organization=None,  # null out every SubFactory so factory_boy doesn't recurse
        member=None,
        manager=None
    )
    a.organization = OrganizationFactory(members=[])
    user.organizations.append(a)
    db.session.commit()
    # debugging
    # thisuser = User.get_by_id(user.id)
    # for assoc in thisuser.organizations:
    #     if assoc.is_default_org:
    #         print('The default organization of thisuser is -> {}'.format(assoc.organization.name))
    return user
Coming from Play Anorm (creating a model from JSON without passing the Anorm PK value in the JSON), I'm trying to add a Seq[Address] to a case class like:
case class User(
  id: Pk[Long] = NotAssigned,
  name: String = "",
  email: String = "",
  addresses: Seq[Address])
Address is a simple class with three Strings. A user can have more than one address; how do I get all the addresses along with the User in findAll?
def findAll(): Seq[User] = {
  Logger.info("select * from pt_users")
  DB.withConnection { implicit connection =>
    SQL(
      """
      select * from pt_users where name like {query} limit {size} offset {offset}
      """).as(User.list *)
  }
}
A side note about something I have found useful: if you're not sure you will always want to fetch the addresses for a user, you can avoid adding that relation as a field and instead use tuples or other data structures for representing them. This would allow you to have a method like this:
def allUsersWithAddresses(): Map[User, Seq[Address]] = ...
But still have methods that return only users without having the joined data.
To read a join (or subselect) you will have to parse the combined output with a parser, something like this:
.as(User ~ Address *).groupBy(_._1)
If you really want to put the addresses inside of user, you'd have to make the address list empty from the user parser and then map each distinct user into one with the addresses:
.as(User ~ Address *).groupBy(_._1).map {
  case (user, addresses) => user.copy(addresses = addresses.map(_._2))
}
Note, the examples are just pointers to an approximate solution, not copy-paste-and-compile ready code.
Hope it helped!
This will work
/** Parses a `Blog` paired with an optional `Post` that can later be collapsed into one object. */
val parser: RowParser[(Blog, Option[Post])] = {
  simple ~ Post.parser.? map {
    case blog ~ post => (blog, post)
  }
}

def list: List[Blog] = {
  DB.withConnection { implicit c =>
    SQL("""
      SELECT * FROM blogs b
      LEFT OUTER JOIN posts p ON(p.blog_id = b.id)
    """).as(parser *)
      .groupBy(_._1)
      .mapValues(_.map(_._2).flatten)
      .toList
      .map { case (blog, posts) => blog.copy(posts = posts) }
  }
}
Copied from https://index.scala-lang.org/mhzajac/anorm-relational/anorm-relational/0.3.0?target=_2.12
I need some models, for instance the following:
Work - e.g. works of literature.
Worker - e.g. a composer, a translator, or someone similar who contributes to a work.
Thus, a 'type' field is required to distinguish workers by their division of work. As SQLAlchemy's documentation describes, this case can benefit from the association object pattern, like the following:
class Work(base):
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)

class Worker(base):
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)

class Assignment(base):
    work_id = Column(Integer, ForeignKey('work.id'), primary_key=True)
    worker_id = Column(Integer, ForeignKey('worker.id'), primary_key=True)
    type = Column(SmallInteger, nullable=True)
Nonetheless, how do I take advantage of backref and alternative join conditions to build the relationships, so that each Work object can retrieve and modify the corresponding Worker(s) through distinct attributes? For example:
work = session.query(Work).get(1)
work.name
>>> 'A Dream of The Red Mansions'
work.composers
>>> [<Worker('Xueqin Cao')>]
work.translators
>>> [<Worker('Xianyi Yang')>, <Worker('Naidie Dai')>]
Vice versa:
worker = session.query(Worker).get(1)
worker.name
>>> 'Xueqin Cao'
worker.composed
>>> [<Work('A Dream of The Red Mansions')>]
worker.translated
>>> []
Adding secondaryjoin directly without secondary specified doesn't seem feasible; besides, SQLAlchemy's docs note that:
When using the association object pattern, it is advisable that the association-mapped table not be used as the secondary argument on a relationship() elsewhere, unless that relationship() contains the option viewonly=True. SQLAlchemy otherwise may attempt to emit redundant INSERT and DELETE statements on the same table, if similar state is detected on the related attribute as well as the associated object.
Then, is there some way to build these relations elegantly and readily?
There are three general ways to go here.
One is, do a "vanilla" setup where you have "work"/"workers" set up without distinguishing on "type" - then, use relationship() for "composer", "composed", "translator", "translated" by using "secondary" to Assignment.__table__ along with custom join conditions, as well as viewonly=True. So you'd do writes via the vanilla properties only. A disadvantage here is that there's no immediate synchronization between the "vanilla" and "specific" collections.
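A rough sketch of what this first option could look like, reusing the Work/Worker/Assignment names from the question (this is my illustration, not code from the answer; the assignments collection name is made up, and Assignment.type is assumed to hold strings like 'composer'):

from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session, relationship

Base = declarative_base()

class Assignment(Base):
    __tablename__ = 'assignment'
    work_id = Column(Integer, ForeignKey('work.id'), primary_key=True)
    worker_id = Column(Integer, ForeignKey('worker.id'), primary_key=True)
    type = Column(String, nullable=False)  # 'composer', 'translator', ...
    work = relationship("Work", back_populates="assignments")
    worker = relationship("Worker", back_populates="assignments")

class Work(Base):
    __tablename__ = 'work'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    # the single "vanilla" writable collection
    assignments = relationship("Assignment", back_populates="work")
    # read-only filtered views over the same association table
    composers = relationship(
        "Worker", secondary="assignment", viewonly=True,
        primaryjoin="and_(Work.id == Assignment.work_id, "
                    "Assignment.type == 'composer')",
        secondaryjoin="Assignment.worker_id == Worker.id")
    translators = relationship(
        "Worker", secondary="assignment", viewonly=True,
        primaryjoin="and_(Work.id == Assignment.work_id, "
                    "Assignment.type == 'translator')",
        secondaryjoin="Assignment.worker_id == Worker.id")

class Worker(Base):
    __tablename__ = 'worker'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    assignments = relationship("Assignment", back_populates="worker")

e = create_engine("sqlite://")
Base.metadata.create_all(e)
session = Session(e)

w = Work(name='A Dream of The Red Mansions')
w.assignments.append(Assignment(worker=Worker(name='Xueqin Cao'), type='composer'))
session.add(w)
session.commit()  # expires w, so the next access issues a fresh SELECT
assert [c.name for c in w.composers] == ['Xueqin Cao']

Writes go through the vanilla assignments collection only; the viewonly collections catch up on the next load, which is exactly the lack of immediate synchronization mentioned above.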
Another is, same with the "vanilla" setup, but just use plain Python descriptors to give "composer", "composed", "translator", "translated" views in memory, that is, [obj.worker for obj in self.workers if obj.type == 'composer']. This is the simplest way to go. Whatever you put in the "vanilla" collections shows right up in the "filtered" collection, the SQL is simple, and there's fewer SELECT statements in play (one per Worker/Work instead of N per Worker/Work).
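A matching sketch of this second option (again my illustration, reusing Base, Assignment and Worker from the sketch above, with this Work variant in place of the previous one):

class Work(Base):
    __tablename__ = 'work'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    assignments = relationship("Assignment", back_populates="work")

    @property
    def composers(self):
        # filtered in memory, so additions to "assignments" show up immediately
        return [a.worker for a in self.assignments if a.type == 'composer']

    @property
    def translators(self):
        return [a.worker for a in self.assignments if a.type == 'translator']

Because the filtering happens in Python over the already-loaded assignments collection, one SELECT loads everything per Work, and the filtered views never go stale relative to the vanilla collection.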
Finally, the approach that's closest to what you're asking, with primary joins and backrefs, but note with the association object, the backrefs are between Work/Assignment and Assignment/Worker, but not between Work/Worker directly. This approach probably winds up using more SQL to get at the results but is the most complete, and also has the nifty feature that the "type" is written automatically. We're also using a "one way backref", as Assignment doesn't have a simple way of relating back outwards (there's ways to do it but it would be tedious). Using a Python function to automate creation of the relationships reduces the boilerplate, and note here I'm using a string for "type", this can be an integer if you add more arguments to the system:
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.associationproxy import association_proxy

Base = declarative_base()

def _work_assignment(name):
    assign_ = relationship("Assignment",
        primaryjoin="and_(Assignment.work_id==Work.id, "
                    "Assignment.type=='%s')" % name,
        back_populates="work", cascade="all, delete-orphan")
    assoc = association_proxy("%s_assign" % name, "worker",
        creator=lambda worker: Assignment(worker=worker, type=name))
    return assoc, assign_

def _worker_assignment(name):
    assign_ = relationship("Assignment",
        primaryjoin="and_(Assignment.worker_id==Worker.id, "
                    "Assignment.type=='%s')" % name,
        back_populates="worker", cascade="all, delete-orphan")
    assoc = association_proxy("%s_assign" % name, "work",
        creator=lambda work: Assignment(work=work, type=name))
    return assoc, assign_

class Work(Base):
    __tablename__ = 'work'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)
    composers, composer_assign = _work_assignment("composer")
    translators, translator_assign = _work_assignment("translator")

class Worker(Base):
    __tablename__ = 'worker'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)
    composed, composer_assign = _worker_assignment("composer")
    translated, translator_assign = _worker_assignment("translator")

class Assignment(Base):
    __tablename__ = 'assignment'
    work_id = Column(Integer, ForeignKey('work.id'), primary_key=True)
    worker_id = Column(Integer, ForeignKey('worker.id'), primary_key=True)
    type = Column(String, nullable=False)
    worker = relationship("Worker")
    work = relationship("Work")

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

session = Session(e)

ww1, ww2, ww3 = Worker(name='Xueqin Cao'), Worker(name='Xianyi Yang'), Worker(name='Naidie Dai')

w1 = Work(name='A Dream of The Red Mansions')
w1.composers.append(ww1)
w1.translators.extend([ww2, ww3])
session.add(w1)
session.commit()

work = session.query(Work).get(1)
assert work.name == 'A Dream of The Red Mansions'
assert work.composers == [ww1]
assert work.translators == [ww2, ww3]

worker = session.query(Worker).get(ww1.id)
assert worker.name == 'Xueqin Cao'
assert worker.composed == [work]
assert worker.translated == []

worker.composed[:] = []

# either do this...
session.expire(work, ['composer_assign'])

# or this....basically need composer_assign to reload
# session.commit()

assert work.composers == []