Programmatically identify Django ForeignKey links - MySQL

Similar to the question I asked here, if I wanted to list all of the foreign key relationships from a model, is there a way to detect these relationships (forward and backward) automatically?
Specifically, if Model 1 reads
class Mdl_one(models.Model):
    name = models.CharField(max_length=30)
and Model 2 reads
class Mdl_two(models.Model):
    mdl_one = models.ForeignKey(Mdl_one)
    name = models.CharField(max_length=30)
Is there some meta command I can run from Mdl_one (like Mdl_one()._meta.one_to_many) that tells me that Mdl_two has a one-to-many foreign key relationship with it? Simply that Mdl_one and Mdl_two can be connected, not necessarily that any two objects actually are?

This is what you are looking for:
yourModel._meta.get_all_related_objects()
Sample (Edited):
class Alumne(models.Model):
    id_alumne = models.AutoField(primary_key=True)
    grup = models.ForeignKey(Grup, db_column='id_grup')
    nom_alumne = models.CharField("Nom", max_length=240)
    cognom1alumne = models.CharField("Cognom1", max_length=240)
    cognom2alumne = models.CharField("Cognom2", max_length=240, blank=True)
    ...

class Expulsio(models.Model):  # <---!
    alumne = models.ForeignKey(Alumne, db_column='id_alumne')
    dia_expulsio = models.DateField(blank=True)
    ...
>>> from alumnes.models import Alumne as A
>>> for x in A._meta.get_all_related_objects():
...     print x.name
...
horaris:alumneexclosdelhorari
presencia:controlassitencia
incidencies:entrevista
incidencies:expulsio <---!
incidencies:incidencia
incidencies:incidenciadaula
seguimentTutorial:seguimenttutorial
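Note that get_all_related_objects() was deprecated in Django 1.8 and removed in 1.10. On newer releases the same introspection goes through the formal Model._meta API instead; a minimal sketch, reusing the Alumne model above:

from alumnes.models import Alumne

# reverse relations (what get_all_related_objects() used to return)
related = [
    f for f in Alumne._meta.get_fields()
    if (f.one_to_many or f.one_to_one) and f.auto_created and not f.concrete
]
for f in related:
    print(f.name, f.related_model)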

Related

When do you actually use multiple foreign keys?

I want to create tables where one has multiple foreign keys to the other, and then define relationships between the two using the foreign_keys argument.
class Trial(SurrogatePK, Model):
    __tablename__ = 'trials'
    challenges = relationship('Challenge', foreign_keys='[Challenge.winner_id, '
                                                        'Challenge.loser_id]')

class Challenge(SurrogatePK, Model):
    __tablename__ = 'challenges'
    winner_id = reference_col('trials')
    winner = relationship('Trial', back_populates='challenges',
                          foreign_keys=winner_id)
    loser_id = reference_col('trials')
    loser = relationship('Trial', back_populates='challenges',
                         foreign_keys=loser_id)
This doesn't work because SQLAlchemy gets two foreign keys and raises an error for that.
The way to make it work is with a primaryjoin:
class Trial(SurrogatePK, Model):
    __tablename__ = 'trials'
    challenges = relationship('Challenge', primaryjoin=
                              'or_(Trial.id==Challenge.winner_id,'
                              'Trial.id==Challenge.loser_id)')
Now, the thing I want to ask is: when should I actually use multiple foreign keys in the foreign_keys argument? It has to be plural for a reason, right?
In the entire documentation I can't find a single case where multiple foreign keys are actually used.
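For what it's worth, one situation where a list of columns in foreign_keys is genuinely needed is a composite foreign key, i.e. a single constraint spanning several columns, so the relationship has to be told about all of them together. A hedged sketch with invented names (Team and Player are not from the question):

from sqlalchemy import Column, Integer, String, ForeignKeyConstraint
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Team(Base):
    __tablename__ = 'teams'
    league = Column(String, primary_key=True)
    name = Column(String, primary_key=True)

class Player(Base):
    __tablename__ = 'players'
    id = Column(Integer, primary_key=True)
    team_league = Column(String)
    team_name = Column(String)
    __table_args__ = (
        ForeignKeyConstraint(['team_league', 'team_name'],
                             ['teams.league', 'teams.name']),
    )
    # both columns belong to one composite FK, so both go into foreign_keys
    team = relationship('Team', foreign_keys=[team_league, team_name])

By contrast, winner_id and loser_id above are two separate foreign key constraints to the same table, which is why a single foreign_keys list is ambiguous there and the or_() primaryjoin (or two separate relationships, as in the Challenge class) is the way to express that union.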

Unable to change the value of foreign key to foreign key of an object Django

I have a model structure like:
class user(models.Model):
    name = models.CharField(max_length=100)
    tasks = models.IntegerField(default=0)

class project(models.Model):
    worker = models.ForeignKey(user, on_delete=models.CASCADE)
    project_name = models.CharField(max_length=100)

class task(models.Model):
    project = models.ForeignKey(project, on_delete=models.CASCADE)
    task_name = models.CharField(max_length=150)
    expected_date = models.DateField(auto_now=False, auto_now_add=False)
    actual_date = models.DateField(auto_now=False, auto_now_add=False, blank=True, null=True)
I want to traverse the task list and, if the actual_date field is not null (i.e. the task is completed), increment the tasks field on the user by 1. I have written the following code:
a = task.objects.filter(actual_date__isnull=False)
for x in a:
    x.project.worker.tasks += 1
However, this is not giving the desired result. What should I do?
You are not saving your object after modifying it - simply modifying the value doesn't write it to the database. Try this instead:
a = task.objects.filter(actual_date__isnull=False)
for x in a:
    worker = x.project.worker
    worker.tasks += 1
    worker.save()
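A further option, not from the original answer: pushing the increment into the database with an F() expression avoids the read-modify-write round trip and is safe against concurrent updates. A sketch, keeping the question's lowercase model names:

from django.db.models import F

for x in task.objects.filter(actual_date__isnull=False).select_related('project'):
    user.objects.filter(pk=x.project.worker_id).update(tasks=F('tasks') + 1)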
On a separate note, you should consider following PEP 8 conventions and using CamelCase for your class names. As it is currently, you can very easily mix up classes with objects.

Passive deletes in SQLAlchemy with a many-to-many relationship don't prevent DELETE from being issued for related object

I am trying to get SQLAlchemy to let my database's foreign keys "on delete cascade" do the cleanup on the association table between two objects. I have set up the cascade and passive_deletes options on the relationship, as seems appropriate from the docs. However, when a related object is loaded into the collection of a primary object and the primary object is deleted from the session, SQLAlchemy issues a DELETE statement on the secondary table for the record relating the primary and secondary objects.
For example:
import logging

import sqlalchemy as sa
import sqlalchemy.ext.declarative as sadec
import sqlalchemy.orm as saorm

engine = sa.create_engine('sqlite:///')
engine.execute('PRAGMA foreign_keys=ON')

logging.basicConfig()
_logger = logging.getLogger('sqlalchemy.engine')

meta = sa.MetaData(bind=engine)
Base = sadec.declarative_base(metadata=meta)
sess = saorm.sessionmaker(bind=engine)
session = sess()

blog_tags_table = sa.Table(
    'blog_tag_map',
    meta,
    sa.Column('blog_id', sa.Integer, sa.ForeignKey('blogs.id', ondelete='cascade')),
    sa.Column('tag_id', sa.Integer, sa.ForeignKey('tags.id', ondelete='cascade')),
    sa.UniqueConstraint('blog_id', 'tag_id', name='uc_blog_tag_map')
)

class Blog(Base):
    __tablename__ = 'blogs'
    id = sa.Column(sa.Integer, primary_key=True)
    title = sa.Column(sa.String, nullable=False)
    tags = saorm.relationship('Tag', secondary=blog_tags_table, passive_deletes=True,
                              cascade='save-update, merge, refresh-expire, expunge')

class Tag(Base):
    __tablename__ = 'tags'
    id = sa.Column(sa.Integer, primary_key=True)
    label = sa.Column(sa.String, nullable=False)

meta.create_all(bind=engine)

blog = Blog(title='foo')
blog.tags.append(Tag(label='bar'))
session.add(blog)
session.commit()

# sanity check
assert session.query(Blog.id).count() == 1
assert session.query(Tag.id).count() == 1
assert session.query(blog_tags_table).count() == 1

_logger.setLevel(logging.INFO)
session.commit()

# make sure the tag is loaded into the collection
assert blog.tags[0]
session.delete(blog)
session.commit()
_logger.setLevel(logging.WARNING)

# confirm check
assert session.query(Blog.id).count() == 0
assert session.query(Tag.id).count() == 1
assert session.query(blog_tags_table).count() == 0
The above code will produce DELETE statements as follows:
DELETE FROM blog_tag_map WHERE
blog_tag_map.blog_id = ? AND blog_tag_map.tag_id = ?
DELETE FROM blogs WHERE blogs.id = ?
Is there a way to setup the relationship so that no DELETE statement for blog_tag_map is issued? I've also tried setting passive_deletes='all' with the same results.
Here, the “related object” is not being deleted; that would be the tags. The blog_tags_table is not an object, it is a many-to-many table. Right now the passive_deletes='all' option is not supported for many-to-many, that is, a relationship that includes "secondary". This would be an acceptable feature addition, but it would require development and testing effort.
Applying viewonly=True to the relationship() would prevent any changes from affecting the many-to-many table. If the blog_tags_table is otherwise special, then you'd want to use the association object pattern to have finer grained control.
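For completeness, a minimal sketch of that viewonly suggestion, reusing sa, saorm, Base and blog_tags_table from the question's listing: the ORM then never writes to blog_tag_map at all, so its rows are only removed by the database's ON DELETE CASCADE, and they have to be populated through some other path (for example an association object, or direct inserts into the table), since appends to blog.tags no longer flush anything.

class Blog(Base):
    __tablename__ = 'blogs'
    id = sa.Column(sa.Integer, primary_key=True)
    title = sa.Column(sa.String, nullable=False)
    # view-only: reads join through blog_tag_map, but the unit of work never
    # emits INSERT or DELETE against that table
    tags = saorm.relationship('Tag', secondary=blog_tags_table, viewonly=True)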

How to build backref with both association object and secondaryjoin?

I need some models, for instance the following:
Work - e.g. works of literature.
Worker - e.g. a composer, translator, or someone else who contributed to a work.
Thus, a 'type' field is required to distinguish workers by their division of work. According to SQLAlchemy's documentation, this case can benefit from the association object pattern, like the following:
class Work(base):
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)

class Worker(base):
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)

class Assignment(base):
    work_id = Column(Integer, ForeignKey('work.id'), primary_key=True)
    worker_id = Column(Integer, ForeignKey('worker.id'), primary_key=True)
    type = Column(SmallInteger, nullable=True)
Nonetheless, how can I take advantage of backref and an alternative join condition to build the relationships so that each Work object can retrieve and modify its corresponding Worker(s) through distinct attributes? For example:
>>> work = session.query(Work).get(1)
>>> work.name
'A Dream of The Red Mansions'
>>> work.composers
[<Worker('Xueqin Cao')>]
>>> work.translators
[<Worker('Xianyi Yang')>, <Worker('Naidie Dai')>]
Vice versa:
>>> worker = session.query(Worker).get(1)
>>> worker.name
'Xueqin Cao'
>>> worker.composed
[<Work('A Dream of The Red Mansions')>]
>>> worker.translated
[]
Adding secondaryjoin directly without specifying secondary does not seem feasible; besides, SQLAlchemy's docs note that:
When using the association object pattern, it is advisable that the association-mapped table not be used as the secondary argument on a relationship() elsewhere, unless that relationship() contains the option viewonly=True. SQLAlchemy otherwise may attempt to emit redundant INSERT and DELETE statements on the same table, if similar state is detected on the related attribute as well as the associated object.
Then, is there some way to build these relations elegantly and readily?
There are three general ways to go here.
One is, do a "vanilla" setup where you have "work"/"workers" set up without distinguishing on "type" - then, use relationship() for "composer", "composed", "translator", "translated" by using "secondary" to Assignment.__table__ along with custom join conditions, as well as viewonly=True. So you'd do writes via the vanilla properties only. A disadvantage here is that there's no immediate synchronization between the "vanilla" and "specific" collections.
Another is, same with the "vanilla" setup, but just use plain Python descriptors to give "composer", "composed", "translator", "translated" views in memory, that is, [obj.worker for obj in self.workers if obj.type == 'composer']. This is the simplest way to go. Whatever you put in the "vanilla" collections shows right up in the "filtered" collection, the SQL is simple, and there's fewer SELECT statements in play (one per Worker/Work instead of N per Worker/Work).
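For illustration, a hedged sketch of that second option (the assignments collection name and the property bodies are assumptions; Work, Worker and Assignment follow the question's models):

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Work(Base):
    __tablename__ = 'work'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    assignments = relationship("Assignment", back_populates="work")

    # in-memory filtered views over the single "vanilla" collection
    @property
    def composers(self):
        return [a.worker for a in self.assignments if a.type == 'composer']

    @property
    def translators(self):
        return [a.worker for a in self.assignments if a.type == 'translator']

class Worker(Base):
    __tablename__ = 'worker'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    assignments = relationship("Assignment", back_populates="worker")

    @property
    def composed(self):
        return [a.work for a in self.assignments if a.type == 'composer']

    @property
    def translated(self):
        return [a.work for a in self.assignments if a.type == 'translator']

class Assignment(Base):
    __tablename__ = 'assignment'
    work_id = Column(Integer, ForeignKey('work.id'), primary_key=True)
    worker_id = Column(Integer, ForeignKey('worker.id'), primary_key=True)
    type = Column(String, nullable=False)
    work = relationship("Work", back_populates="assignments")
    worker = relationship("Worker", back_populates="assignments")

Writes then go through the single vanilla collection, e.g. work.assignments.append(Assignment(worker=someone, type='composer')).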
Finally, the approach that's closest to what you're asking, with primary joins and backrefs, but note with the association object, the backrefs are between Work/Assignment and Assignment/Worker, but not between Work/Worker directly. This approach probably winds up using more SQL to get at the results but is the most complete, and also has the nifty feature that the "type" is written automatically. We're also using a "one way backref", as Assignment doesn't have a simple way of relating back outwards (there's ways to do it but it would be tedious). Using a Python function to automate creation of the relationships reduces the boilerplate, and note here I'm using a string for "type", this can be an integer if you add more arguments to the system:
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.associationproxy import association_proxy
Base = declarative_base()
def _work_assignment(name):
    assign_ = relationship("Assignment",
                           primaryjoin="and_(Assignment.work_id==Work.id, "
                                       "Assignment.type=='%s')" % name,
                           back_populates="work", cascade="all, delete-orphan")
    assoc = association_proxy("%s_assign" % name, "worker",
                              creator=lambda worker: Assignment(worker=worker, type=name))
    return assoc, assign_

def _worker_assignment(name):
    assign_ = relationship("Assignment",
                           primaryjoin="and_(Assignment.worker_id==Worker.id, "
                                       "Assignment.type=='%s')" % name,
                           back_populates="worker", cascade="all, delete-orphan")
    assoc = association_proxy("%s_assign" % name, "work",
                              creator=lambda work: Assignment(work=work, type=name))
    return assoc, assign_

class Work(Base):
    __tablename__ = 'work'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)
    composers, composer_assign = _work_assignment("composer")
    translators, translator_assign = _work_assignment("translator")

class Worker(Base):
    __tablename__ = 'worker'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)
    composed, composer_assign = _worker_assignment("composer")
    translated, translator_assign = _worker_assignment("translator")

class Assignment(Base):
    __tablename__ = 'assignment'
    work_id = Column(Integer, ForeignKey('work.id'), primary_key=True)
    worker_id = Column(Integer, ForeignKey('worker.id'), primary_key=True)
    type = Column(String, nullable=False)
    worker = relationship("Worker")
    work = relationship("Work")
e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
session = Session(e)
ww1, ww2, ww3 = Worker(name='Xueqin Cao'), Worker(name='Xianyi Yang'), Worker(name='Naidie Dai')
w1 = Work(name='A Dream of The Red Mansions')
w1.composers.append(ww1)
w1.translators.extend([ww2, ww3])
session.add(w1)
session.commit()
work = session.query(Work).get(1)
assert work.name == 'A Dream of The Red Mansions'
assert work.composers == [ww1]
assert work.translators == [ww2, ww3]
worker = session.query(Worker).get(ww1.id)
assert worker.name == 'Xueqin Cao'
assert worker.composed == [work]
assert worker.translated == []
worker.composed[:] = []
# either do this...
session.expire(work, ['composer_assign'])
# or this....basically need composer_assign to reload
# session.commit()
assert work.composers == []

Inserting data in Many to Many relationship in SQLAlchemy

Suppose I have 3 classes in SQLAlchemy: Topic, Tag, Tag_To_Topic.
Is it possible to write something like:
new_topic = Topic("new topic")
new_topic.tags = ['tag1', 'tag2', 'tag3']
I would like this to automatically insert 'tag1', 'tag2' and 'tag3' into the Tag table, and also insert the correct relationships between new_topic and these 3 tags into the Tag_To_Topic table.
So far I haven't been able to figure out how to do this because of the many-to-many relationship. (If it were one-to-many, it would be very easy; SQLAlchemy would do it by default already. But this is many-to-many.)
Is this possible?
Thanks, Boda Cydo.
First of all, you could simplify your many-to-many relation by using association_proxy.
Then, I would leave the relation as it is in order not to interfere with what SA does:
# here *tag_to_topic* is the relation Table object
Topic.tags = relation('Tag', secondary=tag_to_topic)
And I suggest that you just create a simple wrapper property that does the job of translating the string list to the relation objects (you will probably rename the relation). Your Topic class would look similar to:
class Topic(Base):
    __tablename__ = 'topic'

    id = Column(Integer, primary_key=True)
    # ... other properties

    def _find_or_create_tag(self, tag):
        q = Tag.query.filter_by(name=tag)
        t = q.first()
        if not(t):
            t = Tag(tag)
        return t

    def _get_tags(self):
        return [x.name for x in self.tags]

    def _set_tags(self, value):
        # clear the list first
        while self.tags:
            del self.tags[0]
        # add new tags
        for tag in value:
            self.tags.append(self._find_or_create_tag(tag))

    str_tags = property(_get_tags,
                        _set_tags,
                        "Property str_tags is a simple wrapper for tags relation")
Then this code should work:
# Test
o = Topic()
session.add(o)
session.commit()
o.str_tags = ['tag1']
o.str_tags = ['tag1', 'tag4']
session.commit()
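For reference, a hedged sketch of the association_proxy route mentioned at the top of the answer. The table and class names follow the question, but note that, unlike the find-or-create wrapper above, the creator below always makes a new Tag row for each string:

from sqlalchemy import Table, Column, Integer, String, ForeignKey
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

tag_to_topic = Table(
    'tag_to_topic', Base.metadata,
    Column('topic_id', Integer, ForeignKey('topic.id')),
    Column('tag_id', Integer, ForeignKey('tag.id')),
)

class Tag(Base):
    __tablename__ = 'tag'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

class Topic(Base):
    __tablename__ = 'topic'
    id = Column(Integer, primary_key=True)
    tags = relationship('Tag', secondary=tag_to_topic)
    # topic.str_tags behaves like a list of strings; assigning or appending
    # strings creates Tag objects and the association rows for you
    str_tags = association_proxy('tags', 'name',
                                 creator=lambda name: Tag(name=name))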