I have a SQLAlchemy model like this:
class Group(Base):
    __tablename__ = 'groups'
    id = Column(Integer, primary_key=True, ca_include=True)
    name = Column(String, ca_include=True)

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True, ca_include=True)
    name = Column(String, ca_include=True)
    group_id = Column(Integer, ForeignKey('groups.id'), nullable=True, ca_include=True)
    group = relationship('Group', ca_include=True)
The form library I'm using is deform. I installed ColanderAlchemy to convert the model definitions into a Colander schema automatically:
form = deform.Form(SQLAlchemyMapping(Group), use_ajax = True)
I can call form.render() to get an empty form. But how do I fill this empty form with a record?
I tried:
group = Group.get(1)
form.render(group)
But that failed.
I also followed this blog post, but it only converts a single record into Colander's format; relationships are not converted.
So... is there any way for me to convert a SQLAlchemy record into a Colander record?
You'll need to utilise the dictify method associated with your given SQLAlchemyMapping schema object to convert a given model instance into an appstruct acceptable for rendering your Deform form.
So, using your example model, this is what you might do:
schema = SQLAlchemyMapping(Group)
form = deform.Form(schema, use_ajax=True)
my_group = Group(id=1, name='Foobar')  # or query for an instance, etc.
appstruct = schema.dictify(my_group)
form.render(appstruct)
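For what it's worth, the appstruct that dictify returns is just a plain dict keyed by field name, so for the instance above it would look roughly like the hand-built example below (an assumption on my part; the exact keys, especially for relationships, depend on the ColanderAlchemy version):
example_appstruct = {'id': 1, 'name': 'Foobar'}  # hypothetical shape of the dictify output
form.render(example_appstruct)  # rendering a hand-built appstruct works the same way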
Since ColanderAlchemy is very much cutting edge at this stage, your mileage will likely vary in newer versions (the above was written for version 0.1), especially as it is being substantially rewritten to remove the need for custom columns and relationship types in version 0.2. I've noticed that there were issues with the current ColanderAlchemy 0.1b6 release, especially with regard to the mapping of relationships.
Consult the documentation at http://colanderalchemy.rtfd.org/ for details on the latest version.
I am trying to create a new database entry using a custom Django model I created. However, when I try to create the model instance and save it, the id does not increment. Instead, the previous database entry, whose id == 1, is overwritten. I have tried setting force_insert=True inside the save() call, but it results in a runtime error because the primary key already exists. I don't set any primary key values when creating the object, so I'm not sure why the id is not being incremented. I am running the test code in the manage.py shell. All the models have been migrated properly.
The model:
class RoadWayData(models.Model):
    blocked_lanes = models.PositiveIntegerField()
    city = models.CharField(max_length=255)
    county = models.CharField(max_length=255)
    direction = models.CharField(max_length=255, blank=True, null=True, default=None)
    eto = models.CharField(max_length=255)
    incident_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
    incident_object = GenericForeignKey('incident_type', 'id')
    injuries = models.PositiveIntegerField()
    postmile = models.CharField(max_length=255, blank=True, null=True, default=None)
    queue = models.CharField(max_length=255, default="NONE: Freeflow Conditions")
    route = models.CharField(max_length=255, blank=True, null=True, default=None)
    street = models.CharField(max_length=255, blank=True, null=True, default=None)
    timestamp = models.DateTimeField(auto_now=True)
    update = models.PositiveIntegerField()
    maintenance = models.CharField(max_length=255)
    tow = models.CharField(max_length=255)
    weather = models.CharField(max_length=255)
    vehicles_involved = models.PositiveIntegerField()
The test code:
from incident.models import *
import datetime
x = IncidentIndex.objects.get(id=1)
y = CHPIncident.objects.get(id=x.incident_object.id)
print("ID already exists in DB: {}".format(RoadWayData.objects.get(id=1).id))
z = RoadWayData(
    blocked_lanes=0,
    city="testCity",
    county="testCounty",
    direction="NB",
    eto="Unknown",
    highway_accident=True,
    incident_object=y,
    injuries=0,
    postmile="New Postmile",
    route="new Route",
    update=2,
    maintenance="Not Requested",
    tow="Not Requested",
    weather="Clear Skies",
    vehicles_involved=0,
)
z.save()
print("New Data Object ID: {}".format(z.id))
Shell Output:
ID already exists in DB: 1
New Data Object ID: 1
Edit #1:
I am using a MySQL database and have not overridden the save() method. The MySQL console shows only one entry in the table (the row that was most recently saved).
Edit #2
I commented out the RoadWayData model and migrated the changes to wipe the table. Afterwards, I un-commented the model and migrated the changes to add it back to the database. The issue still persists.
Edit #3
I was able to manually insert a new entry into the table using the MySQL console. The ID incremented correctly. Perhaps it is a Django bug?
Edit #4
I've pinpointed the source of the problem: it stems from the contenttypes framework, more specifically the GenericForeignKey. For some reason, when the content object is assigned, the model inherits the content object's id.
Code with problem isolated:
x = IncidentIndex.objects.get(id=1)
y = CHPIncident.objects.get(id=x.incident_object.id)
r = RoadWayData(
    ...
    incident_object=None,  # Do not assign the generic foreign key
    ...
)
r.save()
print(r) # Shows <RoadWayData object> with CORRECT id
r.incident_object = y # Assign the general object
print(r) # Shows <RoadWayData object> with the id of y. INCORRECT
The easiest fix would be to create a variable to keep track of the Model's id BEFORE assigning the content_object (incident_object in my case).
FIX:
... initialization from code above ...
r.save()
r_id = r.id # SAVE THE CORRECT ID BEFORE ASSIGNING GENERIC FOREIGN KEY
r.incident_object = y # ASSIGN THE GENERIC FOREIGN OBJECT
r.id = r_id # OVERWRITE THE ID WITH THE CORRECT OLD ID
r.save()
The incident_object field in the RoadWayData model has its object-id reference (the second parameter) set to the model's own id column. So, when you assign incident_object, it overwrites the model's id.
To fix it, create a new PositiveIntegerField (like incident_id) and replace
incident_object = GenericForeignKey('incident_type', 'id')
with
incident_id = models.PositiveIntegerField(null=True)
incident_object = GenericForeignKey('incident_type', 'incident_id')
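Putting the fix together, a minimal sketch of the corrected model (my illustration; the question's other fields are unchanged and elided here):
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models

class RoadWayData(models.Model):
    # ... the other fields from the question stay as they are ...
    incident_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
    # dedicated column for the related object's primary key, so the
    # GenericForeignKey no longer reads/writes this model's own id
    incident_id = models.PositiveIntegerField(null=True)
    incident_object = GenericForeignKey('incident_type', 'incident_id')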
I have a model structure like this:
class user(models.Model):
    name = models.CharField(max_length=100)
    tasks = models.IntegerField(default=0)

class project(models.Model):
    worker = models.ForeignKey(user, on_delete=models.CASCADE)
    project_name = models.CharField(max_length=100)

class task(models.Model):
    project = models.ForeignKey(project, on_delete=models.CASCADE)
    task_name = models.CharField(max_length=150)
    expected_date = models.DateField(auto_now=False, auto_now_add=False)
    actual_date = models.DateField(auto_now=False, auto_now_add=False, blank=True, null=True)
I want to iterate over the tasks and, if the actual_date field is not null (i.e. the task is completed), increment the tasks field on the corresponding user by 1. I have written the following code:
a = task.objects.filter(actual_date__isnull=False)
for x in a:
    x.project.worker.tasks += 1
However, this is not giving the desired result. What should I do?
You are not saving your object after modifying it - simply modifying the value doesn't write it to the database. Try this instead:
a = task.objects.filter(actual_date__isnull=False)
for x in a:
    worker = x.project.worker
    worker.tasks += 1
    worker.save()
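If many tasks can point at the same worker, one alternative worth mentioning (my suggestion, not part of the original answer) is to push the increment into the database with Django's F() expressions, which also avoids read-modify-write races:
from django.db.models import F

a = task.objects.filter(actual_date__isnull=False)
for x in a:
    # increment atomically in the database rather than in Python
    user.objects.filter(pk=x.project.worker_id).update(tasks=F('tasks') + 1)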
On a separate note, you should consider following PEP 8 conventions and using CamelCase for your class names. As it is currently, you can very easily mix up classes with objects.
I am trying to save a list of VLAN IDs per network port and also per network circuit. The list itself is something like this:
class ListOfVlanIds(Base):
    __tablename__ = 'listofvlanids'
    id = Column(Integer, primary_key=True)
    listofvlanids_name = Column('listofvlanids_name', String, nullable=True)
And I then have a Port:
class Port(Base):
    __tablename__ = 'ports'
    id = Column(Integer, primary_key=True)
    listofvlanids_id = Column('listofvlanids_id', ForeignKey('ListOfVlanIds.id'), nullable=True)
and a Circuit:
class Circuit(Base):
    __tablename__ = 'circuits'
    id = Column(Integer, primary_key=True)
    listofvlanids_id = Column('listofvlanids_id', ForeignKey('ListOfVlanIds.id'), nullable=True)
Running code like this results (for me) in a sqlalchemy.exc.NoReferencedTableError on the ForeignKey.
Reading up on the error, I learned that I should add a relationship back from the list. I haven't found a way (or an example) to build this from both Port and Circuit. What am I missing?
Creating a separate list table for Ports and for Circuits just moves the problem downstream, since a VLAN ID is its own table... I'd love to be able to use the ORM instead of having to write (a lot of) SQL by hand.
ForeignKey expects a table and column name, not model and attribute name, so it should be ForeignKey('listofvlanids.id').
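A minimal sketch of the corrected setup (my illustration, not part of the original answer), with the foreign keys pointing at the table name and an optional relationship back to the list:
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class ListOfVlanIds(Base):
    __tablename__ = 'listofvlanids'
    id = Column(Integer, primary_key=True)
    listofvlanids_name = Column(String, nullable=True)

class Port(Base):
    __tablename__ = 'ports'
    id = Column(Integer, primary_key=True)
    # ForeignKey takes the table name ('listofvlanids'), not the class name
    listofvlanids_id = Column(Integer, ForeignKey('listofvlanids.id'), nullable=True)
    vlan_list = relationship('ListOfVlanIds')  # optional convenience accessor

class Circuit(Base):
    __tablename__ = 'circuits'
    id = Column(Integer, primary_key=True)
    listofvlanids_id = Column(Integer, ForeignKey('listofvlanids.id'), nullable=True)
    vlan_list = relationship('ListOfVlanIds')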
I need some models, for instance the following:
Work - e.g. works of literature.
Worker - e.g. a composer, translator, or someone else who contributes to a work.
Thus, a 'type' field is required to distinguish workers by their division of work. As SQLAlchemy's documentation suggests, this case can benefit from the association object pattern, like the following:
class Work(base):
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)

class Worker(base):
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)

class Assignment(base):
    work_id = Column(Integer, ForeignKey('work.id'), primary_key=True)
    worker_id = Column(Integer, ForeignKey('worker.id'), primary_key=True)
    type = Column(SmallInteger, nullable=True)
Nonetheless, how can I take advantage of backref and an alternative join condition to build these relationships directly, so that each Work object can retrieve and modify its corresponding Worker(s) via distinct attributes? For example:
work = session.query(Work).get(1)
work.name
>>> 'A Dream of The Red Mansions'
work.composers
>>> [<Worker('Xueqin Cao')>]
work.translators
>>> [<Worker('Xianyi Yang')>, <Worker('Naidie Dai')>]
Vice versa:
worker = session.query(Worker).get(1)
worker.name
>>> 'Xueqin Cao'
worker.composed
>>> [<Work('A Dream of The Red Mansions')>]
worker.translated
>>> []
Adding secondaryjoin directly without specifying secondary does not seem feasible; besides, SQLAlchemy's docs note that:
When using the association object pattern, it is advisable that the association-mapped table not be used as the secondary argument on a relationship() elsewhere, unless that relationship() contains the option viewonly=True. SQLAlchemy otherwise may attempt to emit redundant INSERT and DELETE statements on the same table, if similar state is detected on the related attribute as well as the associated object.
Then, is there some way to build these relations elegantly and readily?
There are three general ways to go here.
One is, do a "vanilla" setup where you have "work"/"workers" set up without distinguishing on "type" - then, use relationship() for "composer", "composed", "translator", "translated" by using "secondary" to Assignment.__table__ along with custom join conditions, as well as viewonly=True. So you'd do writes via the vanilla properties only. A disadvantage here is that there's no immediate synchronization between the "vanilla" and "specific" collections.
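A rough sketch of what this first option could look like (my own illustration of the idea; it assumes Base, Worker, and an Assignment class mapped to an 'assignment' table as in the full example further down):
class Work(Base):
    __tablename__ = 'work'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    # the "vanilla" collection of association objects; writes go through this
    assignments = relationship("Assignment", backref="work")
    # filtered, read-only view; viewonly=True avoids the redundant
    # INSERT/DELETE problem the quoted docs warn about
    composers = relationship(
        "Worker",
        secondary="assignment",
        primaryjoin="Work.id == Assignment.work_id",
        secondaryjoin="and_(Worker.id == Assignment.worker_id, "
                      "Assignment.type == 'composer')",
        viewonly=True,
    )
    # translators would follow the same pattern with type == 'translator'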
Another is, same with the "vanilla" setup, but just use plain Python descriptors to give "composer", "composed", "translator", "translated" views in memory, that is, [obj.worker for obj in self.workers if obj.type == 'composer']. This is the simplest way to go. Whatever you put in the "vanilla" collections shows right up in the "filtered" collection, the SQL is simple, and there's fewer SELECT statements in play (one per Worker/Work instead of N per Worker/Work).
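A minimal sketch of this second option (again my illustration; the inline snippet above calls the vanilla collection self.workers, here it is named assignments and holds Assignment objects):
class Work(Base):
    __tablename__ = 'work'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    assignments = relationship("Assignment", backref="work")

    @property
    def composers(self):
        # plain in-memory filtering; always in sync with .assignments
        return [a.worker for a in self.assignments if a.type == 'composer']

    @property
    def translators(self):
        return [a.worker for a in self.assignments if a.type == 'translator']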
Finally, the approach that's closest to what you're asking, with primary joins and backrefs, but note with the association object, the backrefs are between Work/Assignment and Assignment/Worker, but not between Work/Worker directly. This approach probably winds up using more SQL to get at the results but is the most complete, and also has the nifty feature that the "type" is written automatically. We're also using a "one way backref", as Assignment doesn't have a simple way of relating back outwards (there's ways to do it but it would be tedious). Using a Python function to automate creation of the relationships reduces the boilerplate, and note here I'm using a string for "type", this can be an integer if you add more arguments to the system:
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.associationproxy import association_proxy
Base = declarative_base()
def _work_assignment(name):
    assign_ = relationship("Assignment",
        primaryjoin="and_(Assignment.work_id==Work.id, "
                    "Assignment.type=='%s')" % name,
        back_populates="work", cascade="all, delete-orphan")
    assoc = association_proxy("%s_assign" % name, "worker",
                creator=lambda worker: Assignment(worker=worker, type=name))
    return assoc, assign_

def _worker_assignment(name):
    assign_ = relationship("Assignment",
        primaryjoin="and_(Assignment.worker_id==Worker.id, "
                    "Assignment.type=='%s')" % name,
        back_populates="worker", cascade="all, delete-orphan")
    assoc = association_proxy("%s_assign" % name, "work",
                creator=lambda work: Assignment(work=work, type=name))
    return assoc, assign_

class Work(Base):
    __tablename__ = 'work'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)
    composers, composer_assign = _work_assignment("composer")
    translators, translator_assign = _work_assignment("translator")

class Worker(Base):
    __tablename__ = 'worker'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    description = Column(Text)
    composed, composer_assign = _worker_assignment("composer")
    translated, translator_assign = _worker_assignment("translator")

class Assignment(Base):
    __tablename__ = 'assignment'
    work_id = Column(Integer, ForeignKey('work.id'), primary_key=True)
    worker_id = Column(Integer, ForeignKey('worker.id'), primary_key=True)
    type = Column(String, nullable=False)
    worker = relationship("Worker")
    work = relationship("Work")
e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
session = Session(e)
ww1, ww2, ww3 = Worker(name='Xueqin Cao'), Worker(name='Xianyi Yang'), Worker(name='Naidie Dai')
w1 = Work(name='A Dream of The Red Mansions')
w1.composers.append(ww1)
w1.translators.extend([ww2, ww3])
session.add(w1)
session.commit()
work = session.query(Work).get(1)
assert work.name == 'A Dream of The Red Mansions'
assert work.composers == [ww1]
assert work.translators == [ww2, ww3]
worker = session.query(Worker).get(ww1.id)
assert worker.name == 'Xueqin Cao'
assert worker.composed == [work]
assert worker.translated == []
worker.composed[:] = []
# either do this...
session.expire(work, ['composer_assign'])
# or this....basically need composer_assign to reload
# session.commit()
assert work.composers == []
I have the code below. I recently added the root_id column. The goal of that is to let me determine whether a File belongs to a particular Project without having to add a project_id FK into File (which would result in a model cycle). Thus, I want to be able to compare Project.directory to File.root; if they are equal, the File belongs to the Project.
However, the File.root attribute is not being autogenerated for File. My understanding is that defining a FK foo_id referencing table Foo implicitly creates a foo attribute to which you can assign a Foo object. Then, upon session flush, foo_id is properly set to the id of the assigned object. In the snippet below that is clearly being done for Project.directory, but why not for File.root?
It definitely seems like it has to do with either 1) the fact that root_id is a self-referential FK or 2) the fact that there are several self-referential FKs in File and SQLAlchemy gets confused.
Things I've tried:
Defining a 'root' relationship() - I think this is wrong, since this should not be represented by a join.
Defining a 'root' column_property() - this allows read access to an already-set root_id, but assignments to it are not reflected back to root_id.
How can I do what I'm trying to do? Thanks!
from sqlalchemy import create_engine, Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import backref, relationship, scoped_session, sessionmaker, column_property
Base = declarative_base()
engine = create_engine('sqlite:///:memory:', echo=True)
Session = scoped_session(sessionmaker(bind=engine))
class Project(Base):
    __tablename__ = 'projects'
    id = Column(Integer, primary_key=True)
    directory_id = Column(Integer, ForeignKey('files.id'))

class File(Base):
    __tablename__ = 'files'
    id = Column(Integer, primary_key=True)
    path = Column(String)
    parent_id = Column(Integer, ForeignKey('files.id'))
    root_id = Column(Integer, ForeignKey('files.id'))
    children = relationship('File', primaryjoin=id==parent_id,
                            backref=backref('parent', remote_side=id), cascade='all')
Base.metadata.create_all(engine)
p = Project()
root = File()
root.path = ''
p.directory = root
f1 = File()
f1.path = 'test.txt'
f1.parent = root
f1.root = root
Session.add(f1)
Session.add(root)
Session.flush()
# do this otherwise f1 will be returned when calculating rf1
Session.expunge(f1)
rf1 = Session.query(File).filter(File.path == 'test.txt').one()
# this property does not exist
print rf1.root
My understanding is that defining a FK foo_id into table Foo implicit creates a foo attribute to which you can assign a Foo object.
No, it doesn't. In the snippet, it just looks like it is being done for Project.directory, but if you look at the SQL statements being echoed, there is no INSERT at all for the projects table.
So, for it to work, you need to add these two relationships:
class Project(Base):
    ...
    directory = relationship('File', backref='projects')

class File(Base):
    ...
    root = relationship('File', primaryjoin='File.id == File.root_id', remote_side=id)
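With those two relationships in place, the tail of the original snippet should resolve rf1.root as expected. A quick sketch of how it plays out (my illustration, using the same in-memory SQLite setup as above):
p = Project()
root = File()
root.path = ''
p.directory = root   # directory_id is populated at flush time

f1 = File()
f1.path = 'test.txt'
f1.parent = root
f1.root = root       # root_id is populated at flush time

Session.add_all([p, root, f1])
Session.flush()

rf1 = Session.query(File).filter(File.path == 'test.txt').one()
print(rf1.root is root)   # True - File.root now behaves like Project.directory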