I am using Flask and WhooshAlchemy to implement full-text search in a simple web application. The Post and User models are defined like this:
class Post(db.Model):
    __searchable__ = ['body']

    id = db.Column(db.Integer, primary_key=True)
    body = db.Column(db.Text)
    author_id = db.Column(db.Integer, db.ForeignKey('user.id'))


class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    posts = db.relationship('Post', backref='author', lazy='dynamic')


whooshalchemy.whoosh_index(app, Post)
In one of my views I check whether the current user is allowed to edit the post:
post = Post.query.get(pid)
if current_user != post.author:
    abort(403)
For some reason current_user and post.author are not the same object when whooshalchemy.whoosh_index(app, Post) is called. If I comment out that line, the ownership check works as expected.
Why is this happening? Does the WhooshAlchemy indexing create a copy of post.author that is different from what is loaded from the user table? What can I do to correct it?
The official whooshalchemy package has a bug; Miguel covered it here. Use his fixed version here and that should solve your problem.
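If switching packages is not an option, a possible workaround is to compare primary keys rather than object identity (a sketch; it assumes current_user proxies a User object with an id attribute):

post = Post.query.get(pid)
# Compare by primary key instead of object identity, since the two User
# instances may have been loaded by different sessions.
if current_user.id != post.author.id:
    abort(403)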
Related
I have a working Flask app with a few models. The User model is as follows:
class User(UserMixin, db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(64), index=True, unique=True)
    email = db.Column(db.String(128), index=True, unique=True)
    password_hash = db.Column(db.String(128))
    first_name = db.Column(db.String(32))
    last_name = db.Column(db.String(32))
    bio = db.Column(db.String(255))
    patterns = db.relationship('Pattern', backref='user', lazy='dynamic')
Now I want to add a new boolean column to the model. I'm using MySQL as the database. I have tried the following...
invited = db.Column(db.Boolean, default=0)
but when I run flask db migrate I get the following...
INFO [alembic.runtime.migration] Context impl MySQLImpl.
I also tried
from sqlalchemy import BOOLEAN # also with Boolean
...
invited = db.Column(BOOLEAN, default=0) # also with Boolean
but I get the same error. Reading the MySQL documentation, I found out that MySQL doesn't have a boolean type, only TINYINT. But reading this GitHub thread, I understand that the Boolean class will turn into TINYINT based on the dialect. So I did the following...
from sqlalchemy.dialects.mysql import BOOLEAN
and I still get the same error when running flask db migrate. It seems like Alembic can't see the changes in the model.
Is there a way to create a boolean field in MySQL using Flask-Migrate and Flask-SQLAlchemy?
Try the code below; I'm assuming Flask-Migrate is not recognizing db.Column(BOOLEAN, default=0).
a_boolean_field = db.Column(db.Boolean(), default=False)
I've just tested it; the above works and Flask-Migrate was able to detect it.
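For reference, the autogenerated migration should then contain something along these lines (a sketch; the 'user' table name is an assumption based on Flask-SQLAlchemy's default naming):

from alembic import op
import sqlalchemy as sa


def upgrade():
    # Alembic's autogenerate should detect the new column and emit an add_column
    op.add_column('user', sa.Column('a_boolean_field', sa.Boolean(), nullable=True))


def downgrade():
    op.drop_column('user', 'a_boolean_field')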
You can do it like so:
from sqlalchemy import Boolean, Column
invited = Column(Boolean, nullable=False, default=False)
This should work with MySQL.
I am trying to save a list of VLAN IDs per network port and also per network circuit. The list itself is something like this:
class ListOfVlanIds(Base):
    __tablename__ = 'listofvlanids'

    id = Column(Integer, primary_key=True)
    listofvlanids_name = Column('listofvlanids_name', String, nullable=True)
And then I have a Port:
class Port(Base):
    __tablename__ = 'ports'

    id = Column(Integer, primary_key=True)
    listofvlanids_id = Column('listofvlanids_id', ForeignKey('ListOfVlanIds.id'), nullable=True)
and a Circuit:
class Circuit(Base):
    __tablename__ = 'circuits'

    id = Column(Integer, primary_key=True)
    listofvlanids_id = Column('listofvlanids_id', ForeignKey('ListOfVlanIds.id'), nullable=True)
Running code like this results (for me) in a sqlalchemy.exc.NoReferencedTableError on the ForeignKey.
Searching for the error, I read that I should add a relationship back from the list. I haven't found a way (or an example) to build this from both Port and Circuit. What am I missing?
Creating a separate list table for Ports and for Circuits just moves the problem downstream, since a VLAN ID is its own table... I'd love to be able to use the ORM instead of having to write (a lot of) SQL by hand.
ForeignKey expects a table and column name, not a model and attribute name, so it should be ForeignKey('listofvlanids.id').
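For reference, a minimal sketch of the corrected models (the relationship attribute names are my own choice):

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()


class ListOfVlanIds(Base):
    __tablename__ = 'listofvlanids'

    id = Column(Integer, primary_key=True)
    listofvlanids_name = Column(String, nullable=True)


class Port(Base):
    __tablename__ = 'ports'

    id = Column(Integer, primary_key=True)
    # ForeignKey takes the table name ('listofvlanids'), not the class name
    listofvlanids_id = Column(Integer, ForeignKey('listofvlanids.id'), nullable=True)
    vlan_list = relationship('ListOfVlanIds', backref='ports')


class Circuit(Base):
    __tablename__ = 'circuits'

    id = Column(Integer, primary_key=True)
    listofvlanids_id = Column(Integer, ForeignKey('listofvlanids.id'), nullable=True)
    vlan_list = relationship('ListOfVlanIds', backref='circuits')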
I'm using Alembic to maintain my tables, and at the same time I update my models using the declarative approach.
This is one of the Alembic table definitions:
op.create_table(
    'groups',
    Column('id', Integer, Sequence('group_id_seq'), primary_key=True),
    Column('name', Unicode(50)),
    Column('description', Unicode(250)),
)
And the model looks like the following:
class Group(Base):
    __tablename__ = 'groups'

    id = Column(Integer, Sequence('group_id_seq'), primary_key=True)
    name = Column(Unicode(50))
    description = Column(Unicode(250))

    def __init__(self, name, description):
        self.description = description
        self.name = name
As you can see, I'm using the Sequence in both the Alembic migration and in the declarative model.
But I have noticed that when using PostgreSQL (v9.1) no sequences are created by Alembic, so the models fail to create instances, since they use the nextval(<sequence name>) clause.
So, how can I write my Alembic migrations so that the sequences are actually generated in PostgreSQL?
Just add the following to your model:
field_seq = Sequence('groups_field_seq')
field = Column(Integer, field_seq, server_default=field_seq.next_value())
And add the following to your migration file (before creating the table):
from sqlalchemy.schema import Sequence, CreateSequence
op.execute(CreateSequence(Sequence('groups_field_seq')))
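For completeness, the matching downgrade could drop the sequence again (a sketch, assuming the same sequence name):

from sqlalchemy.schema import Sequence, DropSequence

op.execute(DropSequence(Sequence('groups_field_seq')))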
Found a hint at https://bitbucket.org/zzzeek/alembic/issue/60/autogenerate-for-sequences-as-well-as#comment-4100402
Following the CreateSequence found in the previous link, I still have to jump through several hoops to make my migrations work in SQLite and PostgreSQL. Currently I have:
def dialect_supports_sequences():
    return op._proxy.migration_context.dialect.supports_sequences


def create_seq(name):
    if dialect_supports_sequences():
        op.execute(CreateSequence(Sequence(name)))
And then I call create_seq whenever I need it.
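For example, in a migration's upgrade() it might be used like this (a sketch that reuses the helper above and the table definition from the question, assuming the same imports in the migration file):

def upgrade():
    # Create the sequence only on dialects that support it, then the table
    create_seq('group_id_seq')
    op.create_table(
        'groups',
        Column('id', Integer, Sequence('group_id_seq'), primary_key=True),
        Column('name', Unicode(50)),
        Column('description', Unicode(250)),
    )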
Is this the best practice?
Not sure if I got your question right, but as nobody else chose to answer, here is how I get perfectly normal ids:
Alembic:
op.create_table('event',
    sa.Column('id', sa.INTEGER(), autoincrement=True, nullable=False),
    ...
)
The class:
class Event(SQLADeclarativeBase):
    __tablename__ = 'event'

    id = Column(Integer, primary_key=True)
I ran into this same issue recently and here is how I solved it:
op.execute("create sequence SEQUENCE_NAME")
I ran the above statement inside the upgrade function; for the downgrade, run the code below inside the downgrade function.
op.execute("drop sequence SEQUENCE_NAME")
Apologies for the awful title.
I'm setting up a website using Flask and SQLAlchemy. I'd like a list of tags available for all content types. I'm using sqlite3 for my development database.
After inputting data using the HTML form, everything except the tag is saved to the db. I'm not sure where the weak point(s) lie. I can't tell if I've got something wrong conceptually about how SQLAlchemy handles inheritance, passing arguments to a subclass, and/or the many-to-many relationship. I'd really appreciate any clarity on the subject or recommendations on how to improve the model.
Here's the code:
I have an association table for the Many-to-Many relationship between the tags and Content:
tagging_association = Table('tagging', Model.metadata,
    Column('content_id', Integer, ForeignKey('content.id')),
    Column('tag_id', Integer, ForeignKey('tags.id'))
)
I've set up a Content class:
class Content(Model):
    '''The base class for all content types.'''

    __tablename__ = 'content'

    id = Column(Integer, primary_key=True)
    tag = relationship('Tag', secondary='tagging', backref='content')
    type = Column(String(50))

    __mapper_args__ = {
        'polymorphic_identity': 'content',
        'polymorphic_on': type
    }
All content types are subclasses of Content, using SQLAlchemy's Joined Table Inheritance:
class Entry(Content):
    '''The database model for blog-like entries on the homepage.'''

    __tablename__ = 'entries'

    id = Column(Integer, ForeignKey('content.id'), primary_key=True)
    title = Column(String(200))
    body = Column(String)

    __mapper_args__ = {
        'polymorphic_identity': 'entries',
    }

    # Want to pass a single tag first, just to get it to work. Is this how I would do that?
    def __init__(self, title, body, *args, **kwargs):
        super(Entry, self).__init__(*args, **kwargs)
        self.title = title
        self.body = body
And the Tag class:
class Tag(Model):
    '''Tag database model.'''

    __tablename__ = 'tags'

    id = Column(Integer, primary_key=True)
    tag = Column(String(30), nullable=False, unique=True)

    def __init__(self, tag):
        self.tag = tag
Here's my WTForms class:
class EntryForm(Form):
    title = TextField('Title', validators=[Required()])
    body = TextAreaField('Body', validators=[Required()])
    tags = TextField('Tags')
    submit = SubmitField('Submit')
Here's where I take the form data and add it to the db:
@mod.route('/add_entry/', methods=['GET', 'POST'])
@requires_admin
def add_entry():
    form = EntryForm()
    if form.validate():
        entry = Entry(form.title(), form.body(), form.tags())
        form.populate_obj(entry)
        db_session.add(entry)
        db_session.commit()
        return redirect(url_for('general.index'))
    return render_template('general/add_entry.html', form=form)
If what you showed above is the actual code, you can see that your relationship is named tag (tag = relationship(...)), but in the EntryForm your TextField property is called tags. So your tag relationship is never set and therefore never saved; what is set on the tags field is simply ignored. I assume you just need to rename Content.tag to Content.tags.
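For clarity, the renamed relationship on Content would then be (same arguments as before):

tags = relationship('Tag', secondary='tagging', backref='content')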
This should answer the question of why it is not saved, but just renaming the field will not solve your problem. You need to write code that handles your Tags properly:
When tag text is assigned to a Content, you need to:
- check whether a Tag with that text already exists;
- if it does, load it;
- if it does not, create it;
- append the found or created Tag to Content.tags.
See the answer to Inserting data in Many to Many relationship in SQLAlchemy for a similar problem; it should give you an idea of what to do and how it could be done.
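A minimal sketch of that approach, assuming the relationship has been renamed to tags and that db_session, the models, and the form from the question are in scope (the get_or_create_tag helper and the comma-separated tag input are assumptions of mine):

def get_or_create_tag(session, tag_text):
    # Load the Tag if it already exists, otherwise create it.
    tag = session.query(Tag).filter_by(tag=tag_text).first()
    if tag is None:
        tag = Tag(tag_text)
        session.add(tag)
    return tag

# In the view, after the form validates:
entry = Entry(form.title.data, form.body.data)
for text in form.tags.data.split(','):
    entry.tags.append(get_or_create_tag(db_session, text.strip()))
db_session.add(entry)
db_session.commit()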
Imagine I have a News model with many text fields:
class News(models.Model):
    title = models.CharField(max_length=255)
    subtitle = models.CharField(max_length=255, blank=True)
    lead = models.TextField(max_length=4096)
    content = models.TextField()
    ...
    last_visited = models.DateTimeField()
Every time a News object is displayed, I update its last_visited field:
news.last_visited = datetime.datetime.now()
news.save()
This code makes Django overwrite all model fields:
UPDATE news SET title='...', subtitle='...', last_visited = '...' WHERE id = '...';
Instead of just one:
UPDATE news SET last_visited = '...' WHERE id = '...';
I'm wondering how bad this is and whether it's worth worrying about.
The Django documentation offers a queryset update, but it doesn't look very elegant:
def upd(obj, **kwargs):
    obj.__class__._default_manager.filter(pk=obj.pk).update(**kwargs)

upd(news, last_visited=datetime.datetime.now())
I use the MySQL backend.
Using update but with a cleaner approach:
class News(models.Model):
    def update_visited(self):
        News.objects.filter(pk=self.pk).update(
            last_visited=datetime.datetime.now())
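Then, wherever the object is rendered, the call is simply:

news.update_visited()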
I think using queryset update is good. It removes the possibility that you overwrite changes to other fields by accident.
I know you're worried that it looks inelegant, but you only have to write it once, in your upd function, and then call upd from your views.
Supposing you want to use this on more than one model (I'm guessing this because you pass obj to your upd function), it would probably make sense to have a base class that implements the last_visited field and have your News class inherit from it; then you could do the update just on the base class. Another possibility would be putting the last_visited information into a separate model and referencing the News model either through a ForeignKey or a GenericForeignKey (in case you want to keep a 'history' for different models).
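A minimal sketch of the abstract-base-class idea, assuming Django's ORM; the class name VisitedModel is made up for illustration:

import datetime

from django.db import models


class VisitedModel(models.Model):
    last_visited = models.DateTimeField(null=True)

    class Meta:
        abstract = True

    def update_visited(self):
        # Issues only: UPDATE ... SET last_visited = ... WHERE id = ...
        type(self)._default_manager.filter(pk=self.pk).update(
            last_visited=datetime.datetime.now())


class News(VisitedModel):
    title = models.CharField(max_length=255)
    # ... other fields as in the question ...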