Django unique_together not working - mysql

I can't get Django (1.5) to create a MySQL UNIQUE index across 3 columns, even though I've followed every suggestion I found on SO. Here's what my model looks like:
class Loc(models.Model):
    rand = models.IntegerField()
    sectiune = models.ForeignKey(Sectiune)
    numar = models.IntegerField()
    pret = models.FloatField()

    def __unicode__(self):
        return str(self.sectiune.nume) + ': R' + str(self.rand) + ' L' + str(self.numar)

    class Meta:
        unique_together = (("rand", "sectiune", "numar"),)
I really don't get what's wrong. I've seen a bug report that unique_together doesn't work on foreign keys, but I've also seen that it has since been fixed. Any help?

Turns out Django is not that smart after all... It doesn't know how to ALTER a table to create the UNIQUE constraint. I just had to delete the tables, run syncdb again, and the constraints were there :)
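If dropping the tables isn't an option (e.g. they already hold data), the same constraint can be added by hand from a one-off script or manage.py shell. A minimal sketch, assuming a hypothetical app label myapp (so Django's default table name is myapp_loc) and the sectiune_id column Django creates for the ForeignKey:

from django.db import connection

# Table and constraint names are assumptions; adjust myapp_loc to
# <app_label>_loc for your project.
cursor = connection.cursor()
cursor.execute(
    "ALTER TABLE myapp_loc "
    "ADD CONSTRAINT loc_rand_sectiune_numar_uniq "
    "UNIQUE (rand, sectiune_id, numar)"
)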

Related

When do you actually use multiple foreign keys?

I want to create tables where one has multiple foreign keys to the other, and then define relationships between the two using the foreign_keys argument.
class Trial(SurrogatePK, Model):
    __tablename__ = 'trials'
    challenges = relationship('Challenge', foreign_keys='[Challenge.winner_id, '
                                                        'Challenge.loser_id]')

class Challenge(SurrogatePK, Model):
    __tablename__ = 'challenges'
    winner_id = reference_col('trials')
    winner = relationship('Trial', back_populates='challenges',
                          foreign_keys=winner_id)
    loser_id = reference_col('trials')
    loser = relationship('Trial', back_populates='challenges',
                         foreign_keys=loser_id)
This doesn't work because SQLAlchemy sees two foreign keys and raises an error about the ambiguity.
The way to make it work is with a primaryjoin:
class Trial(SurrogatePK, Model):
    __tablename__ = 'trials'
    challenges = relationship('Challenge', primaryjoin=
                              'or_(Trial.id==Challenge.winner_id,'
                              'Trial.id==Challenge.loser_id)')
Now, the thing I want to ask is: when should I actually use multiple foreign keys in the foreign_keys argument? It's got to be plural for a reason, right?
In the entire documentation I can't find a single case where multiple foreign keys are actually passed to it.
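For what it's worth, the main case I know of where foreign_keys genuinely holds several columns is a composite foreign key, i.e. one logical foreign key made up of multiple columns (joins like yours that span two separate single-column FKs are the other place it shows up, alongside a primaryjoin). A minimal sketch with made-up Invoice/InvoiceLine models, not taken from the question, assuming SQLAlchemy 1.4+ imports:

from sqlalchemy import Column, Integer, ForeignKeyConstraint
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Invoice(Base):
    __tablename__ = 'invoices'
    company_id = Column(Integer, primary_key=True)
    invoice_no = Column(Integer, primary_key=True)
    lines = relationship('InvoiceLine', back_populates='invoice')

class InvoiceLine(Base):
    __tablename__ = 'invoice_lines'
    id = Column(Integer, primary_key=True)
    company_id = Column(Integer)
    invoice_no = Column(Integer)
    __table_args__ = (
        ForeignKeyConstraint(
            ['company_id', 'invoice_no'],
            ['invoices.company_id', 'invoices.invoice_no']),
    )
    # Both columns together form one composite FK, so both are listed in
    # foreign_keys (here SQLAlchemy could also infer them, so this is
    # purely illustrative).
    invoice = relationship('Invoice', back_populates='lines',
                           foreign_keys=[company_id, invoice_no])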

django model many to many not working with charfield as primary key

I'm trying to set up a many-to-many relationship between my models, where one table has a CharField as its primary key.
manage.py makemigrations works just fine, but running manage.py migrate crashes with:
django.db.utils.OperationalError: (3780, "Referencing column 'articleCategory_id' and referenced column 'ArticleCategory_id' in foreign key constraint 'item_overview_articl_articleCategory_id_da76142b_fk_item_over' are incompatible.")
Here is model 1:
class Advertisement(models.Model):
    advertisement_id = models.IntegerField(primary_key=True)
    advertisement_startDate = models.DateTimeField()
    advertisement_endDate = models.DateTimeField()
    advertisement_isDefault = models.BooleanField()
    advertisement_image = models.ImageField(upload_to='banners/')
    advertisement_categories = models.ManyToManyField(ArticleCategory, related_name='ArticleAdvertisements')
and model 2
class ArticleCategory(models.Model):
    ArticleCategory_id = models.CharField(primary_key=True, max_length=10)
    ArticleCategory_value = models.CharField(max_length=50)
    ArticleCategory_parent = models.ForeignKey("self", on_delete=models.PROTECT, null=True)
    ArticleCategory_type = models.ForeignKey(ArticleType, on_delete=models.PROTECT, null=False)
Thanks in advance for the help! I've been stuck on this for way too long now :)
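For what it's worth: with a CharField primary key on MySQL, error 3780 ("incompatible" referencing/referenced columns) is often a character-set or collation mismatch between the existing ArticleCategory table and the freshly created M2M through table. That is only a guess at what is happening here; if it applies, aligning the two before re-running the migration usually helps. A rough sketch (the table name is assumed from Django's default naming plus the item_overview app label visible in the error, and utf8mb4 is an assumed target):

from django.db import connection

# Assumption: the existing table was created under a different
# charset/collation than the new through table, so the FK columns don't match.
cursor = connection.cursor()
cursor.execute(
    "ALTER TABLE item_overview_articlecategory "
    "CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci"
)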

Django: How to deal with model subclassing and IntegrityErrors?

I have this in my models.py
class Page(models.Model):
    # fields
    pass

class News(Page):
    # no fields
    pass

class NewsComment(models.Model):
    news = models.ForeignKey(News)
    name = models.CharField(max_length=128)
    email = models.EmailField(max_length=75)
    comment = models.TextField()
Every time I try this:
page = get_object_or_404(News, id=page_id)
and then
comment, created = NewsComment.objects.get_or_create(news=page, name=name, email=email, comment=text)
I get this error:
(1452, 'Cannot add or update a child row: a foreign key constraint fails (myproject_db.main_newscomment, CONSTRAINT news_id_refs_page_ptr_id_5a5b8a6204eece43 FOREIGN KEY (news_id) REFERENCES main_news (page_ptr_id))')
What am I doing wrong?
(PS: I am using MySQL with InnoDB storage engine)
If the News model adds no fields, you should implement the inheritance using a proxy model. It leads to a much simpler database schema and much simpler, faster (!) queries, and it eliminates most of the problems caused by how model inheritance is implemented at the database level.
class Page(models.Model):
    # fields
    pass

class News(Page):
    class Meta:
        proxy = True
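One caveat with the proxy approach: a proxy model shares the parent's table, so News.objects would return every Page unless something distinguishes news pages. A rough sketch, assuming a hypothetical kind field on Page (not in the original models):

class NewsManager(models.Manager):
    def get_queryset(self):  # get_query_set() on Django < 1.6
        return super(NewsManager, self).get_queryset().filter(kind='news')

class News(Page):
    objects = NewsManager()

    class Meta:
        proxy = True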

How to create postgresql's sequences in Alembic

I'm using Alembic to maintain my tables, and at the same time I update my models using the declarative way.
This is one of the tables in my Alembic migration:
op.create_table(
    'groups',
    Column('id', Integer, Sequence('group_id_seq'), primary_key=True),
    Column('name', Unicode(50)),
    Column('description', Unicode(250)),
)
And the model looks like the following:
class Group(Base):
    __tablename__ = 'groups'
    id = Column(Integer, Sequence('group_id_seq'), primary_key=True)
    name = Column(Unicode(50))
    description = Column(Unicode(250))

    def __init__(self, name, description):
        self.description = description
        self.name = name
As you can see, I'm using the Sequence in both the Alembic migration and the declarative model.
But I have noticed that when using PostgreSQL (v9.1) no sequences are created by Alembic, so the model fails to create instances since it relies on the nextval(<sequence name>) clause.
So, how can I write my Alembic migrations so that the sequences are actually generated in PostgreSQL?
Just add the following to your model:
field_seq = Sequence('groups_field_seq')
field = Column(Integer, field_seq, server_default=field_seq.next_value())
And add the following to your migration file (before creating the table):
from sqlalchemy.schema import Sequence, CreateSequence
op.execute(CreateSequence(Sequence('groups_field_seq')))
Found a hint at https://bitbucket.org/zzzeek/alembic/issue/60/autogenerate-for-sequences-as-well-as#comment-4100402
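Putting the two snippets together, a complete revision might look roughly like this (a sketch reusing the group_id_seq/groups names from the question; DropSequence mirrors CreateSequence for the downgrade path):

from alembic import op
from sqlalchemy import Column, Integer, Unicode
from sqlalchemy.schema import Sequence, CreateSequence, DropSequence

def upgrade():
    # Create the sequence first, then the table that defaults to it.
    op.execute(CreateSequence(Sequence('group_id_seq')))
    op.create_table(
        'groups',
        Column('id', Integer, Sequence('group_id_seq'),
               server_default=Sequence('group_id_seq').next_value(),
               primary_key=True),
        Column('name', Unicode(50)),
        Column('description', Unicode(250)),
    )

def downgrade():
    op.drop_table('groups')
    op.execute(DropSequence(Sequence('group_id_seq')))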
Following the CreateSequence found in the previous link, I still have to jump through several hoops to make my migrations work on both SQLite and PostgreSQL. Currently I have:
def dialect_supports_sequences():
    return op._proxy.migration_context.dialect.supports_sequences

def create_seq(name):
    if dialect_supports_sequences():
        op.execute(CreateSequence(Sequence(name)))
And then I call create_seq whenever I need it.
Is this the best practice?
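If reaching into op._proxy feels fragile, the same check appears to be available through public API; a sketch assuming a reasonably recent Alembic, where op.get_context() returns the active MigrationContext:

from alembic import op
from sqlalchemy.schema import Sequence, CreateSequence

def create_seq(name):
    # The dialect on the migration context knows whether sequences are supported.
    if op.get_context().dialect.supports_sequences:
        op.execute(CreateSequence(Sequence(name)))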
Not sure if I got your question right but as nobody else chose to answer, here is how I get perfectly normal ids:
Alembic:
op.create_table('event',
    sa.Column('id', sa.INTEGER(), autoincrement=True, nullable=False),
The class:
class Event(SQLADeclarativeBase):
    __tablename__ = 'event'
    id = Column(Integer, primary_key=True)
I ran into this same issue recently, and here is how I solved it.
op.execute("create sequence SEQUENCE_NAME")
I ran the above command inside the upgrade function; for the downgrade, run the code below inside the downgrade function.
op.execute("drop sequence SEQUENCE_NAME")

Frequently updating one field of a django model

Imagine I have a News model with many text fields:
class News(models.Model):
    title = models.CharField(max_length=255)
    subtitle = models.CharField(max_length=255, blank=True)
    lead = models.TextField(max_length=4096)
    content = models.TextField()
    ...
    last_visited = models.DateTimeField()
Every time my News object is displayed, I update the last_visited field:
news.last_visited = datetime.datetime.now()
news.save()
This code makes Django overwrite all the model's fields:
UPDATE news SET title='...', subtitle='...', last_visited = '...' WHERE id = '...';
Instead of just one:
UPDATE news SET last_visited = '...' WHERE id = '...';
I'm worried about how bad this is and whether it's worth thinking about.
The Django documentation offers a queryset update(), but it doesn't look very elegant:
def upd(obj, **kwargs):
    obj.__class__._default_manager.filter(pk=obj.pk).update(**kwargs)

upd(news, last_visited=datetime.datetime.now())
I use the MySQL backend.
Use update(), but with a cleaner approach:
class News(models.Model):
    def update_visited(self):
        News.objects.filter(pk=self.pk).update(
            last_visited=datetime.datetime.now())
I think using the queryset update() is good. It removes the possibility of accidentally overwriting changes to other fields.
I know you're worried that it looks inelegant, but you only have to write it once in your upd function, then use upd in your views.
Supposing you want to use this on more than one model (guessing this because you pass obj to your upd function), it would probably make sense to have some base class that implements the last_visited field, with your News class inheriting from it; then you could define the update just on the base class. Another possibility would be putting the last_visited information into a separate model that references the News model through either a ForeignKey or a GenericForeignKey (in case you want to keep a 'history' for different models).
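A minimal sketch of the base-class idea from the previous paragraph, assuming an abstract model is acceptable (the VisitTrackingModel name is made up):

import datetime

from django.db import models

class VisitTrackingModel(models.Model):
    last_visited = models.DateTimeField(null=True, blank=True)

    class Meta:
        abstract = True

    def update_visited(self):
        # Single-column UPDATE, same idea as the answers above.
        type(self)._default_manager.filter(pk=self.pk).update(
            last_visited=datetime.datetime.now())

class News(VisitTrackingModel):
    title = models.CharField(max_length=255)
    # ... other fields from the question ...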