MySQL + Django: No obj.id after save()

Using Django and MySQL. I save the model but no id appears on the instance. I've seen some similar issues with PostgreSQL and with BigIntegerFields, but neither of those seems to apply here. Any ideas? The row does receive a primary key in the id column in the database via MySQL auto-increment.
Thanks!
class Client(models.Model):
    id = models.IntegerField(primary_key=True)
    first_name = models.TextField(null=True, blank=True)
    last_name = models.TextField(blank=True)
>>> client = models.Client(last_name="Last", first_name="First")
>>> client.last_name
'Last'
>>> client.save()
>>> client.id
>>> client.last_name
'Last'
>>> client.id
>>> client.pk
>>>
And in the database:
id first_name last_name
------------------------------------------
14 First Last

As Rebus says in the comments, you don't need to define the primary key explicitly. But if you do, you must make sure it is an auto-increment field - i.e. AutoField - not a basic IntegerField as you have. The way you have it, Django has no way to get a new ID back from the database, which is why it's blank after save.

System-generated database keys must be assigned by the database itself - otherwise you can't safely have multiple clients inserting rows concurrently.

Related

Django MySQL UUID

I had a django model field which was working in the default sqlite db:
uuid = models.TextField(default=uuid.uuid4, editable=False, unique=True).
However, when I tried to migrate to MySQL, I got the error:
django.db.utils.OperationalError: (1170, "BLOB/TEXT column 'uuid' used in key specification without a key length")
The first thing I tried was removing the unique=True, but I got the same error. Next, since I had another field (which migrated successfully):
id = models.UUIDField(default=uuid.uuid4, editable=False)
I tried changing uuid to UUIDField, but I still get the same error. Finally, I changed uuid to:
uuid = models.TextField(editable=False)
But I am still getting the same error when migrating (DROP all the tables, makemigrations, migrate --run-syncdb). Ideally, I want to have a UUIDField or TextField with default = uuid.uuid4, editable = False, and unique = True, but I am fine doing these tasks in the view when creating the object.
You need to give the column a bounded length so MySQL can index it for the unique constraint. Django's TextField always maps to a TEXT/BLOB column and does not apply max_length at the database level, so use CharField (which becomes VARCHAR) instead. Since the canonical string form of a UUID v4 is 36 characters, max_length=36 works (make sure you don't have longer values in the db already):
uuid = models.CharField(default=uuid.uuid4, editable=False, unique=True, max_length=36)
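A quick check of the 36-character figure (32 hex digits plus 4 hyphens in the canonical string form; for comparison, Django's UUIDField stores the 32-character hex form on databases without a native UUID type):

```python
import uuid

u = uuid.uuid4()
print(len(str(u)))  # canonical form with hyphens: 36 characters
print(len(u.hex))   # hex form without hyphens: 32 characters
```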

SQLAlchemy One To Many Updates Failing

Thank you in advance for your feedback on this. I've checked quite a few sources, including StackOverflow, and still haven't found a resolution that works for this use case.
I'm developing a CRUD API using SQLAlchemy and the goal is to pass in a Pydantic object and update the database to reflect whatever is in the Pydantic object (including lists of attributes). This seems to only be an issue when the "many" table can't have a composite primary key because one of the fields is nullable.
Below is a very simple example where we have organizations and the organizations can have partner-preferred alternative names or generic alternative names. Essentially, you would be able to say that the name of the organization is "Microsoft" but the New York Stock Exchange refers to it as "MSFT". Additionally, in the past, we've seen the incorrect version, "Micro-Soft".
# Create the classes
class Organization(Base):
    __tablename__ = 'organization'
    org_id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    org_name = Column(String(255))
    alt_names = relationship("AltOrgName", lazy=False)
    __table_args__ = {'schema': 'dbo'}

class AltOrgName(Base):
    __tablename__ = 'alt_org_name'
    alt_org_name_id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    org_id = Column(UUID(as_uuid=True), ForeignKey(Organization.org_id))
    alt_org_name = Column(String(255))
    partner = Column(String(10))  # <----- Nullable, so it can't be part of the primary key
    __table_args__ = (UniqueConstraint(org_id, alt_org_name, partner), {'schema': 'dbo'})
From there you can create an organization with an alternative name:
# Create the organization
o1 = Organization(org_name="Microsoft",
                  alt_names=[AltOrgName(alt_org_name="MSFT", partner="New York Stock Exchange")])
db.add(o1)
db.commit()
Now I would like to fully overwrite the former version of the alternative names. In practice, this would be constructed from a Pydantic model, but this is functionally similar enough:
for an in o1.alt_names:
    db.delete(an)
o1.alt_names = []
o1.alt_names.append(AltOrgName(alt_org_name="MSFT", partner="New York Stock Exchange"))
o1.alt_names.append(AltOrgName(alt_org_name="Micro-Soft", partner=None))
db.commit()
This leads to an integrity error:
IntegrityError: (pyodbc.IntegrityError) ('23000', "[23000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Violation of UNIQUE KEY constraint 'UQ__alt_org___89413C3E3CD4661D'. Cannot insert duplicate key in object 'dbo.alt_org_name'. The duplicate key value is (c8e907b3-fc71-40de-8cf1-0806e2ebbce6, MSFT, New York Stock Exchange). (2627) (SQLExecDirectW); [23000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The statement has been terminated. (3621)")
[SQL: INSERT INTO dbo.alt_org_name (alt_org_name_id, org_id, alt_org_name, partner) VALUES (?, ?, ?, ?)]
[parameters: ('d828928d-ba4a-49e9-989e-44fda8027de5', 'c8e907b3-fc71-40de-8cf1-0806e2ebbce6', 'MSFT', 'New York Stock Exchange')]
However, committing after the delete works.
for an in o1.alt_names:
    db.delete(an)
db.commit()  # <---------- New line of code
o1.alt_names = []
o1.alt_names.append(AltOrgName(alt_org_name="MSFT", partner="New York Stock Exchange"))
o1.alt_names.append(AltOrgName(alt_org_name="Micro-Soft", partner=None))
db.commit()
Having two commits is an extremely undesirable code pattern since it creates the risk that the delete commit will work but the revision commit won't--which will ultimately wipe out all alternative names for the company. For instance, the code below would result in an integrity error and wipe out all alternative names for the organization without correctly replacing them:
for an in o1.alt_names:
    db.delete(an)
db.commit()
o1.alt_names = []
o1.alt_names.append(AltOrgName(alt_org_name="MSFT", partner="New York Stock Exchange"))
o1.alt_names.append(AltOrgName(alt_org_name="Micro-Soft", partner=None))
o1.alt_names.append(AltOrgName(alt_org_name="Micro-Soft", partner=None))  # <------------- Duplicate
db.commit()
The issue definitely seems to be sqlalchemy's order of operations. If it performed the delete command first, there would be no issue. However, for some reason, it seems like the old alternative names are somehow still in the database at the time it attempts to insert the new ones. One thought is to somehow tell SQLAlchemy to perform the delete before attempting the inserts.
The other resolution is to make the primary key of the alt_org_name table (org_id, alt_org_name, partner) and make the null value for partner an empty string. However, I have a very similar situation where one of the nullable fields is a number so I would prefer a more robust solution if one exists.
Grateful for any ideas or suggestions.
EDIT
I got a response back from the SQLAlchemy team, and adding db.flush() seems to fix it:
for an in o1.alt_names:
    db.delete(an)
db.flush()  # <---- New line of code
o1.alt_names = []
o1.alt_names.append(AltOrgName(alt_org_name="MSFT", partner="New York Stock Exchange"))
o1.alt_names.append(AltOrgName(alt_org_name="Micro-Soft", partner=None))
db.commit()
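For reference, here is a self-contained version of that flush() fix, runnable against in-memory SQLite. The UUID column type and the dbo schema in the original are SQL Server specific, so plain 36-character string keys stand in for them here, and the imports assume SQLAlchemy 1.4+:

```python
import uuid
from sqlalchemy import Column, String, ForeignKey, UniqueConstraint, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Organization(Base):
    __tablename__ = "organization"
    org_id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
    org_name = Column(String(255))
    alt_names = relationship("AltOrgName", lazy=False)

class AltOrgName(Base):
    __tablename__ = "alt_org_name"
    alt_org_name_id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
    org_id = Column(String(36), ForeignKey(Organization.org_id))
    alt_org_name = Column(String(255))
    partner = Column(String(10))  # nullable
    __table_args__ = (UniqueConstraint(org_id, alt_org_name, partner),)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
db = Session(engine)

o1 = Organization(
    org_name="Microsoft",
    alt_names=[AltOrgName(alt_org_name="MSFT", partner="New York Stock Exchange")],
)
db.add(o1)
db.commit()

# Delete the old children and flush, so the DELETEs hit the database
# before the replacement INSERTs are emitted.
for an in list(o1.alt_names):
    db.delete(an)
db.flush()

o1.alt_names = []
o1.alt_names.append(AltOrgName(alt_org_name="MSFT", partner="New York Stock Exchange"))
o1.alt_names.append(AltOrgName(alt_org_name="Micro-Soft", partner=None))
db.commit()

print(len(db.query(AltOrgName).all()))  # 2
```

Because the flush and the commit happen inside the same transaction, a failure in the second half rolls back the deletes as well, avoiding the wiped-out-names risk of the two-commit version.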

Is there a specific ordering needed for classes in Peewee models?

I'm currently trying to create an ORM model in Peewee for an application. However, I seem to be running into an issue when querying a specific model. After some debugging, I found out that whatever is defined below a specific model fails.
I've moved the models around (keeping the ForeignKeys intact), and for some odd reason it's only whatever is below a specific class (User).
def get_user(user_id):
    user = User.select().where(User.id == user_id).get()
    return user

class BaseModel(pw.Model):
    """A base model that will use our MySQL database"""
    class Meta:
        database = db

class User(BaseModel):
    id = pw.AutoField()
    steam_id = pw.CharField(max_length=40, unique=True)
    name = pw.CharField(max_length=40)
    admin = pw.BooleanField(default=False)
    super_admin = pw.BooleanField()
    # ...
I expected to be able to query Season like every other model. However, when I try querying for a User.id of 1 (i.e. User.select().where(User.id == 1).get() or get_user(1)), this is the Peewee error I run into, with the value not even substituted into the query:
UserDoesNotExist: <Model: User> instance matching query does not exist:
SQL: SELECT `t1`.`id`, `t1`.`steam_id`, `t1`.`name`, `t1`.`admin`, `t1`.`super_admin` FROM `user` AS `t1` WHERE %s LIMIT %s OFFSET %s
Params: [False, 1, 0]
Does anyone have a clue as to why I'm getting this error?
Read the error message. It is telling you that the user with the given ID does not exist.
Peewee raises an exception if the call to .get() does not match any rows. If you want "get or None if not found" you can do a couple things. Wrap the call to .get() with a try / except, or use get_or_none().
http://docs.peewee-orm.com/en/latest/peewee/api.html#Model.get_or_none
Well, I think I figured it out. Instead of querying directly for the server ID, I just did User.get(1), as that seems to do the trick. More reading shows there's a get_by_id() as well.

What is the equivalent ORM query in Django for a SQL JOIN?

I have two Django models which have no relation to each other but have JID in common (I have not made it a foreign key):
class result(models.Model):
    rid = models.IntegerField(primary_key=True, db_column='RID')
    jid = models.IntegerField(null=True, db_column='JID', blank=True)
    test_case = models.CharField(max_length=135, blank=True)

class job(models.Model):
    jid = models.IntegerField(primary_key=True, db_column='JID')
    client_build = models.IntegerField(null=True, blank=True)
I want to achieve this sql query in ORM:
SELECT *
FROM result
JOIN job
ON job.JID = result.JID
Basically I want to join two tables and then perform a filter query on that table.
I am new to ORM and Django.
jobs = job.objects.filter(
    jid__in=result.objects.values('jid').distinct()
).select_related()
I don't know how to do that in the Django ORM, but here are my 2 cents:
any ORM makes 99% of your queries super easy to write (without any SQL). For the 1% left, you've got two options: understand the core of the ORM and add custom code, OR simply write pure SQL. I'd suggest you write the SQL query for it.
if both tables result and job have a JID, why won't you make it a foreign key? I find that odd.
a class name starts with an uppercase letter: class Result, class Job.
You can represent a foreign key in Django models by modifying your result class like this (note that on_delete is required from Django 2.0 onwards):
class result(models.Model):
    rid = models.IntegerField(primary_key=True, db_column='RID')
    # jid = models.IntegerField(null=True, db_column='JID', blank=True)
    job = models.ForeignKey(job, db_column='JID', blank=True, null=True,
                            related_name="results", on_delete=models.SET_NULL)
    test_case = models.CharField(max_length=135, blank=True)
(I've read somewhere that you need both blank=True and null=True to make a foreign key optional in Django: null=True makes the column nullable at the database level, while blank=True allows it to be left empty in forms and validation.)
Now you can access the job of a result simply by writing:
myresult.job # assuming myresult is an instance of class result
With the parameter related_name="results", Django automatically adds a reverse accessor to the class job, so you will be able to write:
myjob.results.all()
And obtain the results for the job myjob.
It does not mean it will necessarily be fetched by the Django ORM with a JOIN query (it will probably be a separate query instead), but the effect will be the same from your code's point of view (performance considerations aside).
You can find more information about models.ForeignKey in Django documentation.

How does SQLAlchemy handle a unique constraint in a table definition

I have a table with the following declarative definition:
class Type(Base):
    __tablename__ = 'Type'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

    def __init__(self, name):
        self.name = name
The column "name" has a unique constraint, but I'm able to do
type1 = Type('name1')
session.add(type1)

type2 = Type(type1.name)
session.add(type2)
So, as can be seen, the unique constraint is not checked at all, since I have added to the session 2 objects with the same name.
When I do session.commit(), I get a mysql error since the constraint is also in the mysql table.
Is it possible for SQLAlchemy to tell me in advance that I can't do this, or to detect it and not insert two entries with the same "name" column?
If not, should I keep all existing names in memory, so I can check whether they exist or not before creating the object?
SQLAlchemy doesn't handle uniqueness, because there is no good way to do it. Even if you keep track of created objects and/or check whether an object with that name exists, there is a race condition: anybody in another process can insert a new object with the name you just checked. The only solution is to lock the whole table before the check and release the lock after the insertion (some databases support such locking).
AFAIK, SQLAlchemy does not enforce uniqueness constraints in Python. Those "unique=True" declarations are only used to impose database-level table constraints, and only then if you create the table through SQLAlchemy, i.e.
Type.__table__.create(engine)
or some such. If you create an SA model against an existing table that does not actually have this constraint present, it will be as if it does not exist.
Depending on your specific use case, you'll probably have to use a pattern like
try:
    existing = session.query(Type).filter_by(name='name1').one()
    # do something with existing
except:
    newobj = Type('name1')
    session.add(newobj)
or a variant, or you'll just have to catch the mysql exception and recover from there.
From the docs
class MyClass(Base):
    __tablename__ = 'sometable'
    __table_args__ = (
        ForeignKeyConstraint(['id'], ['remote_table.id']),
        UniqueConstraint('foo'),
        {'autoload': True}
    )
.one() throws two kinds of exceptions:
sqlalchemy.orm.exc.NoResultFound and sqlalchemy.orm.exc.MultipleResultsFound
You should create the object when the first exception occurs; if the second occurs, you're screwed anyway and shouldn't make it worse.
from sqlalchemy.orm.exc import NoResultFound

try:
    existing = session.query(Type).filter_by(name='name1').one()
    # do something with existing
except NoResultFound:
    newobj = Type('name1')
    session.add(newobj)
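One further pattern worth noting (my own addition, not from the answers above): even SELECT-then-INSERT can race with another process, so the most robust option is to attempt the INSERT and recover from the resulting IntegrityError. A sketch against in-memory SQLite, with a hypothetical get_or_create helper; note that each call commits, so it isn't suitable in the middle of a larger transaction:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Type(Base):
    __tablename__ = "Type"
    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
db = Session(engine)

def get_or_create(session, name):
    """Hypothetical helper: try to insert; on a duplicate, fall back to a SELECT."""
    obj = Type(name=name)
    session.add(obj)
    try:
        session.commit()
        return obj
    except IntegrityError:
        session.rollback()  # discards the failed INSERT
        return session.query(Type).filter_by(name=name).one()

a = get_or_create(db, "name1")
b = get_or_create(db, "name1")
print(a.id == b.id)  # True: both calls refer to the same row
```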