Odd IntegrityError on MySQL: #1452

This is sort of an odd one but I'll try to explain as best I can. I have 2 models: one representing an email message (Message), the other a sales lead (AffiliateLead). When a form is submitted through the site, the system generates a lead and then sends an email message. The Message model has an optional FK back to the Lead. From the Message model's file:
lead = models.ForeignKey('tracking.AffiliateLead', blank=True, null=True)
Now, this basic shell works:
from tracking.models import Affiliate, AffiliateLead
from messages.models import Message
from django.contrib.auth.models import User
u = User.objects.get(username='testguy')
a = Affiliate.objects.get(affiliate_id = 'ACD023')
l = AffiliateLead(affiliate = a)
l.save()
m = Message(recipient=u, sender=u, subject='s', body='a', lead=l)
m.save()
However, the form view itself does not. It throws an IntegrityError when I try to save a Message that points to an AffiliateLead:
(1452, 'Cannot add or update a child row: a foreign key constraint fails (`app`.`messages_message`, CONSTRAINT `lead_id_refs_id_6bc546751c1f96` FOREIGN KEY (`lead_id`) REFERENCES `tracking_affiliatelead` (`id`))')
This is despite the fact that the view is simply taking the form, creating and saving the AffiliateLead, then creating and trying to save the Message. In fact, when this error is thrown, I can go into MySQL and see the newly-created lead. It even throws this error in the view when I re-retrieve the lead from the DB immediately before saving:
af_lead = AffiliateLead.objects.get(id = af_lead.id)
msg.lead = af_lead
msg.save()
Finally, if I immediately refresh (re-submitting the form), it works. No IntegrityError. If I have Django print out the SQL it's doing, I can indeed see that it is INSERTing the AffiliateLead before it tries to INSERT the Message, and the Message INSERT is using the correct AffiliateLead ID. I'm really stumped at this point. I've even tried manual transaction handling to no avail.

I'm not exactly sure why it happened, but I did seem to find a solution. I'm using South to manage the DB; it created Messages as InnoDB and AffiliateLead as MyISAM. Changing the AffiliateLead table to InnoDB ended the IntegrityErrors. Hope this helps someone else.
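For anyone hitting the same thing, here is a rough sketch of how to check and convert the table engine. It assumes the default Django database connection and uses the table names from the error message above; adjust for your own schema:

from django.db import connection

cursor = connection.cursor()
# See which storage engine each side of the foreign key is using
cursor.execute(
    "SELECT table_name, engine FROM information_schema.tables "
    "WHERE table_schema = DATABASE() "
    "AND table_name IN ('tracking_affiliatelead', 'messages_message')")
print(cursor.fetchall())
# MyISAM does not support foreign keys, so an InnoDB FK that references a
# MyISAM parent table cannot be enforced reliably; convert the parent to InnoDB
cursor.execute("ALTER TABLE tracking_affiliatelead ENGINE=InnoDB")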

Related

Still getting error message after `except` statement dealing with `ProgrammingError`

I am using Python to automatically populate a MySQL DB. The script that populates the DB (/path/to/pythonScript.py, quite long) is actually called by another Python script (example below), which works fine, and I have added a few statements that prevent me from inserting duplicated entries.
When I try to insert a duplicated entry with the script /path/to/pythonScript.py, I get (as expected):
ProgrammingError: 1061 (42000): Duplicate key name 'unique_index'
In order to deal with that, I want to wrap the call to the /path/to/pythonScript.py script in a try/except statement, as shown below:
import mysql.connector
from mysql.connector.errors import ProgrammingError

# Here I have already successfully connected to the DB, and populated it
try:
    get_ipython().system("ipython /path/to/pythonScript.py")  # this is the script that populates the DB. It does not allow the insertion of duplicated entries
except ProgrammingError:
    print("a warning message informing that I am trying to insert a duplicated entry")
When I call the script for the first time, everything goes well (after all, the DB was empty). But when I call the script for the second time (i.e. when I attempt to insert duplicated entries), I am still getting the same error: ProgrammingError: 1061 (42000): Duplicate key name 'unique_index'
I have found this documentation page where they show examples of how to handle errors, though there is no example specifically for ProgrammingError. In this other documentation page there is one example for ProgrammingError, though they skipped the imports section, and I am afraid I am missing something in the import (note that I don't get any error when I run `from mysql.connector.errors import ProgrammingError`).
You are running the other script as a distinct process. Exceptions only exist within the current process - FWIW, you could be running a shell script or a C-coded binary app instead; it would be just the same.
I kindly suggest you momentarily ditch IPython and take a couple of days doing the full official Python tutorial, paying particular attention to the parts about functions and modules. Then you may want to rewrite your first script to make it a proper module with proper functions (I assume from your question and example code that it's currently a plain script with everything at the top level - but I may be wrong of course ;) ), then rewrite your calling script to import functions from the first one and call them, as sketched below.
Also note that ProgrammingError can happen for a whole lot of reasons other than a duplicate key, so you MUST check the exception's error code to find out which exact error happened.
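A minimal sketch of what I mean (the populate_db module and the populate() function are hypothetical names for whatever your refactored pythonScript.py ends up exposing):

import mysql.connector
from mysql.connector import errorcode
from mysql.connector.errors import ProgrammingError

# Hypothetical import: populate() is the function you would get after
# refactoring /path/to/pythonScript.py into an importable module
from populate_db import populate

try:
    populate()  # runs in this process, so its exceptions reach this try/except
except ProgrammingError as exc:
    if exc.errno == errorcode.ER_DUP_KEYNAME:  # 1061: duplicate key name
        print("a warning message informing that I am trying to insert a duplicated entry")
    else:
        raise  # any other ProgrammingError is a different problem; re-raise it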

Is there a specific ordering needed for classes in Peewee models?

I'm currently trying to create an ORM model in Peewee for an application. However, I seem to be running into an issue when querying a specific model. After some debugging, I found out that whatever is defined below a specific model is what fails.
I've moved models around (with the given ForeignKeys still intact), and for some odd reason it's only what is below a specific class (User) that fails.
import peewee as pw

# db is the database instance (e.g. pw.MySQLDatabase) defined elsewhere

def get_user(user_id):
    user = User.select().where(User.id == user_id).get()
    return user

class BaseModel(pw.Model):
    """A base model that will use our MySQL database"""
    class Meta:
        database = db

class User(BaseModel):
    id = pw.AutoField()
    steam_id = pw.CharField(max_length=40, unique=True)
    name = pw.CharField(max_length=40)
    admin = pw.BooleanField(default=False)
    super_admin = pw.BooleanField()
#...
I expected to be able to query Season like every other model. However, this is the Peewee error I run into when I try querying for User.id of 1 (i.e. User.select().where(User.id==1).get() or get_user(1)); the error is returned with the value not even being filled in:
UserDoesNotExist: <Model: User> instance matching query does not exist:
SQL: SELECT `t1`.`id`, `t1`.`steam_id`, `t1`.`name`, `t1`.`admin`, `t1`.`super_admin` FROM `user` AS `t1` WHERE %s LIMIT %s OFFSET %s
Params: [False, 1, 0]
Does anyone have a clue as to why I'm getting this error?
Read the error message. It is telling you that the user with the given ID does not exist.
Peewee raises an exception if the call to .get() does not match any rows. If you want "get or None if not found", you can do a couple of things: wrap the call to .get() in a try/except, or use get_or_none() (see the sketch after the link below).
http://docs.peewee-orm.com/en/latest/peewee/api.html#Model.get_or_none
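A quick sketch of both options against the User model from the question:

# Option 1: wrap .get() in try/except and return None when no row matches
def get_user(user_id):
    try:
        return User.select().where(User.id == user_id).get()
    except User.DoesNotExist:
        return None

# Option 2: get_or_none() returns None instead of raising
user = User.get_or_none(User.id == 1)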
Well, I think I figured it out here. Instead of querying directly for the ID, I just did User.get(1), as that seems to do the trick. More reading shows there's a get_by_id() as well.

Django admin - model visible to superuser, not staff user

I am aware of syncdb and makemigrations, but we are restricted from doing that in the production environment.
We recently had a couple of tables created on production. As expected, the tables were not visible in the admin for any user.
After that, we had the two queries below executed manually against the production SQL database (I ran the migration on my local machine and used SHOW CREATE TABLE to fetch the raw SQL):
django_content_type
INSERT INTO django_content_type(name, app_label, model)
values ('linked_urls',"urls", 'linked_urls');
auth_permission
INSERT INTO auth_permission (name, content_type_id, codename)
values
('Can add linked_urls Table', (SELECT id FROM django_content_type where model='linked_urls' limit 1) ,'add_linked_urls'),
('Can change linked_urls Table', (SELECT id FROM django_content_type where model='linked_urls' limit 1) ,'change_linked_urls'),
('Can delete linked_urls Table', (SELECT id FROM django_content_type where model='linked_urls' limit 1) ,'delete_linked_urls');
Now this model is visible to the superuser, who is able to grant access to staff users as well, but staff users can't see it.
Is there any other table entry that needs to be added?
Or is there any other way to solve this problem without syncdb or migrations?
We recently had a couple of tables created on production.
I can read what you wrote there in two ways.
First way: you created tables with SQL statements, for which there are no corresponding models in Django. If this is the case, no amount of fiddling with content types and permissions will make Django suddenly use the tables. You need to create models for the tables. Maybe they'll be unmanaged, but they need to exist.
Second way: the corresponding models in Django do exist, and you just manually created tables for them, so that's not a problem. What I'd do in this case is run the following code (explanations follow after the code):
from django.contrib.contenttypes.management import update_contenttypes
from django.apps import apps as configured_apps
from django.contrib.auth.management import create_permissions

for app in configured_apps.get_app_configs():
    update_contenttypes(app, interactive=True, verbosity=0)

for app in configured_apps.get_app_configs():
    create_permissions(app, verbosity=0)
What the code above does is essentially the work that Django performs after it runs migrations. When a migration occurs, Django creates tables as needed; when it is done, it calls update_contenttypes, which scans the models defined in the project and adds to the django_content_type table whatever needs to be added. Then it calls create_permissions to update auth_permission with the add/change/delete permissions that need adding. I've used the code above to force permissions to be created early during a migration. It is useful if I have a data migration, for instance, that creates groups that need to refer to the new permissions.
So, finally I had a solution. I did a lot of debugging on Django, and apparently the function below (in django.contrib.auth.backends) does the job of providing permissions.
def _get_permissions(self, user_obj, obj, from_name):
    """
    Returns the permissions of `user_obj` from `from_name`. `from_name` can
    be either "group" or "user" to return permissions from
    `_get_group_permissions` or `_get_user_permissions` respectively.
    """
    if not user_obj.is_active or user_obj.is_anonymous() or obj is not None:
        return set()

    perm_cache_name = '_%s_perm_cache' % from_name
    if not hasattr(user_obj, perm_cache_name):
        if user_obj.is_superuser:
            perms = Permission.objects.all()
        else:
            perms = getattr(self, '_get_%s_permissions' % from_name)(user_obj)
        perms = perms.values_list('content_type__app_label', 'codename').order_by()
        setattr(user_obj, perm_cache_name, set("%s.%s" % (ct, name) for ct, name in perms))
    return getattr(user_obj, perm_cache_name)
So what was the issue?
The issue lay in this query:
INSERT INTO django_content_type(name, app_label, model)
values ('linked_urls',"urls", 'linked_urls');
It looks fine initially, but the actual query executed was:
-- notice the capitalization here - it looked so trivial, I didn't even bother to look into it until I realized what was happening internally
INSERT INTO django_content_type(name, app_label, model)
values ('Linked_Urls',"urls", 'Linked_Urls');
So Django, internally, when running migrate, ensures everything is stored in lower case - and this was the problem!
I had a separate query executed to lowercase all the previous inserts, and voila!
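For reference, a rough sketch of that cleanup done through the ORM instead of raw SQL (assuming the miscased rows are the content types that were inserted by hand; the same idea applies to auth_permission codenames if those were miscased too):

from django.contrib.contenttypes.models import ContentType

# Lowercase any model names that were inserted manually with the wrong case
for ct in ContentType.objects.all():
    if ct.model != ct.model.lower():
        ct.model = ct.model.lower()
        ct.save()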

IllegalStateException while trying create NativeQuery with EntityManager

I have been getting this annoying exception while trying to create a native query with my entity manager. The full error message is:
java.lang.IllegalStateException: During synchronization a new object was found through a relationship that was not marked cascade PERSIST: com.model.OneToManyEntity2#61f3b3b.
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.discoverUnregisteredNewObjects(RepeatableWriteUnitOfWork.java:313)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.calculateChanges(UnitOfWorkImpl.java:723)
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.writeChanges(RepeatableWriteUnitOfWork.java:441)
at org.eclipse.persistence.internal.jpa.EntityManagerImpl.flush(EntityManagerImpl.java:874)
at org.eclipse.persistence.internal.jpa.QueryImpl.performPreQueryFlush(QueryImpl.java:967)
at org.eclipse.persistence.internal.jpa.QueryImpl.executeReadQuery(QueryImpl.java:207)
at org.eclipse.persistence.internal.jpa.QueryImpl.getSingleResult(QueryImpl.java:521)
at org.eclipse.persistence.internal.jpa.EJBQueryImpl.getSingleResult(EJBQueryImpl.java:400)
The actual code that triggers the error is:
Query query;
query = entityManager.createNativeQuery(
        "SELECT MAX(CAST(SUBSTRING_INDEX(RecordID,'-',-1) as Decimal)) FROM `QueriedEntityTable`");
String recordID = (query.getSingleResult() == null
        ? null
        : query.getSingleResult().toString());
This is being executed with an EntityTransaction in the doTransaction part. What's really getting me, though, is that this is the first code to be executed within the doTransaction method, simplified below to:
updateOneToManyEntity1();
updateOneToManyEntity2();
entityManager.merge(parentEntity);
The entity it has a problem with ("OneToManyEntity1") isn't even the table I'm trying to create the query on. I'm not doing any persist or merge up until this point either, so I'm also not sure what is supposedly causing it to be out of sync. The only database work being done up until this code executes is just pulling in data, not changing anything. The foreign keys are properly set up in the database.
I'm able to get rid of this error by doing as it says and marking these relationships as Cascade.PERSIST, but then I get a MySQLIntegrityConstraintViolationException on the query.getSingleResult() line. My logs show that it's doing some INSERT queries right before this, so it looks like it's reaching the EntityManager.merge part of my doTransaction method, but the error and call stack point to a completely different part of the code.
Using EclipseLink (2.6.1), Glassfish 4, and MySQL. The EntityManager is using RESOURCE_LOCAL with all the necessary classes listed under the persistence-unit tag, and exclude-unlisted-classes is set to false.
Edit: Some more info as I'm trying to work through this. If I put a breakpoint at the beginning of the transaction and then execute entityManager.clear() through IntelliJ's "Evaluate Expression" tool, everything works fine, at least the first time through. Without it, I get an error as it tries to insert empty objects into the table.
Edit #2: I converted the nativeQuery part to use the Criteria API, and this let me actually make it through my code so I could find where it was unintentionally adding a null object to my entity list. I'm still just confused as to why the entity manager is caching these errors, to the point that creating a native query breaks because it's still trying to insert bad data. Is this something I'd need to call EntityManager.clear() for each time? Or am I supposed to call it when there is an error in the doTransaction method?
So after reworking the code and setting this aside, I stumbled on at least part of the answer to my question. My issue was caused by the object being persisted prior to the transaction starting. So when I was entering my transaction, it first tried to insert/update data from my entity objects and threw an error since I hadn't set the values of most of the non-null columns. I believe this is the reason I was getting the cascade errors and I'm positive this is the source of the random insert queries I saw being fired off at the beginning of my transaction. Hope this helps someone else avoid a lot of trouble.

Errors creating generic relations using content types (object_pk)

I am working to use Django's ContentType framework to create some generic relations for my models; after looking at how the Django developers do it in django.contrib.comments.models, I thought I would imitate their approach/conventions
(from django.contrib.comments.models, line 21):
content_type = models.ForeignKey(ContentType,
                                 verbose_name='content type',
                                 related_name="content_type_set_for_%(class)s")
object_pk = models.TextField('object ID')
content_object = generic.GenericForeignKey(ct_field="content_type", fk_field="object_pk")
That's taken from their source and, of course, their source works for me (I have comments with object_pk's stored just fine - integers, actually); however, I get an error during syncdb on table creation that ends with:
_mysql_exceptions.OperationalError: (1170, "BLOB/TEXT column 'object_pk' used in key specification without a key length")
Any ideas why they can do it and I can't?
After looking around, I noticed that the docs actually state:
Give your model a field that can store a primary-key value from the models you'll be relating to. (For most models, this means an IntegerField or PositiveIntegerField.)
This field must be of the same type as the primary key of the models that will be involved in the generic relation. For example, if you use IntegerField, you won't be able to form a generic relation with a model that uses a CharField as a primary key.
But why can they do it and not me?!
Thanks.
PS: I even tried creating an AbstractBaseModel with these three fields, making it abstract=True and using that (in case that had something to do with it) ... same error.
After I typed out that really long question, I looked at the MySQL error again and realized that it was stemming from:
class Meta:
    unique_together = (("content_type", "object_pk"),)
Apparently, I can't have it both ways, which leaves me torn. I'll have to open a new question about whether it is better to leave my object_pk options open (suppose I use a TextField as a primary key?) or to enforce the unique_together constraint...
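For what it's worth, here is a hedged sketch of the integer-PK route (the model name is hypothetical): a PositiveIntegerField can be indexed on MySQL without a key length, so the unique_together constraint is allowed, at the cost of only being able to relate to models with integer primary keys:

from django.contrib.contenttypes import generic
from django.contrib.contenttypes.models import ContentType
from django.db import models

class RelatedItem(models.Model):  # hypothetical model name
    content_type = models.ForeignKey(ContentType,
                                     verbose_name='content type',
                                     related_name="content_type_set_for_%(class)s")
    # Works only if every related model uses an integer PK, but it can be
    # indexed, so unique_together does not trigger the key-length error
    object_pk = models.PositiveIntegerField('object ID')
    content_object = generic.GenericForeignKey(ct_field="content_type", fk_field="object_pk")

    class Meta:
        unique_together = (("content_type", "object_pk"),)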