Errors creating generic relations using content types (object_pk) in MySQL

I am trying to use Django's ContentType framework to create some generic relations for my models; after looking at how the Django developers do it in django.contrib.comments.models, I thought I would imitate their approach/conventions (from django.contrib.comments.models, line 21):
content_type = models.ForeignKey(ContentType,
        verbose_name='content type',
        related_name="content_type_set_for_%(class)s")
object_pk = models.TextField('object ID')
content_object = generic.GenericForeignKey(ct_field="content_type", fk_field="object_pk")
That's taken directly from their source and, of course, their source works for me (I have comments with object_pk values, integers actually, stored just fine). However, I get an error during syncdb on table creation that ends with:
_mysql_exceptions.OperationalError: (1170, "BLOB/TEXT column 'object_pk' used in key specification without a key length")
Any ideas why they can do it and I can't?
After looking around, I noticed that the docs actually state:
Give your model a field that can store a primary-key value from the models you'll be relating to. (For most models, this means an IntegerField or PositiveIntegerField.)
This field must be of the same type as the primary key of the models that will be involved in the generic relation. For example, if you use IntegerField, you won't be able to form a generic relation with a model that uses a CharField as a primary key.
But why can they do it and not me?!
Thanks.
PS: I even tried creating an AbstractBaseModel with these three fields, making it abstract=True and using that (in case that had something to do with it) ... same error.

After I typed out that really long question, I looked at the MySQL error again and realized that it was stemming from:
class Meta:
    unique_together = (("content_type", "object_pk"),)
Apparently, I can't have it both ways, which leaves me torn. I'll have to open a new question about whether it is better to leave my object_pk options open (suppose I use a TextField as a primary key?) or better to enforce the unique-togetherness...
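One possible compromise, as a sketch rather than anything from the comments app: if you can bound the length of the primary keys you'll relate to, switching object_pk to a CharField lets MySQL build the composite unique index, since only unbounded TEXT/BLOB columns require an explicit key length. The base class name here is hypothetical:

# Hypothetical sketch, not the comments app's code: a bounded CharField
# can be indexed by MySQL without a key length, so unique_together works,
# while still accepting non-integer primary keys up to 255 characters.
from django.contrib.contenttypes import generic
from django.contrib.contenttypes.models import ContentType
from django.db import models

class GenericRelatedBase(models.Model):
    content_type = models.ForeignKey(ContentType,
            verbose_name='content type',
            related_name="content_type_set_for_%(class)s")
    object_pk = models.CharField('object ID', max_length=255)
    content_object = generic.GenericForeignKey(ct_field="content_type",
                                               fk_field="object_pk")

    class Meta:
        abstract = True
        unique_together = (("content_type", "object_pk"),)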


(1054, "Unknown column '' in 'field list'")

I know this question has been asked a couple of times, but no previous answer was able to solve my problem.
I had a perfectly working model in Django that looked like this:
class Template(models.Model):
    mat = models.CharField(max_length=50, null=True)
    ...
I had many instances of this model and up to now I was very happy with it. I recently decided that instead of a CharField, this model was better suited to a ForeignKey in this position. To get into details, the attribute 'mat' previously only referred to the name of another object instance. I decided to change it to the full-fledged instance with a ForeignKey, like it should have always been. Therefore, the model was modified as follows:
class Template(models.Model):
    mat = models.ForeignKey(Stock, on_delete=models.CASCADE, related_name='mat_stock', verbose_name="mat", null=True)
    ...
I followed this change with the usual ./manage.py makemigrations and ./manage.py migrate. While these two commands worked, I was unable to select any instance of Template in the shell without raising the following error:
OperationalError: (1054, "Unknown column 'myapp_template.mat_id' in 'field list'")
Similar situations I encountered were solved by manually adding a column in SQL with the following line:
ALTER TABLE database.myapp_template ADD mat INT;
Unfortunately this did not solve my problem.
I figured maybe the problem was that I already had instances of my object with character values in the 'mat' column. Django would expect integer values (specifically "id") after my migration, so I decided to create a completely new attribute for Template as follows:
class Template(models.Model):
    pos_mat = models.ForeignKey(Stock, on_delete=models.CASCADE, related_name='mat_stock', verbose_name="mat", null=True)
    ...
This, I thought, would delete (or disregard) the 'mat' column and create a new 'pos_mat' column with the desired properties, without having to handle old character values that wouldn't fit the requirements. From there on it should be like adding a completely new ForeignKey attribute.
After the required and successful ./manage.py makemigrations and ./manage.py migrate, I am still unable to access an instance of my model in the shell. I still get the same unpleasant:
OperationalError: (1054, "Unknown column 'myapp_template.mat_id' in 'field list'")
Would anybody know how to convince Django to go along with my changes? I am skeptical that rolling back my migrations to zero will help (it did not solve my problems in the past), and I hope it will not come to deleting my data. It is acceptable for my model to have an empty field in this column, since I added null=True to my attribute.
Thank you very much for your help. Have a good day.
I have solved my problem by rolling back to my last stable migration. From there I was able to migrate a model where 'mat' was absent and 'pos_mat' was the only attribute. This means my problem arose in the first migration from the old version of 'mat' to the new version of 'mat': keeping the same name but changing the attribute's characteristics is a no-go. I hope those with the same problem will be able to fix it this way.
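For anyone comparing notes, this is roughly what the working migration path looks like, written out as a hypothetical migration file (the app name myapp and dependency name are assumptions): the old column is dropped and the ForeignKey is added as a separate operation, rather than altering 'mat' in place.

# Hypothetical sketch of the migration Django should end up with.
# Removing the old CharField and adding the ForeignKey separately avoids
# the in-place ALTER that left the database without the mat_id column
# Django was querying for.
from django.db import migrations, models
import django.db.models.deletion

class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),  # assumed previous migration
    ]

    operations = [
        migrations.RemoveField(
            model_name='template',
            name='mat',
        ),
        migrations.AddField(
            model_name='template',
            name='pos_mat',
            field=models.ForeignKey(
                null=True,
                on_delete=django.db.models.deletion.CASCADE,
                related_name='mat_stock',
                to='myapp.Stock',
                verbose_name='mat',
            ),
        ),
    ]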

Perl DBIx::Class 'encountered object' JSON error

I'm new to Perl and DBIx::Class.
This is how I get my meaning_ids from the table translation where language = 5:
my $translations = $schema->resultset('Translation')->search({ language => '5' });
After that, I try to push the data from the database into my array $data:
while ( my $translation = $translations->next ) {
    push @{ $data }, {
        meaning_id => $translation->meaning
    };
}
$self->body(encode_json $data);
If I do it like this, I get the following error:
encountered object 'TranslationDB::Schema::Result::Language=HASH(0x9707158)', but neither allow_blessed, convert_blessed nor allow_tags settings are enabled (or TO_JSON/FREEZE method missing)
But if I do it like that:
while ( my $translation = $translations->next ) {
    push @{ $data }, {
        meaning_id => 0 + $translation->meaning
    };
}
$self->body(encode_json $data);
I don't get the error anymore, but the meaning is not the number from the database. It's way too big (something like 17789000, but only numbers up to 7000 are valid).
Is there an easy way to tell Perl that meaning_id is an INT and not a string?
It's a bit hard without knowing your schema classes, but @choroba is right. The error message says $translation->meaning is an instance of TranslationDB::Schema::Result::Language. That's explained in DBIx::Class::Manual::ResultClass on CPAN.
I believe there is a relationship to a table called meaning, and when you call $translation->meaning what you get is a result object. Instead you need to call $translation->meaning_id. Actually, that would only happen in a join, but your code doesn't look like it does that.
It seems $translation->meaning returns an object. Using 0+ just returns its address (that's why the numbers are so high).
It looks like there's a relationship between your translation and meaning tables. Probably, the translation table contains a foreign key to the meaning table. If you look in the Result class for your translation class then you will see that relationship defined - it will be called "meaning".
As you have that relationship, then DBIC has added a meaning method to your class which retrieves the meaning object that is associated with your translation.
But it appears that the foreign key column in your translation table is also called "meaning", so you expect that calling the "meaning" method will give you the value of the foreign key rather than the associated object. Unfortunately it doesn't work like that: the relationship method overrides the column method.
This is a result of bad naming practices. I recommend that you call the primary key for every table id and the foreign key that links to another table <table_name>_id - so the column in your translation table would be called meaning_id. That way you can distinguish between the value of the key ($translation->meaning_id) and the associated meaning object ($translation->meaning).
A work-around you can use if you can't rename columns is the get_column method: $translation->get_column('meaning').

Problem with field name containing '?'

I have a 'user' table with a field named 'process_salary?' which has a boolean datatype:
@user = User.create(params[:user])
if @user.process_salary?
  # some code here
else
  # some code here
end
When I create a new user object and check for process_salary? it gives me the following error:
NoMethodError: undefined method `process_salary?' for #<User:0xb6ac2f68>
Why does this error occur? Can I avoid it without changing my column name?
When I check it with the debugger it crashes the first time, but after that it runs properly.
The question mark has a special meaning in ActiveRecord: it can be used to check whether a field is true. You are using it as part of your field name, which wasn't such a good idea. You could try whether @user.process_salary?? works, but I think ultimately it is easiest to rename your database column to 'process_salary'.
Side note: The 'rails console' is really helpful for playing around with models.
As cellcortex posted, question marks at the end of column names are tricky in Rails. If you need to keep it there for legacy reasons, you might be able to access the attribute as follows:
@user['process_salary?']
or the more verbose:
@user.read_attribute('process_salary?')
You can of course test for nil using .nil?.

How does SQLAlchemy handle unique constraints in a table definition

I have a table with the following declarative definition:
class Type(Base):
    __tablename__ = 'Type'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

    def __init__(self, name):
        self.name = name
The column "name" has a unique constraint, but I'm able to do
type1 = Type('name1')
session.add(type1)
type2 = Type(type1.name)
session.add(type2)
So, as can be seen, the unique constraint is not checked at all, since I have added two objects with the same name to the session.
When I do session.commit(), I get a mysql error since the constraint is also in the mysql table.
Is it possible for SQLAlchemy to tell me in advance that the commit will fail, or to detect the duplicate and refuse to insert two entries with the same "name" column?
If not, should I keep all existing names in memory, so I can check whether they exist before creating the object?
SQLAlchemy doesn't handle uniqueness, because it's not possible to do in a good way. Even if you keep track of created objects and/or check whether an object with that name exists, there is a race condition: anybody in another process can insert a new object with the name you just checked. The only solution is to lock the whole table before the check and release the lock after the insertion (some databases support such locking).
AFAIK, SQLAlchemy does not enforce uniqueness constraints in Python. Those unique=True declarations are only used to impose database-level table constraints, and only then if you create the table using a SQLAlchemy command, i.e.
Type.__table__.create(engine)
or some such. If you create an SA model against an existing table that does not actually have this constraint present, it will be as if it does not exist.
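A quick way to verify this (a minimal sketch, assuming the Type model from the question) is to ask SQLAlchemy for the DDL it would emit; if unique=True was picked up, the UNIQUE clause appears in the CREATE TABLE statement:

# Sketch: print the CREATE TABLE statement SQLAlchemy would emit.
# If the constraint is known to the metadata, the output contains
# "UNIQUE (name)".
from sqlalchemy.schema import CreateTable

print(CreateTable(Type.__table__))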
Depending on your specific use case, you'll probably have to use a pattern like
try:
    existing = session.query(Type).filter_by(name='name1').one()
    # do something with existing
except:
    newobj = Type('name1')
    session.add(newobj)
or a variant, or you'll just have to catch the mysql exception and recover from there.
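The catch-and-recover variant would look roughly like this (a sketch, assuming the session and model above); SQLAlchemy wraps the MySQL duplicate-key error in IntegrityError, and the session has to be rolled back before it can be reused:

# Sketch: let the database enforce the constraint and handle the
# violation on commit.
from sqlalchemy.exc import IntegrityError

session.add(Type('name1'))
try:
    session.commit()
except IntegrityError:
    # the UNIQUE constraint on "name" fired; roll back so the
    # session is usable again
    session.rollback()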
From the docs
class MyClass(Base):
    __tablename__ = 'sometable'
    __table_args__ = (
        ForeignKeyConstraint(['id'], ['remote_table.id']),
        UniqueConstraint('foo'),
        {'autoload': True},
    )
.one() throws two kinds of exceptions:
sqlalchemy.orm.exc.NoResultFound and sqlalchemy.orm.exc.MultipleResultsFound
You should create the object when the first exception occurs; if the second occurs, you're screwed anyway and shouldn't make it worse.
from sqlalchemy.orm.exc import NoResultFound

try:
    existing = session.query(Type).filter_by(name='name1').one()
    # do something with existing
except NoResultFound:
    newobj = Type('name1')
    session.add(newobj)

How to force Grails to use proper column type in MySQL for Map field

I have a problem in Grails 1.1.2 + MySQL.
My domain class Something contains the field
Map<String, Map<Integer, Integer>> priceMap
When I run the app, Grails creates table 'something' and sub-table 'something_price_map'. 'something_price_map' contains
BIGINT(20) price_map
VARCHAR(255) price_map_idx
TINYBLOB price_map_elt
The problem is that when I fill in the priceMap column, even with small map data like 'priceMap:[en:[100:4, 500:20, 600:24]]', the size of the data exceeds the 255-byte limit.
Is there any way of specifying a maxSize constraint for the inner map (Map), so that Grails uses MEDIUMBLOB or BLOB instead of TINYBLOB?
Btw, using an in-memory DB, everything works fine.
As you may know, there is a mapping constraint for a domain class. However, your issue may be too complex for that functionality.
In such cases, you can specify an explicit Hibernate mapping (via hbm file) for a domain class. This allows the complete flexibility of Hibernate.