I had a Django model field which was working in the default SQLite db:
uuid = models.TextField(default=uuid.uuid4, editable=False, unique=True)
However, when I tried to migrate to MySQL, I got the error:
django.db.utils.OperationalError: (1170, "BLOB/TEXT column 'uuid' used in key specification without a key length")
The first thing I tried was removing the unique=True, but I got the same error. Next, since I had another field (which migrated successfully):
id = models.UUIDField(default=uuid.uuid4, editable=False)
I tried changing uuid to UUIDField, but I still get the same error. Finally, I changed uuid to:
uuid = models.TextField(editable=False)
But I am still getting the same error when migrating (dropping all the tables, then makemigrations and migrate --run-syncdb). Ideally, I want a UUIDField or TextField with default=uuid.uuid4, editable=False, and unique=True, but I am fine doing these tasks in the view when creating the object.
You need a column type with a fixed key length for the unique index. A TextField becomes a longtext column on MySQL regardless of any max_length (which Django only enforces in forms), and MySQL cannot put a unique index on longtext without a key length, hence error 1170. Use a CharField instead; since a UUID v4 string is 36 characters, max_length=36 is enough (make sure you don't have longer values in the db already):
uuid = models.CharField(default=uuid.uuid4, editable=False, unique=True, max_length=36)
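Alternatively, Django's own UUIDField avoids the problem entirely: on MySQL it is stored as char(32), which can carry a unique index. A minimal sketch (the model name MyModel is just for illustration):

import uuid
from django.db import models

class MyModel(models.Model):
    # stored as char(32) on MySQL, so unique=True is fine
    uuid = models.UUIDField(default=uuid.uuid4, editable=False, unique=True)

If switching to UUIDField still reported the same error, a leftover migration or the old longtext column may have been in the way.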
I'm currently trying to create an ORM model in Peewee for an application. However, I seem to be running into an issue when querying a specific model. After some debugging, I found that it's whatever is defined below a specific model that fails.
I've moved the models around (with the given ForeignKeys still intact), and for some odd reason it's only what is below a specific class (User) that breaks.
def get_user(user_id):
    user = User.select().where(User.id == user_id).get()
    return user
class BaseModel(pw.Model):
    """A base model that will use our MySQL database"""
    class Meta:
        database = db

class User(BaseModel):
    id = pw.AutoField()
    steam_id = pw.CharField(max_length=40, unique=True)
    name = pw.CharField(max_length=40)
    admin = pw.BooleanField(default=False)
    super_admin = pw.BooleanField()
    # ...
I expected to be able to query User like every other model. However, this is the Peewee error I run into when I query for the user with id 1 (i.e. User.select().where(User.id == 1).get() or get_user(1)); note that the id value doesn't even make it into the WHERE clause:
UserDoesNotExist: <Model: User> instance matching query does not exist:
SQL: SELECT `t1`.`id`, `t1`.`steam_id`, `t1`.`name`, `t1`.`admin`, `t1`.`super_admin` FROM `user` AS `t1` WHERE %s LIMIT %s OFFSET %s
Params: [False, 1, 0]
Does anyone have a clue as to why I'm getting this error?
Read the error message. It is telling you that the user with the given ID does not exist.
Peewee raises an exception if the call to .get() does not match any rows. If you want "get or None if not found", you can do a couple of things: wrap the call to .get() in a try / except, or use get_or_none().
http://docs.peewee-orm.com/en/latest/peewee/api.html#Model.get_or_none
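For example (a minimal sketch, assuming the User model and user_id from the question):

try:
    user = User.get(User.id == user_id)
except User.DoesNotExist:
    user = None

# or, equivalently:
user = User.get_or_none(User.id == user_id)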
Well, I think I figured it out. Instead of querying with a where() clause on the ID, I just did User.get(1), and that seems to do the trick. More reading shows there's a get_by_id() as well.
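For reference, in Peewee 3.x these primary-key lookups are equivalent (each raises User.DoesNotExist when no row matches):

user = User.get_by_id(1)
user = User[1]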
I'm trying to replicate a table in real time using Kafka Connect. The database used is MySQL v5.7.
When working with the insert and update modes separately, the columns behave as expected. However, when I use the upsert mode, no change is observed in the database.
Configuration File filled via UI
Sink
connector class = JdbcSinkConnector
name = sink
task max = 1
key converter class = org.apache.kafka.connect.storage.StringConverter
value converter class = org.apache.kafka.connect.json.JsonConverter
jdbc url = jdbc:mysql://127.0.0.1:3306/p2p_service_db4?user=root&password=root&useSSL=false
topic = custom-p2p
insert mode = upsert
auto create = true
auto evolve = true
Source
Connector Class = JdbcSourceConnector
name = source-new
task max = 1
key converter class = org.apache.kafka.connect.storage.StringConverter
value converter class = org.apache.kafka.connect.json.JsonConverter
jdbc url = jdbc:mysql://127.0.0.1:3306/p2p_service_db3?user=root&password=root&useSSL=false
table loading mode = timestamp+incrementing
incrementing column name = auto_id
timestamp column name = last_updated_at
topic prefix = custom-
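For reference, the sink fields above correspond roughly to the following connector properties (a sketch; the connector.class package name assumes the Confluent JDBC connector):

name=sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
connection.url=jdbc:mysql://127.0.0.1:3306/p2p_service_db4?user=root&password=root&useSSL=false
topics=custom-p2p
insert.mode=upsert
auto.create=true
auto.evolve=true

Note that the JDBC sink's upsert mode also needs a key definition (the pk.mode and pk.fields properties), which is not among the fields listed above.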
The issue I'm having: when the sink insert mode is set to insert, insertion takes place properly; when set to update, that also works as expected; however, when the value is changed to upsert, neither insertion nor update takes place.
Please let me know if something I've done is wrong. Why is this mode not working? Is there some alternative, given that both inserts and updates need to be replicated to the backup DB?
Thank you in advance. Let me know if any other information is needed.
I'm working on a Rails 3.2.9 app with Ruby 1.9.3 and MySQL as my DB. I want to retrieve a particular column called 'no_of_tc' from a model named 'ExcelFile', but I don't have the primary key/id of that row. All I have is the filename.
tc_no = ExcelFile.find(35).no_of_tc gives me the result, but I don't have the id all the time.
tc_no = ExcelFile.find_by filename: 'excel_name' gives the error "unknown method - find_by".
How can I get the required data without having the primary key? And why am I getting the unknown method error for 'find_by'?
I am assuming :file_name is another attribute of the ExcelFile class. Note that find_by with a hash argument was only added in Rails 4, which is why Rails 3.2 reports it as an unknown method; on 3.2 you can use the dynamic finder instead.
try:
tc_no = ExcelFile.find_by_file_name(name_of_the_file).no_of_tc
I have a table with the following declarative definition:
class Type(Base):
    __tablename__ = 'Type'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

    def __init__(self, name):
        self.name = name
The column "name" has a unique constraint, but I'm able to do
type1 = Type('name1')
session.add(type1)
type2 = Type(type1.name)
session.add(type2)
So, as can be seen, the unique constraint is not checked at all, since I have added two objects with the same name to the session.
When I do session.commit(), I get a MySQL error since the constraint also exists in the MySQL table.
Is it possible for SQLAlchemy to tell me in advance that I cannot do this, or to detect it and refuse to insert two entries with the same "name" column?
If not, should I keep all existing names in memory, so I can check whether they exist or not before creating the object?
SQLAlchemy doesn't handle uniqueness itself, because there is no good way to do it. Even if you keep track of created objects and/or check whether an object with such a name exists, there is a race condition: somebody in another process can insert a new object with the name you just checked. The only watertight solution is to lock the whole table before the check and release the lock after the insertion (some databases support such locking).
AFAIK, SQLAlchemy does not enforce uniqueness constraints in Python-side behavior. Those unique=True declarations are only used to impose database-level table constraints, and only then if you create the table through SQLAlchemy, e.g.
Type.__table__.create(engine)
or some such. If you create an SA model against an existing table that does not actually have this constraint present, it will be as if the constraint does not exist.
Depending on your specific use case, you'll probably have to use a pattern like
try:
    existing = session.query(Type).filter_by(name='name1').one()
    # do something with existing
except:
    newobj = Type('name1')
    session.add(newobj)
or a variant, or you'll just have to catch the MySQL exception and recover from there.
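For that last option, a minimal sketch of catching the database error (IntegrityError is the SQLAlchemy exception that wraps such constraint violations):

from sqlalchemy.exc import IntegrityError

try:
    session.add(Type('name1'))
    session.commit()
except IntegrityError:
    # the UNIQUE constraint fired: the name already exists
    session.rollback()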
From the docs:
class MyClass(Base):
    __tablename__ = 'sometable'
    __table_args__ = (
        ForeignKeyConstraint(['id'], ['remote_table.id']),
        UniqueConstraint('foo'),
        {'autoload': True},
    )
.one() throws two kinds of exceptions:
sqlalchemy.orm.exc.NoResultFound and sqlalchemy.orm.exc.MultipleResultsFound
You should create the object when the first exception occurs; if the second occurs, you're screwed anyway and shouldn't make it worse.
from sqlalchemy.orm.exc import NoResultFound

try:
    existing = session.query(Type).filter_by(name='name1').one()
    # do something with existing
except NoResultFound:
    newobj = Type('name1')
    session.add(newobj)
I am working with Django's ContentType framework to create some generic relations for my models; after looking at how the Django developers do it in django.contrib.comments.models, I thought I would imitate their approach/conventions
(from django.contrib.comments.models, line 21):
content_type = models.ForeignKey(ContentType,
    verbose_name='content type',
    related_name="content_type_set_for_%(class)s")
object_pk = models.TextField('object ID')
content_object = generic.GenericForeignKey(ct_field="content_type", fk_field="object_pk")
That's taken from their source and, of course, their source works for me (I have comments with object_pk's stored just fine; integers, actually). However, I get an error during syncdb on table creation that ends:
_mysql_exceptions.OperationalError: (1170, "BLOB/TEXT column 'object_pk' used in key specification without a key length")
Any ideas why they can do it and I can't?
After looking around, I noticed that the docs actually state:
Give your model a field that can store a primary-key value from the models you'll be relating to. (For most models, this means an IntegerField or PositiveIntegerField.)
This field must be of the same type as the primary key of the models that will be involved in the generic relation. For example, if you use IntegerField, you won't be able to form a generic relation with a model that uses a CharField as a primary key.
But why can they do it and not me?!
Thanks.
PS: I even tried creating an AbstractBaseModel with these three fields, making it abstract=True and using that (in case that had something to do with it) ... same error.
After I typed out that really long question, I looked at the MySQL error and realized it was stemming from:
class Meta:
    unique_together = (("content_type", "object_pk"),)
Apparently, I can't have it both ways, which leaves me torn. I'll have to open a new question about whether it is better to leave my object_pk options open (suppose I use a textfield as a primary key?) or to enforce the unique-togetherness...
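If every model on the other side of the relation uses an integer primary key, one way to keep the constraint is an integer object_pk, per the docs quoted above. A minimal sketch (TaggedItem is a hypothetical name; this uses the same contenttypes API as the comments app):

from django.contrib.contenttypes import generic
from django.contrib.contenttypes.models import ContentType
from django.db import models

class TaggedItem(models.Model):
    content_type = models.ForeignKey(ContentType)
    # an integer column can be indexed without a key length,
    # so MySQL accepts the unique_together below
    object_pk = models.PositiveIntegerField('object ID')
    content_object = generic.GenericForeignKey(ct_field="content_type", fk_field="object_pk")

    class Meta:
        unique_together = (("content_type", "object_pk"),)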