How to declare a prefix index in SQLAlchemy?

I have a VARCHAR(255) column that I want to index, but this exceeds the 767-byte index key length limit in MySQL. The fix seems to be to declare an index prefix, but I can't figure out the SQLAlchemy syntax for this.
I'm using SQLAlchemy 2.0.0 and Python 3.9. For the moment, I'm working around the problem by reducing the width of the field, but I really don't want to resort to that in production.
class BotLog(BaseModel):
    __tablename__ = "bot_log"

    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    title: Mapped[str] = mapped_column(String(190), index=True)
    timestamp_utc: Mapped[datetime]
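SQLAlchemy's MySQL dialect exposes prefix lengths through the mysql_length keyword argument on Index, so the column can stay VARCHAR(255) while only a prefix is indexed. A minimal sketch against the model above (the index name and the 100-character prefix are illustrative):

from datetime import datetime

from sqlalchemy import Index, String
from sqlalchemy.orm import Mapped, mapped_column

class BotLog(BaseModel):
    __tablename__ = "bot_log"

    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    title: Mapped[str] = mapped_column(String(255))
    timestamp_utc: Mapped[datetime]

    __table_args__ = (
        # renders on MySQL as: KEY ix_bot_log_title (title(100))
        Index("ix_bot_log_title", "title", mysql_length=100),
    )

For composite indexes, mysql_length also accepts a dict mapping column names to prefix lengths.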

Related

Django MySQL UUID

I had a Django model field which was working in the default SQLite db:
uuid = models.TextField(default=uuid.uuid4, editable=False, unique=True).
However, when I tried to migrate to MySQL, I got the error:
django.db.utils.OperationalError: (1170, "BLOB/TEXT column 'uuid' used in key specification without a key length")
The first thing I tried was removing the unique=True, but I got the same error. Next, since I had another field (which migrated successfully):
id = models.UUIDField(default=uuid.uuid4, editable=False)
I tried changing uuid to UUIDField, but I still get the same error. Finally, I changed uuid to:
uuid = models.TextField(editable=False)
But I am still getting the same error when migrating (DROP all the tables, makemigrations, migrate --run-syncdb). Ideally, I want to have a UUIDField or TextField with default = uuid.uuid4, editable = False, and unique = True, but I am fine doing these tasks in the view when creating the object.
You need a column type that MySQL can index with a key length. A TextField maps to TEXT, which can't carry a unique index without a key length, and max_length on a TextField only affects form validation, not the column type. Use a CharField instead; since a UUID v4 string is 36 characters, max_length=36 works (make sure you don't already have longer values in the db):
uuid = models.CharField(default=uuid.uuid4, editable=False, unique=True, max_length=36)
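Alternatively, Django's built-in UUIDField sidesteps the key-length problem entirely, since on MySQL it is stored as a fixed-width char(32). A minimal sketch (the model name is a placeholder):

import uuid

from django.db import models

class MyModel(models.Model):
    # char(32) on MySQL, so unique=True needs no key length
    uuid = models.UUIDField(default=uuid.uuid4, editable=False, unique=True)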

JDO Class - convert to varchar or nvarchar based on MySQL or MSSQL

I have a JDO Class. Some of the attributes are as shown below:
@Column(jdbcType = "VARCHAR", length = 200)
String anotherSrcFieldValue;

@Column(jdbcType = "BIGINT")
long tgtFieldId;

@Column(jdbcType = "VARCHAR", length = 200)
String tgtFieldValue;
With MySQL and MSSQL it works fine.
My requirement is: when the database is MySQL, create the column as VARCHAR; when it is MSSQL, create it as NVARCHAR. How can I achieve this?
A second requirement is that the same entity class work against both databases.
All the JDO docs I've seen explain clearly that putting schema-specific info in annotations is a bad idea. Consequently you should have two files, "package-mysql.orm" and "package-mssql.orm", to hold the schema-specific parts of the mapping, and set the persistence property "datanucleus.Mapping" to either "mysql" or "mssql" depending on your datastore. See http://www.datanucleus.org/products/accessplatform_4_2/jdo/orm/metadata_orm.html
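As a rough sketch of what the database-specific file could look like (untested; the package and class names are placeholders), package-mssql.orm would override just the column definition:

<?xml version="1.0" encoding="UTF-8"?>
<orm>
    <package name="com.example.model">
        <class name="MyEntity">
            <field name="tgtFieldValue">
                <column jdbc-type="NVARCHAR" length="200"/>
            </field>
        </class>
    </package>
</orm>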

SQLAlchemy Truncating VARCHAR(MAX)

I'm querying Microsoft SQL Server 2008 with Flask-SQLAlchemy (0.16) and SQLAlchemy (0.8.2) in Python 2.7.
When I query a varchar(max) column, the value is truncated to 4096 characters. I've tried different data types in the code: String, Text, VARCHAR.
Any thoughts on how to get my code to pull all the data from the column?
Here is part of the code:
from web import db

class DynamicPage(db.Model):
    __tablename__ = 'DynamicPage'

    DynamicPageId = db.Column(db.Integer, primary_key=True)
    PageHtml = db.Column(db.VARCHAR)
And the query:
pages = DynamicPage.query.all()
Are you using ODBC with FreeTDS? ODBC has a fixed maximum size for large text/binary fields. With FreeTDS you need to raise the "text size" setting to support fields as large as you need.
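If editing freetds.conf isn't convenient, one workaround on the SQLAlchemy side is to raise SQL Server's per-session cap on each new connection. A sketch, assuming the db object from the question (the TEXTSIZE value is illustrative, here the maximum):

from sqlalchemy import event

from web import db

@event.listens_for(db.engine, "connect")
def set_textsize(dbapi_connection, connection_record):
    # lift the session cap on text/varchar(max) data returned to the client
    cursor = dbapi_connection.cursor()
    cursor.execute("SET TEXTSIZE 2147483647")
    cursor.close()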

MySQL + Django: No obj.id after save()

Using Django and MySQL. I save the model and no id appears. I've seen some similar issues with PostgreSQL and with BigIntegerFields, but neither seems to apply here. Any ideas? The row does receive a primary key in the id column in the database via MySQL auto-increment.
Thanks!
class Client(models.Model):
    id = models.IntegerField(primary_key=True)
    first_name = models.TextField(null=True, blank=True)
    last_name = models.TextField(blank=True)
>>> client = models.Client(last_name="Last", first_name="First")
>>> client.last_name
'Last'
>>> client.save()
>>> client.id
>>> client.last_name
'Last'
>>> client.id
>>> client.pk
>>>
And in the database:
id first_name last_name
------------------------------------------
14 First Last
As Rebus says in the comments, you don't need to define the primary key explicitly. But if you do, you must make sure it is an auto-increment field, i.e. AutoField, not a basic IntegerField as you have. The way you have it, there is no way to get a new ID, which is why it's blank after save.
System-generated database keys must be set by the database, or else you won't have a working multi-user database.
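A sketch of the corrected model; declaring the field as an AutoField (or omitting it entirely, since Django adds an implicit auto primary key) lets the database assign the id, which Django reads back on save():

from django.db import models

class Client(models.Model):
    id = models.AutoField(primary_key=True)  # auto-increment; populated after save()
    first_name = models.TextField(null=True, blank=True)
    last_name = models.TextField(blank=True)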

How does SQLAlchemy handle a unique constraint in a table definition

I have a table with the following declarative definition:
class Type(Base):
    __tablename__ = 'Type'

    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

    def __init__(self, name):
        self.name = name
The column "name" has a unique constraint, but I'm able to do
type1 = Type('name1')
session.add(type1)
type2 = Type(type1.name)
session.add(type2)
So, as can be seen, the unique constraint is not checked at all: I have added two objects with the same name to the session.
When I do session.commit(), I get a MySQL error since the constraint also exists in the MySQL table.
Can SQLAlchemy tell me in advance that this will fail, or detect it and refuse to insert two entries with the same "name" column?
If not, should I keep all existing names in memory, so I can check whether they exist before creating the object?
SQLAlchemy doesn't handle uniqueness, because there's no good way to do it. Even if you keep track of created objects and/or check whether an object with such a name exists, there is a race condition: another process can insert a new object with the name you just checked. The only real solution is to lock the whole table before the check and release the lock after the insertion (some databases support such locking).
AFAIK, SQLAlchemy does not enforce uniqueness constraints in Python. Those "unique=True" declarations are only used to impose database-level table constraints, and only then if you create the table through SQLAlchemy, i.e.
Type.__table__.create(engine)
or some such. If you create an SA model against an existing table that does not actually have this constraint present, it will be as if it does not exist.
Depending on your specific use case, you'll probably have to use a pattern like
try:
    existing = session.query(Type).filter_by(name='name1').one()
    # do something with existing
except:
    newobj = Type('name1')
    session.add(newobj)
or a variant, or you'll just have to catch the mysql exception and recover from there.
From the docs
class MyClass(Base):
    __tablename__ = 'sometable'
    __table_args__ = (
        ForeignKeyConstraint(['id'], ['remote_table.id']),
        UniqueConstraint('foo'),
        {'autoload': True},
    )
.one() throws two kinds of exceptions: sqlalchemy.orm.exc.NoResultFound and sqlalchemy.orm.exc.MultipleResultsFound. You should create the object when the first exception occurs; if the second occurs, you're screwed anyway and shouldn't make it worse.
from sqlalchemy.orm.exc import NoResultFound

try:
    existing = session.query(Type).filter_by(name='name1').one()
    # do something with existing
except NoResultFound:
    newobj = Type('name1')
    session.add(newobj)
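Note that the race condition from the first answer still applies between the query and the commit: another transaction can insert the same name after your check. A more robust variant, assuming the unique constraint really exists on the table, is to attempt the insert and recover from the database error:

from sqlalchemy.exc import IntegrityError

session.add(Type('name1'))
try:
    session.commit()
except IntegrityError:
    session.rollback()
    # another transaction won the race; load the existing row instead
    existing = session.query(Type).filter_by(name='name1').one()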