Validation error in GET request with FastAPI, SQLAlchemy, MySQL

I have this route which, given the actual data in the DB table, should respond with 9 and 4 when I pass 2 as the parameter, but instead I get the error below.
@userRoutes.get(
    "/users/userMatch/{idusuariobuscar}",
    response_model=list[PartidosUser],
    tags=["users"],
)
def get_user_matches(idusuariobuscar: str):
    return conn.execute(
        partidosusuarios.select(partidosusuarios.c.idpartido).where(
            partidosusuarios.c.idusuario == idusuariobuscar
        )
    ).fetchall()
which queries this table.
This is the schema:
class PartidosUser(BaseModel):
    id: Optional[str]
    idUsuario: str
    idPartido: str

    class Config:
        orm_mode = True
And this is the model of the table:
partidosusuarios = Table(
    "partidosusuarios",
    meta,
    Column("idrelacion", Integer, primary_key=True),
    Column("idpartido", Integer),
    Column("idusuario", Integer),
)
And the error
raise ValidationError(errors, field.type_)
pydantic.error_wrappers.ValidationError: 4 validation errors for PartidosUser
response -> 0 -> idUsuario
field required (type=value_error.missing)
response -> 0 -> idPartido
field required (type=value_error.missing)
response -> 1 -> idUsuario
field required (type=value_error.missing)
response -> 1 -> idPartido
field required (type=value_error.missing)

You are only fetching the idpartido column from the database, hence the other fields don't exist. Changing your query from
partidosusuarios.select(partidosusuarios.c.idpartido).where(
    partidosusuarios.c.idusuario == idusuariobuscar
)
to
partidosusuarios.select().where(
    partidosusuarios.c.idusuario == idusuariobuscar
)
should solve your problem.
Also, be aware that your Pydantic model contains an id field that your database model doesn't have, so it will always be None, and that the database column idrelacion is not included in the response at all. These might be intentional choices, or a naming mismatch.
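For reference, here is a minimal sketch of how the route and schema could line up with the table columns (it assumes the conn and partidosusuarios objects from the question, and renames the Pydantic fields to match the database columns, which is one way to avoid the missing-field errors):
from typing import Optional
from pydantic import BaseModel

class PartidosUser(BaseModel):
    # Field names match the table columns so orm_mode can pick them up
    idrelacion: Optional[int]
    idusuario: int
    idpartido: int

    class Config:
        orm_mode = True

@userRoutes.get(
    "/users/userMatch/{idusuariobuscar}",
    response_model=list[PartidosUser],
    tags=["users"],
)
def get_user_matches(idusuariobuscar: str):
    # select() with no column arguments fetches every column of the table
    return conn.execute(
        partidosusuarios.select().where(
            partidosusuarios.c.idusuario == idusuariobuscar
        )
    ).fetchall()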

Related

JDBI select on varbinary and uuid

A legacy MySQL DB table has an id column that is non-human-readable raw varbinary (don't ask me why :P):
CREATE TABLE IF NOT EXISTS `tbl_portfolio` (
    `id` varbinary(16) NOT NULL,
    `name` varchar(128) NOT NULL,
    ...
    PRIMARY KEY (`id`)
);
and I need to select on it based on a java.util.UUID
jdbiReader
    .withHandle<PortfolioData, JdbiException> { handle ->
        handle
            .createQuery(
                """
                SELECT *
                FROM tbl_portfolio
                WHERE id = :id
                """
            )
            .bind("id", uuid) // mapping this uuid into the varbinary
                              // id db column is the problem
            .mapTo(PortfolioData::class.java) // the mapper out does work
            .firstOrNull()
    }
Just in case anyone wants to see it, here's the row mapper (but again, the mapper is not the problem; binding the UUID to the varbinary id DB column is):
class PortfolioDataMapper : RowMapper<PortfolioData> {
    override fun map(
        rs: ResultSet,
        ctx: StatementContext
    ): PortfolioData = PortfolioData(
        fromBytes(rs.getBytes("id")),
        rs.getString("name"),
        rs.getString("portfolio_idempotent_key")
    )

    private fun fromBytes(bytes: ByteArray): UUID {
        val byteBuff = ByteBuffer.wrap(bytes)
        val first = byteBuff.long
        val second = byteBuff.long
        return UUID(first, second)
    }
}
I've tried all kinds of things to get the binding to work but no success - any advice much appreciated!
I finally got it to work, partly thanks to https://jdbi.org/#_argumentfactory, which actually deals with UUID specifically but which I somehow missed despite looking at the JDBI docs for hours. Oh well.
The query can remain as it is:
jdbiReader
    .withHandle<PortfolioData, JdbiException> { handle ->
        handle
            .createQuery(
                """
                SELECT *
                FROM tbl_portfolio
                WHERE id = :id
                """
            )
            .bind("id", uuid)
            .mapTo(PortfolioData::class.java)
            .firstOrNull()
    }
But JDBI needs a UUIDArgumentFactory registered:
jdbi.registerArgument(UUIDArgumentFactory(VARBINARY))
where
class UUIDArgumentFactory(sqlType: Int) : AbstractArgumentFactory<UUID>(sqlType) {
    override fun build(
        value: UUID,
        config: ConfigRegistry?
    ): Argument {
        return UUIDArgument(value)
    }
}
where
class UUIDArgument(private val value: UUID) : Argument {
    companion object {
        private const val UUID_SIZE = 16
    }

    @Throws(SQLException::class)
    override fun apply(
        position: Int,
        statement: PreparedStatement,
        ctx: StatementContext
    ) {
        // Write the UUID as 16 big-endian bytes, matching the varbinary(16) column
        val bb = ByteBuffer.wrap(ByteArray(UUID_SIZE))
        bb.putLong(value.mostSignificantBits)
        bb.putLong(value.leastSignificantBits)
        statement.setBytes(position, bb.array())
    }
}
NOTE that registering an ArgumentFactory on the entire jdbi instance like this will make ALL UUID arguments passed to .bind map to bytes, which may not be what you want if your code base has other UUID arguments that are stored on the MySQL end as something other than VARBINARY. For example, you may have another table with a column where your JVM UUIDs are actually stored as VARCHAR; in that case, rather than registering the UUID ArgumentFactory on the entire jdbi instance, you would only use it ad hoc on the individual queries where it is appropriate.

Sqlalchemy. Get inserted default values

Preface:
My task is storing files on disk; part of each file name is a timestamp. The paths to these files are stored in the DB. Multiple files may have a single owner entity (one message can contain multiple attachments).
To make things easier, I want the file paths in the DB (where the timestamp defaults to now()) and the files on disk to share the same timestamp.
Question:
So after the insert, I need to get back the inserted default values (in most cases the primary key id and created_datetime).
I tried:
db_session # Just for clarity
<sqlalchemy.orm.session.AsyncSession object at 0x7f836691db20>
str(statement) # Just for clarity. Don't know how to get back the original python (not an SQL) statement
'INSERT INTO users (phone_number, login, full_name, hashed_password, role) VALUES (:phone_number, :login, :full_name, :hashed_password, :role)'
query_result = await db_session.execute(statement=statement)
query_result.returned_defaults_rows # Primary_key, but no datetime
[(243,)]
query_result.returned_defaults # Primary_key, but no datetime
(243,)
query_result.fetchall()
[]
My tables:
Base = declarative_base() # Main class of ORM; Put in config by suggestion https://t.me/ru_python/1450665
claims = Table( # TODO set constraints for status
"claims",
Base.metadata,
Column("id", Integer, primary_key=True),
My queries
async def create_item(statement: Insert, db_session: AsyncSession, detail: str = '') -> dict:
    try:  # return default created values
        statement = statement.returning(statement.table.c.id, statement.table.c.created_datetime)
        return (await db_session.execute(statement=statement)).fetchone()._asdict()
    except sqlalchemy.exc.IntegrityError as error:
        # if psycopg2_errors.lookup(code=error.orig.pgcode) in (psycopg2_errors.UniqueViolation, psycopg2_errors.lookup):
        detail = error.orig.args[0].split('Key ')[-1].replace('(', '').replace(')', '').replace('"', '')
        raise HTTPException(status_code=422, detail=detail)
P.S. SQLAlchemy v1.4
I was able to do this with session.bulk_save_objects(objects, return_defaults=True)
Docs on this method are here
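A minimal sketch of that approach (assuming a synchronous Session and a mapped Claim class whose id and created_datetime have server-side defaults; the names are illustrative, not from the answer):
claim = Claim(status='new')  # 'Claim' and its fields are placeholder names
session.bulk_save_objects([claim], return_defaults=True)
# With return_defaults=True the generated defaults are written back onto the
# object, so the id (and, per the answer above, created_datetime) is available:
print(claim.id, claim.created_datetime)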

SQLAlchemy query db with filter for all tables

I have SQLAlchemy models on top of a MySQL DB. I need to query almost all models (string or text fields) and find everything that contains a specific substring, and also apply common filtering like object_type=type1. For example:
class Model1(Model):
    name = Column(String(100), nullable=False, unique=True)
    version = Column(String(100))
    description = Column(String(100))
    updated_at = Column(TIMESTAMP(timezone=True))
    # other fields

class Model2(Model):
    name = Column(String(100), nullable=False, unique=True)
    version = Column(String(100))
    description = Column(String(100))
    updated_at = Column(TIMESTAMP(timezone=True))
    # other fields

class Model3(Model):
    name = Column(String(100), nullable=False, unique=True)
    version = Column(String(100))
    description = Column(String(100))
    updated_at = Column(TIMESTAMP(timezone=True))
    # other fields
And then run a query something like:
db.query(
    Model1.any_of_all_columns.contains('sub_string') or
    Model2.any_of_all_columns.contains('sub_string') or
    Model3.any_of_all_columns.contains('sub_string')
).all()
Is it possible to build such an ORM query in one SQL to the db and dynamically add Model(table) names and columns?
For applying common filtering across all the columns, you can subscribe to SQLAlchemy events as follows:
@event.listens_for(Query, "before_compile", retval=True)
def before_compile(query):
    for ent in query.column_descriptions:
        entity = ent['entity']
        if entity is None:
            continue
        inspect_entity_for_mapper = inspect(ent['entity'])
        mapper = getattr(inspect_entity_for_mapper, 'mapper', None)
        # 'has_tenant_id' and the object comparison below stand in for your own filter condition
        if mapper and has_tenant_id:
            query = query.enable_assertions(False).filter(
                ent['entity'].object == object)
    return query
This function will be called whenever you do Model.query() and will add the filter for your object.
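A minimal, self-contained sketch of that idea, assuming each model has an object_type column you want to filter on globally (the column name and the 'type1' value are illustrative, not from the original answer):
from sqlalchemy import event, inspect
from sqlalchemy.orm import Query

@event.listens_for(Query, "before_compile", retval=True)
def filter_by_object_type(query):
    for desc in query.column_descriptions:
        entity = desc['entity']
        if entity is None:
            continue
        mapper = getattr(inspect(entity), 'mapper', None)
        # Only filter entities that actually have an object_type column
        if mapper is not None and 'object_type' in mapper.columns:
            query = query.enable_assertions(False).filter(
                entity.object_type == 'type1'
            )
    return query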
I eventually gave up and did one big loop in which I make a separate request for each model:
from sqlalchemy import or_

def db_search(self, model, q, object_ids=None, status=None, m2m_ids=None):
    """
    Build a query to the db for given model using 'q' search substring
    and filter it by object ids, its status and m2m related model.

    :param model: a model object which columns will be used for search.
    :param q: the query substring we are trying to find in all
        string/text columns of the model.
    :param object_ids: list of ids we want to include in the search.
        If the list is empty, the search query will return 0 results.
        If object_ids is None, we will ignore this filter.
    :param status: name of object status.
    :param m2m_ids: list of many-to-many related object ids.
    :return: sqlalchemy query result.
    """
    # Filter out private columns and not string/text columns
    string_text_columns = [
        column.name for column in model.__table__.columns if
        isinstance(column.type, (db.String, db.Text))
        and column.name not in PRIVATE_COLUMN_NAMES
    ]
    # Find only enum ForeignKey columns
    foreign_key_columns = [
        column.name for column in model.__table__.columns if
        column.name.endswith("_id") and column.name in ENUM_OBJECTS
    ]
    query_result = model.query
    # Search in all string/text columns for the required query
    # as % LIKE %
    if q:
        query_result = query_result.join(
            # Join related enum tables for being able to search in them
            *[enum_tables_to_model_map[col]["model_name"] for col in
              foreign_key_columns]
        ).filter(
            or_(
                # Search 'q' substring in all string/text columns
                *[
                    getattr(model, col_name).like(f"%{q}%")
                    for col_name in string_text_columns
                ],
                # Search 'q' substring in the enum tables
                *[
                    enum_tables_to_model_map[col]["model_field"]
                    .like(f"%{q}%") for col in foreign_key_columns
                ]
            )
        )
    # Apply filter by object ids if given and it's not None.
    # If the object ids filter exists but it's empty, we should
    # return an empty result
    if object_ids is not None:
        query_result = query_result.filter(model.id.in_(object_ids))
    # Apply filter by status if given and if the model has the status
    # column
    if status and 'status_id' in model.__table__.columns:
        query_result = query_result.filter(model.status_id == status.id)
    if m2m_ids:
        query_result = query_result.filter(
            model.labels.any(Label.id.in_(m2m_ids)))
    return query_result.all()
And call it:
result = {}
for model in db.Model._decl_class_registry.values():
    # Search only in the public tables.
    # sqlalchemy.ext.declarative.clsregistry._ModuleMarker objects also live
    # in _decl_class_registry, which is why we check the instance type and
    # whether it is a subclass of db.Model
    if isinstance(model, type) and issubclass(model, db.Model) \
            and model.__name__ in PUBLIC_MODEL_NAMES:
        query_result = self.db_search(
            model, q, object_ids.get(model.__name__), status=status,
            m2m_ids=m2m_ids)
        result[model.__tablename__] = query_result
This is far from the best solution, but it works for me.

Reading a row with a NULL column causes an exception in slick

I have a table with a column type date. This column accepts null values,
therefore, I declared it as an Option (see field perDate below). When I
run the select query through the application code, I get the following exception:
slick.SlickException: Read NULL value (null) for ResultSet column
This is the Slick table definition:
import java.sql.Date
import java.time.LocalDate

class FormulaDB(tag: Tag) extends Table[Formula](tag, "formulas") {

  def sk = column[Int]("sk", O.PrimaryKey, O.AutoInc)
  def formula = column[Option[String]]("formula")
  def notes = column[Option[String]]("notes")
  def periodicity = column[Int]("periodicity")
  def perDate = column[Option[LocalDate]]("per_date")(localDateColumnType)

  def * =
    (sk, name, descrip, formula, notes, periodicity, perDate) <>
      ((Formula.apply _).tupled, Formula.unapply)

  implicit val localDateColumnType = MappedColumnType.base[Option[LocalDate], Date](
    {
      case Some(localDate) => Date.valueOf(localDate)
      case None            => null
    }, { sqlDate =>
      if (sqlDate != null) Some(sqlDate.toLocalDate) else None
    }
  )
}
Your mapped column function just needs to provide the LocalDate to Date conversion. Slick will automatically handle Option[LocalDate] if it knows how to handle LocalDate.
That means changing your localDateColumnType to be:
implicit val localDateColumnType = MappedColumnType.base[LocalDate, Date](
Date.valueOf(_), _.toLocalDate
)
Chapter 5 of Essential Slick covers some of this, as does the section on User Defined Features in the Manual.
I'm not 100% sure why you're seeing the run-time error: my guess is that the column is being treated as an Option[Option[LocalDate]] or similar, and there's a level of null in there that's being missed.
BTW, your def * can probably be:
def * = (sk, name, descrip, formula, notes, periodicity, perDate).mapTo[Formula]
...which is a little nicer to read. The mapTo was added in Slick 3 at some point.

Coercion in SQLAlchemy from Column annotations

Good day everyone,
I have a file of strings corresponding to the fields of my SQLAlchemy object. Some fields are floats, some are ints, and some are strings.
I'd like to be able to coerce my string into the proper type by interrogating the column definition. Is this possible?
For instance:
class MyClass(Base):
    ...
    my_field = Column(Float)
It feels like one should be able to say something like MyClass.my_field.column.type and either ask the type to coerce the string directly or write some conditions and int(x), float(x) as needed.
I wondered whether this would happen automatically if all the values were strings, but I received Oracle errors because the type was incorrect.
Currently I naively coerce -- if it's float()able, that's my value, else it's a string, and I trust that integral floats will become integers upon inserting because they are represented exactly. But the runtime value is wrong (e.g. 1.0 vs 1) and it just seems sloppy.
Thanks for your input!
SQLAlchemy 0.7.4
You can iterate over columns of the mapped Table:
for col in MyClass.__table__.columns:
    print(col, repr(col.type))
... so you can check the type of each field by its name like this:
def get_col_type(cls_, fld_):
    for col in cls_.__table__.columns:
        if col.name == fld_:
            return col.type  # this contains the instance of the SA type

assert Float == type(get_col_type(MyClass, 'my_field'))
I would cache the results though if your file is large in order to save the for-loop on every row imported from the file.
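For example, a per-class cache along these lines (a sketch; the helper name is made up) avoids running the loop for every row:
# Build the column-name -> type mapping once per class and reuse it per row
_col_type_cache = {}

def get_col_type_cached(cls_, fld_):
    types = _col_type_cache.get(cls_)
    if types is None:
        types = {col.name: col.type for col in cls_.__table__.columns}
        _col_type_cache[cls_] = types
    return types.get(fld_)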
Type coercion for sqlalchemy prior to committing to some database.
How can I verify Column data types in the SQLAlchemy ORM?
from sqlalchemy import (
    Column,
    Integer,
    String,
    DateTime,
)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import event
import datetime

Base = declarative_base()

type_coercion = {
    Integer: int,
    String: str,
    DateTime: datetime.datetime,
}

# this event is called whenever an attribute
# on a class is instrumented
@event.listens_for(Base, 'attribute_instrument')
def configure_listener(class_, key, inst):
    if not hasattr(inst.property, 'columns'):
        return

    # this event is called whenever a "set"
    # occurs on that instrumented attribute
    @event.listens_for(inst, "set", retval=True)
    def set_(instance, value, oldvalue, initiator):
        desired_type = type_coercion.get(inst.property.columns[0].type.__class__)
        coerced_value = desired_type(value)
        return coerced_value

class MyObject(Base):
    __tablename__ = 'mytable'

    id = Column(Integer, primary_key=True)
    svalue = Column(String)
    ivalue = Column(Integer)
    dvalue = Column(DateTime)

x = MyObject(svalue=50)
assert isinstance(x.svalue, str)
I'm not sure if I'm reading this question correctly, but I would do something like:
class MyClass(Base):
    some_float = Column(Float)
    some_string = Column(String)
    some_int = Column(Integer)
    ...

    def __init__(self, some_float, some_string, some_int, ...):
        if isinstance(some_float, float):
            self.some_float = some_float
        else:
            try:
                self.some_float = float(some_float)
            except (TypeError, ValueError):
                # do something intelligent
                ...
        if isinstance(some_string, str):
            ...
And I would repeat the checking process for each column. I wouldn't trust anything to do it "automatically". I also expect your file of strings to be well structured; otherwise something more complicated would have to be done.
Assuming your file is a CSV (I'm not good with file reads in Python, so treat this as a rough sketch):
import csv

with open('thisfile.csv') as f:
    for thisline in csv.reader(f):  # each line is an ordered list of strings
        thisthing = MyClass(some_float=thisline[0], some_string=thisline[1])  # ... remaining fields elided
        DBSession.add(thisthing)