I have a table with a column of type date. The column accepts null values, so I declared it as an Option (see the field perDate below). When I run the select query through the application code, I run into the following problem:

slick.SlickException: Read NULL value (null) for ResultSet column

This is the Slick table definition:
import java.sql.Date
import java.time.LocalDate
class FormulaDB(tag: Tag) extends Table[Formula](tag, "formulas") {

  def sk          = column[Int]("sk", O.PrimaryKey, O.AutoInc)
  // name and descrip are referenced in * below but were missing from the
  // posted snippet; assumed definitions added so the code compiles:
  def name        = column[String]("name")
  def descrip     = column[Option[String]]("descrip")
  def formula     = column[Option[String]]("formula")
  def notes       = column[Option[String]]("notes")
  def periodicity = column[Int]("periodicity")
  def perDate     = column[Option[LocalDate]]("per_date")(localDateColumnType)

  def * =
    (sk, name, descrip, formula, notes, periodicity, perDate) <>
      ((Formula.apply _).tupled, Formula.unapply)

  implicit val localDateColumnType = MappedColumnType.base[Option[LocalDate], Date](
    {
      case Some(localDate) => Date.valueOf(localDate)
      case None            => null
    },
    { sqlDate =>
      if (sqlDate != null) Some(sqlDate.toLocalDate) else None
    }
  )
}
Your mapped column function just needs to provide the LocalDate to Date conversion. Slick will automatically handle Option[LocalDate] if it knows how to handle LocalDate.
That means changing your localDateColumnType to be:
implicit val localDateColumnType = MappedColumnType.base[LocalDate, Date](
Date.valueOf(_), _.toLocalDate
)
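With that in scope, the column definition no longer needs the explicit mapping argument, because Slick lifts the LocalDate mapping into the Option for you. A minimal sketch (same column as in the question):

// Slick derives Option[LocalDate] support from the implicit LocalDate mapping
def perDate = column[Option[LocalDate]]("per_date")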
Chapter 5 of Essential Slick covers some of this, as does the section on User Defined Features in the Manual.
I'm not 100% sure why you're seeing the run-time error: my guess is that the column is being treated as an Option[Option[LocalDate]] or similar, and there's a level of null in there that's being missed.
BTW, your def * can probably be:
def * = (sk, name, descrip, formula, notes, periodicity, perDate).mapTo[Formula]
...which is a little nicer to read. The mapTo helper was added in Slick 3.2.
A legacy MySQL table has an id column that is non-human-readable raw varbinary (don't ask me why :P):
CREATE TABLE IF NOT EXISTS `tbl_portfolio` (
`id` varbinary(16) NOT NULL,
`name` varchar(128) NOT NULL,
...
PRIMARY KEY (`id`)
);
and I need to select on it based on a java.util.UUID
jdbiReader
    .withHandle<PortfolioData, JdbiException> { handle ->
        handle
            .createQuery(
                """
                SELECT *
                FROM tbl_portfolio
                WHERE id = :id
                """
            )
            .bind("id", uuid) // mapping this uuid into the varbinary
                              // id db column is the problem
            .mapTo(PortfolioData::class.java) // the mapper out does work
            .firstOrNull()
    }
Just in case anyone wants to see it, here's the mapper out (but again, the mapper out is not the problem; binding the uuid to the varbinary id column is):
class PortfolioDataMapper : RowMapper<PortfolioData> {

    override fun map(
        rs: ResultSet,
        ctx: StatementContext
    ): PortfolioData = PortfolioData(
        fromBytes(rs.getBytes("id")),
        rs.getString("name"),
        rs.getString("portfolio_idempotent_key")
    )

    private fun fromBytes(bytes: ByteArray): UUID {
        val byteBuff = ByteBuffer.wrap(bytes)
        val first = byteBuff.long
        val second = byteBuff.long
        return UUID(first, second)
    }
}
I've tried all kinds of things to get the binding to work but no success - any advice much appreciated!
Finally got it to work, partly thanks to https://jdbi.org/#_argumentfactory, which actually deals with UUID specifically but which I somehow missed despite looking at the JDBI docs for hours. Oh well.
The query can remain like this:

jdbiReader
    .withHandle<PortfolioData, JdbiException> { handle ->
        handle
            .createQuery(
                """
                SELECT *
                FROM tbl_portfolio
                WHERE id = :id
                """
            )
            .bind("id", uuid)
            .mapTo(PortfolioData::class.java)
            .firstOrNull()
    }
But jdbi needs a UUIDArgumentFactory registered
jdbi.registerArgument(UUIDArgumentFactory(VARBINARY))
where
class UUIDArgumentFactory(sqlType: Int) : AbstractArgumentFactory<UUID>(sqlType) {
    override fun build(
        value: UUID,
        config: ConfigRegistry?
    ): Argument {
        return UUIDArgument(value)
    }
}
where
class UUIDArgument(private val value: UUID) : Argument {
    companion object {
        private const val UUID_SIZE = 16
    }

    @Throws(SQLException::class)
    override fun apply(
        position: Int,
        statement: PreparedStatement,
        ctx: StatementContext
    ) {
        val bb = ByteBuffer.wrap(ByteArray(UUID_SIZE))
        bb.putLong(value.mostSignificantBits)
        bb.putLong(value.leastSignificantBits)
        statement.setBytes(position, bb.array())
    }
}
NOTE: registering an ArgumentFactory on the entire jdbi instance like this makes ALL UUID arguments sent to .bind map to bytes, which may not be what you want. If other tables elsewhere in your code base store JVM UUIDs as something other than VARBINARY (for example VARCHAR), you would have to skip the instance-wide registration and instead apply the UUID handling ad hoc, only on the individual queries where it's appropriate.
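If you do need the ad hoc route, one option (a sketch of my own, not from the original post) is to skip the factory entirely for that query: convert the UUID to its 16 bytes yourself and bind the ByteArray, which JDBI already knows how to send as binary data.

val bb = ByteBuffer.wrap(ByteArray(16))
bb.putLong(uuid.mostSignificantBits)
bb.putLong(uuid.leastSignificantBits)

handle
    .createQuery("SELECT * FROM tbl_portfolio WHERE id = :id")
    .bind("id", bb.array()) // raw bytes, scoped to just this query
    .mapTo(PortfolioData::class.java)
    .firstOrNull()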
So I'm very (extremely) new to databases, Slick, and Scala, so I was using the example code from the documentation at http://slick.typesafe.com/doc/3.0.0/gettingstarted.html.

My problem is that for some reason I have to run a query multiple times before it returns data. I have to rerun it at least 3-4 times before it returns results. I use a for-loop to rerun the query, and the runs don't necessarily give me the exact same results each time either.

I used the example code to create two tables, as follows:
class Patients(tag: Tag) extends Table[(String, String, Int, String)](tag, "Patientss") {
  def PID       = column[String]("Patient Id", O.PrimaryKey)
  def Gender    = column[String]("Gender")
  def Age       = column[Int]("Age")
  def Ethnicity = column[String]("Ethnicity")

  def * = (PID, Gender, Age, Ethnicity)
}

val patientsss = TableQuery[Patients]

class DrugEffect(tag: Tag) extends Table[(String, String, Double)](tag, "DrugEffectss") {
  def DrugID         = column[String]("Drug ID", O.PrimaryKey)
  def PatientID      = column[String]("Patient_ID")
  def DrugEffectssss = column[Double]("Drug Effect")

  def * = (DrugID, PatientID, DrugEffectssss)

  def Patient = foreignKey("Patient_FK", PatientID, patientsss)(_.PID)
}

val d_effects = TableQuery[DrugEffect]
I then create these tables using:
val create_empty = DBIO.seq((patientsss.schema ++ d_effects.schema).create)
val setup_1 = db.run(create_empty)
I have actual data in two text files, which I parse through using a buffered reader. I store all the drug IDs in a list creatively named DrugIds. Then I start filling in the tables in the following way.

I first fill in the Patients table:
while (switch != 1) {
  val Patient = CurPatient.split("\\s+")

  if (Patient(2).toUpperCase() == "NA" || (Patient(2).toFloat % 1 != 0))
    age = -1
  else age = Patient(2).toInt

  val insertPatient: DBIO[Option[Int]] = patientsss ++= Seq(
    (Patient(0), Patient(1), age, Patient(3))
  )
  var future = db.run(insertPatient)

  CurPatient = PatientReader.readLine()
  if (CurPatient == null)
    switch = 1 // switch to 1
}
For the DrugEffects table, I do the following:
while (switch != 1) {
  val Effect = CurEffect.split("\\s+")

  for (i <- 1 until DrugIds.size - 1) {
    if (Effect(i).toUpperCase() == "NA")
      d_ef = -1.00
    else d_ef = Effect(i).toFloat.asInstanceOf[Double]

    val insertEffect: DBIO[Option[Int]] = d_effects ++= Seq(
      (DrugIds(i), Effect(0), d_ef)
    )
    var future2 = db.run(insertEffect)
  }

  CurEffect = EffectReader.readLine()
  if (CurEffect == null)
    switch = 1
}
Then I run a query with the following piece of code:
val q1 = for {
c <- patientsss
} yield (c.PID, c.Gender, c.Age, c.Ethnicity)
db.stream(q1.result).foreach(println)
This should just give me all the data in the Patient's table, but it doesn't necessarily do that.
Sometimes, I get the following error (but not always):
java.util.concurrent.RejectedExecutionException: Task slick.backend.DatabaseComponent$DatabaseDef$$anon$3@47089c2c rejected from java.util.concurrent.ThreadPoolExecutor@6453123[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 215]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
at slick.backend.DatabaseComponent$DatabaseDef$class.scheduleSynchronousStreaming(DatabaseComponent.scala:253)
at slick.jdbc.JdbcBackend$DatabaseDef.scheduleSynchronousStreaming(JdbcBackend.scala:38)
at slick.backend.DatabaseComponent$BasicStreamingActionContext.restartStreaming(DatabaseComponent.scala:516)
at slick.backend.DatabaseComponent$BasicStreamingActionContext.request(DatabaseComponent.scala:531)
at slick.backend.DatabasePublisher$$anon$3$$anonfun$onNext$2.apply(DatabasePublisher.scala:50)
at slick.backend.DatabasePublisher$$anon$3$$anonfun$onNext$2.apply(DatabasePublisher.scala:49)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
If I run a more complex query, the data I get back matches the parameters of the query, but the same problems occur: when I rerun the query multiple times, the results are either duplicated, non-existent, or incomplete.

Explain like I'm 5 if you can, or point me to a resource that can help me solve these problems.
EDIT:
bjfletcher's answer worked (Thanks!), but now I have another problem:
Every now and again, the code will fail with the error:
Exception in thread "main" org.h2.jdbc.JdbcSQLException: Table "Patientss" not found; SQL statement:
insert into "Patientss" ("Patient Id","Gender","Age","Ethnicity") values (?,?,?,?) [42102-162]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:329)
at org.h2.message.DbException.get(DbException.java:169)
at org.h2.message.DbException.get(DbException.java:146)
at org.h2.command.Parser.readTableOrView(Parser.java:4758)
at org.h2.command.Parser.readTableOrView(Parser.java:4736)
at org.h2.command.Parser.parseInsert(Parser.java:954)
at org.h2.command.Parser.parsePrepared(Parser.java:375)
at org.h2.command.Parser.parse(Parser.java:279)
at org.h2.command.Parser.parse(Parser.java:251)
at org.h2.command.Parser.prepareCommand(Parser.java:217)
at org.h2.engine.Session.prepareLocal(Session.java:415)
at org.h2.engine.Session.prepareCommand(Session.java:364)
at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1121)
at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:71)
at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:267)
at slick.jdbc.JdbcBackend$SessionDef$class.prepareStatement(JdbcBackend.scala:252)
at slick.jdbc.JdbcBackend$BaseSession.prepareStatement(JdbcBackend.scala:386)
at slick.jdbc.JdbcBackend$SessionDef$class.withPreparedStatement(JdbcBackend.scala:301)
at slick.jdbc.JdbcBackend$BaseSession.withPreparedStatement(JdbcBackend.scala:386)
at slick.driver.JdbcInsertInvokerComponent$BaseInsertInvoker.preparedInsert(JdbcInsertInvokerComponent.scala:177)
at slick.driver.JdbcInsertInvokerComponent$BaseInsertInvoker$$anonfun$internalInsertAll$1.apply(JdbcInsertInvokerComponent.scala:201)
at slick.jdbc.JdbcBackend$BaseSession.withTransaction(JdbcBackend.scala:422)
at slick.driver.JdbcInsertInvokerComponent$BaseInsertInvoker.internalInsertAll(JdbcInsertInvokerComponent.scala:198)
at slick.driver.JdbcInsertInvokerComponent$BaseInsertInvoker.insertAll(JdbcInsertInvokerComponent.scala:194)
at slick.driver.JdbcInsertInvokerComponent$InsertInvokerDef$class.$plus$plus$eq(JdbcInsertInvokerComponent.scala:73)
at slick.driver.JdbcInsertInvokerComponent$BaseInsertInvoker.$plus$plus$eq(JdbcInsertInvokerComponent.scala:152)
at slick.driver.JdbcActionComponent$InsertActionComposerImpl$$anonfun$$plus$plus$eq$1.apply(JdbcActionComponent.scala:459)
at slick.driver.JdbcActionComponent$InsertActionComposerImpl$$anonfun$$plus$plus$eq$1.apply(JdbcActionComponent.scala:459)
at slick.driver.JdbcActionComponent$InsertActionComposerImpl$$anon$8.run(JdbcActionComponent.scala:449)
at slick.driver.JdbcActionComponent$InsertActionComposerImpl$$anon$8.run(JdbcActionComponent.scala:447)
at slick.backend.DatabaseComponent$DatabaseDef$$anon$2.liftedTree1$1(DatabaseComponent.scala:231)
at slick.backend.DatabaseComponent$DatabaseDef$$anon$2.run(DatabaseComponent.scala:231)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
It doesn't happen all the time, but very often, and I have no clue what it means.
All the DB calls return to you immediately with Futures, even if they've not finished their operations. They are asynchronous, not synchronous.

You can change your code to accommodate the Futures in one of two ways:

- you can use Await.result with each DB call, to wait at that point until it completes, for example: Await.result(db.run(insertEffect), Duration.Inf) (see the sketch below)
- you can use .map (or .flatMap if you're using another Future from within) with the code that you want to run when the DB operation is complete, for example: db.run(insertEffect).map(_ => ... do stuff ...)
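Applied to the code in the question, a minimal sketch of the first approach (names taken from the question; only the Await and Duration imports are added):

import scala.concurrent.Await
import scala.concurrent.duration.Duration

// wait for the schema to exist before inserting
Await.result(db.run(create_empty), Duration.Inf)

// wait for each insert to complete before reading the next line
Await.result(db.run(insertPatient), Duration.Inf)

// only query once the inserts are done
Await.result(db.run(q1.result), Duration.Inf).foreach(println)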
Have a look at another Stack Overflow thread regarding the exception, with some ideas as to the cause.
I need to create a sequence, but in a generic way, not using the Sequence class.

USN = Column(Integer, nullable=False, default=nextusn, server_onupdate=nextusn)

This function nextusn needs to generate the func.max(table.USN) value over the rows in the model. I tried using this:
class nextusn(expression.FunctionElement):
    type = Numeric()
    name = 'nextusn'

@compiles(nextusn)
def default_nextusn(element, compiler, **kw):
    return select(func.max(element.table.c.USN)).first()[0] + 1
but in this context the element doesn't know element.table. Is there a way to resolve this?
This is a little tricky, for these reasons:

- Your SELECT MAX() will return NULL if the table is empty; you should use COALESCE to produce a default "seed" value. See below.
- The whole approach of inserting rows with SELECT MAX is entirely unsafe for concurrent use, so you need to make sure only one INSERT statement at a time invokes on the table, or you may get constraint violations (you should definitely have a constraint of some kind on this column).
- From the SQLAlchemy perspective, you need your custom element to be aware of the actual Column element. We can achieve this either by assigning the nextusn() function to the Column after the fact, or, as below, with a more sophisticated approach using events.
- I don't understand what you're going for with server_onupdate=nextusn. server_onupdate in SQLAlchemy doesn't actually run any SQL for you; it's a placeholder for, say, a trigger you created yourself. Also, "SELECT MAX(id) FROM table" is an INSERT pattern; I'm not sure that you mean for anything to be happening here on an UPDATE.
- The @compiles extension needs to return a string, by running the select() through compiler.process(). See below.

Example:
from sqlalchemy import Column, Integer, create_engine, select, func, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.sql.expression import ColumnElement
from sqlalchemy.schema import ColumnDefault
from sqlalchemy.ext.compiler import compiles
from sqlalchemy import event


class nextusn_default(ColumnDefault):
    "Container for a nextusn() element."

    def __init__(self):
        super(nextusn_default, self).__init__(None)


@event.listens_for(nextusn_default, "after_parent_attach")
def set_nextusn_parent(default_element, parent_column):
    """Listen for when nextusn_default() is associated with a Column,
    assign a nextusn().
    """
    assert isinstance(parent_column, Column)
    default_element.arg = nextusn(parent_column)


class nextusn(ColumnElement):
    """Represent "SELECT MAX(col) + 1 FROM TABLE"."""

    def __init__(self, column):
        self.column = column


@compiles(nextusn)
def compile_nextusn(element, compiler, **kw):
    return compiler.process(
        select([
            func.coalesce(func.max(element.column), 0) + 1
        ]).as_scalar()
    )


Base = declarative_base()


class A(Base):
    __tablename__ = 'a'

    id = Column(Integer, default=nextusn_default(), primary_key=True)
    data = Column(String)


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

# will normally pre-execute the default so that we know the PK value;
# result.inserted_primary_key will be available
e.execute(A.__table__.insert(), data='single row')

# will run the default expression inline within the INSERT
e.execute(A.__table__.insert(), [{"data": "multirow1"}, {"data": "multirow2"}])

# will also run the default expression inline within the INSERT;
# result.inserted_primary_key will not be available
e.execute(A.__table__.insert(inline=True), data='single inline row')
Good day everyone,
I have a file of strings corresponding to the fields of my SQLAlchemy object. Some fields are floats, some are ints, and some are strings.
I'd like to be able to coerce my string into the proper type by interrogating the column definition. Is this possible?
For instance:
class MyClass(Base):
    ...
    my_field = Column(Float)
It feels like one should be able to say something like MyClass.my_field.column.type and either ask the type to coerce the string directly or write some conditions and int(x), float(x) as needed.
I wondered whether this would happen automatically if all the values were strings, but I received Oracle errors because the type was incorrect.
Currently I coerce naively: if the value is float()able, that's my value; otherwise it's a string. I trust that integral floats will become integers upon inserting because they are represented exactly. But the runtime value is wrong (e.g. 1.0 vs 1) and it just seems sloppy.
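(For concreteness, a sketch of that naive coercion; the helper name is mine, not from the original:)

def coerce_naively(value):
    try:
        return float(value)  # any numeric-looking string becomes a float
    except ValueError:
        return value  # otherwise keep it as a string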
Thanks for your input!
SQLAlchemy 0.7.4
You can iterate over columns of the mapped Table:
for col in MyClass.__table__.columns:
    print col, repr(col.type)

...so you can check the type of each field by its name like this:

def get_col_type(cls_, fld_):
    for col in cls_.__table__.columns:
        if col.name == fld_:
            return col.type  # this contains the instance of the SA type

assert Float == type(get_col_type(MyClass, 'my_field'))
I would cache the results, though, if your file is large, to save the for-loop on every row imported from the file.
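A minimal sketch of that caching, reusing the helper's loop (the dict name is my own):

# build the name -> SA type mapping once, then reuse it for every row
col_types = dict((col.name, col.type) for col in MyClass.__table__.columns)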
See also these related questions: Type coercion for sqlalchemy prior to committing to some database and How can I verify Column data types in the SQLAlchemy ORM?

The following example uses the attribute_instrument and set events to coerce values as they are assigned:
from sqlalchemy import (
    Column,
    Integer,
    String,
    DateTime,
)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import event
import datetime

Base = declarative_base()

type_coercion = {
    Integer: int,
    String: str,
    DateTime: datetime.datetime,
}


# this event is called whenever an attribute
# on a class is instrumented
@event.listens_for(Base, 'attribute_instrument')
def configure_listener(class_, key, inst):
    if not hasattr(inst.property, 'columns'):
        return

    # this event is called whenever a "set"
    # occurs on that instrumented attribute
    @event.listens_for(inst, "set", retval=True)
    def set_(instance, value, oldvalue, initiator):
        desired_type = type_coercion.get(inst.property.columns[0].type.__class__)
        coerced_value = desired_type(value)
        return coerced_value


class MyObject(Base):
    __tablename__ = 'mytable'

    id = Column(Integer, primary_key=True)
    svalue = Column(String)
    ivalue = Column(Integer)
    dvalue = Column(DateTime)


x = MyObject(svalue=50)
assert isinstance(x.svalue, str)
I'm not sure if I'm reading this question correctly, but I would do something like:
class MyClass(Base):
    some_float = Column(Float)
    some_string = Column(String)
    some_int = Column(Integer)
    ...

    def __init__(self, some_float, some_string, some_int, ...):
        if isinstance(some_float, float):
            self.some_float = some_float
        else:
            try:
                self.some_float = float(some_float)
            except (TypeError, ValueError):
                pass  # do something intelligent
        if isinstance(some_string, str):
            ...
And I would repeat the checking process for each column. I wouldn't trust anything to do it "automatically". I also expect your file of strings to be well structured; otherwise something more complicated would have to be done.
Assuming your file is a CSV (I'm not good with file reads in Python, so treat this as a sketch; the csv module handles the splitting):

import csv

with open('thisfile.csv') as f:
    for thisline in csv.reader(f):  # each line is an ordered list of strings
        thisthing = MyClass(some_float=thisline[0], some_string=thisline[1], ...)
        DBSession.add(thisthing)
I'm implementing an actor-based app in Scala, and I'm trying to be able to pass functions between the actors, for them to be processed only when some message is received by the actor.
import actors.Actor
import java.util.Random
import scala.Numeric._
import Implicits._

class Constant(val n: Number) extends Actor {
  def act() {
    loop {
      receive {
        case "value" => reply({ n })
      }
    }
  }
}

class Arithmetic[T: Numeric](A: () => T, B: () => T) extends Actor {
  def act() {
    receive {
      case "sum" => reply(A() + B())
      /* case "mul" => reply(A * B)
       */
    }
  }
}

object Main extends App {
  val c5 = new Constant(5)
  c5.start
  val a = new Arithmetic({ c5 !! "value" }, { c5 !! "value" })
  a.start
  println(a !? "sum")
  println(a !? "mul")
}
In the example code above, I would expect the output to be both 5+5 and 5*5. The issue is that reply is not a typed function, and as such I'm unable to have the operators (+, *) operate on the results of A and B.

Can you provide any help on how to better design/implement such a system?
Edit: code updated to better reflect the problem. The error is:

error: could not find implicit value for evidence parameter of type Numeric[Any]
val a = new Arithmetic({c5 !! "value"}, {c5 !! "value"})

I need to be able to pass the function to be evaluated in the actor whenever I call it. This example uses static values, but I'll be using dynamic values in the future, so passing the value won't solve the problem. Also, I would like to receive different value types (Int/Long/Double) and still be able to use the same code.
The error is: could not find implicit value for evidence parameter of type Numeric[Any]. Look at the definition of !!:

def !!(msg: Any): Future[Any]

So the T that Arithmetic is getting is Any, and there truly isn't a Numeric[Any].
I'm pretty sure that is not your problem, though. First, A and B are functions, and functions don't have + or *. If you called A() and B(), then you might stand a chance... except for the fact that they are java.lang.Number, which also does not have + or * (or any other method you'd expect it to have).

Basically, there's no "Number" type that is a superclass or interface of all numbers, for the simple reason that Java doesn't have one. There are a lot of questions touching this subject on Stack Overflow, including some of my own very first questions about Scala. Investigate scala.math.Numeric, which is the best approximation for the moment.
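As a minimal sketch of what a Numeric-based design looks like (my own illustration, not code from the question): with a Numeric instance in scope, + and * come from the type class rather than from the values themselves.

class Arithmetic[T](a: () => T, b: () => T)(implicit num: Numeric[T]) {
  import num._  // brings the +, * operations for T into scope
  def sum: T = a() + b()
  def mul: T = a() * b()
}

val arith = new Arithmetic[Int](() => 5, () => 5)
// arith.sum == 10, arith.mul == 25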
Method vs function, and the lack of parentheses

Methods and functions are different things; see tons of related questions here, and the rules for dropping parentheses are different as well. I'll let the REPL speak for me:
scala> def f: () => Int = () => 5
f: () => Int
scala> def g(): Int = 5
g: ()Int
scala> f
res2: () => Int = <function0>
scala> f()
res3: Int = 5
scala> g
res4: Int = 5
scala> g()
res5: Int = 5