Use java.util.Date with Slick 3.1.0-M1 - mysql

I am new to Slick. I am using MySQL and I am trying to read a datetime value from the database. Here are my imports:
import slick.driver.MySQLDriver.simple._
import scala.slick.driver._
import java.util.Date
and here is the line of the table class where the mapping is defined:
def creationDate = column[Date]("creation_date")
But I am getting this error
could not find implicit value for parameter tt: slick.ast.TypedType[java.util.Date]
Is there a way to read a datetime from MySQL into a java.util.Date without going through String?
Thank you

The reason you can't use java.util.Date in the column is that it is not supported by Slick; see the "Table Rows" part of the documentation:
The following primitive types are supported out of the box for JDBC-based databases in JdbcProfile (with certain limitations imposed by the individual database drivers):
Date types: java.sql.Date, java.sql.Time, java.sql.Timestamp
Thus, no implicit TypedType[C] can be found for the column method:
def column[C](n: String, options: ColumnOption[C]*)
             (implicit tt: TypedType[C]): Rep[C] = {
If you look for the subtypes of TypedType, you will find three time-related classes in slick.driver.JdbcTypesComponent:
DateJdbcType for java.sql.Date
TimestampJdbcType for java.sql.Timestamp
TimeJdbcType for java.sql.Time
These types are in line with what the documentation states: exactly three time-related types.
I use Timestamp with Slick 3.0 in my program as follows:
import slick.driver.MySQLDriver.api._
import java.sql.Timestamp

case class Temp(creation_date: Timestamp)

class Tests(tag: Tag) extends Table[Temp](tag, "tests") {
  def creationDate = column[Timestamp]("creation_date")
  def * = creationDate <> (Temp.apply, Temp.unapply)
}
That way, you just have to convert the Timestamp to whatever time type you want back and forth, which should be no big deal.
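For example, converting between the two types at the application boundary is a one-liner in each direction (a small sketch; the helper names are mine):
import java.util.Date
import java.sql.Timestamp

def toDate(ts: Timestamp): Date = new Date(ts.getTime)         // database -> application
def toTimestamp(d: Date): Timestamp = new Timestamp(d.getTime) // application -> database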
Anyway, hope it helps.

Have you tried Joda-Time?
If not, you should give it a serious thought. There is also a slick-joda-mapper project for it: https://github.com/tototoshi/slick-joda-mapper
import org.joda.time.DateTime
import com.github.tototoshi.slick.MySQLJodaSupport._
// then it works just the same
def creationDate = column[DateTime]("creation_date")

If you want to use java.util.Date as-is, create a mapping from Date to Timestamp:
import slick.driver.MySQLDriver.api._
import java.util.Date
import java.sql.Timestamp

implicit val dateColumnType: BaseColumnType[Date] = MappedColumnType.base[Date, Timestamp](
  d => new Timestamp(d.getTime), // Date -> Timestamp when writing
  ts => new Date(ts.getTime)     // Timestamp -> Date when reading
)
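With that implicit in scope, the column definition from the question should compile as-is. A quick sketch (the table and class names here are made up):
class Events(tag: Tag) extends Table[Date](tag, "events") {
  def creationDate = column[Date]("creation_date") // picks up the Date <-> Timestamp mapping
  def * = creationDate
}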

Related

SQLModel: sqlalchemy.exc.ArgumentError: Column expression or FROM clause expected,

I am using the SQLModel library to do a simple select(), as described on their official website. However, I am getting a Column expression or FROM clause expected error message.
from typing import Optional
from sqlmodel import Field, Session, SQLModel, create_engine, select
from models import Hero

sqrl = f"mysql+pymysql:///roo#asdf:localhost:3306/datab"
engine = create_engine(sqrl, echo=True)

def create_db_and_tables():
    SQLModel.metadata.create_all(engine)

def select_heroes():
    with Session(engine) as session:
        statement = select(Hero)
        results = session.exec(statement)
        for hero in results:
            print(hero)

def main():
    select_heroes()

if __name__ == "__main__":
    main()
this is my models/Hero.py code:
from datetime import datetime, date, time
from typing import Optional
from sqlmodel import Field, SQLModel

class Hero(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
    secret_name: str
    age: Optional[int] = None
    created: datetime
    lastseen: time
When I run app.py I get the sqlalchemy.exc.ArgumentError: Column expression or FROM clause expected, got <module 'models.Hero' from '/Users/dev/test/models/Hero.py'> message.
The error message Column expression or FROM clause expected, got <module 'models.Hero' from '/Users/dev/test/models/Hero.py'> tells us that SQLModel / SQLAlchemy unexpectedly received a module object named models.Hero, i.e. that you have a module named Hero.py.
The import statement from models import Hero only imports the module Hero. You can either
change the import to import the model*
from models.Hero import Hero
or change the code in select_heroes to reference the model†
statement = select(Hero.Hero)
* It's conventional to use all lowercase for module names; following this convention will help you distinguish between modules and models.
† This approach is preferable in my opinion: accessing the object via the module namespace eliminates the possibility of name collisions (of course it can be combined with lowercase module names).

What datatype to use in Slick Table definiton for json?

A table in my database has a column with the type json. I am wondering, which data type should I use when I define the table in Slick?
Currently, for the specific column I use Blob since I found it suggested here.
I'm interested to know whether it is possible to use the JSON library provided by the Play framework (play.libs.json). If so, how? Do I need implicit mappers?
Slick lets you map custom data types that it does not support out of the box. In my case, I save the JSON as a String in the database and work with it as a JValue in code (I am using json4s):
object JsonMapper {
  import driver.api._
  import org.json4s._
  import org.json4s.jackson.JsonMethods.{compact, render, parse}

  implicit val jsonMapper = MappedColumnType.base[JValue, String](
    json => compact(render(json)), // JValue -> String (json writer)
    str => parse(str)              // String -> JValue (json parser)
  )
}
import JsonMapper._

case class Info(id: Int, json: JValue)

class DBTable(tag: Tag) extends Table[Info](tag, "info") {
  val id = column[Int]("id")
  val json = column[JValue]("json")
  def * = (id, json) <> (Info.tupled, Info.unapply)
}
I am not sure whether this will work for your case, but this is the way to map any custom data type in Slick.
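Since the question asks about Play's JSON library specifically, the same MappedColumnType trick should carry over to play-json. A minimal, untested sketch, assuming play-json is on the classpath:
import play.api.libs.json.{JsValue, Json}
import slick.driver.MySQLDriver.api._

implicit val playJsonMapper = MappedColumnType.base[JsValue, String](
  json => Json.stringify(json), // JsValue -> String for storage
  str => Json.parse(str)        // String -> JsValue when reading
)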

How to handle MongoDB ObjectIds in Play framework using Reactivemongo?

I have a basic model with a case class:
case class Record(
  id: Option[String],
  data: Double,
  user: String
)
object RecordJsonFormats {
  import play.api.libs.json.Json
  implicit val recordFormat = Json.format[Record]
}
The field user is actually an ObjectId referring to another model, and id is an ObjectId as well. But when I try to change the String type to BSONObjectId, the macros in play.api.libs.json.Json break... so both user and id get saved as String, not ObjectId.
What is the optimal way to operate with ObjectIds in Play framework?
Maybe I should extend play.api.libs.json.Json with BSONObjectId?
Maybe there is a way to link models so that IDs are tracked automatically, without a need to declare them in the model?
You can override the default type of _id. You just need to specify the type you want in the case class.
import java.util.UUID
import play.api.libs.json._

case class Record(_id: UUID = UUID.randomUUID())

object Record {
  implicit val entityFormat = Json.format[Record]
}
MongoDB has a default _id field of type ObjectId, which uniquely identifies a document in a given collection. However, this _id typically has no semantic meaning in the application domain. Therefore, a good practice is to introduce an additional id field as the index of documents. This id can simply be a Long, nothing more.
Then you can search documents by id easily and not care much about ObjectId.
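A sketch of that practice (the field names here are hypothetical):
import play.api.libs.json._

// _id is left to MongoDB; id is the application-level index
case class Record(id: Long, data: Double, user: String)

object Record {
  implicit val recordFormat = Json.format[Record]
}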
This, https://github.com/luongbalinh/play-mongo/, is a sample project using Play 2.4.x and ReactiveMongo. Hopefully, it helps you.
For those using the official Mongo Scala driver and Play Framework 2.6+, here's my solution: https://gist.github.com/ntbrock/556a1add78dc287b0cf7e0ce45c743c1
import org.mongodb.scala.bson.ObjectId
import play.api.libs.json._
import scala.util.Try

object ObjectIdFormatJsonMacro extends Format[ObjectId] {

  def writes(objectId: ObjectId): JsValue = JsString(objectId.toString)

  def reads(json: JsValue): JsResult[ObjectId] = json match {
    case JsString(x) =>
      val maybeOID: Try[ObjectId] = Try { new ObjectId(x) }
      if (maybeOID.isSuccess) JsSuccess(maybeOID.get)
      else JsError("Expected ObjectId as JsString")
    case _ => JsError("Expected ObjectId as JsString")
  }
}
Use it like this in your business objects:
case class BusinessTime(_id: ObjectId = new ObjectId(), payRate: Double)

object BusinessTime {
  implicit val objectIdFormat = ObjectIdFormatJsonMacro
  implicit val businessTimeFormat = Json.format[BusinessTime]
}
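A quick round trip with the formats above (a sketch; the values are made up):
val bt = BusinessTime(payRate = 12.5)
val js: JsValue = Json.toJson(bt) // e.g. {"_id":"<24-char hex>","payRate":12.5}
val back: JsResult[BusinessTime] = js.validate[BusinessTime]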

ObjectId is not serialized to JSON

I am using Scalatra and configured my servlet to always return JSON (as described in the respective guide). Using MongoDB and Salat, I got to the point where I read a MongoDBObject back into my case class, which seems to work great.
My case class:
import org.bson.types.ObjectId
import com.novus.salat.annotations.raw.Key
case class Player(_id: ObjectId, firstName: String, ...)
Printing the case class object outputs this:
Player(547489ee93f4272e548ded63,Peter,...)
As you can see, the object id is an org.bson.types.ObjectId.
The automatic serialization to JSON sends this to the browser:
{"_id":{},"firstName":"Peter",...}
Where is my ObjectID? What am I doing wrong?
I found the following on the web:
https://gist.github.com/dozed/5631680
After a small test, it seems all I had to do was change the code in my servlet from
protected implicit val jsonFormats: Formats = DefaultFormats
to
protected implicit val jsonFormats: Formats = DefaultFormats + new ObjectIdSerializer
and add
import org.json4s.mongo.ObjectIdSerializer
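Putting the two changes together, the relevant part of the servlet looks like this (a sketch, assuming the json4s mongo module is on the classpath):
import org.json4s.{DefaultFormats, Formats}
import org.json4s.mongo.ObjectIdSerializer

protected implicit val jsonFormats: Formats =
  DefaultFormats + new ObjectIdSerializer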
Maybe this will help another Scalatra-NOOB... ;-)

Scala toJson when using net.liftweb.mongodb.record.MongoRecord and Argonaut

I'm a Scala newbie coming from a Ruby background, and I'm having trouble rendering a JSON response in my web service, which uses Scalatra, MongoDB with the Lift MongoRecord library, and Argonaut for JSON serialisation and deserialisation.
However, based on the examples given at http://argonaut.io/, I'm unable to figure out how this would work with the net.liftweb.mongodb.record library.
On compiling this I get an error about a type mismatch. The error description follows the code snippet.
package firstscalatraapp
import org.scalatra
import net.liftweb.mongodb._
import net.liftweb.mongodb.record.MongoRecord
import net.liftweb.mongodb.record.field.ObjectIdPk
import net.liftweb.record.field.StringField
import net.liftweb.record.field.IntField
import net.liftweb.record.field.PasswordField
import net.liftweb.record.field.DateTimeField
import net.liftweb.mongodb.record.MongoMetaRecord
import argonaut._
import Argonaut._
case class Person private extends MongoRecord[Person] with ObjectIdPk[Person] {
  def meta = Person

  object age extends IntField(this, 3)
  object name extends StringField(this, 29)
  object created_at extends DateTimeField(this)
  object password extends PasswordField(this)
}

object Person extends Person with MongoMetaRecord[Person] {
  implicit def PersonCodecJson: CodecJson[Person] =
    casecodec3(Person.apply, Person.unapply)("name", "age", "things")
}
The error I get is:
[error] found : () => firstscalatraapp.Person
[error] required: (?, ?, ?) => ?
[error] casecodec3(Person.apply, Person.unapply)("name", "age", "things")
[error] ^
[error] one error found
[error] (compile:compile) Compilation failed
This seems logical, because the constructor does not accept any parameters and the Mongo library seems to generate the vals for the fields I need in the class (I still don't fully understand what the Lift Mongo wrapper does).
So how do I define the implicit so that I can serialise an object of type Person?
Also, how do I define serialisation for collections, for instance when I have a List[Person]?
Thanks in advance. I would really appreciate any help i can get on this.
I'm just about to start using Argonaut, so I'm no expert on it, but with that said, your initial problem seems obvious.
casecodec3 needs a constructor and a deconstructor for the class you're defining the codec for. The Argonaut examples use case classes, which get automatically generated companion objects with apply/unapply for the fields defined, and for casecodec3 there need to be exactly three of them. In your case, the case class is of zero arity: it has no case class fields at all. The fields of the record are defined as inner objects with their own apply methods (very imperative stuff); that's just the way Lift's records are defined. So your apply method is just () => Person.
casecodec3 wants a function from a 3-tuple to Person and a function from Person to an Option of a 3-tuple. I would suggest skipping the case definition if you're going to use Lift records, and creating standalone functions instead. Something like:
object Person extends Person with MongoMetaRecord[Person] {
  implicit def PersonCodecJson: CodecJson[Person] =
    casecodec3(parse, serialize)("name", "age", "things")

  // Something like: build a record from plain values
  def parse(name: String, age: Int, things: Something) = {
    val p = Person.createRecord
    p.name(name)
    ...
  }

  // Deconstruct a record into an Option of a tuple, as unapply would
  def serialize(p: Person) = Some((p.name.get, p.age.get, p.things.get))
}
As for your other questions, I think you can head back to argonaut.io; their documentation seems quite alright. Maybe it was worse when you posted this question, as it is kind of old?
I'm about to replace all my serialization from lift-json to Argonaut right now, so if you're still stuck (probably not) I might be able to answer better in a bit.
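On the List[Person] part of the question: Argonaut derives codecs for standard collections from the element codec, so once the implicit CodecJson[Person] above compiles, encoding a list should need nothing extra. An untested sketch:
import argonaut._, Argonaut._

// With CodecJson[Person] in implicit scope, lists encode for free:
def peopleToJson(people: List[Person]): Json = people.asJson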