In my tests, I'm trying to set up a test database based on the schema defined in Slick, which in turn is generated with the Slick code generator from an existing database. To do so, I'm extending WithApplication and overriding the around method like so:
abstract class WithDbData extends WithApplication {
  def ds = DB.getDataSource("test")
  def slickDb = Database.forDataSource(ds)

  override def around[T: AsResult](t: => T): Result = super.around {
    setup
    val result = AsResult.effectively(t)
    teardown
    result
  }

  def setup = {
    slickDb.withSession { implicit session =>
      ddl.create
    }
  }

  def teardown = {
    slickDb.withSession { implicit session =>
      ddl.drop
    }
  }
}
But when I run the tests, I'm getting a
MySqlSyntaxErrorException : Column length too big for column 'text' (max = 21845); use BLOB or TEXT instead (null:-2).
I'm using MySQL for development and for testing, and part of the schema generated by the code generator looks like this:
/** Database column text DBType(TEXT), Length(65535,true) */
val text: Column[String] = column[String]("text", O.Length(65535,varying=true))
It seems that the DDL generator is trying to create the text column as a varchar (or something like it) instead of as text, as originally intended, because in the original database that column is of type text.
The autogenerated Slick data model has profile = scala.slick.driver.MySQLDriver, and I've also imported scala.slick.driver.MySQLDriver.simple._ in my test class, so I shouldn't have any problem from mixing drivers.
I'm using play 2.3, slick 2.1.0, codegen 2.1.0 and play-slick 0.8.0.
I'd appreciate any light on this matter.
It's a bug in the code generator: github.com/slick/slick/issues/975
You can work around it by customizing the code generator. See http://slick.typesafe.com/doc/2.1.0/code-generation.html
One easy way to do it is to override def dbType = true. You lose cross-vendor portability but get the exact types. You may have to filter Length out of def options, not sure.
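For illustration, a sketch of such a customized generator (the class name is hypothetical; dbType and options are the codegen hooks mentioned above):

import scala.slick.codegen.SourceCodeGenerator
import scala.slick.model.Model

class CustomCodeGenerator(model: Model) extends SourceCodeGenerator(model) {
  override def Table = new Table(_) {
    override def Column = new Column(_) {
      // Emit the exact database type, e.g. O.DBType("TEXT"),
      // instead of a portable type with a varying Length.
      override def dbType = true
      // If the generated O.Length still conflicts, filtering it out of
      // `options` here may also be needed.
    }
  }
}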
Also discussed here: https://github.com/slick/slick/issues/982
You may change the column manually.
Create the table as a standard varchar(255):
val text: Column[String] = column[String]("text", O.Length(255,varying=true))
After that you can change the database:
alter table your_table change `text` `text` text;
And then restore the original length in the Slick definition:
val text: Column[String] = column[String]("text", O.Length(65535,varying=true))
On the other hand, you can add a custom field definition:
def recipe = column[Option[String]]("text_column", O.DBType("TEXT"))
Note: I changed the column name to text_column because text is a reserved word in MySQL.
I have two SQL statements to be executed with a validity check. My need is: execute the first query, store the response in an object, check whether that object is empty, and execute the second query only if it is not empty.
So I have tried something like the following.
In rolerepository.scala:
override val allQuery = s"""
  select UserRoles.* from
  (select CASE rbac.roleTypeID
  ELSE rbac.name JOIN dirNetworkInfo ni
    ON UserRoles.PersonID = ni.PersonID
  where ni.Loginname = {loginName}
  and UserRoles.roleName in ('Business User ','Administrator')"""
(This is just a sample of the query; it is not written out fully here.)
Then I map it to an object, with the model class written outside:
override def map2Object(implicit map: Map[String, Any]): HierarchyEntryBillingRoleCheck = {
  HierarchyEntryBillingRoleCheck(str("roleName"), oint("PersonID"))
}
Then I have written the getAll method to execute the query:
override def getAll(implicit loginName: String): Future[Seq[HierarchyEntryBillingRoleCheck]] = {
  doQueryIgnoreRowErrors(allQuery, "loginName" -> loginName)
}
Then I have written the method to check whether the response from the first SQL statement is empty or not. This is where I'm stuck and unable to proceed further.
def method1() = {
  val getallresponse = HierarchyEntryBillingRoleCheck
  getallresponse.toString
  if (getallresponse != " ")
    billingMonthCheckRepository.getrepo()
}
I am getting an error (type mismatch) at the last closing brace, and I don't know what other logic can be used here.
Can anyone explain and give me a solution for this?
I also tried to use a for comprehension in the controller, but couldn't work out how. I tried:
def getAll(implicit queryParams: QueryParams,
           billingMonthmodel: Seq[HierarchyEntryBillingRoleCheck]): Action[AnyContent] =
  securityService.authenticate() { implicit request =>
    withErrorRecovery { req =>
      toJson {
        repository.getAll(request.user.loginName)
        for {
          rolenamecheck <- billingMonthmodel
        } yield rolenamecheck
      }
    }
  }
You don't say which db access method you are using (I'm assuming Anorm). One way of approaching this is:
Create a case class matching your table
Create a parser matching your case class
Use Option (or Either) to return a row for a specific set of parameters
For example, perhaps you have:
case class UserRole(id: Int, loginName: String, roleName: String)
And then
object UserRole {
  val sqlFields = "ur.id, ur.loginName, ur.roleName"

  val userRoleParser = {
    get[Int]("id") ~
    get[String]("loginName") ~
    get[String]("roleName") map {
      case id ~ loginName ~ roleName =>
        UserRole(id, loginName, roleName)
    }
  }
...
The parser maps the row to your case class. The next step is creating single-row methods like findById or findByLoginName and multi-row methods, perhaps allForRoleName or other generic filter methods. In your case (assuming a single role per loginName) there might be something like:
def findByLoginName(loginName: String): Option[UserRole] = DB.withConnection { implicit c =>
  SQL(s"select $sqlFields from userRoles ur ...")
    .on('loginName -> loginName)
    .as(userRoleParser.singleOpt)
}
The .as(parser...) call is key. Typically, you'll need at least:
as(parser.singleOpt), which returns an Option of your case class
as(parser *), which returns a List of your case class (you'll need this if multiple roles could exist for a login)
as(scalar[Long].singleOpt), which returns an Option[Long] and is handy for returning counts or exists values
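For the multi-row case, a sketch under the same assumptions (the allForRoleName name is hypothetical):

// Hypothetical multi-row finder, same assumed table and parser as above.
def allForRoleName(roleName: String): List[UserRole] = DB.withConnection { implicit c =>
  SQL(s"select $sqlFields from userRoles ur where ur.roleName = {roleName}")
    .on('roleName -> roleName)
    .as(userRoleParser *)
}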
Then, to return to your question a little more directly: you can call your find method, and if it returns something, continue with the second method call, perhaps like this:
val userRole = findByLoginName(loginName)
if (userRole.isDefined)
  billingMonthCheckRepository.getrepo()
or, a little more idiomatically
findByLoginName(loginName).map { userRole =>
  billingMonthCheckRepository.getrepo()
}
...
I've shown the find method returning an Option, but in reality we find it more useful to return an Either[String, (your case class)], where the String contains the reason for failure. Either is cool.
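A sketch of that variant, under the same assumptions as above; Option.toRight turns the Option into an Either carrying a failure message:

// Hypothetical Either variant: Left carries the reason for failure.
def findByLoginName(loginName: String): Either[String, UserRole] = DB.withConnection { implicit c =>
  SQL(s"select $sqlFields from userRoles ur where ur.loginName = {loginName}")
    .on('loginName -> loginName)
    .as(userRoleParser.singleOpt)
    .toRight(s"no role found for login name $loginName")
}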
On my version of play (2.3.x), the imports for the above are:
import play.api.db._
import play.api.Play.current
import anorm._
import anorm.SqlParser._
You're going to be doing this sort of thing a lot so worth finding a set of patterns that works for you.
Play's documentation can be tough to trudge through if you're unfamiliar with it, so I won't just leave a link to it.
You have to inject an instance of your database into your controller. This then gives it to you as a field on the controller:
@Singleton
class LoginRegController @Inject()(myDB: Database, cc: ControllerComponents) {
  // do stuff
}
But it's bad practice to actually use this connection within the controller, because JDBC is a blocking operation, so you need to create a model which takes the db as a parameter to a method. Don't make the object's constructor take the DB and store it as a field; for some reason this creates connection leaks, and the connections won't be released when they are done with your query. Not sure why, but that's how it is.
Create a model object that you will use to execute your query. Instead of passing the DB through the object's constructor, pass it through the method you will create:
object DBChecker {
  def attemptLogin(db: Database, password: String): String = {
  }
}
In your method, use .withConnection { conn => to access your JDBC connection. So, something like this:
import java.sql.ResultSet

object DBChecker {
  def attemptLogin(db: Database, password: String): String = {
    var username: String = ""
    db.withConnection { conn =>
      val query: String = s"SELECT uploaded_by, date_added FROM tableName WHERE password = '$password';"
      val stmt = conn.createStatement()
      val qryResult: ResultSet = stmt.executeQuery(query)
      // then iterate over the ResultSet to get the results from the query
      if (qryResult.next()) {
        username = qryResult.getString("uploaded_by")
      }
    }
    username
  }
}
But note: please look into the use of PreparedStatement objects; doing it this way leaves you vulnerable to SQL injection.
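A minimal sketch of the same lookup with a PreparedStatement, using the same hypothetical table and columns as above:

// The driver handles quoting of the bound parameter, closing the injection hole.
db.withConnection { conn =>
  val ps = conn.prepareStatement("SELECT uploaded_by FROM tableName WHERE password = ?")
  ps.setString(1, password)
  val rs = ps.executeQuery()
  if (rs.next()) username = rs.getString("uploaded_by")
}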
In your controller, as long as you import the object, you can then call that object's methods from the controller you made in step 1.
import com.path.to.object.DBChecker

@Singleton
class LoginRegController @Inject()(myDB: Database, cc: ControllerComponents) {
  def attemptLogin(pass: String) = Action { implicit request: Request[AnyContent] =>
    val result: String = DBChecker.attemptLogin(myDB, pass)
    // do your work with the results here
    Ok(result)
  }
}
Let's say I have the two schemas below. Both of them are in the same MySQL server:
master
base
The problem is that I can't combine actions against the two schemas in a single Database run.
If I execute plain SQL queries via sql"...", it works without any problem:
def fooAction(name: String) = {
  sql"""
    SELECT AGE FROM MASTER.FOO_TABLE WHERE NAME = $name
  """.as[String].head
}

def barAction(id: String) = {
  sql"""
    SELECT BAZ FROM BASE.BAR_TABLE WHERE ID = $id
  """.as[String].head
}

def execute = {
  // It doesn't matter which db I use here, but let's say baseDb points to the BASE schema.
  baseDb.run(for {
    foo <- fooAction("sample")
    bar <- barAction("sample")
  } yield foo + bar)
}
But the code below doesn't work:
class FooTableDAO @Inject()(@NamedDatabase("master") protected val dbConfigProvider: DatabaseConfigProvider) extends HasDatabaseConfigProvider[JdbcProfile] {
  import dbConfig.driver.api._
  val table = TableQuery[FooTable]
  def fooAction(name: String) = table.filter(_.name === name).map(_.age).result.head
}

class BarTableDAO @Inject()(@NamedDatabase("base") protected val dbConfigProvider: DatabaseConfigProvider) extends HasDatabaseConfigProvider[JdbcProfile] {
  import dbConfig.driver.api._
  val table = TableQuery[BarTable]
  def barAction(id: String) = table.filter(_.id === id).map(_.baz).result.head
}
def execute = {
  // com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'BASE.FOO_TABLE' doesn't exist
  baseDb.run(for {
    foo <- fooTableDAO.fooAction("sample")
    bar <- barTableDAO.barAction("sample")
  } yield foo + bar)
}
Since baseDb is pointing to the BASE schema, it tries to find FOO_TABLE there. All I want Slick to do is use a different schema for each query, but I couldn't find a way.
Currently I do DBIO.from(db.run(...)) when an action against the other schema is needed inside a for comprehension over a DBIO action, or I execute each action via its own run and wrap them with EitherT (the monad transformer for Either from the Scala library cats) to keep using for comprehensions.
Is there any way to handle more than 2 schemas in a single DBIO Action except using plain text query?
Thanks in advance.
I think (though I am not a MySQL expert) you mean schema, not database. At least this is what I see from your SQL samples.
Can't you just use the schema attribute in your Slick table mappings? Here you have a complete answer on using different schemas: https://stackoverflow.com/a/41090987/2239369
Here is the relevant piece of code:
class StudentTable(tag: Tag) extends Table[Student](tag, _schemaName = Option("database2"), "STUDENT") {
  ...
}
(Notice the _schemaName attribute.)
With this in mind, the answer to this part of the question:
Is there any way to handle more than 2 schemas in a single DBIO Action except using plain text query?
is: Yes you can.
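For the question's setup, that means baking the schema name into each mapping; a sketch with hypothetical FooTable/BarTable definitions:

import slick.driver.MySQLDriver.api._

class FooTable(tag: Tag)
  extends Table[(String, String)](tag, _schemaName = Some("MASTER"), "FOO_TABLE") {
  def name = column[String]("NAME")
  def age  = column[String]("AGE")
  def *    = (name, age)
}

class BarTable(tag: Tag)
  extends Table[(String, String)](tag, _schemaName = Some("BASE"), "BAR_TABLE") {
  def id  = column[String]("ID")
  def baz = column[String]("BAZ")
  def *   = (id, baz)
}

// Each query now renders as MASTER.FOO_TABLE / BASE.BAR_TABLE respectively,
// so the original for comprehension can run on baseDb alone.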
I have a SQLAlchemy model that has one column of type geometry, which is defined like this:
point_geom = Column(Geometry('POINT'), index=True)
I'm using the geoalchemy2 module:
from geoalchemy2 import Geometry
Then I make my queries using the SQLAlchemy ORM, and everything works fine. For example:
data = session.query(myModel).filter_by(...)
My problem is that when I need to get the sql statement of the query object, I use the following code:
sql = data.statement.compile(dialect=postgresql.dialect())
But the column of type geometry is converted to Byte[], so the resulting sql statement is this:
SELECT column_a, column_b, ST_AsBinary(point_geom) AS point_geom
FROM tablename WHERE ...
What should be done to avoid the conversion of the geometry type to byte type?
I had the same problem when I was working with Flask-SQLAlchemy and GeoAlchemy2, and I solved it as follows.
You just need to create a new subclass of the Geometry type.
If you look at the documentation, the arguments of the Geometry type are given:
ElementType - the type of the returned element; by default it's WKBElement (well-known binary element)
as_binary - the function to use; by default it's ST_AsEWKB, which is what causes the problem in your case
from_text - the geometry constructor used to create, insert and update elements; by default it's ST_GeomFromEWKT
So what did I do? I just created a new subclass with the required function, element and constructor, and used the Geometry type in my db models as I always do.
from geoalchemy2 import Geometry as BaseGeometry
from geoalchemy2.elements import WKTElement

class Geometry(BaseGeometry):
    from_text = 'ST_GeomFromText'
    as_binary = 'ST_asText'
    ElementType = WKTElement
As you can see, I have changed only these three arguments of the base class.
This will return you a String with the required column values.
I think you can specify that in your query. Something like this:
from geoalchemy2.functions import ST_AsGeoJSON
query = session.query(ST_AsGeoJSON(YourModel.geom_column))
That should change your conversion. There are many conversion functions in the geoalchemy documentation.
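Tying that back to the question's compile step, a sketch (MyModel stands in for your model class):

from sqlalchemy.dialects import postgresql
from geoalchemy2.functions import ST_AsGeoJSON

# The compiled statement should now select ST_AsGeoJSON(point_geom)
# instead of ST_AsBinary(point_geom).
data = session.query(ST_AsGeoJSON(MyModel.point_geom))
sql = data.statement.compile(dialect=postgresql.dialect())
print(sql)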
Any idea how to do a conditional drop in Slick 3.0, to prevent An exception or error caused a run to abort: Unknown table 'MY_TABLE' if for some reason it doesn't exist?
def clear = {
  val operations = DBIO.seq(
    myTable.schema.drop,
    // other table definitions
    ...
  )
  db.run(operations)
}
I went down the MTable route, but at least in Postgres it's a big hassle.
Try
def qDropSchema = sqlu"""drop table if exists your-table-name;"""
Watch out for case-sensitivity issues with the table name. I ran into odd problems with that in Postgres; I don't know about MySQL.
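For instance, a usage sketch assuming a Slick 3 db value; quoting the name preserves its case in Postgres:

// Quoted identifier: Postgres folds unquoted names to lowercase.
val dropIfExists = sqlu"""drop table if exists "MY_TABLE";"""
db.run(dropIfExists)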
Let me try to answer your question. I think you can first check the availability of the table using MTable and then drop it if it exists, more or less like below:
import scala.slick.jdbc.meta._

if (MTable.getTables("table_name").list().nonEmpty) {
  // the table exists; drop it here
}
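In Slick 3 (which the question targets), the same check can be phrased as an action combinator; a sketch, assuming myTable is the TableQuery from the question:

import scala.concurrent.ExecutionContext.Implicits.global
import slick.jdbc.meta.MTable

// Drop the table only when the catalog says it exists.
val dropIfExists = MTable.getTables("table_name").flatMap { tables =>
  if (tables.nonEmpty) myTable.schema.drop else DBIO.successful(())
}
db.run(dropIfExists)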
I did this:
val personQuery = TableQuery[PersonTable]
val addressQuery = TableQuery[AddressTable]
...
val setupAction = DBIO.seq(
  sqlu"SET FOREIGN_KEY_CHECKS = 0",
  sqlu"DROP TABLE IF EXISTS #${personQuery.baseTableRow.tableName}",
  sqlu"DROP TABLE IF EXISTS #${addressQuery.baseTableRow.tableName}",
  sqlu"SET FOREIGN_KEY_CHECKS = 1"
)
val setupFuture = db.run(setupAction)
Note how you need to use #${} rather than ${}; otherwise Slick will fire off something like:
DROP TABLE IF EXISTS 'PERSON'
which won't work.
I am currently using Slick 3.2.0. The solution I am giving may also apply to earlier versions of the framework, but I have not verified this.
If the only problem is dropping the table if it exists without throwing an exception, you can use the action combinators for this.
I have a series of tests, for each of which I run the create/populate/drop statements on an H2 in-memory database.
Suppose you have two tables Canal and SubCanal (SubCanal has a foreign key on Canal, so you would like to drop it first if it exists) for which you have already declared TableQuery variables, such as:
lazy val canals = TableQuery[CanalTable]
lazy val subcanals = TableQuery[SubCanalTable]

// we don't add SubCanal yet, to check that no exception is produced, and then
// add it for further testing.
lazy val ddl = canals.schema // ++ subcanals.schema
...and I provided helper methods as follows:
def create: DBIO[Unit] = ddl.create
def drop: DBIO[Unit] = ddl.drop

def popCanal = canals ++= Seq(
  Canal("Chat"),
  Canal("Web"),
  Canal("Mail"))
The above just creates the actions, but what is cool is that Slick will attempt to drop the SubCanal table and the Canal table and will encapsulate any exception in a Try[...]. So this will run smoothly:
val db = Database.forConfig("yourBaseConfig")
val res = db.run(drop)
And this will run also:
val db = Database.forConfig("yourBaseConfig")
val res1 = db.run(
  create >>
  popCanal >>
  canals.result
)
// ... some interesting computation ...
val res2 = db.run(drop)
Note: the SubCanal schema is still commented out, so that table has never been created; the drop is nevertheless attempted for it and fails, but does not raise the exception.
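If you want the swallowing to be explicit, Slick's asTry combinator wraps an action's outcome in a scala.util.Try; a sketch under the same setup:

// DBIO[Unit] becomes DBIO[Try[Unit]], so a missing table surfaces
// as Failure(...) instead of a failed Future.
val res3 = db.run(drop.asTry)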
More on combining actions (combinators):
DBIO Action Doc (3.2.0)
The book Essential Slick is free (but you may give some money).
How do I update an HSTORE field with Flask-Admin?
The regular ModelView doesn't show the HSTORE field in the Edit view. It shows nothing; no control at all. In the List view, it shows a column with data in JSON notation. That's fine with me.
Using a custom ModelView, I can change the HSTORE field into a TextAreaField. This shows me the HSTORE field in JSON notation in the Edit view, but I cannot edit/update it. In the List view, it still shows me the object in JSON notation. Looks fine to me.
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)
When I attempt to save/edit the JSON, I receive this error:
sqlalchemy.exc.InternalError
InternalError: (InternalError) Unexpected end of string
LINE 1: UPDATE mytable SET attributes='{}' WHERE mytable.id = ...
^
'UPDATE mytable SET attributes=%(attributes)s WHERE mytable.id = %(mytable_id)s' {'attributes': u'{}', 'mytable_id': 14L}
Now -- using code, I can get something to save into the HSTORE field:
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)

    def on_model_change(self, form, model, is_created):
        model.attributes = {"a": "1"}
        return
This basically overrides the model and puts this object into it. I can then see the object in the List view and the Edit view. Still not good enough -- I want to save/edit the object that the user typed in.
I tried to parse the content from the form as JSON and save it back to the model. This doesn't work:
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)

    def on_model_change(self, form, model, is_created):
        x = form.data['attributes']
        y = json.loads(x)
        model.attributes = y
        return
json.loads(x) says this:
ValueError: Expecting property name: line 1 column 1 (char 1)
and here are some sample inputs that fail:
{u's': u'ff'}
{'s':'ff'}
However, this input works:
{}
Blank also works
This is my SQL Table:
CREATE TABLE mytable (
    id BIGSERIAL UNIQUE PRIMARY KEY,
    attributes hstore
);
This is my SQLAlchemy model:
class MyTable(Base):
    __tablename__ = u'mytable'
    id = Column(BigInteger, primary_key=True)
    attributes = Column(HSTORE)
Here is how I added the views to the admin object:
admin.add_view(ModelView(models.MyTable, db.session))
Add the view using a custom ModelView:
admin.add_view(MyView(models.MyTable, db.session))
(But I don't add both views at the same time -- I get a Blueprint name collision error; separate issue.)
I also attempted to use a form field converter. I couldn't get it to actually hit the code.
class MyModelConverter(AdminModelConverter):
    def post_process(self, form_class, info):
        raise Exception('here I am')  # but it never hits this
        return form_class

class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)
The answer gives you a bit more than you asked for.
First of all, it "extends" hstore to be able to store actual JSON, not just key-value pairs.
So this structure is also OK:
{"key":{"inner_object_key":{"Another_key":"Done!","list":["no","problem"]}}}
So, to begin: your ModelView should use a custom converter:
class ExtendedModelView(ModelView):
    model_form_converter = CustomAdminConverter
The converter itself should know how to handle the hstore dialect type:
class CustomAdminConverter(AdminModelConverter):
    @converts('sqlalchemy.dialects.postgresql.hstore.HSTORE')
    def conv_HSTORE(self, field_args, **extra):
        return DictToHstoreField(**field_args)
This one, as you can see, uses a custom WTForms field which converts data in both directions:
class DictToHstoreField(TextAreaField):
    def process_data(self, value):
        if value is None:
            value = {}
        else:
            for key, obj in value.iteritems():
                if (obj.startswith("{") and obj.endswith("}")) or (obj.startswith("[") and obj.endswith("]")):
                    try:
                        value[key] = json.loads(obj)
                    except:
                        pass
        self.data = json.dumps(value)

    def process_formdata(self, valuelist):
        if valuelist:
            self.data = json.loads(valuelist[0])
            for key, obj in self.data.iteritems():
                if isinstance(obj, dict) or isinstance(obj, list):
                    self.data[key] = json.dumps(obj)
                if isinstance(obj, int):
                    self.data[key] = str(obj)
The final step is to actually use this data in the application.
I did not do this in a generic, nice way for SQLAlchemy, since I was using it with flask-restful, so I only have an adaptation for flask-restful in one direction, but I think it's easy to get the idea from here and do the rest.
If your case is simple key-value storage, nothing additional needs to be done; just use it as is.
But if you want to unwrap the JSON somewhere in your code, it's as simple as this (just wrap it in a function wherever you use it):

if (value.startswith("{") and value.endswith("}")) or (value.startswith("[") and value.endswith("]")):
    value = json.loads(value)
Creating dynamic fields for an actually nice, non-JSON way of editing the data is also possible by extending FormField and adding some JavaScript for adding/removing fields, but that is a whole different story; in my case I needed actual JSON storage, with blackjack and lists :)
I was working with the Postgres JSON datatype, and the above solution worked great with minor modifications. I tried:
'sqlalchemy.dialects.postgresql.json.JSON',
'sqlalchemy.dialects.postgresql.JSON',
'dialects.postgresql.json.JSON',
'dialects.postgresql.JSON'
None of these versions worked.
Finally, the following change worked:
@converts('JSON')
And I changed DictToHstoreField to the following:
class DictToJSONField(fields.TextAreaField):
    def process_data(self, value):
        if value is None:
            value = {}
        self.data = json.dumps(value)

    def process_formdata(self, valuelist):
        if valuelist:
            self.data = json.loads(valuelist[0])
        else:
            self.data = '{}'
Although this might not be the answer to your question: by default, SQLAlchemy's ORM doesn't detect in-place changes to HSTORE field values. But fortunately there's a solution: SQLAlchemy's MutableDict type:
from sqlalchemy.ext.mutable import MutableDict
from sqlalchemy.dialects.postgresql import HSTORE

class MyClass(Base):
    __tablename__ = 'mytable'
    id = Column(Integer, primary_key=True)
    attributes = Column(MutableDict.as_mutable(HSTORE))
Now when you change something in-place:
my_object.attributes['some_key'] = 'some value'
the hstore field will be updated after session.commit().