I have a Spring MVC project and a MySQL database which has a TimeSheetDetails table (columns: timeSheetDetailsId, date, employeeId, startTime, endTime).
I have a function that fetches rows by the date field:
@Override
public List<TimeSheetDetails> listAllTimeSheetDetailsByDate(LocalDate date) {
    CriteriaBuilder cb = getSession().getCriteriaBuilder();
    CriteriaQuery<TimeSheetDetails> cr = cb.createQuery(TimeSheetDetails.class);
    Root<TimeSheetDetails> root = cr.from(TimeSheetDetails.class);
    cr.select(root).where(cb.equal(root.get("date"), date));
    Query<TimeSheetDetails> query = getSession().createQuery(cr);
    List<TimeSheetDetails> results = query.getResultList();
    return results;
}
I am using Hibernate's getCriteriaBuilder() and createQuery() implementations.
The problem is that when I want to retrieve the row with date 2018-11-03, I get back the row for 2018-11-02 instead. I am using LocalDate, so this should not be a zero-based day-of-month indexing issue. Here is Hibernate's log:
INFO [default task-4] stdout - Hibernate: select timesheetd0_.timeSheetDetailsId as timeShee1_2_, timesheetd0_.date as date2_2_, timesheetd0_.employeeId as employee3_2_, timesheetd0_.endTime as endTime4_2_, timesheetd0_.startTime as startTim5_2_ from TimeSheetDetails timesheetd0_ where timesheetd0_.date=?
TRACE [default task-4] org.hibernate.type.descriptor.sql.BasicBinder - binding parameter [1] as [DATE] - [2018-11-03]
TRACE [default task-4] org.hibernate.type.descriptor.sql.BasicExtractor - extracted value ([timeShee1_2_] : [INTEGER]) - [4]
TRACE [default task-4] org.hibernate.type.descriptor.sql.BasicExtractor - extracted value ([date2_2_] : [DATE]) - [2018-11-02]
TRACE [default task-4] org.hibernate.type.descriptor.sql.BasicExtractor - extracted value ([employee3_2_] : [VARCHAR]) - [3]
TRACE [default task-4] org.hibernate.type.descriptor.sql.BasicExtractor - extracted value ([endTime4_2_] : [TIME]) - [05:00]
TRACE [default task-4] org.hibernate.type.descriptor.sql.BasicExtractor - extracted value ([startTim5_2_] : [TIME]) - [01:00]
I do not understand why it is returning the previous date. Furthermore, when I try to retrieve the 2018-11-30 date, I get back an empty result set, and I do not understand why that is happening either.
In my MariaDB 10.2.4 database I have this record:
id: 3
name: Jumper
category_id: 3
attributes: {"sensor_type": "CMOS", "processor": "Digic DV I", "scanning_system": "progressive", "mount_type": "PL", "monitor_type": "LCD"}
I get the error:
Error in query (4038): Syntax error in JSON text in argument 1 to function 'json_remove' at position 86
when trying to run:
UPDATE `products`
SET `attributes` = JSON_REMOVE(attributes , '$.mount_type')
WHERE `category_id` = 3;
JSON_EXTRACT, JSON_INSERT (and the other JSON functions) work fine with attributes as the first argument.
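For example, a query of this shape (a sketch using the table and column names from above) runs without the error:
SELECT JSON_EXTRACT(`attributes`, '$.mount_type')
FROM `products`
WHERE `category_id` = 3;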
Can anyone help?
It was a bug, fixed by this commit in the scope of MDEV-12262. The fix is already available on GitHub and will be included in MariaDB 10.2.5-rc, which is expected to be released in the next few days.
I have 3 functions in a DataWeave transform message component, and I would like to reuse these functions in 4 other transform message components.
Is there a way I can centralise the 3 functions and reference them from the 4 other transform message components, without copying and pasting the functions into every transform message where I want to use them?
I am using Anypoint Studio 6.1 and Mule 3.8.1.
The 3 functions in Dataweave that I would like to access globally in my project are:
%function acceptable(value) (
  value match {
    :null -> false,
    a is :array  -> a != [{}],
    o is :object -> o != {},
    s is :string -> s != "",
    default -> true
  }
)
%function filterKeyValue(key, value) (
  {(key): value} when acceptable(value) otherwise {}
)
%function removeFields(x)
  x match {
    a is :array  -> a map removeFields($),
    o is :object -> o mapObject (filterKeyValue($$, removeFields($))),
    default -> $
  }
These functions were taken from a Stack Overflow post about removing empty fields, and I am getting this error when I try to deploy the application:
INFO 2017-02-17 19:31:37,190 [main] org.mule.config.spring.MuleArtifactContext: Closing org.mule.config.spring.MuleArtifactContext#70b2fa10: startup date [Fri Feb 17 19:31:30 GMT 2017]; root of context hierarchy
ERROR 2017-02-17 19:31:37,478 [main] org.mule.module.launcher.application.DefaultMuleApplication: null
org.mule.mvel2.CompileException: [Error: unknown class or illegal statement: org.mule.mvel2.ParserContext#515940af]
[Near : {... value match { ....}]
^
[Line: 3, Column: 20]
at org.mule.mvel2.compiler.AbstractParser.procTypedNode(AbstractParser.java:1476) ~[mule-mvel2-2.1.9-MULE-010.jar:?]
Thanks
This has already been answered here; see if it helps you:
https://forums.mulesoft.com/questions/31467/invoking-java-or-groovy-method-in-dataweave-script.html
You can create a global function in the configuration section and call it from your DataWeave.
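For illustration, a rough sketch of the kind of configuration element that refers to in Mule 3.x. The function body below is a hypothetical MEL example, not the DataWeave code from the question (the <global-functions> element holds MEL, so DataWeave syntax cannot be pasted into it verbatim); how such a function is then referenced from a DataWeave script is covered in the linked thread.
<configuration>
    <expression-language>
        <global-functions>
            // hypothetical MEL helper; the DataWeave functions from the
            // question would need to be re-expressed in MEL before going here
            def isBlank(value) {
                value == null || value == ""
            }
        </global-functions>
    </expression-language>
</configuration>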
I am trying to get erlang-mysql-driver (https://code.google.com/archive/p/erlang-mysql-driver/issues) working. I managed to set it up and make queries, but there are two things I cannot do.
(BTW, I am new to Erlang)
Here is my code to connect to MySQL:
<erl>
out(Arg) ->
    mysql:start_link(p1, "127.0.0.1", "root", "azzkikr", "MyDB"),
    {data, Result} = mysql:fetch(p1, "SELECT * FROM messages").
</erl>
1. I cannot get data from the table.
mysql.erl doesn't contain any specific information on how to fetch table data, but this is as far as I could get:
{A,B} = mysql:get_result_rows(Result),
B.
And the result was this:
ERROR erlang code threw an uncaught exception:
File: /Users/{username}/Sites/Yaws/index.yaws:1
Class: error
Exception: {badmatch,[[4,0,<<"This is done baby!">>,19238],
[5,0,<<"Success">>,19238],
[6,0,<<"Hello">>,19238]]}
Req: {http_request,'GET',{abs_path,"/"},{1,1}}
Stack: [{m181,out,1,
[{file,"/Users/{username}/.yaws/yaws/default/m181.erl"},
{line,18}]},
{yaws_server,deliver_dyn_part,8,
[{file,"yaws_server.erl"},{line,2818}]},
{yaws_server,aloop,4,[{file,"yaws_server.erl"},{line,1232}]},
{yaws_server,acceptor0,2,[{file,"yaws_server.erl"},{line,1068}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]
I understand that somehow I need to take the second element and iterate over it to get each row, but strings come back in a different format: the stored string is Success, yet the returned value is <<"Success">>.
{badmatch,[[4,0,<<"This is done baby!">>,19238],
[5,0,<<"Success">>,19238],
[6,0,<<"Hello">>,19238]]}
First question: How do I get data from the table?
2. How do I insert values into the table using variables?
I can insert data into the table using this method:
Msg = "Hello World",
mysql:prepare(add_message,<<"INSERT INTO messages (`message`) VALUES (?)">>),
mysql:execute(p1, add_message, [Msg]).
But there are two things I am having trouble with:
1. I am inserting the data without the << and >> delimiters, because when I write Msg = << ++ "Hello World" >>, Erlang throws an exception (I think I am doing something wrong there). I don't know whether they are required, but without them I am able to insert data into the table, except that this error bothers me after execution:
yaws code at /Users/{username}/Yaws/index.yaws:1 crashed or ret bad val:{updated,
{mysql_result,
[],
[],
1,
[]}}
Req: {http_request,'GET',{abs_path,"/"},{1,1}}
The returned atom is updated, even though I asked it to insert data.
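For reference, a minimal sketch of what seems to be happening here, assuming the insert runs directly inside a Yaws out/1 like the first snippet: mysql:execute/3 returns an {updated, ...} tuple on a successful INSERT, and if that tuple is the last expression of out/1, Yaws rejects it as a page return value. Matching it and returning something Yaws understands avoids the crash:
<erl>
out(Arg) ->
    mysql:start_link(p1, "127.0.0.1", "root", "azzkikr", "MyDB"),
    Msg = "Hello World",
    mysql:prepare(add_message, <<"INSERT INTO messages (`message`) VALUES (?)">>),
    %% a successful INSERT comes back as {updated, ...}
    {updated, _Result} = mysql:execute(p1, add_message, [Msg]),
    %% return a value Yaws knows how to render, not the mysql result tuple
    {html, "inserted"}.
</erl>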
Second question: How do I insert data into the table in a proper way?
Error:
{badmatch,[[4,0,<<"This is done baby!">>,19238],
[5,0,<<"Success">>,19238],
[6,0,<<"Hello">>,19238]]}
tells you that the returned value is:
[[4,0,<<"This is done baby!">>,19238],
[5,0,<<"Success">>,19238],
[6,0,<<"Hello">>,19238]]
which obviously cannot match either {data, Data} or {A, B}. You can render your data as:
<erl>
out(Arg) ->
    mysql:start_link(p1, "127.0.0.1", "root", "azzkikr", "MyDB"),
    %% fetch returns {data, Result}; the rows come from get_result_rows/1
    {data, Result} = mysql:fetch(p1, "SELECT * FROM messages"),
    Rows = mysql:get_result_rows(Result),
    {ehtml,
     [{table, [{border, "1"}],
       [{tr, [],
         [{td, [],
           case Val of
               _ when is_binary(Val) -> yaws_api:htmlize(Val);
               _ when is_integer(Val) -> integer_to_binary(Val)
           end}
          || Val <- Row
         ]}
        || Row <- Rows
       ]}
     ]
    }.
</erl>
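A small follow-up on the <<"Success">> confusion from the question: values like <<"Success">> are Erlang binaries, which is how the driver returns text columns. If a plain string (character list) is needed instead of a binary, the standard binary_to_list/1 conversion applies, for example:
%% <<"Success">> is a binary; convert it to a string (character list) if needed
Text = binary_to_list(<<"Success">>).  %% yields "Success"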
I am trying to insert into a table in batches of 100 (I heard it is the best size to use with MySQL). I use Scala 2.10.4 with sbt 0.13.6, and the JDBC framework I am using is ScalikeJDBC with HikariCP. My connection settings look like this:
val dataSource: DataSource = {
val ds = new HikariDataSource()
ds.setDataSourceClassName("com.mysql.jdbc.jdbc2.optional.MysqlDataSource");
ds.addDataSourceProperty("url", "jdbc:mysql://" + org.Server.GlobalSettings.DB.mySQLIP + ":3306?rewriteBatchedStatements=true")
ds.addDataSourceProperty("autoCommit", "false")
ds.addDataSourceProperty("user", "someUser")
ds.addDataSourceProperty("password", "not my password")
ds
}
ConnectionPool.add('review, new DataSourceConnectionPool(dataSource))
The insert code:
try {
implicit val session = AutoSession
val paramList: scala.collection.mutable.ListBuffer[Seq[(Symbol, Any)]] = scala.collection.mutable.ListBuffer[Seq[(Symbol, Any)]]()
.
.
.
for (rev <- reviews) {
paramList += Seq[(Symbol, Any)](
'review_id -> rev.review_idx,
'text -> rev.text,
'category_id -> rev.category_id,
'aspect_id -> aspectId,
'not_aspect -> noAspect /*0*/ ,
'certainty_aspect -> rev.certainty_aspect,
'sentiment -> rev.sentiment,
'sentiment_grade -> rev.certainty_sentiment,
'stars -> rev.stars
)
}
.
.
.
try {
if (paramList != null && paramList.length > 0) {
val result = NamedDB('review) localTx { implicit session =>
sql"""INSERT INTO `MasterFlow`.`classifier_results`
(
`review_id`,
`text`,
`category_id`,
`aspect_id`,
`not_aspect`,
`certainty_aspect`,
`sentiment`,
`sentiment_grade`,
`stars`)
VALUES
( {review_id}, {text}, {category_id}, {aspect_id},
{not_aspect}, {certainty_aspect}, {sentiment}, {sentiment_grade}, {stars})
"""
.batchByName(paramList.toIndexedSeq: _*)/*.__resultOfEnsuring*/
.apply()
}
Each time I insert a batch it takes 15 seconds. My logs:
29/10/2014 14:03:36 - DEBUG[Hikari Housekeeping Timer (pool HikariPool-0)] HikariPool - Before cleanup pool stats HikariPool-0 (total=10, inUse=1, avail=9, waiting=0)
29/10/2014 14:03:36 - DEBUG[Hikari Housekeeping Timer (pool HikariPool-0)] HikariPool - After cleanup pool stats HikariPool-0 (total=10, inUse=1, avail=9, waiting=0)
29/10/2014 14:03:46 - DEBUG[default-akka.actor.default-dispatcher-3] StatementExecutor$$anon$1 - SQL execution completed
[SQL Execution]
INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
.
.
.
INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
... (total: 100 times); (15466 ms)
[Stack Trace]
...
logic.DB.ClassifierJsonToDB$$anonfun$1.apply(ClassifierJsonToDB.scala:119)
logic.DB.ClassifierJsonToDB$$anonfun$1.apply(ClassifierJsonToDB.scala:96)
scalikejdbc.DBConnection$$anonfun$_localTx$1$1.apply(DBConnection.scala:252)
scala.util.control.Exception$Catch.apply(Exception.scala:102)
scalikejdbc.DBConnection$class._localTx$1(DBConnection.scala:250)
scalikejdbc.DBConnection$$anonfun$localTx$1.apply(DBConnection.scala:257)
scalikejdbc.DBConnection$$anonfun$localTx$1.apply(DBConnection.scala:257)
scalikejdbc.LoanPattern$class.using(LoanPattern.scala:33)
scalikejdbc.NamedDB.using(NamedDB.scala:32)
scalikejdbc.DBConnection$class.localTx(DBConnection.scala:257)
scalikejdbc.NamedDB.localTx(NamedDB.scala:32)
logic.DB.ClassifierJsonToDB$.insertBulk(ClassifierJsonToDB.scala:96)
logic.DB.ClassifierJsonToDB$$anonfun$bulkInsert$1.apply(ClassifierJsonToDB.scala:176)
logic.DB.ClassifierJsonToDB$$anonfun$bulkInsert$1.apply(ClassifierJsonToDB.scala:167)
scala.collection.Iterator$class.foreach(Iterator.scala:727)
...
When I run it on the server that hosts the MySQL database it runs fast. What can I do to make it run faster from a remote computer?
In case anyone needs this: I had a similar problem batch-inserting 10000 records into MySQL with ScalikeJdbc, and it was solved by setting rewriteBatchedStatements to true in the JDBC URL ("jdbc:mysql://host:3306/db?rewriteBatchedStatements=true"). It reduced the batch insert time from 40 seconds to 1 second!
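For reference, a minimal sketch of where that flag goes with the HikariCP setup from the question (the host below is a placeholder):
val ds = new HikariDataSource()
ds.setDataSourceClassName("com.mysql.jdbc.jdbc2.optional.MysqlDataSource")
// rewriteBatchedStatements=true lets Connector/J rewrite the batch into
// multi-row INSERTs instead of sending 100 separate statements
ds.addDataSourceProperty("url", "jdbc:mysql://db-host:3306/MasterFlow?rewriteBatchedStatements=true")
ds.addDataSourceProperty("user", "someUser")
ds.addDataSourceProperty("password", "...")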
I guess this is not an issue with ScalikeJDBC or HikariCP. You should investigate the network environment between your machine and the MySQL server.