Why do I get dates back off by one when using jOOQ? - mysql

We are using jOOQ to talk to a MySQL database containing this table:
CREATE TABLE daily_sessions
(
    session_id INT AUTO_INCREMENT NOT NULL,
    user_id    VARCHAR(45) NULL,
    day        DATE NULL,
    CONSTRAINT PK_DAILY_SESSIONS PRIMARY KEY (session_id)
);
We have enabled the support for the JSR-310 types, so we are using LocalDate on the Java/Kotlin side to map this.
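For context, a minimal sketch of how that looks as a programmatic code generator configuration; the package and method names below assume a jOOQ 3.11-era org.jooq.meta.jaxb API (the XML configuration has an equivalent <javaTimeTypes> element):
import org.jooq.meta.jaxb.Configuration
import org.jooq.meta.jaxb.Generate
import org.jooq.meta.jaxb.Generator

// javaTimeTypes = true makes the generator map SQL DATE to java.time.LocalDate
val codegenConfig = Configuration()
    .withGenerator(Generator()
        .withGenerate(Generate().withJavaTimeTypes(true)))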
What we are seeing is that the day field comes back with an offset of one day. The insert and select statements logged by jOOQ indicate that it does the right thing when binding SQL parameters, but the result that comes back shows the previous day:
2019-04-05 09:32:08 [Gax-20 ] DEBUG o.j.i.DefaultConnectionProvider - setting auto commit : false
2019-04-05 09:32:08 [Gax-20 ] DEBUG o.j.tools.LoggerListener - Executing query : select `daily_sessions`.`session_id`, `daily_sessions`.`user_id`, `daily_sessions`.`day` from `daily_sessions` where (`daily_sessions`.`user_id` = ? and `daily_sessions`.`day` = ?)
2019-04-05 09:32:08 [Gax-20 ] DEBUG o.j.tools.LoggerListener - -> with bind values : select `daily_sessions`.`session_id`, `daily_sessions`.`user_id`, `daily_sessions`.`day` from `daily_sessions` where (`daily_sessions`.`user_id` = '87a09702-0d6b-485c-895c-986f238e1d30' and `daily_sessions`.`day` = {d '2011-11-11'})
2019-04-05 09:32:08 [Gax-20 ] DEBUG o.j.tools.LoggerListener - Fetched result : +----------+------------------------------------+----------+
2019-04-05 09:32:08 [Gax-20 ] DEBUG o.j.tools.LoggerListener - : |session_id|user_id |day |
2019-04-05 09:32:08 [Gax-20 ] DEBUG o.j.tools.LoggerListener - : +----------+------------------------------------+----------+
2019-04-05 09:32:08 [Gax-20 ] DEBUG o.j.tools.LoggerListener - : | 13|87a09702-0d6b-485c-895c-986f238e1d30|2011-11-10|
2019-04-05 09:32:08 [Gax-20 ] DEBUG o.j.tools.LoggerListener - : +----------+------------------------------------+----------+
2019-04-05 09:32:08 [Gax-20 ] DEBUG o.j.tools.LoggerListener - Fetched row(s) : 1
2019-04-05 09:32:08 [Gax-20 ] DEBUG o.j.i.DefaultConnectionProvider - commit
2019-04-05 09:32:08 [Gax-20 ] DEBUG o.j.i.DefaultConnectionProvider - setting auto commit : true
2019-04-05 09:32:08 [Gax-20 ] DEBUG c.z.hikari.pool.PoolBase - HikariPool-1 - Reset (isolation) on connection com.mysql.cj.jdbc.ConnectionImpl#4af95547
Notice how the select filters on 2011-11-11, but the result table shows 2011-11-10.
This is from a test, run on my local machine (UTC+10), against the standard mysql Docker image running locally as well.
Despite the column being a plain DATE, I assume we are running into some time zone issue, but I cannot reproduce the problem by talking to the database via plain JDBC. I tried running this in the same setup the other tests run in:
@Test
fun testDateColumn() {
    DriverManager.getConnection("jdbc:mysql://localhost:8890/rewards-test", "root", "").use { con ->
        con.createStatement().use { stmt ->
            stmt.execute("insert into `daily_sessions` (`user_id`, `day`) values ('a20add98-5a93-417f-a771-848757b2b1f8', {d '2011-11-11'})")
        }
        con.createStatement().use { stmt ->
            stmt.executeQuery("select `daily_sessions`.`session_id`, `daily_sessions`.`user_id`, `daily_sessions`.`day` from `daily_sessions` where (`daily_sessions`.`user_id` = 'a20add98-5a93-417f-a771-848757b2b1f8' and `daily_sessions`.`day` = {d '2011-11-11'})").use { rs ->
                while (rs.next()) {
                    println("${rs.getString(3)} - ${rs.getDate(3)}")
                }
            }
        }
    }
}
This code produces the expected output. The SQL statements are straight copies from the jOOQ logs. There must be something else jOOQ does that I do not understand.
Do I need to configure timezones in jOOQ somehow? Or am I missing anything else?
Update
As proposed by Lukas in the comments, I tried changing my JDBC test to use prepared statements:
@Test
fun testDateColumn() {
    DriverManager.getConnection("jdbc:mysql://localhost:8890/rewards-test", "root", "").use { con ->
        con.prepareStatement("insert into `daily_sessions` (`user_id`, `day`) values (?, ?)").use { ps ->
            ps.setString(1, "a20add98-5a93-417f-a771-848757b2b1f8")
            ps.setDate(2, Date.valueOf(LocalDate.of(2011, 11, 11)))
            ps.execute()
        }
        con.prepareStatement("select `daily_sessions`.`session_id`, `daily_sessions`.`user_id`, `daily_sessions`.`day` from `daily_sessions` where (`daily_sessions`.`user_id` = ? and `daily_sessions`.`day` = ?)").use { ps ->
            ps.setString(1, "a20add98-5a93-417f-a771-848757b2b1f8")
            ps.setDate(2, Date.valueOf(LocalDate.of(2011, 11, 11)))
            ps.executeQuery().use { rs ->
                while (rs.next()) {
                    println("${rs.getString(3)} - ${rs.getDate(3)}")
                }
            }
        }
    }
}
This indeed reproduces the wrong result: the output is 2011-11-10 for both the string and the date variant. It seems there is something I don't understand in JDBC.
Update 2
The code above can be fixed by passing the default java.util.Calendar instance as a third parameter to the setDate method, i.e. replacing both calls above with:
ps.setDate(2, Date.valueOf(LocalDate.of(2011, 11, 11)), Calendar.getInstance())
With this change we see the expected output, whereas the plain version without the third parameter does not.
The JavaDoc of setDate says that in the absence of a Calendar object the driver uses the VM's time zone, which appears to be exactly what Calendar.getInstance() gives you, so passing it explicitly should change nothing.
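As an aside, JDBC 4.2 allows binding the LocalDate directly via setObject, which sidesteps the java.sql.Date/Calendar conversion entirely. A minimal sketch, assuming the driver in use actually implements JDBC 4.2 (not verified against this driver version):
// JDBC 4.2: binds a DATE value without any time-zone-dependent conversion
ps.setObject(2, LocalDate.of(2011, 11, 11))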

This turned out to be a known bug in the MySQL JDBC driver. My fix was to revert to a much older version which predates the problem.
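For reference, a minimal sketch of what such a rollback can look like as a Gradle (Kotlin DSL) dependency pin; the exact version is an assumption here, since the original does not name one, and any release predating the regression should do:
dependencies {
    // Hypothetical pin: an old Connector/J release from before the date-shifting bug
    implementation("mysql:mysql-connector-java:5.1.47")
}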

Related

Hibernate not retrieving correct row given a date from MySQL Database

I have a Spring MVC project and a MySQL database which has a table of timesheet details.
I have a function that gets rows by the date field:
@Override
public List<TimeSheetDetails> listAllTimeSheetDetailsByDate(LocalDate date) {
    CriteriaBuilder cb = getSession().getCriteriaBuilder();
    CriteriaQuery<TimeSheetDetails> cr = cb.createQuery(TimeSheetDetails.class);
    Root<TimeSheetDetails> root = cr.from(TimeSheetDetails.class);
    cr.select(root).where(cb.equal(root.get("date"), date));
    Query<TimeSheetDetails> query = getSession().createQuery(cr);
    List<TimeSheetDetails> results = query.getResultList();
    return results;
}
I am using Hibernate's getCriteriaBuilder() and createQuery() implementations.
The problem is that when I want to retrieve the row with date 2018-11-03 I get back the row which corresponds to the date 2018-11-02. I am using LocalDate, so the days of the month do not start at 0. Here is Hibernate's Log:
INFO [default task-4] stdout - Hibernate: select timesheetd0_.timeSheetDetailsId as timeShee1_2_, timesheetd0_.date as date2_2_, timesheetd0_.employeeId as employee3_2_, timesheetd0_.endTime as endTime4_2_, timesheetd0_.startTime as startTim5_2_ from TimeSheetDetails timesheetd0_ where timesheetd0_.date=?
TRACE [default task-4] org.hibernate.type.descriptor.sql.BasicBinder - binding parameter [1] as [DATE] - [2018-11-03]
TRACE [default task-4] org.hibernate.type.descriptor.sql.BasicExtractor - extracted value ([timeShee1_2_] : [INTEGER]) - [4]
TRACE [default task-4] org.hibernate.type.descriptor.sql.BasicExtractor - extracted value ([date2_2_] : [DATE]) - [2018-11-02]
TRACE [default task-4] org.hibernate.type.descriptor.sql.BasicExtractor - extracted value ([employee3_2_] : [VARCHAR]) - [3]
TRACE [default task-4] org.hibernate.type.descriptor.sql.BasicExtractor - extracted value ([endTime4_2_] : [TIME]) - [05:00]
TRACE [default task-4] org.hibernate.type.descriptor.sql.BasicExtractor - extracted value ([startTim5_2_] : [TIME]) - [01:00]
I do not understand why it's fetching the previous date. Furthermore, when I want to retrieve the 2018-11-30 date, I get back an empty set. I do not understand why this is occurring.
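This looks like the same driver-side DATE conversion issue as in the jOOQ question above. One commonly suggested mitigation is to pin the Connector/J session time zone in the JDBC URL; a minimal sketch, assuming Connector/J 8's serverTimezone parameter and a placeholder host and schema:
// Hypothetical URL: align the driver's session time zone with the zone the dates were written in
val url = "jdbc:mysql://localhost:3306/timesheets?serverTimezone=UTC"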

MariaDB 10.2.4, JSON_REMOVE error

In my MariaDB 10.2.4 database I have this record:
id: 3
name: Jumper
category_id: 3
attributes: {"sensor_type": "CMOS", "processor": "Digic DV I", "scanning_system": "progressive", "mount_type": "PL", "monitor_type": "LCD"}
I get this error:
Error in query (4038): Syntax error in JSON text in argument 1 to function 'json_remove' at position 86
when trying to:
UPDATE `products`
SET `attributes` = JSON_REMOVE(attributes, '$.mount_type')
WHERE `category_id` = 3;
JSON_EXTRACT, JSON_INSERT (and others) work fine with attributes as the first argument.
Can anyone help?
It was a bug, fixed by this commit in the scope of MDEV-12262. The fix is already available on GitHub and will be included in MariaDB 10.2.5-rc, which is expected to be released in the next few days.

How to create a function that can be reused in dataweave

I have 3 functions in a DataWeave Transform Message component, and I would like to reuse them in 4 other Transform Message components.
Is there a way I can centralise the 3 functions and reference them from the other Transform Message components, without copying and pasting them into every transform message that needs them?
I am using Anypoint Studio 6.1 and Mule 3.8.1.
The 3 functions in DataWeave that I would like to access globally in my project are:
%function acceptable(value) (
    value match {
        :null -> false,
        a is :array -> a != [{}],
        o is :object -> o != {},
        s is :string -> s != "",
        default -> true
    }
)
%function filterKeyValue(key, value) (
    {(key): value} when acceptable(value) otherwise {}
)
%function removeFields(x)
    x match {
        a is :array -> a map removeFields($),
        o is :object -> o mapObject (filterKeyValue($$, removeFields($))),
        default -> $
    }
These functions were taken from a Stack Overflow post about removing empty fields, and I am getting this error when I try to deploy the application:
INFO 2017-02-17 19:31:37,190 [main] org.mule.config.spring.MuleArtifactContext: Closing org.mule.config.spring.MuleArtifactContext#70b2fa10: startup date [Fri Feb 17 19:31:30 GMT 2017]; root of context hierarchy
ERROR 2017-02-17 19:31:37,478 [main] org.mule.module.launcher.application.DefaultMuleApplication: null
org.mule.mvel2.CompileException: [Error: unknown class or illegal statement: org.mule.mvel2.ParserContext#515940af]
[Near : {... value match { ....}]
^
[Line: 3, Column: 20]
at org.mule.mvel2.compiler.AbstractParser.procTypedNode(AbstractParser.java:1476) ~[mule-mvel2-2.1.9-MULE-010.jar:?]
Thanks
This has already been answered here; see if it helps you:
https://forums.mulesoft.com/questions/31467/invoking-java-or-groovy-method-in-dataweave-script.html
You can create a global function in the configuration section and call it from your DataWeave script.

Proper way to get from / insert into a table using the Erlang MySQL driver

I am trying to get erlang-mysql-driver (https://code.google.com/archive/p/erlang-mysql-driver/issues) working. I managed to set it up and make queries, but there are two things I cannot do.
(BTW, I am new to Erlang.)
Here is my code to connect to MySQL:
<erl>
out(Arg) ->
    mysql:start_link(p1, "127.0.0.1", "root", "azzkikr", "MyDB"),
    {data, Result} = mysql:fetch(p1, "SELECT * FROM messages").
</erl>
1. I cannot get data from the table.
mysql.erl doesn't contain any specific information on how to read table data, but this is the farthest I could get:
{A,B} = mysql:get_result_rows(Result),
B.
And the result was this:
ERROR erlang code threw an uncaught exception:
File: /Users/{username}/Sites/Yaws/index.yaws:1
Class: error
Exception: {badmatch,[[4,0,<<"This is done baby!">>,19238],
[5,0,<<"Success">>,19238],
[6,0,<<"Hello">>,19238]]}
Req: {http_request,'GET',{abs_path,"/"},{1,1}}
Stack: [{m181,out,1,
[{file,"/Users/{username}/.yaws/yaws/default/m181.erl"},
{line,18}]},
{yaws_server,deliver_dyn_part,8,
[{file,"yaws_server.erl"},{line,2818}]},
{yaws_server,aloop,4,[{file,"yaws_server.erl"},{line,1232}]},
{yaws_server,acceptor0,2,[{file,"yaws_server.erl"},{line,1068}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]
I understand that somehow I need to take the second element and iterate over it to get each row, but the strings are returned in a different format: the stored string is Success, but the returned value is <<"Success">> (an Erlang binary).
First question: how do I get data from the table?
2. How do I insert values into the table using variables?
I can insert data into the table using this method:
Msg = "Hello World",
mysql:prepare(add_message,<<"INSERT INTO messages (`message`) VALUES (?)">>),
mysql:execute(p1, add_message, [Msg]).
But I am having trouble here, too:
1. I am passing the data without the << and >> delimiters, because when I do Msg = << ++ "Hello World" >>, Erlang throws an exception (I think I am doing something wrong; a binary literal would be written <<"Hello World">>). I don't know whether binaries are required, but without them I am able to insert data into the table, except that this error bothers me after execution:
yaws code at /Users/{username}/Yaws/index.yaws:1 crashed or ret bad val:{updated,
{mysql_result,
[],
[],
1,
[]}}
Req: {http_request,'GET',{abs_path,"/"},{1,1}}
The returned atom is updated, even though I issued an insert.
Second question: how do I insert data into the table in a proper way?
Error:
{badmatch,[[4,0,<<"This is done baby!">>,19238],
[5,0,<<"Success">>,19238],
[6,0,<<"Hello">>,19238]]}
tells you that the returned value is:
[[4,0,<<"This is done baby!">>,19238],
[5,0,<<"Success">>,19238],
[6,0,<<"Hello">>,19238]]
which obviously cannot match either {data, Data} or {A, B}. You can obtain your data as:
<erl>
out(Arg) ->
    mysql:start_link(p1, "127.0.0.1", "root", "azzkikr", "MyDB"),
    %% fetch returns {data, MysqlRes}; the rows come from mysql:get_result_rows/1
    {data, Result} = mysql:fetch(p1, "SELECT * FROM messages"),
    {ehtml,
     [{table, [{border, "1"}],
       [{tr, [],
         [{td, [],
           case Val of
               _ when is_binary(Val) -> yaws_api:htmlize(Val);
               _ when is_integer(Val) -> integer_to_binary(Val)
           end}
          || Val <- Row]}
        || Row <- mysql:get_result_rows(Result)]}]}.
</erl>

batch insert in scalikejdbc is slow on remote computer

I am trying to insert into a table in batches of 100 (I heard that is the best batch size to use with MySQL). I use Scala 2.10.4 with sbt 0.13.6, and the JDBC framework I am using is ScalikeJDBC with HikariCP. My connection settings look like this:
val dataSource: DataSource = {
  val ds = new HikariDataSource()
  ds.setDataSourceClassName("com.mysql.jdbc.jdbc2.optional.MysqlDataSource")
  ds.addDataSourceProperty("url", "jdbc:mysql://" + org.Server.GlobalSettings.DB.mySQLIP + ":3306?rewriteBatchedStatements=true")
  ds.addDataSourceProperty("autoCommit", "false")
  ds.addDataSourceProperty("user", "someUser")
  ds.addDataSourceProperty("password", "not my password")
  ds
}
ConnectionPool.add('review, new DataSourceConnectionPool(dataSource))
The insert code:
try {
  implicit val session = AutoSession
  val paramList: scala.collection.mutable.ListBuffer[Seq[(Symbol, Any)]] = scala.collection.mutable.ListBuffer[Seq[(Symbol, Any)]]()
  .
  .
  .
  for (rev <- reviews) {
    paramList += Seq[(Symbol, Any)](
      'review_id -> rev.review_idx,
      'text -> rev.text,
      'category_id -> rev.category_id,
      'aspect_id -> aspectId,
      'not_aspect -> noAspect /*0*/ ,
      'certainty_aspect -> rev.certainty_aspect,
      'sentiment -> rev.sentiment,
      'sentiment_grade -> rev.certainty_sentiment,
      'stars -> rev.stars
    )
  }
  .
  .
  .
  try {
    if (paramList != null && paramList.length > 0) {
      val result = NamedDB('review) localTx { implicit session =>
        sql"""INSERT INTO `MasterFlow`.`classifier_results`
                (`review_id`,
                 `text`,
                 `category_id`,
                 `aspect_id`,
                 `not_aspect`,
                 `certainty_aspect`,
                 `sentiment`,
                 `sentiment_grade`,
                 `stars`)
              VALUES
                ({review_id}, {text}, {category_id}, {aspect_id},
                 {not_aspect}, {certainty_aspect}, {sentiment}, {sentiment_grade}, {stars})
           """
          .batchByName(paramList.toIndexedSeq: _*)/*.__resultOfEnsuring*/
          .apply()
      }
Each batch insert takes 15 seconds. My logs:
29/10/2014 14:03:36 - DEBUG[Hikari Housekeeping Timer (pool HikariPool-0)] HikariPool - Before cleanup pool stats HikariPool-0 (total=10, inUse=1, avail=9, waiting=0)
29/10/2014 14:03:36 - DEBUG[Hikari Housekeeping Timer (pool HikariPool-0)] HikariPool - After cleanup pool stats HikariPool-0 (total=10, inUse=1, avail=9, waiting=0)
29/10/2014 14:03:46 - DEBUG[default-akka.actor.default-dispatcher-3] StatementExecutor$$anon$1 - SQL execution completed
[SQL Execution]
INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
.
.
.
INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
... (total: 100 times); (15466 ms)
[Stack Trace]
...
logic.DB.ClassifierJsonToDB$$anonfun$1.apply(ClassifierJsonToDB.scala:119)
logic.DB.ClassifierJsonToDB$$anonfun$1.apply(ClassifierJsonToDB.scala:96)
scalikejdbc.DBConnection$$anonfun$_localTx$1$1.apply(DBConnection.scala:252)
scala.util.control.Exception$Catch.apply(Exception.scala:102)
scalikejdbc.DBConnection$class._localTx$1(DBConnection.scala:250)
scalikejdbc.DBConnection$$anonfun$localTx$1.apply(DBConnection.scala:257)
scalikejdbc.DBConnection$$anonfun$localTx$1.apply(DBConnection.scala:257)
scalikejdbc.LoanPattern$class.using(LoanPattern.scala:33)
scalikejdbc.NamedDB.using(NamedDB.scala:32)
scalikejdbc.DBConnection$class.localTx(DBConnection.scala:257)
scalikejdbc.NamedDB.localTx(NamedDB.scala:32)
logic.DB.ClassifierJsonToDB$.insertBulk(ClassifierJsonToDB.scala:96)
logic.DB.ClassifierJsonToDB$$anonfun$bulkInsert$1.apply(ClassifierJsonToDB.scala:176)
logic.DB.ClassifierJsonToDB$$anonfun$bulkInsert$1.apply(ClassifierJsonToDB.scala:167)
scala.collection.Iterator$class.foreach(Iterator.scala:727)
...
When I run it on the server that hosts the MySQL database it runs fast. What can I do to make it run faster from a remote computer?
In case anyone needs this: I had a similar problem batch-inserting 10000 records into MySQL with ScalikeJDBC, and it was solved by setting rewriteBatchedStatements to true in the JDBC URL ("jdbc:mysql://host:3306/db?rewriteBatchedStatements=true"). It reduced the batch insert time from 40 seconds to 1 second!
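A minimal sketch of the same setting applied through HikariCP's configuration API (Kotlin here, but the property is identical from Scala; host, schema, and credentials are placeholders):
import com.zaxxer.hikari.HikariDataSource

val ds = HikariDataSource().apply {
    // rewriteBatchedStatements lets Connector/J collapse the batch into multi-row INSERTs
    jdbcUrl = "jdbc:mysql://db-host:3306/MasterFlow?rewriteBatchedStatements=true"
    username = "someUser"
    password = "secret"
}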
I guess this is not an issue with ScalikeJDBC or HikariCP. You should investigate the network environment between your machine and the MySQL server.