Groovy Sql returns a different datetime than the mysql cli - mysql

In the mysql CLI, I get the following result:
mysql> select * from words limit 1;
+----+------+--------------------+---------------------+---------------------+
| id | name | full               | created_at          | updated_at          |
+----+------+--------------------+---------------------+---------------------+
| 30 | prpr | a full explanation | 2016-09-20 12:59:07 | 2016-09-20 12:59:07 |
+----+------+--------------------+---------------------+---------------------+
the "created_at" is 2016-09-20 12:59:07
but when i
import groovy.sql.Sql

static void main(String[] args) {
    def c = Sql.newInstance("jdbc:mysql://127.0.0.1:3306/ro_test", "root", "root")
    println c.rows("select * from words")[0]['created_at']
}
the output is
2016-09-21 05:30:58.0
I want the Groovy output to match the mysql CLI. How can I do that?

These two dates probably refer to (roughly) the same instant in time. Given that the dates are 5.5 hours apart, my guess is that the MySQL CLI is showing the date in the UTC timezone, whereas the Groovy code is showing the date in the UTC+05:30 (Indian) time zone.
In other words
2016-09-20 12:59:07 + 5.5 hours ≈ 2016-09-21 05:30:58.0

When I force a specific time zone, it works:
static void main(String[] args) {
    def c = Sql.newInstance("jdbc:mysql://127.0.0.1:3306/ro_test", "root", "root")
    def cal = Calendar.getInstance(TimeZone.getTimeZone("Asia/Shanghai"))
    c.query("select * from words") { ResultSetImpl rs ->  // ResultSetImpl comes from the MySQL JDBC driver
        while (rs.next()) {
            // column 4 is created_at; the Calendar fixes the zone used to render it
            println rs.getTimestamp(4, cal)
        }
    }
}
I think the best way is to rewrite groovy.sql.Sql#rows along the lines of the above code; the full implementation is here:
List<LinkedHashMap> e2(String stmt) {
    // Time.timezone is the author's helper holding the desired time zone
    def cal = Calendar.getInstance(Time.timezone)
    List<GroovyRowResult> rs = []
    c.query(stmt) { ResultSetImpl rs2 ->
        def md = rs2.metaData
        int cc = md.columnCount
        while (rs2.next()) {
            def attrs = [:]
            for (int i = 1; i <= cc; i++) {
                def key = md.getColumnLabel(i)
                def t = md.getColumnType(i)
                def v
                if (t == Types.TIMESTAMP) {
                    // materialize TIMESTAMP columns in the chosen zone
                    v = rs2.getTimestamp(i, cal)
                } else {
                    v = rs2.getObject(i)
                }
                attrs[key] = v
            }
            rs.add(attrs)
        }
    }
    rs
}
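An alternative to passing a Calendar into every getTimestamp call is to pin the time zone on the JDBC connection itself. Below is a minimal sketch (plain JDBC, written in Scala for consistency with the later sections; the same URL works from Groovy's Sql.newInstance). It assumes MySQL Connector/J, whose serverTimezone URL property (honored by 8.x directly, and by 5.1 together with useLegacyDatetimeCode=false) controls the zone used to materialize DATETIME/TIMESTAMP values; credentials are placeholders:

import java.sql.DriverManager

object PinnedZoneSketch {
  def main(args: Array[String]): Unit = {
    // The two URL properties ask the driver to interpret and render
    // DATETIME/TIMESTAMP values in UTC instead of the JVM default zone.
    val url = "jdbc:mysql://127.0.0.1:3306/ro_test" +
      "?useLegacyDatetimeCode=false&serverTimezone=UTC"
    val conn = DriverManager.getConnection(url, "root", "root")
    try {
      val rs = conn.createStatement()
        .executeQuery("select created_at from words limit 1")
      // No per-call Calendar needed any more.
      while (rs.next()) println(rs.getTimestamp(1))
    } finally conn.close()
  }
}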

Related

Scala - iterate over structs in json and get values

{
  "config1": {
    "url": "xxxx",
    "database": "xxxx",
    "dbTable": "xxxx"
  },
  "config2": {
    "url": "xxxx",
    "database": "xxxxx",
    "dbTable": "xxxxx"
  },
  "snippets": {
    "optionA": {
      "months_back": "2",
      "list": {
        "code1": {
          "id": "11111",
          "country": "11111"
        },
        "code2": {
          "id": "2222",
          "country": "2222"
        },
        "code3": {
          "id": "3333",
          "country": "3333"
        }
      }
    }
  }
}
Let's say I have a config.json that looks like the above. I have some code with a query, and I need to substitute the id and country values from that JSON into it.
So far my code is something like this:
import spark.implicits._

val df = sqlContext.read.option("multiline", "true").json("path_to_json")
val range_df = df.select("snippets.optionA.months_back").collect()
val range_str = range_df.map(x => x.get(0))
val range = range_str(0)
val list = df.select("snippets.optionA.list.*").collect()
I need something like
for (x <- json_list) {
  val results = spark.sql("""
    select * from table
    where date >= add_months(current_date(), -""" + range + """)
    and country = """ + json_list(country) + """
    and id = """ + json_list(id) + """)
}
The list after collect() is an Array[org.apache.spark.sql.Row], and I have no idea how to iterate over it.
Any help is welcome, thank you
Convert the snippets.optionA.list.* inner structs into an array with array(snippets.optionA.list.*), then iterate over each value of that array.
Check the code below.
val queriesResult = df
  .withColumn(
    "query",
    explode(
      expr(
        """
          |transform(
          |  array(snippets.optionA.list.*),
          |  v -> concat(
          |    'SELECT * FROM TABLE WHERE DATE >= add_months(current_date(), -',
          |    snippets.optionA.months_back,
          |    ') AND country=\"',
          |    v.country,
          |    '\" AND id =',
          |    v.id
          |  )
          |)
          |""".stripMargin
      )
    )
  )
  .select("query")
  .as[String]
  .collect
  .map { query =>
    spark.sql(query)
  }
The .collect call returns an array of query strings like the one below; map then passes each query to the spark.sql function to execute it.
Array(
  "SELECT * FROM TABLE WHERE DATE >= add_months(current_date(), -2) AND country="11111" AND id =11111",
  "SELECT * FROM TABLE WHERE DATE >= add_months(current_date(), -2) AND country="2222" AND id =2222",
  "SELECT * FROM TABLE WHERE DATE >= add_months(current_date(), -2) AND country="3333" AND id =3333"
)
Note: this requires Spark version >= 2.4 (the transform higher-order function was added in 2.4).
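To address the original question directly (how to iterate over the collected Array[Row]), here is an alternative sketch that walks the structs on the driver side instead of generating the query strings in SQL. It assumes the same df and spark values as the question, and the field names from the JSON above:

import org.apache.spark.sql.Row

// A single Row whose fields are the code1/code2/... structs.
val listRow: Row = df.select("snippets.optionA.list.*").collect()(0)
val monthsBack: String =
  df.select("snippets.optionA.months_back").collect()(0).getString(0)

val results = listRow.schema.fieldNames.map { code =>
  val entry = listRow.getAs[Row](code) // the struct for code1, code2, ...
  val id = entry.getAs[String]("id")
  val country = entry.getAs[String]("country")
  spark.sql(
    s"""SELECT * FROM table
       |WHERE date >= add_months(current_date(), -$monthsBack)
       |AND country = '$country' AND id = '$id'""".stripMargin)
}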

mysql, is it possible to get statistics about CPU usage and memory after a query or insert command?

In mysql I usually see the time a query took:
select * from rooms;
+--------+------------+----------+
| number | room_name  | identify |
+--------+------------+----------+
|      1 | myroom     |        1 |
|      2 | studio 1   |        4 |
|      3 | Dancefloor |        7 |
+--------+------------+----------+
3 rows in set (0,00 sec)
Is it also possible to get CPU usage and memory from the MySQL server?
Yes. That figure is nothing more than the elapsed time. You can replicate it by storing the start time and subtracting it from the time when the query ends, like this pseudo-code:
start = Time.now
do_the_query
end = Time.now
elapsed_time = end - start
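As a runnable version of that pseudo-code, here is a small JDBC sketch in Scala (connection details are placeholders) that measures a query the same way, with wall-clock time around the call:

import java.sql.DriverManager

// Placeholder URL/credentials.
val conn = DriverManager.getConnection("jdbc:mysql://127.0.0.1:3306/test", "root", "root")
val start = System.nanoTime()
val rs = conn.createStatement().executeQuery("select * from rooms")
while (rs.next()) {} // drain the result set so fetch time is included
val elapsedSec = (System.nanoTime() - start) / 1e9
println(f"($elapsedSec%.2f sec)")
conn.close()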
Digging into the mysql source code, this is nothing more than the elapsed clock time.
Here's the relevant code in client/mysql.cc.
static ulong start_timer(void) {
#if defined(_WIN32)
  return clock();
#else
  struct tms tms_tmp;
  return times(&tms_tmp);
#endif
}

static void end_timer(ulong start_time, char *buff) {
  nice_time((double)(start_timer() - start_time) / CLOCKS_PER_SEC, buff, true);
}

static void mysql_end_timer(ulong start_time, char *buff) {
  buff[0] = ' ';
  buff[1] = '(';
  end_timer(start_time, buff + 2);
  my_stpcpy(strend(buff), ")");
}

static int com_go(String *buffer, char *line MY_ATTRIBUTE((unused))) {
  ...
  char time_buff[52 + 3 + 1]; /* time max + space&parens + NUL */
  ...
  timer = start_timer();
  executing_query = true;
  error = mysql_real_query_for_lazy(buffer->ptr(), buffer->length());
  ...
  if (verbose >= 3 || !opt_silent)
    mysql_end_timer(timer, time_buff);
  else
    time_buff[0] = '\0';
If you don't read C...
timer = start_timer(); gets the current time from times.
mysql_real_query_for_lazy runs the query.
mysql_end_timer(timer, time_buff) subtracts the current time from the start time and displays it.

Summarizing/aggregating a Scala Slick object into another

I'm essentially trying to recreate the following SQL query using Scala Slick:
select labelOne, labelTwo, sum(countA), sum(countB) from things where date > 'blah' group by labelOne, labelTwo;
As you can see, it takes a table of labeled things and aggregates them, summing various counts. A table with the following info:
ID | date | labelOne | labelTwo | countA | countB
-------------------------------------------------
 0 |    0 | foo      | cheese   |      1 |      2
 1 |    0 | bar      | wine     |      0 |      3
 2 |    1 | foo      | cheese   |      3 |      4
 3 |    1 | bar      | wine     |      2 |      1
 4 |    2 | foo      | beer     |      1 |      1
Should yield the following result if queried across all dates:
labelOne | labelTwo | countA | countB
-------------------------------------
foo      | cheese   |      4 |      6
bar      | wine     |      2 |      4
foo      | beer     |      1 |      1
This is what my Scala code looks like:
import scala.slick.driver.MySQLDriver.simple._
import scala.slick.jdbc.StaticQuery
import StaticQuery.interpolation
import org.joda.time.LocalDate
import com.github.tototoshi.slick.JodaSupport._

case class Thing(
  id: Option[Long],
  date: LocalDate,
  labelOne: String,
  labelTwo: String,
  countA: Long,
  countB: Long)

// summarized version of "Thing": note there's no date in this object
// each distinct grouping of Thing.labelOne + Thing.labelTwo should become a "SummarizedThing", with summed counts
case class SummarizedThing(
  labelOne: String,
  labelTwo: String,
  countASum: Long,
  countBSum: Long)

trait ThingsComponent {
  val Things: Things

  class Things extends Table[Thing]("things") {
    def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def date = column[LocalDate]("date", O.NotNull)
    def labelOne = column[String]("labelOne", O.NotNull)
    def labelTwo = column[String]("labelTwo", O.NotNull)
    def countA = column[Long]("countA", O.NotNull)
    def countB = column[Long]("countB", O.NotNull)
    def * = id.? ~ date ~ labelOne ~ labelTwo ~ countA ~ countB <> (Thing.apply _, Thing.unapply _)
    val byId = createFinderBy(_.id)
  }
}

object Things extends DAO {
  def insert(thing: Thing)(implicit s: Session) { Things.insert(thing) }

  def findById(id: Long)(implicit s: Session): Option[Thing] = Things.byId(id).firstOption

  // ???
  def summarizeSince(date: LocalDate)(implicit s: Session): Set[SummarizedThing] = {
    Query(Things).where(_.date > date).groupBy(x => (x.labelOne, x.labelTwo)).map {
      case (thing: Thing) => {
        // obviously this line below is wrong, but you can get an idea of what I'm trying to accomplish:
        // create a new SummarizedThing for each unique labelOne + labelTwo combo, summing the count columns
        new SummarizedThing(thing.labelOne, thing.labelTwo, thing.countA.sum, thing.countB.sum)
      }
    } // presumably need to run the query and map to SummarizedThing here, perhaps?
  }
}
The summarizeSince function is where I'm having trouble. I seem to be able to query Things just fine, filtering by date, and grouping by my fields... however, I'm having trouble summing countA and countB. With the summed results, I'd then like to create a SummarizedThing for each unique labelOne + labelTwo combination. Hopefully that makes sense. Any help would be greatly appreciated.
presumably need to run the query and map to SummarizedThing here, perhaps?
Exactly.
Query(Things).filter(_.date > date).groupBy(x => (x.labelOne, x.labelTwo)).map {
  // match on (key, group)
  case ((labelOne, labelTwo), things) => {
    // prepare results as tuple (note .sum returns an Option)
    (labelOne, labelTwo, things.map(_.countA).sum.get, things.map(_.countB).sum.get)
  }
}.run.map(SummarizedThing.tupled) // run and map tuple into case class
Same as the other answer, but expressed as a for comprehension; note that .get throws when the Option is empty, so you probably want getOrElse.
val q = for {
  ((l1, l2), ts) <- Things.where(_.date > date).groupBy(t => (t.labelOne, t.labelTwo))
} yield (l1, l2, ts.map(_.countA).sum.getOrElse(0L), ts.map(_.countB).sum.getOrElse(0L))

// see the SQL that it generates:
println(q.selectStatement)
// select x2.`labelOne`, x2.`labelTwo`, sum(x2.`countA`), sum(x2.`countB`)
// from `things` x2 where x2.`date` > '2013' group by x2.`labelOne`, x2.`labelTwo`

// run the query, then map the resulting tuples to your case class
q.list.map(SummarizedThing.tupled)
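Putting the pieces of both answers back into the question's DAO, summarizeSince could look like the following sketch (same Slick 1.x style as the question; note it returns a List rather than the Set in the original signature):

def summarizeSince(date: LocalDate)(implicit s: Session): List[SummarizedThing] =
  Query(Things)
    .filter(_.date > date)
    .groupBy(t => (t.labelOne, t.labelTwo))
    .map { case ((l1, l2), ts) =>
      (l1, l2, ts.map(_.countA).sum.getOrElse(0L), ts.map(_.countB).sum.getOrElse(0L))
    }
    .list                        // run the query against the session
    .map(SummarizedThing.tupled) // turn each tuple into the case class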

Result of BETWEEN is null

This is my JPQL:
@NamedQuery(name = "Subscribe.countByDate", query = "SELECT COUNT(s.idSubscribe) FROM Subscribe s WHERE s.dateInscription BETWEEN :dateS AND :dateF"),
This is my facade:
public Number subSexeDate(String v, Date dated, Date datef) {
    Query query = em.createNamedQuery("Subscribe.countByDate");
    //query.setParameter("sexe", v);
    query.setParameter("dateS", dated, TemporalType.DATE);
    query.setParameter("dateF", datef, TemporalType.DATE);
    return (Number) query.getSingleResult();
}
This is my controller:
public List<Number> subSexeDate() {
    sexe();
    Date d1 = new Date(2008-01-07);
    Date d2 = new Date(2010-01-01);
    List<Number> nb = new ArrayList<Number>();
    for (String var : sexe()) {
        nb.add(ejbFacade.subSexeDate("homme", d1, d2));
    }
    return nb;
}
the result is: [0, 0]
The real problem:
Date d1 = new Date(2007-01-01);
long x = d1.getTime();
long y = System.currentTimeMillis();
Date d2 = new Date();
d2.setTime(y);
d1.setTime(x);
List<Number> nb = new ArrayList<Number>();
for (String var : sexe()) {
    nb.add(ejbFacade.subSexeDate(var, d1, d2));
    System.out.println(d1.toString() + "date2" + d2);
}
but the result of System.out is: Infos: Thu Jan 01 01:00:02 CET 1970date2Sun May 26 11:55:31 CEST 2013
I imagine the issue has to do with the way you are constructing your Date objects.
You are writing this:
Date d1= new Date(2008-01-07);
Which is the same as this:
long x = 2008 - 1 - 7;   // integer arithmetic: x == 2000
Date d1 = new Date(x);   // i.e. new Date(2000L), 2 seconds after the epoch
Which I suspect is not what you wanted. Use a DateFormat and parse your date string instead.
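For example, a minimal sketch of that fix (the java.text API, shown in Scala; the same calls work identically in Java):

import java.text.SimpleDateFormat
import java.util.Date

val fmt = new SimpleDateFormat("yyyy-MM-dd")
val d1: Date = fmt.parse("2008-01-07") // an actual date in 2008,
val d2: Date = fmt.parse("2010-01-01") // not milliseconds-since-epoch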

How to return a function in scala

How can I return a function (a side-effecting lexical closure1) in Scala?
For instance, I was looking at this code sample in Go:
...
// fib returns a function that returns
// successive Fibonacci numbers.
func fib() func() int {
    a, b := 0, 1
    return func() int {
        a, b = b, a+b
        return b
    }
}
...
println(f(), f(), f(), f(), f())
prints
1 2 3 5 8
And I can't figure out how to write the same in Scala.
1. Corrected after Apocalisp's comment
Slightly shorter: you don't need the return.
def fib() = {
  var a = 0
  var b = 1
  () => {
    val t = a
    a = b
    b = t + b
    b
  }
}
Gah! Mutable variables?!
val fib: Stream[Int] =
1 #:: 1 #:: (fib zip fib.tail map Function.tupled(_+_))
You can return a literal function that gets the nth fib, for example:
val fibAt: Int => Int = fib drop _ head
EDIT: Since you asked for the functional way of "getting a different value each time you call f", here's how you would do that. This uses Scalaz's State monad:
import scalaz._
import Scalaz._
def uncons[A](s: Stream[A]) = (s.tail, s.head)
val f = state(uncons[Int])
The value f is a state transition function. Given a stream, it will return its head, and "mutate" the stream on the side by taking its tail. Note that f is totally oblivious to fib. Here's a REPL session illustrating how this works:
scala> (for { _ <- f; _ <- f; _ <- f; _ <- f; x <- f } yield x)
res29: scalaz.State[scala.collection.immutable.Stream[Int],Int] = scalaz.States$$anon$1#d53513
scala> (for { _ <- f; _ <- f; _ <- f; x <- f } yield x)
res30: scalaz.State[scala.collection.immutable.Stream[Int],Int] = scalaz.States$$anon$1#1ad0ff8
scala> res29 ! fib
res31: Int = 5
scala> res30 ! fib
res32: Int = 3
Clearly, the value you get out depends on the number of times you call f. But this is all purely functional and therefore modular and composable. For example, we can pass any nonempty Stream, not just fib.
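For example, running the same five-step pipeline against a different stream simply takes that stream's fifth element (a sketch, assuming the f defined above):

// The 5th head of Stream.from(10) is 14, just as the 5th head of fib is 5.
val fifth = for { _ <- f; _ <- f; _ <- f; _ <- f; x <- f } yield x
println(fifth ! Stream.from(10)) // 14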
So you see, you can have effects without side-effects.
While we're sharing cool implementations of the fibonacci function that are only tangentially related to the question, here's a memoized version:
val fib: Int => BigInt = {
  def fibRec(f: Int => BigInt)(n: Int): BigInt = {
    if (n == 0) 1
    else if (n == 1) 1
    else (f(n-1) + f(n-2))
  }
  Memoize.Y(fibRec)
}
It uses the memoizing fixed-point combinator implemented as an answer to this question: In Scala 2.8, what type to use to store an in-memory mutable data table?
Incidentally, the implementation of the combinator suggests a slightly more explicit technique for implementing your side-effecting lexical closure:
def fib(): () => Int = {
  var a = 0
  var b = 1
  def f(): Int = {
    val t = a
    a = b
    b = t + b
    b
  }
  f
}
Got it! After some trial and error:
def fib() : () => Int = {
  var a = 0
  var b = 1
  return (() => {
    val t = a
    a = b
    b = t + b
    b
  })
}
Testing:
val f = fib()
println(f(), f(), f(), f(), f())
(1,2,3,5,8)
You don't need a temp var when using a tuple:
def fib() = {
  var t = (1, 0) // seeded so the first call returns 1, matching the sequence above
  () => {
    t = (t._1 + t._2, t._1)
    t._1
  }
}
But in real life you should use Apocalisp's solution.