I use Play-ReactiveMongo to interact with MongoDB... and I'm wondering how to compare two dates considering that I don't use BSON in my application. Let me provide you with an example:
def isTokenExpired(tokenId: String): Future[Boolean] = {
  val query = collection.genericQueryBuilder.query(
    Json.obj(
      "_id" -> Json.obj("$oid" -> tokenId),
      "expirationTime" -> Json.obj("$lte" -> DateTime.now(DateTimeZone.UTC))
    )
  ).options(QueryOpts(skipN = 0))
  // collect[Vector](1) yields a Future[Vector[JsValue]], so test for emptiness
  query.cursor[JsValue].collect[Vector](1).map(_.nonEmpty)
}
isTokenExpired does not work as expected, since expirationTime is treated as a String – I have an implicit Writes that serializes a DateTime as "yyyy-MM-ddTHH:mm:ss.SSSZ"... and this is intentional, since I want human-readable JSON.
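A Writes of that shape might look like the following (a sketch; the exact Joda-Time formatter is an assumption):

import org.joda.time.DateTime
import org.joda.time.format.ISODateTimeFormat
import play.api.libs.json._

// Serializes a DateTime as an ISO-8601 string, e.g. "2014-01-16T14:45:27.441Z"
implicit val dateTimeWrites: Writes[DateTime] =
  Writes(dt => JsString(ISODateTimeFormat.dateTime().print(dt)))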
That said, how do I get a document from a collection that has a DateTime less than another DateTime? The following doesn't seem to work:
Json.obj(
  "_id" -> Json.obj("$oid" -> tokenId),
  "expirationTime" -> Json.obj("$lte" -> Json.obj("$date" -> DateTime.now(DateTimeZone.UTC).getMillis))
)
Thanks.
I have an implicit Writes that serializes a DateTime as "yyyy-MM-ddTHH:mm:ss.SSSZ"... and this is intentional, since I want human-readable JSON.
If you store your DateTime as a string in MongoDB, then $lte will compare strings, not dates.
You should store your DateTime as a date in MongoDB (with $date) so you can use your second query (the one with $lte and $date).
I want a human-readable JSON
Why do you need human-readable JSON? I don't see any reason against the date datatype (if you need human-readable JSON in your API, convert your date field there).
The MongoDB dates are readable. Output in MongoDB shell:
PRIMARY> db.mycollection.findOne()
{
    "creation" : ISODate("2014-01-16T14:45:27.441Z")
}
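Concretely, a minimal sketch (assuming Play-ReactiveMongo's extended-JSON convention, where {"$date": <millis>} is persisted as a BSON date):

// Store DateTime as MongoDB extended JSON so it becomes a BSON date
implicit val mongoDateTimeWrites: Writes[DateTime] =
  Writes(dt => Json.obj("$date" -> dt.getMillis))

// With dates stored this way, the $lte query from the question applies:
val query = Json.obj(
  "_id" -> Json.obj("$oid" -> tokenId),
  "expirationTime" -> Json.obj("$lte" -> Json.obj("$date" -> DateTime.now(DateTimeZone.UTC).getMillis))
)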
I have a value in a JsObject which I want to assign to a specific key in a Map, and I want to ask if there is a better way to extract that value without using a case matcher.
I have access to a request variable, which is a case class that has a parameter myData.
myData is an Option[JsValue] which contains multiple fields, and I want to return the boolean value of one specific field in there, called "myField", in string format. The below works, but I want to find a more succinct way of getting the value of "myField" without case matching.
val newMap =
  Map(
    "myNewKey" -> request.myData.map {
      case JsObject(fields) => fields.getOrElse("myField", "unknown").toString
      case _ => "unknown"
    })
The output would then be
"myField": "true"
or
"myField": "false"
or, if it isn't true or false, i.e. the field doesn't exist:
"myField": "unknown"
Rather:
myMap ++ (request.myData.getOrElse(Json.obj()) \ "myField"). // myData is an Option[JsValue]
  validateOpt[String].asOpt.toSeq.collect { // to map entries
    case Some(str) => "keyInMap" -> str
  }
Do not use .toString to convert a JSON value to a string or to pretty-print it; use Json.stringify or Json.prettyPrint instead.
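To see why (a small illustration, assuming Play JSON):

val v: JsValue = JsString("true")
v.toString                               // "true" – the quotes are part of the output
v.as[String]                             // true – typed extraction, no quotes
Json.stringify(Json.obj("myField" -> v)) // {"myField":"true"}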
I'm writing some code to auto-generate JSON codecs for Elm data structures. There is a point in my code where a "sub-structure/sub-type" has already been encoded to a Json.Encode.Value, and I need to add another key-value pair to it. Is there any way to "destructure" a Json.Encode.Value in Elm? Or combine two values of type Json.Encode.Value?
Here's some sample code:
type alias Entity record =
    { entityKey : (Key record)
    , entityVal : record
    }
jsonEncEntity : (record -> Value) -> Entity record -> Value
jsonEncEntity localEncoder_record val =
    let
        encodedRecord = localEncoder_record val.entityVal
    in
        -- NOTE: the following line won't compile, but this is essentially
        -- what I'm looking for
        Json.combine encodedRecord (Json.Encode.object [ ( "id", jsonEncKey val.entityKey ) ])
You can decode the value into a list of key value pairs using D.keyValuePairs D.value and then append the new field. Here's how you'd do that:
module Main exposing (..)

import Json.Decode as D
import Json.Encode as E exposing (Value)


addKeyValue : String -> Value -> Value -> Value
addKeyValue key value input =
    case D.decodeValue (D.keyValuePairs D.value) input of
        Ok ok ->
            E.object <| ( key, value ) :: ok

        Err _ ->
            input
> import Main
> import Json.Encode as E
> value = E.object [("a", E.int 1)]
{ a = 1 } : Json.Encode.Value
> value2 = Main.addKeyValue "b" E.null value
{ b = null, a = 1 } : Json.Encode.Value
If the input is not an object, this will return the input unchanged:
> Main.addKeyValue "b" E.null (E.int 1)
1 : Json.Encode.Value
If you want to do this, you need to use a decoder to unwrap the values by one level into a Dict String Value, then combine the dictionaries, and finally re-encode as a JSON value. You can unwrap like so:
unwrapObject : Value -> Result String (Dict String Value)
unwrapObject value =
    Json.Decode.decodeValue (Json.Decode.dict Json.Decode.value) value
Notice that you have to work with Results from this point on because there's the possibility, as far as Elm is concerned, that your JSON value wasn't really an object (maybe it was a number or a string instead, for instance) and you have to handle that case. For that reason, it's not really best practice to do too much with JSON Values directly; if you can, keep things as Dicts or some other more informative type until the end of processing and then convert the whole result into a Value as the last step.
I'm a new Spark user currently playing around with Spark and some big data, and I have a question related to Spark SQL, or more formally the SchemaRDD. I'm reading a JSON file containing data about some weather forecasts, and I'm not really interested in all of the fields that I have ... I only want 10 fields out of the 50+ fields returned for each record. Is there a way (similar to filter) that I can use to specify the names of the fields that I want removed?
Just a small descriptive example: consider I have the schema "Person" with 3 fields "Name", "Age", and "Gender", and I'm not interested in the "Age" field and would like to remove it. Can I use Spark somehow to do that? Thanks
If you are using Spark 1.2, you can do the following (using Scala)...
If you already know what fields you want to use, you can construct the schema for these fields and apply this schema to the JSON dataset. Spark SQL will return a SchemaRDD. Then, you can register it and query it as a table. Here is a snippet...
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
// The schema is encoded in a string
val schemaString = "name gender"
// Import Spark SQL data types.
import org.apache.spark.sql._
// Generate the schema based on the string of schema
val schema =
  StructType(
    schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, true)))
// Create the SchemaRDD for your JSON file "people" (every line of this file is a JSON object).
val peopleSchemaRDD = sqlContext.jsonFile("people.txt", schema)
// Check the schema of peopleSchemaRDD
peopleSchemaRDD.printSchema()
// Register peopleSchemaRDD as a table called "people"
peopleSchemaRDD.registerTempTable("people")
// Only values of name and gender fields will be in the results.
val results = sqlContext.sql("SELECT * FROM people")
When you look at the schema of peopleSchemaRDD (peopleSchemaRDD.printSchema()), you will only see the name and gender fields.
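The printSchema() output should look roughly like this (illustrative):

root
 |-- name: string (nullable = true)
 |-- gender: string (nullable = true)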
Or, if you want to explore the dataset and determine what fields you want after you see all fields, you can ask Spark SQL to infer the schema for you. Then, you can register the SchemaRDD as a table and use projection to remove unneeded fields. Here is a snippet...
// Spark SQL will infer the schema of the given JSON file.
val peopleSchemaRDD = sqlContext.jsonFile("people.txt")
// Check the schema of peopleSchemaRDD
peopleSchemaRDD.printSchema()
// Register peopleSchemaRDD as a table called "people"
peopleSchemaRDD.registerTempTable("people")
// Project name and gender field.
sqlContext.sql("SELECT name, gender FROM people")
You can specify which fields you would like to have in the SchemaRDD. Below is an example: create a case class with only the fields that you need, read the data into an RDD, and then select only the fields that you need (in the same order as in the case class).
Sample Data: People.txt
foo,25,M
bar,24,F
Code:
case class Person(name: String, gender: String)

// Assumption about the Spark 1.2 API: this implicit conversion is needed so
// that an RDD of case classes can be registered as a table.
import sqlContext.createSchemaRDD

val people = sc.textFile("People.txt").map(_.split(",")).map(p => Person(p(0), p(2)))
people.registerTempTable("people")
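After registering, the unwanted "age" column is simply never part of the table; a quick check (sketch):

sqlContext.sql("SELECT name, gender FROM people").collect().foreach(println)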
It may be a simple question, but I am new to Scala and not able to find a proper solution.
I am trying to create a JSON object from Option values: I check whether a value is non-empty and then create the JSON field, and if the value is None I don't want to create the field at all. Without an else, the default else branch is Unit, which fails to build the Json obj:
Json.obj(if (position.nonEmpty) ("position" -> position.get),
  if (place.nonEmpty) ("place" -> place.get),
  if (country.nonEmpty) ("country" -> country.get))
I need to put in the if conditions so that the final JSON string looks like:
{
  "position": "M2",
  "place": "place",
  "country": "country"
}
val obj = for {
  p <- position
  o <- otherOption
  ...
} yield Json.obj(
  "position" -> p,
  "other" -> o)
This will only yield a Some of a JSON object if all the options are defined; otherwise it yields None.
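If the goal is instead to include each field only when it is defined (which is what the desired output above suggests), a sketch with Play JSON, assuming position, place and country are Option[String]:

val obj = JsObject(Seq(
  position.map(v => "position" -> JsString(v)),
  place.map(v => "place" -> JsString(v)),
  country.map(v => "country" -> JsString(v))
).flatten) // None entries disappear, so only defined fields are emitted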
Option is a monad, and there are a few convenient ways of using it.
First, if you want to extract the value, you should use the map or flatMap and getOrElse methods:
val res = position.map(value => Json.obj("position" -> value)).getOrElse(null)
Another way is to keep an Option of another type and use it later:
val jsonOption = position.map(value => Json.obj("position" -> value))
Afterwards you can use it in a for comprehension with other options, or perform other transformations without extracting the value:
for (positionJson <- jsonOption; xJson <- xJsonOption) yield positionJson.toString + xJson.toString
jsonOption.map(_.toString).foreach(print(_))
And always try to avoid pattern matching on monads.
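For instance, Option.fold expresses the same case analysis without a match (a small illustration, again assuming position is an Option[String]):

val res = position.fold(Json.obj())(value => Json.obj("position" -> value))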
I'm new to the Play Framework and the Scala language. I want to save some data to a database just by running a URL with specified parameters.
For example I want to run url like:
/DeviceData?device_ID=1&insertDate=2013-01-01&windDirection=50&device_ID=1&insertDate=2013-01-02&windDirection=5
and after that in the database two new records would be inserted (with Device_ID, insertDate and windDirection).
Right now I'm trying to save only one record at a time (I don't know how to read a list of elements and save them), but even that is not working. There is no error; the record is just not inserted.
DeviceData model
case class DeviceData(data_ID: Long, device_ID: Long, insertDate: String, windDirection: Double)

object DeviceData {
  var deviceDataList = new HashMap[Long, DeviceData]
  var data_ID = 0L

  def nextId(): Long = { data_ID += 1; data_ID }

  def createDeviceData(device_ID: Long, insertDate: String, windDirection: Double): Unit = {
    DB.withConnection { implicit connection =>
      SQL(
        """
        INSERT INTO devicedata(device_ID, insertDate, windDirection)
        VALUES ({device_ID}, {insertDate}, {windDirection})
        """
      ).on("device_ID" -> device_ID, "insertDate" -> insertDate, "windDirection" -> windDirection).
        executeInsert()
    }
  }

  def list(): List[DeviceData] = { deviceDataList.values.toList }
}
DeviceDatas controller
object DeviceDatas extends Controller {

  val deviceDataForm = Form(
    tuple(
      "device_ID" -> of[Long],
      "insertDate" -> nonEmptyText,
      "windDirection" -> of[Double]
    )
  )

  def listDeviceData() = Action {
    Ok(views.html.deviceData(DeviceData.list(), deviceDataForm))
  }

  def createDeviceData(device_ID: Long, insertDate: String, windDirection: Double) = Action { implicit request =>
    deviceDataForm.bindFromRequest.fold(
      errors => BadRequest(views.html.deviceData(DeviceData.list(), errors)),
      { case (device_ID, insertDate, windDirection) =>
        DeviceData.createDeviceData(device_ID, insertDate, windDirection)
        Redirect(routes.DeviceDatas.listDeviceData)
      }
    )
  }
}
deviceData.scala.html - a simple one, just to check whether any new record has been inserted.
@(deviceDatas: List[DeviceData], deviceDataForm: Form[(Long, String, Double)])

@import helper._

@main("DeviceDatas") {
    <h3>@deviceDatas.size DeviceData(s)</h3>
}
routes file for /deviceDatas
GET /deviceDatas controllers.DeviceDatas.listDeviceData
POST /deviceDatas controllers.DeviceDatas.createDeviceData(device_ID: Long, insertDate: String, windDirection: Double)
Could you help me with how to insert the data into the database, and whether there is any possibility to pass a list of elements with a few records to insert? Also, what's the best way to put a DateTime (yyyy-MM-dd hh:mm:ss) into URL parameters in the Play Framework? I'm stuck and I don't know how to do it.
UPDATED
Thanks Zim-Zam O'Pootertoot for the answer. Unfortunately I need to use parameters, because I'm sending the data through the router. But anyway, one more thanks to you, because I'll use JSON in the future.
I decided not to use a list of parameters as I said before; instead, for each new record I send one request (for example, to add 6 new records to the database I run the URL on the router 6 times):
/DeviceData?device_ID=1&insertDate=2013-01-01&windDirection=50
And my problem was solved by changing the routes file to:
GET /deviceDatas controllers.DeviceDatas.listDeviceData
GET /deviceDatas controllers.DeviceDatas.createDeviceData(device_ID: Long, insertDate: String, windDirection: Double)
To pass in data for multiple records, and also to pass in DateTime data, send the data in the request's JSON body instead of as URL params:
http://www.playframework.com/documentation/2.2.x/ScalaBodyParsers
http://www.playframework.com/documentation/2.2.x/ScalaJson
Action(parse.json) { implicit request =>
  (request.body \ "records") match {
    case arr: JsArray => arr.value.foreach { json =>
      val deviceId = (json \ "device_ID").as[Long]
      val date = (json \ "insertDate").as[String]
      val windDirection = (json \ "windDirection").as[Double]
      // insert data in database
    }
    case _ => throw new IllegalArgumentException("Invalid Json: records must be a JsArray")
  }
}
The JSON for your records might look something like:
{"records" : [
{"device_ID" : 123, "insertDate" : "2014-03-01 12:00:00", "windDirection" : 123.45},
{"device_ID" : 456, "insertDate" : "2014-03-02 12:00:00", "windDirection" : 54.321}]}