How to convert Scala JsArray to custom object - json

I'm new to Scala and don't see a way to do this.
I have this class:
case class User(userId: Int, userName: String, email: String, password: String) {
  def this() = this(0, "", "", "")
}
case class Team(teamId: Int, teamName: String, teamOwner: Int, teamMembers: List[User]) {
  def this() = this(0, "", 0, Nil)
}
I'm sending a POST request as:
'{
  "teamId" : 9,
  "teamName" : "team name",
  "teamOwner" : 2,
  "teamMembers" : [ {
    "userId" : 1000,
    "userName" : "I am new member",
    "email" : "eamil",
    "password" : "password"
  } ]
}'
When handling the request, I tried:
val data = (request.body \ "teamMembers")
val data2 = (request.body \ "teamId")
val data3 = (request.body \ "teamName")
which gives:
data: [{"userId":1000,"userName":"I am new member","email":"eamil","password":"password"}]
data2: 9
data3: "team name"
How do I convert data to a list of User objects?

As an option, you can read the Users like this:
import play.api.libs.json.{JsArray, Json}

case class User(
  userId: Int,
  userName: String,
  email: String,
  password: String)

case class Team(
  teamId: Int,
  teamName: String,
  teamOwner: Int,
  teamMembers: List[User])

implicit val userFormat = Json.format[User]
implicit val teamFormat = Json.format[Team]

val jsonStr = """{
  "teamId" : 9,
  "teamName" : "team name",
  "teamOwner" : 2,
  "teamMembers" : [ {
    "userId" : 1000,
    "userName" : "I am new member",
    "email" : "eamil",
    "password" : "password"
  } ]
}"""

val json = Json.parse(jsonStr)

// Team(9,team name,2,List(User(1000,I am new member,eamil,password)))
json.as[Team]

// Seq[User] = ListBuffer(User(1000,I am new member,eamil,password))
val users = (json \ "teamMembers").as[JsArray].value.map(_.as[User])
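If the request body can be malformed, a safer variant is validate, which yields a JsSuccess or JsError instead of throwing an exception. A sketch, assuming the same case classes and implicit formats as above:

```scala
import play.api.libs.json.{JsError, JsSuccess, Json}

Json.parse(jsonStr).validate[Team] match {
  case JsSuccess(team, _) => println(s"Got ${team.teamMembers.size} member(s)")
  case JsError(errors)    => println(s"Malformed payload: $errors")
}

// Or fall back to a default when a single field is missing or has the wrong type:
val members = (json \ "teamMembers").asOpt[List[User]].getOrElse(Nil)
```

asOpt is convenient inside a controller because it never throws; validate additionally gives you the exact paths that failed.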

Related

Grouping json content

This is my json:
[
{
"category" : {
"id" : 1,
"text" : "cat1"
},
"id" : 1,
"title" : "book1"
},{
"category" : {
"id" : 2,
"text" : "cat2"
},
"id" : 2,
"title" : "book2"
},{
"category" : {
"id" : 1,
"text" : "cat1"
},
"id" : 3,
"title" : "book3"
}
]
How can I group it by category? I want to use the groups in different collection views.
Thank you in advance.
Define your Codable structs as follows.
typealias Result = [ResultElement]
struct ResultElement: Codable {
let category: Category
let id: Int
let title: String
}
struct Category: Codable {
let id: Int
let text: String
}
Now, iterate over the Result array after parsing the JSON with JSONDecoder, compare the Category values with the equality operator, and group them. Since the Int and String properties inside Category conform to Equatable by default, Category itself can also be compared through Equatable once the conformance is declared.
You can try
let str = """
[
{
"category" : {
"id" : 1,
"text" : "cat1"
},
"id" : 1,
"title" : "book1"
},{
"category" : {
"id" : 2,
"text" : "cat2"
},
"id" : 2,
"title" : "book2"
},{
"category" : {
"id" : 1,
"text" : "cat1"
},
"id" : 3,
"title" : "book3"
}
]
"""
do {
    let res = try JSONDecoder().decode([Root].self, from: Data(str.utf8))
    print(res)
    let dic = Dictionary(grouping: res, by: { $0.category.text })
    print(dic) // this dictionary is your new data source: each key is a section title, each value is that section's rows
}
catch {
    print(error)
}
struct Root: Codable {
let category: Category
let id: Int
let title: String
}
struct Category: Codable {
let id: Int
let text: String
}
Create Structs
//MARK: - MyData
public struct MyData {
public var category : Category
public var id : Int
public var title : String
}
//MARK: - Category
public struct Category {
public var id : Int
public var text : String
}
Create Model
func createData() -> [MyData] {
    let c1 = Category(id: 1, text: "Cat1")
    let d1 = MyData(category: c1, id: 1, title: "Book1")
    let c2 = Category(id: 2, text: "Cat2")
    let d2 = MyData(category: c2, id: 2, title: "Book2")
    let c3 = Category(id: 1, text: "Cat1")
    let d3 = MyData(category: c3, id: 3, title: "Book3")
    return [d1, d2, d3]
}
Group your data
let ungroupedData = createData()
print("Ungrouped\n")
print(ungroupedData)
let groupedData = Dictionary(grouping: ungroupedData, by: {$0.category.text})
print("\nGrouped\n")
print(groupedData)
groupedData["Cat1"] // get cat1 array list

How to check if value is in dictionary?

I'm using Alamofire and SwiftyJSON and I want to check if the response contains a value that I will type in a search bar.
Right now I fetch the whole JSON file of 1666 objects, append everything into my objects array, and then search for the value, but it takes too long.
func parseJSON(json: JSON, parse: ParseType) {
    var i = 0
    switch parse {
    case .group:
        for _ in json["groups"] {
            if let groupId = json["groups"][i]["id"].int {
                let groupName = json["groups"][i]["name"].string
                let group = Groups(id: groupId, name: groupName!)
                groupsArray.append(group)
                i += 1
            } else {
                print("Error: can't parse JSON")
            }
        }
    }
}

func getGroupsData(url: String, groupName: String) {
    Alamofire.request(url, method: .get).responseJSON { (response) in
        if response.result.isSuccess {
            print("Is Success")
            let json = JSON(response.result.value)
            self.parseJSON(json: json, parse: .group)
            if let group = self.groupsArray.first(where: { $0.name == groupName }) {
                print("found \(group)")
                let searchScheduleUrl = url + "\(group.id)"
                self.getGroupSchedule(url: searchScheduleUrl)
            } else {
                print("Can't find group")
            }
        } else {
            print(response.result.error)
        }
    }
}
And here is JSON:
{
"groups" : [
{
"faculty" : {
"id" : 101,
"abbr" : "ИнГО",
"name" : "Гуманитарный институт"
},
"id" : 26262,
"spec" : "47.06.01 Философия, этика и религиоведение",
"kind" : 3,
"level" : 3,
"name" : "33865\/4702",
"type" : "common"
},
{
"faculty" : {
"id" : 95,
"abbr" : "ИКНТ",
"name" : "Институт компьютерных наук и технологий"
},
"id" : 27432,
"spec" : "09.03.04 Программная инженерия",
"kind" : 0,
"level" : 1,
"name" : "в13534\/22",
"type" : "evening"
},
{
"faculty" : {
"id" : 92,
"abbr" : "ИСИ",
"name" : "Инженерно-строительный институт"
},
"id" : 26322,
"spec" : "08.06.01 Техника и технологии строительства",
"kind" : 3,
"level" : 1,
"name" : "13163\/0801",
"type" : "common"
}, and so on...
I want to check, for example, whether the name "13541/1" is in the dictionary, and if it is, I want to get its id.
You can try
struct Root: Codable {
    let groups: [Group]
}

struct Group: Codable {
    let faculty: Faculty
    let id: Int
    let spec: String
    let kind, level: Int
    let name, type: String
}

struct Faculty: Codable {
    let id: Int
    let abbr, name: String
}

do {
    let res = try JSONDecoder().decode(Root.self, from: data)
    if let item = res.groups.first(where: { $0.name == YourName }) {
        print(item.id)
    }
}
catch {
    print(error)
}

How to write a kotlin data class to match json?

I am using Retrofit to call an API and converter-gson to convert the response JSON to Kotlin objects.
This is the response:
{
"id": "1",
"rank": "1",
"name": "Challenge",
"status": "E",
"createDate": "2018-09-17 15:01:28",
"lastModDate": "2018-09-17 15:06:32",
"category": "DINING",
"photo": {
"path": "http://example.com/xxx.jpg",
"size": [
400,
267
]
}
}
And this is the data class:
data class ServiceList(val id: Int,
                       val rank: Int,
                       val name: String,
                       val status: String,
                       val lastModDate: String,
                       val category: String,
                       ???????)
How to complete this class?
You can declare another data class to describe the photo property like so:
data class ServiceList(val id: Int,
                       val rank: Int,
                       val name: String,
                       val status: String,
                       val lastModDate: String,
                       val category: String,
                       val photo: Photo) {
    data class Photo(val size: List<Int>, val path: String)
}
If the Photo is to be used in other contexts as well you can pull it out to be a top level class:
data class ServiceList(val id: Int,
                       val rank: Int,
                       val name: String,
                       val status: String,
                       val lastModDate: String,
                       val category: String,
                       val photo: ServiceListPhoto)

data class ServiceListPhoto(val size: List<Int>, val path: String)

How to find the difference/mismatch between two JSON files?

I have two JSON files: one is the expected JSON, and the other one is the result of a GET API call. I need to compare them and find out the mismatches.
Expected Json:
{
"array": [
1,
2,
3
],
"boolean": true,
"null": null,
"number": 123,
"object": {
"a": "b",
"c": "d",
"e": "f"
},
"string": "Hello World"
}
Actual Json response:
{
"array": [
1,
2,
3
],
"boolean": true,
"null": null,
"number": 456,
"object": {
"a": "b",
"c": "d",
"e": "f"
},
"string": "India"
}
Actually there are two mismatches: the number received is 456 and the string is India.
Is there a way to compare the files and get these two mismatches as results?
This needs to be implemented in Gatling/Scala.
You can use, for example, the play-json library and recursively traverse both JSONs. For the following input (a bit more sophisticated than yours):
LEFT:
{
"array" : [ 1, 2, 4 ],
"boolean" : true,
"null" : null,
"number" : 123,
"object" : {
"a" : "b",
"c" : "d",
"e" : "f"
},
"string" : "Hello World",
"absent-in-right" : true,
"different-types" : 123
}
RIGHT:
{
"array" : [ 1, 2, 3 ],
"boolean" : true,
"null" : null,
"number" : 456,
"object" : {
"a" : "b",
"c" : "d",
"e" : "ff"
},
"string" : "India",
"absent-in-left" : true,
"different-types" : "YES"
}
It produces this output:
Next fields are absent in LEFT:
*\absent-in-left
Next fields are absent in RIGHT:
*\absent-in-right
'*\array\(2)' => 4 != 3
'*\number' => 123 != 456
'*\object\e' => f != ff
'*\string' => Hello World != India
Cannot compare JsNumber and JsString in '*\different-types'
Code:
import play.api.libs.json._

val left = Json.parse("""{"array":[1,2,4],"boolean":true,"null":null,"number":123,"object":{"a":"b","c":"d","e":"f"},"string":"Hello World","absent-in-right":true,"different-types":123}""").asInstanceOf[JsObject]
val right = Json.parse("""{"array":[1,2,3],"boolean":true,"null":null,"number":456,"object":{"a":"b","c":"d","e":"ff"},"string":"India","absent-in-left":true,"different-types":"YES"}""").asInstanceOf[JsObject]

// '*' - for the root node
showJsDiff(left, right, "*", Seq.empty[String])

def showJsDiff(left: JsValue, right: JsValue, parent: String, path: Seq[String]): Unit = {
  val newPath = path :+ parent
  if (left.getClass != right.getClass) {
    println(s"Cannot compare ${left.getClass.getSimpleName} and ${right.getClass.getSimpleName} " +
      s"in '${getPath(newPath)}'")
  } else {
    left match {
      // Primitive types are pretty easy to handle
      case JsNull => logIfNotEqual(JsNull, right.asInstanceOf[JsNull.type], newPath)
      case JsBoolean(value) => logIfNotEqual(value, right.asInstanceOf[JsBoolean].value, newPath)
      case JsNumber(value) => logIfNotEqual(value, right.asInstanceOf[JsNumber].value, newPath)
      case JsString(value) => logIfNotEqual(value, right.asInstanceOf[JsString].value, newPath)
      case JsArray(value) =>
        // For arrays we have to call showJsDiff on each element
        val arr1 = value
        val arr2 = right.asInstanceOf[JsArray].value
        if (arr1.length != arr2.length) {
          println(s"Arrays in '${getPath(newPath)}' have different length. ${arr1.length} != ${arr2.length}")
        } else {
          arr1.indices.foreach { idx =>
            showJsDiff(arr1(idx), arr2(idx), s"($idx)", newPath)
          }
        }
      case JsObject(value) =>
        val leftFields = value.keys.toSeq
        val rightJsObject = right.asInstanceOf[JsObject]
        val rightFields = rightJsObject.fields.map { case (name, _) => name }
        val absentInLeft = rightFields.diff(leftFields)
        if (absentInLeft.nonEmpty) {
          println("Next fields are absent in LEFT: ")
          absentInLeft.foreach { fieldName =>
            println(s"\t ${getPath(newPath :+ fieldName)}")
          }
        }
        val absentInRight = leftFields.diff(rightFields)
        if (absentInRight.nonEmpty) {
          println("Next fields are absent in RIGHT: ")
          absentInRight.foreach { fieldName =>
            println(s"\t ${getPath(newPath :+ fieldName)}")
          }
        }
        // For common fields we have to call showJsDiff on them
        val commonFields = leftFields.intersect(rightFields)
        commonFields.foreach { field =>
          showJsDiff(value(field), rightJsObject(field), field, newPath)
        }
    }
  }
}

def logIfNotEqual[T](left: T, right: T, path: Seq[String]): Unit = {
  if (left != right) {
    println(s"'${getPath(path)}' => $left != $right")
  }
}

def getPath(path: Seq[String]): String = path.mkString("\\")
Use diffson - a Scala implementation of RFC-6901 and RFC-6902: https://github.com/gnieh/diffson
json4s has a handy diff function described here: https://github.com/json4s/json4s (search for Merging & Diffing) and API doc here: https://static.javadoc.io/org.json4s/json4s-core_2.9.1/3.0.0/org/json4s/Diff.html
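For the two documents in the question, the json4s diff mentioned above boils down to a few lines. A sketch, assuming the json4s-jackson module is on the classpath; the Diff extractor yields the changed, added and deleted values:

```scala
import org.json4s._
import org.json4s.jackson.JsonMethods._

val expected = parse("""{"number": 123, "string": "Hello World"}""")
val actual   = parse("""{"number": 456, "string": "India"}""")

// `changed` contains the differing values taken from `actual`;
// `added` / `deleted` are JNothing here because both documents have the same keys
val Diff(changed, added, deleted) = expected diff actual
println(changed) // the changed fields: number -> 456, string -> India
```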
This is a slightly modified version of Artavazd's answer (which is amazing btw thank you so much!). This version outputs the differences into a convenient object instead of only logging them.
import play.api.Logger
import play.api.libs.json.{JsArray, JsBoolean, JsNull, JsNumber, JsObject, JsString, JsValue, Json, OFormat}

case class JsDifferences(
  differences: List[JsDifference] = List()
)

object JsDifferences {
  implicit val format: OFormat[JsDifferences] = Json.format[JsDifferences]
}

case class JsDifference(
  key: String,
  path: Seq[String],
  oldValue: Option[String],
  newValue: Option[String]
)

object JsDifference {
  implicit val format: OFormat[JsDifference] = Json.format[JsDifference]
}

object JsonUtils {
  val logger: Logger = Logger(this.getClass)

  def findDiff(left: JsValue, right: JsValue, parent: String = "*", path: List[String] = List()): JsDifferences = {
    val newPath = path :+ parent
    if (left.getClass != right.getClass) {
      logger.debug(s"Cannot compare ${left.getClass.getSimpleName} and ${right.getClass.getSimpleName} in '${getPath(newPath)}'")
      JsDifferences()
    } else left match {
      case JsNull => logIfNotEqual(JsNull, right.asInstanceOf[JsNull.type], newPath)
      case JsBoolean(value) => logIfNotEqual(value, right.asInstanceOf[JsBoolean].value, newPath)
      case JsNumber(value) => logIfNotEqual(value, right.asInstanceOf[JsNumber].value, newPath)
      case JsString(value) => logIfNotEqual(value, right.asInstanceOf[JsString].value, newPath)
      case JsArray(value) =>
        val arr1 = value
        val arr2 = right.asInstanceOf[JsArray].value
        if (arr1.length != arr2.length) {
          logger.debug(s"Arrays in '${getPath(newPath)}' have different length. ${arr1.length} != ${arr2.length}")
          JsDifferences()
        } else JsDifferences(arr1.indices.flatMap(idx => findDiff(arr1(idx), arr2(idx), s"($idx)", newPath).differences).toList)
      case leftJsObject: JsObject =>
        val leftFields = leftJsObject.keys.toSeq
        val rightJsObject = right.asInstanceOf[JsObject]
        val rightFields = rightJsObject.fields.map { case (name, _) => name }
        val keysAbsentInLeft = rightFields.diff(leftFields)
        val leftDifferences = keysAbsentInLeft.map(fieldName => JsDifference(
          key = fieldName, path = newPath :+ fieldName, oldValue = None, newValue = Some(rightJsObject(fieldName).toString)
        ))
        val keysAbsentInRight = leftFields.diff(rightFields)
        val rightDifferences = keysAbsentInRight.map(fieldName => JsDifference(
          key = fieldName, path = newPath :+ fieldName, oldValue = Some(leftJsObject(fieldName).toString), newValue = None
        ))
        val commonKeys = leftFields.intersect(rightFields)
        val commonDifferences = commonKeys.flatMap(field => findDiff(leftJsObject(field), rightJsObject(field), field, newPath).differences).toList
        JsDifferences((leftDifferences ++ rightDifferences ++ commonDifferences).toList)
    }
  }

  def logIfNotEqual[T](left: T, right: T, path: Seq[String]): JsDifferences = {
    if (left != right) {
      JsDifferences(List(JsDifference(
        key = path.last, path = path, oldValue = Some(left.toString), newValue = Some(right.toString)
      )))
    } else JsDifferences()
  }

  def getPath(path: Seq[String]): String = path.mkString("\\")
}
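A quick usage sketch for the findDiff above (the two one-field documents below are made up for illustration):

```scala
import play.api.libs.json.Json

val left  = Json.parse("""{"number": 123, "string": "Hello World"}""")
val right = Json.parse("""{"number": 456, "string": "Hello World"}""")

// Produces one JsDifference, for the changed "number" field
JsonUtils.findDiff(left, right).differences.foreach { d =>
  println(s"${d.path.mkString("\\")} : ${d.oldValue.getOrElse("-")} -> ${d.newValue.getOrElse("-")}")
}
```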

SPARK : How to create aggregate from RDD[Row] in Scala

How do I create a List/Map inside an RDD/DataFrame so that I can get the aggregate?
I have a file where each row is a JSON object :
{
itemId :1122334,
language: [
{
name: [
"US", "FR"
],
value: [
"english", "french"
]
},
{
name: [
"IND"
],
value: [
"hindi"
]
}
],
country: [
{
US: [
{
startTime: 2016-06-06T17: 39: 35.000Z,
endTime: 2016-07-28T07: 00: 00.000Z
}
],
CANADA: [
{
startTime: 2016-06-06T17: 39: 35.000Z,
endTime: 2016-07-28T07: 00: 00.000Z
}
],
DENMARK: [
{
startTime: 2016-06-06T17: 39: 35.000Z,
endTime: 2016-07-28T07: 00: 00.000Z
}
],
FRANCE: [
{
startTime: 2016-08-06T17: 39: 35.000Z,
endTime: 2016-07-28T07: 00: 00.000Z
}
]
}
]
},
{
itemId :1122334,
language: [
{
name: [
"US", "FR"
],
value: [
"english", "french"
]
},
{
name: [
"IND"
],
value: [
"hindi"
]
}
],
country: [
{
US: [
{
startTime: 2016-06-06T17: 39: 35.000Z,
endTime: 2016-07-28T07: 00: 00.000Z
}
],
CANADA: [
{
startTime: 2016-07-06T17: 39: 35.000Z,
endTime: 2016-07-28T07: 00: 00.000Z
}
],
DENMARK: [
{
startTime: 2016-06-06T17: 39: 35.000Z,
endTime: 2016-07-28T07: 00: 00.000Z
}
],
FRANCE: [
{
startTime: 2016-08-06T17: 39: 35.000Z,
endTime: 2016-07-28T07: 00: 00.000Z
}
]
}
]
}
I have a matching POJO which gets me the values from the JSON.
import com.mapping.data.model.MappingUtils
import com.mapping.data.model.CountryInfo
val mappingPath = "s3://.../"
val timeStamp = "2016-06-06T17: 39: 35.000Z"
val endTimeStamp = "2016-06-07T17: 39: 35.000Z"
val COUNTRY_US = "US"
val COUNTRY_CANADA = "CANADA"
val COUNTRY_DENMARK = "DENMARK"
val COUNTRY_FRANCE = "FRANCE"
val input = sc.textFile(mappingPath)
The input is a list of JSON documents, one per line, each of which I map to the POJO class CountryInfo using MappingUtils, which takes care of the JSON parsing and conversion:
val MappingsList = input.map { x =>
  val countryInfo = MappingUtils.getCountryInfoString(x)
  (countryInfo.getItemId(), countryInfo)
}.collectAsMap

// MappingsList: scala.collection.Map[String,com.mapping.data.model.CountryInfo]

def showCountryInfo(x: Option[CountryInfo]) = x match {
  case Some(s) => s
}
But I need to create a DF/RDD so that I can get the aggregates of country and language based on itemId.
In the given example, if a country's start time is not earlier than "2016-06-07T17: 39: 35.000Z", then its value will be zero.
Which format will be good to create the final aggregate json :
1. List ?
|-----itemId-------|----country-------------------|-----language---------------------|
| 1122334 | [US, CANADA,DENMARK] | [english,hindi,french] |
| 1122334 | [US,DENMARK] | [english] |
|------------------|------------------------------|----------------------------------|
2. Map ?
|-----itemId-------|----country---------------------------------|-----language---------------------|
| 1122334 | (US,2) (CANADA,1) (DENMARK,2) (FRANCE, 0) |(english,2) (hindi,1) (french,1) |
|.... |
|.... |
|.... |
|------------------|--------------------------------------------|----------------------------------|
I would like to create a final json which has the aggregate value like :
{
itemId: "1122334",
country: {
"US" : 2,
"CANADA" : 1,
"DENMARK" : 2,
"FRANCE" : 0
},
language: {
"english" : 2,
"french" : 1,
"hindi" : 1
}
}
I tried a List:
val events = sqlContext.sql("select itemId from EventList")

val itemList = events.map { row =>
  val itemId = row.getAs[String](1)
  val countryInfo = showCountryInfo(MappingsList.get(itemId))
  val country = new ListBuffer[String]()
  if (countryInfo.getCountry().getUS().get(0).getStartTime() < endTimeStamp) country += COUNTRY_US
  if (countryInfo.getCountry().getCANADA().get(0).getStartTime() < endTimeStamp) country += COUNTRY_CANADA
  if (countryInfo.getCountry().getDENMARK().get(0).getStartTime() < endTimeStamp) country += COUNTRY_DENMARK
  if (countryInfo.getCountry().getFRANCE().get(0).getStartTime() < endTimeStamp) country += COUNTRY_FRANCE
  val languageList = new ListBuffer[String]()
  countryInfo.getLanguages().foreach(x => languageList += x.getValue())
  Row(itemId, country.toList, languageList.toList)
}
and a Map:
val itemList = events.map { row =>
  val itemId = row.getAs[String](1)
  val countryInfo = showCountryInfo(MappingsList.get(itemId))
  val country = scala.collection.mutable.Map[String, Int]()
  country += COUNTRY_US -> (if (countryInfo.getCountry().getUS().get(0).getStartTime() < endTimeStamp) 1 else 0)
  country += COUNTRY_CANADA -> (if (countryInfo.getCountry().getCANADA().get(0).getStartTime() < endTimeStamp) 1 else 0)
  country += COUNTRY_DENMARK -> (if (countryInfo.getCountry().getDENMARK().get(0).getStartTime() < endTimeStamp) 1 else 0)
  country += COUNTRY_FRANCE -> (if (countryInfo.getCountry().getFRANCE().get(0).getStartTime() < endTimeStamp) 1 else 0)
  val language = scala.collection.mutable.Map[String, Int]()
  countryInfo.getLanguages().foreach(x => language += (x.getValue() -> 1))
  Row(itemId, country.toMap, language.toMap)
}
But both are getting frozen in Zeppelin. Is there any better way to get the aggregates as JSON? Which is better for constructing the final aggregate, a List or a Map?
It would be helpful if you restated your question in terms of Spark DataFrame/Dataset and Row; I understand that you ultimately want to use JSON but the details of the JSON input/output are a separate concern.
The function you are looking for is a Spark SQL aggregate function (see the group of them on that page). The functions collect_list and collect_set are related, but the function you need is not already implemented.
You can implement what I'll call count_by_value by deriving from org.apache.spark.sql.expressions.UserDefinedAggregateFunction. This will require some in-depth knowledge of how Spark SQL works.
Once count_by_value is implemented, you can use it like this:
df.groupBy("itemId").agg(count_by_value(df("country")), count_by_value(df("language")))
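As an illustration, here is a sketch of such a count_by_value for Spark 2.x. The df at the bottom is your hypothetical DataFrame, assumed to already hold one string value per row in the country and language columns (e.g. after an explode):

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

// Aggregates a string column into a Map(value -> number of occurrences).
class CountByValue extends UserDefinedAggregateFunction {
  def inputSchema: StructType = StructType(StructField("value", StringType) :: Nil)
  def bufferSchema: StructType = StructType(StructField("counts", MapType(StringType, IntegerType)) :: Nil)
  def dataType: DataType = MapType(StringType, IntegerType)
  def deterministic: Boolean = true

  def initialize(buffer: MutableAggregationBuffer): Unit =
    buffer(0) = Map.empty[String, Int]

  def update(buffer: MutableAggregationBuffer, input: Row): Unit =
    if (!input.isNullAt(0)) {
      val counts = buffer.getAs[Map[String, Int]](0)
      val key = input.getString(0)
      buffer(0) = counts + (key -> (counts.getOrElse(key, 0) + 1))
    }

  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    val m1 = buffer1.getAs[Map[String, Int]](0)
    val m2 = buffer2.getAs[Map[String, Int]](0)
    buffer1(0) = m2.foldLeft(m1) { case (acc, (k, v)) => acc + (k -> (acc.getOrElse(k, 0) + v)) }
  }

  def evaluate(buffer: Row): Any = buffer.getAs[Map[String, Int]](0)
}

val count_by_value = new CountByValue
df.groupBy("itemId").agg(count_by_value(df("country")), count_by_value(df("language")))
```

Calling toJSON on the resulting DataFrame then gives one aggregate JSON document per itemId, close to the shape you asked for.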