I am using Play Framework to convert case class instances to JSON objects, for many instances of the case class LinkEvolution. Here is the structure of each JSON object:
implicit val linkIPFormat = Json.format[LinkIPs]
implicit val linkState = Json.format[LinkState]
// user has JsObject as type
val linkEvolution = LinkEvolution(rawDataLink.link, reference, current, alarms)
val user = Json.obj(
"link" -> rawDataLink.link,
"reference" -> linkEvolution.reference,
"current" -> linkEvolution.current,
"alarms" -> linkEvolution.alarms)
I have a list of users, so a list of JsObject.
My question is: how can I save this list to a JSON file, with each line of the file being one JsObject?
You can convert them to strings and write them to a file (I assume users is a List[JsObject]):
import java.io._
val file = "file.json"
val writer = new BufferedWriter(new FileWriter(file))
users.map(_.toString).foreach { json =>
  writer.write(json)
  writer.newLine()
}
writer.close()
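Alternatively, a small sketch using java.nio that writes the same JSON-lines layout in one call (still assuming users is a List[JsObject]):
import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}
import scala.collection.JavaConverters._
import play.api.libs.json.Json

// One JSON document per line (JSON Lines / NDJSON style).
Files.write(Paths.get("file.json"), users.map(Json.stringify).asJava, StandardCharsets.UTF_8)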
Scala - Couldn't remove double quotes for "{}" braces while building Json
import scala.util.Random
import math.Ordered.orderingToOrdered
import math.Ordering.Implicits.infixOrderingOps
import play.api.libs.json._
import play.api.libs.json.Writes
import play.api.libs.json.Json.JsValueWrapper
val data1 = (1 to 2)
  .map { r => Json.toJson(Map(
    "name" -> Json.toJson(s"Perftest${Random.alphanumeric.take(6).mkString}"),
    "domainId" -> Json.toJson("343RDFDGF4RGGFG"),
    "value" -> Json.toJson("{}")))}
val data2 = Json.toJson(data1)
println(data2)
Result:
[{"name":"PerftestpXI1ID","domainId":"343RDFDGF4RGGFG","value":"{}"},{"name":"PerftestHoZSQR","domainId":"343RDFDGF4RGGFG","value":"{}"}]
Expected ("value" should be an empty JSON object, i.e. "value":{}):
[{"name":"PerftestpXI1ID","domainId":"343RDFDGF4RGGFG","value":{}},{"name":"PerftestHoZSQR","domainId":"343RDFDGF4RGGFG","value":{}}]
Please suggest a solution.
You are giving it a String, so it creates a JSON string. What you actually want is an empty dictionary, which is a Map in Scala:
val data1 = (1 to 2)
  .map { r => Json.toJson(Map(
    "name" -> Json.toJson(s"Perftest${Random.alphanumeric.take(6).mkString}"),
    "domainId" -> Json.toJson("343RDFDGF4RGGFG"),
    "value" -> Json.toJson(Map.empty[String, String])))}
More generally you should create a case class for the data and create a custom Writes implementation for that class so that you don't have to call Json.toJson on every value.
Here is how to do the conversion using only a single Json.toJson call:
import play.api.libs.json.Json
case class MyData(name: String, domainId: String, value: Map[String,String])
implicit val fmt = Json.format[MyData]
val data1 = (1 to 2)
.map { r => new MyData(
s"Perftest${Random.alphanumeric.take(6).mkString}",
"343RDFDGF4RGGFG",
Map.empty
)
}
val data2 = Json.toJson(data1)
println(data2)
The value field can be a standard type such as Boolean or Double. It could also be another case class to create nested JSON as long as there is a similar Json.format line for the new type.
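For example, a quick sketch of the nested variant (MyNestedData and Inner are made-up names for illustration):
case class Inner(enabled: Boolean, score: Double)
case class MyNestedData(name: String, domainId: String, value: Inner)

// Declare the inner format first so Json.format[MyNestedData] can resolve it.
implicit val innerFmt = Json.format[Inner]
implicit val nestedFmt = Json.format[MyNestedData]

println(Json.toJson(MyNestedData("Perftest42", "343RDFDGF4RGGFG", Inner(enabled = true, score = 4.2))))
// {"name":"Perftest42","domainId":"343RDFDGF4RGGFG","value":{"enabled":true,"score":4.2}}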
More complex JSON can be generated by using a custom Writes (and Reads) implementation as described in the documentation.
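For instance, a minimal custom Writes sketch for the MyData class above (it would take the place of the Json.format line, and the field names are simply chosen to match the output):
import play.api.libs.json._

implicit val myDataWrites: Writes[MyData] = new Writes[MyData] {
  def writes(d: MyData): JsValue = Json.obj(
    "name"     -> d.name,      // String has a built-in Writes
    "domainId" -> d.domainId,
    "value"    -> d.value      // Map[String, String] also has a built-in Writes
  )
}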
I have a few JSON arrays:
[{"key":"country","value":"aaa"},{"key":"region","value":"a"},{"key":"city","value":"a1"}]
[{"key":"city","value":"b"},{"key":"street","value":"1"}]
I need to extract the city and street values into different columns.
Using get_json_object($"address", "$[2].value").as("city") to get an element by its index doesn't work, because the arrays can be missing some fields.
Instead I decided to turn this array into a map of key -> value pairs, but I am having trouble doing it. So far I have only managed to get an array of arrays.
val schema = ArrayType(StructType(Array(
StructField("key", StringType),
StructField("value", StringType)
)))
from_json($"address", schema)
Returns
[[country, aaa],[region, a],[city, a1]]
[[city, b],[street, 1]]
I'm not sure where to go from here.
val schema = ArrayType(MapType(StringType, StringType))
Fails with
cannot resolve 'jsontostructs(`address`)' due to data type mismatch: Input schema array<map<string,string>> must be a struct or an array of structs.;;
I'm using Spark 2.2.
Using a UDF we can handle this easily. In the code below I create the map with a UDF; I hope this meets the need:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
val df1 = spark.read.format("text").load("file path")
val schema = ArrayType(StructType(Array(
StructField("key", StringType),
StructField("value", StringType)
)))
val arrayToMap = udf[Map[String, String], Seq[Row]] {
array => array.map { case Row(key: String, value: String) => (key, value) }.toMap
}
val dfJSON = df1.withColumn("jsonData", from_json(col("value"), schema))
  .select("jsonData").withColumn("address", arrayToMap(col("jsonData")))
  .withColumn("city", when(col("address.city").isNotNull, col("address.city")).otherwise(lit("")))
  .withColumn("street", when(col("address.street").isNotNull, col("address.street")).otherwise(lit("")))
dfJSON.printSchema()
dfJSON.show(false)
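As a side note, if upgrading were possible, Spark 2.4+ has a built-in map_from_entries function that would replace the UDF entirely; a hedged sketch (not applicable to Spark 2.2):
// Spark 2.4+ only: build the map directly from the array of key/value structs.
val dfJSON24 = df1.withColumn("jsonData", from_json(col("value"), schema))
  .withColumn("address", map_from_entries(col("jsonData")))
  .withColumn("city", coalesce(col("address.city"), lit("")))
  .withColumn("street", coalesce(col("address.street"), lit("")))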
I want to read a generic CSV file with headers but an unknown number of columns into a typed structure. My question is much the same as Strongly typed access to csv in scala?, except that I have no schema to pass to the parser...
Until now, I was using Jackson CSV mapper to read each row as a Map[String,String], and it was working well.
import com.fasterxml.jackson.dataformat.csv.{CsvMapper, CsvSchema}
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import scala.collection.JavaConverters._
def genericStringIterator(input: InputStream): Iterator[Map[String, String]] = {
val mapper = new CsvMapper()
mapper.registerModule(DefaultScalaModule)
val schema = CsvSchema.emptySchema.withHeader
val iterator = mapper
.readerFor(classOf[Map[String, String]])
.`with`(schema)
.readValues[Map[String, String]](input)
iterator.asScala
}
Now, we need the fields to be typed, so 4.2 would be a Double but "4.2" would still be a String.
We are using play-json everywhere in our project, and I know JsValue already has great type inference for generic stuff like that.
As play-json is based on Jackson too, I thought it would be great to have something like:
import play.api.libs.json.JsValue
import play.api.libs.json.jackson.PlayJsonModule

def genericAnyIterator(input: InputStream): Iterator[JsValue] = {
val mapper = new CsvMapper()
mapper.registerModule(PlayJsonModule)
val schema = CsvSchema.emptySchema.withHeader
val iterator = mapper
.readerFor(classOf[JsValue])
.`with`(schema)
.readValues[JsValue](input)
iterator.asScala
}
But when I try the code above, I get an exception:
val iterator = CSV.genericAnyIterator(input(
"""foo,bar,baz
|"toto",42,43
|"tata",,45
| titi,87,88
|"tutu",,
|""".stripMargin))
iterator
.foreach { a =>
println(a)
}
java.lang.RuntimeException: We should be reading map, something got wrong
at play.api.libs.json.jackson.JsValueDeserializer.deserialize(JacksonJson.scala:165)
at play.api.libs.json.jackson.JsValueDeserializer.deserialize(JacksonJson.scala:128)
at play.api.libs.json.jackson.JsValueDeserializer.deserialize(JacksonJson.scala:123)
at com.fasterxml.jackson.databind.MappingIterator.nextValue(MappingIterator.java:277)
at com.fasterxml.jackson.databind.MappingIterator.next(MappingIterator.java:192)
at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:40)
at scala.collection.Iterator.foreach(Iterator.scala:929)
at scala.collection.Iterator.foreach$(Iterator.scala:929)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
at my.company.csv.CSVSpec$$anon$4.<init>(CSVSpec.scala:240)
Is there something I'm doing wrong?
I don't particularly care about having a play-json JsValue in the end; any JSON structure with generically typed fields would be fine. Is there another library I could use for that? From what I found, all other libraries are based on a mapping given to the CSV reader in advance, and what is important for me is to be able to infer the types from the CSV.
OK, I was being lazy in hoping to find something that works out of the box :)
In fact it was easy to implement it myself.
I looked at libraries in other languages that do this type inference (PapaParse in JS, Pandas in Python), and discovered that they do a test-and-retry with a priority order to guess the types.
So I implemented it myself, and it works fine!
Here it is:
import scala.util.matching.Regex
import play.api.libs.json._

def genericAnyIterator(input: InputStream): Iterator[JsValue] = {
  // the result of the earlier code, mapping each row to a Map[String, String]
  val strings = genericStringIterator(input)
  val decimalRegExp: Regex = "(\\d*){1}(\\.\\d*){0,1}".r

  val jsValues: Iterator[Map[String, JsValue]] = strings.map { m =>
    m.mapValues {
        case "" => None
        case "false" | "FALSE" => Some(JsFalse)
        case "true" | "TRUE" => Some(JsTrue)
        case value @ decimalRegExp(g1, g2) if !value.isEmpty => Some(JsNumber(value.toDouble))
        case "null" | "NULL" => Some(JsNull)
        case value => Some(JsString(value))
      }
      .filter(_._2.isDefined)
      .mapValues(_.get)
  }
  jsValues.map(map => JsObject(map.toSeq))
}
which behaves like this in a test:
it should "read any csv in JsObject" in new WithInputStream {
val iterator = CSV.genericAnyIterator(input(
"""foo,bar,baz
|"toto",NaN,false
|"tata",,TRUE
|titi,87.79,88
|"tutu",,null
|"tete",5.,.5
|""".stripMargin))
val result: Seq[JsValue] = iterator.toSeq
result should be(Stream(
Json.obj("foo" -> "toto", "bar" -> "NaN", "baz" -> false)
, Json.obj("foo" -> "tata", "baz" -> true)
, Json.obj("foo" -> "titi", "bar" -> 87.79, "baz" -> 88)
, Json.obj("foo" -> "tutu", "baz" -> JsNull)
, Json.obj("foo" -> "tete", "bar" -> 5, "baz" -> 0.5)
) )
}
val topics= "test"
val zkQuorum="localhost:2181"
val group="test-consumer-group"
val sparkConf = new org.apache.spark.SparkConf()
.setAppName("XXXXX")
.setMaster("local[*]")
.set("cassandra.connection.host", "127.0.0.1")
.set("cassandra.connection.port", "9042")
val ssc = new StreamingContext(sparkConf, Seconds(2))
ssc.checkpoint("checkpoint")
val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap
val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
I am getting a DStream (JSON) like this:
[{"id":100,"firstName":"Beulah","lastName":"Fleming","gender":"female","ethnicity":"SpEd","height":167,"address":27,"createdDate":1494489672243,"lastUpdatedDate":1494489672244,"isDeleted":0},{"id":101,"firstName":"Traci","lastName":"Summers","gender":"female","ethnicity":"Frp","height":181,"address":544,"createdDate":1494510639611,"lastUpdatedDate":1494510639611,"isDeleted":0}]
With the above program I am getting JSON data in a DStream.
How can I process this DStream data and store it in Cassandra or Elasticsearch? How do I retrieve the data from the DStream (in JSON format) and store it in Cassandra?
You need to import com.datastax.spark.connector._, convert the elements of the stream to an appropriate case class
case class Record(id: String, firstName: String, ...)
val columns = SomeColumns("id", "first_name", ...)
val mapped = lines.map(whateverDataYouHave => functionThatReturnsARecordObject)
and save it using the implicit saveToCassandra function:
mapped.saveToCassandra(KEYSPACE_NAME, TABLE_NAME, columns)
For more information, check the documentation: https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md
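For what it's worth, here is a hedged sketch of that mapping step, assuming the messages look like the JSON array above and that a matching Cassandra table exists (keyspace, table and column names are assumptions); play-json is used for parsing only because it appears elsewhere in this thread, any JSON library would do:
import com.datastax.spark.connector._
import com.datastax.spark.connector.streaming._
import play.api.libs.json.Json

case class Record(id: Long, firstName: String, lastName: String, gender: String)

// Each Kafka message is a JSON array, so parse it and flatten into individual records.
val records = lines.flatMap { msg =>
  // Build the Reads inside the closure to avoid task-serialization issues on executors.
  implicit val recordReads = Json.reads[Record]
  Json.parse(msg).as[Seq[Record]]
}

// Column names must match the Cassandra table definition.
records.saveToCassandra("my_keyspace", "people",
  SomeColumns("id", "first_name", "last_name", "gender"))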
Is there a simple way to convert a given Row object to JSON?
I found this about converting a whole DataFrame to JSON output:
Spark Row to JSON
But I just want to convert a single Row to JSON.
Here is pseudo code for what I am trying to do.
More precisely, I am reading JSON as input into a DataFrame.
I am producing a new output that is mainly based on columns, but with one JSON field for all the info that does not fit into the columns.
My question is: what is the easiest way to write this function, convertRowToJson()?
def convertRowToJson(row: Row): String = ???
def transformVenueTry(row: Row): Try[Venue] = {
Try({
val name = row.getString(row.fieldIndex("name"))
val metadataRow = row.getStruct(row.fieldIndex("meta"))
val score: Double = calcScore(row)
val combinedRow: Row = metadataRow ++ ("score" -> score)
val jsonString: String = convertRowToJson(combinedRow)
Venue(name = name, json = jsonString)
})
}
Psidom's solution:
def convertRowToJSON(row: Row): String = {
val m = row.getValuesMap(row.schema.fieldNames)
JSONObject(m).toString()
}
only works if the Row has a single level, not with nested Rows. This is the schema:
StructType(
StructField(indicator,StringType,true),
StructField(range,
StructType(
StructField(currency_code,StringType,true),
StructField(maxrate,LongType,true),
StructField(minrate,LongType,true)),true))
I also tried Artem's suggestion, but that did not compile:
def row2DataFrame(row: Row, sqlContext: SQLContext): DataFrame = {
val sparkContext = sqlContext.sparkContext
import sparkContext._
import sqlContext.implicits._
import sqlContext._
val rowRDD: RDD[Row] = sqlContext.sparkContext.makeRDD(row :: Nil)
val dataFrame = rowRDD.toDF() //XXX does not compile
dataFrame
}
You can use getValuesMap to convert the Row object to a Map and then convert it to JSON:
import scala.util.parsing.json.JSONObject
import org.apache.spark.sql._
val df = Seq((1,2,3),(2,3,4)).toDF("A", "B", "C")
val row = df.first() // this is an example row object
def convertRowToJSON(row: Row): String = {
val m = row.getValuesMap(row.schema.fieldNames)
JSONObject(m).toString()
}
convertRowToJSON(row)
// res46: String = {"A" : 1, "B" : 2, "C" : 3}
I need to read JSON input and produce JSON output.
Most fields are handled individually, but a few JSON sub-objects just need to be preserved.
When Spark reads a DataFrame it turns each record into a Row. The Row is a JSON-like structure that can be transformed and written out to JSON.
But I need to extract some sub-JSON structures into a string to use as a new field.
This can be done like this:
dataFrameWithJsonField = dataFrame.withColumn("address_json", to_json($"location.address"))
location.address is the path to the sub-JSON object of the incoming JSON-based DataFrame. address_json is the column name for that object converted to a string version of the JSON.
to_json is implemented in Spark 2.1.
If the output JSON is generated using json4s, address_json should be parsed into an AST representation, otherwise the output JSON will have the address_json part escaped.
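For instance, a small json4s sketch of that point (the values here are made up): parse the address_json string back into a JValue before rendering, so it ends up as a nested object rather than an escaped string.
import org.json4s._
import org.json4s.jackson.JsonMethods._

val addressJson = """{"street":"Main St","city":"Springfield"}"""

// parse(...) yields a JValue, so it renders as nested JSON instead of an escaped string.
val output = JObject(
  "name"    -> JString("some venue"),
  "address" -> parse(addressJson)
)
println(compact(render(output)))
// {"name":"some venue","address":{"street":"Main St","city":"Springfield"}}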
Note that the Scala class scala.util.parsing.json.JSONObject is deprecated and does not support null values.
#deprecated("This class will be removed.", "2.11.0")
"JSONFormat.defaultFormat doesn't handle null values"
https://issues.scala-lang.org/browse/SI-5092
JSON has a schema but a Row doesn't, so you need to apply a schema to the Row and then convert it to JSON. Here is how you can do it:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._
def convertRowToJson(row: Row): String = {
  val schema = StructType(
    StructField("name", StringType, true) ::
    StructField("meta", StringType, false) :: Nil)
  // Wrap the single row in a DataFrame (applySchema is deprecated in newer Spark),
  // then take the one JSON string that toJSON produces.
  sqlContext.createDataFrame(sqlContext.sparkContext.parallelize(Seq(row)), schema).toJSON.first()
}
Essentially, you can have a DataFrame which contains just one row. Thus, you can filter your initial DataFrame and then parse it to JSON.
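A minimal sketch of that approach (the id column and filter value are assumptions):
import org.apache.spark.sql.functions.col

// Filter the original DataFrame down to the row of interest and let Spark serialize it.
val singleRowJson: String = df.filter(col("id") === 42).toJSON.first()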
I had the same issue: I had Parquet files with a canonical schema (no arrays), and I only wanted to get JSON events. I did it as follows, and it seems to work just fine (Spark 2.1):
import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.{DataFrame, Dataset, Row}
import scala.util.parsing.json.JSONFormat.ValueFormatter
import scala.util.parsing.json.{JSONArray, JSONFormat, JSONObject}
def getValuesMap[T](row: Row, schema: StructType): Map[String, Any] = {
  schema.fields.map { field =>
    try {
      if (field.dataType.typeName.equals("struct")) {
        field.name -> getValuesMap(row.getAs[Row](field.name), field.dataType.asInstanceOf[StructType])
      } else {
        field.name -> row.getAs[T](field.name)
      }
    } catch { case e: Exception => field.name -> null.asInstanceOf[T] }
  }.filter(xy => xy._2 != null).toMap
}
def convertRowToJSON(row: Row, schema: StructType): JSONObject = {
val m: Map[String, Any] = getValuesMap(row, schema)
JSONObject(m)
}
// I guess since I am using Any and not Nothing, the regular ValueFormatter does not work, so I had to add: case jmap: Map[String, Any] => JSONObject(jmap).toString(defaultFormatter)
val defaultFormatter : ValueFormatter = (x : Any) => x match {
case s : String => "\"" + JSONFormat.quoteString(s) + "\""
case jo : JSONObject => jo.toString(defaultFormatter)
case jmap : Map[String,Any] => JSONObject(jmap).toString(defaultFormatter)
case ja : JSONArray => ja.toString(defaultFormatter)
case other => other.toString
}
val someFile = "s3a://bucket/file"
val df: DataFrame = sqlContext.read.load(someFile)
val schema: StructType = df.schema
val jsons: Dataset[JSONObject] = df.map(row => convertRowToJSON(row, schema))
If you are iterating through a DataFrame, you can directly convert it to a new DataFrame with one JSON string per row and iterate over that:
val df_json = df.toJSON
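df.toJSON yields one JSON string per row (a Dataset[String] in Spark 2.x, an RDD[String] in 1.x), so iterating over it is straightforward:
// Print each row as JSON; prefer toLocalIterator over collect for large DataFrames.
df_json.collect().foreach(println)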
I combined the suggestions from Artem, KiranM and Psidom. After a lot of trial and error I came up with this solution, which I tested for nested structures:
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}

def row2Json(row: Row, sqlContext: SQLContext): String = {
  val rowRDD: RDD[Row] = sqlContext.sparkContext.makeRDD(row :: Nil)
  val dataframe = sqlContext.createDataFrame(rowRDD, row.schema)
  dataframe.toJSON.first
}
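A hypothetical usage sketch (the filter column and value are assumptions):
import org.apache.spark.sql.functions.col

// Pick one (possibly nested) row from an existing DataFrame and serialize it.
val row = df.filter(col("id") === 42).first()
val json = row2Json(row, sqlContext)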
This solution worked, but only while running in driver mode.