Apache Spark - Transform Map[String, String] to Map[String, Struct] - json

I have the dataset below:
{
  "col1": "val1",
  "col2": {
    "key1": "{\"SubCol1\":\"ABCD\",\"SubCol2\":\"EFGH\"}",
    "key2": "{\"SubCol1\":\"IJKL\",\"SubCol2\":\"MNOP\"}"
  }
}
with schema StructType(StructField(col1,StringType,true), StructField(col2,MapType(StringType,StringType,true),true)).
I want to convert col2 to the format below:
{
  "col1": "val1",
  "col2": {
    "key1": {"SubCol1":"ABCD","SubCol2":"EFGH"},
    "key2": {"SubCol1":"IJKL","SubCol2":"MNOP"}
  }
}
The updated dataset schema will be as below:
StructType(StructField(col1,StringType,true), StructField(col2,MapType(StringType,StructType(StructField(SubCol1,StringType,true), StructField(SubCol2,StringType,true)),true),true))

You can use transform_values on the map column:
val df2 = df.withColumn(
  "col2",
  expr("transform_values(col2, (k, x) -> from_json(x, 'struct<SubCol1:string, SubCol2:string>'))")
)
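Note that transform_values is available from Spark 3.0 onwards. As a quick sanity check (a sketch using the df2 above), printing the schema should give something like the target schema from the question:
df2.printSchema
// root
//  |-- col1: string (nullable = true)
//  |-- col2: map (nullable = true)
//  |    |-- key: string
//  |    |-- value: struct (valueContainsNull = true)
//  |    |    |-- SubCol1: string (nullable = true)
//  |    |    |-- SubCol2: string (nullable = true)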

Try the below code; it works in Spark 2.4.7.
Creating a DataFrame with sample data:
scala> val df = Seq(
  ("val1", Map(
    "key1" -> "{\"SubCol1\":\"ABCD\",\"SubCol2\":\"EFGH\"}",
    "key2" -> "{\"SubCol1\":\"IJKL\",\"SubCol2\":\"MNOP\"}"))
).toDF("col1","col2")
df: org.apache.spark.sql.DataFrame = [col1: string, col2: map<string,string>]
Steps:
Extract the map keys (map_keys) and values (map_values) into separate arrays.
Convert the map values into the desired output, i.e. struct.
Use the map_from_arrays function to combine the keys and values from the above steps to create a Map[String, Struct].
scala> val finalDF = df
  .withColumn(
    "col2_new",
    map_from_arrays(
      map_keys($"col2"),
      expr("""transform(map_values(col2), x -> from_json(x,"struct<SubCol1:string, SubCol2:string>"))""")
    )
  )
Printing Schema
finalDF.printSchema
root
|-- col1: string (nullable = true)
|-- col2: map (nullable = true)
| |-- key: string
| |-- value: string (valueContainsNull = true)
|-- col2_new: map (nullable = true)
| |-- key: string
| |-- value: struct (valueContainsNull = true)
| | |-- SubCol1: string (nullable = true)
| | |-- SubCol2: string (nullable = true)
Printing Final Output
+----+------------------------------------------------------------------------------------------+--------------------------------------------+
|col1|col2 |col2_new |
+----+------------------------------------------------------------------------------------------+--------------------------------------------+
|val1|[key1 -> {"SubCol1":"ABCD","SubCol2":"EFGH"}, key2 -> {"SubCol1":"IJKL","SubCol2":"MNOP"}]|[key1 -> [ABCD, EFGH], key2 -> [IJKL, MNOP]]|
+----+------------------------------------------------------------------------------------------+--------------------------------------------+
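If you no longer need the original string-typed map, a small follow-up on the finalDF above swaps the new column in:
val resultDF = finalDF.drop("col2").withColumnRenamed("col2_new", "col2")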

Related

Spark Scala creating schema based on target JSON structure

I'm hopelessly stuck trying to generate my Spark Schema based on the JSON structure I know I want.
I have a JSON structure that looks like this:
{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3",
  "key4": "value4",
  "key5": [
    "key6": "value6",
    "key7": [
      "key8": "value8",
      "key8": "value9"
    ]
  ]
}
I tried to recreate the structure by creating the following Schema in Spark 2.4.8, running in Scala:
val targetSchemaSO = StructType(
  List(
    StructField("key1", StringType, true),
    StructField("key2", StringType, true),
    StructField("key3", StringType, true),
    StructField("key4", StringType, true),
    StructField("key5", StructType(
      List(
        StructField("key6", StringType, true),
        StructField("key7", ArrayType(StructType(
          List(
            StructField("key8", StringType, true)
          ))), true)
      )), true)
  )
)
However, when trying to format each row as a Spark Row using this code:
val outputDictSO = scala.collection.mutable.LinkedHashMap[String, Any](
  "key1" -> "value1",
  "key2" -> "value2",
  "key3" -> "value3",
  "key4" -> "value4",
  "key5" -> (
    "key6" -> "value6",
    "key7" -> (
      "key8" -> "value8",
      "key8" -> "value9"
    )
  )
)
return Row.fromSeq(output_dict.values.toSeq)
I get the following error when mapping it to the provided Schema:
Error while encoding: java.lang.RuntimeException: scala.Tuple2 is not a valid external
type for schema of struct<key6:string,key7:array<struct<key8:string>>>
The program I'm basing this off of uses this exact Schema in PySpark and the DataFrame is created just fine; do the StructTypes work differently between PySpark and Spark Scala? What would be the correct Schema to make so that nested arrays in the Schema are possible?
You can use the from_json function: pass the JSON string column as input together with a schema, then use printSchema to verify the resulting structure. (A schema can be built from one sample JSON string taken from the data; see the sketch below.)
val df1 = df.withColumn("json", from_json(col("json_string_column"), jsonSchema))
df1.printSchema()
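One way to build the jsonSchema used above without writing it by hand is to infer it from a representative sample record (a sketch; the sample string and column name are placeholders, not from the original post):
import org.apache.spark.sql.functions.{col, from_json, schema_of_json}
// Spark 2.4+: derive the schema from a sample JSON literal
val sampleJson = """{"key1":"value1","key5":{"key6":"value6","key7":[{"key8":"value8"}]}}"""
val jsonSchema = schema_of_json(sampleJson)
// alternative: infer a StructType by reading the sample as a one-row dataset
// val jsonSchema = spark.read.json(Seq(sampleJson).toDS).schema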
There is no difference in creating the DataFrame schema between the two languages. If you want to create a dataframe with a specified schema, the correct way to build the Rows according to that schema is something like:
val outputDictSO =
  Row("value1", "value2", "value3", "value4",
    Row("value6", Array(
      Row("value8"))))
val df0 =
  spark.createDataFrame(
    spark.sparkContext.parallelize(Seq(outputDictSO)), targetSchemaSO)
df0.printSchema()
The dataframe's schema is as expected:
root
|-- key1: string (nullable = true)
|-- key2: string (nullable = true)
|-- key3: string (nullable = true)
|-- key4: string (nullable = true)
|-- key5: struct (nullable = true)
| |-- key6: string (nullable = true)
| |-- key7: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- key8: string (nullable = true)
If you look at the content of the dataframe:
df0.show()
Gives:
+------+------+------+------+--------------------+
| key1| key2| key3| key4| key5|
+------+------+------+------+--------------------+
|value1|value2|value3|value4|[value6, [[value8]]]|
+------+------+------+------+--------------------+
You can select a nested key:
df0.select("key5.key6").show()
Spark returns:
+------+
| key6|
+------+
|value6|
+------+
The error is because the specified schema and the data do not match. If you take your data:
val outputDictSOMap = Map(
  "key1" -> "value1",
  "key2" -> "value2",
  "key3" -> "value3",
  "key4" -> "value4",
  "key5" -> (
    "key6" -> "value6",
    "key7" -> (
      "key8" -> "value8",
      "key9" -> "value9"
    )
  ))
And convert it to json:
import org.json4s.jackson.Serialization
implicit val formats = org.json4s.DefaultFormats
val json = Serialization.write(outputDictSOMap)
val df1 = spark.read.json(Seq(json).toDS)
df1.printSchema()
The resulting schema is:
root
|-- key1: string (nullable = true)
|-- key2: string (nullable = true)
|-- key3: string (nullable = true)
|-- key4: string (nullable = true)
|-- key5: struct (nullable = true)
| |-- _1: struct (nullable = true)
| | |-- key6: string (nullable = true)
| |-- _2: struct (nullable = true)
| | |-- key7: struct (nullable = true)
| | | |-- _1: struct (nullable = true)
| | | | |-- key8: string (nullable = true)
| | | |-- _2: struct (nullable = true)
| | | | |-- key9: string (nullable = true)
And that is the reason for your error.
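For completeness, here is a sketch of the same data expressed with nested Maps and a List instead of tuples; run through the same json4s serialization, it yields the structure the target schema expects:
val outputDictFixed = Map(
  "key1" -> "value1",
  "key2" -> "value2",
  "key3" -> "value3",
  "key4" -> "value4",
  "key5" -> Map(
    "key6" -> "value6",
    "key7" -> List(
      Map("key8" -> "value8"),
      Map("key8" -> "value9")
    )
  )
)
val fixedJson = Serialization.write(outputDictFixed)
val df2 = spark.read.json(Seq(fixedJson).toDS)
df2.printSchema()
// key5 should now come out as struct<key6:string, key7:array<struct<key8:string>>>, matching targetSchemaSO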

Pyspark: store dataframe as JSON in MySQL table column

I have a Spark dataframe which needs to be stored in JSON format in a MySQL table as a column value (along with other string-type values in their respective columns).
Something similar to this:
column 1 | column 2
val 1    | [{"name":"Peter G", "age":44, "city":"Quahog"}, {"name":"John G", "age":30, "city":"Quahog"}, {...}, ...]
val 1    | [{"name":"Stewie G", "age":3, "city":"Quahog"}, {"name":"Ron G", "age":41, "city":"Quahog"}, {...}, ...]
...      | ...
Here [{"name":"Peter G", "age":44, "city":"Quahog"}, {"name":"John G", "age":30, "city":"Quahog"}, {...}, ...] is the result of one dataframe, stored as a list of dicts.
I can do:
str(dataframe_object.toJSON().collect())
and then store it in the MySQL table column, but this would mean loading the entire data into memory before storing it in the MySQL table. Is there a better/more optimal way to achieve this, i.e. without using collect()?
I suppose you can convert your StructType column into a JSON string, then use df.write.jdbc to write to MySQL. As long as your MySQL table has that column as a JSON type, you should be all set.
# My sample data
{
  "c1": "val1",
  "c2": [
    { "name": "N1", "age": 100 },
    { "name": "N2", "age": 101 }
  ]
}
from pyspark.sql import functions as F
from pyspark.sql import types as T
schema = T.StructType([
    T.StructField('c1', T.StringType()),
    T.StructField('c2', T.ArrayType(T.StructType([
        T.StructField('name', T.StringType()),
        T.StructField('age', T.IntegerType())
    ])))
])
df = spark.read.json('a.json', schema=schema, multiLine=True)
df.show(10, False)
df.printSchema()
+----+----------------------+
|c1 |c2 |
+----+----------------------+
|val1|[{N1, 100}, {N2, 101}]|
+----+----------------------+
root
|-- c1: string (nullable = true)
|-- c2: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- name: string (nullable = true)
| | |-- age: integer (nullable = true)
df.withColumn('j', F.to_json('c2')).show(10, False)
+----+----------------------+-------------------------------------------------+
|c1 |c2 |j |
+----+----------------------+-------------------------------------------------+
|val1|[{N1, 100}, {N2, 101}]|[{"name":"N1","age":100},{"name":"N2","age":101}]|
+----+----------------------+-------------------------------------------------+
EDIT #1:
# My sample data
{
  "c1": "val1",
  "c2": "[{ \"name\": \"N1\", \"age\": 100 },{ \"name\": \"N2\", \"age\": 101 }]"
}
from pyspark.sql import functions as F
from pyspark.sql import types as T
df = spark.read.json('a.json', multiLine=True)
df.show(10, False)
df.printSchema()
+----+-----------------------------------------------------------+
|c1 |c2 |
+----+-----------------------------------------------------------+
|val1|[{ "name": "N1", "age": 100 },{ "name": "N2", "age": 101 }]|
+----+-----------------------------------------------------------+
root
|-- c1: string (nullable = true)
|-- c2: string (nullable = true)
schema = T.ArrayType(T.StructType([
    T.StructField('name', T.StringType()),
    T.StructField('age', T.IntegerType())
]))
df2 = df.withColumn('j', F.from_json('c2', schema))
df2.show(10, False)
df2.printSchema()
+----+-----------------------------------------------------------+----------------------+
|c1 |c2 |j |
+----+-----------------------------------------------------------+----------------------+
|val1|[{ "name": "N1", "age": 100 },{ "name": "N2", "age": 101 }]|[{N1, 100}, {N2, 101}]|
+----+-----------------------------------------------------------+----------------------+
root
|-- c1: string (nullable = true)
|-- c2: string (nullable = true)
|-- j: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- name: string (nullable = true)
| | |-- age: integer (nullable = true)

Add new column to dataframe with modified schema or drop nested columns of array type in Scala

JSON:
{"ID": "500", "Data": [{"field2": 308, "field3": 346, "field1": 40.36582609126494, "field7": 3, "field4": 1583057346.0, "field5": -80.03243596528726, "field6": 16.0517578125, "field8": 5}, {"field2": 307, "field3": 348, "field1": 40.36591421686625, "field7": 3, "field4": 1583057347.0, "field5": -80.03259684675493, "field6": 16.234375, "field8": 5}]}
Schema:
val MySchema: StructType =
  StructType( Array(
    StructField("ID",StringType,true),
    StructField("Data", ArrayType(
      StructType( Array(
        StructField("field1",DoubleType,true),
        StructField("field2",LongType,true),
        StructField("field3",LongType,true),
        StructField("field4",DoubleType,true),
        StructField("field5",DoubleType,true),
        StructField("field6",DoubleType,true),
        StructField("field7",LongType,true),
        StructField("field8",LongType,true)
      )),true),true)))
Load the JSON into a dataframe:
val MyDF = spark.readStream
  .schema(MySchema)
  .json(input)
where 'input' is the path to the files containing the above JSON.
How can I add a new column "Data_New" to the above dataframe 'MyDF' with the following schema?
val Data_New_Schema: StructType =
  StructType( Array(
    StructField("Data", ArrayType(
      StructType( Array(
        StructField("field1",DoubleType,true),
        StructField("field4",DoubleType,true),
        StructField("field5",DoubleType,true),
        StructField("field6",DoubleType,true)
      )),true),true)))
Please note that a huge volume of such JSON files will be loaded from the source, so performing an explode followed by a collect_list will crash the driver.
You can try one of the following two methods:
for Spark 2.4+, use transform:
import org.apache.spark.sql.functions._
val df_new = df.withColumn(
  "Data_New",
  expr("struct(transform(Data, x -> (x.field1 as f1, x.field4 as f4, x.field5 as f5, x.field6 as f6)))").cast(Data_New_Schema)
)
scala> df_new.printSchema
root
|-- ID: string (nullable = true)
|-- Data: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- field1: double (nullable = true)
| | |-- field2: long (nullable = true)
| | |-- field3: long (nullable = true)
| | |-- field4: double (nullable = true)
| | |-- field5: double (nullable = true)
| | |-- field6: double (nullable = true)
| | |-- field7: long (nullable = true)
| | |-- field8: long (nullable = true)
|-- Data_New: struct (nullable = false)
| |-- Data: array (nullable = true)
| | |-- element: struct (containsNull = false)
| | | |-- field1: double (nullable = true)
| | | |-- field4: double (nullable = true)
| | | |-- field5: double (nullable = true)
| | | |-- field6: double (nullable = true)
Notice that nullable = false on the top-level schema of Data_New. If you want to make it true, add the nullif function to the SQL expression: nullif(struct(transform(Data, x -> (...))), null), or, more efficiently, if(true, struct(transform(Data, x -> (...))), null).
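Spelled out, that workaround is just the original expression wrapped in if(true, ..., null) (a sketch combining the pieces above):
val df_new_nullable = df.withColumn(
  "Data_New",
  expr("if(true, struct(transform(Data, x -> (x.field1 as f1, x.field4 as f4, x.field5 as f5, x.field6 as f6))), null)").cast(Data_New_Schema)
)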
Prior to Spark 2.4, use from_json + to_json:
val df_new = df.withColumn("Data_New", from_json(to_json(struct('Data)), Data_New_Schema))
Edit: per the comment, if you want Data_New to be an array of structs, just remove the struct function, for example:
val Data_New_Schema: ArrayType = ArrayType(
  StructType( Array(
    StructField("field1",DoubleType,true),
    StructField("field4",DoubleType,true),
    StructField("field5",DoubleType,true),
    StructField("field6",DoubleType,true)
  )),true)
// if you need `containsNull=true`, then cast the above Type definition
val df_new = df.withColumn("Data_New", expr("transform(Data, x -> (x.field1 as field1, x.field4 as field4, x.field5 as field5, x.field6 as field6))"))
Or
val df_new = df.withColumn("Data_New", from_json(to_json('Data), Data_New_Schema))

Merge Dataframes With Differents Schemas - Scala Spark

I'm working on transforming a JSON into a Data Frame. In the first step I create an Array of Data Frames, and after that I make a Union. But I have a problem doing a Union on JSONs with different Schemas.
I can do it if the JSONs have the same Schema, as you can see in this other question: Parse JSON root in a column using Spark-Scala.
I'm working with the following data:
val exampleJsonDifferentSchema = spark.createDataset(
  """
  {"ITEM1512":
    {"name":"Yin",
     "address":{"city":"Columbus",
       "state":"Ohio"},
     "age":28 },
   "ITEM1518":
    {"name":"Yang",
     "address":{"city":"Working",
       "state":"Marc"}
    },
   "ITEM1458":
    {"name":"Yossup",
     "address":{"city":"Macoss",
       "state":"Microsoft"},
     "age":28
    }
  }""" :: Nil)
As you can see, the difference is that one Data Frame doesn't have age.
val itemsExampleDiff = spark.read.json(exampleJsonDifferentSchema)
itemsExampleDiff.show(false)
itemsExampleDiff.printSchema
+---------------------------------+---------------------------+-----------------------+
|ITEM1458 |ITEM1512 |ITEM1518 |
+---------------------------------+---------------------------+-----------------------+
|[[Macoss, Microsoft], 28, Yossup]|[[Columbus, Ohio], 28, Yin]|[[Working, Marc], Yang]|
+---------------------------------+---------------------------+-----------------------+
root
|-- ITEM1458: struct (nullable = true)
| |-- address: struct (nullable = true)
| | |-- city: string (nullable = true)
| | |-- state: string (nullable = true)
| |-- age: long (nullable = true)
| |-- name: string (nullable = true)
|-- ITEM1512: struct (nullable = true)
| |-- address: struct (nullable = true)
| | |-- city: string (nullable = true)
| | |-- state: string (nullable = true)
| |-- age: long (nullable = true)
| |-- name: string (nullable = true)
|-- ITEM1518: struct (nullable = true)
| |-- address: struct (nullable = true)
| | |-- city: string (nullable = true)
| | |-- state: string (nullable = true)
| |-- name: string (nullable = true)
My solution now is the following code, where I make an array of DataFrames:
val columns: Array[String] = itemsExample.columns
var arrayOfExampleDFs: Array[DataFrame] = Array()
for (col_name <- columns) {
  val temp = itemsExample.select(lit(col_name).as("Item"), col(col_name).as("Value"))
  arrayOfExampleDFs = arrayOfExampleDFs :+ temp
}
val jsonDF = arrayOfExampleDFs.reduce(_ union _)
But since I have JSONs with different Schemas, when I reduce with a union it fails because the Data Frames need to have the same Schema. In fact, I get the following error:
org.apache.spark.sql.AnalysisException: Union can only be performed on
tables with the compatible column types.
I'm trying to do something similar to what I've found in this question: How to perform union on two DataFrames with different amounts of columns in spark?
Specifically that part:
val cols1 = df1.columns.toSet
val cols2 = df2.columns.toSet
val total = cols1 ++ cols2 // union
def expr(myCols: Set[String], allCols: Set[String]) = {
  allCols.toList.map(x => x match {
    case x if myCols.contains(x) => col(x)
    case _ => lit(null).as(x)
  })
}
But I can't build the set of columns because I need to capture the columns dynamically, both the combined ones and the individual ones. I can only do something like this:
for (i <- 0 until arrayOfExampleDFs.length - 1) {
  val cols1 = arrayOfExampleDFs(i).select("Value").columns.toSet
  val cols2 = arrayOfExampleDFs(i + 1).select("Value").columns.toSet
  val total = cols1 ++ cols2
  arrayOfExampleDFs(i).select("Value").printSchema()
  print(total)
}
So, how could I write a function that does this union dynamically?
Update: expected output
In this case, this is the expected Data Frame and Schema:
+--------+---------------------------------+
|Item |Value |
+--------+---------------------------------+
|ITEM1458|[[Macoss, Microsoft], 28, Yossup]|
|ITEM1512|[[Columbus, Ohio], 28, Yin] |
|ITEM1518|[[Working, Marc], null, Yang] |
+--------+---------------------------------+
root
|-- Item: string (nullable = false)
|-- Value: struct (nullable = true)
| |-- address: struct (nullable = true)
| | |-- city: string (nullable = true)
| | |-- state: string (nullable = true)
| |-- age: long (nullable = true)
| |-- name: string (nullable = true)
Here is one possible solution which creates a common schema for all the dataframes by adding the age column when it is not found:
import org.apache.spark.sql.functions.{col, lit, struct}
import org.apache.spark.sql.types.{LongType, StructField, StructType}
....
for (col_name <- columns) {
  val currentDf = itemsExampleDiff.select(col(col_name))
  // try to identify if age field is present
  val hasAge = currentDf.schema.fields(0)
    .dataType
    .asInstanceOf[StructType]
    .fields
    .contains(StructField("age", LongType, true))
  val valueCol = hasAge match {
    // if not construct a new value column
    case false => struct(
      col(s"${col_name}.address"),
      lit(null).cast("bigint").as("age"),
      col(s"${col_name}.name")
    )
    case true => col(col_name)
  }
  arrayOfExampleDFs = arrayOfExampleDFs :+ currentDf.select(lit(col_name).as("Item"), valueCol.as("Value"))
}
val jsonDF = arrayOfExampleDFs.reduce(_ union _)
// +--------+---------------------------------+
// |Item |Value |
// +--------+---------------------------------+
// |ITEM1458|[[Macoss, Microsoft], 28, Yossup]|
// |ITEM1512|[[Columbus, Ohio], 28, Yin] |
// |ITEM1518|[[Working, Marc],, Yang] |
// +--------+---------------------------------+
Analysis: probably the most demanding part is finding out whether age is present or not. For the lookup we use the df.schema.fields property, which allows us to dig into the internal schema of each column.
When age is not found, we regenerate the column using a struct:
struct(
  col(s"${col_name}.address"),
  lit(null).cast("bigint").as("age"),
  col(s"${col_name}.name")
)
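To confirm the unioned dataframe lines up with the expected output in the question, a quick check on the jsonDF built above should print the expected schema:
jsonDF.printSchema
// root
//  |-- Item: string (nullable = false)
//  |-- Value: struct (nullable = true)
//  |    |-- address: struct (nullable = true)
//  |    |    |-- city: string (nullable = true)
//  |    |    |-- state: string (nullable = true)
//  |    |-- age: long (nullable = true)
//  |    |-- name: string (nullable = true)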

Convert spark Dataframe with schema to dataframe of json String

I have a Dataframe like this:
+--+--------+--------+----+-------------+------------------------------+
|id|name |lastname|age |timestamp |creditcards |
+--+--------+--------+----+-------------+------------------------------+
|1 |michel |blanc |35 |1496756626921|[[hr6,3569823], [ee3,1547869]]|
|2 |peter |barns |25 |1496756626551|[[ye8,4569872], [qe5,3485762]]|
+--+--------+--------+----+-------------+------------------------------+
where the schema of my df is like below:
root
|-- id: string (nullable = true)
|-- name: string (nullable = true)
|-- lastname: string (nullable = true)
|-- age: string (nullable = true)
|-- timestamp: string (nullable = true)
|-- creditcards: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- id: string (nullable = true)
| | |-- number: string (nullable = true)
I would like to convert each line to a JSON string, following my schema. So this dataframe would have one string column containing the JSON.
The first line should be like this:
{
  "id":"1",
  "name":"michel",
  "lastname":"blanc",
  "age":"35",
  "timestamp":"1496756626921",
  "creditcards":[
    {
      "id":"hr6",
      "number":"3569823"
    },
    {
      "id":"ee3",
      "number":"1547869"
    }
  ]
}
and the second line of the dataframe should be like this:
{
  "id":"2",
  "name":"peter",
  "lastname":"barns",
  "age":"25",
  "timestamp":"1496756626551",
  "creditcards":[
    {
      "id":"ye8",
      "number":"4569872"
    },
    {
      "id":"qe5",
      "number":"3485762"
    }
  ]
}
My goal is not to write the dataframe to a JSON file. My goal is to convert df1 into a second df2 in order to push each JSON line of df2 to a Kafka topic.
I have this code to create the dataframe:
val line1 = """{"id":"1","name":"michel","lastname":"blanc","age":"35","timestamp":"1496756626921","creditcards":[{"id":"hr6","number":"3569823"},{"id":"ee3","number":"1547869"}]}"""
val line2 = """{"id":"2","name":"peter","lastname":"barns","age":"25","timestamp":"1496756626551","creditcards":[{"id":"ye8","number":"4569872"}, {"id":"qe5","number":"3485762"}]}"""
val rdd = sc.parallelize(Seq(line1, line2))
val df = sqlContext.read.json(rdd)
df.show(false)
df.printSchema
Do you have any idea?
If all you need is a single-column DataFrame/Dataset with each column value representing each row of the original DataFrame in JSON, you can simply apply toJSON to your DataFrame, as in the following:
df.show
// +---+------------------------------+---+--------+------+-------------+
// |age|creditcards |id |lastname|name |timestamp |
// +---+------------------------------+---+--------+------+-------------+
// |35 |[[hr6,3569823], [ee3,1547869]]|1 |blanc |michel|1496756626921|
// |25 |[[ye8,4569872], [qe5,3485762]]|2 |barns |peter |1496756626551|
// +---+------------------------------+---+--------+------+-------------+
val dsJson = df.toJSON
// dsJson: org.apache.spark.sql.Dataset[String] = [value: string]
dsJson.show
// +--------------------------------------------------------------------------+
// |value |
// +--------------------------------------------------------------------------+
// |{"age":"35","creditcards":[{"id":"hr6","number":"3569823"},{"id":"ee3",...|
// |{"age":"25","creditcards":[{"id":"ye8","number":"4569872"},{"id":"qe5",...|
// +--------------------------------------------------------------------------+
[UPDATE]
To add name as an additional column, you can extract it from the JSON column using from_json:
val result = dsJson.withColumn("name", from_json($"value", df.schema)("name"))
result.show
// +--------------------+------+
// | value| name|
// +--------------------+------+
// |{"age":"35","cred...|michel|
// |{"age":"25","cred...| peter|
// +--------------------+------+
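Since the stated goal is to push each JSON line to a Kafka topic, here is a minimal batch-write sketch using the dsJson above (this assumes the spark-sql-kafka-0-10 package is on the classpath; the broker address and topic name are placeholders):
dsJson
  .write
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("topic", "my_topic")
  .save()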
For that, you can directly convert your dataframe to a Dataset of JSON strings using
val jsonDataset: Dataset[String] = df.toJSON
You can convert it into a dataframe using
val jsonDF: DataFrame = jsonDataset.toDF
Here the JSON fields will be alphabetically ordered, so the output of
jsonDF.show(false)
will be
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|value |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|{"age":"35","creditcards":[{"id":"hr6","number":"3569823"},{"id":"ee3","number":"1547869"}],"id":"1","lastname":"blanc","name":"michel","timestamp":"1496756626921"}|
|{"age":"25","creditcards":[{"id":"ye8","number":"4569872"},{"id":"qe5","number":"3485762"}],"id":"2","lastname":"barns","name":"peter","timestamp":"1496756626551"} |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------+