A 'find' function in Clojurescript that matches all key/values - clojurescript

I'm interested in creating a function like the one shown below using just core functions in Clojurescript.
var props = [
  ["eyes", "brown"],
  ["age", "20"]
];
var people = [
  {eyes: "brown", age: "20", name: "Dick"},
  {eyes: "green", age: "30", name: "Tom"},
  {eyes: "blue", age: "20", name: "Sally"},
  {eyes: "brown", age: "20", name: "Harry"}
];
find = function (items, keyvals) {
  var results = items.slice(0);
  keyvals.forEach(function (keyval) {
    results = results.reduce(function (memo, item) {
      if (item.hasOwnProperty(keyval[0]) &&
          item[keyval[0]] === keyval[1]) {
        memo.push(item);
      }
      return memo;
    }, []);
  });
  return results;
};
document.body.innerHTML = JSON.stringify(find(people, props));
So far I have:
(defn find-all
  ([items k v]
   (filter #(= (k %) v) items))
  ([items k v & keyvals]
   (into [k v] keyvals)))
The second arity above should call (find-all items k v) for each pair in keyvals, store the resulting matches, and then feed those into the next call. items would be a vector of maps rather than an array of objects.
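For reference, a minimal (untested) sketch of that idea, reducing the remaining key/value pairs over the matches from the previous step, might look like:
(defn find-all
  ([items k v]
   (filter #(= (k %) v) items))
  ([items k v & keyvals]
   (reduce (fn [matches [k v]]
             (find-all matches k v))
           (find-all items k v)
           (partition 2 keyvals))))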

Take a look at (source clojure.set/index) for inspiration.
(def props {:eyes "brown", :age "20"})
(def people #{{:eyes "brown", :age "20", :name "Dick"}
              {:eyes "green", :age "30", :name "Tom"}
              {:eyes "blue", :age "20", :name "Sally"}
              {:eyes "brown", :age "20", :name "Harry"}})
((clojure.set/index people (keys props)) props)
;=> #{{:eyes "brown", :age "20", :name "Harry"}
;     {:eyes "brown", :age "20", :name "Dick"}}
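If you would rather have a single function that takes the props map directly, one way (again just a sketch) is to wrap the index lookup and fall back to an empty set when nothing matches:
(defn find-all [items props]
  (get (clojure.set/index items (keys props)) props #{}))
(find-all people props) ; => the two people with brown eyes and age "20"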

Related

How to group nested list suggestions in the PrimeNG autocomplete (Angular 8)

I am trying to group the autocomplete suggestions and would like to render them with PrimeNG.
How can we add a custom template in PrimeNG?
My data:
data = [
  {
    "id": "m1", "name": "menu1", "val": "D",
    "items": [
      {
        "id": "d1", "name": "datanested1", "val": "D",
        "items": [
          { "id": "1", "name": "direct Data", "val": "E" },
          { "id": "2", "name": "test", "val": "E" }
        ]
      }
    ]
  },
  {
    "id": "d2", "name": "menu2", "val": "D",
    "items": [
      { "id": "21", "name": "test21", "val": "E" },
      { "id": "22", "name": "test23", "val": "E" }
    ]
  },
  {
    "id": "d3", "name": "menu3", "val": "D",
    "items": [
      { "id": "31", "name": "test data 3", "val": "E" },
      { "id": "32", "name": "test data 4", "val": "E" }
    ]
  }
]
Is there any other libraries available in angular 8 which support this?
I would like to achieve something like this when users start searching in autocomplete...
Menu1 - header
datanested1 -subheader
direct Data -values
test -values
Menu2 - header
test21-values
test23-values
Menu3 - header
test data 3-values
test data 4-values
1. if the user types "direct" in the input box...
Menu1 - header
datanested1 -subheader
direct Data -values
2. if the user types "data" in the input box...
Menu3 - header
test data 3-values
test data 4-values
3. if the user types "menu" in the input box...
Menu1 - header
datanested1 -subheader
direct Data -values
test -values
Menu2 - header
test21-values
test23-values
Menu3 - header
test data 3-values
test data 4-values
I have tried the following example in stackblitz.
https://stackblitz.com/edit/primeng-7-1-2-qtsnpm
In the approach below we will reduce the array to a simple structure:
[
{
"id": "m1",
"name": "menu1",
"val": "D",
"search": ["m1", "d1", "1", "2", "menu1", "datanested1", "direct Data", "test"],
"depth": 2
},
{
"id": "d1",
"name": "datanested1",
"val": "D",
"search": ["d1", "1", "2", "datanested1", "direct Data", "test"],
"depth": 1
},
{
"id": "1",
"name": "direct Data",
"val": "E",
"search": ["1", "direct Data"
],
"depth": 0
},
...
]
The idea is this: we use depth to format the text and search to match what the user types.
maxDepth = 0;
reducedArray = (arr, depth = 0, parentSearch = []) => {
this.maxDepth = Math.max(this.maxDepth, depth);
if (!arr) {
return [];
}
return arr.reduce((prev, { items, ...otherProps }) => {
// const depth = this.findDepth({ items, ...otherProps });
const search = [
...this.getProps({ items, ...otherProps }, "id"),
...this.getProps({ items, ...otherProps }, "name")
];
const newParentSearch = [...parentSearch, otherProps.name, otherProps.id];
return [
...prev,
{ ...otherProps, search, depth, parentSearch },
...this.reducedArray(items, depth + 1, newParentSearch)
];
}, []);
};
getProps = (item, prop) => {
if (!item.items) {
return [item[prop]];
} else {
return [
item[prop],
...item.items.map(x => this.getProps(x, prop))
].flat();
}
};
We can apply styles like the ones below:
getStyle(depth) {
return {
color: depth === 0 ? "darkblue" : depth === 1 ? "green" : "black",
paddingLeft: depth * 7 + "px",
fontWeight: 800 - depth * 200
};
}
I will use reactive programming, so I will convert the object to an Observable using the of operator:
data$ = of(this.reducedArray(this.data));
filteredData$ = combineLatest([this.data$, this.filterString$]).pipe(
map(([data, filterString]) =>
data.filter(
({ search, parentSearch }) =>
!![...search, ...parentSearch].find(x => x.includes(filterString))
)
)
);
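Note that filterString$ is not shown above; it is assumed to be a Subject on the component that holds the current query text, for example:
import { BehaviorSubject } from "rxjs";

// assumed field on the component: the stream of query text typed by the user;
// the empty initial value lets combineLatest emit before anything is typed
filterString$ = new BehaviorSubject<string>("");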
We are now done; all that remains is to update the HTML:
<p-autoComplete [(ngModel)]="cdsidvalue" [suggestions]="filteredData$ | async"
(completeMethod)="filterString$.next($event.query)" field="name" [size]="16" placeholder="Menu" [minLength]="1">
<ng-template let-menu pTemplate="item">
<span
[ngStyle]="getStyle(menu.depth)" >{{ menu.name }}</span>
</ng-template>
</p-autoComplete>
Demo Here
Update
Since we are using reactive programming, refactoring to accept HTTP requests is quite easy: simply replace the observable with an HTTP request.
data$ = this.http.get<any[]>("my/api/url");
filteredData$ = combineLatest([this.data$, this.filterString$]).pipe(
map(([data, filterString]) =>
this.reducedArray(data).filter(
({ search, parentSearch }) =>
!![...search, ...parentSearch].find(x => x.includes(filterString))
)
)
);
See this update in action
I have also updated the HTML to add a loading message; we do not want to display the form without data.
@Patel, yes, you can add a group header in PrimeNG.
.ts
adminentrylistSearch = [
{
Grp_Header:'THIS IS Header 1' ,
cdsid: "0121",
firstname: "FirstName1",
lastname: "LastName1",
fullname: "LastName1, FirstName1"
},
{
cdsid: "0122",
firstname: "FirstName1",
lastname: "LastName2",
fullname: "LastName2, FirstName2"
},
{
cdsid: "0123",
firstname: "FirstName3",
lastname: "LastName3",
fullname: "LastName3, FirstName3"
},
{
Grp_Header:'THIS IS Header 2',
cdsid: "0124",
firstname: "FirstName4",
lastname: "LastName4",
fullname: "LastName4, FirstName4"
},
{
cdsid: "0125",
firstname: "FirstName5",
lastname: "LastName5",
fullname: "LastName5, FirstName5"
},
{
cdsid: "0126",
firstname: "FirstName6",
lastname: "LastName6",
fullname: "LastName6, FirstName6"
},
{
cdsid: "0127",
firstname: "FirstName7",
lastname: "LastName7",
fullname: "LastName7, FirstName7"
}
];
.html
<p-autoComplete [(ngModel)]="cdsidvalue" [suggestions]="filteredCountriesSingle" [dropdown]="true"
(completeMethod)="filterCountrySingle($event)" field="firstname" [size]="16" placeholder="CDSID">
<ng-template let-adminentrylistSearch [ngIf]="adminentrylistSearch.index" pTemplate="text">
<div class='unclickable-header'>
<span [style.font-weight]="adminentrylistSearch.Grp_Header ? 'bold' : null"> {{adminentrylistSearch.Grp_Header ? adminentrylistSearch.Grp_Header : adminentrylistSearch.firstname}} </span>
</div>
</ng-template>
</p-autoComplete>
Check Result here...
Demo
In your PrimeNG version (7.1.2) grouped autocomplete is not supported, but it is supported in the latest version (11.3.0). See the autocomplete showcase at https://www.primefaces.org/primeng/showcase/#/autocomplete

Deleting deep in an array with immutablejs

I am trying to perform a complex deletion with ImmutableJS without converting to plain JS in the middle of the process, which is what I always seem to end up doing.
In the following structure, I would like to delete the second {y: } object from each data array:
series: [
{
name:"1",
data: [
{y: 1},
{y: 2},
{y: 3}
]
},
{
name:"2",
data: [
{y: 1},
{y: 2},
{y: 3}
]
},
{
name:"3",
data: [
{y: 1},
{y: 2},
{y: 3}
]
}
]
So that I would get this :
series: [
{
name:"1",
data: [
{y: 1},
{y: 3}
]
},
{
name:"2",
data: [
{y: 1},
{y: 3}
]
},
{
name:"3",
data: [
{y: 1},
{y: 3}
]
}
]
Can someone point me in the right direction for doing this with ImmutableJS? If I just use plain JavaScript arrays, I can arrive at a really clean solution that looks like this:
series.forEach(function (elem) {
let data = elem.data;
data.splice(index, 1);
});
I am just hoping that immutable has an equally clean looking solution.
The doc for removeIn doesn't go deep enough :
https://facebook.github.io/immutable-js/docs/#/removeIn
You're modifying every element of series so I think you're on the right track with .map(). Then you want to use .removeIn to deeply remove something.
let seriesList = Immutable.fromJS(series)
seriesList = seriesList.map(elem =>
elem.removeIn(['data', indexToRemove]));
// equivalent form with .update() instead of .removeIn()
seriesList = seriesList.map(elem =>
elem.update('data', data => data.remove(indexToRemove)));
I managed to do it using 'map' from Immutable List. This worked for me :
seriesList = Immutable.List(seriesList);
seriesList = seriesList.map(
elem => {
let data = elem.getIn(['data'])
data = data.remove(index)
elem = elem.setIn(['data'], data)
return elem
})
Anyone have something better?

SPARK : How to create aggregate from RDD[Row] in Scala

How do I create a List/Map inside an RDD/DataFrame so that I can compute the aggregate?
I have a file where each row is a JSON object :
{
itemId: 1122334,
language: [
{
name: [
"US", "FR"
],
value: [
"english", "french"
]
},
{
name: [
"IND"
],
value: [
"hindi"
]
}
],
country: [
{
US: [
{
startTime: 2016-06-06T17:39:35.000Z,
endTime: 2016-07-28T07:00:00.000Z
}
],
CANADA: [
{
startTime: 2016-06-06T17:39:35.000Z,
endTime: 2016-07-28T07:00:00.000Z
}
],
DENMARK: [
{
startTime: 2016-06-06T17:39:35.000Z,
endTime: 2016-07-28T07:00:00.000Z
}
],
FRANCE: [
{
startTime: 2016-08-06T17:39:35.000Z,
endTime: 2016-07-28T07:00:00.000Z
}
]
}
]
},
{
itemId: 1122334,
language: [
{
name: [
"US", "FR"
],
value: [
"english", "french"
]
},
{
name: [
"IND"
],
value: [
"hindi"
]
}
],
country: [
{
US: [
{
startTime: 2016-06-06T17:39:35.000Z,
endTime: 2016-07-28T07:00:00.000Z
}
],
CANADA: [
{
startTime: 2016-07-06T17:39:35.000Z,
endTime: 2016-07-28T07:00:00.000Z
}
],
DENMARK: [
{
startTime: 2016-06-06T17:39:35.000Z,
endTime: 2016-07-28T07:00:00.000Z
}
],
FRANCE: [
{
startTime: 2016-08-06T17:39:35.000Z,
endTime: 2016-07-28T07:00:00.000Z
}
]
}
]
}
I have matching POJO which gets me the values from the JSON.
import com.mapping.data.model.MappingUtils
import com.mapping.data.model.CountryInfo
import scala.collection.mutable.ListBuffer
val mappingPath = "s3://.../"
val timeStamp = "2016-06-06T17:39:35.000Z"
val endTimeStamp = "2016-06-07T17:39:35.000Z"
val COUNTRY_US = "US"
val COUNTRY_CANADA = "CANADA"
val COUNTRY_DENMARK = "DENMARK"
val COUNTRY_FRANCE = "FRANCE"
val input = sc.textFile(mappingPath)
The input is a list of JSON strings, one per line, and I map each line to the POJO class CountryInfo using MappingUtils, which takes care of JSON parsing and conversion:
val MappingsList = input.map(x=> {
val countryInfo = MappingUtils.getCountryInfoString(x);
(countryInfo.getItemId(), countryInfo)
}).collectAsMap
MappingsList: scala.collection.Map[String,com.mapping.data.model.CountryInfo]
def showCountryInfo(x: Option[CountryInfo]) = x match {
case Some(s) => s
}
But I need to create a DF/RDD so that I can get the aggregates of country and language based on itemId.
In the given example, if the country's start time is not less than "2016-06-07T17:39:35.000Z" then the value will be zero.
Which format would be better for building the final aggregate JSON:
1. List ?
|-----itemId-------|----country-------------------|-----language---------------------|
| 1122334 | [US, CANADA,DENMARK] | [english,hindi,french] |
| 1122334 | [US,DENMARK] | [english] |
|------------------|------------------------------|----------------------------------|
2. Map ?
|-----itemId-------|----country---------------------------------|-----language---------------------|
| 1122334 | (US,2) (CANADA,1) (DENMARK,2) (FRANCE, 0) |(english,2) (hindi,1) (french,1) |
|.... |
|.... |
|.... |
|------------------|--------------------------------------------|----------------------------------|
I would like to create a final JSON which has the aggregate values, like:
{
itemId: "1122334",
country: {
"US" : 2,
"CANADA" : 1,
"DENMARK" : 2,
"FRANCE" : 0
},
language: {
"english" : 2,
"french" : 1,
"hindi" : 1
}
}
I tried List :
val events = sqlContext.sql("select itemId from EventList")
val itemList = events.map(row => {
  val itemId = row.getAs[String](1);
  val countryInfo = showCountryInfo(MappingsList.get(itemId));
  val country = new ListBuffer[String]()
  if (countryInfo.getCountry().getUS().get(0).getStartTime() < endTimeStamp) country += COUNTRY_US
  if (countryInfo.getCountry().getCANADA().get(0).getStartTime() < endTimeStamp) country += COUNTRY_CANADA
  if (countryInfo.getCountry().getDENMARK().get(0).getStartTime() < endTimeStamp) country += COUNTRY_DENMARK
  if (countryInfo.getCountry().getFRANCE().get(0).getStartTime() < endTimeStamp) country += COUNTRY_FRANCE
  val languageList = new ListBuffer[String]()
  countryInfo.getLanguages().foreach(x => languageList += x.getValue())
  Row(itemId, country.toList, languageList.toList)
})
and Map :
val itemList = events.map(row => {
  val itemId = row.getAs[String](1);
  val countryInfo = showCountryInfo(MappingsList.get(itemId));
  var country: Map[String, Int] = Map()
  country += (COUNTRY_US -> (if (countryInfo.getCountry().getUS().get(0).getStartTime() < endTimeStamp) 1 else 0))
  country += (COUNTRY_CANADA -> (if (countryInfo.getCountry().getCANADA().get(0).getStartTime() < endTimeStamp) 1 else 0))
  country += (COUNTRY_DENMARK -> (if (countryInfo.getCountry().getDENMARK().get(0).getStartTime() < endTimeStamp) 1 else 0))
  country += (COUNTRY_FRANCE -> (if (countryInfo.getCountry().getFRANCE().get(0).getStartTime() < endTimeStamp) 1 else 0))
  var language: Map[String, Int] = Map()
  countryInfo.getLanguages().foreach(x => language += (x.getValue -> 1))
  Row(itemId, country, language)
})
But both freeze in Zeppelin. Is there a better way to get the aggregates as JSON? Which construct, List or Map, is better for building the final aggregate?
It would be helpful if you restated your question in terms of Spark DataFrame/Dataset and Row; I understand that you ultimately want to use JSON but the details of the JSON input/output are a separate concern.
The function you are looking for is a Spark SQL aggregate function (see the built-in aggregate functions in the Spark SQL documentation). The functions collect_list and collect_set are related, but the function you need is not already implemented.
You can implement what I'll call count_by_value by deriving from org.apache.spark.sql.expressions.UserDefinedAggregateFunction. This will require some in-depth knowledge of how Spark SQL works.
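As a rough, untested sketch (assuming the column being counted is a plain string column; the class and names here are illustrative, not an existing Spark function), such a UDAF could look like this:
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

// counts how many times each distinct value occurs within a group
class CountByValue extends UserDefinedAggregateFunction {
  def inputSchema: StructType = StructType(StructField("value", StringType) :: Nil)
  def bufferSchema: StructType = StructType(StructField("counts", MapType(StringType, LongType)) :: Nil)
  def dataType: DataType = MapType(StringType, LongType)
  def deterministic: Boolean = true

  def initialize(buffer: MutableAggregationBuffer): Unit =
    buffer(0) = Map.empty[String, Long]

  def update(buffer: MutableAggregationBuffer, input: Row): Unit =
    if (!input.isNullAt(0)) {
      val counts = buffer.getAs[scala.collection.Map[String, Long]](0)
      val v = input.getString(0)
      buffer(0) = counts + (v -> (counts.getOrElse(v, 0L) + 1L))
    }

  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    val left = buffer1.getAs[scala.collection.Map[String, Long]](0)
    val right = buffer2.getAs[scala.collection.Map[String, Long]](0)
    buffer1(0) = right.foldLeft(left) { case (acc, (k, n)) =>
      acc + (k -> (acc.getOrElse(k, 0L) + n))
    }
  }

  def evaluate(buffer: Row): Any = buffer.getAs[scala.collection.Map[String, Long]](0)
}

val count_by_value = new CountByValue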
Once count_by_value is implemented, you can use it like this:
df.groupBy("itemId").agg(count_by_value(df("country")), count_by_value(df("language")))
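This would give one row per itemId with a map of value counts for each aggregated column, which you could then serialize with toJSON to get output in roughly the shape you describe.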

Kendo UI Chart - Visualize count of returned JSON fields

I want to display the counts of specific retrieved fields in my pie/donut chart.
I'm retrieving data via REST and the result is in JSON format. The source is a list of repeated values:
Example: from the following list, I'd like to present the number (count) of completed responses; a second chart could perhaps present the breakdown of responses by location.
var userResponse = [
{ User: "Bob Smith", Status: "Completed", Location: "USA" },
{ User: "Jim Smith", Status: "In-Progress", Location: "USA" },
{ User: "Jane Smith", Status: "Completed", Location: "USA" },
{ User: "Bill Smith", Status: "Completed", Location: "Japan" },
{ User: "Kate Smith", Status: "In-Progress", Location: "Japan" },
{ User: "Sam Smith", Status: "In-Progress", Location: "USA" },
]
My Initialization currently looks like this:
$('#targetChart').kendoChart({
dataSource: {
data: data.d.results,
group: {
field: "Location",
},
},
seriesDefaults: {
type: "donut",
},
series: [{
field: 'Id',
categoryField: 'Location',
}],
});
You can easily transform the data. Read it into a DataSource object grouping by location and filtering for completed only. Then fetch the data and create an array of the counts for each location:
var pieData = [];
var respDS = new kendo.data.DataSource({
data: userResponse,
group: {
field: "Location",
},
filter: {
field: "Status",
operator: "eq",
value: "Completed" },
});
respDS.fetch(function(){
var view = respDS.view();
for (var i=0; i<view.length; i++){
var item = {};
item.Location = view[i].value;
item.Count = view[i].items.length;
pieData.push(item);
}
});
You end up with:
[
{Location: "Japan", Count: 1},
{Location: "USA", Count: 2},
]
This can then be bound to a pie/donut.
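For example, the transformed array can drive the chart directly (a sketch; the element id and title are illustrative):
$("#completedChart").kendoChart({
  title: { text: "Completed responses by location" },
  seriesDefaults: { type: "donut" },
  series: [{
    data: pieData,             // the array built above
    field: "Count",            // slice size
    categoryField: "Location"  // slice label
  }],
  legend: { position: "bottom" }
});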
DEMO

Play [Scala]: How to flatten a JSON object

Given the following JSON...
{
"metadata": {
"id": "1234",
"type": "file",
"length": 395
}
}
... how do I convert it to
{
"metadata.id": "1234",
"metadata.type": "file",
"metadata.length": 395
}
Tx.
You can do this pretty concisely with Play's JSON transformers. The following is off the top of my head, and I'm sure it could be greatly improved on:
import play.api.libs.json._
val flattenMeta = (__ \ 'metadata).read[JsObject].flatMap(
_.fields.foldLeft((__ \ 'metadata).json.prune) {
case (acc, (k, v)) => acc andThen __.json.update(
Reads.of[JsObject].map(_ + (s"metadata.$k" -> v))
)
}
)
And then:
val json = Json.parse("""
{
"metadata": {
"id": "1234",
"type": "file",
"length": 395
}
}
""")
And:
scala> json.transform(flattenMeta).foreach(Json.prettyPrint _ andThen println)
{
"metadata.id" : "1234",
"metadata.type" : "file",
"metadata.length" : 395
}
Just change the path if you want to handle metadata fields somewhere else in the tree.
Note that using a transformer may be overkill here; see e.g. Pascal Voitot's input in this thread, where he proposes the following:
(json \ "metadata").as[JsObject].fields.foldLeft(Json.obj()) {
case (acc, (k, v)) => acc + (s"metadata.$k" -> v)
}
It's not as composable, and you'd probably not want to use .as in real code, but it may be all you need.
This is definitely not trivial, but it is possible by flattening recursively. I haven't tested this thoroughly, but it works with your example and some other basic ones I've come up with using arrays:
object JsFlattener {
def apply(js: JsValue): JsValue = flatten(js).foldLeft(JsObject(Nil))(_++_.as[JsObject])
def flatten(js: JsValue, prefix: String = ""): Seq[JsValue] = {
js.as[JsObject].fieldSet.toSeq.flatMap{ case (key, values) =>
values match {
case JsBoolean(x) => Seq(Json.obj(concat(prefix, key) -> x))
case JsNumber(x) => Seq(Json.obj(concat(prefix, key) -> x))
case JsString(x) => Seq(Json.obj(concat(prefix, key) -> x))
case JsArray(seq) => seq.zipWithIndex.flatMap{ case (x, i) => flatten(x, concat(prefix, key + s"[$i]")) }
case x: JsObject => flatten(x, concat(prefix, key))
case _ => Seq(Json.obj(concat(prefix, key) -> JsNull))
}
}
}
def concat(prefix: String, key: String): String = if(prefix.nonEmpty) s"$prefix.$key" else key
}
JsObject has the fieldSet method that returns a Set[(String, JsValue)], which I mapped, matched against the JsValue subclass, and continued consuming recursively from there.
You can use this example by passing a JsValue to apply:
val json = Json.parse("""
{
"metadata": {
"id": "1234",
"type": "file",
"length": 395
}
}
""")
JsFlattener(json)
We'll leave it as an exercise to the reader to make the code more beautiful looking.
Here's my take on this problem, based on @Travis Brown's second solution.
It recursively traverses the json and prefixes each key with its parent's key.
def flatten(js: JsValue, prefix: String = ""): JsObject = js.as[JsObject].fields.foldLeft(Json.obj()) {
case (acc, (k, v: JsObject)) => {
if(prefix.isEmpty) acc.deepMerge(flatten(v, k))
else acc.deepMerge(flatten(v, s"$prefix.$k"))
}
case (acc, (k, v)) => {
if(prefix.isEmpty) acc + (k -> v)
else acc + (s"$prefix.$k" -> v)
}
}
which turns this:
{
"metadata": {
"id": "1234",
"type": "file",
"length": 395
},
"foo": "bar",
"person": {
"first": "peter",
"last": "smith",
"address": {
"city": "Ottawa",
"country": "Canada"
}
}
}
into this:
{
"metadata.id": "1234",
"metadata.type": "file",
"metadata.length": 395,
"foo": "bar",
"person.first": "peter",
"person.last": "smith",
"person.address.city": "Ottawa",
"person.address.country": "Canada"
}
@Trev has the best solution here, completely generic and recursive, but it's missing a case for array support. I'd like something that works in this scenario:
turn this:
{
"metadata": {
"id": "1234",
"type": "file",
"length": 395
},
"foo": "bar",
"person": {
"first": "peter",
"last": "smith",
"address": {
"city": "Ottawa",
"country": "Canada"
},
"kids": ["Bob", "Sam"]
}
}
into this:
{
"metadata.id": "1234",
"metadata.type": "file",
"metadata.length": 395,
"foo": "bar",
"person.first": "peter",
"person.last": "smith",
"person.address.city": "Ottawa",
"person.address.country": "Canada",
"person.kids[0]": "Bob",
"person.kids[1]": "Sam"
}
I've arrived at this, which appears to work, but seems overly verbose. Any help in making this pretty would be appreciated.
def flatten(js: JsValue, prefix: String = ""): JsObject = js.as[JsObject].fields.foldLeft(Json.obj()) {
case (acc, (k, v: JsObject)) => {
val nk = if(prefix.isEmpty) k else s"$prefix.$k"
acc.deepMerge(flatten(v, nk))
}
case (acc, (k, v: JsArray)) => {
val nk = if(prefix.isEmpty) k else s"$prefix.$k"
val arr = flattenArray(v, nk).foldLeft(Json.obj())(_++_)
acc.deepMerge(arr)
}
case (acc, (k, v)) => {
val nk = if(prefix.isEmpty) k else s"$prefix.$k"
acc + (nk -> v)
}
}
def flattenArray(a: JsArray, k: String = ""): Seq[JsObject] = {
flattenSeq(a.value.zipWithIndex.map {
case (o: JsObject, i: Int) =>
flatten(o, s"$k[$i]")
case (o: JsArray, i: Int) =>
flattenArray(o, s"$k[$i]")
case a =>
Json.obj(s"$k[${a._2}]" -> a._1)
})
}
def flattenSeq(s: Seq[Any], b: Seq[JsObject] = Seq()): Seq[JsObject] = {
s.foldLeft[Seq[JsObject]](b){
case (acc, v: JsObject) =>
acc:+v
case (acc, v: Seq[Any]) =>
flattenSeq(v, acc)
}
}
Thanks m-z, it is very helpful. (I'm not so familiar with Scala.)
I'd like to add a line so that "flatten" also works with a primitive JSON array like {"metadata": ["aaa", "bob"]}.
def flatten(js: JsValue, prefix: String = ""): Seq[JsValue] = {
// JSON primitive array can't convert to JsObject
if(!js.isInstanceOf[JsObject]) return Seq(Json.obj(prefix -> js))
js.as[JsObject].fieldSet.toSeq.flatMap{ case (key, values) =>
values match {
case JsBoolean(x) => Seq(Json.obj(concat(prefix, key) -> x))
case JsNumber(x) => Seq(Json.obj(concat(prefix, key) -> x))
case JsString(x) => Seq(Json.obj(concat(prefix, key) -> x))
case JsArray(seq) => seq.zipWithIndex.flatMap{ case (x, i) => flatten(x, concat(prefix, key + s"[$i]")) }
case x: JsObject => flatten(x, concat(prefix, key))
case _ => Seq(Json.obj(concat(prefix, key) -> JsNull))
}
}
}
Based on the previous solutions, I have tried to simplify the code a bit:
def getNewKey(oldKey: String, newKey: String): String = {
if (oldKey.nonEmpty) oldKey + "." + newKey else newKey
}
def flatten(js: JsValue, prefix: String = ""): JsObject = {
if (!js.isInstanceOf[JsObject]) return Json.obj(prefix -> js)
js.as[JsObject].fields.foldLeft(Json.obj()) {
case (o, (k, value)) => {
o.deepMerge(value match {
case x: JsArray => x.as[Seq[JsValue]].zipWithIndex.foldLeft(o) {
case (o, (n, i: Int)) => o.deepMerge(
flatten(n.as[JsValue], getNewKey(prefix, k) + s"[$i]")
)
}
case x: JsObject => flatten(x, getNewKey(prefix, k))
case x => Json.obj(getNewKey(prefix, k) -> x.as[JsValue])
})
}
}
}