Solr JSON parsing on the client side?

I am trying to retrieve each date and its corresponding count from the JSON below, and it turns out I just can't do it. After some struggle, I ended up with the weird code below with nested LinkedHashMaps. How can I select solr_date and count as they appear at the very end? (I welcome any library that can do this.)
{
  "responseHeader": {
    "status": 0,
    "QTime": 2,
    "params": {
      "facet": "true",
      "fl": " ",
      "indent": "true",
      "facet.query": " solr_date",
      "q": "solr_body:party",
      "facet.field": "solr_date",
      "json.nl": "arrarr",
      "wt": "json",
      "fq": " "
    }
  },
  "response": {
    "numFound": 19,
    "start": 0,
    "docs": [
      {}, {}, {}, {}, {}, {}, {}, {}, {}, {}
    ]
  },
  "facet_counts": {
    "facet_queries": {
      " solr_date": 0
    },
    "facet_fields": {
      "solr_date": [
        ["2013-06-19T13:48:02Z", 10],
        ["2013-07-25T13:48:02Z", 2],
        ["2013-07-27T13:48:02Z", 2],
        ["2013-07-24T13:48:02Z", 1],
        ["2013-07-26T13:48:02Z", 1],
        ["2013-07-28T13:48:02Z", 1],
        ["2013-07-29T13:48:02Z", 1],
        ["2013-07-30T13:48:02Z", 1]
      ]
    },
    "facet_dates": {},
    "facet_ranges": {}
  }
}
It is the entries of the facet_fields solr_date array that I need individually: each date and its corresponding count.
My Java code is below:
ObjectMapper mapper = new ObjectMapper();
// JsonNode rootNode = mapper.readTree(new URL("http://173.255.245.138:8983/solr/collection1/select?q=*%3A*&wt=json&indent=true"));
Map<String, Object> mapObject = mapper.readValue(
        new URL("http://ipa.ddr.ess.000:8983/solr/collection1/select?q=solr_body%3Aparty&fq=+++&fl=+&wt=json&json.nl=arrarr&indent=true&facet=true&facet.query=+solr_date&facet.field=solr_date"),
        new TypeReference<Map<String, Object>>() {});
LinkedHashMap<String, LinkedHashMap<String, LinkedHashMap<String, ArrayList<String>>>> list =
        (LinkedHashMap<String, LinkedHashMap<String, LinkedHashMap<String, ArrayList<String>>>>) mapObject.get("facet_counts");

I would suggest using the SolrJ client:
SolrJ is a Java client to access Solr. It offers a Java interface to add, update, and query the Solr index.
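For the facet counts in the question, a minimal sketch with SolrJ might look like this (the server URL is a placeholder and the query/field names are taken from the question; treat the exact setup as an assumption, not a drop-in solution):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.FacetField;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FacetCounts {
    public static void main(String[] args) throws SolrServerException {
        // Placeholder URL: point this at your own Solr core
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

        SolrQuery query = new SolrQuery("solr_body:party");
        query.setFacet(true);
        query.addFacetField("solr_date");

        QueryResponse response = solr.query(query);
        // Each Count pairs a facet value (the date string) with its count
        for (FacetField.Count c : response.getFacetField("solr_date").getValues()) {
            System.out.println(c.getName() + " -> " + c.getCount());
        }
    }
}

This gives you each date and its count directly, without any manual JSON handling.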

If you're using Gson, and you're actually only interested in the part you highlighted, you could do the parsing manually. Something like this:
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import com.google.gson.JsonArray;
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

// Create the parser and get the root object
JsonParser parser = new JsonParser();
JsonObject rootObj = parser.parse(json).getAsJsonObject();

// Get the solr_date array
JsonArray solrDateArray = rootObj
        .getAsJsonObject("facet_counts")
        .getAsJsonObject("facet_fields")
        .getAsJsonArray("solr_date");

// Create lists to store the data you want to retrieve
List<String> datesList = new ArrayList<>();
List<Integer> countsList = new ArrayList<>();

// Iterate over the solr_date array
Iterator<JsonElement> it = solrDateArray.iterator();
while (it.hasNext()) {
    // The solr_date array contains arrays in turn, so we parse each
    JsonArray array = it.next().getAsJsonArray();
    // and store the values in your lists
    datesList.add(array.get(0).getAsString());
    countsList.add(array.get(1).getAsInt());
}
Now you'll have two List objects, one with all the dates and another with all the counts:
datesList: ["2013-06-19T13:48:02Z", "2013-07-25T13:48:02Z", "2013-07-27T13:48:02Z", ...]
countsList: [10, 2, 2, ...]
Note: instead of using two List objects, you could use a Map<String, Integer>, for example...
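A minimal sketch of that variant, reusing the solrDateArray from the snippet above (requires java.util.Map and java.util.LinkedHashMap; mapping date -> count is my choice here):

// Collect date -> count pairs into an insertion-ordered map
Map<String, Integer> countsByDate = new LinkedHashMap<>();
for (JsonElement element : solrDateArray) {
    JsonArray pair = element.getAsJsonArray();
    countsByDate.put(pair.get(0).getAsString(), pair.get(1).getAsInt());
}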

I did it something like this:
HttpSolrServer server = new HttpSolrServer("http://localhost:8084/apache-solr-3.6.0/");
server.setParser(new XMLResponseParser());

SolrQuery solrQuery = new SolrQuery();
solrQuery.setQuery("keyword");
solrQuery.setFilterQueries("keyword");
solrQuery.setHighlight(true);
solrQuery.setHighlightRequireFieldMatch(true);
solrQuery.addHighlightField("syndrome");
solrQuery.setStart(0);
solrQuery.setRows(10);

QueryResponse serverResponse = null;
try {
    serverResponse = server.query(solrQuery);
} catch (SolrServerException e) {
    e.printStackTrace();
}

Gson gson = new Gson();
List<SolrDocument> docs = new ArrayList<SolrDocument>();
for (SolrDocument doc : serverResponse.getResults()) {
    docs.add(doc);
}

Map<String, String> pairs = new HashMap<String, String>();
Integer count = 0;
for (SolrDocument doc : docs) {
    pairs.put("start_date" + count, doc.getFieldValue("start_date").toString());
    pairs.put("test_file_result_id" + count, doc.getFieldValue("test_file_result_id").toString());
    pairs.put("job_id" + count, doc.getFieldValue("job_id").toString());
    pairs.put("cluster" + count, doc.getFieldValue("cluster").toString());
    count++;
}
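If what you need are the facet counts from the original question, the same QueryResponse also exposes them directly; a hedged sketch (this assumes the query enabled faceting on solr_date, which the code above does not set up):

// Requires: import org.apache.solr.client.solrj.response.FacetField;
FacetField dateFacet = serverResponse.getFacetField("solr_date");
if (dateFacet != null) {
    for (FacetField.Count c : dateFacet.getValues()) {
        pairs.put(c.getName(), String.valueOf(c.getCount()));
    }
}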

How to send a List of objects in a multipart request in Flutter?

I want to send a List of ManageTagModel in a multipart request along with other models and files, but I am not certain how to send this List of models.
This is my code for sending the multipart request without the List:
var uri = Uri.parse(...);
final request = http.MultipartRequest('Post', uri);
request.fields['Id'] = '3';
request.fields['Name'] = siteModel.name;
request.fields['MapAddress'] = siteModel.mapAddress;
request.fields['Country'] = siteModel.country;
request.fields['City'] = siteModel.city;
request.fields['CategoryNb'] = siteModel.categoryNb;
request.fields['UserId'] = userId;
request.fields['Caption'] = caption;
for (File i in multipleFiles) {
  final mimeTypeData =
      lookupMimeType(i.path, headerBytes: [0xFF, 0xD8]).split('/');
  print("IMAGE: " + i.path);
  // Attach the file to the request
  final file = await http.MultipartFile.fromPath('files', i.path);
  print(mimeTypeData[0] + " mimeTypeData[0]");
  print(mimeTypeData[1] + " mimeTypeData[1]");
  request.files.add(file);
}
This is my model:
import 'dart:convert';

class ManageTagModel {
  String posX;
  String posY;
  String postOrder;
  String tagger;
  String tagged;

  ManageTagModel(
      {this.posX, this.posY, this.postOrder, this.tagger, this.tagged});

  // Factory constructor for creating an instance from JSON
  factory ManageTagModel.fromJson(Map<String, dynamic> json) => ManageTagModel(
      posX: json['PosX'],
      posY: json['PosY'],
      postOrder: json['PostOrder'],
      tagged: json['Tagged'],
      tagger: json['Tagger']);

  Map<String, dynamic> toMap() {
    return {
      "PosX": posX,
      "PosY": posY,
      "PostOrder": postOrder,
      "Tagger": tagger,
      "Tagged": tagged
    };
  }
}

List<ManageTagModel> fromJson(String jsonData) {
  // Decode the JSON to extract a map
  final data = json.decode(jsonData);
  return List<ManageTagModel>.from(
      data.map((item) => ManageTagModel.fromJson(item)));
}

String toJson(ManageTagModel data) {
  // First we convert the object to a map
  final jsonData = data.toMap();
  // Then we encode the map as a JSON string
  return json.encode(jsonData);
}

List encodeToJson(List<ManageTagModel> list) {
  List jsonList = List();
  list.map((item) => jsonList.add(item.toMap())).toList();
  return jsonList;
}
My backend C# method has a List parameter.
Any help is appreciated!
I'm pretty sure I'm quite late here and you might have already found a solution, but I have gone through multiple threads without finding an answer and eventually worked it out myself out of frustration. Since the answer still isn't out there for any other lost soul, here is my solution for anyone still stuck; it is quite intuitive.
You simply have to add all the elements of the list to the request as "files" instead of "fields", but instead of the fromPath() method, you use fromString().
final request = http.MultipartRequest('Post', uri);
List<String> ManageTagModel = ['xx', 'yy', 'zz'];
for (String item in ManageTagModel) {
  request.files.add(http.MultipartFile.fromString('manage_tag_model', item));
}
This worked out for me and I hope it works for you too.
If the data is not a string:
for (int item in _userData['roles']) {
  request.files
      .add(http.MultipartFile.fromString('roles', item.toString()));
}

Kafka Streams API GroupBy behaviour

I am new to Kafka Streams and I am trying to aggregate some streaming data into a KTable using the groupBy function. The problem is the following:
The produced message is a JSON message with the following format:
{ "current_ts": "2019-12-24 13:16:40.316952",
"primary_keys": ["ID"],
"before": null,
"tokens": {"txid":"3.17.2493",
"csn":"64913009"},
"op_type":"I",
"after": { "CODE":"AAAA41",
"STATUS":"COMPLETED",
"ID":24},
"op_ts":"2019-12-24 13:16:40.316941",
"table":"S_ORDER"}
I want to isolate the JSON field "after" and then create a KTable with key = "ID" and the whole "after" JSON as the value.
Firstly, I created a KStream to isolate the "after" JSON, and it works fine.
KStream code block (don't pay attention to the if statement, because "before" and "after" have the same format):
KStream<String, String> s_order_list = s_order
        .mapValues(value -> {
            String time;
            JSONObject json = new JSONObject(value);
            if (json.getString("op_type").equals("I")) {
                time = "after";
            } else {
                time = "before";
            }
            JSONObject json2 = new JSONObject(json.getJSONObject(time).toString());
            return json2.toString();
        });
The output, as expected, is the following:
...
null {"CODE":"AAAA48","STATUS":"SUBMITTED","ID":6}
null {"CODE":"AAAA16","STATUS":"COMPLETED","ID":1}
null {"CODE":"AAAA3","STATUS":"SUBMITTED","ID":25}
null {"CODE":"AAAA29","STATUS":"SUBMITTED","ID":23}
...
Afterwards, I implemented a KTable to group by the "ID" of the JSON.
KTable code block:
KTable<String, String> s_table = s_order_list
        .groupBy((key, value) -> {
            JSONObject json = new JSONObject(value);
            return json.getString("ID");
        });
This fails, because I want to create a KTable<String, String>, but groupBy actually returns a KGroupedStream<Object, String>:
Required type: KTable<String,String>
Provided: KGroupedStream<Object,String>
no instance(s) of type variable(s) KR exist so that KGroupedStream<KR, String> conforms to KTable<String, String>
In conclusion, the question is: what exactly is a KGroupedStream, and how do I build a KTable from it properly?
After the groupBy processor you can use a stateful processor, like aggregate or reduce (those processors return a KTable). You can do something like this:
KGroupedStream<String, String> s_table = s_order_list
        .groupBy((key, value) ->
                new JSONObject(value).getString("ID"),
            Grouped.with(
                Serdes.String(),
                Serdes.String()));

KTable<String, StringAggregate> aggregateStrings = s_table.aggregate(
        StringAggregate::new,
        (key, value, aggregate) -> aggregate.addElement(value));
StringAggregate looks like:
public class StringAggregate {
    private List<String> elements = new ArrayList<>();

    public StringAggregate addElement(String element) {
        elements.add(element);
        return this;
    }
    // other methods
}
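In practice the aggregate step often also needs an explicit state-store materialization with a serde for the aggregate type (the two-argument overload above relies on the default serdes, which will not know StringAggregate). A sketch under that assumption; stringAggregateSerde and the store name are hypothetical stand-ins you would supply yourself:

import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

KTable<String, StringAggregate> aggregateStrings = s_table.aggregate(
        StringAggregate::new,
        (key, value, aggregate) -> aggregate.addElement(value),
        Materialized.<String, StringAggregate, KeyValueStore<Bytes, byte[]>>as("orders-by-id")
                .withKeySerde(Serdes.String())
                .withValueSerde(stringAggregateSerde)); // hypothetical custom serde for StringAggregate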

Parse JSON using groovy script (using JsonSlurper)

I am just two days old with Groovy, and I need to parse a JSON file with the structure below. My actual idea is that I need to run a set of jobs in different environments based on different sequences, so I came up with this JSON format as the input file to my Groovy script:
{
"services": [{
"UI-Service": [{
"file-location": "/in/my/server/location",
"script-names": "daily-batch,weekly-batch,bi-weekly-batch",
"seq1": "daily-batch,weekly-batch",
"seq2": "daily-batch,weekly-batch,bi-weekly-batch",
"DEST-ENVT_seq1": ["DEV1", "DEV2", "QA1", "QA2"],
"DEST-ENVT_seq2": ["DEV3", "DEV4", "QA3", "QA4"]
}]
}, {
"Mobile-Service": [{
"file-location": "/in/my/server/location",
"script-names": "daily-batch,weekly-batch,bi-weekly-batch",
"seq1": "daily-batch,weekly-batch",
"seq2": "daily-batch,weekly-batch,bi-weekly-batch",
"DEST-ENVT_seq1": ["DEV1", "DEV2", "QA1", "QA2"],
"DEST-ENVT_seq2": ["DEV3", "DEV4", "QA3", "QA4"]
}]
}]
}
I tried the script below for parsing the JSON:
import groovy.json.JsonSlurper

def jsonSlurper = new JsonSlurper()
//def reader = new BufferedReader(new InputStreamReader(new FileInputStream("in/my/location/config.json"),"UTF-8"))
//def data = jsonSlurper.parse(reader)
File file = new File("in/my/location/config.json")
def data = jsonSlurper.parse(file)
try {
    Map jsonResult = (Map) data;
    Map compService = (Map) jsonResult.get("services");
    String name = (String) compService.get("UI-Service");
    assert name.equals("file-location");
} catch (Exception e) {
    println e
}
I need to first read all the services (UI-Service, Mobile-Service, etc.), then their elements and their values.
Or you could do something like:
new JsonSlurper().parseText(jsonTxt).services*.each { serviceName, elements ->
    println serviceName
    elements*.each { name, value ->
        println "  $name = $value"
    }
}
But it depends on what you want (and you don't really explain that in the question).
Example of reading from the parsed JSON object:
def data = jsonSlurper.parse(file)
data.services.each {
    def serviceName = it.keySet()
    println "**** key: ${serviceName} ******"
    it.each { k, v ->
        println "element name: ${k}, element value: ${v}"
    }
}
Other options:
println data.services[0].get("UI-Service")["file-location"]
println data.services[1].get("Mobile-Service").seq1

How can I deserialize invalid JSON? Truncated list of objects

My JSON file is mostly an array that contains objects, but the list is incomplete, so I can't use the last entry. I would like to deserialize the rest of the file while discarding the last invalid entry:
[ { "key" : "value1" }, { "key " : "value2"}, { "key
Please tell me if there is a way to do this using the Newtonsoft.Json library, or do I need some preprocessing?
Thank you!
Looks like on Json.NET 8.0.3 you can stream your string from a JsonTextReader to a JTokenWriter and get a partial result by catching and swallowing the JsonReaderException that gets thrown when parsing the truncated JSON:
JToken root;
string exceptionPath = null;
using (var textReader = new StringReader(badJson))
using (var jsonReader = new JsonTextReader(textReader))
using (JTokenWriter jsonWriter = new JTokenWriter())
{
    try
    {
        jsonWriter.WriteToken(jsonReader);
    }
    catch (JsonReaderException ex)
    {
        exceptionPath = ex.Path;
        Debug.WriteLine(ex);
    }
    root = jsonWriter.Token;
}
Console.WriteLine(root);

if (exceptionPath != null)
{
    Console.WriteLine("Error occurred with token: ");
    var badToken = root.SelectToken(exceptionPath);
    Console.WriteLine(badToken);
}
This results in:
[
  {
    "key": "value1"
  },
  {
    "key ": "value2"
  },
  {}
]
You could then finish deserializing the partial object with JToken.ToObject. You could also delete the incomplete array entry by using badToken.Remove().
It would be better practice not to generate invalid JSON in the first place though. I'm also not entirely sure this is documented functionality of Json.NET, and thus it might not work with future versions of Json.NET. (E.g. conceivably Newtonsoft could change their algorithm such that JTokenWriter.Token is only set when writing is successful.)
You can use the JsonReader class and try to parse as far as you get. Something like the code below will parse as many properties as it gets and then throw an exception. This is of course if you want to deserialize into a concrete class.
public Partial FromJson(JsonReader reader)
{
    while (reader.Read())
    {
        // Break on EndObject
        if (reader.TokenType == JsonToken.EndObject)
            break;

        // Only look for properties
        if (reader.TokenType != JsonToken.PropertyName)
            continue;

        switch ((string) reader.Value)
        {
            case "Id":
                reader.Read();
                Id = Convert.ToInt16(reader.Value);
                break;
            case "Name":
                reader.Read();
                Name = Convert.ToString(reader.Value);
                break;
        }
    }
    return this;
}
Code taken from the CGbR JSON Target.
The second answer above is really good and simple, and helped me out! Here it is wrapped up as a helper method:
static string FixPartialJson(string badJson)
{
    JToken root;
    string exceptionPath = null;
    using (var textReader = new StringReader(badJson))
    using (var jsonReader = new JsonTextReader(textReader))
    using (JTokenWriter jsonWriter = new JTokenWriter())
    {
        try
        {
            jsonWriter.WriteToken(jsonReader);
        }
        catch (JsonReaderException ex)
        {
            exceptionPath = ex.Path;
        }
        root = jsonWriter.Token;
    }
    return root.ToString();
}

Use Jackson To Stream Parse an Array of Json Objects

I have a file that contains a JSON array of objects:
[
{
"test1": "abc"
},
{
"test2": [1, 2, 3]
}
]
I wish to use Jackson's JsonParser to take an input stream from this file, and at every call to .next(), I want it to return an object from the array until it runs out of objects or fails.
Is this possible?
Use case:
I have a large file with a json array filled with a large number of objects with varying schemas. I want to get one object at a time to avoid loading everything into memory.
EDIT:
I completely forgot to mention: my input is a string that is appended to over time, slowly accumulating JSON. I was hoping to be able to parse it object by object, removing each parsed object from the string.
But I suppose that doesn't matter! I can do this manually as long as the JsonParser will return the index into the string.
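For what it's worth, Jackson's JsonParser can report how far into the input it has read, which would support that manual approach. A minimal sketch; accumulatedJson is a hypothetical stand-in for the growing buffer, and the calls may throw IOException:

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;

JsonParser p = new JsonFactory().createParser(accumulatedJson);
p.nextToken(); // advance the parser token by token as usual
// character offset within the input consumed so far; the already-parsed
// prefix of the buffer could be trimmed up to this point
long consumed = p.getCurrentLocation().getCharOffset();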
Yes, you can achieve this sort of part-streaming-part-tree-model processing style using an ObjectMapper:
ObjectMapper mapper = new ObjectMapper();
JsonParser parser = mapper.getFactory().createParser(new File(...));
if (parser.nextToken() != JsonToken.START_ARRAY) {
    throw new IllegalStateException("Expected an array");
}
while (parser.nextToken() == JsonToken.START_OBJECT) {
    // read everything from this START_OBJECT to the matching END_OBJECT
    // and return it as a tree model ObjectNode
    ObjectNode node = mapper.readTree(parser);
    // do whatever you need to do with this object
}
parser.close();
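For example, a field can then be pulled out of each tree-model node (using the test1 field from the sample file; the null check guards objects that don't have that field):

// inside the while loop above
JsonNode value = node.get("test1");
if (value != null) {
    System.out.println(value.asText());
}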
What you are looking for is called the Jackson Streaming API. Here is a code snippet using it that could help you achieve what you need.
JsonFactory factory = new JsonFactory();
JsonParser parser = factory.createParser(new File(yourPathToFile));

JsonToken token = parser.nextToken();
if (token == null) {
    // return or throw exception
}
// the first token is supposed to be the start of array '['
if (!JsonToken.START_ARRAY.equals(token)) {
    // return or throw exception
}
// iterate through the content of the array
while (true) {
    token = parser.nextToken();
    // check for null first; otherwise the equals() test below
    // already breaks out and the null case is never reached
    if (token == null) {
        break;
    }
    if (!JsonToken.START_OBJECT.equals(token)) {
        break;
    }
    // parse your objects by means of parser.getXxxValue() and/or other parser's methods
}
This example reads custom objects directly from a stream; source is a java.io.File:
ObjectMapper mapper = new ObjectMapper();
JsonParser parser = mapper.getFactory().createParser(source);
if (parser.nextToken() != JsonToken.START_ARRAY) {
    throw new Exception("no array");
}
while (parser.nextToken() == JsonToken.START_OBJECT) {
    CustomObj custom = mapper.readValue(parser, CustomObj.class);
    System.out.println("" + custom);
}
This is a late answer that builds on Ian Roberts' answer. You can also use a JsonPointer to find the start position if it is nested in a document. This avoids custom-coding the slightly cumbersome streaming-token approach to get to the start point. In this case the basePath is "/", but it can be any path that JsonPointer understands.
Path sourceFile = Paths.get("/path/to/my/file.json");
// Point the basePath to a starting point in the file
JsonPointer basePath = JsonPointer.compile("/");
ObjectMapper mapper = new ObjectMapper();
try (InputStream inputSource = Files.newInputStream(sourceFile);
JsonParser baseParser = mapper.getFactory().createParser(inputSource);
JsonParser filteredParser = new FilteringParserDelegate(baseParser,
new JsonPointerBasedFilter(basePath), false, false);) {
// Call nextToken once to initialize the filteredParser
JsonToken basePathToken = filteredParser.nextToken();
if (basePathToken != JsonToken.START_ARRAY) {
throw new IllegalStateException("Base path did not point to an array: found "
+ basePathToken);
}
while (filteredParser.nextToken() == JsonToken.START_OBJECT) {
// Parse each object inside of the array into a separate tree model
// to keep a fixed memory footprint when parsing files
// larger than the available memory
JsonNode nextNode = mapper.readTree(filteredParser);
// Consume/process the node for example:
JsonPointer fieldRelativePath = JsonPointer.compile("/test1");
JsonNode valueNode = nextNode.at(fieldRelativePath);
if (!valueNode.isValueNode()) {
throw new IllegalStateException("Did not find value at "
+ fieldRelativePath.toString()
+ " after setting base to " + basePath.toString());
}
System.out.println(valueNode.asText());
}
}