Gson: Parsing String[]

I'm using Gson library to parse JSON objects. In particular, I've got a JSON like this:
{
"key": ["string1", "string2"]
}
and I would like to parse it into a simple String[], without building a specific object. I tried this way:
gson.fromJson(json, String[].class);
but I got an error, "Expected BEGIN_OBJECT but was BEGIN_ARRAY", I guess because of the presence of the key. Any ideas on how I should fix it?

Create a class that has a key property that has type of String[] and deserialize to that.
public class Thing {
    private String[] key;

    public String[] getKey() {
        return key;
    }
}
Thing thing = gson.fromJson(json, Thing.class);

Since tvanfosson's answer is perfect, I shouldn't need to add anything, but in the comments you asked whether it's possible to avoid creating the Thing class. Yes, it is, but I think it's more fragile. I'll show you with this code:
String json = "{\"key\": [\"string1\", \"string2\"]}";
String mJson = json.replace("{\"key\":", "").replace("}","");
String[] strings = new Gson().fromJson(mJson, String[].class);
System.out.println(Arrays.asList(strings));
Of course this code runs without errors and saves you an additional class, but think about what happens if there's a carriage return inside: it breaks, unless you use a regexp inside the replace invocation. At that point I'd rather add a class and let Gson do the whole work than puzzle out the right regexp.
I added this as an answer and not as a comment to have enough space to explain myself, but it should not be taken as the right solution; it's a poor hack. It pays to understand a bit more about how Gson reasons.
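If you want to avoid both the wrapper class and the string surgery, a middle ground is to let Gson parse the document into its tree model and bind only the array node. A sketch (the class and method names are my own; JsonParser.parseString requires Gson 2.8.6 or later):

```java
import java.util.Arrays;

import com.google.gson.Gson;
import com.google.gson.JsonArray;
import com.google.gson.JsonParser;

public class KeyArrayExample {

    // Pulls the "key" array out of the wrapping object, then binds just
    // that node to String[] instead of binding the whole document.
    static String[] parseKey(String json) {
        JsonArray array = JsonParser.parseString(json)
                .getAsJsonObject()
                .getAsJsonArray("key");
        return new Gson().fromJson(array, String[].class);
    }

    public static void main(String[] args) {
        String json = "{\"key\": [\"string1\", \"string2\"]}";
        System.out.println(Arrays.toString(parseKey(json))); // prints [string1, string2]
    }
}
```

This stays robust against whitespace and line breaks in the input, unlike the replace-based hack, while still not requiring a dedicated class.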

Related

Generate JSON from Nustache with proper escaping

I would like to use Nustache to generate JSON to talk to a specific webservice.
I do not want to do proper JSON-building using Newtonsoft or something like that, because the specs for this webservice come as text files with placeholders. I agree that this is silly. So it makes sense to copy/massage/paste them into a template format and hopefully make fewer mistakes.
But of course Nustache has no notion of what makes valid JSON.
With a template like
{ "foo": "{{bar}}" }
and a value for bar that needs escaping in JSON (say it includes curly brackets or an innocent backslash), the result is string-replacement-correct, but not valid JSON.
Is there a way to tell Nustache that I want the output to be JSON and have it escape strings as it replaces them?
Or would you recommend doing a helper that can manage the escaping and put that on all the placeholders?
Thanks for reading and thinking.
I did not find a wholly satisfactory answer but a workable solution.
My workaround is a Nustache helper that takes care of the quoting and escaping. The ugly bit is that I need to specify the helper in the template in each instance:
{ "foo": "{{json bar}}" }
The helper implementation is trivial and can be listed fully here. For the actual work it delegates to JsonConvert from Newtonsoft JSON:
public class JSONHelpers
{
    public static void Register()
    {
        if (!Helpers.Contains("json"))
        {
            Helpers.Register("json", JSON);
        }
    }

    private static void JSON(RenderContext ctx, IList<object> args, IDictionary<string, object> options,
        RenderBlock fn, RenderBlock inverse)
    {
        object input = args[0];
        if (input == null)
        {
            ctx.Write("null");
        }
        else
        {
            string text = input.ToString();
            string json = JsonConvert.ToString(text);
            ctx.Write(json);
        }
    }
}
Hopefully this proves useful to someone else.

SpringBatch - how to set up via java config the JsonLineMapper for reading a simple json file

How do I change from "setLineTokenizer(new DelimitedLineTokenizer()...)" to "JsonLineMapper" in the first code block below? Basically, it works with CSV, but I want to change it to read a simple JSON file. I found some threads here asking about complex JSON, but that is not my case. At first I thought I would need a very different approach from the CSV one, but after reading SBiAch05sample.pdf (see the link and snippet at the bottom), I understood that FlatFileItemReader can be used to read JSON as well.
From an almost similar question, I can guess that I am not heading in the wrong direction. Please, I am trying to find the simplest but elegant and recommended way to fix this snippet. The wrapper below seems a step backwards unless I am really obliged to work that way; additionally, it looks more Java-6-style to me than my attempt, which takes advantage of anonymous classes (as far as I can judge from my studies). Any advice is highly appreciated.
//My Code
@Bean
@StepScope
public FlatFileItemReader<Message> reader() {
    log.info("ItemReader >>");
    FlatFileItemReader<Message> reader = new FlatFileItemReader<Message>();
    reader.setResource(new ClassPathResource("test_json.js"));
    reader.setLineMapper(new DefaultLineMapper<Message>() {
        {
            setLineTokenizer(new DelimitedLineTokenizer() {
                {
                    setNames(new String[] { "field1", "field2"...
//Sample using a wrapper
http://www.manning.com/templier/SBiAch05sample.pdf
import org.springframework.batch.item.file.LineMapper;
import org.springframework.batch.item.file.mapping.JsonLineMapper;
import com.manning.sbia.ch05.Product;

public class WrappedJsonLineMapper implements LineMapper<Product> {

    private JsonLineMapper delegate;

    public Product mapLine(String line, int lineNumber) throws Exception {
        Map<String, Object> productAsMap = delegate.mapLine(line, lineNumber);
        Product product = new Product();
        product.setId((String) productAsMap.get("id"));
        product.setName((String) productAsMap.get("name"));
        product.setDescription((String) productAsMap.get("description"));
        product.setPrice(new Float((Double) productAsMap.get("price")));
        return product;
    }

    public void setDelegate(JsonLineMapper delegate) {
        this.delegate = delegate;
    }
}
Really you have two options for parsing JSON within a Spring Batch job:
Don't create a LineMapper, create a LineTokenizer. Spring Batch's DefaultLineMapper breaks up the parsing of a record into two phases, parsing the record and mapping the result to an object. The fact that the incoming data is JSON vs a CSV only impacts the parsing piece (which is handled by the LineTokenizer). That being said, you'd have to write your own LineTokenizer to parse the JSON into a FieldSet.
Use the provided JsonLineMapper. Spring Batch provides a LineMapper implementation that uses Jackson to deserialize JSON objects into Java objects.
In either case, you can't map a LineMapper to a LineTokenizer as they accomplish two different things.
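To illustrate option 2, a minimal Java-config sketch of the reader from the question (a config fragment, not a complete class; the resource name is taken from the question, and the item type becomes Map<String, Object> because that is what JsonLineMapper produces per line):

```java
import java.util.Map;

import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.mapping.JsonLineMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.core.io.ClassPathResource;

@Bean
@StepScope
public FlatFileItemReader<Map<String, Object>> reader() {
    FlatFileItemReader<Map<String, Object>> reader = new FlatFileItemReader<>();
    reader.setResource(new ClassPathResource("test_json.js"));
    // JsonLineMapper parses each line of JSON into a Map<String, Object>,
    // so no LineTokenizer / FieldSetMapper pair is needed.
    reader.setLineMapper(new JsonLineMapper());
    return reader;
}
```

If you need a typed Message item rather than a Map, a delegating wrapper like the WrappedJsonLineMapper above (or a downstream ItemProcessor) can do the Map-to-object conversion.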

How to use Hamcrest to inspect Map items

I have recently been using the Hamcrest library to write some tests, quite successfully, but now I need to do something more complex and have started to run into a lot of difficulties. I need to inspect and verify the properties of the items in a Map. My production code looks something like this:
Map<String, List<MyItem>> map = new HashMap<String, List<MyItem>>();
map.put("one", Arrays.asList(new MyItem("One")));
map.put("two", Arrays.asList(new MyItem("Two")));
map.put("three", Arrays.asList(new MyItem("Three")));
I want to write some test code like the following, but it doesn't compile. It looks like Hamcrest's hasEntry is type-parameterized, while hasItem and hasProperty only expect Object.
assertThat(map, Matchers.<String, List<MyItem>>hasEntry("one", hasItem(hasProperty("name", is("One")))));
My IDE (Eclipse) gives this error message: The parameterized method <String, List<HamcrestTest.MyItem>>hasEntry(String, List<HamcrestTest.MyItem>) of type Matchers is not applicable for the arguments (String, Matcher<Iterable<? super Object>>). For one thing, I think Eclipse is confused about which hasEntry method I wanted to use: it should be hasEntry(org.hamcrest.Matcher<? super K> keyMatcher, org.hamcrest.Matcher<? super V> valueMatcher), not hasEntry(K key, V value).
Should I just give up and get the item from the Map and manually inspect each property? Is there a cleaner way?
You could just use contains or containsInAnyOrder. True, you'll have to list all the items in the List that way, but it works more cleanly than hasItem:
@SuppressWarnings("unchecked")
@Test
public void mapTest() {
    Map<String, List<MyItem>> map = new HashMap<String, List<MyItem>>();
    map.put("one", asList(new MyItem("1"), new MyItem("one")));
    assertThat(map, hasEntry(is("one"),
            containsInAnyOrder(hasProperty("name", is("one")),
                    hasProperty("name", is("1")))));
}
Since @t0mppa didn't provide a good example of how to use Hamcrest's contains and containsInAnyOrder for this, here's a little something to get you started:
Map<Integer, String> columns = new HashMap<Integer, String>();
columns.put(1, "ID");
columns.put(2, "Title");
columns.put(3, "Description");
assertThat(columns.values(), contains("ID", "Title", "Description")); // passes
assertThat(columns.values(), contains("ID", "Description", "Title")); // fails
assertThat(columns.values(), containsInAnyOrder("ID", "Description", "Title")); // passes
Note that as opposed to hasItem and hasItems, these will only work if you provide them with a full list of all the values you'll be matching against. See Hamcrest's javadocs for more information.
So just to make this simpler you might try this...
assertThat((Object)map, (Matcher)Matchers.hasEntry("one", hasItem(hasProperty("name", is("One")))));
by going to a raw type you will get a warning but no compile error. I have used this trick in the past when I don't want to worry about getting all the casting just right for the compiler.
Also, you might consider using IsIterableContainingInOrder.contains(new MyItem("One")). This will verify the entire list, and if MyItem implements equals then you won't be using reflection in your tests.
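A sketch of that equals-based variant (MyItem's field and constructor are guessed from the question, since the class isn't shown):

```java
import java.util.Objects;

public class MyItem {

    private final String name;

    public MyItem(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    @Override
    public boolean equals(Object o) {
        // Value equality on the name field lets matchers compare items directly.
        return o instanceof MyItem && Objects.equals(name, ((MyItem) o).name);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(name);
    }
}
```

With equals in place, the assertion can compare whole items instead of reflecting on properties, e.g. assertThat(map, hasEntry(is("one"), contains(new MyItem("One")))).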
The hasEntry method has two signatures:
hasEntry(key, value)
hasEntry(matcher<key>, matcher<value>)
You are using the first signature, so you are checking whether your map contains a matcher mapped to the string "one". t0mppa's answer uses the second signature; that's why it works. The good news is you don't need to list all the elements in the list, you can just do:
assertThat(map, hasEntry(is("one"), hasItem(hasProperty("name", is("One")))));

JSON fields unordered

I am writing some RESTful services using Spring MVC. I am using the Jackson mapper to do the conversions.
It all works fine except that the json it produces has fields completely unordered.
For example, if my entity object looks like this:
public class EntityObj {
    private String x;
    private String y;
    private String z;
}
If I now have a list of EntityObjs and return it from the controller, the JSON has the order of the fields mixed up, e.g.:
[{y:"ABC", z:"XYZ", x:"DEF"},{y:"ABC", z:"XYZ", x:"DEF"}]
I've looked around for a solution but haven't found any. Has anyone else faced this issue?
Thanks for the help
As others have suggested, ordering should not matter. Nonetheless, if you prefer a certain ordering, use the @JsonPropertyOrder annotation, like so:
@JsonPropertyOrder({ "x", "y", "z" })
public class EntityObj {
}
If alphabetical order suits you and you are using Spring Boot, you can add this to your application.properties:
spring.jackson.mapper.sort-properties-alphabetically=true
I realized this doesn't work with variable names that start with upper-case letters. For example, a variable named "ID" will not be ordered.
If you don't want to explicitly define the field order as done in the accepted answer, you can simply use:
@JsonPropertyOrder(alphabetic = true)
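Outside Spring Boot, the same alphabetical ordering can be switched on per mapper. A sketch against the Jackson 2.x API (the fields here are public only to keep the example short):

```java
import com.fasterxml.jackson.databind.MapperFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class AlphaOrderExample {

    public static class EntityObj {
        public String z = "Z";
        public String x = "X";
        public String y = "Y";
    }

    static String write() throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Sort bean properties alphabetically when serializing,
        // regardless of declaration order in the class.
        mapper.configure(MapperFeature.SORT_PROPERTIES_ALPHABETICALLY, true);
        return mapper.writeValueAsString(new EntityObj());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(write());
    }
}
```

This is the mapper-wide equivalent of the spring.jackson.mapper.sort-properties-alphabetically property, and it avoids annotating every class.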

Jackson JSON custom deserializer works bad with composite object

I have a problem with my custom JSON deserializer.
I use Jackson to map JSON to Java and back. In some cases I need to write my own mapping.
I have an object (filter), which contains a set of another object (metaInfoClass). I try to deserialize the filter with Jackson, but I implemented my own deserializer for the inner object.
The JSON looks like this:
{
    "freetext": false,
    "cityName": null,
    "regionName": null,
    "countryName": null,
    "maxResults": 50,
    "minDate": null,
    "maxDate": null,
    "metaInfoClasses": [
        {
            "id": 31,
            "name": "Energy",
            "D_TYPE": "Relevance"
        }
    ],
    "sources": [],
    "ids": []
}
My deserializer just works fine, it finds all the fields etc.
The problem is that somehow (I have no idea why) the deserializer gets invoked on the rest of the JSON string too, so the sources token gets processed, and so on.
This is very weird, since I don't want to deserialize the big object, but only the inner metaInfoClass.
Even more weird: the CollectionDeserializer class keeps calling my deserializer with the JSON string even after it has ended. So nothing really happens, but the method gets called.
Any idea?
Thanks a lot!
I was able to find a solution.
I modified the implementation (in the deserialize method) to use the following code:
JsonNode tree = parser.readValueAsTree();
Iterator<Entry<String, JsonNode>> fieldNameIt = tree.getFields();
while (fieldNameIt.hasNext()) {
    Entry<String, JsonNode> entry = fieldNameIt.next();
    String key = entry.getKey();
    String value = entry.getValue().getTextValue();
    // ... custom code here
}
With this approach it parses only the right piece of the JSON, and it's working now.
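For anyone on Jackson 2.x: getFields() and getTextValue() from the 1.x API above became fields() and asText(). A self-contained sketch of the same tree traversal (the class and method names here are my own):

```java
import java.util.Iterator;
import java.util.Map;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class FieldWalk {

    // Walks the top-level fields of a JSON object, mirroring the
    // readValueAsTree()/getFields() loop from the answer above.
    static String describe(String json) throws Exception {
        JsonNode tree = new ObjectMapper().readTree(json);
        StringBuilder sb = new StringBuilder();
        Iterator<Map.Entry<String, JsonNode>> it = tree.fields();
        while (it.hasNext()) {
            Map.Entry<String, JsonNode> entry = it.next();
            sb.append(entry.getKey()).append('=')
              .append(entry.getValue().asText()).append(';');
        }
        return sb.toString();
    }
}
```

Inside a real JsonDeserializer, parser.readValueAsTree() gives you the same JsonNode to iterate, scoped to the value the deserializer was invoked on.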