For deserializing JSON with unknown fields into an object there's @JsonAnySetter.
But what if I read such JSON into my object, modify some known fields, and write it back to JSON?
The unknown properties will be lost.
How do I handle such cases? Is it possible to map to an object, or do I have to read the data into a JsonNode or Map?
Unmarshalling into a custom Java class has its advantages and disadvantages. It gives you nice static typing, but it is, well, static. The javadoc for @JsonAnySetter suggests that it's similar to JAXB's @XmlAnyElement, but unlike @XmlAnyElement, the data objects don't contain naming information, so it's a one-way street.
If you need to handle fully dynamic JSON streams, then you need to bite the bullet and use a Map or JsonNode.
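For illustration, a minimal sketch of that tree-model route (the field names are hypothetical, not from the question):

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class TreeRoundTrip {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Unknown fields survive because nothing is ever dropped from the tree.
        ObjectNode root = (ObjectNode) mapper.readTree("{\"title\":\"old\",\"extra\":42}");
        root.put("title", "new"); // modify a known field

        // prints {"title":"new","extra":42}
        System.out.println(mapper.writeValueAsString(root));
    }
}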
It is now possible to use @JsonAnyGetter to provide a method that allows serialization of dynamic properties:
@JsonAnyGetter
public Map<String, String> getDynamicProperties() {
    return dynamicProperties; // a field like this exists
}
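Pairing @JsonAnySetter and @JsonAnyGetter on the same bean is what makes the round trip work. A minimal sketch, with hypothetical class and field names:

import com.fasterxml.jackson.annotation.JsonAnyGetter;
import com.fasterxml.jackson.annotation.JsonAnySetter;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.LinkedHashMap;
import java.util.Map;

public class Document {
    // known, statically typed field
    public String title;

    // bucket for every property the class doesn't model explicitly
    private final Map<String, Object> unknown = new LinkedHashMap<>();

    @JsonAnySetter
    public void setUnknown(String name, Object value) {
        unknown.put(name, value);
    }

    @JsonAnyGetter
    public Map<String, Object> getUnknown() {
        return unknown; // written back out as top-level properties
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        Document doc = mapper.readValue("{\"title\":\"old\",\"extra\":42}", Document.class);
        doc.title = "new";
        // prints {"title":"new","extra":42} -- the unknown field survives
        System.out.println(mapper.writeValueAsString(doc));
    }
}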
There's an RFE for Jackson to add such a feature: http://jira.codehaus.org/browse/JACKSON-292. It makes total sense when you think about it.
When I naively use Jackson to convert to JSON, I receive this exception:
Caused by: com.fasterxml.jackson.databind.exc.InvalidDefinitionException: No serializer found for class org.apache.cayenne.access.DefaultDataRowStoreFactory and no properties discovered to create BeanSerializer
Edit: I'd like to do something like this:
ObjectContext context = cayenneRuntime.newContext();
List<User> users = ObjectSelect.query(User.class).select(context);
Map<String, Object> json = Json.mapper.convertValue(users, Map.class);
Are there any existing solutions? Thanks
Considering that, in the general case, Cayenne gives you not just objects but a virtual graph of objects, serialization to JSON is a quirkier topic than it initially appears.
The short answer: you'd have to manually build JSON for whatever subgraph of your object graph you want to expose.
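A minimal sketch of that manual approach, assuming the User entity from the question with hypothetical getName()/getEmail() getters; extract only the properties you want into plain Maps and let Jackson serialize those:

import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class UserJson {
    // Convert each Cayenne User into a plain Map holding only the
    // properties we want to expose, then serialize the Maps with Jackson.
    public static String toJson(List<User> users) throws Exception {
        List<Map<String, Object>> out = new ArrayList<>();
        for (User u : users) {
            Map<String, Object> row = new LinkedHashMap<>();
            row.put("name", u.getName());   // hypothetical getter
            row.put("email", u.getEmail()); // hypothetical getter
            out.add(row);
        }
        return new ObjectMapper().writeValueAsString(out);
    }
}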
While not a direct answer, it may be worth mentioning that the Agrest framework (formerly LinkRest) supports rule-based serialization of Cayenne object graphs to JSON. But it is not a standalone component, i.e. it will only work if you use it for your REST services.
I need to parse a JAX-WS REST response, and I tried the following two ways of parsing it. Both work well, but I'd like to know which is the more efficient implementation. Please share your view.
First Approach:
Use getEntity() and get the response as an InputStream.
Using Jackson's ObjectMapper.readValue(), convert the InputStream to a Java object.
Using the getters and setters of the nested Java classes, read the response object's member values.
Second Approach:
Use getEntity() and get the response as an InputStream, then convert the InputStream to a String.
Using the Google JSON API, convert the String to a JSON object.
Using the JSON parser, read the nested objects' member values.
I would say the first approach is better for two reasons:
You don't go through the intermediate step of reading the response payload into a String.
The setter methods called during Jackson deserialization can validate the input and throw appropriate exceptions, so validation happens as part of deserialization (see the sketch below).
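A minimal sketch of the first approach, assuming a hypothetical UserResponse bean that mirrors the payload:

import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.InputStream;

public class FirstApproach {
    // Hypothetical bean mirroring the response payload.
    public static class UserResponse {
        public String id;
        public String name;
    }

    // Deserialize the entity InputStream directly into the bean,
    // with no intermediate String.
    public static UserResponse parse(InputStream entityStream) throws Exception {
        return new ObjectMapper().readValue(entityStream, UserResponse.class);
    }
}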
Maybe not a general answer to this question, but another variant of what you describe under "First approach": I would start with a generic data structure and only introduce an extra bean if necessary. I wouldn't use a String to pass structured data around.
Use Jackson to convert the JSON response to a Map<String,Object> or JsonNode (a sketch follows below the pros and cons).
Advantages:
You don't need to implement a specialized bean class. Even a very simple bean can become unwieldy over time (if the format changes, new nested structures are added to the JSON response, etc.). It also introduces a kind of metaphor into your code, which sometimes helps but can also be misleading.
Map<String,Object> is in the JDK and offers a good interface to access data. You don't have to change any interfaces even if the JSON format changes.
You can always pass your data around in the form of a Map<String,Object>.
Disadvantage:
Data encapsulation. The Map is a very close representation of the input data and therefore does not offer the same level of abstraction as a bean.
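A minimal sketch of this generic variant, reusing the hypothetical payload from above:

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.InputStream;
import java.util.Map;

public class GenericApproach {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Read the response into a generic Map -- no bean class required.
    public static Map<String, Object> asMap(InputStream entityStream) throws Exception {
        return MAPPER.readValue(entityStream, new TypeReference<Map<String, Object>>() {});
    }

    // Or read it into a JsonNode tree and navigate it.
    public static String nameOf(InputStream entityStream) throws Exception {
        JsonNode root = MAPPER.readTree(entityStream);
        return root.path("name").asText(); // "name" is a hypothetical field
    }
}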
I need to generate JSON from domain objects, and I can't add annotations to the domain classes.
Using mix-ins is not an option because I would have to ignore a lot of properties.
My approach was to create a DTO with the properties I need, populate the DTO using Dozer, and then generate JSON from the DTO with Jackson. It seems like too much.
I would like to know if it is possible to configure Jackson from XML, so that it generates JSON with the properties mapped in the XML and the DTO and Dozer become unnecessary.
No. Jackson does not support external configuration files.
But you don't explain how or why you would use Dozer or a DTO. Why not just add the properties you care about to a Map and serialize that as JSON? Then you can use whatever mechanism you want to build/trim that Map.
Jackson can also convert values, so to create a full Map with everything from another object, you can do:
Map<String,Object> map = objectMapper.convertValue(someBean, Map.class);
and then retain only the properties you want.
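A minimal sketch of that idea, with a hypothetical domain class and property names:

import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.Map;

public class TrimmedJson {
    // Hypothetical domain class we are not allowed to annotate.
    public static class Account {
        public String name = "alice";
        public String passwordHash = "secret";
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Convert the bean to a Map, drop what we don't want, serialize the rest.
        @SuppressWarnings("unchecked")
        Map<String, Object> map = mapper.convertValue(new Account(), Map.class);
        map.remove("passwordHash");

        // prints {"name":"alice"}
        System.out.println(mapper.writeValueAsString(map));
    }
}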
We're in the process of switching from Json.NET to ServiceStack.Text, and I came across an issue with serialization of polymorphic collections.
In JSON.NET I used to create a custom JsonCreationConverter and override the Create() method to select the type I want to create, as outlined here:
http://dotnetbyexample.blogspot.co.uk/2012/02/json-deserialization-with-jsonnet-class.html
The collection in question is List<ItemBase> which can contain FlightItem or CarHireItem objects.
This is my version of the Create() method for JSON.NET:
protected override ItemBase Create(Type objectType, JObject jsonObject)
{
    var itemType = jsonObject["Type"].ToString();
    switch (itemType)
    {
        case "flight":
            return new FlightItem();
        case "carhire":
            return new CarHireItem();
        default:
            return null;
    }
}
Is that possible with ServiceStack?
Serialization and deserialization of polymorphic collections works in ServiceStack; however, it appends the object type to the JSON output, e.g.
"__type" : "TestProject.Model.FlightItem, TestProject"
This means that I need to supply the type when posting JSON, and I'm not too keen on having the .NET type visible to anyone in the API calls.
Any suggestions? If it's possible to do this in a different way, can you point me to some examples?
Firstly, interfaces or abstract types in DTOs are a bad idea.
You're now in the strange position of wanting to support polymorphic types in DTOs but not wanting to provide JSON-serializer-specific info. The reason you need bespoke code to support polymorphic DTOs is that it's impossible to tell which concrete type should be used based on the wire format alone, which is why ServiceStack emits the __type property for this purpose.
To avoid these hacks, and have it work equally well in all JSON serializers, you're better off "flattening" your polymorphic types into a single "flat" DTO and sending that across the wire instead. Once you're back in C#, you can use code to project it into the ideal types.
ServiceStack does provide a JsConfig<ItemBase>.RawDeserializeFn that will let you do something similar; see the CustomSerializerTests for an example. There's also the JsConfig<ItemBase>.OnDeserializedFn hook, which can potentially help, but it depends on whether ItemBase contains the complete property list of both concrete types.
AutoBeans are pretty powerful. Yet, for the life of me, I cannot figure out how to handle root-level JSON maps or lists.
Most of the documentation suggests that you have a defined top-level object that can contain a variety of sub-objects (including lists and maps), yet there is no documentation on AutoBeaning a Map or List.
public interface Types {
    List<FieldType> getTypes();
}

public interface TypesAutoBeanFactory extends AutoBeanFactory {
    AutoBean<Types> jsonItems();
}
Above is the commonly referenced way to handle lists, where the incoming data will look like:
{"types":[{...},{...}]}
Yet I find this ugly; the REST service should really just return:
[{...},{...}]
but I cannot find a simple way to handle this with the AutoBean framework. The same goes for root-level maps.
Why does the following not work, and is there an alternative:
public interface TypesAutoBeanFactory extends AutoBeanFactory {
    AutoBean<List<FieldType>> jsonItems();
}
AutoBean works by scanning your interface for get and set methods (and possibly is methods too). List doesn't have those methods.
You could write a patch to hook the inner List-parsing methods directly to the outer decode methods of AutoBeanCodex; most of the code you need is in there. Maybe instead of decode(Class type, Splittable input) you could just add decodeList(Splittable input). The result would probably be an actual List instead of an AutoBean.
Another easy way I can see isn't a fully AutoBean solution.
It's obvious that if the string starts with [{ then it's an array, so we can use
String[] array = jsonString.replaceAll("[\\[\\]]", "").split(",");
and then just work with the elements of the array with AutoBean.
I use the method you describe to decode JSON payloads containing a list under a single key.
This answer also explains another method you could use to combine the approach you mentioned with unkeyed JSON list payloads.
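For reference, a sketch of the keyed-list approach with AutoBeanCodex, reusing the Types and TypesAutoBeanFactory interfaces from the question; wrapping an unkeyed payload under the expected key is a workaround, not an official API:

import com.google.web.bindery.autobean.shared.AutoBean;
import com.google.web.bindery.autobean.shared.AutoBeanCodex;

import java.util.List;

public class TypesDecoder {
    private final TypesAutoBeanFactory factory;

    public TypesDecoder(TypesAutoBeanFactory factory) {
        // e.g. GWT.create(TypesAutoBeanFactory.class) on the client
        this.factory = factory;
    }

    // Decode a payload of the form {"types":[{...},{...}]}
    public List<FieldType> decodeKeyed(String payload) {
        AutoBean<Types> bean = AutoBeanCodex.decode(factory, Types.class, payload);
        return bean.as().getTypes();
    }

    // Workaround for an unkeyed payload [{...},{...}]: wrap it under the
    // key the Types interface expects, then decode as above.
    public List<FieldType> decodeUnkeyed(String payload) {
        return decodeKeyed("{\"types\":" + payload + "}");
    }
}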