How to ensure application/hal+json is the first supported media type - spring-hateoas

I have created a HATEOAS-enabled REST service using spring-boot-starter-data-rest, and it works well.
I then created a client of that REST service in another Spring Boot module: it is a dependency that can be included in other projects that want to use the REST service, and it uses a RestTemplate under the hood.
It took a bit of mucking around with HttpMessageConverters and TypeConstrainedMappingJackson2HttpMessageConverter to get it working, but it does work.
I tried using this dependency in my main application, but it failed to populate the links in ResponseEntity<Resource<Myclass>>, leading to null pointer exceptions.
I couldn't track down the problem, so I created a basic Spring Boot 2.1.5.RELEASE application and got the client working there, then traced the problem back to this configuration in my main application, which unfortunately is needed for another reason:
spring:
  main:
    web-application-type: none
If this configuration is present, it seems that hal+json isn't the first accepted media type:
org.springframework.core.log.CompositeLog.debug(CompositeLog.java:147) : Accept=[application/json, application/hal+json, application/octet-stream, application/*+json]
When the configuration is removed I see
org.springframework.core.log.CompositeLog.debug(CompositeLog.java:147) : Accept=[application/hal+json, application/json, application/octet-stream, application/*+json]
and I can see this logged, which I assume fixes the issue (it isn't logged when the error happens):
- @ConditionalOnProperty (spring.hateoas.use-hal-as-default-json-media-type) matched (OnPropertyCondition)
I have tried adding this configuration to force the issue, but it doesn't work:
spring:
  hateoas:
    use-hal-as-default-json-media-type: true
This is my code in the rest client to configure the message converters:
@Configuration
public class MessageConverterConfiguration {

    @Bean
    public TypeConstrainedMappingJackson2HttpMessageConverter myhalJacksonHttpMessageConverter() {
        return new TypeConstrainedMappingJackson2HttpMessageConverter(ResourceSupport.class);
    }

    /**
     * Add {@link TypeConstrainedMappingJackson2HttpMessageConverter} to the list of {@link HttpMessageConverter}s
     * configured in the {@link RestTemplate} in first position (this position is critical).
     * @param halJacksonHttpMessageConverter automagically configured by spring-boot-starter-hateoas
     * @return List of {@link HttpMessageConverter}s
     */
    @Bean(name = "hal-jackson")
    public List<HttpMessageConverter<?>> mymessageConverters(TypeConstrainedMappingJackson2HttpMessageConverter halJacksonHttpMessageConverter) {
        final List<HttpMessageConverter<?>> all = new ArrayList<>();
        all.add(halJacksonHttpMessageConverter);
        all.add(jacksonConverterWithOctetStreamSupport());
        all.addAll(new RestTemplate().getMessageConverters());
        return all;
    }

    /**
     * This allows converting octet-stream responses into {@link LastApplicationRun},
     * when we create a last run by posting with {@link RestTemplate#postForObject(URI, Object, Class)}:
     * without it we get a
     * <pre>org.springframework.web.client.RestClientException: Could not extract response: no suitable HttpMessageConverter
     * found for response type [class com.sparknz.ced.spark.sampling.rest.tobesampled.client.domain.LastApplicationRun]
     * and content type [application/octet-stream]</pre>.
     * <p></p>
     * I could find no better solution: it is not needed when we make a GET call; I don't understand why we get an octet-stream response.
     * It may only be useful for tests now.
     */
    private MappingJackson2HttpMessageConverter jacksonConverterWithOctetStreamSupport() {
        final MappingJackson2HttpMessageConverter converter = new MappingJackson2HttpMessageConverter();
        converter.setSupportedMediaTypes(asList(
                MediaType.valueOf("application/hal+json"),
                MediaType.APPLICATION_JSON,
                MediaType.APPLICATION_OCTET_STREAM));
        return converter;
    }
}
What is 'web-application-type: none' doing and how can I get HypermediaHttpMessageConverterConfiguration to run?

I found that adding this to my configuration class did the trick:
@Import(RepositoryRestMvcConfiguration.class)
RepositoryRestMvcConfiguration seems to be responsible for making hal+json the highest priority by adding RepositoryRestMvcConfiguration.ResourceSupportHttpMessageConverter at position 0 in the list of HttpMessageConverters.
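For completeness, a minimal sketch of what the client configuration class looks like with that import in place (the converter bean definitions are the ones shown above and are omitted here):
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import org.springframework.data.rest.webmvc.config.RepositoryRestMvcConfiguration;

@Configuration
@Import(RepositoryRestMvcConfiguration.class)
public class MessageConverterConfiguration {
    // converter beans as shown in the question
}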

Related

Spring boot kafka - how to tell JsonDeserializer to ignore type header?

Spring's Kafka producer embeds a type header into messages which specifies to which class the message should be deserialized by a consumer. This is a problem when the producer isn't using Spring Kafka, but the consumer is. In that case, JsonDeserializer cannot deserialize a message and will throw the exception "No type information in headers and no default type provided".
One way to get around this is to set a default deserialization type. This won't work in cases where a single topic contains multiple message schemas.
Another solution I've found is to set
spring.kafka.consumer.properties.spring.json.use.type.headers
to false (in the application.properties file). This doesn't do anything, as the same exception is thrown again.
How do I make sure that JsonDeserializer ignores type headers?
See this option of that deserializer:
/**
 * Set to false to ignore type information in headers and use the configured
 * target type instead.
 * Only applies if the preconfigured type mapper is used.
 * Default true.
 * @param useTypeHeaders false to ignore type headers.
 * @since 2.2.8
 */
public void setUseTypeHeaders(boolean useTypeHeaders) {
It can be configured via property as:
/**
 * Kafka config property for using type headers (default true).
 * @since 2.2.3
 */
public static final String USE_TYPE_INFO_HEADERS = "spring.json.use.type.headers";
In this case the logic is going to be like this:
this.typeMapper.setTypePrecedence(this.useTypeHeaders ? TypePrecedence.TYPE_ID : TypePrecedence.INFERRED);
which means that the type for deserialization is inferred from the listener method.
See more info in docs: https://docs.spring.io/spring-kafka/reference/html/#json-serde
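As a rough sketch, the deserializer can also be configured programmatically rather than through properties, so that the target type is fixed and type headers are ignored (MyEvent, the group id and the bootstrap server below are placeholders; setUseTypeHeaders needs spring-kafka 2.2.8 or later):
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.serializer.JsonDeserializer;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, MyEvent> consumerFactory() {
        // Always deserialize record values to MyEvent and ignore any type headers on the record
        JsonDeserializer<MyEvent> valueDeserializer = new JsonDeserializer<>(MyEvent.class);
        valueDeserializer.setUseTypeHeaders(false);

        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");

        return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), valueDeserializer);
    }
}
With a factory like this, the listener container always uses the fixed target type, which matches the inferred-type behaviour described above.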

How do you override the Hystrix configuration for OpenFeign?

How do you override the Hystrix default configuration for OpenFeign? Most of the documentation out there is for Spring Boot + OpenFeign, which has its own Spring-specific configuration override system.
Ideally it would be possible to configure the Hystrix core size for the client and to configure timeouts on a per-endpoint basis.
HystrixFeign has a setterFactory() method on its builder that allows you to pass in a SetterFactory lambda function that is executed when setting up each target endpoint:
final SetterFactory hystrixConfigurationFactory = (target, method) -> {
    final String groupKey = target.name();
    final String commandKey = method.getAnnotation(RequestLine.class).value();

    // Configure default thread pool properties
    final HystrixThreadPoolProperties.Setter hystrixThreadPoolProperties = HystrixThreadPoolProperties.Setter()
            .withCoreSize(50)
            .withMaximumSize(200)
            .withAllowMaximumSizeToDivergeFromCoreSize(true);

    return HystrixCommand.Setter
            .withGroupKey(HystrixCommandGroupKey.Factory.asKey(groupKey))
            .andCommandKey(HystrixCommandKey.Factory.asKey(commandKey))
            .andThreadPoolPropertiesDefaults(hystrixThreadPoolProperties);
};
final MyTargetClient myTargetClient = HystrixFeign.builder()
        .setterFactory(hystrixConfigurationFactory)
        .client(new OkHttpClient())
        .encoder(new JacksonEncoder(objectMapper))
        .decoder(new JacksonDecoder(objectMapper))
        .target(new Target.HardCodedTarget<>(MyTargetClient.class, "customclientname", baseUrl));
The above example uses boilerplate from the OpenFeign documentation to properly name Hystrix keys based on the target endpoint function. It then goes further by also configuring the thread pool core size and maximum size as a default for all of the target functions.
However, since this factory is called for each target endpoint, we can actually override the Hystrix configuration on a per-endpoint basis. A good use case for this is Hystrix timeouts: sometimes there are endpoints that take longer than others, and we need to account for that.
The easiest way would be to first create an annotation and place it on the target endpoints that need to be overridden:
/**
 * Override Hystrix configuration for Feign targets.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface HystrixOverride {

    int DEFAULT_EXECUTION_TIMEOUT = 2_000;

    /**
     * Execution timeout in milliseconds.
     */
    int executionTimeout() default DEFAULT_EXECUTION_TIMEOUT;
}

interface MyTargetClient {

    @HystrixOverride(executionTimeout = 10_000)
    @RequestLine("GET /rest/{storeCode}/V1/products")
    Products searchProducts(@Param("storeCode") String storeCode, @QueryMap Map<String, Object> queryMap);

    @RequestLine("GET /rest/{storeCode}/V1/products/{sku}")
    Product getProduct(@Param("storeCode") String storeCode, @Param("sku") String sku);
}
In the above example, the search API might take a little longer to respond, so we have an override for it.
Just putting the override annotation on the target endpoint function is not enough though. We need to go back to our factory and update it to use the data in the annotations:
final SetterFactory hystrixConfigurationFactory = (target, method) -> {
    final String groupKey = target.name();
    final String commandKey = method.getAnnotation(RequestLine.class).value();

    // Configure per-function Hystrix configuration by referencing annotations
    final HystrixCommandProperties.Setter hystrixCommandProperties = HystrixCommandProperties.Setter();
    final HystrixOverride hystrixOverride = method.getAnnotation(HystrixOverride.class);
    final int executionTimeout = (hystrixOverride == null)
            ? HystrixOverride.DEFAULT_EXECUTION_TIMEOUT
            : hystrixOverride.executionTimeout();
    hystrixCommandProperties.withExecutionTimeoutInMilliseconds(executionTimeout);

    // Configure default thread pool properties
    final HystrixThreadPoolProperties.Setter hystrixThreadPoolProperties = HystrixThreadPoolProperties.Setter()
            .withCoreSize(50)
            .withMaximumSize(200)
            .withAllowMaximumSizeToDivergeFromCoreSize(true);

    return HystrixCommand.Setter
            .withGroupKey(HystrixCommandGroupKey.Factory.asKey(groupKey))
            .andCommandKey(HystrixCommandKey.Factory.asKey(commandKey))
            .andCommandPropertiesDefaults(hystrixCommandProperties)
            .andThreadPoolPropertiesDefaults(hystrixThreadPoolProperties);
};
The above checks whether an override annotation exists and, if so, uses the data in that annotation to configure the execution timeout for that target endpoint. If the override is not present, the default defined in HystrixOverride is used instead. The resulting hystrixCommandProperties variable is then plugged into the overall HystrixCommand.Setter at the end.

Symfony2 format exception thrown by method annotation

In Symfony 2.4 I'm using route and method annotations as follows:
/** Offer creation processing
 *
 * @param Request $request
 * @return JsonResponse
 *
 * @Route("/process", name="process", options={"expose" : true}, defaults={"_format" : "json"})
 * @Method("POST")
 */
If I throw a MethodNotAllowedException inside the action body, the response is correctly JSON formatted, whereas an HTTP GET call returns a fully formatted HTML exception page, as if the _format attribute were not loaded.
Is it possible to pass the _format attribute to the ExceptionController sub-request?
Not allowing @Method("GET") means that Symfony rejects the request at the route level and executes the default exception controller. If you want to override the default exception output, override the default exception behavior as outlined here.
I think you can get a JSON exception simply by adding some .json.twig templates. You could also override the default exception controller if you need more flexibility.

Handling bi-directional properties when serializing (if I need both)

I have two Entities:
Organization with some packages:
/**
 * @ORM\OneToMany(targetEntity="Package", mappedBy="organization", cascade={"all"})
 **/
private $packages;
And the package, which belongs to an organization:
/**
 * @var string
 * @ORM\ManyToOne(targetEntity="Organization", inversedBy="packages", cascade={"all"})
 * @ORM\JoinColumns({
 *     @ORM\JoinColumn(name="organization", referencedColumnName="id")
 * })
 */
private $organization;
Now I have two use cases: I want to get one organization with all its packages in serialized form. But I also need to display the package (also serialized) with the information about which organization it belongs to.
When I simply serialize the results, let's say the organization (the packages are serialized the same way):
// Serialize to json
$serializer = new Serializer(array(
    new GetSetMethodNormalizer()
), array(
    'json' => new JsonEncoder()
));
$json = $serializer->serialize($result, 'json');
return $json;
I'm running into the problem that it serializes the organization and gets all its packages, and in those packages it again looks up the organization (which again has the list of packages, and so on), so I end up in an infinite loop.
Is there a best practice for doing something like this? Or do I have to create two classes which don't have the properties defined above and then map the entities onto those classes?
I use this serialization approach for all objects I want to get in JSON format.
In the end, I solved my problem with the JMSSerializerBundle; look at its MaxDepth property.

Dependency injection using ".properties" file

I am using Java EE 6 and need to load configuration from a ".properties" file. Is there a recommended way (best practice) to load the values from the configuration file using dependency injection? I found annotations for this in Spring, but I have not found a "standard" annotation for Java EE.
This guy has developed a solution from scratch:
http://weblogs.java.net/blog/jjviana/archive/2010/05/18/applicaction-configuration-java-ee-6-using-cdi-simple-example
"I couldn't find a simple example of how to configure your application
with CDI by reading configuration attributes from a file..."
But I wonder if there is a more standard way instead of creating a configuration factory...
Configuration annotation
package com.ubiteck.cdi;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import javax.enterprise.util.Nonbinding;
import javax.inject.Qualifier;

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
public @interface InjectedConfiguration {

    /**
     * Bundle key
     * @return a valid bundle key or ""
     */
    @Nonbinding String key() default "";

    /**
     * Is it a mandatory property
     * @return true if mandatory
     */
    @Nonbinding boolean mandatory() default false;

    /**
     * Default value if not provided
     * @return default value or ""
     */
    @Nonbinding String defaultValue() default "";
}
The configuration factory could look like:
import java.text.MessageFormat;
import java.util.MissingResourceException;
import java.util.ResourceBundle;
import javax.enterprise.inject.Produces;
import javax.enterprise.inject.spi.InjectionPoint;

public class ConfigurationInjectionManager {

    static final String INVALID_KEY = "Invalid key '{0}'";
    static final String MANDATORY_PARAM_MISSING = "No definition found for a mandatory configuration parameter : '{0}'";
    private final String BUNDLE_FILE_NAME = "configuration";
    private final ResourceBundle bundle = ResourceBundle.getBundle(BUNDLE_FILE_NAME);

    @Produces
    @InjectedConfiguration
    public String injectConfiguration(InjectionPoint ip) throws IllegalStateException {
        InjectedConfiguration param = ip.getAnnotated().getAnnotation(InjectedConfiguration.class);
        if (param.key() == null || param.key().length() == 0) {
            return param.defaultValue();
        }
        String value;
        try {
            value = bundle.getString(param.key());
            if (value == null || value.trim().length() == 0) {
                if (param.mandatory())
                    throw new IllegalStateException(MessageFormat.format(MANDATORY_PARAM_MISSING, new Object[]{param.key()}));
                else
                    return param.defaultValue();
            }
            return value;
        } catch (MissingResourceException e) {
            if (param.mandatory()) throw new IllegalStateException(MessageFormat.format(MANDATORY_PARAM_MISSING, new Object[]{param.key()}));
            return MessageFormat.format(INVALID_KEY, new Object[]{param.key()});
        }
    }
}
Tutorial with explanation and Arquillian test
Even though it does not exactly cover your question, this part of the Weld documentation might be of interest to you.
Having mentioned this: no, there is no standard way to inject arbitrary resources / resource files. I guess it's simply beyond the scope of a spec to standardise such a highly custom-dependent requirement (Spring is not a specification; they can simply implement whatever they like). However, what CDI provides is a strong (i.e. typesafe) mechanism for injecting configuration-holding beans on one side, and a flexible producer mechanism for reading and creating such beans on the other. This is definitely the recommended way you were asking about.
The approach you are linking to is certainly a pretty good one, even though it might be too much for your needs, depending on the kind of properties you are planning to inject.
A very CDI-ish way of continuing would be to develop a CDI extension (that would nicely encapsulate all required classes) and deploy it independently with your projects. Of course you can also contribute to the CDI-extension catalog or even Apache Deltaspike.
See @ConfigProperty of Apache DeltaSpike.
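For illustration, a minimal sketch of what DeltaSpike property injection looks like (the property name, default value, and class name are placeholders):
import javax.inject.Inject;
import org.apache.deltaspike.core.api.config.ConfigProperty;

public class PollingService {

    // Resolved by DeltaSpike from its registered config sources, e.g. META-INF/apache-deltaspike.properties
    @Inject
    @ConfigProperty(name = "endpoint.poll.interval", defaultValue = "100")
    private String pollInterval;
}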
The only "standard" way of doing this would be to use a qualifier with a nonbinding annotation member, and make sure all of your injections are dependent scoped. Then in your producer you can get a hold of the InjectionPoint and get the key off the qualifier in the injection point. You'd want something like this:
@Qualifier
public @interface Property {
    @Nonbinding String value() default "";
}
...
@Inject @Property("myKey") String myKey;
...
@Produces @Property
public String getPropertyByKey(InjectionPoint ip) {
    // Loop through the qualifiers looking for Property and save it off
    Property property = null;
    for (Annotation qualifier : ip.getQualifiers()) {
        if (qualifier instanceof Property) property = (Property) qualifier;
    }
    return ResourceBundle.getBundle(...).getString(property.value());
}
There are obviously some enhancements you can do to that code, but it should be enough to get you started down the right track.