I finally got video playback to work in Chrome, with the seek feature, by returning status 206 along with the Content-Range and related headers. It works well for smaller videos but fails with large videos. Just to note, I am not sending the actual byte ranges explicitly but deliver the entire stream to the webserver. I get the following error:
org.eclipse.jetty.io.EofException,
This occurs in the backend data server that serves the entire InputStream to a servlet; Jetty is the server being used. I am not sure how this process actually plays the video back and how it corrected the seek feature I needed, but now the video fails after playing for a while. The following error also appears in the browser debugger:
ERR_CONTENT_LENGTH_MISMATCH
I have an audio stream being requested at the same time and played back as well, since I do not know how to mix the two streams.
Any ideas or advice appreciated.
EDIT:
Thanks for the advice to change ResourceHandler to DefaultServlet; I am not sure where to do this, so I found the places where it appears in the code:
private void addHttpContexts(ConfigNode cnode) throws Exception {
    try {
        // get all the http context nodes
        ConfigNode[] httpContextNodes = cnode.getChildNode("HttpContextList").getChildNodes();
        for (int s = 0; s < httpContextNodes.length; s++) {
            String urlPath = httpContextNodes[s].getChildNode("ContextPath").getStringValueEx();
            String resourceBase = httpContextNodes[s].getChildNode("ResourceBase").getStringValueEx();
            ArrayList<String> welcomeFileList = new ArrayList<String>();
            if (httpContextNodes[s].hasChildNode("WelcomeFile")) {
                String welcomeFile = httpContextNodes[s].getChildNode("WelcomeFile").getStringValueEx();
                welcomeFileList.add(welcomeFile);
            }
            ContextHandler context = new ContextHandler(contexts, urlPath);
            ResourceHandler resourceHandler = new ResourceHandler();
            resourceHandler.setResourceBase(resourceBase);
            resourceHandler.setWelcomeFiles((String[]) welcomeFileList.toArray(new String[welcomeFileList.size()]));
            context.setHandler(resourceHandler);
        }
    } catch (Exception ex) {
        trace.warning("Configuration of http contexts failed", ex);
        throw ex;
    }
}
What are the appropriate replacements for setResourceBase(resourceBase) and setWelcomeFiles((String[]) welcomeFileList.toArray(new String[welcomeFileList.size()]))?
This is the other place in the same class where I found DefaultServlet:
ServletHolder holderDefault = new ServletHolder("default",DefaultServlet.class);
holderDefault.setInitParameter("dirAllowed","false");
and it is also already defined in web.xml:
<servlet>
<servlet-name>default</servlet-name>
<servlet-class>org.eclipse.jetty.servlet.DefaultServlet</servlet-class>
<init-param>
<param-name>dirAllowed</param-name>
<param-value>false</param-value>
</init-param>
</servlet>
By default, Jetty's DefaultServlet will handle range requests properly for static content served by Jetty itself.
No other component in Jetty handles range requests on its own.
If you have custom code, your own Servlets, your own Jetty Handlers, a REST endpoint, specialized Filters, spring-mvc setup, etc... then you have to handle the range request yourself.
This is because it's very impractical for the webserver to support this for custom code. (It would have to request the entire content from the custom code, and then send only the specific byte range to the requesting client.)
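In a custom servlet that streams the file itself, this means parsing the Range header, returning 206 with a Content-Range header, and making sure Content-Length matches the number of bytes actually written (a mismatch is exactly what the browser reports as ERR_CONTENT_LENGTH_MISMATCH). A minimal sketch, assuming a local file and handling only simple "bytes=start-end" ranges; the class and field names are illustrative, not taken from the code above:

import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch: honor a single "Range: bytes=start-end" header when streaming a file.
public class RangeStreamingServlet extends HttpServlet {

    private File videoFile; // assumed to be initialized elsewhere

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        long length = videoFile.length();
        long start = 0;
        long end = length - 1;
        String range = request.getHeader("Range");
        if (range != null && range.startsWith("bytes=")) {
            String[] parts = range.substring("bytes=".length()).split("-", 2);
            if (!parts[0].isEmpty()) {
                start = Long.parseLong(parts[0]);
            }
            if (parts.length > 1 && !parts[1].isEmpty()) {
                end = Long.parseLong(parts[1]);
            }
            response.setStatus(HttpServletResponse.SC_PARTIAL_CONTENT);
            response.setHeader("Content-Range", "bytes " + start + "-" + end + "/" + length);
        }
        long contentLength = end - start + 1;
        response.setHeader("Accept-Ranges", "bytes");
        response.setContentLengthLong(contentLength); // must match the bytes actually written
        response.setContentType("video/mp4");
        try (RandomAccessFile raf = new RandomAccessFile(videoFile, "r")) {
            OutputStream out = response.getOutputStream();
            raf.seek(start);
            byte[] buffer = new byte[8192];
            long remaining = contentLength;
            while (remaining > 0) {
                int read = raf.read(buffer, 0, (int) Math.min(buffer.length, remaining));
                if (read == -1) {
                    break;
                }
                out.write(buffer, 0, read);
                remaining -= read;
            }
        }
    }
}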
I need to create a benchmark report on whether, in the grand scheme of things, minifying + GZIPping dynamic HTML responses (generated through GSPs) on every request is actually better than GZIP alone. Minifying adds overhead on every request, because the generated dynamic HTML string must be parsed and then compressed with a Java library, but it yields a smaller response size; GZIP without minifying gives a faster response time but a slightly larger response size. I have a feeling this "improvement" may be insignificant, but I need the benchmark report to back it up to the team.
To do that, I modify controller actions like so:
// import ...MinifyPlugin
class HomeController {
def get() {
Map model = [:]
String htmlBody = groovyPageRenderer.render(view: "/get", model: model)
// This adds a few milliseconds and removes a few characters.
htmlBody = MinifyPlugin.minifyHtmlString(htmlBody)
render htmlBody
}
}
But the Grails project has almost a hundred actions, and doing this on every existing action is impractical and not maintainable, especially since, after the benchmarking, we may decide not to minify the HTML response. So I was thinking of doing this inside an Interceptor instead:
void afterView() {
if(response.getContentType().contains("text/html")) {
// This throws IllegalStateException: getWriter() has already been called for this response
OutputStream servletOutputStream = response.getOutputStream()
String htmlBody = new String(servletOutputStream.toByteArray())
htmlBody = MinifyPlugin.minifyHtmlString(htmlBody)
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream()
byteArrayOutputStream.write(htmlBody.getBytes())
response.setCharacterEncoding("UTF-8")
response.setContentType("text/html")
response.outputStream << byteArrayOutputStream
}
}
But it seems that modification of the response body is impossible once it enters the afterView interceptor...? So is there any other way to do this using Grails 3 Interceptors, or should I manually update every controller action we have and perform the modification there instead?
This is what I like to use Interceptors for.
The after() part of the interceptor can act on the model after it is returned from the controller (whereas before() acts on the request before it is sent to the controller).
This allows you to manipulate all data for a set of endpoints (or one specific endpoint) prior to returning it to the client.
If you want to render a view, you do that in the interceptor rather than in the controller; the controller merely returns data.
I am looking for existing solutions to match dynamic parameters with HttpCore. What I have in mind is something similar to constraints in ruby on rails, or dynamic parameters with sails (see here for example).
My objective is to define a REST API where I could easily match requests like GET /objects/<object_id>.
To give a little bit of context, I have an application that creates an HttpServer using the following code
server = ServerBootstrap.bootstrap()
.setListenerPort(port)
.setServerInfo("MyAppServer/1.1")
.setSocketConfig(socketConfig)
.registerHandler("*", new HttpHandler(this))
.create();
And the HttpHandler class that matches the requested URI and dispatches it to the corresponding backend method:
public void handle(final HttpRequest request, final HttpResponse response, final HttpContext context) {
    String method = request.getRequestLine().getMethod().toUpperCase(Locale.ROOT);
    // Parameters are ignored for the example
    String path = request.getRequestLine().getUri();
    if (method.equals("POST") && path.equals("/object/add")) {
        if (request instanceof HttpEntityEnclosingRequest) {
            addObject(((HttpEntityEnclosingRequest) request).getEntity());
        }
    }
    [...]
For sure I can replace path.equals("/object/add") with something more sophisticated, using a RegEx to match these dynamic parameters, but before doing so I'd like to know whether I am reinventing the wheel, or if there is an existing lib/class I didn't see in the docs that could help me.
Using HttpCore is a requirement (it is already integrated in the application I am working on), I know some other libraries provide high-level routing mechanisms that support these dynamic parameters, but I can't really afford switching the entire server code to another library.
I am currently using httpcore 4.4.10, but I can upgrade to a newer version if that might help.
At present HttpCore does not have a fully featured request routing layer. (The reasons for that are more political than technical).
Consider using a custom HttpRequestHandlerMapper to implement your application specific request routing logic.
final HttpServer server = ServerBootstrap.bootstrap()
        .setListenerPort(port)
        .setServerInfo("Test/1.1")
        .setSocketConfig(socketConfig)
        .setSslContext(sslContext)
        .setHandlerMapper(new HttpRequestHandlerMapper() {
            @Override
            public HttpRequestHandler lookup(HttpRequest request) {
                try {
                    URI uri = new URI(request.getRequestLine().getUri());
                    String path = uri.getPath();
                    // do request routing based on the request path
                    return new HttpFileHandler(docRoot);
                } catch (URISyntaxException e) {
                    // Provide a more reasonable error handler here
                    return null;
                }
            }
        })
        .setExceptionLogger(new StdErrorExceptionLogger())
        .create();
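To match dynamic paths like GET /objects/<object_id>, the lookup can apply a regular expression to the request path and hand the captured value to a handler. A rough sketch, where ObjectHandler is a hypothetical handler (not part of HttpCore) that serves a single object:

import java.net.URI;
import java.net.URISyntaxException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.http.HttpRequest;
import org.apache.http.protocol.HttpRequestHandler;
import org.apache.http.protocol.HttpRequestHandlerMapper;

// Sketch: regex-based routing of /objects/<object_id> with a custom handler mapper.
public class RegexHandlerMapper implements HttpRequestHandlerMapper {

    private static final Pattern OBJECT_PATTERN = Pattern.compile("^/objects/([^/]+)$");

    @Override
    public HttpRequestHandler lookup(HttpRequest request) {
        try {
            String path = new URI(request.getRequestLine().getUri()).getPath();
            Matcher matcher = OBJECT_PATTERN.matcher(path);
            if (matcher.matches()) {
                // The captured group is the dynamic parameter, e.g. "42" for GET /objects/42
                return new ObjectHandler(matcher.group(1));
            }
            return null; // no route matched; HttpCore then sends its default error response
        } catch (URISyntaxException e) {
            return null;
        }
    }
}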
I'm using Feign from the spring-cloud-starter-feign to send requests to a defined backend. I would like to use Hystrix as a circuit-breaker but for only one type of use-case: If the backend responds with a HTTP 429: Too many requests code, my Feign client should wait exactly one hour until it contacts the real backend again. Until then, a fallback method should be executed.
How would I have to configure my Spring Boot (1.5.10) application in order to accomplish that? I see many configuration possibilities, but only a few examples, which are, in my opinion, unfortunately not organized around use-cases.
This can be achieved by defining an ErrorDecoder and taking manual control of the Hystrix circuit breaker. You can inspect the response codes from the exceptions and provide your own fallback. In addition, if you wish to retry the request, wrap and throw your exception as a RetryableException.
To meet your Retry requirement, also register a Retryer bean with the appropriate configuration. Keep in mind that using a Retryer will tie up a thread for the duration. The default implementation of Retryer does use an exponential backoff policy as well.
Here is an example ErrorDecoder taken from the OpenFeign documentation:
public class StashErrorDecoder implements ErrorDecoder {
    @Override
    public Exception decode(String methodKey, Response response) {
        if (response.status() >= 400 && response.status() <= 499) {
            return new StashClientException(
                    response.status(),
                    response.reason()
            );
        }
        if (response.status() >= 500 && response.status() <= 599) {
            return new StashServerException(
                    response.status(),
                    response.reason()
            );
        }
        return errorStatus(methodKey, response);
    }
}
In your case, you would react to 429 as desired.
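For the 429 case specifically, a decoder could look something like the sketch below; TooManyRequestsException is a hypothetical application exception here, and everything else falls back to Feign's default decoding:

import feign.Response;
import feign.codec.ErrorDecoder;

// Sketch only: map HTTP 429 to a dedicated exception, delegate everything else.
public class RateLimitAwareErrorDecoder implements ErrorDecoder {

    private final ErrorDecoder defaultDecoder = new ErrorDecoder.Default();

    @Override
    public Exception decode(String methodKey, Response response) {
        if (response.status() == 429) {
            // Surface the rate limit so the Hystrix fallback (or caller) can react to it
            return new TooManyRequestsException("Backend rate limit hit for " + methodKey);
        }
        return defaultDecoder.decode(methodKey, response);
    }
}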
You can forcibly open the circuit breaker by setting this property at runtime:
hystrix.command.HystrixCommandKey.circuitBreaker.forceOpen
ConfigurationManager.getConfigInstance()
.setProperty(
"hystrix.command.HystrixCommandKey.circuitBreaker.forceOpen", true);
Replace HystrixCommandKey with your own command. You will need to restore this circuit breaker back to closed after the desired time.
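One way to restore it is to schedule the reset at the moment you force the circuit open. A rough sketch using the Archaius ConfigurationManager shown above; the class name and the one-hour delay are illustrative:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.netflix.config.ConfigurationManager;

// Sketch: force a named circuit open, then stop forcing it one hour later.
public class CircuitBreakerToggle {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void openForOneHour(String commandKey) {
        String property = "hystrix.command." + commandKey + ".circuitBreaker.forceOpen";
        ConfigurationManager.getConfigInstance().setProperty(property, true);
        // After one hour, stop forcing the circuit open so normal Hystrix behaviour resumes
        scheduler.schedule(
                () -> ConfigurationManager.getConfigInstance().setProperty(property, false),
                1, TimeUnit.HOURS);
    }
}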
I could solve it with the following adjustments:
Properties in application.yml:
hystrix.command.commandKey:
execution.isolation.thread.timeoutInMilliseconds: 10_000
metrics.rollingStats.timeInMilliseconds: 10_000
circuitBreaker:
errorThresholdPercentage: 1
requestVolumeThreshold: 1
sleepWindowInMilliseconds: 3_600_000
Code in the respective Java class:
@HystrixCommand(fallbackMethod = "fallbackMethod", commandKey = COMMAND_KEY)
public void doCall(String parameter) {
try {
feignClient.doCall(parameter);
} catch (FeignException e) {
if (e.status() == 429) {
throw new TooManyRequestsException(e.getMessage());
}
}
}
When I load 100,000 rows with 20 columns into the Kendo grid, I get a 500 error.
So I checked the JSON response, and I'm getting an out-of-memory exception. This is the code.
In the web.config, I have set:
<system.web.extensions>
<scripting>
<webServices>
<jsonSerialization maxJsonLength="500000000"/>
</webServices>
</scripting>
</system.web.extensions>
This is the MVC controller.
public JsonResult JqueryKendoGridVirtualScrolling()
{
    using (var s = new KendoEntities())
    {
        var x = s.premiumsbytreaties.ToList().Take(100000);
        if (x != null)
        {
            var jsonResult = Json(x, JsonRequestBehavior.AllowGet);
            jsonResult.MaxJsonLength = int.MaxValue;
            return jsonResult;
            //return Json(x.ToList(), JsonRequestBehavior.AllowGet);
        }
        else
        {
            return null;
        }
    }
}
It works fine for 6 columns, but not for 15 columns.
It works fine with 20,000 records; see the output.
Two problems here:
500,000,000 characters is too long for the JSON content of an HTTP Response. This will take time to pass over the network and that is before the client browser can parse and process the data (taking time and memory).
1% of this is still too much.
You are not using Kendo's support for "Grid / Virtualization of remote data".
In their example, the action's first parameter is [DataSourceRequest] DataSourceRequest request, and this is then used (with the ToDataSourceResult extension method) in creating the response.
Like the AJAX methods supporting Kendo grids (and other controls), these work together to pass the paging, sorting and filtering parameters and apply them to the data before it is returned. If your data is an IQueryable<T> for a suitable db provider, this will happen on the database.
Spring Boot Actuator's Trace does a good job of capturing input/output HTTP params, headers, users, etc. I'd like to expand it to also capture the body of the HTTP response, so that I can have a full view of what is coming into and going out of the web layer. Looking at TraceProperties, it doesn't look like there is a way to configure response body capturing. Is there a "safe" way to capture the response body without messing up whatever character stream it is sending back?
Recently, I wrote a blog post about customization of Spring Boot Actuator's trace endpoint and while playing with Actuator, I was kinda surprised that response body isn't one of the supported properties to trace.
I thought I may need this feature and came up with a quick solution thanks to Logback's TeeFilter.
To duplicate output stream of the response, I copied and used TeeHttpServletResponse and TeeServletOutputStream without too much examination.
Then, just as I explained in the blog post, I extended WebRequestTraceFilter like this:
@Component
public class RequestTraceFilter extends WebRequestTraceFilter {

    RequestTraceFilter(TraceRepository repository, TraceProperties properties) {
        super(repository, properties);
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException {
        TeeHttpServletResponse teeResponse = new TeeHttpServletResponse(response);
        filterChain.doFilter(request, teeResponse);
        teeResponse.finish();
        request.setAttribute("responseBody", teeResponse.getOutputBuffer());
        super.doFilterInternal(request, teeResponse, filterChain);
    }

    @Override
    protected Map<String, Object> getTrace(HttpServletRequest request) {
        Map<String, Object> trace = super.getTrace(request);
        byte[] outputBuffer = (byte[]) request.getAttribute("responseBody");
        if (outputBuffer != null) {
            trace.put("responseBody", new String(outputBuffer));
        }
        return trace;
    }
}
Now you can see responseBody in the JSON that the trace endpoint serves.
From one of the spring maintainers:
Tracing the request and response body has never been supported out of the box. Support for tracing parameters was dropped as, when the request is POSTed form data, it requires reading the entire request body.
https://github.com/spring-projects/spring-boot/issues/12953
With a WebFlux reactive stack, it is possible to capture the HTTP request and response bodies using spring-cloud-gateway and inject them into the Actuator httptrace by defining a custom HttpTraceWebFilter.
See full associated code at https://gist.github.com/gberche-orange/06c26477a313df9d19d20a4e115f079f
This requires quite a bit of duplication; hopefully the Spring Boot team will help reduce it. See the related https://github.com/spring-projects/spring-boot/issues/23907