There is a web service deployed on Tomcat 6 and exposed via Apache CXF 2.3.3. I generated client stubs using wsdl2java to be able to call this service.
Things seemed fine until I sent a big request (~1 MB). That request wasn't processed and failed with this exception:
Interceptor for {http://localhost/}ResourceAllocationServiceSoapService has thrown
exception, unwinding now org.apache.cxf.binding.soap.SoapFault:
Error reading XMLStreamReader.
...
com.ctc.wstx.exc.WstxEOFException: Unexpected EOF in prolog
at [row,col {unknown-source}]: [1,0]
Is there some kind of maximum request length here? I'm totally stuck on it.
Vladimir's suggestion worked. The code below should help others understand where the 1000000 goes.
public void handleMessage(SoapMessage message) throws Fault {
    // Get the message content for "dirty" editing...
    InputStream inputStream = message.getContent(InputStream.class);
    if (inputStream != null) {
        String processedSoapEnv = "";
        // Cache the InputStream so it can be read independently
        CachedOutputStream cachedInputStream = new CachedOutputStream(1000000);
        try {
            IOUtils.copy(inputStream, cachedInputStream);
            inputStream.close();
            cachedInputStream.close();
            InputStream tmpInputStream = cachedInputStream.getInputStream();
            try {
                StringBuilder inputBuffer = new StringBuilder();
                int data;
                while ((data = tmpInputStream.read()) != -1) {
                    inputBuffer.append((char) data);
                }
                /*
                 * At this point you can choose to reformat the SOAP
                 * envelope or simply view it. Just make sure you put
                 * an InputStream back when you are done (see below),
                 * otherwise CXF will complain.
                 */
                processedSoapEnv = fixSoapEnvelope(inputBuffer.toString());
            } catch (IOException e) {
                // Don't swallow this silently in real code; log it at minimum
                e.printStackTrace();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        // Reset the SOAP InputStream with the new envelope
        message.setContent(InputStream.class,
                new ByteArrayInputStream(processedSoapEnv.getBytes()));
        /*
         * If you just want to read the InputStream and not modify it,
         * then you just need to put it back where it was using the
         * CXF cached InputStream:
         *
         * message.setContent(InputStream.class, cachedInputStream.getInputStream());
         */
    }
}
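For context, here is a sketch of the class this method would live in (my assumption, not part of the original answer): CXF in-interceptors typically extend AbstractSoapInterceptor, declare their phase, and are then registered via the endpoint's in-interceptor list or jaxws:inInterceptors in Spring config.

import org.apache.cxf.binding.soap.SoapMessage;
import org.apache.cxf.binding.soap.interceptor.AbstractSoapInterceptor;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.phase.Phase;

// Hypothetical wrapper class for the handleMessage() method shown above
public class SoapEnvelopeRewriteInterceptor extends AbstractSoapInterceptor {

    public SoapEnvelopeRewriteInterceptor() {
        super(Phase.RECEIVE); // run early, before the SOAP body is parsed
    }

    @Override
    public void handleMessage(SoapMessage message) throws Fault {
        // body as shown above
    }
}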
I figured out what was wrong. It was actually a bug inside the interceptor's code:
CachedOutputStream requestStream = new CachedOutputStream();
When I replaced this with
CachedOutputStream requestStream = new CachedOutputStream(1000000);
things started working fine.
So the request was simply being truncated while the streams were copied.
I ran into the same issue of getting "com.ctc.wstx.exc.WstxEOFException: Unexpected EOF in prolog" when using the CachedOutputStream class.
Looking at the sources of CachedOutputStream, the threshold is used to switch between keeping the stream's data in memory and spilling it to a file.
If the stream's data exceeds the threshold, it is stored in a file, so the following code breaks:
IOUtils.copy(inputStream, cachedInputStream);
inputStream.close();
cachedInputStream.close(); // closes the stream; the file on disk gets deleted
InputStream tmpInputStream = cachedInputStream.getInputStream(); // tmpInputStream is a brand new *empty* stream
// ... reading tmpInputStream here will produce WstxEOFException
Increasing the threshold helps because all the stream data is then kept in memory; in that scenario, calling cachedInputStream.close() does not really close the underlying stream implementation, so it can still be read later on.
Here is the 'fixed' version of the code above (at least it ran without exceptions for me):
IOUtils.copy(inputStream,cachedInputStream);
inputStream.close();
InputStream tmpInputStream = cachedInputStream.getInputStream();
cachedInputStream.close();
// reading from tmpInputStream here works fine
The temporary file gets deleted when close() is called on tmpInputStream and there are no other references to it; see the source code of CachedOutputStream.maybeDeleteTempFile().
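To summarize the safe ordering as one self-contained helper, here is a minimal sketch using the same CXF classes discussed above (my naming, not code from the original answer):

import java.io.IOException;
import java.io.InputStream;
import org.apache.cxf.helpers.IOUtils;
import org.apache.cxf.io.CachedOutputStream;

public final class StreamReopener {

    // Copies 'in' into a CachedOutputStream and returns a fresh readable stream.
    // The replacement stream is obtained *before* close(), so a file-backed
    // cache keeps its temp file alive until the returned stream is closed.
    public static InputStream copyAndReopen(InputStream in) throws IOException {
        CachedOutputStream cache = new CachedOutputStream();
        IOUtils.copy(in, cache);
        in.close();
        InputStream reopened = cache.getInputStream();
        cache.close();
        return reopened;
    }
}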
Related
I would like to catch this exception rather than simply returning a 500 to end users, which is a poor experience, at least in my application.
The intention is to return the user to the form page with some feedback so they can try again.
Currently the user gets a 500 back and the following is printed to the logs:
Caused by: org.apache.tomcat.util.http.fileupload.FileUploadBase$SizeLimitExceededException: the request was rejected because its size (157552) exceeds the configured maximum (1024)
Crediting @james-kleeh for this head start.
I could only get this working on Grails 4.0.0.M2 by extending the StandardServletMultipartResolver implementation, which is what is used by default. The maxFileSize limits then continue to be resolved from config (YAML).
public class MyMultipartResolver extends StandardServletMultipartResolver {

    static final String FILE_SIZE_EXCEEDED_ERROR = "fileSizeExceeded"

    public MultipartHttpServletRequest resolveMultipart(HttpServletRequest request) {
        try {
            return super.resolveMultipart(request)
        } catch (MaxUploadSizeExceededException e) {
            // Flag the request so the controller can react, then hand back an
            // empty multipart request instead of letting the exception propagate
            request.setAttribute(FILE_SIZE_EXCEEDED_ERROR, true)
            return new DefaultMultipartHttpServletRequest(request,
                    new LinkedMultiValueMap<String, MultipartFile>(),
                    new LinkedHashMap<String, String[]>(),
                    new LinkedHashMap<String, String>())
        }
    }
}
With the following in resources.groovy:
// catch exception when max file size is exceeded
multipartResolver(MyMultipartResolver)
You then need to check for the FILE_SIZE_EXCEEDED_ERROR attribute in the controller and handle it accordingly.
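For illustration, a minimal sketch of that check (my naming; a real Grails controller would be Groovy, but the idea is simply reading the request attribute set by the resolver):

import javax.servlet.http.HttpServletRequest;

public final class UploadGuard {

    // True when MyMultipartResolver flagged the request as exceeding the size limit
    public static boolean fileSizeExceeded(HttpServletRequest request) {
        return Boolean.TRUE.equals(request.getAttribute(MyMultipartResolver.FILE_SIZE_EXCEEDED_ERROR));
    }
}

In the controller action you would call this (or read the attribute directly) and, when true, redirect back to the form with a flash message instead of processing the upload.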
I'm getting an error when unmarshalling files that only contain a single JSON object: "IllegalStateException: The Json input stream must start with an array of Json objects"
I can't find any workaround and I don't understand why it has to be so.
@Bean
public ItemReader<JsonHar> reader(@Value("file:${json.resources.path}/*.json") Resource[] resources) {
    log.info("Processing JSON resources: {}", Arrays.toString(resources));
    JsonItemReader<JsonHar> delegate = new JsonItemReaderBuilder<JsonHar>()
            .jsonObjectReader(new JacksonJsonObjectReader<>(JsonHar.class))
            .resource(resources[0]) // FIXME had to force this, but it fails anyway because the file is "{...}" and not "[...]"
            .name("jsonItemReader")
            .build();
    MultiResourceItemReader<JsonHar> reader = new MultiResourceItemReader<>();
    reader.setDelegate(delegate);
    reader.setResources(resources);
    return reader;
}
I need a way to unmarshal single-object files. What's the point in forcing arrays (which I won't have in my use case)?
I don't understand why it has to be so.
The JsonItemReader is designed to read an array of objects because batch processing is usually about handling data sources with a lot of items, not a single item.
I can't find any workaround
The JsonObjectReader is what you are looking for: you can implement it to read a single JSON object and use it with the JsonItemReader (either at construction time or via the setter). This is not a workaround but a strategy interface designed for specific use cases like yours.
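To make that concrete, here is a minimal sketch (my naming; assumes Spring Batch 4.1+ and Jackson on the classpath) of a JsonObjectReader that reads exactly one JSON object per file:

import java.io.InputStream;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.batch.item.json.JsonObjectReader;
import org.springframework.core.io.Resource;

// Hypothetical single-object reader: returns the file's one object on the first
// read() call and null afterwards, which signals "no more items" to the framework.
public class SingleObjectJsonReader<T> implements JsonObjectReader<T> {

    private final ObjectMapper mapper = new ObjectMapper();
    private final Class<T> type;
    private Resource resource;
    private boolean consumed;

    public SingleObjectJsonReader(Class<T> type) {
        this.type = type;
    }

    @Override
    public void open(Resource resource) throws Exception {
        this.resource = resource;
        this.consumed = false;
    }

    @Override
    public T read() throws Exception {
        if (consumed) {
            return null;
        }
        consumed = true;
        try (InputStream in = resource.getInputStream()) {
            return mapper.readValue(in, type);
        }
    }
}

You would then plug it in with .jsonObjectReader(new SingleObjectJsonReader<>(JsonHar.class)) instead of the JacksonJsonObjectReader.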
Definitely not ideal, @thomas-escolan. As @mahmoud-ben-hassine pointed out, the ideal would be to code a custom reader.
In case new SO users stumble on this question, I leave here a code example of how to do it.
Though this may not be ideal, this is how I handled the situation:
@Bean
public ItemReader<JsonHar> reader(@Value("file:${json.resources.path}/*.json") Resource[] resources) {
    log.info("Processing JSON resources: {}", Arrays.toString(resources));
    JsonItemReader<JsonHar> delegate = new JsonItemReaderBuilder<JsonHar>()
            .jsonObjectReader(new JacksonJsonObjectReader<>(JsonHar.class))
            .resource(resources[0]) // DEBUG had to force this because of NPE...
            .name("jsonItemReader")
            .build();
    MultiResourceItemReader<JsonHar> reader = new MultiResourceItemReader<>();
    reader.setDelegate(delegate);
    reader.setResources(Arrays.stream(resources)
            .map(WrappedResource::new) // forcing the bride to look good enough
            .toArray(Resource[]::new));
    return reader;
}
@RequiredArgsConstructor
static class WrappedResource implements Resource {

    @Delegate(excludes = InputStreamSource.class)
    private final Resource resource;

    @Override
    public InputStream getInputStream() throws IOException {
        log.info("Wrapping resource: {}", resource.getFilename());
        InputStream in = resource.getInputStream();
        BufferedReader reader = new BufferedReader(new InputStreamReader(in, UTF_8));
        String wrap = reader.lines().collect(Collectors.joining())
                .replaceAll("[^\\x00-\\xFF]", ""); // strips characters outside the 0x00-0xFF range
        return new ByteArrayInputStream(("[" + wrap + "]").getBytes(UTF_8));
    }
}
After searching for days and reading pretty much everything related to this, I'm finally posting my question here, since I couldn't find a solution for my specific problem.
I want my REST web services to return the original exception that was thrown, or at least the correct stack trace. To test this, I'm using JUnit integration tests and WildFly 13 as the app server. After researching, I found two possible solutions.
1. Using ExceptionMappers
While this magical thing catches all of my exceptions and lets me return a Response, I've noticed that my stack trace is changed when I use it as in the example. For instance, "com.test.TestClass" turns into "null.thread" or "null.interceptor". It seems the exception is somehow changed on the way and the paths to the classes are lost or censored, but I can't make sense of it.
I also couldn't find any restrictions on the Response entity, be it size, datatype or security.
As far as I understand, you can catch the ExceptionMapper's Response OR a WebApplicationException, which contains the Response. In my case, the Response inside the WebApplicationException contains all the relevant data except the (correct) stack trace.
2. Using WebApplicationException
Another solution would be to simply throw WebApplicationException instead of ECEException and not use a mapper. If I do that and catch it, the exception is empty though: it doesn't contain any of the data I set, and it's always 500 - InternalServerError (I guess WildFly couldn't handle it and threw an exception itself).
Or is it not supposed to be thrown/caught like that? Do I need to convert it to JSON, or can I expect it to simply work out of the box with my annotations in the WebServiceInterface and the Response MediaType? Does it even make sense to put a full Response inside a WebApplicationException? Both contain fields for the error code, which seems redundant, even though there is a constructor for that approach.
Long story short:
What's the best approach to catch all possible exceptions and retrieve the full stack trace? Reading this post, I guess catching all Exceptions is fine and they are always returned as WebApplicationExceptions, but the stack trace is still gone/malformed... your thoughts?
**JUnitTest**
@Test
public void testCreateTask_ClusterInvalid() throws IOException {
    final RPETask taskToCreate = new RPETask();
    try {
        final long tid = taskManagerWebService.createTask(taskToCreate);
    } catch (WebApplicationException e) { // responses are ALWAYS caught as WebApplicationException
        Response response = e.getResponse();
        String emString = response.readEntity(String.class);
        Gson gson = new Gson();
        ECEWebErrorMessage errorMessage = gson.fromJson(emString, ECEWebErrorMessage.class);
        errorMessage.displayErrorInformationOnConsole();
    }
}
**WebServiceInterface**
@POST
@Path(URI_CREATE_TASK)
@Consumes(WebServiceNames.JSON)
@Produces(WebServiceNames.JSON)
long createTask(final RPETask task) throws ECEException;
**WebService**
@Override
public long createTask(final RPETask task) throws ECEException {
    if (LOGGER.isTraceEnabled()) {
        LOGGER.trace("createTask(" + task + ")");
    }
    return taskManager.createTask(task);
}
**ManagerBeanInterface**
long createTask(RPETask task) throws ECEException;
**ManagerBean**
@Override
public long createTask(final RPETask task) throws ECEException {
    final ClusterEngineBean cluster = find(ClusterEngineBean.class, task.getCluster());
    if (cluster == null) {
        throw new ECEObjectNotFoundException(ClusterEngineBean.class, task.getCluster());
    }
    // ... persist the task and return its id (rest of the method omitted here)
}
**ExceptionMapper**
@Provider
public class GenericWebExceptionMapper implements ExceptionMapper<Exception> {

    final Log logger = LogFactory.getLog(getClass());

    @Override
    public Response toResponse(Exception ex) {
        // At this point the exception is fully available -> sending it as a Response breaks it!
        logger.error("GenericWebExceptionMapper -> toResponse(Throwable ex)", ex);
        ECEWebErrorMessage errorMessage = new ECEWebErrorMessage(500,
                ex.getMessage(),
                ex.getClass().getCanonicalName(),
                ex.getStackTrace());
        return Response.status(Status.INTERNAL_SERVER_ERROR)
                .entity(errorMessage)
                .type(MediaType.APPLICATION_JSON)
                .build();
    }
}
After more research, I've finally found a solution for myself.
Why is the stack trace gone/malformed?
It's for security reasons: WildFly automatically detects outgoing stack traces and censors them using interceptors. I'm not sure whether you can do anything about that, but I guess you shouldn't expose raw stack traces anyway.
What is the best approach?
Using ExceptionMappers worked for me. Instead of catching a WebApplicationException, you can always expect a Response with the appropriate error code and handle it that way: for example, error code 200 = OK, do this... error code 404 = NOT FOUND, do that... In that case your web services should always return Responses, with the object you want to retrieve in the entity field of the Response.
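For example, here is a sketch of that style of client-side handling (the target URL is a placeholder of mine; RPETask, ECEWebErrorMessage and GenericWebExceptionMapper are the types from the question):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import com.google.gson.Gson;

public class TaskClientExample {

    public long createTask(RPETask task) {
        Client client = ClientBuilder.newClient();
        Response response = client.target("http://localhost:8080/rpe/tasks") // placeholder URL
                .request(MediaType.APPLICATION_JSON)
                .post(Entity.json(task));
        if (response.getStatus() == Response.Status.OK.getStatusCode()) {
            return Long.parseLong(response.readEntity(String.class));
        }
        // Anything mapped by GenericWebExceptionMapper arrives as a JSON error entity
        ECEWebErrorMessage error = new Gson().fromJson(
                response.readEntity(String.class), ECEWebErrorMessage.class);
        error.displayErrorInformationOnConsole();
        return -1L;
    }
}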
Feel free to add additional information to this solution.
I am using the univocity BeanProcessor for file parsing. I was able to use it successfully on my local box, but after deploying the same code to an environment with multiple hosts, the parser shows inconsistent behavior: it sometimes does not fail processing for invalid files, and it sometimes fails processing valid files.
I would like to know whether the BeanProcessor implementation is suitable for a multi-threaded, distributed environment.
Sample code:
private void validateFile(@Nonnull final File inputFile) throws NonRetriableException {
    try {
        final BeanProcessor<TargetingInputBean> rowProcessor = new BeanProcessor<TargetingInputBean>(
                TargetingInputBean.class) {
            @Override
            public void beanProcessed(@Nonnull final TargetingInputBean targetingInputBean,
                    @Nonnull final ParsingContext context) {
                final String customerId = targetingInputBean.getCustomerId();
                final String segmentId = targetingInputBean.getSegmentId();
                log.debug("Validating customerId {} segmentId {} for {} file", customerId, segmentId,
                        inputFile.getAbsolutePath());
                if (StringUtils.isBlank(customerId) || StringUtils.isBlank(segmentId)) {
                    throw new DataProcessingException("customerId or segmentId is blank");
                }
                try {
                    someValidation(customerId);
                } catch (IllegalArgumentException ex) {
                    throw new DataProcessingException(
                            String.format("customerId %s is not in required format. Exception"
                                    + " message %s", customerId, ex.getMessage()),
                            ex);
                }
            }
        };
        rowProcessor.setStrictHeaderValidationEnabled(true);
        final CsvParser parser = new CsvParser(getCSVParserSettings(rowProcessor));
        parser.parse(inputFile);
    } catch (TextParsingException ex) {
        throw new NonRetriableException(
                String.format("Exception=%s occurred while getting & parsing targeting file "
                        + "contents, error=%s", ex.getClass(), ex.getMessage()),
                ex);
    }
}

private CsvParserSettings getCSVParserSettings(@Nonnull final BeanProcessor<TargetingInputBean> rowProcessor) {
    final CsvParserSettings parserSettings = new CsvParserSettings();
    parserSettings.setProcessor(rowProcessor);
    parserSettings.setHeaderExtractionEnabled(true);
    parserSettings.getFormat().setDelimiter(AIRCubeTargetingFileConstants.FILE_SEPARATOR);
    return parserSettings;
}
TargetingInputBean:
public class TargetingInputBean {

    @Parsed(field = "CustomerId")
    private String customerId;

    @Parsed(field = "SegmentId")
    private String segmentId;

    // getters used above (getCustomerId/getSegmentId) omitted for brevity
}
Are you using the latest version?
I just realized you are probably affected by a bug introduced in version 2.5.0 and fixed in version 2.5.6, if I'm not mistaken. This plagued me for a while, as it was an internal concurrency issue that was hard to track down. Basically, when you pass a File without an explicit encoding, the parser tries to find a UTF BOM marker in the input (effectively consuming the first character) to determine the encoding automatically. This happened only for InputStreams and Files.
Anyway, this has been fixed, so simply updating to the latest version should get rid of the problem for you (please let me know if you are not on version 2.5.something).
If you want to stay on the version you currently have, the error will be gone if you call:
parser.parse(inputFile, Charset.defaultCharset());
This prevents the parser from trying to detect a BOM marker in your file, thereby avoiding that pesky bug.
Hope this helps
Given:
I have a List<ComplexObjectThatContainsOtherObjectsAndEvenLists> and I want to retain this data across pages/requests. The list is quite large, containing up to 1000 objects.
Current implementation:
What I am currently doing is serializing this complex object using the code below (I found it here on SO and am grateful to the author, whom I unfortunately cannot recall; I am sorry):
public static String serialize(Object object) {
    ByteArrayOutputStream byteaOut = new ByteArrayOutputStream();
    GZIPOutputStream gzipOut = null;
    try {
        gzipOut = new GZIPOutputStream(new Base64OutputStream(byteaOut));
        gzipOut.write(new Gson().toJson(object).getBytes("UTF-8"));
    } catch (Exception e) {
        return null;
    } finally {
        if (gzipOut != null) try { gzipOut.close(); } catch (IOException logOrIgnore) {}
    }
    return new String(byteaOut.toByteArray());
}
I then hide the String output in an <input type="hidden"> on my page and pass it back to my controller whenever I need it. This string is around 1300-2000 characters long.
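For completeness, here is a matching deserialize sketch (my assumption: the Base64OutputStream above is commons-codec's, so Base64InputStream reverses it; like serialize(), it returns null on failure):

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.zip.GZIPInputStream;
import org.apache.commons.codec.binary.Base64InputStream;
import com.google.gson.Gson;

public static <T> T deserialize(String data, Class<T> type) {
    // Reverse the pipeline: Base64-decode, gunzip, then parse the JSON
    try (InputStream in = new GZIPInputStream(
            new Base64InputStream(new ByteArrayInputStream(data.getBytes("UTF-8"))))) {
        return new Gson().fromJson(new InputStreamReader(in, "UTF-8"), type);
    } catch (Exception e) {
        return null;
    }
}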
Question:
Is saving this String in session better? (see below)
session.setAttribute("mySerializedString", mySerializedString);
Can you please provide pros and cons?
My pros and cons so far (I am not sure though):
I'm not sure, but I think the hidden-field implementation can have an effect while the page is rendered (since the string is so long) and when it's submitted back to the controller; on the other hand, it spares me from manually unsetting the session variable, which I would have to do if I chose the session implementation.