This question already has answers here:
java.lang.IllegalStateException: Cannot (forward | sendRedirect | create session) after response has been committed
(9 answers)
Closed 6 years ago.
What are the common situations in which this exception occurs in a servlet, i.e. when is the response already committed?
The response gets committed for the following reasons:
Because the response buffer has reached the maximum buffer size. This can happen for the following reasons:
> the buffer size set in the JSP page has been reached. You can increase the JSP buffer size
in the page directive. See here:
<%@ page buffer="5kb" autoFlush="false" %>
> the server's default maximum response buffer size has been reached. You can increase
it programmatically via
ServletResponse.setBufferSize()
Some part of the code has flushed the response, i.e. invoked the method HttpServletResponse.flushBuffer().
Some part of the code has flushed the OutputStream or Writer, i.e. invoked the method HttpServletResponse.getOutputStream().flush() or HttpServletResponse.getWriter().flush().
You have forwarded to another page, in which case the response is both committed and closed. Likewise, when response.sendRedirect() has been called, the response is committed. The flush and redirect cases are illustrated in the sketch below.
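To make the flush and redirect cases concrete, here is a minimal sketch (a hypothetical servlet, not taken from the question): flushing the buffer commits the response, so a later sendRedirect() triggers exactly this IllegalStateException.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet, for illustration only.
public class CommitDemoServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.getWriter().write("some partial output");
        resp.flushBuffer();                 // the response is now committed
        // resp.isCommitted() returns true from this point on
        resp.sendRedirect("other.jsp");     // throws IllegalStateException
    }
}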
One Sentence
I get a MySQL invalid connection error even though MaxOpenConns is ample and wait_timeout is 8h.
Detailed
I have a script that reads all records from table A, applies some transformation, and writes the resulting records to table B. The code works this way:
One goroutine scans table A, putting the records into a channel;
Four other goroutines (number configurable) concurrently consume from the channel above, each accumulating 50 rows (batch size configurable), inserting them into table B, then accumulating another 50 rows, and so forth.
The scanner goroutine holds one *sql.DB, and the inserter goroutines share another *sql.DB.
go-sql-driver: either Version 1.4.1 (2018-11-14) or Version 1.5 (2020-01-07)
(problem encountered with 1.4.1, and reproducible demo, see below, uses 1.5)
Go version: go1.13.15 darwin/amd64
The invalid connection issue is almost always reproducible.
In a specific run, table A has 67227 records, the channel size is set to 100000, the table A scanner (1 goroutine) reads 1000 rows at a time, and the table B inserters (4 goroutines) write 50 rows at a time. It ends up with 67127 records in table B (2*50 lost) and 2 lines of error output in the console:
[mysql] 2020/12/11 21:54:18 packets.go:36: read tcp x.x.x.x:64062->x.x.x.x:3306: read: operation timed out
[mysql] 2020/12/11 21:54:21 packets.go:36: read tcp x.x.x.x:64070->x.x.x.x:3306: read: operation timed out
(The number of error lines varies between reproductions; it's usually 1, 2, or 3. N error lines coincide with N*50 records failing to be inserted into table B.)
And my log file shows invalid connection:
2020/12/11 21:54:18 main.go:135: [goroutine 56] BatchExecute: BatchInsertPlace(): SqlDb.ExecContext(): invalid connection
Stats={MaxOpenConnections:0 OpenConnections:4 InUse:3 Idle:1 WaitCount:0 WaitDuration:0s MaxIdleClosed:14 MaxLifetimeClosed:0}
2020/12/11 21:54:21 main.go:135: [goroutine 55] BatchExecute: BatchInsertPlace(): SqlDb.ExecContext(): invalid connection
Stats={MaxOpenConnections:0 OpenConnections:4 InUse:3 Idle:1 WaitCount:0 WaitDuration:0s MaxIdleClosed:14 MaxLifetimeClosed:0}
Trials and observations
By logging each successful/failed write operation with its goroutine id, it appears that the error always happens when any one of the 4 inserting goroutines has an interval of over ~45 seconds between two consecutive writes. I think it simply takes that long to accumulate 50 records before inserting them into table B.
In contrast, when I happened to make a change so that the 4 inserting goroutines write at roughly even rates (i.e. none has a much longer writing interval than the others), the error is not seen. Reproduced 3 times.
It looks like one error only affects one batch write operation, and the following batches work well. So why not retry the failed batch? I supposed one retry would get it through; still, I don't mind retrying until success:
var retryExecTillSucc = func(goroutineId int, records []*MyDto) {
    err := inserter.BatchInsert(records)
    for { // retry until success; a workaround for the 'invalid connection' issue
        if err == nil {
            break
        }
        logger.Printf("[goroutine %v] BatchExecute: %v \nStats=%+v\n", goroutineId, err, inserter.RdsClient.SqlDb.Stats())
        err = inserter.retryBatchInsert(records)
    }
    logger.Printf("[goroutine %v] BatchExecute: Success \nStats=%+v\n", goroutineId, inserter.RdsClient.SqlDb.Stats())
}
Surprisingly, with this change, retries of the failed batch keep getting the error and never succeed...
Summary
It looks obvious that one (idle) connection was broken when the error occurred, but my questions are:
MySQL wait_timeout is set 8h, so why is the connection timed out so quickly?
Since MaxOpenConns is not set, it shouldn't be a limitation, especially considering there are merely 4 OpenConnections in the log.
What else to check as potential root cause?
(Too long, but just hope to put it clearly and get some advice~)
Update
Minimal, reproducible example, including:
Code
One sample log file
MySQL error log
Do you use a Context? I suppose the read timeout is caused by a context timeout or by the readTimeout parameter.
MySQL doesn't provide a safe and efficient cancellation mechanism. When the context is canceled or readTimeout is reached, DB.ExecContext returns without terminating the connection in use. This causes "invalid connection" the next time that connection is used.
If you want to limit the execution time of a long query, you can use the MAX_EXECUTION_TIME hint instead of a context; see the sketch below.
See https://dev.mysql.com/doc/refman/5.7/en/optimizer-hints.html#optimizer-hints-execution-time for reference.
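The hint lives in the SQL text itself, so it is client-agnostic. Here is a minimal sketch (shown in Java/JDBC with hypothetical connection details, not the question's Go code; note that MAX_EXECUTION_TIME applies to SELECT statements only and is given in milliseconds):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class MaxExecutionTimeDemo {
    public static void main(String[] args) throws SQLException {
        // Hypothetical DSN and credentials; adjust for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");
             Statement stmt = conn.createStatement()) {
            // The optimizer hint caps this SELECT at 1000 ms on the server
            // side, so no client-side context/readTimeout cancellation is
            // needed, and the connection is not left in a broken state.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT /*+ MAX_EXECUTION_TIME(1000) */ id FROM A")) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id"));
                }
            }
        }
    }
}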
Is it possible to have several HTTP requests read from one CSV Data Set Config?
I want HTTP request 1 to read from line 1 to 50 and HTTP request 2 to read from line 51 to 100 of the .csv file, and so on. Is this possible, or do I have to make more small CSV files and more CSV Data Set Configs?
Yes, it is possible, but generally not recommended; it will be much easier if smaller CSVs are used. Nevertheless, you can make the following changes to do it through the CSV Data Set Config:
Configure "Recycle on EOF" to False.
Configure "Stop thread on EOF" to False.
Set Sharing mode to "All threads".
Place the HTTP requests in different thread groups and set the total number of threads to the desired value. In your case the value for the 1st and 2nd thread groups should be 50. Also make sure that the two thread groups do not start at the same time: add a startup delay for the 2nd thread group.
I am trying to create a POST request containing multipart/form-data that requires NT credentials. The authentication handshake causes the POST to be resent, and I get an unrepeatable entity exception.
I tried wrapping the MultipartContent entity in a BufferedHttpEntity, but that throws NullPointerExceptions.
final GenericUrl sau = new GenericUrl(baseURI.resolve("Record"));
final MultipartContent c = new MultipartContent()
        .setMediaType(MULTIPART_FORM_DATA)
        .setBoundary("__END_OF_PART__");
final MultipartContent.Part p0 = new MultipartContent.Part(
        new HttpHeaders().set("Content-Disposition",
                format("form-data; name=\"%s\"", "RecordRecordType")),
        ByteArrayContent.fromString(null, "C_APP_BOX"));
final MultipartContent.Part p1 = new MultipartContent.Part(
        new HttpHeaders().set("Content-Disposition",
                format("form-data; name=\"%s\"", "RecordTitle")),
        ByteArrayContent.fromString(null, "JAVA_TEST"));
c.addPart(p0);
c.addPart(p1);
The documentation for ByteArrayContent says
Concrete implementation of AbstractInputStreamContent that generates repeatable input streams based on the contents of byte array.
Making all the parts repeatable does not solve the problem, even though this code
System.out.println("c.retrySupported() = " + c.retrySupported());
outputs c.retrySupported() = true.
I found the following documentation:
1.1.4.1. Repeatable entities: An entity can be repeatable, meaning its content can be read more than once. This is only possible with self-contained entities (like ByteArrayEntity or StringEntity).
I have now converted my MultipartContent to a ByteArrayContent with a multipart/form-data media type by extracting the string contents, but I still get the following exception when I call request.execute():
Caused by: org.apache.http.client.NonRepeatableRequestException: Cannot retry request with a non-repeatable request entity.
So how do I go about convincing the ApacheHttpTransport to create a repeatable Entity?
I had to modify all the classes that inherit from HttpContent so that they report back correctly from .retrySupported(); that way, when the ApacheHttpTransport code is entered, it creates repeatable content correctly.
The changes were made against version 1.20.0 because that is what I was using. I am submitting a pull request against the dev branch HEAD, so hopefully this, or some version of it, will make it into the next release.
Here are the modifications that need to be merged in.
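The actual diff is not reproduced here, but the idea can be sketched roughly as follows: a hypothetical wrapper (RepeatableContent is my name for it, not part of the library) that buffers any HttpContent into a byte array, so that retrySupported() can honestly return true:

import com.google.api.client.http.HttpContent;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical wrapper, for illustration only: consumes the wrapped content
// once into memory, so every later writeTo() call can replay the same bytes.
final class RepeatableContent implements HttpContent {
    private final byte[] data;
    private final String type;

    RepeatableContent(HttpContent wrapped) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        wrapped.writeTo(buffer); // read the original content exactly once
        this.data = buffer.toByteArray();
        this.type = wrapped.getType();
    }

    @Override public long getLength() { return data.length; }
    @Override public String getType() { return type; }
    @Override public boolean retrySupported() { return true; }
    @Override public void writeTo(OutputStream out) throws IOException {
        out.write(data);
    }
}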
If the content length of all parts in a multipart entity is known (i.e. returned as a non-negative value), the entity will be treated as repeatable. The easiest way to make a multipart entity repeatable is to make all of its parts repeatable.
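In raw Apache HttpClient terms, that advice looks like the following sketch (assuming the httpmime module and reusing the question's field names): text bodies are self-contained, so the multipart entity they form has a known content length and reports itself as repeatable.

import org.apache.http.HttpEntity;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.mime.MultipartEntityBuilder;

public class RepeatableMultipartDemo {
    public static void main(String[] args) {
        // Every part is a self-contained text body with a known length,
        // so the resulting multipart entity is repeatable.
        HttpEntity entity = MultipartEntityBuilder.create()
                .setBoundary("__END_OF_PART__")
                .addTextBody("RecordRecordType", "C_APP_BOX", ContentType.TEXT_PLAIN)
                .addTextBody("RecordTitle", "JAVA_TEST", ContentType.TEXT_PLAIN)
                .build();
        System.out.println(entity.isRepeatable());     // true
        System.out.println(entity.getContentLength()); // non-negative
    }
}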
I am using the following code to read an error message from a byte array, and it works fine the first time, but if I try to access it a second time it throws an error:
errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
StandardError is of type InboundPipe?
The error is:
Error: Error #3212: Cannot perform operation on a NativeProcess that is not running.
even though the process is running (process.running is true). The second call to readUTFBytes seems to be the cause.
Update:
Here is the code making the same call twice in a row. The error happens on the second line, while process.running has not changed from true.
errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
I also found out that standardError is an InboundPipe instance and implements IDataInput.
Update 2:
Thanks for all the help. I found this documentation when viewing the bytesAvailable property.
[Read Only] Returns the number of bytes of data available for reading in the input buffer. User code must call bytesAvailable to ensure that sufficient data is available before trying to read it with one of the read methods.
When I call readUTFBytes(), it resets bytesAvailable to 0. So when I read a second time and there are no bytes available, it causes the error. In my opinion either the error is incorrect or the native process.running flag is incorrect.
I checked whether it has a position property and it does not, at least not in this instance.
Could you try setting position to zero before reading, especially before repeated access? The ByteArray documentation for position says:
Moves, or returns the current position, in bytes, of the file pointer into the ByteArray object. This is the point at which the next call to a read method starts reading or a write method starts writing.
//ByteArray example
var source: String = "Some data";
var data: ByteArray = new ByteArray();
data.writeUTFBytes(source);
data.position = 0;
trace(data.readUTFBytes(data.bytesAvailable));
data.position = 0;
trace(data.readUTFBytes(data.bytesAvailable));
This was a tricky problem, since the object was not a ByteArray, although it looks and acts like one (same methods and almost the same properties). It is an InboundPipe that also implements IDataInput.
As quoted in my update above, the documentation for the bytesAvailable property says that user code must call bytesAvailable to ensure that sufficient data is available before trying to read it with one of the read methods.
When I call readUTFBytes(), it resets bytesAvailable to 0, so when I call it a second time and there are no bytes available, it causes the error. Either the error or the native process.running flag is incorrect, although I have reason to believe it's the former.
The solution is to check bytesAvailable before calling read operations and store the value if it needs to be accessed later.
if (process.standardError.bytesAvailable) {
    errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
    errorDataArray.push(errorData);
}
I looked into whether it has a position property, as suggested above, and it does not, at least not in this instance.
I've got code that performs HTTP requests asynchronously using the WinInet API. In general, my code works, but I'm confused about the 'right' way to do things. The documentation for InternetReadFile() states:
To ensure all data is retrieved, an application must continue to call
the InternetReadFile function until the function returns TRUE and the
lpdwNumberOfBytesRead parameter equals zero.
but in asynchronous mode, it may (or may not) return FALSE with an error of ERROR_IO_PENDING, indicating that it will do the work asynchronously and call my callback when finished. If I read the documentation literally, it seems that the asynchronous calls could also do a partial read of the requested buffer and require the caller to keep calling InternetReadFile until a read of 0 bytes is encountered.
A typical implementation using InternetReadFile() synchronously would look something like this:
while(InternetReadFile(Request, Buffer, BufferSize, &BytesRead) && BytesRead != 0)
{
// do something with Buffer
}
but with the possibility that any one call to InternetReadFile() could signal that it's going to do the work asynchronously (and perhaps read part, but not all, of your request), it becomes much more complicated. If I turn to the MSDN sample code for guidance, the implementation is simple: it calls InternetReadFile() once and expects a single return having read the entire requested buffer, either instantly or asynchronously. Is this the correct way to use this function, or is the MSDN sample code ignoring the possibility that InternetReadFile() will only read part of the requested buffer?
After a more careful reading of the asynchronous example, I see now that it reads repeatedly until a successful read of 0 bytes is encountered. So to answer my own question: you must call InternetReadFile() over and over again, and be prepared for either a synchronous or an asynchronous response.
Calling InternetReadFile() repeatedly until it returns TRUE and BytesRead is 0 is a correct way to use it, but it is not enough if you work asynchronously.
As MSDN says
When running asynchronously, if a call to InternetReadFile does not result in a completed transaction, it will return FALSE and a subsequent call to GetLastError will return ERROR_IO_PENDING. When the transaction is completed the InternetStatusCallback specified in a previous call to InternetSetStatusCallback will be called with INTERNET_STATUS_REQUEST_COMPLETE.
So InternetReadFile() may return FALSE and set the last error to ERROR_IO_PENDING if you work in asynchronous mode.
When the InternetStatusCallback is called again with INTERNET_STATUS_REQUEST_COMPLETE, the lpvStatusInformation parameter contains the address of an INTERNET_ASYNC_RESULT structure (see the InternetStatusCallback callback function). The INTERNET_ASYNC_RESULT.dwResult member contains the result of the asynchronous operation (TRUE or FALSE, since you called InternetReadFile), and INTERNET_ASYNC_RESULT.dwError contains an error code only if dwResult is FALSE.
If dwResult is TRUE, then your Buffer contains the data read from the Internet, and BytesRead contains the number of bytes read asynchronously.
So one of the most important things when you work asynchronously: the Buffer and BytesRead must be persistent between InternetStatusCallback calls, i.e. they must not be allocated on the stack. Otherwise you get undefined behaviour, memory corruption, etc.