I need to build a very simple Netty TCP server that handles data sent from a client. The server needs to send back a result, but it is not essential that the client receives it, so sending the result after the client has closed the connection may throw an exception. The channel then seems to stall, but what I need the server to do is carry on and finish reading and processing all the data.
public class TCPServerHandler extends ChannelHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf in = (ByteBuf) msg;
        String str = "";
        String retstr;
        try {
            while (in.isReadable()) {
                char c = (char) in.readByte();
                if (c != '\n' && c != '\r' && c != SEP_BYTE) {
                    str += c;
                }
            }
            retstr = dosomething(str);
            ByteBuf ok = Unpooled.copiedBuffer(retstr, CharsetUtil.UTF_8);
            ctx.writeAndFlush(ok); // may trigger exceptionCaught when flushing
        } catch (Throwable e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } finally {
            ReferenceCountUtil.release(msg);
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Close the connection when an exception is raised.
        System.out.println("ExceptionCaught: sending channel closed by client");
        //cause.printStackTrace();
        //ctx.read();  // not working
        //ctx.close();
    }
}
So how can I ask Netty to keep calling channelRead for all the messages already in the buffer once the exception has been caught?
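One pattern that may help (a sketch only, assuming Netty 4.x, not a verified fix): attach a listener to writeAndFlush so a failed delivery is detected and logged via the future, and avoid closing the channel in exceptionCaught if the remaining input should still be processed.

    // Sketch: observe the outcome of the write on its future; if the client has
    // already closed the connection, just log it and keep processing input.
    ctx.writeAndFlush(ok).addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) {
            if (!future.isSuccess()) {
                System.err.println("Reply could not be delivered: " + future.cause());
            }
        }
    });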
I need to make multiple REST API calls to fetch instance, volume and VNIC details. Can I reuse the same signer object created for signing the other calls?
Signer object method
public RequestSigner getSigner(Properties properties, String pemFilePath, String apiKey) {
    InputStream privateKeyStream;
    PrivateKey privateKey = null;
    try {
        privateKeyStream = Files.newInputStream(Paths.get(pemFilePath));
        privateKey = PEM.readPrivateKey(privateKeyStream);
    } catch (InvalidKeySpecException e) {
        // throw new RuntimeException("Invalid format for private key");
        properties.setProperty(OracleCloudConstants.CUSTOM_DC_ERROR,
                FormatUtil.getString("am.webclient.oraclecloud.customdc.invalidformat"));
        AMLog.debug("OracleCloudDataCollector::CheckAuthentication()::Invalid format for private key::"
                + e.getMessage());
        e.printStackTrace();
    } catch (IOException e) {
        properties.setProperty(OracleCloudConstants.CUSTOM_DC_ERROR,
                FormatUtil.getString("am.webclient.oraclecloud.customdc.failedload"));
        AMLog.debug(
                "OracleCloudDataCollector::CheckAuthentication()::Failed to load private key::" + e.getMessage()); //No I18N
        e.printStackTrace();
        // throw new RuntimeException("Failed to load private key");
    }
    RequestSigner signer = null;
    if (privateKey != null) {
        signer = new RequestSigner(apiKey, privateKey);
    }
    return signer;
}
One signer object may be used to sign multiple requests. In fact, the SDK implementation does this too.
It is not clear what version of the SDK you are using. In version 1.5.7 (the most recent at the time of writing), com.oracle.bmc.http.signing.RequestSigner (https://github.com/oracle/oci-java-sdk/blob/master/bmc-common/src/main/java/com/oracle/bmc/http/signing/RequestSigner.java#L16) is an interface, so it cannot be instantiated with new as in the snippet above.
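To make the reuse concrete, a minimal sketch (the cachedSigner field and signerFor method are hypothetical and assume they live in the same class as the getSigner method shown above): build the signer once, then hand the same instance to each of the instance, volume and VNIC requests.

    // Hypothetical helper: build the RequestSigner lazily, once, and reuse it for every call.
    private RequestSigner cachedSigner;

    private synchronized RequestSigner signerFor(Properties properties, String pemFilePath, String apiKey) {
        if (cachedSigner == null) {
            cachedSigner = getSigner(properties, pemFilePath, apiKey); // the method shown above
        }
        return cachedSigner;
    }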
I am trying to parse certificate data from a big .csv file. The documentation says it is an X509 certificate stored as raw data encoded with base64.
So I tried to decode it and load the data in Java using this code:
protected X509Certificate parseCert(byte[] bytes) {
    if (bytes != null) {
        InputStream in = new ByteArrayInputStream(org.apache.commons.net.util.Base64.decodeBase64(bytes));
        X509Certificate certificate = null;
        try {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            certificate = (X509Certificate) cf.generateCertificate(in);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return certificate;
    }
    System.out.println("Null value in bytes!");
    return null;
}
I always get a CertificateException saying "Empty Data". I don't know what I am doing wrong!
With a CSV reader the data looks fine, or must I cut off the \x3?
\x308202e23082024ba0......
\x30820224308201....
And:
\x308201db30820144a00302010202020462300d06092a864886f70d010104050030323130302e0603550403132750616e646f436c69656e74363930413639343632353536363738423737414435353341314332393020170d3133303330383137313230315a180f33303132303232393137313230315a30323130302e0603550403132750616e646f436c69656e743639304136393436323535363637384237374144353533413143323930819f300d06092a864886f70d010101050003818d0030818902818100ad97e2fd61997ca4898383b957de7d44fcf9cf4199c915538fadfd644d36df3ea1eea1d69a6b8ad75a313cb25b479966ed7f831638bbea4ac0f2f2d1452b81ed0ff73da55f477e81f4cd4dad7ca26f29a7eb38d097ec90446e531bac72e29882f1df06a75893f7d6e115d11200bff5a813ea050591f2fcc50ab2d13ddc7d3fdd0203010001300d06092a864886f70d010104050003818100603e9d0ad64f80363e3cae94b28dcb9a25409a59ce7876bd24990b62ef3901788e2c0b3a0be22f4c07deb7c005d82f46a105a6abbfca4505403a7c2248be296aae5367e04fc22b0a93f6272263a3ebf25279e5c1ae415fd9e14898c11dc74c18c128e35e7d8467028689cc304fc95359c1f7eb89018ca750145ea81f498880af
Maybe someone can help?
The certificate is not base64-encoded but hex-encoded.
Changing the code of the OP's parseCert method to
protected X509Certificate parseCert(byte[] bytes)
{
    if (bytes != null)
    {
        InputStream in = new ByteArrayInputStream(Hex.decode(bytes));
        X509Certificate certificate = null;
        try
        {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            certificate = (X509Certificate) cf.generateCertificate(in);
        }
        catch (Exception ex)
        {
            ex.printStackTrace();
        }
        return certificate;
    }
    System.out.println("Null value in bytes!");
    return null;
}
(making use of the BouncyCastle hex decoder org.bouncycastle.util.encoders.Hex, but any other hex decoder should do as well) results in successful parsing of the certificate in
@Test
public void test24542431() throws CertificateEncodingException, IOException
{
String hexEncodedCert = "308201db30820144a00302010202020462300d06092a864886f70d010104050030323130302e0603550403132750616e646f436c69656e74363930413639343632353536363738423737414435353341314332393020170d3133303330383137313230315a180f33303132303232393137313230315a30323130302e0603550403132750616e646f436c69656e743639304136393436323535363637384237374144353533413143323930819f300d06092a864886f70d010101050003818d0030818902818100ad97e2fd61997ca4898383b957de7d44fcf9cf4199c915538fadfd644d36df3ea1eea1d69a6b8ad75a313cb25b479966ed7f831638bbea4ac0f2f2d1452b81ed0ff73da55f477e81f4cd4dad7ca26f29a7eb38d097ec90446e531bac72e29882f1df06a75893f7d6e115d11200bff5a813ea050591f2fcc50ab2d13ddc7d3fdd0203010001300d06092a864886f70d010104050003818100603e9d0ad64f80363e3cae94b28dcb9a25409a59ce7876bd24990b62ef3901788e2c0b3a0be22f4c07deb7c005d82f46a105a6abbfca4505403a7c2248be296aae5367e04fc22b0a93f6272263a3ebf25279e5c1ae415fd9e14898c11dc74c18c128e35e7d8467028689cc304fc95359c1f7eb89018ca750145ea81f498880af";
X509Certificate cert = parseCert(hexEncodedCert.getBytes());
Files.write(FileSystems.getDefault().getPath("target/test-outputs", "24542431.crt"), cert.getEncoded());
}
which is not trusted here, though.
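If BouncyCastle is not on the classpath, a plain-JDK sketch does the same job; the hypothetical hexToBytes helper below also strips the leading "\x" seen in the CSV values (the certificate bytes themselves start with 30 82, the DER SEQUENCE header), and its output can be fed to the same CertificateFactory code as above.

    // Sketch: strip an optional "\x" prefix and hex-decode the remainder.
    protected static byte[] hexToBytes(String value) {
        String hex = value.startsWith("\\x") ? value.substring(2) : value;
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }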
I have to process an XML file with an XSLT that uses result-document to create many XML files.
As suggested here:
Catch output stream of xsl result-document
I wrote my own OutputURIResolver:
public class CustomOutputURIResolver implements OutputURIResolver {

    private File directoryOut;

    public CustomOutputURIResolver(File directoryOut) {
        super();
        this.directoryOut = directoryOut;
    }

    public void close(Result arg0) throws TransformerException {
    }

    public Result resolve(String href, String base) throws TransformerException {
        FileOutputStream fout = null;
        try {
            File f = new File(directoryOut.getAbsolutePath() + File.separator + href + File.separator + href + ".xml");
            f.getParentFile().mkdirs();
            fout = new FileOutputStream(f);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        return new StreamResult(fout);
    }
}
which takes the output directory and saves the generated files there.
But when I tested it in a JUnit test I ran into problems in the clean-up phase, when trying to delete the created files, and noticed that the FileOutputStream fout is not handled properly.
Trying to solve the problem gave me some thoughts.
First I came up with this idea:
public class CustomOutputURIResolver implements OutputURIResolver {

    private File directoryOut;
    private FileOutputStream fout;

    public CustomOutputURIResolver(File directoryOut) {
        super();
        this.directoryOut = directoryOut;
        this.fout = null;
    }

    public void close(Result arg0) throws TransformerException {
        try {
            if (null != fout) {
                fout.flush();
                fout.close();
                fout = null;
            }
        } catch (Exception e) {}
    }

    public Result resolve(String href, String base) throws TransformerException {
        try {
            if (null != fout) {
                fout.flush();
                fout.close();
            }
        } catch (Exception e) {}
        fout = null;
        try {
            File f = new File(directoryOut.getAbsolutePath() + File.separator + href + File.separator + href + ".xml");
            f.getParentFile().mkdirs();
            fout = new FileOutputStream(f);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        return new StreamResult(fout);
    }
}
So the FileOutputStream is closed whenever another one is opened.
But:
1) I don't like this solution very much.
2) What if this function is called in a multithreaded process? (I'm not very familiar with how Saxon processes this, so I really don't know.)
3) Is there a way to create and handle one FileOutputStream for each resolve() call?
The reason close() takes a Result argument is so that you can identify which stream to close. Why not:
public void close(Result arg0) throws TransformerException {
    try {
        if (arg0 instanceof StreamResult) {
            OutputStream os = ((StreamResult) arg0).getOutputStream();
            os.flush();
            os.close();
        }
    } catch (Exception e) {}
}
From Saxon-EE 9.5, xsl:result-document executes in a new thread, so it is very important that the OutputURIResolver is thread-safe. Because of this change, from 9.5 an OutputURIResolver must implement an additional method newInstance(), which makes it easier to manage state: if your newInstance() method actually creates a new instance, then there will be one instance of the OutputURIResolver for each result document being processed, and it can hold the output stream and close it when requested.
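A sketch of what that can look like (assuming Saxon 9.5+'s net.sf.saxon.lib.OutputURIResolver interface and the usual java.io and javax.xml.transform imports; adapt to the version actually in use):

    // Sketch: one resolver instance per result document, each owning its own stream.
    public class PerDocumentOutputURIResolver implements OutputURIResolver {

        private final File directoryOut;
        private FileOutputStream fout;

        public PerDocumentOutputURIResolver(File directoryOut) {
            this.directoryOut = directoryOut;
        }

        // Saxon calls this for each xsl:result-document; returning a fresh instance
        // means every result document gets its own resolver and its own stream.
        public OutputURIResolver newInstance() {
            return new PerDocumentOutputURIResolver(directoryOut);
        }

        public Result resolve(String href, String base) throws TransformerException {
            try {
                File f = new File(directoryOut, href + File.separator + href + ".xml");
                f.getParentFile().mkdirs();
                fout = new FileOutputStream(f);
                return new StreamResult(fout);
            } catch (FileNotFoundException e) {
                throw new TransformerException(e);
            }
        }

        public void close(Result result) throws TransformerException {
            try {
                if (fout != null) {
                    fout.flush();
                    fout.close();
                    fout = null;
                }
            } catch (IOException e) {
                throw new TransformerException(e);
            }
        }
    }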
I am trying to use the JSR-179 Location API on a Nokia Symbian phone for periodic location updates via setLocationListener in J2ME. In the emulator it works fine, but when I install the MIDlet on a Nokia 5230 device it throws a NullPointerException and the application terminates. What might be the possible causes?
Below is my class; I instantiate an object of this class from a form in NetBeans.
class MovementTracker implements LocationListener {

    LocationProvider provider;
    Location lastValidLocation;
    UpdateHandler handler;
    boolean done;

    public MovementTracker() throws LocationException {
        done = false;
        handler = new UpdateHandler();
        new Thread(handler).start();

        // Defining Criteria for Location Provider
        /*
        Criteria cr = new Criteria();
        cr.setHorizontalAccuracy(500);
        */
        // you can place cr inside getInstance
        provider = LocationProvider.getInstance(null);

        // listener, interval, timeout, int maxAge
        // Passing -1 selects the default interval
        // provider.setLocationListener(MovementTracker.this, -1, -1, -1);
        provider.setLocationListener(MovementTracker.this, -1, 30000, 30000);
    }

    public void locationUpdated(LocationProvider provider, Location location) {
        handler.handleUpdate(location);
        batteryLevel = System.getProperty("com.nokia.mid.batterylevel");
        sn = System.getProperty("com.nokia.mid.networksignal");
        localTime = System.currentTimeMillis();
        Send_Location();
    }

    public void providerStateChanged(LocationProvider provider, int newState) {
    }

    class UpdateHandler implements Runnable {

        private Location updatedLocation = null;

        // The run method performs the actual processing of the location
        public void run() {
            Location locationToBeHandled = null;
            while (!done) {
                synchronized (this) {
                    if (updatedLocation == null) {
                        try {
                            wait();
                        } catch (Exception e) {
                            // Handle interruption
                        }
                    }
                    locationToBeHandled = updatedLocation;
                    updatedLocation = null;
                }
                // The benefit of the MessageListener is here.
                // This thread could via similar triggers be
                // handling other kind of events as well in
                // addition to just receiving the location updates.
                if (locationToBeHandled != null)
                    processUpdate(locationToBeHandled);
            }
            try {
                Thread.sleep(10000); // Sleeps for 10 sec & then sends the data
            } catch (InterruptedException ex) {
            }
        }

        public synchronized void handleUpdate(Location update) {
            updatedLocation = update;
            notify();
        }

        private void processUpdate(Location update) {
            latitude = update.getQualifiedCoordinates().getLatitude();
            longitude = update.getQualifiedCoordinates().getLongitude();
            altitude = update.getQualifiedCoordinates().getAltitude();
        }
    }
}
public MovementTracker() throws LocationException
...
I have not written any code for handling LocationException.
Writing no handling code at all is a very dangerous practice; just search the web for something like "java swallow exceptions".
It is quite possible that, because of implementation specifics, the Nokia device throws a LocationException where the emulator does not. Since you don't handle the exception, this may indeed crash your MIDlet on the Nokia, and you wouldn't know the reason, because, again, you have written no code to handle it.
How can I catch that exception?
The simplest thing you can do is to display an Alert with the exception message and exit the MIDlet after the user reads and dismisses the alert.
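For example, a minimal sketch (midlet is assumed to be a reference to your MIDlet, and Alert, AlertType and Display come from javax.microedition.lcdui):

    // Sketch: construct the tracker in a try/catch and show the failure in an Alert.
    try {
        MovementTracker tracker = new MovementTracker();
    } catch (Exception e) { // LocationException, NullPointerException, ...
        Alert alert = new Alert("Location error", String.valueOf(e), null, AlertType.ERROR);
        alert.setTimeout(Alert.FOREVER);
        Display.getDisplay(midlet).setCurrent(alert);
        // exit once the user dismisses the alert, e.g. from a CommandListener
        // that calls midlet.notifyDestroyed()
    }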
I wrote an IHttpModule that compresses my response using gzip (I return a lot of data) in order to reduce the response size.
It works great as long as the web service doesn't throw an exception.
When an exception is thrown, the exception is gzipped, but the Content-Encoding header disappears, so the client doesn't know how to read the exception.
How can I solve this? Why is the header missing? I need to get the exception on the client.
Here is the module:
public class JsonCompressionModule : IHttpModule
{
    public JsonCompressionModule()
    {
    }

    public void Dispose()
    {
    }

    public void Init(HttpApplication app)
    {
        app.BeginRequest += new EventHandler(Compress);
    }

    private void Compress(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        HttpRequest request = app.Request;
        HttpResponse response = app.Response;
        try
        {
            // Ajax Web Service request always starts with application/json
            if (request.ContentType.ToLower(CultureInfo.InvariantCulture).StartsWith("application/json"))
            {
                // User may be using an older version of IE which does not support compression, so skip those
                if (!((request.Browser.IsBrowser("IE")) && (request.Browser.MajorVersion <= 6)))
                {
                    string acceptEncoding = request.Headers["Accept-Encoding"];
                    if (!string.IsNullOrEmpty(acceptEncoding))
                    {
                        acceptEncoding = acceptEncoding.ToLower(CultureInfo.InvariantCulture);
                        if (acceptEncoding.Contains("gzip"))
                        {
                            response.AddHeader("Content-encoding", "gzip");
                            response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
                        }
                        else if (acceptEncoding.Contains("deflate"))
                        {
                            response.AddHeader("Content-encoding", "deflate");
                            response.Filter = new DeflateStream(response.Filter, CompressionMode.Compress);
                        }
                    }
                }
            }
        }
        catch (Exception ex)
        {
            int i = 4;
        }
    }
}
Here is the web service:
[WebMethod]
public void DoSomething()
{
    throw new Exception("This message gets corrupted on the client because the client doesn't know it was gzipped.");
}
I appreciate any help.
Thanks!
Even though it's been a while since you posted this question, I just had the same issue and here's how I fixed it:
In the Init() method, add a handler for the Error event
app.Error += new EventHandler(app_Error);
In the handler, remove Content-Encoding from the Response headers and set the Response.Filter property to null.
void app_Error(object sender, EventArgs e)
{
    HttpApplication httpApp = (HttpApplication)sender;
    HttpContext ctx = HttpContext.Current;
    string encodingValue = httpApp.Response.Headers["Content-Encoding"];
    if (encodingValue == "gzip" || encodingValue == "deflate")
    {
        httpApp.Response.Headers.Remove("Content-Encoding");
        httpApp.Response.Filter = null;
    }
}
Maybe there's a more elegant way to do this, but it did the trick for me.