Passing session-specific info to Comet's "onEvent" method

I have two questions about using the Comet feature with Glassfish. I'm
pretty new at this, so if there's an easy answer or some documentation
I should read please let me know! Thanks.
I'm building a system that has multiple microcontrollers sending data
to a central DB. Users view the data via their browsers ... in formats
(metric vs. English, say) of their own choosing. The display needs to
be updated without user action. It looks like Glassfish + Comet should
be perfect. And so I started with Oracle's "hidden_Comet" example, and
that works great.
So question #1 is this: how can one get session-specific information
into the "onEvent" method?
As context, here's the code; it’s straight from the Oracle example:
private class CounterHandler implements CometHandler<HttpServletResponse> {

    private HttpServletResponse response;

    public void onEvent(CometEvent event) throws IOException
    {
        if (CometEvent.NOTIFY == event.getType())
        {
            PrintWriter writer = response.getWriter();
            writer.write("<script type='text/javascript'>");
            // ... etc. Here is where I need to pass some session-specific
            // info to the JavaScript
            event.getCometContext().resumeCometHandler(this);
        }
    }
    // ...
}
It would seem that session attributes would be perfect, but it looks
like you can't get the 'session' variable from the "HttpServletResponse".
I thought about using cookies, but they seem to be accessible only with
HttpServletRequest, not "...Response", and, as above, only ‘response’
is available in the “onEvent” method.
So question #1 is: how do you do this?
Question #2 is: is this just the wrong way to attack this problem and is
there a better way?

I'm not sure I understand the data structures and control flow of Comet very well yet, but it seems that this works:
Add a constructor to "class CounterHandler" and pass in the 'session' variable from 'doGet()' where "new CounterHandler" is called. Specifically, change:
CometHandler handler = new CounterHandler();
to
HttpSession session = request.getSession();
CometHandler handler = new CounterHandler(session);
Have the constructor save the session variable in a class instance variable. And then the "onEvent()" method has access to session attributes. And Bob's your uncle.
(It seems straightforward enough ... well, now.)
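In case it helps the next person, here is a rough sketch of the changed handler (names taken from the Oracle example above; the "units" attribute is just a made-up example, and the other CometHandler methods are unchanged):

private class CounterHandler implements CometHandler<HttpServletResponse> {

    private HttpServletResponse response;
    private final HttpSession session;   // saved from doGet()

    public CounterHandler(HttpSession session) {
        this.session = session;
    }

    public void onEvent(CometEvent event) throws IOException
    {
        if (CometEvent.NOTIFY == event.getType())
        {
            PrintWriter writer = response.getWriter();
            // Session attributes are now available here, e.g.:
            Object units = session.getAttribute("units");
            writer.write("<script type='text/javascript'>");
            // ... write the session-specific info into the JavaScript ...
            event.getCometContext().resumeCometHandler(this);
        }
    }
}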


Spring Boot JMS - Generic JSON messages without _type property

I'm implementing JMS in a Spring Boot application. Everything is going well. However I'm very surprised at the tight coupling between JSON messages and Java objects. I am looking for some direction on a more flexible solution.
Going through examples and using the MappingJackson2MessageConverter, everything is great as long as you are sending and receiving in the same application. Under the covers it's extremely tightly coupled to the Java object. If I have a simple Java object called Person:
package acme.receivingapp.dto;
public class Person {
private String firstName;
private String lastName;
...
}
When the JmsTemplate turns that into a message the JSON looks generic enough:
{"firstName":"John", "lastName":"Doe"}
However it includes this property:
"_type" : "acme.superapp.dto.Person"
If the JmsListener isn't using that exact Java class, it throws an exception. That's true even if the receiving class is functionally identical, as in this example where it's effectively the same class, just in a different package:
package wonderco.sendingapp.dto;
public class Person {
private String firstName;
private String lastName;
...
}
We will be receiving messages from many external entities, from mainframes, Python apps, .NET, etc. I cannot require them to include our object types in a _type property.
I could create my own MessageConverter specifically for a Person object, but if we have hundreds more message types / Java classes it would be unwieldy to have so many message converters. I would need to design something more generic that can work for any type of JSON message / Java class.
Before I go down the path of designing my own generic solution, is there anything more generic that works like Spring RestControllers and Spring RestTemplates, in the sense that the JSON messages aren't so tightly coupled to very specific Java classes? I feel like I can't possibly be the first person trying to crack this nut.
I think I've got a handle on this. I'll try to explain it to hopefully help the next person who is new to Spring / JMS.
As M.Deinum points out, unlike a REST endpoint, a queue could potentially contain many different types of messages, even if your particular implementation will only ever put one type of message on a given queue. Because queues allow any number of different message types, that is what the provided MappingJackson2MessageConverter was designed for: since the assumption is that there may be multiple types of messages, there must be a mechanism to determine how to unmarshal the JSON for each message into the correct type of Java object.
All the examples you'll find of using a MappingJackson2MessageConverter will have this setup in them:
MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
converter.setTypeIdPropertyName("_type");
That tells the message converter to write the object type into a property called _type when creating a message, and to read the object type from that property when reading a message. There's no magic in the _type property. It's not a standard; it's just what the Spring folks used in their examples, which a bazillion people then cut and pasted. For your own messages, you can change it to a more appropriate property name if you like. In my example, I might call the property acme_receivingapp_message_type, and then tell the external entities sending me messages to include that property with the message type.
By default, the MappingJackson2MessageConverter will write the object type into whatever property name you chose (_type or whatever) as the fully qualified class name. In my example, it's acme.receivingapp.dto.Person. When a message is received, it looks at the type property to determine what type of Java object to create from the JSON.
Pretty straightforward so far, but still not very convenient if the people sending me messages are not using Java. Even if I can convince everyone to send me acme.receivingapp.dto.Person, what happens if I refactor that class from Person to Human? Or even just restructure the packages? Now I've got to go back and tell the 1,000 external entities to stop sending the property as acme.receivingapp.dto.Person and now send it as acme.receivingapp.dto.Human?
Like I stated in my original question, the message and Java class are being very tightly coupled together which doesn't work when you are dealing with external systems/entities.
The answer to my problem is right in the name of the **Mapping**Jackson2MessageConverter message converter. The key there is the "mapping". Mapping refers to mapping message types to Java classes, which is what we want. It's just that, by default, because no mapping information is provided, the MappingJackson2MessageConverter simply uses the fully qualified Java class names when creating and receiving messages. All we need to do is provide the mapping information to the message converter so it can map from friendly message types (e.g. "Person") to specific classes within our application (e.g. acme.receivingapp.dto.Person).
If you wanted your external systems/entities that will be sending you messages to simply include the property acme_receivingapp_message_type : Person, and you wanted that unmarshalled to an acme.receivingapp.dto.Person object when it's received on your end, you'd set up your message converter like this:
@Bean
public MessageConverter jacksonJmsMessageConverter() {
    MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
    converter.setTargetType(MessageType.TEXT);
    converter.setTypeIdPropertyName("acme_receivingapp_message_type");
    // Set up a map to convert our friendly message types to Java classes.
    Map<String, Class<?>> typeIdMap = new HashMap<>();
    typeIdMap.put("Person", acme.receivingapp.dto.Person.class);
    converter.setTypeIdMappings(typeIdMap);
    return converter;
}
That solves the problem of tight coupling between the message type property and Java class names. But what if you'll only be dealing with a single message type in your queue and don't want the people sending messages to have to include any property to indicate the message type? Well, MappingJackson2MessageConverter simply doesn't support that. I tried using a "null" key in the map and then leaving the property off the message, and unfortunately it doesn't work. I wish it did support that "null" mapping to use when the property wasn't present.
If you have the scenario where your queue will only deal with one type of message and you don't want the sender to have to include a special property to indicate the message type, you'll likely want to write your own message converter. That converter will blindly unmarshal the JSON to the one Java class you'll always be dealing with. Or maybe you opt to just receive it as a TextMessage and unmarshal it in your listener.
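For what it's worth, a bare-bones converter along those lines might look something like this (just a sketch, assuming the queue only ever carries Person JSON; the class name PersonOnlyMessageConverter is made up, error handling is minimal, and newer Spring versions use jakarta.jms instead of javax.jms):

import java.io.IOException;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.springframework.jms.support.converter.MessageConversionException;
import org.springframework.jms.support.converter.MessageConverter;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

import acme.receivingapp.dto.Person;

public class PersonOnlyMessageConverter implements MessageConverter {

    private final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public Message toMessage(Object object, Session session) throws JMSException {
        try {
            // Write plain JSON with no type property at all.
            return session.createTextMessage(objectMapper.writeValueAsString(object));
        } catch (JsonProcessingException e) {
            throw new MessageConversionException("Could not write JSON", e);
        }
    }

    @Override
    public Object fromMessage(Message message) throws JMSException {
        try {
            // Blindly unmarshal every message to the one class this queue deals with.
            return objectMapper.readValue(((TextMessage) message).getText(), Person.class);
        } catch (IOException e) {
            throw new MessageConversionException("Could not read JSON", e);
        }
    }
}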
Hopefully this helps someone because I found it quite confusing initially.
I'm reacting to this thread because I have exactly the same feeling!
Why isn't Spring able to deserialise the event based on the signature of the method that implements the @JmsListener?
If you have a method like
@JmsListener(destination = "#{beanQueue.queueName}")
public void onEvent(MyEvent event) {
    // Do what you want
}
Why do we need to explicitly define the _type property to identify the Java output type? It could be extracted from the listener method's parameter.
I haven't dug deeply under the hood, but that looks reasonable to me.
[EDIT] After some debugging and some quick research, I found a solution. I'm not convinced it's the most elegant one, but at least it allows a kind of generic converter.
Convert all to String
Automatically convert every event to the java.lang.String type, in order to have a generic _type for all consumers.
This can be done by wrapping the actual MappingJackson2MessageConverter:
@Bean
public MessageConverter jsonMessageConverter() {
    final ObjectMapper objectMapper = new ObjectMapper();
    // Set up the object mapper
    final MappingJackson2MessageConverter messageConverter = new MappingJackson2MessageConverter();
    messageConverter.setObjectMapper(objectMapper);
    messageConverter.setTargetType(MessageType.TEXT);
    messageConverter.setTypeIdPropertyName("_type");
    return new MessageConverter() {
        @Override
        public Message toMessage(Object object, Session session) throws JMSException, MessageConversionException {
            try {
                final String stringValue = objectMapper.writeValueAsString(object);
                return messageConverter.toMessage(stringValue, session);
            } catch (JsonProcessingException e) {
                throw new RuntimeException(e);
            }
        }

        @Override
        public Object fromMessage(Message message) throws JMSException, MessageConversionException {
            return messageConverter.fromMessage(message);
        }
    };
}
Read all from String
In order to read everything from String without needing to rewrite the existing Spring implementation, we can take advantage of the ConversionService:
@Override
public void addFormatters(FormatterRegistry registry) {
    registry.addConverter((Converter<String, MyEvent>) source -> {
        try {
            return new ObjectMapper().readValue(source, MyEvent.class);
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    });
}
Limitation
There are some limitations, since the event is now transferred as a String, which is not elegant at all.
I haven't actually investigated org.springframework.jms.support.converter.MessageType, which allows defining other types of message.
In addition, it forces the client to always define a converter for every event type that is listened to inside the application.

Using GET and POST vs getter and setter methods (URLS)

As a trained programmer, I have been taught, repeatedly to use getter and setter methods to control the access and modification of class variables. This is how you're told to do it in Java, Python, C++ and pretty much every other modern language under the sun. However, when I started learning about web development, this seemed cast aside. Instead, we're told to use one URL with GET and POST calls, which seems really odd.
So imagine I have a Person object and I want to update their age. In the non-HTTP world, you're supposed to have a method called <PersonObject>.getAge() and another method called <PersonObject>.setAge(int newAge). But say, instead, you've got a webserver that holds user profile information. According to HTTP conventions, you'd have a URL like '/account/age'. To get their age, you'd request that URL with a 'GET', and to set their age, you'd request that URL with a 'POST' and somehow (form, JSON, URL-arg, etc.) send the new value along.
The HTTP method just feels awkward. To me, that's analogous to changing the non-HTTP version to one method called age, and you'd get their age with <PersonObject>.age('GET'), and set their age with <PersonObject>.age(newAge, 'SET'). Why is it done that way?
Why not have one URL called '/account/getAge' and another called '/account/setAge'?
What you are referring to is a RESTful API. While not required (you could just use getter/setter-style URLs), it is indeed considered good practice. This however does not mean you have to change the code of your data objects. I always use getters and setters for my business logic in the model layer.
What you are talking to through the HTTP request, however, are the controllers, and they rarely expose getters and setters directly (I suppose I do not need to explain the MVC design pattern to an experienced programmer). You should never directly access your models through HTTP (think authentication, error handling, and so on...).
If you have some spare time I would advise you to have a look at this screencast, which I found very useful.
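To make the controller vs. model distinction concrete, here is a minimal sketch (Spring MVC, with hypothetical names; validation and authentication omitted) where a single URL answers both GET and POST while the model keeps its ordinary getter and setter:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// The model still uses plain getters and setters internally.
class Person {
    private int age;
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
}

@RestController
public class AccountController {

    private final Person person = new Person();

    // GET /account/age -> read the value
    @GetMapping("/account/age")
    public int getAge() {
        return person.getAge();
    }

    // POST /account/age -> update the value
    @PostMapping("/account/age")
    public void setAge(@RequestParam int newAge) {
        person.setAge(newAge);
    }
}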
You certainly could have separate URLs if you like, but getters and setters can share names in the original context of your question anyway because of overloading.
class Person {
    private int age;

    public int age() {
        return this.age;
    }

    public void age(int age) {
        this.age = age;
    }
}
So if it helps you, you can think of it like that.

reusing queries in 2 datacontext using dependency injection

I have a web application that uses LINQ to SQL queries (soon to be upgraded to LINQ to EF compiled queries) and for which there's a data context and a database already in place. I want to create a demo version of the application, and for the demo I want to use an entirely different database file that will have the same tables. So in essence, I'll have the same data structure for two different databases: one database for logged-in users and one database for demo users. I want to reuse many of the queries I've already written; they look like this:
public class FruitQueries
{
    public List<SomeObjectModel> MyQuery(list of parameters)
    {
        using (MyDataContext TheDC = new MyDataContext())
        {
            var TheQueryResult = (from f in TheDC.Fruits
                                  ......).ToList();
            return TheQueryResult;
        }
    }

    public List<SomeObject> AnotherQuery(some other parameters) {...}
}
Now I think I know that this calls for dependency injection where the data context is passed in as a parameter but I'm not sure on the syntax. How do you reuse queries using dependency injection to make them work on two different databases? Right now I'm using a using statement and I want to keep this pattern; is that possible if I inject the DC as a parameter?
Thanks.
Since you already have a lot of code in place, probably the simplest thing to do is to inject a factory:
public interface IMyDataContextFactory
{
    MyDataContext CreateNewContext();
}
All the code will roughly stay the same:
public List<SomeObjectModel> MyQuery(params)
{
    using (var TheDC = this.factory.CreateNewContext())
    {
        var TheQueryResult = (from f in TheDC.Fruits
                              ......).ToList();
        return TheQueryResult;
    }
}
You can let the injected IMyDataContextFactory decide how to construct a MyDataContext instance (based on the user). This would be trivial.
In the end it will probably be better to inject a MyDataContext (or an abstraction such as IUnitOfWork) into consumers, but this changes everything completely. Since the class is passed in from the outside, the consumer is no longer responsible for disposing it; someone else is. Disposing such an instance isn't that hard with most DI containers, but it gets harder when you want to share the same MyDataContext instance across multiple consumers (within the same web request, for instance), and then where do you call SubmitChanges?
Elaborating on the previous answer:
What you can do is provide the connection string to the DC (would this qualify as constructor injection?):
using (MyDataContext TheDC = new MyDataContext(this.factory.CreateConString()))
This way, disposal is still handled by the consumer and you can continue your using() approach. Your factory can read the two different connection strings from your web.config and determine the right one to use based on the user (not as trivial as it may seem).
PS: I think the quickest way is to deploy the demo application to a different URL so it can have a separate web.config and you do not need to code anything, but that does not answer your question.

Use javaJVMLocalObjectMimeType for local DnD and serialization for external drop

I'm working on a component in which smaller components are stacked. The user should be able to change the order of these components using drag and drop. I've made this work by implementing a TransferHandler that accepts a local object reference DataFlavor (javaJVMLocalObjectMimeType) for the underlying data models. This works fine.
Now I would also like to run my application a second time and be able to drag one of my components from one application into the other. In this case I want to bundle the necessary data from the drag source into a serializable object so that it can be reconstructed in the drop application, and use a serializable DataFlavor for that. I don't want to use object serialization in both situations.
How do I decide whether my drag operation originated in the same JVM, so that I can choose between the object reference and the serialized version of the data? The official Swing DnD documentation mentions that it is possible to mix local and serialization flavors, but it doesn't say how to make best use of this.
Edit
Here's how I create the flavor in my DataModelTransferable
public static DataFlavor localFlavor;

static {
    try {
        localFlavor = new DataFlavor(DataFlavor.javaJVMLocalObjectMimeType + ";class=\"" + ArrayList.class.getName() + "\"");
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    }
}

...

@Override
public DataFlavor[] getTransferDataFlavors() {
    return new DataFlavor[] { localFlavor };
}
And in my TransferHandler I do this
@Override
public boolean canImport(TransferSupport support) {
    return support.isDataFlavorSupported(DataModelTransferable.localFlavor);
}
As I said, this works fine locally, but if I drag from one instance to another, the drag is accepted, which leads to a java.io.IOException: Owner failed to convert data in the drop application and a java.io.NotSerializableException: alignment.model.DataModel in the drag source application. That is fair enough, but the drag shouldn't be accepted in another app in the first place.
I'm using an ArrayList as I also want to be able to drag several objects at once, FYI.
I just found out myself what the problem was. I wrote a wrapper class DataModelList for my ArrayList of DataModels that doesn't implement Serializable, and modified the declaration of my data flavor to this:
localFlavor = new DataFlavor(DataFlavor.javaJVMLocalObjectMimeType + ";class=\"" + DataModelList.class.getName() + "\"");
After that, this flavor is not considered to be equal between the drag source and the drop target if they are not within the same JVM.
I conclude that it's not possible to use local object reference flavors directly with classes that implement Serializable. If someone knows where this is documented I would be glad to hear about it.
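For reference, the wrapper can stay as small as this (a sketch; DataModel is the existing model class, and the whole point is that the wrapper deliberately does not implement Serializable):

import java.util.ArrayList;
import java.util.List;

// Deliberately NOT Serializable, so the flavor only matches within the same JVM.
public class DataModelList {

    private final List<DataModel> models = new ArrayList<>();

    public void add(DataModel model) {
        models.add(model);
    }

    public List<DataModel> getModels() {
        return models;
    }
}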
Your object reference flavor will normally be different for each running JVM. So you first check whether the Transferable supports your 'object reference flavor' before you ask for the 'serialized dataflavor version'.
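A minimal sketch of that check in the TransferHandler could look like this (assuming the Transferable exposes both the local flavor and a separate serializable flavor, here called serialFlavor, which is a made-up name; the handler class name is hypothetical too):

import java.awt.datatransfer.Transferable;
import java.awt.datatransfer.UnsupportedFlavorException;
import java.io.IOException;
import javax.swing.TransferHandler;

public class DataModelTransferHandler extends TransferHandler {

    @Override
    public boolean canImport(TransferSupport support) {
        // Prefer the in-JVM object reference; fall back to the serialized form
        // for drops coming from another running instance.
        return support.isDataFlavorSupported(DataModelTransferable.localFlavor)
                || support.isDataFlavorSupported(DataModelTransferable.serialFlavor);
    }

    @Override
    public boolean importData(TransferSupport support) {
        try {
            Transferable t = support.getTransferable();
            if (t.isDataFlavorSupported(DataModelTransferable.localFlavor)) {
                // Same JVM: use the live object references directly.
                Object data = t.getTransferData(DataModelTransferable.localFlavor);
                // ... reorder using the existing DataModel instances ...
            } else {
                // Different JVM: reconstruct the models from the serialized payload.
                Object data = t.getTransferData(DataModelTransferable.serialFlavor);
                // ... rebuild the DataModels on this side ...
            }
            return true;
        } catch (UnsupportedFlavorException | IOException e) {
            return false;
        }
    }
}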

Castle Dynamic Proxy in Windsor Container

I've got a bit of a problem. I'm working with the Castle Windsor IoC container. What I wanted to do is mess about with some AOP principles, and specifically to perform some logging based on a method name. I have been looking at interceptors, and at the moment I am using the IInterceptor interface implemented as a class to perform this logging using aspects. The issue is that if I want to perform the logging on a specific method, it gets messy, as I need to put some logic into my implemented aspect to check the method name, etc.
I have read that you can do all of this using dynamic proxies and the IInterceptorSelector and IProxyGenerationHook interfaces. I have seen a few examples of this on the net, but I am quite confused about how it all fits into the Windsor container. I am using the Windsor container, which in my code is actually a reference to the IWindsorContainer interface, to create all my objects. All my configuration is done in code rather than XML.
Firstly, does anyone know of a way to perform method-specific AOP in the Windsor container besides the way I am currently doing it?
Secondly, how do I use the dynamic proxy in the Windsor container?
Below I have added the code where I am creating my proxy and registering my class with the interceptors:
ProxyGenerator _generator = new ProxyGenerator(new PersistentProxyBuilder());
IInterceptorSelector _selector = new CheckLoggingSelector();
var loggingAspect = new LoggingAspect();
var options = new ProxyGenerationOptions(new LoggingProxyGenerationHook())
{ Selector = _selector };
var proxy = _generator.CreateClassProxy(typeof(TestClass), options, loggingAspect);
TestClass testProxy = proxy as TestClass;
windsorContainer.Register(
    Component.For<LoggingAspect>(),
    Component.For<CheckLoggingAspect>(),
    Component.For<ExceptionCatchAspect>(),
    Component.For<ITestClass>()
        .ImplementedBy<TestClass>()
        .Named("ATestClass")
        .Parameters(Parameter.ForKey("Name").Eq("Testing"))
        .Proxy.MixIns(testProxy));
The Test Class is below:
public class TestClass : ITestClass
{
    public TestClass()
    {
    }

    public string Name
    {
        get;
        set;
    }

    public void Checkin()
    {
        Name = "Checked In";
    }
}
As for the interceptors, they are very simple and just act on a method if its name starts with "Check".
Now when I resolve my TestClass from the container I get an error:
{"This is a DynamicProxy2 error: Mixin type TestClassProxy implements IProxyTargetAccessor which is a DynamicProxy infrastructure interface and you should never implement it yourself. Are you trying to mix in an existing proxy?"}
I know I'm using the proxy in the wrong way, but as I haven't seen any concrete example of how to use a proxy with the Windsor container, it's kind of confusing.
If I want to use the LoggingProxyGenerationHook, which just tells the interceptors to fire only for methods that start with the word "Check", is this the correct way to do it, or am I completely on the wrong path? I went down the proxy route because it seems very powerful and I would like to understand how to use these proxies for future programming efforts.
By using .Interceptors() you are already using DynamicProxy. When a component has interceptors specified, Windsor will create a proxy for it and use those interceptors with it. You can also use the .SelectedWith method and the .Proxy property to set the other options you already know from DynamicProxy.
I just added a page about Windsor AOP to the documentation wiki. There's not much there yet, but I (and Mauricio ;) ) will put all the information you need there. Take a look, and let us know if everything is clear and whether something is missing.