Sharing session variables between Vaadin and embedded applications

I have a Vaadin 6 application that uses the Embedded component to show a JSP page from another application (JPivot, in this case). Both applications run in the same application server (Tomcat). I need the two applications to communicate, and I'm trying to do this with session attributes. However, each application has its own session, so one is unaware of the other's attributes. My question is: how can I make these applications communicate without using a database or an external file? Solutions other than session attributes are also welcome.

What you want is either IPC between the two web apps or a way to share information between them.
If you have a cache available (memcached or similar), you could store and retrieve the information there.
If no cache is available, Tomcat's crossContext="true" setting might help you: with it, one webapp can reach "the other" webapp from inside a servlet request.
Here is a simple explanation of how this works:
http://lukaszbudnik.blogspot.ch/2009/06/session-sharing-in-apache-tomcat.html
If you google for "tomcat session sharing" you will get many more results.
Please note that this cross-context approach only works as long as both webapps are in the same Tomcat instance.
As soon as you add another Tomcat instance for load balancing or high availability it will break. In that case you should use some sort of message bus or message queue.
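For illustration, here is a minimal sketch of the cross-context approach. It assumes both webapps run in the same Tomcat with crossContext="true" set on their <Context> elements; the "/jpivot" context path and the attribute names are made-up placeholders, not something from the original question.

    import java.io.IOException;
    import javax.servlet.ServletContext;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ShareWithJPivotServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Look up the other webapp's context by its context path
            // (requires crossContext="true"; returns null otherwise).
            ServletContext jpivot = getServletContext().getContext("/jpivot");
            if (jpivot != null) {
                // HttpSession objects are not shared across contexts, so share
                // data through application-scope (ServletContext) attributes.
                jpivot.setAttribute("sharedUserId",
                        req.getSession().getAttribute("userId"));
            }
        }
    }

On the JPivot side, a JSP could then read the value via application.getAttribute("sharedUserId"), since the JSP implicit object application is that webapp's ServletContext.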

Related

How to launch an HTML page using REST services from another server

I have different sets of applications, each with its own WAR file, and they could be deployed on different servers.
Assuming that all application sets may need to interact with each other, I am trying to develop them as web services. It may happen that an application 'A' installed on server 'X' needs to launch an application 'B', but B's resources, such as its HTML and JS, are not present on the server where A is installed.
How can we do this? I have come across a few sites where Viewable is used, but that requires the JSPs to be in the same instance. What if I want to achieve this when the calling application doesn't have the HTML or JSPs with it?
I hope I have been able to put up my question properly. Thanks for any inputs.
As Pesskillet says, you can use a redirect to the resource on server B.
If that resource is an HTML view, you don't need to worry about loading the static files (CSS, JS, images, etc.) bound to that view, because the user's browser will load them automatically as soon as it receives the HTML document.
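As a rough JAX-RS sketch of that redirect (the resource path and the URL of server B are placeholders, not values from the question):

    import java.net.URI;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.core.Response;

    @Path("/launch-b")
    public class LaunchAppBResource {
        @GET
        public Response launchAppB() {
            // 303 redirect: the browser then fetches B's HTML view directly
            // and loads its CSS/JS/images from server B on its own.
            return Response.seeOther(
                    URI.create("http://server-b.example.com/app-b/index.html")).build();
        }
    }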

How to use opendolphin without http sticky sessions in a load balanced scenario?

I read "Those who would like to enjoy the binding, presentation model structuring, testing capabilities, toolkit independence, and all the other benefits of OpenDolphin, but prefer REST (or other) remoting for data access, can use OpenDolphin with the in-memory configuration"
But I could not find any further hints in the docs.
I can't rely on sticky sessions in my load balanced webserver.
Therefore I need to plugin something different for the http session state.
Is there an OpenDolphin config property prepared for this? If not, are there any plugin points available?
Since OpenDolphin and Dolphin Platform use the remote presentation model pattern to synchronize presentation models between client and server, you need state on the server. Currently this state lives in the session. As you said, it's no problem to use load balancing with sticky sessions to provide several server instances. If you need dynamic updates between the clients, a distributed event bus like Hazelcast will help.
"Therefore I need to plugin something different for the http session state."
What do you need? With the latest version (0.8.6) of Dolphin Platform you can access the HTTP client in the client API and provide custom headers or cookies. Will this help? Please tell us what you need, or open an issue at the Dolphin Platform GitHub repo.

Connect Sproutcore App to MySQL Database

I'm trying to build my first SproutCore app and I'm struggling to connect it to a MySQL database, or any data source other than fixtures. I can't seem to find ANY tutorial except this one from 2009, which is marked as deprecated: http://wiki.sproutcore.com/w/page/12413058/Todos%2007-Hooking%20Up%20to%20the%20Backend .
Do people usually not connect SproutCore apps to a database? If they do, how do they find out how? Or does the above-mentioned tutorial still work? A lot of the gem commands in its introduction already seem to differ from the official SproutCore getting-started guide.
SproutCore apps, as client-side "in-browser" apps, cannot connect directly to a MySQL or any other non-browser database. The application itself runs only within the user's browser (it's just HTML, CSS & JavaScript once built and deployed) and typically accesses any external data via XHR requests to an API or APIs. Therefore, you will need to create a service wrapper around your MySQL database in order for your client-side app to be able to load and update data.
There are two things worth mentioning. The first is that since the SproutCore app contains all of your user interface and a great deal of business logic, your API can be quite simple and should only return raw data (such as JSON). The second is that the client-server design, while more tedious to implement, is absolutely necessary in practice, because you can never trust client-side code, which is in the hands of a possibly nefarious user. Therefore, your API should also act as the final gatekeeper and validate all requests from the client.
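The backend for such an API can be written in anything; as a very rough illustration, here is a Java servlet sketch of a raw-JSON endpoint the SproutCore app could fetch via XHR. The table, columns, and connection details are made-up assumptions, the MySQL JDBC driver must be on the classpath, and a real API would add the authentication and validation mentioned above.

    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet("/api/tasks")
    public class TaskApiServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("application/json");
            StringBuilder json = new StringBuilder("[");
            try (Connection c = DriverManager.getConnection(
                         "jdbc:mysql://localhost/appdb", "user", "secret");
                 Statement s = c.createStatement();
                 ResultSet rs = s.executeQuery("SELECT id, title FROM tasks")) {
                boolean first = true;
                while (rs.next()) {
                    if (!first) json.append(',');
                    // Return raw data only; the SproutCore app renders it client-side.
                    json.append("{\"id\":").append(rs.getInt("id"))
                        .append(",\"title\":\"")
                        .append(rs.getString("title").replace("\"", "\\\""))
                        .append("\"}");
                    first = false;
                }
            } catch (SQLException e) {
                throw new ServletException(e);
            }
            resp.getWriter().write(json.append(']').toString());
        }
    }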
This tutorial I found helped me a lot. It's very brief and demonstrates how to implement a very simple login app, how to send POST requests (triggered by the login button action) to the backend server, and how to asynchronously process the response inside the SproutCore app:
http://hawkins.io/2011/04/sproutcore_login_tutorial/

How to keep backend session information in Polymer SPA

I'd like to log in to a RESTful back-end server written in Laravel 5, with the single-page front-end application built from Polymer custom elements.
In this system, the persistence (CRUD) layer lives on the server, so authentication should be done on the server in response to the client's API request. When a request is valid, the server returns a User object in JSON format, including the user's role for access control on the client.
My question is: how can I keep the session even when the user refreshes the front-end page? Thanks.
This is an issue beyond Polymer, or even just single page apps. The question is how you keep session information in a browser. With SPAs it is a bit easier, since you can keep authentication tokens in memory, but traditional Web apps have had this issue since the beginning.
You have two things you need to do:
Tokens: You need a user token that indicates that this user is authenticated. You want it to be something that cannot be guessed, or else someone can spoof it, so the token had better not be "jimsmith" but something more reliable. You have two choices. Either you can have a randomly generated token which the server stores, so that when it is presented on future requests the server can validate it; this is how most session managers in app servers work (node.js sessions, Jetty sessions, etc.). The alternative is to do something cryptographic, so that the server only needs to validate the token mathematically rather than check a store to see if it is valid. I did that for node in http://github.com/deitch/cansecurity but there are various options for it. (A minimal server-side sketch of the first approach follows the Storage point below.)
Storage: You need some way to store the tokens client-side that does not depend on JS memory, since you expect to reload the page.
There are several ways to do client-side storage. The most common by far is cookies. Since the browser stores them without your trying too hard, and presents them whenever you access the domain that the cookie is registered for, it is pretty easy to do. Many client-side and server-side auth libraries are built around them.
An alternative is html5 local storage. Depending on your target browsers and support, you can consider using it.
There are also ways you can play with URL parameters, but then you run the risk of losing the token when someone switches pages. It can work, but I tend to avoid it.
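For the Tokens point above, here is a minimal server-side sketch in Java (the question's backend is Laravel, so treat this purely as an illustration of the idea, with assumed names): generate an unguessable token, remember it server-side, and hand it to the browser as a cookie so it survives page refreshes.

    import java.security.SecureRandom;
    import java.util.Base64;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletResponse;

    public class TokenIssuer {
        private static final SecureRandom RANDOM = new SecureRandom();
        // Simple in-memory token store; behind a load balancer you would need a shared cache.
        private final ConcurrentHashMap<String, String> tokens = new ConcurrentHashMap<>();

        public void issueToken(String userId, HttpServletResponse resp) {
            byte[] raw = new byte[32];
            RANDOM.nextBytes(raw);                       // unguessable random token
            String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
            tokens.put(token, userId);                   // server-side lookup on later requests

            Cookie cookie = new Cookie("auth_token", token);
            cookie.setHttpOnly(true);                    // not readable from page JavaScript
            cookie.setPath("/");
            resp.addCookie(cookie);                      // browser sends it back automatically
        }

        public String userForToken(String token) {
            return tokens.get(token);                    // null means not authenticated
        }
    }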
I have not seen any components that handle cookies directly, but it shouldn't be too hard to build one.
Here is the gist of cookie management code I used for a recent app. Feel free to wrap it into a Web Component for cookie management, as long as you share alike!
https://gist.github.com/deitch/dea1a3a752d54dc0d00a
UPDATE:
component.kitchen has a storage component here http://component.kitchen/components/TylerGarlick/core-resource-storage
The simplest way, if you use PHP, is to keep the user in a PHP session (like a normal non-SPA application).
PHP will store the user info on the server and automatically generate a cookie that the browser will send with every request. With a single server and no load balancing, the session data is local and very fast.

Interfacing with the end-user's scanner from a webapp (web/scanner integration)

Consider the following scanning procedure in a typical document handling webapp:
The user scans a document using a scanner connected to his/her computer
The scanned image is saved locally on the user's computer as a BMP/JPG/TIF/PNG file
The user hits a file upload "Browse.." button in the web application
The user is presented with a file dialog which he/she uses to locate the scanned image
The user hits "Upload image" and the scanned image is uploaded to the server where it is stored
This process is quite complicated, and I'd like to reduce the number of steps to make it more user-friendly and foolproof. Under ideal circumstances, the steps above would be replaced by a single one: clicking, say, "Scan and upload" in the webapp would initiate the scan, complete it, and automatically upload the resulting image. Unfortunately, the state of "web/scanner integration" seems quite poor, so this might be utopia.
How would you tackle this problem? More specifically, how would you go about reducing the number of steps involved in the use case described?
Well, two years have passed, so here's an update on the state of the art for those just joining us.
Both Dynamsoft and Atalasoft have multi-browser web-scanning toolkits which are compatible with any server-side stack. Both require the user to install an ActiveX (in IE) or an NPAPI plugin (Chrome, Firefox, etc.) to get access to the scanner via the TWAIN API.
Obviously if you have the time or a limited budget, you can create your own plugin. I heartily recommend the FireBreath plugin framework, and any TWAIN library rather than writing your own TWAIN code.
Once the ActiveX or plugin is installed, the rest of the work is a combination of JavaScript and HTML on the client, and some kind of handler on the server to accept and process the incoming image, which can be made to look just like a multipart form submit with an attached file.
I recommend doing the image upload in JavaScript using AJAX, because it is then part of the same browser 'session' as the web page, and it inherits the browser's proxy settings, session cookies, and server-side authentication. I don't know about Dynamsoft's control, but the Atalasoft toolkit includes such AJAX uploading. The image(s) are handed from the plugin to the JavaScript as a base64-encoded string, so no local file is actually created.
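A minimal sketch of that server-side handler as a Java servlet, assuming the client POSTs a multipart form to it; the field name "scan", the URL, and the storage directory are placeholders, not part of either toolkit:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.MultipartConfig;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.Part;

    @WebServlet("/upload-scan")
    @MultipartConfig
    public class ScanUploadServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            Part scan = req.getPart("scan");                 // the uploaded image part
            Path target = Paths.get("/var/scans",
                    System.currentTimeMillis() + ".png");    // assumed storage location
            try (InputStream in = scan.getInputStream()) {
                Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
            }
            resp.setStatus(HttpServletResponse.SC_OK);
        }
    }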
Disclaimer: I work on Atalasoft's WingScan web-scanning toolkit.
If your target audience is running Windows and IE, and you don't mind spending a few $$, Atalasoft has some components that will do just what you're looking for.
I actually saw someone at the bank do this while setting up my account, and I was totally amazed. The bank in question was using Windows and IE; I assume you're in an equally controlled environment. I think the bank used a combination of a custom/predictable scanner driver and an ActiveX control.
A page loaded which said "Open the scanner"; the staff member popped the document in and hit Scan on the webpage, then the page changed to say Scanning, and then it showed the scanned document on the web page for the staff member to approve. I can only assume that the scanner driver sent the image to a certain location and the ActiveX control was polling for it to appear; once it appeared, it showed the image on screen, and once the staff member had approved it, the ActiveX control uploaded it in the background. She opened the next page and carried on with the rest of the process.
God knows how they made all that tech work but it can be done.
Silverlight 4 is coming out soon. It is supposed to have the ability to interact with COM objects on the user's computer (provided they are running Windows). In theory, you could call WIA methods from your Silverlight web page.
We implemented Remote Deposit for a bank. It works only in IE. A WinForms DLL was created that interfaces with the LeadTools TWAIN DLL, which abstracts away all the TWAIN minutiae. This approach is slightly better than using an ActiveX control, though the .NET Framework is needed on the client. The scanned images are posted back to a hidden variable on the page and processed on the server.
Hmm, I've always wanted to look at a scanned file before I did anything with it, but I suppose that depends on your scanner and how much quality you need.
If the goal is to "automate the scanning and uploading process" as opposed to "write a web app", I'd write an AutoIt script to control the existing scanner software and a simple ftp program.
The option most likely to remove the most steps would probably be writing a customized scan utility that the user would download and run on their local machine.
SANE or TWAIN would handle getting the scanned image, and cURL could then handle uploading the image to your web app. To make things even easier for the end user, I would use something like a Comet connection to update the web page when the file becomes available.
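If such a utility were written in Java rather than shelling out to cURL, the upload step could look roughly like this; the endpoint URL is a placeholder, and a real client would use multipart form encoding if that is what the server expects:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class ScanUploader {
        public static void upload(String imagePath) throws IOException {
            byte[] image = Files.readAllBytes(Paths.get(imagePath));
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://example.com/upload-scan").openConnection(); // placeholder URL
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "image/png");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(image);                      // send the raw image bytes as the body
            }
            if (conn.getResponseCode() != 200) {
                throw new IOException("Upload failed: HTTP " + conn.getResponseCode());
            }
        }
    }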
If that isn't an option, you might look into what options your users will likely have with their scanner's software. I believe many programs now support scanning to email or FTP.
The solution I have used for an intranet app, with multifunction scanner/copiers, was to scan to an SMB share that the web server had access to. The user just goes to the copier and scans to the share, and when they get back to their desk, they go to the new-scans page, which shows a list of all the new, unprocessed files.
Since your audience is in a controlled environment, you can write your own browser extension/program based on WIA/TWAIN that does the scanning. If you choose browser extensions such as BHO/ActiveX/XPCOM, etc., you need to get the user's permission to install your extension. If you choose to write a program, you may need a web deployment technology like ClickOnce or Java Web Start so it can be launched from the web.
Interfacing with TWAIN is a pain on Windows. Complexity aside, you have to display GUIs written by the various scanner driver developers. It may be the only way to support old scanners or features not exposed via other interfaces, such as full-speed multipage scans from a document feeder.
Microsoft's WIA makes interfacing with the scanner much easier through a scripting object model; however, scanner-specific features are not available, and some old scanners do not support the interface.
After scanning, you can call a web service to notify the server, and the web page can refresh periodically to check for new images.
We have done something similar. We used a command-line TWAIN program, QuickScan (http://www.burrotech.com/quickscan.php, $49).
1) We developed a small .NET application to run the QuickScan program as a shell command.
2) The command was assigned to the Scan button.
3) Once the user presses the Scan button, a prompt appears to enter the file name. The user saves the transaction ID as the file name.
4) Another .NET application (or maybe the same one mentioned before) reads this file and uploads it into the database, using the filename as the transaction ID.
Worked like a hot knife through butter!
You can try displaying the transaction ID in IE, have the user select the ID and then press Scan. Your application then reads the SELECTED text and saves the file using the SELECTED text as the file name. We haven't tried it, but it should work.
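A rough Java equivalent of step 1 above (the original used .NET); the QuickScan command-line flags and output path shown here are assumptions, so check the tool's documentation:

    import java.io.IOException;

    public class ScanLauncher {
        public static void scanTo(String transactionId)
                throws IOException, InterruptedException {
            // Launch the command-line TWAIN tool and have it write the image
            // to a file named after the transaction ID (flags are assumed).
            ProcessBuilder pb = new ProcessBuilder(
                    "quickscan.exe", "/o", "C:\\scans\\" + transactionId + ".tif");
            pb.inheritIO();
            int exit = pb.start().waitFor();  // block until the scan completes
            if (exit != 0) {
                throw new IOException("Scan failed with exit code " + exit);
            }
        }
    }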
It is only utopia if you think that web applications are limited to web browsers; in fact, web applications can include a lot of different technologies besides HTML and JavaScript.
The cool way of solving that problem -- in fact, I have already used this for some USB-serial devices -- is to implement your application using SOAP+XMPP. You can do that in Perl by using XML::CompileX::Transport::SOAPXMPP, Catalyst::Engine::XMPP2, Catalyst::Controller::SOAP and Catalyst::Model::SOAP.
The interesting thing about using XMPP is that it simplifies the management of addressing, since you use the JID (Jabber ID) to look for the software agent, not some host+port addressing schema. The second interesting part of using XMPP is to more easily support the server pushing information to the client.
But if you don't want to handle XMPP, you can still do the same thing with a lightweight embedded HTTP server -- HTTP::Server::Simple, in Perl -- and somehow register the current scanner address with the server so it can call back.
And a last option, which is not so cute, is to have the software agent poll the server to see whether there is a "scan document and upload" order for that specific machine, and perform that operation when one is present.
In summary, having a local software agent to interact with the local hardware doesn't make your webapp less "web", as long as you use web standards -- like XML, SOAP and others -- to perform that communication.
You can put a Java applet on your website. It can access the scanner and send the data via REST to your web server.