I have an application that people use through Remote Desktop/Terminal Server. The application supports digital signatures, but the signature pad is attached to the client while the program runs on the server. The signature pad also does not support being shared as a device through Remote Desktop (it is not listed under "Supported Plug and Play Devices" in Local Resources).
What is the best way to send the signature from the client machine to the server, preferably with the least amount of setup for the users? (There are a lot of clients and a fair number of servers this must be done for.)
My best idea so far is sharing the clipboard and using it to send messages from server to client (with the client application "polling" the clipboard for a special clipboard format). I suspect this may not be very fast or stable, though, since I don't think Remote Desktop was designed for it.
Also, we are open to [reasonable] language choices like C/C++, C#, Delphi (the application is written in Delphi), etc. The signature pad itself is a Topaz TS460, which connects via USB.
Can anyone give me ideas on how this can be done, or tell me whether my clipboard idea is probably the best option?
tl;dr: What is the best way to send an image from a client to a server through Remote Desktop?
Update:
Well, I've done a bit of testing with plain ASCII text (I can't get files to transfer), and it seems there are problems copying large amounts of text. I tried copying 43 MB of text, and after a long wait I just got an empty clipboard (as if a paste had happened, but no text was pasted). I was able to transfer about 2 MB of data between server and client at decent speeds, though, so this may be feasible for signature images (which will be JPEG- or PNG-compressed).
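For reference, here is a minimal sketch of the client-side polling loop I have in mind, assuming the server app places its request on the shared clipboard under a registered format name ("SIG_REQUEST" is made up here). It uses only plain Win32 calls via ctypes and leaves out error handling:

    import ctypes
    import time
    from ctypes import wintypes

    user32 = ctypes.WinDLL("user32")
    kernel32 = ctypes.WinDLL("kernel32")

    user32.GetClipboardData.restype = wintypes.HANDLE
    kernel32.GlobalLock.restype = ctypes.c_void_p
    kernel32.GlobalLock.argtypes = [wintypes.HANDLE]
    kernel32.GlobalUnlock.argtypes = [wintypes.HANDLE]
    kernel32.GlobalSize.restype = ctypes.c_size_t
    kernel32.GlobalSize.argtypes = [wintypes.HANDLE]

    # Our own clipboard format, so we never collide with normal text/image data.
    SIG_FORMAT = user32.RegisterClipboardFormatW("SIG_REQUEST")

    def poll_for_request():
        """Return the bytes posted under SIG_REQUEST, or None if nothing is there."""
        if not user32.IsClipboardFormatAvailable(SIG_FORMAT):
            return None
        if not user32.OpenClipboard(None):
            return None
        try:
            handle = user32.GetClipboardData(SIG_FORMAT)
            if not handle:
                return None
            size = kernel32.GlobalSize(handle)
            ptr = kernel32.GlobalLock(handle)
            data = ctypes.string_at(ptr, size)
            kernel32.GlobalUnlock(handle)
            return data
        finally:
            user32.CloseClipboard()

    while True:
        request = poll_for_request()
        if request:
            print("Server asked for a signature:", request)
            # ...drive the Topaz pad here, then place the PNG back on the clipboard...
        time.sleep(0.5)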
Have you looked into using Remote Desktop Virtual Channels? http://msdn.microsoft.com/en-us/library/aa383509(VS.85).aspx
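I haven't implemented this myself, but a rough sketch of the server-side end (the WTSVirtualChannel* functions in wtsapi32.dll, called here from Python via ctypes) might look like the following. The channel name "TSSIG" and the "CAPTURE" message are invented, and you would also need a matching client-side add-in DLL (exporting VirtualChannelEntry and registered with the RDP client), which is not shown:

    import ctypes
    from ctypes import wintypes

    wtsapi32 = ctypes.WinDLL("wtsapi32")
    wtsapi32.WTSVirtualChannelOpen.restype = wintypes.HANDLE

    WTS_CURRENT_SERVER_HANDLE = None
    WTS_CURRENT_SESSION = 0xFFFFFFFF  # "the current session", per wtsapi32.h

    # Open our custom channel on the server side (channel names are limited to 7 chars).
    channel = wintypes.HANDLE(wtsapi32.WTSVirtualChannelOpen(
        WTS_CURRENT_SERVER_HANDLE, WTS_CURRENT_SESSION, b"TSSIG"))
    if not channel:
        raise ctypes.WinError()

    # Ask the client-side add-in to capture a signature...
    written = wintypes.ULONG(0)
    wtsapi32.WTSVirtualChannelWrite(channel, b"CAPTURE", 7, ctypes.byref(written))

    # ...then read back (the first chunk of) the image it sends, with a 5 s timeout.
    buf = ctypes.create_string_buffer(64 * 1024)
    read = wintypes.ULONG(0)
    if wtsapi32.WTSVirtualChannelRead(channel, 5000, buf, len(buf), ctypes.byref(read)):
        png_bytes = buf.raw[:read.value]

    wtsapi32.WTSVirtualChannelClose(channel)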
For the Topaz signature pad and a credit card swiper you will need the serial type. It will work; I have already tried it. But I guess this question is too old for me to answer. Do iPads, as well as other tablets, work in a Terminal Services or Citrix setup?
I have not tried this with Remote Desktop, but one thing that comes to mind is installing a good macro tool on the client. AutoHotKey ( http://www.autohotkey.com/ ) is a free tool that lets you create runnable scripts that can do things like open applications and send keystrokes to them.
I'm not sure how well it would work with remote desktop, but I know for certain that you can easily setup a script that would launch an application, send it "key strokes" to generate data, copy the data to the clipboard, switch to another application and then paste in the data.
When AutoHotKey is installed, you have the option of associating the scripts' file type with the app, so that end users can just double-click your script's desktop icon to run it. No command-line messiness for them.
If all you need to do is transfer some data (a file) from the client to the server, it is fairly easy. Polling for a file also seems more logical than polling via the clipboard.
When connecting, the client should enable sharing of at least one hard disk. You can specify these options every time you connect, or you can send the client a preconfigured .RDP file.
If you can get the user to put the file in a fixed location, you can access the client's C:\Shared\File.jpg from the server using a path like \\tsclient\c\Shared\File.jpg.
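For example, a small server-side polling loop could wait for the file to show up on the redirected drive and then pull it across. This is only a sketch; the paths and polling interval are illustrative:

    import os
    import shutil
    import time

    # The client's C:\Shared\File.jpg as seen from the Terminal Server session.
    SHARED_FILE = r"\\tsclient\c\Shared\File.jpg"
    LOCAL_COPY = r"C:\Signatures\signature.jpg"

    def wait_for_signature(timeout_s=60):
        """Poll the redirected client drive until the signature file appears."""
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            if os.path.exists(SHARED_FILE):
                shutil.copyfile(SHARED_FILE, LOCAL_COPY)
                os.remove(SHARED_FILE)  # so the next poll doesn't pick up a stale file
                return LOCAL_COPY
            time.sleep(1)
        return None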
Here's an explanation (with nice screenshot) how to copy files with Remote Desktop:
http://www.jakeludington.com/ask_jake/20051218_copying_files_with_remote_desktop.html
I wasn't sure whether your question already rules out this approach.
Why can't a local page like file:///C:/index.html send a request for a resource like file:///C:/data.json?
This is prevented because it's a cross-origin request, but in what way is that cross-origin? I don't understand why this is a vulnerability / why it is prevented. It just seems like a massive pain when I want to whip up a quick utility for something in JavaScript/HTML and I can't run it without uploading it to a server somewhere because of this seemingly arbitrary restriction.
HTML files are expected to be "safe". Tricking people into saving an HTML document to their hard drive and then opening it is not difficult ("Here, just open the HTML file attached to this email" would cause many email clients to automatically save it to a temp directory and open it in the default application).
If JavaScript in that file had permission to read any file on the disk, then users would be extremely vulnerable.
It's the same reason that software like Microsoft Word prompts before allowing macros to run.
It protects you from malicious HTML files reading from your hard drive.
On a real server, you are (hopefully) not serving arbitrary files, but on your local machine, you could very easily trick users into loading whatever you want.
Browsers are set up with security measures to make sure that ordinary users won't be at increased risk. Imagine that I'm a malicious website and I have you download something to your filesystem that looks, to you, like a regular website. Imagine that downloaded HTML can access other parts of your file system and then send that data to me through AJAX or perhaps another piece of executable code on the filesystem that came with this package. To a regular user this might look like a regular website that just "opened up a little weird but I still got it to work." If the browser prevents that, they're safer.
It's possible to turn these flags off (as in here: How to launch html using Chrome at "--allow-file-access-from-files" mode?), but that's more for knowledgeable users ("power users"), and probably comes with some kind of warning about how your browsing session isn't secure.
For the kind of scenarios you're talking about, you should be able to spin up a local HTTP server of some sort - perhaps using Python, Ruby, or node.js (I imagine node.js would be an attractive option for testing JavaScript-based apps).
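For instance, the Python standard library already ships a minimal static file server; run something like the sketch below (or simply `python -m http.server`) from the folder that holds index.html and data.json, then browse to http://localhost:8000/:

    import http.server
    import socketserver

    PORT = 8000  # any free local port will do

    # Serves the current working directory over plain HTTP, which avoids the
    # file:// cross-origin restriction discussed above.
    with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
        print(f"Serving on http://localhost:{PORT}/")
        httpd.serve_forever()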
I have an embedded system that runs firmware and exposes a USB mass storage device of 79 kB, so when you plug the device into any computer (Mac/Windows) it shows up as a 79 kB flash drive. The firmware creates files that hold transaction records. The objective is to display these transactions (tables and simple graphs) to the user. I've narrowed it down to a web browser: the user (with a Mac/Windows PC) can plug in the USB mass storage device, open an HTML file on the drive, and view all the transactions as tables and simple bar graphs. The tricky part comes here: the device (firmware) needs to update its clock, and this time input has to be sourced from the Mac/Windows PC. How can this be achieved?
This is the minimum requirement. Further, through the web browser the user wants to write some configuration parameters, e.g. via a text box and a submit button in the HTML page.
NOTE: The USB mass storage class and the web browser approach were chosen so that there are no prerequisites for the user.
Please suggest an alternative if this can be done using another approach, e.g. a different USB device class or some other application available locally on a Mac/Windows desktop/laptop. The application should run on both Mac and Windows, i.e. the code should be the same but buildable into separate packages, one for Mac and the other (an .exe) for Windows. Please suggest a platform that has a single source base but can be built for both Mac and Windows. Thanks!
As far as I know, there is no way a web browser could write to a file. If such a thing was possible, it would be a huge security issue.
You have to write a piece of native software to do all the tasks you name. That can be done in pretty much any programming language, and if you're developing embedded systems I reckon you must have some experience in programming.
I'm looking at doing something similar and have an idea, though you may be better equipped to run with it than I am. Have the device contain a directory called "SET_DATE" with files "YEAR15" through "YEAR99", "MON01" through "MON12", "DATE01" through "DATE31", "H00" through "H23", "M00" through "M59", "S00" through "S59", and "SET"; each such file should start at a different sector, though none of the sectors in question need to contain any data (they need not physically be stored anywhere). To set the date to July 4, 2020 at 12:34:56pm, read the following files in sequence:
SET_DATE/YEAR20
SET_DATE/MON07
SET_DATE/DATE04
SET_DATE/H12
SET_DATE/M34
SET_DATE/S56
SET_DATE/SET
The last access should cause the unit to set its clock. If a user might want to set the clock more than once, that could be accommodated by having a bunch of essentially identical directories under SET_DATE (so setting the date the first time would use SET_DATE/00/YEAR20, the second time SET_DATE/01/YEAR20, etc.) and/or by having the drive unmount/remount itself if necessary to clear out any caching.
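A host-side sketch of that scheme (assuming the device mounts as drive E: and exposes exactly the marker files listed above) would just read each file in order. Note that the host OS may serve repeated reads from its cache, which is exactly the caching caveat mentioned above:

    import datetime

    DRIVE = "E:"  # wherever the device's mass-storage volume shows up

    def touch(name):
        # Any read is enough; the firmware keys off the sector access, not the contents.
        with open(f"{DRIVE}\\SET_DATE\\{name}", "rb") as f:
            f.read(1)

    now = datetime.datetime.now()
    for marker in (f"YEAR{now:%y}", f"MON{now:%m}", f"DATE{now:%d}",
                   f"H{now:%H}", f"M{now:%M}", f"S{now:%S}", "SET"):
        touch(marker)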
I would think it unwise to have directory fetches trigger actions, since Windows or an anti-virus tool might decide to pre-cache all the directories in a drive when it is mounted. I would not expect Windows or a browser to eagerly load files, however, so I would think one could have read accesses trigger actions.
I have a web service that accepts really huge files. Usually in the range of 10 - 15 GB (not MB).
However, uploading via a browser is only possible using Chrome on Linux. All three major browsers have different flaws when trying to upload such a file:
Internet Explorer stops after exactly 4GB.
Firefox does not start at all.
Chrome (on Windows) transfers the whole file but fails to send the closing boundary (it sends 0xff instead).
Now we are searching for a way to get uploads to work, preferably using HTML/JS only, but I see no way to make that happen. The second choice would be Flash, but FileReference seems to break for files > 4GB. The last resort would be Java, but that is not what we are looking for in the browser client.
Note that this is about the client. I know the server-side code works, as I can upload a 12GB file using a standard HTML upload with Chrome on Linux. It is the only browser/OS combination that works so far, but because of it I am sure the server code is fine.
Does anyone know any way to get huge file uploads to work?
Regards,
Steffen
There is a fairly young JS/HTML5 API which might cover your use case:
https://developer.mozilla.org/en-US/docs/Using_files_from_web_applications
I can't speak to its suitability though.
If you're using IIS, the default max file upload is 4GB. You need to change this in your script or your server settings.
See: Increasing Max Upload File Size on IIS7/Win7 Pro
Normally you would break such files into chunks and upload them as a stream: take a limited amount of data from the file, upload that part to the server, and have the server append the data to the file. Repeat until the complete file is uploaded. You can read a file in parts using FileStream (update: Adobe AIR only) or in JavaScript using the experimental Blob/FileReader APIs. I guess this could be a workaround for the problem.
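I can't speak for your server stack, but just to illustrate the server side of that scheme, a toy handler could simply append each POSTed slice to the target file. The header name and port are made up, and real code would need to validate chunk order and authenticate the client:

    import http.server

    class AppendHandler(http.server.BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            chunk = self.rfile.read(length)
            name = self.headers.get("X-File-Name", "upload.bin")  # invented header
            with open(name, "ab") as f:  # append this slice to the growing file
                f.write(chunk)
            self.send_response(200)
            self.end_headers()

    http.server.HTTPServer(("", 8080), AppendHandler).serve_forever()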
I think this solution could help:
AS3 fileStream appears to read the file into memory
Let me know if this works out; it's an interesting problem.
Consider the following scanning procedure in a typical document handling webapp:
The user scans a document using a scanner connected to his/her computer
The scanned image is saved locally on the user's computer as a BMP/JPG/TIF/PNG file
The user hits a file upload "Browse.." button in the web application
The user is presented with a file dialog which he/she uses to locate the scanned image
The user hits "Upload image" and the scanned image is uploaded to the server where it is stored
This process is quite complicated and I'd like to reduce the number of steps to make it more user-friendly/foolproof. Ideally the steps above would be replaced with a single step: initiating the scan, completing it, and uploading the resulting image would all be triggered from the webapp by clicking, say, "Scan and upload". Unfortunately the state of "web/scanner integration" seems quite poor, so this might be utopia.
How would you tackle this problem? More specifically, how would you go about reducing the number of steps involved in the use case described?
Well, two years have passed, so here's an update on the state of the art for those just joining us.
Both Dynamsoft and Atalasoft have multi-browser web-scanning toolkits which are compatible with any server-side stack. Both require the user to install an ActiveX (in IE) or an NPAPI plugin (Chrome, Firefox, etc.) to get access to the scanner via the TWAIN API.
Obviously if you have the time or a limited budget, you can create your own plugin. I heartily recommend the FireBreath plugin framework, and any TWAIN library rather than writing your own TWAIN code.
Once the ActiveX or plugin is installed, the rest of the work is a combination of javascript & HTML on the client, and some kind of handler on the server to accept and process the incoming image, which can be made to look just like a multipart form submit with an attached file.
I recommend doing the image upload in JavaScript using AJAX, because it is then part of the same browser 'session' as the web page, and it inherits the browser's proxy settings, session cookies and server-side authentication. I don't know about Dynamsoft's control, but the Atalasoft toolkit includes such AJAX uploading. The image(s) are handed from the plugin to the JavaScript as a base64-encoded string, so no local file is actually created.
Disclaimer: I work on Atalasoft's WingScan web-scanning toolkit.
If your target audience is running Windows and IE, and you don't mind spending a few $$, Atalasoft has some components that will do just what you're looking for.
I actually saw someone at the bank do this while setting up my account, and I was totally amazed. The bank in question was using Windows and IE; I assume you're in an equally controlled environment. I think the bank used a combination of a custom/predictable scanner driver and an ActiveX control.
A page loaded which said "Open the scanner"; the staff member popped the document in and hit Scan on the webpage, then the page changed to say Scanning, and then it showed the scanned document on the web page for the staff member to approve. I can only assume that the scanner driver sent the image to a certain location and the ActiveX control was polling for it to appear; once it appeared, it showed the image on screen, and once the staff member had approved it, the ActiveX control uploaded it in the background. She opened the next page and carried on with the rest of the process.
God knows how they made all that tech work but it can be done.
Silverlight 4 is coming out soon. It is supposed to have the ability to interact with COM objects on the user's computer (provided they are running Windows). In theory you call WIA methods from your Silverlight web page.
We implemented Remote Deposit for a bank. It works only in IE. A WinForms DLL was created that interfaces with the LeadTools TWAIN DLL, which abstracts all the TWAIN minutiae. This approach is slightly better than using an ActiveX control, but the .NET Framework is needed on the client. The scanned images are posted back to a hidden variable on the page and are processed on the server.
Hmm, I've always wanted to look at a scanned file before I did anything with it, but I suppose that depends on your scanner and how much quality you need.
If the goal is to "automate the scanning and uploading process" as opposed to "write a web app", I'd write an AutoIt script to control the existing scanner software and a simple ftp program.
The option most likely to remove the most steps, would probably be writing a customized scan utility that the user would download and run on their local machine.
SANE or TWAIN would handle getting the scanned image. cURL could then handle uploading the image to your web app. To make things even easier for the end user, I would use something like a Comet connection to update the web page when the file is available.
If that isn't an option, you might look into seeing what options your users will likely have using their scanners software. I believe many programs now support scanning to email or ftp.
The solution I have used for an intranet app, using multifunction scanner/copiers, was to scan to an SMB share that the web server had access to. The user just goes to the copier, scans to the share, and when they get back to their desk they go to the "new scans" page, which shows a list of all the new unprocessed files.
Since your audience is in a controlled environment, you can write your own browser extension/program based on WIA/TWAIN that does the scanning. If you choose a browser extension such as a BHO/ActiveX/XPCOM component, you need to get the user's permission to install it. If you choose to write a program, you may need a web deployment technology like ClickOnce or Java Web Start so it can be launched from the web.
Interfacing with TWAIN is a pain on Windows. Complexity aside, you have to display GUIs written by various scanner driver developers. It may be the only way to support old scanners or features not exposed via other interfaces, like full-speed multipage scans from a document feeder.
Microsoft's WIA makes interfacing with the scanner much easier via a scripting object model; however, scanner-specific features are not available, and some old scanners do not support the interface.
After scanning you can call a web service to notify the server and the web page can refresh periodically to check new images.
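For the WIA route, a rough sketch of the client-side program (Python with pywin32 here; WIA.CommonDialog is the stock WIA automation object, while the temp path and upload URL are placeholders) could be:

    import win32com.client  # pywin32
    import requests         # used here only for the upload step

    # Pops the standard WIA acquire dialog and returns a WIA ImageFile object.
    dialog = win32com.client.Dispatch("WIA.CommonDialog")
    image = dialog.ShowAcquireImage()
    image.SaveFile(r"C:\Temp\scan.jpg")

    # Upload to the web app so the page can pick the new scan up on its next refresh.
    with open(r"C:\Temp\scan.jpg", "rb") as f:
        requests.post("https://example.com/scans", files={"scan": f})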
We have done something similar. We used a command-line TWAIN program, QuickScan (http://www.burrotech.com/quickscan.php), which costs $49.
1) We developed a small .Net application to run the QuickScan program as a shell command.
2) The command was assigned to the Scan button.
3) Once the user presses on the scan button, a prompt will appear to enter the file name. The user saves the transaction Id as the file name.
4) Another .Net application (or maybe the same one mentioned before) will read this file and upload it into the database, using the filename as the transaction ID.
Worked like a hot knife through butter!
You could try displaying the transaction ID in IE and having the user select the ID before pressing Scan. Your application would read the selected text and save the file using the selected text as the file name. We haven't tried it, but it should work.
It is only utopia if you think that web applications are limited to web browsers; in fact, web applications can include a lot of different technologies besides HTML and JavaScript.
The cool way of solving that problem -- in fact, I have already used it for some USB-serial devices -- is to implement your application using SOAP+XMPP. You can do that in Perl by using XML::CompileX::Transport::SOAPXMPP, Catalyst::Engine::XMPP2, Catalyst::Controller::SOAP and Catalyst::Model::SOAP.
The interesting thing about using XMPP is that it simplifies the management of addressing, since you use the JID (Jabber ID) to look for the software agent rather than some host+port addressing scheme. The second interesting part of using XMPP is that it makes it easier for the server to push information to the client.
But if you don't want to handle XMPP you still can do the same thing with a lightweight embedded http server -- HTTP::Server::Simple, in Perl -- and somehow register the current scanner address in the server so it can call back.
And a last option, which is not so cute, is to have the software agent poll the server to see when there is a "scan document and upload" order for that specific machine, and perform that operation when one is present.
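A sketch of that last (polling) option, with made-up URLs and JSON fields and the actual scanner call left as a stub:

    import socket
    import time

    import requests

    SERVER = "https://example.com/scan-orders"   # placeholder endpoint
    MACHINE = socket.gethostname()

    def acquire_image():
        # Stub: drive the scanner here via TWAIN/WIA/SANE and return the file path.
        raise NotImplementedError

    while True:
        order = requests.get(SERVER, params={"machine": MACHINE}).json()
        if order.get("scan"):
            path = acquire_image()
            with open(path, "rb") as f:
                requests.post(order["upload_url"], files={"document": f})
        time.sleep(10)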
In summary, having a local software agent to interact with the local hardware doesn't make your webapp less "web", as long as you use web standards -- like XML, SOAP and others -- to perform that communication.
You can put a Java applet in your website. This can access the scanner and send the data via REST to your web server.
I'm writing a Flash app using the open source tools. I would like to load a data file into the app and capture a screenshot of the stage on the server. The only part that seems mysterious is running the app on the server. In fact, I don't even care if it's the same app running on the server and in the browser--if I can use the Flash stage and drawing routines to produce an image server-side, I'm happy. If I have to delve into Flex, fine. Right now I'm having problems finding any starting point at all.
I gather Adobe has some commercial products that may fit the bill, but I'd like to stick with open source, apache, and linux. I know this is probably possible with haxe/neko, but I'd like to use more mainstream tools if possible. Am I asking too much?
EDIT/CLARIFICATION: Many thanks for the responses so far, but I think I've been a bit muddy in my description. I've already written the actual stage-grabbing stuff using the same PNGEncoder class as was suggested. The problem is actually running the SWF on the server side. I don't want to let the client take the screenshot itself, because that opens up the possibility of the client maliciously submitting a screenshot which does not correspond to what is on the stage; that is, I don't want users uploading porn. If I could run the ActionScript code on the server, then I could generate the screenshot from my data files and be sure that the screenshot matches the data, but I have no idea how to run the ActionScript or SWF on the server.
SWFs run on the client computer, not on the server. The only way one would run on the server would be if you set up a special environment on your server so that it ran a web browser, opened the page and ran the SWF. But even then it would have no correlation to what an external user was doing.
You'll need to run it client-side. As far as your security concerns go, the best way to address them is to have the PHP that writes the actual image only accept an encrypted form of the image file, which the Flash can produce. That way they can't simply use the PHP file to upload whatever image they want unless they happened to encrypt it the exact same way that your SWF did. Next, encrypt the SWF itself (I recommend SWF Shield) so that a potential hacker cannot read the code to learn how the image is encrypted.
We just completed a similar project where we rendered JPGs from SWFs that loaded dynamic data; we used IECapt.
Did you try the ActionScript print commands?
Try and look at this:
http://www.phpclasses.org/browse/package/4312.html
I know this question is long dead, but I had a similar problem and ended up writing a script using AppleScript + UI scripting to grab the inside area of the preview window of the standalone Flash player on OS X. You can grab it off GitHub here.
How about swfdec-thumbnailer from the swfdec-gnome package? It's used to create thumbnails of SWF files but can accept arbitrarily large resolutions with the -s argument.
EDIT: swfdec-gnome has been deprecated in Ubuntu 10.10 in favour of Gnash. Here is a guide on taking screenshots with Gnash (note that certain features like gradients are not yet properly supported).