What does "headless" mean? - terminology

While reading the QTKit Application Programming Guide I came across the term 'headless environments' - what does this mean? Here is the passage:
...including applications with a GUI and tools intended to run in a “headless” environment. For example, you can use the framework to write command-line tools that manipulate QuickTime movie files.

"Headless" in this context simply means without a graphical display. (i.e.: Console based.)
Many servers are "headless" and are administered over SSH for example.

Headless means that the application runs without a graphical user interface (GUI), and sometimes without any user interface at all.
There are similar terms for this, used in slightly different contexts. Here are some examples.
Headless / Ghost / Phantom
This term tends to be used for heavyweight clients. The idea is to run a client in a non-graphical mode, from a command line for example. The client then runs until its task is finished, or interacts with the user through a prompt.
Eclipse, for instance, can be run in headless mode. This mode comes in handy for running jobs in the background or on a build farm.
For example, you can run Eclipse in graphical mode to install plugins. This is OK if you just do it for yourself. However, if you're packaging Eclipse to be used by the devs of a large company and want to keep up with all the updates, you probably want a more reproducible, automated, easier way.
That's where headless mode comes in: you can run Eclipse from the command line with parameters that indicate which plugins to install, as in the sketch below.
The nice thing about this method is that it can be integrated into a build farm!
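For the record, such a headless install command might look roughly like the sketch below, wrapped in a small Node/TypeScript build script. The Eclipse path, repository URL, and feature ID are placeholders, and the p2 director application is just one common way to install plugins without opening the GUI.

```typescript
// Rough sketch of a headless Eclipse plugin install driven from a build script.
// The Eclipse path, repository URL, and feature ID below are placeholders.
import { execFileSync } from "node:child_process";

const eclipse = "/opt/eclipse/eclipse"; // assumed install location

execFileSync(
  eclipse,
  [
    "-nosplash",
    "-application", "org.eclipse.equinox.p2.director",
    "-repository", "https://download.eclipse.org/releases/latest",
    "-installIU", "org.eclipse.egit.feature.group", // example plugin feature
  ],
  { stdio: "inherit" } // stream Eclipse's console output to the build log
);
```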
Faceless
This term tends to be used for larger-scale applications, and it was coined by UX designers. A faceless app interacts with users through channels traditionally dedicated to human-to-human communication, like email, SMS, or phone calls, but NOT a GUI.
For example, some companies use SMS as an entry point for a dialog with users: the user sends an SMS containing a request to a certain number. This triggers automated services that run and reply to the user.
It's a nice user experience, because one can run some errands from one's phone. You don't necessarily need an internet connection, and the interaction with the app is asynchronous.
On the back-end side, the service can decide that it does not understand the user's request and drop out of the automated mode. The user then enters an interactive mode with a human operator without changing communication tools.

You most likely know what a browser is. Now take away the GUI, and you have what's called a headless browser. Headless browsers can do all of the same things that normal browsers do, typically faster because nothing has to be rendered on screen. They're great for automating and testing web pages programmatically.
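As a concrete illustration (my example, not from the answer above), driving a headless browser with Puppeteer looks roughly like this; the URL and output filename are placeholders:

```typescript
// Small sketch of automating a page with a headless browser (Puppeteer).
import puppeteer from "puppeteer";

async function main(): Promise<void> {
  const browser = await puppeteer.launch({ headless: true }); // no window opens
  const page = await browser.newPage();

  await page.goto("https://example.com");
  console.log("Page title:", await page.title());

  // Anything a normal browser renders can still be inspected or captured.
  await page.screenshot({ path: "example.png" });

  await browser.close();
}

main().catch(console.error);
```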

"Headless" can refer to a browser or any program that doesn't require a GUI. Its output isn't really meant for a person to view; it passes information, usually in the form of code, to another program.
So why would one use a headless program?
Simply because it improves speed and performance and works for every user, even on machines without access to a graphics card. It allows testing browserless setups and helps you multitask.

In software development, "headless" also describes an architectural design that completely separates the back end from the front end. The front end (GUI or UI) is a stand-alone piece and communicates with the back end through an API. This allows for a multi-server architecture, flexibility in the software stack, and performance optimization.
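A hedged sketch of what that separation looks like from the front end's side: the UI is just code that calls the back end's HTTP API. The endpoint and data shape below are invented for illustration.

```typescript
// Sketch of the front end as a stand-alone piece that only talks to the back
// end through an HTTP API. The endpoint and response shape are hypothetical;
// the back end could run on an entirely different server or stack.
interface Product {
  id: number;
  name: string;
}

async function loadProducts(): Promise<Product[]> {
  const response = await fetch("https://api.example.com/products");
  if (!response.ok) {
    throw new Error(`API error: ${response.status}`);
  }
  return (await response.json()) as Product[];
}
```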

Related

using window.env variables to show/hide React component

Is it a good idea to show/hide a React component using window.env?
For example, we have a feature which we are not ready to release yet, so we are thinking of hiding it using window.env.FEATURE_ENABLED=0 (these vars will be picked up by an API call to the service that serves the bundle to the browser).
But I am thinking it's risky, since a user can look at window.env, set window.env.FEATURE_ENABLED=1, and start seeing the workflow which we intend to hide.
Could anyone please provide their take on this?
Yes, it could potentially be risky for the reason you say.
A better approach would be to only include finished features in the production build - unfinished features that are still in testing should not be sent to the client. For such features, have a separate build. Host it:
On a local dev server (usually one running on the developer's personal machine) (great when one is making rapid changes), or
On a staging server - one that's accessible to all developers, and works similarly to the live site, but isn't the same as the production URL
A staging server is the professional approach when multiple devs need access to it at once. It can take some work at first to integrate it into your build process, but it's worth it for larger projects.
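If you do end up wanting a flag rather than fully separate builds, a safer pattern than window.env is a build-time constant that the bundler inlines. A rough sketch, assuming a Create React App-style setup where REACT_APP_* variables are substituted at build time (the variable name and NewCheckout component are hypothetical):

```tsx
// Sketch of a build-time flag, assuming REACT_APP_* environment variables are
// inlined as literal strings when the bundle is built. The variable name and
// the NewCheckout component are hypothetical.
import React from "react";
import NewCheckout from "./NewCheckout"; // the unfinished feature (hypothetical)

export function CheckoutPage(): JSX.Element | null {
  // This condition becomes a constant at build time, so a production build
  // made with the variable unset never contains the enabled branch, and a
  // user cannot flip it from the browser console the way they could with
  // window.env.
  if (process.env.REACT_APP_FEATURE_ENABLED !== "true") {
    return null;
  }
  return <NewCheckout />;
}
```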

Building a manual sandbox for malware analysis

I want to build a manual sandbox to analyze malware on Windows systems. I mean a manual environment, not something automated like Cuckoo Sandbox.
There are many tools, and I have selected some of them, but I can't really tell whether each of these tools is worth it or not. Can you tell me what you think and whether these tools are useful for my sandbox?
First, I consider some of them unavoidable, like IDA, WinDbg, Wireshark, Npcap, an HTTP proxy like Fiddler, the Sysinternals suite, Volatility, and maybe Foremost.
Then there are other tools I have never really tried but which seem interesting. For static analysis, I have spotted the following tools and would like feedback on them: Log-MD (a tool which inspects the system using advanced Windows audit policies), Cerbero Profiler, Pestudio, Unpacker (it seems to be an automated tool for unpacking binaries; it looks faster, but I am a bit skeptical, and I'm not an RE specialist, so if you know this tool...), and oledump.py by Didier Stevens (to identify various elements like heuristic patterns, IPs, and strings)...
For dynamic analysis, I noted Hook Analyzer (statically analyzes elements with heuristic patterns and allows you to hook applications), Malheur (detects "malicious behavior"), and ViperMonkey (detects VBA macros in Microsoft Office documents and emulates their behavior).
Do you have any recommendations about my setup and tools I could have forgotten? I want to analyze classic malicious elements (PE, PDF, various scripts, Office documents, ...).
Regarding malware evasion, is there a risk that a sample refuses to run because it detects RE and analysis tools?
Finally, should I allow Internet access in the sandbox? Most malware today uses C&C servers, and I see that some sandboxes are built with simulators like iNetSim, but since the connection is not real, will I lose some information?
Thanks!
You might want to consider the SEE framework to build your analysis platform.
Its plugin-based design will allow you to integrate scanning tools in a pretty flexible manner.
Bear in mind that lots of malware inspects the execution environment and will refuse to run if any RE tool is spotted.
As for the Internet connection, it depends on how much information you want to gather. It is indeed true that lots of malware communicates with a C&C nowadays, yet it must also ensure its persistence on the target machine.
Therefore, the injection mechanism will still be executed even if an Internet connection is absent. My 2 cents on the matter: run without Internet by default and activate it only when necessary.

Interactive 3D visualization on browser

I am trying to create a website where users can view and interact with room furnishings in a 3D environment in a browser. Now, I do not wish to create anything from scratch if it is possible to build upon existing open source efforts. So far, my research shows that:
The most established open source project I could build upon, which allows me to show 3D scenes in the browser and have users interact with them, uses Java3D for the browser view, encapsulated in a Java applet (sweethome3Dviewer).
Java 3D itself seems to be out of vogue, with most people recommending HTML5+WebGL (where unfortunately, I can't find any solutions that are as developed).
So here are my questions for this forum:
1) Are there any serious drawbacks of using a Java3D based approach?
I am talking of ANY drawbacks here, for example: "it is too slow"; "it is not stable"; "is limited by the number of concurrent users", etc.
2) What would you suggest I start with and build upon, if not the one based on Java3D?
Please note my preference for not re-inventing the wheel!
Yes, there is a serious drawback to using Java applets today: they are likely to simply not work at all.
The biggest problem is that the Java security system, which is intended to prevent programs like applets from accessing other parts of your computer (modifying files, running additional unsandboxed programs, etc.), has a history of security holes. Because of this history, there is a general consensus that permitting Java applets is simply not an acceptable security policy for the current day. Therefore, many browsers omit the Java plug-in or disable it by default.
And there are also browsers which simply have never had a Java browser plug-in at all, such as those on Android and iOS devices. Besides the security risk, there is also the issue that Java is “heavyweight” as web content goes — it can be seen as a waste of limited resources, for portable devices.
Thus, using Java applets is not a good choice: your applet will never work for many users, and those it does work for are taking an unnecessary security risk.
WebGL, on the other hand, is “just” another JavaScript-based API, which only does graphics, not lots of other things that have to be turned off by a “security manager” element. There are risks inherent to WebGL (GPU drivers are not the most security-minded thing out there) but in the current state of things it's unlikely that WebGL will be simply shut off rather than being fixed, if a problem is found.
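To give a feel for the WebGL route, here is a minimal scene using the three.js library (my suggestion; the answer above does not name a specific library):

```typescript
// Minimal three.js scene: a rotating box standing in for a piece of furniture.
import * as THREE from "three";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75,
  window.innerWidth / window.innerHeight,
  0.1,
  1000
);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const box = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial()
);
scene.add(box);

function animate(): void {
  requestAnimationFrame(animate);
  box.rotation.y += 0.01; // rotate so the scene is visibly interactive
  renderer.render(scene, camera);
}
animate();
```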

Interfacing with the end-user's scanner from a webapp (web/scanner integration)

Consider the following scanning procedure in a typical document handling webapp:
The user scans a document using a scanner connected to his/her computer
The scanned image is saved locally on the user's computer as a BMP/JPG/TIF/PNG file
The user hits a file upload "Browse.." button in the web application
The user is presented with a file dialog which he/she uses to locate the scanned image
The user hits "Upload image" and the scanned image is uploaded to the server where it is stored
This process is quite complicated, and I'd like to reduce the number of steps in order to make it more user-friendly/foolproof. Under ideal circumstances, the steps above would be replaced by a single step: initiating the scan, completing it, and uploading the resulting image would all be triggered from the webapp by clicking, say, "Scan and upload". Unfortunately, the state of "web/scanner integration" seems to be quite poor, so this might be utopia.
How would you tackle this problem? More specifically, how would you go about reducing the number of steps involved in the use case described?
Well, two years have passed, so here's an update on the state of the art for those just joining us.
Both Dynamsoft and Atalasoft have multi-browser web-scanning toolkits which are compatible with any server-side stack. Both require the user to install an ActiveX (in IE) or an NPAPI plugin (Chrome, Firefox, etc.) to get access to the scanner via the TWAIN API.
Obviously if you have the time or a limited budget, you can create your own plugin. I heartily recommend the FireBreath plugin framework, and any TWAIN library rather than writing your own TWAIN code.
Once the ActiveX or plugin is installed, the rest of the work is a combination of javascript & HTML on the client, and some kind of handler on the server to accept and process the incoming image, which can be made to look just like a multipart form submit with an attached file.
I recommend doing the image upload in JavaScript using AJAX, because it is then part of the same browser 'session' as the web page, and it inherits the browser's proxy settings, session cookies and server-side authentication. I don't know about Dynamsoft's control, but the Atalasoft toolkit includes such AJAX uploading. The image(s) are handed from the plugin to the JavaScript as a base64-encoded string, so no local file is actually created.
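That upload step might look roughly like the following sketch, assuming the plugin hands the page a base64 string; the endpoint name is made up for illustration:

```typescript
// Sketch of the upload step: the scanning plugin hands the page a
// base64-encoded image, which is posted to the server as a multipart form so
// it looks like an ordinary file upload. "/upload-scan" is a hypothetical name.
async function uploadScannedImage(base64Image: string): Promise<void> {
  // Decode the base64 payload into raw bytes, then wrap it in a Blob.
  const bytes = Uint8Array.from(atob(base64Image), (c) => c.charCodeAt(0));
  const blob = new Blob([bytes], { type: "image/png" });

  const form = new FormData();
  form.append("scan", blob, "scan.png");

  // Running inside the page means the request inherits the browser's cookies,
  // proxy settings, and server-side authentication, as described above.
  const response = await fetch("/upload-scan", { method: "POST", body: form });
  if (!response.ok) {
    throw new Error(`Upload failed: ${response.status}`);
  }
}
```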
Disclaimer: I work on Atalasoft's WingScan web-scanning toolkit.
If your target audience is running Windows and IE, and you don't mind spending a few $$, Atalasoft has some components that will do just what you're looking for.
I actually saw someone at the bank do this while setting up my account, and I was totally amazed. The bank in question was using Windows and IE; I assume you're in an equally controlled environment. I think the bank used a combination of a custom/predictable scanner driver and an ActiveX control.
A page loaded which said "Open the scanner"; the staff member popped the document in and hit Scan on the webpage. The page then changed to say "Scanning", and then it showed the scanned document on the web page for the staff member to approve. I can only assume that the scanner driver sent the image to a certain location, and the ActiveX control was polling for it to appear; once it appeared, it showed the image on screen, and once the staff member had approved it, the ActiveX control uploaded it in the background. She opened the next page and carried on with the rest of the process.
God knows how they made all that tech work but it can be done.
Silverlight 4 is coming out soon. It is supposed to have the ability to interact with COM objects on the user's computer (provided they are running Windows). In theory you call WIA methods from your Silverlight web page.
We implemented a Remote Deposit solution for a bank. It works only in IE. A WinForms DLL was created that interfaces with the LeadTools TWAIN DLL, which abstracts away all the TWAIN minutiae. This approach is slightly better than using an ActiveX control. The .NET Framework would be needed on the client. The scanned images are posted back to a hidden variable on the page and are processed on the server.
Hmm, I've always wanted to look at a scanned file before I did anything with it, but I suppose that depends on your scanner and how much quality you need.
If the goal is to "automate the scanning and uploading process" as opposed to "write a web app", I'd write an AutoIt script to control the existing scanner software and a simple ftp program.
The option most likely to remove the most steps, would probably be writing a customized scan utility that the user would download and run on their local machine.
SANE or TWAIN would handle getting the scanned image. cURL could then handle uploading the image to your web app. To make things even easier for the end user, I would use something like a Comet connection to update the web page when the file is available.
If that isn't an option, you might look into what options your users will likely have in their scanner's software. I believe many programs now support scanning to email or FTP.
The solution I have used for an intranet app, using multifunction scanner/copiers, was to scan to an SMB share that the web server had access to. The user just goes to the copier, scans to the share, and when they get back to their desk, they go to the new-scans page, which shows a list of all the new unprocessed files.
Since your audience is in a controlled environment, you can write your own browser extension/program based on WIA/TWAIN that does the scanning. If you choose a browser extension such as a BHO, ActiveX control, or XPCOM component, you need to get the user's permission to install it. If you choose to write a program, you may need a web deployment technology like ClickOnce or Java Web Start so it can be launched from the web.
Interfacing with TWAIN is a pain on Windows. Complexity aside, you have to display a GUI written by various scanner driver developers. It may be the only way to support old scanners or features not exposed via other interfaces, like full-speed multipage scans from a document feeder.
Microsoft's WIA makes interfacing with the scanner much easier via a scripting object model; however, scanner-specific features are not available, and some old scanners do not support the interface.
After scanning you can call a web service to notify the server and the web page can refresh periodically to check new images.
We have done something similar. We used a command-line TWAIN program, QuickScan (http://www.burrotech.com/quickscan.php), which costs $49.
1) We developed a small .Net application to run the QuickScan program as a shell command.
2) The command was assigned to the Scan button.
3) Once the user presses the Scan button, a prompt appears to enter the file name. The user saves the transaction ID as the file name.
4) Another .NET application (or maybe the same one mentioned before) will read this file and upload it into the database, using the filename as the transaction ID.
Worked like a hot knife through butter!
You can try displaying the transaction ID in IE and having the user select the ID and then press Scan. Your application would read the SELECTED text and save the file using the SELECTED text as the file name. We haven't tried it, but it should work.
It is only utopia if you think that web applications are limited to web browsers; in fact, web applications can include a lot of different technologies besides HTML and JavaScript.
The cool way of solving that problem -- in fact, I have already used it for some USB-serial devices -- is to implement your application using SOAP+XMPP. You can do that in Perl by using XML::CompileX::Transport::SOAPXMPP, Catalyst::Engine::XMPP2, Catalyst::Controller::SOAP and Catalyst::Model::SOAP.
The interesting thing about using XMPP is that it simplifies the management of addressing, since you use the JID (Jabber ID) to look for the software agent, not some host+port addressing scheme. The second interesting part of using XMPP is that it more easily supports the server pushing information to the client.
But if you don't want to handle XMPP, you can still do the same thing with a lightweight embedded HTTP server -- HTTP::Server::Simple, in Perl -- and somehow register the current scanner address with the server so it can call back.
A last option, which is not so elegant, is to have the software agent poll the server to see when there is a "scan document and upload" order for that specific machine, and perform that operation when one is present.
In summary, having a local software agent to interact with the local hardware doesn't make your webapp less "web", as long as you use web standards -- like XML, SOAP and others -- to perform that communication.
You can put a Java applet in your website. This can access the scanner and send the data via REST to your web server.

What's the best way to notify a non-web application about a change on a web page?

Let's say I have two applications which have to work together to a certain extent.
A web application (PHP, Ruby on Rails, ...)
A desktop application (Java, C++, ...)
The desktop application has to be notified from the web application and the delay between sending and receiving the notification must be short. (< 10 seconds)
What are possible ways to do this? I can think of polling at a 10-second interval, but that would produce a lot of traffic if many desktop applications have to be notified. On a LAN I'd use a UDP broadcast, but unfortunately that's not possible here...
I appreciate any ideas you could give me.
I think the "best practice" here will depend on the number of desktop clients you expect to serve. If there's just one desktop to be notified, then polling may well be a fine approach -- yes, polling is much more overhead than an event-based notification, but it'll certainly be the easiest solution to implement.
If the overhead of polling is truly unacceptable, then I see two basic alternatives:
Keep a persistent connection open between the desktop and web-server (could be a "comet"-style web request, or a raw socket connection)
Expose a service from within the desktop app, and register the address of the service with the web-server. This way, the web-server can call out to the desktop as needed.
Be warned, though -- both alternatives are chock full of gotchas. A few highlights:
Keeping a connection open can be tricky, since you want your web-servers to be hot-swappable
Calling out to an external service (e.g., your desktop) from a web server is dangerous, because this request could hang. You'd want to move this notification onto a separate thread to avoid tying up the web server.
To mitigate some of the concerns, you might decouple the unreliable desktop from the web-server by introducing an intermediary notification server -- the web-server could post an update somewhere, and the desktop could poll/connect/register there to be notified. To avoid reinventing the wheel here, this could involve some sort of MessageQueue system... This, of course, adds the complexity of needing to maintain the new intermediary.
Again, all of these approaches are probably quite complex, so I'd say polling is probably the best bet.
I can see two ways:
Your desktop application polls the web app
Your web app notifies the desktop application
Your web app could publish an RSS feed, but your desktop app will still have to poll the feed every 10 s.
The traffic need not be huge: if you use an HTTP HEAD request, you'll get a small packet with the date of the last modification (conveniently named Last-Modified).
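A rough sketch of that cheap polling loop in TypeScript (the feed URL is a placeholder; fetch is available in the browser or in Node 18+):

```typescript
// Sketch of the cheap-polling idea: an HTTP HEAD request every 10 seconds,
// comparing the Last-Modified header against the previously seen value.
const FEED_URL = "https://example.com/updates.rss"; // placeholder feed
let lastModified: string | null = null;

async function checkForChanges(): Promise<void> {
  const response = await fetch(FEED_URL, { method: "HEAD" });
  const modified = response.headers.get("Last-Modified");

  if (modified && modified !== lastModified) {
    lastModified = modified;
    // The feed changed: now issue a full GET and notify the desktop app.
    console.log("Change detected at", modified);
  }
}

// Poll every 10 seconds, as suggested in the answer above.
setInterval(() => {
  checkForChanges().catch((err) => console.error("Poll failed:", err));
}, 10_000);
```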
I don't know exactly how to achieve your task, but I can suggest creating a Windows service on the desktop application's PC.
This service checks the web application at regular intervals for new changes; if a change has occurred, it can launch the desktop application with a notification that something changed in the web application. In the web application, whenever a change occurs, you can respond with an acknowledgment.
I hope this is useful. I haven't tried it exactly, but I am suggesting something along these lines.
A layer of syndication would help to scale out the system.
The desktop app can register itself with a "publisher" service (running on one of several/many machines). This publisher service receives the "notice" from your web app that something has changed, and immediately starts notifying all of its registered subscribers.
The number of publishers you need will increase with the number of users.
Edit: Forgot to mention that the desktop app will need to listen on a socket.