Filter driver vs Direct Serial Communication - reverse-engineering

I am reverse engineering a serial communication protocol. After I decode the bits, I am supposed to write an interface for it. My choices come down to writing a filter driver to intercept the data coming in and going out, or just using basic direct serial communication. Is there any advantage to using one method over the other?

Well, there's a big difference:
Debugging your filter driver requires a kernel debugger. Options to stop a debug session in flight and edit+build your code are limited and typically require an operating system reboot. Debugging user mode code is trivial.
A bug in your filter driver will crash the operating system. A bug in your user mode code only crashes the program.
Deploying your filter driver normally requires an installer. User mode code is simply linked into the program at build time.
These are pretty grave disadvantages. About the only advantage of a filter driver that I can think of is that the code is completely invisible to the user mode programmer. This is, however, also a liability: there's very little this programmer can do to help you with diagnostic information when the filter driver misbehaves.
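For comparison, here is roughly what the user-mode side looks like. A minimal sketch, assuming a POSIX system; the device path (/dev/ttyS0), the 9600-8N1 settings and the probe bytes are placeholders for whatever the reverse-engineered protocol actually uses:

```c
/* Minimal user-mode serial I/O sketch (POSIX).
 * Device path, baud rate and the probe frame are assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) { perror("tcgetattr"); return 1; }
    cfmakeraw(&tio);                 /* raw mode: no echo, no line editing */
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tio.c_cc[VMIN]  = 1;             /* block until at least one byte arrives */
    tio.c_cc[VTIME] = 0;
    if (tcsetattr(fd, TCSANOW, &tio) < 0) { perror("tcsetattr"); return 1; }

    const unsigned char probe[] = { 0x02, 0x10, 0x03 };   /* made-up request frame */
    if (write(fd, probe, sizeof probe) < 0) perror("write");

    unsigned char buf[64];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0)
        printf("got %zd byte(s), first = 0x%02x\n", n, buf[0]);

    close(fd);
    return 0;
}
```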

Related

Detect Virtual Machine with AS3

Is it possible to detect that I am running under Virtual Machine from the code of Action Script3 (or lower)?
I know there is a Capabilities class that provides some info about the system, but it doesn't seem to fit my needs fully.
Perhaps there is no such flag anywhere, but can I count on some indirect information to tell that the environment is virtualized?
Regards,
Sergey.
I doubt that checking this would be easy, especially from such an isolated environment as a browser plugin sandbox. Modern virtual machines can emulate everything very accurately, down to CPU registers, and it is often hard to detect a VM even when you can access regular OS APIs. And when running in a browser you don't even have that (with AIR, you could write a native extension). You can check this thread for some info that may prove useful.

Threading: Running USB keyboard on ARM Cortex A8

I am new to embedded Linux. I want to read my USB keyboard using threading. I know the concept of threading, but I want to know how I can detect keyboard input using threads.
If you don't wish to use non-blocking I/O to read the keyboard, you can use a thread which does a blocking read and signals your main thread (or sets a flag which it can poll) when input is available.
In addition to blocking, you may have to contend with the terminal defaulting to line (canonical) mode.
A common choice for polling or responding to single characters in a single-threaded program is to change the terminal mode settings - see the man pages for termios, stty, etc. You will, however, need to change them back when your program exits.
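A minimal sketch of that approach, assuming stdin is a terminal: save the current termios settings, switch off canonical mode and echo, and restore the saved settings on exit.

```c
/* Put the controlling terminal into non-canonical mode and restore it on exit.
 * Sketch only: assumes stdin is a terminal. */
#include <stdio.h>
#include <stdlib.h>
#include <termios.h>
#include <unistd.h>

static struct termios g_saved;

static void restore_terminal(void)
{
    tcsetattr(STDIN_FILENO, TCSANOW, &g_saved);   /* put line mode back */
}

int main(void)
{
    if (tcgetattr(STDIN_FILENO, &g_saved) < 0) { perror("tcgetattr"); return 1; }
    atexit(restore_terminal);

    struct termios raw = g_saved;
    raw.c_lflag &= ~(ICANON | ECHO);   /* byte-at-a-time, no echo */
    raw.c_cc[VMIN]  = 1;
    raw.c_cc[VTIME] = 0;
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);

    int c = getchar();                 /* returns as soon as one key is pressed */
    printf("\nread character code %d\n", c);
    return 0;                          /* atexit handler restores the terminal */
}
```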
Another option would be to skip the whole terminal infrastructure and read the input events directly through /dev/input/. Or at an extreme you could skip the USB HID driver and write your own kernel driver for USB keyboards.
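Combining the two previous suggestions, here is a sketch of a dedicated thread doing blocking reads of raw key events. The /dev/input/event0 path is an assumption; the actual event number depends on your system.

```c
/* Reader thread doing blocking reads of raw key events (Linux evdev).
 * /dev/input/event0 is an assumption; the actual event node varies. */
#include <fcntl.h>
#include <linux/input.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *keyboard_thread(void *arg)
{
    const char *dev = arg;
    int fd = open(dev, O_RDONLY);        /* blocking by default */
    if (fd < 0) { perror("open"); return NULL; }

    struct input_event ev;
    while (read(fd, &ev, sizeof ev) == sizeof ev) {
        if (ev.type == EV_KEY && ev.value == 1)          /* key press */
            printf("key code %d pressed\n", ev.code);    /* or signal the main thread */
    }
    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, keyboard_thread, "/dev/input/event0");
    /* the main thread keeps doing its own work here */
    pthread_join(tid, NULL);
    return 0;
}
```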
If I understood you correctly, you have an embedded Linux board and now you want to connect a USB keyboard and use it with applications on the board? If so, you don't need to do anything with threading. What you need is to have drivers installed for that keyboard. For that, you should look into the kernel build config to see whether the USB keyboard (HID) drivers are enabled.
You can use either of the following, based on your requirement:
blocking I/O
non-blocking I/O
For blocking I/O, a multi-threading based approach (a dedicated reader thread) works well. For non-blocking I/O, try an epoll() system call based approach, sketched below.
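For illustration, a rough epoll()-based sketch; the /dev/input/event0 path is an assumption and the loop body is just a placeholder for your own processing:

```c
/* Non-blocking keyboard read multiplexed with epoll (Linux).
 * /dev/input/event0 is an assumption; the real event node varies. */
#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/input/event0", O_RDONLY | O_NONBLOCK);
    if (fd < 0) { perror("open"); return 1; }

    int ep = epoll_create1(0);
    if (ep < 0) { perror("epoll_create1"); return 1; }

    struct epoll_event ev;
    ev.events = EPOLLIN;
    ev.data.fd = fd;
    if (epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev) < 0) { perror("epoll_ctl"); return 1; }

    for (;;) {
        struct epoll_event ready;
        int n = epoll_wait(ep, &ready, 1, 1000);    /* wake up at least once a second */
        if (n < 0) break;
        if (n == 0) continue;                       /* timeout: do other work here */

        struct input_event ie;
        while (read(fd, &ie, sizeof ie) == (ssize_t)sizeof ie)
            if (ie.type == EV_KEY && ie.value == 1) /* key press */
                printf("key code %d pressed\n", ie.code);
    }

    close(ep);
    close(fd);
    return 0;
}
```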
Regarding the method to detect the keyboard, you can try the following:
Use the file /proc/bus/input/devices to list the input devices; note that on some systems it is not updated until you reboot.
Use /dev/input/eventN and the ioctl() call to query the event bits. The event interface is very useful as it exposes the raw events to userspace.
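A rough sketch of the second method: open an event node (assumed here to be /dev/input/event0) and query its event bits with EVIOCGBIT to see whether it reports key events.

```c
/* Detect whether an event device produces key events (Linux evdev).
 * The event node path is an assumption. */
#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/input/event0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    unsigned long evbits[(EV_MAX + 8 * sizeof(long) - 1) / (8 * sizeof(long))];
    memset(evbits, 0, sizeof evbits);
    if (ioctl(fd, EVIOCGBIT(0, sizeof evbits), evbits) < 0) {
        perror("ioctl");
        return 1;
    }

    int has_keys = evbits[EV_KEY / (8 * sizeof(long))] &
                   (1UL << (EV_KEY % (8 * sizeof(long))));
    printf("device %s key events\n", has_keys ? "reports" : "does not report");

    char name[64] = "unknown";
    ioctl(fd, EVIOCGNAME(sizeof name), name);   /* human-readable device name */
    printf("name: %s\n", name);

    close(fd);
    return 0;
}
```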

Change or override the behavior of a USB keyboard to a more generic controller

Changing the keystrokes of a USB keyboard
My question is very similar to the one above... (to which there was no clear answer)
I have a nice USB keyboard that I would like to use to control an audio/visual program I am writing. I can't have the USB keyboard input interfere with the regular operating system interface.
Therefore I need to have the OS recognize it as a generic HID device of some kind, or a MIDI device, or something that sends OSC messages.
I am writing the program on OSX but would like to figure out a cross platform solution that doesn't involve me hacking the hardware of the keyboard - hopefully some sort of program or script that I can use. The reason for this is I'd like to distribute this program for others to use easily.
Any ideas on where to start? I'm thinking I'll probably need to write a separate program for users to select a USB device and reroute that into my program...
Any language is fine - I write code in Python, sometimes C, and Java / Processing.
Unfortunately you're going to find this EXTREMELY difficult to do: most modern operating systems will automatically detect the HID profile and load the drivers for it, and generally speaking make it very difficult to override that default behavior.
Without hacking the hardware you would need to somehow override the OS's default behavior for that specific USB VID (vendor ID) and PID (product ID) and instruct the OS to load your own custom kernel extension instead. I'd suggest starting with the source of the AppleUSBKeyboard driver at http://www.opensource.apple.com/source/IOUSBFamily/IOUSBFamily-206.4.1/AppleUSBKeyboard/ and then figuring out how to install your custom build as the preferred USB driver for your specific keyboard's VID and PID. After that there is still the messy, messy, messy issue of sending the keys only to your app and not to anything else.
Would it be possible to write a function that disables regular operating system keyboard input for all keys except something like ESC if my program is in focus? – jeffrey May 24 at 21:59
Yes, this should be quite possible. I don't know about Mac, but on Win32 you want a global keyboard hook (look up SetWindowsHookEx).
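As a starting point, here is a minimal Win32 sketch of that idea using a low-level keyboard hook; it swallows every key except ESC while it is installed. A real program would additionally check whether its own window currently has focus, which this sketch does not do.

```c
/* Low-level keyboard hook that swallows every key except ESC (Win32).
 * Sketch only; a real program would also check whether its own window has focus. */
#include <windows.h>

static HHOOK g_hook;

static LRESULT CALLBACK KeyboardProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION) {
        const KBDLLHOOKSTRUCT *kb = (const KBDLLHOOKSTRUCT *)lParam;
        if (kb->vkCode != VK_ESCAPE)
            return 1;            /* non-zero blocks the keystroke from other apps */
        /* ESC (and anything else you whitelist) falls through normally */
    }
    return CallNextHookEx(g_hook, code, wParam, lParam);
}

int main(void)
{
    g_hook = SetWindowsHookEx(WH_KEYBOARD_LL, KeyboardProc,
                              GetModuleHandle(NULL), 0);
    if (!g_hook)
        return 1;

    MSG msg;                     /* the hook needs a message loop on this thread */
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }

    UnhookWindowsHookEx(g_hook);
    return 0;
}
```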

What does "headless" mean?

While reading the QTKit Application Programming Guide I came across the term 'headless environments' - what does this mean? Here is the passage:
...including applications with a GUI and tools intended to run in a “headless” environment. For example, you can use the framework to write command-line tools that manipulate QuickTime movie files.
"Headless" in this context simply means without a graphical display. (i.e.: Console based.)
Many servers are "headless" and are administered over SSH for example.
Headless means that the application is running without a graphical user interface (GUI) and sometimes without user interface at all.
There are similar terms for this, used in slightly different contexts. Here are some examples.
Headless / Ghost / Phantom
This term is mostly used for heavyweight clients. The idea is to run a client in a non-graphical mode, for example from a command line. The client then runs until its task is finished, or interacts with the user through a prompt.
Eclipse, for instance, can be run in headless mode. This mode comes in handy for running jobs in the background or on a build farm.
For example, you can run Eclipse in graphical mode to install plugins. This is OK if you just do it for yourself. However, if you're packaging Eclipse to be used by the devs of a large company and want to keep up with all the updates, you probably want to find a more reproducible, automated, easier way.
That's when the headless mode comes in: you can run Eclipse from the command line with parameters that indicate which plugins to install.
The nice thing about this method is that it can be integrated into a build farm!
Faceless
This term is mostly used for larger-scale applications. It was coined by UX designers. A faceless app interacts with users through channels traditionally dedicated to human communication, like email, SMS, phone... but NOT a GUI.
For example, some companies use SMS as an entry point for dialog with users: the user sends an SMS containing a request to a certain number. This triggers automated services that run and reply to the user.
It's a nice user experience, because one can run some errands from one's phone. You don't necessarily need an internet connection, and the interaction with the app is asynchronous.
On the back-end side, the service can decide that it does not understand the user's request and drop out of the automated mode. The user then enters an interactive mode with a human operator without changing their communication tool.
You most likely know what a browser is. Now take away the GUI, and you have what’s called a headless browser. Headless browsers can do all of the same things that normal browsers do, often faster because nothing has to be rendered on screen. They’re great for automating and testing web pages programmatically.
Headless can also refer to a browser or program that doesn't require a GUI. It isn't really meant for a person to look at; it just passes information, in the form of code, to another program.
So why would one use a headless program?
Simply because it improves speed and performance and is available to all users, including those without access to a graphics card. It allows testing browserless setups and helps you multitask.
Guide to Headless Browser
Headless architecture
In software development, this is an architectural design that completely separates the back end from the front end. The front end (the GUI or UI) is a standalone piece and communicates with the back end through an API. This allows for a multi-server architecture, flexibility in the software stack, and performance optimization.

Is it possible to run "native" code on top of a managed OS?

I was reading up on Midori and kinda started wondering if this is possible.
On a managed OS, "managed code" is going to be native, and "native code" is going to be...alien? Is it possible, at least theoretically, to run the native code of today on a managed OS?
First, you should start by defining "managed" and "native". On a "managed" OS like Midori, the kernel is still ngen-ed (precompiled to machine code), instead of being jit-compiled from IL. So, I would rule that out as a distinction between "managed" and "native".
There are two other distinctions between "managed" and "native" code that come to my mind - code verifiability and resource management.
Most "native" code is unverifiable, so a "managed" OS loader might refuse to even load "native" images. Of course, it is possible to produce verifiable "native" code, but that imposes a lot of limitations and in essence is no different from "managed" code.
Resources in a "managed" OS would be managed by the OS, not the app. "Native" code usually allocates and cleans up its own resources. What would happen with a resource that was allocated by an OS API and handed to the "native" code? Or vice versa? There would have to be quite clear rules on who does the resource management and cleanup, and when. For security reasons, I can't imagine the OS giving the "native" code direct control over any resources besides the process's virtual memory. Therefore, the only reason to go "native" would be to implement your own memory management.
Today's "native" code won't play by any of the rules above. Thus, a "managed" OS should refuse to execute it directly. The "managed" OS might, though, provide a virtualization layer like Hyper-V and host the "native" code in a virtual machine.
By managed I assume you mean the code runs in an environment which does some checks on the code for type safety, safe memory access, etc., and by native the opposite. Now it's this execution environment that determines whether it can allow native code to run without being verified. Look at it this way: the OS and the application on top of it both need an execution environment to run in. Their only relationship is that the application calls the underlying OS for lower-level tasks; but in calling the OS, the application is actually being executed by its execution environment (which may or may not support code verification, depending on, say, options passed when compiling the code), and when control is transferred to the OS, an execution environment is again responsible for executing the OS code (this might be another environment altogether), in which case it verifies the OS code (because it's a managed OS).
So, theoretically, native code may or may not run on a managed OS. It all depends on the behaviour of the execution environment in which it's running. Whether the OS is managed or not does not by itself decide the question. If the top application and the OS both share the same (managed) execution environment, then the native code will not run on the OS.
Technically, a native code emulator can be written in managed code, but it's not running on bare hardware.
I doubt any managed OS that relies on software verification to isolate access to shared resources (such as Singularity) allows running unmanaged code directly, since such code might be able to bypass all the protections provided by the software (unlike normal OSes, some managed OSes don't rely on protection techniques provided by the hardware).
From the MS Research paper Singularity: Rethinking the Software Stack (p9):
A protection domain could, in principle, host a single process containing unverifiable code written in an unsafe language such as C++. Although very useful for running legacy code, we have not yet explored this possibility. Currently, all code within a protection domain is also contained within a SIP, which continues to provide an isolation and failure containment boundary.
So it seems like, though unexplored at the moment, it is a distinct possibility. Unmanaged code could run in a hardware protected domain, it would take a performance hit from having to deal with virtual memory, the TLB, etc. but the system as a whole could maintain its invariants safely while running unmanaged code.