Threading: Running a USB keyboard on ARM Cortex-A8 embedded Linux

I am new to embedded Linux. I want to use my USB keyboard using threading. I know the concept of threading, but I want to know how I can detect keyboard input using threading.

If you don't wish to use non-blocking I/O to read the keyboard, you can use a thread which does a blocking read and signals your main thread (or sets a flag which it can poll) when input is available.
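A minimal sketch of that thread-based approach with POSIX threads, assuming the keyboard is read from standard input; the flag and buffer names are illustrative, and a real program would protect them with a mutex or use atomics:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Set by the reader thread when a key has arrived; polled by main. */
static volatile int key_available = 0;
static volatile char key_buf;

static void *keyboard_reader(void *arg)
{
    (void)arg;
    char c;
    /* Blocking read: this thread sleeps until the keyboard produces data. */
    while (read(STDIN_FILENO, &c, 1) == 1) {
        key_buf = c;
        key_available = 1;
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, keyboard_reader, NULL);

    for (;;) {
        if (key_available) {        /* main thread polls the flag */
            key_available = 0;
            printf("got key: %c\n", key_buf);
        }
        usleep(10000);              /* ...do the rest of the application's work here */
    }
    return 0;
}

Build with -pthread; the main loop stays free to do other work while the reader thread blocks.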
In addition to blocking, you may have to contend with the terminal's default line (canonical) mode, which buffers input until Enter is pressed.
A common choice for polling or responding to single characters in a single-threaded program is to change the terminal mode settings - see the man pages for termios, stty, etc. You will, however, need to change them back when your program exits.
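A minimal termios sketch that turns off canonical mode and echo, then restores the original settings on exit (assuming the keyboard is the controlling terminal on stdin):

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios orig, raw;

    /* Save the current settings so they can be restored on exit. */
    tcgetattr(STDIN_FILENO, &orig);
    raw = orig;

    /* Disable canonical (line) mode and echo; return after a single byte. */
    raw.c_lflag &= ~(ICANON | ECHO);
    raw.c_cc[VMIN]  = 1;
    raw.c_cc[VTIME] = 0;
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);

    int c;
    while ((c = getchar()) != 'q')      /* 'q' quits in this sketch */
        printf("got: %d\n", c);

    /* Restore the original terminal mode before exiting. */
    tcsetattr(STDIN_FILENO, TCSANOW, &orig);
    return 0;
}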
Another option would be to skip the whole terminal infrastructure and read the input events directly through /dev/input/. Or at an extreme you could skip the USB HID driver and write your own kernel driver for USB keyboards.
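A minimal sketch of the event-interface approach; /dev/input/event0 is illustrative, as the keyboard may show up on a different eventN node:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
    /* /dev/input/event0 is illustrative; the keyboard may be another eventN node. */
    int fd = open("/dev/input/event0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct input_event ev;
    while (read(fd, &ev, sizeof ev) == (ssize_t)sizeof ev) {
        if (ev.type == EV_KEY)          /* value: 1 = press, 0 = release, 2 = autorepeat */
            printf("key code %u, value %d\n", (unsigned)ev.code, ev.value);
    }
    close(fd);
    return 0;
}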

If I understood you correctly, you have an embedded Linux board and now you want to connect a USB keyboard and use it with applications on the board? If so, you don't need to do anything with threading. What you need is to have drivers installed for that keyboard. For that, look into the kernel build config to see whether the USB keyboard (HID) drivers are enabled.

You can use either of the following, based on your requirement:
blocking I/O
non-blocking I/O
For blocking I/O, try a multi-threading based approach (a thread doing blocking reads per device). For non-blocking I/O, try an epoll() system call based approach, as sketched below.
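A minimal epoll() sketch that waits for a file descriptor to become readable; it watches stdin here, but the same pattern applies to a /dev/input/eventN descriptor opened with O_NONBLOCK:

#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>

int main(void)
{
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = STDIN_FILENO };
    epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev);

    for (;;) {
        struct epoll_event ready;
        int n = epoll_wait(epfd, &ready, 1, -1);   /* block until a descriptor is readable */
        if (n <= 0)
            break;

        char buf[64];
        ssize_t len = read(ready.data.fd, buf, sizeof buf);
        if (len <= 0)
            break;
        printf("read %zd bytes\n", len);
    }
    close(epfd);
    return 0;
}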
Regarding how to detect the keyboard, you can try the following:
Use the file /proc/bus/input/devices to list the input devices, though on some systems it does not get updated until you reboot.
Detect using /dev/input/eventN and the ioctl() call to query the event bits. The event interface is very useful as it exposes the raw events to userspace; see the sketch below.
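A minimal sketch of that second method, probing an eventN node with EVIOCGNAME and EVIOCGBIT to see whether it reports key events (the device path is illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>

int main(void)
{
    char name[256] = "unknown";
    unsigned long evbits = 0;

    /* The device path is illustrative; iterate over /dev/input/event* in practice. */
    int fd = open("/dev/input/event0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    ioctl(fd, EVIOCGNAME(sizeof name), name);          /* human-readable device name */
    ioctl(fd, EVIOCGBIT(0, sizeof evbits), &evbits);   /* supported event types */

    printf("%s: %s key events\n", name,
           (evbits & (1UL << EV_KEY)) ? "reports" : "does not report");
    close(fd);
    return 0;
}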

Related

Detect Virtual Machine with AS3

Is it possible to detect that I am running under a virtual machine from ActionScript 3 (or lower) code?
I know there is the Capabilities class that provides some info about the system, but it doesn't seem to fit my needs fully.
Perhaps there is no such flag anywhere, but maybe I can rely on some indirect information to work out that the environment is virtualized?
Regards,
Sergey.
I doubt that checking this would be easy, especially from such an isolated environment as a browser plugin sandbox. Modern virtual machines can emulate everything very accurately, down to CPU registers, and it is often hard to detect a VM even when you can access regular OS APIs. When running in a browser you don't even have that (with AIR, you could write a native extension). You can check this thread for some info that may prove useful.

Filter driver vs Direct Serial Communication

I am reverse engineering a serial communication protocol. After I decode the bits, I am supposed to write an interface for it. My choices are split between writing a filter driver to intercept the data coming in and going out, or just having basic direct serial communication. Is there any advantage to one method over the other?
Well, there's a big difference:
Debugging your filter driver requires a kernel debugger. Options to stop a debug session in flight and edit+build your code are limited and typically require an operating system reboot. Debugging user mode code is trivial.
A bug in your filter driver will crash the operating system. A bug in your user mode code only crashes the program.
Deploying your filter driver normally requires an installer. User mode code is simply linked into the program at build time.
These are pretty grave disadvantages. About the only advantage of a filter driver that I can think of is that the code is completely invisible to the user mode programmer. This is however also a liability, there's very little this programmer can do to help you with diagnostic information when the filter driver misbehaves.

Change or override the behavior of a USB keyboard to a more generic controller

Changing the keystrokes of a USB keyboard
My question is very similar to the one above... (to which there was no clear answer)
I have a nice USB keyboard that I would like to use to control an audio/visual program I am writing. I can't have the USB keyboard input interfere with the regular operating system interface.
Therefore I need to have the OS recognize it as a generic HID device of some kind, or a MIDI device, or something that sends OSC messages.
I am writing the program on OSX but would like to figure out a cross platform solution that doesn't involve me hacking the hardware of the keyboard - hopefully some sort of program or script that I can use. The reason for this is I'd like to distribute this program for others to use easily.
Any ideas on where to start? I'm thinking I'll probably need to write a separate program for users to select a USB device and reroute that into my program...
Any language is fine - I write code in Python, sometimes C, and Java / Processing.
Unfortunately you're going to find this EXTREMELY difficult to do: most modern operating systems will automatically detect the HID profile and load the drivers for it, and generally speaking make it very difficult to override that default behavior.
Without hacking the hardware, you would need to somehow override the OS's default behavior for that specific USB VID (vendor ID) and PID (product ID) and instruct the OS to load your own custom kernel extension instead. I'd suggest starting with the source of the AppleUSBKeyboard drivers at http://www.opensource.apple.com/source/IOUSBFamily/IOUSBFamily-206.4.1/AppleUSBKeyboard/ and then figuring out how to install your custom build as the preferred USB driver for your specific keyboard's VID and PID. After that there is still the messy, messy, messy issue of only sending the keys to your app and not to anything else.
Would it be possible to write a function that disables regular operating system keyboard input for all keys except something like ESC if my program is in focus? – jeffrey May 24 at 21:59
Yes, this should be quite possible. I don't know about Mac, but on Win32 you want a global keyboard hook (look up SetWindowsHookEx).
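A minimal Win32 sketch of a global low-level keyboard hook that swallows every key except ESC; restricting it to "only while my program is in focus" would additionally need a check such as GetForegroundWindow() inside the hook procedure:

#include <windows.h>

static HHOOK g_hook;

/* Low-level keyboard hook: swallow every key except ESC. A real program
 * would also check GetForegroundWindow() so this only applies when the
 * program is in focus. */
static LRESULT CALLBACK KeyboardProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION) {
        const KBDLLHOOKSTRUCT *kb = (const KBDLLHOOKSTRUCT *)lParam;
        if (kb->vkCode != VK_ESCAPE)
            return 1;                   /* non-zero blocks the keystroke */
    }
    return CallNextHookEx(g_hook, nCode, wParam, lParam);
}

int main(void)
{
    g_hook = SetWindowsHookEx(WH_KEYBOARD_LL, KeyboardProc,
                              GetModuleHandle(NULL), 0);

    /* A message loop is required for the hook callback to run. */
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    UnhookWindowsHookEx(g_hook);
    return 0;
}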

Is it possible to run "native" code on top of a managed OS?

I was reading up on Midori and kinda started wondering if this is possible.
On a managed OS, "managed code" is going to be native, and "native code" is going to be...alien? Is it possible, at least theoretically, to run the native code of today on a managed OS?
First, you should start by defining "managed" and "native". On a "managed" OS like Midori, the kernel is still ngen-ed (precompiled to machine code), instead of being jit-compiled from IL. So, I would rule that out as a distinction between "managed" and "native".
There are two other distinctions between "managed" and "native" code that come to my mind - code verifiability and resource management.
Most "native" code is unverifiable, thus a "managed" OS loader might refuse to even load "native" images. Of course, it is possible to produce verifiable "native" code, but that imposes a lot of limitations and in essence is no different from "managed" code.
Resources in a "managed" OS would be managed by the OS, not the app. "Native" code usually allocates and cleans up its own resources. What would happen with a resource that was allocated by an OS API and handed to the "native" code? Or vice versa? There would have to be quite clear rules on who does the resource management and cleanup, and when. For security reasons, I can't imagine the OS giving the "native" code direct control over any resources besides the process virtual memory. Therefore, the only reason to go "native" would be to implement your own memory management.
Today's "native" code won't play by any of the rules above. Thus, a "managed" OS should refuse to execute it directly. Though, the "managed" OS might provide a virtualization layer like Hyper-V and host the "native" code in a virtual machine.
By managed I assume you mean the code runs in an environment which does some checks on the code for type safety, safe memory access, etc., and by native, the opposite. Now it's this execution environment that determines whether it can allow native code to run without being verified. Look at it this way: the OS and the application on top of it both need an execution environment to run in. Their only relationship is that the top application calls the underlying OS for lower-level tasks, but in calling the OS it's actually being executed by the execution environment (which may or may not support code verification, depending on, say, options passed when compiling the code). When control is transferred to the OS, the execution environment is again responsible for executing the OS code (this might be another environment altogether), in which case it verifies the OS code (because it's a managed OS).
So, theoretically, native code may or may not run on a managed OS. It all depends on the behaviour of the execution environment in which it's running; whether the OS is managed or not will not by itself determine this. If the top application and the OS both have the same (managed) execution environment, then the native code will not run on the OS.
Technically, a native code emulator can be written in managed code, but it's not running on bare hardware.
I doubt any managed OS that relies on software verification to isolate access to shared resources (such as Singularity) allows running unmanaged code directly since it might be able to bypass all the protections provided by the software (unlike normal OSes, some managed OSes don't rely on protection techniques provided by hardware).
From the MS Research paper Singularity: Rethinking the Software Stack (p9):
A protection domain could, in principle, host a single process containing unverifiable code written in an unsafe language such as C++. Although very useful for running legacy code, we have not yet explored this possibility. Currently, all code within a protection domain is also contained within a SIP, which continues to provide an isolation and failure containment boundary.
So it seems like, though unexplored at the moment, it is a distinct possibility. Unmanaged code could run in a hardware-protected domain; it would take a performance hit from having to deal with virtual memory, the TLB, etc., but the system as a whole could maintain its invariants safely while running unmanaged code.

Design a networking application

Problem:
I need to design a networking application that can handle network failures and switch to another network in case of failure.
Suppose I am using three Ethernet connections and also one wireless connection. At any particular moment only one connection is in use.
How should I design my system so that it can switch to another network in case of failure?
I know this is very broad question but any pointers will help!
I'd typically make sure that there's routing on the network and run one (or more) routing protocol instances on the host. That way network failure is (mostly) transparent to the application, as the host OS takes care of sending packets the right way.
On the open-source side, I have good experiences with zebra and quagga, at least on Linux machines.
Create a domain model for this, describing the network elements, the kind of failures you want to be able to detect and handle, and demonstrate that it works. Then plug in the network code.
Have one class polling for the connection. If the poll timeout fires, switch the Ethernet settings. For wireless, set the Wi-Fi settings to autoconnect and then just enable/disable the Wi-Fi card.
(But I don't know how you switch the Ethernet connection.)
First thing I would do is look for APIs that will give me network disconnection events.
I'd also find a way to check the state of the network connections.
These would vary depending on the OS and the language used, so you might want to have this abstracted in your application.
Example:
RegisterDisconnectionEvent(DisconnectionHandler);
function DisconnectionHandler()
{
    FindActiveNetworkConnection();
    // do something else...
}
A primitive way to do it would be to look out for network disconnection events. Your sequence would be:
Register/poll for network connection status changes. Maintain a list of all active network connections.
Use the first available network connection. (Alternatively, you could sort them based on interface bandwidth and use the one with the highest bandwidth.)
When you detect a down connection, use the next active one.
However, if there are implications for the functionality of your application based on which network connection you use, you are much better off having either a routing protocol do the job for you, or a tracking component within your application. This tracking component would track network paths (through methods like ping, traceroute, etc.) across all your available interfaces to see which one can reach the ultimate destination, and use the appropriate network interface.
Also, you could monitor your network interfaces for not just status changes, but also for input/output errors, and change your selection accordingly. This would help you use the most efficient network at any given point of time. But this would need to be balanced with the churn caused by switching a network connection.
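A minimal Linux sketch of the first step above, enumerating interfaces with getifaddrs() and picking the first non-loopback one that is up and running (the selection policy here is illustrative):

#include <stdio.h>
#include <ifaddrs.h>
#include <net/if.h>          /* IFF_UP, IFF_RUNNING, IFF_LOOPBACK */
#include <sys/socket.h>

/* Return 1 and copy the name of the first non-loopback interface that is
 * up and running; return 0 if none is found. */
static int find_active_interface(char *name, size_t len)
{
    struct ifaddrs *list, *ifa;
    int found = 0;

    if (getifaddrs(&list) == -1)
        return 0;

    for (ifa = list; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_INET)
            continue;
        if (ifa->ifa_flags & IFF_LOOPBACK)
            continue;
        if ((ifa->ifa_flags & (IFF_UP | IFF_RUNNING)) != (IFF_UP | IFF_RUNNING))
            continue;
        snprintf(name, len, "%s", ifa->ifa_name);
        found = 1;
        break;
    }
    freeifaddrs(list);
    return found;
}

int main(void)
{
    char ifname[64];
    if (find_active_interface(ifname, sizeof ifname))
        printf("active interface: %s\n", ifname);
    else
        printf("no active interface found\n");
    return 0;
}

Calling this periodically (or after a disconnection event) gives you the "next active connection" to fall back to.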
If you control all of the involved hosts, Multipath TCP will probe all of your connections and automatically choose the one that works; if multiple connections are working, it will load balance across them.
If you don't control the endpoints, there's no choice but doing the probing in the application. Mosh is an example of an application that does that quite elegantly.
You didn't mention what your application does; perhaps it would be possible to redesign your protocol so that it uses all available connections simultaneously, the way BitTorrent does, and therefore doesn't care about some links being down at any given time?