I'm reading through my computer organization textbook and have come across a list of events/exceptions, one of them says "Invoke the operating system from user program". What exactly does this mean?
Does this refer to a system call?
The INVOKE directive enables a statement in a program to be processed by the operating system as if it were entered as a command from the terminal. It provides access to most operating system utilities and facilities.
Taken from this link
As the link above says: "Use the INVOKE directive to have the operating system treat a character string from a program as an operating system command from the terminal."
Therefore, in your case, it means transferring control from the user program to the OS, which is exactly what a system call does.
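If it does refer to a system call, as you suspect, here is a minimal Linux/C sketch of invoking the OS from a user program (the syscall() wrapper and SYS_write are Linux-specific; other systems use a different mechanism, but the idea is the same):

/* Invoke the OS from user mode: syscall() traps into the kernel,
 * which services the request and returns to user mode. */
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    /* Equivalent to write(1, ...), spelled as an explicit system call. */
    syscall(SYS_write, 1, "hello from user mode\n", 21);
    return 0;
}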
I have an embedded system which runs firmware and has USB mass storage of size 79 kB. When you plug the device into any computer (Mac/Windows), it shows up as a 79 kB flash drive. The firmware creates files which hold transaction records. The objective is to display these transactions (tables and simple graphs) to the user. I've narrowed the options down to a web browser: the user (with a Mac/Windows PC) can plug in the USB mass storage device, open an HTML file on the drive, and view all the transactions in the form of tables and simple bar graphs. The tricky part comes here: the device (firmware) needs to update its clock, and this time input has to be sourced from the Mac/Windows PC. How can this be achieved?
This is the minimum requirement. Further, through the web browser the user wants to write some configuration parameters, e.g. through a text box and a submit button in the HTML page.
NOTE: The USB mass storage class and the web browser approach were selected here so that there are no prerequisites for the user.
Please suggest an alternative if this can be done using another approach, e.g. a different USB class or some other application available locally on a Mac/Windows desktop/laptop. The application should run on both Mac and Windows, i.e. the code should be the same but can be built into separate packages, one for Mac and the other (.exe) for Windows. Please suggest a platform for this that has the same source but can be built for both Mac and Windows. Thanks!
As far as I know, there is no way a web browser could write to a file. If such a thing were possible, it would be a huge security issue.
You have to write a piece of native software to do all the tasks you name. That can be done in pretty much any programming language, and if you're developing embedded systems I reckon you must have some experience in programming.
I'm looking at doing something similar and have an idea, though you may be better equipped to run with it than I am. Have the device contain a directory called "SET_DATE" with files "YEAR15" through "YEAR99", "MON01" through "MON12", "DATE01" through "DATE31", "H00" through "H23", "M00" through "M59", "S00" through "S59", and "SET"; each such file should start at a different sector, though none of the sectors in question need to contain any data (they need not physically be stored anywhere). To set the date to July 4, 2020 at 12:34:56pm, read the following files in sequence:
SET_DATE/YEAR20
SET_DATE/MON07
SET_DATE/DATE04
SET_DATE/H12
SET_DATE/M34
SET_DATE/S56
SET_DATE/SET
The last access should cause the unit to set its clock. If a user might want to set the clock more than once, that could be accommodated by having a bunch of essentially identical directories under SET_DATE (so setting the date the first time would use SET_DATE/00/YEAR20, the second time SET_DATE/01/YEAR20, etc.), and/or by having the drive unmount/remount itself if necessary to clear out any caching.
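A host-side sketch of that read sequence, using the PC's current clock (the mount point is a hypothetical placeholder; the file names follow the scheme above):

/* Read the magic files in order so the firmware can observe the
 * sector accesses and set its clock from them. */
#include <stdio.h>
#include <time.h>

#define MOUNT "/Volumes/DEVICE"   /* hypothetical mount point */

static void touch(const char *name)
{
    char path[256], buf[1];
    snprintf(path, sizeof path, "%s/SET_DATE/%s", MOUNT, name);
    FILE *f = fopen(path, "rb");
    if (f) { fread(buf, 1, 1, f); fclose(f); }  /* one read hits the sector */
}

int main(void)
{
    time_t now = time(NULL);
    struct tm *t = localtime(&now);
    char name[16];

    snprintf(name, sizeof name, "YEAR%02d", t->tm_year % 100); touch(name);
    snprintf(name, sizeof name, "MON%02d",  t->tm_mon + 1);    touch(name);
    snprintf(name, sizeof name, "DATE%02d", t->tm_mday);       touch(name);
    snprintf(name, sizeof name, "H%02d",    t->tm_hour);       touch(name);
    snprintf(name, sizeof name, "M%02d",    t->tm_min);        touch(name);
    snprintf(name, sizeof name, "S%02d",    t->tm_sec);        touch(name);
    touch("SET");                          /* final read commits the time */
    return 0;
}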
I would think it unwise to have directory fetches trigger actions, since Windows or an anti-virus tool might decide to pre-cache all the directories in a drive when it is mounted. I would not expect Windows or a browser to eagerly load files, however, so I would think one could have read accesses trigger actions.
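On the firmware side, the trick is to hook the mass-storage READ handler and map the requested sector to a clock field. A sketch, where the LBA_* constants are hypothetical (they must match wherever the magic files were placed in the FAT image) and rtc_set() stands in for the device's real RTC driver:

#include <stdint.h>

#define LBA_YEAR0 1000u   /* first sector of YEAR15; files are consecutive */
#define LBA_MON0  1085u
#define LBA_DATE0 1097u
#define LBA_H0    1128u
#define LBA_M0    1152u
#define LBA_S0    1212u
#define LBA_SET   1272u

extern void rtc_set(int y, int mo, int d, int h, int mi, int s);

static struct { int year, mon, day, hour, min, sec; } pending;

/* Called from the SCSI READ handler with the requested logical block. */
void on_scsi_read(uint32_t lba)
{
    if      (lba >= LBA_YEAR0 && lba < LBA_YEAR0 + 85) pending.year = 2015 + (int)(lba - LBA_YEAR0);
    else if (lba >= LBA_MON0  && lba < LBA_MON0  + 12) pending.mon  = 1 + (int)(lba - LBA_MON0);
    else if (lba >= LBA_DATE0 && lba < LBA_DATE0 + 31) pending.day  = 1 + (int)(lba - LBA_DATE0);
    else if (lba >= LBA_H0    && lba < LBA_H0    + 24) pending.hour = (int)(lba - LBA_H0);
    else if (lba >= LBA_M0    && lba < LBA_M0    + 60) pending.min  = (int)(lba - LBA_M0);
    else if (lba >= LBA_S0    && lba < LBA_S0    + 60) pending.sec  = (int)(lba - LBA_S0);
    else if (lba == LBA_SET)   /* reading SET commits the pending time */
        rtc_set(pending.year, pending.mon, pending.day,
                pending.hour, pending.min, pending.sec);
}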
I am reverse engineering a serial communication protocol. After I decode the bits, I am supposed to write an interface for it. My choices are split between writing a filter driver to intercept the data coming in and going out, or just having basic direct serial communication. Is there any advantage to using one method as opposed to the other?
Well, there's a big difference:
Debugging your filter driver requires a kernel debugger. Options to stop a debug session in flight and edit+build your code are limited and typically require an operating system reboot. Debugging user-mode code is trivial.
A bug in your filter driver will crash the operating system. A bug in your user-mode code only crashes the program.
Deploying your filter driver normally requires an installer. User-mode code is simply linked into the program at build time.
These are pretty grave disadvantages. About the only advantage of a filter driver that I can think of is that the code is completely invisible to the user mode programmer. This is however also a liability, there's very little this programmer can do to help you with diagnostic information when the filter driver misbehaves.
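For comparison, the user-mode route on Windows is just opening the COM port like a file. A minimal sketch (port name and line settings are assumptions):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) { printf("open failed\n"); return 1; }

    DCB dcb = { .DCBlength = sizeof dcb };
    GetCommState(h, &dcb);
    dcb.BaudRate = CBR_115200;    /* assumed baud rate */
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    SetCommState(h, &dcb);

    unsigned char buf[256];
    DWORD n = 0;
    if (ReadFile(h, buf, sizeof buf, &n, NULL) && n > 0)
        printf("got %lu bytes\n", (unsigned long)n);  /* raw protocol bytes */

    CloseHandle(h);
    return 0;
}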
I am new to embedded Linux. I want to use my USB keyboard using threading. I know the concept of threading, but I want to know how I can detect keyboard input using threads.
If you don't wish to use non-blocking I/O to read the keyboard, you can use a thread which does a blocking read and signals your main thread (or sets a flag which it can poll) when input is available.
In addition to blocking, you may have to contend with the terminal's default line-buffered (canonical) mode.
A common choice for polling or responding to single characters in a single-threaded program is to change the terminal mode settings - see the man pages for termios, stty, etc. You will, however, need to change them back when your program exits.
Another option would be to skip the whole terminal infrastructure and read the input events directly through /dev/input/. Or at an extreme you could skip the USB HID driver and write your own kernel driver for USB keyboards.
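A minimal sketch of the thread approach, assuming the keyboard shows up as /dev/input/event0 (yours may differ): a worker thread does a blocking read and sets a flag the main thread polls.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <pthread.h>
#include <stdatomic.h>
#include <linux/input.h>

static atomic_int key_ready;

static void *keyboard_thread(void *arg)
{
    (void)arg;
    int fd = open("/dev/input/event0", O_RDONLY);  /* assumed keyboard node */
    struct input_event ev;
    while (fd >= 0 && read(fd, &ev, sizeof ev) == sizeof ev)
        if (ev.type == EV_KEY && ev.value == 1)    /* key press */
            atomic_store(&key_ready, 1);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, keyboard_thread, NULL);
    for (;;) {
        if (atomic_exchange(&key_ready, 0))
            puts("key pressed");
        usleep(10000);   /* main thread does its other work here */
    }
}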
If I understood you correctly, you have an embedded Linux board and now you want to connect a USB keyboard and use it with applications on the board? If so, you don't need to do anything with threading. What you need is to have drivers installed for that keyboard. For that, you should look into the kernel build config to see whether the USB keyboard (HID) drivers are enabled.
You can use either of the following, based on your requirement:
blocking I/O
non-blocking I/O
For blocking I/O, try a multi-threading based approach. For non-blocking I/O, try an epoll() system call based approach, as sketched below.
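A sketch of the epoll() route, assuming a hypothetical /dev/input/event0 node opened non-blocking:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <linux/input.h>

int main(void)
{
    int fd = open("/dev/input/event0", O_RDONLY | O_NONBLOCK); /* assumed node */
    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);

    for (;;) {
        struct epoll_event out;
        if (epoll_wait(ep, &out, 1, -1) > 0) {     /* sleep until readable */
            struct input_event ie;
            while (read(out.data.fd, &ie, sizeof ie) == sizeof ie)
                if (ie.type == EV_KEY && ie.value == 1)
                    printf("key %d pressed\n", ie.code);
        }
    }
}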
Regarding the method to detect the keyboard, you can try the following :
Use the file /proc/bus/input/devices to detect the devices, but on certain systems it does not get updated until you reboot.
Detect using /dev/input/eventN and the ioctl() call to detect the event bits. The event interface is very useful as it exposes the raw events to userspace.
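A sketch of that second method, probing each /dev/input/eventN for the EV_KEY capability (the scan limit and device paths are assumptions):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>

int main(void)
{
    for (int i = 0; i < 32; i++) {                 /* assumed upper bound */
        char path[32], name[64] = "?";
        unsigned long evbits = 0;
        snprintf(path, sizeof path, "/dev/input/event%d", i);
        int fd = open(path, O_RDONLY);
        if (fd < 0) continue;
        ioctl(fd, EVIOCGNAME(sizeof name), name);          /* device name */
        ioctl(fd, EVIOCGBIT(0, sizeof evbits), &evbits);   /* event-type bits */
        if (evbits & (1UL << EV_KEY))              /* emits key events */
            printf("%s: %s looks like a keyboard\n", path, name);
        close(fd);
    }
    return 0;
}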
While reading the QTKit Application Programming Guide I came across the term 'headless environments' - what does this mean? Here is the passage:
...including applications with a GUI and tools intended to run in a “headless” environment. For example, you can use the framework to write command-line tools that manipulate QuickTime movie files.
"Headless" in this context simply means without a graphical display. (i.e.: Console based.)
Many servers are "headless" and are administered over SSH for example.
Headless means that the application runs without a graphical user interface (GUI), and sometimes without any user interface at all.
There are similar terms for this, used in slightly different contexts. Here are some examples.
Headless / Ghost / Phantom
This term is mostly used for heavyweight clients. The idea is to run a client in a non-graphical mode, for example from a command line. The client will then run until its task is finished, or will interact with the user through a prompt.
Eclipse, for instance, can be run in headless mode. This mode comes in handy for running jobs in the background, or in a build factory.
For example, you can run Eclipse in graphical mode to install plugins. This is OK if you just do it for yourself. However, if you're packaging Eclipse to be used by the devs of a large company and want to keep up with all the updates, you probably want a more reproducible, automatic way.
That's where the headless mode comes in: you can run Eclipse from the command line with parameters that indicate which plugins to install.
The nice thing about this method is that it can be integrated in a build factory!
Faceless
This term is mostly used for larger-scale applications, and was coined by UX designers. A faceless app interacts with users through channels traditionally dedicated to humans, such as email, SMS, or phone, but NOT a GUI.
For example, some companies use SMS as an entry point for dialogue with users: the user sends an SMS containing a request to a certain number. This triggers automated services that run and reply to the user.
It's a nice user experience, because one can run errands from one's phone. You don't necessarily need an internet connection, and the interaction with the app is asynchronous.
On the back-end side, the service can decide that it does not understand the user's request and drop out of automated mode. The user then enters an interactive mode with a human operator, without changing communication tools.
You most likely know what a browser is. Now take away the GUI, and you have what’s called a headless browser. Headless browsers can do all of the same things that normal browsers do, but faster. They’re great for automating and testing web pages programmatically.
Headless can refer to a browser or other program that doesn't require a GUI. Its output is not really meant for a person to view; it exists to pass information, in the form of code, to another program.
So why use a headless program?
Simply because it improves speed and performance, and it is available to all users, including those without access to a graphics card. It allows testing browserless setups and helps you multitask.
Guide to Headless Browser
Headless architecture
In software development, headless can also describe an architectural design that completely separates the back end from the front end. The front end (the GUI or UI) is a standalone piece and communicates with the back end through an API. This allows for a multi-server architecture, flexibility in the software stack, and performance optimization.
I was reading up on Midori and kinda started wondering if this is possible.
On a managed OS, "managed code" is going to be native, and "native code" is going to be...alien? Is it possible, at least theoretically, to run the native code of today on a managed OS?
First, you should start by defining "managed" and "native". On a "managed" OS like Midori, the kernel is still ngen-ed (precompiled to machine code), instead of being jit-compiled from IL. So, I would rule that out as a distinction between "managed" and "native".
There are two other distinctions between "managed" and "native" code that come to my mind - code verifiability and resource management.
Most "native" code is unverifiable, thus a "managed" OS loader might refuse to even load "native" images. Of course, it is possible to produce verifiable "native" code, but that puts a lot of limitations and in essence is no different from "managed" code.
Resources in a "managed" OS would be managed by the OS, not the app. "Native" code usually allocates and cleans up its own resources. What would happen with a resource that was allocated by an OS API and handed to the "native" code, or vice versa? There would have to be quite clear rules on who does the resource management and cleanup, and when. For security reasons, I can't imagine the OS giving the "native" code direct control over any resources besides the process's virtual memory. Therefore, the only reason to go "native" would be to implement your own memory management.
Today's "natve" code won't play by any of the rules above. Thus, a "managed" OS should refuse to execute it directly. Though, the "managed" OS might provide a virtualization layer like Hyper-V and host the "native" code in a virtual machine.
By managed I assume you mean that the code runs in an environment which does some checks on the code for type safety, safe memory access, etc., and by native, the opposite. Now, it's this execution environment that determines whether it can allow native code to run without being verified. Look at it this way: the OS and the application on top of it both need an execution environment to run in. Their only relationship is that the application calls the underlying OS for lower-level tasks, but in calling the OS, the application is actually being executed by the execution environment (which may or may not support code verification, depending on, say, options passed when compiling the code). When control is transferred to the OS, the execution environment (which might be another environment altogether) is again responsible for executing the OS code, in which case it verifies the OS code (because it's a managed OS).
So, theoretically, native code may or may not run on a managed OS. It all depends on the behaviour of the execution environment in which it's running; whether the OS is managed or not does not by itself decide it. If the top application and the OS both have the same (managed) execution environment, then the native code will not run on the OS.
Technically, a native code emulator can be written in managed code, but it's not running on bare hardware.
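To make that concrete: an emulator is just a fetch-decode-execute loop over the guest's machine code. A toy sketch (the three-opcode instruction set is made up; on a managed OS the host loop would itself be managed code, C is used here only to match the other examples):

#include <stdio.h>
#include <stdint.h>

enum { OP_HALT = 0, OP_PUSH = 1, OP_ADD = 2 };    /* hypothetical opcodes */

static int run(const uint8_t *code)
{
    int stack[64], sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {                      /* fetch + decode */
        case OP_PUSH: stack[sp++] = code[pc++]; break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_HALT: return stack[sp - 1];        /* guest program's result */
        }
    }
}

int main(void)
{
    const uint8_t program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT };
    printf("%d\n", run(program));                  /* prints 5 */
}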
I doubt any managed OS that relies on software verification to isolate access to shared resources (such as Singularity) allows running unmanaged code directly since it might be able to bypass all the protections provided by the software (unlike normal OSes, some managed OSes don't rely on protection techniques provided by hardware).
From the MS Research paper Singularity: Rethinking the Software Stack (p9):
A protection domain could, in principle, host a single process containing unverifiable code written in an unsafe language such as C++. Although very useful for running legacy code, we have not yet explored this possibility. Currently, all code within a protection domain is also contained within a SIP, which continues to provide an isolation and failure containment boundary.
So it seems that, though unexplored at the moment, it is a distinct possibility. Unmanaged code could run in a hardware-protected domain; it would take a performance hit from having to deal with virtual memory, the TLB, etc., but the system as a whole could maintain its invariants safely while running unmanaged code.