Is it possible to virtualize a single process? - containers

I'm looking for a way to effectively virtualize a single process (and, presumably, any children it creates). Although a Container model sounds appropriate, products like Docker don't quite fit the bill.
The Intel VMX model allows one to create a hypervisor, then launch a VM, which will exit back to the hypervisor under certain (programmable) conditions, such as privileged instruction execution, CR3/CR8 manipulation, exceptions, direct I/O, etc. The features of the VMX model fit very well with my needs, except that I can't require a VM with a separate instance of the entire OS to accomplish the task - I just want my hypervisor to control one child application (think Photoshop/Excel/Firefox; one process and its progeny, if any) that's running under the host OS, and catch VM exits under the specified conditions (for debugging and/or emulation purposes).

Outside of the exit conditions, the child process should run unencumbered, and have access to all OS resources to which it would be entitled without the VM, including filesystem, graphical output, keyboard/mouse input, IPC/messaging, etc. For my purposes, I am not interested in isolation or access restriction, which is the typical motivation for using a VM - to the contrary, I want the child process to be fully enmeshed in the host OS environment. While operating entirely in user-space is preferable, I can utilize Ring 0 to facilitate this.

(Note that the question is Intel-specific and OS-agnostic, although it's likely to be implemented in a *nix environment first.)
I'm wondering what would happen if I had my hypervisor set up a VMCS that simply mirrored the host's actual configuration, including page tables, IDT, etc., then VMLAUNCH 0(%rip) (in effect, a pseudo-fork?) and execute the child process from there. (That seems far too simplistic to actually work, but the notion does have some appeal). Assuming that's a Bad Idea™, how might I approach this problem?
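Whichever approach wins out, the first gate is confirming that VMX is exposed at all. A minimal user-space sketch, assuming GCC/Clang intrinsics on Intel hardware (actually entering VMX operation still requires ring 0):

    #include <cpuid.h>
    #include <stdio.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        /* CPUID leaf 1: ECX bit 5 is the VMX feature flag. */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID leaf 1 unavailable\n");
            return 1;
        }
        printf("VMX supported: %s\n", (ecx & (1u << 5)) ? "yes" : "no");
        /* VMXON, VMCS setup and VMLAUNCH all require ring 0, and
           IA32_FEATURE_CONTROL must permit VMX operation; this check
           is only the first gate. */
        return 0;
    }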

Related

What is the difference between containers and process VMs (NOT system VMs)?

As far as I understand, ...
virtualization, although commonly used to refer to server virtualization, refers to creating virtual versions of any IT component, such as networking and storage
although containerization is commonly contrasted to virtualization, it is technically a form of server virtualization that takes place on the OS level
although virtual machines (VMs) commonly refer to the output of hardware-level server virtualization (system VMs), they can also refer to the output of application virtualization (process VMs), such as JVM
Bearing the above in mind, I am trying to wrap my head around the difference between containers and process VMs (NOT system VMs). In other words, what is the difference between OS-level server virtualization and application virtualization?
Don't both technically refer to one and the same thing: a platform-independent software execution environment that is created using software that abstracts the environment's underlying OS?
Although some say that the isolation achieved by containers is a key difference, it is also stated that a system VM "is limited to the resources and abstractions provided by the virtual machine".
I have created a graphic representation; it is easier (for me) to explain the differences this way. I hope it helps.
OS-level virtualization aims to run unmodified applications built for a particular OS. An application can communicate with the external world only through the OS API, so a virtualization layer placed on that API can present a different image of the external world (e.g. amount of memory, network configuration, process list) to applications running in different virtualization contexts (containers). Generally the application runs on the "real" CPU (if not already virtualized) and does not need to know - and sometimes has no way to know - that the environment presented by the OS is somehow filtered. It is not a platform-independent software execution environment.
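As a concrete illustration of that filtered view, here is a minimal Linux-specific sketch (assuming root privileges): a child placed in its own PID namespace runs unmodified on the real CPU, yet sees itself as PID 1.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* The child runs in its own PID namespace: the OS API reports a
       filtered process list, starting with the child itself as PID 1. */
    static int child(void *arg) {
        (void)arg;
        printf("inside the container:  pid = %d\n", (int)getpid()); /* prints 1 */
        return 0;
    }

    int main(void) {
        static char stack[1024 * 1024];
        /* The stack grows down on x86, so pass the top of the buffer.
           Requires root (or CAP_SYS_ADMIN); Linux-specific. */
        pid_t pid = clone(child, stack + sizeof(stack),
                          CLONE_NEWPID | SIGCHLD, NULL);
        if (pid == -1) { perror("clone"); return 1; }
        printf("outside the container: pid = %d\n", (int)pid);
        waitpid(pid, NULL, 0);
        return 0;
    }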
On the other hand, an application VM aims to run applications that are prepared specially for that VM. For example, a Java VM interprets bytecode compiled for a "processor" that has little in common with a real CPU. There are CPUs that can run some Java bytecode natively, but the general concept is to provide a bytecode that is efficient for software interpretation on different "real" OS platforms. For this to work, the JVM has to provide so-called native code to interface with the OS API it runs on. You can run your program on SPARC, ARM, Intel, etc., provided that you have an OS-specific interpreter application and your bytecode conforms to the specification.
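To make the contrast concrete, here is a toy process VM in C: the program targets an invented stack machine rather than the host CPU, so the same bytecode array runs anywhere the interpreter compiles (the opcodes are purely illustrative, not real JVM bytecode).

    #include <stdio.h>

    /* Toy opcodes for an invented stack machine - illustrative only. */
    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code) {
        int stack[64], sp = 0, pc = 0;
        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];         break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]);      break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        /* "2 + 3" compiled for the virtual processor; this array runs
           unchanged wherever the interpreter itself compiles. */
        const int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(program);
        return 0;
    }

The JVM is this idea at industrial scale, plus the native-code bridge into the host OS API mentioned above.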

Type-1 hypervisors and non-volatile memory isolation

Hypervisors isolate the different OSes running on the same physical machine from each other. Within this definition, separation of non-volatile memory (like hard drives or flash) exists as well.
When thinking about Type-2 hypervisors, it is easy to understand how they separate non-volatile memory: they just use the file system implementation of the underlying OS to allocate a different "hard-drive file" to each VM.
But then, when I come to think about Type-1 hypervisors, the problem becomes harder. They can use the IOMMU to isolate different hardware interfaces, but when there is just one non-volatile memory interface in the system, I don't see how it helps.
So one way to implement it would be to split one device into two "partitions" and have the hypervisor interpret calls from the VMs and decide whether the calls are legitimate or not. I'm not well versed in the communication protocols of non-volatile interfaces, but the hypervisor would have to be familiar with those protocols in order to make that verdict, which sounds (maybe) like overkill.
Are there other ways to implement this kind of isolation?
Yes, you are right: the hypervisor will have to be familiar with those protocols in order to make the isolation possible.
The overhead depends mostly on the protocol. For example, NVMe-based SSDs work over PCIe, and some NVMe devices support SR-IOV, which greatly reduces the effort - but some don't, leaving the burden on the hypervisor.
Mostly this support is configured at build time (how much memory will be given to each guest, command privileges for each guest, etc.), and when a guest sends a command, the hypervisor verifies its bounds and forwards it accordingly.
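A hedged sketch of that verify-and-forward step (structure and field names are invented), assuming the disk is carved into contiguous per-guest LBA windows:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical per-guest view of one shared device: each guest owns
       a contiguous LBA window, fixed at configuration time. */
    struct guest_extent {
        uint64_t base_lba;  /* first physical LBA the guest may touch */
        uint64_t num_lba;   /* size of its window */
    };

    /* Verify a guest read/write command against its window, then rebase
       it onto the real device - the "interpret and forward" step. */
    static bool check_and_rebase(const struct guest_extent *g,
                                 uint64_t *lba, uint32_t count) {
        if (*lba + count < *lba)        /* arithmetic overflow */
            return false;
        if (*lba + count > g->num_lba)  /* escapes the guest's window */
            return false;
        *lba += g->base_lba;            /* translate to physical LBA */
        return true;
    }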
So why is there no support like the MMU or IOMMU in this case?
There are hundreds of types of such devices with different protocols (NVMe, AHCI, etc.), and if a vendor were to support all of these in hardware to allow better virtualization, it would end up with a huge chip that isn't going to fit.

Containers for thread-level process isolation

I want to know if it is possible to customize an LXC kernel (or a related system like OpenVZ, etc.) to operate at the thread level rather than the process level; see this mention:
Unlike Docker, Virtuozzo, and LXC, which operate on the process level,
LVE is able to operate on the thread level. This allows multithreaded
servers such as Apache (with its 'worker' MPM) to take advantage of
LVE without having to run a separate instance per LVE user.
source: blog.phusion.nl/2016/02/03/lve-an-alternative-container-technology-to-docker-and-virtuozzolxc/

Performing distributed CUDA/OpenCL based password cracking

Is there a way to perform a distributed (as in a cluster of connected computers) CUDA/OpenCL based dictionary attack?
For example, could one computer with an NVIDIA card share the load of the dictionary attack with another coupled computer, thus utilizing a second array of GPUs there?
The idea is to ensure a scalability option for future expansion without the need to replace the whole set of hardware that we are using. (And let's say cloud is not an option.)
This is a simple master/slave work delegation problem. The master work server hands out a unit of work to any connecting slave process. Slaves work on one unit and queue one unit. When they complete a unit, they report back to the server. Work units that are exhaustively checked are used to estimate operations per second. Depending on your setup, I would adjust work units to be somewhere in the 15-60 second range. Anything that doesn't get a response by the 10-minute mark is recycled back into the queue.
For queuing, offer the current list of uncracked hashes, the dictionary range to be checked, and the permutation rules to be applied. The master server should be able to adapt queues per machine and per permutation rule set so that all machines finish their work within a minute or so of each other.
Alternatively, coding could be made simpler if each unit of work were the same size. Even then, no machine would be idle longer than the time it takes the slowest machine to complete one unit of work. Size your work units so that the fastest machine doesn't hit resource starvation (it shouldn't complete a unit in under five seconds, and it should always have a second unit queued). Using that method, hopefully your fastest and slowest machines don't differ by a factor of more than 100x.
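A minimal sketch of the master's lease-and-recycle bookkeeping described above (the record layout and names are invented for illustration):

    #include <stdbool.h>
    #include <time.h>

    /* Hypothetical master-side record: a dictionary range plus the time
       it was last leased to a slave. */
    struct work_unit {
        long dict_start, dict_end;  /* indices into the wordlist */
        time_t leased_at;           /* 0 = never handed out */
        bool done;
    };

    #define LEASE_TIMEOUT (10 * 60) /* recycle after 10 minutes, as above */

    /* Hand out the next unit: anything unleased, or anything whose lease
       expired without a result coming back. */
    static struct work_unit *next_unit(struct work_unit *units, int n) {
        time_t now = time(NULL);
        for (int i = 0; i < n; i++) {
            if (units[i].done)
                continue;
            if (units[i].leased_at == 0 ||
                now - units[i].leased_at > LEASE_TIMEOUT) {
                units[i].leased_at = now;
                return &units[i];
            }
        }
        return NULL; /* everything is leased or finished */
    }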
It would seem to me that it would be quite easy to write your own service that would do just this.
Super Easy Setup
Let's say you have some GPU enabled program X that takes a hash h as input and a list of dictionary words D, then uses the dictionary words to try and crack the password. With one machine, you simply run X(h,D).
If you have N machines, you split the dictionary into N parts (D_1, D_2, D_3, ..., D_N), then run X(h, D_i) on machine i.
This could easily be done using SSH. The master machine splits the dictionary up, copies it to each of the slave machines using SCP, then connects to the slaves and tells them to run the program.
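The splitting step itself is trivial; here is a hedged C sketch that shards a wordlist round-robin into one file per slave (the words.N output names are invented):

    #include <stdio.h>
    #include <stdlib.h>

    /* Shard a wordlist into N files (words.0 .. words.N-1), one per
       slave; round-robin by line keeps the parts roughly equal. */
    int main(int argc, char **argv) {
        if (argc != 3) {
            fprintf(stderr, "usage: %s wordlist N\n", argv[0]);
            return 1;
        }
        int n = atoi(argv[2]);
        if (n <= 0) { fprintf(stderr, "N must be positive\n"); return 1; }
        FILE *in = fopen(argv[1], "r");
        if (!in) { perror(argv[1]); return 1; }

        FILE **out = malloc(n * sizeof *out);
        if (!out) return 1;
        for (int i = 0; i < n; i++) {
            char name[32];
            snprintf(name, sizeof name, "words.%d", i);
            out[i] = fopen(name, "w");
            if (!out[i]) { perror(name); return 1; }
        }

        char line[512];
        for (long k = 0; fgets(line, sizeof line, in); k++)
            fputs(line, out[k % n]);

        for (int i = 0; i < n; i++) fclose(out[i]);
        fclose(in);
        return 0;
    }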
Slightly Smarter Setup
When one machine cracks the password, it can easily notify the master that it has completed the task. The master then kills the programs running on the other slaves.

Can an executable behave differently when run on a virtualized server?

Let's say I have a piece of code that runs fine on an OS. Now, if I install that OS on a virtual machine (server virtualization) and run that code there, is it possible that the code behaves differently?
If so, what are the prerequisites for that? For example, does it have to be compiled machine code (in other words, are interpreted languages safe)? Does it have to use certain OS instructions? A specific virtualization technology (Xen, KVM, VMware...)?
Also, what are the possible different behaviors?
Yes. Like any machine, the virtual machine is just another computer (implemented in software instead of hardware).
For one, lots of commercial apps will blow up when you run them on a VM due to:
copy protection detecting the VM
copy protection rigging your hardware, using undocumented features of BIOS/Kernel/hardware
Secondly, a VM is just another computer, consisting of hardware implemented in assembly instead of circuits/dies/microcode/magic. This means the VM must provide the emulated hardware either through pass-through or emulation. The fact that hardware is very diverse can cause all kinds of different behavior. Also note the possible lack of drivers for, or acceleration of, the emulated hardware.
But of course a typical business application isn't nearly as likely to rely on any hardware details, as all it does is call some GUI API.
Interpreted languages are only safe from this to the extent that they are "interpreted": if the interpreted language calls out to some native code, all of this is possible again.
For an example of something detecting that it's running under a VM, check this; it's just one of the literally thousands of ways to detect a VM.
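One well-known family of checks, as a hedged sketch: most hypervisors set the CPUID "hypervisor present" bit and expose a vendor string at leaf 0x40000000 (assumes GCC/Clang on x86):

    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;
        /* CPUID leaf 1, ECX bit 31: the "hypervisor present" bit. */
        if (!(ecx & (1u << 31))) {
            puts("no hypervisor bit set");
            return 0;
        }
        /* Leaf 0x40000000 returns a vendor string in EBX:ECX:EDX,
           e.g. "KVMKVMKVM" or "VMwareVMware". */
        __cpuid(0x40000000, eax, ebx, ecx, edx);
        char vendor[13] = {0};
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &ecx, 4);
        memcpy(vendor + 8, &edx, 4);
        printf("hypervisor detected: %s\n", vendor);
        return 0;
    }

Serious copy protection goes far beyond this (timing, MSRs, device fingerprints), but the principle is the same.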
In theory the program should run exactly the same as on a physical machine.
In practice, however, there may be differences due to:
Machine/OS configuration and drivers
Load on the virtual machine host.
Differences in machine configuration are similar to the differences you would see between any two physical machines. Depending on how critical your application is to the end user, you should run the same set of tests that you would on a physical box to determine whether the environment is acceptable for use.
Depending on the virtualisation technology, the host may not be able to guarantee the client resources at specific times. This can lead to weird behavior on the client. Potentially you would see more occurrences of application errors due to I/O timeouts and memory starvation.
To successfully virtualise an application for production use, you need to do a bit of work to understand the resource profile of the application/client and the virtual host.