What's the difference between a runtime environment, a runtime engine, and a runtime library? - terminology

I'd like to make sure I have the definitions of a few terms associated with runtime correct.
Does the following make sense?
A runtime system (aka runtime engine) is software that is designed to aid the execution of a computer program while it is running. The runtime system acts as the gateway for the runtime environment, which is an abstraction of the underlying system a program is running on.
Is this correct?
Also:
How do you distinguish between a runtime system and a runtime library?
What exactly does "runtime" by itself refer to? E.g. "node.js is a Javascript runtime"
Thanks!

Since all software programs should run at least once, 'runtime' is an abused term in IT.
A runtime library is an older term with a more precise meaning attached to it: usually it refers to the hidden routines that make your program run in a particular environment and/or operating system. For instance, when a C program receives its arguments in the pair argc and argv, it is the runtime library that got them from the OS and passed them to your C program.
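To make that concrete, here is a simulated sketch of the job a crt0-style startup stub does before your code runs. Every name in it (program_main, fetch_args_from_os, report_exit_status_to_os) is invented for illustration; a real runtime library does this partly in assembly, reading the argument block the OS loader prepared, and then calls your actual main.

    #include <cstdio>

    // Simulated stand-in for the program's main(); a real startup stub calls
    // the actual main, which ISO C++ does not let us call from other code here.
    static int program_main(int argc, char **argv) {
        std::printf("program received %d argument(s); the first is \"%s\"\n",
                    argc, argc > 0 ? argv[0] : "");
        return 0;
    }

    // In a real runtime library these would read the argument block the OS
    // loader prepared and invoke the exit system call; here they are faked.
    static void fetch_args_from_os(int *argc, char ***argv) {
        static char name[] = "demo";
        static char *args[] = { name, nullptr };
        *argc = 1;
        *argv = args;
    }

    static void report_exit_status_to_os(int status) {
        std::printf("runtime: program finished with status %d\n", status);
    }

    // What crt0-style startup code does, conceptually: gather the arguments,
    // call the program's entry function, then report its result back to the OS.
    int main() {
        int argc;
        char **argv;
        fetch_args_from_os(&argc, &argv);
        int status = program_main(argc, argv);
        report_exit_status_to_os(status);
        return status;
    }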
According to Wikipedia, a Runtime system is a partial implementation of the execution model. And the latter is the conceptual model that describes how a program will run. For instance, one could consider the JVM the runtime system of every Java program.
Some authors treat the expressions "runtime system" and "runtime engine" as equivalent, but maybe that should be avoided. Perhaps "engine" should be reserved for frameworks a little higher in the software stack, closer to the application layer: a game engine, for instance, or a database engine.

Related

What is the difference between containers and process VMs (NOT system VMs)?

As far as I understand, ...
virtualization, although commonly used to refer to server virtualization, refers to creating virtual versions of any IT component, such as networking and storage
although containerization is commonly contrasted to virtualization, it is technically a form of server virtualization that takes place on the OS level
although virtual machines (VMs) commonly refer to the output of hardware-level server virtualization (system VMs), they can also refer to the output of application virtualization (process VMs), such as JVM
Bearing the above in mind, I am trying to wrap my head around the difference between containers and process VMs (NOT system VMs). In other words, what is the difference between OS-level server virtualization and application virtualization?
Don't both technically refer to one and the same thing: a platform-independent software execution environment that is created using software that abstracts the environment's underlying OS?
Although some say that the isolation achieved by containers is a key difference, it is also stated that a system VM "is limited to the resources and abstractions provided by the virtual machine".
I have created a graphic representation for you, since it is easier (for me) to explain the differences that way; I hope it helps.
OS-level virtualization aims to run unmodified applications built for a particular OS. An application can communicate with the external world only through the OS API, so a virtualization component placed on that API can present a different image of the external world (e.g. amount of memory, network configuration, process list) to applications running in different virtualization contexts (containers). Generally the application runs on the "real" CPU (if not already virtualized) and does not need to know (and sometimes has no way of knowing) that the environment presented by the OS is being filtered. It is not a platform-independent software execution environment.
On the other hand, an application VM aims to run applications that are prepared specifically for that VM. For example, a Java VM interprets bytecode compiled for a "processor" that has little in common with a real CPU. There are CPUs that can run some Java bytecode natively, but the general concept is to provide a bytecode that can be interpreted efficiently in software on different "real" OS platforms. For this to work, the JVM has to provide some so-called native code to interface with the OS API it runs on. You can run your program on SPARC, ARM, Intel, etc., provided that you have an OS-specific interpreter application and your bytecode conforms to the specification.

What is a runtime environment for supposedly "no-overhead" systems languages?

Specifically, I'm talking more about C++ and Rust than others. I don't understand how C++ has a "runtime" in the sense that Java and C# have a runtime--while Java and C# run on top of a virtual machine with its own encapsulated abstractions and such, I don't get how C++ might have one.
Take virtual tables for C++, for example. Do we consider dynamic_cast<type> a part of C++'s runtime functionality or are we talking about C++'s structure for vtables in general? Can we consider new and delete a part of the C++ runtime environment? What exactly constitutes a runtime?
For example, here we have a Rust article on its own runtime, which describes it as:
The Rust runtime can be viewed as a collection of code which enables services like I/O, task spawning, TLS, etc. It's essentially an ephemeral collection of objects which enable programs to perform common tasks more easily.
But is this not the function of a standard library or language features, not an actual runtime? What constitutes this very thin but existent runtime? Even Bjarne expresses his thoughts that C++ has "zero-overhead abstraction", but if C++ has a runtime, does this not imply that C++ does indeed have some sort of "backend" code to orchestrate its own very light but still existent abstractions?
TL;DR: What is a runtime and/or runtime environment in the context of languages like C++ and Rust that have supposedly "zero-overhead" and don't have "heavy" runtimes like Java or C#?
Edit: I suspect that I'm just missing something about semantics here...
C++ requires a few things that aren't required in something like C.
For example, it typically involves some overhead for exception handling. Although it may not be strictly required, most systems have at least a tiny bit of a top-level exception handler to tell you that the program shut down if an exception was thrown but not caught anywhere.
It's open to question whether it qualifies as "runtime environment", but the compiler also generates code to search up the stack and find a handler for a particular exception when one is thrown.
On one hand, this is exceptionally tiny (bordering on negligible) compared to something like a complete JVM or Microsoft's CLR. On the other hand, it's quite large and complex relative to what happens by default in something like C.
As to zero overhead...well, it depends a bit on your viewpoint. Exception handling code can normally be moved out of the main stream of the code, so it doesn't impose any overhead in terms of execution speed as long as no exception is thrown. It does, however, require extra code so there can be (often is) quite a bit of overhead if you look at executable sizes. Just for example, doing a quick look at a "hello world" program, it looks like turning off exception handling reduces the executable size by about 2 kilobytes with VC++.
Admittedly, 2K isn't a whole lot of extra code--on the other hand, that's just what's added to essentially the most trivial program humanly possible. For a program that actually does something, it's undoubtedly more.
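If you want to see the effect yourself, one rough experiment (a sketch, not the exact measurement above; the flags shown are for g++/clang++, and the VC++ switches differ) is to build the same trivial program with and without exception support and compare the binaries:

    // hello.cpp -- build it twice and compare the executable sizes
    // (the numbers vary by compiler, flags, and standard library):
    //   g++ -O2 hello.cpp -o hello_eh
    //   g++ -O2 -fno-exceptions hello.cpp -o hello_noeh
    // The difference is, roughly, the exception-handling tables and
    // unwinding support code discussed above.
    #include <cstdio>

    int main() {
        std::puts("hello world");
        return 0;
    }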
In the end, it's not enough for most people to really have a reason to care, but it does exist nonetheless.
As to how this is handled, it involves a combination of code that's linked in from the standard library and code generated by the compiler, but the exact details vary with the implementation. For example, most 32-bit Windows compilers used Microsoft's Structured Exception Handling (in which case the operating system provides part of the code), but for 64-bit Windows, I believe all of them deal with exception handling on their own, which increases executable sizes more but reduces overhead in terms of speed.

Standard Fortran interface for cuBLAS

I am using a commercial simulation software on Linux that does intensive matrix manipulation. The software uses Intel MKL by default, but it allows me to replace it with a custom BLAS/LAPACK library. This library must be a shared object (.so) library and must export both BLAS and LAPACK standard routines. The software requires the standard Fortran interface for all of them.
To verify that I can use a custom library, I compiled ATLAS and linked LAPACK (from netlib) inside it. The software was able to use my compiled ATLAS version without any problems.
Now, I want to make the software use cuBLAS in order to enhance the simulation speed. I was confronted by the problem that cuBLAS doesn't export the standard BLAS function names (they have a cublas prefix). Moreover, the cuBLAS library doesn't include LAPACK routines.
I used readelf -a to check the exported functions.
On the other hand, I tried to use MAGMA to solve this problem. I succeeded in compiling and linking it against ATLAS, LAPACK, and cuBLAS. But it still doesn't export the correct functions and doesn't include LAPACK in the final shared object. I am not sure whether this is the way it is supposed to be or I did something wrong during the build process.
I have also found CULA, but I am not sure if this will solve the problem or not.
Has anybody tried to get cuBLAS/LAPACK (or a proper wrapper) linked into a single .so exporting the standard Fortran interface with the correct function names? I believe it is conceptually possible, but I don't know how to do it!
Updated
As indicated by @talonmies, CUDA provides a Fortran thunking wrapper interface.
http://docs.nvidia.com/cuda/cublas/index.html#appendix-b-cublas-fortran-bindings
You should be able to run your application with it. But you probably will not get any performance improvement due to the mem alloc/copy issue described below.
Old
It may not be easy. cuBLAS and the other CUDA library interfaces assume all the data is already stored in device memory; in your case, however, the data is still in CPU RAM before each call.
You may have to write your own wrapper to deal with this, something like:
void dgemm(...) {
    copy_data_from_cpu_ram_to_gpu_mem();   /* host -> device transfer */
    cublas_dgemm(...);                     /* run the GEMM on the GPU */
    copy_data_from_gpu_mem_to_cpu_ram();   /* device -> host transfer */
}
As you have probably noticed, every single BLAS call then requires two data copies. This may introduce huge overhead and slow down the overall performance, unless most of your calls are BLAS 3 operations.
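For what it's worth, here is a rough, untested sketch of what such a thunking dgemm might look like against the cuBLAS v2 API. Error checking is omitted, and a real wrapper should cache the cuBLAS handle and reuse device buffers instead of recreating them on every call, which is exactly the overhead being discussed.

    #include <cublas_v2.h>
    #include <cuda_runtime.h>
    #include <cstddef>

    // Map the Fortran transpose character ('N', 'T', 'C') to a cuBLAS enum.
    static cublasOperation_t to_op(char t) {
        return (t == 'N' || t == 'n') ? CUBLAS_OP_N
             : (t == 'T' || t == 't') ? CUBLAS_OP_T
             : CUBLAS_OP_C;
    }

    // Fortran-callable dgemm_ with the standard BLAS argument list.
    extern "C" void dgemm_(const char *transa, const char *transb,
                           const int *m, const int *n, const int *k,
                           const double *alpha, const double *a, const int *lda,
                           const double *b, const int *ldb,
                           const double *beta, double *c, const int *ldc)
    {
        cublasHandle_t h;
        cublasCreate(&h);

        // Column counts of A and B as stored, depending on the transpose flags.
        const std::size_t acols = (*transa == 'N' || *transa == 'n') ? *k : *m;
        const std::size_t bcols = (*transb == 'N' || *transb == 'n') ? *n : *k;

        double *da, *db, *dc;
        cudaMalloc(reinterpret_cast<void **>(&da), sizeof(double) * (*lda) * acols);
        cudaMalloc(reinterpret_cast<void **>(&db), sizeof(double) * (*ldb) * bcols);
        cudaMalloc(reinterpret_cast<void **>(&dc), sizeof(double) * (*ldc) * (*n));

        // Copy the operands from host RAM to device memory (column-major, as Fortran stores them).
        cudaMemcpy(da, a, sizeof(double) * (*lda) * acols, cudaMemcpyHostToDevice);
        cudaMemcpy(db, b, sizeof(double) * (*ldb) * bcols, cudaMemcpyHostToDevice);
        cudaMemcpy(dc, c, sizeof(double) * (*ldc) * (*n), cudaMemcpyHostToDevice);

        cublasDgemm(h, to_op(*transa), to_op(*transb), *m, *n, *k,
                    alpha, da, *lda, db, *ldb, beta, dc, *ldc);

        // Copy the result back to host RAM and release the device resources.
        cudaMemcpy(c, dc, sizeof(double) * (*ldc) * (*n), cudaMemcpyDeviceToHost);
        cudaFree(da);
        cudaFree(db);
        cudaFree(dc);
        cublasDestroy(h);
    }

You would then still need to do the same kind of wrapping, or fall back to a CPU LAPACK such as netlib's, for the LAPACK routines the simulation software expects.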

What is an ABI (Application Binary Interface)?

This is what Wikipedia says:
In computer software, an application binary interface (ABI) describes the low-level interface between an application (or any type of) program and the operating system or another application.
ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system; and in the case of a complete operating system ABI, the binary format of object files, program libraries and so on. A complete ABI, such as the Intel Binary Compatibility Standard (iBCS), allows a program from one operating system supporting that ABI to run without modifications on any other such system, provided that necessary shared libraries are present, and similar prerequisites are fulfilled.
I guess that an ABI is a convention or standard, and compilers/linkers use this convention to produce object code. Is that right? If so, who made these conventions (companies or some organization)? What was it like when there were no ABIs? Are there documents about these ABIs that we can refer to?
You're correct about the definition of an ABI, up to a point. The classic example is the syscall interface in Linux (and other UNIXes).
They are a standard way for code to request the operating system to carry out certain duties.
As such, they're decided by the people that wrote the OS or, in the case where the syscalls have been added later, by whoever added them (in cases where the OS allows this). For example, the Linux syscall interface on x86 states that you load the syscall number into eax, with other parameters placed in ebx, ecx and so on, depending on the syscall you're making (eax).
Typically, it's not the compiler or linker which do the work of interfacing, rather it's the libraries provided for the language you're using.
Returning to Linux, the GNU C libraries contain code for fopen (for example) which eventually calls the relevant syscall to perform the lower-level tasks (syscall number 5, open). A list of the syscalls can be found in this PDF file.
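As a deliberately low-level illustration of the i386 convention described above, here is a sketch that invokes the write syscall directly via int 0x80, bypassing the C library entirely. It assumes GCC/Clang inline assembly and 32-bit x86 Linux (build with -m32); on x86-64 the numbers and registers are different and the syscall instruction is used instead.

    #include <cstddef>

    // Raw use of the i386 Linux syscall ABI: the syscall number goes in eax
    // (4 is write on i386), the arguments in ebx, ecx, edx, and int 0x80
    // traps into the kernel. Normally the C library does this for you.
    static long raw_write(int fd, const void *buf, std::size_t count) {
        long ret;
        asm volatile("int $0x80"
                     : "=a"(ret)                              // eax carries the return value
                     : "a"(4), "b"(fd), "c"(buf), "d"(count)  // eax=4 (write), ebx, ecx, edx
                     : "memory");
        return ret;
    }

    int main() {
        const char msg[] = "hello from a raw syscall\n";
        raw_write(1, msg, sizeof msg - 1);   // fd 1 is stdout
        return 0;
    }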
"Specification" is a more suitable term than "convention": a convention is a loose term for a widely accepted practice, whereas a specification is well-defined.
You are right. The specification is made by a standardization body. Take a look at the POSIX specification: it is supported by Windows, compiler/build tool-chains such as gcc assume the OS adheres to it, and even the Linux kernel partially (almost exactly) adheres to it.
Before ABIs? Even today, firmware is hand-crafted as new chips come along for set-top boxes and other devices with embedded systems.
The documentation is the digital-logic content in the data-sheet for chips that are programmed in assembly language; for higher-level languages, the cross-compiler tool-chain documentation spells out the assumptions that would otherwise be part of an ABI.
Well, the concept of an ABI was presumably conceived to support the binary compatibility of your program on other operating systems and machine architectures. Suppose you wrote a program on some operating system distribution running on the x86 architecture. For a programmer, the most important thing is that this program should run exactly the same on any other machine with the same or a different architecture, say i386 for the sake of discussion. This is where the concept of an ABI, or Application Binary Interface, comes in.
Every machine architecture defines its own way for the operating system kernel to talk to the outside world, i.e. user-space programs, so every architecture defines a different set of system calls, machine registers, conventions for how those registers are used, ways software interrupts are handled by the kernel, and so on. The ABI is the thing that handles these concerns for you, such as compiling, linking, byte ordering and so on. System programmers have had hard luck defining a uniform ABI for the same operating system across different architectures, which is why every machine architecture has its own, and you need to compile your programs to conform to the format those machines use.

What is "runtime"?

I have heard about things like "C Runtime", "Visual C++ 2008 Runtime", ".NET Common Language Runtime", etc.
What is "runtime" exactly?
What is it made of?
How does it interact with my code? Or maybe more precisely, how is my code controlled by it?
When coding assembly language on Linux, I could use the INT instruction to make a system call. So, is the runtime nothing but a bunch of pre-fabricated functions that wrap the low-level functionality into more abstract, high-level functions? But doesn't this seem more like the definition of a library, not of a runtime?
Are "runtime" and "runtime library" two different things?
ADD 1
These days, I am thinking maybe the runtime has something in common with so-called virtual machines, such as the JVM. Here's the quotation that led to this thought:
This compilation process is sufficiently complex to be broken into several layers of abstraction, and these usually involve three translators: a compiler, a virtual machine implementation, and an assembler. --- The Elements of Computing Systems (Introduction, The Road Down To Hardware Land)
ADD 2
The book Expert C Programming: Deep C Secrets, Chapter 6 (Runtime Data Structures), is a useful reference for this question.
ADD 3 - 7:31 AM 2/28/2021
Here's some of my perspective after gaining some knowledge about processor design. The whole computer is just multiple levels of abstraction, going from elementary transistors all the way up to the running program. For any level N of abstraction, its runtime is the level N-1 of abstraction immediately below it. And it is God that gives us level 0 of abstraction.
Runtime describes software/instructions that are executed while your program is running, especially those instructions that you did not write explicitly, but are necessary for the proper execution of your code.
Low-level languages like C have very small (if any) runtime. More complex languages like Objective-C, which allows for dynamic message passing, have a much more extensive runtime.
You are correct that runtime code is library code, but library code is a more general term, describing the code produced by any library. Runtime code is specifically the code required to implement the features of the language itself.
Runtime is a general term that refers to any library, framework, or platform that your code runs on.
The C and C++ runtimes are collections of functions.
The .NET runtime contains an intermediate language interpreter, a garbage collector, and more.
As per Wikipedia: runtime library/run-time system.
In computer programming, a runtime library is a special program library used by a compiler, to implement functions built into a programming language, during the runtime (execution) of a computer program. This often includes functions for input and output, or for memory management.
A run-time system (also called runtime system or just runtime) is software designed to support the execution of computer programs written in some computer language. The run-time system contains implementations of basic low-level commands and may also implement higher-level commands and may support type checking, debugging, and even code generation and optimization.
Some services of the run-time system are accessible to the programmer through an application programming interface, but other services (such as task scheduling and resource management) may be inaccessible.
Re: your edit, "runtime" and "runtime library" are two different names for the same thing.
The runtime or execution environment is the part of a language implementation which executes code and is present at run-time; the compile-time part of the implementation is called the translation environment in the C standard.
Examples:
the Java runtime consists of the virtual machine and the standard library
a common C runtime consists of the loader (which is part of the operating system) and the runtime library, which implements the parts of the C language which are not built into the executable by the compiler; in hosted environments, this includes most parts of the standard library
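To make the C runtime library part of that last example a bit more concrete, here is a small, compiler-dependent illustration: even an operation with no function call in your source can end up calling into the runtime library.

    // A plain struct assignment has no function call in the source, yet
    // compilers commonly lower large copies like this one to a call to
    // memcpy, which is supplied by the C runtime library rather than by
    // your code. Whether a call is actually emitted depends on the compiler
    // and optimization level; check with g++ -S or a disassembler.
    struct Big {
        char data[4096];
    };

    void copy(Big *dst, const Big *src) {
        *dst = *src;   // often compiled to: call memcpy(dst, src, sizeof(Big))
    }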
I'm not crazy about the other answers here; they're too vague and abstract for me. I think more in stories. Here's my attempt at a better answer.
a BASIC example
Let's say it's 1985 and you write a short BASIC program on an Apple II:
] 10 PRINT "HELLO WORLD!"
] 20 GOTO 10
So far, your program is just source code. It's not running, and we would say there is no "runtime" involved with it.
But now I run it:
] RUN
How is it actually running? How does it know how to send the string parameter from PRINT to the physical screen? I certainly didn't provide any system information in my code, and PRINT itself doesn't know anything about my system.
Instead, RUN is actually a program itself -- its code tells it how to parse my code, how to execute it, and how to send any relevant requests to the computer's operating system. The RUN program provides the "runtime" environment that acts as a layer between the operating system and my source code. The operating system itself acts as part of this "runtime", but we usually don't mean to include it when we talk about a "runtime" like the RUN program.
Types of compilation and runtime
Compiled binary languages
In some languages, your source code must be compiled before it can be run. Some languages compile your code into machine language -- it can be run by your operating system directly. This compiled code is often called "binary" (even though every other kind of file is also in binary :).
In this case, there is still a minimal "runtime" involved -- but that runtime is provided by the operating system itself. The compile step means that many statements that would cause your program to crash are detected before the code is ever run.
C is one such language; when you run a C program, it's totally able to send illegal requests to the operating system (like, "give me control of all of the memory on the computer, and erase it all"). If an illegal request is hit, usually the OS will just kill your program and not tell you why, and dump the contents of that program's memory at the time it was killed to a .dump file that's pretty hard to make sense of. But sometimes your code has a command that is a very bad idea, but the OS doesn't consider it illegal, like "erase a random bit of memory this program is using"; that can cause super weird problems that are hard to get to the bottom of.
Bytecode languages
Other languages (e.g. Java, Python) compile your code into a language that the operating system can't read directly, but a specific runtime program can read your compiled code. This compiled code is often called "bytecode".
The more elaborate this runtime program is, the more extra stuff it can do on the side that your code did not include (even in the libraries you use). For instance, the Java runtime environment ("JRE") and Python runtime environment can keep track of memory allocations that are no longer needed and tell the operating system it's safe to reuse that memory for something else, and they can catch situations where your code would try to send an illegal request to the operating system and instead exit with a readable error.
All of this overhead makes them slower than compiled binary languages, but it makes the runtime powerful and flexible; in some cases, it can even pull in other code after it starts running, without having to start over. The compile step means that many statements that would cause your program to crash are detected before the code is ever run; and the powerful runtime can keep your code from doing stupid things (e.g., you can't "erase a random bit of memory this program is using").
Scripting languages
Still other languages don't precompile your code at all; the runtime does all of the work of reading your code line by line, interpreting it and executing it. This makes them even slower than "bytecode" languages, but also even more flexible; in some cases, you can even fiddle with your source code as it runs! Though it also means that you can have a totally illegal statement in your code, and it could sit there in your production code without drawing attention, until one day it is run and causes a crash.
These are generally called "scripting" languages; they include Javascript, Perl, and PHP. Some of these provide cases where you can choose to compile the code to improve its speed (e.g., Javascript's WebAssembly project). So Javascript can allow users on a website to see the exact code that is running, since their browser is providing the runtime.
This flexibility also allows for innovations in runtime environments, like node.js, which is both a code library and a runtime environment that can run your Javascript code as a server, which involves behaving very differently than if you tried to run the same code on a browser.
In my understanding, runtime is exactly what it says: the time when the program is run. You can say something happens at runtime / run time or at compile time.
I think runtime and runtime library should be (if they aren't) two separate things. "C runtime" doesn't seem right to me. I call it "C runtime library".
Answers to your other questions:
I think the term runtime can be extended to include also the environment and the context of the program when it is run, so:
it consists of everything that can be called the "environment" during the time when the program is run: for example, other processes, the state of the operating system, the libraries in use, and so on
it doesn't interact with your code in a general sense; it just defines the circumstances in which your code works and what is available to it during execution.
This answer is to some extent just my opinion, not a fact or definition.
Matt Ball answered it correctly. I would illustrate it with examples.
Consider running a program compiled with the Turbo/Borland C/C++ compiler (version 3.1, from 1991) under a 32-bit version of Windows like Win 98/2000.
It's a 16-bit compiler, so you will see that all your programs have 16-bit pointers. Why is that, when your OS is 32-bit? Because your compiler set up a 16-bit execution environment, and the 32-bit version of the OS supported it.
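A quick way to see this kind of difference yourself (a small sketch; a 1991 compiler would want plain C and printf rather than the C++ spellings used here):

    #include <cstdio>

    // Prints the pointer size the compiler's execution environment provides.
    // Built with a 16-bit compiler such as Turbo C++ 3.1, this kind of program
    // typically reports 2; with a modern 64-bit compiler it reports 8.
    int main() {
        std::printf("pointer size: %u bytes\n",
                    static_cast<unsigned>(sizeof(void *)));
        return 0;
    }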
What is commonly called the JRE (Java Runtime Environment) provides a Java program with all the resources it may need to execute.
Actually, the runtime environment is a brainchild of the idea of virtual machines. A virtual machine implements the raw interface between the hardware and what a program may need to execute. The runtime environment adopts these interfaces and presents them for the programmer's use. A compiler developer needs these facilities to provide an execution environment for its programs.
Run time is exactly where your code comes to life and where you can see a lot of the important things your code does.
The runtime has the responsibility of allocating memory, freeing memory, and using the operating system's subsystems (file services, I/O services, network services, etc.).
Your code is only "working in theory" until you actually run it, and the runtime is a friend that helps you achieve this.
a runtime could denote the current phase of program life (runtime / compile time / load time / link time)
or it could mean a runtime library, which forms the basic low-level actions that interface with the execution environment.
or it could mean a runtime system, which is the same as an execution environment.
in the case of C programs, the runtime is the code that sets up the stack, the heap, etc., which is a requirement expected by the C environment. It essentially sets up the environment that is promised by the language. (It could have a runtime library component, crt0.lib or something like that, in the case of C.)
Runtime basically means the time when the program interacts with the hardware and operating system of a machine. C does not have its own runtime; instead, it requests a runtime from the operating system (basically a part of RAM) in order to execute itself.
I found that the following folder structure makes a very insightful context for understanding what a runtime is: you can see that there is the "source", there is the "SDK" or Software Development Kit, and then there is the runtime, i.e. the stuff that gets run at run time. The win32 zip contains .exe-s and .dll-s.
So, e.g., the C runtime would be files like this -- C runtime libraries, .so-s or .dll-s -- that you run at run time, made special by their (or their contents' or purposes') inclusion in the definition of the C language (on "paper"), and then implemented by your C implementation of choice. And then you get the runtime of that implementation, to use it and to build upon it.
That is, with a little polarisation, the runnable files that the users of your new C-based program will need. As a developer of a C-based program, so do you, but you need the C compiler and the C library headers, too; the users don't need those.
If my understanding from reading the above answers is correct, the runtime is basically the "background processes" such as garbage collection and memory allocation: any processes that are invoked indirectly by the libraries/frameworks your code is written with, specifically those that occur after compilation, while the application is running.
Spelled out fully, the runtime seems to be the additional environment that provides programming-language-related functions required at run time for non-web application software.
The runtime implements programming-language-related functions, which remain the same across application domains, including math operations, memory operations, messaging, OS or DB abstraction services, etc.
The runtime must in some way be connected with the running applications to be useful, such as being loaded into application memory space as a shared dynamic library, a virtual machine process inside which the application runs, or a service process communicating with the application.
Runtime is somewhat the opposite of design time and compile/link time. Historically, the term comes from slow mainframe environments where machine time was expensive.