Runtime vs. Compile time - language-agnostic
What is the difference between run-time and compile-time?
The difference between compile time and run time is an example of what pointy-headed theorists call the phase distinction. It is one of the hardest concepts to learn, especially for people without much background in programming languages. To approach this problem, I find it helpful to ask
What invariants does the program satisfy?
What can go wrong in this phase?
If the phase succeeds, what are the postconditions (what do we know)?
What are the inputs and outputs, if any?
Compile time
The program need not satisfy any invariants. In fact, it needn't be a well-formed program at all. You could feed this HTML to the compiler and watch it barf...
What can go wrong at compile time:
Syntax errors
Typechecking errors
(Rarely) compiler crashes
If the compiler succeeds, what do we know?
The program was well formed---a meaningful program in whatever language.
It's possible to start running the program. (The program might fail immediately, but at least we can try.)
What are the inputs and outputs?
Input was the program being compiled, plus any header files, interfaces, libraries, or other voodoo that it needed to import in order to get compiled.
Output is hopefully assembly code or relocatable object code or even an executable program. Or if something goes wrong, output is a bunch of error messages.
Run time
We know nothing about the program's invariants---they are whatever the programmer put in. Run-time invariants are rarely enforced by the compiler alone; it needs help from the programmer.
What can go wrong are run-time errors:
Division by zero
Dereferencing a null pointer
Running out of memory
Also there can be errors that are detected by the program itself:
Trying to open a file that isn't there
Trying to find a web page and discovering that an alleged URL is not well formed
If run-time succeeds, the program finishes (or keeps going) without crashing.
Inputs and outputs are entirely up to the programmer. Files, windows on the screen, network packets, jobs sent to the printer, you name it. If the program launches missiles, that's an output, and it happens only at run time :-)
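To make the contrast concrete, here is a minimal Java sketch of my own (not from the answer above): it passes every compile-time check, yet whether it survives is decided only by its run-time input.

public class PhaseDemo {
    public static void main(String[] args) {
        // Compile time has already guaranteed that this is a well-formed, type-correct program.
        // What happens next is decided only at run time, by the input the user supplies.
        System.out.println(100 / args.length);     // ArithmeticException (division by zero) if run with no arguments
        System.out.println(args[1].toUpperCase()); // ArrayIndexOutOfBoundsException if fewer than two arguments
    }
}

Run it with two arguments and it works; run it with none and it dies with a run-time error that no compiler check could have ruled out.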
I think of it in terms of errors, and when they can be caught.
Compile time:
string my_value = Console.ReadLine();
int i = my_value;
A string value can't be assigned to a variable of type int, so the compiler knows for sure at compile time that this code has a problem.
Run time:
string my_value = Console.ReadLine();
int i = int.Parse(my_value);
Here the outcome depends on what string was returned by ReadLine(). Some values can be parsed to an int, others can't. This can only be determined at run time
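The same point in Java, as a hedged sketch of mine (Integer.parseInt plays the role of int.Parse here): the compiler happily accepts the code, and only the actual input decides whether parsing succeeds.

import java.util.Scanner;

public class ParseAtRunTime {
    public static void main(String[] args) {
        String myValue = new Scanner(System.in).nextLine(); // analogous to Console.ReadLine()
        try {
            int i = Integer.parseInt(myValue);               // outcome known only at run time
            System.out.println("Parsed: " + i);
        } catch (NumberFormatException e) {
            System.out.println("Not an int: " + myValue);    // the failure is detected, and handled, at run time
        }
    }
}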
Compile-time: the time period in which you, the developer, are compiling your code.
Run-time: the time period during which a user is running your piece of software.
Do you need any clearer definition?
(edit: the following applies to C# and similar, strongly-typed programming languages. I'm not sure if this helps you).
For example, the following error will be detected by the compiler (at compile time) before you run a program and will result in a compilation error:
int i = "string"; --> error at compile-time
On the other hand, an error like the following cannot be detected by the compiler. You will receive an error/exception at run time (when the program is run).
Hashtable ht = new Hashtable();
ht.Add("key", "string");
// the compiler does not know what is stored in the hashtable
// under the key "key"
int i = (int)ht["key"]; // --> exception at run-time
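A rough Java parallel of that Hashtable example, as an illustrative sketch of my own: the untyped table defers the check to a run-time cast, while a generic map lets the compiler reject the bad value outright.

import java.util.HashMap;
import java.util.Hashtable;

public class TableDemo {
    public static void main(String[] args) {
        Hashtable<Object, Object> ht = new Hashtable<>();
        ht.put("key", "string");
        int i = (Integer) ht.get("key");   // compiles, but throws ClassCastException at run time

        HashMap<String, Integer> map = new HashMap<>();
        // map.put("key", "string");       // rejected at compile time: a String is not an Integer
        map.put("key", 42);
        int j = map.get("key");            // no cast needed; the type is already known at compile time
        System.out.println(i + j);
    }
}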
Translation of source code into stuff-happening-on-the-[screen|disk|network] can occur in (roughly) two ways; call them compiling and interpreting.
In a compiled program (examples are C and Fortran):
The source code is fed into another program (usually called a compiler--go figure), which produces an executable program (or an error).
The executable is run (by double-clicking it, or typing its name on the command line)
Things that happen in the first step are said to happen at "compile time", things that happen in the second step are said to happen at "run time".
In an interpreted program (examples are Microsoft BASIC (on DOS) and Python, I think):
The source code is fed into another program (usually called an interpreter) which "runs" it directly. Here the interpreter serves as an intermediate layer between your program and the operating system (or the hardware in really simple computers).
In this case the difference between compile time and run time is rather harder to pin down, and much less relevant to the programmer or user.
Java is a sort of hybrid, where the code is compiled to bytecode, which then runs on a virtual machine which is usually an interpreter for the bytecode.
There is also an intermediate case in which the program is compiled to bytecode and run immediately (as in awk or perl).
Basically if your compiler can work out what you mean or what a value is "at compile time" it can hardcode this into the runtime code. Obviously if your runtime code has to do a calculation every time it will run slower, so if you can determine something at compile time it is much better.
Eg.
Constant folding:
If I write:
int i = 2;
i += MY_CONSTANT;
The compiler can perform this calculation at compile time because it knows what 2 is, and what MY_CONSTANT is. As such it saves itself from performing a calculation on every single execution.
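A self-contained Java rendering of that idea, with MY_CONSTANT given an assumed value of my own just so the snippet compiles: javac genuinely folds the constant expression, and any compiler is free to do the same for the running code.

public class FoldingDemo {
    static final int MY_CONSTANT = 4;            // assumed value, purely for illustration
    static final int RESULT = 2 + MY_CONSTANT;   // javac folds this to 6 in the class file

    public static void main(String[] args) {
        int i = 2;
        i += MY_CONSTANT;  // a compiler may compute this once at compile time instead of on every run
        System.out.println(i + " " + RESULT);
    }
}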
Hmm, ok well, runtime is used to describe something that occurs when a program is running.
Compile time is used to describe something that occurs when a program is being built (usually, by a compiler).
Compile Time:
Things that are done at compile time incur (almost) no cost when the resulting program is run, but might incur a large cost when you build the program.
Run-Time:
More or less the exact opposite. Little cost when you build, more cost when the program is run.
From the other side: if something is done at compile time, it runs only on your machine, and if something is done at run time, it runs on your user's machine.
Relevance
An example of where this is important would be a unit-carrying type. A compile-time version (like Boost.Units or my version in D) ends up being just as fast as solving the problem with native floating-point code, while a run-time version ends up having to pack around information about the units that a value is in and perform checks on them alongside every operation. On the other hand, the compile-time versions require that the units of the values be known at compile time and can't deal with the case where they come from run-time input.
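Neither Boost.Units nor the D version is quoted in the answer, so here is a hedged Java sketch of mine showing the same trade-off: a phantom type parameter for the compile-time case, and a stored unit field (checked on every operation) for the run-time case.

// Compile-time units: the unit lives only in the type, so mixing metres and seconds
// is rejected by the compiler and no unit data exists at run time.
final class Qty<U> {
    final double value;
    Qty(double value) { this.value = value; }
    Qty<U> plus(Qty<U> other) { return new Qty<>(value + other.value); }
}
interface Metres {}
interface Seconds {}

// Run-time units: every value carries its unit around and every operation must check it.
final class RtQty {
    final double value;
    final String unit;
    RtQty(double value, String unit) { this.value = value; this.unit = unit; }
    RtQty plus(RtQty other) {
        if (!unit.equals(other.unit)) throw new IllegalArgumentException("unit mismatch");
        return new RtQty(value + other.value, unit);
    }
}

public class UnitsDemo {
    public static void main(String[] args) {
        Qty<Metres> d = new Qty<>(3.0);
        Qty<Seconds> t = new Qty<>(2.0);
        d.plus(new Qty<Metres>(1.0));  // fine
        // d.plus(t);                  // compile-time error: Qty<Seconds> is not Qty<Metres>

        RtQty rd = new RtQty(3.0, "m");
        RtQty rt = new RtQty(2.0, "s");
        rd.plus(rt);                   // compiles, but fails the unit check at run time
    }
}

The compile-time version stores nothing but the double, which is why it can match native floating-point speed, but it only works when the units are known while the code is being compiled.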
As an add-on to the other answers, here's how I'd explain it to a layman:
Your source code is like the blueprint of a ship. It defines how the ship should be made.
If you hand off your blueprint to the shipyard, and they find a defect while building the ship, they'll stop building and report it to you immediately, before the ship has ever left the drydock or touched water. This is a compile-time error. The ship was never even actually floating or using its engines. The error was found because it prevented the ship even being made.
When your code compiles, it's like the ship being completed. Built and ready to go. When you execute your code, that's like launching the ship on a voyage. The passengers are boarded, the engines are running and the hull is on the water, so this is runtime. If your ship has a fatal flaw that sinks it on its maiden voyage (or maybe some voyage after for extra headaches) then it suffered a runtime error.
Following on from a previous, similar answer to the question "What is the difference between a run-time error and a compiler error?":
Compilation/compile-time/syntax/semantic errors: compilation or compile-time errors are errors caused by typing mistakes; if we do not follow the proper syntax and semantics of the programming language, compile-time errors are thrown by the compiler. They won't let your program execute a single line until you remove all the syntax errors or fix the compile-time errors.
Example: Missing a semicolon in C or mistyping int as Int.
Runtime errors: runtime errors are the errors that are generated while the program is running. These types of errors will cause your program to behave unexpectedly or may even kill it. They are often referred to as exceptions.
Example: suppose you are reading a file that doesn't exist; this will result in a runtime error.
Here is a quote from Daniel Liang, author of 'Introduction to JAVA programming', on the subject of compilation:
"A program written in a high-level language is called a source program or source code. Because a computer cannot execute a source program, a source program must be translated into machine code for execution. The translation can be done using another programming tool called an interpreter or a compiler." (Daniel Liang, "Introduction to JAVA programming", p8).
...He Continues...
"A compiler translates the entire source code into a machine-code file, and the machine-code file is then executed"
When we punch in high-level/human-readable code this is, at first, useless! It must be translated into a sequence of 'electronic happenings' in your tiny little CPU! The first step towards this is compilation.
Simply put: a compile-time error happens during this phase, while a run-time error occurs later.
Remember: Just because a program is compiled without error does not mean it will run without error.
A run-time error will occur in the ready, running or waiting part of a program's life cycle, while a compile-time error will occur prior to the 'New' stage of the life cycle.
Example of a Compile-time error:
A syntax error - how can your code be compiled into machine-level instructions if it is ambiguous? Your code needs to conform 100% to the syntactical rules of the language, otherwise it cannot be compiled into working machine code.
Example of a run-time error:
Running out of memory - a call to a recursive function, for example, might lead to a stack overflow given a variable of a particular degree! How can this be anticipated by the compiler? It cannot.
And that is the difference between a compile-time error and a run-time error
For example: in a strongly typed language, a type could be checked at compile time or at runtime. Checked at compile time means that the compiler complains if the types are not compatible. Checked at runtime means that you can compile your program just fine, but at runtime it throws an exception.
In simple words, the difference between compile time and run time:
Compile time: the developer writes the program in .java format and it is converted into bytecode, which is a .class file; any error that occurs during this compilation can be defined as a compile-time error.
Run time: the generated .class file is used by the application for its functionality; if the logic turns out to be wrong and throws an error while the program runs, that is a run-time error.
Compile time:
The time taken to convert the source code into machine code so that it becomes an executable is called compile time.
Run time:
When an application is running, it is called run time.
Compile-time errors are syntax errors and missing file reference errors.
Runtime errors happen after the source code has been compiled into an executable program and while the program is running. Examples are program crashes, unexpected program behavior, or features that don't work.
Run time means something happens when you run the program.
Compile time means something happens when you compile the program.
Imagine that you are a boss and you have an assistant and a maid, and you give them a list of tasks to do. The assistant (compile time) grabs the list and checks whether the tasks are understandable and that you didn't write anything in awkward language or syntax. He works out that you want to assign someone to a job, so he assigns him for you, and he works out that you want some coffee, so his role is over. Then the maid (run time) starts to run those tasks: she goes to make you some coffee, but suddenly she finds there is no coffee to make, so she stops, or she acts differently and makes you some tea instead (the program acting differently because it hit an error).
I have always thought of it relative to program processing overhead and how it affects performance, as previously stated. A simple example would be either defining the absolute memory required for my object in code, or not.
A defined boolean takes x memory; this is then fixed in the compiled program and cannot be changed. When the program runs, it knows exactly how much memory to allocate for x.
On the other hand, if I just define a generic object type (i.e. kind of an undefined placeholder, or maybe a pointer to some giant blob), the actual memory required for my object is not known until the program is run and I assign something to it; it must then be evaluated, and memory allocation, etc. will be handled dynamically at run time (more run-time overhead).
How it is dynamically handled would then depend on the language, the compiler, the OS, your code, etc.
On that note, however, it really depends on the context in which you are using run time vs compile time.
Here is an extension to the answer to the question "difference between run-time and compile-time?" -- the differences in overheads associated with run time and compile time.
The run-time performance of the product contributes to its quality by delivering results faster. The compile-time performance of the product contributes to its timeliness by shortening the edit-compile-debug cycle. However, both run-time performance and compile-time performance are secondary factors in achieving timely quality. Therefore, one should consider run-time and compile-time performance improvements only when justified by improvements in overall product quality and timeliness.
We can classify these under two broad groups: static binding and dynamic binding. The classification is based on when the binding is done with the corresponding values. If the references are resolved at compile time, then it is static binding, and if the references are resolved at runtime, then it is dynamic binding. Static binding and dynamic binding are also called early binding and late binding. Sometimes they are also referred to as static polymorphism and dynamic polymorphism.
The major difference between run-time and compile time is:
If there are syntax errors or failed type checks in your code, the compiler throws a compile-time error, whereas run-time errors are only detected while the code is executing.
For example:
int a = 1
int b = a/0;
Here the first line doesn't have a semicolon at the end ---> compile-time error. After fixing that and executing the program, performing the operation for b divides by zero ---> run-time error.
Compile time doesn't look at the output of the functionality provided by your code, whereas run time does.
here's a very simple answer:
Runtime and compile time are programming terms that refer to different stages of software program development.
In order to create a program, a developer first writes source code, which defines how the program will function. Small programs may contain only a few hundred lines of source code, while large programs may contain hundreds of thousands of lines of source code. The source code must be compiled into machine code in order to become an executable program. This compilation process is referred to as compile time. (Think of a compiler as a translator.)
A compiled program can be opened and run by a user. When an application is running, it is called runtime.
The terms "runtime" and "compile time" are often used by programmers to refer to different types of errors. A compile time error is a problem such as a syntax error or missing file reference that prevents the program from successfully compiling. The compiler produces compile time errors and usually indicates what line of the source code is causing the problem.
If a program's source code has already been compiled into an executable program, it may still have bugs that occur while the program is running. Examples include features that don't work, unexpected program behavior, or program crashes. These types of problems are called runtime errors since they occur at runtime.
Look into this example:
public class Test {
    public static void main(String[] args) {
        int[] x = new int[-5]; // no error at compile time
        System.out.println(x.length);
    }
}
The above code compiles successfully; there is no syntax error; it is perfectly valid.
But at run time, it throws the following error.
Exception in thread "main" java.lang.NegativeArraySizeException
at Test.main(Test.java:5)
At compile time certain cases are checked, and after that, at run time, other cases are checked; once the program satisfies all the conditions, you will get an output.
Otherwise, you will get a compile-time or run-time error.
You can understand the compile-time structure of the code by reading the actual code. The run-time structure is not clear unless you understand the pattern that was used.
public class RuntimeVsCompileTime {
    public static void main(String[] args) {
        //test(new D()); COMPILETIME ERROR
        /**
         * Compiler knows that D is not an instance of A
         */
        test(new B());
    }
    /**
     * The compiler has no hint whether the actual type is A, B or C;
     * C c = (C)a; will be checked during runtime
     * @param a
     */
    public static void test(A a) {
        C c = (C)a; // RUNTIME ERROR
    }
}
class A{
}
class B extends A{
}
class C extends A{
}
class D{
}
It's not a good question for S.O. (it's not a specific programming question), but it's not a bad question in general.
If you think it's trivial: what about read-time vs compile-time, and when is this a useful distinction to make? What about languages where the compiler is available at runtime? Guy Steele (no dummy, he) wrote 7 pages in CLTL2 about EVAL-WHEN, which CL programmers can use to control this. 2 sentences are barely enough for a definition, which itself is far short of an explanation.
In general, it's a tough problem that language designers have seemed to try to avoid.
They often just say "here's a compiler, it does compile-time things; everything after that is run-time, have fun". C is designed to be simple to implement, not the most flexible environment for computation. When you don't have the compiler available at runtime, or the ability to easily control when an expression is evaluated, you tend to end up with hacks in the language to fake common uses of macros, or users come up with Design Patterns to simulate having more powerful constructs. A simple-to-implement language can definitely be a worthwhile goal, but that doesn't mean it's the end-all-be-all of programming language design. (I don't use EVAL-WHEN much, but I can't imagine life without it.)
And the problemspace around compile-time and run-time is huge and still largely unexplored. That's not to say S.O. is the right place to have the discussion, but I encourage people to explore this territory further, especially those who have no preconceived notions of what it should be. The question is neither simple nor silly, and we could at least point the inquisitor in the right direction.
Unfortunately, I don't know any good references on this. CLTL2 talks about it a bit, but it's not great for learning about it.
Related
When is a command's compile function called?
My understanding of TCL execution is that, if a command's compile function is defined, it is called first when it comes time to execute the command, before its execution function is called. Take the command append as an example; here is its definition in tclBasic.c:

static CONST CmdInfo builtInCmds[] = {
    {"append", (Tcl_CmdProc *) NULL, Tcl_AppendObjCmd, TclCompileAppendCmd, 1},

Here is my testing script:

$ cat t.tcl
set l [list 1 2 3]
append l 4

I add gdb breakpoints at both functions, TclCompileAppendCmd and Tcl_AppendObjCmd. My expectation is that TclCompileAppendCmd is hit before Tcl_AppendObjCmd. Gdb's target is tclsh8.4 and the argument is t.tcl. What I see is interesting: TclCompileAppendCmd does get hit first, but it is not from t.tcl; rather it is from init.tcl. TclCompileAppendCmd gets hit several times, and they all are from init.tcl. The first time t.tcl executes, it is Tcl_AppendObjCmd that gets hit, not TclCompileAppendCmd. I cannot make sense of it: why is the compile function called for init.tcl but not for t.tcl? Each script should be independently compiled, i.e. the object with the compiled command append from init.tcl is not reused for later scripts, is it?

[UPDATE] Thanks Brad for the tip: after I move the script to a proc, I can see TclCompileAppendCmd get hit.
The compilation function (TclCompileAppendCmd in your example) is called by the bytecode compiler when it wants to issue bytecode for a particular instance of that particular command. The bytecode compiler also has a fallback if there is no compilation function for a command: it issues instructions to invoke the standard implementation (which would be Tcl_AppendObjCmd in this case; the NULL in the other field causes Tcl to generate a thunk in case someone really insists on using a particular API, but you can ignore that). That's a useful behaviour, because it is how operations like I/O are handled; the overhead of calling a standard command implementation is pretty small by comparison with the overhead of doing disk or network I/O.

But when does the bytecode compiler run? On one level, it runs whenever the rest of Tcl asks for it to be run. Simple! But that's not really helpful to you. More to the point, it runs whenever Tcl evaluates a script value in a Tcl_Obj that doesn't already have bytecode type (or if the saved bytecode indicates that it is for a different resolution context or different compilation epoch), except if the evaluation has asked to not be bytecode compiled by the flag TCL_EVAL_DIRECT to Tcl_EvalObjEx or Tcl_EvalEx (which is a convenient wrapper for Tcl_EvalObjEx). It's that flag which is causing you problems.

When is that flag used? It's actually pretty simple: it's used when some code is believed to be going to be run only once, because then the cost of compilation is larger than the cost of using the interpretation path. It's particularly used by Tk's bind command for running substituted script callbacks, but it is also used by source and the main code of tclsh (essentially anything using Tcl_FSEvalFileEx or its predecessors/wrappers Tcl_FSEvalFile and Tcl_EvalFile). I'm not 100% sure whether that's the right choice for a sourced context, but it is what happens now. However, there is a workaround that is (highly!) worthwhile if you're handling looping: you can put the code in a compiled context within that source using a procedure that you call immediately, or use an apply (I recommend the latter these days). init.tcl uses these tricks, which is why you were seeing it compile things.

And no, we don't normally save compiled scripts between runs of Tcl. Our compiler is fast enough that that's not really worthwhile; the cost of verifying that the loaded compiled code is correct for the current interpreter is high enough that it's actually faster to recompile from the source code. Our current compiler is fast (I'm working on a slower one that generates enormously better code). There's a commercial tool suite from ActiveState (the Tcl Dev Kit) which includes an ahead-of-time compiler, but that's focused around shrouding code for the purposes of commercial deployment and not speed.
What is a runtime environment for supposedly "no-overhead" systems languages?
Specifically, I'm talking more about C++ and Rust than others. I don't understand how C++ has a "runtime" in the sense that Java and C# have a runtime -- while Java and C# run on top of a virtual machine with its own encapsulated abstractions and such, I don't get how C++ might have one. Take virtual tables for C++, for example. Do we consider dynamic_cast<type> a part of C++'s runtime functionality, or are we talking about C++'s structure for vtables in general? Can we consider new and delete a part of the C++ runtime environment? What exactly constitutes a runtime?

For example, here we have a Rust article on its own runtime, which describes it as:
"The Rust runtime can be viewed as a collection of code which enables services like I/O, task spawning, TLS, etc. It's essentially an ephemeral collection of objects which enable programs to perform common tasks more easily."
But is this not the function of a standard library or language features, not an actual runtime? What constitutes this very thin but existent runtime? Even Bjarne expresses his thoughts that C++ has "zero-overhead abstraction", but if C++ has a runtime, does this not imply that C++ does indeed have some sort of "backend" code to orchestrate its own very light but still existent abstractions?

TL;DR: What is a runtime and/or runtime environment in the context of languages like C++ and Rust that have supposedly "zero overhead" and don't have "heavy" runtimes like Java or C#?

Edit: I suspect that I'm just missing something about semantics here...
C++ requires a few things that aren't required in something like C. For example, it typically involves some overhead for exception handling. Although it may not be strictly required, most systems have at least a tiny bit of a top-level exception handler to tell you that the program shut down if an exception was thrown but not caught anywhere. It's open to question whether it qualifies as a "runtime environment", but the compiler also generates code to search up the stack and find a handler for a particular exception when one is thrown.

On one hand, this is exceptionally tiny (bordering on negligible) compared to something like a complete JVM. On the other hand, it's quite large and complex relative to what happens by default in something like a JVM or Microsoft's CLR.

As to zero overhead... well, it depends a bit on your viewpoint. Exception handling code can normally be moved out of the main stream of the code, so it doesn't impose any overhead in terms of execution speed as long as no exception is thrown. It does, however, require extra code, so there can be (and often is) quite a bit of overhead if you look at executable sizes. Just for example, doing a quick look at a "hello world" program, it looks like turning off exception handling reduces the executable size by about 2 kilobytes with VC++. Admittedly, 2K isn't a whole lot of extra code -- on the other hand, that's just what's added to essentially the most trivial program humanly possible. For a program that actually does something, it's undoubtedly more. In the end, it's not enough that most people really have a reason to care, but it does exist nonetheless.

As to how this is handled, it involves a combination of code that's linked in from the standard library and code generated by the compiler (but the exact details vary with the implementation -- for example, most 32-bit Windows compilers used Microsoft's Structured Exception Handling, in which case the operating system provides part of the code, but for 64-bit Windows, I believe all of them deal with exception handling on their own, which increases executable sizes more but reduces overhead in terms of speed).
cuda trace emulation -Need some expert insight
I am working on a GPU trace emulation tool in Windows as part of my research work in grad school. I am working on CUDA runtime trace emulation to be specific. I use simple DLL injection using MS Detours to enable interception of the CUDA runtime APIs. I store the API calls and their parameters in a trace file. I get into some problems while trying to emulate the API from my trace file (I use the word playback to denote this action).

A typical trace file begins by making calls to __cudaRegisterFatBinary and __cudaRegisterFunction. This is followed by a call to cudaMalloc.

What I did:
1) I came across the famous GPUOcelot and I found the cubin structure that Nvidia is using right now. I am using that to save the address parameter of cudaRegisterFatBinary in intercept mode, and I am using the pointer in the playback for _cudaRegisterFatBinary by repopulating the structure in memory.
2) In _cudaRegisterFunction I am not sure what the parameters hostFunction, Device Function and Device Name refer to. I mean, I don't understand how I could populate them while playing back from my trace file. I am just saving the pointer from the original execution and using it to imitate the call. But there is no way of knowing whether the function goes through fine, since it does not have a return value.
3) cudaMalloc, following these two entry-point functions, returns CUDA error code 11. It is "cuda invalid value" according to the Nvidia documentation. I have no idea why this should be the case. I am assuming that something is wrong with the previous two function calls. I also have a feeling that something is wrong with implicit primary context creation by the CUDA runtime.

Can someone give me some insights about CUDA runtime execution and point me to what I might be missing? I know it's a ton of information without any useful code. I don't know which part of the code to post here; I will do it when people start taking interest in my question and ask me specific things about my project. Initially I am just hoping that I am missing something big and high level that one of you can spot. I greatly appreciate your time and interest!
Sounds very interesting overall. Your "Error: Cuda invalid value" could be related to the params of _cudaRegisterFunction. The param 'DeviceName' sounds like it identifies which GPU (card?) to use. Check the CUDA SDK; there are many demos that enumerate the GPUs on the system, so perhaps those values are valid for 'DeviceName'. As for 'hostFunction' and 'deviceFunction', these sound like either function IDs or perhaps function pointers. Also, you can call 'cudaGetLastError()' to test whether the function call was successful (it returns 'cudaSuccess' if everything is ok... take a look at the error logging macros in the SDK). Good luck!
Memory access exception handling with MinGW on XP
I am trying to use the MinGW GCC toolchain on XP with some vendor code from an embedded project that accesses high memory (>0xFFFF0000) which is, I believe, beyond the virtual memory address space allowed in 'civilian' processes in XP. I want to handle the memory access exceptions myself in some way that will permit execution to continue at the instruction following the exception, i.e. ignore it. Is there some way to do it with MinGW? Or with the MS toolchain?

The vastly simplified picture is thus:

/////////////
// MyFile.c
MyFunc(){
    VendorFunc_A();
}

/////////////////
// VendorFile.c
VendorFunc_A(){
    VendorFunc_DoSomeDesirableSideEffect();
    VendorFunc_B();
    VendorFunc_DoSomeMoreGoodStuff();
}

VendorFunc_B(){
    int *pHW_Reg = 0xFFFF0000;
    *pHW_Reg = 1; // Mem Access EXCEPTION HERE
    return(0);    // I want to continue here
}

More detail: I am developing an embedded project on an Atmel AVR32 platform with freeRTOS using the AVR32-gcc toolchain. It is desirable to develop/debug high-level application code independent of the hardware (and the slow avr32 simulator). Various gcc, makefile and macro tricks permit me to build my Avr32/freeRTOS project in the MinGW/Win32 freeRTOS port environment and I can debug in eclipse/gdb. But the high-mem HW access in the (vendor supplied) Avr32 code crashes the MinGW exe (due to the mem access exception).

I am contemplating some combination of these approaches:
1) Manage the access exceptions in SW. Ideally I'd be creating a kind of HW simulator but that'd be difficult and involve some gnarly assembly code, I think. A lot of the exceptions can likely just be ignored.
2) Creating a modified copy of the Avr32 header files so as to relocate the HW register #defines into user process address space (and create some structs and linker sections that commit those areas of virtual memory space)
3) Conditional compilation of function calls that result in highMem/HW access, or alternatively more macro tricks, so as to minimize code cruft in the 'real' HW target code. (There are other developers on this project.)

Any suggestions or helpful links would be appreciated. This page is on the right track, but seems overly complicated, and is C++ which I'd like to avoid. But I may try it yet, absent other suggestions. http://www.programmingunlimited.net/siteexec/content.cgi?page=mingw-seh
You need to figure out why the vendor code wants to write 1 to address 0xFFFF0000 in the first place, and then write a custom VendorFunc_B() function that emulates this behavior. It is likely that 0xFFFF0000 is a hardware register that will do something special when written to (eg. change baud rate on a serial port or power up the laser or ...). When you know what will happen when you write to this register on the target hardware, you can rewrite the vendor code to do something appropriate in the windows code (eg. write the string "Starting laser" to a log file). It is safe to assume that writing 1 to address 0xFFFF0000 on Windows XP will not be the right thing to do, and the Windows XP memory protection system detects this and terminates your program.
I had a similar issue recently, and this is the solution I settled on: trap memory accesses inside a standard executable built with MinGW. First of all, you need to find a way to remap those address ranges (maybe some undef/define combos) to some usable memory. If you can't do this, maybe you can hook through a seg-fault and handle the write yourself. I also use this to "simulate" some specific HW behavior inside a single executable, for some already-written code. However, in my case, I found a way to redefine all the register-access macros early on.
What is "runtime"?
I have heard about things like "C Runtime", "Visual C++ 2008 Runtime", ".NET Common Language Runtime", etc. What is "runtime" exactly? What is it made of? How does it interact with my code? Or maybe more precisely, how is my code controlled by it?

When coding assembly language on Linux, I could use the INT instruction to make the system call. So, is the runtime nothing but a bunch of pre-fabricated functions that wrap the low-level function into more abstract and high-level functions? But doesn't this seem more like the definition for the library, not for the runtime? Are "runtime" and "runtime library" two different things?

ADD 1
These days, I am thinking maybe runtime has something in common with the so-called virtual machine, such as the JVM. Here's the quotation that leads to such a thought:
"This compilation process is sufficiently complex to be broken into several layers of abstraction, and these usually involve three translators: a compiler, a virtual machine implementation, and an assembler." --- The Elements of Computing Systems (Introduction, The Road Down To Hardware Land)

ADD 2
The book Expert C Programming: Deep C Secrets, Chapter 6 Runtime Data Structures, is a useful reference for this question.

ADD 3 - 7:31 AM 2/28/2021
Here's some of my perspective after getting some knowledge about processor design. The whole computer thing is just multiple levels of abstraction. It goes from elementary transistors all the way up to the running program. For any level N of abstraction, its runtime is the immediate level N-1 of abstraction that goes below it. And it is God that gives us the level 0 of abstraction.
Runtime describes software/instructions that are executed while your program is running, especially those instructions that you did not write explicitly, but are necessary for the proper execution of your code. Low-level languages like C have very small (if any) runtime. More complex languages like Objective-C, which allows for dynamic message passing, have a much more extensive runtime. You are correct that runtime code is library code, but library code is a more general term, describing the code produced by any library. Runtime code is specifically the code required to implement the features of the language itself.
Runtime is a general term that refers to any library, framework, or platform that your code runs on. The C and C++ runtimes are collections of functions. The .NET runtime contains an intermediate language interpreter, a garbage collector, and more.
As per Wikipedia: runtime library/run-time system. In computer programming, a runtime library is a special program library used by a compiler, to implement functions built into a programming language, during the runtime (execution) of a computer program. This often includes functions for input and output, or for memory management. A run-time system (also called runtime system or just runtime) is software designed to support the execution of computer programs written in some computer language. The run-time system contains implementations of basic low-level commands and may also implement higher-level commands and may support type checking, debugging, and even code generation and optimization. Some services of the run-time system are accessible to the programmer through an application programming interface, but other services (such as task scheduling and resource management) may be inaccessible. Re: your edit, "runtime" and "runtime library" are two different names for the same thing.
The runtime or execution environment is the part of a language implementation which executes code and is present at run time; the compile-time part of the implementation is called the translation environment in the C standard.
Examples:
the Java runtime consists of the virtual machine and the standard library
a common C runtime consists of the loader (which is part of the operating system) and the runtime library, which implements the parts of the C language which are not built into the executable by the compiler; in hosted environments, this includes most parts of the standard library
I'm not crazy about the other answers here; they're too vague and abstract for me. I think more in stories. Here's my attempt at a better answer.

A BASIC example
Let's say it's 1985 and you write a short BASIC program on an Apple II:

] 10 PRINT "HELLO WORLD!"
] 20 GOTO 10

So far, your program is just source code. It's not running, and we would say there is no "runtime" involved with it. But now I run it:

] RUN

How is it actually running? How does it know how to send the string parameter from PRINT to the physical screen? I certainly didn't provide any system information in my code, and PRINT itself doesn't know anything about my system. Instead, RUN is actually a program itself -- its code tells it how to parse my code, how to execute it, and how to send any relevant requests to the computer's operating system. The RUN program provides the "runtime" environment that acts as a layer between the operating system and my source code. The operating system itself acts as part of this "runtime", but we usually don't mean to include it when we talk about a "runtime" like the RUN program.

Types of compilation and runtime

Compiled binary languages
In some languages, your source code must be compiled before it can be run. Some languages compile your code into machine language -- it can be run by your operating system directly. This compiled code is often called "binary" (even though every other kind of file is also in binary :). In this case, there is still a minimal "runtime" involved -- but that runtime is provided by the operating system itself. The compile step means that many statements that would cause your program to crash are detected before the code is ever run.

C is one such language; when you run a C program, it's totally able to send illegal requests to the operating system (like, "give me control of all of the memory on the computer, and erase it all"). If an illegal request is hit, usually the OS will just kill your program and not tell you why, and dump the contents of that program's memory at the time it was killed to a .dump file that's pretty hard to make sense of. But sometimes your code has a command that is a very bad idea, but the OS doesn't consider it illegal, like "erase a random bit of memory this program is using"; that can cause super weird problems that are hard to get to the bottom of.

Bytecode languages
Other languages (e.g. Java, Python) compile your code into a language that the operating system can't read directly, but a specific runtime program can read your compiled code. This compiled code is often called "bytecode". The more elaborate this runtime program is, the more extra stuff it can do on the side that your code did not include (even in the libraries you use) -- for instance, the Java runtime environment ("JRE") and Python runtime environment can keep track of memory assignments that are no longer needed, and tell the operating system it's safe to reuse that memory for something else, and they can catch situations where your code would try to send an illegal request to the operating system, and instead exit with a readable error. All of this overhead makes them slower than compiled binary languages, but it makes the runtime powerful and flexible; in some cases, it can even pull in other code after it starts running, without having to start over. The compile step means that many statements that would cause your program to crash are detected before the code is ever run; and the powerful runtime can keep your code from doing stupid things (e.g., you can't "erase a random bit of memory this program is using").

Scripting languages
Still other languages don't precompile your code at all; the runtime does all of the work of reading your code line by line, interpreting it and executing it. This makes them even slower than "bytecode" languages, but also even more flexible; in some cases, you can even fiddle with your source code as it runs! Though it also means that you can have a totally illegal statement in your code, and it could sit there in your production code without drawing attention, until one day it is run and causes a crash. These are generally called "scripting" languages; they include Javascript, Perl, and PHP. Some of these provide cases where you can choose to compile the code to improve its speed (e.g., Javascript's WebAssembly project). So Javascript can allow users on a website to see the exact code that is running, since their browser is providing the runtime.

This flexibility also allows for innovations in runtime environments, like node.js, which is both a code library and a runtime environment that can run your Javascript code as a server, which involves behaving very differently than if you tried to run the same code in a browser.
In my understanding, runtime is exactly what it means - the time when the program is run. You can say something happens at runtime / run time or at compile time.
I think runtime and runtime library should be (if they aren't) two separate things. "C runtime" doesn't seem right to me; I call it the "C runtime library".
Answers to your other questions: I think the term runtime can be extended to include also the environment and the context of the program when it is run, so:
it consists of everything that can be called "environment" during the time when the program is run, for example other processes, the state of the operating system and used libraries, the state of other processes, etc.
it doesn't interact with your code in a general sense; it just defines in what circumstances your code works and what is available to it during execution.
This answer is to some extent just my opinion, not a fact or definition.
Matt Ball answered it correctly. I would illustrate it with examples.
Consider running a program compiled with the Turbo/Borland C/C++ compiler (version 3.1, from the year 1991) and letting it run under a 32-bit version of Windows like Win 98/2000, etc. It's a 16-bit compiler, and you will see that all your programs have 16-bit pointers. Why is that so when your OS is 32-bit? Because your compiler has set up a 16-bit execution environment and the 32-bit version of the OS supports it.
What is commonly called the JRE (Java Runtime Environment) provides a Java program with all the resources it may need to execute. Actually, the runtime environment is a product of the idea of virtual machines. A virtual machine implements the raw interface between the hardware and what a program may need to execute. The runtime environment adopts these interfaces and presents them for the use of the programmer. A compiler developer would need these facilities to provide an execution environment for its programs.
Run time is exactly where your code comes to life and you can see a lot of the important things your code does.
Runtime has the responsibility of allocating memory, freeing memory, and using the operating system's subsystems (file services, IO services, network services, etc.).
Your code will be "working in theory" until you practically run it, and runtime is the friend that helps in achieving this.
A runtime could denote the current phase of program life (runtime / compile time / load time / link time), or it could mean a runtime library, which forms the basic low-level actions that interface with the execution environment, or it could mean a runtime system, which is the same as an execution environment.
In the case of C programs, the runtime is the code that sets up the stack, the heap, etc., which the C environment requires. It essentially sets up the environment that is promised by the language. (It could have a runtime library component, crt0.lib or something like that, in the case of C.)
Runtime basically means the time when the program interacts with the hardware and operating system of a machine. C has only a minimal runtime of its own; beyond that, it relies on services requested from the operating system to execute itself.
I found that the folder structure of a language distribution gives a very insightful context for understanding what runtime is: there is the 'source', there is the 'SDK' or 'Software Development Kit', and then there is the runtime, i.e. the stuff that gets run - at runtime. For instance, a win32 zip containing .exe-s and .dll-s. So e.g. the C runtime would be files like this -- C runtime libraries, .so-s or .dll-s -- that you run at runtime, made special by their (or their contents' or purposes') inclusion in the definition of the C language (on 'paper'), then implemented by your C implementation of choice. And then you get the runtime of that implementation, to use it and to build upon it. That is, with a little polarisation, the runnable files that the users of your new C-based program will need. As a developer of a C-based program, so do you, but you need the C compiler and the C library headers, too; the users don't need those.
If my understanding from reading the above answers is correct, Runtime is basically 'background processes' such as garbage collection, memory-allocation, basically any processes that are invoked indirectly, by the libraries / frameworks that your code is written in, and specifically those processes that occur after compilation, while the application is running.
A fuller description of a runtime seems to be: the additional environment that provides programming-language-related functions required at run time by (non-web) application software. The runtime implements programming-language-related functions, which remain the same across application domains, including math operations, memory operations, messaging, OS or DB abstraction services, etc. The runtime must in some way be connected with the running application to be useful, such as being loaded into the application's memory space as a shared dynamic library, a virtual machine process inside which the application runs, or a service process communicating with the application.
Runtime is somewhat the opposite of design time and compile time/link time. Historically the term comes from slow mainframe environments, where machine time was expensive.