When is a command's compile function called? - tcl

My understanding of Tcl execution is that if a command's compile function is defined, it is called first, when the command is about to be executed, before its execution function is called.
Take command append as example, here is its definition in tclBasic.c:
static CONST CmdInfo builtInCmds[] = {
    {"append", (Tcl_CmdProc *) NULL, Tcl_AppendObjCmd,
        TclCompileAppendCmd, 1},
Here is my testing script:
$ cat t.tcl
set l [list 1 2 3]
append l 4
I added gdb breakpoints at both functions, TclCompileAppendCmd and Tcl_AppendObjCmd. My expectation was that TclCompileAppendCmd would be hit before Tcl_AppendObjCmd.
The gdb target is tclsh8.4 and the argument is t.tcl.
What I see is interesting:
TclCompileAppendCmd does get hit first, but not from t.tcl;
rather, it is hit from init.tcl.
TclCompileAppendCmd gets hit several times, and all of the hits are from init.tcl.
The first time t.tcl executes, it is Tcl_AppendObjCmd that gets hit, not TclCompileAppendCmd.
I cannot make sense of it:
Why is the compile function called for init.tcl but not for t.tcl?
Each script should be compiled independently, i.e. the object holding the compiled append command from init.tcl is not reused for later scripts, is it?
[UPDATE]
Thanks Brad for the tip; after I moved the script into a proc, I can see that TclCompileAppendCmd is hit.

The compilation function (TclCompileAppendCmd in your example) is called by the bytecode compiler when it wants to issue bytecode for a particular instance of that particular command. The bytecode compiler also has a fallback if there is no compilation function for a command: it issues instructions to invoke the standard implementation (which would be Tcl_AppendObjCmd in this case; the NULL in the other field causes Tcl to generate a thunk in case someone really insists on using a particular API but you can ignore that). That's a useful behaviour, because it is how operations like I/O are handled; the overhead of calling a standard command implementation is pretty small by comparison with the overhead of doing disk or network I/O.
But when does the bytecode compiler run?
On one level, it runs whenever the rest of Tcl asks for it to be run. Simple! But that's not really helpful to you. More to the point, it runs whenever Tcl evaluates a script value in a Tcl_Obj that doesn't already have bytecode type (or if the saved bytecode indicates that it is for a different resolution context or different compilation epoch) except if the evaluation has asked to not be bytecode compiled by the flag TCL_EVAL_DIRECT to Tcl_EvalObjEx or Tcl_EvalEx (which is a convenient wrapper for Tcl_EvalObjEx). It's that flag which is causing you problems.
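Roughly, from the C API side, the distinction is between these two calls (a minimal sketch of embedding code, assuming an existing interp):

/* Evaluating the same script object with and without the flag. */
Tcl_Obj *script = Tcl_NewStringObj("append l 4", -1);
Tcl_IncrRefCount(script);
Tcl_EvalObjEx(interp, script, 0);               /* may be bytecode-compiled and cached */
Tcl_EvalObjEx(interp, script, TCL_EVAL_DIRECT); /* evaluated directly, no bytecode kept */
Tcl_DecrRefCount(script);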
When is that flag used?
It's actually pretty simple: it's used when some code is believed to be going to be run only once because then the cost of compilation is larger than the cost of using the interpretation path. It's particularly used by Tk's bind command for running substituted script callbacks, but it is also used by source and the main code of tclsh (essentially anything using Tcl_FSEvalFileEx or its predecessors/wrappers Tcl_FSEvalFile and Tcl_EvalFile). I'm not 100% sure whether that's the right choice for a sourced context, but it is what happens now. However, there is a workaround that is (highly!) worthwhile if you're handling looping: you can put the code in a compiled context within that source using a procedure that you call immediately or use an apply (I recommend the latter these days). init.tcl uses these tricks, which is why you were seeing it compile things.
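A minimal sketch of that workaround applied to the t.tcl script above (the apply form needs Tcl 8.5 or later; with the tclsh8.4 from the question, use the proc form):

# Evaluated directly when sourced, so no bytecode compilation:
set l [list 1 2 3]
append l 4

# Compiled, because the body runs inside a procedure:
proc main {} {
    set l [list 1 2 3]
    append l 4
}
main

# Or, in Tcl 8.5+, without naming a procedure:
apply {{} {
    set l [list 1 2 3]
    append l 4
}}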
And no, we don't normally save compiled scripts between runs of Tcl. Our compiler is fast enough that that's not really worthwhile; the cost of verifying that the loaded compiled code is correct for the current interpreter is high enough that it's actually faster to recompile from the source code. (Our current compiler is fast; I'm working on a slower one that generates enormously better code.) There's a commercial tool suite from ActiveState (the Tcl Dev Kit) which includes an ahead-of-time compiler, but that's focused around shrouding code for the purposes of commercial deployment and not speed.


How many arguments are passed in a function call?

I wish to analyze assembly code that calls functions, and for each 'call' find out how many arguments are passed to the function. I assume that the target functions are not accessible to me, but only the calling code.
I limit myself to code that was compiled with GCC only, and to System V ABI calling convention.
I tried scanning back from each 'call' instruction, but I failed to find a good enough convention (e.g., where to stop scanning? what happens on two subsequent calls with the same arguments?). Assistance is highly appreciated.
Reposting my comments as an answer.
You can't reliably tell in optimized code. And even doing a good job most of the time probably requires human-level AI. e.g. did a function leave a value in RSI because it's a second argument, or was it just using RSI as a scratch register while computing a value for RDI (the first argument)? As Ross says, gcc-generated code for stack-args calling conventions has more obvious patterns, but still nothing easy to detect.
It's also potentially hard to tell the difference between stores that spill locals to the stack vs. stores that store args to the stack (since gcc can and does use mov stores for stack-args sometimes: see -maccumulate-outgoing-args). One way to tell the difference is that locals will be reloaded later, but args are always assumed to be clobbered.
what happens on two subsequent calls with the same arguments?
Compilers always re-write args before making another call, because they assume that functions clobber their args (even on the stack). The ABI says that functions "own" their args. Compilers do make code that does this (see comments), but compiler-generated code isn't always willing to re-purpose the stack memory holding its args for storing completely different args in order to enable tail-call optimization. :( This is hand-wavey because I don't remember exactly what I've seen as far as missed tail-call optimization opportunities.
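A small C sketch of what that means (hypothetical functions; the comments describe typical x86-64 SysV code generation, not a guaranteed output):

/* Even though both calls pass the same arguments, the caller normally
   re-materializes them before the second call, because the callee is
   assumed to have clobbered its argument registers/stack slots. */
void use(int a, int b);

void caller(int a, int b) {
    use(a, b);
    use(a, b);   /* a and b are moved into EDI/ESI again before this call */
}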
Yet if arguments are passed on the stack, then that should probably be the easier case (and I conclude that all 6 registers are used as well).
Even that isn't reliable. The System V x86-64 ABI is not simple.
int foo(int, big_struct, int) would pass the two integer args in regs, but pass the big struct by value on the stack. FP args are also a major complication. You can't conclude that seeing stuff on the stack means that all 6 integer arg-passing slots are used.
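As a concrete sketch (hypothetical types; the struct is deliberately larger than 16 bytes so it is classified MEMORY under the SysV x86-64 ABI):

/* x is passed in EDI and y in ESI, while s (32 bytes) is copied onto the stack. */
struct big_struct { long a, b, c, d; };
int foo(int x, struct big_struct s, int y);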
The Windows x64 ABI is significantly different: For example, if the 2nd arg (after adding a hidden return-value pointer if needed) is integer/pointer, it always goes in RDX, regardless of whether the first arg went in RCX, XMM0, or on the stack. It also requires the caller to leave "shadow space".
So you might be able to come up with some heuristics that will work OK for un-optimized code. Even that will be hard to get right.
For optimized code generated by different compilers, I think it would be more work to implement anything even close to useful than you'd ever save by having it.

NVRTC and __device__ functions

I am trying to optimize my simulator by leveraging run-time compilation. My code is pretty long and complex, but I identified a specific __device__ function whose performance can be strongly improved by removing all global memory accesses.
Does CUDA allow the dynamic compilation and linking of a single __device__ function (not a __global__), in order to "override" an existing function?
I am pretty sure the really short answer is no.
Although CUDA has dynamic/JIT device linker support, it is important to remember that the linkage process itself is still static.
So you can't delay load a particular function in an existing compiled GPU payload at runtime as you can in a conventional dynamic link loading environment. And the linker still requires that a single instance of all code objects and symbols be present at link time, whether that is a priori or at runtime. So you would be free to JIT link together precompiled objects with different versions of the same code, as long as a single instance of everything is present when the session is finalised and the code is loaded into the context. But that is as far as you can go.
It looks like you have a "main" kernel with a part that is "switchable" at run time.
You can definitely do this using nvrtc. You'd need to go about doing something like this:
Instead of compiling the main kernel ahead of time, store it as a string to be compiled and linked at runtime.
Let's say the main kernel calls "myFunc", a __device__ function that is chosen at runtime.
You can generate the appropriate "myFunc" function based on equations at run time.
Now you can create an nvrtc program from these sources using nvrtcCreateProgram.
That's about it. The key is to delay compiling the main kernel until you need it at run time. You may also want to cache your kernels somehow so you end up compiling only once.
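A rough host-side sketch of those steps (hypothetical names such as myFuncSource, mainKernelSource and "mainKernel"; error checking, include paths and context setup are omitted):

#include <nvrtc.h>
#include <cuda.h>
#include <string>
#include <vector>

// Compile the runtime-generated __device__ function together with the stored
// main-kernel source, then load the resulting PTX with the driver API.
CUfunction buildMainKernel(const std::string &myFuncSource,
                           const std::string &mainKernelSource)
{
    // One translation unit containing both myFunc and the kernel that calls it.
    std::string src = myFuncSource + "\n" + mainKernelSource;

    nvrtcProgram prog;
    nvrtcCreateProgram(&prog, src.c_str(), "main_kernel.cu", 0, nullptr, nullptr);

    const char *opts[] = { "--gpu-architecture=compute_70" };  // assumption: target arch
    nvrtcCompileProgram(prog, 1, opts);

    size_t ptxSize = 0;
    nvrtcGetPTXSize(prog, &ptxSize);
    std::vector<char> ptx(ptxSize);
    nvrtcGetPTX(prog, ptx.data());
    nvrtcDestroyProgram(&prog);

    // Assumes a CUDA context is already current on the calling thread.
    CUmodule module;
    CUfunction kernel;
    cuModuleLoadDataEx(&module, ptx.data(), 0, nullptr, nullptr);
    cuModuleGetFunction(&kernel, module, "mainKernel");  // assumption: kernel name
    return kernel;
}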
There is one problem I foresee. nvrtc may not find the curand device calls, which may cause some issues. One workaround would be to look at the header the device function call is in and use nvcc to compile the appropriate device function to PTX. You can store the resulting PTX as text and use cuLinkAddData to link it with your module. You can find more information in this section.

Compiling TCL libraries with TCL_MEM_DEBUG

I compiled the Tcl libraries with the memory-debug flag. But when I tried to use the libraries in my application, I couldn't see any messages in the console. Will the trace messages go to standard output (the terminal), or will there be log files that the messages are written to?
When you compile Tcl with the memory debugging enabled (using a Posix configuration style, this means that you passed in --enable-symbols=mem or --enable-symbols=all to configure; I'm not certain about what happens with Windows) there is a substantial amount of extra checking of memory allocation handling by default, and an extra Tcl command — memory — is defined. Some memory subcommands do cause messages to be written to stderr; you'll need to be running inside a suitable console in order to see them, and this can be something of an issue on Windows if you are not aware of it. Other commands will dump things to a named file.
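For example, in a memory-debug build you can do something along these lines (a sketch; the file name is arbitrary and exact report formats vary by version):

# Write a line to stderr for every allocation and free from now on:
memory trace on
# Dump a list of the currently allocated blocks to a named file:
memory active live_allocations.txt
# Return a report of overall allocation statistics:
memory info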
FWIW, when developing Tcl I usually build with --enable-symbols=all except when doing performance testing. The various debugging options are known to have substantial impacts on the speed of Tcl's implementation (which is why it is a compile option rather than being always present, and consequently why the interface is rather rougher than for the rest of Tcl).

What is "runtime"?

I have heard about things like "C Runtime", "Visual C++ 2008 Runtime", ".NET Common Language Runtime", etc.
What is "runtime" exactly?
What is it made of?
How does it interact with my code? Or maybe more precisely, how is my code controlled by it?
When coding assembly language on Linux, I could use the INT instruction to make a system call. So, is the runtime nothing but a bunch of pre-fabricated functions that wrap low-level functionality into more abstract, higher-level functions? But doesn't this seem more like the definition of a library, not of a runtime?
Are "runtime" and "runtime library" two different things?
ADD 1
These days, I am thinking maybe the runtime has something in common with the so-called virtual machine, such as the JVM. Here's the quotation that leads to such a thought:
This compilation process is sufficiently complex to be broken into several layers of abstraction, and these usually involve three translators: a compiler, a virtual machine implementation, and an assembler. --- The Elements of Computing Systems (Introduction, The Road Down To Hardware Land)
ADD 2
The book Expert C Programming: Deep C Secrets, Chapter 6, Runtime Data Structures, is a useful reference for this question.
ADD 3 - 7:31 AM 2/28/2021
Here's some of my perspective after getting some knowledge about processor design. The whole computer thing is just multiple levels of abstraction, going from elementary transistors all the way up to the running program. For any level N of abstraction, its runtime is the level N-1 of abstraction immediately below it. And it is God that gives us level 0 of abstraction.
Runtime describes software/instructions that are executed while your program is running, especially those instructions that you did not write explicitly, but are necessary for the proper execution of your code.
Low-level languages like C have a very small (if any) runtime. More complex languages like Objective-C, which allows for dynamic message passing, have a much more extensive runtime.
You are correct that runtime code is library code, but library code is a more general term, describing the code produced by any library. Runtime code is specifically the code required to implement the features of the language itself.
Runtime is a general term that refers to any library, framework, or platform that your code runs on.
The C and C++ runtimes are collections of functions.
The .NET runtime contains an intermediate language interpreter, a garbage collector, and more.
As per Wikipedia: runtime library/run-time system.
In computer programming, a runtime library is a special program library used by a compiler, to implement functions built into a programming language, during the runtime (execution) of a computer program. This often includes functions for input and output, or for memory management.
A run-time system (also called runtime system or just runtime) is software designed to support the execution of computer programs written in some computer language. The run-time system contains implementations of basic low-level commands and may also implement higher-level commands and may support type checking, debugging, and even code generation and optimization.
Some services of the run-time system are accessible to the programmer through an application programming interface, but other services (such as task scheduling and resource management) may be inaccessible.
Re: your edit, "runtime" and "runtime library" are two different names for the same thing.
The runtime or execution environment is the part of a language implementation which executes code and is present at run-time; the compile-time part of the implementation is called the translation environment in the C standard.
Examples:
the Java runtime consists of the virtual machine and the standard library
a common C runtime consists of the loader (which is part of the operating system) and the runtime library, which implements the parts of the C language which are not built into the executable by the compiler; in hosted environments, this includes most parts of the standard library
I'm not crazy about the other answers here; they're too vague and abstract for me. I think more in stories. Here's my attempt at a better answer.
a BASIC example
Let's say it's 1985 and you write a short BASIC program on an Apple II:
] 10 PRINT "HELLO WORLD!"
] 20 GOTO 10
So far, your program is just source code. It's not running, and we would say there is no "runtime" involved with it.
But now I run it:
] RUN
How is it actually running? How does it know how to send the string parameter from PRINT to the physical screen? I certainly didn't provide any system information in my code, and PRINT itself doesn't know anything about my system.
Instead, RUN is actually a program itself -- its code tells it how to parse my code, how to execute it, and how to send any relevant requests to the computer's operating system. The RUN program provides the "runtime" environment that acts as a layer between the operating system and my source code. The operating system itself acts as part of this "runtime", but we usually don't mean to include it when we talk about a "runtime" like the RUN program.
Types of compilation and runtime
Compiled binary languages
In some languages, your source code must be compiled before it can be run. Some languages compile your code into machine language -- it can be run by your operating system directly. This compiled code is often called "binary" (even though every other kind of file is also in binary :).
In this case, there is still a minimal "runtime" involved -- but that runtime is provided by the operating system itself. The compile step means that many statements that would cause your program to crash are detected before the code is ever run.
C is one such language; when you run a C program, it's totally able to send illegal requests to the operating system (like, "give me control of all of the memory on the computer, and erase it all"). If an illegal request is hit, usually the OS will just kill your program and not tell you why, and dump the contents of that program's memory at the time it was killed to a .dump file that's pretty hard to make sense of. But sometimes your code has a command that is a very bad idea, but the OS doesn't consider it illegal, like "erase a random bit of memory this program is using"; that can cause super weird problems that are hard to get to the bottom of.
Bytecode languages
Other languages (e.g. Java, Python) compile your code into a language that the operating system can't read directly, but a specific runtime program can read your compiled code. This compiled code is often called "bytecode".
The more elaborate this runtime program is, the more extra stuff it can do on the side that your code did not include (even in the libraries you use) -- for instance, the Java runtime environment ("JRE") and Python runtime environment can keep track of memory assignments that are no longer needed, and tell the operating system it's safe to reuse that memory for something else, and it can catch situations where your code would try to send an illegal request to the operating system, and instead exit with a readable error.
All of this overhead makes them slower than compiled binary languages, but it makes the runtime powerful and flexible; in some cases, it can even pull in other code after it starts running, without having to start over. The compile step means that many statements that would cause your program to crash are detected before the code is ever run; and the powerful runtime can keep your code from doing stupid things (e.g., you can't "erase a random bit of memory this program is using").
Scripting languages
Still other languages don't precompile your code at all; the runtime does all of the work of reading your code line by line, interpreting it and executing it. This makes them even slower than "bytecode" languages, but also even more flexible; in some cases, you can even fiddle with your source code as it runs! Though it also means that you can have a totally illegal statement in your code, and it could sit there in your production code without drawing attention, until one day it is run and causes a crash.
These are generally called "scripting" languages; they include Javascript, Perl, and PHP. Some of these provide cases where you can choose to compile the code to improve its speed (e.g., Javascript's WebAssembly project). So Javascript can allow users on a website to see the exact code that is running, since their browser is providing the runtime.
This flexibility also allows for innovations in runtime environments, like node.js, which is both a code library and a runtime environment that can run your Javascript code as a server, which involves behaving very differently than if you tried to run the same code on a browser.
In my understanding runtime is exactly what it means - the time when the program is run. You can say something happens at runtime / run time or at compile time.
I think runtime and runtime library should be (if they aren't) two separate things. "C runtime" doesn't seem right to me. I call it "C runtime library".
Answers to your other questions:
I think the term runtime can be extended to include also the environment and the context of the program when it is run, so:
it consists of everything that can be called the "environment" during the time when the program is run: for example, other processes, the state of the operating system, the libraries in use, etc.;
it doesn't interact with your code in a general sense; it just defines in what circumstances your code works and what is available to it during execution.
This answer is to some extent just my opinion, not a fact or definition.
Matt Ball answered it correctly. I would explain it with an example.
Consider a program compiled with the Turbo/Borland C/C++ compiler (version 3.1, from 1991) and run under a 32-bit version of Windows like Win 98/2000.
It's a 16-bit compiler, and you will see that all your programs have 16-bit pointers. Why is that, when your OS is 32-bit? Because the compiler set up a 16-bit execution environment, and the 32-bit version of the OS supported it.
What is commonly called the JRE (Java Runtime Environment) provides a Java program with all the resources it may need to execute.
Actually, the runtime environment is an offshoot of the idea of virtual machines. A virtual machine implements the raw interface between the hardware and what a program may need in order to execute. The runtime environment adopts these interfaces and presents them for the programmer's use. A compiler developer needs these facilities to provide an execution environment for its programs.
Run time is exactly where your code comes to life and where you can see a lot of the important things your code does.
The runtime has responsibility for allocating memory, freeing memory, and using the operating system's subsystems (file services, I/O services, network services, etc.).
Your code is only "working in theory" until you actually run it.
The runtime is a friend that helps you achieve this.
A runtime could denote the current phase of a program's life (run time / compile time / load time / link time),
or it could mean a runtime library, which forms the basic low-level actions that interface with the execution environment,
or it could mean a runtime system, which is the same as an execution environment.
In the case of C programs, the runtime is the code that sets up the stack, the heap, etc., which is a requirement expected by the C environment. It essentially sets up the environment that is promised by the language. (It may have a runtime library component, crt0.lib or something like that in the case of C.)
Runtime basically means the time when the program interacts with the hardware and operating system of a machine. C does not have its own runtime; instead, it requests a runtime from the operating system (which is basically a part of RAM) to execute itself.
I found that the following folder structure provides a very insightful context for understanding what runtime is:
You can see that there is the 'source', there is the 'SDK' or 'Software Development Kit', and then there is the Runtime, i.e. the stuff that gets run, at runtime. Its contents are like:
The win32 zip contains .exe's and .dll's.
So, e.g., the C runtime would be files like this -- C runtime libraries, .so's or .dll's -- that you run at run time, made special by their (or their contents' or purposes') inclusion in the definition of the C language (on 'paper'), then implemented by your C implementation of choice. And then you get the runtime of that implementation, to use and to build upon.
That is, putting it a little starkly, the runnable files that the users of your new C-based program will need. As a developer of a C-based program, you need them too, but you also need the C compiler and the C library headers; the users don't need those.
If my understanding from reading the above answers is correct, runtime is basically the 'background processes' such as garbage collection and memory allocation: any processes that are invoked indirectly by the libraries / frameworks that your code is written against, and specifically those processes that occur after compilation, while the application is running.
Spelled out more fully, a runtime seems to be the additional environment that provides the programming-language-related functions required at run time by application software.
The runtime implements programming-language-related functions, which remain the same for any application domain, including math operations, memory operations, messaging, OS or DB abstraction services, etc.
The runtime must in some way be connected with the running application to be useful, for example by being loaded into the application's memory space as a shared dynamic library, by being a virtual machine process inside which the application runs, or by being a service process communicating with the application.
Runtime is somewhat the opposite of design time and compile/link time. Historically the term comes from slow mainframe environments where machine time was expensive.

Runtime vs. Compile time

What is the difference between run-time and compile-time?
The difference between compile time and run time is an example of what pointy-headed theorists call the phase distinction. It is one of the hardest concepts to learn, especially for people without much background in programming languages. To approach this problem, I find it helpful to ask
What invariants does the program satisfy?
What can go wrong in this phase?
If the phase succeeds, what are the postconditions (what do we know)?
What are the inputs and outputs, if any?
Compile time
The program need not satisfy any invariants. In fact, it needn't be a well-formed program at all. You could feed this HTML to the compiler and watch it barf...
What can go wrong at compile time:
Syntax errors
Typechecking errors
(Rarely) compiler crashes
If the compiler succeeds, what do we know?
The program was well formed---a meaningful program in whatever language.
It's possible to start running the program. (The program might fail immediately, but at least we can try.)
What are the inputs and outputs?
Input was the program being compiled, plus any header files, interfaces, libraries, or other voodoo that it needed to import in order to get compiled.
Output is hopefully assembly code or relocatable object code or even an executable program. Or if something goes wrong, output is a bunch of error messages.
Run time
We know nothing about the program's invariants---they are whatever the programmer put in. Run-time invariants are rarely enforced by the compiler alone; it needs help from the programmer.
What can go wrong are run-time errors:
Division by zero
Dereferencing a null pointer
Running out of memory
Also there can be errors that are detected by the program itself:
Trying to open a file that isn't there
Trying to find a web page and discovering that an alleged URL is not well formed
If run-time succeeds, the program finishes (or keeps going) without crashing.
Inputs and outputs are entirely up to the programmer. Files, windows on the screen, network packets, jobs sent to the printer, you name it. If the program launches missiles, that's an output, and it happens only at run time :-)
I think of it in terms of errors, and when they can be caught.
Compile time:
string my_value = Console.ReadLine();
int i = my_value;
A string value can't be assigned to a variable of type int, so the compiler knows for sure at compile time that this code has a problem
Run time:
string my_value = Console.ReadLine();
int i = int.Parse(my_value);
Here the outcome depends on what string was returned by ReadLine(). Some values can be parsed to an int, others can't. This can only be determined at run time
Compile-time: the time period in which you, the developer, are compiling your code.
Run-time: the time period which a user is running your piece of software.
Do you need any clearer definition?
(edit: the following applies to C# and similar, strongly-typed programming languages. I'm not sure if this helps you).
For example, the following error will be detected by the compiler (at compile time) before you run a program and will result in a compilation error:
int i = "string"; --> error at compile-time
On the other hand, an error like the following can not be detected by the compiler. You will receive an error/exception at run-time (when the program is run).
Hashtable ht = new Hashtable();
ht.Add("key", "string");
// the compiler does not know what is stored in the hashtable
// under the key "key"
int i = (int)ht["key"]; // --> exception at run-time
Translation of source code into stuff-happening-on-the-[screen|disk|network] can occur in (roughly) two ways; call them compiling and interpreting.
In a compiled program (examples are C and Fortran):
The source code is fed into another program (usually called a compiler--go figure), which produces an executable program (or an error).
The executable is run (by double-clicking it, or typing its name on the command line)
Things that happen in the first step are said to happen at "compile time", things that happen in the second step are said to happen at "run time".
In an interpreted program (examples are Microsoft BASIC (on DOS) and Python (I think)):
The source code is fed into another program (usually called an interpreter) which "runs" it directly. Here the interpreter serves as an intermediate layer between your program and the operating system (or the hardware in really simple computers).
In this case the difference between compile time and run time is rather harder to pin down, and much less relevant to the programmer or user.
Java is a sort of hybrid, where the code is compiled to bytecode, which then runs on a virtual machine which is usually an interpreter for the bytecode.
There is also an intermediate case in which the program is compiled to bytecode and run immediately (as in awk or perl).
Basically if your compiler can work out what you mean or what a value is "at compile time" it can hardcode this into the runtime code. Obviously if your runtime code has to do a calculation every time it will run slower, so if you can determine something at compile time it is much better.
Eg.
Constant folding:
If I write:
int i = 2;
i += MY_CONSTANT;
The compiler can perform this calculation at compile time because it knows what 2 is, and what MY_CONSTANT is. As such it saves itself from performing a calculation on every single execution.
Hmm, ok well, runtime is used to describe something that occurs when a program is running.
Compile time is used to describe something that occurs when a program is being built (usually, by a compiler).
Compile Time:
Things that are done at compile time incur (almost) no cost when the resulting program is run, but might incur a large cost when you build the program.
Run-Time:
More or less the exact opposite. Little cost when you build, more cost when the program is run.
From the other side: if something is done at compile time, it runs only on your machine, and if something is done at run time, it runs on your user's machine.
Relevance
An example of where this is important would be a unit-carrying type. A compile-time version (like Boost.Units or my version in D) ends up being just as fast as solving the problem with native floating-point code, while a run-time version ends up having to pack around information about the units that a value is in and perform checks on them alongside every operation. On the other hand, the compile-time versions require that the units of the values be known at compile time and can't deal with the case where they come from run-time input.
As an add-on to the other answers, here's how I'd explain it to a layman:
Your source code is like the blueprint of a ship. It defines how the ship should be made.
If you hand off your blueprint to the shipyard, and they find a defect while building the ship, they'll stop building and report it to you immediately, before the ship has ever left the drydock or touched water. This is a compile-time error. The ship was never even actually floating or using its engines. The error was found because it prevented the ship from even being made.
When your code compiles, it's like the ship being completed. Built and ready to go. When you execute your code, that's like launching the ship on a voyage. The passengers are boarded, the engines are running and the hull is on the water, so this is runtime. If your ship has a fatal flaw that sinks it on its maiden voyage (or maybe some voyage after for extra headaches) then it suffered a runtime error.
Following from a previous similar answer to the question "What is the difference between run-time error and compiler error?":
Compilation/compile-time/syntax/semantic errors: compilation or compile-time errors are errors caused by typing mistakes; if we do not follow the proper syntax and semantics of the programming language, then compile-time errors are thrown by the compiler. They won't let your program execute a single line until you remove all the syntax errors or debug the compile-time errors.
Example: Missing a semicolon in C or mistyping int as Int.
Runtime errors: runtime errors are the errors that are generated while the program is in the running state. These types of errors will cause your program to behave unexpectedly or may even kill your program. They are often referred to as exceptions.
Example: suppose you are reading a file that doesn't exist; this will result in a runtime error.
Read more about all programming errors here
Here is a quote from Daniel Liang, author of 'Introduction to JAVA programming', on the subject of compilation:
"A program written in a high-level language is called a source program or source code. Because a computer cannot execute a source program, a source program must be translated into machine code for execution. The translation can be done using another programming tool called an interpreter or a compiler." (Daniel Liang, "Introduction to JAVA programming", p8).
...He Continues...
"A compiler translates the entire source code into a machine-code file, and the machine-code file is then executed"
When we punch in high-level/human-readable code this is, at first, useless! It must be translated into a sequence of 'electronic happenings' in your tiny little CPU! The first step towards this is compilation.
Simply put: a compile-time error happens during this phase, while a run-time error occurs later.
Remember: Just because a program is compiled without error does not mean it will run without error.
A run-time error will occur in the ready, running or waiting part of a program's life-cycle, while a compile-time error will occur prior to the 'New' stage of the life cycle.
Example of a Compile-time error:
A syntax error - how can your code be compiled into machine-level instructions if it is ambiguous? Your code needs to conform 100% to the syntactical rules of the language, otherwise it cannot be compiled into working machine code.
Example of a run-time error:
Running out of memory - a call to a recursive function, for example, might lead to stack overflow given a variable of a particular degree! How can this be anticipated by the compiler? It cannot.
And that is the difference between a compile-time error and a run-time error
For example: in a strongly typed language, a type could be checked at compile time or at run time. At compile time it means that the compiler complains if the types are not compatible. At run time it means that you can compile your program just fine, but at run time it throws an exception.
In simple words, the difference between compile time and run time:
Compile time: the developer writes the program in .java format and it is converted into bytecode (a class file); any error that occurs during this compilation can be defined as a compile-time error.
Run time: the generated .class file is used by the application for its functionality; if the logic turns out to be wrong and throws an error, that is a run-time error.
Compile time:
The time taken to convert the source code into machine code so that it becomes an executable is called compile time.
Run time:
When an application is running, it is called run time.
Compile-time errors are things like syntax errors and missing file reference errors.
Runtime errors happen after the source code has been compiled into an executable program and while the program is running. Examples are program crashes, unexpected program behavior, or features that don't work.
Run time means something happens when you run the program.
Compile time means something happens when you compile the program.
Imagine that you are a boss with an assistant and a maid, and you give them a list of tasks to do. The assistant (compile time) grabs the list and checks that the tasks are understandable and that you didn't write anything in awkward language or syntax. He understands that you want to assign someone to a job, so he assigns someone for you; he understands that you want some coffee, and then his role is over. The maid (run time) then starts to run those tasks. She goes to make you some coffee, but suddenly she finds there is no coffee to make, so she stops, or she acts differently and makes you some tea instead (this is the program acting differently because it found an error).
I have always thought of it relative to program processing overhead and how it affects performance, as previously stated. A simple example would be either defining the absolute memory required for my object in code, or not.
A defined boolean takes x memory; this is then fixed in the compiled program and cannot be changed. When the program runs, it knows exactly how much memory to allocate for x.
On the other hand, if I just define a generic object type (i.e. a kind of undefined placeholder, or maybe a pointer to some giant blob), the actual memory required for my object is not known until the program is run and I assign something to it; it must then be evaluated, and memory allocation, etc. will be handled dynamically at run time (more run-time overhead).
How it is dynamically handled would then depend on the language, the compiler, the OS, your code, etc.
On that note, however, it really depends on the context in which you are using run time vs compile time.
Here is an extension to the answer to the question "difference between run-time and compile-time?": the differences in the overheads associated with run time and compile time.
The run-time performance of the product contributes to its quality by delivering results faster. The compile-time performance of the product contributes to its timeliness by shortening the edit-compile-debug cycle. However, both run-time performance and compile-time performance are secondary factors in achieving timely quality. Therefore, one should consider run-time and compile-time performance improvements only when justified by improvements in overall product quality and timeliness.
A great source for further reading here:
We can classify these under two broad groups: static binding and dynamic binding. The classification is based on when the binding is done with the corresponding values. If the references are resolved at compile time, then it is static binding; if the references are resolved at runtime, then it is dynamic binding. Static binding and dynamic binding are also called early binding and late binding. Sometimes they are also referred to as static polymorphism and dynamic polymorphism.
Joseph Kulandai.
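A small Java sketch of that distinction (hypothetical classes; overload selection is static binding done at compile time, override selection is dynamic binding done at run time):

public class BindingDemo {
    static class Animal { void speak() { System.out.println("generic sound"); } }
    static class Dog extends Animal { @Override void speak() { System.out.println("woof"); } }

    // Overload resolution happens at compile time from the declared type (static / early binding).
    static void describe(Animal a) { System.out.println("some animal"); }
    static void describe(Dog d)    { System.out.println("definitely a dog"); }

    public static void main(String[] args) {
        Animal pet = new Dog();
        pet.speak();     // prints "woof": the override is chosen at run time (dynamic / late binding)
        describe(pet);   // prints "some animal": the overload was chosen at compile time
    }
}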
The major difference between run-time and compile time is:
If there are any syntax errors or type-check failures in your code, then it throws a compile-time error, whereas run-time checks happen while the code is executing.
For example:
int a = 1
int b = a/0;
Here the first line doesn't have a semicolon at the end ---> compile-time error. After the program starts executing, the operation for b is a division by zero ---> run-time error.
Compile time doesn't look at the output of the functionality provided by your code, whereas run time does.
Here's a very simple answer:
Runtime and compile time are programming terms that refer to different stages of software program development.
In order to create a program, a developer first writes source code, which defines how the program will function. Small programs may contain only a few hundred lines of source code, while large programs may contain hundreds of thousands of lines of source code. The source code must be compiled into machine code in order to become an executable program. This compilation process is referred to as compile time. (Think of a compiler as a translator.)
A compiled program can be opened and run by a user. When an application is running, it is called runtime.
The terms "runtime" and "compile time" are often used by programmers to refer to different types of errors. A compile time error is a problem such as a syntax error or missing file reference that prevents the program from successfully compiling. The compiler produces compile time errors and usually indicates what line of the source code is causing the problem.
If a program's source code has already been compiled into an executable program, it may still have bugs that occur while the program is running. Examples include features that don't work, unexpected program behavior, or program crashes. These types of problems are called runtime errors since they occur at runtime.
The reference
Look into this example:
public class Test {
    public static void main(String[] args) {
        int[] x = new int[-5]; // no error at compile time
        System.out.println(x.length);
    }
}
The above code is compiled successfully, there is no syntax error, it is perfectly valid.
But at the run time, it throws following error.
Exception in thread "main" java.lang.NegativeArraySizeException
at Test.main(Test.java:5)
So certain cases are checked at compile time, and other cases are checked at run time; once the program satisfies all the conditions, you will get an output.
Otherwise, you will get a compile-time or run-time error.
You can understand a program's compile-time structure from reading the actual code. Its run-time structure is not clear unless you understand the pattern that was used.
public class RuntimeVsCompileTime {
    public static void main(String[] args) {
        //test(new D()); COMPILE-TIME ERROR
        /**
         * The compiler knows that D is not an instance of A
         */
        test(new B());
    }

    /**
     * The compiler has no hint whether the actual type is A, B or C;
     * C c = (C)a; will be checked during runtime
     * @param a
     */
    public static void test(A a) {
        C c = (C)a; // RUNTIME ERROR if a is not actually a C
    }
}
class A {
}
class B extends A {
}
class C extends A {
}
class D {
}
It's not a good question for S.O. (it's not a specific programming question), but it's not a bad question in general.
If you think it's trivial: what about read-time vs compile-time, and when is this a useful distinction to make? What about languages where the compiler is available at runtime? Guy Steele (no dummy, he) wrote 7 pages in CLTL2 about EVAL-WHEN, which CL programmers can use to control this. 2 sentences are barely enough for a definition, which itself is far short of an explanation.
In general, it's a tough problem that language designers have seemed to try to avoid.
They often just say "here's a compiler, it does compile-time things; everything after that is run-time, have fun". C is designed to be simple to implement, not the most flexible environment for computation. When you don't have the compiler available at runtime, or the ability to easily control when an expression is evaluated, you tend to end up with hacks in the language to fake common uses of macros, or users come up with Design Patterns to simulate having more powerful constructs. A simple-to-implement language can definitely be a worthwhile goal, but that doesn't mean it's the end-all-be-all of programming language design. (I don't use EVAL-WHEN much, but I can't imagine life without it.)
And the problem space around compile time and run time is huge and still largely unexplored. That's not to say S.O. is the right place to have the discussion, but I encourage people to explore this territory further, especially those who have no preconceived notions of what it should be. The question is neither simple nor silly, and we could at least point the inquisitor in the right direction.
Unfortunately, I don't know any good references on this. CLTL2 talks about it a bit, but it's not great for learning about it.