I started learning Ada for its potential use in an embedded device which is safety critical. So far, I'm really liking it. However, in my research on embedded programming, I came across the hot topic of whether to use exception handling in embedded systems. I think I understand why some people seem to avoid it:
depending on its implementation it can introduce either run-time overhead or larger code size (mentioned here under "Implementation")
the time it takes to execute exceptions can be non-deterministic (one of several sources I saw)
Now my question is: does the Ada language or the GNAT compiler address these concerns? My understanding of safety-critical code is that non-deterministic code size and execution time are often not acceptable.
Due diligence: I am having a bit of trouble finding out exactly how deterministic Ada exceptions can be, but my understanding is that their original implementation called for more run-time overhead in exchange for a reduced code-size impact (the first link above mentions Ada explicitly). Beyond that first link, I have looked into profiles concerned with determinism of code, like the Ravenscar profile and this paper, but nothing seems to mention determinism of exception handling. To be fair, I may be looking in the wrong places, as this topic seems quite deep.
There are embedded systems that are safety- or mission-critical, embedded systems that are hard real time, and embedded systems that are both.
Embedded systems that are hard real time may be constrained or not. Colleagues worked on a missile guidance system in the 70s that had about 4 instructions worth of headroom in its main loop! (as you can imagine, it was written in assembler and used a tuned executive, not an RTOS. Exceptions weren't supported). On the other hand, the last one I worked on, on a 1 GHz PowerPC board, had a 2 millisecond deadline for the response to a particular interrupt, and our measured worst case was 1.3 milliseconds (and it was a soft real time requirement anyway, you just didn't have to miss too many in a row).
That system also had safety requirements (I know, I know, safe missile systems, huh) and although we were permitted to use exceptions, an unhandled exception meant that the system had to be shut down, missile in flight or no, resulting in loss of missile. And we were strictly forbidden to say when others => null; to swallow an exception, so any exception we didn't handle would be 'unhandled' and would bounce up to the top level.
The argument is, if an unhandled exception happens, you can no longer know the state of the system, so you can't justify continuing. Of course, the wider safety engineering has to consider what action the overall system should take (for example, perhaps this processor should restart in a recovery mode).
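To make that policy concrete, here is a minimal sketch of a top level that refuses to swallow exceptions; Run_Mission and the shutdown action are hypothetical placeholders, not anything from the system described above:

with Ada.Exceptions;
with Ada.Text_IO;

procedure Main is
   procedure Run_Mission is
   begin
      null;  --  application code; any exception it fails to handle ends up below
   end Run_Mission;
begin
   Run_Mission;
exception
   when E : others =>
      --  The state is unknown, so report what we can and go to a safe state,
      --  instead of writing "when others => null;".
      Ada.Text_IO.Put_Line
        ("Unhandled exception: " & Ada.Exceptions.Exception_Information (E));
      --  ... command the wider system into its recovery/safe mode here ...
end Main;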
Sometimes people use exceptions as part of their control flow; indeed, for handling random text inputs a commonly used method is, rather than checking for end of file, just carry on until you get an End_Error;
loop
   begin
      -- read input
      -- process input
   exception
      when End_Error => exit;
   end;
end loop;
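For what it's worth, a concrete (hedged) version of that pattern using Ada.Text_IO might look like this; the 256-character buffer and the echoing are purely illustrative:

with Ada.Text_IO; use Ada.Text_IO;

procedure Echo_Until_EOF is
   Line : String (1 .. 256);
   Last : Natural;
begin
   loop
      begin
         Get_Line (Line, Last);        --  raises End_Error at end of input
         Put_Line (Line (1 .. Last));  --  stand-in for "process input"
      exception
         when End_Error =>
            exit;
      end;
   end loop;
end Echo_Until_EOF;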
Jacob's answer discusses using SPARK. You don't have to use SPARK to not handle exceptions, though of course it would be nice to be able to prove to yourself (and your safety auditor!) that there won't be any. Handling exceptions is very tricky, and some RTSs (e.g. Cortex GNAT RTS) don't; the configuration pragma
pragma Restrictions (No_Exception_Propagation);
means that exceptions can't be propagated out of the scope where they're raised (the program will crash out with a call to a Last_Chance_Handler).
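For a bare-board GNAT runtime the last-chance handler is a procedure you provide yourself. The sketch below follows the usual GNAT convention (exported as __gnat_last_chance_handler); the signature is the one GNAT expects, but the body is entirely application-specific and only a placeholder here:

--  last_chance_handler.ads
with System;
procedure Last_Chance_Handler (Msg : System.Address; Line : Integer);
pragma Export (C, Last_Chance_Handler, "__gnat_last_chance_handler");

--  last_chance_handler.adb
procedure Last_Chance_Handler (Msg : System.Address; Line : Integer) is
   pragma Unreferenced (Msg, Line);
begin
   --  Put the hardware into a safe state, log if you can, then stop.
   loop
      null;
   end loop;
end Last_Chance_Handler;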
Propagating exceptions only within the scope where they're raised isn't, IMO, that useful:
begin
   -- do something
   if some error condition then
      raise Err;
   end if;
   -- do more
exception
   when Err =>
      null;
end;
would be a rather confusing way of avoiding the "do more" code. Better to use a label!
Exceptions are deterministic in Ada. (Some checks which can raise an exception do leave the compiler a little freedom, though: if it can deliver a correct final answer, it doesn't always have to raise an exception when an intermediate result is out of bounds for the type in question.)
At least one Ada compiler (GNAT) has a "zero cost" exception implementation. This doesn't make exceptions completely free, but you don't pay a run-time cost until you actually raise an exception. You still pay a cost in terms of code space. How large that cost is depends on the architecture.
I haven't worked on safety critical systems myself, but I know for sure that the run-time used for the software in the Ariane 4 inertial navigation system included exceptions.
If you don't want exceptions, one option is to use SPARK (a language derived from Ada). You can still use any Ada compiler you like, but you use the SPARK tools to prove that the program can't raise any exceptions. You should note that SPARK isn't magic. You have to help the tools, by inserting assertions, which the tools can use as intermediate steps for the proofs.
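As a very small illustration of what "helping the tools" means (the name and the bound here are made up): given a precondition like the one below, the SPARK tools can discharge the overflow check on X + 1, i.e. prove that Constraint_Error cannot be raised at that point.

function Increment (X : Integer) return Integer
  with Pre => X < Integer'Last
is
begin
   return X + 1;  --  cannot overflow, thanks to the precondition
end Increment;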
I have no problem with the IO Monad. But I want to understand the following:
In all (or almost all) Haskell tutorials and textbooks, they keep saying that getChar is not a pure function, because it can give you a different result. My question is: who said that this is a function in the first place? Unless you give me the implementation of this function, and I study that implementation, I can't guarantee it is pure. So, where is that implementation?
In all (or almost all) Haskell tutorials and textbooks, it's said that, say, IO String is an action that (when executed) can give you back a value of type String. This is fine, but who executes it, and where does this execution take place? Of course, the computer is doing this execution. This is OK too, but since I am only a beginner, I hope you forgive me for asking: where is the recipe for this "execution"? I would guess it is not written in Haskell. Does this latter idea mean that, after all, a Haskell program is converted into a C-like program, which will eventually be converted into Assembly -> machine code? If so, where can one find the implementation of the IO stuff in Haskell?
Many thanks
Haskell functions are not the same as computations.
A computation is a piece of imperative code (perhaps written in C or Assembler, and then compiled to machine code, directly executable on a processor), that is by nature effectful and even unrestricted in its effects. That is, once it is run, a computation may access and alter any memory and perform any operations, such as interacting with the keyboard and screen, or even launching missiles.
By contrast, a function in a pure language, such as Haskell, is unable to alter arbitrary memory and launch missiles. It can only alter its own personal section of memory and return a result that is specified in its type.
So, in a sense, Haskell is a language that cannot do anything. Haskell is useless. This was a major problem during the 1990's, until IO was integrated into Haskell.
Now, an IO a value is a link to a separately prepared computation that will, eventually, hopefully, produce a value of type a. You will not be able to create an IO a out of pure Haskell functions. All the IO primitives are designed separately and packaged into GHC. You can then compose these simple computations into less trivial ones, and eventually your program may have any effects you may wish.
One point, though: pure functions are separate from each other, they can only influence each other if you use them together. Computations, on the other hand, may interact with each other freely (as I said, they can generally do anything), and therefore can (and do) accidentally break each other. That's why there are so many bugs in software written in imperative languages! So, in Haskell, computations are kept in IO.
I hope this dispels at least some of your confusion.
What are the applications and advantages of explicitly raising exceptions in a program? For example, Ada specifically provides a construct to raise exceptions in the program. Example:
raise <Exception>;
But what are the advantages and application areas where we would need to raise exceptions explicitly?
For example, in a function that accepts a string parameter:
function Fixed_Str_To_Chr_Ptr (Source_String : String) return C.Strings.Chars_Ptr is
   ...
begin
   ...
   -- Check whether source string is of acceptable length
   if Source_String'Length <= 100 then
      ...
   else
      ...
      raise Constraint_Error;
   end if;
   return Ptr;
exception
   when Constraint_Error =>
      .. Do Something ..
end Fixed_Str_To_Chr_Ptr;
Is there any advantage or good practice if I raise an exception in the above function and handle it when the passed string length bound exceeds the tolerable limits? Or a simple If-else handler logic should do the business?
I'll make my 2 cents an answer in order to bundle the various aspects. Let's start with the general question
But what are the advantages and application areas where we would need to raise exceptions explicitly?
There are a few typical reasons for raising exceptions. Most of them are not Ada-specific.
First of all there may be a general design decision to use or not use exceptions. Some general criteria:
Exception handlers may incur a run time cost even if an exception is actually never thrown (cf. e.g. https://gcc.gnu.org/onlinedocs/gnat_ugn/Exception-Handling-Control.html). That may be unacceptable.
Issues of inter-operability with other languages may preclude the use of exceptions, or at least require that none leave the part programmed in Ada.
To a degree the decision is also a matter of taste. A programmer coming from a language without exceptions may feel more confident with a design which just relies on checking return values.
Some programs will benefit from exceptions more than others. If traditional error handling obscures the actual program structure it may be time for exceptions. If, on the other hand, potential errors are few, easily detected and can be handled locally, exceptions may obscure potential execution paths more than handling errors traditionally would.
Once the general decision to use exceptions has been made, the question arises when it is and when it is not appropriate to raise them in your code. I mentioned one general criterion in my comment. What comes to mind:
Exceptions should not be part of normal, expected program flow (they are called exceptions, not expectations ;-) ). This is partly because the control flow is harder to see and partly because of the potential run time cost.
Errors which can be handled locally don't need exceptions. (It can still be useful to raise one in order to have a uniform error handling though. I'll discuss that below when I get to your code snippet.)
On the other hand, exceptions are great if a function has no idea how to handle errors. This is particularly true for utility and library functions which can be called from a variety of contexts (GUI, console program, embedded, server ...). Exceptions allow the error to propagate up the call chain until somebody can handle it, without any error handling code in the intervening layers.
Some people say that a library should only expose custom exceptions, at least for any anticipated errors. E.g. when an I/O exception occurs, wrap it in a custom exception and explicitly raise that custom exception instead.
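A hedged sketch of that wrapping pattern (Config_Error, Load_Config and the file handling below are all made-up illustrations, not a standard API; the custom exception would normally live in the library's spec):

with Ada.Exceptions;
with Ada.Text_IO;

procedure Load_Config (File_Name : String) is
   Config_Error : exception;  --  would normally be declared in the package spec
   F : Ada.Text_IO.File_Type;
begin
   Ada.Text_IO.Open (F, Ada.Text_IO.In_File, File_Name);
   --  ... read and parse ...
   Ada.Text_IO.Close (F);
exception
   when E : Ada.Text_IO.Name_Error | Ada.Text_IO.Use_Error =>
      --  Re-raise as the library's own exception, keeping the original message.
      raise Config_Error with
        "cannot load " & File_Name & ": "
        & Ada.Exceptions.Exception_Message (E);
end Load_Config;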
Now to your specific code question:
Is there any advantage or good practice if I raise an exception in the above function and handle it when the passed string length bound exceeds the tolerable limits? Or a simple If-else handler logic should do the business?
I don't particularly like that (although I don't find it terrible) because my general argument above ("if you can handle it locally, don't raise") would indicate that a simple if/else is clearer.1 For example, if the function is long the exception handler will be far away from the error location, so one may wonder where exactly the exception could occur (and finding one raise location is no guarantee that one has found them all, so the reviewer must scrutinize the whole function!).
It depends a bit on the specific circumstances though. Raising an exception may be elegant if an error can happen in several places. For example, if several strings can be too short it may be nice to have a centralized error handling through the exception handler, instead of scattering if/then/elses (nested??) across the function body. The situation is so common that a legitimate case can be made for using goto constructs in languages without exceptions. An exception is then clearly superior.
1But in all reality, how do you handle that error there? Do you have a guaranteed logging facility? What do you return? Does the caller know the result can be invalid? Maybe you should throw and not catch.
There are two problems with the given example:
It's simple enough that control flow doesn't need the exception. That won't always be the case, however, and I'll come back to that in a moment.
Constraint_Error is a spectacularly bad exception to raise, to detect a string length error. The standard exceptions Program_Error, Constraint_Error, Storage_Error ought to be reserved for programming error conditions, and in most circumstances ought to bring down the executable before it can do any damage, with enough debugging information (a stack traceback at the very least) to let you find the mistake and guarantee it never happens again.
It's remarkably satisfying to get a Constraint_Error pointing spookily close to your mistake, instead of whatever undefined behaviour happens much later on... (It's useful to learn how to turn on stack tracebacks, which aren't generally on by default).
Instead, you probably want to define your own String_Size_Error exception, raise that and handle it. Then, anything else in your unshown code that raises Constraint_Error will be properly debugged instead of silently generating a faulty Chars_Ptr.
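A sketch of what that might look like (String_Size_Error and the limit of 100 are choices you would make yourself; the C string conversion uses Interfaces.C.Strings just to keep the example compilable):

with Interfaces.C.Strings;

function Fixed_Str_To_Chr_Ptr
  (Source_String : String) return Interfaces.C.Strings.chars_ptr
is
   String_Size_Error : exception;  --  normally declared in a package spec
   Max_Length        : constant := 100;
begin
   if Source_String'Length > Max_Length then
      raise String_Size_Error with
        "string of length" & Integer'Image (Source_String'Length)
        & " exceeds the limit of" & Integer'Image (Max_Length);
   end if;
   return Interfaces.C.Strings.New_String (Source_String);
end Fixed_Str_To_Chr_Ptr;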
For a valid use case for raising exceptions, consider a circuit simulator such as SPICE (or a CFD simulator for gas flow, etc). These tools, even when working properly, are prone to failures thanks to numerical problems that happen in matrix computations. (Two terms cancel, producing zero +/- rounding error, which causes infeasibly large numbers or divide-by-zero later on). It's often an iterative approximation, where the error should reduce in each step until it's an acceptably low value. But if a failure occurs, the error term will start growing...
Typically the simulation happens step by step, where each step is a sufficiently small time step, maybe 1 us or 1 ns. The main loop requests a step, and this request is passed to thousands of agents in the simulation representing components in a circuit, or triangles in a CFD mesh.
Any one of those agents may fail to compute a solution, and the cleanest way to handle a failure is to raise an exception, maybe Convergence_Error. There may be thousands of possible points where an exception can be raised.
Testing thousands of return codes would get ugly fast. But with exceptions, the main loop only needs one handler, which takes some corrective action such as reducing the simulation step size and running the step again.
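Sketched very roughly (every name below is hypothetical), the structure is: the step routine, anywhere in the call tree, raises Convergence_Error, and the main loop is the one place that handles it:

procedure Run_Simulation is
   Convergence_Error : exception;

   Step_Size : Float := 1.0E-6;
   T         : Float := 0.0;
   T_End     : constant Float := 1.0E-3;

   procedure Advance (Dt : Float) is
      pragma Unreferenced (Dt);
   begin
      --  Thousands of agents compute here; any one of them may execute
      --  "raise Convergence_Error;" when its solution fails to converge.
      null;
   end Advance;
begin
   while T < T_End loop
      begin
         Advance (Step_Size);
         T := T + Step_Size;
      exception
         when Convergence_Error =>
            --  Corrective action: halve the step and retry the same step.
            Step_Size := Step_Size / 2.0;
      end;
   end loop;
end Run_Simulation;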
Sanitizing user text input in a browser may be another good use case, closer to the example code.
One word on the runtime cost of exceptions: the GNAT compiler and its RTS support a "Zero Cost Exception" (ZCX) model, at least for some targets. There's a larger penalty when an exception is raised, as a tradeoff against eliminating the penalty in the normal case. If the penalty matters to you, refer to the documentation to see if it's worthwhile (or even possible) in your case.
You raise an exception explicitly to control which exception is reported to the user of a subprogram, or in some cases just to control the message associated with the raised exception.
In very special cases you may also raise an exception as a program flow control.
Exceptions should stay true to their name, which is to represent exceptional situations.
As always, after some research I was unable to find anything of real value. My question is: how does one go about handling exceptions in a real-time system, where program failure is generally not an option (e.g. a nuclear reactor or a heart monitor)?
OK, since everyone got lost on the second piece of this, which had NOTHING to do with the main question: I had it in there to show how I normally escape code blocks.
Exception handling in real-time/embedded systems has several layers. Not just the language supported options, but also MMU, CPU exceptions and one of my favorites: watchdogs.
Language exceptions (C/C++):
- not often used, because it is hard to prove that all exceptions are handled at the right level. It is also pretty hard to determine which thread/process should be responsible. Instead, programming by contract is preferred.
Programming style:
- i.e. programming by contract. Additional constraints: MISRA C / MISRA C++. These can be checked to ensure that all possible cases are somehow handled (i.e. no if without else).
Hardware support:
- MMU: use of multiple processes which are protected against each other. This allows faults to be contained within a single process.
- watchdog
- CPU exceptions
- multi-core: use of multiple cores to separate critical processes from the rest. This also allows voting mechanisms (you want this and more for your nuclear reactor).
- multi-system
Most important is to define a strategy. Depending on the other non-functional requirements (safety, reliability, security), a strategy needs to be thought out. It can range from graceful degradation to a partial system reboot.
In a 'real-time', 'nuclear reactor' type system, chances are the exception handling allows the system to do the next best thing instead of failing outright.
Let's say that we have a heart monitor. If it isn't receiving a signal, that might trigger an exception. In that case, the heart monitor might handle the exception by waiting a few seconds and trying again.
In a nuclear reactor, getting to a certain temperature might trigger an exception. In that case, the handling might shut off various parts of the reactor to start to cool it down, and then start them back up when it gets to a reasonable temperature.
Exceptions are meant to have a lower-level system say that it doesn't know what to do, and to have a higher level system handle it. Like in the nuclear reactor, the system that measures temperature probably doesn't know how to turn on parts of the reactor, so it triggers an exception so that some higher-level system can handle it.
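As a tiny illustrative sketch of that split (in Ada syntax, since that is the language elsewhere in this thread; thresholds and names are invented): the layer that measures only detects and raises, while the layer above decides what to do.

procedure Monitor is
   Over_Temperature : exception;

   function Read_Temperature return Float is
   begin
      return 350.0;  --  stand-in for a real sensor reading
   end Read_Temperature;

   procedure Check_Temperature is
   begin
      if Read_Temperature > 300.0 then
         raise Over_Temperature;  --  this layer only knows how to detect
      end if;
   end Check_Temperature;
begin
   Check_Temperature;
exception
   when Over_Temperature =>
      --  The higher level knows how to react, e.g. shut off parts of the
      --  reactor until the temperature is back in range.
      null;
end Monitor;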
A critical system is like any other system, except it's specified more clearly, passes through more testing phases, and will generally fail safe.
Regarding your form, yes, it's pretty bad. I do mind the lack of {} very much; it has been said often enough that this is just plain bad style, and it leads to confusion when adding new code.
What does "orthogonality" mean when talking about programming languages?
What are some examples of Orthogonality?
Orthogonality is the property that means "Changing A does not change B". An example of an orthogonal system would be a radio, where changing the station does not change the volume and vice-versa.
A non-orthogonal system would be like a helicopter where changing the speed can change the direction.
In programming languages this means that when you execute an instruction, nothing but that instruction happens (which is very important for debugging).
There is also a specific meaning when referring to instruction sets.
From Eric S. Raymond's "Art of UNIX programming"
Orthogonality is one of the most important properties that can help make even complex designs compact. In a purely orthogonal design, operations do not have side effects; each action (whether it's an API call, a macro invocation, or a language operation) changes just one thing without affecting others. There is one and only one way to change each property of whatever system you are controlling.
Think of it as being able to change one thing without having an unseen effect on another part.
Broadly, orthogonality is a relationship between two things such that they have minimal effect on each other.
The term comes from mathematics, where two vectors are orthogonal if they intersect at right angles.
Think about a typical 2 dimensional cartesian space (your typical grid with X/Y axes). Plot two lines: x=1 and y=1. The two lines are orthogonal. You can change x=1 by changing x, and this will have no effect on the other line, and vice versa.
In software, the term can be appropriately used in situations where you're talking about two parts of a system which behave independently of each other.
If you have a set of constructs, a language is said to be orthogonal if it allows the programmer to mix these constructs freely. For example, in C you can't return an array (static array), so C is said to be non-orthogonal in this case:
int[] fun(); // you can't return a static array
// Of course you can return a pointer, but the language does allow passing arrays,
// so it is non-orthogonal in this case.
Most of the answers are very long-winded, and even obscure. The point is: if a tool is orthogonal, it can be added, replaced, or removed, in favor of better tools, without screwing everything else up.
It's the difference between a carpenter having a hammer and a saw, which can be used for hammering or sawing, or having some new-fangled hammer/saw combo, which is designed to saw wood, then hammer it together. Either will work for sawing and then hammering together, but if you get some task that requires sawing, but not hammering, then only the orthogonal tools will work. Likewise, if you need to screw instead of hammering, you won't need to throw away your saw, if it's orthogonal (not mixed up with) your hammer.
The classic example is Unix command-line tools: you have one tool for getting the contents of a disk (dd), another for filtering lines from a file (grep), another for concatenating files and writing them to standard output (cat), etc. These can all be mixed and matched at will.
While talking about project decisions on programming languages, orthogonality may be seen as how easy it is for you to predict other things about that language from what you've seen in the past.
For instance, in one language you can have:
str.split
for splitting a string and
len(str)
for getting the length.
In a more orthogonal language, you would always use str.x or x(str).
When you would clone an object or do anything else, you would know whether to use
clone(obj)
or
obj.clone
That's one of the main points of programming languages being orthogonal. It saves you from having to consult the manual or ask someone.
The wikipedia article talks more about orthogonality on complex designs or low level languages.
As someone suggested above in a comment, the Sebesta book talks cleanly about orthogonality.
If I had to use only one sentence, I would say that a programming language is orthogonal when its unknown parts act as expected based on what you've already seen.
Or... no surprises.
;)
From Robert W. Sebesta's "Concepts of Programming Languages":
As examples of the lack of orthogonality in a high-level language, consider the following rules and exceptions in C. Although C has two kinds of structured data types, arrays and records (structs), records can be returned from functions but arrays cannot. A member of a structure can be any data type except void or a structure of the same type. An array element can be any data type except void or a function. Parameters are passed by value, unless they are arrays, in which case they are, in effect, passed by reference (because the appearance of an array name without a subscript in a C program is interpreted to be the address of the array's first element).
from wikipedia:
Computer science
Orthogonality is a system design property facilitating feasibility and compactness of complex designs. Orthogonality guarantees that modifying the technical effect produced by a component of a system neither creates nor propagates side effects to other components of the system. The emergent behavior of a system consisting of components should be controlled strictly by formal definitions of its logic and not by side effects resulting from poor integration, i.e. non-orthogonal design of modules and interfaces. Orthogonality reduces testing and development time because it is easier to verify designs that neither cause side effects nor depend on them.
For example, a car has orthogonal components and controls (e.g. accelerating the vehicle does not influence anything else but the components involved exclusively with the acceleration function). On the other hand, a non-orthogonal design might have its steering influence its braking (e.g. electronic stability control), or its speed tweak its suspension.[1] Consequently, this usage is seen to be derived from the use of orthogonal in mathematics: One may project a vector onto a subspace by projecting it onto each member of a set of basis vectors separately and adding the projections if and only if the basis vectors are mutually orthogonal.
An instruction set is said to be orthogonal if any instruction can use any register in any addressing mode. This terminology results from considering an instruction as a vector whose components are the instruction fields. One field identifies the registers to be operated upon, and another specifies the addressing mode. An orthogonal instruction set uniquely encodes all combinations of registers and addressing modes.
To put it in the simplest terms possible, two things are orthogonal if changing one has no effect upon the other.
Orthogonality means the degree to which a language consists of a set of independent primitive constructs that can be combined as necessary to express a program.
Features are orthogonal if there are no restrictions on how they may be combined.
Example : non-orthogonality
PASCAL: functions can't return structured types.
Functional Languages are highly orthogonal.
Real life examples of orthogonality in programming languages
There are a lot of answers already that explain what orthogonality generally is while specifying some made up examples. E.g. this answer explains it well. I wanted to provide (and gather) some real life examples of orthogonal or non-orthogonal features in programming languages:
Orthogonal: C++20 Modules and Namespaces
On the cppreference page about the new Modules system in C++20, it is written:
Modules are orthogonal to namespaces
In this case they write that modules are orthogonal to namespaces because a statement like import foo will not import the module-namespace related to foo:
import foo; // foo exports foo::bar()
bar (); // Error
foo::bar (); // Ok
using namespace foo;
bar (); // Ok
(adapted from modules-cppcon2017 slide 9)
In programming languages, a programming language feature is said to be orthogonal if it can be combined freely, with no restrictions (or exceptions).
For example, in Pascal functions can't return structured types. This is a restriction on returning values from a function, so it is considered a non-orthogonal feature. ;)
Orthogonality in Programming:
Orthogonality is an important concept, addressing how a relatively small number of components can be combined in a relatively small number of ways to get the desired results. It is associated with simplicity; the more orthogonal the design, the fewer exceptions. This makes it easier to learn, read and write programs in a programming language. The meaning of an orthogonal feature is independent of context; the key parameters are symmetry and consistency (for example, a pointer is an orthogonal concept).
from Wikipedia
Orthogonality in a programming language means that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language. Furthermore, every possible combination of primitives is legal and meaningful. For example, consider data types. Suppose a language has four primitive data types (integer, float, double, and character) and two type operators (array and pointer). If the two type operators can be applied to themselves and the four primitive data types, a large number of data structures can be defined.
The meaning of an orthogonal language feature is independent of the context of its appearance in a program. (The word orthogonal comes from the mathematical concept of orthogonal vectors, which are independent of each other.) Orthogonality follows from a symmetry of relationships among primitives. A lack of orthogonality leads to exceptions to the rules of the language. For example, in a programming language that supports pointers, it should be possible to define a pointer to point to any specific type defined in the language. However, if pointers are not allowed to point to arrays, many potentially useful user-defined data structures cannot be defined.
We can illustrate the use of orthogonality as a design concept by comparing one aspect of the assembly languages of the IBM mainframe computers and the VAX series of minicomputers. We consider a single simple situation: adding two 32-bit integer values that reside in either memory or registers and replacing one of the two values with the sum. The IBM mainframes have two instructions for this purpose, which have the forms
A Reg1, memory_cell
AR Reg1, Reg2
where Reg1 and Reg2 represent registers. The semantics of these are
Reg1 ← contents(Reg1) + contents(memory_cell)
Reg1 ← contents(Reg1) + contents(Reg2)
The VAX addition instruction for 32-bit integer values is
ADDL operand_1, operand_2
whose semantics is
operand_2 ← contents(operand_1) + contents(operand_2)
In this case, either operand can be a register or a memory cell.
The VAX instruction design is orthogonal in that a single instruction can use either registers or memory cells as the operands. There are two ways to specify operands, which can be combined in all possible ways. The IBM design is not orthogonal: only two of the four possible operand combinations are legal, and the two require different instructions, A and AR. The IBM design is more restricted and therefore less writable. For example, you cannot add two values and store the sum in a memory location. Furthermore, the IBM design is more difficult to learn because of the restrictions and the additional instruction.
Orthogonality is closely related to simplicity: the more orthogonal the design of a language, the fewer exceptions the language rules require. Fewer exceptions mean a higher degree of regularity in the design, which makes the language easier to learn, read, and understand. Anyone who has learned a significant part of the English language can testify to the difficulty of learning its many rule exceptions (for example, i before e except after c).
The basic idea of orthogonality is that things that are not related conceptually should not be related in the system. Parts of the architecture that really have nothing to do with the other, such as the database and the UI, should not need to be changed together. A change to one should not cause a change to the other.
Orthogonality is the idea that things that are not related conceptually should not be related in the system, so that parts of the architecture that have nothing to do with each other, like the database and the UI, should not need to be changed together. A change to one part of your system should not cause a change to another.
If for example, you change a few lines on the screen and cause a change in the database schema, this is called coupling. You usually want to minimize coupling between things that are mostly unrelated because it can grow and the system can become a nightmare to maintain in the long run.
From Michael C. Feathers' book "Working Effectively With Legacy Code":
If you want to change existing behavior in your code and there is exactly one place you have to go to make that change, you've got orthogonality.