How to specify Chisel’s post-processor?

Quote from the librecores wiki:
One post-processor generates a Verilog that is tuned for FPGA execution. A second generates Verilog that is tuned for ASIC.
Is this true? How to specify which post-processor to use?
I noticed that we can pass an option ‘-X xxx’ to Chisel, in which ‘xxx’ can be high, middle, low, verilog... Is this related? What’s the exact meaning of these ‘compilers’?
Thank you!

Very narrowly addressing your latter question, the -X/--compiler command line argument determines which FIRRTL compiler and emitter to use.
The Chisel3 compiler generates CHIRRTL (a high level form of the FIRRTL intermediate representation). The FIRRTL intermediate representation (IR), described in more detail in a UC Berkeley Technical Report, is a simple language for describing a circuit.
The FIRRTL compiler, broadly, moves a circuit, represented in the FIRRTL IR, from a high-level representation (what is described in the specification) to a mid-level representation, and finally to a low-level representation that maps easily to Verilog. The FIRRTL compiler can elect to stop early at High FIRRTL, Mid FIRRTL, or Low FIRRTL, or go all the way to Verilog. The -X/--compiler argument tells it whether you want to exit early and target only one of these representations.
Note: CHIRRTL will eventually be removed and High FIRRTL will be emitted directly by the Chisel compiler.
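As a concrete (if now dated) sketch, here is how you might select the emitter from Scala using the legacy chisel3.Driver entry point; the Passthrough module and Main object are made up for illustration, and newer Chisel versions use ChiselStage instead, so treat the exact call as an assumption rather than the canonical interface:
```scala
import chisel3._

// Trivial module just to have something to elaborate (illustrative only).
class Passthrough extends Module {
  val io = IO(new Bundle {
    val in  = Input(UInt(8.W))
    val out = Output(UInt(8.W))
  })
  io.out := io.in
}

object Main extends App {
  // "-X verilog" asks the FIRRTL compiler to lower all the way to Verilog;
  // "high", "middle", or "low" would stop at the corresponding FIRRTL form.
  chisel3.Driver.execute(Array("-X", "verilog"), () => new Passthrough)
}
```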

I'm not fully familiar with the librecores flow, but glancing over https://github.com/librecores/riscv-sodor I don't see any post-processing scripts. It might be worth filing an issue on the repo to ask for clarification on that point.
For Chisel designs in general, people use transforms on the IR to specialize the code for FPGA vs. ASIC. The most common one is with handling memory structures. The behavioral memories emitted by default work well for FPGAs as they are correctly inferred as BRAMs. For ASICs, there is a standard transform to replace memories with blackboxed interfaces such that the user can provide implementations that use SRAM macros from their given implementation technology.
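For example, a behavioral memory in Chisel might look like the following sketch (module and signal names are made up): the same source can go to an FPGA flow as an inferred BRAM or, after a memory-replacement transform, to an ASIC flow as a blackbox wired to an SRAM macro.
```scala
import chisel3._

// A behavioral one-read, one-write memory (illustrative).
class SimpleMem extends Module {
  val io = IO(new Bundle {
    val wen   = Input(Bool())
    val waddr = Input(UInt(10.W))
    val wdata = Input(UInt(32.W))
    val raddr = Input(UInt(10.W))
    val rdata = Output(UInt(32.W))
  })

  val mem = SyncReadMem(1024, UInt(32.W))
  when(io.wen) {
    mem.write(io.waddr, io.wdata)
  }
  // Synchronous read: FPGA tools typically infer a BRAM from this pattern.
  io.rdata := mem.read(io.raddr)
}
```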

Related

Chisel Output with SystemVerilog Interfaces/Structs

I'm finding that when generating Verilog output from the Chisel framework, all of the 'structure' defined in the Chisel framework is lost at the interface.
This is problematic for instantiating this work in larger SystemVerilog designs.
Are there any extensions or features in Chisel to support this better? For example, automatically converting Chisel "Bundle" objects into SystemVerilog 'struct' ports.
Or creating SV enums, when the Chisel code is written using the Enum class.
Currently, no. However, both suggestions sound like very good candidates for discussion for future implementation in Chisel/FIRRTL.
SystemVerilog Struct Generation
Most Chisel code instantiated inside Verilog/SystemVerilog will use some interface wrapper that deals with converting the necessary signal names that the instantiator wants to use into Chisel-friendly names. As one example of doing this see AcceleratorWrapper. That instantiates a specific accelerator and does the connections to the Verilog names the instantiator expects. You can't currently do this with SystemVerilog structs, but you could accomplish the same thing with a SystemVerilog wrapper that maps the SystemVerilog structs to deterministic Chisel names. This is the same type of problem/solution that most people encounter/solve when integrating external IP in their project.
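A minimal sketch of that wrapper pattern, with made-up module and port names: the wrapper exposes the flat signal names an external (System)Verilog instantiator expects and forwards them to the Chisel-style bundle underneath.
```scala
import chisel3._

// Inner design with Chisel-friendly IO (illustrative).
class Inner extends Module {
  val io = IO(new Bundle {
    val req  = Input(UInt(32.W))
    val resp = Output(UInt(32.W))
  })
  io.resp := io.req + 1.U
}

// Wrapper exposing the names an external integrator expects.
class InnerWrapper extends Module {
  val io = IO(new Bundle {
    val ext_req_data  = Input(UInt(32.W))
    val ext_resp_data = Output(UInt(32.W))
  })
  val inner = Module(new Inner)
  inner.io.req     := io.ext_req_data
  io.ext_resp_data := inner.io.resp
}
```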
Kludges aside, what you're talking about is possible in the future...
Some explanation is necessary as to why this is complex:
Chisel is converted to FIRRTL. FIRRTL is then lowered to a reduced subset of FIRRTL called "low" FIRRTL. Low FIRRTL is then mapped to Verilog. Part of this lowering process flattens all bundles using uniquely determined names (typically a.b.c will lower to a_b_c but will be uniquified if a namespace conflict due to the lowering would result). Verilog has no support for structs, so this has to happen. Additionally, and more critically, some optimizations happen at the Low FIRRTL level like Constant Propagation and Dead Code Elimination that are easier to write and handle there.
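To make the flattening concrete, here is a small made-up example; after lowering, the emitted Verilog module has flat ports with names along the lines of io_pkt_header_len and io_pkt_payload rather than a struct-typed port (exact names may differ if uniquification kicks in):
```scala
import chisel3._

class Header extends Bundle {
  val len  = UInt(8.W)
  val kind = UInt(2.W)
}

class Packet extends Bundle {
  val header  = new Header
  val payload = UInt(32.W)
}

class Consumer extends Module {
  val io = IO(new Bundle {
    val pkt = Input(new Packet)
    val len = Output(UInt(8.W))
  })
  // In the emitted Verilog this nested access becomes a reference to a
  // flattened port, e.g. io_pkt_header_len.
  io.len := io.pkt.header.len
}
```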
However, SystemVerilog or some other language that a FIRRTL backend is targeting that supports non-flat types benefits from using the features of that language to produce more human-readable output. There are two general approaches for rectifying this:
Lowered types retain information about how they were originally constructed via annotations and the SystemVerilog emitter reconstructs those. This seems inelegant due to lowering and then un-lowering.
The SystemVerilog emitter uses a different sequence of FIRRTL transforms that does not go all the way to Low FIRRTL. This would require some of the optimizing transforms run on Low FIRRTL to be rewritten to work on higher forms. This is tractable, but hard.
If you want some more information on what passes are run during each compiler phase, take a look at LoweringCompilers.scala
Enumerated Types
What you mention for Enum is planned for the Verilog backend. The idea here was to have Enums emit annotations describing what they are. The Verilog emitter would then generate localparams. The preliminary work for annotation generation was added as part of StrongEnum (chisel3#885/chisel3#892), but the annotations portion had to be later backed out. A solution to this is actively being worked on. A subsequent PR to FIRRTL will then augment the Verilog emitter to use these. So, look for this going forward.
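For reference, this is the kind of code the Enum discussion is about (a sketch using chisel3.util.Enum; the Blinker module and state names are invented). Today the states reach the Verilog only as plain UInt literals, which is exactly what the planned annotation-driven localparam emission would make readable:
```scala
import chisel3._
import chisel3.util._

class Blinker extends Module {
  val io = IO(new Bundle {
    val led = Output(Bool())
  })

  // Enum(2) produces two distinct UInt encodings; in the emitted Verilog they
  // currently appear only as literal values, not as named localparams.
  val sOff :: sOn :: Nil = Enum(2)
  val state = RegInit(sOff)

  state := Mux(state === sOff, sOn, sOff)
  io.led := state === sOn
}
```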
On Contributions and Outreach
For questions like this with (currently) negative answers, feel free to file an issue on the respective Chisel3 or FIRRTL repository. And even better than that is an RFC followed by an implementation.

Core of Verifier in Isabelle/HOL

Question
What is the core algorithm of the Isabelle/HOL verifier?
I'm looking for something on the level of a scheme metacircular evaluator.
Clarification
I'm only interested in the Verifier, not the strategies for automated theorem proving.
Context
I want to implement a simple proof verifier from scratch (purely for educational reasons, not for production use).
I want to understand the core Verifier algorithm of Isabelle/HOL. I don't care about the strategies / code used for automated theorem proving.
I have a suspicion that the core Verifier algorithm is very simple (and elegant). However, I can't find it.
Thanks!
Isabelle is a member of the "LCF family" of proof checkers, which means you have a special module, the inference kernel, through which all inferences are run to produce values of the abstract datatype thm. This is a bit like an operating system kernel processing system calls. Everything you can produce this way is "correct by construction" relative to the correctness of the kernel implementation. Since the programming language environment of the prover (Standard ML) has very strong static type-correctness properties, the correctness-by-construction of the inference kernel carries over to the rest of the proof assistant implementation, which can be quite huge.
So in principle you have a relatively small "trusted kernel" part and a really big "application user-space". In particular, most of Isabelle/HOL is actually a big collection of library theories and add-on tools (mostly in SML) in Isabelle user-land.
In Isabelle, the kernel infrastructure is quite complex, but still very small compared to the rest of the system. The kernel is in fact layered into a "micro kernel" (the Thm module) and a "nano kernel" (the Context module). Thm produces thm values in the sense of Milner's LCF approach, and Context takes care of theory certificates for any results you produce, as well as proof contexts for local reasoning (notably in the Isar proof language).
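If it helps to see the shape of the idea outside of SML, here is a deliberately tiny LCF-style sketch in Scala, with an invented minimal logic; it is nothing like Isabelle's actual kernel, but it shows the access-control trick: the only way to obtain a Thm is through the exported inference rules, so every theorem is correct by construction relative to the kernel.
```scala
object Kernel {
  // A toy formula language: atoms and implication only (illustrative).
  sealed trait Form
  final case class Atom(name: String)    extends Form
  final case class Imp(a: Form, b: Form) extends Form

  // The constructor is private to the kernel: user code cannot forge theorems.
  final class Thm private[Kernel] (val hyps: Set[Form], val concl: Form) {
    override def toString = s"${hyps.mkString(", ")} |- $concl"
  }

  // Rule: A |- A
  def assume(a: Form): Thm = new Thm(Set(a), a)

  // Rule: from G |- A -> B and D |- A, derive G ∪ D |- B
  def modusPonens(imp: Thm, ant: Thm): Thm = imp.concl match {
    case Imp(a, b) if a == ant.concl => new Thm(imp.hyps ++ ant.hyps, b)
    case _ => throw new IllegalArgumentException("modus ponens does not apply")
  }
}
```
The real kernel is of course far richer (types, terms, definitions, theory contexts), but the "only the kernel can construct theorems" discipline is the same.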
If you want to learn more about LCF-style provers, I recommend looking also at HOL-Light, which is probably the smallest realistic system of the LCF family, realistic in the sense that people have done big applications with it. HOL-Light has the big advantage that its implementation can be easily understood, but this minimalism also has some disadvantages: it does not fully protect the user from doing nonsense in its ML environment, which is OCaml instead of SML. For various technical reasons, OCaml is not as "safe" by default as Standard ML.
If you untar the Isabelle sources, e.g.
http://isabelle.in.tum.de/dist/Isabelle2013_linux.tar.gz
you will find the core definitions in
src/Pure/thm.ML
And there is already a project of the kind you want to tackle:
http://www.proof-technologies.com/holzero/
added later: another, more serious project is
https://team.inria.fr/parsifal/proofcert/

Can First-class functions in Scala be a concern for allocating a large PermGen Space in JVM?

Regarding first-class functions in Scala, it is written in the book Programming in Scala:
A function literal is compiled into a class that when instantiated at run-time is a function value.
When many first-class functions are used in a program, will this affect the JVM's PermGen space? Because instead of simple functions, the compiler is generating a class for each variation of the function value (e.g. in the case of varied definitions of partially applied functions).
The memory profile is certainly going to be different than that of normal Java programs, though you can tune pretty much any memory parameter on the JVM.
All I can say, however, is that in one year of deep involvement in the Scala community, I have never seen anyone complain about this.
I don't have substantiation for this, but my feeling is that if you're writing any non-trivial program, the amount of space taken up for your program's "real" data will vastly dwarf the amount of space taken up by a few extra function-as-class definitions.
In other words, I wouldn't worry about it.
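For concreteness, here is roughly what the quoted sentence means on older Scala versions (a sketch; the names are mine). Note also that Scala 2.12+ compiles most function literals via invokedynamic rather than dedicated anonymous classes, and Java 8 replaced PermGen with Metaspace, so the concern is even smaller today.
```scala
// A function literal...
val inc: Int => Int = x => x + 1

// ...is compiled (on older Scala versions) to roughly this hand-written form:
// an anonymous class implementing Function1, loaded as its own class.
val incExplicit: Int => Int = new Function1[Int, Int] {
  def apply(x: Int): Int = x + 1
}
```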
It is a proven mathematical fact that the number of classes you generate with first-class functions will be able to asymptotically approach, but never surpass, the number of compiled classes in the full Spring distribution. Don't worry, those pioneers will deal with the permgen issues first!

ANN for decompiler?

Have there ever been any attempts at utilizing artificial neural networks in decompilation? It would be nice if it were possible to provide the trimmed semantics of the source along with the code to a neural network so it could learn the connection between the two. I assume this would likely lose its effectiveness when there are optimizations, and it might work better for high-level languages, but I'm interested in hearing about any attempts anyone has made at this.
I added this as a comment, but I think I will go ahead and post it as an answer as well. It looks like in the 11 years since this question was posted, there has been work done in this direction. Here is a link:
https://www.groundai.com/project/a-neural-based-program-decompiler/1
And here is the abstract
A Neural-based Program Decompiler
Reverse engineering of binary executables is a critical problem in the computer security domain. On the one hand, malicious parties may recover interpretable source codes from the software products to gain commercial advantages. On the other hand, binary decompilation can be leveraged for code vulnerability analysis and malware detection. However, efficient binary decompilation is challenging. Conventional decompilers have the following major limitations: (i) they are only applicable to specific source-target language pair, hence incurs undesired development cost for new language tasks; (ii) their output high-level code cannot effectively preserve the correct functionality of the input binary; (iii) their output program does not capture the semantics of the input and the reversed program is hard to interpret. To address the above problems, we propose Coda (Coda is the abbreviation for CodeAttack), the first end-to-end neural-based framework for code decompilation. Coda decomposes the decompilation task into two key phases: First, Coda employs an instruction type-aware encoder and a tree decoder for generating an abstract syntax tree (AST) with attention feeding during the code sketch generation stage. Second, Coda then updates the code sketch using an iterative error correction machine guided by an ensembled neural error predictor. By finding a good approximate candidate and then fixing it towards perfect, Coda achieves superior performance compared to baseline approaches. We assess Coda’s performance with extensive experiments on various benchmarks. Evaluation results show that Coda achieves an average of 82% program recovery accuracy on unseen binary samples, where the state-of-the-art decompilers yield 0% accuracy. Furthermore, Coda outperforms the sequence-to-sequence model with attention by a margin of 70% program accuracy. Our work reveals the vulnerability of binary executables and imposes a new threat to the protection of Intellectual Property (IP) for software development.
I'm assuming you mean decompilation to human-readable C/C++ as compared to assembly, then.
Given the input size (optimized/compiled code) and the output size of succinct code, and the multi-line stateful nature of the decompilation process, I would have thought this is a larger problem than an ANN could ever handle.

What is "Orthogonality"?

What does "orthogonality" mean when talking about programming languages?
What are some examples of Orthogonality?
Orthogonality is the property that means "Changing A does not change B". An example of an orthogonal system would be a radio, where changing the station does not change the volume and vice-versa.
A non-orthogonal system would be like a helicopter where changing the speed can change the direction.
In programming languages this means that when you execute an instruction, nothing but that instruction happens (which is very important for debugging).
There is also a specific meaning when referring to instruction sets.
From Eric S. Raymond's "The Art of Unix Programming":
Orthogonality is one of the most important properties that can help make even complex designs compact. In a purely orthogonal design, operations do not have side effects; each action (whether it's an API call, a macro invocation, or a language operation) changes just one thing without affecting others. There is one and only one way to change each property of whatever system you are controlling.
Think of it as being able to change one thing without having an unseen effect on another part.
Broadly, orthogonality is a relationship between two things such that they have minimal effect on each other.
The term comes from mathematics, where two vectors are orthogonal if they intersect at right angles.
Think about a typical 2 dimensional cartesian space (your typical grid with X/Y axes). Plot two lines: x=1 and y=1. The two lines are orthogonal. You can change x=1 by changing x, and this will have no effect on the other line, and vice versa.
In software, the term can be appropriately used in situations where you're talking about two parts of a system which behave independently of each other.
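For instance (a made-up illustration), Scala's collection operations are orthogonal to the element types they operate on: map behaves the same no matter what the list holds or what function you pass, and neither feature constrains the other.
```scala
// The same operation composes freely with any element type and any function.
val doubled = List(1, 2, 3).map(_ * 2)        // List(2, 4, 6)
val lengths = List("a", "bb").map(_.length)   // List(1, 2)

// A non-orthogonal design would carve out exceptions, e.g.
// "map works on lists of numbers but not on lists of strings".
```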
Suppose you have a set of constructs. A language is said to be orthogonal if it allows the programmer to mix these constructs freely. For example, in C you can't return an array (a static array), so C is said to be non-orthogonal in this case:
int fun()[10];  // illegal: a C function cannot return an array type
// You can return a pointer to the data, and you can pass arrays as parameters,
// so the rules are asymmetric: the language is non-orthogonal here.
Most of the answers are very long-winded, and even obscure. The point is: if a tool is orthogonal, it can be added, replaced, or removed, in favor of better tools, without screwing everything else up.
It's the difference between a carpenter having a hammer and a saw, which can be used for hammering or sawing, and having some new-fangled hammer/saw combo, which is designed to saw wood and then hammer it together. Either will work for sawing and then hammering together, but if you get some task that requires sawing but not hammering, then only the orthogonal tools will work. Likewise, if you need to screw instead of hammer, you won't need to throw away your saw, if it's orthogonal to (not mixed up with) your hammer.
The classic example is unix command line tools: you have one tool for getting the contents of a disk (dd), another for filtering lines (grep), another for concatenating and printing files (cat), etc. These can all be mixed and matched at will.
While talking about design decisions in programming languages, orthogonality may be seen as how easy it is for you to predict other things about that language from what you've seen in the past.
For instance, in one language you can have:
str.split
for splitting a string and
len(str)
for getting the length.
In a more orthogonal language, you would always use either str.x or x(str).
When you would clone an object or do anything else, you would know whether to use
clone(obj)
or
obj.clone
That's one of the main points of programming languages being orthogonal: it saves you from consulting the manual or asking someone.
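A small Scala illustration of that predictability (snippets you might try in a REPL): once you've seen one of these, the uniform value.method convention lets you guess the others without the manual.
```scala
"hello".length         // 5
"a,b,c".split(",")     // Array("a", "b", "c")
List(1, 2, 3).length   // 3
List(1, 2, 3).reverse  // List(3, 2, 1)
```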
The wikipedia article talks more about orthogonality on complex designs or low level languages.
As someone suggested above in a comment, the Sebesta book talks clearly about orthogonality.
If I had to use only one sentence, I would say that a programming language is orthogonal when its unknown parts behave as you would expect based on what you've already seen.
Or... no surprises.
;)
From Robert W. Sebesta's "Concepts of Programming Languages":
As examples of the lack of orthogonality in a high-level language, consider the following rules and exceptions in C. Although C has two kinds of structured data types, arrays and records (structs), records can be returned from functions but arrays cannot. A member of a structure can be any data type except void or a structure of the same type. An array element can be any data type except void or a function. Parameters are passed by value, unless they are arrays, in which case they are, in effect, passed by reference (because the appearance of an array name without a subscript in a C program is interpreted to be the address of the array’s first element).
from wikipedia:
Computer science
Orthogonality is a system design property facilitating feasibility and compactness of complex designs. Orthogonality guarantees that modifying the technical effect produced by a component of a system neither creates nor propagates side effects to other components of the system. The emergent behavior of a system consisting of components should be controlled strictly by formal definitions of its logic and not by side effects resulting from poor integration, i.e. non-orthogonal design of modules and interfaces. Orthogonality reduces testing and development time because it is easier to verify designs that neither cause side effects nor depend on them.
For example, a car has orthogonal components and controls (e.g. accelerating the vehicle does not influence anything else but the components involved exclusively with the acceleration function). On the other hand, a non-orthogonal design might have its steering influence its braking (e.g. electronic stability control), or its speed tweak its suspension.[1] Consequently, this usage is seen to be derived from the use of orthogonal in mathematics: One may project a vector onto a subspace by projecting it onto each member of a set of basis vectors separately and adding the projections if and only if the basis vectors are mutually orthogonal.
An instruction set is said to be orthogonal if any instruction can use any register in any addressing mode. This terminology results from considering an instruction as a vector whose components are the instruction fields. One field identifies the registers to be operated upon, and another specifies the addressing mode. An orthogonal instruction set uniquely encodes all combinations of registers and addressing modes.
To put it in the simplest terms possible, two things are orthogonal if changing one has no effect upon the other.
Orthogonality means the degree to which a language consists of a set of independent primitive constructs that can be combined as necessary to express a program.
Features are orthogonal if there are no restrictions on how they may be combined.
Example of non-orthogonality: in Pascal, functions can't return structured types.
Functional languages are highly orthogonal.
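For contrast with the Pascal restriction above, a quick Scala sketch (function and type names are invented): "return a value from a function" has no carve-outs, so arrays, records, and even functions can all be returned the same way.
```scala
case class Point(x: Int, y: Int)

def makeArray(): Array[Int]       = Array(1, 2, 3)   // an array
def makePoint(): Point            = Point(1, 2)      // a record
def makeAdder(n: Int): Int => Int = (x: Int) => x + n // a function
```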
Real life examples of orthogonality in programming languages
There are a lot of answers already that explain what orthogonality generally is, illustrated with made-up examples. E.g. this answer explains it well. I wanted to provide (and gather) some real life examples of orthogonal or non-orthogonal features in programming languages:
Orthogonal: C++20 Modules and Namespaces
On the cppreference page about the new Modules system in C++20, it is written:
Modules are orthogonal to namespaces
In this case they write that modules are orthogonal to namespaces because a statement like import foo will not import the module-namespace related to foo:
import foo; // foo exports foo::bar()
bar (); // Error
foo::bar (); // Ok
using namespace foo;
bar (); // Ok
(adapted from modules-cppcon2017 slide 9)
In programming languages, a feature is said to be orthogonal if it can be combined with other features without restrictions (or exceptions).
For example, in Pascal, functions can't return structured types. This is a restriction on returning values from a function, so it is considered a non-orthogonal feature. ;)
Orthogonality in Programming:
Orthogonality is an important concept, addressing how a relatively small number of components can be combined in a relatively small number of ways to get the desired results. It is associated with simplicity; the more orthogonal the design, the fewer exceptions. This makes it easier to learn, read and write programs in a programming language. The meaning of an orthogonal feature is independent of context; the key parameters are symmetry and consistency (for example, a pointer is an orthogonal concept).
from Wikipedia
Orthogonality in a programming language means that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language. Furthermore, every possible combination of primitives is legal and meaningful. For example, consider data types. Suppose a language has four primitive data types (integer, float, double, and character) and two type operators (array and pointer). If the two type operators can be applied to themselves and the four primitive data types, a large number of data structures can be defined.
The meaning of an orthogonal language feature is independent of the context of its appearance in a program. (The word orthogonal comes from the mathematical concept of orthogonal vectors, which are independent of each other.) Orthogonality follows from a symmetry of relationships among primitives. A lack of orthogonality leads to exceptions to the rules of the language. For example, in a programming language that supports pointers, it should be possible to define a pointer to point to any specific type defined in the language. However, if pointers are not allowed to point to arrays, many potentially useful user-defined data structures cannot be defined.
We can illustrate the use of orthogonality as a design concept by comparing one aspect of the assembly languages of the IBM mainframe computers and the VAX series of minicomputers. We consider a single simple situation: adding two 32-bit integer values that reside in either memory or registers and replacing one of the two values with the sum. The IBM mainframes have two instructions for this purpose, which have the forms
A Reg1, memory_cell
AR Reg1, Reg2
where Reg1 and Reg2 represent registers. The semantics of these are
Reg1 ← contents(Reg1) + contents(memory_cell)
Reg1 ← contents(Reg1) + contents(Reg2)
The VAX addition instruction for 32-bit integer values is
ADDL operand_1, operand_2
whose semantics is
operand_2 ← contents(operand_1) + contents(operand_2)
In this case, either operand can be a register or a memory cell.
The VAX instruction design is orthogonal in that a single instruction can use either registers or memory cells as the operands. There are two ways to specify operands, which can be combined in all possible ways. The IBM design is not orthogonal. Only two out of four operand combination possibilities are legal, and the two require different instructions, A and AR. The IBM design is more restricted and therefore less writable. For example, you cannot add two values and store the sum in a memory location. Furthermore, the IBM design is more difficult to learn because of the restrictions and the additional instruction.
Orthogonality is closely related to simplicity: the more orthogonal the design of a language, the fewer exceptions the language rules require. Fewer exceptions mean a higher degree of regularity in the design, which makes the language easier to learn, read, and understand. Anyone who has learned a significant part of the English language can testify to the difficulty of learning its many rule exceptions (for example, i before e except after c).
The basic idea of orthogonality is that things that are not related conceptually should not be related in the system. Parts of the architecture that really have nothing to do with each other, such as the database and the UI, should not need to be changed together. A change to one should not cause a change to the other.
Orthogonality is the idea that things that are not related conceptually should not be related in the system, so parts of the architecture that have nothing to do with each other, like the database and the UI, should not need to change together. A change to one part of your system should not cause a change to another.
If for example, you change a few lines on the screen and cause a change in the database schema, this is called coupling. You usually want to minimize coupling between things that are mostly unrelated because it can grow and the system can become a nightmare to maintain in the long run.
From Michael C. Feathers' book "Working Effectively With Legacy Code":
If you want to change existing behavior in your code and there is exactly one place you have to go to make that change, you've got orthogonality.