Chisel function for mux with more than two inputs

What single function can I use in chisel to represent a multiplexer with more than two inputs (choices)?
MuxLookup()?

You can cascade 2-input Muxes, describe the behavior through a hierarchy of when/elsewhen/otherwise statements, or use MuxCase to describe an n-way Mux:
result := MuxCase(defaultValue, Array(sel0 -> value0, sel1 -> value1, ...))
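MuxCase is priority encoded: the first condition in the list that is true wins. As a language-neutral sketch of that behavior (in Haskell, not Chisel; the name muxCase is mine):

muxCase :: a -> [(Bool, a)] -> a
muxCase def cases = case [v | (c, v) <- cases, c] of
  (v : _) -> v     -- value paired with the first true condition
  []      -> def   -- no condition true: fall back to the default
-- muxCase 0 [(False, 1), (True, 2), (True, 3)] evaluates to 2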
MuxLookup implies comparators (in addition to the multiplexers) to match the value of a signal to a number of values.
A good description of different chisel Mux constructs is available here:
https://www.chisel-lang.org/chisel3/docs/explanations/muxes-and-input-selection.html

Checking understanding of: "Variable" vs. "Value", and "function" vs. "abstraction"

(This question is a follow-up to this one, asked while studying Haskell.)
I used to find the distinction between "variable" and "value" confusing, so I read the wiki page on lambda calculus as well as the previous answer linked above, and came up with the interpretations below.
May I confirm whether these are correct? I just want to double-check, because these concepts are quite basic but essential to functional programming. Any advice is welcome.
Premises from wiki:
Lambda Calculus syntax
exp → ID
    | (exp)
    | λ ID.exp   // abstraction
    | exp exp    // application
(Notation: "<=>" means "is equivalent to".)
Interpretations:
"value": it is the actual data or instructions stored in computer.
"variable": it is a way locating the data, a value-replacing reference , but not itself the set of data or instruction stored in computer.
"abstraction" <=> "function" ∈ syntactic form. (https://stackoverflow.com/a/25329157/3701346)
"application": it takes an input of "abstraction", and an input of "lambda expression", results in an "lambda expression".
"abstraction" is called "abstraction" because in usual function definition, we abbreviate the (commonly longer) function body into a much shorter form, i.e. a function identifier followed by a list of formal parameters. (Though lambda abstractions are anonymous functions, other functions usually do have name.)
"variable" <=> "symbol" <=> "reference"
a "variable" is associated with a "value" via a process called "binding".
"constant" ∈ "variable"
"literal" ∈ "value"
"formal parameter" ∈ "variable"
"actual parameter"(argument) ∈ "value"
A "variable" can have a "value" of "data"
=> e.g. variable "a" has a value of 3
A "variable"can also have a "value" of "a set of instructions"
=> e.g. an operator "+" is a variable
"value": it is the actual data or instructions stored in computer.
You're trying to think of it very concretely in terms of the machine, which I'm afraid may confuse you. It's better to think of it in terms of math: a value is just a thing that never changes, like the number 42, the letter 'H', or the sequence of letters that constitutes "Hello world".
Another way to think of it is in terms of mental models. We invent mental models in order to reason indirectly about the world; by reasoning about the mental models, we make predictions about things in the real world. We write computer programs to help us work with these mental models reliably and in large volumes.
Values are then things in the mental model. The bits and bytes are just encodings of the model into the computer's architecture.
"variable": it is a way locating the data, a value-replacing reference , but not itself the set of data or instruction stored in computer.
A variable is just a name that stands for a value in a certain scope of the program. Every time a variable is evaluated, its value needs to be looked up in an environment. There are several implementations of this concept in computer terms:
A stack frame in an eager language is an implementation of an environment for looking up the values of local variables, on each invocation of a routine.
A linker provides environments for looking up global-scope names when a program is compiled or loaded into memory.
"abstraction" <=> "function" ∈ syntactic form.
Abstraction and function are not equivalent. In the lambda calculus, "abstraction" is a type of syntactic expression, but a function is a value.
One analogy that's not too shabby is names and descriptions vs. things. Names and descriptions are part of language, while things are part of the world. You could say that the meaning of a name or description is the thing that it names or describes.
Languages contain both simple names for things (e.g., 12 is a name for the number twelve) and more complex descriptions of things (5 + 7 is a description of the number twelve). A lambda abstraction is a description of a function; e.g., the expression \x -> x + 7 is a description of the function that adds seven to its argument.
The trick is that when descriptions get very complex, it's not easy to figure out what thing they're describing. If I give you 12345 + 67890, you need to do some amount of work to figure out what number I just described. Computers are machines that do this work way faster and more reliably than we can do it.
"application": it takes an input of "abstraction", and an input of "lambda expression", results in an "lambda expression".
An application is just an expression with two subexpressions, which describes a value by this means:
1. The first subexpression stands for a function.
2. The second subexpression stands for some value.
3. The application as a whole stands for the value that results from applying the function in (1) to the value from (2).
In formal semantics (and don't be scared of that word) we often use the double brackets ⟦∙⟧ to stand for "the meaning of"; e.g. ⟦dog⟧ = "the meaning of dog." Using that notation:
⟦e1 e2⟧ = ⟦e1⟧(⟦e2⟧)
where e1 and e2 are any two expressions or terms (any variable, abstraction or application).
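That equation can be made literal in a toy interpreter. Here is a minimal sketch in Haskell (the names Exp, Val, Env and eval are my own invention); the App case is exactly ⟦e1 e2⟧ = ⟦e1⟧(⟦e2⟧):

data Exp = Var String | Lam String Exp | App Exp Exp

newtype Val = Fun (Val -> Val)

type Env = [(String, Val)]

eval :: Env -> Exp -> Val
eval env (Var x)     = maybe (error ("unbound variable " ++ x)) id (lookup x env)
eval env (Lam x e)   = Fun (\v -> eval ((x, v) : env) e)   -- an abstraction denotes a function
eval env (App e1 e2) = let Fun f = eval env e1             -- the meaning of e1 ...
                       in f (eval env e2)                  -- ... applied to the meaning of e2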
"abstraction" is called "abstraction" because in usual function definition, we abbreviate the (commonly longer) function body into a much shorter form, i.e. a function identifier followed by a list of formal parameters. (Though lambda abstractions are anonymous functions, other functions usually do have name.)
To tell you the truth, I've never stopped to think whether the term "abstraction" is a good term for this or why it was picked. Generally, with math, it doesn't pay to ask questions like that unless the terms have been very badly picked and mislead people.
"constant" ∈ "variable"
"literal" ∈ "value"
The lambda calculus, in and of itself, doesn't have the concepts of "constant" or "literal." But one way to define these would be:
A literal is an expression that, because of the rules of the language, always has the same value no matter where it occurs.
A constant, in a purely functional language, is a variable at the topmost scope of a program. Every (non-shadowed) use of that variable will always have the same value in the program.
"formal parameter" ∈ "variable"
"actual parameter"(argument) ∈ "value"
Formal parameter is one kind of use of a variable. In any expression of the form λv.e (where v is a variable and e is an expression), v is a formal parameter.
An argument is any expression (not value!) that occurs as the second subexpression of an application.
A "variable" can have a "value" of "data" => e.g. variable "a" has a value of 3
All expressions have values, not just variables. For example, 5 + 7 is an application, and it has the value of twelve.
A "variable"can also have a "value" of "a set of instructions" => e.g. an operator "+" is a variable
The value of + is a function—it's the function that adds its arguments. The set of instructions is an implementation of that function.
Think of a function as an abstract table that says, for each combination of argument values, what the result is. The way the instructions come in is this:
For a lot of functions we cannot literally implement them as a table. In the case of addition it's because the table would be infinitely large.
Even for functions where we can enumerate the cases, we want to implement them much more briefly and efficiently.
But the way you check whether a function implementation is correct is, in some sense, to check that in every case it does the same thing the "infinite table" would do. Two sets of instructions that both check out in this way are really two different implementations of the same function.
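As a tiny illustration (mine, not from the original answer), here are two different "sets of instructions" implementing the same function, which can be checked against each other on a finite sample:

double1, double2 :: Int -> Int
double1 x = x + x    -- one implementation
double2 x = 2 * x    -- a different implementation of the same function
-- and [double1 n == double2 n | n <- [-1000 .. 1000]]  ==>  True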
The word "abstraction" is used because we can't "look inside" a function and see what's going on for the most part so it's "abstract" (contrast with "concrete"). Application is the process of applying a function to an argument. This means that its body is run, but with the thing that's being applied to it replacing the argument name (avoiding any capture). Hopefully this example will explain better than I can (in Haskell syntax. \ represents lambda):
(\x -> x + x) 5 <=> 5 + 5
Here we are applying the lambda expression on the left to the value 5 on the right. We get 5 + 5 as our result (which then may be further reduced to 10).
A "reference" might refer to something somewhat different in the context of Haskell (IORefs and STRefs), but, internally, all bindings ("variables") in Haskell have a layer of indirection like references in other languages (actually, they have even more indirection than that in a way because of the non-strict evaluation).
This mostly looks okay except for the reference issue I mentioned above.
In Haskell, there isn't really a distinction between a variable and a constant.
A "literal" usually is specifically a constructor for a value. For example, 20 constructs the the number 20, but a function application (\x -> 2 * x) 10 wouldn't be considered a literal for 20 because it has an extra step before you get the value.
Right, not all variables are parameters. A parameter is something that is passed to a function. The xs in the lambda expressions above are examples of parameters. A non-example would be something like let a = 15 in a * a. a is a "variable" but not a parameter. Actually, I would call a a "binding" here because it can never change or take on a different value (vary).
The formal parameter vs actual parameter part looks about right.
That looks okay.
I would say that a variable can be a function instead. Usually, in functional programming, we typically think in terms of functions and function applications instead of lists of instructions.
I'd like to point out also that you might get in trouble by thinking of functions as just syntactic forms. You can create new functions by applying certain kinds of higher-order functions, without using one of the syntactic forms to construct a function directly. A simple example of this is function composition, (.) in Haskell:
(f . g) x = f (g x) -- Definition of (.)
(* 10) . (+ 1) <=> \x -> ((* 10) ((+ 1) x)) <=> \x -> 10 * (x + 1)
Writing it as (* 10) . (+ 1) doesn't directly use the lambda syntax or the function definition syntax to create the new function.

Difference between declarative and model-based specification

I've read the definitions of these two notions on the wiki, but the difference is still not clear. Could someone give examples and an easy explanation?
A declarative specification describes an operation or a function with a constraint that relates the output to the input. Rather than giving you a recipe for computing the output, it gives you a rule for checking that the output is correct. For example, consider a function that takes an array a and a value e, and returns the index of an element of the array matching e. A declarative specification would say exactly that:
function index (a, e)
    returns i such that a[i] = e
In contrast, an operational specification would look like code, e.g. with a loop through the indices to find i. Note that declarative specifications are often non-deterministic; in this case, if e matches more than one element of a, there are several values of i that are valid (and the specification doesn't say which to return). This is a powerful feature. Also, declarative specifications are often not total: here, for example, the specification assumes that such an i exists (and in some languages you would add a precondition to make this explicit).
To support declarative specification, a language must generally provide logical operators (especially conjunction) and quantifiers.
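To make the "rule for checking" reading concrete, the specification itself can be written as a predicate. A minimal sketch in Haskell (the name indexSpec is an assumption of mine):

indexSpec :: Eq a => [a] -> a -> Int -> Bool
indexSpec a e i = 0 <= i && i < length a && (a !! i == e)
-- any i with indexSpec a e i == True is an acceptable output of index(a, e)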
A model-based language is one that uses standard mathematical structures (such as sets and relations) to describe the state. Alloy and Z are both model based. In contrast, algebraic languages (such as OBJ and Larch) use equations to describe state implicitly. For example, to specify an operation that inserts an element in a set, in an algebraic language you might write something like
member(insert(s,e),x) = (e = x) or member(s,x)
which says that after you insert e into s, the element x will be a member of the set if you just inserted that element (e is equal to x) or if it was there before (x is a member of s). In contrast, in a language like Z or Alloy you'd write something like
insert (s, e)
    s' = s ∪ {e}
with a constraint relating the new value of the set (s') to the old value (s).
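The two styles can be related by testing: the algebraic equation above should hold when the set is modeled concretely. A small sketch in Haskell, using Data.Set as the model (the name insertLaw is mine):

import qualified Data.Set as Set

insertLaw :: Set.Set Int -> Int -> Int -> Bool
insertLaw s e x =
  Set.member x (Set.insert e s) == (e == x || Set.member x s)
-- e.g. insertLaw (Set.fromList [1, 2]) 3 2  ==>  True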
For many examples of declarative, model-based specification, see the materials on Alloy at http://alloy.mit.edu, or my book Software Abstractions. You can also see comparisons between model-based declarative languages through an example in the appendix of the book, available at the book's website http://softwareabstractions.org.

In Haskell, (+) is a function, ((+) 2) is a function, ((+) 2 3) is 5. What exactly is going on there?

How is this possible? What is going on there?
Is there a name for this?
What other languages have this same behavior?
Any without the strong typing system?
This behaviour is really simple and intuitive if you look at the types. To avoid the complications of infix operators like +, I'm going to use the function plus instead. I'm also going to specialise plus to work only on Int, to reduce the typeclass line noise.
Say we have a function plus, of type Int -> Int -> Int. One way to read that is "a function of two Ints that returns an Int". But that notation is a little clumsy for that reading, isn't it? The return type isn't singled out specially anywhere. Why would we write function type signatures this way? Because the -> is right-associative, an equivalent type would be Int -> (Int -> Int). This looks much more like it's saying "a function from an Int to (a function from an Int to an Int)". But those two types are in fact exactly the same, and the latter interpretation is the key to understanding how this behaviour works.
Haskell views all functions as being from a single argument to a single result. There may be computations you have in mind where the result of the computation depends on two or more inputs (such as plus). Haskell says that the function plus is a function that takes a single input, and produces an output which is another function. This second function takes a single input and produces an output which is a number. Because the second function was computed by the first (and will be different for different inputs to the first function), the "final" output can depend on both inputs, so we can implement computations with multiple inputs with these functions that take only single inputs.
I promised this would be really easy to understand if you looked at the types. Here's some example expressions with their types explicitly annotated:
plus :: Int -> Int -> Int
plus 2 :: Int -> Int
plus 2 3 :: Int
If something is a function and you apply it to an argument, to get the type of the result of that application all you need to do is remove everything up to the first arrow from the function's type. If that leaves a type that has more arrows, then you still have a function! As you add arguments to the right of an expression, you remove parameter types from the left of its type. The type makes it immediately clear what the types of all the intermediate results are, and why plus 2 is a function which can be further applied (its type has an arrow) and plus 2 3 is not (its type doesn't have an arrow).
"Currying" is the process of turning a function of two arguments into a function of one argument that returns a function of another argument that returns whatever the original function returned. It's also used to refer to the property of languages like Haskell that automatically have all functions work this way; people will say that Haskell "is a curried language" or "has currying", or "has curried functions".
Note that this works particularly elegantly because Haskell's syntax for function application is simple token adjacency. You are free to read plus 2 3 as the application of plus to 2 arguments, or the application of plus to 2 and then the application of the result to 3; you can mentally model it whichever way most fits what you're doing at the time.
In languages with C-like function application by parenthesised argument list, this breaks down a bit. plus(2, 3) is very different from plus(2)(3), and in languages with this syntax the two versions of plus involved would probably have different types. So languages with that kind of syntax tend not to have all functions be curried all the time, or even to have automatic currying of any function you like. But such languages have historically also tended not to have functions as first class values, which makes the lack of currying a moot point.
In Haskell, all functions take exactly 1 input, and produce exactly 1 output. Sometimes, the input to or output of a function can be another function. The input to or output of a function can also be a tuple. You can simulate a function with multiple inputs in one of two ways:
Use a tuple as input
(in1, in2) -> out
Use a function as output*
in1 -> (in2 -> out)
Likewise, you can simulate a function with multiple outputs in one of two ways:
Use a tuple as output*
in -> (out1, out2)
Use a function as a "second input" (a la function-as-output)
in -> ((out1 -> (out2 -> a)) -> a)
*this way is typically favored by Haskellers
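Here is a small sketch of the two output encodings, using a made-up function that splits a number into the rest and its last digit:

splitDigit :: Int -> (Int, Int)                   -- tuple as output
splitDigit n = (n `div` 10, n `mod` 10)

splitDigitCPS :: Int -> ((Int -> Int -> a) -> a)  -- function as "second input"
splitDigitCPS n k = k (n `div` 10) (n `mod` 10)
-- splitDigit 42 == (4, 2); splitDigitCPS 42 (+) == 6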
The (+) function simulates taking 2 inputs in the typical Haskell way of producing a function as output. (Specializing to Int for ease of communication:)
(+) :: Int -> (Int -> Int)
For the sake of convenience, -> is right-associative, so the type signature for (+) can also be written
(+) :: Int -> Int -> Int
(+) is a function that takes in a number, and produces another function from number to number.
(+) 5 is the result of applying (+) to the argument 5, therefore, it is a function from number to number.
(5 +) is another way to write (+) 5
2 + 3 is another way of writing (+) 2 3. Function application is left-associative, so this is another way of writing (((+) 2) 3). In other words: Apply the function (+) to the input 2. The result will be a function. Take that function, and apply it to the input 3. The result of that is a number.
Therefore, (+) is a function, (5 +) is a function, and (+) 2 3 is a number.
In Haskell, you can take a function of two arguments, apply it to one argument, and get a function of one argument. In fact, strictly speaking, + isn't a function of two arguments, it's a function of one argument that returns a function of one argument.
In layman's terms, + is the actual function, and it waits to receive its two parameters before returning a result. If you give it fewer than two parameters, it remains a function waiting for the remaining parameter.
It's called Currying
Lots of functional languages (Scala, Scheme, etc.)
Most functional languages are strongly typed, but this is good in the end because it reduces errors, which works well in enterprise or critical systems.
As a side note, the language Haskell is named after Haskell Curry, who re-discovered the phenomenon of Functional Currying while working on combinatory logic.
Languages like Haskell or OCaml have a syntax that lends itself particularly to currying, but you can do it in other languages with dynamic typing, like currying in Scheme.

Is there a relationship between calling a function and instantiating an object in pure functional languages?

Imagine a simple (made up) language where functions look like:
function f(a, b) = c + 42
    where c = a * b
(Say it's a subset of Lisp that includes 'defun' and 'let'.)
Also imagine that it includes immutable objects that look like:
struct s(a, b, c = a * b)
Again analogizing to Lisp (this time a superset), say a struct definition like that would generate functions for:
make-s(a, b)
s-a(s)
s-b(s)
s-c(s)
Now, given the simple set up, it seems clear that there is a lot of similarity between what happens behind the scenes when you either call 'f' or 'make-s'. Once 'a' and 'b' are supplied at call/instantiate time, there is enough information to compute 'c'.
You could think of instantiating a struct as being like calling a function, and then storing the resulting symbolic environment for later use when the generated accessor functions are called. Or you could think of evaluating a function as being like creating a hidden struct and then using it as the symbolic environment with which to evaluate the final result expression.
Is my toy model so oversimplified that it's useless? Or is it actually a helpful way to think about how real languages work? Are there any real languages/implementations that someone without a CS background but with an interest in programming languages (i.e. me) should learn more about in order to explore this concept?
Thanks.
EDIT: Thanks for the answers so far. To elaborate a little, I guess what I'm wondering is if there are any real languages where it's the case that people learning the language are told e.g. "you should think of objects as being essentially closures". Or if there are any real language implementations where it's the case that instantiating an object and calling a function actually share some common (non-trivial, i.e. not just library calls) code or data structures.
Does the analogy I'm making, which I know others have made before, go any deeper than mere analogy in any real situations?
You can't get much purer than lambda calculus: http://en.wikipedia.org/wiki/Lambda_calculus. Lambda calculus is in fact so pure, it only has functions!
A standard way of implementing a pair in lambda calculus is like so:
pair = fn a: fn b: fn x: x a b
first = fn a: fn b: a
second = fn a: fn b: b
So pair a b, what you might call a "struct", is actually a function (fn x: x a b). But it's a special type of function called a closure. A closure is essentially a function (fn x: x a b) plus values for all of the "free" variables (in this case, a and b).
So yes, instantiating a "struct" is like calling a function, but more importantly, the actual "struct" itself is like a special type of function (a closure).
If you think about how you would implement a lambda calculus interpreter, you can see the symmetry from the other side: you could implement a closure as an expression plus a struct containing the values of all the free variables.
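That encoding transcribes almost directly into Haskell. A sketch (ordinary Haskell pairs would of course be used in practice):

pair :: a -> b -> (a -> b -> c) -> c
pair a b = \f -> f a b            -- the "struct" is a closure over a and b

first :: ((a -> b -> a) -> a) -> a
first p = p (\a _ -> a)

second :: ((a -> b -> b) -> b) -> b
second p = p (\_ b -> b)
-- first (pair 1 2) == 1; second (pair 1 2) == 2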
Sorry if this is all obvious and you just wanted some real world example...
Both f and make-s are functions, but the resemblance doesn't go much further. Applying f calls the function and executes its code; applying make-s creates a structure.
In most language implementations and formal models, make-s is a different kind of object from f: f is a closure, whereas make-s is a constructor (in the functional languages and logic sense, which is close to the object-oriented languages sense).
If you like to think in an object-oriented way, both f and make-s have an apply method, but they have completely different implementations of this method.
If you like to think in terms of the underlying logic, f and make-s have types built on the same type constructor (the function type constructor), but they are constructed in different ways and have different destruction rules (function application vs. constructor application).
If you'd like to understand that last paragraph, I recommend Types and Programming Languages by Benjamin C. Pierce. Structures are discussed in §11.8.
Is my toy model so oversimplified that it's useless?
Essentially, yes. Your simplified model basically boils down to saying that each of these operations involves performing a computation and putting the result somewhere. But that is so general, it covers anything that a computer does. If you didn't perform a computation, you wouldn't be doing anything useful. If you didn't put the result somewhere, you would have done work for nothing as you have no way to get the result. So anything useful you do with a computer, from adding two registers together, to fetching a web page, could be modeled as performing a computation and putting the result somewhere that it can be accessed later.
There is a relationship between objects and closures. http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg03277.html
The following creates what some might call a function, and others might call an object:
Taken from SICP ( http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-21.html )
(define (make-account balance)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (define (dispatch m)
    (cond ((eq? m 'withdraw) withdraw)
          ((eq? m 'deposit) deposit)
          (else (error "Unknown request -- MAKE-ACCOUNT"
                       m))))
  dispatch)
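A rough Haskell analog of the same idea (a sketch of mine; the insufficient-funds check is omitted for brevity) keeps the hidden state in an IORef captured by two closures:

import Data.IORef

makeAccount :: Int -> IO (Int -> IO Int, Int -> IO Int)
makeAccount initial = do
  balance <- newIORef initial   -- the hidden, mutable state
  let withdraw amount = do
        modifyIORef balance (subtract amount)
        readIORef balance
      deposit amount = do
        modifyIORef balance (+ amount)
        readIORef balance
  return (withdraw, deposit)    -- two closures sharing the same state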

What fun can be had with lambda-definitions?

Not having used them all that much, I'm not quite sure about all the ways lambda-definitions can be used (other than map/collect/do/lightweight local function syntax). For anyone interested in posting some examples:
provide explanations to help readers understand how lambda-definitions are being used;
preferred languages for the examples: Python, Smalltalk, Haskell.
You can make a functional data structure out of lambdas. Here is a simple one - a functional list (Python), supporting add and contains methods:
empty = lambda x: None

def add(lst, item):
    return lambda x: x == item or lst(x)

def contains(lst, item):
    return lst(item) or False
I just coded this quickly for fun - notice that you're not allowed to add any falsy values as is. It also is not tail-recursive, as a good functional structure should be. Exercises for the reader!
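For comparison, here is a sketch of the same trick in Haskell (the FnList synonym is mine); because the "list" returns a Bool rather than None, the falsy-value problem above goes away:

type FnList a = a -> Bool

empty :: FnList a
empty = const False

add :: Eq a => FnList a -> a -> FnList a
add lst item = \x -> x == item || lst x

contains :: FnList a -> a -> Bool
contains lst item = lst item
-- contains (add (add empty 1) 2) 2 == True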
You can use them for control flow. For example, in Smalltalk, the "ifTrue:ifFalse:" method is a method on Boolean objects, with a different implementation on each of True and False classes. The expression
someBoolean ifTrue: [self doSomething] ifFalse: [self doSomethingElse]
uses two closures---blocks, in [square brackets] in Smalltalk syntax---one for the true branch, and one for the false branch. The implementation of "ifTrue:ifFalse:" for instances of class True is
ifTrue: block1 ifFalse: block2
    ^ block1 value
and for class False:
ifTrue: block1 ifFalse: block2
    ^ block2 value
Closures, here, are used to delay evaluation so that a decision about control flow can be taken, without any specialised syntax at all (besides the syntax for blocks).
Haskell is a little different, with its lazy evaluation model effectively automatically producing the effect of closures in many cases, but in Scheme you end up using lambdas for control flow a lot. For example, here is a utility to retrieve a value from an association-list, supplying an optionally-computed default in the case where the value is not present:
(define (assq/default key lst default-thunk)
  (cond
    ((null? lst) (default-thunk)) ;; actually invoke the default-value-producer
    ((eq? (caar lst) key) (car lst))
    (else (assq/default key (cdr lst) default-thunk))))
It would be called like this:
(assq/default 'mykey my-alist (lambda () (+ 3 4 5)))
The key here is the use of the lambda to delay computation of the default value until it's actually known to be required.
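Transcribed into Haskell as a sketch (the name lookupOrElse is mine), the delayed default becomes a unit-taking function, although Haskell's laziness would let you pass the default value directly:

lookupOrElse :: Eq k => k -> [(k, v)] -> (() -> v) -> v
lookupOrElse key lst defaultThunk =
  case lookup key lst of
    Just v  -> v
    Nothing -> defaultThunk ()   -- only computed when the key is absent
-- lookupOrElse "mykey" myAlist (\() -> 3 + 4 + 5)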
See also continuation-passing-style, which takes this to an extreme. Javascript, for instance, relies on continuation-passing-style and closures to perform all of its blocking operations (like sleeping, I/O, etc).
ETA: Where I've said closures above, I mean lexically scoped closures. It's the lexical scope that's key, often.
You can use a lambda to create a Y combinator, that is, a function that takes another function and returns a recursive form of it. Here is an example:
def Y(le):
    def _anon(cc):
        return le(lambda x: cc(cc)(x))
    return _anon(_anon)
This is a thought bludgeon that deserves some more explanation, but rather than regurgitate it here, check out this blog entry (the sample above comes from there too).
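In Haskell the same knot-tying is available directly as fix from Data.Function; a brief sketch:

import Data.Function (fix)

factorial :: Integer -> Integer
factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))
-- factorial 5 == 120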
It's C#, but I personally get a kick out of this article every time I read it:
Building Data out of Thin Air - an implementation of Lisp's cons, car, and cdr functions in C#. It shows how to build a simple stack data structure entirely out of lambda functions.
It isn't really quite the same concept as in Haskell etc., but in C#, the lambda construct has (optionally) the ability to compile to an object model representing the code (expression trees) rather than to code itself (this is itself one of the cornerstones of LINQ).
This in turn can lead to some very expressive meta-programming opportunities, for example (where the lambda here is expressing "given a service, what do you want to do with it?"):
var client = new Client<ISomeService>();
string captured = "to show a closure";
var result = client.Invoke(
    svc => svc.SomeMethodDefinedOnTheService(123, captured)
);
(assuming a suitable Invoke signature)
There are lots of uses for this type of thing, but I've used it to build an RPC stack that doesn't require any runtime code generation - it simply parses the expression-tree, figures out what the caller intended, translates it to RPC, invokes it, gathers the response, etc (discussed more here).
An example in Haskell to compute the derivative of a single-variable function using a numerical approximation:
deriv f = \x -> (f (x + d) - f x) / d
  where
    d = 0.00001

f x = x ^ 2
f' = deriv f -- roughly equal to f' x = 2 * x