Does the Chisel/FIRRTL toolchain do boolean expression optimization?

I am generating input to be compiled by Chisel. Doing it the easy way might result in suboptimal boolean expressions. For example, I tend to generate chains of nested Mux()-es, like this:
x :=
  Mux(!a && !b && !c && d, 13,
    Mux(!a && !b && c, 3,
      Mux(!a && !b, 2,
        Mux(!a && b, temp1,
          Mux(a && e, 11,
            Mux(a, 0,
              -1))))))
As you can see:
1. some of the boolean expressions are repeated, such as "!a", so some optimization, such as common sub-expression elimination, could likely express the same function using fewer evaluations;
2. tests are repeated, again such as "!a", so some optimization could likely factor a repeated test out and test it only once; and
3. similar to point 2 above, the expression is very deep, so some optimization could likely make it more like a tree and less like a linear sequence of Mux-es.
One thing I do not do is have complex predicate expressions: every predicate is just a conjunction of terms and each term is just a var or its negation.
I could try to implement these kinds of transforms in my code generator, but in doing so I would end up writing my own optimizing compiler for boolean expressions. Instead, can I just generate the above and rely on the Chisel/FIRRTL toolchain to optimize boolean expressions of this level of complexity? Such expressions are likely to be about the size of the one above, or up to maybe twice its size.

The FIRRTL compiler does support Common Subexpression Elimination (CSE), but not Global Value Numbering (GVN). In effect, you can expect most common subexpressions to be combined in the emitted Verilog.
The FIRRTL compiler does not do mux tree optimization. The synthesis tool should be able to optimize whatever it's given, but sadly that's not always the case. Therefore, Chisel and the FIRRTL compiler choose not to do mux tree optimization, in order to preserve the intent of the user. Commonly, the user is writing specific Chisel intended to be optimized in a certain way by the synthesis tool. If the FIRRTL compiler reorders the mux tree and produces a quality of result (QoR) regression, that's really bad. Consider this comment for more context.
That said, if a user really wants to apply some mux reordering at the FIRRTL level, they can write a custom FIRRTL optimization transform (which may be scoped to only the module/region they want to optimize). This could be a good optional feature of the FIRRTL compiler. It is also an option available if you're generating Chisel: it may be simpler to write an optimization over FIRRTL IR than inside the Chisel generation library. A bare skeleton of what such a transform might look like is sketched below.
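For illustration only, here is a no-op skeleton of a custom transform, assuming the Scala FIRRTL compiler's Transform API roughly as of FIRRTL 1.3; the class name ReorderMuxes is made up, and the actual mux-rewriting logic is left unimplemented:

import firrtl._

// Hedged sketch: a placeholder custom FIRRTL transform that currently does
// nothing. The Transform/CircuitState/LowForm names are assumed from the
// FIRRTL 1.3-era Scala API.
class ReorderMuxes extends Transform {
  def inputForm: CircuitForm = LowForm
  def outputForm: CircuitForm = LowForm

  protected def execute(state: CircuitState): CircuitState = {
    // Walk state.circuit and rewrite mux expressions here; returning the
    // state unchanged makes this a no-op.
    state
  }
}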
Now, how does this interact with the original example? Start with a slightly simplified version:
import chisel3._
import chisel3.internal.sourceinfo.UnlocatableSourceInfo

class Foo extends RawModule {
  private implicit val noInfo = UnlocatableSourceInfo

  val a = IO(Input(Bool()))
  val b = IO(Input(Bool()))
  val c = IO(Input(Bool()))
  val d = IO(Input(Bool()))
  val e = IO(Input(Bool()))
  val x = IO(Output(UInt()))

  x := Mux(!a && !b && !c && d, 1.U,
       Mux(!a && !b && c, 2.U,
       Mux(!a && !b, 3.U,
       Mux(!a && b, 4.U,
       Mux(a && e, 5.U,
       Mux(a, 6.U, 0.U))))))
}
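For reference, one way to drive this module to Verilog might look like the following (a hedged sketch: the object name Elaborate is made up, and the chisel3.stage driver API is assumed from the Chisel 3.3 era):

import chisel3.stage.{ChiselGeneratorAnnotation, ChiselStage}

// Hypothetical driver, for illustration only; asks the FIRRTL backend
// (via the assumed Chisel 3.3-era stage API) to emit Verilog for Foo.
object Elaborate extends App {
  (new ChiselStage).execute(
    Array("-X", "verilog"),
    Seq(ChiselGeneratorAnnotation(() => new Foo))
  )
}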
When compiled with Chisel 3.3.2 and FIRRTL 1.3.2, the following Verilog is the result:
module Foo(
  input        a,
  input        b,
  input        c,
  input        d,
  input        e,
  output [2:0] x
);
  wire _T = ~a;
  wire _T_1 = ~b;
  wire _T_2 = _T & _T_1;
  wire _T_3 = ~c;
  wire _T_4 = _T_2 & _T_3;
  wire _T_5 = _T_4 & d;
  wire _T_9 = _T_2 & c;
  wire _T_14 = _T & b;
  wire _T_15 = a & e;
  wire [2:0] _T_16 = a ? 3'h6 : 3'h0;
  wire [2:0] _T_17 = _T_15 ? 3'h5 : _T_16;
  wire [2:0] _T_18 = _T_14 ? 3'h4 : _T_17;
  wire [2:0] _T_19 = _T_2 ? 3'h3 : _T_18;
  wire [2:0] _T_20 = _T_9 ? 3'h2 : _T_19;
  assign x = _T_5 ? 3'h1 : _T_20;
endmodule
Observations:
CSE is doing its job, e.g., ~a & ~b is computed once in _T_2 and reused.
The mux tree structure is unmodified.
Chisel does have a reduceTree method defined for Vec which can be used to produce balanced mux trees (a hedged sketch of its use follows the MuxCase example below). Also, the chain of muxes in the original example can perhaps be described more scalably with util.MuxCase (without affecting the resulting mux tree):
x := MuxCase(
  default = 0.U,
  mapping = Seq(
    (!a && !b && !c && d) -> 1.U,
    (!a && !b && c) -> 2.U,
    (!a && !b) -> 3.U,
    (!a && b) -> 4.U,
    (a && e) -> 5.U,
    (a) -> 6.U)
)
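As for reduceTree, here is a minimal sketch, assuming a Chisel version where Vec.reduceTree is available (around Chisel 3.4 and later); the module and port names are made up for illustration:

import chisel3._

class TreeReduce extends RawModule {
  val in  = IO(Input(Vec(8, UInt(8.W))))
  val out = IO(Output(UInt(8.W)))
  // Pairwise reduction builds a balanced tree of ORs (depth ~log2(8))
  // instead of a linear chain of seven operations.
  out := in.reduceTree(_ | _)
}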

Related

Order of Evaluation of Arguments in Ocaml

I would like to know why OCaml evaluates the calls from right to left. Is that an FP principle, or does it not matter at all for an FP language?
A quicksort example:
let rec qs = function
  | [] -> []
  | h::t -> let l, r = List.partition ((>) h) t in
            List.iter (fun e -> print_int e; print_char ' ') l; Printf.printf " <<%d>> " h;
            List.iter (fun e -> print_int e; print_char ' ') r; print_char '\n';
            (qs l) @ (h :: qs r)
In my example the call to (qs r) is evaluated first and then (qs l), but I expected it to be the other way around.
# qs [5;43;1;10;2];;
1 2 <<5>> 43 10
10 <<43>>
<<10>>
<<1>> 2
<<2>>
- : int list = [1; 2; 5; 10; 43]
EDIT:
from https://caml.inria.fr/pub/docs/oreilly-book/html/book-ora029.html
In Objective CAML, the order of evaluation of arguments is not specified. As it happens, today all implementations of Objective CAML evaluate arguments from left to right. All the same, making use of this implementation feature could turn out to be dangerous if future versions of the language modify the implementation.
The order of evaluation of arguments to a function is not specified in OCaml.
This is documented in Section 6.7 of the manual.
In essence this gives the greatest possible freedom to the system (compiler or interpreter) to evaluate expressions in an order that is advantageous in some way. It means you (as an OCaml programmer) must write code that doesn't depend on the order of evaluation.
If your code is purely functional, its behavior can't depend on the order. So you need to be careful only when writing code with effects.
Update
If you care about order, use let:
let a = <expr1> in
let b = <expr2> in
f a b
Or, more generally:
let f = <expr0> in
let a = <expr1> in
let b = <expr2> in
f a b
Update 2
For what it's worth, the book you cite above was published in 2002. A lot has changed since then, including the name of the language. A more current resource is Real World OCaml.

Solidity Order of Operations - Logical NOT

I have a question concerning the order of operations in Solidity. The docs say that the logical NOT operation takes precedence over the logical AND operation. Given an if statement like if (false && !function()), I therefore thought the function would be called first because of operator precedence, but in reality the short-circuiting of the && operator happens first. So my question is: why?
It's because the two operators || and && apply the common short-circuiting rules, as described in Solidity document:
The operators || and && apply the common short-circuiting rules. This means that in the expression f(x) || g(y), if f(x) evaluates to true, g(y) will not be evaluated even if it may have side-effects.
Because of these short-circuiting rules, the behavior described here is exactly the same as in many other languages, e.g. Java or Scala. Precedence only determines how the expression is grouped, not the order of evaluation: && always evaluates its left operand first, and when that operand is false it never evaluates the right operand, so the function is never called. Here is a Scala REPL demonstration:
scala> def foo(x: Int): Boolean = { if (x >= 0) true else ??? }
foo: (x: Int)Boolean
scala> foo(10)
res0: Boolean = true
scala> foo(-10)
scala.NotImplementedError: an implementation is missing
at scala.Predef$.$qmark$qmark$qmark(Predef.scala:230)
at .foo(<console>:11)
... 32 elided
scala> if (false && !foo(-10)) "boo" else "bar"
res2: String = bar

How can I make a predefined Python function work with Z3py?

As a beginner in Z3, I am wondering if there is a way to make Z3Py work with a predefined Python function. Following is a small example which explains my question.
from z3 import *
def f(x):
    if x > 0:
        print " > 0"
        return 1
    else:
        print " <= 0"
        return 0
a=Int('a')
s=Solver()
s.insert(f(a) == 0)
t = s.check()
print t
print a
m = s.model()
print m
f(x) is a function defined in Python. When x <= 0, f(x) returns 0. I add a constraint s.insert(f(a) == 0) and hope that Z3 can find an appropriate value for the variable "a" (e.g. -3). But this code does not work correctly. How should I change it?
(Please note that I need f(x) to be defined outside Z3 and then called by Z3.)
What I am trying to do is call a predefined function provided by a graph library without translating it to Z3. I am using the NetworkX library, and some of the code is given below:
import networkx as nx
G1=nx.Graph()
G1.add_edge(0,1)
G2=nx.Graph()
G2.add_edge(0,1)
G2.add_edge(1,2)
print(nx.is_isomorphic(G1, G2))
#False
I need Z3 to help me find a vertex in G2 such that after removing this vertex, G2 is isomorphic to G1. e.g.
G2.remove_node(0)
print(nx.is_isomorphic(G1, G2))
#True
I think this will be tough if f is a general function (e.g., what if it's recursive?), although if you assume it has some structure (e.g., if-then-else), you might be able to write a simple translator. The issue is that Z3's functions are mathematical in nature and not directly equivalent to Python functions (see http://en.wikipedia.org/wiki/Uninterpreted_function ).
If possible for your purpose, I would propose going in the opposite direction: define your function f in terms of Z3 constructs, then evaluate it within your program (e.g., using this method: Z3/Python getting python values from model ). If that won't work for you, please include some additional details on how you need to use the function.
Here's a minimal example (rise4fun link: http://rise4fun.com/Z3Py/pslw ):
def f(x):
    return If(x > 0, 1, 0)
a=Int('a')
s=Solver()
P = (f(a) == 0)
s.add(P)
t = s.check()
print t
print a
m = s.model()
print m
res = simplify(f(m[a])) # evaluate f at the assignment to a found by Z3
print "is_int_value(res):", is_int_value(res)
print res.as_long() == 0 # Python 0
print f(1)
print simplify(f(1)) # Z3 value of 1, need to convert as above

Weeding duplicates from a list of functions

Is it possible to remove the duplicates (as in nub) from a list of functions in Haskell?
Basically, is it possible to add an instance for (Eq (Integer -> Integer))?
In ghci:
let fs = [(+2), (*2), (^2)]
let cs = concat $ map subsequences $ permutations fs
nub cs
<interactive>:31:1:
No instance for (Eq (Integer -> Integer))
arising from a use of `nub'
Possible fix:
add an instance declaration for (Eq (Integer -> Integer))
In the expression: nub cs
In an equation for `it': it = nub cs
Thanks in advance.
...
Further, based on larsmans' answer, I am now able to do this
> let fs = [AddTwo, Double, Square]
> let css = nub $ concat $ map subsequences $ permutations fs
in order to get this
> css
[[],[AddTwo],[Double],[AddTwo,Double],[Square],[AddTwo,Square],[Double,Square],[AddTwo,Double,Square],[Double,AddTwo],[Double,AddTwo,Square],[Square,Double],[Square,AddTwo],[Square,Double,AddTwo],[Double,Square,AddTwo],[Square,AddTwo,Double],[AddTwo,Square,Double]]
and then this
> map (\cs-> call <$> cs <*> [3,4]) css
[[],[5,6],[6,8],[5,6,6,8],[9,16],[5,6,9,16],[6,8,9,16],[5,6,6,8,9,16],[6,8,5,6],[6,8,5,6,9,16],[9,16,6,8],[9,16,5,6],[9,16,6,8,5,6],[6,8,9,16,5,6],[9,16,5,6,6,8],[5,6,9,16,6,8]]
, which was my original intent.
No, this is not possible. Functions cannot be compared for equality.
The reason for this is:
Pointer comparison makes very little sense for Haskell functions, since then the equality of id and \x -> id x would change based on whether the latter form is optimized into id.
Extensional comparison of functions is impossible, since it would require a positive solution to the halting problem (both functions having the same halting behavior is a necessary requirement for equality).
The workaround is to represent functions as data:
data Function = AddTwo | Double | Square deriving Eq
call AddTwo = (+2)
call Double = (*2)
call Square = (^2)
No, it's not possible to do this for Integer -> Integer functions.
However, it is possible if you're also ok with a more general type signature, Num a => a -> a, as your example indicates! One naïve way (not safe) would go like this:
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE NoMonomorphismRestriction #-}
data NumResLog a = NRL { runNumRes :: a, runNumResLog :: String }
       deriving (Eq, Show)

instance (Num a) => Num (NumResLog a) where
  fromInteger n = NRL (fromInteger n) (show n)
  NRL a alog + NRL b blog
     = NRL (a+b) ( "("++alog++ ")+(" ++blog++")" )
  NRL a alog * NRL b blog
     = NRL (a*b) ( "("++alog++ ")*(" ++blog++")" )
  ...

instance (Num a) => Eq (NumResLog a -> NumResLog a) where
  f == g = runNumResLog (f arg) == runNumResLog (g arg)
   where arg = NRL 0 "THE ARGUMENT"
unlogNumFn :: (NumResLog a -> NumResLog c) -> (a->c)
unlogNumFn f = runNumRes . f . (`NRL`"")
which works basically by comparing a "normalised" version of the functions' source code. Of course this fails when you compare e.g. (+1) == (1+), which are equivalent numerically but yield "(THE ARGUMENT)+(1)" vs. "(1)+(THE ARGUMENT)" and thus are flagged as non-equal. However, since functions of type Num a => a -> a are essentially constrained to be polynomials (yeah, abs and signum make it a bit more difficult, but it's still doable), you can find a data type that properly handles those equivalences.
The stuff can be used like this:
> let fs = [(+2), (*2), (^2)]
> let cs = concat $ map subsequences $ permutations fs
> let ncs = map (map unlogNumFn) $ nub cs
> map (map ($ 1)) ncs
[[],[3],[2],[3,2],[1],[3,1],[2,1],[3,2,1],[2,3],[2,3,1],[1,2],[1,3],[1,2,3],[2,1,3],[1,3,2],[3,1,2]]

Does Scala have an operator similar to Haskell's `$`?

Does Scala have an operator similar to Haskell's $?
-- | Application operator. This operator is redundant, since ordinary
-- application #(f x)# means the same as #(f '$' x)#. However, '$' has
-- low, right-associative binding precedence, so it sometimes allows
-- parentheses to be omitted; for example:
--
-- > f $ g $ h x = f (g (h x))
--
-- It is also useful in higher-order situations, such as #'map' ('$' 0) xs#,
-- or #'Data.List.zipWith' ('$') fs xs#.
{-# INLINE ($) #-}
($) :: (a -> b) -> a -> b
f $ x = f x
Yes, it's written "apply"
fn apply arg
There's no standard punctuation operator for this, but it would be easy enough to add one via library pimping.
class RichFunction[-A, +B](fn: Function1[A, B]) { def $(a: A): B = fn(a) }
implicit def function2RichFunction[A, B](t: Function1[A, B]) = new RichFunction[A, B](t)
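A hypothetical usage of the enrichment above (note that, unlike Haskell's low-precedence, right-associative $, this is just an ordinary method call):

val inc = (x: Int) => x + 1
inc $ 41  // 42, via the implicit conversion to RichFunction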
In general, while Scala code is much denser than Java, it's not quite as dense as Haskell. Thus, there's less payoff to creating operators like '$' and '.'