How to cast UInt to SInt value in Chisel3? - chisel

As the title says, what is the right way to cast a UInt to an SInt value in Chisel3?
e.g.:
val opC = RegInit(0.U(64.W))
val result = RegInit(0.U(64.W))
result := Mux(opC.toSInt > 0.S, opC, 0.U)

It depends on whether you want to reinterpret as an SInt (same width) or actually cast (i.e. casting an 8-bit UInt results in a 9-bit SInt).
You reinterpret a UInt as an SInt by calling .asSInt on the UInt, e.g. opC.asSInt; the result has the same width.
You cast a UInt to an SInt by calling .zext on the UInt, e.g. opC.zext; the result is 1 bit wider, with a zero in the MSB.
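For the snippet in the question, reinterpretation is what you want, since only the signed comparison matters and the width should stay 64 bits. A minimal sketch, assuming chisel3 (the module and port names are placeholders):
import chisel3._

class SignCheck extends Module {
  val io = IO(new Bundle {
    val opC    = Input(UInt(64.W))
    val result = Output(UInt(64.W))
  })
  // reinterpret the 64-bit UInt as a 64-bit SInt for the signed comparison;
  // no bits change, only the interpretation of the msb
  io.result := Mux(io.opC.asSInt > 0.S, io.opC, 0.U)
}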

Related

How to compare an ascii string with a uint8 array in Solidity?

I have a uint8 array containing ASCII codes for characters and a string variable, and I wish to make a comparison between them. For example:
uint8[3] memory foo = [98, 97, 122]; // baz
string memory bar = "baz";
bool result = keccak256(abi.encodePacked(foo)) == keccak256(abi.encodePacked(bytes(bar))); // false
Here I want the comparison to succeed, but it fails because encodePacked keeps the padding of each uint8 element in the array when encoding it.
How can I do it instead?
The two hashes will never be equal because the underlying encodings differ: abi.encodePacked(foo) pads each uint8 array element to a full 32-byte word, while abi.encodePacked(bytes(bar)) packs the string bytes tightly.
The uint8 fixed-size array is stored in memory as three separate 32-byte words - one for each item - with each value right-aligned (padded on the left):
0x
0000000000000000000000000000000000000000000000000000000000000062
0000000000000000000000000000000000000000000000000000000000000061
000000000000000000000000000000000000000000000000000000000000007a
But the string literal is stored as a dynamic-size byte array, with its contents packed tightly from the left:
0x
0000000000000000000000000000000000000000000000000000000000000020 # pointer
0000000000000000000000000000000000000000000000000000000000000003 # length
62617a0000000000000000000000000000000000000000000000000000000000 # value
So because the actual data is stored differently, you cannot perform a simple byte comparison of both arrays.
You can, however, loop through all items of the array and compare each item separately.
pragma solidity ^0.8;

contract MyContract {
    function compare() external pure returns (bool) {
        uint8[3] memory foo = [98, 97, 122]; // baz
        string memory bar = "baz";

        // typecast the `string` to a `bytes` dynamic-length array
        // so that you can use its `.length` member property
        // and access its items individually
        // (see `barBytes[i]` below, not possible with `bar[i]`)
        bytes memory barBytes = bytes(bar);

        // prevent accessing an out-of-bounds index in the following loop,
        // as well as a false positive if `foo` contains just the beginning
        // of `bar` but not the whole string
        if (foo.length != barBytes.length) {
            return false;
        }

        // loop through each item of `foo`
        for (uint i; i < foo.length; i++) {
            uint8 barItemDecimal = uint8(barBytes[i]);
            // and compare it to the decimal value of the `bar` character
            if (foo[i] != barItemDecimal) {
                return false;
            }
        }

        // all items have equal values
        return true;
    }
}
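The same element-by-element idea, transcribed into plain Scala purely as an illustration of the algorithm (not Solidity):
// compare a sequence of ASCII codes against the bytes of a string
val foo = Seq(98, 97, 122) // 'b', 'a', 'z'
val bar = "baz"
val barBytes = bar.getBytes("US-ASCII")

// length check first, then element-by-element comparison
val equal =
  foo.length == barBytes.length &&
  foo.indices.forall(i => foo(i) == (barBytes(i) & 0xFF))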

Indexing of elements in a Seq of string with chisel

I have tab = Array(1.U, 6.U, 5.U, 2.U, 4.U, 3.U) and Y = Seq(b, g, g, g, b, g), where tab is an array of UInt.
I want to map over tab as follows:
tab.map { case idx => Y(idx) }
But I keep getting the error: found chisel3.core.UInt, required Int.
I tried using the function peek() to convert idx to an Int by doing
tab.map { case idx => Y(peek(idx).toInt) }
but I get "peek not found". I also saw here that a Chisel UInt cannot be converted to an Int, but I did not understand the use of peek from the example given. So please, is there another approach to do the above?
Thanks!
The immediate problem is that you cannot index into Scala collections with a hardware construct like UInt or SInt. It should work if you wrap Y in a Vec. Depending on your overall module, this would probably look like:
val YVec = VecInit(Y)
val mappedY = tab.map { case idx => YVec(idx) }
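Put in context, a minimal sketch, assuming b and g are hardware values such as Bool literals (the names are placeholders, and the indices are kept within Y's range):
import chisel3._

class LookupExample extends Module {
  val io = IO(new Bundle {
    val out = Output(Vec(6, Bool()))
  })
  val b = false.B // hypothetical stand-ins for the asker's b and g
  val g = true.B
  val Y   = Seq(b, g, g, g, b, g)
  val tab = Array(1.U, 5.U, 2.U, 4.U, 3.U, 0.U) // each index must be < Y.length
  // VecInit turns the Scala Seq into a hardware Vec addressable by a UInt
  val YVec = VecInit(Y)
  io.out := VecInit(tab.map { case idx => YVec(idx) }.toSeq)
}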

ClojureScript floats hashed as ints

At first I thought this was a bug, but looking at the source code it's clearly intentional. Does anybody know why this is being done? It's inconsistent with Clojure and a subtle source of bugs.
(hash 1) ; => 1
(hash 1.5) ; => 1
https://github.com/clojure/clojurescript/blob/master/src/main/cljs/cljs/core.cljs#L985
(defn hash
  "Returns the hash code of its argument. Note this is the hash code
  consistent with =."
  [o]
  (cond
    (implements? IHash o)
    (bit-xor (-hash ^not-native o) 0)

    (number? o)
    (if (js/isFinite o)
      (js-mod (Math/floor o) 2147483647)
      (case o
        Infinity
        2146435072
        -Infinity
        -1048576
        2146959360))
    ...))
JavaScript has only one number type: a 64-bit float, which represents integers exactly only between -(2^53 - 1) and 2^53 - 1. However, bitwise operations work on 32-bit signed integers, so a lossy conversion is needed when a float is turned into a hash that works with bitwise operators. The magic number 2147483647 used for the modulo operation in core.cljs/hash is the largest value representable in a 32-bit signed integer. Note that there is also special handling for the values Infinity and -Infinity (the case fall-through covers NaN, the only other non-finite value).
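To see the collision from the question concretely, here is a rough Scala transcription of the numeric branch above (purely illustrative, not part of ClojureScript):
// floor, then modulo the largest 32-bit signed value, with the
// special cases for the non-finite inputs
def cljsNumberHash(o: Double): Long =
  if (o.isPosInfinity) 2146435072L
  else if (o.isNegInfinity) -1048576L
  else if (o.isNaN) 2146959360L
  else math.floor(o).toLong % 2147483647L

// cljsNumberHash(1.0) == 1 and cljsNumberHash(1.5) == 1:
// both inputs hash to the same value, as in the question.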

Chisel: Verilog generated code for Sint and UInt

When using SInt and UInt to implement an adder, I get the same Verilog code; see the code below:
import Chisel._

class Unsigned_Adder extends Module {
  val io = new Bundle {
    val a   = UInt(INPUT, 16)
    val b   = UInt(INPUT, 16)
    val out = UInt(OUTPUT)
  }
  io.out := io.a + io.b
}
and
import Chisel._

class Signed_Adder extends Module {
  val io = new Bundle {
    val a   = SInt(INPUT, 16)
    val b   = SInt(INPUT, 16)
    val out = SInt(OUTPUT)
  }
  io.out := io.a + io.b
}
This will generate the same Verilog code,
module Signed_Adder(
    input  [15:0] io_a,
    input  [15:0] io_b,
    output [15:0] io_out
);
  wire [15:0] T0;
  assign io_out = T0;
  assign T0 = io_a + io_b;
endmodule
Of course the module names will differ. When implementing a multiplier in Chisel using the multiplication operator (*),
io.out := io.a * io.b
I get different Verilog code for UInt and SInt; for SInt the code looks like:
module Multi(
    input  [15:0] io_a,
    input  [15:0] io_b,
    output [31:0] io_out
);
  wire [31:0] T0;
  assign io_out = T0;
  assign T0 = $signed(io_a) * $signed(io_b);
endmodule
$signed has been added to the code. Why is that? Why do I get the same Verilog code in the addition case, but different generated code for UInt and SInt in the multiplication case?
With addition, if the operand widths are equal, the adder doesn't need to care about the sign: two's-complement addition produces the same result bits either way, so the sum is correct apart from the overflow bit.
But with multiplication, the hardware has to know the sign to handle it correctly.
See the Verilog documentation on signed arithmetic for more information.
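To make that first point concrete, here is a small plain-Scala sanity check (illustrative only, not Chisel): the low result bits of an addition are identical under both interpretations, while the full-width product is not.
// the bit pattern 0xFFFF is 65535 as unsigned 16-bit, -1 as signed 16-bit
val a = 0xFFFF
val b = 0x0002

// addition: identical low 16 bits under either interpretation
val unsignedSum = (a + b) & 0xFFFF  // 65535 + 2 wraps to 0x0001
val signedSum   = (-1 + 2) & 0xFFFF // -1 + 2 is also 0x0001

// 32-bit products differ, so the multiplier must know the sign
val unsignedProd = a.toLong * b // 65535 * 2 = 131070 (0x0001FFFE)
val signedProd   = -1L * b      // -1 * 2 = -2 (all upper bits set)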
If you use +&, then the adder's io.out width will be 17 bits,
and in Verilog this adder still won't care about the sign, as FabienM said: any unsigned-to-signed interpretation is handled by the surrounding design, which connects to this adder and reads the result as a two's-complement value.
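A minimal chisel3-style sketch of that expanding add (module and port names are placeholders):
import chisel3._

class ExpandingAdder extends Module {
  val io = IO(new Bundle {
    val a   = Input(UInt(16.W))
    val b   = Input(UInt(16.W))
    val out = Output(UInt(17.W)) // 16-bit + 16-bit with +& yields 17 bits
  })
  // `+&` keeps the carry in the extra msb; plain `+` (i.e. `+%`)
  // would truncate the result back to 16 bits
  io.out := io.a +& io.b
}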

passing 2 arguments to a function in Haskell

In Haskell, I know that if I define a function like this:
add x y = x + y
then I call it like this: add e1 e2. That call is equivalent to (add e1) e2,
which means that applying add to one argument e1 yields a new function, which is then applied to the second argument e2.
That's the part I don't understand about Haskell. In other languages (like Dart), to do the task above, I would write this:
add(x) {
  return (y) => x + y;
}
I have to explicitly return a function. So does the "yields a new function which is then applied to the second argument" part happen automatically under the hood in Haskell? If so, what does that hidden function look like? Or do I just misunderstand Haskell?
In Haskell, everything is a value;
add x y = x + y
is just syntactic sugar for:
add = \x -> \y -> x + y
For more information, see https://wiki.haskell.org/Currying :
In Haskell, all functions are considered curried: that is, all functions in Haskell take just single arguments.
This is mostly hidden in notation, and so may not be apparent to a new
Haskeller. Let's take the function
div :: Int -> Int -> Int
which performs integer division. The expression div 11 2
unsurprisingly evaluates to 5. But there's more that's going on than
immediately meets the untrained eye. It's a two-part process. First,
div 11
is evaluated and returns a function of type
Int -> Int
Then that resulting function is applied to the value 2, and yields 5.
You'll notice that the notation for types reflects this: you can read
Int -> Int -> Int
incorrectly as "takes two Ints and returns an Int", but what it's really saying is "takes an Int and returns something of the type Int -> Int" -- that is, it returns a function that takes an Int and returns an Int. (One can write the type as Int x Int -> Int if you really mean the former -- but since all functions in Haskell are curried, that's not legal Haskell. Alternatively, using tuples, you can write (Int, Int) -> Int, but keep in mind that the tuple constructor (,) itself can be curried.)
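The "hidden" function the question asks about can be written out explicitly in Scala, shown here purely as an illustration of the desugared form in another language:
// currying made explicit: add takes an Int and returns another function,
// mirroring Haskell's add = \x -> \y -> x + y
val add: Int => Int => Int = x => y => x + y

val addFive: Int => Int = add(5) // partial application: supply only x
val twelve: Int = addFive(7)     // apply the returned function to y
// add(5)(7) is the Scala spelling of Haskell's add 5 7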