What is the difference between global and :: in Tcl?

I am working with EDA software that requires me to rely on global variables.
Say I have a proc, and I am looking for a global variable CCK_FOO. I have two choices:
Use global CCK_FOO within the code.
Use ::CCK_FOO
In terms of "management level", these appear identical. Is there an "under the hood" pro or con for either method? I actually prefer using ::, as it minimizes the chance of accidental overriding.

Under the hood, using ::CCK_FOO goes through the parsed-variable-name route every time the execution engine uses it, whereas global CCK_FOO lets the engine set up a local variable (with a slot in the local variable table, or LVT) that is linked to the global variable. Access via the LVT is much faster, because it is just an index into a C array (plus an extra pointer dereference, since it's a link), whereas looking up a global variable means doing a hash-table lookup (the implementation of the global namespace keeps its variables in a hash table). The internal parse of ::CCK_FOO into :: and CCK_FOO is cached.
In practical terms, ::CCK_FOO is perhaps slightly faster if you only access the variable once, but as soon as you use it twice (let alone more often) you get better performance by paying the one-off cost of global CCK_FOO and accessing the variable via LVT indexing.
% proc style1 {} {
    set ::CCK_FOO abc
}
% proc style2 {} {
    global CCK_FOO
    set CCK_FOO abc
}
% time { style1 } 100000
0.52350635 microseconds per iteration
% time { style2 } 100000
0.5267007100000001 microseconds per iteration
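The difference can also be seen directly in Tcl 8.5 and later via the bytecode disassembler (a sketch; tcl::unsupported::disassemble is, as its name says, not a supported interface, and its output format varies between versions):

```tcl
# Compare the compiled bytecode of the two styles. style2 should show
# the global link being created once, followed by stores through an
# LVT slot, while style1 stores through the variable name each time.
proc style1 {} { set ::CCK_FOO abc }
proc style2 {} { global CCK_FOO; set CCK_FOO abc }

puts [::tcl::unsupported::disassemble proc style1]
puts [::tcl::unsupported::disassemble proc style2]
```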
Note: timings from the code above and the code below are not comparable, as they do different amounts of other work. Look instead at the difference between style1 and style2 within each pair.
% proc style1 {} {
    set ::CCK_FOO [string reverse $::CCK_FOO]
}
% proc style2 {} {
    global CCK_FOO
    set CCK_FOO [string reverse $CCK_FOO]
}
% time { style1 } 100000
0.9733970200000001 microseconds per iteration
% time { style2 } 100000
0.78782093 microseconds per iteration
# Calibration...
% time { string reverse abc } 100000
0.28694849 microseconds per iteration
As you can see, with just two accesses, we're getting quite a lot of speedup by using global.

Related

When functions are assigned to variables, how are they stored?

Normally, if you create a variable, how to store it in memory is trivial: get its size (or the sizes of all of its components, for example in structs) and allocate that many bytes. A function, however, is a bit different from other data types; it's not just some primitive with a set size. My question is: how exactly are functions stored in memory?
Some example code in JavaScript:
let factorial = function(x) {
    if (x == 0) return 1;
    return x * factorial(x - 1);
};
Once defined, I can use this function like any other variable, putting it in objects, arrays, passing it into other functions, etc.
So how does it keep track of the function? I understand that this is eventually compiled to machine code (or not, in the case of JavaScript, but I just used it as a convenient example), but how would memory look after such a function is defined? Does it store a pointer to the code plus a marker that it's a function, does it store the literal machine code/bytecode for the function, or something else?

Differences between double-colon and global variable use?

Very recently I found out about the namespace concept and the use of the double colon (::) for program variables.
Before I start reshaping all my scripts, I wanted to know whether there is a real difference between accessing a variable with the global keyword and with the double-colon syntax.
e.g.
set var bla
proc kuku {} { puts $::var }
vs.
proc gaga {} {global var ; puts $var}
In both cases I'm getting 'bla' written to my screen.
What am I missing?
I understand that editing the variable will be a bit problematic (is it even possible?), but for read-only vars, is there a difference between the two methods?
They're talking about the same variable. With the ::var form, you're using the fully-qualified name, whereas with the form with global you're making a local link to the global variable (which really is a pointer to the global variable). Reading from or writing to them should work exactly the same, whichever way you choose.
There is a measurable difference between the two. With global, you pay the extra cost of setting up the link, but thereafter, for the remainder of the procedure, the cost per use (read or write) is quite a lot lower. With the other form you pay no setup overhead, but the per-use cost is higher. For a single use, the costs of the two are pretty similar; if you're using the variable several times, global is cheaper. OTOH, sometimes it is clearer to use the fully qualified version anyway (particularly true with vwait and trace), despite the reduction in speed.
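For example (a minimal sketch; the variable name is purely illustrative), a variable trace is most naturally attached to, and reported with, the fully qualified name:

```tcl
# Watch every write to the global, no matter which proc performs it.
trace add variable ::watched write {apply {{name1 name2 op} {
    puts "trace fired: $name1 ($op)"
}}}

proc poke {} {
    set ::watched 42    ;# fully qualified; no 'global' statement needed
}
poke
```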
I find that I access the ::env and ::tcl_platform arrays using their fully-qualified form, but most other things will get accessed via global. That's just my preference though.
Here's an example interactive session:
% set x 1
1
% proc y1 {} {incr ::x;return ok}
% time { y1 } 10000
0.5398216 microseconds per iteration
% proc y2 {} {global x;incr x;return ok}
% time { y2 } 10000
0.4537753 microseconds per iteration
% proc z1 {} {return $::x}
% time { z1 } 10000
0.4864713 microseconds per iteration
% proc z2 {} {global x; return $x}
% time { z2 } 10000
0.4433554 microseconds per iteration
(I wouldn't expect you to get the same absolute figures as me. Do your own performance testing. I would expect similar relative figures…)

Double values lose precision in Tcl after string/list operations

While working with Tcl, I discovered this behavior: when I loop over a double variable, it loses its precision.
set dbl [expr { double(13.0/7.0) }]
set dbl2 [expr { double(13.0/7.0) }]
foreach a $dbl {
}
if { $dbl == $dbl2 } {
    puts "\$dbl == \$dbl2"
} else {
    puts "\$dbl != \$dbl2" ;# they will not be equal
}
As I soon found out, when you use operations that work with strings or lists (e.g. llength, lindex, string first, regsub, foreach, etc.), the double representation of the variable is replaced with a string representation, which is created (or was created earlier) based on the $tcl_precision value. Furthermore, every copy of this double variable that was created with the set command is also spoiled.
Is there a way not to lose precision after such operations in Tcl 8.4, without forcing tcl_precision to some fixed value?
P.S. set tcl_precision 0 works only in Tcl 8.5 and above.
From Tcl 8.5 onwards, your code should Just Work. Considerable effort was put into 8.5 to make the default conversion of doubles to strings (and hence to other types) not lose information. It also tries to use the minimum number of digits to do this on the grounds that this minimises the amount of surprise presented to people; yes, we had a real expert working on this.
For 8.4 and before, set tcl_precision to 17. That guarantees that no significant bits are lost, though the representation used may be considerably longer than minimal.
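Sketching the 8.4-era workaround (the variable names are just for illustration):

```tcl
# Tcl 8.4 and earlier: 17 digits guarantees a lossless double-to-string
# round trip, at the cost of longer-looking numbers.
set tcl_precision 17

set dbl  [expr { 13.0/7.0 }]
set dbl2 [expr { 13.0/7.0 }]
llength $dbl               ;# forces a string rep, now without data loss
if { $dbl == $dbl2 } {
    puts "still equal"
}
```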

Error in assigning values to bytes in a 2D array of registers in Verilog

Hi, when I write this piece of code:
module memo(out1);
    reg [3:0] mem [2:0];
    output wire [3:0] out1;

    initial begin
        mem[0][3:0] = 4'b0000;
        mem[1][3:0] = 4'b1000;
        mem[2][3:0] = 4'b1010;
    end

    assign out1 = mem[1];
endmodule
I get the following warnings, which make the code unsynthesizable:
WARNING:Xst:1780 - Signal mem<2> is never used or assigned. This unconnected signal will be trimmed during the optimization process.
WARNING:Xst:653 - Signal mem<1> is used but never assigned. This sourceless signal will be automatically connected to value 1000.
WARNING:Xst:1780 - Signal mem<0> is never used or assigned. This unconnected signal will be trimmed during the optimization process.
Why am I getting these warnings?
Haven't I assigned values to mem[0], mem[1] and mem[2]? Thanks for your help!
Your module has no inputs and a single output, out1. I'm not totally sure what the point of the module is with respect to your larger system, but you're basically initializing mem and then only using mem[1]. You could equivalently have a module which just assigns out1 the constant 4'b1000 (mem never changes). So yes, you did initialize the array, but because you didn't use any of the other values, the Xilinx tools are optimizing your module during synthesis and "trimming the fat." If you were to simulate this module (say, in ModelSim) you'd see your initializations just fine. Based on your warnings, though, I'm not sure why you've concluded that your code is unsynthesizable. It appears to me that you could definitely synthesize it; it's just a rather odd way to assign a single value of 4'b1000.
With regard to using initial blocks to store values in block RAM (e.g. to make a ROM), that's fine; I've done it several times without issue. A common use is to store coefficients in block RAM, which are read out later. That said, the way this module is written, there's no way to read anything out of mem anyway.

Accessing an array element directly vs. assigning it to a variable

Performance-wise, is it better to access an array element 'directly' multiple times, or assign its value to a variable and use that variable? Assuming I'll be referencing the value several times in the following code.
The reasoning behind this question is that, accessing an array element presumably involves some computing cost each time it is done, without requiring extra space. On the other hand, storing the value in a variable eliminates this access-cost, but takes up extra space.
' use a variable to store the value
Temp = ArrayOfValues(0)
If Temp > 100 Or Temp < 50 Then
    Dim Blah = Temp
    ...

' reference the array element 'directly'
If ArrayOfValues(0) > 100 Or ArrayOfValues(0) < 50 Then
    Dim Blah = ArrayOfValues(0)
    ...
I know this is a trivial example, but assuming we're talking about a larger scale in actual use (where the value will be referenced many times) at what point is the tradeoff between space and computing time worth considering (if at all)?
This is tagged language-agnostic, but I don't really believe that it is. This post answers the C and C++ version of the question.
An optimizing compiler can take care of "naked" array accesses; in C or C++ there's no reason to think that the compiler wouldn't remember the value of a memory location if no functions were called in between. E.g.
int a = myarray[19];
int b = myarray[19] * 5;
int c = myarray[19] / 2;
int d = myarray[19] + 3;
However, if myarray is not just defined as int[] but is actually something "fancy", especially some user-defined container type with an operator[]() function defined in another translation unit, then that function must be called each time the value is requested (since the function returns the data at a location in memory, and the calling code doesn't know that the result of the function is intended to be constant).
Even with 'naked' arrays, though, if you access the same element multiple times around function calls, the compiler must similarly assume that the value has changed (even if it can remember the address itself). E.g.
int a = myarray[19];
NiftyFunction();
int b = myarray[19] * 8;
There's no way that the compiler can know that myarray[19] will have the same value before and after the function call.
So- generally speaking, if you know that a value is constant through the local scope, "cache" it in a local variable. You can program defensively and use assertions to validate this condition you've put on things:
int a = myarray[19];
NiftyFunction();
assert(myarray[19] == a);
int b = a * 8;
A final benefit is that it's much easier to inspect the values in a debugger if they're not buried in an array somewhere.
The overhead in memory consumption is very limited, because for reference types it's just a pointer (a few bytes), and most value types also require just a few bytes.
Arrays are very efficient structures in most languages. Getting to an index doesn't involve any lookup, just some math (if each array slot takes 4 bytes, the 11th slot is at offset 40). Then there is probably a bit of overhead for bounds checking. Allocating the memory for a new local variable and freeing it costs a few CPU cycles as well, so in the end it also depends on how many array lookups you eliminate by copying to a local variable.
The fact is that you need exceptionally crappy hardware or really big loops for this to matter, and if it does, run a decent test on it. Personally, I often choose the separate variable, as I find that it makes the code more readable.
Your example is odd, by the way, since you do 2 array lookups before you create the local var :)
This makes more sense (eliminating 2 more lookups):
Dim blah = ArrayOfValues(0)
If blah > 100 Or blah < 50 Then
    ...