void(* resetDevice) (void) = 0;
This is a declaration I stumbled upon; no definition for it appears anywhere later in the code.
What kind of a function is this? What does it do? What does it return?
This is a pointer to a function that takes no parameters and returns void. The pointer is named resetDevice and is initialized to null (address 0). Some library code will later initialize it to point to a real function. You will then be able to call it with
resetDevice();
Now, what that function will do exactly, I have no idea; it depends on your environment/libraries, which you have not specified. My guess is that this is Arduino. You can read a bit more about it here:
http://forum.arduino.cc/index.php?topic=385427.0
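To make the pattern described above concrete, here is a minimal, hypothetical C++ sketch of what such library code might do. Only resetDevice comes from the question; the function doReset and its body are invented for illustration:

#include <cstdio>

// Pointer to a function taking no arguments and returning nothing, initially null.
void (*resetDevice)(void) = 0;

// A hypothetical "real" function that library code might install later.
void doReset(void) {
    std::puts("resetting device...");
}

int main() {
    resetDevice = doReset;   // library code assigns the real implementation
    if (resetDevice != 0)
        resetDevice();       // now calls doReset through the pointer
}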
I use this piece of code to reset my Arduino board.
From what I found, it is not a really elegant method to reset the board, but it works.
If you call "resetDevice()" while it is pointing to address 0 (NULL), it will cause a runtime error or "null pointer exception" as the pointer is not pointing to a valid memory location containing a function. And that error causing a board reboot.
Here is a simplified version of the code:
func MyHandler(a int) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(a)
	})
}
Whenever an HTTP request comes in, MyHandler will be called, and it will return a function that is used to handle the request. So whenever an HTTP request comes in, a new function object is created. Functions are first-class values in Go. I'm trying to understand what actually happens, from memory's perspective, when you return a function. When you return a value such as an integer, it occupies 4 bytes on the stack. So what about returning a function with lots of things inside its body? Is this an efficient way to do it? What are the shortcomings?
If you're not used to closures, they may seem a bit magic. They are, however, easy to implement in compilers:
The compiler finds any variables that must be captured by the closure. It puts them into a working area that will be allocated and remain allocated as long as the closure itself exists.
The compiler then generates the inner function with a secret extra parameter, or some other runtime trickery,[1] such that calling the function activates the closure.
Because the returned function accesses its closure variables through the compile-time arrangement, there's nothing special needed. And since Go is a garbage-collected language, there's nothing else needed either: the pointer to the closure keeps the closure data alive until the pointer is gone because the function cannot be called any more, at which point the closure data evaporates (well, at the next GC).
[1] GCC sometimes uses trampolines to do this for C, where trampolines are executable code generated at runtime. The executable code may set a register or pass an extra parameter or some such. This can be expensive, since something treated as data at runtime (generated code) must be turned into executable code at runtime (possibly requiring a system call and potentially requiring that some supervisory code "vet" the resulting runtime code).
Go does not need any of this because the language was defined with closures in mind, so implementors don't, er, "close off" any easy ways to make this all work. Some runtime ABIs are defined with closures in mind as well, e.g., register r1 is reserved as the closure-variables pointer in all pointer-to-function types, or some such.
Actual function size is irrelevant. When you return a function like this, memory will be allocated for the closure, that is, any variables in the scope that the function uses. In this case, a pointer will be returned containing the address of the function and a pointer to the closure, which will contain a reference to the variable a.
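To make that concrete, here is a rough, hand-rolled analogue in C++ (not Go) of the mechanism described above; all names are invented for illustration. The captured variable lives in a heap-allocated environment, and the returned "function value" pairs a code pointer with a pointer to that environment. Go's compiler and runtime do the equivalent for you automatically.

#include <iostream>
#include <memory>

// The environment holds the captured variables and stays alive for as long
// as someone still holds the function value (shared_ptr stands in for the GC).
struct Env {
    int a;                                    // the captured variable
};

struct Handler {                              // the "function value": code + environment
    void (*code)(Env*, int);
    std::shared_ptr<Env> env;

    void operator()(int request) const { code(env.get(), request); }
};

// The compiled body of the inner function, with the environment as a hidden parameter.
static void handlerBody(Env* env, int request) {
    std::cout << "request " << request << " -> status " << env->a << '\n';
}

Handler myHandler(int a) {
    auto env = std::make_shared<Env>(Env{a}); // capture a into heap storage
    return Handler{handlerBody, env};         // return (code pointer, environment pointer)
}

int main() {
    Handler h = myHandler(204);               // a outlives myHandler's stack frame
    h(1);
    h(2);
}

Either way, the cost of returning the function is a small allocation for the captured variables plus returning a pointer-sized value or two, regardless of how much code the function body contains.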
I'm using Chisel and a BlackBox to run my Chisel logic against a Verilog register file.
The register file does not have a reset signal, so I expect the registers to be randomly initialized.
I passed --x-initial unique to Verilator.
Basically, this is how I launch the test:
private val backendName = "verilator"
"NOCDMA" should s" do blkwrite and blkread correctly (with $backendName)" in {
Driver.execute(Array("--fint-write-vcd","--backend-name",s"$backendName",
"--more-vcs-flags","--trace-depth 1 --x-initial unique"),
()=>new DMANetworkWithMem(memAddrWidth,memDataWidth)(nocDataWidth)(nNodesX,nNodesY)){
c => new DMANetworkRWTest(c)
}
}
But the data I read from the register file is all zeros before I write anything to it.
The read data is correct after I have written to it.
So, is there anything inside Chisel that I need to tune, or did I not do something properly?
Any suggestions?
I'm not certain, but I found the following Verilator issue describing a similar problem: https://github.com/verilator/verilator/issues/1399.
From skimming the above issue, I think you also need to pass +verilator+seed+<value> and +verilator+rand+reset+<value> at runtime. I am not an expert in the iotesters, but I believe you can add these runtime values through the iotesters argument: --more-vcs-c-flags.
Side note: I would also set --x-assign unique in Verilator if there are cases in the Verilog where the runtime would otherwise inject an X (e.g. an out-of-bounds index).
I hope this helps!
Using Flash CC
I have the following array. The first element retrieves the string "Fish" from another array.
static public var comboType_1:Array =
[Constants.level_Data_Edible[Constants.currentLevel],"Frog"];
When I try to iterate over this array with
for(var i:String in hint)
trace(hint[i]);
I get this result:
Fish,
Frog,
function Function() {}
When I debug it, it shows the array length is 2.
Googling function Function() {} produces zero hits; no one seems to have heard of this error.
There is no third index, as you can see from the array. The error prevents me from iterating over the array: #1065: Variable Function function () {} is not defined.
I'm pretty sure this is yet another Flash bug, but I just want to know the workaround.
Look, it is clear that you have discovered how useful static variables can be, and you use them as global variables all over your app, but since you do not understand when those static variables are instantiated, you are running into trouble. If you prefer to blame everything on the technology itself, suit yourself, but that won't fix your problem and won't make you a better coder. Static variables are instantiated prior to anything else in your app, so, for example, using another static variable like this one:
Constants.currentLevel
is likely to produce an error, because whatever currentLevel is at that moment will be ignored anyway. Anything labelled 'current' is a red flag when initializing static variables, because when static variables are created there is nothing 'current' going on.
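The same pitfall can be sketched in C++ terms (the names only mirror the question; this is an analogue, not ActionScript): a static initialized from another static captures whatever value that other static holds at static-initialization time, not its "current" value later.

#include <iostream>
#include <string>
#include <vector>

static int currentLevel = 0;                               // default value at startup
static std::vector<std::string> levelNames = {"Fish", "Frog", "Bird"};

// Initialized once, during static initialization, using currentLevel's value at that moment.
static std::vector<std::string> comboType = {levelNames[currentLevel], "Frog"};

int main() {
    currentLevel = 2;                                      // changing the level later...
    for (const auto& s : comboType)
        std::cout << s << '\n';                            // ...still prints "Fish", then "Frog"
}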
The result you are getting and the code you are showing do not match, and you only do that on purpose to prove your point. You could show the whole code, but you won't, because you know very well that the true mistake (in your own code) would be pointed out.
Stop feeling sorry for yourself, stop blaming anything but yourself, and post a real question with real code so we can show you where your mistake is.
I was looking at a program in IDA, trying to figure out how a certain function worked, when I came across something like this:
; C_TestClass::Foo(void)
__text:00000000 __ZN14C_TestClass7FooEv proc near
__text:00000000 jmp __ZN14C_TestClass20Barr ; C_TestClass::Barr(void)
__text:00000000 __ZN14C_TestClass7FooEv endp
__text:00000000
Can anyone explain to me what exactly jumping to a function would do in a case like this?
I am guessing that it acts as a wrapper for the other function?
You are correct that jumping is often a way to efficiently handle wrapper functions that aren't inlined.
Normally, you'd have to read all the function parameters and push them back onto the stack before you can call the sub-function.
But when the wrapper function has the exact same prototype:
Same calling convention
Same parameters (and in the same order)
Same return type
there's no need for all the usual function-call overhead; you can just jump directly to the target. (These conditions may not all be strictly necessary, as the same trick may still be possible in some other cases.)
All the parameters (either on the stack or in registers) that were set up when calling the wrapper function will already be in place (and compatible) for the sub-function.
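A minimal C++ sketch of such a wrapper (the names are invented): with optimizations enabled, compilers commonly emit Foo as a single jmp to Barr, although the exact output depends on the compiler and flags.

// Barr does the real work; Foo is a thin wrapper with an identical prototype.
int Barr(int x) {
    return x + 1;
}

int Foo(int x) {
    // Same calling convention, parameters, and return type: the arguments are
    // already where Barr expects them, so this call can compile to "jmp Barr".
    return Barr(x);
}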
This is what's known as a tail-call. The compiler is able to invoke the function directly and let it return to the original caller without causing any problems. For example:
int f(int x)
{
...
return g(y);
}
The compiler can simply jmp g at the end because there is room for the arguments in the same place as f's arguments and the return value is not modified before returning to f's caller.
Do you check for data validity in every constructor, or do you just assume the data is correct and throw exceptions in the specific function that has a problem with the parameter?
A constructor is a function too - why differentiate?
Creating an object implies that all the integrity checks have been done. It's perfectly reasonable to check parameters in a constructor and throw an exception once an illegal value has been detected.
Among other things, this simplifies debugging. When your program throws an exception in a constructor, you can look at the stack trace and often immediately see the cause. If you delay the check, you'll have to do more investigation to work out which earlier event caused the current error.
It's always better to have a fully-constructed object with all the invariants "satisfied" from the very beginning. Beware, however, of throwing exceptions from a constructor in non-managed languages, since that may result in a memory leak.
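As a small C++ illustration of that approach (the class and its rule are invented for the example): the constructor validates and throws, so any object that actually exists satisfies its invariants.

#include <stdexcept>
#include <string>

class Percentage {
public:
    explicit Percentage(int value) : value_(value) {
        // Reject illegal values immediately: no Percentage can exist in an
        // invalid state, and the exception's stack trace points straight at
        // the offending caller.
        if (value < 0 || value > 100)
            throw std::invalid_argument("Percentage out of range: " + std::to_string(value));
    }

    int value() const { return value_; }

private:
    int value_;
};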
I always coerce values in constructors. If the users can't be bothered to follow the rules, I just silently enforce them without telling them.
So, if they pass a value of 107%, I'll set it to 100%. I just make it clear in the documentation that that's what happens.
Only if there's no obvious logical coercion do I throw an exception back to them. I like to call this the "principle of most astonishment to those too lazy or stupid to read the documentation".
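A sketch of that coercion approach, again with an invented C++ class: out-of-range input is silently clamped, and the behaviour is documented rather than reported.

#include <algorithm>

class Volume {
public:
    // Values outside [0, 100] are clamped, as documented; no exception is thrown.
    explicit Volume(int percent) : percent_(std::clamp(percent, 0, 100)) {}

    int percent() const { return percent_; }

private:
    int percent_;
};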
If you throw in the constructor, the stack trace is more likely to show where the wrong values are coming from.
I agree with sharptooth in the general case, but there are sometimes objects that can have states in which some functions are valid and some are not. In these situations it's better to defer checks to the functions with these particular dependencies.
Usually you'll want to validate in a constructor, if only because that means your objects will always be in a valid state. But some kinds of objects need function-specific checks, and that's OK too.
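A hypothetical C++ sketch of that situation: the object can legitimately exist before it is fully usable, so the check lives in the method that actually depends on the state rather than in the constructor.

#include <stdexcept>
#include <string>
#include <utility>

class Connection {
public:
    explicit Connection(std::string host) : host_(std::move(host)) {}

    void open() { connected_ = true; }        // details omitted

    void send(const std::string& msg) {
        if (!connected_)                      // check deferred to the function that needs it
            throw std::logic_error("send() called before open()");
        (void)msg;                            // placeholder for the real transmission
    }

private:
    std::string host_;
    bool connected_ = false;
};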
This is a difficult question. I have changed my habits around it several times over the last few years, so here are some thoughts that come to mind:
Checking the arguments in the constructor is like checking the arguments for a traditional method... so I would not differentiate here...
Checking the arguments of a method of course does help to ensure the parameters passed are correct, but it also introduces a lot more code you have to write, which does not always pay off in my opinion... Especially when you write short methods that delegate quite frequently to other methods, I guess you would end up with 50% of your code just doing parameter checks...
Furthermore, consider that very often when you write a unit test for your class, you don't want to have to pass in valid parameters for all methods... Quite often it makes sense to pass only those parameters important for the current test case, so having checks would slow you down when writing unit tests...
I think this really depends on the situation... What I ended up doing is writing parameter checks only when I know the parameters come from an external system (like user input, databases, web services, or something like that...). If the data comes from my own system, I don't write checks most of the time.
I always try to fail as early as possible, so I definitely check the parameters in the constructor.
In theory, the calling code should always ensure that preconditions are met, before calling a function. The same goes for constructors.
In practice, programmers are lazy, and to be sure it's best to still check preconditions. Asserts come in handy there.
Example. Excuse my non-curly brace syntax:
// precondition: b <> 0
function divide(a,b:double):double;
begin
  assert(b<>0); // in case of a programming error
  result := a / b;
end;
// calling code should be:
if foo <> 0 then
  bar := divide(1, foo)
else
  // do whatever you need to do when foo equals 0
Alternatively, you can always change the preconditions. In the case of a constructor this is not really handy.
// no preconditions... still need to check the result
function divide(a,b:double; out hasResult:boolean):double;
begin
  hasResult := b <> 0;
  if not hasResult then
    Exit;
  result := a / b;
end;
// calling code:
bar := divide(1, 0, hasResult);
if not hasResult then
  // do whatever you need to do when the division failed
Always fail as soon as possible. A good example of this practice is exhibited by the runtime:
* if you try to use an invalid index for an array, you get an exception immediately;
* a cast fails immediately when attempted, not at some later stage.
Imagine what a disaster the code would be if a bad cast remained in a reference and every time you tried to use it you got an exception. The only sensible approach is to fail as soon as possible and avoid bad/invalid state as early as you can.
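A small C++ illustration of that fail-fast behaviour (the types are invented): a bounds-checked access and a bad cast both fail at the point of the mistake instead of leaving a broken value to blow up later.

#include <iostream>
#include <stdexcept>
#include <typeinfo>
#include <vector>

struct Base { virtual ~Base() = default; };
struct Derived : Base {};
struct Other : Base {};

int main() {
    std::vector<int> v{1, 2, 3};
    try {
        std::cout << v.at(10) << '\n';             // invalid index: throws immediately
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }

    Other o;
    Base& b = o;
    try {
        Derived& d = dynamic_cast<Derived&>(b);    // bad cast: fails right here...
        (void)d;
    } catch (const std::bad_cast&) {
        std::cout << "caught bad_cast\n";          // ...instead of leaving a bad reference around
    }
}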