What is the scope argument to NPN_Evaluate for, really? In this question, it's explained that you cannot limit the scope of eval(). I thought NPN_Evaluate() was equivalent to eval()?
I am calling NPN_Evaluate with different NPObjects as the scope, and I don't see any difference. My script is alert(this.name), and I was expecting this to be the object I pass into NPN_Evaluate as the scope. Instead, this is actually window, regardless of what I pass in. Examples of using NPN_Evaluate on the net always show people using the window object...
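For concreteness, here is roughly what my call looks like (a hand-written sketch; the NPAPI headers and the RunScript helper are just for illustration):

#include <cstring>
#include <npapi.h>
#include <npruntime.h>

// scopeObject is whatever NPObject I pass as the scope argument.
void RunScript(NPP instance, NPObject* scopeObject)
{
    const char* src = "alert(this.name);";
    NPString script;
    script.UTF8Characters = src;
    script.UTF8Length = (uint32_t)std::strlen(src);

    NPVariant result;
    // Despite passing scopeObject here, "this" inside the script
    // still resolves to window in my tests.
    if (NPN_Evaluate(instance, scopeObject, &script, &result)) {
        NPN_ReleaseVariantValue(&result);
    }
}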
I'm a little unsure of what's going on, so I am not ruling out a mistake in my code just yet; however, from what I can see, the above seems to be the case. Any information on the intended use of the scope argument would be welcome.
Related
There is an answered question that will help explain exactly what I mean:
How does the function passed to http.HandleFunc get access to http.ResponseWriter and http.Request?
There are many functions in Go's standard library where the function parameters get assigned this way. I want to use that coding style in my day-to-day code.
I want to write a similar function/method that gets its parameter values from somewhere else, just like http.HandleFunc's w and r:
func (s SchoolStruct) GetSchoolDetails(name string) {
    // here the parameter "name" should get assigned exactly like http.HandleFunc()'s "w" and "r"
}
What net/http does is register a callback and use it when the time comes. You don't have to pass the arguments it takes, because the server's implementation provides these arguments in the correct state. If you want to copy this approach, you first have to ask:
Is there some kind of generic abstraction that computes these parameters? Is the function I write just reacting to something? Does this function have any side effects? Does it return a value back to the system?
This approach is very good when you are modifying an existing system, extending its behavior with independent units; so to speak, integrating into a robust API.
You may be correct that this is a style of doing things, but you cannot use this style for everything. It's just too specific, and good at a certain group of tasks.
As #mkopriva pointed out, declaring rules and requirements that your logic should satisfy is the known way to implement this style in Go. You have to realize that your logic, encapsulated behind a function value or interface, has to be passed to and controlled by some other code that you call indirectly.
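To sketch the idea, here is a minimal, hypothetical SchoolRegistry that mimics how net/http drives the callbacks you give to HandleFunc (all names are made up for illustration):

package main

import "fmt"

// SchoolRegistry stores callbacks and later invokes them with values
// it computes itself, just like the http server does with handlers.
type SchoolRegistry struct {
    handlers []func(name string)
}

func (r *SchoolRegistry) HandleSchool(fn func(name string)) {
    r.handlers = append(r.handlers, fn)
}

// Run plays the role of the server: when "the time comes", it supplies
// the argument, so the registering code never passes it explicitly.
func (r *SchoolRegistry) Run() {
    for _, fn := range r.handlers {
        fn("Springfield Elementary") // value owned and computed by the framework
    }
}

func main() {
    var reg SchoolRegistry
    reg.HandleSchool(func(name string) {
        fmt.Println("details for:", name)
    })
    reg.Run()
}

Your GetSchoolDetails logic would be registered the same way, and the registry, not you, decides when to call it and which name to pass.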
I cannot imagine going to such lengths when all components of the system are under your control and the system has only one piece of logic to run.
Is using global constants within (member) functions considered bad practice? I personally don't feel comfortable doing this, though I cannot explain why; it just does not look right to have that global constant inside the body of the function. Is it better to introduce a parameter in place of the constant instead (and default its value to that global constant)? That is what I tend to do, and it does look better to me ("looks better" is all I can say, though). After reading some posts below, I realize it would also be nice to be able to override these constants for whatever reason (like testing), and we all know that isn't really feasible if the constant appears in many places in the function body. How do others feel about this? I use C++, but I figure this question is language-independent.
But for some reason, I feel fine using global variables in a function, though I sometimes wonder if I should also introduce a parameter for this and default it to that global variable. This inconsistency of mine makes me wonder about the whole thing.
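To make what I mean concrete, a minimal sketch of the parameter-with-default approach (the names are hypothetical):

#include <cmath>

// A global constant used as the default.
const double kDefaultTolerance = 1e-6;

// The tolerance can be overridden (e.g. in a test), but callers that
// don't care simply rely on the global default.
bool nearlyEqual(double a, double b, double tolerance = kDefaultTolerance) {
    return std::fabs(a - b) < tolerance;
}

A test can then call nearlyEqual(a, b, someOtherTolerance) without touching the global at all.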
I'm currently trying out a few of the new C++0x features, namely std::function and std::bind. These two seem rather suitable for an event/delegate system for C++ that works like the one in C#. I've tried to create something like delegates before, but the hacks I would have needed for member function pointers were too much for me…
During my tests I noticed that std::bind copies every object you bind. While that surely enhances safety - you can't delete a still-registered event handler :) - it's also a problem with stateful objects. Is there a way to deactivate the copying - or at least a way to obtain the encapsulated object from the std::function again?
PS: Is there a reference for the features that are going to be included in C++0x (hopefully C++11!)? In the end it's largely TR1 plus a few additions…
I tried cppreference.org, but they are still at an early stage of documentation, and cplusplus.com, on the other hand, seems not to have even started covering C++0x.
If you want to avoid copying, use std::ref and/or std::cref. They wrap the object in a pseudo-reference.
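For illustration, a minimal sketch with a hypothetical stateful functor, showing the difference between binding a copy and binding through std::ref:

#include <functional>
#include <iostream>

// Hypothetical stateful handler.
struct Counter {
    int hits = 0;
    void operator()() { ++hits; }
};

int main() {
    Counter c;

    auto copied = std::bind(c);            // the binder stores its own copy of c
    auto viaRef = std::bind(std::ref(c));  // the binder stores only a reference wrapper

    copied();
    viaRef();

    std::cout << c.hits << '\n';  // prints 1: only the std::ref call touched the original
}

Only the std::ref version sees (and mutates) the original object's state.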
It isn't quite right that:
I noticed that std::bind copies every object you bind.
At least that isn't the intended specification. You should be able to move a non-copyable object into a bind:
std::bind(f, std::unique_ptr<int>(new int(3)))
However, now that the move-only object is stored in the binder, it is an lvalue. Therefore you can only call it if f accepts an lvalue move-only object (say by lvalue reference). If this is not acceptable, and if the source object outlives the binder, then use of std::ref is another good solution (as mentioned by Armen).
If you need to copy the bound object, then all of its bound arguments must be copyable. But if you only move construct the bound object, then it will only move construct its bound arguments.
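A minimal sketch of that, assuming f is free to take the pointer by lvalue reference:

#include <functional>
#include <memory>
#include <iostream>

// f must take the unique_ptr by lvalue reference, because the binder
// stores the pointer and hands it to f as an lvalue on every call.
void f(std::unique_ptr<int>& p) {
    std::cout << *p << '\n';
}

int main() {
    auto bound = std::bind(f, std::unique_ptr<int>(new int(3)));  // moves the pointer in
    bound();  // OK: the stored unique_ptr is passed to f as an lvalue

    // auto copy = bound;          // would not compile: the bound argument is move-only
    auto moved = std::move(bound); // moving the binder move-constructs its bound arguments
    moved();
}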
The best reference is N3242. There isn't a good and comprehensive tutorial that I'm aware of yet. I might start with the boost documentation with the understanding that std::bind has been adapted to work with rvalue-refs as much as possible.
I have created a move-compatible version of bind. There are still lots of problems with it, like the binder's constructor and a few buggy lines here and there, but it seems to work.
Check it out here:
http://code-slim-jim.blogspot.jp/2012/11/perfect-forwarding-bind-compatable-with.html
OK, I keep hitting a stumbling block no matter what language I am using: I am trying to understand when I need to pass arguments to a function and when I don't. Can someone give me some direction on where to find guidance on this?
I would rather say that if your function needs data, you MUST pass parameters, because the alternative is to put the data in a global store and let the function access it from there. DO NOT DO THAT, as it will make your code nearly impossible to maintain as it grows more complex.
Does the function need external data to perform its job? If so, then you need to pass arguments.
If the function doesn't need external data to perform its job, you don't need to worry about passing arguments.
That handles creating your own functions. If you're simply trying to call somebody else's function, you need to pass arguments for each required function parameter.
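For example (hypothetical functions; any language works the same way, C++ here):

#include <iostream>
#include <string>

// Needs external data (the name), so it takes a parameter.
void greet(const std::string& name) {
    std::cout << "Hello, " << name << '\n';
}

// Needs no external data, so it takes no parameters.
void printBanner() {
    std::cout << "=== Welcome ===\n";
}

int main() {
    printBanner();
    greet("Ada");  // the caller supplies the data the function needs
}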
Well...if a function takes parameters, then you have to pass arguments to it. If it takes no parameters, then you don't. (If you happen to be working in a language in which functions have optional parameters, you only have to pass an argument if you want something other than the default value.)
Well, that pretty much depends on what you are trying to accomplish. If your function needs some values to modify or use, you will probably need to pass arguments. Why don't you try working through some examples from a book? Most of them are pretty relevant.
You should not think about what you "need" to pass to a function; think about what you are writing that function for, and then you will see whether you need arguments or not.
Are you talking about an existing function or writing your own?
If it is an existing one, you have no choice: in order for it to work, you need to pass it whatever it wants. To figure out what it wants, read the manual, read the function's code, or harass the author of the function.
If you are talking about designing your own - it is a much bigger discussion which goes way beyond a single function. You need to understand what the function (and any other components) have to do to accomplish the ultimate goal, how they interact with each other, etc.
I have a rather big class library that contains a lot of code.
I am looking at how to optimize the performance of some of the code, and for some rather simple utility methods I've found that the parameter validation occupies a rather large portion of the runtime for some core methods.
Let me give a typical example:
A.MethodA1 runs a loop, iterating over a collection, calling B.MethodB1 for each element
B.MethodB1 processes the element and returns the result; it's a rather basic calculation, but since it is used in many places, it has been put into its own method instead of being copied and pasted wherever needed
A.MethodA1 calls C.MethodC1 with the results of B.MethodB1, and puts the result into a list that is returned at the end of the loop
In the case I've found now, B.MethodB1 does rudimentary parameter validation. Since the method calls other internal methods, I'd like to avoid NullReferenceExceptions several layers deep in the code and would rather fail early; hence B.MethodB1 validates the parameters, checking for null and doing some basic range checks on another parameter.
However, in this particular call scenario, it is impossible (due to other program logic) for these parameters to ever have the wrong values. If they did, then from the program's standpoint B.MethodB1 would never be called at all for those values; A.MethodA1 would fail before the call to B.MethodB1.
So I was considering removing the parameter validation in B.MethodB1, since it occupies roughly 65% of the method runtime (and this is part of some heavily used code.)
However, B.MethodB1 is a public method, and can thus be called from the program, in which case I want the parameter validation.
So how would you solve this dilemma?
Keep the parameter validation, and take the performance hit
Remove the parameter validation, and have potentially fail-late problems in the method
Split the method into two: one internal method without parameter validation, called by the "safe" path, and one public method that does the parameter validation and then calls the internal version.
The latter would give me the benefit of having no parameter validation on the hot path, while still exposing a public entry point that does validate parameters, but for some reason it doesn't sit right with me.
Opinions?
I would go with option 3. I tend to use assertions for private and internal methods and do all the validation in public methods.
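A sketch of option 3 in C# (the internal method name, the validation rules, and the calculation are made up for illustration):

using System;
using System.Diagnostics;

public class B
{
    // Public entry point: full parameter validation for external callers.
    public int MethodB1(string element, int factor)
    {
        if (element == null) throw new ArgumentNullException("element");
        if (factor < 1 || factor > 100) throw new ArgumentOutOfRangeException("factor");
        return MethodB1Core(element, factor);
    }

    // Internal fast path: trusted callers inside the assembly (e.g. A.MethodA1's loop).
    // Assertions only fire in debug builds and cost nothing in release.
    internal int MethodB1Core(string element, int factor)
    {
        Debug.Assert(element != null);
        Debug.Assert(factor >= 1 && factor <= 100);
        return element.Length * factor;
    }
}

External callers go through MethodB1 and get the full checks, while the hot loop calls MethodB1Core directly.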
By the way, is the performance hit really that big?
That's an interesting question.
Hmmm, makes me think... "code contracts". It would seem technically possible to have certain code contracts proven to be fulfilled statically (at compile time). If this were the case, and you had such a compile-time validation option, you could state these contracts without ever having to validate the conditions at runtime.
It would require that the client code itself be validated against the code contracts.
And, of course it would inevitably be highly dependent on the type of conditions you'd want to write, and it would probably only be feasible to prove these contracts to a certain point (how far up the possible call graph would you go?). Beyond this point the validator might have to beg off, and insist that you place a runtime check (or maybe a validation warning suppression?).
All just idle speculation. Does make me wonder a bit more about C# 4.0 code contracts. I wonder if these have support for static analysis. Have you checked them out? I've been meaning to, but learning F# is having to take priority at the moment!
Update:
Having read up a little on it, it appears that C# 4.0 does indeed have a 'static checker' as well as a binary rewriter (which takes care of altering the output binary so that pre- and post-condition checks end up in the appropriate locations).
What's not clear from my extremely quick read is whether you can opt out of the binary rewriting. What I'm thinking is that you'd really be looking to use the code contracts, keep the metadata (or code) for the contracts within the various assemblies, but use only the static checker for at least a selected subset of contracts, so that in theory you get proven safety without any runtime hit.
Here's a link to an article on the code contracts
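For flavor, roughly what a contract-based precondition looks like (a hypothetical method mirroring B.MethodB1, using System.Diagnostics.Contracts):

using System.Diagnostics.Contracts;

public static class Validator
{
    // Preconditions expressed as code contracts rather than explicit throws.
    // The static checker tries to prove them at each call site; the binary
    // rewriter controls whether runtime checks are emitted at all.
    public static int Process(string element, int factor)
    {
        Contract.Requires(element != null);
        Contract.Requires(factor >= 1 && factor <= 100);
        return element.Length * factor;
    }
}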