I always get confused between the static and dynamic scoping and hence need someone to examine my evaluation. Following is the example code:
int x = 1;
procedure P(i) {
    int x = 1;
    i++;
    Q(i);
}
procedure Q(j) {
    j = j + x;
}
P(x)
print x
In static scoping, we always look at where a function/procedure is declared to determine its scope. Hence:
1. main is the outermost scope of the program.
2. Procedures P and Q are declared within main, so they refer to main's variables for anything not defined in their local scope, irrespective of how the procedures are called.
3. In the example, procedure P defines its own variable x, which shadows main's x.
4. Procedure Q does not define a variable x, so it refers to main's x.
The output is
1. For static scoping and pass by value => 1
2. For dynamic scoping and pass by value => 2
3. For static scoping and pass by reference => 4
4. For dynamic scoping and pass by reference => 3
Please let me know if I have gone wrong somewhere. Also, it would be great if anyone could point me to a useful link with static and dynamic scoping examples such as the one above.
Thanks,
darkie
There's a number of articles out there. Google is your friend :-)
[Edit] After reading through some of those links I think the following is true:
For static scoping and pass by value=> 1
For dynamic scoping and pass by value=> 1
For static scoping and pass by reference=> 4
For dynamic scoping and pass by reference=> 3
Point 2 should return 1 because you're passing by value so the x you're passing in never gets modified.
Tim Hoolihan has an example which is easier to follow.
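To make the four cases concrete, here is a small Python sketch (my own illustration, not from either answer) that models each variable as a one-element list acting as a mutable cell, so pass-by-reference can alias the caller's cell, and threads the lookup environment for Q's free x explicitly:

```python
def run(scoping, by_ref):
    glob = {'x': [1]}                      # int x = 1;

    def Q(arg, env):
        j = arg if by_ref else [arg[0]]    # copy the value unless by-ref
        j[0] = j[0] + env['x'][0]          # j = j + x;  x is found via `env`

    def P(arg):
        local = {'x': [1]}                 # P's local int x = 1 (shadows main's)
        i = arg if by_ref else [arg[0]]
        i[0] += 1                          # i++;
        # Static scoping: Q's free x is main's x (Q is declared at the top level).
        # Dynamic scoping: Q's free x is the most recently created x on the
        # call stack, i.e. P's local x.
        Q(i, local if scoping == 'dynamic' else glob)

    P(glob['x'])                           # P(x)
    return glob['x'][0]                    # print x
```

Running all four combinations reproduces the corrected outputs: `run('static', False)` and `run('dynamic', False)` both give 1, `run('static', True)` gives 4, and `run('dynamic', True)` gives 3.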
While studying functional programming, the concept of partially applied functions comes up a lot. In Haskell, something like the built-in function take is considered to be partially applied.
I am still unclear as to what it exactly means for a function to be partially applied or the use/implication of it.
The classic example is
add :: Int -> Int -> Int
add x y = x + y
The add function takes two arguments and adds them. We can now implement
increment = add 1
by partially applying add; increment now waits for the remaining argument.
A function by itself can't be "partially applied" or not. It's a meaningless concept.
When you say that a function is "partially applied", you refer to how the function is called (aka "applied"). If the function is called with all its parameters, then it is said to be "fully applied". If some of the parameters are missing, then the function is said to be "partially applied".
For example:
-- The function `take` is fully applied here:
oneTwoThree = take 3 [1,2,3,4,5]
-- The function `take` is partially applied here:
take10 = take 10 -- see? it's missing one last argument
-- The function `take10` is fully applied here:
oneToTen = take10 [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,42]
The result of a partial function application is another function - a function that still "expects" to get its missing arguments - like the take10 in the example above, which still "expects" to receive the list.
Of course, it all gets a bit more complicated once you get into the higher-order functions - i.e. functions that take other functions as arguments or return other functions as result. Consider this function:
mkTake n = take (n+5)
The function mkTake has only one parameter, but it returns another function as result. Now, consider this:
x = mkTake 10
y = mkTake 10 [1,2,3]
On the first line, the function mkTake is, obviously, "fully applied", because it is given one argument, which is exactly how many arguments it expects. But the second line is also valid: since mkTake 10 returns a function, I can then call this function with another parameter. So what does that make mkTake? "Overly applied", I guess?
Then consider the fact that (barring compiler optimizations) all functions are, mathematically speaking, functions of exactly one argument. How could this be? When you declare a function take n l = ..., what you're "conceptually" saying is take = \n -> \l -> ... - that is, take is a function that takes argument n and returns another function that takes argument l and returns some result.
So the bottom line is that the concept of "partial application" isn't really that strictly defined, it's just a handy shorthand to refer to functions that "ought to" (as in common sense) take N arguments, but are instead given M < N arguments.
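The same "handy shorthand" view exists outside Haskell. As a comparison (my own sketch, not part of the original answer), here is partial application and explicit currying in Python using the standard-library functools.partial:

```python
from functools import partial

def add(x, y):
    return x + y

# "Partial application": supply some arguments now and get back a
# function waiting for the rest.
increment = partial(add, 1)
print(increment(5))        # 6

# Currying: rewrite add to take one argument at a time, as every
# Haskell function conceptually does (add = \x -> \y -> x + y).
def add_curried(x):
    return lambda y: x + y

print(add_curried(1)(5))   # 6
```

Python doesn't curry automatically the way Haskell does, which is why the two mechanisms have to be spelled out separately here.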
Strictly speaking, partial application refers to a situation where you supply fewer arguments than expected to a function, and get back a new function created on the fly that expects the remaining arguments.
Also strictly speaking, this does not apply in Haskell, because every function takes exactly one argument. There is no partial application, because you either apply the function to its argument or you don't.
However, Haskell provides syntactic sugar for defining functions that includes emulating multi-argument functions. In Haskell, then, "partial application" refers to supplying fewer than the full number of arguments needed to obtain a value that cannot be further applied to another argument. Using everyone's favorite add example,
add :: Int -> Int -> Int
add x y = x + y
the type indicates that add takes one argument of type Int and returns a function of type Int -> Int. The -> type constructor is right-associative, so it helps to explicitly parenthesize it to emphasize the one-argument nature of a Haskell function: Int -> (Int -> Int).
When calling such a "multi-argument" function, we take advantage of the fact the function application is left-associative, so we can write add 3 5 instead of (add 3) 5. The call add 3 is thought of as partial application, because we could further apply the result to another argument.
I mentioned the syntactic sugar Haskell provides to ease defining complex higher-order functions. There is one fundamental way to define a function: using a lambda expression. For example, to define a function that adds 3 to its argument, we write
\x -> x + 3
For ease of reference, we can assign a name to this function:
add = \x -> x + 3
To further ease the definition, Haskell lets us write in its place
add x = x + 3
hiding the explicit lambda expression.
For higher-order, "multiargument" functions, we would write a function to add two values as
\x -> \y -> x + y
To hide the currying, we can write instead
\x y -> x + y
Combined with the syntactic sugar of replacing a lambda expression with a parameterized name, all of the following are equivalent definitions for the explicitly typed function add :: Num a => a -> a -> a:
add = \x -> \y -> x + y -- or \x y -> x + y as noted above
add x = \y -> x + y
add x y = x + y
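For comparison, the same three spellings can be approximated in Python (a rough analogue of my own, since Python doesn't auto-curry, so the last form is genuinely two-argument rather than curried):

```python
# Three ways to write the same adder, mirroring the Haskell forms above:
add_a = lambda x: (lambda y: x + y)    # add = \x -> \y -> x + y

def add_b(x):                          # add x = \y -> x + y
    return lambda y: x + y

def add_c(x, y):                       # add x y = x + y (uncurried in Python)
    return x + y

print(add_a(2)(3), add_b(2)(3), add_c(2, 3))   # all print 5
```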
This is best understood by example. Here's the type of the addition operator:
(+) :: Num a => a -> a -> a
What does it mean? As you know, it takes two numbers and outputs another one. Right? Well, sort of.
You see, a -> a -> a actually means this: a -> (a -> a). Whoa, looks weird! This means that (+) is a function that takes one argument and outputs a function(!!) that also takes one argument and outputs a value.
What this means is you can supply that one value to (+) to get another, partially applied, function:
(+ 5) :: Num a => a -> a
This function takes one argument and adds five to it. Now, call that new function with some argument:
Prelude> (+5) 3
8
Here you go! Partial function application at work!
Note the type of (+ 5) 3:
(+5) 3 :: Num a => a
Look what we end up with:
(+) :: Num a => a -> a -> a
(+ 5) :: Num a => a -> a
(+5) 3 :: Num a => a
Do you see how the number of as in the type signature decreases by one every time you add a new argument?
Function with one argument that outputs a function with one argument that in turn, outputs a value
Function with one argument that outputs a value
The value itself
"...something like the built-in function take is considered to be partially applied" - I don't quite understand what this means. The function take itself is not partially applied. A function is partially applied when it is the result of supplying between 1 and (N - 1) arguments to a function that takes N arguments. So, take 5 is a partially applied function, but take and take 5 [1..10] are not.
I have this code:
let sumfunc (n: int byref) =
    let mutable s = 0
    while n >= 1 do
        s <- n + (n-1)
        n <- n-1
        printfn "%i" s
sumfunc 6
I get the error:
(8,10): error FS0001: This expression was expected to have type
'byref<int>'
but here has type
'int'
So from that I can tell what the problem is, but I just don't know how to solve it. I guess I need to specify the number 6 to be a byref<int> somehow; I just don't know how. My main goal here is to make n, the function argument, mutable so I can change and use its value inside the function.
Good for you for being upfront about this being a school assignment, and for doing the work yourself instead of just asking a question that boils down to "Please do my homework for me". Because you were honest about it, I'm going to give you a more detailed answer than I would have otherwise.
First, that seems to be a very strange assignment. Using a while loop and just a single local variable is leading you down the path of re-using the n parameter, which is a very bad idea. As a general rule, a function should never modify values outside of itself — and that's what you're trying to do by using a byref parameter. Once you're experienced enough to know why byref is a bad idea most of the time, you're experienced enough to know why it might — MIGHT — be necessary some of the time. But let me show you why it's a bad idea, by using the code that s952163 wrote:
let sumfunc2 (n: int byref) =
    let mutable s = 0
    while n >= 1 do
        s <- n + (n - 1)
        n <- n-1
        printfn "%i" s

let t = ref 7
printfn "The value of t is %d" t.contents
sumfunc2 t
printfn "The value of t is %d" t.contents
This outputs:
The value of t is 7
13
11
9
7
5
3
1
The value of t is 0
Were you expecting that? Were you expecting the value of t to change just because you passed it to a function? You shouldn't. You really, REALLY shouldn't. Functions should, as far as possible, be "pure" -- a "pure" function, in programming terminology, is one that doesn't modify anything outside itself -- and therefore, if you run it twice with the same input, it should produce the same output every time.
I'll give you a way to solve this soon, but I'm going to post what I've written so far right now so that you see it.
UPDATE: Now, here's a better way to solve it. First, has your teacher covered recursion yet? If he hasn't, then here's a brief summary: functions can call themselves, and that's a very useful technique for solving all sorts of problems. If you're writing a recursive function, you need to add the rec keyword immediately after let, like so:
let rec sumExampleFromStackOverflow n =
    if n <= 0 then
        0
    else
        n + sumExampleFromStackOverflow (n-1)

let t = 7
printfn "The value of t is %d" t
printfn "The sum of 1 through t is %d" (sumExampleFromStackOverflow t)
printfn "The value of t is %d" t
Note how I didn't need to make t mutable this time. In fact, I could have just called sumExampleFromStackOverflow 7 and it would have worked.
Now, this doesn't use a while loop, so it might not be what your teacher is looking for. And I see that s952163 has just updated his answer with a different solution. But you should really get used to the idea of recursion as soon as you can, because breaking the problem down into individual steps using recursion is a really powerful technique for solving a lot of problems in F#. So even though this isn't the answer you're looking for right now, it is the answer you're going to be looking for soon.
P.S. If you use any of the help you've gotten here, tell your teacher that you've done so, and give him the URL of this question (http://stackoverflow.com/questions/39698430/f-how-to-call-a-function-with-argument-byref-int) so he can read what you asked and what other people told you. If he's a good teacher, he won't lower your grade for doing that; in fact, he might raise it for being honest and upfront about how you solved the problem. But if you got help with your homework and you don't tell your teacher, 1) that's dishonest, and 2) you'll only hurt yourself in the long run, because he'll think you understand a concept that you maybe haven't understood yet.
UPDATE 2: s952163 suggests that I show you how to use the fold and scan functions, and I thought "Why not?" Keep in mind that these are advanced techniques, so you probably won't get assignments where you need to use fold for a while. But fold is basically a way to take any list and do a calculation that turns the list into a single value, in a generic way. With fold, you specify three things: the list you want to work with, the starting value for your calculation, and a function of two parameters that will do one step of the calculation. For example, if you're trying to add up all the numbers from 1 to n, your "one step" function would be let add a b = a + b. (There's an even more advanced feature of F# that I'm skipping in this explanation, because you should learn just one thing at a time. By skipping it, it keeps the add function simple and easy to understand.)
The way you would use fold looks like this:
let sumWithFold n =
    let upToN = [1..n] // This is the list [1; 2; 3; ...; n]
    let add a b = a + b
    List.fold add 0 upToN
Note that I wrote List.fold. If upToN was an array, then I would have written Array.fold instead. The arguments to fold, whether it's List.fold or Array.fold, are, in order:
The function to do one step of your calculation
The initial value for your calculation
The list (if using List.fold) or array (if using Array.fold) that you want to do the calculation with.
Let me step you through what List.fold does. We'll pretend you've called your function with 4 as the value of n.
First step: the list is [1;2;3;4], and an internal valueSoFar variable inside List.fold is set to the initial value, which in our case is 0.
Next: the calculation function (in our case, add) is called with valueSoFar as the first parameter, and the first item of the list as the second parameter. So we call add 0 1 and get the result 1. The internal valueSoFar variable is updated to 1, and the rest of the list is [2;3;4]. Since that is not yet empty, List.fold will continue to run.
Next: the calculation function (add) is called with valueSoFar as the first parameter, and the first item of the remainder of the list as the second parameter. So we call add 1 2 and get the result 3. The internal valueSoFar variable is updated to 3, and the rest of the list is [3;4]. Since that is not yet empty, List.fold will continue to run.
Next: the calculation function (add) is called with valueSoFar as the first parameter, and the first item of the remainder of the list as the second parameter. So we call add 3 3 and get the result 6. The internal valueSoFar variable is updated to 6, and the rest of the list is [4] (that's a list with one item, the number 4). Since that is not yet empty, List.fold will continue to run.
Next: the calculation function (add) is called with valueSoFar as the first parameter, and the first item of the remainder of the list as the second parameter. So we call add 6 4 and get the result 10. The internal valueSoFar variable is updated to 10, and the rest of the list is [] (that's an empty list). Since the remainder of the list is now empty, List.fold will stop, and return the current value of valueSoFar as its final result.
So calling List.fold add 0 [1;2;3;4] will essentially return 0+1+2+3+4, or 10.
Now we'll talk about scan. The scan function is just like the fold function, except that instead of returning just the final value, it returns a list of the values produced at all the steps (including the initial value). (Or if you called Array.scan, it returns an array of the values produced at all the steps). In other words, if you call List.scan add 0 [1;2;3;4], it goes through the same steps as List.fold add 0 [1;2;3;4], but it builds up a result list as it does each step of the calculation, and returns [0;1;3;6;10]. (The initial value is the first item of the list, then each step of the calculation).
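The fold/scan pair described above has direct counterparts in Python's standard library, functools.reduce and itertools.accumulate, if you want to experiment with the idea outside F# (a side-by-side sketch, using the same one-step add function):

```python
from functools import reduce
from itertools import accumulate

def add(a, b):
    return a + b

# List.fold add 0 [1;2;3;4] -> a single final value
print(reduce(add, [1, 2, 3, 4], 0))                      # 10

# List.scan add 0 [1;2;3;4] -> every intermediate value, initial included
print(list(accumulate([1, 2, 3, 4], add, initial=0)))    # [0, 1, 3, 6, 10]
```

Note that accumulate's initial keyword requires Python 3.8 or later.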
As I said, these are advanced functions, that your teacher won't be covering just yet. But I figured I'd whet your appetite for what F# can do. By using List.fold, you don't have to write a while loop, or a for loop, or even use recursion: all that is done for you! All you have to do is write a function that does one step of a calculation, and F# will do all the rest.
This is such a bad idea:
let mutable n = 7

let sumfunc2 (n: int byref) =
    let mutable s = 0
    while n >= 1 do
        s <- n + (n - 1)
        n <- n-1
        printfn "%i" s

sumfunc2 (&n)
Totally agree with munn's comments, here's another way to implode:
let sumfunc3 (n: int) =
    let mutable s = n
    while s >= 1 do
        let n = s + (s - 1)
        s <- (s-1)
        printfn "%i" n
sumfunc3 7
Let's say I have an array A = [1 2 3 4 5 6 7 8 9 10]. I want to iterate through it and do something with each number.
The slicing syntax is A(start:step:end); since I want to iterate with step 1, I use A(1:10).
Question here is, how can I use that iteration? In C++ you would do
for (int i = 0; i < 10; i++)
{
    //DO SOMETHING
}
I've spent 4 hours searching for how to use that iteration. I have not found a single explanation of such a trivial thing: passing a block of code to actually do something with the numbers. I don't even know how to use the current index (i.e., i in C++).
I have my function in Octave, f = @(variable) (...); however, when I call f(A(1:10)) it is not really passing each number to the function, but rather finishes the iteration and then executes the function once.
I'd expect something like
A(1:10) (DO SOMETHING WITH EACH NUMBER)
or in my example
A(1:10) ( f(INDEX) )
but that does not seem to work either.
I know Octave has a built-in for loop but in my case it is too slow.
That was simplified explanation, here is more advanced.
I want to multiply matrix A in such a way that one copy starts iterating at 1 and the other at 2 (e.g., A(1:end-1).*A(2:end)), and use each multiplied number in my custom function.
An analogue of C++ loop
for (int i = 0; i < 10; i++)
{
    //DO SOMETHING
}
would be
for i = 1:10
    %DO SOMETHING
end
since Octave indices are 1-based.
There is no array.forEach(do something) construction in Octave.
Most of the time, a speed-up is achieved by passing arrays (vectors and matrices) to a function at once, and structuring the function itself so that it can handle an array. How to do the latter depends on what the function is.
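As a rough illustration of that whole-array style (sketched here in Python rather than Octave, with a made-up element function f standing in for "do something"):

```python
# Hypothetical element function; in Octave this might be f = @(v) v*2 + 1
f = lambda v: v * 2 + 1

A = list(range(1, 11))                # A = [1 2 3 4 5 6 7 8 9 10]

# Octave's A(1:end-1).*A(2:end), element by element:
products = [a * b for a, b in zip(A[:-1], A[1:])]

# Apply f to every product at once; no index bookkeeping needed:
results = [f(p) for p in products]
print(products[:3], results[:3])      # [2, 6, 12] [5, 13, 25]
```

In Octave itself, if f is built from elementwise operations, the whole thing collapses to one line: f(A(1:end-1).*A(2:end)).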
I have a function that calculates an array of numbers (randparam) that I want to input element by element into another function that does a simulation.
For example
function [randparam] = arraycode
    code
    randparam = results of code
    % randparam is now a 1x1001 vector.
end
next I want to input randparam 1 by 1 into my simulation function
function simulation
    x = constant + constant * randparam + constant
    return
end
What makes this difficult for me is the return command in the simulation function: it only calculates one step of the equation x above, returns the result to another function (call it integrator), and then the integrator function calls the simulation function again to calculate x.
so the integrator function might look like
function integrator (x)
    y = simulation(x) * 5
    u = y+10
    yy = simulation(x) * 10 + u
end
As you can see, the integrator function calls the simulation function twice, which creates two problems for me:
1. If I create a for loop in the simulation function where I input element by element using something like:
for i = 1:100
    x = constant + constant * randparam(i) + constant
    return
end
then every time my integrator function calls my simulation function again, my for loop starts at 1 all over again.
2. If I somehow kept i in my base workspace so that the for loop in my simulation function would know to step up from 1, then my y and yy functions would have different x inputs, because as soon as simulation is called the second time for yy, i would already be i+1 thanks to the call for y.
Is there a way to avoid for loops in this scenario? One potential solution to problem number two is to duplicate the script but with a different name, and have my for loop use a different variable, but that seems rather inefficient.
Hope I made this clear.
thanks.
First, if you generically want to apply the same function to each element of an array and there isn't already a built-in vectorized way to do it, you can use arrayfun (although often a simple for loop is faster and more readable):
%# randparam is a 1x1001 vector.
%# next I want to input randparam 1 by 1 into my simulation function
function simulation
    x = constant + constant * randparam + constant
    return
end
(Note: ask yourself what this function can possibly be doing, since it isn't returning a value and MATLAB doesn't pass by reference.) This is what arrayfun is for: applying a function to each element of an array (or vector, in this case). Again, you should make sure in your case that it makes sense to do this, rather than an explicit loop.
function out = simulation(input_val)
    %# your stuff; assign the result to `out`
end

sim_results = arrayfun(@simulation, randparam);
Of course, the way you've written it, the line
x = constant + constant*randparam + constant;
can (and will) be done vectorized - if you give it a vector or matrix, a vector or matrix will be the result.
Second, it seems that you're not clear on the "scope" of function variables in MATLAB. If you call a function, a clean workspace is created, so x from one function isn't automatically available within another function you call. Variables also go out of scope at the end of a function, so using x within a function doesn't change/overwrite a variable x that exists outside that function. And multiple invocations of a function each have their own workspace.
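The workspace point can be demonstrated with a tiny sketch (Python here rather than MATLAB, but MATLAB function workspaces behave the same way):

```python
x = 10

def simulation():
    x = 99      # creates a *local* x in this call's own workspace
    return x

simulation()
print(x)        # 10 -- the outer x is untouched by the call
```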
What's wrong with a loop at the integrator level?
function integrator (x)
    for i=1:length(x)
        y = simulation(x(i)) * 5
        u = y+10
        yy = simulation(x(i)) * 10 + u
    end
end
And pass your entire randparam into integrator? It's not clear from your question whether you want simulation to return the same value when given the same input, whether you want it to step twice with the same input, or whether you want a fresh input on every call. It is also not clear whether simulation keeps any internal state. The way you've written the example, simulation depends only on the input value, not on any previous inputs or outputs, which would make it trivial to vectorize. If we're all missing the boat, please edit your question with a more refined example.
I'm confused about how binding works for statically scoped variables in nested subroutines.
proc A:
    var a, x
    ...
    proc B:
        var x, y
        ...
        proc B2:
            var a, b
            ...
        end B2
    end B
    proc C:
        var x, z, w
        ...
    end C
end A
First, this is what I have understood: if static scoping is considered, then B2 can use the variables x and y present in its parent B. Similarly, C can use the variable a declared in proc A.
Now, my questions are: are these bindings made during the compile-time or run-time? Does it make a difference if the variables are statically scoped or dynamically scoped?
Until it comes naturally, I find it easy to draw environment model diagrams. They are also pretty much essential for exams and those esoteric examples that are intended to be confusing. I suggest the famous SICP (http://mitpress.mit.edu/sicp/), but there are obviously more than enough resources on the internet (a quick google brought me to this: http://www.icsi.berkeley.edu/~gelbart/cs61a/EnvDiagrams.pdf).
It depends on the language/implementation when/how bindings are done, however in your example the bindings can be done at compile time. In general, static scoping, as the name suggests allows for a lot of static/compile-time binding. A compiler can look into a function and see all references and resolve them immediately. For example in B2, a reference to y can be resolved immediately to belong to the enclosing scope, i.e. that of B.
As for dynamic vs. static scoping, there is a huge difference. Dynamic scoping, as the name suggests, is much harder to do compile-time binding with, since the structure of the code does not determine the references to the variables: different paths of execution may yield different bindings. You'll have to be more specific with the question, though.
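Concretely, the lexical resolution in the example can be mimicked with nested functions. Python's closures are statically scoped, so each free variable binds to its nearest enclosing definition, exactly as described above (a sketch of my own, using strings to label where each binding lives):

```python
def A():
    a, x = "A.a", "A.x"
    def B():
        x, y = "B.x", "B.y"
        def B2():
            a, b = "B2.a", "B2.b"
            # x and y are free here; they resolve lexically to B's x and y
            return x, y
        return B2()
    def C():
        x, z, w = "C.x", "C.z", "C.w"
        # a is free here; it resolves lexically to A's a
        return a
    return B(), C()

print(A())    # (('B.x', 'B.y'), 'A.a')
```

The resolution is fixed by the nesting of the text, not by who calls whom, which is why a compiler can do it before the program ever runs.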