For example:
function f(x)
    # do something
    # then assign the name the caller used for 'x' (the outside variable name) to y
    println(y)
end
f(1)
I would like to get:
# something and
1
Then, with
a = 1
f(a)
I would like to get:
# something and
"a"
Is this possible in Julia? If not, how can I get a log of my function's operations?
The most idiomatic way would be to slightly change the interface of f and require a keyword argument:
julia> function f(;kwargs...)
           for (k, v) in kwargs
               println("$k = $v")
           end
       end
f (generic function with 1 method)
julia> f(a = 1)
a = 1
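Note that this prints the keyword name used at the call site, not the name of an outer variable holding the value; a quick check of my own:
julia> b = 5;

julia> f(a = b)
a = 5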
Alternatively (short of inspecting stack traces), you need something macro-based:
julia> struct Quot
           expr
           value
       end
julia> macro quot(e)
           return :($Quot($(QuoteNode(e)), $e))
       end
@quot (macro with 1 method)
julia> function f2(x::Quot)
           println(x)
       end
f2 (generic function with 1 method)
julia> x = 2
2
julia> f2(@quot x)
Quot(:x, 2)
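To tie this back to the original question, here is a small sketch of my own on top of Quot that prints the caller-side name when the argument is a variable, and the value otherwise:
julia> function f3(q::Quot)
           if q.expr isa Symbol
               println(repr(string(q.expr)))  # a variable was passed: print its name
           else
               println(q.value)               # a literal was passed: print its value
           end
       end
f3 (generic function with 1 method)

julia> a = 1;

julia> f3(@quot a)
"a"

julia> f3(@quot 1)
1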
Depending on what you need, a simple macro that logs function calls while still executing them could be:
macro logs(expr)
    @info expr
    expr
end
And this can be used as:
julia> a = π/2;
julia> @logs sin(a)
[ Info: sin(a)
1.0
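A small variation of my own (not part of the original answer) that logs the computed value together with the expression; note the esc so the expression is evaluated in the caller's scope:
macro logval(expr)
    quote
        local result = $(esc(expr))
        @info string($(QuoteNode(expr)), " = ", result)
        result
    end
end
Used the same way:
julia> @logval sin(a)
[ Info: sin(a) = 1.0
1.0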
How can I declare a Julia function that returns a function with a specific signature? For example, say I want to return a function that takes an Int and returns an Int:
function buildfunc()::?????
    mult(x::Int) = x * 2
    return mult
end
What should the question marks be replaced with?
One thing needs to be made clear.
Adding a type declaration to the return value is just a conversion plus an assertion on the returned value, not part of the function's signature. To understand what is going on, look at the lowered code (a pre-compilation stage) of such a function:
julia> f(a::Int)::Int = 2a
f (generic function with 1 method)
julia> @code_lowered f(5)
CodeInfo(
1 ─ %1 = Main.Int
│ %2 = 2 * a
│ %3 = Base.convert(%1, %2)
│ %4 = Core.typeassert(%3, %1)
└── return %4
)
In this case, since the return type is obvious, the assertion is actually removed during compilation (try @code_native f(5) to see for yourself).
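To see that the annotation converts the returned value (and can therefore throw) rather than affect dispatch, here is a quick illustration of my own:
julia> half(a)::Int = a / 2
half (generic function with 1 method)

julia> half(4)   # 4 / 2 == 2.0 converts cleanly to Int
2

julia> half(3)   # 3 / 2 == 1.5 cannot be converted, so the assertion throws
ERROR: InexactError: Int64(1.5)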
If for some reason you want to generate functions, I recommend using the @generated macro. Be warned: metaprogramming is usually overkill for solving any Julia-related problem.
@generated function f2(x)
    if x <: Int
        quote
            2x
        end
    else
        quote
            10x
        end
    end
end
Now we have a function f2 whose source code depends on the parameter type:
julia> f2(3)
6
julia> f2(3.)
30.0
Note that this function generation actually happens at compile time:
julia> @code_lowered f2(2)
CodeInfo(
# REPL[34]:1 within `f2'
┌ # REPL[34]:4 within `macro expansion'
1 ─│ %1 = 2 * x
└──│ return %1
└
)
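For comparison, and in line with the warning above, ordinary multiple dispatch gives the same behaviour without any metaprogramming (my own sketch):
f3(x::Int) = 2x   # chosen when x is an Int
f3(x) = 10x       # fallback for every other type

julia> f3(3)
6

julia> f3(3.0)
30.0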
Hope that clears things up.
You can use the Function type for this purpose. From the Julia documentation:
Function is the abstract type of all functions
function b(c::Int64)::Int64
    return c + 2
end

function a()::Function
    return b
end
Which prints:
julia> println(a()(2));
4
Julia will throw an exception for a Float64 input:
julia> println(a()(2.0));
ERROR: MethodError: no method matching b(::Float64)
Closest candidates are:
  b(::Int64)
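If you want the annotation to be more specific than the abstract Function type, one option (my own sketch, not from the original answer) is to use the function's own concrete type, since every Julia function has one:
function a2()::typeof(b)
    return b
end

julia> println(a2()(2));
4
As with ::Int64 above, this is still just a conversion/assertion on the returned value, not a constraint on the returned function's call signature.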
Why does the order of the method definitions give different results in this case? It doesn't make much sense in my opinion.
julia> f() = 1
f (generic function with 1 method)
julia> f(;arg) = 1
f (generic function with 1 method)
julia> f()
ERROR: UndefKeywordError: keyword argument arg not assigned
Stacktrace:
[1] f() at ./REPL[2]:1
[2] top-level scope at REPL[3]:1
julia> f() = 1
f (generic function with 1 method)
julia> f()
1
julia> f(arg=1)
1
The order of method definition gives different results because of how functions with keyword arguments fit into the mechanics of method dispatch in Julia 1.x.
As pointed out in the comments above, the short answer is: because the second definition completely overwrites the first.
But I think this is not completely exact; let's see.
Case 1: with this order:
julia> f() = 2
f (generic function with 1 method)
julia> f(;arg) = 1
f (generic function with 1 method)
julia> f()
ERROR: UndefKeywordError: keyword argument arg not assigned
The user-defined function f() is overridden.
Case 2: reversing the order, both methods are visible:
julia> f(;arg) = 1
f (generic function with 1 method)
julia> f() = 2
f (generic function with 1 method)
julia> f()
2
julia> f(arg=3)
1
When f(;arg) is lowered, the compiler produces the method f(), without keyword arguments, to handle the case where no keyword arguments are passed.
This produces two different outcomes:
Case 1: the produced method f() overrides the user-defined f().
Case 2: the user-defined f() overrides the produced method f(), but f(;arg) remains visible.
Note that in both cases the final result appears to be a function f with 1 method, but in the second case we effectively have two functions with one method each: one that handles the user-defined f() and one (the hidden keyword sorter) that handles the keyword version f(;arg).
The full details of how keyword argument method definitions are lowered are given
in the docs
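If what you actually want is for both the zero-argument call and the keyword call to work regardless of definition order, a single definition with a keyword default (my own sketch, defined in a fresh session) sidesteps the overwriting issue:
julia> f(; arg = nothing) = arg === nothing ? 2 : 1
f (generic function with 1 method)

julia> f()
2

julia> f(arg = 3)
1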
Now that fast anonymous functions are native to Julia, do I still have to use the decorator, or is this handled automatically? Also, when I pass a function as an argument to another function, can I give it a static type? What can I do to improve the run speed?
FastAnonymous is definitely not necessary anymore. Here's how you can verify this yourself:
julia> @noinline g(f, x) = f(x) # prevent inlining so you know it's general
g (generic function with 1 method)
julia> h1(x) = g(identity, x)
h1 (generic function with 1 method)
julia> h2(x) = g(sin, x)
h2 (generic function with 1 method)
julia> @code_warntype h1(1)
Variables
#self#::Core.Compiler.Const(h1, false)
x::Int64
Body::Int64
1 ─ %1 = Main.g(Main.identity, x)::Int64
└── return %1
julia> @code_warntype h2(1)
Variables
#self#::Core.Compiler.Const(h2, false)
x::Int64
Body::Float64
1 ─ %1 = Main.g(Main.sin, x)::Float64
└── return %1
julia> h3(x) = g(z->"I'm a string", x)
h3 (generic function with 1 method)
julia> @code_warntype h3(1)
Variables
#self#::Core.Compiler.Const(h3, false)
x::Int64
#9::getfield(Main, Symbol("##9#10"))
Body::String
1 ─ (#9 = %new(Main.:(##9#10)))
│ %2 = #9::Core.Compiler.Const(getfield(Main, Symbol("##9#10"))(), false)
│ %3 = Main.g(%2, x)::Core.Compiler.Const("I'm a string", false)
└── return %3
In every case Julia knows the return type, and that requires that it "understand" what your function-argument is doing. Moreover:
julia> m = first(methods(g))
g(f, x) in Main at REPL[1]:1
julia> m.specializations
Core.TypeMapEntry(Core.TypeMapEntry(Core.TypeMapEntry(nothing, Tuple{typeof(g),typeof(identity),Int64}, nothing, svec(), 1, -1, MethodInstance for g(::typeof(identity), ::Int64), true, true, false), Tuple{typeof(g),typeof(sin),Int64}, nothing, svec(), 1, -1, MethodInstance for g(::typeof(sin), ::Int64), true, true, false), Tuple{typeof(g),getfield(Main, Symbol("##9#10")),Int64}, nothing, svec(), 1, -1, MethodInstance for g(::getfield(Main, Symbol("##9#10")), ::Int64), true, true, false)
This is a bit hard to read, but if you look carefully you'll see that g has been compiled for 3 inputs:
Tuple{typeof(identity), Int64}
Tuple{typeof(sin), Int64}
Tuple{getfield(Main, Symbol("##9#10")),Int64}
(The compiled versions also take g itself as an extra argument, for reasons having to do with things like the internal implementation of keyword-argument handling, but let's ignore that for now.) The last one is the generated name for the type implementing the anonymous function. What this shows you is that each function has its own type, which is the reason why passing functions as arguments is fast.
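A quick check of my own that each function really is its own concrete type, with Function only as the abstract supertype:
julia> typeof(sin)
typeof(sin)

julia> typeof(sin) === typeof(cos)
false

julia> supertype(typeof(sin))
Function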
For the gurus, there is one other factor that can come into play: because type inference is subject to the unsolvable halting problem, there are circumstances where inference will decide that this is all getting too complex and abort "early." In such cases (which are relatively rare), it can help to force the compiler to specialize against a particular argument. In our example, that would mean declaring g as
@noinline g(f::F, x) where F = f(x)
rather than
@noinline g(f, x) = f(x)
That ::F is normally unnecessary and appears useless, but you can use it as a compiler-hint to increase the amount of effort used to infer the result. I don't recommend doing that by default (it makes your code a bit harder to read), but if you see weird performance problems it's one thing to try.
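To show the hint in a slightly larger context, here is a sketch of mine (the name apply_n is made up) where the ::F type parameter asks the compiler to specialize on each concrete function type passed in:
function apply_n(f::F, x, n) where {F}
    # apply f to x a total of n times
    for _ in 1:n
        x = f(x)
    end
    return x
end

apply_n(sqrt, 2.0, 3)   # sqrt(sqrt(sqrt(2.0)))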
I am trying to write a simple Fortran code that calculates h = g(f(x)), where x is a vector of length 2.
module m1
  implicit none
contains

  function f(x)
    implicit none
    real :: f(2), x(2)
    f(1) = x(1) - x(2)
    f(2) = exp(x(1)) - x(2)**2
  end function f

  function g(ff)
    implicit none
    real :: g(2), x1(2), ffreslt(2)
    interface
      function ff(x)
        implicit none
        real :: x(2), ff(2)
      end function ff
    end interface
    ffreslt = ff(x1)
    g(1) = 1 - ffreslt(1)
    g(2) = 2*ffreslt(1)**2 - 3*ffreslt(2) + 4.2
  end function g

end module m1

program hgf
  use m1
  implicit none
  real :: x1(2), h(2)
  x1 = (/0.55, 2.47/)
  h = g(f(x1))
  write(*,*) h
end program hgf
But, I am getting this error message:
h = g(f(x1))
1
Error: Actual parameter 'ff' at <1> is not a PROCEDURE
Am I missing something? Thanks.
In the call to g() you are not passing the function f(), but rather the result of calling the function f() with the value of x1.
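A minimal sketch of one way to fix it (my own suggestion, not from the original answer): pass the procedure itself, and pass the evaluation point explicitly as well, since the x1 inside g is otherwise a separate, uninitialized local array:
  function g(ff, x)
    implicit none
    real :: g(2), x(2), ffreslt(2)
    interface
      function ff(x)
        implicit none
        real :: x(2), ff(2)
      end function ff
    end interface
    ffreslt = ff(x)
    g(1) = 1 - ffreslt(1)
    g(2) = 2*ffreslt(1)**2 - 3*ffreslt(2) + 4.2
  end function g
and in the main program call it as h = g(f, x1) instead of h = g(f(x1)).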
Check this Notes on converting from F77 to F90 and look at page 24, Section 3.2.7.
Also check this question on procedures as arguments.