How to use a function in another function in Fortran [duplicate]

I am trying to build a Fortran program, but I get errors about an undefined reference or an unresolved external symbol. I've seen another question about these errors, but the answers there are mostly specific to C++.
What are common causes of these errors when writing in Fortran, and how do I fix/prevent them?
This is a canonical question for a whole class of errors when building Fortran programs. If you've been referred here or had your question closed as a duplicate of this one, you may need to read one or more of several answers. Start with this answer which acts as a table of contents for solutions provided.

A link-time error like these messages can arise for many of the same reasons as in more general uses of the linker, not just when linking a compiled Fortran program. Some of these are covered in the linked question about C++ linking and in another answer here: failing to specify the library, or providing the libraries in the wrong order.
However, there are common mistakes in writing a Fortran program that can lead to link errors.
Unsupported intrinsics
If a subroutine reference is intended to refer to an intrinsic subroutine, but that intrinsic isn't offered by the compiler, the reference is taken to be to an external subroutine and this leads to a link-time error.
implicit none
call unsupported_intrinsic
end
With unsupported_intrinsic not provided by the compiler we may see a linking error message like
undefined reference to `unsupported_intrinsic_'
If we are using a non-standard, or not commonly implemented, intrinsic we can help our compiler report this in a couple of ways:
implicit none
intrinsic :: my_intrinsic
call my_intrinsic
end program
If my_intrinsic isn't a supported intrinsic, then the compiler will complain with a helpful message:
Error: ‘my_intrinsic’ declared INTRINSIC at (1) does not exist
We don't have this problem with intrinsic functions (as opposed to subroutines) because we are using implicit none:
implicit none
print *, my_intrinsic()
end
Error: Function ‘my_intrinsic’ at (1) has no IMPLICIT type
With some compilers we can use the Fortran 2018 implicit statement to do the same for subroutines:
implicit none (external)
call my_intrinsic
end
Error: Procedure ‘my_intrinsic’ called at (1) is not explicitly declared
Note that it may be necessary to specify a compiler option when compiling to request the compiler support non-standard intrinsics (such as gfortran's -fdec-math). Equally, if you are requesting conformance to a particular language revision but using an intrinsic introduced in a later revision it may be necessary to change the conformance request. For example, compiling
intrinsic move_alloc
end
with gfortran and -std=f95:
intrinsic move_alloc
1
Error: The intrinsic ‘move_alloc’ declared INTRINSIC at (1) is not available in the current standard settings but new in Fortran 2003. Use an appropriate ‘-std=*’ option or enable ‘-fall-intrinsics’ in order to use it.
External procedure instead of module procedure
Just as we can try to use a module procedure in a program, but forget to give the object defining it to the linker, we can accidentally tell the compiler to use an external procedure (with a different link symbol name) instead of the module procedure:
module mod
implicit none
contains
integer function sub()
sub = 1
end function
end module
use mod, only :
implicit none
integer :: sub
print *, sub()
end
Or we could forget to use the module at all. Equally, we often see this when mistakenly referring to external procedures instead of sibling module procedures.
Using implicit none (external) can help us when we forget to use a module but this won't capture the case here where we explicitly declare the function to be an external one. We have to be careful, but if we see a link error like
undefined reference to `sub_'
then we should suspect we've referred to an external procedure sub instead of a module procedure: note the absence of any name mangling for "module namespaces". That's a strong hint about where we should be looking.
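For contrast, a minimal sketch of a main program that correctly refers to the module procedure (reusing the module mod defined above) looks like:
use mod, only : sub
implicit none
print *, sub()
end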
Mis-specified binding label
If we are interoperating with C then we can specify the link names of symbols incorrectly quite easily. Doing so is so easy when not using the standard interoperability facility that I won't cover that case here. If you see link errors relating to what should be C functions, check carefully.
If using the standard facility there are still ways to trip up. Case sensitivity is one: link symbol names are case sensitive, but your Fortran compiler has to be told the case if it's not all lowercase:
interface
function F() bind(c)
use, intrinsic :: iso_c_binding, only : c_int
integer(c_int) :: f
end function f
end interface
print *, F()
end
tells the Fortran compiler to ask the linker about a symbol f, even though we've called it F here. If the symbol really is called F, we need to say that explicitly:
interface
function F() bind(c, name='F')
use, intrinsic :: iso_c_binding, only : c_int
integer(c_int) :: f
end function f
end interface
print *, F()
end
If you see link errors which differ by case, check your binding labels.
The same holds for data objects with binding labels; also make sure that any data object with linkage association has a matching name in any C definition and link object.
Equally, forgetting to specify C interoperability with bind(c) means the linker may look for a mangled name with a trailing underscore or two (depending on the compiler and its options). If you're trying to link against a C function cfunc but the linker complains about cfunc_, check you've said bind(c).
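For example, an interface for that C function cfunc might look like this (a minimal sketch; the c_int result type is an assumption):
interface
function cfunc() bind(c, name='cfunc')
use, intrinsic :: iso_c_binding, only : c_int
integer(c_int) :: cfunc
end function cfunc
end interface
print *, cfunc()
end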
Not providing a main program
A compiler will often assume, unless told otherwise, that it's compiling a main program in order to generate (with the linker) an executable. If we aren't compiling a main program, that's not what we want. That is, if we're compiling a module or external subprogram for later use:
module mod
implicit none
contains
integer function f()
f = 1
end function f
end module
subroutine s()
end subroutine s
we may get a message like
undefined reference to `main'
This means that we need to tell the compiler that we aren't providing a Fortran main program. This will often be with the -c flag, but there will be a different option if trying to build a library object. The compiler documentation will give the appropriate options in this case.
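For example, with gfortran we could compile the file above (say it's saved as mod.f90; the file name here is just an assumption) to an object file without linking:
gfortran -c mod.f90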

There are many possible ways you can see an error like this. You may see it when trying to build your program (link error) or when running it (load error). Unfortunately, there's rarely a simple way to see which cause of your error you have.
This answer provides a summary of and links to the other answers to help you navigate. You may need to read all answers to solve your problem.
The most common cause of getting a link error like this is that you haven't correctly specified external dependencies or haven't put all parts of your code together correctly.
When trying to run your program you may have a missing or incompatible runtime library.
If building fails and you have specified external dependencies, you may have a programming error which means that the compiler is looking for the wrong thing.

Not linking the library (properly)
The most common reason for the undefined reference/unresolved external symbol error is the failure to link the library that provides the symbol (most often a function or subroutine).
For example, when a subroutine from the BLAS library, like DGEMM, is used, the library that provides this subroutine must be included in the linking step.
In the simplest use cases, the linking is combined with compilation:
gfortran my_source.f90 -lblas
The -lblas tells the linker (here invoked by the compiler) to link the libblas library. It can be a dynamic library (.so, .dll) or a static library (.a, .lib).
In many cases, it will be necessary to provide the library object defining the subroutine after the object requesting it. So, the linking above may succeed where switching the command line options (gfortran -lblas my_source.f90) may fail.
Note that the name of the library can be different as there are multiple implementations of BLAS (MKL, OpenBLAS, GotoBLAS,...).
But it will always be shortened from lib... to -l..., as in libopenblas.so and -lopenblas.
If the library is in a location where the linker does not see it, you can use the -L flag to explicitly add the directory for the linker to consider, e.g.:
gfortran my_source.f90 -L/usr/local/lib -lopenblas
You can also try to add the path into some environment variable the linker searches, such as LIBRARY_PATH, e.g.:
export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/lib
When linking and compilation are separated, the library is linked in the linking step:
gfortran -c my_source.f90 -o my_source.o
gfortran my_source.o -lblas

Not providing the module object file when linking
We have a module in a separate file module.f90 and the main program program.f90.
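As a minimal sketch (the file contents here are assumptions, just enough to reproduce the situation):
! module.f90
module mymod
implicit none
contains
subroutine hello()
print *, 'Hello from the module'
end subroutine hello
end module mymod

! program.f90
program main
use mymod
implicit none
call hello()
end program main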
If we do
gfortran -c module.f90
gfortran program.f90 -o program
we receive an undefined reference error for the procedures contained in the module.
If we want to keep separate compilation steps, we need to link the compiled module object file
gfortran -c module.f90
gfortran module.o program.f90 -o program
or, when separating the linking step completely
gfortran -c module.f90
gfortran -c program.f90
gfortran module.o program.o -o program

Problems with the compiler's own libraries
Most Fortran compilers need to link your code against their own libraries. This should happen automatically without you needing to intervene, but this can fail for a number of reasons.
If you are compiling with gfortran, this problem will manifest as undefined references to symbols in libgfortran, which are all named _gfortran_.... These error messages will look like
undefined reference to '_gfortran_...'
The solution to this problem depends on its cause:
The compiler library is not installed
The compiler library should have been installed automatically when you installed the compiler. If the compiler did not install correctly, this may not have happened.
This can be solved by installing the library correctly, which usually means installing the compiler correctly. It may be worth uninstalling the incorrectly installed compiler first to avoid conflicts.
N.B. proceed with caution when uninstalling a compiler: if you uninstall the system compiler it may uninstall other necessary programs, and may render other programs unusable.
The compiler cannot find the compiler library
If the compiler library is installed in a non-standard location, the compiler may be unable to find it. You can tell the compiler where the library is using LD_LIBRARY_PATH, e.g. as
export LD_LIBRARY_PATH="/path/to/library:$LD_LIBRARY_PATH"
If you can't find the compiler library yourself, you may need to install a new copy.
The compiler and the compiler library are incompatible
If you have multiple versions of the compiler installed, you probably also have multiple versions of the compiler library installed. These may not be compatible, and the compiler might find the wrong library version.
This can be solved by pointing the compiler to the correct library version, e.g. by using LD_LIBRARY_PATH as above.
The Fortran compiler is not used for linking
If you are linking by invoking the linker directly, or indirectly through a C (or other) compiler, then you may need to tell this compiler/linker to include the Fortran compiler's runtime library. For example, if using GCC's C frontend:
gcc -o program fortran_object.o c_object.o -lgfortran

Related

How to find dependent functions in Octave

I would like to identify all functions needed to run a specific function in octave. I need this to deploy an application written in Octave.
While Matlab offers some tools to analyse a function's dependencies, I could not find something similar for Octave.
Trying inmem as recommended in matlab does not produce the expected result:
> inmem
warning: the 'inmem' function is not yet implemented in Octave
Is there any other solution to this problem available?
First, let me point out that from your description, the matlab tool you're after is not inmem, but deprpt.
Secondly, while Octave does not have a built-in tool for this, there are a number of ways to do so yourself. I have not tried these personally, so, ymmv.
1) Run your function while using the profiler, then inspect the functions used during the running process. As suggested in the octave archives: https://lists.gnu.org/archive/html/help-octave/2015-10/msg00135.html
2) There are some external tools on github that attempt just this, e.g. :
https://git.osuv.de/m/about
https://github.com/KaeroDot/mDepGen
3) If I had to attack this myself, I would approach the problem as follows:
Parse and tokenise the m-file in question. (possibly also use binary checks like isvarname to further filter useless tokens before moving to the next step.)
For each token x, wrap a help(x) call in a try / catch block (see the sketch after this list)
Inspect the error, this will be one of:
"Invalid input" (i.e. token was not a function)
"Not found" (i.e. not a valid identifier etc)
"Not documented" (function exists but has no help string)
No error, in which case you stumbled upon a valid function call within the file
To further check if these are builtin functions or part of a loaded package, you could further parse the first line of the "help" output, which typically tells you where this function came from.
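A minimal sketch of steps 2 and 3, assuming tokens is a cell array of candidate identifiers already extracted from the m-file (untested, in keeping with the ymmv above):
deps = {};
for k = 1:numel(tokens)
  tok = tokens{k};
  if ~isvarname(tok)   % skip tokens that cannot be identifiers
    continue
  end
  try
    help(tok);         % errors if tok is not a known function
    deps{end+1} = tok; % no error: tok resolves to a documented function
  catch err
    % inspect err.message to distinguish "not found" from "exists but not documented"
  end
end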
If the context for this is that you're trying to check whether a Matlab script will work on Octave, one complication is that the package-loading statements Octave would require are typically not present in Matlab code. Then again, if this is your goal, you should probably be using deprpt from Matlab directly instead.
Good luck.
PS. I might add that the above is for creating a general tool. In terms of identifying dependencies in your own code, good software engineering practices go a long way towards providing maintainable code and easily resolving dependency problems for your users. E.g.:
- clearly identify required packages (which, unlike Matlab, Octave does anyway by requiring such packages to be visibly loaded in code)
- similarly, for custom dependencies, consider wrapping and providing these as packages / namespaces, rather than scattered files
- if packaging dependencies isn't possible, you can create tests / checks in your file that throw errors if necessary files are missing, or at least mention such dependencies in comments in the file itself.
According to Octave Compatibility FAQ here,
Q. inmem
A. who -functions
You can use who -function. (Note: I have not tried yet.)

At a language level, what exactly is `ccall`?

I'm new to Julia, and I'm trying to understand, at the language level, what ccall is. At the syntax level, it looks like a normal function, but it clearly doesn't behave the same way in how it takes its arguments:
Note that the argument type tuple must be a literal tuple, and not a
tuple-valued variable or expression.
Additionally, if I evaluate a variable bound to a function in the Julia REPL, I get something like
julia> max
max (generic function with 15 methods)
But if I try to do the same with ccall:
julia> ccall
ERROR: syntax: invalid "ccall" syntax
Clearly, ccall is a special piece of syntax, but it's also not a macro (no # prefix, and invalid macro usage gives a more specific error). So, what is it? Is it something baked into the language, or something I could define myself with some language construct I'm not familiar with?
And if it is some baked-in piece of syntax, why was it decided to use function call notation, instead of implementing it as a macro or designing a more readable and distinct syntax?
In the current nightly (and thus, upcoming 0.6 release), much of the special behavior you observe has been removed (see this pull-request). ccall is no longer a reserved word, so it can be used as a function or macro name.
However there is still a slight oddity: defining a 3- or 4-argument function called ccall is allowed, but actually calling such a function will give an error about ccall argument types (other numbers of arguments are ok). The reasons go directly to your question:
So, what is it? Is it something baked into the language
Yes, ccall, though it will no longer be a keyword in 0.6, is still "baked in" to the language in several ways:
the :ccall([four args...]) expression form is recognized and specially handled during syntax lowering. This lowering step does several things including wrapping arguments in a call to unsafe_convert, which allows for customized conversion from Julia objects to C-compatible objects; as well as pulling out arguments that might need to be rooted to prevent garbage collection of a referenced object during the ccall. (see code_lowered output, or try the expand function; more info on the compiler here).
ccall requires extensive handling in the code generation backend, including: look-up of the requested function name in the specified shared library, and generation of an LLVM call instruction -- which is eventually translated to platform-specific machine code by the LLVM Just-In-Time compiler. (see the different stages with code_llvm and code_native).
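As a concrete sketch of those stages (the library name "libc.so.6" assumes Linux; it differs on other platforms):
f() = ccall((:clock, "libc.so.6"), Int32, ())
@code_lowered f()   # the expression produced by the lowering step described above
@code_llvm f()      # the generated LLVM call instruction
@code_native f()    # the platform-specific machine code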
And if it is some baked-in piece of syntax, why was it decided to use
function call notation, instead of implementing it as a macro or
designing a more readable and distinct syntax?
For the reasons detailed above, ccall requires special handling whether it looks like a macro or a function. In this mailing list thread, one of the Julia creators (Stefan Karpinski) commented on why not to make it a macro:
I suppose we could reimplement it as a macro, but that would really just be pushing the magic further down.
As far as "a more readable and distinct syntax", perhaps that is a matter of taste. It's not clear to me why some other syntax would be preferable (except for the convenience of a LuaJIT/CFFI-style inline C syntax parsing, of which I am a fan). My only strong personal wish for ccall would be to have arguments and types entered adjacent (e.g. ccall((:foo, :libbar), Void, (x::Int, y::Float))), because working with longer argument lists can be inconvenient. In 0.6 it will be possible to implement this form as a macro!
In Julia 0.5 and earlier.
It is not a function and it is not a macro.
It is indeed something special baked into the language.
It is an Intrinsic.
In julia 0.6 this changes
In a lot of ways it is more like a macro than a function call.
But in other ways it is not -- it does not return an AST.
It does call a function and on a low enough level it looks similar to calling a julia function.
The history of why it looks the way it does is beyond me; you'd need to hear from one of the people who worked on the earliest code for the language.
Right now it is everywhere, and is one of the harder things to change -- but not impossible. It would trigger up to 3 years of bikeshedding though :-P .
I like to think of ccall as being two things.
Foreign Function Interface, for C and other compiled languages (eg Fortran, Rust apparently work)
Way to access the raw guts of the language "runtime".
Foreign Function Interface (FFI)
Most of the time when one uses ccall in a package one wants to invoke some code that is in a compiled library. In this sense it is C-Call, like R-Call, or Py-Call.
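A minimal FFI sketch in that spirit, calling the C library's strlen (assuming a Linux libc; the library name differs on other platforms):
len = ccall((:strlen, "libc.so.6"), Csize_t, (Cstring,), "hello world")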
I think mlewe/BlossomV.jl is a nice compact example.
For a more intense example oxinabox/SLEEF.jl.
As an FFI, it does not have to share memory space/a process with julia -- PyCall.jl does, RCall.jl and Matlab.jl don't.
It doesn't matter as long as the result comes back.
In these cases it is theoretically possible to replace ccall with some kind of safe_ccall which would run the called library in a separate process, and would not segfault julia if the library being called segfaulted.
But as of yet, no-one has written such a method/package.
Using ccall for FFI is even done in Base, like for accessing MPFR to define BigFloat.
But this is not the main reason ccall is used in Base.
Accessing the guts of the language.
ccall is really what drives a large portion of the program "doing a thing".
It is used throughout Base, to call the functions from src.
For this, ccall basically triggers a function call at the compiled level, that shifts the instruction pointer directly into the compiled code of the ccalled function. Like calling a function would if the whole thing had been written in say C.
You can see in base/threadingconstructs.jl ccall being used to manage work on threads -- that triggers code from src/threading.c.
It is used to map a section of disk to memory. mmap.jl. -- obviously can't be done from another process.
It is used to make a section of code non-interruptible.
It is used to call LibC to do things like malloc to allocate memory (though right now this is mostly used as part of FFI).
There are tricks you can do with ccall to #undef a variable after it has already been assigned.
ccall is in many ways the "master" key to the language.
Conclusion
I've described ccall here as two things: an FFI function and a core part of the language "runtime". This duality is not real, and there is plenty of overlap, like file handling (is it FFI?).
The behaviour many expect ccall to have comes from its FFI uses.
Here ccall could just be a function.
The behaviour it actually has comes from its use as a core part of the language -- linking the julia code of the standard library in Base to the low-level C code from src, allowing very direct control over the running of the julia process.

Building GPL C program with CUDA module

I am attempting to modify a GPL program written in C. My goal is to replace one method with a CUDA implementation, which means I need to compile with nvcc instead of gcc. I need help building the project - not implementing it (You don't need to know anything about CUDA C to help, I don't think).
This is my first time trying to change a C project of moderate complexity that involves a .configure and Makefile. Honestly, this is my first time doing anything in C in a long time, including anything involving gcc or g++, so I'm pretty lost.
I'm not super interested in learning configure and Makefiles - this is more of an experiment. I would like to see if the project implementation goes well before spending time creating a proper build script. (Not unwilling to learn as necessary, just trying to give an idea of the scope).
With that said, what are my options for building this project? I have a myriad of questions...
I tried adding "CC=nvcc" to the configure.in file after AC_PROG_CC. This appeared to work - output from running configure and make showed nvcc as the compiler. However, make failed to compile the source file with the CUDA kernel, not recognizing the CUDA-specific syntax. I don't know why; I was hoping this would just work.
Is it possible to compile a source file with nvcc, and then include it at the linking step in the make process for the main program? If so, how? (This question might not make sense - I'm really rusty at this)
What's the correct way to do this?
Is there a quick and dirty way I could use for testing purposes?
Is there some secret tool everyone uses to setup and understand these configure and Makefiles? This is even worse than the Apache Ant scripts I'm used to (Yeah, I'm out of my realm)
You don't need to compile everything with nvcc. Your guess that you can compile just your CUDA code with nvcc and leave everything else as it is (except linking) is correct. Here's the approach I would use to start.
Add one new header (e.g. myCudaImplementation.h) and one new source file (with a .cu extension, e.g. myCudaImplementation.cu). The source file contains your kernel implementation as well as a (host) C wrapper function that invokes the kernel with the appropriate execution configuration (aka <<<>>>) and arguments. The header file contains the prototype for the C wrapper function. Let's call that wrapper function runCudaImplementation().
I would also provide another host C function in the source file (with prototype in the header) that queries and configures the GPU devices present and returns true if it is successful, false if not. Let's call this function configureCudaDevice().
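A minimal sketch of what myCudaImplementation.cu might contain (the kernel body and the device-selection policy are placeholders, not part of the original answer):
// myCudaImplementation.cu
// (myCudaImplementation.h should declare configureCudaDevice() and
// runCudaImplementation(), wrapped in extern "C" when compiled as C++)
#include <cuda_runtime.h>
#include "myCudaImplementation.h"

__global__ void myKernel()
{
    // device code for the method being replaced goes here
}

extern "C" bool configureCudaDevice()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
        return false;                       // no usable CUDA device found
    return cudaSetDevice(0) == cudaSuccess;
}

extern "C" void runCudaImplementation()
{
    myKernel<<<1, 256>>>();                 // placeholder execution configuration
    cudaDeviceSynchronize();                // wait for the kernel to finish
}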
Now in your original C code, where you would normally call your CPU implementation you can do this.
// must include your new header
#include "myCudaImplementation.h"
// at app initialization
// store this variable somewhere you can access it later
bool deviceConfigured = configureCudaDevice();
...
// then later, at run time
if (deviceConfigured)
runCudaImplementation();
else
runCpuImplementation(); // run the original code
Now, since you put all your CUDA code in a new .cu file, you only have to compile that file with nvcc. Everything else stays the same, except that you have to link in the object file that nvcc outputs. e.g.
nvcc -c -o myCudaImplementation.o myCudaImplementation.cu <other necessary arguments>
Then add myCudaImplementation.o to your link line, along with the CUDA runtime library (something like:)
g++ -o myApp <your other objects> myCudaImplementation.o -lcudart
Now, if you have a complex app to work with that uses configure and has a complex makefile already, it may be more involved than the above, but this is the general approach. Bottom line is you don't want to compile all of your source files with nvcc, just the .cu ones. Use your host compiler for everything else.
I'm no expert with configure so can't really help there. You may be able to run configure to generate a makefile, and then edit that makefile -- it won't be a general solution, but it will get you started.
Note that in some cases you may also need to separate compilation of your .cu files from linking them. In this case you need to use NVCC's separate compilation and linking functionality, for which this blog post might be helpful.

Launch interactive OCaml session with library (Yojson) available

I've installed the Yojson library for OCaml via GODI:
http://martin.jambon.free.fr/yojson.html
I want to start an interactive ocaml session (i.e. via the ocaml command) and execute functions from the Yojson library e.g.
Yojson.Safe.from_string;;
How do I do this? The above command gives "Error: Unbound module Yojson". I've worked out how to compile via ocamlc with Yojson available, but I want to launch an interactive session instead.
I know this seems like a horrible beginner's question, but Yojson comes with no samples and minimal instructions, so I'm really stumped. I've tried various combinations of "#load" and compiler switches and I'm stuck.
The tool you are after is called findlib. It is included in the base GODI installation. The tools that come with findlib allow you to easily compile against most OCaml libraries and use those libraries from a toplevel session (ocaml). The findlib documentation is fairly comprehensive, but here is a quick summary to get started.
To start using findlib from within a toplevel session:
#use "topfind";;
This will display a brief usage message. Then you can type:
#list;;
This will show you a list of all of the available packages. Yojson will likely be among them. Finally:
#require "yojson";;
where yojson is replaced by the appropriate entry shown by #list;;. Yojson's modules should be available for you to use at this point.
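Putting it together, a complete toplevel session might look like this (a sketch, assuming the package shows up in #list;; under the name yojson):
#use "topfind";;
#require "yojson";;
Yojson.Safe.from_string "{\"id\": 1}";;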

What is the explanation behind the below declaration/keyword?

I would like to know what the following declarations do. I have seen them in C code compiled with MS Visual Studio.
extern "C" __declspec(dllexport)
extern "C" __declspec(dllimport)
I know somewhat that they are used to declare external linkage for functions (functions defined in a different source file), but I would like to know in detail how this works.
-Ajit
The extern "C" part tells a C++ compiler that the item being declared should use C linkage, which means that the name will not be mangled (or will be mangled in the same way that a C compiler would). This makes it so the item can be linked to from C code and most other languages as well, since C linkage is typically the standard used for that on a platform.
The __declspec(dllexport) and __declspec(dllimport) items are non-standard attributes that tell the compiler that the item should be exported (or imported) from a DLL. The __declspec() attribute is supported on MS compilers and probably other compilers that target Windows. I'm not sure if GCC does or not. Other storage class attributes that can be specified with __declspec() (at least in MSVC) include uuid(), naked, deprecated and others that provide the compiler with information on how an object or function should be compiled.
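The two usually appear together in a single header, switched by a macro that is defined only when building the DLL itself. A common sketch of that pattern (MYLIB_EXPORTS and MYLIB_API are made-up names):
/* mylib.h */
#ifdef __cplusplus
#  define MYLIB_EXTERN_C extern "C"
#else
#  define MYLIB_EXTERN_C
#endif

#ifdef MYLIB_EXPORTS   /* define this only when compiling the DLL sources */
#  define MYLIB_API MYLIB_EXTERN_C __declspec(dllexport)
#else                  /* code that uses the DLL sees the import variant */
#  define MYLIB_API MYLIB_EXTERN_C __declspec(dllimport)
#endif

MYLIB_API int my_function(int x);   /* callable across the DLL boundary */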
dllexport tells the compiler to generate a .lib file. dllimport tells the compiler to look in a .lib file for the function declaration (its definition will be in a dll).
It means the functions/classes that follow it are visible and accessible across a DLL boundary so you can link against them and call them from other code