On an Amazon EC2 instance (uname -r gives "3.4.37-40.44.amzn1.x86_64", which I hear is based on CentOS) I tried installing:
yum install mysql
yum install mysql-devel
And even
yum install mysql-libs
(Due to this thread.)
I'm trying to compile a program and link the MySQL libraries to it. This works fine on my Mac (but the Mac has libmysqlclient.a). libmysqlclient.a is absolutely nowhere to be found on this machine. All it has is libmysqlclient.so, in several versions.
$ sudo find / -name 'libmysqlclient*'
Gives
/usr/lib64/mysql/libmysqlclient_r.so
/usr/lib64/mysql/libmysqlclient.so
/usr/lib64/mysql/libmysqlclient.so.18
/usr/lib64/mysql/libmysqlclient.so.18.0.0
/etc/alternatives/libmysqlclient
/etc/alternatives/libmysqlclient_r
And
ls -l /usr/lib64/mysql
Gives
lrwxrwxrwx 1 root root 34 Apr 11 19:21 libmysqlclient_r.so -> /etc/alternatives/libmysqlclient_r
lrwxrwxrwx 1 root root 32 Apr 11 19:21 libmysqlclient.so -> /etc/alternatives/libmysqlclient
lrwxrwxrwx 1 root root 24 Apr 11 18:24 libmysqlclient.so.18 -> libmysqlclient.so.18.0.0
-rwxr-xr-x 1 root root 2983360 Mar 14 10:09 libmysqlclient.so.18.0.0
-rwxr-xr-x 1 root root 11892 Mar 14 09:12 mysqlbug
-rwxr-xr-x 1 root root 7092 Mar 14 10:08 mysql_config
So the only real file is libmysqlclient.so.18.0.0.
The compiler command:
g++ main.cpp -L/usr/lib64/mysql -lmysqlclient.so.18.0.0
Fails with
/usr/bin/ld: cannot find -lmysqlclient.so.18.0.0
collect2: ld returned 1 exit status
So somebody is lying or I got completely ripped off at the YUM repo and was not given my libmysqlclient.a like I was supposed to.
(I avoided using the many symlinks on the system so I could eliminate possible issues).
bobobobo! You are so wrong.
First of all, you don't need a libmysqlclient.a file when you have the .so file. The .a file is for static linking, the .so file for dynamic linking. .so files are decidedly better and make you cool.
The problem you get when you try to compile without linking the library is that
g++ main.cpp
Gives
undefined reference to `mysql_init'
But that can be fixed with
g++ main.cpp `mysql_config --cflags --libs`
When you use a .so, it is linked at run time. This makes your compiled code smaller, which is not usually a big deal these days. The really great feature is that when you update your system and the library gets updated, you will link in the new (and hopefully better) library. Updates often contain bug fixes, security fixes, and possibly performance improvements. Therefore they make your code more cool and, indirectly, make you a little bit more cool.
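For completeness, the original error (cannot find -lmysqlclient.so.18.0.0) comes from how -l names libraries: -lfoo makes the linker look for libfoo.so (or libfoo.a), so you pass the stem, not the file name. A minimal sketch of linking against the shared library directly, assuming the headers were installed under /usr/include/mysql by mysql-devel:
g++ main.cpp -I/usr/include/mysql -L/usr/lib64/mysql -lmysqlclient -o main
mysql_config --cflags --libs expands to roughly these -I/-L/-l flags (plus a few extras) for the installed client, which is why the command above and the mysql_config form should behave the same on this box.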
Related
I have Ubuntu 20.04 and Python 3.10.6 on WSL.
I have been trying to install Airflow, and I get 'airflow: command not found' when I try to run 'airflow initdb' or 'airflow info'.
I have done
export AIRFLOW_HOME=~/airflow
and when I run
myname@LAPTOP-28BMMQV7:/root$ ls -l ~/.local/bin
I can see airflow in the list of files.
drwxrwxr-x 2 myname myname 4096 Nov 20 14:17 __pycache__
-rwxrwxr-x 1 myname myname 3472 Nov 20 14:17 activate-global-python-argcomplete
-rwxrwxr-x 1 myname myname 215 Nov 20 14:17 airflow
-rwxrwxr-x 1 myname myname 213 Nov 20 14:17 alembic
When I run this command to see where my Python is, I see this:
myname@LAPTOP-28BMMQV7:/root$ ls -l /usr/bin/python*
lrwxrwxrwx 1 root root 10 Aug 18 11:39 /usr/bin/python3 -> python3.10
lrwxrwxrwx 1 root root 17 Aug 18 11:39 /usr/bin/python3-config -> python3.10-config
-rwxr-xr-x 1 root root 5912936 Nov 2 18:53 /usr/bin/python3.10
I also get warnings similar to this:
WARNING: The script pygmentize is installed in '/home/myname/.local/bin' which is not on PATH.
So I need to find a way to add this directory to PATH.
I have found the following advice in the Airflow documentation:
If the airflow command is not getting recognized (can happen on Windows when using WSL), then ensure that ~/.local/bin is in your PATH environment variable, and add it in if necessary:
PATH=$PATH:~/.local/bin
I am not quite sure how to do it.
I also have MySQL Workbench/Server 8.0.31 installed and want to connect Airflow to it instead of SQLite. Can anybody refer me to a good guide on how to set it up correctly?
I have run pip install 'apache-airflow[mysql]'.
You were so close! I think your local Python (and your terminal, whenever you tried airflow db init) was not able to see the airflow you installed, because it is not on your PATH.
There is this video series I go to whenever I need to install Airflow for a fellow coworker.
The first video shows how to install Airflow locally, and the second shows how to write a DAG.
More importantly, the third video shows how to connect to a different database, just like you wanted.
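For the PATH part specifically, a minimal way to apply the fix from the documentation permanently, assuming a bash shell on WSL:
echo 'export PATH="$PATH:$HOME/.local/bin"' >> ~/.bashrc
source ~/.bashrc
airflow version    # should now resolve to ~/.local/bin/airflow
For pointing Airflow at MySQL instead of SQLite, the relevant setting is sql_alchemy_conn in airflow.cfg (it lives under [core] or [database] depending on the Airflow version); the user, password and database name here are placeholders:
sql_alchemy_conn = mysql+mysqldb://airflow_user:airflow_pass@localhost:3306/airflow_db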
The site I administer has some CGI scripts that run scripts of the form:
#!/usr/bin/env bash
perl my-script.pl
my-script.pl uses DBD::mysql.
use DBD::mysql;
My scripts use many CPAN modules and I do not want to pollute the "system" Perl (5.16) installed by the Linux distro. Our security policy requires that httpd run as user "apache" and that apache not have a home directory on our server, so my solution has been to install Perl with perlbrew under a different home dir I have access to. Then the Apache config file for the virtual host sets some env vars to access it.
SetEnv PATH /export/home/user1/perl5/perlbrew/perls/perl-5.30.2/bin:${PATH}
SetEnv PERL5LIB /export/home/user1/perl5/perlbrew/perls/perl-5.30.2/lib # this may not be needed
This works well enough for loading most modules. For example, apache can run:
perl -mDateTime -e 'print $DateTime::VERSION' # prints "1.52"
but if apache attempts:
perl -mDBD::mysql -e 'print $DBD::mysql::VERSION'
it barfs:
Can't locate loadable object for module DBD::mysql in @INC (@INC contains: /export/home/user1/perl5/perlbrew/perls/perl-5.30.2/lib/site_perl/5.30.2/x86_64-linux /export/home/user1/perl5/perlbrew/perls/perl-5.30.2/lib/site_perl/5.30.2 /export/home/user1/perl5/perlbrew/perls/perl-5.30.2/lib/5.30.2/x86_64-linux /export/home/user1/perl5/perlbrew/perls/perl-5.30.2/lib/5.30.2) at -e line 0.
Compilation failed in require.
BEGIN failed--compilation aborted.
The error message "Can't locate ..." is misleading. I confirmed that DBD::mysql is available from the first path in @INC:
$ find ~user1/perl5/perlbrew/perls/perl-5.30.2/lib/site_perl/5.30.2 -name mysql -ls
16540213 4 drwxr-x--- 2 user1 user1 4096 Apr 21 12:51 /export/home/user1/perl5/perlbrew/perls/perl-5.30.2/lib/site_perl/5.30.2/x86_64-linux/auto/DBD/mysql
16540211 4 drwxr-xr-x 2 user1 user1 4096 Apr 21 11:26 /export/home/user1/perl5/perlbrew/perls/perl-5.30.2/lib/site_perl/5.30.2/x86_64-linux/DBD/mysql
Furthermore, user1 can load DBD::mysql with no problem:
perl -mDBD::mysql -e 'print $DBD::mysql::VERSION' # prints 4.050
Therefore, I suspect that the above error message should have read "Can't load libmysqlclient.so ..."
libmysqlclient.so is located in /usr/lib64/mysql/
ls -l /usr/lib64/mysql/
total 3076
lrwxrwxrwx 1 root root 17 Apr 16 11:59 libmysqlclient_r.so -> libmysqlclient.so
lrwxrwxrwx 1 root root 20 Apr 16 11:59 libmysqlclient.so -> libmysqlclient.so.18
lrwxrwxrwx 1 root root 24 Apr 16 11:57 libmysqlclient.so.18 -> libmysqlclient.so.18.0.0
-rwxr-xr-x 1 root root 3135664 Aug 18 2019 libmysqlclient.so.18.0.0
-rwxr-xr-x 1 root root 6758 Aug 18 2019 mysql_config
drwxr-xr-x. 2 root root 4096 Apr 16 11:57 plugin
If user1 runs perl -V, the Linker and Dynamic Linking sections show the following:
Linker and Libraries:
ld='cc'
ldflags =' -fstack-protector-strong -L/usr/local/lib'
libpth=/usr/local/lib /usr/lib /lib/../lib64 /usr/lib/../lib64 /lib /lib64 /usr/lib64 /usr/local/lib64
libs=-lpthread -lnsl -lgdbm -ldb -ldl -lm -lcrypt -lutil -lc -lgdbm_compat
perllibs=-lpthread -lnsl -ldl -lm -lcrypt -lutil -lc
libc=libc-2.17.so
so=so
useshrplib=false
libperl=libperl.a
gnulibc_version='2.17'
Dynamic Linking:
dlsrc=dl_dlopen.xs
dlext=so
d_dlsymun=undef
ccdlflags='-Wl,-E'
cccdlflags='-fPIC'
lddlflags='-shared -O2 -L/usr/local/lib -fstack-protector-strong'
If I run this same perl as apache, it produces the same result:
sudo -u apache bash
PATH=~user1/perl5/perlbrew/perls/perl-5.30.2/bin:/usr/local/bin:/usr/bin:/usr/X11R6/bin
perl -V
...
Linker and Libraries:
ld='cc'
ldflags =' -fstack-protector-strong -L/usr/local/lib'
libpth=/usr/local/lib /usr/lib /lib/../lib64 /usr/lib/../lib64 /lib /lib64 /usr/lib64 /usr/local/lib64
libs=-lpthread -lnsl -lgdbm -ldb -ldl -lm -lcrypt -lutil -lc -lgdbm_compat
perllibs=-lpthread -lnsl -ldl -lm -lcrypt -lutil -lc
libc=libc-2.17.so
so=so
useshrplib=false
libperl=libperl.a
gnulibc_version='2.17'
Dynamic Linking:
dlsrc=dl_dlopen.xs
dlext=so
d_dlsymun=undef
ccdlflags='-Wl,-E'
cccdlflags='-fPIC'
lddlflags='-shared -O2 -L/usr/local/lib -fstack-protector-strong'
How come user1's perl can load DBD::mysql but apache's can't, even though both are running the same Perl with the same @INC paths and their dynamic linking settings look identical? Does anyone know what else I can do to get to the bottom of this?
For starters, you should never do
SetEnv PERL5LIB /export/home/user1/perl5/perlbrew/perls/perl-5.30.2/lib
If you use .../perl-5.30.2/bin/perl, it will know to look in .../perl-5.30.2/lib, and that's the only perl that should look in that directory.
Ideally, you wouldn't do the following either:
SetEnv PATH /export/home/user1/perl5/perlbrew/perls/perl-5.30.2/bin:${PATH}
The shebang of the script should point to the perl it's meant to use (the one with which it was tested and known to work).
In other words, use the following in the bash script:
./my-script.pl
And use the following shebang in my-script.pl:
#!/export/home/user1/perl5/perlbrew/perls/perl-5.30.2/bin/perl
What you are currently doing isn't terrible, but could bite you if you try to upgrade something.
Finally, perl can't find the module because of permission issues. Assuming the apache user isn't a member of the user1 group, you showed that the apache user can't access lib/site_perl/5.30.2/x86_64-linux/auto/DBD/mysql (and it might not be able to access other pertinent files either).
Fix:
chmod go+X \
/export \
/export/home \
/export/home/user1 \
/export/home/user1/perl5 \
/export/home/user1/perl5/perlbrew \
/export/home/user1/perl5/perlbrew/perls
chmod -R go+rX /export/home/user1/perl5/perlbrew/perls/perl-5.30.2
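One way to confirm the diagnosis before and after the chmod, assuming sudo access and util-linux's namei, is to inspect every directory along the path and then try the module load as the apache user:
namei -l /export/home/user1/perl5/perlbrew/perls/perl-5.30.2/lib/site_perl/5.30.2/x86_64-linux/auto/DBD/mysql
sudo -u apache /export/home/user1/perl5/perlbrew/perls/perl-5.30.2/bin/perl -mDBD::mysql -e 'print $DBD::mysql::VERSION'
namei -l prints the owner and permission bits of each path component, so a directory that "other" cannot traverse shows up immediately.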
I'm writing some loadable modules for Zabbix and, as such, compiling shared objects. I've written one which uses the MySQL C API to read some data from tables; it's fairly standard, and includes:
#include <my_global.h>
#include <mysql.h>
My gcc command looks like so (expanded mysql_config for clarity):
gcc -fPIC -shared -o zbx_mysql.so zbx_mysql.c -I/usr/lib64/mysql `mysql_config --cflags` -I/opt/zabbix/3.2/include -L/usr/lib64/mysql -lmysqlclient -lpthread -lm -lrt -ldl
Contents of /usr/lib64/mysql:
-rw-r--r-- 1 root root 21358968 Sep 13 17:15 libmysqlclient.a
lrwxrwxrwx 1 root root 20 Nov 19 23:19 libmysqlclient_r.so.18 -> libmysqlclient.so.18
lrwxrwxrwx 1 root root 24 Nov 19 23:19 libmysqlclient_r.so.18.1.0 -> libmysqlclient.so.18.1.0
lrwxrwxrwx 1 root root 20 Nov 19 23:19 libmysqlclient.so -> libmysqlclient.so.20
lrwxrwxrwx 1 root root 24 Nov 19 23:19 libmysqlclient.so.18 -> libmysqlclient.so.18.1.0
-rwxr-xr-x 1 root root 9580608 Sep 13 17:07 libmysqlclient.so.18.1.0
lrwxrwxrwx 1 root root 24 Nov 19 23:18 libmysqlclient.so.20 -> libmysqlclient.so.20.3.7
-rwxr-xr-x 1 root root 9884704 Sep 13 17:15 libmysqlclient.so.20.3.7
-rw-r--r-- 1 root root 44102 Sep 13 17:13 libmysqlservices.a
drwxr-xr-x 4 root root 28 Nov 19 23:18 mecab
drwxr-xr-x. 3 root root 4096 Nov 19 23:19 plugin
The .so compiles and runs fine on the dev box, but copying it to a box without mysql-devel installed yields the following error:
cannot load module "zbx_mysql.so": libmysqlclient.so.20: cannot open shared object file: No such file or directory
I can only assume this means that libmysqlclient.so.20 isn't being bundled into my .so. I'm pretty much a novice here, so if anyone can advise it'd be greatly appreciated.
Shared libraries aren't "bundled"; that's why they're shared. The machine you're trying to run on is simply missing the library. The runtime libraries typically aren't in the "-dev" or "-devel" packages; those ship headers and unversioned symlinks, while the versioned .so comes with the client library package.
On your typical *nix system, you can have multiple versions of the same shared library installed, but normally only one development package. If you have the dev package for mysql-client 20 installed, the compiled code will link against that version. If you want your compiled code to link against mysql-client 18, install the older version of the development package.
If you need to be independent of the libraries installed on your target system, one possibility would be to link a static library instead.
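Two checks/workarounds, sketched under the assumption that the paths from the question apply. First, ldd shows exactly which runtime libraries the module needs on the target box:
ldd zbx_mysql.so | grep mysql
Second, to make the module independent of the installed MySQL client library, you can try pulling in the static archive instead of -lmysqlclient (the trailing -lstdc++ is there because newer client libraries are written in C++); note this only works if libmysqlclient.a was built as position-independent code, otherwise the link fails with relocation errors:
gcc -fPIC -shared -o zbx_mysql.so zbx_mysql.c `mysql_config --cflags` \
    -I/opt/zabbix/3.2/include \
    /usr/lib64/mysql/libmysqlclient.a -lstdc++ -lpthread -lm -lrt -ldl
Installing the client library package that ships libmysqlclient.so.20 on the target machine is usually the simpler fix.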
I'm trying to debug a page in a web app that keeps crashing Chrome ("Aw, snap!" error). I've enabled/disabled automatic crash reporting and tried logging with google-chrome --enable-logging --v=1 (as well as various levels of verbosity), and all I get is a "crash dump ID" in chrome_debug.log. chrome://crashes shows all of the dump IDs, but no actual dump file.
I see other questions referring to reading the dump files, but I can't find the dump files themselves (just the ID).
Grepping for the crash ID in /tmp and ~/.config/google-chrome/ turns up nothing, but the ~/.config/google-chrome/chrome_debug.log shows that something was sent:
--2015-04-06 11:10:00-- https://clients2.google.com/cr/report
Resolving clients2.google.com (clients2.google.com)... 74.125.228.224, 74.125.228.225, 74.125.228.231, ...
Connecting to clients2.google.com (clients2.google.com)|74.125.228.224|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘/dev/fd/3’
0K
Crash dump id: 7dac9d5d58258264
Any ideas on where to find the actual file/data that's sent?
Details:
Chrome version: 40.0.2214.111 (Official Build)
Linux Mint 16 (Petra)
Edit: Some extra info:
curtis@localhost:-$ tail -n 5 uploads.log && echo $(pwd)
1428584493,ddc357e4600a49e6
1428584497,7ac16455c152381a
1428589439,d00ad6f5e6426f3d
1428934450,66b3f722430511e8
1428939578,7a2efc2b681515d1
/home/curtis/.config/google-chrome/Crash Reports
curtis@localhost:-$ ll -a
total 12
drwx------ 2 curtis curtis 4096 Apr 6 11:32 .
drwx------ 9 curtis curtis 4096 Apr 13 11:43 ..
-rw------- 1 curtis curtis 3291 Apr 13 11:39 uploads.log
Automatic reporting is enabled...
Thanks!
The *.dmp files are stored in /tmp/, and this has nothing to do with the "Automatic crash reporting" checkbox. The file is also not related to the hash stored in ~/.config/google-chrome/
In ~/.config/google-chrome/Crash Reports/uploads.log:
1429189585,5bddea9f7433e3da
Using the method below, the crash dump file for this particular report turned out to be:
chromium-renderer-minidump-2113a256de381bce.dmp
Solution:
root@localhost:-$ mkdir /tmp/misc && chmod 777 /tmp/misc
root@localhost:-$ cd /tmp
root@localhost:-$ watch -n 1 'find . -mmin -1 -exec cp {} /tmp/misc/ \;'
Then, as a regular user (not root):
google-chrome --enable-logging --v=1
Once you see files created by the watch command, run:
root@localhost:-$ ls -l /tmp/misc/
-rw------- 1 root root 230432 Apr 16 09:06 chromium-renderer-minidump-2113a256de381bce.dmp
-rw------- 1 root root 230264 Apr 16 09:12 chromium-renderer-minidump-95889ebac3d8ac81.dmp
-rw------- 1 root root 231264 Apr 16 09:13 chromium-renderer-minidump-da0752adcba4e7ca.dmp
-rw------- 1 root root 236246 Apr 16 09:12 chromium-upload-56dc27ccc3570a10
-rw------- 1 root root 237247 Apr 16 09:13 chromium-upload-5cebb028232dd944
Now you can use breakpad to work on the *.dmp files.
Google Chrome - Crash Dump Location
To generate the Crash Dump locally,
CHROME_HEADLESS=1 google-chrome
The .dmp files are then stored in ~/.config/google-chrome/Crash Reports
Produce Stack Trace
Check out and add depot_tools to your PATH (used to build breakpad)
git clone https://chromium.googlesource.com/chromium/tools/depot_tools
export PATH=`pwd`/depot_tools:"$PATH"
Check out and build breakpad (using fetch from depot_tools)
mkdir breakpad && cd breakpad
fetch breakpad
cd src
./configure && make
To produce stack trace without symbols:
breakpad/src/processor/minidump_stackwalk -m /path/to/minidump
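For example, running it against one of the renderer dumps collected in the earlier answer (the path is an assumption; minidump_stackwalk logs very verbosely to stderr, so redirecting it keeps the output readable):
breakpad/src/processor/minidump_stackwalk /tmp/misc/chromium-renderer-minidump-2113a256de381bce.dmp 2>/dev/null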
More here https://www.chromium.org/developers/decoding-crash-dumps
Personally Preferred Method
Enable crash reporting:
Chrome menu > Settings > Show advanced settings > Tick "Automatically send usage statistics and crash reports to Google"
Go to chrome://crashes > File bug > Takes you to crbug.com > Complete
report leaving the auto-added report_id field unchanged.
Someone from the Chrome/Chromium team will follow up. They can provide
you with your stack trace and aid at resolving the issue.
I'm building Octave from sources in order to include the ATLAS libraries. Did I get them included correctly? I don't know what to expect from the Octave configure script. I find "-llapack" suspiciously generic.
./configure --with-lapack=/usr/local/atlas
Source directory: .
Installation prefix: /usr/local
C compiler: gcc -Wall -W -Wshadow -Wformat -Wpointer-arith -Wmissing-prototypes -Wstrict-prototypes -Wwrite-strings -Wcast-align -Wcast-qual -g -O2 -pthread
C++ compiler: g++ -Wall -W -Wshadow -Wold-style-cast -Wformat -Wpointer-arith -Wwrite-strings -Wcast-align -Wcast-qual -g -O2 -pthread
Fortran compiler: gfortran -O
Fortran libraries: -L/usr/lib/gcc/x86_64-linux-gnu/4.8 -L/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../x86_64-linux-gnu -L/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../lib -L/lib/x86_64-linux-gnu -L/lib/../lib -L/usr/lib/x86_64-linux-gnu -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-linux-gnu/4.8/../../.. -lgfortran -lm -lquadmath
Lex libraries:
LIBS: -lutil -lm
...
HDF5 libraries: -lhdf5
Java home: /usr/lib/jvm/java-7-openjdk-amd64
Java JVM path: /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server
Java CPPFLAGS: -I/usr/lib/jvm/java-7-openjdk-amd64/include -I/usr/lib/jvm/java-7-openjdk-amd64/include/linux
Java libraries:
LAPACK libraries: -llapack
LLVM CPPFLAGS:
LLVM LDFLAGS:
LLVM libraries:
Magick++ CPPFLAGS: -I/usr/include/GraphicsMagick
Magick++ LDFLAGS:
Magick++ libraries: -lGraphicsMagick++ -lGraphicsMagick
...
allusers@vbubuntu:~/Downloads/octave-3.8.1$ ll -R /usr/local/atlas/
/usr/local/atlas/:
total 16
drwxr-xr-x 4 root root 4096 May 25 23:01 ./
drwxr-xr-x 13 root root 4096 May 25 23:01 ../
drwxr-xr-x 3 root root 4096 May 25 23:01 include/
drwxr-xr-x 2 root root 4096 May 25 23:01 lib/
/usr/local/atlas/include:
total 60
drwxr-xr-x 3 root root 4096 May 25 23:01 ./
drwxr-xr-x 4 root root 4096 May 25 23:01 ../
drwxr-xr-x 2 root root 4096 May 25 23:01 atlas/
-rw-r--r-- 1 root root 33962 May 25 23:06 cblas.h
-rw-r--r-- 1 root root 9708 May 25 23:06 clapack.h
/usr/local/atlas/include/atlas:
total 604
drwxr-xr-x 2 root root 4096 May 25 23:01 ./
drwxr-xr-x 3 root root 4096 May 25 23:01 ../
-rw-r--r-- 1 root root 2089 May 25 23:06 atlas_buildinfo.h
-rw-r--r-- 1 root root 90 May 25 23:06 atlas_cacheedge.h
...
-rw-r--r-- 1 root root 2716 May 25 23:06 zmm.h
-rw-r--r-- 1 root root 552 May 25 23:06 zXover.h
/usr/local/atlas/lib:
total 26548
drwxr-xr-x 2 root root 4096 May 25 23:01 ./
drwxr-xr-x 4 root root 4096 May 25 23:01 ../
-rw-r--r-- 1 root root 14165306 May 25 23:06 libatlas.a
-rw-r--r-- 1 root root 455844 May 25 23:06 libcblas.a
-rw-r--r-- 1 root root 572392 May 25 23:06 libf77blas.a
-rw-r--r-- 1 root root 10942494 May 25 23:06 liblapack.a
-rw-r--r-- 1 root root 456426 May 25 23:06 libptcblas.a
-rw-r--r-- 1 root root 572788 May 25 23:06 libptf77blas.a
allusers@vbubuntu:~/Downloads/octave-3.8.1$
Additional info:
After spamming echo statements in the configure script I've noticed the following:
This line:
$as_echo "$as_me:${as_lineno-$LINENO}: checking for $cheev in $LAPACK_LIBS" >&5
has the correct $LAPACK_LIBS variable in it (the one I passed in). It's this line that appears to be the first failure to find something in the lapack libraries I'm telling it about:
if ac_fn_c_try_link "$LINENO"; then :
Just before that line I see the configure script define some C code that I believe it compiles and links to identify whether whatever 'cheev' is, is found in the libraries.
checking for cheev_ in /usr/local/atlas/lib/... no
checking for cheev_... no
checking for cheev_ in -llapack... yes
configuration script
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
/* Override any GCC internal prototype to avoid an error.
Use char because int might match the return type of a GCC
builtin and then its argument prototype would still apply. */
#ifdef __cplusplus
extern "C"
#endif
char $cheev ();
#ifdef F77_DUMMY_MAIN
# ifdef __cplusplus
extern "C"
# endif
int F77_DUMMY_MAIN() { return 1; }
#endif
int
main ()
{
return $cheev ();
;
return 0;
}
_ACEOF
At this point the C code has gone beyond my comprehension level. It seems like it has something to do with whether the F77 compiler (compiler translator??) is being invoked or not.
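For reference, the generated conftest simply declares the symbol and tries to link it against whatever is in $LAPACK_LIBS; the check passes or fails purely on whether the linker can resolve cheev_. A rough way to reproduce the same test by hand outside of configure (the extra libraries listed here, which ATLAS's static archives and the Fortran-built LAPACK need, are an assumption):
cat > conftest.c <<'EOF'
char cheev_();
int main() { return (int) cheev_(); }
EOF
gcc conftest.c -o conftest \
    -L/usr/local/atlas/lib -llapack -lptf77blas -lptcblas -latlas \
    -lgfortran -lm -lpthread && echo "cheev_ found"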
Well, I think I worked this out after a marathon debugging session.
Octave doesn't appear to recognize the atlas libraries unless they're in shared format (.so files not the .a files that are generated by default).
When I built ATLAS with the --shared option added and referenced the .so files generated by ATLAS, the Octave configure script accepted them. Note: Make sure you use libtatlas.so, not libsatlas.so, assuming you want the multithreaded libraries.
Reference material:
ATLAS ./configure arguments:
../configure --shared -b 64 -D c -DPentiumCPS=3000 --with-netlib-lapack-tarfile=/home/allusers/Downloads/lapack-3.5.0.tgz
Octave ./configure arguments:
./configure --with-lapack=/usr/local/atlas/lib/libtatlas.so --with-blas=/usr/local/atlas/lib/libtatlas.so
Expected Octave ./configure output:
...
BLAS libraries: /usr/local/atlas/lib/libtatlas.so
...
LAPACK libraries: /usr/local/atlas/lib/libtatlas.so
...
Incorrect Octave ./configure output:
...
BLAS libraries: -lblas
...
LAPACK libraries: -llapack
...
My full build process for ATLAS and Octave:
ATLAS setup:
bunzip2 -c atlas3.10.x.tar.bz2 | tar xfm -
mv ATLAS atlas3.10.1
cd atlas3.10.1
mkdir build_vbubuntu
cd build_vbubuntu
sudo apt-get install gfortran f2c libcnf-dev # ???
../configure --shared -b 64 -D c -DPentiumCPS=3000 --with-netlib-lapack-tarfile=/home/allusers/Downloads/lapack-3.5.0.tgz
make build
make check # test serial routines
make ptcheck # check parallel routines
make time
sudo make install
Octave setup:
sudo apt-get build-dep octave
./configure --with-lapack=/usr/local/atlas/lib/libtatlas.so --with-blas=/usr/local/atlas/lib/libtatlas.so
sudo make install
Full disclosure: While I've written up this answer because I got Octave to admit that the ATLAS libraries exist (and I don't want to forget to write it later), the end result is still not working: a large-scale matrix multiplication doesn't use multiple cores. Hence, if the cause of that issue is related, I may be back to edit this answer in the future.
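A quick way to check whether the threaded ATLAS library is actually the one picked up at run time, and whether a multiplication spreads across cores (octave-cli is the command-line binary Octave 3.8 installs; the matrix size is just an example):
ldd $(which octave-cli) | grep -i atlas
octave --eval 'N=4000; A=rand(N); B=rand(N); tic; A*B; printf("%.1f s\n", toc)'
While the second command runs, top or htop in another terminal shows whether more than one core is busy.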
My successful attempt to compile Octave (3.8.2) on CentOS including ATLAS:
(make sure to remove blas-devel and lapack-devel, just in case)
> yum install atlas-sse3.x86_64
> setenv LDFLAGS -L/usr/lib64/atlas-sse3
>./configure --with-lapack=-latlas --with-blas=-latlas --enable-jit
> make -j20
(as root)> make install
After configure you should see:
BLAS libraries: -lcblas -lf77blas -latlas
LAPACK libraries: -llapack