Is there any link/documentation available on installing DC/OS on Google Compute Engine where the instances are Ubuntu 16.04 (including the bootstrap node) instead of CentOS 7?
Currently, the documentation I can find uses Ansible and CentOS 7 on GCE, as below.
https://dcos.io/docs/1.7/administration/installing/cloud/gce/
Short answer: Debian-based distributions are currently (at least up to DC/OS 1.10) not supported.
Long answer: It's possible, but requires some extra steps.
DC/OS doesn't use any RedHat-specific features. The most important differences can be solved by symlinking a few system binaries, since RedHat systems use different paths and systemd doesn't honour the $PATH variable in service definitions. You'll need the following:
sudo apt-get install libcurl3-nss ipset selinux-utils curl unzip bc
sudo ln -s /bin/mkdir /usr/bin/mkdir
sudo ln -s /bin/ln /usr/bin/ln
sudo ln -s /bin/tar /usr/bin/tar
sudo ln -s /bin/rm /usr/bin/rm
sudo ln -s /usr/sbin/useradd /usr/bin/useradd
sudo ln -s /bin/bash /usr/bin/bash
sudo ln -s /sbin/ipset /usr/sbin/ipset
Other requirements are:
systemd with version >=200
Docker >=1.6
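A quick way to verify both on a node (a minimal sketch):
# check that the node meets the version requirements
systemctl --version | head -n1   # should report 200 or newer
docker --version                 # should report 1.6 or newer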
There are slightly outdated scripts from John Omernik, and there's also a Puppet module (I'm the author). For further details, see the discussion on the DC/OS Jira.
The next step is manual DC/OS compilation (it might sound scary, but it's actually very easy). The C++ components (especially mesos-slave) depend on system libraries and should be linked against the proper ones.
apt install python3-venv build-essential git
git clone https://github.com/dcos/dcos
cd dcos
./build_local.sh
Resulting "image" will be located in:
$HOME/dcos-artifacts/testing/`whoami`/dcos_generate_config.sh
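Copying the artifact to the bootstrap host could look like this (a sketch; the user and host names are placeholders matching the example below):
scp $HOME/dcos-artifacts/testing/`whoami`/dcos_generate_config.sh user@bootstrap.example.com:~/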
You can copy it to your bootstrap server and extract it:
bash dcos_generate_config.sh --genconf
After updating genconf/config.yaml, you can start a container to serve the installation scripts:
docker run -d -p 9090:80 -v $PWD/genconf/serve:/usr/share/nginx/html:ro nginx
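For reference, a minimal genconf/config.yaml (it has to be in place before running --genconf) might look like this sketch; the cluster name, master IP and resolvers are placeholders:
cat > genconf/config.yaml <<'EOF'
# placeholder values -- adjust to your own network
bootstrap_url: http://bootstrap.example.com:9090
cluster_name: dcos-on-ubuntu
exhibitor_storage_backend: static
master_discovery: static
master_list:
  - 10.0.0.10
resolvers:
  - 8.8.8.8
EOF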
On each new node, simply fetch and run the installation script:
rm -rf /tmp/dcos && mkdir /tmp/dcos && cd /tmp/dcos && curl -O http://bootstrap.example.com:9090/dcos_install.sh
bash dcos_install.sh slave
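If the installation succeeds, the DC/OS systemd units should come up; a quick sanity check might look like this (assuming the standard DC/OS unit names):
# list the DC/OS units and inspect the agent's recent log output
systemctl list-units 'dcos-*'
journalctl -u dcos-mesos-slave.service -n 50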
Unless you want to run packages from the DC/OS Universe (like Elastic, Kafka, etc.) that depend on libmesos-bundle, you should be just fine. The bundle is fetched into each executor's directory and includes numerous libraries, such as libmesos.so:
...
-rwxr-xr-x 1 nobody nogroup 55077256 Jun 28 19:50 libmesos-1.4.0.so
-rwxr-xr-x 1 nobody nogroup 1487 Jun 28 19:50 libmesos.la
lrwxrwxrwx 1 nobody nogroup 17 Jun 28 19:50 libmesos.so -> libmesos-1.4.0.so
-rwxr-xr-x 1 nobody nogroup 398264 Jun 28 19:53 libpcre.so.1
-rwxr-xr-x 1 nobody nogroup 121296 Jun 28 19:53 libsasl2.so.3
-rwxr-xr-x 1 nobody nogroup 155744 Jun 28 19:53 libselinux.so.1
-rwxr-xr-x 1 nobody nogroup 454008 Jun 28 19:53 libssl.so.10
-rwxr-xr-x 1 nobody nogroup 999944 Jun 28 19:53 libstdc++.so.6
-rwxr-xr-x 1 nobody nogroup 79000 Jun 28 19:53 libsvn_delta-1.so.0
-rwxr-xr-x 1 nobody nogroup 1820208 Jun 28 19:53 libsvn_subr-1.so.0
-rwxr-xr-x 1 nobody nogroup 20040 Jun 28 19:53 libuuid.so.1
-rwxr-xr-x 1 nobody nogroup 90664 Jun 28 19:53 libz.so.1
drwxr-xr-x 3 nobody nogroup 4096 Jun 28 19:53 mesos
drwxr-xr-x 2 nobody nogroup 4096 Jun 28 19:37 pkgconfig
Some of these libraries might be compatible with your system, but the versions shipped by CentOS and Debian might (and will) differ. You might encounter errors like:
libmesos-bundle/lib/libcurl.so.4: version `CURL_OPENSSL_3' not found (required by curl)
which means that all agent-based health checks that use curl will fail, and therefore most instances will refuse to start.
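One way to spot such mismatches, assuming binutils is available, is to compare the symbol versions the system curl requires with those the bundled library provides:
# symbol versions required by the system curl binary
objdump -T /usr/bin/curl | grep CURL_OPENSSL
# symbol versions provided by the bundled library
objdump -T libmesos-bundle/lib/libcurl.so.4 | grep CURL_OPENSSL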
Related
I have Ubuntu 20.04 and python 3.10.6 on WSL.
I have been trying to install airflow, and am getting 'airflow: command not found' when I'm trying to do 'airflow initdb' or 'airflow info'.
I have done
export AIRFLOW_HOME=~/airflow
and when I run
myname@LAPTOP-28BMMQV7:/root$ ls -l ~/.local/bin
I can see airflow in the list of files.
drwxrwxr-x 2 myname myname 4096 Nov 20 14:17 __pycache__
-rwxrwxr-x 1 myname myname 3472 Nov 20 14:17 activate-global-python-argcomplete
-rwxrwxr-x 1 myname myname 215 Nov 20 14:17 airflow
-rwxrwxr-x 1 myname myname 213 Nov 20 14:17 alembic
When I run this command to see where my Python is, I can see this:
myname@LAPTOP-28BMMQV7:/root$ ls -l /usr/bin/python*
lrwxrwxrwx 1 root root 10 Aug 18 11:39 /usr/bin/python3 -> python3.10
lrwxrwxrwx 1 root root 17 Aug 18 11:39 /usr/bin/python3-config -> python3.10-config
-rwxr-xr-x 1 root root 5912936 Nov 2 18:53 /usr/bin/python3.10
I also get warnings similar to this:
WARNING: The script pygmentize is installed in '/home/myname/.local/bin' which is not on PATH.
So I need to find a way to add this directory to PATH.
I have found the following advice from the airflow documentation,
If the airflow command is not getting recognized (can happen on Windows when using WSL), then ensure that ~/.local/bin is in your PATH environment variable, and add it in if necessary:
PATH=$PATH:~/.local/bin
I am not quite sure how to do that.
I also have MySQL Workbench/Server 8.0.31 installed and want to connect Airflow to it instead of SQLite. Can anybody refer me to a good guide on how to set this up correctly?
I have run pip install 'apache-airflow[mysql]'.
You were so close! I think your local Python (and your terminal, whenever you tried airflow db init) was not able to see the airflow you installed, because it's not on your PATH.
There is a video series I go to whenever I need to install Airflow for a coworker.
The first video shows how to install Airflow locally, and the second shows how to write a DAG.
More importantly, the third video shows how to connect to a different database, just like you wanted.
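For completeness, a rough sketch of the two remaining steps (the Airflow config variable name depends on your Airflow version, and the MySQL credentials are placeholders):
# make ~/.local/bin available permanently
echo 'export PATH=$PATH:$HOME/.local/bin' >> ~/.bashrc
source ~/.bashrc
# point Airflow at MySQL instead of SQLite (placeholder credentials)
export AIRFLOW__DATABASE__SQL_ALCHEMY_CONN='mysql+mysqldb://airflow_user:airflow_pass@localhost:3306/airflow_db'
# initialise the metadata database (newer Airflow versions use "airflow db init")
airflow db init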
Background: We produce a big Library Management System, with the server parts written in C, compiled on Linux SLES 15 and deployed to ~100 customers. The version in question was compiled on SLES 15 SP2 a year ago, and our internal IT department has meanwhile updated the Dev and QA hosts to SP3.
It turned out that with the update from SP2 to SP3, libcrypt.so moved to a new location, from /lib64 to /usr/lib64, and now contains a new symbol:
strings /usr/lib64/libcrypt.so.1.1.0 | grep XCRYPT_2.0
XCRYPT_2.0
# rpm -q -f /usr/lib64/libcrypt.so.1
libcrypt1-4.4.15-150300.4.2.41.x86_64
# zypper info libcrypt1
Information for package libcrypt1:
----------------------------------
Repository : SLE-Module-Basesystem15-SP3-Updates
Name : libcrypt1
Version : 4.4.15-150300.4.2.41
Arch : x86_64
If you now compile a server application on SP3 and ship it (as a fix for an urgent bug) to customers who are still using SP2, the application is missing this symbol and no longer starts:
/opt/lib/sisis/avserver/batch/bin/prg/BASTVL: /lib64/libcrypt.so.1: version `XCRYPT_2.0' not found (required by /opt/lib/sisis/avserver/batch/bin/prg/BASTVL)
# strings /lib64/libcrypt.so.1 | grep XCR
# strings /usr/lib64/libcrypt.so.1 | grep XCR
strings: '/usr/lib64/libcrypt.so.1': No such file
# rpm -q -f /lib64/libcrypt.so.1
glibc-2.26-13.48.1.x86_64
# rpm -q -f /usr/lib64/libcrypt.so.1
error: file /usr/lib64/libcrypt.so.1: No such file or directory
i.e. our internal update from SP2 to SP3 makes it impossible to deliver fixes to customers running SP2; they would need to update to SP3 as well before installing fixes, at least if libcrypt.so is involved.
Any comments or hints for a workaround?
In the end I compiled libxcrypt from source:
git clone https://github.com/besser82/libxcrypt.git
cd libxcrypt
./autogen.sh
./configure --prefix /usr/local/sisis-pap/libxcrypt
make
sudo make install
ls -l /usr/local/sisis-pap/libxcrypt/lib64
total 1300
-rw-r--r-- 1 root root 635620 26. Jul 14:09 libcrypt.a
-rwxr-xr-x 1 root root 945 26. Jul 14:09 libcrypt.la
lrwxrwxrwx 1 root root 17 26. Jul 14:09 libcrypt.so -> libcrypt.so.1.1.0
lrwxrwxrwx 1 root root 17 26. Jul 14:09 libcrypt.so.1 -> libcrypt.so.1.1.0
-rwxr-xr-x 1 root root 681656 26. Jul 14:09 libcrypt.so.1.1.0
lrwxrwxrwx 1 root root 10 26. Jul 14:09 libowcrypt.a -> libcrypt.a
lrwxrwxrwx 1 root root 11 26. Jul 14:09 libowcrypt.so -> libcrypt.so
lrwxrwxrwx 1 root root 13 26. Jul 14:09 libowcrypt.so.1 -> libcrypt.so.1
lrwxrwxrwx 1 root root 10 26. Jul 14:09 libxcrypt.a -> libcrypt.a
lrwxrwxrwx 1 root root 11 26. Jul 14:09 libxcrypt.so -> libcrypt.so
and pointed our application, via LD_LIBRARY_PATH, at this version of libcrypt.so.1.
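Pointing the application at the self-built library can be done with a small wrapper along these lines (a sketch using the paths from above):
#!/bin/sh
# prefer the self-built libcrypt over the system one, then start the binary
export LD_LIBRARY_PATH=/usr/local/sisis-pap/libxcrypt/lib64:$LD_LIBRARY_PATH
exec /opt/lib/sisis/avserver/batch/bin/prg/BASTVL "$@"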
On Ubuntu 19.10, I have a broken install/removal of MariaDB. I attempted every solution to this sort of problem. Yet I still can't replace MariaDB with mysql-server and am seemingly unable to remove MariaDB.
What is left on the system in /var/lib/dpkg/info/mariadb* that I want gone is:
-rw-r--r-- 1 root root 118 Oct 27 14:02 mariadb-client-10.3.list
-rwxr-xr-x 1 root root 174 Aug 2 10:53 mariadb-client-10.3.postrm
-rw-r--r-- 1 root root 28 Oct 27 14:02 mariadb-common.list
-rwxr-xr-x 1 root root 361 Aug 2 10:53 mariadb-common.postrm
-rw-r--r-- 1 root root 509 Oct 27 14:02 mariadb-server-10.3.list
-rwxr-xr-x 1 root root 3449 Aug 2 10:53 mariadb-server-10.3.postrm
Yet anything I try, like:
sudo mv /var/lib/dpkg/info/mariadb* /tmp/
sudo dpkg --remove --force-remove-reinstreq mariadb (or variations)
gives only:
dpkg: warning: ignoring request to remove mariadb-server which isn't installed
So how do I rebuild the database of installed packages and get a healthy system again?
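One hedged guess: the warning mentions mariadb-server, while the leftover entries are for mariadb-server-10.3 and friends, so dpkg may simply need the exact package names:
# restore the files moved to /tmp first so dpkg's database is consistent
sudo mv /tmp/mariadb* /var/lib/dpkg/info/
# see what dpkg actually has recorded for MariaDB
dpkg -l | grep -i mariadb
# purge the leftovers by their exact names (taken from the .list files above)
sudo dpkg --purge --force-remove-reinstreq mariadb-server-10.3 mariadb-client-10.3 mariadb-common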
I'm writing some loadable modules for Zabbix and, as such, compiling shared objects. I've written one that uses the MySQL C API to read some data from tables; it's fairly standard and includes:
#include <my_global.h>
#include <mysql.h>
My gcc command looks like so (expanded mysql_config for clarity):
gcc -fPIC -shared -o zbx_mysql.so zbx_mysql.c -I/usr/lib64/mysql `mysql_config --cflags` -I/opt/zabbix/3.2/include -L/usr/lib64/mysql -lmysqlclient -lpthread -lm -lrt -ldl
Contents of /usr/lib64/mysql:
-rw-r--r-- 1 root root 21358968 Sep 13 17:15 libmysqlclient.a
lrwxrwxrwx 1 root root 20 Nov 19 23:19 libmysqlclient_r.so.18 -> libmysqlclient.so.18
lrwxrwxrwx 1 root root 24 Nov 19 23:19 libmysqlclient_r.so.18.1.0 -> libmysqlclient.so.18.1.0
lrwxrwxrwx 1 root root 20 Nov 19 23:19 libmysqlclient.so -> libmysqlclient.so.20
lrwxrwxrwx 1 root root 24 Nov 19 23:19 libmysqlclient.so.18 -> libmysqlclient.so.18.1.0
-rwxr-xr-x 1 root root 9580608 Sep 13 17:07 libmysqlclient.so.18.1.0
lrwxrwxrwx 1 root root 24 Nov 19 23:18 libmysqlclient.so.20 -> libmysqlclient.so.20.3.7
-rwxr-xr-x 1 root root 9884704 Sep 13 17:15 libmysqlclient.so.20.3.7
-rw-r--r-- 1 root root 44102 Sep 13 17:13 libmysqlservices.a
drwxr-xr-x 4 root root 28 Nov 19 23:18 mecab
drwxr-xr-x. 3 root root 4096 Nov 19 23:19 plugin
The .so compiles and runs fine on the dev box, but copying it to a box without mysql-devel installed yields the following error:
cannot load module "zbx_mysql.so": libmysqlclient.so.20: cannot open shared object file: No such file or directory
I can only assume this means that libmysqlclient.so.20 isn't being bundled into my .so. I'm pretty much a novice here, so any advice would be greatly appreciated.
Shared libraries aren't "bundled"; that's why they're shared. The machine you're trying to run on is obviously missing the library. The runtime libraries typically aren't in the "-dev" or "-devel" packages.
On your typical *nix system, you can have multiple versions of the same shared library installed, but normally only one development package. If you have the dev package for mysql-client 20 installed, the compiled code will link against that version. If you want your compiled code to link against mysql-client 18, install the older version of the development package.
If you need to be independent of the libraries installed on your target system, one possibility would be to link a static library instead.
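A sketch of that last option, using the static archive already present in /usr/lib64/mysql (it only works if the archive was built as position-independent code, which isn't guaranteed):
# link the client library statically instead of depending on libmysqlclient.so.20
gcc -fPIC -shared -o zbx_mysql.so zbx_mysql.c \
    `mysql_config --cflags` -I/opt/zabbix/3.2/include \
    /usr/lib64/mysql/libmysqlclient.a \
    -lpthread -lm -lrt -ldl -lstdc++
# depending on how the archive was built, extra libraries such as -lssl -lcrypto may be needed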
I'm trying to back out of a MacPorts MySQL installation and return to Snow Leopard Server's built-in MySQL server, but I cannot get it to work.
When I disable the MacPorts service and enable the built-in one, mysql.sock cannot be found (locate mysql.sock returns nothing). When I re-enable the MacPorts MySQL, mysql.sock is found, but now I cannot disable the built-in MySQL service.
Every time I try, it just re-enables it.
I have to run the following commands to get MacPorts MySQL to work upon reboots:
sudo launchctl unload -w /Library/LaunchDaemons/org.macports.mysql5.plist
sudo launchctl load -w /Library/LaunchDaemons/org.macports.mysql5.plist
ln -s /var/mysql/mysql.sock /tmp/mysql.sock
Permissions on /var/mysql (which is where the built-in service is configured to point) are:
drwxr-xr-x 111 _mysql _mysql
Permissions on the macports datadir are:
drwxr-xr-x 116 _mysql _mysql
At one time (in 2010), according to the access log file for the built-in MySQL, it started up correctly. Is there a way to manually prevent this service from starting when I reboot?
I realize how unclear my problem is, but somehow the previous admin got macports mysql tied in with the built-in mysql and I'm having a heck of a time untangling them.
The plist files of installed applications are located in /Library/LaunchDaemons/; here's what I have there:
$ ls -l /Library/LaunchDaemons/
-r--r--r-- 1 root wheel 573 Jan 10 18:33 at.obdev.littlesnitchd.plist
-rw-r--r-- 1 root wheel 567 Mar 5 19:02 com.parallels.desktop.launchdaemon.plist
lrwxr-xr-x 1 root admin 74 Jan 20 06:21 org.macports.mysql5.plist -> /opt/local/etc/LaunchDaemons/org.macports.mysql5/org.macports.mysql5.plist
lrwxr-xr-x 1 root admin 74 Oct 14 2011 org.macports.rsyncd.plist -> /opt/local/etc/LaunchDaemons/org.macports.rsyncd/org.macports.rsyncd.plist
And if you'd like to check the config of the services bundled with the OS, take a look at /System/Library/LaunchDaemons/.
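To find and permanently disable whatever starts the bundled MySQL, something along these lines might help (a sketch; the exact plist name on Snow Leopard Server may differ, so look it up first):
# look for a MySQL-related launch daemon among the bundled services
ls /System/Library/LaunchDaemons/ | grep -i mysql
# then disable it permanently (substitute the plist name found above)
sudo launchctl unload -w /System/Library/LaunchDaemons/<name-of-mysql-plist>.plist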