unshare command doesn't create new PID namespace

I'm learning Linux internals and I'm on the namespaces topic now.
I tried to play with the unshare command just to get the hang of namespaces and their essentials.
The problem is that it doesn't seem to work or, what is more probable, I'm doing something wrong.
I'd appreciate it if you could help me understand.
I'm trying to execute the busybox sh program within its own PID namespace. This is what I do:
[ab@a ~]$ sudo unshare --pid busybox sh
/home/ab # ps
PID TTY TIME CMD
6014 pts/1 00:00:00 sudo
6016 pts/1 00:00:00 busybox
6026 pts/1 00:00:00 ps
So, as far as I can see from the output of the ps command, all processes are still visible in the new environment. This is confirmed when I compare the PID namespace IDs of the newly created process and the current one. See below:
[ab@a ~]$ ps -p 6016,$$
PID TTY TIME CMD
4604 pts/0 00:00:00 bash
6016 pts/1 00:00:00 busybox
[ab@a ~]$ sudo ls -l /proc/4604/ns
total 0
lrwxrwxrwx. 1 ab ab 0 Aug 8 23:49 ipc -> ipc:[4026531839]
lrwxrwxrwx. 1 ab ab 0 Aug 8 23:49 mnt -> mnt:[4026531840]
lrwxrwxrwx. 1 ab ab 0 Aug 8 23:49 net -> net:[4026531968]
lrwxrwxrwx. 1 ab ab 0 Aug 8 23:49 pid -> pid:[4026531836]
lrwxrwxrwx. 1 ab ab 0 Aug 8 23:49 user -> user:[4026531837]
lrwxrwxrwx. 1 ab ab 0 Aug 8 23:49 uts -> uts:[4026531838]
[ab@a ~]$ sudo ls -l /proc/6016/ns
total 0
lrwxrwxrwx. 1 root root 0 Aug 9 00:07 ipc -> ipc:[4026531839]
lrwxrwxrwx. 1 root root 0 Aug 9 00:07 mnt -> mnt:[4026531840]
lrwxrwxrwx. 1 root root 0 Aug 9 00:07 net -> net:[4026531968]
lrwxrwxrwx. 1 root root 0 Aug 9 00:07 pid -> pid:[4026531836]
lrwxrwxrwx. 1 root root 0 Aug 9 00:07 user -> user:[4026531837]
lrwxrwxrwx. 1 root root 0 Aug 9 00:07 uts -> uts:[4026531838]
So the PID namespace stays the same even though I passed the --pid argument to the unshare call.
Could you please help me understand why this happens?
Thanks

Solution
You should add the --fork and --mount-proc switches to unshare, as stated in the man page:
-f, --fork
Fork the specified program as a child process of unshare rather than running it directly. This is useful
when creating a new PID namespace. Note that when unshare is waiting for the child process, then it
ignores SIGINT and SIGTERM and does not forward any signals to the child. It is necessary to send
signals to the child process.
Explanation (from man pid_namespaces):
A process's PID namespace membership is determined when the process is created and cannot be changed thereafter.
What unshare actually does when you supply --pid is unshare the PID namespace for future children: the symlink at /proc/[PID]/ns/pid_for_children for the current process then points at the new PID namespace, so children subsequently created by this process are placed in it (its children, not the process itself; this is important!).
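You can watch this happen via the pid_for_children entry itself (available in procfs since Linux 4.12); a minimal sketch, where <pid> stands for the host-side PID of the busybox shell:
# a normal process: both links point at the same PID namespace
readlink /proc/$$/ns/pid /proc/$$/ns/pid_for_children
# start a shell with only --pid (no --fork) ...
sudo unshare --pid busybox sh
# ... then inspect it from another terminal:
sudo readlink /proc/<pid>/ns/pid /proc/<pid>/ns/pid_for_children
# ns/pid is unchanged; ns/pid_for_children points at the new namespace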
So, when you supply --fork to unshare, it will fork your program (in this case busybox sh) as a child process of unshare and place it in the new PID namespace.
Why do I need --mount-proc ?
Try running unshare with only --pid and --fork and let's see what happens.
wendel@gentoo-grill ~ λ sudo unshare --pid --fork busybox sh
/home/wendel # echo $$
1
/home/wendel # ps
PID USER TIME COMMAND
12443 root 0:00 unshare --pid --fork busybox sh
12444 root 0:00 busybox sh
24370 root 0:00 {ps} busybox sh
... (a bunch more)
From echo $$ we can see that the PID is actually 1, so we know that we must be in the new PID namespace; but when we run ps we see other processes, as if we were still in the parent PID namespace.
This is because /proc is a special filesystem, called procfs, that the kernel creates in memory. From the man page:
A /proc filesystem shows (in the /proc/[pid] directories) only processes visible in the PID namespace of the process that performed the mount, even if the /proc filesystem is viewed from processes in other namespaces.
So, in order for tools such as ps to work correctly, we need to re-mount /proc using a process in the new namespace.
But, assuming that your process is in the root mount namespace, re-mounting /proc would mess up many things for the other processes in the same mount namespace, because they could then only see the new PID namespace's processes in /proc. So you should put your process in a new mount namespace too.
The good thing is that unshare has --mount-proc:
--mount-proc[=mountpoint]
Just before running the program, mount the proc filesystem at mountpoint (default is /proc). This is useful when creating a new PID namespace. It also implies creating a new mount namespace since the /proc mount would
otherwise mess up existing programs on the system. The new proc filesystem is explicitly mounted as private (with MS_PRIVATE|MS_REC).
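Putting it together, the invocation that behaves as originally expected looks like this (a minimal sketch; inside the namespace, ps should list only the busybox shell as PID 1 plus ps itself):
sudo unshare --pid --fork --mount-proc busybox sh
# inside the new namespace:
echo $$   # prints 1
ps        # shows only busybox sh (PID 1) and ps itself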
Let's verify that --mount-proc also puts your process in a new mount namespace.
bash outside:
wendel@gentoo-grill ~ λ ls -go /proc/$$/ns/{user,mnt,pid}
lrwxrwxrwx 1 0 Aug 9 10:05 /proc/17011/ns/mnt -> 'mnt:[4026531840]'
lrwxrwxrwx 1 0 Aug 9 10:10 /proc/17011/ns/pid -> 'pid:[4026531836]'
lrwxrwxrwx 1 0 Aug 9 10:10 /proc/17011/ns/user -> 'user:[4026531837]'
busybox:
wendel@gentoo-grill ~ λ doas ls -go /proc/16436/ns/{user,mnt,pid}
lrwxrwxrwx 1 0 Aug 9 10:05 /proc/16436/ns/mnt -> 'mnt:[4026533479]'
lrwxrwxrwx 1 0 Aug 9 10:04 /proc/16436/ns/pid -> 'pid:[4026533481]'
lrwxrwxrwx 1 0 Aug 9 10:17 /proc/16436/ns/user -> 'user:[4026531837]'
Notice that their user namespaces are the same, but the mount and PID namespaces aren't.
Note: you can see that I cited a lot from the man pages. If you want to learn more about Linux namespaces (or anything Unix, really), the first thing to do is read the man page of each namespace. They are well written and really informative.

Related

airflow command not found when installing in Ubuntu via WSL - how to add it to path?

I have Ubuntu 20.04 and python 3.10.6 on WSL.
I have been trying to install airflow, and am getting 'airflow: command not found' when I'm trying to do 'airflow initdb' or 'airflow info'.
I have done
export AIRFLOW_HOME=~/airflow
and when I run
myname@LAPTOP-28BMMQV7:/root$ ls -l ~/.local/bin
I can see airflow in the list of files.
drwxrwxr-x 2 myname myname 4096 Nov 20 14:17 __pycache__
-rwxrwxr-x 1 myname myname 3472 Nov 20 14:17 activate-global-python-argcomplete
-rwxrwxr-x 1 myname myname 215 Nov 20 14:17 airflow
-rwxrwxr-x 1 myname myname 213 Nov 20 14:17 alembic
When I run this command to see where my Python is, I see this:
myname@LAPTOP-28BMMQV7:/root$ ls -l /usr/bin/python*
lrwxrwxrwx 1 root root 10 Aug 18 11:39 /usr/bin/python3 -> python3.10
lrwxrwxrwx 1 root root 17 Aug 18 11:39 /usr/bin/python3-config -> python3.10-config
-rwxr-xr-x 1 root root 5912936 Nov 2 18:53 /usr/bin/python3.10
I also get warnings similar to this:
WARNING: The script pygmentize is installed in '/home/myname/.local/bin' which is not on PATH.
So I need to find a way to add this directory to PATH.
I have found the following advice from the airflow documentation,
If the airflow command is not getting recognized (can happen on Windows when using WSL), then ensure that ~/.local/bin is in your PATH environment variable, and add it in if necessary:
PATH=$PATH:~/.local/bin
I am not quite sure how to do it.
I also have MySQL Workbench/Server 8.0.31 installed and want to connect it to airflow instead of SQLite. Can anybody refer me to a good guide on how to install it correctly?
I have run pip install 'apache-airflow[mysql]'.
You were so close! I think your local Python (and your terminal, whenever you tried airflow db init) was not able to see the airflow you installed, because it is not on your PATH.
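A minimal sketch of the PATH fix itself (assuming bash is your login shell on WSL):
export PATH="$PATH:$HOME/.local/bin"                       # current session only
echo 'export PATH="$PATH:$HOME/.local/bin"' >> ~/.bashrc   # persist for new shells
source ~/.bashrc
airflow version   # should now resolve to ~/.local/bin/airflow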
There is a video series I go to whenever I need to install Airflow for a fellow coworker.
The first video shows how to install Airflow locally, and the second video shows how to write a DAG.
More importantly, the third video shows how to connect to a different database, just like you wanted.

SuSE Linux 15 SP2 update to SP3 && symbol XCRYPT_2.0 in libcrypt.so.1.1.0

Background: we produce a big Library Management System, with the server parts written in C, compiled on SLES 15 and deployed to ~100 customers. The version in question was compiled on SLES 15 SP2 a year ago, and our internal IT department has meanwhile updated the dev and QA hosts to SP3.
It turned out that with this update from SP2 to SP3, libcrypt.so moved to a new location, from /lib64 to /usr/lib64, and contains a new symbol:
strings /usr/lib64/libcrypt.so.1.1.0 | grep XCRYPT_2.0
XCRYPT_2.0
# rpm -q -f /usr/lib64/libcrypt.so.1
libcrypt1-4.4.15-150300.4.2.41.x86_64
# zypper info libcrypt1
Information for package libcrypt1:
----------------------------------
Repository : SLE-Module-Basesystem15-SP3-Updates
Name : libcrypt1
Version : 4.4.15-150300.4.2.41
Arch : x86_64
If you now compile a server application on SP3 and ship it (as a fix for an urgent bug) to customers who are still using SP2, the application requires this symbol, which their libcrypt does not provide, and so it no longer starts:
/opt/lib/sisis/avserver/batch/bin/prg/BASTVL: /lib64/libcrypt.so.1: version `XCRYPT_2.0' not found (required by /opt/lib/sisis/avserver/batch/bin/prg/BASTVL)
# strings /lib64/libcrypt.so.1 | grep XCR
# strings /usr/lib64/libcrypt.so.1 | grep XCR
strings: '/usr/lib64/libcrypt.so.1': No such file
# rpm -q -f /lib64/libcrypt.so.1
glibc-2.26-13.48.1.x86_64
# rpm -q -f /usr/lib64/libcrypt.so.1
error: file /usr/lib64/libcrypt.so.1: No such file or directory
In other words, our internal update from SP2 to SP3 makes it impossible to deliver fixes to customers running SP2; they would have to update to SP3 as well before installing fixes, at least if libcrypt.so is involved.
Any comments or hints for a workaround?
In the end I compiled libxcrypt from source with
git clone https://github.com/besser82/libxcrypt.git
cd libxcrypt
./autogen.sh
./configure --prefix /usr/local/sisis-pap/libxcrypt
make
sudo make install
ls -l /usr/local/sisis-pap/libxcrypt/lib64
total 1300
-rw-r--r-- 1 root root 635620 26. Jul 14:09 libcrypt.a
-rwxr-xr-x 1 root root 945 26. Jul 14:09 libcrypt.la
lrwxrwxrwx 1 root root 17 26. Jul 14:09 libcrypt.so -> libcrypt.so.1.1.0
lrwxrwxrwx 1 root root 17 26. Jul 14:09 libcrypt.so.1 -> libcrypt.so.1.1.0
-rwxr-xr-x 1 root root 681656 26. Jul 14:09 libcrypt.so.1.1.0
lrwxrwxrwx 1 root root 10 26. Jul 14:09 libowcrypt.a -> libcrypt.a
lrwxrwxrwx 1 root root 11 26. Jul 14:09 libowcrypt.so -> libcrypt.so
lrwxrwxrwx 1 root root 13 26. Jul 14:09 libowcrypt.so.1 -> libcrypt.so.1
lrwxrwxrwx 1 root root 10 26. Jul 14:09 libxcrypt.a -> libcrypt.a
lrwxrwxrwx 1 root root 11 26. Jul 14:09 libxcrypt.so -> libcrypt.so
and pointed our application via LD_LIBRARY_PATH to use this version of libcrypt.so.1.
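In practice that can look like the sketch below (the install prefix is from the build above and the binary path is from the error message earlier; a wrapper script would set this before starting each server process):
export LD_LIBRARY_PATH=/usr/local/sisis-pap/libxcrypt/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
ldd /opt/lib/sisis/avserver/batch/bin/prg/BASTVL | grep libcrypt   # should resolve to the local build
/opt/lib/sisis/avserver/batch/bin/prg/BASTVL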

How to fix the 'Can't load' error when a Perl script executes 'use DBD::mysql;'

For years, I have been running a Perl script on my Synology NAS. This script writes to a MariaDB database. I never had any errors, until the migration to DSM7. This is what happened ...
I never had any problem when using MariaDB5
I migrated my database from MariaDB5 to MariaDB10. I changed the database name and the credentials to make sure my perl script used the MariaDB10 database. Everything worked fine!
I upgraded my Synology NAS to DSM7, following the provided procedure, including the requirement to remove the MariaDB5 package.
Afterwards, I noticed the following error:
Can't load '/usr/local/lib/perl5/vendor_perl/auto/DBD/mysql/mysql.so' for module DBD::mysql: libmariadb.so.3: cannot open shared object file: No such file or directory at /usr/local/lib/perl5/core_perl/DynaLoader.pm line 193.
at ./mysql1_MDB10.pl line 2.
Compilation failed in require at ./mysql1_MDB10.pl line 2.
BEGIN failed--compilation aborted at ./mysql1_MDB10.pl line 2.
To make it easy to debug, I created a test script:
#!/usr/bin/perl
use DBD::mysql;
$solDBIDB = 'DBI:mysql:database=SOLAR2_10;host=127.0.0.1;port=3307';
$DBUser = 'mydbuser';
$DBPass = 'mydbpass';
$dbh = DBI->connect($solDBIDB, $DBUser, $DBPass) || die "Could not connect to database: $DBI::errstr";
$query = "SELECT * FROM SOLAR2_10.SUM_DATA_ITEMS";
$query_handle = $dbh->prepare($query);
# EXECUTE THE QUERY
$query_handle->execute();
I'm using a Synology NAS DS415P with DSM 7.1-42661 Update 2 (latest updates)
Perl is installed as a package (via the Synology Package center)
perl -v provides the following version information: This is perl 5, version 28, subversion 3 (v5.28.3) built for i686-linux
The version of MariaDB10 is 10.3.32-1040
Update after comment from @ikegami
I'm able to find the mysql.so file:
-r-xr-xr-x 1 root root 131923 Apr 21 2021 /usr/local/lib/perl5/vendor_perl/auto/DBD/mysql/mysql.so
I'm able to find the various libmariadb.so files:
ls -l in /volume1/@appstore/MariaDB10/usr/local/mariadb10/lib
lrwxrwxrwx 1 root root 15 Nov 23 2021 libmariadb.so -> libmariadb.so.3
-rwxr-xr-x 1 root root 275356 Nov 23 2021 libmariadb.so.3
lrwxrwxrwx 1 root root 15 Nov 23 2021 libmysqlclient_r.so -> libmariadb.so.3
lrwxrwxrwx 1 root root 15 Nov 23 2021 libmysqlclient.so -> libmariadb.so.3
lrwxrwxrwx 1 root root 17 Nov 23 2021 libmysqld.so -> libmariadbd.so.19
drwxr-xr-x 3 root root 4096 Nov 23 2021 mysql
drwxr-xr-x 2 root root 4096 Nov 23 2021 pkgconfig
Additional info:
cpan -l | egrep -i "mysql"
Bundle::DBD::mysql 4.048
DBD::mysql 4.048
DBD::mysql::GetInfo undef
To me, everything looks properly installed ...
What am I missing?
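The error says that the DBD::mysql shared object cannot find libmariadb.so.3 at run time, even though the file exists under the MariaDB10 package directory. One quick way to test whether this is purely a loader search-path problem (a hedged sketch; the directory is the one from the listing above):
# point the dynamic loader at the MariaDB10 package's library directory
export LD_LIBRARY_PATH=/volume1/@appstore/MariaDB10/usr/local/mariadb10/lib
perl ./mysql1_MDB10.pl
# if the script now runs, DSM7 moved libmariadb.so.3 out of the default
# search path; set the path permanently or symlink the library into a
# directory the loader already searches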

Compiling MySQL C API client does not link libmysqlclient.so.20

I'm writing some loadable modules for Zabbix and, as such, compiling shared objects. I've written one which uses the MySQL C API to read some data from tables; it's fairly standard, and includes:
#include <my_global.h>
#include <mysql.h>
My gcc command looks like so (expanded mysql_config for clarity):
gcc -fPIC -shared -o zbx_mysql.so zbx_mysql.c -I/usr/lib64/mysql `mysql_config --cflags` -I/opt/zabbix/3.2/include -L/usr/lib64/mysql -lmysqlclient -lpthread -lm -lrt -ldl
Contents of /usr/lib64/mysql:
-rw-r--r-- 1 root root 21358968 Sep 13 17:15 libmysqlclient.a
lrwxrwxrwx 1 root root 20 Nov 19 23:19 libmysqlclient_r.so.18 -> libmysqlclient.so.18
lrwxrwxrwx 1 root root 24 Nov 19 23:19 libmysqlclient_r.so.18.1.0 -> libmysqlclient.so.18.1.0
lrwxrwxrwx 1 root root 20 Nov 19 23:19 libmysqlclient.so -> libmysqlclient.so.20
lrwxrwxrwx 1 root root 24 Nov 19 23:19 libmysqlclient.so.18 -> libmysqlclient.so.18.1.0
-rwxr-xr-x 1 root root 9580608 Sep 13 17:07 libmysqlclient.so.18.1.0
lrwxrwxrwx 1 root root 24 Nov 19 23:18 libmysqlclient.so.20 -> libmysqlclient.so.20.3.7
-rwxr-xr-x 1 root root 9884704 Sep 13 17:15 libmysqlclient.so.20.3.7
-rw-r--r-- 1 root root 44102 Sep 13 17:13 libmysqlservices.a
drwxr-xr-x 4 root root 28 Nov 19 23:18 mecab
drwxr-xr-x. 3 root root 4096 Nov 19 23:19 plugin
The .so compiles and runs fine on the dev box, but copying it to a box without mysql-devel installed yields the following error:
cannot load module "zbx_mysql.so": libmysqlclient.so.20: cannot open shared object file: No such file or directory
I can only assume this means that libmysqlclient.so.20 isn't being bundled into my .so. I'm pretty much a novice here, so if anyone can advise, it'd be greatly appreciated.
Shared libraries aren't "bundled"; that's why they're shared. The machine you're trying to run on is simply missing the library. The runtime libraries typically aren't in the "-dev" or "-devel" packages, so the target box only needs the package that provides the library itself.
On your typical *nix system, you can have multiple versions of the same shared library installed, but normally only one development package. If you have the dev package for mysql-client 20 installed, the compiled code will link against that version. If you want your compiled code to link against mysql-client 18, install the older version of the development package.
If you need to be independent of the libraries installed on your target system, one possibility would be to link a static library instead.
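A hedged sketch of the static-link variant, reusing the gcc command from the question and the libmysqlclient.a visible in the directory listing (the exact extra -l flags depend on how the library was built; mysql_config --libs shows what it expects):
gcc -fPIC -shared -o zbx_mysql.so zbx_mysql.c \
    `mysql_config --cflags` -I/opt/zabbix/3.2/include \
    /usr/lib64/mysql/libmysqlclient.a \
    -lpthread -lm -lrt -ldl
# libmysqlclient 5.7 is C++ internally, so -lstdc++ (and possibly
# -lssl -lcrypto) may be needed as well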

How to change the stdout and stderr log location of processes started by supervisor?

So in my system, the supervisor captures stderr and stdout into these files:
root@3a1a895598f8:/var/log/supervisor# ls -l
total 24
-rw------- 1 root root 18136 Sep 14 03:35 gunicorn-stderr---supervisor-VVVsL1.log
-rw------- 1 root root 0 Sep 14 03:35 gunicorn-stdout---supervisor-lllimW.log
-rw------- 1 root root 0 Sep 14 03:35 nginx-stderr---supervisor-HNIPIA.log
-rw------- 1 root root 0 Sep 14 03:35 nginx-stdout---supervisor-2jDN7t.log
-rw-r--r-- 1 root root 1256 Sep 14 03:35 supervisord.log
But I would like to change the location of gunicorn's stdout and stderr log files to /var/log/gunicorn and fix the file names, for monitoring purposes.
This is what I have done in the config file:
[program:gunicorn]
stdout_capture_maxbytes=50MB
stderr_capture_maxbytes=50MB
stdout = /var/log/gunicorn/gunicorn-stdout.log
stderr = /var/log/gunicorn/gunicorn-stderr.log
command=/usr/bin/gunicorn -w 2 server:app
However, it does not take effect at all. Did I miss anything in the configuration?
Change stdout and stderr to stdout_logfile and stderr_logfile and this should solve your issue.
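The corrected section would then look like this (a sketch based on the config from the question; stdout_logfile and stderr_logfile are the documented option names, and supervisor will not create a missing log directory, so make sure /var/log/gunicorn exists):
[program:gunicorn]
command=/usr/bin/gunicorn -w 2 server:app
stdout_logfile=/var/log/gunicorn/gunicorn-stdout.log
stderr_logfile=/var/log/gunicorn/gunicorn-stderr.log
; the *_capture_maxbytes options from the question apply only to
; capture mode, not to normal file logging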
You can also change childlogdir in the main configuration to make all the child logs appear in another directory. If you are using AUTO log mode, the logfile names will be auto-generated into the specified childlogdir without you needing to set stdout_logfile.
In order for your changes to be reflected, you need to either restart the supervisor service:
service supervisord restart
or reload the config and update the running processes:
supervisorctl reload
supervisorctl update
Documentation on this can be found here http://supervisord.org/logging.html#child-process-logs