I am having some trouble with Singularity on a system where I don't have sudo permissions. Singularity is installed as a module on this system so that other users can use it, and I am the first user trying to use it. I am trying to build a very simple container with --fakeroot (-f).
$ module load singularity
$ singularity --version
singularity version 3.6.4
$ cat t.def
Bootstrap: library
From: ubuntu:18.04
Stage: build
$ singularity build -f t.sif t.def
ERROR : Failed to create container process: Invalid argument
$ singularity -d build -f t.sif t.def
DEBUG [U=1013,P=8140] persistentPreRun() Singularity version: 3.6.4
DEBUG [U=1013,P=8140] persistentPreRun() Parsing configuration file /opt/singularity/3.6.4//etc/singularity/singularity.conf
DEBUG [U=1013,P=8140] handleConfDir() /home/Thomas.Robinson/.singularity already exists. Not creating.
DEBUG [U=1013,P=8140] init() Use starter binary /opt/singularity/3.6.4/libexec/singularity/bin/starter-suid
VERBOSE [U=0,P=8140] print() Set messagelevel to: 5
VERBOSE [U=0,P=8140] init() Starter initialization
DEBUG [U=0,P=8140] get_pipe_exec_fd() PIPE_EXEC_FD value: 8
VERBOSE [U=0,P=8140] is_suid() Check if we are running as setuid
VERBOSE [U=0,P=8140] priv_drop() Drop root privileges
DEBUG [U=1013,P=8140] init() Read engine configuration
DEBUG [U=1013,P=8140] init() Wait completion of stage1
VERBOSE [U=1013,P=8150] priv_drop() Drop root privileges permanently
DEBUG [U=1013,P=8150] set_parent_death_signal() Set parent death signal to 9
VERBOSE [U=1013,P=8150] init() Spawn stage 1
DEBUG [U=1013,P=8150] startup() fakeroot runtime engine selected
VERBOSE [U=1013,P=8150] startup() Execute stage 1
DEBUG [U=1013,P=8150] StageOne() Entering stage 1
VERBOSE [U=1013,P=8140] wait_child() stage 1 exited with status 0
DEBUG [U=1013,P=8140] cleanup_fd() Close file descriptor 4
DEBUG [U=1013,P=8140] cleanup_fd() Close file descriptor 5
DEBUG [U=1013,P=8140] cleanup_fd() Close file descriptor 6
DEBUG [U=1013,P=8140] init() Set child signal mask
DEBUG [U=1013,P=8140] init() Create socketpair for master communication channel
DEBUG [U=1013,P=8140] init() Create RPC socketpair for communication between stage 2 and RPC server
VERBOSE [U=1013,P=8140] user_namespace_init() Create user namespace
VERBOSE [U=1013,P=8140] pid_namespace_init() Create pid namespace
ERROR [U=1013,P=8140] init() Failed to create container process: Invalid argument
The debug output doesn't give any information about what the invalid argument is. I tried removing one of the files, and it just told me I was missing an argument. Is there something wrong with the Singularity install, my .def file, or am I doing something totally wrong?
Thanks for the help.
Could you check whether the system has unprivileged user namespace creation enabled? It is not enabled by default.
Documentation: https://sylabs.io/guides/3.6/admin-guide/user_namespace.html#user-namespace-requirements
In my environment (CentOS 7.4), I enabled it with:
sudo sh -c 'echo user.max_user_namespaces=15000 > /etc/sysctl.d/90-max_net_namespaces.conf'
sudo sysctl -p /etc/sysctl.d/90-max_net_namespaces.conf
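To verify from an unprivileged account that user namespace creation actually works, a quick check along these lines should do (a sketch assuming util-linux's unshare is available):
$ sysctl user.max_user_namespaces        # should report a value greater than 0
$ unshare --user --map-root-user whoami  # should print 'root' when user namespaces work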
The same issue is reported in:
https://github.com/hpcng/singularity/issues/5585
[lab-user@studentvm 0 ~]$ oc get pods
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
Via the command-line flag --kubeconfig
Via the KUBECONFIG environment variable
In your home directory as ~/.kube/config
To view or setup config directly use the 'config' command.
Please tell me how to run oc commands.
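The error message itself lists the ways to provide the configuration. A minimal sketch of the two common ones, where the server URL, username, and file path are placeholders for your lab's actual values:
$ oc login https://api.example.com:6443 -u lab-user   # creates ~/.kube/config on success
$ oc get pods
or, if you were handed an existing kubeconfig file:
$ export KUBECONFIG=/path/to/config
$ oc get pods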
Recently, I tried to debug a cross-compiled ARM program with QEMU, but I got stuck on an issue.
This is the code, very simple:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    printf("aaa\n");
    int status;
    status = system("./bin/ls");
    printf("Result of [system] = 0x%x\n", status);
    return 0;
}
When I launch the program using the command:
spy@spy-virtual-machine:/usr/arm-linux-gnueabihf$ ./qemu-arm-static -L ./ ./a.out
The output is:
aaa
bin include lib test.c qemu-arm-static a.out qemu-arm shell.sh
Result of [system] = 0x0
But when I launch the program with chroot like this:
spy@spy-virtual-machine:/usr/arm-linux-gnueabihf$ sudo chroot ./ ./qemu-arm-static -L ./ ./a.out
The output turns out to be:
aaa
Result of [system] = 0x7f00
Apparently the system("./bin/ls") is not run as expected.
But the ./bin/ls command can be run by chroot & QEMU:
spy@spy-virtual-machine:/usr/arm-linux-gnueabihf$ sudo chroot ./ ./qemu-arm-static -L ./ ./bin/ls
bin include lib test.c qemu-arm-static a.out qemu-arm shell.sh
Now I'm totally confused. Can anybody give me a hint about what is going on, and tell me what I can do to get the right output from system() when using the chroot command?
From man 3 system:
system() executes a command specified in command by calling /bin/sh -c command.
So you need a working shell inside the chroot in order to be able to successfully invoke system().
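As a first check, it is worth confirming that an ARM shell actually exists at bin/sh inside the chroot directory (the path follows the layout shown in the question):
spy@spy-virtual-machine:/usr/arm-linux-gnueabihf$ file bin/sh
This should report an ARM ELF executable (or a symlink to one), not an error.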
The following happens when this program runs under qemu-arm-static: system() results in fork() followed by exec() of the shell. When you run it without chroot, this is your host (x86) shell. The shell then calls fork() followed by exec() of bin/ls (ARM). My understanding is that this can only succeed if you have a binfmt handler for the ARM ELF format registered on your host. In that case the registered qemu-arm gets loaded and executes bin/ls.
When you do the same thing in the chroot, the host shell is not accessible, so system() results in an exec() call for bin/sh (ARM). It looks like your binfmt handler is not accessible inside the chroot, and because of that loading bin/sh fails and an error status is returned from system().
You can check the registered binfmt handlers in /proc/sys/fs/binfmt_misc.
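For example (the entry name qemu-arm is typical for a handler registered by qemu-user-static, but it may differ on your system):
$ ls /proc/sys/fs/binfmt_misc
$ cat /proc/sys/fs/binfmt_misc/qemu-arm   # the 'interpreter' line shows the handler path
On kernels 4.8 and later, a handler registered with the F (fix binary) flag is opened once at registration time, which lets it keep working inside a chroot where the interpreter path does not exist.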
Please, do you know how to resolve this issue?
I have searched everywhere without finding an answer.
06:45 SELinux is preventing systemd from open access on the file /root/.pm2/pm2.pid. For complete SELinux messages run: sealert -l d84a5a0b-cfcf-4cb9-918a-c0952bf70600 setroubleshoot
06:45 pm2-root.service: Can't convert PID files /root/.pm2/pm2.pid O_PATH file descriptor to proper file descriptor: Permission denied systemd 2
06:45 Failed to start PM2 process manager.
I have executed this command: sealert -l d84a5a0b-cfcf-4cb9-918a-c0952bf70600
Raw audit messages:
type=AVC msg=audit(1591498085.184:7731): avc: denied { open } for pid=1 comm="systemd" path="/root/.pm2/pm2.pid" dev="dm-0" ino=51695937 scontext=system_u:system_r:init_t:s0 tcontext=system_u:object_r:admin_home_t:s0 tclass=file permissive=0
PM2 Version : 4.4.0
NODE version : 12.18.0
CentOS Version : 8
My systemd service:
[Unit]
Description=PM2 process manager
Documentation=https://pm2.keymetrics.io/
After=network.target
[Service]
Type=forking
User=root
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Environment=PATH=/sbin:/bin:/usr/sbin:/usr/bin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Environment=PM2_HOME=/root/.pm2
PIDFile=/root/.pm2/pm2.pid
Restart=on-failure
ExecStart=/usr/lib/node_modules/pm2/bin/pm2 resurrect
ExecReload=/usr/lib/node_modules/pm2/bin/pm2 reload all
ExecStop=/usr/lib/node_modules/pm2/bin/pm2 kill
[Install]
WantedBy=multi-user.target
Thank you
As said in the comments, I had the exact same issue.
To solve this, just run the following commands as root after trying to start the PM2 service (in your case, the start attempt would be systemctl start pm2-root):
ausearch -c 'systemd' --raw | audit2allow -M my-systemd
semodule -i my-systemd.pp
This looks pretty generic, but it works. These lines were suggested by SELinux itself; to get them, I had to run journalctl -xe after trying to start the service.
Two options:
Edit the systemd file that starts pm2 and specify an alternative location for the pm2 PID file. You'll have to make two changes: one to tell pm2 where to place the PID file, and one to tell systemd where to look for it. Replace the existing PIDFile line with the following two lines:
Environment=PM2_PID_FILE_PATH=/run/pm2.pid
PIDFile=/run/pm2.pid
Create an SELinux rule that allows this particular behavior. You can do that exactly as Backslash36 suggests in their answer. If you want to create the policy file yourself rather than generate it through audit2allow, the following should work, although you then have to compile it to a usable .pp file yourself (see the sketch after the policy).
module pm2 1.0;

require {
    type admin_home_t;
    type init_t;
    class file { read open };
}

#============= init_t ==============
# Matches the AVC in the question: init_t (systemd) denied "open" on
# /root/.pm2/pm2.pid, which is labeled admin_home_t.
allow init_t admin_home_t:file { read open };
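If you write the .te file yourself, the compile-and-install steps look roughly like this (a sketch assuming checkmodule and semodule_package from the SELinux policy tools are available, with the policy saved as pm2.te):
checkmodule -M -m -o pm2.mod pm2.te
semodule_package -o pm2.pp -m pm2.mod
semodule -i pm2.pp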
I have successfully forked and built the Context Broker source code on a CentOS 6.9 VM and now I am trying to run the functional tests as the official documentation suggests. First, I installed the accumulator-server.py script:
$ make install_scripts INSTALL_DIR=~
Verified that it is installed:
$ accumulator-server.py -u
Usage: accumulator-server.py --host <host> --port <port> --url <server url> --pretty-print -v -u
Parameters:
--host <host>: host to use database to use (default is '0.0.0.0')
--port <port>: port to use (default is 1028)
--url <server url>: server URL to use (default is /accumulate)
--pretty-print: pretty print mode
--https: start in https
--key: key file (only used if https is enabled)
--cert: cert file (only used if https is enabled)
-v: verbose mode
-u: print this usage message
Then I ran the functional tests:
$ make functional_test INSTALL_DIR=~
But the test fails and exits with the message below:
024/927: 0000_ipv6_support/ipv4_ipv6_both.test ........................................................................ (FAIL 11 - SHELL-INIT exited with code 1) testHarness.sh/IPv6 IPv4 Both : (0000_ipv6_support/ipv4_ipv6_both.test)
make: *** [functional_test] Error 11
$
I checked the file ../0000_ipv6_support/ipv4_ipv6_both.shellInit.stdout for any hint on what may be going wrong, but the error log does not lead me anywhere:
{ "dropped" : "ftest", "ok" : 1 }
accumulator running as PID 6404
Unable to start listening application after waiting 30
Does anyone have any idea about what may be going wrong here?
I checked the script that prints the error line Unable to start listening application after waiting 30 and noticed that stderr for accumulator-server.py is logged into the /tmp folder.
The accumulator_9977_stderr file had this log: 0000_ipv6_support/ipv4_ipv6_both.shellInit: line 27: accumulator-server.py: command not found
Once I saw this log, I understood the mistake I had made: I was running the functional tests with sudo, so sudo's secure_path was being used instead of my PATH variable.
In the end, running the functional tests with the command below solved the issue for me:
$ sudo "PATH=$PATH" make functional_test INSTALL_DIR=~
This can also be solved by editing the /etc/sudoers file via:
$ sudo visudo
and modifying the secure_path value, as illustrated below.
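For illustration, the line to edit inside visudo looks like the following; the exact paths vary per system, and the idea is to append the directory that contains accumulator-server.py (shown here as a hypothetical /home/youruser/bin):
Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/home/youruser/bin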
I am trying to compile ejabberd on CentOS 6. I am following the steps described at https://www.process-one.net/docs/ejabberd/guide_en.html#htoc12.
However, make aborts with a connection-timeout error.
Following is the error snippet from the command prompt:
[root@CentOS-6-64-EN ejabberd-15.04]# make
rm -rf deps/.got
rm -rf deps/.built
/usr/lib64/erlang/bin/escript rebar get-deps && :> deps/.got
==> rel (get-deps)
==> ejabberd-15.04 (get-deps)
Pulling p1_cache_tab from {git,"git://github.com/processone/cache_tab",
"cca096330ce39e8b56fe0e0c478df1ff452e7751"}
github.com[0: 192.30.252.131]: errno=Connection timed out
fatal: unable to connect a socket (Connection timed out)
Initialized empty Git repository in /root/Desktop/eJabberd/ejabberd-15.04/deps/p1_cache_tab/.git/
ERROR: git clone -n git://github.com/processone/cache_tab p1_cache_tab failed with error: 128 and output:
github.com[0: 192.30.252.131]: errno=Connection timed out
fatal: unable to connect a socket (Connection timed out)
Initialized empty Git repository in /root/Desktop/eJabberd/ejabberd-15.04/deps/p1_cache_tab/.git/
ERROR: 'get-deps' failed while processing /root/Desktop/eJabberd/ejabberd-15.04: rebar_abort
make: *** [deps/.got] Error 1
On trying the command ./rebar get-deps, I get the same connection-timeout error.
My network connectivity is fine, and it seems the GitHub link is broken. Please help!
You should try replacing the dependency links to GitHub to use https:// instead of git://.
That should fix your issue.
We will check the project to make sure all our dependencies use the https URL scheme instead of ssh.
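In the meantime, a common client-side workaround is to have git rewrite git:// GitHub URLs to https:// before rebar clones the dependencies (note this affects every GitHub clone for your user):
git config --global url."https://github.com/".insteadOf git://github.com/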