How to run strace inside CoreOS toolbox container?

I run CoreOS and need to run strace on a certain process. However:
strace -s 99 -ffp 8259
strace: attach: ptrace(PTRACE_SEIZE, 8259): Operation not permitted
I opened up the script that spins up the toolbox and found this:
sudo systemd-nspawn \
--directory="${machinepath}" \
--capability=all \
--share-system \
${TOOLBOX_BIND} \
--user="${TOOLBOX_USER}" "$#"
Which is a namespace container. It looks like a permissions issue, but I don't know how to give my container permission to attach strace to a process outside of it. My CoreOS version: DISTRIB_RELEASE=1185.5.0
Any help is appreciated.

Short answer:
echo 0 > /proc/sys/kernel/yama/ptrace_scope
Longer answer:
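The "Operation not permitted" comes from the Yama security module on the host kernel: with kernel.yama.ptrace_scope set to 1 (the default on many distributions), PTRACE_ATTACH/PTRACE_SEIZE is only allowed on the caller's own descendants, and setting it to 0 restores classic ptrace behaviour. A minimal sketch of relaxing it from the CoreOS host (the sysctl name is the standard kernel one; the change lasts until reboot unless persisted in a sysctl.d drop-in):
# check the current Yama restriction level (1 = descendants only, 0 = classic)
sysctl kernel.yama.ptrace_scope
# relax it for this boot; equivalent to the echo above
sudo sysctl -w kernel.yama.ptrace_scope=0
# then re-run the attach from the toolbox
strace -s 99 -ffp 8259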

Related

What is the right way to increase the hard and soft ulimits for a singularity-container image?

The task I want to complete: I need to run a Python package inside a singularity-container, and it needs to open at least some 9704 files. This is the first I have heard of such a limit, and from searching around it seems to be controlled by the system's ulimit.
What I currently have is the following def file.
I am setting the "* hard nofile" and "* soft nofile" entries to 15000. The sed line does edit the conf file, but within the singularity shell my ulimit is still the default 1024.
Bootstrap: docker
From: fedora
%post
dnf -y update
dnf -y install nano pip wget libXcomposite libXcursor libXi libXtst libXrandr alsa-lib mesa-libEGL libXdamage mesa-libGL libXScrnSaver
wget -c https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh
/bin/bash Anaconda3-2020.02-Linux-x86_64.sh -bfp /usr/local
conda config --file /.condarc --add channels defaults
conda config --file /.condarc --add channels conda-forge
conda update conda
sed -i '2s/#/\n* hard nofile 15000\n* soft nofile 15000\n\n#/g' /etc/security/limits.conf
bash
%runscript
python /Users/lamsal/count_of_monte_cristo/orthofinder_run/OrthoFinder_source/orthofinder.py -f /Users/lamsal/count_of_monte_cristo/orthofinder_run/concatanated_FAs/
I am following the “official” instructions to change the ulimits for a RHEL-based system from IBM's webpage here: https://www.ibm.com/docs/en/rational-clearcase/9.0.2?topic=servers-increasing-number-file-handles-linux-workstations
Is the sed line not the right way to change ulimits for a singularity image?
Short answer:
Change the value on the host OS.
Long answer:
In this instance, running a singularity container is best thought of as any other binary you're executing in your host OS. It creates its own separate environment, but otherwise it follows the rules and restrictions of the user running it. Here, the ulimit is taken from the host kernel and completely ignores any configs that may exist in the container itself.
Compare the output from the following:
# check the ulimit on the host
ulimit -n
# check the ulimit in the singularity container
singularity exec -e image.sif ulimit -n
# docker only cares about container config settings
docker run --rm fedora:latest ulimit -n
# change your local ulimit
ulimit -n 4096
# verify it has changed
ulimit -n
# singularity has changed
singularity exec -e image.sif ulimit -n
# ... but docker hasn't
docker run --rm fedora:latest ulimit -n
To have a persistent fix, you'll need to modify the setting on your host OS. Assuming you're on macOS (the /Users paths suggest it), that means raising the host's open-file limit there.
If you don't have root privileges, or you're only doing this intermittently, you can run ulimit yourself before invoking singularity. Alternatively, you could use a wrapper script that sets the limit and then runs the image, as in the sketch below.
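A minimal sketch of such a wrapper (the script name, image name, and the 15000 value are placeholders; raising the soft limit this way only works up to the hard limit configured on the host):
#!/usr/bin/env bash
# run_in_container.sh - hypothetical wrapper: raise the open-file soft limit
# for this shell only, then hand off to singularity with any passed arguments
ulimit -n 15000
exec singularity run image.sif "$@"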

Docker container DB doesn't show up

Hello, I'm new to Docker. I am working through their getting-started tutorial and ran into a problem which I can't solve, and I don't understand why it doesn't work. First of all, I create a network using:
$ docker network create todo-app
After that, I set up a MySQL database container and connect it to the network with the following command:
$ docker run -d \
--network todo-app --network-alias mysql \
-v todo-mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=todos \
mysql:5.7
I check for the container ID with:
$ docker ps
After that I use this command to get into the MySQL CLI (not sure about that step yet):
$ docker exec -it <container-id> mysql -u root -p
After getting there I use
mysql> SHOW DATABASES;
to show all the databases on my PC? But there is none listed named todos, and I don't know why it doesn't appear.
I would like to hear what you think; I'm struggling a little here. Thanks for the replies. Sorry for my English skills.
Run the container in the foreground and check the logs.
Following Part 7: Multi-container apps, I ran into this exact issue just now.
Chances are, you have run that same command at least once.
# command to run as per the docs
docker run -d \
--network todo-app --network-alias mysql \
-v todo-mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=todos \
mysql:5.7
And the first time you ran the command, you unknowingly made a mistake. For me, I had mixed up MYSQL_DATABASE with MYSQL_ROOT_PASSWORD. Yours might be different. In any case, it seems a small mistake like this can cause the mysql:5.7 container to not be set up with the todos database. (Not entirely sure.)
Adding to that, the first time you run that command, Docker creates a todo-mysql-data volume, which does not get overwritten when you run that same command again.
So as a "fix", you might have to delete the todo-mysql-data volume first.
docker volume rm todo-mysql-data
And then re-create the todo-mysql-data volume implicitly by re-running the image with the above command; this time without mistakes.
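Putting it together, a rough sketch of the sequence (the container IDs are placeholders to fill in from docker ps -a; the last line just verifies that the todos database now exists):
# remove the old container and the stale volume so mysql re-initialises
docker rm -f <old-mysql-container-id>
docker volume rm todo-mysql-data
# re-run the docker run command from above, then verify:
docker exec -it <new-mysql-container-id> mysql -u root -p -e "SHOW DATABASES;"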
Sorry for the trouble; it was my fault, I guess, because I used the command I pointed out above, but I definitely had to use this command:
docker run -d \
--network todo-app --network-alias mysql \
--platform "linux/amd64" \
-v todo-mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=todos \
mysql:5.7
Because I'm using Linux... Such a dumb mistake, but I swear this wasn't there two months ago when I asked this question.

How to start a container in cri-o with only specifying the image name?

I am trying to achieve something like
docker run -it <image_name> bash
I want to specify the image to run and do not care about anything else.
crictl requires config files for both a container and a pod for the run command, if I am not mistaken.
[hbaba#ip-XX-XX-XXX misc]$ sudo crictl -r /run/crio/crio.sock run -h
....
USAGE:
crictl run [command options] container-config.[json|yaml] pod-config.[json|yaml]
I am looking for the simplest way of starting a container, possibly with only a specified image.
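For reference, a minimal sketch of the two config files crictl run expects, following the example layout from the crictl documentation (the names, image, and command here are illustrative, not a verified recipe):
# hypothetical pod (sandbox) config
cat > pod-config.json <<'EOF'
{
  "metadata": { "name": "debug-sandbox", "namespace": "default", "attempt": 1, "uid": "debug-sandbox-uid" },
  "log_directory": "/tmp",
  "linux": {}
}
EOF
# hypothetical container config; "sleep infinity" keeps it alive so you can exec into it
cat > container-config.json <<'EOF'
{
  "metadata": { "name": "debug" },
  "image": { "image": "docker.io/library/fedora:latest" },
  "command": ["sleep", "infinity"],
  "log_path": "debug.log",
  "linux": {}
}
EOF
# create the sandbox and container in one step, then exec a shell using the container ID it prints
sudo crictl -r /run/crio/crio.sock run container-config.json pod-config.json
sudo crictl -r /run/crio/crio.sock exec -it <container-id> bash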

Docker container bash terminal is unresponsive

I have a MySQL instance running in a Docker container. I am trying to access the bash terminal by running "docker exec -t myContainerID /bin/bash" for the container so that I can check my MySQL setup and see if it is correct. But after accessing the bash terminal, any command I run is unresponsive, even something as simple as ls. Is there any way to resolve this, or to find out what might be causing the problem? Thanks.
You seem to be missing the -i option, try running: docker exec -ti CONTAINER_ID /bin/bash
And just FYI:
--interactive , -i Keep STDIN open even if not attached
--tty , -t Allocate a pseudo-TTY
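If the goal is just to check the MySQL setup, you can also exec the mysql client directly instead of a shell; a small sketch (container name and credentials are whatever your setup uses):
# open an interactive mysql session inside the container
docker exec -it myContainerID mysql -u root -p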

Keep getting "There is already another Tungsten installation script running"

I'm trying to install Tungsten Replicator 3.0.0-524 GA to replicate from MySQL to MongoDB, but when I run cookbook/validate_cluster, the error
There is already another Tungsten installation script running
(InstallationScriptCheck)
keeps showing up.
The configuration I'm using for the cluster is:
./tools/tpm configure mysql2mongodb \
--enable-heterogenous-master=true \
--topology=master-slave \
--master=mysql \
--replication-user=boahub_boahub \
--replication-password=***** \
--slaves=tracking-mongo \
--home-directory=/opt/mysql \
--svc-extractor-filters=replicate \
--property=replicator.filter.replicate.do=boahub_boahub.urls,boahub_boahub.media_campaigns \
--start-and-report
./tools/tpm configure mysql2mongodb \
--hosts=tracking-mongo \
--datasource-type=mongodb \
--replication-port=27017
./tools/tpm -v install --install-directory=/opt/tungsten
I've configured both "mysql" and "tracking-mongo" hosts in the /etc/hosts file.
So far I've tried to:
1. Reboot the system
2. Clear my /opt/tungsten installation directory
3. Delete the deploy.cfg
The verbose output of tools/tpm -v install shows that the SSH connection between the two machines succeeded, and the command used to check for another Tungsten script is:
ps ax 2>/dev/null | grep configure.rb | grep -v firewall | grep -v grep | awk '{print $1}'
When I execute this command it comes up with nothing.
What can I do? Is there any way to ignore this check?
Thanks!
You can skip any check using the --skip-validation-check option (it requires an argument). You can use this option multiple times without a problem.
The option takes as its argument the name of the check, which can be found in the error message.
In your case you can add the following option to your command:
--skip-validation-check InstallationScriptCheck
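With the install command from the question, that would look something like:
./tools/tpm -v install --install-directory=/opt/tungsten --skip-validation-check InstallationScriptCheck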
I have a feeling this may help you get through.
Have you tried installing your master and slave separately? Do a
./tools/tpm install
after configuring & installing master, clear the configuration with
./tools/tpm configure defaults --reset
Then apply your slave settings and do the other tpm install.
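A rough sketch of that order, reusing the commands from the question (the elided flags stand in for the asker's other master-side options; whether configure defaults --reset is needed at exactly that point is worth checking against the tpm documentation):
# configure and install the MySQL master side first
./tools/tpm configure mysql2mongodb --enable-heterogenous-master=true --topology=master-slave --master=mysql ...
./tools/tpm -v install --install-directory=/opt/tungsten
# clear the staged configuration
./tools/tpm configure defaults --reset
# then configure the MongoDB slave side and install again
./tools/tpm configure mysql2mongodb --hosts=tracking-mongo --datasource-type=mongodb --replication-port=27017
./tools/tpm -v install --install-directory=/opt/tungsten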
A few weeks ago I ran into some similar trouble (maybe; I can't recall it clearly). The phrase "another script" in your post brought back some memory of that for me; hope it works.
Good Luck!