How to fix "cannot toggle freezer: cgroups not configured for container" error - containers

I'm getting the error below when I try to stop and rm a running container using podman.
WARN[0000] cannot toggle freezer: cgroups not configured for container
WARN[0000] cannot toggle freezer: cgroups not configured for container
WARN[0000] lstat : no such file or directory
WARN[0000] Failed to remove cgroup (will retry) error="rmdir /sys/fs/cgroup/systemd/user.slice/user-491465.slice/user@491465.service/user.slice/podman-344715.scope/d8258e0ae3f2a7a7f08abca04446247
I guess that, due to this, some of the processes running inside the container are not stopped properly and are still present in the host process list.
What I actually want is to gracefully exit all of the processes, the same way they exit when I give the 'exit' command while attached to the container.
Please let me know how to fix the error, or how to gracefully shut down all the processes of a running container from the host.
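For the graceful-shutdown part, podman stop sends SIGTERM to the container's main process and escalates to SIGKILL after a timeout, which is usually the cleanest way to stop everything from the host. A minimal sketch (mycontainer is a placeholder name, not one from the question):
# SIGTERM the container's PID 1, escalate to SIGKILL after 30 seconds
podman stop --time 30 mycontainer
# or send a specific signal if the app shuts down more cleanly on it
podman kill --signal SIGINT mycontainer
# remove the stopped container
podman rm mycontainer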

Related

Error while running "podman run"; error adding pod to CNI network "podman": unexpected end of JSON input

I'm new to podman, and I'm just trying to run containers on it.
(podman version 3.4.0, installed by brew, intel Core MAC)
However, when I try to run "podman run {image-name}", the errors below are thrown.
$ podman run -ti -d --name web httpd
125
Error: error configuring network namespace for container b0e70d672cb66005833c0a300c8661b88eab49e942c240d69d17587e0b75c47b: error adding pod web_2_web_2 to CNI network "podman": unexpected end of JSON input
$ podman run centos:7
Error: error preparing container a6d0bc1ad217cd8207935561dc8ff7bd33672da3fa513917f9965cb39520c449 for attach: error configuring network namespace for container a6d0bc1ad217cd8207935561dc8ff7bd33672da3fa513917f9965cb39520c449: error adding pod quirky_snyder_quirky_snyder to CNI network "podman": unexpected end of JSON input
By reading https://issueexplorer.com/issue/containers/podman/11452, I removed ~/.docker/, but the solution doesn't work in my case.
Of course, the error message says there was "unexpected end of JSON input", but I don't know how to fix it. Could anyone guess why podman doesn't work even when running these base images, or how to debug it?
Thanks in advance.
On macOS, the current machine version 3.3.1 has this problem. I had this problem on server version 3.3.1 and I do not encounter it on server version 3.4.0. You can check the server version with podman version.
Try removing the current machine and installing a newer one:
podman machine stop
podman machine rm
podman machine init --image-path next
podman machine start
Check the server version again with podman version.
Try running your image again.
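If recreating the machine doesn't help, another thing worth checking (this is an assumption on my part, not something confirmed in the thread) is whether one of the CNI network config files inside the machine VM is empty or truncated, since that would produce exactly "unexpected end of JSON input":
# open a shell inside the podman machine VM
podman machine ssh
# the default podman bridge config; the filename can vary between versions
cat /etc/cni/net.d/87-podman-bridge.conflist
# validate it as JSON, assuming python3 is available in the VM
python3 -m json.tool /etc/cni/net.d/87-podman-bridge.conflist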

Change mysql db location when installed with homebrew using Big Sur and external hard drive

Previously I had /usr/local/var/mysql symlinked to /Volumes/External/mysql, meaning all my databases were stored on the external hard drive.
I have had to reformat my machine and upgrade to Big Sur. If I set up the symlink as before, I now get the following when I try to start MySQL:
brew services start mysql
Bootstrap failed: 5: Input/output error
Error: Failure while executing; `/bin/launchctl bootstrap gui/502 /Users/jamie/Library/LaunchAgents/homebrew.mxcl.mysql.plist` exited with 5.
If I also try to change the --datadir in
/usr/local/Cellar/mysql/8.0.26/homebrew.mxcl.mysql.plist
to be
<string>--datadir=/Volumes/External/mysql</string>
I get the same error
brew services start mysql
Bootstrap failed: 5: Input/output error
Error: Failure while executing; `/bin/launchctl bootstrap gui/502 /Users/jamie/Library/LaunchAgents/homebrew.mxcl.mysql.plist` exited with 5.
I have tried
launchctl unload /Users/jamie/Library/LaunchAgents/homebrew.mxcl.mysql.plist
launchctl load /Users/jamie/Library/LaunchAgents/homebrew.mxcl.mysql.plist
but that didn't work either. It's as if it doesn't have the correct permissions. Looking at the privacy settings, you can see that httpd, which is also installed by brew, is allowed to see "Removable Volumes".
I can't add MySQL, as the '+' symbol is greyed out even though I have unlocked the panel.
The external hard drive is located at /Volumes/External/ and is an APFS (Encrypted) volume.
Any help would be much appreciated.
https://github.com/Homebrew/discussions/discussions/2092#discussioncomment-1286031
Select System Preferences->Security & Privacy->Full Disk Access
Click the lock to make changes
Click '+'
Press 'Cmd + Shift + .' to show hidden files
Select /bin/sh
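Once /bin/sh has Full Disk Access, restart the service and confirm MySQL is really using the external datadir (this assumes the default passwordless Homebrew root account; adjust credentials as needed):
brew services restart mysql
# should print /Volumes/External/mysql/
mysql -u root -e "SELECT @@datadir;"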

Gnome Boxes on Fedora 33 fails to open

I attempt to load gnome-boxes from the terminal (I'm running Fedora 33) and get the following error
$ gnome-boxes
(gnome-boxes:3194): Gtk-WARNING **: 12:34:57.343: GtkFlowBox with a model will ignore sort and filter functions
(gnome-boxes:3194): Gtk-WARNING **: 12:34:57.344: GtkListBox with a model will ignore sort and filter functions
(gnome-boxes:3194): Boxes-WARNING **: 12:34:57.904: libvirt-machine.vala:83: Failed to disable 3D Acceleration
(gnome-boxes:3194): Boxes-WARNING **: 12:34:57.913: libvirt-broker.vala:70: Failed to update domain 'fedora33-wor-2': Failed to set domain configuration: XML error: Invalid PCI address 0000:04:00.0. slot must be >= 1
(gnome-boxes:3194): Boxes-CRITICAL **: 12:34:57.916: boxes_vm_importer_get_source_media: assertion 'self != NULL' failed
Segmentation fault (core dumped)
My system:
$ uname -a
Linux localhost.localdomain 5.9.16-200.fc33.x86_64 #1 SMP Mon Dec 21 14:08:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
I don't know whether it's related, but I recently updated from kernel 5.9.11 directly to 5.9.16 (I hadn't used the PC in question for some weeks), and before that gnome-boxes was working as normal.
Please advise how I can restore gnome-boxes - I have some virtual machines that I need to access...
I faced this issue when I force-stopped Gnome Boxes while cloning a VM.
Deleting the conflicting VM will resolve your issue (in your case, 'fedora33-wor-2').
To delete the VM in Fedora, install "libvirt-client", which provides "virsh", using the command:
dnf install libvirt-client
then double-check the available VMs using:
virsh list --all
Delete the VM using the command:
virsh undefine VM_Name
@channel-fun solved the problem of starting up gnome-boxes.
But the real problem is in the cloning procedure: the XML describing the new machine is malformed.
virt-clone --original fedora33-ser --auto-clone
works properly.
I know this is an old thread, but I had the same problem recently.
I shut down Gnome Boxes whilst it was cloning a VM, and then shut down the machine.
I then couldn't open boxes, as it would just crash.
I was able to delete the VM itself, and then deleted the XML file associated with it.
To delete the VM itself, go to:
$HOME/.var/app/org.gnome.Boxes/data/gnome-boxes/images (which in my case is a symbolic link to a data drive)
and delete the VM with the name that you were cloning to (or, safer, just move it somewhere).
To delete the XML file associated with it, go to:
$HOME/.var/app/org.gnome.Boxes/config/libvirt/qemu/
and delete (or, safer, move) the file named VM_NAME.xml.
Then boxes should open ok, at least it worked for me.
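Put together as commands, those two steps look roughly like this (VM_NAME is whatever the half-finished clone was called; moving instead of deleting keeps it reversible):
# move the partially cloned disk image out of the way
mv "$HOME/.var/app/org.gnome.Boxes/data/gnome-boxes/images/VM_NAME" /tmp/
# move the libvirt domain XML that describes it
mv "$HOME/.var/app/org.gnome.Boxes/config/libvirt/qemu/VM_NAME.xml" /tmp/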
Extending Channel Fun's answer: for Ubuntu repos the package is libvirt-clients (note the plural s):
sudo apt install libvirt-clients
Check the available VMs using:
virsh list --all
Delete the VM using:
virsh undefine VM_Name
If you receive the error:
error: Refusing to undefine while domain managed save image exists
Then you can explicitly remove that also using the --managed-save flag:
virsh undefine VM_Name --managed-save

Apache Drill on cluster start error

I installed Apache Drill on a cluster with 3 nodes.
When I use the following command to start it, it does not actually run:
bin/drillbit.sh start
[screenshot: error]
I don't know how to solve it and would like your help.
ZooKeeper is running without problems.
Then I checked the log, and it shows the following info:
Exception in thread "main" org.apache.drill.exec.exception.DrillbitStartupException: Failure while initializing values in Drillbit.
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:287)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:271)
at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:267)
Caused by: org.apache.drill.exec.exception.DrillbitStartupException: Problem in finding the native library of JPAM (Pluggable Authenticator Module API). Make sure to set Drillbit JVM option 'java.library.path' to point to the directory where the native JPAM exists.:no jpam in java.library.path
I checked the java.library.path; it is the following:
/home/hadoop/bigdata/hadoop-2.7.2/lib/native/::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
So, I added the following setting:
declare -x DRILL_JAVA_LIB_PATH="/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib"
However, it does not work, and the same problem occurs as before.
The declare -x DRILL_JAVA_LIB_PATH snippet you provided will not point Drill to the PAM library. Please follow all the instructions in the Drill docs here: https://drill.apache.org/docs/using-jpam-as-the-pam-authenticator/
Note: you will have to perform those steps on all 3 nodes of your cluster.
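For reference, those docs boil down to roughly the following on each node; the directory and tarball name are examples, and the variable name reflects my reading of the linked page, so double-check it there:
# unpack the native JPAM library somewhere, e.g. /opt/pam/
mkdir -p /opt/pam
tar -xzf JPam-Linux_amd64-1.1.tgz -C /opt/pam
# in <drill_home>/conf/drill-env.sh, point the Drillbit JVM at that directory
export DRILLBIT_JAVA_OPTS="$DRILLBIT_JAVA_OPTS -Djava.library.path=/opt/pam/"
# restart the drillbit so the option takes effect
bin/drillbit.sh restart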

Openshift - Execution of post execute step failed

I have a Node.js app, and during the deployment, after installing dependencies, the following error occurred:
error: Execution of post execute step failed
warning: Failed to remove container "a167df5e218c392e42ec772d5c22311f88043ff99c71ce1a08e7af535ac3817b": Error response from daemon: {"message":"Driver devicemapper failed to remove root filesystem a167df5e218c392e42ec772d5c22311f88043ff99c71ce1a08e7af535ac3817b: Device is Busy"}
error: build error: building my-pokus/hello-seattle-2:d4b8ecde failed when committing the image due to error: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
That happens when the node that the build is scheduled on is being shut down or restarted. Try to re-spin the build so the scheduler will put it on an available node.
Should work :)
The problem is that your image is too big; if the commit takes longer than 2 minutes, this error happens.
I found a workaround here: github origin 13515
Shrink your Docker Image :)
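Before trimming, it helps to see which layers are actually big; the image name and tag below are the ones from the build error, purely as an example, so run this against whichever tag you do have locally:
# overall image size
docker images my-pokus/hello-seattle-2
# per-layer breakdown of where the size comes from
docker history my-pokus/hello-seattle-2:d4b8ecde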
Use a more recent S2I-Builder:
In order to temporarily use another version of the Docker image, the easiest way seems to be to simply pull the new image and tag it as the one used by OpenShift:
docker pull docker.io/openshift/origin-sti-builder:v1.5.0-rc.0
docker tag docker.io/openshift/origin-sti-builder:v1.5.0-rc.0 docker.io/openshift/origin-sti-builder:v1.4.1
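A quick sanity check that the retag took effect is that both tags now point at the same image ID:
docker images docker.io/openshift/origin-sti-builder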