MySQL container stuck at "Restarting..." on Dokku - mysql

I tried to create a new MySQL database on my Dokku host using
dokku mysql:create bookmarks
The container was created, but it seems unable to start. Listing the services shows:
# dokku mysql:list
NAME VERSION STATUS EXPOSED PORTS LINKS
bookmarks mysql:5.6.26 restarting - -
I am unable to stop, restart or destroy this container.
# dokku mysql:destroy bookmarks
! WARNING: Potentially Destructive Action
! This command will destroy bookmarks MySQL service.
! To proceed, type "bookmarks"
> bookmarks
-----> Deleting bookmarks
Deleting container data
! Service is already stopped
Removing container
Error response from daemon: Conflict, You cannot remove a running container. Stop the container before attempting removal or use -f
Error: failed to remove containers: [dokku.mysql.bookmarks]
I also tried rebooting the entire server, without success.
It seems something went wrong during the creation of this container that leaves the system unable to start it. The problem is that at the same time I cannot stop or restart it, and being unable to stop it, I cannot remove it and start from scratch.
# dokku mysql:stop bookmarks
! Service is already stopped
# dokku mysql:restart bookmarks
! Service is already stopped
-----> Starting container
No container exists for bookmarks
-----> Please call dokku ps:restart on all linked apps
The error message says something about "forcing" the removal, but I can't find anywhere how to do that.
Does anyone have any idea?
Thank you in advance,
Simone

So, finally, with help from the people at DigitalOcean, I was able to stop and destroy the faulty container.
Here is what I did:
Check all Docker containers (running or not):
docker ps -a
Identify the container causing the problem; in my case it was:
8549c8ec4e53 mysql:5.6.26 "/entrypoint.sh mysql" 17 hours ago Restarting (1) 2 hours ago 3306/tcp dokku.mysql.bookmarks
Kill the Docker container:
docker kill 8549c8ec4e53
Remove the service from Dokku:
dokku mysql:destroy bookmarks
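For the record, the "use -f" hint in the error refers to Docker's force-remove flag. Assuming the same container name, a single command would likely have killed and removed it in one step (I did not test this at the time):
docker rm -f dokku.mysql.bookmarks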
Hope this answer helps others having similar problems.

Related

How to resolve a Kubernetes image repository in a bad state

I have a 3-node bare-metal K3s cluster where an install fails on one node but not another.
My guess is that the Kubernetes image repository on the node where the deployment failed is somehow in a bad state. I don't know how to prove that, or how to fix it.
I did a helm install yesterday which failed with the following error:
Apr 14 14:28:41 clstr2n1 k3s[18777]: E0414 14:28:41.878018 18777 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"docker.ssgh.com/device-api:1.2.0-SNAPSHOT\": failed to copy: httpReadSeeker: failed open: could not fetch content descriptor sha256:cd5b8d67fe0f3675553921aeb4310503a746c0bb8db237be6ad5160575a133f9 (application/vnd.docker.image.rootfs.diff.tar.gzip) from remote: not found" image="docker.ssgh.com/device-api:1.2.0-SNAPSHOT"
I verified that I could pull the image from the repository using docker pull docker.ssgh.com/device-api:1.2.0-SNAPSHOT on my development VM and it worked as expected.
I then set the nodeName attribute in the pod specification to force it onto one of the other nodes, and the deployment worked as expected.
In addition I also used cURL to fetch the content descriptor, which worked as expected.
Edit, for further detail:
My original install included 6 different charts. Initially only 2 of the 6 installed correctly; the remaining 4 reported image pull errors. I deleted the failing 4 and tried again, and this time 2 of the 4 failed. I deleted the failing 2 and tried again. These 2 continued to fail unless I specified a different node, in which case they worked. I deleted them again and waited for an hour to see if Kubernetes would clean up the mess. When I tried again, 1 of them worked, but the other continued to fail. I left it overnight, and it is still failing this morning, unless I force it onto a different node.
It is worth noting that the nodes in question are able to download other images from the same private repo without issue.
There can be multiple reasons for your pod not pulling the image on a particular node:
Docker on the non-working node does not trust the image repo
Docker cannot verify the CA that issued the repo's certificate
The firewall is not open to the image repo on the non-working node
Troubleshoot with the following steps to find the cause of the issue (a command sketch follows the list):
Check connectivity to the image repo from the non-working node
Check the Docker config on the non-working node to see whether it allows the image repo
Do a manual image pull on the non-working node
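On a stock K3s node the runtime is containerd rather than Docker, so a rough sketch of those checks might look like this (the registry host and image come from the question; the paths assume K3s defaults):
# 1. connectivity and TLS handshake to the registry from the failing node
curl -v https://docker.ssgh.com/v2/
# 2. how private registries are declared to the K3s runtime
cat /etc/rancher/k3s/registries.yaml
# 3. manual pull with the runtime K3s actually uses
sudo k3s crictl pull docker.ssgh.com/device-api:1.2.0-SNAPSHOT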

MySQL in a Docker container can't start

This started today and I couldn't figure it out.
The MySQL Docker container logs reveal the error below, but I have no idea where to fix it or why it started all of a sudden.
Any ideas?
2021-05-04T16:32:44.941492Z 0 [ERROR] unknown option '--root#cc1dd09ee7b4:/etc/mysql'
I started a new Docker MySQL container and copied the data directory over from the old one, which "solved" the problem.
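For anyone hitting the same message: the rejected option looks like a shell prompt (root#cc1dd09ee7b4:/etc/mysql) that was accidentally pasted into the container's command or a config file, so mysqld receives it as an argument. Inspecting the container's command should show where it came from (the container name here is a placeholder):
# show the arguments the entrypoint passes to mysqld
docker inspect --format '{{.Config.Cmd}}' old-mysql-container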

Running Github Actions on OSX results in "Could not find domain for port (Aqua)"

I've followed the directions here, but when I run ./svc.sh run, I receive the following error:
Could not find domain for port (Aqua)
I'm SSH-ing into a box to run this command. It seems to work fine when I'm not in a headless session, but I need this to run headless and as a background service. Has anyone else run into this?
I was able to resolve this; it's addressed here:
sudo cp {/Users/xxx/Library/LaunchAgents,/Library/LaunchDaemons}/your.plist
I was able to reboot my machine without logging in and see the runner active.
The solution by #futbolpal is not ideal, because LaunchDaemons don't have access to the keychain.
Better to copy to LaunchAgents instead, like:
sudo cp {/Users/xxx/Library/LaunchAgents,/Library/LaunchAgents}/your.plist
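Note the brace expansion: cp {SRC,DST}/your.plist expands to cp SRC/your.plist DST/your.plist, so this copies the agent from the user directory to the system-wide one. If the agent is not picked up after a reboot, loading it explicitly may help (a sketch; your.plist is the same placeholder as in the answers above):
launchctl load /Library/LaunchAgents/your.plist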

docker ps -a doesn't show a stopped mysql client container

I am connecting to a MySQL container from another container that runs the mysql client. When I exit the client, its container obviously stops. But when I do a docker ps -a, this container doesn't show up. I have not been able to find a reason for this. I am following these instructions to start the containers. Any ideas would be helpful.
The --rm option passed to docker run automatically removes the container after it stops.
See clean up flag:
By default a container’s file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term foreground processes, these container file systems can really pile up. If instead you’d like Docker to automatically clean up the container and remove the file system when the container exits, you can add the --rm flag
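A quick way to see the difference (the image tag and the MySQL hostname are placeholders):
# with --rm, the client container is deleted as soon as you exit
docker run -it --rm mysql:5.7 mysql -hsome-mysql -uroot -p
# without --rm, the exited container remains until removed manually
docker run -it --name mysql-client mysql:5.7 mysql -hsome-mysql -uroot -p
docker ps -a
The second run leaves mysql-client listed with status "Exited"; the first never shows up.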

Cannot login to phpMyAdmin, no errors shown

I have MySQL set up correctly on my Linux computer; however, I want a better way to input data into the database than the terminal. For this reason, I downloaded phpMyAdmin. However, when I try to log in to phpMyAdmin from index.php, it doesn't do anything; it seems to just refresh the page. I am putting in the correct MySQL username and password. What is the issue?
Here is a screen shot of what it shows after I click "go".
This is a possible issue when the path for saving PHP sessions is not correctly set:
the directory for storing sessions does not exist, or PHP does not have sufficient rights to write to it.
To define the session directory, simply add the following line to php.ini:
session.save_path="/tmp/php_session/"
and give write rights to the HTTP server.
Usually the HTTP server runs as user daemon in group daemon. If that is the case, the following commands will do it (the mkdir is only needed if the directory does not exist yet):
mkdir -p /tmp/php_session
chown -R :daemon /tmp/php_session
chmod -R g+wr /tmp/php_session
service httpd restart
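To confirm the new path is actually in effect after the restart, you can query PHP from the command line:
php -i | grep session.save_path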
Login fails if the session folder is not writeable. To check that, create a PHP file in your web directory with:
<?php
$sessionPath = 'undefined';
if (!($sessionPath = ini_get('session.save_path'))) {
    $sessionPath = isset($_ENV['TMP']) ? $_ENV['TMP'] : sys_get_temp_dir();
}
if (!is_writeable($sessionPath)) {
    echo 'Session directory "' . $sessionPath . '" is not writeable';
} else {
    echo 'Session directory "' . $sessionPath . '" is writeable';
}
If the session folder is not writeable, do either
sudo setfacl -R -m u:www-data:rwx <session directory>
or
sudo chmod 777 <session directory>
I am late to the game, but on the Amazon Linux AMI I could not log in to phpMyAdmin; it just kept refreshing the login screen with no errors.
I fixed it with the command below:
sudo chmod -R 755 /var/lib/php/session
I fixed my issue on CentOS 7 (with MariaDB and phpMyAdmin downloaded from the official phpMyAdmin site) by adding
session.save_path = "/var/lib/php/session"
to /etc/php.ini
and running
chown -R :lighttpd /var/lib/php/session
I also restarted php-fpm and lighttpd afterwards.
In my case the solution was to set an Apache directive properly:
ProxyPassReverseCookiePath
This was required because ProxyPass and ProxyPassReverse were in use, but cookie paths are not rewritten automatically.
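A hypothetical vhost excerpt for that kind of setup (the hostname and paths are placeholders, not from the original answer):
ProxyPass /phpmyadmin/ http://backend.internal/phpmyadmin/
ProxyPassReverse /phpmyadmin/ http://backend.internal/phpmyadmin/
ProxyPassReverseCookiePath / /phpmyadmin/
The last directive rewrites the cookie path the backend sets (/) to the path the browser actually sees (/phpmyadmin/), so the session cookie is sent back on subsequent requests.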
It would be great if phpMyAdmin showed something like "session not found", or anything at all, when the password is sent with POST.
Do you have a .htaccess file in one of the parent directories that strips index.php off the URL by doing a 301 redirect?
301 redirects discard the form data and redirect you as if you hadn't submitted anything, so you get returned to the login page.
To fix it, create a local .htaccess file in the phpmyadmin directory with the single line RewriteEngine On. This overrides the inherited rewrite rule with nothing, as shown below.
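The whole file would look like this (assuming mod_rewrite is enabled):
RewriteEngine On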
You may need to clear the browser cache, as Chrome aggressively caches 301 redirects.
In my case the hard drive was full.
Use df -h to check the space left on your hard drive. If you want, you can free some space with sudo apt-get clean, which removes cached package files.
I hope this will help some future users.
I ran these commands and it worked for me:
sudo service httpd restart
sudo service mysqld stop
sudo service mysqld start
Try searching the web for installation or setup guides for phpMyAdmin. Look at two or three of these and make sure you have covered all the required steps. (If you have already done so, please include which guides you followed in the question.)
See if it helps to edit config.inc.php (acecoder mentioned this as well).
Check if this guide is of any help.
Which distro are you on? Try searching for the name of the distro you are using together with "phpMyAdmin guide" or "phpMyAdmin setup howto".
If you encounter errors along the way, post the error text here if it's short (or paste it via a pastebin-like site if it's long).
Are you sure that MySQL is running? I had the same issue after a database import filled up the volume containing the MySQL database. After changing various permissions and clearing sessions, I tried to restart MySQL (/etc/init.d/mysql restart) and it failed because the volume was full. After enlarging /var and starting MySQL successfully, I was able to log into phpMyAdmin just fine.
If you get an error like:
Host 'host_name' is blocked because of many connection errors.
log in to MySQL as root and run the FLUSH HOSTS command:
1. mysql -u root -p
2. mysql> FLUSH HOSTS;
After this I was able to log in to phpMyAdmin again.
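If the block keeps recurring, the threshold that triggers it can also be raised (the value is illustrative, not from the original answer):
mysql -u root -p -e "SET GLOBAL max_connect_errors = 10000;"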
phpMyAdmin will show errors when login fails. If it doesn't, it means that your setup has an error.
The most likely place to check is your php.ini settings. Since there doesn't seem to be an official list of phpMyAdmin-compatible settings, it's mostly trial and error.
Make sure you have enabled everything that needs to be enabled. Also check that you did not enable uncommon php.ini settings (like enable_post_data_reading = Off), because phpMyAdmin assumes "the usual ones".
To ease debugging, start with a clean default php.ini file, then tweak it line by line to see which setting is causing the error. (Don't forget that you need to restart your server after changing php.ini for the changes to take effect.)
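As a starting point, these are the kinds of php.ini lines worth double-checking (illustrative values; exact extension names vary by distro and PHP version):
enable_post_data_reading = On
session.save_path = "/var/lib/php/session"
extension = mysqli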
In my case it was due to an old Apache session.
Stop Apache, clear all pending session files in your session.save_path directory (example: /var/lib/php/session) and restart Apache.
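Roughly, assuming a distro where Apache runs as the httpd systemd service and the default session path above:
sudo systemctl stop httpd
# PHP session files are named sess_<id>
sudo rm -f /var/lib/php/session/sess_*
sudo systemctl start httpd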
Make sure to set a random 32-character key as the $cfg['blowfish_secret'] value in config.inc.php. That solved it for me.
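One way to generate such a key (24 random bytes encode to exactly 32 base64 characters):
openssl rand -base64 24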
I didn't realize I needed to restart MariaDB after modifying config.inc.php:
service mariadb restart
Otherwise, at least in my case, the changes didn't take effect. Also make sure your PHP session directory is writable by the web server (typically session.save_path = "/var/lib/php/session").