CircleCI: MySQL starts on its own even after stopping the process

I am having some trouble with the default MySQL installation on CircleCI. In the 'post' section of 'machine', I stop MySQL with "- sudo service mysql stop". The reason for doing so is that I want to run a Docker MySQL container on port 3306. My "docker-compose up" takes some time to finish, and sometimes, before the Docker MySQL container starts, the mysql process starts again for no reason obvious to me. I have been tracking this issue using the following command:
while true; do sudo netstat -nlp | grep :3306; sleep 2; done
I have a build that ran fine, with Docker able to bind port 3306, and also a build in which mysqld started again even after being stopped, giving me the following error on docker-compose up:
ERROR: for dbm01 Cannot start service dbm01: failed to create endpoint minimum_dbm01_1 on network minimum_default: Error starting userland proxy: listen tcp 0.0.0.0:3306: bind: address already in use
ERROR: Encountered errors while bringing up the project.
Both builds are of the same commit, so there is no difference in code. What might be the issue?
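One way to make the stop stick and avoid the race is to disable the automatic respawn and wait for the port to free up before bringing up Docker. A minimal sketch, assuming an upstart-managed MySQL as on the older Ubuntu-based CircleCI images (the override file is what stops upstart from respawning the job):
sudo service mysql stop
# prevent upstart from respawning the job (assumption: upstart-managed mysql)
echo manual | sudo tee /etc/init/mysql.override
# block until nothing listens on 3306, then start the container
while sudo netstat -nlp | grep -q :3306; do sleep 2; done
docker-compose up -d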

Related

Creating a MySQL cluster, using mysql-server docker containers, on multiple servers

I'm trying to create a MySQL cluster of 3 nodes using mysql-server docker containers.
I have 3 separate cloud instances, and Docker is set up on all 3 of them. Each server will have only 1 container running on it, to achieve high availability in the cluster.
I start the containers on all 3 servers, individually, with the command:
docker run --name=db -p 3301:3306 -v db:/var/lib/mysql -d mysql/mysql-server
I'm mapping port 3306 of the container to port 3301 on the server. I've also created a new user 'clusteradmin' for remote access.
Next, from mysql-shell, I ran the following command for all 3 servers:
dba.configureInstance('clusteradmin@serverIp:3301')
I get a similar message for all of them:
Note that it says 'This instance reports its own address as 39xxxxxxxxxx:3306'.
Next, I successfully create a cluster on one of the servers. But when adding the other 2 servers to this cluster, I get the following error.
On checking the logs for that particular server, I see the following lines
It says 'peer address a9yyyyyyyyyy:33061 is not valid'. This is because the containers are running on different servers, so the container ID used as the host name is not resolvable by containers on the other servers.
I tried many options, but to no avail. One method was to use the report-host and report-port options when starting the container, like so:
docker run --name=db2 -p 3301:3306 -v db2:/var/lib/mysql -d mysql/mysql-server --report-host=139.59.11.215 --report-port=3301
But the issue with this approach is that, during dba.configureInstance(), it wants to update the port to the default value and throws an error like so.
If anybody has managed to create such a cluster of mysql-server containers running on different servers, I would really appreciate pointers in this regard.
I have gone over the documentation and source code but have not found an explanation of why listening on and advertising different ports is problematic.
I have solved the problem by using --port 3301 when invoking mysql-server:
docker run --name=db2 -p 3301:3301 -v db2:/var/lib/mysql -d mysql/mysql-server --report-host=139.59.11.215 --port 3301
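With the container listening and advertising the same port, the AdminAPI flow from mysql-shell would look roughly like this (a sketch; 'mycluster' and the placeholder IPs are assumptions, not from the original setup):
// in mysql-shell, JS mode
dba.configureInstance('clusteradmin@139.59.11.215:3301')
var cluster = dba.createCluster('mycluster')
cluster.addInstance('clusteradmin@<second-server-ip>:3301')
cluster.addInstance('clusteradmin@<third-server-ip>:3301')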

Docker MySQL Service Only Starts As Admin and Never Stays On with AWS AMI 2018.03

I am creating a development/testing container that contains a number of elements, including a MySQL server that must run internally for code to access. To demonstrate the issue, I build the following Dockerfile and run it with docker run -i -t demo_mysql_server:
FROM amazonlinux:2018.03
RUN yum -y update && yum -y install shadow-utils mysql-server
Unfortunately, after starting the Docker container I receive a common connection error (see 1, 2):
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
which can be fixed by logging in as admin with docker run -i -u 0 -t demo_mysql_server
and executing:
echo "NETWORKING=yes" >/etc/sysconfig/network
service network restart
/etc/init.d/mysqld start
chkconfig mysqld on
which seems to turn everything on. However, incorporating these into RUN commands doesn't keep the service on. Logging in as admin requires restarting the service as above, and adding a user, working as a non-admin, and trying to start the service results in errors of this flavor:
bash: /etc/sysconfig/network: Permission denied
[testUser@544a938c44c1 /]$ service network restart
[testUser@544a938c44c1 /]$ /etc/init.d/mysqld start
/etc/init.d/mysqld: line 16: /etc/sysconfig/network: No such file or directory
[testUser@544a938c44c1 /]$ chkconfig mysqld on
You do not have enough privileges to perform this operation.
Is this a normal error to see, and how do I get the MySQL server instance to stay running?
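For what it's worth, the usual cause is that RUN commands execute only at build time, so a service started there never survives into the running container; the server has to be the container's main process. A minimal sketch of that idea, assuming the same amazonlinux base (the mysql_install_db and CMD lines are assumptions, not from the original post):
FROM amazonlinux:2018.03
RUN yum -y update && yum -y install shadow-utils mysql-server
# initialize the data directory at build time (MySQL 5.x tooling)
RUN mysql_install_db --user=mysql
# run mysqld in the foreground as the container's main process,
# instead of relying on init scripts at run time
CMD ["/usr/bin/mysqld_safe"]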

MySQL server in a Docker container doesn't work after committing the container

I created a docker-compose.yml file, which pulls a MySQL image and sets the password of the MySQL root user to "hello".
# docker-compose.yml
version: '3.1'
services:
  mysql:
    image: mysql:5.6.40
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=hello
      - MYSQL_ALLOW_EMPTY_PASSWORD=hello
      - MYSQL_RANDOM_ROOT_PASSWORD=hello
Then I run:
sudo docker-compose up # (1)
... from the directory with this file.
The first problem is that the newly created container runs in the foreground, not in bash, and I can't put it in the background, or get into bash out of this process, without exiting the container via Ctrl+C.
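As an aside, docker-compose can start the services in the background with the -d flag, which sidesteps the Ctrl+C issue:
sudo docker-compose up -d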
But when I open a new terminal window and run:
sudo docker exec -it bdebee1b8090 /bin/bash # (2)
..., where bdebee1b8090 is the ID of the running container, I enter bash, where I can enter the MySQL shell as the root user by entering the password "hello":
mysql -u root -p # (3)
Then I exit MySQL shell and bash shell in the container without stopping the container.
And then I commit changes to the container:
sudo docker commit bdebee1b8090 hello_mysql # (4)
..., creating an image. And then, when I run the image:
sudo docker run -it --rm hello_mysql /bin/bash # (5)
... and try to start MySQL shell again as root user, entering password "hello", I get an error like
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
And even after I restart the MySQL server:
/etc/init.d/mysql restart # (6)
..., I get the same error.
All of the above commands were run on ubuntu.
Why is this happening?
Edit:
When I try these steps on macOS High Sierra, I get stuck at step (3), because when I enter the password "hello", it doesn't get accepted. This error is shown on the screen:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
And when I try to restart MySQL server in the container
/etc/init.d/mysql restart
..., the container restarts in the background
..., but when I run it again and try to repeat steps (2) and (3), I get the same error, and when I restart the MySQL server again, the container restarts in the background again...
Edit2:
After I removed lines:
- MYSQL_ALLOW_EMPTY_PASSWORD=hello
- MYSQL_RANDOM_ROOT_PASSWORD=hello
...it started to work on the Mac, but still, after I commit the container and try to enter the MySQL shell, it gives an error like:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
.... again.
According to the official documentation for MySQL Docker images (https://hub.docker.com/r/mysql/mysql-server/):
The boolean variables including MYSQL_RANDOM_ROOT_PASSWORD,
MYSQL_ONETIME_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD, and
MYSQL_LOG_CONSOLE are made true by setting them with any strings of
non-zero lengths. Therefore, setting them to, for example, “0”,
“false”, or “no” does not make them false, but actually makes them
true. This is a known issue of the MySQL Server containers.
Which means your config:
- MYSQL_ALLOW_EMPTY_PASSWORD=hello
- MYSQL_RANDOM_ROOT_PASSWORD=hello
is equal to:
- MYSQL_ALLOW_EMPTY_PASSWORD = true
- MYSQL_RANDOM_ROOT_PASSWORD = true
Which brings us to the point:
MYSQL_RANDOM_ROOT_PASSWORD: When this variable is true (which is its
default state, unless MYSQL_ROOT_PASSWORD is set or
MYSQL_ALLOW_EMPTY_PASSWORD is set to true), a random password for the
server's root user is generated when the Docker container is started.
The password is printed to stdout of the container and can be found by
looking at the container’s log.
So either check your container's log to find the generated random password,
or just remove the MYSQL_RANDOM_ROOT_PASSWORD parameter from your file.
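To fish the generated password out of the container log, something like this should work (the 'GENERATED ROOT PASSWORD' marker is what the mysql-server entrypoint prints; adjust the pattern if your image logs it differently):
sudo docker logs bdebee1b8090 2>&1 | grep 'GENERATED ROOT PASSWORD'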
Docker images are generally designed to run a single process or server, in the foreground, until they exit. The standard mysql image works this way: if you docker run mysql without -d you will see all of the server logs printed to stdout, and there’s not an immediate option to get a shell. You should think of interactive shells in containers as a debugging convenience, and not the usual way Docker containers run.
When you docker run --rm -it hello_mysql /bin/bash, it runs the interactive shell instead of the database server, and that's why you get the "can't connect to server" error. As a general rule you should assume things like init.d scripts just don't work in Docker; depending on the specific setup they might, but again, they're not the usual way Docker containers run.
You can install a mysql client binary on your host or elsewhere and use that to interact with the server running in the container. You don’t need an interactive shell in a container to do any of what you’re describing here.
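A minimal sketch of that workflow, assuming you add a ports mapping such as "3306:3306" to the compose file (the original file publishes no ports):
# connect from the host with a locally installed mysql client
mysql -h 127.0.0.1 -P 3306 -u root -p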
Bit late here:
I also faced this issue.
I took an Ubuntu image and ran apt-get install mysql-server. It worked fine in the running container, but after committing it into an image it didn't work, and I was unable to bring the service up either.
So on Ubuntu I switched to apt-get install mariadb-server. After committing that into an image, in a new container I ran: service mysql start; mysql
It worked fine!

kubernetes failing to connect on fresh installation of CoreOS

I'm running (from Windows 8.1) a Vagrant VM for CoreOS (yungsang/coreos).
I installed kubernetes according to the guide I found here and created the json for the pod using my images.
When I execute sudo ./kubecfg list /pods I get the following error:
F0909 06:03:04.626251 01933 kubecfg.go:182] Got request error: Get http://localhost:8080/api/v1beta1/pods?labels=: dial tcp 127.0.0.1:8080: connection refused
Same goes for sudo ./kubecfg -h http://127.0.0.1:8080 -c /vagrant/app.json create /pods
EDIT: Update
Instead of running the commands myself, I integrated them into the Vagrantfile (as such).
This makes kubernetes work fine. HOWEVER, after some time my vagrant ssh connection gets closed. I reconnect, and any kubernetes commands I run result in the same error as above.
EDIT 2: Update
I managed to get it to run again; however, I am unsure whether it will run smoothly.
I had to re-execute the following commands:
sudo systemctl start etcd
sudo systemctl start download-kubernetes
sudo systemctl start apiserver
sudo systemctl start controller-manager
sudo systemctl start kubelet
sudo systemctl start proxy
I believe it is in fact the apiserver that needs restarting
What is the source of this "timeout"? (And where are the logs I can check on this matter?)
Kubernetes development is moving insanely fast right now, so this could be out of date by tomorrow. With that in mind, the Kubernetes folks recommend following one of their official installation guides. The best advice would be to start over fresh with one of those guides, but there are a few tips I have learned doing this myself.
The first thing to note is that Kubecfg is being deprecated in favor of kubectl. So for future reference if you want to get info about a pod you would run something like:
./kubectl get pods.
With kubectl you will also need to set an env variable so kubectl knows how to talk to the apiserver:
KUBERNETES_MASTER=http://IPADDRESS:8080.
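For example (IPADDRESS is a placeholder for the apiserver host, as above):
export KUBERNETES_MASTER=http://IPADDRESS:8080
./kubectl get pods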
The easiest way to debug exactly what is going on if you are using CoreOS is to tail the logs for the service you are interested in. So if you have a kube-apiserver unit you can look at what's goin on by running:
journalctl -f -u kube-apiserver
from the node that is running the apiserver. If that service isn't running, which may be the case, you can start it with:
systemctl start kube-apiserver
On CoreOS you should look at the logs using journalctl.
For example, if you wish to see etcd logs, which Kubernetes relies on for storing the state of its minions, run journalctl _COMM=etcd, and similarly journalctl _COMM=apiserver will show you the logs from the apiserver, one of the key components in Kubernetes.
You also get last few log entries if you run systemctl status apiserver.
Based on errordeveloper's advice: my recent installation ran into a similar problem.
Using systemctl status apiserver and sudo systemctl start apiserver, I managed to get the environment up and running again.

MySQL says stop/waiting: why can I still log in from the command line?

On Ubuntu 12.04, I run sudo service mysql stop. It responds mysql stop/waiting.
But then I can still log in to mysql from the command line. How is this possible?
service mysql stop
[ ok ] Stopping MySQL database server: mysqld.
If your server had stopped, this message would have appeared. Since it didn't, it means that MySQL is still running, and that is why you could still log in. You can try the more brutal method:
ps -C mysqld -o pid=
# You will get a PID (the process identification number); say it is 8082, then:
kill -9 8082
PS: Do note, replace 8082 with the PID you see on the screen.
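Combined into one line, a sketch assuming a single mysqld process:
sudo kill -9 "$(ps -C mysqld -o pid=)"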