Checking the influxdb service inside the container

I have an influxdb container running, and all works well. Then I tried to connect inside it using:
docker exec -t -i 1b6cfd1a7060 /bin/bash
Then I tried to check the influxdb service status:
root@1b6cfd1a7060:/# service influxdb status
It says:
influxdb process is not running [ FAILED ]
I also tried to check the default influx port 8086 by looking at /etc/services (cat /etc/services); the port is not even listed there.
But if I try the influx CLI:
root@1b6cfd1a7060:/# influx
Connected to http://localhost:8086 version 1.8.10
> show databases
ERR: unable to parse authentication credentials
Warning: It is possible this error is due to not setting a database.
Please set a database with the command "use <database>".
I am confused about whether the influx service is running inside the container or not.
However, I can still connect to the container, and running influx queries from outside the container works; everything seems to be operating well. What is going on inside the container?
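For context, here is a sketch of additional checks that can be run with standard Docker commands from the host (the container ID is the one from above); this is just the obvious next diagnostic, not a verified answer:

docker top 1b6cfd1a7060    # lists the processes running in the container; influxd should appear here
docker logs 1b6cfd1a7060   # influxd startup output, including the HTTP port it binds (8086 by default)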

Related

Phpmyadmin stops connecting to mysql server after being up for a few months

I have a docker container running MySQL and another docker container running phpmyadmin. Both containers are running on my Ubuntu server.
Normally I can log into MySQL without problems through phpMyAdmin. However, several times in the past phpMyAdmin has run into some issue and says:
"Cannot log in to the MySQL server" and
"mysqli::real_connect(): (HY000/2002): No such file or directory".
The funny thing is that this happens at seemingly random times. One time it worked for 4 months before giving this error message, another time it was 1 month, another time 3 months. There doesn't seem to be any specific or periodic amount of time before the error appears.
I also checked the MySQL container: it is still up and running, and when I log into it I can access my database and see all the data and tables in it.
When I start the phpMyAdmin container, I use the command below. There is no config.user.inc.php file in /etc/phpmyadmin, and it works for a few months:
docker run --name myadmin -d --link mysql_db_server:db -p 8080:80 phpmyadmin
I found some Stack Overflow questions that were similar to my issue, but doing what they suggest doesn't work.
One person said to edit the config.user.inc.php file and change the host to 127.0.0.1. I used config.sample.inc.php as my template for config.user.inc.php. In my config.user.inc.php file, I added:
$cfg['Servers'][$i]['host'] = '127.0.0.1';
I then mounted the file from my local Linux server into /etc/phpmyadmin in the container so that, when the container started, it would use the config file. I ran:
docker run --name myadmin -d --link mysql_db_server:db -p 8080:80 -v /local/dir/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php phpmyadmin
However, this is worse than running the docker command without the volume, because using the config.user.inc.php file makes me run into the error immediately. It's almost as if the config.sample.inc.php file were misconfigured.
The workaround for me is to wait for phpMyAdmin to give me the error, stop and kill the phpMyAdmin container, and then start a new one. Ideally, though, I would like to get it to work right off the bat and not run into this error at all.
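For context, a hedged diagnostic sketch using only standard Docker commands (container and link names taken from the docker run commands above) to narrow down where the connection breaks the next time the error appears:

docker exec myadmin getent hosts db      # does the db link alias still resolve inside the phpmyadmin container?
docker logs --tail 50 myadmin            # recent phpMyAdmin/Apache errors around the failure
docker logs --tail 50 mysql_db_server    # did the MySQL container restart or refuse connections?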

socat throws an error while trying to connect to external MySQL

I am trying to connect two Docker containers to each other via socat.
Inside the web container, I use socat to bind the external MySQL container's port 3306 to a local UNIX socket.
I use this command line:
socat TCP:$MYSQL_CONTAINER_IP:$MYSQL_CONTAINER_PORT,fork,reuseaddr,unlink-early,user=root,group=root,mode=777 UNIX-LISTEN:$MY_SOCKET &
where $MYSQL_CONTAINER_IP = 172.17.0.2
and $MYSQL_CONTAINER_PORT = 3306
$MY_SOCKET is set via:
MY_SOCKET=$(mysql_config --socket)
which results in /var/run/mysqld/mysqld.sock.
But when I run this command, I get this:
2022/05/29 06:43:54 socat[10267] E bind(6, {AF=1 "/var/run/mysqld/mysqld.sock"}, 29): No such file or directory
The web container is debian:buster (Debian 10), and the MySQL container is Debian wheezy:latest.
Any idea why I get the error message above?
The error message suggests that the directory /var/run/mysqld/ does not exist in the environment where socat is run. I'd recommend checking this first.
However, the socat command line you constructed, with the fork option on the TCP address, will repeatedly try to establish new connections to the MySQL server, and from the second connection (and sub-process) on, the UNIX bind will fail.
For typical forwarder uses, you should have the listener with fork as the first address and the connector (here TCP:) as the second address, as sketched below.
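A minimal sketch of that corrected ordering, reusing the variables from the question (reuseaddr is dropped since there is no TCP listener here, and the socket's directory must exist before socat can bind there):

mkdir -p /var/run/mysqld
socat UNIX-LISTEN:$MY_SOCKET,fork,unlink-early,user=root,group=root,mode=777 \
    TCP:$MYSQL_CONTAINER_IP:$MYSQL_CONTAINER_PORT &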

Mysql container password security

I have a couple of questions about password security in a MySQL container. I use the mysql/mysql-server:8.0 image.
The 1st question is
Is using MYSQL_PASSWORD env var in mysql container based on the image above secure?
I elaborate a bit more about this below.
I set the MySQL password for the container via Kubernetes env var injection, that is, by setting the MYSQL_PASSWORD env var in the MySQL container from a Kubernetes Secret referenced in the manifest file. Is this secure? That is my first question. The notes following the table on this page say that using the MYSQL_PWD env var (note this is not MYSQL_PASSWORD) is extremely insecure, because ps can display the environment of running processes and any other user can exploit it. Does this also apply to the container situation using MYSQL_PASSWORD instead of MYSQL_PWD?
The 2nd question is
Is running mysql -h 127.0.0.1 -p${MYSQL_PASSWORD} in the same mysql container secure?
I need to run a similar command in a Kubernetes readiness probe. The warning section of this page says that running mysql -phard-coded-password is not secure. I'm not sure whether the password is still insecure when the env var is used as above, and I'm also not sure whether this warning applies to the container case.
Thanks in advance!
If your security concerns include protecting your database against an attacker with legitimate login access to the host, then the most secure option is to pass the database credentials in a file. Both command-line options and environment variables, in principle, are visible via ps.
For the case of the database container, the standard Docker Hub images don't have paths to provide credentials this way. If you create the initial database elsewhere and then mount the resulting data directory on your production system (consider this like restoring a backup) then you won't need to set any of the initial data variables.
here$ docker run -it -v "$PWD/mysql:/var/lib/mysql" -e MYSQL_PASSWORD=... mysql
^C
here$ scp -r ./mysql there:
here$ ssh there
# without any -e MYSQL_*=... options
there$ docker run -v "$PWD/mysql:/var/lib/mysql" -p 3306:3306 mysql
More broadly, there are two other things I'd take into account here:
Anyone who can run any docker command at all can very easily root the entire host. So if you're broadly granting Docker socket access to anyone with login permission, they can easily find out the credentials (if nothing else they can docker exec a cat command in the container to dump the credentials file).
Any ENV directives in a Dockerfile will be visible in docker history and docker inspect output to anyone who gets a copy of the image. Never put any sort of credentials in your Dockerfile!
Practically, I'd suggest that, if you're this concerned about your database credentials, you're probably dealing with some sort of production system; and if you're dealing with a production system, the set of people who can log into it is limited and trusted. In that case an environment variable setting isn't exposing credentials to anyone who couldn't read it anyways.
(In the more specific case of a Kubernetes Pod with environment variables injected by a Secret, in most cases almost nobody will have login access to an individual Node and the Secret can be protected by Kubernetes RBAC. This is pretty safe from prying eyes if set up correctly.)
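As a sketch of the file-based approach applied to the readiness probe question above (the mount path /etc/mysql-probe/ and file name are assumptions, not something the image provides): mount the Secret as a file and let the mysql client read it with --defaults-extra-file instead of passing -p${MYSQL_PASSWORD} on the command line.

# contents of /etc/mysql-probe/client.cnf, mounted read-only from a Kubernetes Secret (hypothetical path)
[client]
user=root
password=REPLACE_WITH_SECRET_VALUE

# readiness probe command: credentials come from the file, not from process arguments or the environment
mysql --defaults-extra-file=/etc/mysql-probe/client.cnf -h 127.0.0.1 -e 'SELECT 1'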

Connecting to mysql container from host

I have installed Docker on my Mac and have a MySQL container running on my local machine.
The docker ps command gives me the output below:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b5c50b2d334a test_mysql2 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 0.0.0.0:32783->3306/tcp test_mysql2_1
I know the username and password for the MySQL instance set up in the container.
I want to connect to MySQL and run some queries, but I am not able to figure out how to connect to it. Any help will be appreciated.
If you want to connect to MySQL through Docker, kindly follow this step-by-step procedure that I use.
Step 1: Pull the MySQL image from Docker Hub. The following command will pull the latest mysql image.
cli> docker pull mysql
Step 2: Run a container from this image. '--name' gives a name to the container. '-e' specifies runtime variables you need to set; set the password for the MySQL root user using 'MYSQL_ROOT_PASSWORD'. '-d' tells Docker to run the container in the background.
cli> docker run --name=testsql -e MYSQL_ROOT_PASSWORD=rukshani -d mysql
This will output a container ID, which means the container is running properly in the background.
Step 3: Check the status of the container by issuing the 'docker ps' command:
cli> docker ps
Now you should be able to see that MySQL is running on port 3306.
Step 4: To check the logs of the running container, use the following command:
cli > docker logs testsql
Step 5: Find the IP of the container using the following command. Look for "IPAddress" in the output; this will tell you the container's IP address:
cli> docker inspect testsql
Now you should be able to connect to MySQL using this IP address on port 3306.
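If it helps, a small sketch for extracting just that field and connecting with it (--format is a standard docker inspect flag; connecting to the container IP directly generally only works from the Docker host itself, e.g. a Linux machine, not from macOS):

# print only the container's IP address (classic bridge network)
docker inspect --format '{{ .NetworkSettings.IPAddress }}' testsql
# or use it directly with the mysql client
mysql -h "$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' testsql)" -P 3306 -u root -p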
Based on what I understand from your question, this is what you need (I hope so).
(This is not my own documentation; I just like to document everything, especially procedures I can't keep in my head, so that if the same thing happens or I need the same procedure in the future, I won't waste time researching it again and can simply open my notes and run the commands.)
As you can see in the output of docker ps, port 32783 on the local machine is mapped to port 3306 inside the Docker container. If you are using a MySQL client (e.g. MySQL Workbench), you should be able to connect using host localhost and port 32783. If not, you can use docker exec and then open an interactive mysql shell inside the container (as mulg0r commented).
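For example, a sketch with the mysql command-line client, using the mapping from the docker ps output above (the username and password are whatever was configured in the container; 127.0.0.1 forces a TCP connection instead of the local socket):

mysql -h 127.0.0.1 -P 32783 -u root -p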

Automatically Start Multiple MySQL instances on boot in Ubuntu Trusty 14.04

I'm still learning how to use Linux & MySQL, so please keep answers simple :)
I've been following this tutorial:
http://www.ducea.com/2009/01/19/running-multiple-instances-of-mysql-on-the-same-machine/
Both MySQL instances (the default instance you get on installing MySQL and the new one I created in the aforementioned tutorial) are working correctly.
But when I boot up my OS, only the first (default) instance starts up, so I have to manually start the second instance.
I've been doing this by running these commands as root:
Start the instance:
mysqld_safe --defaults-file=/etc/mysql2/my.cnf &
Connect:
mysql -h 127.0.0.1 -P 3307
How do I make it so that both of these instances will start at boot time?
Thanks!
I fixed it by simply running the command at startup, using the solution in this question:
Ubuntu - Run command on start-up with "sudo"
Steps I took:
I added this command to the /etc/rc.local file as root (before the "exit 0" line, or it will never get executed):
sudo mysqld_safe --defaults-file=/etc/mysql2/my.cnf &
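For reference, a sketch of how the resulting /etc/rc.local ends up looking on a stock Trusty install (the sudo is redundant there, since rc.local already runs as root, but it does no harm):

#!/bin/sh -e
# /etc/rc.local: executed at the end of each multiuser runlevel
# (any existing commands stay above this line)

sudo mysqld_safe --defaults-file=/etc/mysql2/my.cnf &

exit 0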
Then I restarted my OS. The new instance is now automatically started :), so now I have both starting on boot!
Though now I wonder: is this the correct way to handle this issue?