Docker - Enable Remote HTTP API with SystemD and "daemon.json"

Disclaimer:
On an old Ubuntu 14.04 machine with Upstart as the init system, I enabled the HTTP API by defining DOCKER_OPTS in /etc/default/docker. It works.
$ docker version
Client:
Version: 1.11.2
(...)
Server:
Version: 1.11.2
(...)
Problem:
This solution does not work on a recent machine with Ubuntu 16.04 and systemd.
As stated at the top of the newly installed /etc/default/docker:
# Docker Upstart and SysVinit configuration file
#
# THIS FILE DOES NOT APPLY TO SYSTEMD
#
# Please see the documentation for "systemd drop-ins":
# https://docs.docker.com/engine/articles/systemd/
#
(...)
Checking this information on the Docker documentation page for systemd, I learned that I need to fill in a daemon.json file; but as stated in the reference, some properties are self-explanatory while others are under-explained.
That being said, I'm looking for help to convert this:
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -G myuser --debug"
to the daemon.json object.
Notes
PS1: I'm aware that daemon.json has debug: true by default.
PS2: Presumably "group": "myuser" will work as-is, or perhaps as an array of strings.
PS3: My main concern is to use the Unix socket and HTTP simultaneously.
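My best guess so far is something like the following, though I can't confirm the "hosts" and "group" semantics (hence this question):
{
"hosts": [
"tcp://0.0.0.0:2375",
"unix:///var/run/docker.sock"
],
"group": "myuser",
"debug": true
}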
EDIT (8/08/2017)
After reading the accepted answer, check white_gecko's answer below for more input on the matter.

With a lot of fragmented documentation, this was difficult to solve.
My first attempt was to create the daemon.json with:
{
"hosts": [
"unix:///var/run/docker.sock",
"tcp://127.0.0.1:2376"
]
}
This did not work; trying to restart the daemon with service docker restart produced the error docker[5586]: unable to configure the Docker daemon with file /etc/docker/daemon.json.
Note: There was more to the error that I failed to copy.
What this error means is that at startup the daemon hits a conflict between a command-line flag and the configuration in daemon.json.
When I looked into it with service docker status, this was the parent process: ExecStart=/usr/bin/docker daemon -H fd://.
This was strange, because it differs from the configuration in /etc/init.d/docker, which I thought held the service configuration.
The strange part was that the file in init.d contains no reference to the daemon argument, nor to -H fd://.
After some research and a lot of searching through system directories, I found the right directory (with help from the discussion in Docker GitHub issue #22339).
Solution
Edited ExecStart in /lib/systemd/system/docker.service to this new value:
/usr/bin/docker daemon
And created the /etc/docker/daemon.json with
{
"hosts": [
"fd://",
"tcp://127.0.0.1:2376"
]
}
Finally, I restarted the service with service docker start, and now I get the "green light" from service docker status.
Tested the new configurations with:
$ docker run hello-world
Hello from Docker!
(...)
And,
$ curl http://127.0.0.1:2376/v1.23/info
[JSON]
I hope that this will help someone with a similar problem as mine! :)

I had the same problem, and in my eyes the easiest solution, which doesn't touch any existing files managed by the system update process, is to use a systemd drop-in:
Just create a file /etc/systemd/system/docker.service which overrides the specific part of the service in /lib/systemd/system/docker.service.
In this case the content of /etc/systemd/system/docker.service would be:
[Service]
ExecStart=/usr/bin/dockerd --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H=tcp://127.0.0.1:2375 -H=fd://
(You could even create a directory docker.service.d containing multiple files to override different parameters.)
After adding the file you just run:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
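To check that the drop-in took effect, you can ask systemd for the effective unit and then probe the TCP endpoint. A sketch, assuming you also have a client certificate and key signed by the same CA (the ~/.docker paths are a common convention, not from this answer; /_ping is the daemon's standard health-check endpoint):
systemctl cat docker
curl --cacert /etc/docker/ca.pem \
  --cert ~/.docker/cert.pem --key ~/.docker/key.pem \
  https://127.0.0.1:2375/_ping
A healthy daemon answers OK.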

The solution described at https://docs.docker.com/engine/admin/#troubleshoot-conflicts-between-the-daemonjson-and-startup-scripts works for me:
One notable example of a configuration conflict that is difficult to
troubleshoot is when you want to specify a different daemon address
from the default. Docker listens on a socket by default. On Debian and
Ubuntu systems using systemd, this means that a -H flag is always
used when starting dockerd. If you specify a hosts entry in the
daemon.json, this causes a configuration conflict (as in the above
message) and Docker fails to start.
To work around this problem, create a new file
/etc/systemd/system/docker.service.d/docker.conf with the following
contents, to remove the -H argument that is used when starting the
daemon by default.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
Note that the empty ExecStart= line is actually required; otherwise it'll fail with the error:
docker.service: Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. Refusing.
After creating the file you must run:
sudo systemctl daemon-reload
sudo systemctl restart docker
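To confirm that the daemon is now reading its hosts from daemon.json rather than from the unit file, one option (a sketch; the port is whatever tcp:// entry your daemon.json declares, and ss is assumed to be installed) is:
systemctl cat docker | grep ExecStart
sudo ss -ltnp | grep dockerd
The first command should show the bare dockerd from the drop-in winning; the second lists the TCP socket if one was configured.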

For me, on Ubuntu 18.04.1 LTS with Docker 18.06.0-ce, it worked to create
/etc/systemd/system/docker.service.d/remote-api.conf
with the following content:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
Then run sudo systemctl daemon-reload and sudo systemctl restart docker.
See the result by calling:
curl http://localhost:2376/info
You might need to configure a proxy if your Docker host is behind one.
To achieve this, put the following into the /etc/default/docker file:
http_proxy="http://85.22.53.71:8080/"
https_proxy="http://85.22.53.71:8080/"
HTTP_PROXY="http://85.22.53.71:8080/"
HTTPS_PROXY="http://85.22.53.71:8080/"
# below you can list some *.enterprise_domain.com as well
NO_PROXY="localhost,127.0.0.1,::1"
Or create
/etc/systemd/system/docker.service.d/remote-api.conf with the following content:
[Service]
Environment="HTTP_PROXY=http://<you_proxy_ip>:<port>"
Environment="HTTPS_PROXY=https://<you_proxy_ip>:<port>/"
Environment="NO_PROXY=localhost,127.0.0.1,::1"
I hope it helps someone...

Related

Pm2 startup issue with CENTOS 8 / SELinux

Please, do you know how to resolve this issue?
I have searched everywhere without finding an answer.
06:45 SELinux is preventing systemd from open access on the file /root/.pm2/pm2.pid. For complete SELinux messages run: sealert -l d84a5a0b-cfcf-4cb9-918a-c0952bf70600 setroubleshoot
06:45 pm2-root.service: Can't convert PID files /root/.pm2/pm2.pid O_PATH file descriptor to proper file descriptor: Permission denied systemd 2
06:45 Failed to start PM2 process manager.
I executed this command: sealert -l d84a5a0b-cfcf-4cb9-918a-c0952bf70600 setroubleshoot
Raw audit messages:
type=AVC msg=audit(1591498085.184:7731): avc: denied { open } for pid=1 comm="systemd" path="/root/.pm2/pm2.pid" dev="dm-0" ino=51695937 scontext=system_u:system_r:init_t:s0 tcontext=system_u:object_r:admin_home_t:s0 tclass=file permissive=0
PM2 Version : 4.4.0
NODE version : 12.18.0
CentOS Version : 8
My systemd service:
[Unit]
Description=PM2 process manager
Documentation=https://pm2.keymetrics.io/
After=network.target
[Service]
Type=forking
User=root
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Environment=PATH=/sbin:/bin:/usr/sbin:/usr/bin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Environment=PM2_HOME=/root/.pm2
PIDFile=/root/.pm2/pm2.pid
Restart=on-failure
ExecStart=/usr/lib/node_modules/pm2/bin/pm2 resurrect
ExecReload=/usr/lib/node_modules/pm2/bin/pm2 reload all
ExecStop=/usr/lib/node_modules/pm2/bin/pm2 kill
[Install]
WantedBy=multi-user.target
Thank you
As said in the comments, I had the exact same issue.
To solve this, just run the following commands as root after trying to start the PM2 service (in your case, this start attempt would be systemctl start pm2-root)
ausearch -c 'systemd' --raw | audit2allow -M my-systemd
semodule -i my-systemd.pp
This looks pretty generic, but it works. These lines were suggested by SELinux itself. To get them, I had to run the command journalctl -xe after trying to start the service.
Two options:
Edit the systemd file that starts pm2 and specify an alternative location for the pm2 PID file. You'll have to make two changes: one to tell pm2 where to place the PID file, and one to tell systemd where to look for it. Replace the existing PIDFile line with the following two lines:
Environment=PM2_PID_FILE_PATH=/run/pm2.pid
PIDFile=/run/pm2.pid
Create an SELinux rule that allows this particular behavior. You can do that exactly as Backslash36 suggests in their answer. If you want to create the policy file yourself rather than go through audit2allow, the following should work, although you then have to compile it to a usable .pp file yourself.
module pm2 1.0;

require {
    type user_home_t;
    type init_t;
    class file read;
}

#============= init_t ==============
allow init_t user_home_t:file read;
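For completeness, a typical way to compile and load such a module by hand (assuming the policy above is saved as pm2.te; checkmodule and semodule_package ship with the SELinux policy tools):
checkmodule -M -m -o pm2.mod pm2.te
semodule_package -o pm2.pp -m pm2.mod
sudo semodule -i pm2.pp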

How to install the sudo and nano commands in the MySQL docker image

I am trying to connect a Django application to a MySQL docker container. I am using the latest version of MySQL, i.e. MySQL 8.0, to build the container. I was able to build the MySQL container successfully, but I am not able to connect to it using Django's default MySQL connector. When I run the docker-compose up command I get the error mentioned below.
django.db.utils.OperationalError: (1045, 'Plugin caching_sha2_password could not be loaded: /usr/lib/x86_64-linux-gnu/mariadb19/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory')
I started looking for a solution and got to know that MySQL has made a major change to its default authentication plugin, which is not supported by most MySQL connectors.
To fix this issue I will have to set default-authentication-plugin to mysql_native_password in the my.cnf file of the MySQL container.
I logged into the container using the command docker exec -it <container id> /bin/bash and was also able to locate the my.cnf file inside the container.
To edit the my.cnf file I will have to use the nano command, as stated below.
nano my.cnf
But unfortunately the nano command is not installed in the MySQL container. To install nano I will need sudo installed in the container.
I tried installing sudo using the command mentioned below, but it did not work.
apt-get install sudo
error -
Reading package lists... Done
Building dependency tree
Reading state information... Done
What are the possible solutions to fix this issue?
In general you shouldn't try to directly edit files in containers. Those changes will get lost as soon as the container is stopped and deleted; this happens extremely routinely since many Docker options can only be set at startup time, and the standard way to update the software in a container is to recreate it with a newer image. In your case, you also can't live-edit a configuration file the main container process needs at startup time, because it will have already read the configuration file by the time you're able to edit it.
The right way to do this is to inject the modified configuration file at container startup time. If you haven't already, get the default configuration file out of the image:
docker run --rm mysql:8 cat /etc/mysql/my.cnf > my.cnf
and edit it directly on your host, using your choice of text editor. When you launch the container, inject the modified file:
docker run -v $PWD/my.cnf:/etc/mysql/my.cnf ... mysql:8
or, in Docker Compose,
volumes:
- ./my.cnf:/etc/mysql/my.cnf
The Docker Hub mysql image documentation has some more suggestions on ways to set this; see "Using a custom MySQL configuration file" there.
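Those docs also show an even lighter-weight route for this particular setting: passing the option as a command-line flag to the server, so no my.cnf editing is needed at all. A docker-compose sketch (service name and password are placeholders):
version: "3"
services:
  db:
    image: mysql:8
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: example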
While docker exec is an extremely useful debugging tool, it shouldn't be part of your core workflow, and I'd recommend trying to avoid it in cases like this. (With the bind-mount approach, you can add the modified config file to your source control system and blindly docker-compose up like normal without knowing this detail; with a docker exec approach, you'd have to remember and repeat the edit by hand every time you started the container stack.)
Also note that you don't need sudo in Docker at all. Every context where you can run something (Dockerfiles, docker run, docker exec) has some way to explicitly specify the user ID, so you can docker exec -u root .... sudo generally depends on things like users having passwords and interactive prompting, which works well for administering a real Linux host but doesn't match a typical Docker environment.
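For example, to get a root shell in a running container without sudo (the container name here is a placeholder; use whatever docker ps shows):
docker exec -u root -it mysql-db /bin/bash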
The issue is not with sudo, because you already have permission to install packages.
You should instead update the package manager before installing new packages, in order to refresh the package repositories:
RUN apt-get update
RUN apt-get install -y nano
MySQL builds its image on Oracle Linux; run these commands to install nano:
microdnf update
microdnf install nano sudo -y
And edit the my.cnf with nano

mysql.sock does not exist error in fresh install of MySQL on Arch Linux

I'm trying to use MySQL on Arch Linux. It is already installed, but this error comes up when I try to connect:
connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/run/mysqld/mysqld.sock' (2 "No such file or directory")'
I've looked for /etc/my.cfg but the file does not exist.
Something must have gone wrong during the installation.
How can I "purge" MariaDB and reinstall it?
If you're using Arch Linux, it is vital to understand the package manager (pacman). For the question about /etc/my.cfg, you can run:
pacman -Ql mariadb
there you will see that the file is actually called:
/etc/mysql/my.cnf
Arch Linux will not configure the package for you; that is part of the Arch philosophy. It will provide example configurations, and even provide you with a systemd unit file:
/usr/lib/systemd/system/mysqld.service
but it is your responsibility to ensure that the configuration is correct and actually start the daemon.
systemctl enable mysqld # add the unit file to the boot sequence
systemctl start mysqld # runs ExecStart= in the unit file
systemctl stop mysqld # kills the daemon
systemctl disable mysqld # remove unit from boot sequence
reinstall
Since the word reinstall is in the title of the question and someone might find this question thanks to that: To reinstall mariadb you simply do
pacman -S mariadb
pacman will reinstall a package that is already installed; there is no need to remove the package first (for completeness, package removal happens with pacman -R).
As of 7-28-17, I had to do this on a new install. Newbie here; this might save someone some time, as it was a real pain.
Installing Apache was no problem, but installing MySQL was.
Run pacman -S mysql; then, before starting the service, you must uncomment the InnoDB settings in:
nano /etc/mysql/my.cnf
Then you must initialize the data directory before starting the service:
mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
You need to initialize the MariaDB data directory prior to starting
the service. This can be done with mysql_install_db command, e.g.:
mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
Optional dependencies for mariadb
galera: for MariaDB cluster with Galera WSREP
perl-dbd-mysql: for mysqlhotcopy, mysql_convert_table_format and
mysql_setpermission
The .cnf file is /etc/mysql/my.cnf on Arch Linux.
One simple way I can reproduce your issue is when MariaDB is shut down. Sorry if it sounds dumb, but as you did not mention it: is MariaDB started? sudo systemctl start mysqld.service
You should have a look at the MariaDB logs to get some clue: journalctl _SYSTEMD_UNIT=mysqld.service (maybe paste some parts if you still don't see what is going on).
This happens the first time you install MySQL or MariaDB. As grochmal pointed out, you have to set up the configuration before first use. Also, the user teckk posted these three links in the Arch Linux newbie corner:
https://wiki.archlinux.org/index.php/MariaDB
https://wiki.archlinux.org/index.php/MariaDB#Reset_the_root_password
https://bbs.archlinux.org/viewtopic.php?id=51981
In short, you have to run the command below before starting the service:
sudo mariadb-install-db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
Optionally (recommended) you should improve the initial security by calling:
sudo mysql_secure_installation
Now you can start the service:
sudo systemctl start mariadb
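You can then check that the socket from the original error now exists and that the server answers (paths as in the error message above):
systemctl status mariadb
ls -l /run/mysqld/mysqld.sock
mysql -u root -p -e 'SELECT VERSION();'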
Optionally, you could install and use a graphical front-end tool.
Carry on with setting up the configuration as described in the Arch Wiki post on MariaDB configuration.

Docker configuration using the systemd configuration style

I note that the Docker documentation is gradually moving towards the systemd method of initialisation and hence configuration. I'm somewhat uncertain how you add "insecure-registry" entries to Docker when using the systemd configuration method.
Whilst using Docker version 1.6.1 I was able to add multiple insecure-registry entries by adding to the file:
/etc/sysconfig/docker
a line much like the following:
INSECURE_REGISTRY='--insecure-registry myregistry.companyx.com:5010 --insecure-registry myregistry.companyx.com:5011'
and restarting Docker with the command:
sudo service docker restart
With Docker 1.8.2 I've been looking how to do this in "systemd" fashion. The closest I've come to any documentation is the following 2 pages:
https://docs.docker.com/articles/systemd/
https://coreos.com/os/docs/latest/registry-authentication.html
Both the above suggest I need to add a file to a directory called:
/etc/systemd/system/docker.service.d
The second of those pages suggests a file called:
/etc/systemd/system/docker.service.d/50-insecure-registry.conf
It also talks about "#cloud-config write_files: - path:", which I didn't follow at all.
I ignored the stuff I didn't understand and created a file named:
/etc/systemd/system/docker.service.d/50-insecure-registry.conf
Containing something along the lines of:
[Service]
Environment='DOCKER_OPTS=--insecure-registry="myregistry.companyx.com:5010"'
and restarted docker using the command:
sudo systemctl restart docker
The result makes me think it's time to go home. I want to add multiple insecure-registry entries but haven't figured out how to do that. Also, I'm a long way from being confident about the success of the single entry.
STUFF added 2 days later
With help from page:
http://nknu.net/how-to-configure-docker-on-ubuntu-15-04/
I made some progress in configuring Docker using files dropped into the directory:
/etc/systemd/system/docker.service.d
The thing I had been missing was an entry to override the default:
[Service]
ExecStart=/usr/bin/docker -d -H fd://
I did this by creating an additional drop in file, this one called:
docker_systemd_workaround.conf
it contains:
[Service]
# workaround to include default options
ExecStart=
ExecStart=/usr/bin/docker -d -H fd:// $DOCKER_OPTS
With this, the content of another drop-in file which sets DOCKER_OPTS is no longer ignored. I don't think this is anything close to a complete solution, but it does fix the issue I was having trying to add "insecure-registry" entries.
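To answer the original multiple-entries question under this setup, DOCKER_OPTS can simply carry several flags, mirroring the old /etc/sysconfig/docker style (the hostnames below are the examples from the question):
[Service]
Environment='DOCKER_OPTS=--insecure-registry myregistry.companyx.com:5010 --insecure-registry myregistry.companyx.com:5011'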
Expanded on my comment for readability
Problem
Couldn't connect to a remote insecure registry. Unable to add --insecure-registry to the docker options on startup.
Using docker installed via the package manager on Ubuntu 16.04 LTS.
Solution
1. Verify docker is under the control of systemd
$systemctl status docker should return details for the running docker service. You can view the default setup it's using under Loaded:
2. Add insecure repository systemd conf file
This file will load the DOCKER_OPTS env variable.
Create file at /etc/systemd/system/docker.service.d/insecure_repository.conf
Add file contents:
[Service]
Environment='DOCKER_OPTS=--insecure-registry="myregistryserver.mydomain.com:5000"'
3. Add docker systemd workaround conf file
This file will modify the ExecStart to use the DOCKER_OPTS environment variable.
Create file at /etc/systemd/system/docker.service.d/docker-systemd-workaround.conf
Add file contents:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS
4. Reload
$sudo systemctl daemon-reload
$sudo service docker restart
5. Verify
$docker info should contain myregistryserver.mydomain.com:5000 under Insecure Registries:
$systemctl status docker should show your systemd configs (aka drop-ins) under the Drop-In: header. You should also see your modified ExecStart under the CGroup: header.
I had a similar problem and struggled for ages until I found this blog.
Basically follow these steps :
sudo vi /etc/systemd/system/docker.service.d/docker.conf and
add the following :
[Service]
# The empty 'ExecStart=' below is required; without it you will get the error
# 'Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services'
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --insecure-registry yourregistry.mydomain.com:5000
Then finally :
sudo systemctl daemon-reload
sudo systemctl restart docker

kubernetes failing to connect on fresh installation of CoreOS

I'm running (from Windows 8.1) a Vagrant VM for CoreOS (yungsang/coreos).
I installed kubernetes according to the guide I found here and created the json for the pod using my images.
When I execute sudo ./kubecfg list /pods I get the following error:
F0909 06:03:04.626251 01933 kubecfg.go:182] Got request error: Get http://localhost:8080/api/v1beta1/pods?labels=: dial tcp 127.0.0.1:8080: connection refused
Same goes for sudo ./kubecfg -h http://127.0.0.1:8080 -c /vagrant/app.json create /pods
EDIT: Update
Instead of running the commands myself, I integrated them into the Vagrantfile (as such).
This makes kubernetes work fine. However, after some time my vagrant ssh connection gets closed. I reconnect, and any kubernetes commands I run result in the same error as above.
EDIT 2: Update
I managed to get it to run again; however, I am unsure whether it will keep running smoothly.
I had to re-execute the following commands.
sudo systemctl start etcd
sudo systemctl start download-kubernetes
sudo systemctl start apiserver
sudo systemctl start controller-manager
sudo systemctl start kubelet
sudo systemctl start proxy
I believe it is in fact the apiserver that needs restarting.
What is the source of this "timeout"? (Where can I find logs for this matter?)
Kubernetes development is moving insanely fast right now so this could be out of date by tomorrow. With that in mind, the kubernetes folks recommend following one of their official installation guides. The best advice would be to start over fresh with one of the new installation guides but there are a few tips that I have learned doing this myself.
The first thing to note is that Kubecfg is being deprecated in favor of kubectl. So for future reference if you want to get info about a pod you would run something like:
./kubectl get pods.
With kubectl you will also need to set an env variable so kubectl knows how to talk to the apiserver:
KUBERNETES_MASTER=http://IPADDRESS:8080.
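Putting those two together, a typical session might look like this (the address is an example):
export KUBERNETES_MASTER=http://127.0.0.1:8080
./kubectl get pods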
The easiest way to debug exactly what is going on, if you are using CoreOS, is to tail the logs for the service you are interested in. So if you have a kube-apiserver unit, you can look at what's going on by running:
journalctl -f -u kube-apiserver
from the node that is running the apiserver. If that service isn't running, which may be the case, you can start it with:
systemctl start kube-apiserver
On CoreOS you should look at the logs using journalctl.
For example, if you wish to see the etcd logs, which Kubernetes relies on for storing the state of its minions, run journalctl _COMM=etcd; similarly, journalctl _COMM=apiserver will show you the logs from the apiserver, one of the key components in Kubernetes.
You can also get the last few log entries if you run systemctl status apiserver.
Based on errordeveloper's advice, my recent installation ran into a similar problem.
Using systemctl status apiserver and sudo systemctl start apiserver, I managed to get the environment up and running again.