I note that the Docker documentation is gradually moving towards the systemd method of initialisation and hence configuration. I'm somewhat uncertain how you add "insecure-registry" entries to Docker when using the systemd configuration method.
Whilst using Docker version 1.6.1 I was able to add multiple insecure-registry entries by adding to the file:
/etc/sysconfig/docker
a line much like the following:
INSECURE_REGISTRY='--insecure-registry myregistry.companyx.com:5010 --insecure-registry myregistry.companyx.com:5011'
and restarting Docker with the command:
sudo service docker restart
With Docker 1.8.2 I've been looking at how to do this the "systemd" way. The closest I've come to any documentation is the following two pages:
https://docs.docker.com/articles/systemd/
https://coreos.com/os/docs/latest/registry-authentication.html
Both the above suggest I need to add a file to a directory called:
/etc/systemd/system/docker.service.d
The second of those pages suggests a file called:
/etc/systemd/system/docker.service.d/50-insecure-registry.conf
It also talks about "#cloud-config write_files: - path:", which I didn't follow at all.
I ignored the stuff I didn't understand and created a file named:
/etc/systemd/system/docker.service.d/50-insecure-registry.conf
Containing something along the lines of:
[Service]
Environment='DOCKER_OPTS=--insecure-registry="myregistry.companyx.com:5010"'
and restarted docker using the command:
sudo systemctl restart docker
The result makes me think it's time to go home. I want to add multiple insecure-registry entries but haven't figured out how to do that. Also I'm a long way from being confident about the success of the single entry.
STUFF added 2 days later
With help from page:
http://nknu.net/how-to-configure-docker-on-ubuntu-15-04/
I made some progress in configuring Docker using files dropped into the directory:
/etc/systemd/system/docker.service.d
The thing I had been missing was an entry to override the default:
[Service]
ExecStart=/usr/bin/docker -d -H fd://
I did this by creating an additional drop-in file, this one called:
docker_systemd_workaround.conf
it contains:
[Service]
# workaround to include default options
ExecStart=
ExecStart=/usr/bin/docker -d -H fd:// $DOCKER_OPTS
With this, the content of another drop-in file which sets DOCKER_OPTS is no longer ignored. I don't think this is anything close to a complete solution, but it does fix the issue I was having trying to add "insecure-registry" entries.
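Based on that, multiple registries should just be a matter of putting several --insecure-registry flags into the one DOCKER_OPTS value, e.g. in the 50-insecure-registry.conf drop-in above (a sketch only, I haven't tested more than two entries):
[Service]
Environment='DOCKER_OPTS=--insecure-registry myregistry.companyx.com:5010 --insecure-registry myregistry.companyx.com:5011'
With the ExecStart workaround in place, systemd word-splits the unquoted $DOCKER_OPTS, so both flags should end up on the docker command line when the service starts.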
Expanded on my comment for readability
Problem
Couldn't connect to a remote insecure registry. Unable to add "insecure-registry" to the docker options on startup.
Using docker installed via package manager on Ubuntu 16.04 LTS
Solution
1. Verify docker is under the control of systemd
$systemctl status docker should return details for the running docker service. You can view the default setup it's using under Loaded:
2. Add insecure repository systemd conf file
This file will load the DOCKER_OPTS env variable.
Create file at /etc/systemd/system/docker.service.d/insecure_repository.conf
Add file contents:
[Service]
Environment='DOCKER_OPTS=--insecure-registry="myregistryserver.mydomain.com:5000"'
3. Add docker systemd workaround conf file
This file will modify the ExecStart to use the DOCKER_OPTS environment variable.
Create file at /etc/systemd/system/docker.service.d/docker-systemd-workaround.conf
Add file contents:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS
4. Reload
$sudo systemctl daemon-reload
$sudo service docker restart
5. Verify
$docker info should contain myregistryserver.mydomain.com:5000 under Insecure Registries:
$systemctl status docker should have your systemd configs (aka drop-ins) under the Drop-In: header. You should also see your modified ExecStart under the CGroup: header.
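Another quick way to confirm systemd is actually reading your drop-ins (assuming a systemd version that has systemctl cat, i.e. 209 or later) is:
$systemctl cat docker
This prints the main unit file followed by each drop-in, so you can see the overridden ExecStart and the Environment line together in one place.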
I had a similar problem and struggled for ages until I found this blog.
Basically follow these steps :
sudo vi /etc/systemd/system/docker.service.d/docker.conf and
add the following:
[Service]
# You need the empty 'ExecStart=' line below, otherwise you will get the error 'Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services'
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --insecure-registry yourregistry.mydomain.com:5000
Then finally:
sudo systemctl daemon-reload
sudo systemctl restart docker
Related
Please, do you know how to resolve this issue?
I searched everywhere without finding a solution.
06:45 SELinux is preventing systemd from open access on the file /root/.pm2/pm2.pid. For complete SELinux messages run: sealert -l d84a5a0b-cfcf-4cb9-918a-c0952bf70600 setroubleshoot
06:45 pm2-root.service: Can't convert PID files /root/.pm2/pm2.pid O_PATH file descriptor to proper file descriptor: Permission denied systemd 2
06:45 Failed to start PM2 process manager.
I have executed this command: sealert -l d84a5a0b-cfcf-4cb9-918a-c0952bf70600
Raw audit messages:
type=AVC msg=audit(1591498085.184:7731): avc: denied { open } for pid=1 comm="systemd" path="/root/.pm2/pm2.pid" dev="dm-0" ino=51695937 scontext=system_u:system_r:init_t:s0 tcontext=system_u:object_r:admin_home_t:s0 tclass=file permissive=0
PM2 Version : 4.4.0
NODE version : 12.18.0
CentOS Version : 8
My systemd service:
[Unit]
Description=PM2 process manager
Documentation=https://pm2.keymetrics.io/
After=network.target
[Service]
Type=forking
User=root
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Environment=PATH=/sbin:/bin:/usr/sbin:/usr/bin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Environment=PM2_HOME=/root/.pm2
PIDFile=/root/.pm2/pm2.pid
Restart=on-failure
ExecStart=/usr/lib/node_modules/pm2/bin/pm2 resurrect
ExecReload=/usr/lib/node_modules/pm2/bin/pm2 reload all
ExecStop=/usr/lib/node_modules/pm2/bin/pm2 kill
[Install]
WantedBy=multi-user.target
Thank you
As said in the comments, I had the exact same issue.
To solve this, just run the following commands as root after trying to start the PM2 service (in your case, this start attempt would be systemctl start pm2-root)
ausearch -c 'systemd' --raw | audit2allow -M my-systemd
semodule -i my-systemd.pp
This looks pretty generic, but it works. These lines were suggested by SELinux itself. To get them, I had to run the command journalctl -xe after trying to start the service.
Two options:
Edit the systemd file that starts pm2 and specify an alternative location for the pm2 PID file. You'll have to make two changes: one to tell pm2 where to place the PID file, and one to tell systemd where to look for it. Replace the existing PIDFile line with the following two lines:
Environment=PM2_PID_FILE_PATH=/run/pm2.pid
PIDFile=/run/pm2.pid
Create an SELinux rule that allows this particular behavior. You can do that exactly as Backslash36 suggests in their answer. If you want to create the policy file yourself rather than through audit2allow, the following should work, although then you have to compile it to a usable .pp file yourself.
module pm2 1.0;
require {
type user_home_t;
type init_t;
class file read;
}
#============= init_t ==============
allow init_t user_home_t:file read;
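If you do compile it by hand, a rough sketch of the steps (assuming the policy above is saved as pm2.te and that the checkmodule and semodule_package tools from the SELinux policy utilities are installed) would be:
# compile the type enforcement file into a policy module
checkmodule -M -m -o pm2.mod pm2.te
# package the module into an installable .pp file
semodule_package -o pm2.pp -m pm2.mod
# install the module
semodule -i pm2.pp
Note that the AVC message above references admin_home_t (the label used for /root), so depending on the exact denial you see, you may need admin_home_t rather than user_home_t in the require and allow lines.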
I am trying to connect a Django application to a MySQL docker container. I am using the latest version of MySQL, i.e. MySQL 8.0, to build the container. I was able to build the MySQL container successfully, but I am not able to connect to it using Django's default MySQL connector. When I run the docker-compose up command I get the error mentioned below.
django.db.utils.OperationalError: (1045, 'Plugin caching_sha2_password could not be loaded: /usr/lib/x86_64-linux-gnu/mariadb19/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory')
I started looking for a solution and got to know that MySQL has made a major change to its default authentication plugin, which is not supported by most MySQL connectors.
To fix this issue I will have to set default-authentication-plugin to mysql_native_password in the my.cnf file of the MySQL container.
I logged into the container using the command docker exec -it <container id> /bin/bash and was also able to locate the my.cnf file inside the container.
To edit the my.cnf file I will have to use the nano command, as stated below.
nano my.cnf
But unfortunately the nano command is not installed in the MySQL container. To install nano I will need sudo installed in the container.
I tried installing sudo using the below-mentioned command, but it did not work.
apt-get install sudo
error -
Reading package lists... Done
Building dependency tree
Reading state information... Done
What are the possible solutions to fix this issue?
In general you shouldn't try to directly edit files in containers. Those changes will get lost as soon as the container is stopped and deleted; this happens extremely routinely since many Docker options can only be set at startup time, and the standard way to update the software in a container is to recreate it with a newer image. In your case, you also can't live-edit a configuration file the main container process needs at startup time, because it will have already read the configuration file by the time you're able to edit it.
The right way to do this is to inject the modified configuration file at container startup time. If you haven't already, get the default configuration file out of the image
docker run --rm mysql:8 cat /etc/mysql/my.cnf > my.cnf
and edit it, directly, on your host, using your choice of text editor. When you launch the container, inject the modified file
docker run -v $PWD/my.cnf:/etc/mysql/my.cnf ... mysql:8
or, in Docker Compose,
volumes:
- ./my.cnf:/etc/mysql/my.cnf
The Docker Hub mysql image documentation has some more suggestions on ways to set this; see "Using a custom MySQL configuration file" there.
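For this particular setting, the change inside the copied my.cnf should be just one line under the [mysqld] section (add the section header if the copied file only contains includes):
[mysqld]
default-authentication-plugin=mysql_native_password
Alternatively, the same image lets you pass server flags as the container command, so a hedged Compose sketch (the service name db is a placeholder) could avoid the custom config file entirely:
  db:
    image: mysql:8
    command: --default-authentication-plugin=mysql_native_password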
While docker exec is an extremely useful debugging tool, it shouldn't be part of your core workflow, and I'd recommend trying to avoid it in cases like this. (With the bind-mount approach, you can add the modified config file to your source control system and blindly docker-compose up like normal without knowing this detail; a docker exec approach you'd have to remember and repeat by hand every time you started the container stack.)
Also note that you don't need sudo in Docker at all. Every context where you can run something (Dockerfiles, docker run, docker exec) has some way to explicitly specify the user ID, so you can docker exec -u root .... sudo generally depends on things like users having passwords and interactive prompting, which works well for administering a real Linux host but doesn't match a typical Docker environment.
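For example, to get a root shell in a running container without sudo (the container name some-mysql is a placeholder):
docker exec -u root -it some-mysql /bin/bash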
The issue is not with sudo, because you already have permission to install packages.
You should instead update the package manager before installing new packages, in order to refresh the package repositories:
apt-get update
apt-get install nano
The MySQL image is built on Oracle Linux, so run these commands to install nano:
microdnf update
microdnf install nano sudo -y
Then edit my.cnf with nano.
Disclaimer:
On an old machine with Ubuntu 14.04 and Upstart as the init system, I have enabled the HTTP API by defining DOCKER_OPTS in /etc/default/docker. It works.
$ docker version
Client:
Version: 1.11.2
(...)
Server:
Version: 1.11.2
(...)
Problem:
This solution does not work on a recent machine with Ubuntu 16.04 and systemd.
As stated at the top of the recently installed file /etc/default/docker:
# Docker Upstart and SysVinit configuration file
#
# THIS FILE DOES NOT APPLY TO SYSTEMD
#
# Please see the documentation for "systemd drop-ins":
# https://docs.docker.com/engine/articles/systemd/
#
(...)
As I checked this information on the Docker documentation page for systemd, I need to fill in a daemon.json file, but as stated in the reference, some properties are self-explanatory while others could be better explained.
That being said, I'm looking for help to convert this:
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -G myuser --debug"
to the daemon.json object?
Notes
PS1: I'm aware that daemon.json has debug: true as the default.
PS2: Probably "group": "myuser" will work like this, or with an array of strings.
PS3: My main concern is to use the Unix socket and HTTP simultaneously.
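For reference, my best guess at the equivalent daemon.json (untested; key names taken from the dockerd reference) would be something like:
{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"],
  "group": "myuser",
  "debug": true
}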
EDIT (8/08/2017)
After reading the accepted answer, check the #white_gecko answer for more input on the matter.
With a lot of fragmented documentation it was difficult to solve this.
My first solution was to create the daemon.json with
{
"hosts": [
"unix:///var/run/docker.sock",
"tcp://127.0.0.1:2376"
]
}
This did not work; I got the error docker[5586]: unable to configure the Docker daemon with file /etc/docker/daemon.json after trying to restart the daemon with service docker restart.
Note: there was more to the error that I failed to copy.
What this error meant was that, when starting the daemon, there was a conflict between a command-line flag and the configuration in daemon.json.
When I looked into it with service docker status, this was the parent process: ExecStart=/usr/bin/docker daemon -H fd://.
This was strange, because it differs from the configuration in /etc/init.d/docker, which I thought held the service configuration.
The strange part was that the file in init.d does not contain any reference to the daemon argument, nor to -H fd://.
After some research and a lot of searching through the system directories, I found the relevant directory (with help from the discussion in docker GitHub issue #22339).
Solution
I edited the ExecStart in /lib/systemd/system/docker.service to this new value:
/usr/bin/docker daemon
And created the /etc/docker/daemon.json with
{
"hosts": [
"fd://",
"tcp://127.0.0.1:2376"
]
}
Finally, I restarted the service with service docker start and now I get the "green light" from service docker status.
I tested the new configuration with:
$ docker run hello-world
Hello from Docker!
(...)
And,
$ curl http://127.0.0.1:2376/v1.23/info
[JSON]
I hope that this will help someone with a similar problem as mine! :)
I had the same problem, and in my eyes the easiest solution, which doesn't touch any existing files managed by the system update process, is to use a systemd drop-in:
Just create a file /etc/systemd/system/docker.service which overrides the specific part of the service in /lib/systemd/system/docker.service.
In this case the content of /etc/systemd/system/docker.service would be:
[Service]
ExecStart=/usr/bin/dockerd --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H=tcp://127.0.0.1:2375 -H=fd://
(You could even create a directory docker.service.d which contains multiple files to override different parameters, as sketched below.)
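For example (the file names are only illustrative and the proxy address is a placeholder), /etc/systemd/system/docker.service.d/ could hold one file for the listening sockets and another for proxy variables:
# 10-exec-start.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H=tcp://127.0.0.1:2375 -H=fd://
# 20-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Keep in mind that when you only use drop-ins (instead of replacing the whole unit as above), the empty ExecStart= line is needed to clear the ExecStart from the packaged unit first.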
After adding the file you just run:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
The solution described at https://docs.docker.com/engine/admin/#troubleshoot-conflicts-between-the-daemonjson-and-startup-scripts works for me:
One notable example of a configuration conflict that is difficult to
troubleshoot is when you want to specify a different daemon address
from the default. Docker listens on a socket by default. On Debian and
Ubuntu systems using systemd, this means that a -H flag is always
used when starting dockerd. If you specify a hosts entry in the
daemon.json, this causes a configuration conflict (as in the above
message) and Docker fails to start.
To work around this problem, create a new file
/etc/systemd/system/docker.service.d/docker.conf with the following
contents, to remove the -H argument that is used when starting the
daemon by default.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
Note that the empty ExecStart= line is actually required; otherwise it'll fail with the error:
docker.service: Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. Refusing.
After creating the file you must run:
sudo systemctl daemon-reload
sudo systemctl restart docker
For me, on Ubuntu 18.04.1 LTS with Docker 18.06.0-ce, it worked to create
/etc/systemd/system/docker.service.d/remote-api.conf
with the following content:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
then run sudo systemctl daemon-reload and sudo systemctl restart docker
See the result by calling:
curl http://localhost:2376/info
You might need to configure a proxy if your Docker host is behind one.
To achieve this, paste the following into the /etc/default/docker file:
http_proxy="http://85.22.53.71:8080/"
https_proxy="http://85.22.53.71:8080/"
HTTP_PROXY="http://85.22.53.71:8080/"
HTTPS_PROXY="http://85.22.53.71:8080/"
# below you can list some *.enterprise_domain.com as well
NO_PROXY="localhost,127.0.0.1,::1"
Or create
/etc/systemd/system/docker.service.d/remote-api.conf with the following content:
[Service]
Environment="HTTP_PROXY=http://<your_proxy_ip>:<port>"
Environment="HTTPS_PROXY=https://<your_proxy_ip>:<port>/"
Environment="NO_PROXY=localhost,127.0.0.1,::1"
I hope it helps someone...
I'm trying to run a docker mysql container with an initialized db according to the instructions provided in this message: https://stackoverflow.com/a/29150538/6086816. After the first run it works OK, but on the second run, after trying to execute /usr/sbin/mysqld from the script, I get this error:
db_1 | 2016-03-19T14:50:14.819377Z 0 [ERROR] Another process with pid 10 is using unix socket file.
db_1 | 2016-03-19T14:50:14.819498Z 0 [ERROR] Unable to setup unix socket lock file.
...
mdir_db_1 exited with code 1
What can be the reason for it?
I was facing the same issue. Following are the steps that I took to resolve it:
Firstly, stop your docker service using the following command: sudo service docker stop
Now, get into the docker folder on your Linux system at the following path:
/var/lib/docker.
Then, within the docker folder, you need to get into the volumes folder. This folder contains the volumes of all your containers (the stored data of each container):
cd volumes
After getting into volumes, do sudo ls and you will find multiple folders with hash names. These folders are the volumes of your containers; each folder is named after its hash.
(You need to inspect your docker container and get the hash of your container volume. For this, do the following steps:
Run the command docker inspect <your container ID>.
Now you will get a JSON file. It is the config file of your docker container.
Search for the Mounts key within this JSON output. In Mounts, you will get the Name (hash) of your volume. You will also get the path of your volume within Mounts: the "Name" key is your volume name and "Source" is the path where your volume is located.)
Once you get the name of your volume, you can go into your volume folder, and within it you will find the "_data" folder. Get into this folder.
Finally, within the "_data" folder, use the sudo ls command and you will find a file named mysql.sock.lock. Remove it with rm -f mysql.sock.lock.
Now restart your docker service and then start your docker container. It will start working.
Note: use sudo with each command while you are in the docker folder.
You should make sure the socket file has been deleted before you start mysql. Check the my.cnf (/etc/mysql/my.cnf) file to get the path of the socket file;
you will find something like socket = /var/run/mysqld/mysqld.sock. Delete the .sock.lock file as well.
This is a glitch with docker.
Execute following commands:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
This will stop all containers and remove them.
After this it should work just fine.
Just faced the same problem.
After much research, here is a summary of my solution:
Find host location of docker files
$ docker inspect <container_name> --> Mounts.Source section
In my case, it was /var/snap/docker/common/.../_data
As root, you can ls -l that directory and see the files that are preventing your container from starting: the socket mysql.sock and the file mysql.sock.lock.
Simply delete them as root ($ sudo rm /var/snap/.../_data/mysql.sock*) and start your docker container.
NOTE: be sure you don't have any other mysql.sock... files besides those two. If you do, don't use the wildcard (*); delete each of them individually.
Hope this helps.
I had the same problem and got rid of it in an easy and mysterious way.
First I noticed that I was unable to start the mysql_container container. Running docker logs mysql_container indicated exactly the same problem as described, repeated a few times.
I wanted to have a look around by running the container in interactive mode with docker start -i mysql_container from one bash window, while running things like
docker exec -it mysql_container cat /etc/mysql/my.cnf in another.
I did that and was very surprised to see that this time the container started successfully. I cannot understand why. I can only guess that starting in interactive mode together with running subsequent docker exec commands slowed down the init process, and some other process had a bit more time to remove its locks.
Hope that helps anybody.
I'm running (from Windows 8.1) a Vagrant VM for CoreOS (yungsang/coreos).
I installed kubernetes according to the guide I found here and created the json for the pod using my images.
When I execute sudo ./kubecfg list /pods I get the following error:
F0909 06:03:04.626251 01933 kubecfg.go:182] Got request error: Get http://localhost:8080/api/v1beta1/pods?labels=: dial tcp 127.0.0.1:8080: connection refused
Same goes for sudo ./kubecfg -h http://127.0.0.1:8080 -c /vagrant/app.json create /pods
EDIT: Update
Instead of running the commands myself, I integrated them into the Vagrantfile (as such).
This makes kubernetes work fine. HOWEVER, after some time my vagrant ssh connection gets closed off. I reconnect, and any kubernetes commands I run result in the same error as above.
EDIT 2: Update
I managed to get it to run again; however, I am unsure whether it will keep running smoothly.
I had to re-execute the following commands.
sudo systemctl start etcd
sudo systemctl start download-kubernetes
sudo systemctl start apiserver
sudo systemctl start controller-manager
sudo systemctl start kubelet
sudo systemctl start proxy
I believe it is in fact the apiserver that needs restarting
What is the source of this "timeout"? (Where can I find any logs on this matter?)
Kubernetes development is moving insanely fast right now so this could be out of date by tomorrow. With that in mind, the kubernetes folks recommend following one of their official installation guides. The best advice would be to start over fresh with one of the new installation guides but there are a few tips that I have learned doing this myself.
The first thing to note is that Kubecfg is being deprecated in favor of kubectl. So for future reference if you want to get info about a pod you would run something like:
./kubectl get pods.
With kubectl you will also need to set an env variable so kubectl knows how to talk to the apiserver:
KUBERNETES_MASTER=http://IPADDRESS:8080.
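For example (IPADDRESS being wherever your apiserver is listening):
export KUBERNETES_MASTER=http://IPADDRESS:8080
./kubectl get pods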
The easiest way to debug exactly what is going on, if you are using CoreOS, is to tail the logs for the service you are interested in. So if you have a kube-apiserver unit you can look at what's going on by running:
journalctl -f -u kube-apiserver
from the node that is running the apiserver. If that service isn't running, which may be the case, you can start it with:
systemctl start kube-apiserver
On CoreOS you should look at the logs using journalctl.
For example, if you wish to see the etcd logs, which Kubernetes relies on for storing the state of its minions, run journalctl _COMM=etcd; similarly, journalctl _COMM=apiserver will show you the logs from the apiserver, one of the key components in Kubernetes.
You also get the last few log entries if you run systemctl status apiserver.
Based on errordeveloper's advice: my recent installation ran into a similar problem.
Using systemctl status apiserver and sudo systemctl start apiserver, I managed to get the environment up and running again.