Kubernetes: Failed to pull image. Server gave HTTP response to HTTPS client

I'm trying to use Kubernetes with Docker. My image runs with Docker. I have one master node and two worker nodes. I also created a local registry like this: $ docker run -d -p 5000:5000 --restart=always --name registry registry:2 and pushed my image into it. Everything worked fine so far.
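(For reference, a sketch of the tag-and-push step, assuming the image was built locally under the name helloworldimage:
sudo docker tag helloworldimage xxx.xxx.xxx.xxx:5000/helloworldimage
sudo docker push xxx.xxx.xxx.xxx:5000/helloworldimage)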
I added { "insecure-registries":["xxx.xxx.xxx.xxx:5000"] } to the daemon.json file at /etc/docker. And I also changed the content of the docker file at /etc/default/docker to DOCKER_OPTS="--config-file=/etc/docker/daemon.json". I made the changes on all nodes and restarted the Docker daemon afterwards.
I am able to pull my image from every node with the following command:
sudo docker pull xxx.xxx.xxx.xxx:5000/helloworldimage
I try to create my container from the master node with the command below:
sudo kubectl run test --image xxx.xxx.xxx.xxx:5000/helloworldimage
Then I get the following error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/test-775f99f57-m9r4b to rpi-2
Normal BackOff 18s (x2 over 44s) kubelet, rpi-2 Back-off pulling image "xxx.xxx.xxx.xxx:5000/helloworldimage"
Warning Failed 18s (x2 over 44s) kubelet, rpi-2 Error: ImagePullBackOff
Normal Pulling 3s (x3 over 45s) kubelet, rpi-2 Pulling image "xxx.xxx.xxx.xxx:5000/helloworldimage"
Warning Failed 3s (x3 over 45s) kubelet, rpi-2 Failed to pull image "xxx.xxx.xxx.xxx:5000/helloworldimage": rpc error: code = Unknown desc = failed to pull and unpack image "xxx.xxx.xxx.xxx:5000/helloworldimage:latest": failed to resolve reference "xxx.xxx.xxx.xxx:5000/helloworldimage:latest": failed to do request: Head https://xxx.xxx.xxx.xxx:5000/v2/helloworldimage/manifests/latest: http: server gave HTTP response to HTTPS client
Warning Failed 3s (x3 over 45s) kubelet, rpi-2 Error: ErrImagePull
This is the docker version I use:
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:37:22 2019
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:31:17 2019
  OS/Arch:          linux/arm
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
This is the Kubernetes version I use:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0+k3s.1", GitCommit:"0f644650f5d8e9f091629f860b342f221c46f6d7", GitTreeState:"clean", BuildDate:"2020-01-06T23:20:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/arm"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0+k3s.1", GitCommit:"0f644650f5d8e9f091629f860b342f221c46f6d7", GitTreeState:"clean", BuildDate:"2020-01-06T23:20:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/arm"}```

I solved this problem by adding
{ "insecure-registries":["xxx.xxx.xxx.xxx:5000"] }
to the daemon.json file at /etc/docker on all Kubernetes nodes.
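A minimal sketch of that per-node change, assuming Docker is the runtime the kubelet actually uses on each node: /etc/docker/daemon.json on every master and worker node should contain
{
  "insecure-registries": ["xxx.xxx.xxx.xxx:5000"]
}
followed by a restart of the daemon so the setting takes effect:
sudo systemctl restart docker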

It appears that in some situations the solution described here solves the problem:
sudo systemctl edit docker
Add the lines below:
[Service]
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry registry:5000
sudo systemctl daemon-reload
systemctl restart docker
systemctl status docker
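One caveat, as a hedged note: a drop-in created with systemctl edit is merged with the existing unit, and systemd rejects a second ExecStart= line for a normal service, so the override usually has to clear the original first:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry registry:5000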


Podman Non-Root "Error setting up pivot dir"

First time posting on StackOverflow, so please be gentle!
I'm setting up a new RHEL8 server to run Podman. Previously, I've done this on a pretty vanilla server, but this one is set up in line with our corporate image. This means a homedir that is mounted over NFS.
When I try a simple podman command such as podman run centos, I get a couple of errors (see below). According to https://github.com/containers/podman/blob/main/rootless.md, Podman non-root is known to have problems with NFS homedirs.
Output from podman run centos (and others):
❯ podman run centos
Resolved "centos" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull quay.io/centos/centos:latest...
Getting image source signatures
Copying blob 7a0437f04f83 done
Error: writing blob: adding layer with blob "sha256:7a0437f04f83f084b7ed68ad9c4a4947e12fc4e1b006b38129bac89114ec3621": Error processing tar file(exit status 1): Error setting up pivot dir: mkdir /home/me/.local/share/containers/storage/overlay/2653d992f4ef2bfd27f94db643815aa567240c37732cae1405ad1c1309ee9859/diff/.pivot_root926823499: permission denied
No, my username isn't really 'me'
Is there a way to use Podman non-root in this setup? I'd prefer to avoid creating a local user account to run things under (this is my dev server and isn't where the application will actually be running, but it will involve me building, running, and destroying things regularly, so I'd rather avoid having to do anything 'clever').
Output of podman info:
❯ podman info
host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.32-1.module+el8.5.0+13852+150547f7.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.32, commit: 4b12bce835c3f8acc006a43620dd955a6a73bae0'
  cpus: 1
  distribution:
    distribution: '"rhel"'
    version: "8.5"
  eventLogger: file
  hostname: servername
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 2000
      size: 1
    uidmap:
    - container_id: 0
      host_id: 10279927
      size: 1
  kernel: 4.18.0-348.12.2.el8_5.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 1881419776
  memTotal: 3918233600
  ociRuntime:
    name: runc
    package: runc-1.0.3-1.module+el8.5.0+13556+7f055e70.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.3
      spec: 1.0.2-dev
      go: go1.16.7
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    path: /run/user/10279927/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module+el8.5.0+12582+56d94c81.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 2h 45m 20.28s (Approximately 0.08 days)
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/me/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.8-1.module+el8.5.0+13754+92ec836b.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.8
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/me/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: nfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 0
  runRoot: /run/user/10279927/containers
  volumePath: /home/me/.local/share/containers/storage/volumes
version:
  APIVersion: 3.4.2
  Built: 1642068949
  BuiltTime: Thu Jan 13 10:15:49 2022
  GitCommit: ""
  GoVersion: go1.16.7
  OsArch: linux/amd64
  Version: 3.4.2
Thank you!
Based on this article, https://www.redhat.com/sysadmin/rootless-podman-nfs, Podman and NFS home directories don't mix well together.
This can be worked around by changing the graphroot (which is described in the above article) to write to a local, non-NFS location.
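A minimal sketch of that workaround, assuming a writable local path such as /var/tmp/me/containers/storage (the path is a placeholder, not from the article):
# ~/.config/containers/storage.conf
[storage]
driver = "overlay"
graphroot = "/var/tmp/me/containers/storage"
After changing graphroot, running podman system reset wipes the old (NFS-backed) state so Podman re-creates its storage in the new location.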

Failed to start gunicorn.service: Unit gunicorn.service is masked

I am trying to deploy a Django web application on Alibaba Cloud. Everything seems to be working perfectly (running gunicorn --bind 0.0.0.0:8000 project_name.wsgi in the virtual environment).
Then, after deactivating the virtual environment, I set up
[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=admin
Group=www-data
WorkingDirectory=/home/admin/project_name
ExecStart=/home/admin/project_name/myprojectenv/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/admin/project_name/project_name.sock project_name.wsgi:application
in /etc/systemd/system/gunicorn.service
Then, running sudo systemctl start gunicorn, I keep getting the error
Failed to start gunicorn.service: Unit gunicorn.service is masked.
Please, how can I fix this?
I have tried systemctl unmask gunicorn.socket but it keeps showing me the error
Unit gunicorn.socket does not exist, proceeding anyway.
Failed to unmask unit: The name org.freedesktop.PolicyKit1 was not provided
by any .service files
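A hedged guess, not from the thread: the PolicyKit error usually means systemctl was run without root privileges, and the masked unit here is the service rather than the socket, so the following may be worth trying:
sudo systemctl unmask gunicorn.service
sudo systemctl daemon-reload
sudo systemctl start gunicorn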

AWS Beanstalk with working app fails to deploy app to new ec2 when current one is terminated

I have a new beanstalk that is a migration of an old one running an app under the PHP 5.6 platform on the Amazon Linux AMI. The new beanstalk is running PHP 7.3 on Amazon Linux 2. I have worked through all the migration issues and the app is running correctly on my new beanstalk. I have a classic load balancer and I run autoscaling with the max and min instance settings both set to 1.
The problem occurs when I terminate the EC2 instance. Autoscaling creates a new EC2 instance, but the application isn't deployed to it.
Does anyone know why this might be, or where I can look to try and debug the issue?
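(A hedged debugging note, assuming SSH access to the replacement instance: on Amazon Linux 2 the deployment engine logs to the instance itself, so these are reasonable places to look:
less /var/log/eb-engine.log   # Elastic Beanstalk deployment engine log
less /var/log/cfn-init.log    # cfn-init log covering .ebextensions processing
)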
What worked for me was to remove the old .ebextensions config files related to cwlogs, and to add the awslogs package under the packages section of my config:
packages:
  yum:
    awslogs: []
Then create a new conf file as follows:
files:
  "/tmp/start_aws_cloudwatch_service.sh":
    content: |
      #!/bin/sh
      systemctl start awslogsd
      systemctl status awslogsd
      systemctl enable awslogsd.service
      exit $?
    mode: "000755"
    owner: root
    group: root

commands:
  start_aws_cloudwatch_service:
    cwd: /tmp
    command: bash /tmp/start_aws_cloudwatch_service.sh
After this I could see that the service was up and running
$ systemctl status awslogsd
● awslogsd.service - awslogs daemon
   Loaded: loaded (/usr/lib/systemd/system/awslogsd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-10-14 14:08:19 UTC; 34min ago
 Main PID: 4029 (aws)
   CGroup: /system.slice/awslogsd.service
           └─4029 /usr/bin/python2 -s /usr/bin/aws logs push --config-file /etc/awslogs/awslogs.conf --additional-configs-dir /etc/awslogs/c...

Oct 14 14:08:19 ip-xxx-xxx-30-7.eu-west-1.compute.internal systemd[1]: Started awslogs daemon.
Oct 14 14:08:19 ip-xxx-xxx-30-7.eu-west-1.compute.internal systemd[1]: Starting awslogs daemon...
See: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html

OpenShift import-image error "! error: Import failed (Unauthorized): you may not have access to the Docker image"

I need some inputs/help to bring up my (customized) container on OpenShift.
[mag-vm#mag-vm-centos-2 ~]$ sudo docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
mag_main     latest   e039447d7212   13 days ago   1.316 GB
[mag-vm#mag-vm-centos-2 ~]$ oc import-image mag_main:latest
error: no image stream named "mag_main" exists, pass --confirm to create and import
[mag-vm#mag-vm-centos-2 ~]$ oc import-image mag_main:latest --confirm
The import completed with errors.
Name: mag_main
Namespace: cirrus
Created: Less than a second ago
Labels: <none>
Annotations: openshift.io/image.dockerRepositoryCheck=2017-06-07T16:24:49Z
Docker Pull Spec: 172.30.124.119:5000/cirrus/mag_main
Unique Images: 0
Tags: 1
latest
tagged from mag_main:latest
! error: Import failed (Unauthorized): you may not have access to the Docker image "mag_main:latest"
Less than a second ago
[mag-vm#mag-vm-centos-2 ~]$
Could you please help me overcome this issue? Is it something to do with the "secret" settings?
Thanks in advance.
Also, as an additional input: I am able to bring up the container for this Docker image using the "docker run" command.
STEP #1: sudo docker run -t mag_main:latest /bin/bash
STEP #2: Once the container is up, I used "./bin/karaf" to run the services inside this Docker container.
Please let me know how I can do the same from OpenShift.
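(A sketch of one common approach, assuming the integrated registry shown in the pull spec above, 172.30.124.119:5000, is reachable and your user may push to the cirrus project; names are taken from the output above:
# log in to the integrated registry with your OpenShift token
sudo docker login -u $(oc whoami) -p $(oc whoami -t) 172.30.124.119:5000
# retag the local image with the registry pull spec and push it
sudo docker tag mag_main:latest 172.30.124.119:5000/cirrus/mag_main:latest
sudo docker push 172.30.124.119:5000/cirrus/mag_main:latest
)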
OpenShift details:
[mag-vm#mag-vm-centos-2 ~]$ oc version
oc v1.5.1+7b451fc
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://10.100.71.160:8443
openshift v1.5.1+7b451fc
kubernetes v1.5.2+43a9be4
[mag-vm#mag-vm-centos-2 ~]$
Docker details:
[mag-vm#mag-vm-centos-2 ~]$ sudo docker version
Client:
 Version:      1.11.2
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   b9f10c9
 Built:        Wed Jun 1 21:23:11 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.2
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   b9f10c9
 Built:        Wed Jun 1 21:23:11 2016
 OS/Arch:      linux/amd64
[mag-vm#mag-vm-centos-2 ~]$

Docker Remote API does not list containers

I have a locally installed Docker server which runs one container.
CONTAINER ID   IMAGE    COMMAND       CREATED       STATUS       PORTS                  NAMES
3d7ef4f6bb0a   debian   "/bin/bash"   7 hours ago   Up 7 hours   0.0.0.0:80->2376/tcp   nostalgic_fermat
When I tried to use the Docker Remote API to get information about this container, I did not see the JSON output about the containers running on the host. The result of the REST call is:
wget -v 192.168.99.100:2376/containers/json/
--2016-01-16 23:57:20-- http://192.168.99.100:2376/containers/json/
Connecting to 192.168.99.100:2376... connected.
HTTP request sent, awaiting response... 200 No headers, assuming HTTP/0.9
Length: unspecified
Saving to: 'index.html.3'
index.html.3 [ <=> ] 7 --.-KB/s in 0s
2016-01-16 23:57:20 (297 KB/s) - 'index.html.3' saved [7]
What exactly am I missing?
The API version is:
Client:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   76d6bc9
 Built:        Tue Nov 3 19:20:09 UTC 2015
 OS/Arch:      darwin/amd64
EDIT (RESOLVED)
It appears that the Docker server requires SSL authentication. I was able to authenticate to the Docker host by providing the local Docker server certificates.
The following command stores a JSON file with information about all the containers running on the local Docker server.
wget --no-check-certificate --ca-certificate ca.pem --certificate=cert.pem --certificate-type=PEM --private-key=key.pem --private-key-type=PEM https://192.168.99.100:2376/containers/json
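For reference, an equivalent curl invocation (a sketch, assuming the same ca.pem, cert.pem, and key.pem files are in the current directory):
curl --cacert ca.pem --cert cert.pem --key key.pem https://192.168.99.100:2376/containers/json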