Failed to start gunicorn.service: Unit gunicorn.service is masked - gunicorn

I am trying to deploy a Django web application on Alibaba Cloud. Everything seems to be working perfectly (running gunicorn --bind 0.0.0.0:8000 project_name.wsgi inside the virtual environment).
Then, after deactivating the virtual environment, I set up
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=admin
Group=www-data
WorkingDirectory=/home/admin/project_name
ExecStart=/home/admin/project_name/myprojectenv/bin/gunicorn --access-logfile - --workers 3 \
    --bind unix:/home/admin/project_name/project_name.sock project_name.wsgi:application
in /etc/systemd/system/gunicorn.service.
When I then run sudo systemctl start gunicorn, I keep getting the error:
Failed to start gunicorn.service: Unit gunicorn.service is masked.
How can I fix this, please?
I have tried systemctl unmask gunicorn.socket, but it keeps showing me the error:
Unit gunicorn.socket does not exist, proceeding anyway.
Failed to unmask unit: The name org.freedesktop.PolicyKit1 was not provided
by any .service files
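The error output above names gunicorn.service as the masked unit, while the unmask command targets gunicorn.socket, and the org.freedesktop.PolicyKit1 message is what systemctl typically prints when it is run without sufficient privileges. A minimal sketch of what is commonly tried in this situation, assuming the unit file really is /etc/systemd/system/gunicorn.service and that the commands need root:
# Sketch only: unmask the unit the error actually names, reload, then retry.
sudo systemctl unmask gunicorn.service
sudo systemctl daemon-reload
sudo systemctl start gunicorn
sudo systemctl status gunicorn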

Related

AWS Beanstalk with working app fails to deploy app to new ec2 when current one is terminated

I have a new beanstalk that is a migration of an old one running an app under the PHP 5.6 platform on the Amazon Linux AMI. The new beanstalk is running PHP 7.3 on Amazon Linux 2. I have worked through all the migration issues and the app is running correctly on my new beanstalk. I have a load balancer (classic) and I run autoscaling with the max and min instance settings both set to 1.
The problem occurs when I terminate the EC2 instance. The autoscaling creates a new EC2 instance, but it isn't deploying the application to it.
Does anyone know why this might be, or where I can look to try and debug the issue?
What worked for me was to remove the old .ebextensions config files related to cwlogs and to add the line
awslogs: []
to the packages:/yum: section of my config, i.e.:
packages:
  yum:
    awslogs: []
then create a new conf file as follows:
files:
  "/tmp/start_aws_cloudwatch_service.sh":
    content: |
      #!/bin/sh
      systemctl start awslogsd
      systemctl status awslogsd
      systemctl enable awslogsd.service
      exit $?
    mode: "000755"
    owner: root
    group: root
commands:
  start_aws_cloudwatch_service:
    cwd: /tmp
    command: bash /tmp/start_aws_cloudwatch_service.sh
After this I could see that the service was up and running
$ systemctl status awslogsd
● awslogsd.service - awslogs daemon
Loaded: loaded (/usr/lib/systemd/system/awslogsd.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2020-10-14 14:08:19 UTC; 34min ago
Main PID: 4029 (aws)
CGroup: /system.slice/awslogsd.service
└─4029 /usr/bin/python2 -s /usr/bin/aws logs push --config-file /etc/awslogs/awslogs.conf --additional-configs-dir /etc/awslogs/c...
Oct 14 14:08:19 ip-xxx-xxx-30-7.eu-west-1.compute.internal systemd[1]: Started awslogs daemon.
Oct 14 14:08:19 ip-xxx-xxx-30-7.eu-west-1.compute.internal systemd[1]: Starting awslogs daemon...
See: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html

systemd / podman: "This usually indicates unclean termination of a previous run, or service implementation deficiencies"

I am running a container with systemd/Podman. When I want to deploy a new image tag, I stop the service, update the service file, and start it again, but the container fails to start.
The systemd unit file:
[Unit]
Description=hello_api Podman Container
After=network.target
[Service]
Restart=on-failure
RestartSec=3
ExecStartPre=/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm hello_api
ExecStart=/usr/bin/podman run --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid -d -h modelenv \
--name hello_api --rm --ulimit=host -p "8001:8001" -p "8443:8443" 7963-hello_api:7.8
ExecStop=/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Type=forking
PIDFile=/%t/%n-pid
[Install]
WantedBy=default.target
Here is the error message:
May 21 10:41:43 webserver systemd[1471]: hello_api.service: Found left-over process 22912 (conmon) in control group while starting unit. Ignoring.
May 21 10:41:43 webserver systemd[1471]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 21 10:41:43 webserver systemd[1471]: hello_api.service: Found left-over process 22922 (node) in control group while starting unit. Ignoring.
May 21 10:41:43 webserver systemd[1471]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 21 10:41:43 webserver systemd[1471]: hello_api.service: Found left-over process 22960 (node) in control group while starting unit. Ignoring.
May 21 10:41:43 webserver systemd[1471]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 21 10:41:44 webserver podman[24565]: 2020-05-21 10:41:44.586396547 -0400 EDT m=+1.090025069 container create 28eaf881f532339766cc96ec27a69d8ad588e07d4bfc70e65e7c54e8a5082933 (image=7963-hello_api:7.8, name=hello_api)
May 21 10:41:45 webserver podman[24565]: Error: error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]
May 21 10:41:45 webserver systemd[1471]: hello_api.service: Control process exited, code=exited status=126
May 21 10:41:45 webserver systemd[1471]: hello_api.service: Failed with result 'exit-code'.
May 21 10:41:45 webserver systemd[1471]: Failed to start call_center_hello_api Podman Container.
Why is it giving this error? Is there an option to cleanly exit the old container?
I think we followed the same tutorial here: https://www.redhat.com/sysadmin/podman-shareable-systemd-services
"It’s important to set the kill mode to none. Otherwise, systemd will start competing with Podman to stop and kill the container processes. which can lead to various undesired side effects and invalid states"
I'm not sure if the behavior changed, but I removed the KillMode=none causing it to use the default KillMode=control-group. I have not had any problems managing the service since. Also, I removed the / from some of the commands because it was being duplicated:
ExecStartPre=/usr/bin/rm -f //run/user/1000/registry.service-pid //run/user/1000/registry.service-cid
It's now:
ExecStartPre=/usr/bin/rm -f /run/user/1000/registry.service-pid /run/user/1000/registry.service-cid
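The %t specifier expands to the service manager's runtime directory, which is already an absolute path (for a user manager it resolves to $XDG_RUNTIME_DIR, e.g. /run/user/1000), so putting another / in front of it is what produced the doubled //run/... path above. A quick way to see that directory for the current user session, assuming a normal login environment:
# Prints something like /run/user/1000, i.e. what %t expands to for a user manager
echo "$XDG_RUNTIME_DIR"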
The full service file I use for running a docker registry:
[Unit]
Description=Image Registry
[Service]
Restart=on-failure
ExecStartPre=-/usr/bin/podman volume create registry
ExecStartPre=/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStart=/usr/bin/podman run --conmon-pidfile %t/%n-pid --cidfile %t/%n-cid -d -p 5000:5000 -v registry:/var/lib/registry --name registry docker.io/library/registry
ExecStop=/usr/bin/sh -c "/usr/bin/podman rm -f `cat %t/%n-cid`"
Type=forking
PIDFile=/%t/%n-pid
[Install]
WantedBy=multi-user.target
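After editing a unit file like this, systemd has to re-read it before the changes take effect. A minimal sketch, assuming the file is installed as registry.service under the system manager (add --user to each command if it runs under a user service manager instead):
sudo systemctl daemon-reload              # pick up the edited unit file
sudo systemctl enable --now registry.service
systemctl status registry.service         # confirm the container started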

Kubernetes: Failed to pull image. Server gave HTTP response to HTTPS client

I'm trying to use Kubernetes with Docker. My image runs with Docker. I have one master node and two worker nodes. I also created a local registry like this: $ docker run -d -p 5000:5000 --restart=always --name registry registry:2, and pushed my image into it. Everything has worked fine so far.
I added { "insecure-registries":["xxx.xxx.xxx.xxx:5000"] } to the daemon.json file at /etc/docker. I also changed the content of the docker file at /etc/default/ to DOCKER_OPTS="--config-file=/etc/docker/daemon.json". I made the changes on all nodes and restarted the Docker daemon afterwards.
I am able to pull my image from every node with the following command:
sudo docker pull xxx.xxx.xxx.xxx:5000/helloworldimage
I try to create my container from the master node with the command below:
sudo kubectl run test --image xxx.xxx.xxx.xxx:5000/helloworldimage
Then I get the following error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/test-775f99f57-m9r4b to rpi-2
Normal BackOff 18s (x2 over 44s) kubelet, rpi-2 Back-off pulling image "xxx.xxx.xxx.xxx:5000/helloworldimage"
Warning Failed 18s (x2 over 44s) kubelet, rpi-2 Error: ImagePullBackOff
Normal Pulling 3s (x3 over 45s) kubelet, rpi-2 Pulling image "xxx.xxx.xxx.xxx:5000/helloworldimage"
Warning Failed 3s (x3 over 45s) kubelet, rpi-2 Failed to pull image "xxx.xxx.xxx.xxx:5000/helloworldimage": rpc error: code = Unknown desc = failed to pull and unpack image "xxx.xxx.xxx.xxx:5000/helloworldimage:latest": failed to resolve reference "xxx.xxx.xxx.xxx:5000/helloworldimage:latest": failed to do request: Head https://xxx.xxx.xxx.xxx:5000/v2/helloworldimage/manifests/latest: http: server gave HTTP response to HTTPS client
Warning Failed 3s (x3 over 45s) kubelet, rpi-2 Error: ErrImagePull
This is the docker version I use:
Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:37:22 2019
OS/Arch: linux/arm
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:31:17 2019
OS/Arch: linux/arm
Experimental: false
containerd:
Version: 1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683
This is the Kubernetes version I use:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0+k3s.1", GitCommit:"0f644650f5d8e9f091629f860b342f221c46f6d7", GitTreeState:"clean", BuildDate:"2020-01-06T23:20:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/arm"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0+k3s.1", GitCommit:"0f644650f5d8e9f091629f860b342f221c46f6d7", GitTreeState:"clean", BuildDate:"2020-01-06T23:20:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/arm"}
Kubernetes: Failed to pull image. Server gave HTTP response to HTTPS client.
Adding
{ "insecure-registries":["xxx.xxx.xxx.xxx:5000"] }
to the daemon.json file at /etc/docker:
I solved this problem by applying that configuration on all Kubernetes nodes.
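A minimal sketch of what that amounts to, assuming the registry address from the question and the standard Docker paths (run on every node, not just the master):
# /etc/docker/daemon.json on every node:
# { "insecure-registries": ["xxx.xxx.xxx.xxx:5000"] }
sudo systemctl restart docker             # restart the daemon so it picks up the change
sudo docker pull xxx.xxx.xxx.xxx:5000/helloworldimage   # verify the pull works on this node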
It appears that in some situations the solution described here also solved the problem:
sudo systemctl edit docker
Add the lines below:
[Service]
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry registry:5000
sudo systemctl daemon-reload
systemctl restart docker
systemctl status docker
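Either way, one quick check that the daemon actually picked up the insecure-registry setting (a sketch; the exact output format varies between Docker versions):
# The registry address should appear under "Insecure Registries:"
docker info | grep -A 3 "Insecure Registries"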

Failed to start couchbase-server.service: Unit couchbase-server.service failed to load

When I run this command to install Couchbase Server
sudo rpm --install couchbase-server-community-4.0.0.centos7.x86_64.rpm
on my Fedora 22 system, I got the error below:
Starting couchbase-server (via systemctl): Failed to start couchbase-server.service: Unit couchbase-server.service failed to load: No such file or directory. [Failed]
You have successfully installed Couchbase Server.
How can I fix this error?

sudo systemctl start returns "Failed to wait for response: Success"

I'm following this tutorial to install nginx and mysql on a new server.
I'm running into problems when I run either sudo systemctl start mysqld && mysql_secure_installation or sudo systemctl start nginx.
With either of these I get the response "Failed to wait for response: Success". I'm not sure what this means, but I assume it means something went wrong. Do you have any idea what this message means and what I can do about it?
I faced a similar issue with systemctl where the start and stop commands would always fail, but the services were actually getting started and stopped correctly.
You can check whether the services are really being started and stopped correctly with systemctl status service-name.
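For example, with the services from the question, something like this shows whether the units actually came up despite the odd message (a sketch):
systemctl status mysqld
systemctl status nginx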
I have also seen the mentioned error message Failed to wait for response: Success in many other posts out there, so it seems to be a problem with systemctl itself.
After upgrading package systemd to version 215-4 the problem went away.
In my particular case the situation was worse, because systemctl failures were blocking package installation.
Namely, package systemd depends on package udev, but package udev could not be configured because service start/stop operations would fail.
The solution was to force the installation of the systemd package while ignoring the udev dependency, and then fix the dependencies afterwards:
# dpkg --ignore-depends=udev --install /var/cache/apt/archives/systemd_215-4_amd64.deb
# apt-get install -f