About upstart and systemd - gunicorn

My system is Ubuntu 16.04, so I need to convert my upstart job to systemd. Is adding /etc/systemd/system/gunicorn.service all I need to do?
My upstart job is:
start on net-device-up
stop on shutdown
respawn
setuid yangxg
chdir /home/yangxg/sites/demo.zmrenwu.com/blogproject
exec ../env/bin/gunicorn --bind unix:/tmp/demo.zmrenwu.com.socket blogproject.wsgi:application
How can I translate it into a systemd unit like this?
[Unit]
Description=My script
[Service]
ExecStart=/usr/bin/my-script
[Install]
WantedBy=multi-user.target
I'm not clear about these parameters; I'm new to gunicorn.
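A single file at /etc/systemd/system/gunicorn.service should indeed be enough. The upstart stanzas map almost one-to-one onto [Service] directives: setuid becomes User=, chdir becomes WorkingDirectory=, respawn becomes Restart=, and start on net-device-up is approximated by After=network.target. A sketch, assuming the env directory sits next to blogproject as the relative path in your exec line implies (systemd requires an absolute path in ExecStart):
[Unit]
Description=gunicorn daemon for blogproject
After=network.target
[Service]
User=yangxg
WorkingDirectory=/home/yangxg/sites/demo.zmrenwu.com/blogproject
ExecStart=/home/yangxg/sites/demo.zmrenwu.com/env/bin/gunicorn --bind unix:/tmp/demo.zmrenwu.com.socket blogproject.wsgi:application
Restart=always
[Install]
WantedBy=multi-user.target
After saving it, run sudo systemctl daemon-reload, then sudo systemctl enable --now gunicorn.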

Related

Failed to start gunicorn.service: Unit gunicorn.service is masked

I am trying to deploy a Django web application on Alibaba Cloud. Everything seems to work perfectly (running gunicorn --bind 0.0.0.0:8000 project_name.wsgi inside the virtual environment).
Then, after deactivating the virtual environment, I set up
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=admin
Group=www-data
WorkingDirectory=/home/admin/project_name
ExecStart=/home/admin/project_name/myprojectenv/bin/gunicorn --access-logfile - --workers 3 --bind unix:/home/admin/project_name/project_name.sock project_name.wsgi:application
in /etc/systemd/system/gunicorn.service
Then, on running sudo systemctl start gunicorn, I keep getting the error
Failed to start gunicorn.service: Unit gunicorn.service is masked.
How can I fix this?
I have tried systemctl unmask gunicorn.socket, but it keeps showing me the error
Unit gunicorn.socket does not exist, proceeding anyway.
Failed to unmask unit: The name org.freedesktop.PolicyKit1 was not provided
by any .service files
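One hedged observation beyond what was tried above: a unit is masked when its name is symlinked to /dev/null, and the org.freedesktop.PolicyKit1 error usually means systemctl was run as a non-root user on a machine without polkit. Re-running the unmask as root, against the .service unit rather than the .socket, may get further:
sudo systemctl unmask gunicorn.service
sudo systemctl daemon-reload
sudo systemctl start gunicorn
ls -l /etc/systemd/system/gunicorn.service will show whether that path is still a symlink to /dev/null rather than the unit file you wrote.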

systemd + podman: "This usually indicates unclean termination of a previous run, or service implementation deficiencies"

I am running a container with systemd/podman. When I want to deploy a new image tag, I stop the service, update the service file, and start it again, but the container fails to start.
My systemd unit file:
[Unit]
Description=hello_api Podman Container
After=network.target
[Service]
Restart=on-failure
RestartSec=3
ExecStartPre=/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm hello_api
ExecStart=/usr/bin/podman run --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid -d -h modelenv \
--name hello_api --rm --ulimit=host -p "8001:8001" -p "8443:8443" 7963-hello_api:7.8
ExecStop=/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Type=forking
PIDFile=/%t/%n-pid
[Install]
WantedBy=default.target
Here is the error message:
May 21 10:41:43 webserver systemd[1471]: hello_api.service: Found left-over process 22912 (conmon) in control group while starting unit. Ignoring.
May 21 10:41:43 webserver systemd[1471]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 21 10:41:43 webserver systemd[1471]: hello_api.service: Found left-over process 22922 (node) in control group while starting unit. Ignoring.
May 21 10:41:43 webserver systemd[1471]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 21 10:41:43 webserver systemd[1471]: hello_api.service: Found left-over process 22960 (node) in control group while starting unit. Ignoring.
May 21 10:41:43 webserver systemd[1471]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 21 10:41:44 webserver podman[24565]: 2020-05-21 10:41:44.586396547 -0400 EDT m=+1.090025069 container create 28eaf881f532339766cc96ec27a69d8ad588e07d4bfc70e65e7c54e8a5082933 (image=7963-hello_api:7.8, name=hello_api)
May 21 10:41:45 webserver podman[24565]: Error: error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]
May 21 10:41:45 webserver systemd[1471]: hello_api.service: Control process exited, code=exited status=126
May 21 10:41:45 webserver systemd[1471]: hello_api.service: Failed with result 'exit-code'.
May 21 10:41:45 webserver systemd[1471]: Failed to start call_center_hello_api Podman Container.
Why is it giving this error? Is there an option to cleanly exit the old container?
I think we followed the same tutorial here: https://www.redhat.com/sysadmin/podman-shareable-systemd-services
"It’s important to set the kill mode to none. Otherwise, systemd will start competing with Podman to stop and kill the container processes. which can lead to various undesired side effects and invalid states"
I'm not sure if the behavior changed, but I removed KillMode=none so that it falls back to the default KillMode=control-group, and I have not had any problems managing the service since. Also, I removed the leading / from some of the commands because it was being duplicated:
ExecStartPre=/usr/bin/rm -f //run/user/1000/registry.service-pid //run/user/1000/registry.service-cid
It's now:
ExecStartPre=/usr/bin/rm -f /run/user/1000/registry.service-pid /run/user/1000/registry.service-cid
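(%t and %n are standard systemd specifiers: %t expands to the runtime directory, /run/user/1000 for this user service, and %n to the full unit name, registry.service here, which is why writing a slash before %t produced the doubled //run/user/1000 path.)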
The full service file I use for running a docker registry:
[Unit]
Description=Image Registry
[Service]
Restart=on-failure
ExecStartPre=-/usr/bin/podman volume create registry
ExecStartPre=/usr/bin/rm -f %t/%n-pid %t/%n-cid
ExecStart=/usr/bin/podman run --conmon-pidfile %t/%n-pid --cidfile %t/%n-cid -d -p 5000:5000 -v registry:/var/lib/registry --name registry docker.io/library/registry
ExecStop=/usr/bin/sh -c "/usr/bin/podman rm -f `cat %t/%n-cid`"
Type=forking
PIDFile=%t/%n-pid
[Install]
WantedBy=multi-user.target
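The journal lines above come from a user manager (systemd[1471]), so this presumably runs as a user service; in that case the unit file lives under ~/.config/systemd/user/ and is managed with the --user flag, for example:
systemctl --user daemon-reload
systemctl --user enable --now registry.service
journalctl --user -u registry.service -f
For a system-wide service the file goes in /etc/systemd/system/ instead, and the same commands run with sudo and without --user.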

Run gunicorn as a service

I want to configure gunicorn as a service.
I have this configuration for the service:
[Unit]
Description=test
[Service]
WorkingDirectory=/var/www/cmdb
Type=forking
Restart=always
ExecStart=/var/www/test/bin/gunicorn --workers=4 --bind=0.0.0.0:8080 test.wsgi:application
[Install]
WantedBy=multi-user.target
My problem is that it doesn't run. I get this error when I start the service:
gunicorn.service: Main process exited, code=exited, status=203/EXEC
gunicorn.service: Unit entered failed state.
gunicorn.service: Failed with result 'exit-code'.
gunicorn.service: Start request repeated too quickly.
I can't find the mistake in my configuration. Does anyone have an idea?
Assuming that you're running in a virtualenv, the gunicorn binary should be something like this (status=203/EXEC means systemd could not execute the ExecStart binary, which usually indicates a wrong path):
/var/www/cmdb/venv/bin/gunicorn
Instead of
/var/www/test/bin/gunicorn
Anyway, I use something like this on my system and it works fine:
[Unit]
Description = SampleApp
After = network.target
[Service]
PIDFile = /run/cmdb/cmdb.pid
WorkingDirectory = /var/www/cmdb
ExecStartPre = /bin/mkdir /run/cmdb
ExecStart = /var/www/cmdb/venv/bin/gunicorn test.wsgi:application -b 0.0.0.0:8000 --pid /run/cmdb/cmdb.pid
ExecReload = /bin/kill -s HUP $MAINPID
ExecStop = /bin/kill -s TERM $MAINPID
ExecStopPost = /bin/rm -rf /run/cmdb
[Install]
WantedBy = multi-user.target
Note: this example runs the app as root. I recommend running your app as a dedicated user with restricted permissions.
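A sketch of the same unit under a dedicated user, assuming a cmdb system user exists. RuntimeDirectory= is the stock systemd directive that creates /run/cmdb at start and removes it at stop, which replaces the ExecStartPre/ExecStopPost pair (a non-root user could not mkdir under /run anyway), and systemd's default stop behavior already sends SIGTERM, so ExecStop can be omitted:
[Unit]
Description = SampleApp
After = network.target
[Service]
User = cmdb
Group = cmdb
RuntimeDirectory = cmdb
PIDFile = /run/cmdb/cmdb.pid
WorkingDirectory = /var/www/cmdb
ExecStart = /var/www/cmdb/venv/bin/gunicorn test.wsgi:application -b 0.0.0.0:8000 --pid /run/cmdb/cmdb.pid
ExecReload = /bin/kill -s HUP $MAINPID
[Install]
WantedBy = multi-user.target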

MySQL/MariaDB cannot change default datadir on Debian 9.1 server

I have a problem moving the default datadir of MariaDB to another partition. It appears to be a very common task, but I have tried everything I can without luck.
MySQL is installed as MariaDB 10.1.26 from the default Debian package (apt-get install mysql-server) on a Debian 9.1 (stretch) server; mysqld -v returns mysqld 10.1.26-MariaDB-0+deb9u1.
Default_mysql_datadir : /var/lib/mysql
New_mysql_datadir : /home/mysql
/var/lib/mysql is on the "/" filesystem (/dev/md3)
/home/mysql is on the "/home" filesystem (/dev/md4)
What I've tried
# systemctl stop mysql
# mv /var/lib/mysql /home
Change datadir in /etc/mysql/my.cnf
datadir = /home/mysql
Check if the rights/permissions are ok
# chown -R mysql.mysql /home/mysql
AppArmor is NOT installed or running on the system, though the /etc/apparmor.d/usr.sbin.mysqld file exists with the following rules:
/home/mysql/ r,
/home/mysql/** rwk,
I even tried to create an empty /var/lib/mysql folder, referring to this bug.
But when I start the service I always get the same error:
# systemctl start mysql
[Warning] Can't create test file /home/mysql/<user>.lower-test
/usr/sbin/mysqld: Can't change dir to '/home/mysql/' (Errcode: 13 "Permission denied")
2017-09-07 0:16:59 140119808397888 [ERROR] Aborting
mariadb.service: Main process exited, code=exited, status=1/FAILURE
Failed to start MariaDB database server.
mariadb.service: Unit entered failed state.
mariadb.service: Failed with result 'exit-code'.
Any suggestions?
Thanks
Services started by systemd can be subject to additional filesystem restrictions beyond ordinary Unix permissions.
It should be possible to set the systemd [Service] directives ProtectHome= and/or ProtectSystem= and/or ReadWritePaths= to resolve this issue.
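A minimal sketch of such an override, assuming Debian's packaged mariadb.service sets ProtectHome=true, which hides /home from the daemon no matter how the directory is chowned: drop a file such as datadir.conf into /etc/systemd/system/mariadb.service.d/ containing
[Service]
ProtectHome=false
ReadWritePaths=/home/mysql
then reload and restart:
# systemctl daemon-reload
# systemctl restart mysql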
I have a similar problem:
When I start the mysql service, a message is shown:
Job for mariadb.service failed because the control process exited with error code.
See "systemctl status mariadb.service" and "journalctl -xe" for details.
If I change the datadir in /etc/mysql/my.cnf back to the default, there is no problem.

Docker configuration using the systemd configuration style

I note that the Docker documentation is gradually moving towards the systemd method of initialisation and hence configuration. I'm somewhat uncertain how you add "insecure-registry" entries to Docker when using the systemd configuration method.
Whilst using Docker version 1.6.1 I was able to add multiple insecure-registry entries by adding to the file:
/etc/sysconfig/docker
a line much like the following:
INSECURE_REGISTRY='--insecure-registry myregistry.companyx.com:5010 --insecure-registry myregistry.companyx.com:5011'
and restarting Docker with the command:
sudo service docker restart
With Docker 1.8.2 I've been looking at how to do this in the "systemd" fashion. The closest I've come to any documentation is the following two pages:
https://docs.docker.com/articles/systemd/
https://coreos.com/os/docs/latest/registry-authentication.html
Both the above suggest I need to add a file to a directory called:
/etc/systemd/system/docker.service.d
The second of those pages suggests a file called:
/etc/systemd/system/docker.service.d/50-insecure-registry.conf
It also talks about "#cloud-config write_files: - path:", which I didn't follow at all.
I ignored the stuff I didn't understand and created a file named:
/etc/systemd/system/docker.service.d/50-insecure-registry.conf
Containing something along the lines of:
[Service]
Environment='DOCKER_OPTS=--insecure-registry="myregistry.companyx.com:5010"'
and restarted docker using the command:
sudo systemctl restart docker
The result makes me think it's time to go home. I want to add multiple insecure-registry entries but haven't figured out how to do that. Also I'm a long way from being confident about the success of the single entry.
STUFF added 2 days later
With help from page:
http://nknu.net/how-to-configure-docker-on-ubuntu-15-04/
I made some progress in configuring Docker using files dropped into the directory:
/etc/systemd/system/docker.service.d
The thing I had been missing was an entry to override the default:
[Service]
ExecStart=/usr/bin/docker -d -H fd://
I did this by creating an additional drop-in file, this one called:
docker_systemd_workaround.conf
it contains:
[Service]
# workaround to include default options
ExecStart=
ExecStart=/usr/bin/docker -d -H fd:// $DOCKER_OPTS
With this, the content of another drop-in file which sets DOCKER_OPTS is no longer ignored. I don't think this is anything close to a complete solution, but it does fix the issue I was having trying to add "insecure-registry" entries.
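A hedged aside for later readers: on current Docker releases the documented place for this setting is the daemon configuration file /etc/docker/daemon.json rather than ExecStart overrides, e.g.
{
  "insecure-registries": ["myregistry.companyx.com:5010", "myregistry.companyx.com:5011"]
}
followed by sudo systemctl restart docker. The drop-in approach described here was the practical route for Docker 1.8-era units.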
Expanded on my comment for readability
Problem
Couldn't connect to a remote insecure registry. Unable to add "insecure-registry" to docker options on startup.
Using docker installed via package manager on Ubuntu 16.04 LTS
Solution
1. Verify docker is under the control of systemd
$systemctl status docker should return details for the running docker service. You can view the default setup it's using under Loaded:
2. Add insecure repository systemd conf file
This file will load the DOCKER_OPTS env variable.
Create file at /etc/systemd/system/docker.service.d/insecure_repository.conf
Add file contents:
[Service]
Environment='DOCKER_OPTS=--insecure-registry="myregistryserver.mydomain.com:5000"'
3. Add docker systemd workaround conf file
This file will modify the ExecStart to use the DOCKER_OPTS environment variable.
Create file at /etc/systemd/system/docker.service.d/docker-systemd-workaround.conf
Add file contents:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS
4. Reload
$sudo systemctl daemon-reload
$sudo service docker restart
5. Verify
$docker info should contain myregistryserver.mydomain.com:5000 under Insecure Registries:
$systemctl status docker should have your systemd configs (aka drop-ins) under the Drop-In: header. You should also see your modified ExecStart under the CGroup: header.
I had a similar problem and struggled for ages until I found this blog.
Basically follow these steps :
sudo vi /etc/systemd/system/docker.service.d/docker.conf and
add the following:
[Service]
#You need the empty 'ExecStart=' line below to clear the default; otherwise you will get the error 'Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services'
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --insecure-registry yourregistry.mydomain.com:5000
Then finally:
sudo systemctl daemon-reload
sudo systemctl restart docker
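A quick way to verify that systemd merged the drop-in: systemctl cat docker prints the base unit followed by every drop-in file, and systemctl show docker --property=ExecStart shows the final command line that will be executed.
systemctl cat docker
systemctl show docker --property=ExecStart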