Unable to start FIWARE Cygnus as a service

I installed FIWARE Cygnus from the RPM on my CentOS 7 machine, but I can't start it as a service. I got the following error:
[root@localhost cygnus]# sudo service cygnus start
Starting cygnus (via systemctl): Job for cygnus.service failed. See 'systemctl status cygnus.service' and 'journalctl -xn' for details.
[FAILED]
[root@localhost cygnus]# systemctl status cygnus.service -l
cygnus.service - SYSV: cygnus
Loaded: loaded (/etc/rc.d/init.d/cygnus)
Active: failed (Result: exit-code) since Fri 2015-07-31 19:11:10 CEST; 2s ago
Process: 5750 ExecStart=/etc/rc.d/init.d/cygnus start (code=exited, status=1/FAILURE)
Jul 31 19:11:08 localhost cygnus[5750]: /usr/cygnus/conf/cygnus_instance_1.conf: line 34: mongo-channel: command not found
Jul 31 19:11:08 localhost su[5756]: (to root) root on none
Jul 31 19:11:10 localhost cygnus[5750]: Starting Cygnus 1... [FAILED]
Jul 31 19:11:10 localhost systemd[1]: cygnus.service: control process exited, code=exited status=1
Jul 31 19:11:10 localhost systemd[1]: Failed to start SYSV: cygnus.
Jul 31 19:11:10 localhost systemd[1]: Unit cygnus.service entered failed state.
I'm not sure what to put as the name of the agent in the configuration file cygnus_instance_1.conf, since it did not recognize the agent name:
# Name of the agent. The name of the agent is not trivial, since it is the base for the Flume parameters
# naming conventions, e.g. it appears in .sources.http-source.channels=...
AGENT_NAME= mongo-channel
Here are my complete configuration files:
cygnus_instance_1.conf
#####
#
# Configuration file for apache-flume
#
#####
# Copyright 2014 Telefonica Investigación y Desarrollo, S.A.U
#
# This file is part of fiware-cygnus (FI-WARE project).
#
# fiware-cygnus is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
# Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any
# later version.
# fiware-cygnus is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
#
# You should have received a copy of the GNU Affero General Public License along with fiware-cygnus. If not, see
# http://www.gnu.org/licenses/.
#
# For those usages not covered by the GNU Affero General Public License please contact with iot_support at tid dot es
# Who to run cygnus as. Note that you may need to use root if you want
# to run cygnus in a privileged port (<1024)
CYGNUS_USER=root
# Where is the config folder
CONFIG_FOLDER=/usr/cygnus/conf
# Which is the config file
CONFIG_FILE=/usr/cygnus/conf/agent_1.conf
# Name of the agent. The name of the agent is not trivial, since it is the base for the Flume parameters
# naming conventions, e.g. it appears in .sources.http-source.channels=...
AGENT_NAME= /usr/cygnus/bin/cygnus-flume-ng
#mongo-channel
# Name of the logfile located at /var/log/cygnus. It is important to use the '.log' extension so that log rotation works properly
LOGFILE_NAME=cygnus.log
# Administration port. Must be unique per instance
ADMIN_PORT=8081
# Polling interval (seconds) for the configuration reloading
POLLING_INTERVAL=30
agent_1.conf
# Copyright 2014 Telefónica Investigación y Desarrollo, S.A.U
#
# This file is part of fiware-cygnus (FI-WARE project).
#
# fiware-cygnus is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
# Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any
# later version.
# fiware-cygnus is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
#
# You should have received a copy of the GNU Affero General Public License along with fiware-cygnus. If not, see
# http://www.gnu.org/licenses/.
#
# For those usages not covered by the GNU Affero General Public License please contact with iot_support at tid dot es
#=============================================
# To be put in APACHE_FLUME_HOME/conf/agent.conf
#
# General configuration template explaining how to setup a sink of each of the available types (HDFS, CKAN, MySQL).
#=============================================
# The next three fields set the sources, sinks and channels used by Cygnus. You could use different names than the
# ones suggested below, but in that case make sure you keep coherence in property names along the configuration file.
# Regarding sinks, you can use multiple types at the same time; the only requirement is to provide a channel for each
# one of them (this example shows how to configure 3 sink types at the same time). You can even define more than one
# sink of the same type sharing the channel in order to improve performance (this is like having
# multi-threading).
cygnusagent.sources = http-source
cygnusagent.sinks = mongo-sink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupingInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_hosts = 192.168.1.40:27017
# a valid user in the MongoDB server (or empty if authentication is not enabled in MongoDB)
#cygnusagent.sinks.mongo-sink.mongo_username = mongo_username
# password for the user above (or empty if authentication is not enabled in MongoDB)
#cygnusagent.sinks.mongo-sink.mongo_password = xxxxxxxx
# prefix for the MongoDB databases
#cygnusagent.sinks.mongo-sink.db_prefix = sth_
# prefix for the MongoDB collections
#cygnusagent.sinks.mongo-sink.collection_prefix = sth_
# true if collection names are based on a hash, false for human-readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
#=============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# number of events that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
UPDATE after @frb's response:
I updated my cygnus_instance_1.conf according to @frb's response, but unfortunately I got the following error:
systemctl status cygnus.service -l
cygnus.service - SYSV: cygnus
Loaded: loaded (/etc/rc.d/init.d/cygnus)
Active: failed (Result: exit-code) since Wed 2015-08-05 17:22:09 CEST; 3s ago
Process: 3338 ExecStart=/etc/rc.d/init.d/cygnus start (code=exited, status=1/FAILURE)
Aug 05 17:22:07 localhost cygnus[3338]: /usr/cygnus/conf/cygnus_instance_1.conf: line 24: cygnus: command not found
Aug 05 17:22:07 localhost cygnus[3338]: /usr/cygnus/conf/cygnus_instance_1.conf: line 34: cygnusagent: command not found
Aug 05 17:22:07 localhost su[3345]: (to cygnus) root on none
Aug 05 17:22:07 localhost cygnus[3338]: Starting Cygnus 1... bash: /var/run/cygnus/cygnus_1.pid: Permission denied
Aug 05 17:22:09 localhost cygnus[3338]: cat: /var/run/cygnus/cygnus_1.pid: No such file or directory
Aug 05 17:22:09 localhost cygnus[3338]: [FAILED]
Aug 05 17:22:09 localhost cygnus[3338]: rm: cannot remove '/var/run/cygnus/cygnus_1.pid': No such file or directory
août 05 17:22:09 localhost systemd[1]: cygnus.service: control process exited, code=exited status=1
août 05 17:22:09 localhost systemd[1]: Failed to start SYSV: cygnus.
août 05 17:22:09 localhost systemd[1]: Unit cygnus.service entered failed state.
Looking at the above error I saw that it couldn't find the file "/var/run/cygnus/cygnus_1.pid", so I created an empty file to bypass this error, but I got a new one:
[root@localhost ~]# sudo systemctl start cygnus.service
Job for cygnus.service failed. See 'systemctl status cygnus.service' and 'journalctl -xn' for details.
[root@localhost ~]# sudo systemctl status cygnus.service -l
cygnus.service - SYSV: cygnus
Loaded: loaded (/etc/rc.d/init.d/cygnus)
Active: failed (Result: exit-code) since Wed 2015-08-05 17:24:08 CEST; 5s ago
Process: 3445 ExecStart=/etc/rc.d/init.d/cygnus start (code=exited, status=1/FAILURE)
août 05 17:24:06 localhost systemd[1]: Starting SYSV: cygnus...
Aug 05 17:24:06 localhost cygnus[3445]: /usr/cygnus/conf/cygnus_instance_1.conf: line 24: cygnus: command not found
Aug 05 17:24:06 localhost cygnus[3445]: /usr/cygnus/conf/cygnus_instance_1.conf: line 34: cygnusagent: command not found
Aug 05 17:24:06 localhost su[3452]: (to cygnus) root on none
Aug 05 17:24:06 localhost cygnus[3445]: Starting Cygnus 1... bash: /var/run/cygnus/cygnus_1.pid: Permission denied
Aug 05 17:24:08 localhost cygnus[3445]: [FAILED]
août 05 17:24:08 localhost systemd[1]: cygnus.service: control process exited, code=exited status=1
août 05 17:24:08 localhost systemd[1]: Failed to start SYSV: cygnus.
août 05 17:24:08 localhost systemd[1]: Unit cygnus.service entered failed state.

According to your agent_1.conf configuration, the AGENT_NAME within cygnus_instance_1.conf must be cygnusagent, i.e.:
AGENT_NAME=cygnusagent
In addition, the CYGNUS_USER should be cygnus, since this is the user the RPM creates when installing the software.
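Taken together with the journal messages above, a minimal sketch of the fixed cygnus_instance_1.conf lines follows. The init script sources this file as a shell script, which is why a value written with a space (such as "AGENT_NAME= cygnusagent") makes bash run "cygnusagent" as a command and log "command not found"; so keep no spaces around =. The ownership fix for /var/run/cygnus is an assumption based on the "Permission denied" line in your log:
# /usr/cygnus/conf/cygnus_instance_1.conf (excerpt)
CYGNUS_USER=cygnus
AGENT_NAME=cygnusagent
# assumed fix: let the cygnus user write its PID file under /var/run/cygnus
sudo mkdir -p /var/run/cygnus
sudo chown cygnus:cygnus /var/run/cygnus
sudo service cygnus start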

Related

slurmd.service is Failed & there is no PID file /var/run/slurmd.pid

I am trying to start slurmd.service using the commands below, but it does not stay up permanently. I would be grateful if you could help me resolve this issue!
systemctl start slurmd
scontrol update nodename=fwb-lab-tesla1 state=idle
This is the slurmd.service unit file:
cat /usr/lib/systemd/system/slurmd.service
[Unit]
Description=Slurm node daemon
After=network.target munge.service
ConditionPathExists=/etc/slurm/slurm.conf
[Service]
Type=forking
EnvironmentFile=-/etc/sysconfig/slurmd
ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
PIDFile=/var/run/slurmd.pid
KillMode=process
LimitNOFILE=51200
LimitMEMLOCK=infinity
LimitSTACK=infinity
[Install]
WantedBy=multi-user.target
and this is the status of the node:
$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
gpucompute* up infinite 1 drain fwb-lab-tesla1
$ sinfo -R
REASON USER TIMESTAMP NODELIST
Low RealMemory root 2020-09-28T16:46:28 fwb-lab-tesla1
$ sinfo -Nl
Thu Oct 1 14:00:10 2020
NODELIST NODES PARTITION STATE CPUS S:C:T MEMORY TMP_DISK WEIGHT AVAIL_FE REASON
fwb-lab-tesla1 1 gpucompute* drained 32 32:1:1 64000 0 1 (null) Low RealMemory
Here are the contents of slurm.conf:
$ cat /etc/slurm/slurm.conf
# slurm.conf file generated by configurator easy.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=FWB-Lab-Tesla
#ControlAddr=137.72.38.102
#
MailProg=/bin/mail
MpiDefault=none
#MpiParams=ports=#-#
ProctrackType=proctrack/cgroup
ReturnToService=1
SlurmctldPidFile=/var/run/slurmctld.pid
#SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
#SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
#SlurmUser=slurm
SlurmdUser=root
StateSaveLocation=/var/spool/slurm/StateSave
SwitchType=switch/none
TaskPlugin=task/cgroup
#
#
# TIMERS
#KillWait=30
#MinJobAge=300
#SlurmctldTimeout=120
#SlurmdTimeout=300
#
#
# SCHEDULING
FastSchedule=1
SchedulerType=sched/backfill
SelectType=select/cons_res
SelectTypeParameters=CR_CPU_Memory
# Prevent very long time waits for mix serial/parallel in multi node environment
SchedulerParameters=pack_serial_at_end
#
#
# LOGGING AND ACCOUNTING
AccountingStorageType=accounting_storage/filetxt
# Need slurmdbd for gres functionality
#AccountingStorageTRES=CPU,Mem,gres/gpu,gres/gpu:Titan
ClusterName=cluster
#JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/linux
#SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm/slurmctld.log
#SlurmdDebug=3
SlurmdLogFile=/var/log/slurm/slurmd.log
#
#
# COMPUTE NODES
GresTypes=gpu
#NodeName=fwb-lab-tesla[1-32] Gres=gpu:4 RealMemory=64000 Sockets=2 CoresPerSocket=8 ThreadsPerCore=2 State=UNKNOWN
#PartitionName=compute Nodes=fwb-lab-tesla[1-32] Default=YES MaxTime=INFINITE State=UP
#NodeName=fwb-lab-tesla1 NodeAddr=137.73.38.102 Gres=gpu:4 RealMemory=64000 Sockets=2 CoresPerSocket=8 ThreadsPerCore=2 State=UNKNOWN
NodeName=fwb-lab-tesla1 NodeAddr=137.73.38.102 Gres=gpu:4 RealMemory=64000 CPUs=32 State=UNKNOWN
PartitionName=gpucompute Nodes=fwb-lab-tesla1 Default=YES MaxTime=INFINITE State=UP
There is no slurmd.pid in the path below. It appears here once when the system starts, but it is gone again after a few minutes.
$ ls /var/run/
abrt cryptsetup gdm lvm openvpn-server slurmctld.pid tuned
alsactl.pid cups gssproxy.pid lvmetad.pid plymouth sm-notify.pid udev
atd.pid dbus gssproxy.sock mariadb ppp spice-vdagentd user
auditd.pid dhclient-eno2.pid httpd mdadm rpcbind sshd.pid utmp
avahi-daemon dhclient.pid initramfs media rpcbind.sock sudo vpnc
certmonger dmeventd-client ipmievd.pid mount samba svnserve xl2tpd
chrony dmeventd-server lightdm munge screen sysconfig xrdp
console ebtables.lock lock netreport sepermit syslogd.pid xtables.lock
crond.pid faillock log NetworkManager setrans systemd
cron.reboot firewalld lsm openvpn-client setroubleshoot tmpfiles.d
[shirin@FWB-Lab-Tesla Seq2KMR33]$ systemctl status slurmctld
● slurmctld.service - Slurm controller daemon
Loaded: loaded (/usr/lib/systemd/system/slurmctld.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-09-28 15:41:25 BST; 2 days ago
Main PID: 1492 (slurmctld)
CGroup: /system.slice/slurmctld.service
└─1492 /usr/sbin/slurmctld
Sep 28 15:41:25 FWB-Lab-Tesla systemd[1]: Starting Slurm controller daemon...
Sep 28 15:41:25 FWB-Lab-Tesla systemd[1]: Started Slurm controller daemon.
I tried to start slurmd.service, but it returns to failed after a few minutes again:
$ systemctl status slurmd
● slurmd.service - Slurm node daemon
Loaded: loaded (/usr/lib/systemd/system/slurmd.service; enabled; vendor preset: disabled)
Active: failed (Result: timeout) since Tue 2020-09-29 18:11:25 BST; 1 day 19h ago
Process: 25650 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=0/SUCCESS)
CGroup: /system.slice/slurmd.service
└─2986 /usr/sbin/slurmd
Sep 29 18:09:55 FWB-Lab-Tesla systemd[1]: Starting Slurm node daemon...
Sep 29 18:09:55 FWB-Lab-Tesla systemd[1]: Can't open PID file /var/run/slurmd.pid (yet?) after start: No ...ctory
Sep 29 18:11:25 FWB-Lab-Tesla systemd[1]: slurmd.service start operation timed out. Terminating.
Sep 29 18:11:25 FWB-Lab-Tesla systemd[1]: Failed to start Slurm node daemon.
Sep 29 18:11:25 FWB-Lab-Tesla systemd[1]: Unit slurmd.service entered failed state.
Sep 29 18:11:25 FWB-Lab-Tesla systemd[1]: slurmd.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
Log output of starting slurmd:
[2020-09-29T18:09:55.074] Message aggregation disabled
[2020-09-29T18:09:55.075] gpu device number 0(/dev/nvidia0):c 195:0 rwm
[2020-09-29T18:09:55.075] gpu device number 1(/dev/nvidia1):c 195:1 rwm
[2020-09-29T18:09:55.075] gpu device number 2(/dev/nvidia2):c 195:2 rwm
[2020-09-29T18:09:55.075] gpu device number 3(/dev/nvidia3):c 195:3 rwm
[2020-09-29T18:09:55.095] slurmd version 17.11.7 started
[2020-09-29T18:09:55.096] error: Error binding slurm stream socket: Address already in use
[2020-09-29T18:09:55.096] error: Unable to bind listen port (*:6818): Address already in use
The log file states that it cannot bind to the standard slurmd port 6818, because something else is already using this address.
Do you have another slurmd running on this node? Or something else listening there? Try netstat -tulpen | grep 6818 to see what is using the address.
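A minimal sketch of that check and cleanup (the PID to kill is whatever netstat reports; 2986, the slurmd shown in the CGroup above, is used here only as an example):
sudo netstat -tulpen | grep 6818   # identify the process holding port 6818
sudo kill 2986                     # stop the stale slurmd, if that is what it is
sudo systemctl start slurmd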

QEMU+Virt-manager can't connect to virtlxcd-sock

I've installed qemu, virt-manager and libvirt on Linux Mint 20. I have an AMD FX(tm)-4300 Quad-Core Processor with AMD-V enabled in the BIOS, and I have restarted a lot, but virt-manager (Virtual Machine Manager) is saying:
Unable to connect to libvirt lxc:///.
Failed to connect socket to '/var/run/libvirt/virtlxcd-sock': No such file or directory
Libvirt URI is: lxc:///
I am running this locally. The file/socket does not exist, but there is a "libvirt-sock" (and other files) in that folder.
The service is running, but reporting the same error:
libvirtd.service - Virtualization daemon
Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-09-01 10:11:27 BST; 12min ago
TriggeredBy: ● libvirtd.socket
● libvirtd-ro.socket
● libvirtd-admin.socket
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 731 (libvirtd)
Tasks: 19 (limit: 32768)
Memory: 34.2M
CGroup: /system.slice/libvirtd.service
├─ 731 /usr/sbin/libvirtd
├─1041 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt>
└─1042 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt>
Sep 01 10:11:29 mainlinux dnsmasq[1041]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Sep 01 10:11:29 mainlinux dnsmasq-dhcp[1041]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Sep 01 10:12:35 mainlinux libvirtd[731]: libvirt version: 6.0.0, package: 0ubuntu8.3 (Marc Deslauriers <marc.deslauriers@ubuntu.com> Thu, 30 >
Sep 01 10:12:35 mainlinux libvirtd[731]: hostname: mainlinux
Sep 01 10:12:35 mainlinux libvirtd[731]: Failed to connect socket to '/var/run/libvirt/virtlxcd-sock': No such file or directory
Sep 01 10:12:35 mainlinux libvirtd[731]: End of file while reading data: Input/output error
I updated my kernel to 5.8.5-generic, but other than that I'm running Mint 20 (based on Ubuntu focal). Does anyone know how to fix this, or how to display a log showing why virtlxcd-sock is not being created?
I also tried sudo chmod 777 on the libvirt subfolder and restarted libvirtd; same error.
I'd been googling for hours and finally found the one that worked for me. It seems installing libvirt and lxc does not install this package:
sudo apt install libvirt-daemon-driver-lxc
sudo systemctl restart libvirtd
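To verify the fix, something like the following should now succeed (assuming the default socket path from the error message):
ls -l /var/run/libvirt/virtlxcd-sock
virsh -c lxc:/// list --all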

gunicorn daemon (active: failed) / curl(56) Recv Failure: Connection reset by peer

First thing, I am not sure if this is better here or on Ask Ubuntu (Ubuntu did not have a 'gunicorn' tag, so I think I'm in the right place). If it is not appropriate here, just say so in the comments and I'll close it.
I am following a DigitalOcean tutorial on deployment (https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04). I am up to the Gunicorn setup and at my wits' end trying to get it to work, so I have come here. Anything in quotes is the name of the current section in the article. I got up to "Checking for the Gunicorn Socket File" and "check for the existence of the gunicorn.sock file within the /run directory:" before failure.
Check for socket file:
sudo systemctl status gunicorn.socket returns
Failed to dump process list, ignoring: No such file or directory
● gunicorn.socket - gunicorn socket
Loaded: loaded (/etc/systemd/system/gunicorn.socket; enabled; vendor pres
Active: active (listening) since Fri 2020-02-21 21:34:06 UTC; 1min 8s ago
Listen: /run/gunicorn.sock (Stream)
CGroup: /system.slice/gunicorn.socket
Check for existence of gunicorn.sock:
file /run/gunicorn.sock
output: /run/gunicorn.sock: socket
Upon "Testing socket activation", it fails:
sudo systemctl status gunicorn
output:
● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service;
Active: failed (Result: exit-code) since Fri 2020-02-
Main PID: 15708 (code=exited, status=217/USER)
Feb 21 21:32:39 ubuntu-s-1vcpu-1gb-nyc3-01 systemd[1]: S
Feb 21 21:32:39 ubuntu-s-1vcpu-1gb-nyc3-01 systemd[15708
Feb 21 21:32:39 ubuntu-s-1vcpu-1gb-nyc3-01 systemd[15708
Feb 21 21:32:39 ubuntu-s-1vcpu-1gb-nyc3-01 systemd[1]: g
Feb 21 21:32:39 ubuntu-s-1vcpu-1gb-nyc3-01 systemd[1]: g
Feb 21 21:32:39 ubuntu-s-1vcpu-1gb-nyc3-01 systemd[1]: g
Feb 21 21:32:39 ubuntu-s-1vcpu-1gb-nyc3-01 systemd[1]: g
Feb 21 21:32:39 ubuntu-s-1vcpu-1gb-nyc3-01 systemd[1]: F
lines 1-13/13 (END)
To test the socket activation, it says to do the following:
curl --unix-socket /run/gunicorn.sock localhost
output (it says I should see HTML):
curl: (56) Recv failure: Connection reset by peer
Not sure if I provided enough info. Below I will include my gunicorn.socket and gunicorn.service files as well as the layout of directories on my server.
gunicorn.socket:
[Unit]
Description=gunicorn socket
[Socket]
ListenStream=/run/gunicorn.sock
[Install]
WantedBy=sockets.target
gunicorn.service:
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=justin
Group=www-data
WorkingDirectory=/home/justin/project
ExecStart=/home/justin/project/env/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
jobzumo.wsgi:application
[Install]
WantedBy=multi-user.target
Layout of server/project:
home/justin/project/
This project/ folder contains env(env/bin/gunicorn does exist), jobzumo(django project), manage.py and static.
The only thing I can think of is that I may have created these gunicorn files while using root, and now I am trying to modify them as the user justin? I'm not really sure what is going on here. If I did not provide enough info, or if you need me to run any debug commands, please let me know. Thanks for any help.
I had the exact same problem following this tutorial. The OP's answer did not help in my case, but I found a solution here. Maybe it helps others stumbling over this.
Many thanks to RussellMolimock for the following comment, which I found there!
"Go back into your virtualenv with source
[your_project_env]/bin/activate and enter which gunicorn That will
return the path to your gunicorn exectuable.
Paste that into the path section of the ‘ExecStart’ value inside the
’/etc/systemd/system/gunicorn.service’ file, and run the ‘sudo
systemctl daemon-reload’ and 'sudo systemctl restart gunicorn’
commands to restart your daemon and try curling again with curl
–unix-socket /run/gunicorn.sock localhost
I hope this helps!"
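A minimal sketch of those steps using this question's project layout (your virtualenv path may differ):
source /home/justin/project/env/bin/activate
which gunicorn   # e.g. /home/justin/project/env/bin/gunicorn
# paste that path into the ExecStart= line of /etc/systemd/system/gunicorn.service, then:
sudo systemctl daemon-reload
sudo systemctl restart gunicorn
curl --unix-socket /run/gunicorn.sock localhost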
I had to run the following two commands:
sudo ufw delete allow 8000
sudo ufw allow 'Nginx Full'
and now everything is working. Apparently this opens my firewall up to port 80: the 'Nginx Full' ufw profile allows both port 80 and port 443, which is why port 80 opens even though it is not specified explicitly there.
I faced this error because Gunicorn was not able to read the environment variables. This helped me in defining the environment variables for Gunicorn.
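One common way to hand environment variables to Gunicorn under systemd is an EnvironmentFile= line in the unit. This is a sketch of the general mechanism, not the exact fix from the link above, and the file path is hypothetical:
# /etc/systemd/system/gunicorn.service (excerpt)
[Service]
# points at a file of plain KEY=VALUE lines, e.g. Django settings or database credentials
EnvironmentFile=/home/justin/project/.env
Reload and restart afterwards: sudo systemctl daemon-reload && sudo systemctl restart gunicorn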
I deleted the whole project folder in Ubuntu (home/user/project) and restarted from the beginning, and it worked. I had tried multiple solutions from the Internet, restarting the daemon and changing the path of gunicorn; all failed.

How do I set up gammu-smsd.service on my Raspberry Pi

I'm trying to configure gammu-smsd.service with MySQL on my Raspberry Pi.
For information, gammu is working without the smsd service,
and the smsd service is working with the default configuration (not with MySQL).
I got this kind of error:
pi@F1rst:/var/log $ sudo systemctl status gammu-smsd.service
● gammu-smsd.service - SMS daemon for Gammu
Loaded: loaded (/lib/systemd/system/gammu-smsd.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2019-01-14 18:16:26 CET; 24min ago
Docs: man:gammu-smsd(1)
Process: 4318 ExecStopPost=/bin/rm -f /var/run/gammu-smsd.pid (code=exited, status=0/SUCCESS)
Process: 4312 ExecStart=/usr/bin/gammu-smsd --pid=/var/run/gammu-smsd.pid --daemon (code=exited, status=0/SUCCESS)
Main PID: 4313 (code=exited, status=2)
Jan 14 18:16:26 F1rst systemd[1]: Starting SMS daemon for Gammu...
Jan 14 18:16:26 F1rst gammu-smsd[4312]: Log filename is "/var/log/smsd"
Jan 14 18:16:26 F1rst systemd[1]: Started SMS daemon for Gammu.
Jan 14 18:16:26 F1rst systemd[1]: gammu-smsd.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Jan 14 18:16:26 F1rst systemd[1]: gammu-smsd.service: Unit entered failed state.
Jan 14 18:16:26 F1rst systemd[1]: gammu-smsd.service: Failed with result 'exit-code'.
And the smsd log gives me this:
Mon 2019/01/14 18:16:26 gammu-smsd[4312]: Warning: No PIN code in /etc/gammu-smsdrc file
Mon 2019/01/14 18:16:26 gammu-smsd[4313]: Connected to Database: smsd on localhost
Mon 2019/01/14 18:16:26 gammu-smsd[4313]: Failed to seek to first row!
Mon 2019/01/14 18:16:26 gammu-smsd[4313]: Initialisation failed, stopping Gammu smsd: Unknown error. (UNKNOWN[27])
Mon 2019/01/14 18:16:26 gammu-smsd[4313]: Stopping Gammu smsd: No error. (NONE[1])
Here is my gammu-smsdrc configuration file:
# Configuration file for Gammu SMS Daemon
# Gammu library configuration, see gammurc(5)
[gammu]
# Please configure this!
port = /dev/ttyAMA0
connection = at115200
# Debugging
#logformat = textall
# SMSD configuration, see gammu-smsdrc(5)
[smsd]
#RunOnReceive = /home/pi/script/sms.sh
service = sql
driver = native_mysql
host = localhost
user = smsd
password = g@mmuP@ssword
database = smsd
logfile = /var/log/smsd
# Increase for debugging information
debuglevel = 0
# Paths where messages are stored
inboxpath = /var/spool/gammu/inbox/
outboxpath = /var/spool/gammu/outbox/
sentsmspath = /var/spool/gammu/sent/
errorsmspath = /var/spool/gammu/error/
I tried the solution given here, but it didn't work.
Does someone have an idea for me?
Thanks in advance for your time.
Maybe it's a very easy fix, but I'm a real beginner.
OK, I figured it out.
The problem was here:
Mon 2019/01/14 18:16:26 gammu-smsd[4313]: Failed to seek to first row!
I found a video which tells you to do this:
Enter your smsd database using phpMyAdmin,
look for the gammu table and insert the value "13".
I tried it; it didn't work, but the error changed.
At that moment I had this error:
Sat 2019/01/19 13:12:21 gammu-smsd[30893]: Database structure is from older Gammu version
So I changed the value to 20.
Then I got this:
Sat 2019/01/19 13:08:31 gammu-smsd[30705]: Database structure is from newer Gammu version
After a few tries I entered the value '16' and it worked!!!
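For reference, the same fix can be applied from the command line instead of phpMyAdmin. This is a sketch assuming the standard gammu-smsd SQL schema, where the schema version sits in the single-column gammu table; the right number depends on the Gammu version installed:
mysql -u smsd -p smsd -e "UPDATE gammu SET Version = 16;"
sudo systemctl restart gammu-smsd.service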

How do I enable MySQL binary logging?

I tried to use the simple example from the mysql-events package, but when I ran it I got this error:
Error: ER_NO_BINARY_LOGGING: You are not using binary logging
So I changed my.cnf:
#
# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html
#
# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
#
# what I added:
log_bin = "/home/erfan/salone-entezar/server/"
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/
But when I tried to restart MySQL (using $ sudo service mysql restart), this error happened:
Job for mysql.service failed because the control process exited with error code.
See "systemctl status mysql.service" and "journalctl -xe" for details.
And this is systemctl status mysql.service:
● mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: activating (start-post) (Result: exit-code) since Fri 2016-11-18 20:16:48 IRST; 3s ago
Process: 8838 ExecStart=/usr/sbin/mysqld (code=exited, status=1/FAILURE)
Process: 8831 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
Main PID: 8838 (code=exited, status=1/FAILURE); Control PID: 8839 (mysql-systemd-s)
Tasks: 2 (limit: 4915)
CGroup: /system.slice/mysql.service
└─control
├─8839 /bin/bash /usr/share/mysql/mysql-systemd-start post
└─8850 sleep 1
Nov 18 20:16:48 erfan-m systemd[1]: Starting MySQL Community Server...
Nov 18 20:16:48 erfan-m mysql-systemd-start[8831]: my_print_defaults: [ERROR] Found option without preceding group in config file /etc/mysql/my.cnf at line 19!
Nov 18 20:16:48 erfan-m mysql-systemd-start[8831]: my_print_defaults: [ERROR] Fatal error in defaults handling. Program aborted!
Nov 18 20:16:48 erfan-m systemd[1]: mysql.service: Main process exited, code=exited, status=1/FAILURE
What should I do now, and what is my problem?
For Ubuntu:
I had to add socketPath: '/var/run/mysqld/mysqld.sock' to the dsn variable in the code, and also correct /etc/mysql/my.cnf as below:
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/
[mysqld] # grouping config options under a section header is important
# Must be unique integer from 1-2^32
server-id = 1
# Row format required for ZongJi
binlog_format = row
# Directory must exist. This path works for Linux. Other OS may require
# different path.
log_bin = /var/log/mysql/mysql-bin.log
and at last restart it with $ sudo service mysql restart
For CentOS:
I had to add socketPath: '/var/lib/mysql/mysql.sock' to the dsn variable in the code, and also correct /etc/my.cnf as below:
[mysqld]
# Must be unique integer from 1-2^32
server-id = 1
# Row format required for ZongJi
binlog_format = row
# Directory must exist. This path works for Linux. Other OS may require
# different path.
log_bin = /var/log/mariadb/mariadb-bin.log
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
and at last restart it with $ systemctl restart mariadb
NOTE: CentOS 7 has replaced MySQL with MariaDB, so the log_bin path differs between Ubuntu and CentOS.
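After the restart, a quick way to confirm binary logging is enabled (standard MySQL/MariaDB statements):
mysql -u root -p -e "SHOW VARIABLES LIKE 'log_bin'; SHOW BINARY LOGS;"
The first statement should report log_bin = ON, and the second should list at least one binlog file.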