How to change the stdout and stderr log location of processes started by supervisor? - gunicorn

So in my system, the supervisor captures stderr and stdout into these files:
root@3a1a895598f8:/var/log/supervisor# ls -l
total 24
-rw------- 1 root root 18136 Sep 14 03:35 gunicorn-stderr---supervisor-VVVsL1.log
-rw------- 1 root root 0 Sep 14 03:35 gunicorn-stdout---supervisor-lllimW.log
-rw------- 1 root root 0 Sep 14 03:35 nginx-stderr---supervisor-HNIPIA.log
-rw------- 1 root root 0 Sep 14 03:35 nginx-stdout---supervisor-2jDN7t.log
-rw-r--r-- 1 root root 1256 Sep 14 03:35 supervisord.log
But I would like to change the location of gunicorn's stdout and stderr log files to /var/log/gunicorn and fix the file names for monitoring purposes.
This is what I have done in the config file:
[program:gunicorn]
stdout_capture_maxbytes=50MB
stderr_capture_maxbytes=50MB
stdout = /var/log/gunicorn/gunicorn-stdout.log
stderr = /var/log/gunicorn/gunicorn-stderr.log
command=/usr/bin/gunicorn -w 2 server:app
However, it does not take effect at all. Did I miss anything in the configuration?

Change stdout and stderr to stdout_logfile and stderr_logfile and this should solve your issue.
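With those option names, the block from the question would look like this (command and paths taken from the question; note that the /var/log/gunicorn directory must already exist, because supervisord will not create it):
[program:gunicorn]
command=/usr/bin/gunicorn -w 2 server:app
stdout_logfile=/var/log/gunicorn/gunicorn-stdout.log
stderr_logfile=/var/log/gunicorn/gunicorn-stderr.log
stdout_capture_maxbytes=50MB
stderr_capture_maxbytes=50MB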
You can also change childlogdir in the main configuration to make all the child logs appear in another directory. If you are using auto log mode, the logfile names will be auto-generated in the childlogdir specified, without you needing to set stdout_logfile.
In order for your changes to be reflected, you need to either restart the supervisor service with:
service supervisord restart
or reload the config with supervisorctl reload and update the config in the running processes with supervisorctl update.
Documentation on this can be found here: http://supervisord.org/logging.html#child-process-logs

Related

How to fix the 'Can't load' error when a Perl script executes 'use DBD::mysql;'

For years, I have been running a Perl script on my Synology NAS. This script writes to a MariaDB database. I never had any errors, until the migration to DSM7. This is what happened:
I never had any problem when using MariaDB5
I migrated my database from MariaDB5 to MariaDB10. I changed the database name and the credentials to make sure my perl script used the MariaDB10 database. Everything worked fine!
I upgraded my Synology NAS to DSM7, following the provided procedure, including the requirement to remove the MariaDB5 package.
Afterwards, I noticed the following error:
Can't load '/usr/local/lib/perl5/vendor_perl/auto/DBD/mysql/mysql.so' for module DBD::mysql: libmariadb.so.3: cannot open shared object file: No such file or directory at /usr/local/lib/perl5/core_perl/DynaLoader.pm line 193.
at ./mysql1_MDB10.pl line 2.
Compilation failed in require at ./mysql1_MDB10.pl line 2.
BEGIN failed--compilation aborted at ./mysql1_MDB10.pl line 2.
To make it easy to debug, I created a test script:
#!/usr/bin/perl
use DBD::mysql;    # the 'use' that fails to load mysql.so
use DBI;

# Connection settings for the migrated MariaDB10 database
$solDBIDB = 'DBI:mysql:database=SOLAR2_10;host=127.0.0.1;port=3307';
$DBUser = 'mydbuser';
$DBPass = 'mydbpass';
$dbh = DBI->connect($solDBIDB, $DBUser, $DBPass) || die "Could not connect to database: $DBI::errstr";
$query = "SELECT * FROM SOLAR2_10.SUM_DATA_ITEMS";
$query_handle = $dbh->prepare($query);
# EXECUTE THE QUERY
$query_handle->execute();
I'm using a Synology NAS DS415P with DSM 7.1-42661 Update 2 (latest updates).
Perl is installed as a package (via the Synology Package center)
perl -v provides the following version information: This is perl 5, version 28, subversion 3 (v5.28.3) built for i686-linux
The version of MariaDB10 is 10.3.32-1040
Update after a comment from @ikegami:
I'm able to find the mysql.so file:
-r-xr-xr-x 1 root root 131923 Apr 21 2021 /usr/local/lib/perl5/vendor_perl/auto/DBD/mysql/mysql.so
I'm able to find the various libmariadb.so files:
ls -l in /volume1/@appstore/MariaDB10/usr/local/mariadb10/lib
lrwxrwxrwx 1 root root 15 Nov 23 2021 libmariadb.so -> libmariadb.so.3
-rwxr-xr-x 1 root root 275356 Nov 23 2021 libmariadb.so.3
lrwxrwxrwx 1 root root 15 Nov 23 2021 libmysqlclient_r.so -> libmariadb.so.3
lrwxrwxrwx 1 root root 15 Nov 23 2021 libmysqlclient.so -> libmariadb.so.3
lrwxrwxrwx 1 root root 17 Nov 23 2021 libmysqld.so -> libmariadbd.so.19
drwxr-xr-x 3 root root 4096 Nov 23 2021 mysql
drwxr-xr-x 2 root root 4096 Nov 23 2021 pkgconfig
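To see how the dynamic loader resolves the dependency, ldd can be used (a diagnostic sketch, using the paths from above):
# Show which shared libraries mysql.so needs and which ones are missing
ldd /usr/local/lib/perl5/vendor_perl/auto/DBD/mysql/mysql.so
# If libmariadb.so.3 is reported as "not found", pointing the loader at the
# MariaDB10 lib directory for a single run shows whether the search path is
# the only problem:
LD_LIBRARY_PATH=/volume1/@appstore/MariaDB10/usr/local/mariadb10/lib perl ./mysql1_MDB10.pl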
Additional info:
cpan -l | egrep -i "mysql"
Bundle::DBD::mysql 4.048
DBD::mysql 4.048
DBD::mysql::GetInfo undef
To me, everything looks properly installed ...
What am I missing?

unshare command doesn't create new PID namespace

I'm learning Linux internals and I'm on the namespaces topic now.
I tried to play with the "unshare" command just to get the hang of namespaces and their essentials.
The problem is that it doesn't work or, what is more probable, I'm doing something wrong.
I'd appreciate it if you could help me understand.
I try to execute the busybox sh program within its own PID namespace. This is what I do:
[ab@a ~]$ sudo unshare --pid busybox sh
/home/ab # ps
PID TTY TIME CMD
6014 pts/1 00:00:00 sudo
6016 pts/1 00:00:00 busybox
6026 pts/1 00:00:00 ps
So, as I can see from the output of the ps command, all processes are visible in the new environment. And this is confirmed when I check the PID namespace IDs of the newly created process and of the current one. See below:
[ab@a ~]$ ps -p 6016,$$
PID TTY TIME CMD
4604 pts/0 00:00:00 bash
6016 pts/1 00:00:00 busybox
[ab@a ~]$ sudo ls -l /proc/4604/ns
total 0
lrwxrwxrwx. 1 ab ab 0 Aug 8 23:49 ipc -> ipc:[4026531839]
lrwxrwxrwx. 1 ab ab 0 Aug 8 23:49 mnt -> mnt:[4026531840]
lrwxrwxrwx. 1 ab ab 0 Aug 8 23:49 net -> net:[4026531968]
lrwxrwxrwx. 1 ab ab 0 Aug 8 23:49 pid -> pid:[4026531836]
lrwxrwxrwx. 1 ab ab 0 Aug 8 23:49 user -> user:[4026531837]
lrwxrwxrwx. 1 ab ab 0 Aug 8 23:49 uts -> uts:[4026531838]
[ab@a ~]$ sudo ls -l /proc/6016/ns
total 0
lrwxrwxrwx. 1 root root 0 Aug 9 00:07 ipc -> ipc:[4026531839]
lrwxrwxrwx. 1 root root 0 Aug 9 00:07 mnt -> mnt:[4026531840]
lrwxrwxrwx. 1 root root 0 Aug 9 00:07 net -> net:[4026531968]
lrwxrwxrwx. 1 root root 0 Aug 9 00:07 pid -> pid:[4026531836]
lrwxrwxrwx. 1 root root 0 Aug 9 00:07 user -> user:[4026531837]
lrwxrwxrwx. 1 root root 0 Aug 9 00:07 uts -> uts:[4026531838]
So the PID namespace stays the same even though I provided the --pid argument to the unshare call.
Could you please help me understand why this happens?
Thanks
Solution
You should add the --fork and --mount-proc switches to unshare, as stated in the man page:
-f, --fork
Fork the specified program as a child process of unshare rather than running it directly. This is useful
when creating a new PID namespace. Note that when unshare is waiting for the child process, then it
ignores SIGINT and SIGTERM and does not forward any signals to the child. It is necessary to send
signals to the child process.
Explanation (from man pid_namespaces)
a process's PID namespace membership is determined when the process is created and cannot be changed thereafter.
What unshare actually does when you supply --pid is set /proc/[PID]/ns/pid_for_children for the current process to the new PID namespace, causing children subsequently created by this process to be placed in a different PID namespace (its children, not the process itself! This is important).
So, when you supply --fork to unshare, it will fork your program (in this case busybox sh) as a child process of unshare and place it in the new PID namespace.
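A quick way to see this distinction (a sketch, assuming Linux 4.12 or later, where /proc/[pid]/ns/pid_for_children exists):
# Without --fork, the shell itself keeps the old PID namespace;
# only its future children would be placed in the new one:
sudo unshare --pid sh -c 'exec readlink /proc/self/ns/pid /proc/self/ns/pid_for_children'
# prints two different namespace IDs, e.g.:
# pid:[4026531836]
# pid:[4026533481]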
Why do I need --mount-proc ?
Try running unshare with only --pid and --fork and let's see what happens:
wendel@gentoo-grill ~ λ sudo unshare --pid --fork busybox sh
/home/wendel # echo $$
1
/home/wendel # ps
PID USER TIME COMMAND
12443 root 0:00 unshare --pid --fork busybox sh
12444 root 0:00 busybox sh
24370 root 0:00 {ps} busybox sh
... // a bunch more
From echo $$ we can see that the PID is actually 1, so we know that we must be in the new PID namespace, but when we run ps we see other processes as if we were still in the parent PID namespace.
This is because /proc is a special filesystem called procfs that the kernel creates in memory. From the man page:
A /proc filesystem shows (in the /proc/[pid] directories) only processes visible in the PID namespace of the process that performed the mount, even if the /proc filesystem is viewed from processes in other namespaces.
So, in order for tools such as ps to work correctly, we need to re-mount /proc using a process in the new namespace.
But, assuming that your process is in the root mount namespace, if we re-mount /proc, this will mess up many things for other processes in the same mount namespace, because now they can't see anything (in /proc). So you should also put your process in a new mount namespace.
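Done by hand, that would look something like this (a sketch; unshare's --mount creates the new mount namespace with private propagation by default, so the remount stays local):
sudo unshare --pid --fork --mount busybox sh -c 'mount -t proc proc /proc && ps'
# ps now lists only PID 1 (sh) and ps itself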
The good thing is that unshare has --mount-proc:
--mount-proc[=mountpoint]
Just before running the program, mount the proc filesystem at mountpoint (default is /proc). This is useful when creating a new PID namespace. It also implies creating a new mount namespace since the /proc mount would
otherwise mess up existing programs on the system. The new proc filesystem is explicitly mounted as private (with MS_PRIVATE|MS_REC).
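Putting it all together, the command from the question becomes:
sudo unshare --pid --fork --mount-proc busybox sh
Inside it, echo $$ prints 1 and ps lists only the processes of the new PID namespace.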
Let's verify that --mount-proc also puts your process in a new mount namespace.
bash outside:
wendel@gentoo-grill ~ λ ls -go /proc/$$/ns/{user,mnt,pid}
lrwxrwxrwx 1 0 Aug 9 10:05 /proc/17011/ns/mnt -> 'mnt:[4026531840]'
lrwxrwxrwx 1 0 Aug 9 10:10 /proc/17011/ns/pid -> 'pid:[4026531836]'
lrwxrwxrwx 1 0 Aug 9 10:10 /proc/17011/ns/user -> 'user:[4026531837]'
busybox:
wendel@gentoo-grill ~ λ doas ls -go /proc/16436/ns/{user,mnt,pid}
lrwxrwxrwx 1 0 Aug 9 10:05 /proc/16436/ns/mnt -> 'mnt:[4026533479]'
lrwxrwxrwx 1 0 Aug 9 10:04 /proc/16436/ns/pid -> 'pid:[4026533481]'
lrwxrwxrwx 1 0 Aug 9 10:17 /proc/16436/ns/user -> 'user:[4026531837]'
Notice that their user namespaces are the same, but the mount and PID namespaces aren't.
Note: You can see that I cited a lot from man pages. If you want to learn more about Linux namespaces (or anything Unix, really), the first thing to do is to read the man page of each namespace. They are well written and really informative.

how to start mariaDB on boot after external drive is mounted

I am using a Raspberry Pi 3 with OSMC as the operating system, along with Debian Stretch and nginx, and manually installed MariaDB 10.2 following some instructions I found somewhere a while back.
I have changed the datadir for MariaDB to /media/USBHDD2/shared/mysql.
When I boot, or reboot, the Pi, MariaDB fails to start. Before, when I had the default datadir = /var/lib/mysql, it was all fine. If I change it back, it is fine.
However, if I log in to the console I can successfully start it by using
service mysql start
Note that I am using 'service' rather than 'systemctl'; the latter does not work. The files mariadb.service and mysql.service do not exist anywhere.
In /etc/init.d I find two files, mysql and myswql, which seem to be identical. If I remove myswql from the directory, MariaDB won't start at all. I have tried editing these files by putting, for example, a sleep 15 at the beginning, but to no avail. I have read all sorts of solutions about testing whether USBHDD2 is mounted, e.g. using
while ! test -f /media/USBHDD2/shared/test.txt
do
    sleep 1
done
which I tried in the /etc/init.d/mysql and myswql files, and also in rc.local before calling for the start of mysql.
But that doesn't work either.
I also renamed the links in rc?.d to S99mysql so it starts after everything else; still no joy.
I have spent two full days on this to no avail. What do I need to do to get this working so that mysql starts on boot?
The file system is NTFS.
The output from ls -la /media/USBHDD2/shared/mysql is as follows:
total 176481
drwxrwxrwx 1 root root 4096 Mar 27 11:41 .
drwxrwxrwx 1 root root 4096 Mar 27 13:06 ..
-rwxrwxrwx 1 root root 16384 Mar 27 11:41 aria_log.00000001
-rwxrwxrwx 1 root root 52 Mar 27 11:41 aria_log_control
-rwxrwxrwx 1 root root 0 Nov 3 2016 debian-10.1.flag
-rwxrwxrwx 1 root root 12697 Mar 27 11:41 ib_buffer_pool
-rwxrwxrwx 1 root root 50331648 Mar 27 11:41 ib_logfile0
-rwxrwxrwx 1 root root 50331648 Mar 26 22:02 ib_logfile1
-rwxrwxrwx 1 root root 79691776 Mar 27 11:41 ibdata1
drwxrwxrwx 1 root root 32768 Mar 25 18:37 montegov_admin
-rwxrwxrwx 1 root root 0 Nov 3 2016 multi-master.info
drwxrwxrwx 1 root root 20480 Sep 3 2019 mysql
drwxrwxrwx 1 root root 0 Sep 3 2019 performance_schema
drwxrwxrwx 1 root root 86016 Mar 25 20:06 rentmaxpro_wp187
drwxrwxrwx 1 root root 0 Sep 3 2019 test
drwxrwxrwx 1 root root 32768 Nov 3 2016 trustedhomerenta_admin
drwxrwxrwx 1 root root 32768 Nov 3 2016 trustedhomerenta_demo
drwxrwxrwx 1 root root 40960 Mar 25 21:05 trustedhomerenta_meta
drwxrwxrwx 1 root root 36864 Mar 25 21:25 trustedhomerenta_montego
drwxrwxrwx 1 root root 36864 Mar 26 20:37 trustedhomerenta_testmontego
The problem is that the external drive is formatted as NTFS.
MySQL requires the files and directories to be owned by mysql:mysql, but since NTFS does not have the same system of owners and groups as Linux, the Linux mount process assigns its own owner and group to the file structure when mounting the drive. By default this ends up being root:root, so MySQL cannot use the files.
NTFS does not allow chown to work, so there is no way to change the ownership away from root.
One solution is to back up all the files, repartition the drive as EXT4, and then restore all the files.
The solution I finally used was to specify mysql as the owner and group at the time the drive is mounted. Thus my /etc/fstab entry was changed to:
UUID=C2CA68D9CA68CB6D /media/USBHDD2 ntfs users,exec,uid=mysql,gid=mysql 0 2
and now mysql starts properly at boot.
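To check the result without waiting for a reboot, remounting the drive and listing the datadir should show the new ownership (a quick sanity check):
sudo umount /media/USBHDD2 && sudo mount /media/USBHDD2
ls -ld /media/USBHDD2/shared/mysql   # should now show mysql mysql instead of root root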
phew ;-)
Thanks @danblack for getting me thinking in the right direction.

zabbix Standard items vfs.file.exists

The MySQL backup file is created at 23:35 by this script:
/usr/local/mysql/bin/mysqldump -uroot -padmin mysql > /data/backup/mysql-$(date +%Y-%m-%d).sql
The file names have the form mysql-$(date +%Y-%m-%d).sql.
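Presumably the dump is run from cron; a matching crontab entry would look like this (an assumption, since the question doesn't show it; note that % must be escaped in a crontab):
35 23 * * * /usr/local/mysql/bin/mysqldump -uroot -padmin mysql > /data/backup/mysql-$(date +\%Y-\%m-\%d).sql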
Backup files:
[root@zabbix-agent ~]# cd /data/backup/
[root@zabbix-agent backup]# ll
total 3072000
-rw-r--r-- 1 root root 1048576000 May 15 23:35 mysql-2018-05-29.sql
-rw-r--r-- 1 root root 1048576000 May 17 23:35 mysql-2018-05-30.sql
-rw-r--r-- 1 root root 1048576000 May 16 23:35 mysql-2018-05-31.sql
I want to check the file with the built-in key vfs.file.exists at 00:01 every day.
key:
vfs.file.exists[/data/$(date -d "yesterday" +%Y-%m-%d).sql]
But the Zabbix check fails. I want to know how I can use vfs.file.exists to check the backup file.
In Zabbix 3.0, scheduling intervals were added, which let you define exactly at what time a check should be executed. In your case the scheduling interval would be
h0m1
which stands for "every day at 0 hours 1 minute".

Google Chrome - Crash Dump Location

I'm trying to debug a page in a web app that keeps crashing Chrome (the "Aw, snap!" error). I've enabled/disabled automatic crash reporting and tried logging with google-chrome --enable-logging --v=1 (as well as various levels of verbosity), and all I get is a "crash dump ID" in chrome_debug.log. chrome://crashes shows all of the dump IDs, but no actual dump files.
I see other questions referring to reading the dump files, but I can't find the dump files themselves (just the ID).
Grepping for the crash ID in /tmp and ~/.config/google-chrome/ turns up nothing, but the ~/.config/google-chrome/chrome_debug.log shows that something was sent:
--2015-04-06 11:10:00-- https://clients2.google.com/cr/report
Resolving clients2.google.com (clients2.google.com)... 74.125.228.224, 74.125.228.225, 74.125.228.231, ...
Connecting to clients2.google.com (clients2.google.com)|74.125.228.224|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘/dev/fd/3’
0K
Crash dump id: 7dac9d5d58258264
Any ideas on where to find the actual file/data that's sent?
Details:
Chrome version: 40.0.2214.111 (Official Build)
Linux Mint 16 (Petra)
Edit: Some extra info:
curtis@localhost:~$ tail -n 5 uploads.log && echo $(pwd)
1428584493,ddc357e4600a49e6
1428584497,7ac16455c152381a
1428589439,d00ad6f5e6426f3d
1428934450,66b3f722430511e8
1428939578,7a2efc2b681515d1
/home/curtis/.config/google-chrome/Crash Reports
curtis@localhost:~$ ll -a
total 12
drwx------ 2 curtis curtis 4096 Apr 6 11:32 .
drwx------ 9 curtis curtis 4096 Apr 13 11:43 ..
-rw------- 1 curtis curtis 3291 Apr 13 11:39 uploads.log
Automatic reporting is enabled...
Thanks!
The *.dmp files are stored in /tmp/, and this has nothing to do with the "Automatic crash reporting" checkbox. The files are also not related to the hashes stored in ~/.config/google-chrome/.
In ~/.config/google-chrome/Crash Reports/uploads.log:
1429189585,5bddea9f7433e3da
The crash dump file for this particular report was:
chromium-renderer-minidump-2113a256de381bce.dmp
Solution:
root@localhost:~$ mkdir /tmp/misc && chmod 777 /tmp/misc
root@localhost:~$ cd /tmp
root@localhost:~$ watch -n 1 'find . -mmin -1 -exec cp {} /tmp/misc/ \;'
Then, as a regular user (not root):
google-chrome --enable-logging --v=1
Once you see files created by the watch command, run:
root@localhost:~$ ls -l
-rw------- 1 root root 230432 Apr 16 09:06 chromium-renderer-minidump-2113a256de381bce.dmp
-rw------- 1 root root 230264 Apr 16 09:12 chromium-renderer-minidump-95889ebac3d8ac81.dmp
-rw------- 1 root root 231264 Apr 16 09:13 chromium-renderer-minidump-da0752adcba4e7ca.dmp
-rw------- 1 root root 236246 Apr 16 09:12 chromium-upload-56dc27ccc3570a10
-rw------- 1 root root 237247 Apr 16 09:13 chromium-upload-5cebb028232dd944
Now you can use breakpad to work on the *.dmp files.
Google Chrome - Crash Dump Location
To generate the Crash Dump locally,
CHROME_HEADLESS=1 google-chrome
The .dmp files are then stored in ~/.config/google-chrome/Crash Reports
Produce Stack Trace
Check out and add depot_tools to your PATH (used to build breakpad)
git clone https://chromium.googlesource.com/chromium/tools/depot_tools
export PATH=`pwd`/depot_tools:"$PATH"
Check out and build breakpad (using fetch from depot_tools)
mkdir breakpad && cd breakpad
fetch breakpad
cd src
./configure && make
To produce stack trace without symbols:
breakpad/src/processor/minidump_stackwalk -m /path/to/minidump
More here https://www.chromium.org/developers/decoding-crash-dumps
Personally Preferred Method
Enable crash reporting:
Chrome menu > Settings > Show advanced settings > Tick "Automatically send usage statistics and crash reports to Google"
Go to chrome://crashes > File bug > Takes you to crbug.com > Complete the report, leaving the auto-added report_id field unchanged.
Someone from the Chrome/Chromium team will follow up. They can provide
you with your stack trace and help resolve the issue.