File owner changed after editing it

[root@MGWSDT_FEWS ~]# ll file
-rw-r--r-- 1 root bill 0 Aug 14 17:28 file
[root@MGWSDT_FEWS ~]# su - bill
$ vi /root/file
I edited the file and saved it with :wq!
Now bill becomes the file owner:
$ ll /root/file
-rw-r--r-- 1 bill bill 16 Aug 14 17:29 /root/file
Why? So strange!

bill can't edit the file: he's part of a group that only has read access.
So after switching to bill you would expect a permissions error when you try to write.
In this case bill is also the owner of the directory, so what actually happens is that the file is removed and recreated, now with bill as the owner.
:w !sudo tee %
would write the buffer as root and keep the original owner and permissions.
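The delete-and-recreate mechanism is easy to demonstrate without a second account, because deletion is governed by the directory's permissions, not the file's. A minimal sketch using throwaway paths:

```shell
# A read-only file in a writable directory can still be removed and
# recreated -- effectively what vi's :wq! does in the question above.
tmpdir=$(mktemp -d)
cd "$tmpdir"
echo original > file
chmod 444 file        # the file itself is read-only
rm -f file            # succeeds anyway: deleting needs write access on the directory
echo replaced > file  # recreated; the new file is owned by whoever ran this
cat file              # prints "replaced"
```

(Whether an editor overwrites in place or replaces the file depends on its configuration; in Vim, for instance, the 'backupcopy' option influences this.)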

Related

qemu - Need read/write rights on statedir /var/lib/swtpm-localca for user tss

I've installed swtpm and added it to a virtual machine using virt-manager (qemu+virsh).
When I'm going to start the machine, an error arises and points to a log file.
The file states:
Need read/write rights on statedir /var/lib/swtpm-localca for user tss.
The easiest approach I've found is to just give the ownership of that particular folder to that user.
sudo chown -R tss:root /var/lib/swtpm-localca
On my system it has previously stated:
sudo ls -lach /var/lib/swtpm-localca
total 8,0K
drwxr-x--- 2 swtpm root 4,0K Mär 17 11:51 .
drwxr-xr-x 80 root root 4,0K Mär 17 11:51 ..
I do not know what I might break by revoking the swtpm user's access to that folder, but so far everything works smoothly.
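The error message itself suggests a way to verify the fix. A small sketch (the dir_rw helper is mine, not part of swtpm; on the real system you would run the check as the tss user via sudo -u tss):

```shell
# dir_rw: report whether a directory is readable and writable by the
# current user -- the condition swtpm demands of tss on its statedir.
dir_rw() {
    if [ -r "$1" ] && [ -w "$1" ]; then
        echo "ok: $1 is readable and writable"
    else
        echo "error: need read/write rights on $1"
        return 1
    fi
}
# Demonstrated on a throwaway directory so the sketch runs anywhere;
# the real check would be run against /var/lib/swtpm-localca as tss.
tmpdir=$(mktemp -d)
dir_rw "$tmpdir"      # prints "ok: ..."
```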

How to start MariaDB on boot after an external drive is mounted

I am using a Raspberry Pi 3 with OSMC as the operating system along with Debian Stretch and nginx, and installed MariaDB 10.2 manually, following some instructions I found a while back.
I have changed the datadir for MariaDB to /media/USBHDD2/shared/mysql
When I boot, or reboot, the Pi, MariaDB fails to start. Before, when I had the default datadir = /var/lib/mysql, it was all fine. If I change it back, it is fine.
However, if I login to the console I can successfully start it by using
service mysql start
Note that I am using 'service' rather than 'systemctl' - the latter does not work. The files mariadb.service and mysql.service do not exist anywhere.
In /etc/init.d I find two files, mysql and myswql, which seem to be identical. If I remove myswql from the directory, MariaDB won't start at all. I have tried editing these by putting, for example, a sleep 15 at the beginning, but to no avail. I have read all sorts of solutions about testing whether USBHDD2 is mounted, e.g. using
while ! test -f /media/USBHDD2/shared/test.txt
do
    sleep 1
done
which I tried in the /etc/init.d/mysql and myswql files, and also in rc.local before calling for the start of mysql.
But that doesn't work either.
I also renamed the links in rc?.d to S99mysql so it starts after everything else, but still no joy.
I have spent two full days on this to no avail. What do I need to do to get this working so that mysql starts on boot?
The file system is NTFS.
Output from ls -la /media/USBHDD2/shared/mysql is as follows:
total 176481
drwxrwxrwx 1 root root 4096 Mar 27 11:41 .
drwxrwxrwx 1 root root 4096 Mar 27 13:06 ..
-rwxrwxrwx 1 root root 16384 Mar 27 11:41 aria_log.00000001
-rwxrwxrwx 1 root root 52 Mar 27 11:41 aria_log_control
-rwxrwxrwx 1 root root 0 Nov 3 2016 debian-10.1.flag
-rwxrwxrwx 1 root root 12697 Mar 27 11:41 ib_buffer_pool
-rwxrwxrwx 1 root root 50331648 Mar 27 11:41 ib_logfile0
-rwxrwxrwx 1 root root 50331648 Mar 26 22:02 ib_logfile1
-rwxrwxrwx 1 root root 79691776 Mar 27 11:41 ibdata1
drwxrwxrwx 1 root root 32768 Mar 25 18:37 montegov_admin
-rwxrwxrwx 1 root root 0 Nov 3 2016 multi-master.info
drwxrwxrwx 1 root root 20480 Sep 3 2019 mysql
drwxrwxrwx 1 root root 0 Sep 3 2019 performance_schema
drwxrwxrwx 1 root root 86016 Mar 25 20:06 rentmaxpro_wp187
drwxrwxrwx 1 root root 0 Sep 3 2019 test
drwxrwxrwx 1 root root 32768 Nov 3 2016 trustedhomerenta_admin
drwxrwxrwx 1 root root 32768 Nov 3 2016 trustedhomerenta_demo
drwxrwxrwx 1 root root 40960 Mar 25 21:05 trustedhomerenta_meta
drwxrwxrwx 1 root root 36864 Mar 25 21:25 trustedhomerenta_montego
drwxrwxrwx 1 root root 36864 Mar 26 20:37 trustedhomerenta_testmontego
The problem is that the external drive is configured as ntfs.
MySQL requires the files and directories to be owned by mysql:mysql, but since NTFS does not have the same system of owners and groups as Linux, the Linux mount process assigns its own owner and group to the file structure when mounting the drive. By default this ends up being root:root, so MySQL cannot use the files.
NTFS does not support chown, so there is no way to change the ownership away from root.
One solution is to back up all the files, reformat the drive as ext4, and then restore all the files.
The solution I finally used was to specify mysql as the owner and group at the time the drive is mounted. Thus my /etc/fstab entry was changed to:
UUID=C2CA68D9CA68CB6D /media/USBHDD2 ntfs users,exec,uid=mysql,gid=mysql 0 2
and now mysql starts properly at boot.
phew ;-)
Thanks @danblack for getting me thinking in the right direction.
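A side note on the uid=/gid= options: ntfs-3g documents them as numeric values, so if a user name in fstab is not accepted on a given system, resolving the account to numbers first is a safe fallback. A sketch (the fstab_line helper is mine; root is used in the demo call only so the sketch runs anywhere, on the Pi the account would be mysql):

```shell
# fstab_line <user> <uuid> <mountpoint>: build an fstab entry that mounts
# an NTFS volume with every file owned by the given account.
fstab_line() {
    printf 'UUID=%s %s ntfs users,exec,uid=%s,gid=%s 0 2\n' \
        "$2" "$3" "$(id -u "$1")" "$(id -g "$1")"
}
fstab_line root C2CA68D9CA68CB6D /media/USBHDD2
# -> UUID=C2CA68D9CA68CB6D /media/USBHDD2 ntfs users,exec,uid=0,gid=0 0 2
```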

zabbix Standard items vfs.file.exists

The MySQL backup file is created at 23:35 by this script:
/usr/local/mysql/bin/mysqldump -uroot -padmin mysql > /data/backup/mysql-$(date +%Y-%m-%d).sql
The file name has the form mysql-$(date +%Y-%m-%d).sql.
Backup files:
[root@zabbix-agent ~]# cd /data/backup/
[root@zabbix-agent backup]# ll
total 3072000
-rw-r--r-- 1 root root 1048576000 May 15 23:35 mysql-2018-05-29.sql
-rw-r--r-- 1 root root 1048576000 May 17 23:35 mysql-2018-05-30.sql
-rw-r--r-- 1 root root 1048576000 May 16 23:35 mysql-2018-05-31.sql
I want to check for the file with the built-in key vfs.file.exists at 00:01 every day.
Zabbix item key:
vfs.file.exists[/data/$(date -d "yesterday" +%Y-%m-%d).sql]
but the Zabbix check fails. I want to know how I can use vfs.file.exists to check the backup file.
Zabbix 3.0 added scheduling intervals, which let you define exactly at what time a check should be performed. In your case the scheduling interval instruction would be
h0m1
which stands for "every day at 0 hours 1 minute".
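One more detail worth noting: an item key is a static string, so Zabbix never expands $(date -d "yesterday" +%Y-%m-%d) inside vfs.file.exists[...]. The date arithmetic has to happen on the agent side. A hedged sketch (the UserParameter name is illustrative, not from the question):

```shell
# In zabbix_agentd.conf a UserParameter command runs through the shell,
# so command substitution works there (key name mysql.backup.exists is made up):
#   UserParameter=mysql.backup.exists,test -f /data/backup/mysql-$(date -d yesterday +%Y-%m-%d).sql && echo 1 || echo 0
# The path component the check needs, computed with GNU date:
yesterday=$(date -d "yesterday" +%Y-%m-%d)
echo "/data/backup/mysql-$yesterday.sql"
```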

How to change the stdout and stderr log location of processes started by supervisor?

So in my system, the supervisor captures stderr and stdout into these files:
root@3a1a895598f8:/var/log/supervisor# ls -l
total 24
-rw------- 1 root root 18136 Sep 14 03:35 gunicorn-stderr---supervisor-VVVsL1.log
-rw------- 1 root root 0 Sep 14 03:35 gunicorn-stdout---supervisor-lllimW.log
-rw------- 1 root root 0 Sep 14 03:35 nginx-stderr---supervisor-HNIPIA.log
-rw------- 1 root root 0 Sep 14 03:35 nginx-stdout---supervisor-2jDN7t.log
-rw-r--r-- 1 root root 1256 Sep 14 03:35 supervisord.log
But I would like to change the location of gunicorn's stdout and stderr log files to /var/log/gunicorn and fix the file names for monitoring purposes.
This is what I have done in the config file:
[program:gunicorn]
stdout_capture_maxbytes=50MB
stderr_capture_maxbytes=50MB
stdout = /var/log/gunicorn/gunicorn-stdout.log
stderr = /var/log/gunicorn/gunicorn-stderr.log
command=/usr/bin/gunicorn -w 2 server:app
However it does not take any effect at all. Did I miss anything in the configuration?
Change stdout and stderr to stdout_logfile and stderr_logfile and this should solve your issue.
You can also change childlogdir in the main configuration to make all the child logs appear in another directory. If you are using auto log mode, the log file names will be auto-generated inside the specified childlogdir without you needing to set stdout_logfile.
In order for your changes to be reflected you need to either restart the supervisor service with:
service supervisord restart
or
reload the config with supervisorctl reload and apply it to the running processes with supervisorctl update.
Documentation on this can be found here http://supervisord.org/logging.html#child-process-logs
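Putting the renames together, the corrected section would look something like this (command and paths copied from the question; a sketch, not tested, and note that supervisord does not create /var/log/gunicorn for you):

```ini
[program:gunicorn]
command=/usr/bin/gunicorn -w 2 server:app
stdout_capture_maxbytes=50MB
stderr_capture_maxbytes=50MB
; stdout_logfile/stderr_logfile are the option names supervisord reads:
stdout_logfile=/var/log/gunicorn/gunicorn-stdout.log
stderr_logfile=/var/log/gunicorn/gunicorn-stderr.log
```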

Load CSV Fails in Cypher + Neo4j "LoadExternalResourceException: Couldn't load the external resource at:"

I have a fresh install of Neo4j 2.1.4 open source on a corporate cloud server running Ubuntu 14.04. I am importing a CSV file into the database. The path to my file is '/home/username/data-neo4j/node.csv'
Below is my command, which I run from the Neo4j command line tool neo4j-shell:
LOAD CSV WITH HEADERS FROM "file:///home/username/data-neo4j/node.csv" AS line CREATE (:Node { nid: toInt(line.nid), title: line.title, type: line.type, url: line.url});
This returns:
LoadExternalResourceException: Couldn't load the external resource at: file:/home/user/data-neo4j/node.csv
This looks like a message saying it can't find the file. However, the file is in place. I even tried changing the permissions on the file to be 755.
I have a separate instance of Neo4j on my local machine (OSX with Neo4j 2.1.2 Enterprise). The command is successful on my local machine, given that I switch the path to match.
One thing I notice when I run neo4j-shell: I get NOTE: Remote Neo4j graph database service 'shell' at port 1337. I have opened this port, and my command still returns the same error message.
I also read through this link - but their problem was that they had not uploaded their file. My file is in place.
neo4j LOAD CSV returns Couldn't Load external resource
sheldonkreger, your co-worker is right. Thanks to him.
I solved it the same way, but you actually don't need to place the file in a location where the neo4j user has permissions (for example /var/log/neo4j or /var/lib/neo4j), as he suggested.
Instead, just go to the Neo4j directories mentioned above, look at the file permissions there, and give the same permissions to your CSV file (or whichever file you are trying to import).
For example, on my system the file permissions in the Neo4j folder looked like this:
ls -la
total 208
drwxr-xr-x 4 neo4j adm 4096 Feb 4 10:35 .
drwxr-xr-x 87 root root 4096 Feb 11 22:21 ..
drwxr-xr-x 3 neo4j adm 4096 Feb 4 10:35 bin
-rw-r--r-- 1 neo4j adm 61164 Jan 29 22:32 CHANGES.txt
lrwxrwxrwx 1 neo4j adm 10 Sep 30 12:07 conf -> /etc/neo4j
drwxr-xr-x 4 neo4j adm 4096 Mar 13 13:25 data
lrwxrwxrwx 1 neo4j adm 20 Sep 30 12:07 lib -> /usr/share/neo4j/lib
-rw-r--r-- 1 neo4j adm 125517 Jan 29 22:32 LICENSES.txt
lrwxrwxrwx 1 neo4j adm 24 Sep 30 12:07 plugins -> /usr/share/neo4j/plugins
-rw-r--r-- 1 neo4j adm 1568 Jan 29 22:32 README.txt
lrwxrwxrwx 1 neo4j adm 23 Sep 30 12:07 system -> /usr/share/neo4j/system
-rw-r--r-- 1 neo4j adm 4018 Jan 29 22:30 UPGRADE.txt
So I did the same for my file, and Neo4j was able to run the import command successfully.
I did this:
sudo chown neo4j:adm <csv file location>
A co-worker helped me debug this.
The problem was permissions. In Linux, Neo4j runs as its own user, 'neo4j'. That user did not have permission to access the data at /home/myuser/data-neo4j/node.csv
We moved the data to a folder where the neo4j user has permissions and adjusted the path in the query.
For future reference, the Neo4j log can provide additional info and, in Linux, is found at /var/log/neo4j