Permissions for external HDD in Nextcloud container

Good day.
I'm trying to migrate my NC19 server to a container.
So far I can install the container and map a persistent volume to the host's drive, but when I try to use an external HDD I get the following errors in the log:
Initializing nextcloud 19.0.2.2
rsync: chown "/var/www/html/data" failed: Operation not permitted (1)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1207) [sender=3.1.3]
This is more than likely a permissions issue, but I get the above error whether I mount the HDD as root and grant access to the external drive /media/ncd as root:root or as www-data:www-data.
Also, the external HDD's filesystem is exFAT (not sure whether this affects the container).
Any ideas how I can get past this?
I'm close to formatting my HDD as ext4 to see if that fixes it.
Thanks in advance.
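Worth noting: exFAT does not store POSIX ownership at all, so the chown that the Nextcloud entrypoint runs can never succeed on it; ownership can only be forced at mount time. A sketch of an /etc/fstab entry, assuming the drive is /dev/sdb1 and that www-data inside the container is uid/gid 33 (the default in the official Nextcloud image):

```
# force ownership to www-data (uid/gid 33), since exFAT cannot store it
/dev/sdb1  /media/ncd  exfat  defaults,uid=33,gid=33,umask=007  0  0
```

With that in place every file on the drive appears owned by 33:33, which sidesteps the failing chown, though an ext4 reformat remains the cleaner fix.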

Just got this fixed by changing the filesystem from exFAT to ext4.

Related

libvirt cannot open symlink image

This only happens on Fedora; on Ubuntu it works fine.
I have a VM whose disk is a symlink pointing to the actual image. When I start the VM, it errors out with permission denied:
error: internal error: process exited while connecting to monitor: 2020-10-15T19:24:24.359891Z qemu-system-x86_64: -drive file=/some-local-pool/VM01.current,format=qcow2,if=none,id=drive-virtio-disk0,cache=unsafe,discard=unmap,aio=threads: Could not open '/some-local-pool/VM01.current': Permission denied
But the permissions look fine:
lrwxrwxrwx. 1 root root 32 Oct 15 20:37 VM01.current -> VM01.deployed
-rw-rw-r--. 1 root root 20G Oct 15 21:01 VM01.deployed
unconfined_u:object_r:unlabeled_t:s0 VM01.current
system_u:object_r:default_t:s0 VM01.deployed
Using the actual image path directly works, and qemu-img info on the symlink works too. Any idea why this won't work?
It is an SELinux context problem. Somehow VM01.deployed gets the wrong type, default_t, when the VM is started with it. We need to change it to virt_content_t if we want to use the symlink as the image path.
Moreover, the label of the symlink itself needs to match the target:
# chcon --reference=VM01.deployed VM01.current
Then it works.
It is still not clear why using VM01.deployed directly as the image path works even though its type label is default_t.
So far, when we use libvirt and qemu tools to create new qcow2 disks or snapshots, the labels can be many things: default_t, virt_image_t, virt_content_t... no idea why they cannot all agree on one type. This is really confusing.
When libvirt runs QEMU, it is given a unique SELinux label that is specific to that VM instance. In order for this to work, libvirt has to set the SELinux label on every disk image that QEMU is intended to access prior to QEMU being started. The label that libvirt sets will apply to the real disk image (VM01.deployed), but not the symlink (VM01.current) which you need to set to virt_image_t/virt_content_t.
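A persistent alternative to the one-off chcon (which a filesystem relabel would undo) is to record the context in SELinux policy; a sketch, assuming the images live under /some-local-pool and using virt_image_t (or virt_content_t for read-only content, as in the answer above):

```shell
# record a default file context for everything under the pool,
# then apply it to the files and symlinks already there
semanage fcontext -a -t virt_image_t "/some-local-pool(/.*)?"
restorecon -Rv /some-local-pool
```

Because these are policy-configuration commands, the labels survive reboots and relabels instead of silently reverting to default_t.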

XAMPP macOS - use external USB drive for MySQL storage

I'm using fairly large MySQL tables with XAMPP, which is tough with the rather small internal storage of my Mac. I thought I would just keep the MySQL data on an external USB 3.0 SSD, but it looks like it's not that easy.
Here is what I've tried:
With XAMPP (not VM): I moved /Applications/XAMPP/xamppfiles/var/mysql to /Volumes/myexternalssd/mysql and then pointed everything in my.cnf to that directory. The permissions seem to have copied properly, but it didn't work: MySQL does not start at all if I trash the original dir, or it just keeps using the original dir if I leave it in place.
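For reference, a minimal sketch of the my.cnf change that attempt implies (the datadir path is the one from the question; the socket line is an assumption, since XAMPP may pin it elsewhere):

```
[mysqld]
# point the data directory at the external SSD
datadir=/Volumes/myexternalssd/mysql
# leave the socket on the internal drive so clients still find it
socket=/Applications/XAMPP/xamppfiles/var/mysql/mysql.sock
```

One macOS-specific gotcha worth checking: external volumes are often mounted with ownership ignored ("noowners"), in which case the copied permissions are not actually enforced for the mysql user.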
With XAMPP-VM: I moved the ~/.bitnami dir to the external drive and then symlinked (ln -s) to the new location. The error is then:
Cannot load stack definitions.
Details:
1 error occurred:
* failed to create stack: cannot deserialize stack from file "/Users/arseni/.bitnami/stackman/machines/xampp/metadata.json": open /Users/arseni/.bitnami/stackman/machines/xampp/metadata.json: operation not permitted

gsutil doesn't run in the mounted drive directory

I'm trying to run gsutil in a shared environment and I'm seeing really weird behaviour.
When I run it from the root of the filesystem, or anywhere else, everything is fine, but when I cd into the shared drive's mounted directory it fails with this:
$: gsutil
cannot open path of the current working directory: Permission denied
The shared drive folder itself is a Google Fileshare NFS mount with drwxrwxr-x, and the user is in a group that has rwx.
Any help appreciated, thanks!
Update: the issue was with the snap installation of the gcloud SDK. I'm not sure of the exact nature of the problem (presumably snap confinement blocking access to the network mount), but reinstalling it with apt-get, following the Google Cloud SDK installation manual, solved the issue.

Syslog-ng - file permission error in SUSE Linux

I am getting the error below when I try to forward certain log files using syslog-ng on SUSE Linux:
Starting syslog services
Error opening file for reading; filename='/tmp/app.log', error='Permission denied (13)'
My conf file's source definition seems to be OK:
source app {
file("/tmp/app.log");
};
I went through similar posts and don't see any problems with my steps. The weird part is that the file is owned by root, and even when I run syslog-ng as root it gives a read permission error.
Am I missing anything?
This problem is caused by the AppArmor Linux security module. The solution is described in the linked thread: syslog-ng read file permission denied
Here are the steps I followed:
Open /etc/apparmor.d/sbin.syslog-ng
Add a line like /opt/xxx/logs/* rw, anywhere in the profile; rw allows read and write access. Change the directory appropriately.
Run apparmor_parser -r /etc/apparmor.d/sbin.syslog-ng to load the new rules.
Restart syslog-ng using the service command or however you have it set up.
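Applied to the path from the original question, the profile addition would look something like this (a sketch; the exact profile file and binary paths can differ between distributions and syslog-ng packages):

```
# /etc/apparmor.d/sbin.syslog-ng (fragment)
/sbin/syslog-ng {
  # ... existing rules ...
  /tmp/app.log r,    # read access is enough for a file() source
}
```

A file() source only reads the log, so r is sufficient there; rw is only needed if syslog-ng must also write to the path.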

Copying a MySQL database to another machine

I'm trying to make a copy of a MySQL database on another server. I stop the server, tar up the mysql directory, copy it to the other server, and untar it. I set all the permissions to match the working server, and copied the my.cnf file, so everything is the same except for server-specific changes.
However, when I try to startup the server, I get the following InnoDB error:
InnoDB: Operating system error number 13 in a file operation.
This error means mysql does not have the access rights to
the directory.
File name /var/lib/mysql/ibdata1
File operation call: 'open'.
The owner/group for all the files is mysql. I even tried changing permissions to a+rw. I can su to the mysql user and access the ibdata1 file using head.
SOLUTION:
The problem was that SELinux was enabled and was preventing the new files from being accessed. Restoring the default labels on the copied datadir (for example with restorecon -R /var/lib/mysql) lets mysqld open them again.
A silly question, but people forget: you said you checked that all the files have the same permissions, but even though the message points there, might you possibly have forgotten to check the permissions on the containing directory?
UPDATE: Two more suggestions:
You might try adding the --console and --log-warnings flags for lots of debugging output, something like this (on my Mac):
/usr/libexec/mysqld --console --log-warnings --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock
If all else fails, you can try strace mysqld ... to see what exactly is failing. The error will be somewhere near the bottom.
UPDATE2: Interesting indeed... I can't see what your OS is. I normally don't use /sbin/service; it's a bit mysterious to me. On a Mac it's deprecated in favour of launchctl with a config file in /System/Library/LaunchDaemons/mysqld.plist, and on most Linux boxes you have /etc/init.d/mysqld. So you could insert strace there.
Or (untested, but the manpage suggests it's possible) you could try stracing the service call:
strace -ff -o straces /sbin/service mysqld start
This should produce files straces.<pid>, one of which should be mysqld's, and hopefully you'll find your error there.
This isn't a direct answer to your question, but I would recommend trying one of these programs for your backup / restore needs.
Percona Xtrabackup: https://launchpad.net/percona-xtrabackup
Mydumper: http://www.mydumper.org/
Both are great tools, are free and open source, and will help you avoid that problem entirely.
Hope that helps.