Solaris cp -P not setting directory ownership when there is a soft link - solaris-10

I've encountered an oddity when doing a copy in Solaris 10 update 10 (sparc, 147440-25).
Here is the setup (done as root):
# cd /tmp
# mkdir foo
# touch foo/thing1
# ln -s thing1 foo/thing2
# chown -hR joe:user foo
If you look at the directory and the link, everything is owned by the user "joe". Now comes the interesting part:
# cp -rpP foo bar
The options to cp here are to recurse, preserve permissions and ownership, and to act on links rather than following them. But when I do this, the link is copied with the correct ownership, while the directory itself, bar, ends up owned by root:root. Is there some reason for this behavior?
It only acts this way if there is a link in the directory. If the directory contains only regular files, all ownership is preserved (I assume because -P never comes into play).

That's indeed odd cp behavior, which I reproduced on the same Solaris release.
I'm not sure there is a patch for Solaris 10, but the issue is fixed in Solaris 11.1.
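Until a patch turns up, a possible workaround (just a sketch, reusing the joe:user example above) is to re-apply the ownership on the destination after the copy, using -h so the symlink itself is changed rather than its target:
# cp -rpP foo bar
# chown -hR joe:user bar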

Related

Docker and MariaDB/MySQL — Permanently Editing my.cnf to enable remote access

I am running Docker on a Macintosh, and have installed the MariaDB image. I would like to access it from another machine on the LAN.
I understand that the solution is to enable bind-address=0.0.0.0 (or something similar) in /etc/mysql/my.cnf. I executed docker exec -it mariadb bash, installed Joe text editor (because I am much more familiar with it than Vi or Nano), and edited the file.
The problem is that when I restart the Docker container, it has forgotten all the changes, and it doesn't work.
Am I missing a step, or is this not the way to go about it?
Containers are throw-away by design and, as you noticed, any modifications are lost when you run a fresh one.
You have two options:
The first one is described here: Docker: editing my.cnf in when building the image (just mount your custom config and be done).
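For that first option, a minimal sketch (assuming a hypothetical file my-custom.cnf in the current directory, and that the image reads extra config from /etc/mysql/conf.d, as the official MariaDB/MySQL images do):
$ cat my-custom.cnf
[mysqld]
bind-address=0.0.0.0
$ docker run -d -p 3306:3306 \
    -e MYSQL_ROOT_PASSWORD=my-secret-pw \
    -v $PWD/my-custom.cnf:/etc/mysql/conf.d/my-custom.cnf:ro \
    mariadb:10.3.28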
The second option is to build your own container image based on the official image plus your modification, something like this:
Dockerfile:
# Let's say mariadb v10.3.28... Change to whatever you want.
FROM mariadb:10.3.28
# there is already `#bind-address=0.0.0.0` in /etc/mysql/my.cnf
# we use sed to replace it with `bind-address=0.0.0.0`
RUN sed -i "s/#bind-address=0.0.0.0/bind-address=0.0.0.0/g" /etc/mysql/my.cnf && \
# and, for example, let's change `max_allowed_packet` too.
sed -i "s/max_allowed_packet.*/max_allowed_packet=512M/g" /etc/mysql/my.cnf;
(a rule of thumb is to combine as many steps as possible into a single RUN, to save image layers)
then build it:
$ cd /where/my/dockerfile/is
$ docker build . -t mymysql
and run it:
# In newer mariadb it should be `-e MARIADB_ROOT_PASSWORD=`
# And maybe you should mount datadir somewhere `-v /my/own/datadir:/var/lib/mysql`
$ docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=my-secret-pw mymysql
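To check that remote access actually works, you could then connect from another machine on the LAN (a sketch; it assumes the mysql client is installed there and that 192.168.1.10 is a placeholder for your Mac's LAN IP):
$ mysql -h 192.168.1.10 -P 3306 -u root -p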

Permission denied inside /var/www/html when creating a website and its files with the apache2 server

UPDATE: The screenshot is from Atom, but when I navigate to the directory using the file explorer and right-click, the options to rename or create a new folder are restricted and I cannot click on them.
I just finished setting up the LAMP stack on my fresh Ubuntu 18.04 installation. I have everything working: the default /var/www/html/index.html page from Apache2 is being served on localhost, with no port forwarding or any unique domain name; I just want to run this on my network from my computer for now.
If there is a simple way to create multiple websites and easily choose which folder to serve, then that's fine, but I want to serve just one website for now.
When I go to my /var/www/html folder and try to edit the index.html file, it says permission denied. What do I need to do in order to work inside this directory while I am building the website? I am signed in as the root user on my system.
Also, if I do change permissions to allow me to work in this directory, what does it mean for people trying to access my server if it were available to the public? (Right now it is just on localhost.)
Let me know if you need more info or explanation, thanks!
sudo chown -R $USER:$USER /var/www
This works; it changes the owner to my user instead of the root user. I still don't understand why it was needed, because my user already had sudo rights and all those permissions. It was the user I created during the Ubuntu 18.04 setup, so there shouldn't have been an issue.
File permission issues can be fixed at the command line by typing:
sudo chmod 777 /var/www/html -R
One caveat from turnkeyLinux.com:
Changing file permissions is a trade-off (often increasing security reduces user-friendliness and/or usability). For security 'best practice', only the folders that require write access by the webserver should be owned by the webserver. If your webserver has write access everywhere and your server is compromised, it makes it easier to hack your WordPress install; but for ease of use, giving the webserver ownership should resolve all your issues...
This article on Understanding File Permissions was great, too.
This will help you.
sudo chgrp -R www-data /var/www/html
sudo gpasswd -a username www-data
sudo chmod -R 777 /var/www/html
The permission error occurs because your user does not have rights on the folder; the rights belong to a different user. (You can inspect this with ls -l folderName.)
Your problem can be handled in different ways; the following are a few:
Way 1:
Find out who is running Apache by running the command apachectl -S
Locate the user name (say www-data)
Change the ownership of your folder with chown -R www-data:www-data /var/www/html (this will allow only Apache to play with the files)
Run the command ln -s /var/www/html /home/username/html (this will create a soft link to your folder, where you can edit/delete/read and have the changes reflected in what Apache serves)
Way 2:
Go to /var/www/
sudo chown -R www-data:${USER} html
(Now both Apache and your logged-in user have rights to play with the files.)
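One follow-up worth noting (a sketch, assuming the Way 2 ownership above): chown by itself only helps if the group also has write permission on the directory tree, and a setgid bit keeps the group on newly created entries:
# let the group (your user) write everywhere under html
sudo chmod -R g+w /var/www/html
# setgid on directories so new files keep the directory's group
sudo find /var/www/html -type d -exec chmod g+s {} +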
If you are not root, or you don't have permission on something (folders, files, ...), your actions are limited.
Take a folder as an example:
First of all, verify the permissions of your folder:
ls -ld linkto/folder
Then give it the permissions it needs, or type the command below to add all permissions:
sudo chmod -R 777 /var/www/html
Verify the permissions of your folder again; if they are correct, try again.
Just give read and write rights to the folder (not the user). Try the following commands:
For Read and Write:
sudo chmod -R a+rw /var/www
For Read, Write and Execute:
sudo chmod -R a+rwx /var/www
Edit the file as root. Or better yet, fix your permissions so you don't have to worry.

OwnCloud: How to synchronize the FileSystem with the DB

I have to "insert" a lot of files into an owncloud server (8.2).
A user gave me a USB key with the files and told me to copy all of them into his ownCloud data files repository.
Do you know if this is possible?
Is it possible to synchronize the ownCloud data filesystem with the ownCloud database?
My environment is Linux CentOS 7 (Apache 2.4, MySQL 5.6, PHP 5.6).
Thanks,
ownCloud ships with a command line utility that allows you to manually trigger some tasks. Among those is the files:scan function, which re-scans a user's file system.
So you can import those files by following these steps:
1. Copy the files into the physical file system of the user(s) inside ownCloud's data folder.
2. Run the command line utility to re-scan the files. That takes care of updating the database according to the files found.
This is an example for the manual trigger:
sudo -u www-data php occ files:scan <user name>
Here <user name> obviously has to be replaced. Also, the account name the sudo command switches to depends on the Linux distribution and its setup. The command has to be started inside ownCloud's base folder. The command can be called in a loop with different user names by means of standard scripting, as sketched below.
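A minimal sketch of such a loop (the base folder /var/www/owncloud and the user names alice, bob, and carol are placeholders; adjust them and the www-data account for your setup):
cd /var/www/owncloud
for u in alice bob carol; do
    sudo -u www-data php occ files:scan "$u"
done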
Here is a documentation of the utility: https://doc.owncloud.org/server/8.0/admin_manual/configuration_server/occ_command.html
I just tried it myself on an ownCloud 8.2 installation and succeeded.
Before I could successfully scan my files again as arkascha explained, I needed to change the owner and the group of the new folder to www-data (for Debian; for other systems see the OC-Docu link below) and set the rights of the new directory to 755.
Change owner:
sudo chown -R www-data:www-data <path>
Change rights:
sudo chmod 755 <path>
where <path> is the path to the newly added directory and could, for example, look like this: /media/hdd/owncloud/data/<username>/files/<newFolderName>
OC-Docu:
https://doc.owncloud.org/server/9.0/admin_manual/configuration_server/occ_command.html

How do I move the MySQL data directory to an external network drive in Mac OS X 10.9.4

I am new to the database world and I ran into some problems....
My hard disk on my Mac says I have less than 8 GB of free space left. For this reason, I would like to move my MySQL data directory to an external network drive called ls-xld4c.
I have been trying to follow the instructions at http://mailsteward.com/nickstek/?p=22
As noted in step 3 of the link above:
I copied the /usr/local/mysql/data directory and all of its files and subdirectories to the
new location at /Volumes/share/MYSQL
So here is what I typed in my terminal:
cd /Volumes/share/MYSQL
cp -R /usr/local/mysql/data
which returns the following (I do not know what this means):
usage: cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file target_file
cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file ... target_directory
Here is some info that might be handy:
1) Server version: 5.6.17 MySQL Community Server (GPL)
2) Where my external drive is located: /Volumes/share
- The network drive is called ls-xld4c and is 1 TB in size (I don't know if that is relevant).
The specific folder I want to put the directory in reports that it is found at Server: smb://ls-xld4c/share/MYSQL; however, /Volumes/share/MYSQL shows that it is a valid directory.
3) I do not have a password and the user is root
You have almost done it. The error is flagged because you have not specified the destination directory, which should be your current working directory. Please use the cp command as:
cp -R /usr/local/mysql/data .
The ending dot means current directory which you have already set by using:
cd /Volumes/share/MYSQL
By the way, the following steps are required:
Stop MySQL service.
Copy the data files from the directory specified in "my.cnf" or "my.ini" (in the case of Windows).
Paste data to destination dir.
Change "my.cnf" or "my.inf" such as the "datadir" entry specifies the destination path.
Restart MySQL.
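A sketch of what that datadir change might look like (assuming the destination from the question, /Volumes/share/MYSQL/data, and a my.cnf with a [mysqld] section; the exact location of my.cnf varies by installation):
[mysqld]
# point MySQL at the copied data directory
datadir = /Volumes/share/MYSQL/data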
1. Stop MySQL
sudo /etc/init.d/mysql stop
2. Change Data Directory
sudo cp -R -p /var/lib/mysql /newlocation
3. Edit MySQL default configuration file
sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf
change 'datadir' to /newlocation
sudo vim /etc/apparmor.d/usr.sbin.mysqld
change the two '/var/lib/mysql' entries to /newlocation (see the sketch below)
4. Start MySQL
sudo /etc/init.d/mysql restart
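A sketch of the AppArmor change from step 3 (assuming a stock Ubuntu profile, where the two entries typically look like this once /var/lib/mysql has been replaced by the new location):
# in /etc/apparmor.d/usr.sbin.mysqld
  /newlocation/ r,
  /newlocation/** rwk,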
On macOS Big Sur, if MySQL was installed with the MySQL installer:
Go to System Preferences > MySQL > click on Stop MySQL Server
In the Configuration tab, you can see the current Data Directory
Copy data folder to your destination directory
Change "Data Directory" address to your destination address > then Apply
Go to System Preferences > Security & Privacy > Privacy > Full Disk Access and make sure "mysqld" is checked here
Go to System Preferences > MySQL > click on Start MySQL Server
If you do not do step 5, the service won't start back up.
Hope it helps those with permission issues.

hg archive to Remote Directory

Is there any way to archive a Mercurial repository to a remote directory over SSH? For example, it would be nice if one could do the following:
hg archive ssh://user@example.com/path/to/archive
However, that does not appear to work. It instead creates a directory called ssh: in the current directory.
I made the following quick-and-dirty script that emulates the desired behavior by creating a temporary ZIP archive, copying it over SSH, and unzipping it into the destination directory. However, I would like to know if there is a better way.
if [[ $# != 1 ]]; then
echo "Usage: $0 [user#]hostname:remote_dir"
exit
fi
arg=$1
arg=${arg%/} # remove trailing slash
host=${arg%%:*}
remote_dir=${arg##*:}
# zip named to match lowest directory in $remote_dir
zip=${remote_dir##*/}.zip
# root of archive will match zip name
hg archive -t zip $zip
# make $remote_dir if it doesn't exist
ssh $host mkdir --parents $remote_dir
# copy zip over ssh into destination
scp $zip $host:$remote_dir
# unzip into containing directory (will prompt for overwrite)
ssh $host unzip $remote_dir/$zip -d $remote_dir/..
# clean up zips
ssh $host rm $remote_dir/$zip
rm $zip
Edit: clone-and-push would be ideal, but unfortunately the remote server does not have Mercurial installed.
Nope, this is not possible -- we always assume that there is a functioning Mercurial installation on the remote host.
I definitely agree with you that this functionality would be nice, but I think it would have to be made in an extension. Mercurial is not a general SCP/FTP/rsync file-copying program, so don't expect to see this functionality in the core.
This reminds me... perhaps you can build on the FTP extension to make it do what you want. Good luck! :-)
Have you considered simply having a clone on the remote and doing hg push to archive?
Could you use an SSH tunnel to mount a remote directory on your local machine and then just do standard hg clone and hg push operations 'locally' (as far as hg knows), but where they actually write to a filesystem on the remote computer?
It looks like there are several Stack Overflow questions about doing this:
How do I mount a remote Linux folder in Windows through SSH?
Map SSH drive in Windows
How can I mount a remote directory on my computer?
I am often in a similar situation. The way I get around it is with sshfs.
sshfs me@somewhere-else:path/to/repo local/path/to/somewhere-else
hg archive local/path/to/somewhere-else
fusermount -u local/path/to/somewhere-else
The only disadvantage is that sshfs is slower than NFS, Samba, or rsync. Generally I don't notice, as I only rarely need to do anything in the remote filesystem.
You could also simply execute hg on the remote host:
ssh user@example.com "cd /path/to/repo; hg archive -r 123 /path/to/archive"
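If the remote host has no Mercurial installed (as noted in the edit above), another sketch is to stream the archive from the local repository and unpack it on the remote side with tar. This assumes hg archive accepts '-' as the destination to mean stdout and that GNU tar is available remotely:
hg archive -t tar -p export - | \
    ssh user@example.com "mkdir -p /path/to/archive && tar -xf - --strip-components=1 -C /path/to/archive"
The -p export prefix just gives the files a known top-level directory inside the tar stream, which --strip-components=1 then removes on the remote side.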