Solaris 11 ACL issue when copying into a folder with ACLs set

I'm having an issue with ACL inheritance. I have created a folder /test1 and set ACLs on it. When I copy a file or folder, for example /tmp/test.txt, to /test1, the new file ends up with both the old permissions and the new ACL permissions.
How can I just make it have the new permissions?
I have set aclmode=passthrough and aclinherit=passthrough on the ZFS datasets.
Any ideas?

Related

chmod configuration in elastic beanstalk

I am attempting to restrict access to a few backend folders in an elastic beanstalk environment, but I cannot figure out how to set chmod permissions in an existing environment.
I am aware that there may be a way to do this through the .ebextensions file, but I have been unable to find a resource outlining this process.
How do I restrict folder access to folders and files in my elastic beanstalk environment?
Thank you!
There is a setting you can use in the .ebextensions file called "files". I haven't tested this with folders, though, and I'm not sure whether it can change permissions on files and folders that already exist.
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-files
You could just add a command that does it though.
commands:
  01_set_file_permissions:
    command: "chmod 600 /path/to/file"

yii2 - All Files and Folders permissions are messed up. What should be the permissions of yii2 framework's directory hierarchy

I moved a complete Yii2 installation from one server to another with FileZilla. Sadly, FileZilla doesn't preserve file permissions by default, and now I'm facing issues with file/directory permissions. I would like to know the correct permissions for the different directories and files in the Yii2 directory hierarchy.
You should not transfer the project this way.
These days version control (especially Git) and Composer are the standard.
Once you have created your project locally and put it under version control, you push it to your main repository and then deploy it to the production server.
There is no need to use FileZilla or anything like that.
If your host limits you in that respect, it's better to switch to another one.
In your current situation, comparing and setting permissions manually can be very tedious; some of the permissions are set during the init command.
So I recommend deploying it again using version control and Composer instead of struggling with setting permissions manually.
But just in case, I checked a production server: most of the folder permissions are 0755, and files are 0644. Folders like runtime and assets have 0777 permissions, set by the init command as mentioned above.
Locally I use Vagrant, and pretty much everything there has 0777 permissions.
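Those defaults can be reapplied in bulk with find. A sketch, run here against a scratch directory standing in for the project root (the real path and the exact runtime/assets locations are assumptions — adjust to your install):

```shell
# Scratch tree standing in for a Yii2 project root (path is an assumption)
APP=$(mktemp -d)
mkdir -p "$APP/runtime" "$APP/web/assets"
touch "$APP/index.php" "$APP/yii"

find "$APP" -type d -exec chmod 0755 {} +        # directories: 0755
find "$APP" -type f -exec chmod 0644 {} +        # files: 0644
chmod -R 0777 "$APP/runtime" "$APP/web/assets"   # writable dirs, as init would set them
```

On a real server you would point the commands at your web root instead of the scratch directory.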

How to update OJS

Would you please let me know how to update OJS (Open Journal Systems)? I have installed it on a shared server and have no shell access; only a web interface and control panel (DirectAdmin) are available. I think there must be an update button somewhere online, but I could not find it.
Thanks
Download and decompress the package from the OJS web site.
Make a copy of the config.inc.php provided in the new package.
Move or copy the following files and directories from your current OJS installation:
config.inc.php
public/
Your uploaded files directory ("files_dir" in config.inc.php), if it resides within your OJS directory
Replace the current OJS directory with the new OJS directory, moving the old one to a safe location as a backup.
Be sure to review the Configuration Changes section of the release notes in docs/release-notes/README-(version) for all versions between your original version and the new version. You may need to manually add new items to your config.inc.php file.
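The steps above can be sketched as shell commands. This demo uses scratch directories standing in for the old install and the freshly unpacked package (all names and paths are assumptions):

```shell
ROOT=$(mktemp -d) && cd "$ROOT"       # scratch area standing in for the web root
mkdir -p ojs/public ojs-new           # ojs = current install, ojs-new = unpacked package
echo 'site config' > ojs/config.inc.php

cp ojs/config.inc.php ojs-new/        # carry over config.inc.php
cp -r ojs/public ojs-new/             # carry over public/
mv ojs ojs-backup && mv ojs-new ojs   # swap directories, keeping the old one as a backup
```

If you only have FTP/control-panel access, the same moves can be done with your file manager instead.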
The easiest thing would be to make a new folder on your shared host with the latest version. Copy over the config.inc.php, cache, and public folders. If your files directory is within your OJS folder as well, copy it too (though you should move it outside the web-accessible location).
Then you'll find an option to upgrade the database in the Admin pages.

TortoiseHg 'No space left on device' error while pushing

We are using TortoiseHg as our Mercurial client UI. Today we ran into an issue while trying to push from one particular workstation. It was receiving the following error:
abort: No space left on device
[command returned code 255 ..........]
This error occurs while TortoiseHg/Mercurial is bundling files in preparation for pushing to the repository. I did some testing and noticed that the workstation's C: drive was gradually filling up as the files were being bundled. The C: drive went from ~900MB free to ~100MB, and then the error message appeared. So this is obviously the cause.
My question is this:
Does anyone know which default directory is used to store the temp files created while TortoiseHg/Mercurial bundles files in preparation for a push? It seems to be independent of the drive TortoiseHg is installed on; I reinstalled to a data drive with plenty of space and it still used C: for its temp files.
Is there a way to configure TortoiseHg/Mercurial to use a temp directory of your choice?
Thanks in advance for any help!
Mercurial is written in Python, and Python has good platform-specific defaults for temporary file locations. They're easily overridden if you want something other than the defaults, which on Windows are probably C:\temp.
http://docs.python.org/library/tempfile.html#tempfile.tempdir says it's:
The directory named by the TMPDIR environment variable.
The directory named by the TEMP environment variable.
The directory named by the TMP environment variable.
A platform-specific location:
On RiscOS, the directory named by the Wimp$ScrapDir environment variable.
On Windows, the directories C:\TEMP, C:\TMP, \TEMP, and \TMP, in that order.
On all other platforms, the directories /tmp, /var/tmp, and /usr/tmp, in that order.
As a last resort, the current working directory.
So if you've got software using Mercurial on a client computer, set the environment variable to some place you know has space.
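The lookup order can be demonstrated directly. This sketch points the variables at a scratch directory and asks Python where temp files would land (having python3 on the PATH is an assumption):

```shell
BIG=$(mktemp -d)    # stand-in for a drive with plenty of free space
# With TMPDIR/TEMP/TMP all pointing at $BIG, tempfile resolves there
RESOLVED=$(TMPDIR="$BIG" TEMP="$BIG" TMP="$BIG" python3 -c 'import tempfile; print(tempfile.gettempdir())')
echo "$RESOLVED"
```

On Windows you would set the same variables in System Properties (or with `set` in the console that launches TortoiseHg) before pushing.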
Mercurial always stores internal files inside the ".hg" folder in the local repository folder.
Maybe TortoiseHg has an additional temp folder... I don't know. Anyway, you should try pushing using the Mercurial command-line client:
hg push
More information about the command-line client can be found in Mercurial: The Definitive Guide.
Another temporary workaround might be to move these files to another drive with more space via a file-system symlink.
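On an NTFS drive the symlink idea is usually done with a directory junction (mklink /J); the POSIX version of the trick, shown here with scratch paths standing in for the repository and the bigger drive, looks like:

```shell
REPO=$(mktemp -d)   # stand-in for the local repository
BIG=$(mktemp -d)    # stand-in for the drive with free space
mkdir "$REPO/.hg"
mv "$REPO/.hg" "$BIG/hg-data"        # relocate the folder...
ln -s "$BIG/hg-data" "$REPO/.hg"     # ...and leave a link behind in its place
```

Tools that open $REPO/.hg follow the link transparently, so the space is consumed on the other drive.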

Correct PHP file upload permissions

I have developed a download/upload manager script.
When I upload a file via POST method it is stored in a folder called files, the files folder is within another folder called download-manager.
Now it seems that when I upload via the POST method, chmod 0666 works when I want to rename or delete the file, but the download-manager folder and the files folder need to be chmod 0777 for this to work. Can someone tell me if this is dangerous?
1) I got a deny all in .htaccess so nobody can access the files directory via a browser
2) the upload script is protected by a username and password, which the person who uses the script will obviously change, so only admins can upload, rename, edit, and delete files and the records in the MySQL database.
When a file is uploaded, a record is added to the database with information like file type, file name, and file size, and the unique id (auto-incremented by MySQL) is appended to the process.php URL. process.php fetches the file from the directory without revealing the path or MIME type; it checks that the record and file exist and, if so, forces the download of that file.
Basically the download URL is like www.mydomain.com/process.php?file=57; a check is done to make sure that id exists in the database and that a file exists with the file name stored in the database for that id.
Now all this works fine when uploading the file via a form using the POST method, but I also added a manual upload: people who want to upload a file larger than their web host allows can upload it via an FTP program, for example, and then add the file name and details themselves via a form in the admin area to link the record with the file. The problem is then a permission issue, because if the file is uploaded via FTP or some other way, the PHP script cannot rename or delete it later; the script does not have the correct privileges. From what I gather, the only option is then telling the people who use the script to chmod the file to 0777. I think that will make it work?
But then I have the problem of 0777 also being executable. The script allows any file type to be uploaded, as it's a download/upload manager, but at the same time I am slightly confused by all these permissions and what I should actually be doing. As PHP is limited by the max upload size set by a host, I want to offer manual upload so users can upload the file by another method and assign it to the database record, but then, as stated, I get a problem when the PHP script needs to rename or delete the file.
I have developed the script to detect such problems and notify the user, but I would like the script to do all (or nearly all) of the legwork without having to state in the manual that the admin must chmod the file to 0777 whenever they want the script to rename or delete it. I don't even know whether chmodding the file to 0777 will actually let the PHP script rename and delete it, and security is also a concern.
UPDATED
OK, thanks. So chown the file before chmodding it on upload?
Do I just use chown() on the file and nothing else, and that will make it owned by the server process and make it private? I see you have:
chown apache:apache '/path/to/files' ;
Do I need to add the apache:apache bit?
I did think of a simpler solution: if an admin does a manual upload, tell them they will have to rename/delete the file manually if needed in the future, because the script won't have the correct permissions to do so. The manual-upload script can just rename the DB record to keep it linked to the file, so there are no file-permission worries.
Simply put: the user renames the file manually, via FTP for example, from myfile.zip to somefile.zip, then edits the DB record for that file and changes the file name from the old myfile.zip to somefile.zip. That way everything stays linked, with no permission issues. I have also been reading that chown() does not always work, or cannot be relied on, for whatever reason.
1) i got a deny all in .htaccess so nobody can access the files directory via a browser
Store your files in a separate folder, away from the directory structure that houses your PHP files.
As far as the permissions on the directory are concerned, there are three ways to go about setting up the permissions on the folder:
Make it world-writable (chmod 0777 '/path/to/files/')
This is not recommended, as it has major security implications, especially on a non-dedicated server; anyone who has an account on the server, or can get a process on it to write to that folder, will be able to change its contents.
Make it temporary (chmod 1777 '/path/to/files/')
This also carries a security concern, but less so than option 1, for the following reason: the sticky bit means users cannot remove or rename entries in the directory except the files they own.
Make it owned by the server process and make it private (chown apache:apache '/path/to/files' ; chmod 0700 '/path/to/files')
This is arguably the best solution.
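As commands, option 3 looks like the sketch below. apache:apache stands for whatever user and group your web server actually runs as, and the path is an assumption; since chown needs root, the demo only applies the 0700 mode to a scratch directory:

```shell
FILES=$(mktemp -d)              # stand-in for /path/to/files
# chown apache:apache "$FILES"  # requires root; substitute your web server's user/group
chmod 0700 "$FILES"             # only the owning user can enter, list, or modify it
stat -c '%a' "$FILES"
```

With this setup the PHP process owns the directory, so it can rename and delete uploads without the folder being world-writable.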
Just relax & enjoy.
On many shared hosts it's the only possible solution anyway.
There is another option: ask the user for FTP credentials and use FTP to copy the files from tmp, like WordPress does. But I think that's even less secure.