Trouble using fopen() - Permission denied (using Ubuntu, XAMPP, MySQL, and Geany)

So when I try to use:
fopen("sometext.txt", "w") or die("blahblahbla");
I keep getting the following message:
"failed to open stream: Permission denied". I have looked for other
answers on this site and none of them actually work.
Why is this happening? Can somebody recommend a fix?
Do I have permission to create files in my directory? I get a lot of advice about using chmod or changing the "file access", but nobody ever explains how to actually do that, just "oh, use this or that".

If you have terminal access, just run a command in the file's folder:
sudo chmod 777 sometext.txt (for security reasons, switch to a more restrictive mode later)
If you don't have terminal access, you can modify the file attributes in your FTP client (tick all fields - execute, read, write - for Owner, Group, and Everyone).
I hope this solves your problem.
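If you would rather avoid 777 entirely, here is a minimal sketch of a tighter fix (assuming the script and sometext.txt live in the same folder, and that the web server runs as www-data, which is typical for Ubuntu's own Apache; XAMPP may use a different account such as daemon, so adjust the user name):

cd /path/to/your/script             # placeholder: the folder containing sometext.txt
ls -l sometext.txt                  # check the current owner and permission bits
sudo chown www-data sometext.txt    # let the web server user own the file
sudo chmod 644 sometext.txt         # owner can write, everyone else can only read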

First, make sure you are in the apache group (check it with id username); if you are not, add your user to the apache group (sudo usermod -a -G apache username). Then make sure the directory belongs to the apache group (check it with ls -l directory). I suppose the directory is /var/www/html or /srv/whatever, but XAMPP has its own. If it is not, do a sudo chgrp apache directory. Also, the directory must be writable by group members (chmod g+w directory).
Obviously, the apache user and group must be the ones set in the Apache configuration. If they don't exist, create them (sudo groupadd apache and sudo useradd apache).
P.S.: chmod 777 is evil! It's better to be in the apache group and avoid letting your files be edited by anyone else!
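Putting those steps together, a rough sketch of the whole sequence (the user name alice, the group name apache and the /var/www/html path are all placeholders; substitute your own user and XAMPP's actual document root):

sudo groupadd apache                 # only if the group does not exist yet
sudo usermod -a -G apache alice      # add your user to the apache group
sudo chgrp -R apache /var/www/html   # hand the directory to the apache group
sudo chmod -R g+w /var/www/html      # let group members write there
id alice                             # verify the membership (log out and back in first)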

Related

"Start by creating and opening a systemd socket file for Gunicorn with sudo privileges" (directory to this file does not *appear* to exist)

I am working on a server running Ubuntu 18.04. This DigitalOcean tutorial on Django deployment (https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04) is telling me to do the following:
"We’re now finished configuring our Django application. We can back out of our virtual environment by typing:
(env): deactivate". I am familiar with virtual environments, so I did this. Now for the part I am not at all familiar with:
"Start by creating and opening a systemd socket file for Gunicorn with sudo privileges:
sudo nano /etc/systemd/system/gunicorn.socket
"
First, since I just deactivated my env, I am now at justin@ubuntu-s-1vcpu-1gb-nyc3-01:~$. If I ls, I only see the project folder I created, which holds the virtualenv, the Python project, manage.py and the static directory. Nowhere can I find this
/etc/systemd/system/
directory, and the command they are telling me to use cannot create directories, only files. So I am very confused; any help would be greatly appreciated.
/etc doesn't live inside ~. Try ls /etc to see what's already in that directory. If you need to create that directory, you can do so with sudo mkdir -p /etc/systemd/system/ (the -p flag makes sure that, in case systemd is also not present under /etc, it gets created as well).
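To see this for yourself, a minimal sketch (the gunicorn.socket contents below just mirror the linked tutorial and are only a rough guide; your paths may differ):

ls -ld /etc/systemd/system                      # already exists on a systemd-based Ubuntu
sudo tee /etc/systemd/system/gunicorn.socket > /dev/null <<'EOF'
[Unit]
Description=gunicorn socket

[Socket]
ListenStream=/run/gunicorn.sock

[Install]
WantedBy=sockets.target
EOF

Using sudo nano as the tutorial suggests works just as well; nano simply creates the file if it does not exist yet.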

Permission denied inside /var/www/html when creating a website and its files with the apache2 server

UPDATE: The screenshot is from within Atom, but when I navigate to the directory using the file explorer and right-click, the options to rename or create a new folder are restricted and I cannot click on them.
I just finished setting up the LAMP stack on my fresh Ubuntu 18.04 installation. I have everything working: the default /var/www/html/index.html page from Apache2 is being served on localhost, no port forwarding or any unique domain name; I just want to run this on my network from my computer for now.
If there is a simple way to create multiple websites and easily choose which folder to serve then that's fine, but I want to serve just one website for now.
When I go to my /var/www/html folder and try to edit the index.html file it says permission denied. What do I need to do in order to work inside this directory for the remaining time that I am building the website? I am signed in as the root user on my system.
Also, if I do change permissions to allow me to work in this directory, what does it mean for people trying to access my server if it were available to the public? (Right now it is just on localhost.)
Let me know if you need more info or explanation, thanks!
sudo chown -R $USER:$USER /var/www
This works; it changes the owner to my user instead of the root user. I still don't understand why it was needed, because my user already had sudo rights and all those permissions. It was the user I created during the Ubuntu 18.04 setup, so there shouldn't have been an issue.
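The likely explanation: sudo rights only apply when you actually prefix a command with sudo; ordinary file access is decided by the owner, group and permission bits on the file itself. A quick way to check both sides (a minimal sketch):

ls -ld /var/www/html   # who owns the directory and what its permission bits are
id                     # which user and groups your editor or file manager runs as

If the owner is root and the mode grants no write bit to your user or groups, unprivileged programs like Atom or the file manager get "permission denied" even though sudo works fine in a terminal.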
Permission issues can be fixed at the command line by typing:
sudo chmod 777 /var/www/html -R
One caveat from turnkeyLinux.com:
Changing file permissions is a trade-off (often increasing security reduces user-friendliness and/or usability). For security 'best practice', only the folders that require write access by the webserver should be owned by the webserver. If your webserver has write access everywhere and your server is compromised, it is easier to hack your WordPress install; but for ease of use, giving the webserver ownership should resolve all your issues...
This article on Understanding File Permissions was great, too.
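Following that caveat, a more restrained alternative to a blanket 777 is to give the webserver write access only where it actually needs it. A minimal sketch (the uploads directory is a purely hypothetical example of a folder the webserver must write to; www-data is Ubuntu's default Apache user):

sudo chown -R $USER:www-data /var/www/html              # you own the code, Apache is in the group
sudo find /var/www/html -type d -exec chmod 755 {} \;   # directories: owner rwx, group/others rx
sudo find /var/www/html -type f -exec chmod 644 {} \;   # files: owner rw, group/others r
sudo chown -R www-data:www-data /var/www/html/uploads   # only the writable area belongs to Apache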
This will help you.
sudo chgrp -R www-data /var/www/html
sudo gpasswd -a username www-data
sudo chmod -R 777 /var/www/html
The permission error occurs because the folder's rights are held by a different user (you can inspect this by doing ls -l folderName).
The problem can be handled in different ways; the following are a few:
WAY 1:
Find out who is running apache by running the command apachectl -S
Locate the user name (say www-data)
Change the ownership of your folder with chown -R www-data:www-data /var/www/html (this will allow only your apache to play with the files)
Run the following command: ln -s /var/www/html /home/username/html (this will create a soft link to your folder, where you can edit/delete/read files and the changes will be reflected in apache)
WAY 2:
Go to /var/www/
sudo chown -R www-data:${USER} html
(Now both apache and your logged-in user will have rights to play with the files.)
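Note that with WAY 2 the group (your user) also needs the write bit, which chown alone does not grant. A minimal follow-up sketch (assuming the web root is /var/www/html):

cd /var/www
sudo chown -R www-data:${USER} html   # apache owns the files, your user is the group
sudo chmod -R g+w html                # give the group (you) write access as well
ls -ld html                           # verify the new owner, group and mode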
If you are not root or you don't have permission on something (a folder, a file, ...), know that your actions are limited.
Take a folder as an example:
First of all, verify the permissions of your folder:
ls -ld /path/to/folder
Then give it the permissions it needs, or type the command below to add all permissions:
sudo chmod -R 777 /var/www/html
Verify the permissions of your folder again; if they are correct, try the copy again.
Give read and write rights to the folder (not the user). Try the following commands:
For read and write:
sudo chmod -R a+rw /var/www
For read, write and execute:
sudo chmod -R a+rwx /var/www
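If you only want the execute (traverse) bit on directories rather than on every file, chmod's capital X does exactly that; a small variant of the command above:

sudo chmod -R a+rwX /var/www   # X adds execute only to directories and files that already have it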
Edit the file as root. Or better yet fix your permissions so you don’t have to worry.

Unable to mount a directory on Google Compute Engine using sshfs

I am trying to mount a remote filesystem on Google Compute Engine. I am following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-use-sshfs-to-mount-remote-file-systems-over-ssh
Using following sshfs command:
sudo sshfs -o sshfs_debug,allow_other <instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce-container
I am getting error:
SSHFS version 2.5
read: Connection reset by peer
I referred to this link https://cloud.google.com/sdk/gcloud/reference/compute/config-ssh
and could log in using ssh via the following commands:
$ gcloud compute config-ssh
$ ssh <instance-name>.<region>.<project_id>
Any ideas what might be going wrong here? I can't figure out which keys and username I should use for the sshfs login.
Update (11/5):
I am using the following command:
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I have chowned the /mnt/gce folder for my user. I checked that the IP matches the entry in the ~/.ssh/config file. However, I still get the error read: Connection reset by peer.
The problem with the command below is that:
1) unless you have a static IP, it keeps changing on machine reboot
2) you need to use the .pub file
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I finally got it working with the following commands:
sudo mkdir /mnt/gce
sudo chown <user> /mnt/gce
sshfs -o IdentityFile=~/.ssh/google_compute_engine.pub <user_name>@<instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce
A few things that might be the cause of the problem:
Don't use sshfs as root. It's a FUSE filesystem and meant to be user mounted.
Don't specify a full path as the remote FS. It's SSH, so by default, the $PWD on the remote side is the login user's $HOME.
If ssh works, sshfs will work. The easiest way is to make sure that ~/.ssh/config has an entry for the remote host with the user, port, etc. provided.
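For reference, a hand-written ~/.ssh/config entry might look like the sketch below (every value is a placeholder; gcloud compute config-ssh generates a similar block for you automatically):

cat >> ~/.ssh/config <<'EOF'
Host gce-instance
    HostName 203.0.113.10                        # the instance's external IP
    User your_username                           # the login user on the instance
    IdentityFile ~/.ssh/google_compute_engine    # the key gcloud generated
EOF

With that in place, sshfs can use the alias directly:

sshfs gce-instance:/home/your_username /mnt/gce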
If you get this from sshfs:
read: Connection reset by peer
it may help to make the key file read-only:
chmod 400 /{{path_to_your_key}}/keypair.pem
and connect again.

Solaris 10 sudo configuration Issue

I am using SunOS 5.10 Generic_147441-24 i86pc i386 i86pc
If I run
which sudo
I get the below:
/opt/sfw/bin
When I run "sudo -l" I get the below:
User localuser may run the following commands on this host:
(root) NOPASSWD: /sbin/ifconfig
for "visudo"
visudo
-bash: visudo: command not found
Also, the /etc/sudoers file does not exist on the box.
Please help me configure sudo; how is it possible without the sudoers file?
Perhaps you should have a look at Sun (Oracle) RBAC for accounts, rather than rely on sudo in Solaris? It is unclear from your post why you must use sudo, but if you are not calling sudo from a script, it might be worth your while to read: http://docs.oracle.com/cd/E23824_01/html/821-1456/rbac-1.html
I've never seen the sudo binary exist in /opt, so my first thought would be that your visudo binary is not in your path, or the sudo package you installed does not contain the visudo binary. Either way you may consider downloading the sudo package again and reinstalling.
To see if your visudo binary exists anywhere:
find / -name visudo -print
If you find nothing, remember you do not explicitly need visudo to use sudo -- it's there as a checkpoint for making sure that you do not save and exit a sudoers file that has errors, thus possibly compromising your ability to edit it again or to break sudo for all users on the host.
Also note that /etc/sudoers can start off empty, just fill it in with your sudo rules. For example, to provide sudo all commands on that host for a user without prompting for a password:
userid ALL=(ALL) NOPASSWD: ALL
That particular user ID can run "sudo -l" to list the sudo rules available to it. You could do this even just to test that sudo is in fact working on your host.
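If you do end up creating the sudoers file by hand, note that sudo refuses to run unless the file is owned by root and not writable by group or others. A minimal sketch, run as root (this assumes your sudo build looks for /etc/sudoers; the compiled-in path can differ):

touch /etc/sudoers
chown root:root /etc/sudoers
chmod 0440 /etc/sudoers
# then add your rules, e.g. the NOPASSWD line shown above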
You can easily get the location of the sudoers file from the sudo binary itself by doing this:
cat $(which sudo) | strings | grep /sudoers
Then, you would know what file to modify.

brew link mysql did not complete

For some reason brew does not link mysql and it complains about permissions.
I chmodded the folder to 777 but I am still having the same issues.
laptop$ brew install mysql
Error:
mysql-5.5.27 already installed, it's just not linked
laptop$ brew link mysql
Linking /usr/local/Cellar/mysql/5.5.27... Warning: Could not link mysql.
Unlinking...
Error:
Could not symlink file: /usr/local/Cellar/mysql/5.5.27/lib/plugin
/usr/local/lib is not writable. You should change its permissions.
I figured out what the problem was.
It was a permissions issue, and I basically did this:
sudo chown -R $(whoami) /usr/local/lib/
I believe you should:
sudo chmod 775 /usr/local/lib/
and make sure you are a member of the directory's group.
Not really an answer, but a comment that may help those who are pulling their hair out, chowning and chmodding like crazy and still getting "not writable" errors at linking. For example, from $ brew doctor -d:
Error: /usr/local/lib/pkgconfig isn't writable.
This can happen if you "sudo make install" software that isn't managed
by Homebrew. If a formula tries to write a file to this directory, the
install will fail during the link step.
I suggest you check the linked file and its dependencies and either delete them and reinstall via Homebrew, or install the package without using Homebrew.
On my system this worked perfectly.
chown -R $(whoami) /usr/local/share/
I am trying to give a general answer to the question.
It can happen that neither /usr/local/lib/ nor /usr/local/share/ is the directory giving the error. You should look at the exact directory that is not writable; it is mentioned right after "Error: Could not symlink". Then execute the command for that directory:
chown -R $(whoami) [/path/to/your/dir]
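Putting it together, a minimal sketch of the whole loop (the /usr/local/lib path is only an example; substitute whatever directory your own "Could not symlink" error names):

brew link mysql                          # read the exact directory named in the error
sudo chown -R $(whoami) /usr/local/lib   # example: make the reported directory yours
brew link mysql                          # retry once the directory is writable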