"Start by creating and opening a systemd socket file for Gunicorn with sudo privileges" (directory to this file does not *appear* to exist) - gunicorn

I am working on a server running Ubuntu 18.04. This DigitalOcean tutorial on Django deployment (https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04) tells me to do the following:
"We're now finished configuring our Django application. We can back out of our virtual environment by typing:
(env): deactivate"
I am familiar with virtual environments, so I did this. Now for the part I am not at all familiar with:
"Start by creating and opening a systemd socket file for Gunicorn with sudo privileges:
sudo nano /etc/systemd/system/gunicorn.socket
"
First, since I just deactivated my env, I am now at justin@ubuntu-s-1vcpu-1gb-nyc3-01:~$. If I ls, I only see the project folder I created, which holds the virtualenv, the Python project, manage.py, and the static directory. Nowhere can I find this
/etc/systemd/system/
directory, and the command they are telling me to use cannot create directories, only files. So I am very confused; any help would be greatly appreciated.

/etc doesn't live inside ~; it is a top-level system directory. Try ls /etc to see what's already in it. On Ubuntu 18.04, which runs systemd, /etc/systemd/system should already exist. If you ever did need to create it, you could do so with sudo mkdir -p /etc/systemd/system/ (the -p flag makes sure that, in case systemd is also not present under /etc, it gets created too).
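As a quick sanity check (a sketch, assuming a stock Ubuntu 18.04 server), you can confirm the directory is already there before opening the file:
ls -ld /etc/systemd/system                       # should show a root-owned directory
sudo nano /etc/systemd/system/gunicorn.socket    # nano creates the file on save if it doesn't exist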

Permission denied inside /var/www/html when creating a website and its files with the apache2 server

UPDATE: The screenshot is from within Atom, but when I navigate to the directory using the file explorer and right-click, the options to rename or create a new folder are greyed out and I cannot click on them.
I just finished setting up the LAMP stack on my fresh Ubuntu 18.04 installation. I have everything working; the default /var/www/html/index.html page from Apache2 is being served on localhost. No port forwarding or unique domain name; I just want to run this on my network from my computer for now.
If there is a simple way to create multiple websites and easily choose which folder to serve, then that's fine, but I want to serve just one website for now.
When I go to my /var/www/html folder and try to edit the index.html file, it says permission denied. What do I need to do in order to work inside this directory for the remaining time that I am building the website? I am signed in as the root user on my system.
Also, if I do change permissions to allow me to work in this directory, what does it mean for people trying to access my server if it were available to the public? (Right now it's just on localhost.)
Let me know if you need more info or explanation, thanks!
sudo chown -R $USER:$USER /var/www
This works; it changes the owner to my user instead of the root user. I still don't understand, though, because my user already had sudo rights and all those permissions. It was the user I created during the Ubuntu 18.04 setup, so there shouldn't be an issue.
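For context (my addition, not part of the original comment): sudo only elevates the single command you prefix with it; an editor or file manager still runs as your user, so ownership still matters. You can watch the change happen:
ls -ld /var/www/html             # before: owner root root
sudo chown -R $USER:$USER /var/www
ls -ld /var/www/html             # after: owner is your user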
File permission issues can be fixed at the command line by typing:
sudo chmod 777 /var/www/html -R
One caveat from turnkeyLinux.com:
"Changing file permissions is a trade-off (often increasing security reduces user-friendliness and/or usability). For security best practice, only the folders that require write access by the webserver should be owned by the webserver. If your webserver has write access everywhere and your server is compromised, it makes it easier to hack your WordPress install. But for ease of use, giving the webserver ownership should resolve all your issues..."
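Following that note, a minimal least-privilege sketch (the uploads/ directory is a hypothetical example of the one folder the webserver actually needs to write to):
# web root owned by your own user, world-readable
sudo chown -R $USER:$USER /var/www/html
sudo find /var/www/html -type d -exec chmod 755 {} \;
sudo find /var/www/html -type f -exec chmod 644 {} \;
# webserver gets write access only where it truly needs it
sudo chown -R www-data:www-data /var/www/html/uploads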
This article on Understanding File Permissions was great, too.
This will help you: it adds your user to the www-data group and gives that group control of the web root.
sudo chgrp -R www-data /var/www/html
sudo gpasswd -a username www-data
sudo chmod -R 777 /var/www/html
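Note (an addition to the answer above): group membership is only picked up on a new login session, so verify and re-login before testing (username is a placeholder for your own account):
id username           # www-data should now appear in the groups list
newgrp www-data       # or simply log out and back in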
The permission error occurs because the folder's access rights are reserved for a different user (you can inspect this by running ls -l folderName).
This problem can be handled in different ways; the following are a few:
WAY1:
Find out who is running Apache by running the command apachectl -S
Locate the user name (say www-data)
Change the ownership of your folder with chown -R www-data:www-data /var/www/html (this will allow only your Apache to play with the files)
Run the following command: ln -s /var/www/html /home/username/html (this will create a soft link to your folder, where anything you edit/delete/read will be reflected in your Apache)
WAY2:
Go to /var/www/
sudo chown -R www-data:${USER} html
(Now both Apache and your logged-in user have rights to the files; you may also need sudo chmod -R g+w html so the group can actually write.)
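Either way, a quick check of the outcome (a sketch; paths as above):
ls -ld /var/www/html                                           # inspect owner, group, and the group write bit
sudo -u www-data test -w /var/www/html && echo "apache can write"
test -w /var/www/html && echo "your user can write"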
If you are not root or you don't have permission on some things (folders, files...), know that your actions are limited.
Take a folder as an example:
First of all, verify the permissions of your folder:
ls -ld /path/to/folder
Then give it the permissions it needs, or run the command below to add all permissions:
sudo chmod -R 777 /var/www/html
Verify the permissions of your folder again; if they are correct, try the copy again.
Just give read and write rights to the folder (not the user). Try the following commands.
For read and write:
sudo chmod -R a+rw /var/www
For read, write and execute:
sudo chmod -R a+rwx /var/www
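A small variant worth knowing (my addition, not part of the answer above): a capital X sets the execute bit only on directories (and on files that are already executable), which is usually what you want, since directories need execute for traversal but plain files don't:
sudo chmod -R a+rwX /var/www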
Edit the file as root. Or better yet, fix your permissions so you don't have to worry.

I am using Ubuntu, XAMPP, MySQL, and Geany. Trouble using fopen();

So when I try to use:
fopen("sometext.txt", "w") or die("blahblahbla");
I keep on getting the following message: "failed to open stream: Permission denied". I have looked for other answers on this site and none of them actually work.
Why is this doing this? Can somebody recommend a fix?
Do I have permission to create files in my directory? I get a bunch of advice on using chmod or changing the "file access", but how do you do this? They never explain that, just "oh use this or that".
If you have terminal access, just run a command in the file's folder:
sudo chmod 777 sometext.txt (for security reasons, later use a more restrictive chmod)
If you don't have terminal access, you can modify the file attributes in your FTP client (tick all fields, Execute, Read and Write, for Owner, Group and Everyone).
I hope this solves your problem.
First, make sure you are in the apache group (check it with id username). Then add your user to the apache group (sudo usermod -a -G apache username), and make sure the directory is in the apache group (check it with ls -l directory). I suppose the directory is /var/www/html or /srv/whatever, but XAMPP has its own. If it is not in the group, do a sudo chgrp apache directory. Also, the directory must be writable by group members (chmod g+w directory).
Obviously the Apache configuration must reference the apache user and group. If they don't exist, create them (sudo groupadd apache and sudo useradd apache).
P.S.: chmod 777 is evil! It's better to be in the apache group than to leave your files editable by everyone else!
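Put together, the steps from this answer would look roughly like this (assuming the group is called apache and the web root is /var/www/html; a stock XAMPP install typically serves from /opt/lampp/htdocs instead):
id username                           # already in the apache group?
sudo usermod -a -G apache username    # add the user if not
sudo chgrp -R apache /var/www/html    # put the directory in the apache group
sudo chmod -R g+w /var/www/html       # let group members write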

OwnCloud: How to synchronize the FileSystem with the DB

I have to "insert" a lot of files into an owncloud server (8.2).
A user give me a USB key with the files and tell me to copy of all them into his owncloud data files repository.
Do you know if is it possible ?
Is it possible to synchronyze the ownCloud data fileSystem with the ownCloud database?
My environment is Linux CentOS7 (Apache 2.4, mySQL 5.6, php 5.6)
Thanks,
ownCloud ships a command line utility that allows you to manually trigger some tasks. Among those is the files:scan function, which re-scans a user's file system.
So you can import those files by following these steps:
1. Copy the files into the physical file system of the user(s) inside ownCloud's data folder.
2. Run the command line utility to re-scan the files. That takes care of updating the database according to the files found.
This is an example of the manual trigger:
sudo -u www-data php occ files:scan <user name>
Here <user name> obviously has to be replaced. Also, the account name the sudo command switches to depends on the Linux distribution and its setup. The command has to be started inside ownCloud's base folder. The command can be called in a loop with different user names, which can be done by means of standard scripting.
Here is the documentation of the utility: https://doc.owncloud.org/server/8.0/admin_manual/configuration_server/occ_command.html
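The loop mentioned above can be a plain shell for-loop (user names here are placeholders; files:scan --all also exists if every account should be re-scanned):
cd /var/www/owncloud                  # ownCloud base folder; adjust to your install
for user in alice bob carol; do
    sudo -u www-data php occ files:scan "$user"
done
sudo -u www-data php occ files:scan --all    # alternative: rescan everyone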
I just made a try myself using an ownCloud 8.2 installation and succeeded.
Before I could successfully scan my files again as arkascha explained, I needed to change the owner and the group of the new folder to www-data (for Debian OS; for others, see the OC-Docu link below) and set the rights of the new directory to 755.
Change owner:
sudo chown -R www-data:www-data <path>
Change rights:
sudo chmod 755 <path>
where <path> is the path to the newly added directory and could, for example, look like this: /media/hdd/owncloud/data/<username>/files/<newFolderName>
OC-Docu:
https://doc.owncloud.org/server/9.0/admin_manual/configuration_server/occ_command.html
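Combining both answers, the full sequence for a newly copied folder might look like this (a sketch; the path is the example from above, and www-data applies to Debian/Ubuntu, while on CentOS the Apache account is typically called apache):
NEW=/media/hdd/owncloud/data/username/files/newFolder   # example path
sudo chown -R www-data:www-data "$NEW"
sudo chmod -R 755 "$NEW"
sudo -u www-data php occ files:scan username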

AWS CentOS 6.5 Instance + AWS EBS volume for web hosting files and database?

I have an AWS instance running CentOS 6.5. It has been updated, secured, and setup for web hosting (LAMP). I attached an EBS volume to the instance and mounted it under /data.
Two questions:
How can I get MySQL to use the /data directory as its database storage location? (I don't want to run the program from the /data directory, just put the .sql file there.)
How can I do the same for my web site? I plan on running a WordPress site, and its current location is the /var/www/html directory. I want to change this to /data/site.
I want to keep the web site files and database on a separate volume: /data. If my instance were to become corrupt or inaccessible, I could attach the EBS volume to a new instance.
I have read dozens of tutorials and articles on how to move MySQL to a different directory, but nothing is working; MySQL refuses to start afterwards. Can I keep MySQL installed as-is, but have it read/write the database in a different directory like /data, which is a mounted EBS volume, or is this not possible at all with Linux?
Here are some of the tutorials and articles I been following/testing with:
aws.amazon.com/articles/1663?_encoding=UTF8&jiveRedirect=1
spruce.it/noise/setting-up-a-proper-lamp-stack-on-aws-ec2-ebs/
EDIT:
This is what I am doing.
Create a new instance using this ami: https://aws.amazon.com/marketplace/pp/B00IOYDTV6?ref=cns_srchrow
Once the instance is up, I run updates using: sudo yum update -y
Once updated, I set it up as a LAMP web server using these instructions: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html
In addition to the above steps, I allow port 80 tcp connections on the built-in firewall. I run these commands: sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT and sudo service iptables save
Once this is done, I test my site at http://IP-ADDRESS (this shows me the Apache Test Page)
Once LAMP is installed, I install the MySQL Server by running this: yum install mysql-server
After that is installed, I proceed to the "To secure the MySQL server" instructions on the previous Amazon document.
Next, I install PHPMyAdmin using these two tutorials: http://tecadmin.net/installing-apache-mysql-php-on-centos-redhat/# and http://tecadmin.net/how-to-install-phpmyadmin-on-centos-using-yum/
At this point, I have a fully functioning web server. Now, I want to use the AWS EBS volume to store all the databases and website files. First, I attach the newly created AWS EBS volume. I use this tutorial to do it: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
THIS IS WHERE THE PROBLEMS START.
I use the information in this tutorial: aws.amazon.com/articles/1663?_encoding=UTF8&jiveRedirect=1. When I then try to start MySQL, it just says FAILED.
So one thing you can do is the following, which avoids copying all the directories. You need to make sure that all permissions are set up correctly for it to work:
MySQL data dir:
mv /var/lib/mysql /var/lib/mysql.orig
mkdir -p /<your-new-ebs-mountpoint>/var/lib/mysql
chown mysql.mysql /<your-new-ebs-mountpoint>/var/lib/mysql
chmod 700 /<your-new-ebs-mountpoint>/var/lib/mysql
etc configs:
mkdir -p /<your-new-ebs-mountpoint>/etc
cp /etc/my.cnf /<your-new-ebs-mountpoint>/etc/my.cnf
mv /etc/my.cnf /etc/my.cnf.orig
ln -s /<your-new-ebs-mountpoint>/etc/my.cnf /etc/my.cnf
logs:
mkdir -p /<your-new-ebs-mountpoint>/var/log
mv /var/log/mysqld.log /var/log/mysqld.log.orig
touch /<your-new-ebs-mountpoint>/var/log/mysqld.log
chown mysql.mysql /<your-new-ebs-mountpoint>/var/log/mysqld.log
chmod 640 /<your-new-ebs-mountpoint>/var/log/mysqld.log
ln -s /<your-new-ebs-mountpoint>/var/log/mysqld.log /var/log/mysqld.log
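Two steps this list leaves implicit, and which are common causes of the FAILED start on CentOS (a sketch under those assumptions): the relocated my.cnf must point at the new datadir, and if SELinux is enforcing, the labels have to follow the files:
# in /<your-new-ebs-mountpoint>/etc/my.cnf, under [mysqld], add:
#   datadir=/<your-new-ebs-mountpoint>/var/lib/mysql
sudo semanage fcontext -a -t mysqld_db_t "/<your-new-ebs-mountpoint>/var/lib/mysql(/.*)?"
sudo restorecon -Rv /<your-new-ebs-mountpoint>/var/lib/mysql
sudo service mysqld restart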

Something goes wrong with the SSH while setting up hadoop

I'm a new fish for Hadoop. I installed Ubuntu 12.10 on my computer and I want to install Hadoop in pseudo-distributed mode on one single node. I searched and got lots of tutorials, but I have a problem with the SSH. I did what the tutorial said.
I am sure the problem is with the SSH. I got the openssh-server, and had done this:
hadoop00@WebsoftStation:~$ ssh-keygen -t dsa -P "" -f ~/.ssh/id_dsa
hadoop00@WebsoftStation:~/.ssh$ cat ~/.ssh/id_dsa.pub >> authorized_keys
Then I could successfully ssh to my localhost like this:
hadoop00@WebsoftStation:~$ ssh localhost
It worked.
So I changed into the Hadoop directory and then:
hadoop00@WebsoftStation:/usr/local/hadoop$ sudo bin/start-all.sh
[sudo] password for hadoop00:
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-WebsoftStation.out
root@localhost's password:
root@localhost's password: localhost: Permission denied, please try again.
So, what's the problem?
You have set up password-less ssh only for your current account. Since you can use ssh localhost without any problem, the thing you need to do next is give execution permission to your scripts.
Execute the following commands:
chmod +x bin/*.sh ---> assigns execution permission to all the scripts
./start-all.sh ---> executes the script
Note: Hadoop can also be run without a password-less ssh setup, using the hadoop-daemon.sh script. The only advantage of password-less ssh is that the start-all.sh script will take the trouble of doing that on your behalf on each of the nodes.
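For completeness, the per-daemon alternative mentioned in that note looks like this (daemon names per the Hadoop 1.x layout the question is using):
bin/hadoop-daemon.sh start namenode      # no ssh involved
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start secondarynamenode
bin/hadoop-daemon.sh start jobtracker
bin/hadoop-daemon.sh start tasktracker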
You need to change permissions so your Hadoop folder is owned by the hadoop00 user:
cd /usr/local/
sudo chown -R hadoop00:hadoop00 /usr/local/hadoop
Then you can cd into the sbin folder and run things without sudo. If you use sudo, you're running the scripts as root, which has different environment variables etc., which is why you see a different behavior.
Why are you using sudo? This is clearly a permission problem.
Try running this without sudo:
bin/start-all.sh