Now I am trying to install Cosmos, but I encountered an error. Following https://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/BigData_Analysis_-_Installation_and_Administration_Guide, in step 3 I created 3 files, 2 .pem and 1 .cer. Step 4 says:
mv <ca_cert>.pem /etc/pki/tls/certs
cd /etc/pki/tls/certs
Which .pem do I have to move?
The <ca_cert>.pem in step 4 refers to the CA certificate.
I assume that you created a self-signed CA certificate, so it should be the certnew.cer file.
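If certnew.cer turns out to be DER-encoded (a common default for .cer files), a minimal hedged sketch for converting it to PEM before the move; the output name ca_cert.pem is just an illustration:
# Convert a DER-encoded certificate to PEM (file names are illustrative)
openssl x509 -inform der -in certnew.cer -out ca_cert.pem
# If certnew.cer is already PEM-encoded, a simple copy/rename is enough:
# cp certnew.cer ca_cert.pem
mv ca_cert.pem /etc/pki/tls/certs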
I get a Permission denied error every time I try to write a file to the /var/lib/mysql-files directory over HTTP. If I restart Apache and/or MySQL I can write to that directory with no errors, but only once: if I try to write a second file I get the error again and have to restart Apache again, and so on.
This is the ownership and the permissions that I gave that directory:
groupadd mysql_apache
usermod -a -G mysql_apache mysql
usermod -a -G mysql_apache apache
chown -R :mysql_apache /var/lib/mysql-files
chmod -R 770 /var/lib/mysql-files
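For reference, a quick way to verify that the group and mode actually applied (note that a running service only picks up new supplementary-group membership after it is restarted):
id mysql                      # should list mysql_apache among the groups
id apache
ls -ld /var/lib/mysql-files   # should show group mysql_apache and mode 770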
I want to give read, write, and execute rights on this directory (/var/lib/mysql-files) only to MySQL and Apache. What did I do wrong here?
PS: This is on CentOS 8.
We moved to Ubuntu 20.04 since the CentOS 8 EOL was moved up to 2021, and I didn't have this problem on Ubuntu.
I've been trying to get gcloud to a usable state on Travis and I just can't seem to get past the gcloud auth activate-service-account point.
Whenever it runs I just get the following error:
ERROR: (gcloud.auth.activate-service-account) PyOpenSSL is not available.
See https://developers.google.com/cloud/sdk/crypto for details.
I've tried apt-get and pip installs, both with export CLOUDSDK_PYTHON_SITEPACKAGES=1 set, and nothing seems to work.
Does anyone have any ideas or alternatives?
This is on Travis's Ubuntu 14.04 image.
Update
If I run the command from the docs on travis I get the following error:
usage: gcloud auth activate-service-account ACCOUNT --key-file KEY_FILE [optional flags]
ERROR: (gcloud.auth.activate-service-account) too few arguments
This made me think I had to have an ACCOUNT parameter, but after running the command locally with the un-encrypted service account key, I know it's not needed (unless something has changed).
The only other thing I can think of is that the file isn't being decrypted correctly, or the command itself isn't happy in Travis:
- gcloud auth activate-service-account --key-file client-secret.json
Update 2
Just dumped a load of logs to figure out what is going on. (Massive shout out to #Vilas for his help.)
It looks like gcloud is installed on the VM for node already, but it's a super old version.
$ which gcloud
/usr/bin/gcloud
$ gcloud --version
Google Cloud SDK 0.9.37
bq 2.0.18
bq-nix 2.0.18
compute 2014.11.25
core 2014.11.25
core-nix 2014.11.25
dns 2014.11.25
gcutil 1.16.5
gcutil-nix 1.16.5
gsutil 4.6
gsutil-nix 4.6
sql 2014.11.25
The next question is: how can I get the PATH to find the right gcloud?
I've confirmed that the downloaded SDK installs to ${HOME}/google-cloud-sdk/bin by running this command.
$ ls -l ${HOME}/google-cloud-sdk/bin
total 24
drwxr-xr-x 2 travis travis 4096 Apr 27 21:44 bootstrapping
-rwxr-xr-x 1 travis travis 3107 Mar 28 14:53 bq
-rwxr-xr-x 1 travis travis 912 Apr 21 18:56 dev_appserver.py
-rwxr-xr-x 1 travis travis 3097 Mar 28 14:53 gcloud
-rwxr-xr-x 1 travis travis 3144 Mar 28 14:53 git-credential-gcloud.sh
-rwxr-xr-x 1 travis travis 3143 Mar 28 14:53 gsutil
I finally got a solution for it. Essentially, Travis has a super old version of the gcloud SDK installed that was taking precedence over the downloaded SDK.
Steps to Help Diagnose
In your .travis.yml file add:
env:
  global:
    # Ensure the downloaded SDK is first on the PATH
    - PATH=${HOME}/google-cloud-sdk/bin:$PATH
    # Ensure the install happens without prompts
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
Then in your install step add the following:
install:
  # Make sure the SDK is downloaded - cache once it's working
  # NOTE: Not sure how to update the SDK if it's cached
  - curl https://sdk.cloud.google.com | bash;
  # List the SDK contents to ensure it's downloaded
  - ls -l ${HOME}/google-cloud-sdk/bin
  # Ensure the correct gcloud is being used
  - which gcloud
  # Print the gcloud version and make sure it's something
  # reasonably up to date compared with:
  # https://cloud.google.com/sdk/downloads#versioned
  - gcloud --version
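With the PATH fixed, the auth step from the question should then resolve to the freshly downloaded SDK. A minimal sketch, assuming client-secret.json is the decrypted key file from the question:
script:
  # Should now print ${HOME}/google-cloud-sdk/bin/gcloud
  - which gcloud
  - gcloud auth activate-service-account --key-file client-secret.json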
Can I use openssl s_client to retrieve the CA certificate for MySQL?
I have access to the remote database server using the following
mysql -u theuser -h thehost --ssl --ssl-cipher=DHE-RSA-AES256-SHA -p thedatabase
Now I want to do to connect to it using JDBC.
I realize that I need to insert the public certificate into my Java key store. However, I cannot figure out how to retrieve the public certificate. I realize it sits on the remote server in /etc/mysql/ca.pem or a similar place. But, I don't have permission to read that file or even ssh into the machine.
I've tried
openssl s_client -cipher DHE-RSA-AES256-SHA -connect thehost:3306
and some variations. I always get errors. For example:
CONNECTED(00000003)
30495:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:/BuildRoot/
Library/Caches/com.apple.xbs/Sources/OpenSSL098/OpenSSL098-59/src/ssl/s23_clnt.c:618:
Can I use openssl s_client to retrieve the CA certificate for MySQL?
You probably can't.
A well-configured server will send the server certificate and all intermediate certificates required to build a path to the root CA. You have to have the root CA certificate already.
For example:
$ openssl s_client -connect www.cryptopp.com:443 -tls1 -servername www.cryptopp.com
CONNECTED(00000003)
depth=2 C = GB, ST = Greater Manchester, L = Salford, O = COMODO CA Limited, CN = COMODO RSA Certification Authority
verify error:num=20:unable to get local issuer certificate
---
Certificate chain
0 s:/OU=Domain Control Validated/OU=COMODO SSL Unified Communications
i:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Domain Validation Secure Server CA
1 s:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Domain Validation Secure Server CA
i:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Certification Authority
2 s:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Certification Authority
i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
---
...
The server sent the server's certificate. It's shown above as 0 s:/OU=Domain Control Validated/OU=COMODO SSL Unified Communications. The s means it is the Subject, while the i means it is the Issuer.
The server sent two intermediate certificates at 1 and 2. However, we need to have the Issuer of certificate 2 locally to build the path for validation. The Issuer of certificate 2 goes by the Common Name "AddTrust External CA Root".
"AddTrust External CA Root" can be downloaded from Comodo's site at [Root] AddTrust External CA Root
If the server sent the root CA, then a bad guy could tamper with the chain and a client would be none the wiser. They could swap in their own CA and use an evil chain.
We can clear the verify error:num=20:unable to get local issuer certificate by fetching the root CA, and then using -CAfile:
$ openssl s_client -connect www.cryptopp.com:443 -tls1 -servername www.cryptopp.com \
-CAfile addtrustexternalcaroot.pem
It will result in a Verify Ok (0).
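As an aside, the certificates the server does send (leaf plus intermediates) can be captured with -showcerts, which prints each one in PEM form. A hedged sketch, reusing the host from above:
# Dump the presented chain (server + intermediates) as PEM blocks
openssl s_client -connect www.cryptopp.com:443 -tls1 \
  -servername www.cryptopp.com -showcerts < /dev/null > chain.pem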
Yes, OpenSSL version 1.1.1 (released on 11 Sep 2018) now supports fetching the server certificate from a MySQL server.
openssl s_client -starttls mysql -connect thehost:3306
Source: answer by Paul Tobias
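Since the goal in the question was getting the certificate into a Java key store for JDBC, a hedged follow-up sketch (the host, alias, file, and password names are illustrative):
# Extract the certificate the MySQL server presents (OpenSSL >= 1.1.1)
openssl s_client -starttls mysql -connect thehost:3306 </dev/null 2>/dev/null \
  | openssl x509 -outform pem > mysql-server.pem
# Import it into a Java truststore for the JDBC connection
keytool -importcert -noprompt -alias mysql-server -file mysql-server.pem \
  -keystore truststore.jks -storepass changeit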
I'm running Django on DigitalOcean with Gunicorn and nginx: Gunicorn for serving Django and nginx for static files.
Upon uploading a file via the website, I can't save it to a folder in the /home directory. I get [Errno 13] Permission denied.
How do I give the web server read/write access to any arbitrary folder anywhere under /home?
This all depends on the user that your application is running as.
If you check with ps aux | grep gunicorn which user the Gunicorn server is running your app as, then you can change the chmod or chown permissions accordingly.
ls -lash will show you which user currently owns the folder and what permissions are on the folder you are trying to write to:
4.0K drwxrwx--- 4 username username 4.0K Dec 9 14:11 uploads
You can then use this to check for any issues.
Some docs on changing ownership and permissions:
http://linux.die.net/man/1/chmod
http://linux.die.net/man/1/chown
I would advise being very careful to what locations on your disk you give access for the web server to read/write from. This can have massive security implications.
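For example, if ps showed Gunicorn running as www-data, a hedged sketch of the ownership change (the upload path here is purely illustrative):
# Hypothetical example: give the Gunicorn user ownership of the upload dir
sudo chown -R www-data:www-data /home/youruser/myapp/uploads
sudo chmod -R u+rwX /home/youruser/myapp/uploads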
Well, I worked on this issue for more than a week and finally was able to figure it out.
Please follow the guides from DigitalOcean, but note that they do not pinpoint some important issues, including:
no live upstreams while connecting to upstream
*4 connect() to unix:/myproject.sock failed (13: Permission denied) while connecting to upstream
gunicorn OSError: [Errno 1] Operation not permitted
*1 connect() to unix:/tmp/myproject.sock failed (2: No such file or directory)
etc.
These issues are basically permission issues in the connection between nginx and Gunicorn.
To make things simple, I recommend giving the same nginx permissions to every file/project/Python program you create.
To solve all the issues, follow this approach:
First, log in to the system as the root user.
Create /home/nginx directory.
After doing this, follow the website as-is until "Create an Upstart Script".
Run chown -R nginx:nginx /home/nginx
For the upstart script, make the following change in the last line:
exec gunicorn --workers 3 --bind unix:myproject.sock -u nginx -g nginx wsgi
DON'T ADD the -m option, as it messes up the socket. Per the Gunicorn documentation, when -m is left at its default, Python will figure out the best permission.
Start the upstart script
Now just go to the /etc/nginx/nginx.conf file.
Go to the server block and append:
location / {
    include proxy_params;
    proxy_pass http://unix:/home/nginx/myproject.sock;
}
Do not follow the DigitalOcean article from here on.
Now restart the nginx server and you are good to go.
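A quick hedged sanity check after the restart (socket path as in the steps above):
sudo nginx -t                       # validate the edited config first
sudo service nginx restart
ls -l /home/nginx/myproject.sock    # should be owned by nginx:nginx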
Change the owner of /home
See the actual owner:
$ ls -l /
f1 f2  f3  f4  f5 f6   f7   f8  f9           f10
-  rwx r-x r-x 1  root root 209 Mar 30 17:41 /home
https://www.garron.me/en/go2linux/ls-file-permissions.html
f2 Owner permissions over the file or directory
f3 Group permissions over the file or directory
f4 Everybody else permissions over the file or directory
f6 The user that owns the file or directory
Change the folder owner recursively: sudo chown -R ubuntu /home/ (substitute ubuntu with a non-root user).
Good practices
Use a subdirectory such as /home/ubuntu as the server directory; the ubuntu folder has the ubuntu user as its owner.
Give the user-owner full permissions, and the group and other users read-only access: sudo chmod -R 744 /home/ubuntu/
I changed the ownership of the directory containing my images:
chown -R www-data: /myproject/media/mainsite/images
Change the path accordingly and also restart the server. In my case it's apache2, so:
sudo service apache2 restart
In my case it was something very simple that was generating a similar error: I just had to check the user that ran Gunicorn and the user that ran nginx; they had different permissions.
I am new to the database world and I ran into some problems...
My hard disk on my Mac says I have less than 8 GB of free space left. For this reason, I would like to move my MySQL data directory to an external network drive called ls-xld4c.
I have been trying to follow the steps to do so via http://mailsteward.com/nickstek/?p=22
As noted in step 3 from the link above:
I copied the /usr/local/mysql/data directory and all of its files and subdirectories to the
new location at /Volumes/share/MYSQL
So here is what I typed in my terminal:
cd /Volumes/share/MYSQL
cp -R /usr/local/mysql/data
which returns the following (I do not know what this means):
usage: cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file target_file
cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file ... target_directory
Here is some info that might be handy:
1) Server version: 5.6.17 MySQL Community Server (GPL)
2) Where my external drive is located: /Volumes/share
- The network drive is called ls-xld4c and is 1 TB in size (I don't know if that is relevant).
The specific folder I want to put the directory in reports that it is found at
Server: smb://ls-xld4c/share/MYSQL; however, /Volumes/share/MYSQL shows that it is a valid directory
3) I do not have a password and the user is root
You have almost done it. The usage message is shown because you have not specified the destination directory, which should be your current working directory. Please use the cp command as:
cp -R /usr/local/mysql/data .
The trailing dot means the current directory, which you have already set by using:
cd /Volumes/share/MYSQL
By the way, the following steps are required:
Stop MySQL service.
Copy the data files from the directory specified in "my.cnf" or "my.ini" (in the case of Windows).
Paste data to destination dir.
Change "my.cnf" or "my.inf" such as the "datadir" entry specifies the destination path.
Restart MySQL.
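For the datadir step, the entry would look something like this; a minimal sketch, with the destination path assumed from the question:
# my.cnf (the file's location varies by platform)
[mysqld]
datadir=/Volumes/share/MYSQL/data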
1. Stop MySQL
sudo /etc/init.d/mysql stop
2. Change Data Directory
sudo cp -R -p /var/lib/mysql /newlocation
3. Edit MySQL default configuration file
sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf
change 'datadir' to /newlocation
sudo vim /etc/apparmor.d/usr.sbin.mysqld
change the two '/var/lib/mysql' entries to /newlocation (see the sketch after these steps)
4. Start MySQL
sudo /etc/init.d/mysql restart
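For step 3, the AppArmor entries in the stock profile typically look like the lines below; a hedged sketch assuming a default Ubuntu profile (the exact lines may differ by version):
# /etc/apparmor.d/usr.sbin.mysqld -- before:
#   /var/lib/mysql/ r,
#   /var/lib/mysql/** rwk,
# after:
  /newlocation/ r,
  /newlocation/** rwk,
Then reload the profile, e.g. with sudo service apparmor reload.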
On macOS Big Sur, with MySQL installed via the MySQL installer:
1. Go to System Preferences > MySQL > click on Stop MySQL Server
2. In the Configuration tab, you can see the current Data Directory
3. Copy the data folder to your destination directory
4. Change the "Data Directory" address to your destination address > then Apply
5. Go to System Preferences > Security & Privacy > Privacy > Full Disk Access and make sure "mysqld" is checked here
6. Go to System Preferences > MySQL > click on Start MySQL Server
If you do not do step 5, the service won't start back up.
Hope it helps those with permission issues.