Cosmos installation on localhost - fiware

I am trying to install Cosmos on localhost, but I have run into a problem: I don't know how to follow these steps.
First I created the certs and PEM files, but in step 4 ("move the .pem") I don't know which of my two PEM files to move. I also tried verifying both with openssl verify -CApath /etc/pki/tls/certs <file>.pem, but neither works.
Step 3: generating and installing the master node's certificate
The platform requires that a certificate for the master node, signed by a valid CA, be installed, so that it can be presented as proof of authenticity. Thus, this certificate must be created by generating a Certificate Signing Request (CSR); do it once on the master node:
$ openssl req -newkey rsa:2048 -new -keyout newkey.pem -out newreq.pem
The above command will prompt for certain information; the most important details for the Cosmos platform are the name of the server (whichever hostname was chosen for the Cosmos master node) where the certificate is going to be installed, and that the challenge password must be empty. Although the PEM pass phrase must ultimately be empty (otherwise, the httpd server will not start automatically), it has to be filled in at this step and removed later by running:
$ openssl rsa -in newkey.pem -out newkey.pem
Reached this point, you may choose between two options for signing the certificate:
Use a valid CA on the Internet. The content of the generated CSR (the newreq.pem file) must be submitted to the CA in order to retrieve the final certificate, which will typically be called certnew.cer. The way each CA handles the CSR varies from one to another.
Self-signing the certificate. In this case, you have to perform this command:
$ openssl req -new -x509 -key newkey.pem -out certnew.cer
In any case, once the certificate (certnew.cer), key (newkey.pem) and CSR (newreq.pem) have been obtained, rename the files as follows (do it on all the machines):
$ cp newkey.pem [COSMOS_TMP_PATH]/puppet/modules/cosmos/files/environments/<my_environment>/certs/<cosmos-master-node>_key.pem
$ cp certnew.cer [COSMOS_TMP_PATH]/puppet/modules/cosmos/files/environments/<my_environment>/certs/<cosmos-master-node>_cer.pem
$ cp newreq.pem [COSMOS_TMP_PATH]/puppet/modules/cosmos/files/environments/<my_environment>/certs/<cosmos-master-node>_req.pem
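For the self-signed path, Step 3 can be sketched end to end as follows. This is a minimal sketch, not the platform's exact procedure: the hostname cosmos-master and the temporary working directory are placeholders, and -nodes plus -subj are used only to make it non-interactive (skipping the pass phrase has the same effect as removing it afterwards with openssl rsa).

```shell
# Non-interactive sketch of Step 3 (self-signed case).
# "cosmos-master" is a placeholder hostname; -nodes skips the PEM pass
# phrase, equivalent to stripping it later with `openssl rsa`.
workdir=$(mktemp -d)
cd "$workdir"
openssl req -newkey rsa:2048 -nodes -keyout newkey.pem -out newreq.pem \
    -subj "/CN=cosmos-master"
openssl req -new -x509 -key newkey.pem -out certnew.cer -days 365 \
    -subj "/CN=cosmos-master"
openssl x509 -in certnew.cer -noout -subject   # confirm the hostname
```

The three resulting files (newkey.pem, newreq.pem, certnew.cer) are the ones renamed by the cp commands above.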
Step 4: CA's certificate installation
The CA's certificate itself must be installed. Download it from the appropriate link (if you self-signed the master node's certificate, then that certificate is the CA's certificate as well) and do the following on the Cosmos master node:
Copy the CA's certificate (generic name <ca_cert>.pem) to the local certificates store and change directory to it:
$ mv <ca_cert>.pem /etc/pki/tls/certs
$ cd /etc/pki/tls/certs
Create a symbolic link for the CA's certificate. An 8-digit-hash-based file will be created. It is very important that the extension of this file is '.0':
$ ln -s <ca_cert>.pem `openssl x509 -hash -noout -in <ca_cert>.pem`.0
Verify the certificate has been successfully installed:
$ openssl verify -CApath /etc/pki/tls/certs <ca_cert>.pem
xxxxxxxx.0: OK
You should see the 8-digit hash .0 file name followed by "OK".
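The whole of Step 4 can be sketched with a throwaway self-signed certificate standing in for <ca_cert>.pem, and a temporary directory standing in for /etc/pki/tls/certs so no root privileges are needed:

```shell
# Sketch of Step 4 using a temporary store instead of /etc/pki/tls/certs.
store=$(mktemp -d)
# Throwaway self-signed certificate standing in for <ca_cert>.pem:
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$store/ca_key.pem" \
    -out "$store/ca_cert.pem" -subj "/CN=test-ca" -days 1
cd "$store"
# The link name is the certificate's subject hash plus the mandatory '.0':
ln -s ca_cert.pem "$(openssl x509 -hash -noout -in ca_cert.pem).0"
openssl verify -CApath "$store" ca_cert.pem
```

The final verify should report "ca_cert.pem: OK", mirroring the xxxxxxxx.0: OK output shown above.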

Alejandro, Cosmos is an enabler that is strongly recommended to be used through the instance already deployed in FIWARE Lab. Please refer to this link in order to create an account and start working with it.

Related

gpg2: How to use another secret and public keyring?

I know that gpg2 uses gpg-agent to get private keys. By default they are stored in ~/.gnupg/private-keys-v1.d.
Now I'm asking myself whether it's possible to instruct gpg to use another agent on another machine. The documentation is not very helpful, because it does not explain how to connect your gpg to another gpg-agent. But as gpg2 requires you to use gpg-agent, there is no other way to use a new keyring.
My second question is whether it's possible to instruct gpg-agent to use a directory other than the default private-keys-v1.d for looking up keys.
The documentation for gpg2 also contains no option to specify a new public keyring. Is it still available although not mentioned anymore in the docs?
Greetings Sebi2020
If you are connected from machine A (e.g. your PC) to remote machine B over SSH, yes, you can instruct gpg2 on B to use the gpg-agent on A, using GnuPG Agent Forwarding (link to the gnupg manual). This is typically how you use your local gpg keys on remote machines. Make sure you have suitable versions of GnuPG and OpenSSH for that (see the manual).
You may not be able to change only the subfolder name private-keys-v1.d per se, but you can replace the default gpg home directory ~/.gnupg with whatever directory you want to use; the keys then live in <that directory>/private-keys-v1.d. There are two ways of doing that: set the GNUPGHOME environment variable, or use the gpg --homedir option. This is still valid for gpg 2.2.4 at least. E.g., to use gpg keys from a USB drive: gpg --homedir /media/usb1/gnupg ...
The options to specify a new public keyring are --keyring and --primary-keyring (use --no-default-keyring to exclude default keyring completely). Valid for gpg 2.2.4.
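Both forms from the answer above can be sketched together; the throwaway directory here stands in for wherever you actually keep the alternate keyring:

```shell
# Point GnuPG at an alternate home directory (throwaway dir for this demo);
# private keys would then live in $GNUPGHOME/private-keys-v1.d.
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"                 # gpg warns about permissive homedirs
gpg --list-keys                        # operates on the (empty) alternate keyring
# Equivalent per-invocation form, without the environment variable:
gpg --homedir "$GNUPGHOME" --list-keys
```

Either form keeps your default ~/.gnupg untouched.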

ssh AWS, Jupyter Notebook not showing up on web browser

I am trying to use ssh to connect to the AWS "Deep Learning AMI for Amazon Linux", and everything works fine except Jupyter Notebook. This is what I got:
ssh -i ~/.ssh/id_rsa ec2-user@yy.yyy.yyy.yy
gave me
Last login: Wed Oct 4 18:01:23 2017 from 67-207-109-187.static.wiline.com
=============================================================================
__| __|_ )
_| ( / Deep Learning AMI for Amazon Linux
___|\___|___|
The README file for the AMI ➜➜➜➜➜➜➜➜➜➜➜➜➜➜➜➜➜➜➜➜ /home/ec2-user/src/README.md
Tests for deep learning frameworks ➜➜➜➜➜➜➜➜➜➜➜➜ /home/ec2-user/src/bin
=============================================================================
1 package(s) needed for security, out of 3 available
Run "sudo yum update" to apply all updates.
Amazon Linux version 2017.09 is available.
Then
[ec2-user@ip-xxx-xx-xx-xxx ~]$ jupyter notebook
[I 16:32:14.172 NotebookApp] Writing notebook server cookie secret to /home/ec2-user/.local/share/jupyter/runtime/notebook_cookie_secret
[I 16:32:14.306 NotebookApp] Serving notebooks from local directory: /home/ec2-user
[I 16:32:14.306 NotebookApp] 0 active kernels
[I 16:32:14.306 NotebookApp] The Jupyter Notebook is running at: http://localhost:8888/?token=74e2ad76eee284d70213ba333dedae74bf043cce331257e0
[I 16:32:14.306 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 16:32:14.307 NotebookApp] No web browser found: could not locate runnable browser.
[C 16:32:14.307 NotebookApp]
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://localhost:8888/?token=74e2ad76eee284d70213ba333dedae74bf043cce331257e0
Copying http://localhost:8888/?token=74e2ad76eee284d70213ba333dedae74bf043cce331257e0 into a browser gives
"can’t establish a connection to the server at localhost:8888." on Firefox,
"This site can’t be reached localhost refused to connect." on Chrome
Further,
jupyter notebook --ip=yy.yyy.yyy.yy --port=8888 gives
Traceback (most recent call last):
File "/usr/bin/jupyter-notebook", line 11, in <module>
sys.exit(main())
File "/usr/lib/python3.4/dist-packages/jupyter_core/application.py", line 267, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/usr/lib/python3.4/dist-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-7>", line 2, in initialize
File "/usr/lib/python3.4/dist-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/lib/python3.4/dist-packages/notebook/notebookapp.py", line 1296, in initialize
self.init_webapp()
File "/usr/lib/python3.4/dist-packages/notebook/notebookapp.py", line 1120, in init_webapp
self.http_server.listen(port, self.ip)
File "/usr/lib64/python3.4/dist-packages/tornado/tcpserver.py", line 142, in listen
sockets = bind_sockets(port, address=address)
File "/usr/lib64/python3.4/dist-packages/tornado/netutil.py", line 197, in bind_sockets
sock.bind(sockaddr)
OSError: [Errno 99] Cannot assign requested address
Not sure whether this will be helpful (is it only for MXNet? I am not familiar with MXNet): Jupyter_MXNet
localhost will only work when trying to use jupyter (or well, anything) from the machine itself. In this case, it seems you're trying to access it from another machine.
You can do that with the switch --ip=a.b.c.d, where a.b.c.d is the public address of your EC2 instance (or use 0.0.0.0 to make it listen on all interfaces).
You can also use --port=X to define a particular port number to listen to.
Just remember that your security group must allow access from the outside into your choice of IP/Port.
For example:
jupyter notebook --ip=a.b.c.d --port=8888
Well, there are a few things happening here.
ISSUE # 1 - Localhost
As Y. Hernandez said, you are trying to access the URL incorrectly. You should replace localhost with the public IP address of your AWS VM (the same IP you used to ssh in).
ISSUE # 2 - Jupyter needs proper configuration
But even then, this might not run, because Jupyter has not been completely configured out of the box on the Deep Learning AMI for Amazon Linux. You need to complete the following steps (of course, there are multiple other ways of doing the same thing; this is just one such way).
Configure Jupyter Notebook -
$ jupyter notebook --generate-config
Create certificates for our connections in the form of .pem files:
$ mkdir certs
$ cd certs
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem
You’ll get asked some general questions after running that last line. Just fill them out with some general information, or keep pressing enter.
Next we need to finish editing the Jupyter configuration file we created earlier, so change to the .jupyter folder. You can use nano, vi, or your favorite editor.
$ cd ~/.jupyter/
$ nano jupyter_notebook_config.py
Insert this code at the top of the file:
c = get_config()
# Notebook config: this is where you saved your pem cert
c.NotebookApp.certfile = u'/home/ec2-user/certs/mycert.pem'
# Run on all IP addresses of your instance
c.NotebookApp.ip = '*'
# Don't open browser by default
c.NotebookApp.open_browser = False
# Fix port to 8888
c.NotebookApp.port = 8888
Save the file and cd out of the .jupyter folder:
$ cd
You can set up a separate notebook folder for Jupyter, or launch Jupyter from anywhere you choose:
$ jupyter notebook
On your browser, go to this URL using your VM IP address and the token given in your terminal:
http://VM-IPAddress:8888/?token=72385d6d854bb78b9b6e675f171b90afad47b3edcbaa414b
If you get an SSL error, use https instead of http:
https://VM-IPAddress:8888/?token=72385d6d854bb78b9b6e675f171b90afad47b3edcbaa414b
ISSUE # 3 - Won't be able to launch the Python interpreter.
If you intend to run Python 2 or 3, you need to upgrade IPython. If you don't, once you launch Jupyter you will not see an option to launch a Python interpreter; you will only see options for Text, Folder and Terminal.
Upgrade IPython. For this, shut down Jupyter and run this upgrade command:
$ sudo pip install ipython --upgrade
Relaunch Jupyter.
On Amazon DL AMIs this occurs sometimes. Run
jupyter notebook password
to set a notebook password, then run:
sudo jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root
--allow-root is not necessary, but it allows copy/paste as the root user.

Rejected Client-Certificate in Chrome 61

I have an already long-running website secured by self-generated client certificates. It has been working for years without any problems in any browser, like IE, Firefox and Chrome.
Since the last Chrome update (61.0.3163.100), the client certificates are rejected with the following error message:
This site can’t provide a secure connection
my.domain.com didn’t accept your login certificate, or one may not have been provided.
Try contacting the system admin.
ERR_BAD_SSL_CLIENT_AUTH_CERT
And the site continues to work fine with any other browser!
And I cannot find any relevant information out there.
I assume that Chrome just raised the minimal requirements for client certificates, as it did for server certificates a few months ago, but I have no clue how to fix it.
Any hint as to what is wrong with my certificates?
Many thanks.
UPDATE 15DEC2017
I still had problems and did not find any answer out there.
After a while I figured out that Chrome does not like the client certificates generated by openssl ca.
I was generating the certificates like this:
openssl ca -config openssl.cnf -extensions client -batch -in test.req -out test.cer
I tried everything, but I was not able to make it work with Chrome; again, it worked with all other browsers.
Now I am generating the certificates like this:
openssl x509 -req -in test.req -CA ca.cer -CAkey ca.key -extensions client -extfile openssl.cnf -CAserial ca.srl -out test.cer -sha256
And it works. Yet if I compare the output of openssl x509 -in test.cer -noout -text, there is NO difference!! So I am wondering what Chrome does not like about openssl ca.
I would prefer to use openssl ca over openssl x509, since with the latter I cannot use CRLs, and I also prefer startdate/enddate over days.
Any ideas?
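When the -text output of two certificates is identical but a client still treats them differently, comparing the raw DER structure can help: the two openssl commands may encode fields differently in ways the -text view hides. A self-contained sketch, with a throwaway certificate standing in for test.cer:

```shell
# Compare certificates at the DER level rather than via -text.
tmp=$(mktemp -d)
# Throwaway self-signed certificate standing in for test.cer:
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/key.pem" \
    -out "$tmp/test.cer" -subj "/CN=demo" -days 1
# Dump the ASN.1 structure; run this on both certificates and diff the dumps:
openssl asn1parse -in "$tmp/test.cer"
# Or compare byte for byte after normalizing both to DER:
openssl x509 -in "$tmp/test.cer" -outform DER -out "$tmp/test.der"
```

Running asn1parse on the openssl ca output and the openssl x509 -req output, then diffing the two dumps, should localize whatever Chrome is objecting to.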
I found this article about it on this website: https://productforums.google.com/forum/#!topic/chrome/TM0Tg0_YOvg
To solve this issue, try these steps:
1) Reset the browser by clearing out all data in IE's Internet Options.
2) Remove all certificates related to the site you are trying to access. Chrome shares IE's certificate store.
3) Make sure that you still have internet access afterwards; if not, check your proxy settings, if applicable.
4) Try to go to the same site again, and if it prompts for a certificate, insert your smart card or install the certificate.
5) If that does not work, you can remove all certificates from Personal, but be careful removing certificates from Intermediate and elsewhere.
This error means the certificate has a problem on the local machine you are using.
For DoD users, see https://militarycac.com/dodcerts.htm if there are more issues. I am able to access DoD sites using the steps I posted; make sure you have the InstallRoot 3a exe installed as well.

Disabling HTTPS host authentication in TortoiseHG for internal self-signed certificates

How do you disable HTTPS host authentication in TortoiseHG for internal self-signed certificates? For internal servers, HTTPS is primarily used for encryption.
The TortoiseHG documentation says that it is possible to disable host verification (i.e. verification against the Certificate Authority chain) here, but I can't seem to find the option.
It's supposed to be an option when cloning a remote repository. I am using the latest TortoiseHG, 2.0.5.
In the TortoiseHG Workbench, in the Sync tab (or in the Sync screen), if you have a remote path selected, you should see a button with a lock icon on it:
That will bring up the Security window, where you can select the option No host validation, but still encrypted, among other settings. When you turn that on, it adds something like this to your mercurial.ini:
[insecurehosts]
bitbucket.org = 1
That's machine-level config for TortoiseHg, but it doesn't seem to affect the Clone window.
On the command-line, you can use --insecure to skip verifying certificates:
hg clone --insecure https://hostname.org/user/repository repository-clone
This will spit out a number of warnings about not verifying the certificate, and will also show you the host fingerprint in each message, like the example warning below (formatted from the original for readability):
warning: bitbucket.org certificate with fingerprint
24:9c:45:8b:9c:aa:ba:55:4e:01:6d:58:ff:e4:28:7d:2a:14:ae:3b not verified
(check hostfingerprints or web.cacerts config setting)
A better option, however, is host fingerprints, which are used by both hg and TortoiseHg. In TortoiseHg's Security window, above No host validation is the option Verify with stored host fingerprint. The Query button retrieves the fingerprint of the host's certificate and stores it in mercurial.ini:
[hostfingerprints]
bitbucket.org = 81:2b:08:90:dc:d3:71:ee:e0:7c:b4:75:ce:9b:6c:48:94:56:a1:fe
This should skip actual verification of the certificate because you are declaring that you already trust the certificate.
This documentation on certificates may help, as well.
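The value stored under [hostfingerprints] is simply the SHA-1 fingerprint of the server's certificate, so you can also compute it yourself. A sketch with a throwaway local certificate (for a live host you would instead feed the certificate obtained from openssl s_client -connect host:443 into the same x509 command):

```shell
# Compute a certificate's SHA-1 fingerprint, the value [hostfingerprints] stores.
tmp=$(mktemp -d)
# Throwaway self-signed certificate standing in for the server's certificate:
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/key.pem" \
    -out "$tmp/cert.pem" -subj "/CN=demo" -days 1
openssl x509 -in "$tmp/cert.pem" -noout -fingerprint -sha1
```

Comparing a fingerprint obtained this way against what the Query button stored is a simple sanity check that you are trusting the certificate you think you are.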
In the Clone Repository window, expand the options and check the "Do not verify host certificate" check box.

How to config mercurial to push without asking my password through ssh?

I use Mercurial in my project, and every time I push new changesets to the server over ssh, it asks me for a password.
How do I configure Mercurial to push without it asking for a password?
I work on Ubuntu 9.10.
On Linux and Mac, use ssh-agent.
Ensure you have an ssh keypair (see man ssh-keygen for details)
Copy your public key (from ~/.ssh/id_dsa.pub) to the remote machine, giving it a unique name (such as myhost_key.pub)
Log in to the remote machine normally and append the public key you just copied to the ~/.ssh/authorized_keys file
Run ssh-add on your local workstation to add your key to the keychain
You can now use any remote hg commands in this session without requiring authentication.
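The local side of those steps can be sketched as follows. The key path is a throwaway one and the passphrase is left empty only so the demo is non-interactive (in real use you would set a passphrase); steps 2-3, installing the public key into the server's ~/.ssh/authorized_keys, happen on the remote machine as described above.

```shell
# Local side of the ssh-agent setup (demo key in a throwaway directory).
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -f "$keydir/id_ed25519" -q   # 1. create a keypair
# (2-3: append $keydir/id_ed25519.pub to ~/.ssh/authorized_keys on the server)
eval "$(ssh-agent -s)" > /dev/null                        # start an agent
ssh-add "$keydir/id_ed25519"                              # 4. load the key into it
ssh-add -l                                                # the agent now lists it
```

Once the key is loaded, hg push over ssh uses the agent for authentication instead of prompting.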
Assuming you're using Windows, have a read of my Mercurial/SSH guide. Down the bottom of the post you'll find info on how to use PuTTy to do this for you.
Edit: -- Here's the part of the post that I'm talking about (bear in mind you'll need to have pageant running with your key already loaded for this to work):
Client: Setting up Mercurial
If you haven't already, make sure you install Mercurial on the client machine using the default settings. Make sure you tell the installer to add the Mercurial path to the system PATH.
The last step of configuration for the client is to tell Mercurial to use the PuTTy tools when using SSH. Mercurial can be configured by a user-specific configuration file called .hgrc. On Windows it can also be called Mercurial.ini. The file is located in your home folder. If you don't know what your home folder is, simply open a command prompt and type echo %USERPROFILE% - this will tell you the path.
If you haven't set up your configuration yet, then chances are the configuration file doesn't exist, so you'll have to create it. Create a file called either .hgrc or Mercurial.ini in your home folder manually, and open it in a text editor. Here is what part of mine looks like:
[ui]
username = OJ Reeves
editor = vim
ssh = plink -ssh -i "C:/path/to/key/id_rsa.ppk" -C -agent
The last line is the key, and this is what you need to make sure is set properly. We are telling Mercurial to use the plink program. This also comes with PuTTY and is a command-line version of what the PuTTY program itself does behind the scenes. We also add a few parameters:
-ssh : Indicates that we're using the SSH protocol.
-i "file.ppk" : Specifies the location of the private key file we want to use to log in to the remote server. Change this to point to your local PuTTY-compatible .ppk private key. Make sure you use forward slashes for the path separators as well!
-C : This switch enables compression.
-agent : This tells plink to talk to the pageant utility to get the passphrase for the key instead of asking you for it interactively.
The client is now ready to rock!
Install PuTTY.
If you're on Windows, open projectdir/.hg/hgrc in your favorite text editor. Edit it to look like this:
[paths]
default = ssh://hg@bitbucket.org/name/project
[ui]
username = Your Name <your@email.com>
ssh = "C:\Program Files (x86)\PuTTY\plink.exe" -ssh -i "C:\path\to\your\private_key.ppk" -C -agent
If it's taking forever to push, the server might be trying to ask you a question (but it's not displayed).
Run this:
"C:\Program Files (x86)\PuTTY\plink.exe" -T -ssh -i "C:\path\to\your\private_key.ppk" hg@bitbucket.org
Answer any questions, and then try pushing again.
If you're using Bitbucket, open your private key with PuTTYgen, copy your public key out of the top textbox, and add it to your user account: https://bitbucket.org/account/user/USERNAME/ssh-keys/