Accessing a VM on Fi-lab - fiware

I’m trying to get familiar with the FI-WARE Cloud service.
I can create blueprint templates and instances, but I cannot access them via SSH or “Connect to VM display”.
I have the server up and running, and I can see the Apache “It works” page.
The problems I have are:
With SSH, I don’t know which credentials to use; I tried my FI-WARE credentials, but the server always shows “access denied”.
With “Connect to VM display”, the login interface never appears.
Is there a tutorial with an example of how to do this, or detailed documentation on how to configure and access a Blueprint instance?

I know this question was already answered, but I tried those solutions and only had success with an additional detail. After creating, downloading and chmod-ing the keypair file, either:
use the [user@]hostname ssh parameter as root@Fi-lab-FloatingIPAddress,
run ssh under a root shell, or
use sudo to execute ssh -i kp.pem Fi-lab-FloatingIPAddress (so the remote user defaults to root).
Trying to access without the root username results in ssh asking for a password, even when including the keypair associated with that virtual machine.
In other words, the keypair to access Fi-lab blueprints or instances only works with the root username.
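Putting it together, a minimal sketch of the working sequence (kp.pem is the downloaded keypair file from above; FLOATING_IP stands for your instance's Fi-lab floating IP address):
chmod 600 kp.pem
ssh -i kp.pem root@FLOATING_IP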

Usually, when you create a VM from a Blueprint, you should assign a keypair, which must have been created previously. I suppose that you did this; correct me if I am wrong. During the creation of the keypair, you can download a .pem file that is used to access the VM using ssh (ssh -i xxx.pem…).

I am just getting familiar with FIWARE Lab.
Prerequisites:
Have the private key you generated in the FIWARE Cloud interface in a file named fiware_rsa (a text file beginning with -----BEGIN RSA PRIVATE KEY-----)
Associate your server with an external IP (internet) (note you can access the other instances via the one which has internet access)
ssh -i fiware_rsa user@external-ip-address
Try with the root user first; you should see a message advising the proper user name to use depending on the instance:
ubuntu@front:~$ ssh -i .ssh/fiware_rsa root@XXX.XXX.XXX.XXX
Please login as the user "centos" rather than the user "root".
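In that case, simply retry with the advised user (a hypothetical follow-up, assuming the instance reports "centos" as above):
ssh -i .ssh/fiware_rsa centos@XXX.XXX.XXX.XXX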
You can find more information here: http://fr.slideshare.net/hmunfru/setting-up-your-virtual-infrastructure-using-fi-lab-cloud
BR

Related

What are the differences between various SSH methods in Google Cloud Compute Engine?

I usually SSH into a Google Cloud Compute Engine instance using my local terminal like:
ssh -i ~/.ssh/[KEY_FILENAME] [USERNAME]@ip_address
where the [KEY_FILENAME] is generated using
ssh-keygen -t rsa -f ~/.ssh/[KEY_FILENAME] -C [USERNAME]
There is also another way to connect to the instance, through the browser; however, that connects me to the instance with a different user account. Is there a way I can make the user consistent regardless of the method I use to connect?
There are several ways to connect to a Linux instance via SSH. The way you are connecting to an instance is via the terminal. You can also connect via the Cloud Console web UI, which is in general the most convenient way to connect to an instance. Also, you can use the Google Cloud SDK and run the command below to connect to an instance via SSH:
gcloud compute ssh [INSTANCE_NAME]
You can also use Cloud Shell to connect to your instance from the Cloud Console web UI using the same command as above. You can connect via the serial console using the Google Cloud Platform Console, the gcloud command-line tool, or a third-party SSH client. The serial console authenticates users with SSH keys: specifically, you must add your public SSH key to the project or instance metadata, and store your private key on the local machine from which you want to connect. There are other, more advanced methods to connect to an instance, which you can find at this link.
By default, the gcloud compute command-line tool uses the $USER variable to add users to the /etc/passwd file for connecting to virtual machine instances using SSH. You can connect as a different user with gcloud compute ssh [USERNAME]@[INSTANCE_NAME], and point it at a specific key with the --ssh-key-file=PRIVATE_KEY_FILE flag. Depending on your use case and convenience, you can use any method consistently.
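For illustration, a few hedged examples of these variants (the instance name, zone, user and key path below are hypothetical placeholders):
gcloud compute ssh my-instance --zone us-central1-a
gcloud compute ssh jane@my-instance --zone us-central1-a --ssh-key-file ~/.ssh/google_compute_engine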

Using SSH tunnel to connect to remote MYSQL database from Node-Red

I have a set of data rolling out of Node-RED that I want to send to a remote MySQL database. The Node-RED system is running on a Raspberry Pi. How do I make this work? I know how to do it using Node.js, but I'm not sure how to do this in Node-RED. The IP address of the Pi is dynamic, so simply authorizing its IP address sadly does not work.
Thanks in advance!
EDIT for clarification:
I want to connect to a remote MySQL database that is hosted by my web hosting provider. I have connected a Raspberry Pi to a battery, and I want to save this information in the aforementioned database. Since there will be several battery setups in different locations, I cannot save the data locally. So, one way or another, I need to access the remote database through Node-RED. Authorizing one IP address doesn't work, since the IP of the Raspberry Pi's network is dynamic and thus changes. I think an SSH tunnel might be the solution, but I have no idea how to do this in Node-RED, and Google isn't very helpful.
OK, so as I said in the comments, you can create a MySQL username/password pair that is granted permission to connect from any IP address (which is less secure if the username/password is compromised; set the host to '%' to allow all hosts when setting up the grant options).
To reduce the risk, you can restrict the username/password to a specific subnet. This could be a Wi-Fi network or the public IP range of the cellular provider you may be using (it needs to be the public range, as nearly all cellular ISPs use CGNAT). (See this question for details: How to grant remote access to MySQL for a whole subnet?)
If you want to use an SSH tunnel, then this will normally be done outside Node-RED with the ssh command line, e.g.
ssh -L localhost:3306:localhost:3306 remote.host.com
Then configure the Node-RED MySQL node to point to localhost.
Since the connection will look like it's coming from localhost on the MySQL machine, you need to make sure the username/password is locked down to that host.
You will probably also want to set up public/private key authentication for the ssh connection.
You may be able to run the ssh command in the node-red-daemon node, which should restart the connection if it gets dropped.
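As a rough sketch of the key-based tunnel setup from the Pi (the user, host and key path are hypothetical placeholders):
ssh-keygen -t rsa -f ~/.ssh/id_rsa_tunnel      # generate a dedicated keypair on the Pi
ssh-copy-id -i ~/.ssh/id_rsa_tunnel.pub user@remote.host.com      # install the public key on the remote host
ssh -i ~/.ssh/id_rsa_tunnel -N -L 3306:localhost:3306 user@remote.host.com      # forward local port 3306 to MySQL on the remote host
Then point the Node-RED MySQL node at localhost:3306 as described above.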

Google Compute Engine Cannot SSH using Owner Account

Since Google Compute Engine does not allow root login, nor assign any password to the default owner account,
I thought the SSH console in the Compute Engine backend could SSH to the instance regardless of the sshd config.
Obviously I was wrong: I modified the sshd_config file and did not put the default owner account in the AllowUsers parameter. Right now I cannot SSH to the instance using the owner account, so I have lost sudo rights and am stuck.
I do, however, have a normal user set up which has no sudo rights but can SSH to the instance.
Is there any way to solve this, or do I have to rebuild the server?
You can get around this by attaching the boot disk of the instance in question as a data disk to another instance and editing the sshd_config file there.
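A rough sketch of that rescue procedure with gcloud (the instance, disk and zone names are hypothetical; adapt them to your project):
gcloud compute instances stop broken-instance --zone us-central1-a
gcloud compute instances detach-disk broken-instance --disk broken-boot-disk --zone us-central1-a
gcloud compute instances attach-disk rescue-instance --disk broken-boot-disk --zone us-central1-a
Then, on the rescue instance, mount the attached disk, fix etc/ssh/sshd_config on it, unmount it, and reverse the steps.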

Cannot connect to Compute Engine instance via SSH

I've just created an instance using Google Cloud Platform's Compute Engine and tried to connect to it via SSH, but it failed.
I'm following the quick start here.
I have generated the SSH key on my PC and entered the passphrase when asked, yet I still fail to log in :-(
I got the PuTTY SSH error shown in the snapshots below.
Then the PuTTY window becomes inactive.
I have the same problem but found a workaround to connect via PuTTY manually.
In brief
Generate SSH key for the machine instance
Add SSH public key to the instance
Prepare to log in: gather the IP, login name, passphrase, and private SSH key
Connect to the instance via SSH client, e.g. PuTTY in Windows
Detailed steps
For me the gcloud quick start had already:
launched my instance
created my public and private RSA keys (in C:\Users\USER_NAME\.ssh\)
Public Key - C:\Users\USER_NAME\.ssh\google_compute_engine.pub
Private Key - C:\Users\USER_NAME\.ssh\google_compute_engine.ppk
Go to the Google Developers Console in your browser
Select your project and in the left hand nav bar click: Compute -> Compute Engine -> VM instances
Your running instance(s) will be linked below the CPU usage chart
Click the one you want and find the Add SSH key link and click it
Paste the entire contents of google_compute_engine.pub into the field that appears
Click Save and after a few seconds the key details will appear on the page (if you get an error you pasted from the wrong key file or didn't copy all the text)
The first word in those details is your (case sensitive) username
Find the External IP above on the page
Open PuTTY and paste the external IP into Host Name (port is the default of 22)
In the left hand nav expand: Connection -> SSH and then click Auth
Next to "Private key file for authentication" click "Browse"
Select "C:\Users\USER_NAME\.ssh\google_compute_engine.ppk" and click Open
Scroll the left hand nav back up and click the top item "Session"
Under "Saved Sessions" enter a name and click "Save"
Accept the warning message and you should be prompted to log in with the username from the step above
Input your passphrase
Done
Hope this helps. If someone has a solution for the gcloud issue I'd love to hear it too.
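If you prefer to script the PuTTY connection instead of using the GUI, the same parameters also work with PuTTY's command-line client plink (a sketch, assuming the key path and username from the steps above):
plink -i C:\Users\USER_NAME\.ssh\google_compute_engine.ppk USERNAME@EXTERNAL_IP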
A non-discussed answer is that you should have at least the standard memory on your VM instance (3.75GB) - do NOT use Micro VM instances.
I could only log in with SSH via the browser console or the gcloud command line, but not with PuTTY or Mac terminal SSH.
I spent an hour on the phone with support and we found this to be the problem.
To get identified by ssh, you need to run this command, which adds the gcloud SSH key to the list of keys known to the agent:
ssh-add C:\Users\USER_NAME\.ssh\google_compute_engine
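If no agent is running yet, a minimal sketch (assuming Git Bash or a similar shell on Windows, and the key path above):
eval $(ssh-agent)      # start an agent if one is not already running
ssh-add ~/.ssh/google_compute_engine      # register the private key with the agent
ssh-add -l      # verify the key is loaded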
You can also connect to your VM instance using the embedded in-browser SSH client; see here for how to do that. That's pretty much a couple of mouse clicks.
Not sure why, but if the user already existed (e.g. had already logged in over SSH via the Google web console), it did not work when I manually added SSH keys to the metadata in the Google web console. I have tried hundreds of times with the steps below.
I found out you have to add your SSH key manually through the web SSH console: SSH in via the Google web console, copy the SSH public key from your local machine (usually in ~/.ssh/), and append it (edit and paste at the end) to ~/.ssh/authorized_keys.
1) SSH into the VM via the cloud console.
2) Change the root password: sudo passwd
3) Set the parameters below to yes with nano /etc/ssh/sshd_config
PasswordAuthentication
PermitRootLogin
4) Restart sshd: service sshd restart
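For reference, a sketch of the relevant /etc/ssh/sshd_config lines after step 3, and the restart (note that enabling password and root login is a security trade-off):
PasswordAuthentication yes
PermitRootLogin yes
sudo service sshd restart      # or: sudo systemctl restart sshd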

How to ssh into HA application gears?

As was explained in the answer to this question: https://stackoverflow.com/questions/11730590/what-are-some-of-the-tricks-to-using-openshift it should be possible to ssh into some of the other gears when using a scaled app with openshift.
Unfortunately the link mentioned there (https://openshift.redhat.com/community/faq/can-i-access-my-applications-gear) seems to be gone.
Via [my app url]/haproxy-status/ I can see the names of the other gears. They are long names like gear-[long number]-[app name]. Using that name, I cannot ssh into them when I'm ssh'ed into the main gear; ssh there just returns immediately without any error.
If I do ssh blala, the same thing happens, so it looks like ssh has been replaced by a no-op command on the primary gear?
When I examine the haproxy conf file, I see entries like;
server gear-[long number]-[app name] ex-std-node[number].prod.rhcloud.com:[number] check fall 2 ...
I tried ssh'ing into this ext-std-node... address as well, both from the main/primary application gear and from my desktop, but it didn't work in either case.
How can I get shell access to my other gears?
This command shows how to access individual gears:
rhc app show <appname> --gears
The last column of output is the ssh URL. It is of the form $UUID@$UUID-$NAMESPACE.rhcloud.com . You can ssh into them directly, and they are also accessible via ssh from the "head" gear; they have to be, since git pushes are synchronized from the head gear to the others via ssh.
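For example, a hypothetical session (the UUID and namespace below are made-up placeholders):
rhc app show myapp --gears
ssh 54f1a2b3c4d5e6f7a8b9c0d1@54f1a2b3c4d5e6f7a8b9c0d1-mydomain.rhcloud.com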