Permission denied (publickey,gssapi-keyex,gssapi-with-mic) - google-compute-engine

After creating the instance, I can't log in using gcutil or ssh. I tried copying and pasting the ssh link listed at the bottom of the instance page and get the same error message.

The permission denied error probably indicates that SSH private key authentication has failed. Assuming that you're using an image derived from the Debian or CentOS images recommended by gcutil, it's likely one of the following:
You don't have any ssh keys loaded into your ssh keychain, and you haven't specified a private ssh key with the -i option.
None of your ssh keys match the entries in .ssh/authorized_keys for the account you're attempting to log in to.
You're attempting to log into an account that doesn't exist on the machine, or attempting to log in as root. (The default images disable direct root login – most ssh brute-force attacks are against root or other well-known accounts with weak passwords.)
How to determine what accounts and keys are on the instance:
There's a script that runs every minute on the standard Compute Engine CentOS and Debian images which fetches the 'sshKeys' metadata entry from the metadata server and creates accounts (with sudoers access) as necessary. This script expects entries of the form "account:<ssh public key line>\n" in the sshKeys metadata, and can put several entries into authorized_keys for a single account (or create multiple accounts if desired).
In recent versions of the image, this script sends its output to the serial port via syslog, as well as to the local logs on the machine. You can read the last 1MB of serial port output via gcutil getserialportoutput, which can be handy when the machine isn't responding via SSH.
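For reference, an sshKeys metadata entry looks roughly like this (the account name and key are placeholders):
alice:ssh-rsa AAAAB3NzaC1yc2EAAAADAQ... alice@workstation
And the serial port output can be read with something like the following (the instance name and zone are placeholders, and flags may vary by gcutil version):
gcutil getserialportoutput --zone=us-central1-a my-instance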
How gcutil ssh works:
gcutil ssh does the following:
Looks for a key in $HOME/.ssh/google_compute_engine, and calls ssh-keygen to create one if not present.
Checks the current contents of the project metadata entry for sshKeys for an entry that looks like ${USER}:$(cat $HOME/.ssh/google_compute_engine.pub)
If no such entry exists, adds that entry to the project metadata, and waits for up to 5 minutes for the metadata change to propagate and for the script inside the VM to notice the new entry and create the new account.
Once the new entry is in place (or immediately, if the user:key was already present), gcutil ssh invokes ssh with a few command-line arguments to connect to the VM.
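Under the hood, that final step is roughly equivalent to something like the following (the exact extra options vary by gcutil version; EXTERNAL_IP is a placeholder):
ssh -i $HOME/.ssh/google_compute_engine $USER@EXTERNAL_IP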
A few ways this could break down, and what you might be able to do to fix them:
If you've removed or modified the scripts that read sshKeys, the console and command line tool won't realize that modifying sshKeys doesn't work, and a lot of the automatic magic above can get broken.
If you're trying to use raw ssh, it may not find your .ssh/google_compute_engine key. You can fix this by using gcutil ssh, or by copying your ssh public key (ends in .pub) and adding it to the sshKeys entry for the project or instance in the console. (You'll also need to put in a username, probably the same as your local-machine account name.)
If you've never used gcutil ssh, you probably don't have a .ssh/google_compute_engine.pub file. You can either use ssh-keygen to create a new SSH public/private keypair and add it to sshKeys, as above, or use gcutil ssh to create them and manage sshKeys.
If you're mostly using the console, it's possible that the account name in the sshKeys entry doesn't match your local username; in that case you may need to supply the -l argument to ssh (see the example after this list).
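For example, to create a keypair by hand and connect with an explicit username (the username and EXTERNAL_IP are placeholders):
ssh-keygen -t rsa -f ~/.ssh/google_compute_engine
Then add "myusername:" followed by the contents of ~/.ssh/google_compute_engine.pub to the sshKeys metadata, and connect with:
ssh -i ~/.ssh/google_compute_engine -l myusername EXTERNAL_IP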

Ensure that the permissions on your home directory, and on the home directory of the user on the host you're connecting to, are set to 700 (owner rwx only, to prevent others from seeing the .ssh subdirectory).
Then ensure that the ~/.ssh directory is also 700 (user rwx) and that authorized_keys is 600 (user rw).
Private keys in your ~/.ssh directory should be 600 or 400 (user rw or user r).
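For example, run on the instance as the account you're logging in to:
chmod 700 ~
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
chmod 600 ~/.ssh/id_rsa    # or 400; applies to any private key kept here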

I was facing this issue for a long time. Finally it turned out to be an ssh-add issue: the ssh credentials I use for Git were not being taken into consideration.
The following command might work for you:
ssh-add
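If plain ssh-add isn't enough, you can add the key explicitly and list what the agent holds (the key path is a placeholder):
ssh-add ~/.ssh/google_compute_engine
ssh-add -l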

I had the same problem; for some reason the sshKeys entry was not syncing up with my user on the instance.
I created another user by adding --ssh_user=anotheruser to the gcutil command.
The gcutil command looked like this:
gcutil --service_version="v1" --project="project" --ssh_user=anotheruser ssh --zone="us-central1-a" "inst1"

I just experienced a similar message [mine was "Permission denied (publickey)"] when connecting to a Compute Engine VM I had just created. After reading this post, I decided to try it again.
That time it worked. So I see three possible reasons for it working the second time:
connecting the second time resolves the problem (after the ssh key was created the first time), or
perhaps trying to connect to a compute engine immediately after it was created could also cause a problem which resolves itself after a while, or
merely reading this post resolves the problem
I suspect the last is unlikely :)

I got this error while connecting to an EC2 instance with ssh. It happens if I use the wrong user name.
E.g. for Ubuntu images I need to use ubuntu as the user name, and for others I need to use ec2-user.
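For example (the key path and host are placeholders):
ssh -i ~/.ssh/my-key.pem ubuntu@<instance-public-dns>      # Ubuntu AMIs
ssh -i ~/.ssh/my-key.pem ec2-user@<instance-public-dns>    # Amazon Linux and most others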

You haven't accepted an answer, so here's what worked for me in PuTTY:
Without allowing username changes, I got this question's subject line as the error on the gateway machine.

You need to follow these instructions:
https://cloud.google.com/compute/docs/instances/connecting-to-instance#generatesshkeypair
If you get "Permission denied (publickey)." with the following command
ssh -i ~/.ssh/my-ssh-key [USERNAME]@[IP_ADDRESS]
you need to modify the /etc/ssh/sshd_config file and add the line
AllowUsers [USERNAME]
Then restart the ssh service with
service ssh restart
If you get the message "Could not load host key: /etc/ssh/ssh_host_ed25519_key", execute:
ssh-keygen -A
and finally restart the ssh service again.
service ssh restart
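In either case, you can check the configuration for syntax errors before restarting (run as root; assumes OpenSSH's sshd):
sshd -t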

I followed everything from here:
https://cloud.google.com/compute/docs/instances/connecting-to-instance#generatesshkeypair
But there was still an error, and the SSH key in my instance metadata wasn't being recognized.
Solution: check whether your ssh key contains any line breaks. When I copied my public key from the cat output, line breaks got inserted into the key, breaking it. I had to check for line breaks manually and remove them.
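A quick way to check for stray line breaks, assuming the key is at ~/.ssh/id_rsa.pub:
wc -l ~/.ssh/id_rsa.pub            # a valid public key file should contain exactly one line
tr -d '\n' < ~/.ssh/id_rsa.pub     # prints the key with any embedded newlines stripped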

The trick here is to use the -C (comment) parameter to specify your GCE userid. It looks like Google introduced this change late in 2018.
If the Google user who owns the GCE instance is myname@gmail.com (which you will use as your login userid), then generate the key pair with (for example)
ssh-keygen -b 521 -t ecdsa -C myname -f mykeypair
When you paste mykeypair.pub into the instance's public key list, you should see "myname" appear as the userid of the key.
Setting this up will let you use ssh, scp, etc from your command line.
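For example (EXTERNAL_IP is a placeholder):
ssh -i mykeypair myname@EXTERNAL_IP
scp -i mykeypair somefile.txt myname@EXTERNAL_IP:~/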

Add your ssh public key to Google Cloud:
cat ~/.ssh/id_rsa.pub
Go to your VM instances
Edit the VM instance
Add the ssh public key (from id_rsa.pub) in the SSH keys area
Then ssh in from Git Bash on your computer:
ssh -i ~/.ssh/id_rsa tiennt@x.y.z.120

Related

Navicat or MySQL Workbench SSH Tunnel with MFA

Is it even possible to use Navicat or MySQL workbench (or any other tool) to connect to an Amazon RDS via a jump box like a bastion host?
I can do this if I manually open an ssh tunnel in a terminal like so:
ssh -A <my_user>@<bastion_endpoint> -L 3306:<RDS_ENDPOINT>:3306
Then I connect to localhost:3306 from the MySQL tool, and this works. It's important to know that MFA is required to log in to the bastion.
I can't even connect to my bastion box from Navicat using the exact same credentials as I used in the terminal/command line. I get the error:
Access denied for 'none'. Authentication that can continue: keyboard-interactive (11)
So I went to the bastion box and removed keyboard-interactive from the authentication methods and it works. However, that obviously breaks the MFA I'm using so that's not an option.
Is there any other configuration I need to do on my bastion box in order to make this work, or is this simply not possible?
Turns out my sshd configuration was fine, however, I did go back and enable auto pushing with MFA.
Navicat
If auto push is disabled for MFA, you need to select Password and Public Key (even though we're just using a public key here). The password field can be left blank if auto push is enabled; otherwise put the number of the MFA method you want to be notified through here (that's right, it's not a normal password, dumb I know). Then just use your normal private key and the passphrase for the private key and it should connect fine.
MySQL Workbench / Sequel Pro
You MUST have auto push or something similar to it in order to work with MFA. If I didn't have auto push enabled, it would fail to connect. Again, use Password and Public Key. The password to the instance is blank; then it's just the normal public key and the passphrase for your key.
IntelliJ
This is nicer. It will work with prompting with the MFA choice if you don't have auto push enabled. Unlike the others you can select the Public Key option instead of Password and Public Key and it should connect and work fine.
- - - - - Errors With Rotating IPs - - - - -
Now unfortunately you will get the annoying, scary "man-in-the-middle" error from ssh if you have a constant endpoint with rotating IPs under it (like I did). If that's the case, ONLY IntelliJ offers a checkbox to ignore strict host key checking (StrictHostKeyChecking).
So to get around that ONLY for this specific endpoint you can make it be forced to ignore SSH Key Checking. To do that add the following entry to your ~/.ssh/config file:
# changing IPs
Host my-custom-host.com
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
You can use wildcards in the Host section, which is nice, so customize it to your needs. You won't be verifying the ssh host key any more, but it's better than having to delete the endpoint entry in ~/.ssh/known_hosts daily (or however often the IP gets rotated).
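For example, a wildcard entry might look like this (the hostname pattern is a placeholder):
Host *.my-custom-host.com
StrictHostKeyChecking no
UserKnownHostsFile /dev/null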
Hopefully this helps to save someone a lot of pain.

How do I resolve this error when trying to connect to an SQL server hosted on a Google Compute Engine Ubuntu VM

For a database course that I'm in, the professor has tasked us with setting up several VM MySQL servers and remote connections. I've found proper documentation to solve most of my problems, but I've pored over docs trying to find a solution to my latest issue.
I've set up an Ubuntu VM on the Google Cloud Compute Engine. I installed a MySQL server to this VM instance, and I need to log in remotely from my laptop. I've followed this documentation https://cloud.google.com/solutions/mysql-remote-access and this youtube video https://www.youtube.com/watch?v=f5qQDm3ciDg.
However, I still get an Unable to Connect to Server message when I test my connection. What could I be overlooking that will help me connect?
Thanks!
So, I slammed my head against a wall for long enough to realize that ssh will be an easier solution than a direct connection.
So, at least for my Windows machine, these are the steps I followed to make the connection:
Install the MySQL server (you don't need to add a user unless necessary, and you don't need to change the bind-address in the config file).
Use PuTTYgen to create a public/private key pair. Export the private key in OpenSSH format (in the export options).
Click the Edit button on your VM instance, then scroll down to the SSH key section.
Paste the public key into the text box (be sure to change the last comment portion to a username on the Linux VM).
Use the SSH connection option in MySQL Workbench. Use the external IP of your VM as the first (ssh) host name and localhost as the second (SQL) host name. Input all other info as it is asked for.
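Alternatively, the same connection can be made with a plain command-line tunnel and any MySQL client pointed at 127.0.0.1:3306 (the key path, username and EXTERNAL_IP are placeholders):
ssh -i ~/.ssh/my-gce-key -L 3306:127.0.0.1:3306 myuser@EXTERNAL_IP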

Google Cloud instance can't be accessed via SSH after cloning

I'm desperate for help here. I have a compute engine instance that hosts a lot of websites. These are the steps that I took:
Go to Compute Engine > Snapshots and take a snapshot of my instance
Click on the newly created snapshot and click Create Instance.
The new instance has all the configs of the current running instance
Then when I tried to access the new instance via SSH, it wouldn't work. Error message:
"Connection Failed
We are unable to connect to the VM on port 22. Learn more about possible causes of this issue."
Clicking on Learn more gets me to https://cloud.google.com/compute/docs/ssh-in-browser#ssherror
The instance is booting up and sshd is not yet running - Not sure how to check this
The instance is not running sshd - Not sure how to check this either
sshd is listening on a port other than the one you are connecting to - My current instance has ssh running on port 22, so I guess this is fine?
There is no firewall rule allowing SSH access on the port - Again, my current instance has ssh running, so I don't think it's the firewall, right?
The firewall rule allowing SSH access is enabled, but is not configured to allow connections from GCP Console services. - Same as above
The instance is shut down - Instance is still running.
Strange thing is if I create a fresh instance from scratch and then do the steps above to clone to a new instance then that new instance can be accessed normally via SSH.
Can anyone show me how to fix this, or show me how to see the logs and check what went wrong? I tried to google it but got pretty confused by all the jargon and where to find particular things. Sorry for the wall of text. Thanks
Edit #1: I got technical support from Google. The steps below might help someone else, but not me: when I reached step 7, I waited forever and never got to the login prompt.
1.) Go to the VM instances page and click on the Instance name of your VM.
2.) Click the Edit button at the top of the page.
3.) Under Custom metadata, click Add item.
4.) Set 'Key' to 'startup-script' and set 'Value' to this script:
#! /bin/bash
useradd -G sudo USERNAME
echo 'USERNAME:PASSWORD' | chpasswd
NOTE: change the value of USERNAME and PASSWORD to the name and password of your choice.
5.) Enable "Enable connecting to serial ports" by checking the box below the SSH button.
6.) Click Save and then click RESET on the top of the page. Wait for some time for the instance to reboot.
7.) Click on 'Connect to serial port' on the page. In the new window, you might need to wait a bit and press Enter on your keyboard once; then you should see the login prompt.
8.) Login using the USERNAME and PASSWORD you provided.
Note: Please do not share any of your password and username for your data security.
As the steps above couldn't help me, and the Google support representative looked at the log but didn't see anything wrong, she suggested debugging SSH by following this guide https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-ssh#use_your_disk_on_a_new_instance, which I will do when I have time. I feel like I'm writing an essay. Will keep you posted.
The troubleshooting steps that you can follow are:
Use the serial console to view your instance logs and check whether the new instance you created from the snapshot failed to reach the run level where the ssh daemon would get started. If sshd was not started, you would not have ssh access to your instance (see the command after this list).
You can try restarting the instance if it doesn’t affect production and try to gain ssh access again. Might be that some issue prevented the instance from starting up properly and restarting it could fix it.
You can try creating another VM instance from the snapshot in case the previous instance wasn’t created properly.
If creating a new VM instance from the snapshot doesn’t fix the issue, it might be that the snapshot itself wasn’t created properly. You can read this documentation guide, section Understanding snapshot best practices, and try creating another snapshot and VM instances from it.
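For reference, the serial console output can also be read from your workstation with gcloud (the instance name and zone are placeholders):
gcloud compute instances get-serial-port-output my-instance --zone=us-central1-a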
I had the same problem and after a lot of searching, I found an answer from user Peripheral from ServerFault that worked for me.
I found the fix for me. A recent update has a known issue where it removes the default gateway from the routing table. To fix it, I have to go to the instance and select Edit. Scroll down, and under Custom Metadata put the following:
key: startup-script
value: route add default gw <gatewayIP> eth0
Save and restart the VM.
Source
All credit to them; I just want to share it to help others find their solution faster.
I had the same issue. I eventually figured out that it was because I had attached a persistent disk and added an entry for it to the /etc/fstab file. This entry is supposed to automatically mount the attached disk when the instance restarts.
However, when I created a snapshot of the boot disk, I didn't remove the /etc/fstab entry. So creating a new instance from this snapshot will always cause a boot error, as the system tries to mount a disk that is not attached.
This information is present in the documentation.
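One way to avoid this in future snapshots is to give the fstab entry the nofail option, so that boot continues even if the disk is missing (the UUID and mount point below are placeholders):
UUID=1234-abcd /mnt/disks/data ext4 discard,defaults,nofail 0 2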

Connect to SFTP using Winscp from SSI without using host key [duplicate]

I am trying to connect to a Unix server from the WinSCP command line for the first time.
It closes with the following error:
The server's host key was not found in the cache. You have no guarantee that the server is the computer you think it is.
The server's rsa2 key fingerprint is:
ssh-rsa 1024 42:9e:c7:f4:7f:8b:50:10:6a:06:04:b1:d4:f2:04:6d
If you trust this host, press Yes. To connect without adding host key to the cache, press No. To abandon the connection press Cancel.
On the WinSCP command line it does not ask for any input (Yes or No); it just closes with "Authentication failed." If I connect through the WinSCP GUI I get the same error, but there I am able to press Yes.
I also know that if I add the -hostkey switch on the command line, I'll be able to connect. But I don't want to pass the host key in my batch script, as I will be connecting to various servers. So my requirement is to pass a "Yes" input from the command line in case of this error. Can someone help?
Host key fingerprint verification is a crucial step in securing your SSH connection. Even if your script connects to a whole set of servers, that does not excuse you. The fingerprint should be part of the information you keep for each session (in addition to the hostname, username and password).
Skipping fingerprint verification means that you lose any security, and there's no point in using SSH/SFTP any more.
Anyway, if you do not care about security, you can use the -hostkey=* switch to unconditionally accept any host key.
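For example, in a WinSCP script the switch goes on the open command, either with the real fingerprint from the prompt above or, insecurely, with * (the hostname and username are placeholders):
open sftp://user@example.com/ -hostkey="ssh-rsa 1024 42:9e:c7:f4:7f:8b:50:10:6a:06:04:b1:d4:f2:04:6d"
open sftp://user@example.com/ -hostkey=*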
Further references:
Where do I get SSH host key fingerprint to authorize the server?
Verifying the host key

Cannot connect to Google Compute Instance user

I have searched all over Google for this issue but found no answer yet, so I thought I'd ask here.
I have a Google Compute instance, and I had a working PuTTY ssh connection that ran flawlessly. But after I formatted my PC, everything went wrong.
I installed gcloud and went through the whole ssh process again (config-ssh, adding the ssh key to the key list and trying to connect). I also tried connecting to my old user after I realized that I had typed a name different from my Windows user name. Suddenly I got the "No supported authentication" message. So I thought something was wrong with the ssh keys, but then I realized that I cannot connect to my user even through the Google web browser window; the connection always gets stuck on trying to connect until it times out.
I would gladly appreciate any help :)
gcloud compute ssh currently has a known problem and might not work on Windows.
Here's a workaround until we fix it: run "gcloud compute ssh INSTANCE --dry-run". This will output the command it tries to execute.
Copy that command. You can either add -W flag to it and run it, or replace ssh.exe with ssh-term.exe and remove the -o flags.
If gcloud is installed in a place like Program Files, you might also need to add quotes around the path.
First of all, run the following command (replacing the word in capital letters) which will ensure that your SSH key is created if it was not created before: gcloud compute ssh INSTANCE
Then, follow these steps to add your SSH key to your project and SSH into your instance:
1- Copy the content of C:\Users\<username>\.ssh\google_compute_engine.pub (the path might differ depending on your Windows version) into the project metadata (Developers Console -> PROJECT -> Compute -> Metadata -> SSH keys -> Edit -> Add key).
If you want to log in as a different user, you can change the last word of the pasted text: <username>@<hostname>
2- Configure PuTTY. Go to Connection -> SSH -> Auth -> Browse and select your PuTTY SSH key (which should be located in C:\Users\<username>\.ssh\google_compute_engine.ppk) and try to SSH into the instance.
3- If it doesn't work, remove the instance metadata, because the instance metadata overrides the project metadata. To do that, go to Compute -> Compute Engine -> INSTANCE -> SSH keys -> Edit -> click on every 'x' and save the changes.
Regarding your issue trying to access using the SSH button in the Developers Console, I’d reboot the instance if it’s not in production because there is a script that must be working properly in order to access from there: /usr/bin/python /usr/share/google/google_daemon/manage_accounts.py --daemon
I hope it helps.