Authorization error while setting up SSH into an Elastic Beanstalk instance - amazon-elastic-beanstalk

I am trying to set up SSH on my Elastic Beanstalk instance via the EB CLI.
I get the following authorization error, but I don't understand which policy I am missing...
eb ssh --setup Test-env
WARNING: You are about to setup SSH for environment "Test-env". If you continue, your existing instances will have to be **terminated** and new instances will be created. The environment will be temporarily unavailable.
To confirm, type the environment name: Test-env
Type a keypair name.
(Default is aws-eb):
Generating public/private rsa key pair.
C:\Users\steeve\.ssh\aws-eb already exists.
Overwrite (y/n)?
ERROR: NotAuthorizedError - Operation Denied. You are not authorized to perform this operation. Encoded authorization failure message: CMrME2Q3zO8uzTOfmkZKzzZFtYI619QHlTlFKlnZYzbaLYS6bpdJqBg8yYOUj7WU7YJLqabtHgtp4U6kSMlNxvjpodMsdNirEP4aEr6yZL3Rum7MczDWdow9CdWJE_TOS3ULP2aFEa4Uas4rfwpCmEN1S7NRBIpGHrC_obKy3UgQygeXrcRJhxsvsfzTcEP7sLGd5y5KajPqoFso0HY2B0qSJ9XObX9bQrZ2wnADKaUuM1dyrJlJH_OAzNivJR1DeciqWkWjLJRHHWef7XhS3bpZJCXkM7ahpTwXZ5SOS5f1F-NU1dVxVzR9wYyp5XVhI0SM1FkEAFhK6T3TkkV6XqoYYKdwuyzQFoIX57LFpPLCxxAchM0xq2wenIlnlqW4Puu4g6oeo2SEd7E7HBd0Zk_QQx0dW1pZYZroLXMc37fZAYsEDOooeOq5I-qF-DIxMse_vMQnPMKpb5JW9g4vz_ZP0ToRirgWNdP0dd5rT7v2TbRnNFdFG3j3RoSe46qgaEt5GcFEjmwb-Kog_GE
Thanks.

I was missing EC2 and S3 permissions.
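As a general debugging aid, the encoded authorization failure message in the error can be decoded with the AWS CLI to see exactly which action and resource were denied (a sketch; your IAM user needs the sts:DecodeAuthorizationMessage permission, and you paste the full encoded string from the error in place of the placeholder):
# Decode the failure message to see the denied action and resource
aws sts decode-authorization-message --encoded-message <encoded-message-from-the-error> --query DecodedMessage --output text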

Related

SASL Error connecting to remote libvirt over SSH: No worthy mechs found

I have a server running Ovirt Node that I'm trying to manage remotely using libvirt. I have an SSH keypair installed and can ssh user@server -i ssh-privkey successfully. When I try to connect to qemu+ssh://user@host/system?keyfile=ssh-privkey, I get this error:
authentication failed: Failed to start SASL negotiation: -4 (SASL(-4): no mechanism available: No worthy mechs found)
That led me down the path of getting TLS keys and certificates installed on the client and the server, mostly according to these instructions (the configuration is slightly different because I have only one host and am using Terraform to manage the certificates*). However, I still get the same error. When I look at the output of libvirtd --listen --verbose on the server when a connection fails, the only useful output is this:
error : virNetSocketReadWire:1792 : End of file while reading data: Input/output error
I have checked every firewall between the client and the server and they should all be wide open. What else could be the cause of this error?
* The goal is ultimately to use Terraform to provision libvirt resources; however, I get the same errors trying to connect with virsh and virt-manager.
UPDATE: It's easier to just connect via SSH; this question exists because I couldn't figure out how to turn off SASL. It turns out SASL is enabled for SSH connections because vdsm sets auth_unix_rw="sasl" in /etc/libvirt/libvirtd.conf. Removing that setting means I can use my SSH private key as I intended. The TLS configuration was a wild goose chase, further hindered by vdsm changing the configured location of all the PKI files.
You're likely missing an RPM package on your client host. First, on the virtualization host, check /etc/sasl2/libvirt.conf and see which 'mech_list' setting is uncommented.
Back on your client, you'll need to install a 'cyrus-sasl-XXXX' RPM that provides the same mechanism the server is set to use. For a modern libvirt install it will probably be 'cyrus-sasl-scram' for plain username/password auth, but older installs might still be using 'cyrus-sasl-md5'.
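As a sketch of the check and fix (assuming a Fedora/RHEL-style host; package names vary by distribution):
# On the virtualization host: see which SASL mechanism libvirt is configured to use
grep -E '^\s*mech_list' /etc/sasl2/libvirt.conf
# e.g. "mech_list: scram-sha-256" means the server expects the SCRAM mechanism

# On the client: install the Cyrus SASL plugin that provides that mechanism
sudo dnf install cyrus-sasl-scram    # or cyrus-sasl-md5 for older setups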

Request had insufficient authentication scopes on Terraform when creating GCP MySQL

Keep getting this error:
Error, failed to create instance group-database-instance: googleapi: Error 403: Request had insufficient authentication scopes.
More details:
Reason: insufficientPermissions, Message: Insufficient Permission
I have added a service account with editor permissions to use all GCP resources, and pointed Terraform at the generated credentials file.
Would this be an error in the code or something else?
Based on the error message you have provided and the task you would like to accomplish, it would seem that you might need to add a scope when creating your instance.
To use the Google Kubernetes Engine API from a GCE virtual machine, you will need to add the Cloud Platform scope ("https://www.googleapis.com/auth/cloud-platform") to your VM when it is created.
Additionally, if you are using the gcloud command line, you can follow along with something like:
gcloud compute instances create NAME --scopes=https://www.googleapis.com/auth/cloud-platform
If you are using the Cloud Console UI, when you are creating a VM instance, look for the "Identity and API access" section, and select "Allow full access to all Cloud APIs".
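If the VM already exists, one option (a sketch; the instance name and zone are placeholders) is to stop it and update its scopes instead of recreating it:
# Scopes can only be changed while the instance is stopped
gcloud compute instances stop my-vm --zone us-central1-a
gcloud compute instances set-service-account my-vm --zone us-central1-a --scopes=https://www.googleapis.com/auth/cloud-platform
gcloud compute instances start my-vm --zone us-central1-a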

Can't do cf ic login with HTTP proxy

I am using Bluemix container service and am unable to do cf ic login from behind a firewall, even though I have configured proxies.
When I do
cf ic -v login
I get the error message:
Authenticating with the IBM Containers registry host registry.ng.bluemix.net...
FAILED
The attempt to authenticate with the IBM Containers registry host registry.ng.bluemix.net was unsuccessful.
Warning: '-e' is deprecated, it will be removed soon. See usage.
Error response from daemon: Get https://registry.ng.bluemix.net/v1/users/: dial tcp 198.23.117.106:443: i/o timeout
To test that my proxy is configured, I do this:
wget https://registry.ng.bluemix.net/v1/users/
--2016-10-25 11:25:23-- https://registry.ng.bluemix.net/v1/users/
Resolving proxy-chain.intel.com (proxy-chain.intel.com)... 10.19.8.225
Connecting to proxy-chain.intel.com (proxy-chain.intel.com)|10.19.8.225|:912... connected.
Proxy request sent, awaiting response... 404 Not Found
2016-10-25 11:25:24 ERROR 404: Not Found.
If I disconnect from the VPN, so that I am no longer behind the firewall and don't need a proxy, and unset my proxies, it works.
These are the proxies I have set:
printenv | grep -i proxy
http_proxy=http://proxy-chain.intel.com:911
ftp_proxy=http://proxy-chain.intel.com:911
socks_proxy=http://proxy-chain.intel.com:1080
https_proxy=http://proxy-chain.intel.com:912
no_proxy=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,127.0.0.0/8,134.134.0.0/16
More experiments:
When I set the proxy to something bogus, it fails immediately:
> export https_proxy=http://foobarsfsdf.com
> cf ic login
FAILED
auth request failed: Error performing request: Post https://login.ng.bluemix.net/UAALoginServerWAR/oauth/token: http: error connecting to proxy http://foobarsfsdf.com: dial tcp: lookup foobarsfsdf.com on 10.0.2.3:53: no such host
When I set the proxy correctly, it fails later:
> cf ic login
Deleting old configuration file...
Retrieving client certificates for IBM Containers...
Storing client certificates in /home/rscohn1/.ice/certs/...
Storing client certificates in /home/rscohn1/.ice/certs/containers-api.ng.bluemix.net/80cc2e8c-4df0-4700-bd04-77f2e8777f80...
OK
The client certificates were retrieved.
Checking local Docker configuration...
OK
Authenticating with the IBM Containers registry host registry.ng.bluemix.net...
FAILED
The attempt to authenticate with the IBM Containers registry host registry.ng.bluemix.net was unsuccessful.
Warning: '-e' is deprecated, it will be removed soon. See usage.
Error response from daemon: Get https://registry.ng.bluemix.net/v1/users/: dial tcp 198.23.117.106:443: i/o timeout
When you are not connected to the IBM Containers registry host, you can run only a limited number of IBM Containers commands. Check the spelling of the host URL and try again. If the host URL is correct, open a new command line or terminal window before retrying.
It looks like some parts of the ic plugin use proxies, and some parts do not.
You need to add the proxy to your Docker daemon configuration. Also note that, as Alex says, you should make sure to configure an HTTPS proxy.
See here for some information on how to do that with Systemd on Linux (Ubuntu 16.04+): https://docs.docker.com/engine/admin/systemd/#http-proxy
For older Linux distributions, such as Ubuntu versions before 16.04, Docker uses Upstart. You'll find the Upstart configuration file at /etc/default/docker, with a sample of how to set the proxy up in comments inside that file.
If you're using the Docker for Mac or Docker for Windows apps, you'll find the proxy configuration options in Preferences -> Advanced.
Make sure to restart Docker after changing the configuration, so that your changes take effect. On Linux: sudo service docker restart. On Mac or Windows, right-click the Docker icon and click restart.
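For example, on a systemd-based distribution the drop-in might look like this (a sketch reusing the proxy values from the question):
# Create a systemd drop-in that passes the proxy variables to the Docker daemon
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy-chain.intel.com:911"
Environment="HTTPS_PROXY=http://proxy-chain.intel.com:912"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
# Reload systemd and restart Docker so the daemon picks up the proxy
sudo systemctl daemon-reload
sudo systemctl restart docker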

Scopes required for executing "gcloud container clusters create" on GCE VM instance

I am trying to create a GKE cluster by executing the following command on a GCE VM instance:
sudo gcloud container clusters create my-cluster \
--machine-type g1-small --num-nodes 1
Execution fails with this error message (despite kubectl being installed):
WARNING: Accessing a Container Engine cluster requires the kubernetes commandline client [kubectl].
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Request had insufficient authentication scopes.
This problem is perhaps caused by the VM instance not possessing enough scopes. It currently possesses the following ones. Which other scope(s) are required in order for the problem to disappear?
Google Container Engine requires the https://www.googleapis.com/auth/cloud-platform scope, so you'll need to select "Allow full access to all Cloud APIs" when you create the VM instance.
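To check which scopes a VM currently has, you can query the metadata server from inside the instance (this endpoint is standard on GCE):
# Lists the OAuth scopes granted to the instance's default service account
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"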

Accessing a VM on Fi-lab

I'm trying to get familiar with the Fi-Ware Cloud service.
I can create blueprint templates and instances, but I cannot access them via SSH or connect to the VM display.
I have the server up and running, and I can see Apache's "It works" page.
The problems I have are:
With SSH, I don't know which credentials I have to use. I tried my Fi-Ware credentials, but the server always shows me "access denied".
With "Connect to VM display", the login interface never appears.
Is there a tutorial with an example of how to do this, or detailed documentation on how to configure and access a Blueprint instance?
I know this question was already answered, but I tried these solutions and only had success with an additional detail: after creating, downloading, and chmod-ing the keypair file, use root@Fi-lab-FloatingIPAddress as the [user@]hostname ssh parameter, either
under a root shell, or
using the sudo command to execute ssh -i kp.pem root@Fi-lab-FloatingIPAddress
Trying to access without the root username results in ssh asking for a password, even when including the keypair associated with that virtual machine.
In other words, the keypair to access Fi-lab blueprints or instances only works with the root username.
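Putting that together, a minimal session might look like this (kp.pem and the floating IP address are placeholders for your own keypair file and instance address):
# Restrict the key file's permissions, otherwise ssh refuses to use it
chmod 600 kp.pem
# Connect as root with the keypair associated with the instance
sudo ssh -i kp.pem root@<Fi-lab-floating-IP>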
Usually, when you create a VM from a Blueprint, you should assign a keypair that was created previously. I suppose that you did that; correct me if I am wrong. During the creation of the keypair, you could download a .pem file that is used to access the VM using ssh (ssh -i xxx.pem ...).
I am just getting familiar with FIWARE Lab.
Prerequisites:
Have the private key you generated in the FIWARE cloud interface in the file fiware_rsa (a text file beginning with -----BEGIN RSA PRIVATE KEY-----).
Associate your server with an external (internet) IP address (note that you can access other instances via the one that has internet access).
ssh -i fiware_rsa user@external-ip-address
Try with the root user; you should see a message advising the proper user name to use depending on the instance:
ubuntu@front:~$ ssh -i .ssh/fiware_rsa root@XXX.XXX.XXX.XXX
Please login as the user "centos" rather than the user "root".
You can find more information here: http://fr.slideshare.net/hmunfru/setting-up-your-virtual-infrastructure-using-fi-lab-cloud
BR