Unable to create an OVA in VirtualBox using Packer with private key authentication

I am unable to create an OVA using Packer in VirtualBox with id_rsa. From the host machine I am able to SSH to the VirtualBox guest using the same private key. The error is:
"Error waiting for SSH: ssh: handshake failed: ssh: unable to
authenticate, attempted methods [none publickey], no supported methods
remain"
Using "ssh_password" the OVA is created successfully, but my objective is to create an OVA using the private key.
{
  "builders": [{
    "type": "virtualbox-ovf",
    "source_path": "/root/Documents/OVA_idrsa.ova",
    "ssh_username": "support",
    "ssh_private_key_file": "id_rsa",
    "ssh_pty": "true",
    "ssh_port": 22,
    "vrdp_bind_address": "0.0.0.0",
    "guest_additions_mode": "disable",
    "virtualbox_version_file": "",
    "headless": true,
    "ssh_skip_nat_mapping": "true",
    "boot_wait": "120s",
    "ssh_wait_timeout": "1000s",
    "shutdown_command": ""
  }]
}
I have tried using ssh_password instead and it was successful, but with ssh_private_key_file the issue is recurrent.
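One thing worth checking (this is an assumption, since the key itself is not shown): Packer's built-in SSH client needs an unencrypted private key in a format it can parse, and depending on the Packer version the newer OpenSSH key format is rejected even though the host's ssh client accepts the same key. Note also that "id_rsa" is a relative path and is resolved from the directory packer is run in. A minimal format check:

# Minimal sketch: inspect the key header to see whether it is the newer
# OpenSSH format, which some Packer versions cannot parse (the relative
# path "id_rsa" is taken from the template and is an assumption).
with open("id_rsa") as f:
    header = f.readline().strip()

if "OPENSSH PRIVATE KEY" in header:
    # Can be rewritten in place as PEM with: ssh-keygen -p -m PEM -f id_rsa
    print("New OpenSSH key format:", header)
elif "RSA PRIVATE KEY" in header:
    print("PEM RSA key, which Packer should accept:", header)
else:
    print("Unexpected key header:", header)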

Related

Packer custom image build failed with ssh authentication error

I'm trying to build a custom image for an AWS EKS managed node group. Note: my custom image (Ubuntu) already has MFA and private-key-based authentication enabled.
I cloned the GitHub repository to build the EKS-related changes from the URL below:
git clone https://github.com/awslabs/amazon-eks-ami && cd amazon-eks-ami
Next I made a few changes to run the Makefile:
cat eks-worker-al2.json
{
  "variables": {
    "aws_region": "eu-central-1",
    "ami_name": "template",
    "creator": "{{env `USER`}}",
    "encrypted": "false",
    "kms_key_id": "",
    "aws_access_key_id": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_access_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "aws_session_token": "{{env `AWS_SESSION_TOKEN`}}",
    "binary_bucket_name": "amazon-eks",
    "binary_bucket_region": "eu-central-1",
    "kubernetes_version": "1.20",
    "kubernetes_build_date": null,
    "kernel_version": "",
    "docker_version": "19.03.13ce-1.amzn2",
    "containerd_version": "1.4.1-2.amzn2",
    "runc_version": "1.0.0-0.3.20210225.git12644e6.amzn2",
    "cni_plugin_version": "v0.8.6",
    "pull_cni_from_github": "true",
    "source_ami_id": "ami-12345678",
    "source_ami_owners": "00012345",
    "source_ami_filter_name": "template",
    "arch": null,
    "instance_type": null,
    "ami_description": "EKS Kubernetes Worker AMI with AmazonLinux2 image",
    "cleanup_image": "true",
    "ssh_interface": "",
    "ssh_username": "nandu",
    "ssh_private_key_file": "/home/nandu/.ssh/template_rsa.ppk",
    "temporary_security_group_source_cidrs": "",
    "security_group_id": "sg-08725678910",
    "associate_public_ip_address": "",
    "subnet_id": "subnet-01273896789",
    "remote_folder": "",
    "launch_block_device_mappings_volume_size": "4",
    "ami_users": "",
    "additional_yum_repos": "",
    "sonobuoy_e2e_registry": ""
After adding the user and the private key, the build fails with the error below.
Logs:
amazon-ebs: Error waiting for SSH: Packer experienced an authentication error when trying to connect via SSH. This can happen if your username/password are wrong. You may want to double-check your credentials as part of your debugging process. original error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain.
For me it was just a matter of changing the AWS region, or deleting the AWS region setting in the Packer template.
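Separately, and this is only a guess from the variables above: ssh_private_key_file points at a PuTTY .ppk file, and Packer's SSH library expects an OpenSSH/PEM private key, so the key may also need converting (for example with puttygen) before Packer can use it. A quick parse check with paramiko, which is similarly strict about key formats:

# Hypothetical check: try to parse the key the way an OpenSSH-style client
# would; a PuTTY .ppk file will fail and needs converting first.
import paramiko

try:
    paramiko.RSAKey.from_private_key_file("/home/nandu/.ssh/template_rsa.ppk")
    print("Key parsed as an RSA private key; the format should be usable")
except paramiko.ssh_exception.SSHException as exc:
    print("Could not parse the key as an OpenSSH/PEM RSA key:", exc)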

I have MySQL and Apache Superset set up on Docker, connected by a bridge network. What should the SQLAlchemy URI be?

I cloned the official Superset repository:
git clone https://github.com/apache/incubator-superset.git
then added the MySQL client to requirements-local.txt:
cd incubator-superset
touch ./docker/requirements-local.txt
echo "mysqlclient==1.4.6" >> ./docker/requirements-local.txt
docker-compose build --force-rm
docker-compose up -d
After that I created the MySQL container:
docker run --detach --network="incubator-superset_default" --name=vedasupersetmysql --env="MYSQL_ROOT_PASSWORD=vedashri" --publish 6603:3306 mysql
Then I connected MySQL to the Superset bridge. The bridge network is as follows:
docker inspect incubator-superset_default
[
    {
        "Name": "incubator-superset_default",
        "Id": "56db7b47ecf0867a2461dddb1219c64c1def8cd603fc9668d80338a477d77fdb",
        "Created": "2020-12-08T07:38:47.94934583Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "07a6e0d5d87ea3ccb353fa20a3562d8f59b00d2b7ce827f791ae3c8eca1621cc": {
                "Name": "superset_db",
                "EndpointID": "0dd4781290c67e3e202912cad576830eddb0139cb71fd348019298b245bc4756",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            },
            "096a98f22688107a689aa156fcaf003e8aaae30bdc3c7bc6fc08824209592a44": {
                "Name": "superset_worker",
                "EndpointID": "54614854caebcd9afd111fb67778c7c6fd7dd29fdc9c51c19acde641a9552e66",
                "MacAddress": "02:42:ac:13:00:05",
                "IPv4Address": "172.19.0.5/16",
                "IPv6Address": ""
            },
            "34e7fe6417b109fb9af458559e20ce1eaed1dc3b7d195efc2150019025393341": {
                "Name": "superset_init",
                "EndpointID": "49c580b22298237e51607ffa9fec56a7cf155065766b2d75fecdd8d91d024da7",
                "MacAddress": "02:42:ac:13:00:06",
                "IPv4Address": "172.19.0.6/16",
                "IPv6Address": ""
            },
            "5716e0e644230beef6b6cdf7945f3e8be908d7e9295eea5b1e5379495817c4d8": {
                "Name": "superset_app",
                "EndpointID": "bf22dab0714501cc003b1fa69334c871db6bade8816724779fca8eb81ad7089d",
                "MacAddress": "02:42:ac:13:00:04",
                "IPv4Address": "172.19.0.4/16",
                "IPv6Address": ""
            },
            "b09d2808853c54f66145ac43bfc38d4968d28d9870e2ce320982dd60968462d5": {
                "Name": "superset_node",
                "EndpointID": "70f00c6e0ebf54b7d3dfad1bb8e989bc9425c920593082362d8b282bcd913c5d",
                "MacAddress": "02:42:ac:13:00:07",
                "IPv4Address": "172.19.0.7/16",
                "IPv6Address": ""
            },
            "d08f8a2b090425904ea2bdc7a23b050a1327ccfe0e0b50360b2945ea39a07172": {
                "Name": "superset_cache",
                "EndpointID": "350fd18662e5c7c2a2d8a563c41513a62995dbe790dcbf4f08097f6395c720b1",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "e21469db533ad7a92b50c787a7aa026e939e4cf6d616e3e6bc895a64407c1eb7": {
                "Name": "vedasupersetmysql",
                "EndpointID": "d658c0224d070664f918644584460f93db573435c426c8d4246dcf03f993a434",
                "MacAddress": "02:42:ac:13:00:08",
                "IPv4Address": "172.19.0.8/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "incubator-superset",
            "com.docker.compose.version": "1.26.0"
        }
    }
]
How should I form the SQLAlchemy URI?
I have tried:
mysql://user:password@8088:6603/database-name
But it shows a connection error when I enter this URI.
If there is any related documentation, that would also help.
The issue was not related to Superset or the network. You configured the network correctly but did not set --default-authentication-plugin on the MySQL Docker image. Because of this, the error shown on the console was:
Plugin caching_sha2_password could not be loaded:
To reproduce:
from sqlalchemy import create_engine
engine = create_engine('mysql://root:sample@172.19.0.5/mysql')
engine.connect()
error logs:
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1045, 'Plugin caching_sha2_password could not be loaded: /usr/lib/x86_64-linux-gnu/mariadb19/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory')
(Background on this error at: http://sqlalche.me/e/13/e3q8)
To resolve the issue, run the MySQL container with default-authentication-plugin set to mysql_native_password:
docker run --detach --network="incubator-superset_default" --name=mysql --env="MYSQL_ROOT_PASSWORD=sample" --publish 3306:3306 mysql --default-authentication-plugin=mysql_native_password
Superset already runs on a user-defined bridge network, so you can use either format:
mysql://root:sample@mysql/mysql
mysql://root:sample@172.19.0.5/mysql
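As a quick sanity check from the host, here is a short sketch; it assumes the docker run command above (published port 3306, root password sample) and that the mysqlclient driver is installed wherever you run it:

# Connect through the port published by the MySQL container above and print
# the server version; if this works, the remaining issue is only the URI
# used inside Superset.
from sqlalchemy import create_engine

engine = create_engine("mysql://root:sample@127.0.0.1:3306/mysql")
with engine.connect() as conn:
    print(conn.execute("SELECT VERSION()").scalar())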
I don't have a lot of experience with Docker, but I don't think you should use 8088 as the host for your MySQL database.
Try using mysql://user:password@172.19.0.8:6603/database-name as the URI.

PM2-health: can I use the pm2-health module for sending email alerts/notifications?

I have a Node.js application which runs on PM2, and I need to be able to send email notifications whenever a crash/restart occurs. My idea is to monitor the application for crashes and trigger a mail action from pm2-health. The documentation of the pm2-health module is here, but I'm unable to use it for sending email alerts. Can anyone explain how to use it for this purpose?
P.S.: It would also be great if you could explain the SMTP configuration for Gmail. (I have configured Postfix to use Gmail SMTP according to this, and it works fine for a test Gmail account, but it doesn't work with pm2-health.)
This is how I could get pm2-health working with my Gmail account:
Install pm2-health module:
pm2 install pm2-health
Open PM2 module config file:
vim ~/.pm2/module_conf.json
Update it with the Gmail account’s SMTP parameters:
{
  "pm2-health": {
    "smtp": {
      "host": "smtp.gmail.com",
      "port": 465,
      "user": "EXAMPLE_sender@gmail.com",
      "password": "PASSWORD",
      "secure": true,
      "disabled": false
    },
    "mailTo": "NOTIFICATION_RECIPIENT_EMAIL_ADDRESS",
    "replyTo": "EXAMPLE_SENDER@gmail.com",
    "events": [
      "exit"
    ],
    "exceptions": true,
    "messages": true,
    "messageExcludeExps": [],
    "metric": {},
    "metricIntervalS": 60,
    "aliveTimeoutS": 300,
    "addLogs": false,
    "appsExcluded": [],
    "snapshot": {
      "url": "",
      "token": "",
      "auth": {
        "user": "",
        "password": ""
      },
      "disabled": false
    }
  },
  "module-db-v2": {
    "pm2-health": {}
  }
}
Save and close it
Restart pm2-health:
pm2 restart pm2-health
Test it by restarting one of your PM2-managed Node processes. You should receive an email about that event.
For anyone trying to use this with 2FA-enabled Gmail, you need to use an App Password. More information here: https://support.google.com/accounts/answer/185833
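If mail still does not go out, it can help to confirm the SMTP credentials themselves, independent of pm2-health. A small sketch (addresses and password are placeholders matching the config above; with 2FA the password must be an App Password):

# Send a single test message through Gmail's SMTP over SSL (port 465), using
# the same values as the pm2-health "smtp" block; all addresses are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "pm2-health SMTP test"
msg["From"] = "EXAMPLE_sender@gmail.com"
msg["To"] = "NOTIFICATION_RECIPIENT_EMAIL_ADDRESS"
msg.set_content("If this arrives, the SMTP settings themselves are fine.")

with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
    server.login("EXAMPLE_sender@gmail.com", "PASSWORD")
    server.send_message(msg)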

Connection to RDS MySQL from an ECS Fargate WordPress container times out

I have a running container (a WordPress container, to be more specific) which tries to connect to a MySQL RDS instance.
Parameters for the Fargate ECS service container:
{
  "executionRoleArn": "ignore-this",
  "containerDefinitions": [
    {
      "name": "MyCoolContainer",
      "image": "wordpress:latest",
      "essential": true,
      "environment": [
        {"name": "WORDPRESS_DB_HOST", "value": "host:3306"},
        {"name": "WORDPRESS_DB_USER", "value": "user"},
        {"name": "WORDPRESS_DB_PASSWORD", "value": "password"},
        {"name": "WORDPRESS_DB_NAME", "value": "name"}
      ],
      "portMappings": [
        {
          "hostPort": 80,
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/aws/ecs/fargate/prefix",
          "awslogs-region": "eu-west-1",
          "awslogs-stream-prefix": "prefix"
        }
      }
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "family": "wordpress"
}
Also, in the security groups I have opened ports 22, 80, 443 and 3306 to any IP address.
But the container in ECS still fails to start, with the reason:
[17-Sep-2019 08:42:24 UTC] PHP Warning: mysqli::__construct():
(HY000/2002): Connection timed out in Standard input code on line 22
MySQL Connection Error: (2002) Connection timed out
MySQL Connection Error: (2002) Connection timed out
However, I can confirm that the RDS instance is accessible when connecting from a local machine with the command:
mysql -uuser -ppassword -hhost -P3306
I can also confirm that the (WordPress) container runs successfully on a local machine and connects to the remote RDS database with no timeouts.
EDIT
This is what my environment looks like in the ECS UI panel:
(I have tried copy-pasting these values into my local mysql command and it connected successfully.)
I suspect there is something wrong with the AWS services configuration. Any ideas?
Thanks to Adiii and some other articles found on the internet, I have a complete solution to this problem.
You simply need to attach a NAT Gateway to the subnet in which you are launching your ECS Fargate task.
Simply launching in a public subnet with an Internet Gateway, for some weird reason, does not solve the problem (even though logically it should).
TL;DR:
NAT Gateway is needed. AWS is f****d up.
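For reference, a minimal boto3 sketch of that fix (all IDs are placeholders; the NAT Gateway itself goes in a public subnet, and the route is added to the route table of the subnet the Fargate task runs in):

# Create a NAT Gateway in a public subnet and route the Fargate task's subnet
# through it; subnet, allocation and route table IDs below are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

nat = ec2.create_nat_gateway(
    SubnetId="subnet-PUBLIC-EXAMPLE",    # public subnet with an Internet Gateway route
    AllocationId="eipalloc-EXAMPLE",     # Elastic IP allocation for the NAT Gateway
)["NatGateway"]

ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat["NatGatewayId"]])

ec2.create_route(
    RouteTableId="rtb-PRIVATE-EXAMPLE",  # route table of the Fargate task's subnet
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)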

CannotStartContainerError while submitting an AWS Batch job

In AWS Batch I have a job definition, a job queue and a compute environment in which to execute my AWS Batch jobs.
After submitting a job, I find it in the list of failed jobs with this error:
Status reason
Essential container in task exited
Container message
CannotStartContainerError: API error (404): oci runtime error: container_linux.go:247: starting container process caused "exec: \"/var/application/script.sh --file= --key=.
and in the CloudWatch logs I have:
container_linux.go:247: starting container process caused "exec: \"/var/application/script.sh --file=Toulouse.json --key=out\": stat /var/application/script.sh --file=Toulouse.json --key=out: no such file or directory"
I have specified a correct Docker image that has all the scripts (we already use it and it works), and I don't know where the error is coming from.
Any suggestions are appreciated.
The Dockerfile is something like this:
# Pull base image.
FROM account-id.dkr.ecr.region.amazonaws.com/application-image.base-php7-image:latest
VOLUME /tmp
VOLUME /mount-point
RUN chown -R ubuntu:ubuntu /var/application
# Create the source directories
USER ubuntu
COPY application/ /var/application
# Register aws profile
COPY data/aws /home/ubuntu/.aws
WORKDIR /var/application/
ENV COMPOSER_CACHE_DIR /tmp
RUN composer update -o && \
rm -Rf /tmp/*
Here is the Job Definition:
{
  "jobDefinitionName": "JobDefinition",
  "jobDefinitionArn": "arn:aws:batch:region:accountid:job-definition/JobDefinition:25",
  "revision": 21,
  "status": "ACTIVE",
  "type": "container",
  "parameters": {},
  "retryStrategy": {
    "attempts": 1
  },
  "containerProperties": {
    "image": "account-id.dkr.ecr.region.amazonaws.com/application-dev:latest",
    "vcpus": 1,
    "memory": 512,
    "command": [
      "/var/application/script.sh",
      "--file=",
      "Ref::file",
      "--key=",
      "Ref::key"
    ],
    "volumes": [
      {
        "host": {
          "sourcePath": "/mount-point"
        },
        "name": "logs"
      },
      {
        "host": {
          "sourcePath": "/var/log/php/errors.log"
        },
        "name": "php-errors-log"
      },
      {
        "host": {
          "sourcePath": "/tmp/"
        },
        "name": "tmp"
      }
    ],
    "environment": [
      {
        "name": "APP_ENV",
        "value": "dev"
      }
    ],
    "mountPoints": [
      {
        "containerPath": "/tmp/",
        "readOnly": false,
        "sourceVolume": "tmp"
      },
      {
        "containerPath": "/var/log/php/errors.log",
        "readOnly": false,
        "sourceVolume": "php-errors-log"
      },
      {
        "containerPath": "/mount-point",
        "readOnly": false,
        "sourceVolume": "logs"
      }
    ],
    "ulimits": []
  }
}
In the CloudWatch log stream /var/log/docker:
time="2017-06-09T12:23:21.014547063Z" level=error msg="Handler for GET /v1.17/containers/4150933a38d4f162ba402a3edd8b7763c6bbbd417fcce232964e4a79c2286f67/json returned error: No such container: 4150933a38d4f162ba402a3edd8b7763c6bbbd417fcce232964e4a79c2286f67"
This error occurred because the command was malformed. I was submitting the job from a Lambda function (Python 2.7) using boto3, and the command needs to be passed as a list of separate strings, something like this:
'command' : ['sudo','mkdir','directory']
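For reference, a minimal boto3 sketch of submitting this particular job with the command passed as a list (the queue name and region are placeholders):

# Submit the job with the command as a list of separate strings; passing the
# whole command line as one string is what produces the "no such file or
# directory" error above.
import boto3

batch = boto3.client("batch", region_name="eu-west-1")  # placeholder region

batch.submit_job(
    jobName="script-run",
    jobQueue="my-job-queue",        # placeholder queue name
    jobDefinition="JobDefinition",  # latest active revision of the definition above
    containerOverrides={
        "command": ["/var/application/script.sh", "--file=Toulouse.json", "--key=out"]
    },
)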
Hope it helps somebody.