Packer ssh_private_key_file is invalid: Failed to read key c:/xxxxx: no key found - packer

I use Packer from HashiCorp to create VMs.
I ran into the following problem:
packer build jessie64_hv.json
virtualbox-iso output will be in this color.
1 error(s) occurred:
ssh_private_key_file is invalid: Failed to read key 'C:/users/xxxx/test_key.ppk': no key found
Part of the JSON file:
"type": "virtualbox-iso",
"guest_os_type": "Debian_64",
"guest_additions_mode": "disable",
"headless": "{{user `HEADLESS`}}",
"disk_size": "{{user `DISK_SIZE`}}",
"http_directory": "http",
"iso_url": "{{user `ISO_URL`}}",
"iso_checksum": "{{user `ISO_CHECKSUM`}}",
"iso_checksum_type": "{{user `ISO_CHECKSUM_TYPE`}}",
"ssh_port": 22,
"ssh_private_key_file": "C:/users/xxxxx/test_key.ppk",
"ssh_username": "root",
"ssh_wait_timeout": "10000s",
"shutdown_command": "echo 'vagrant' | sudo -S /sbin/shutdown -hP now",
"vm_name": "{{user `VM_NAME`}}",

You can't use .ppk keys; they are specific to PuTTY. Packer can only read standard OpenSSH keys.
To convert your key, run something like:
puttygen privatekey.ppk -O private-openssh -o privatekey.pem
Then point ssh_private_key_file at the resulting privatekey.pem.
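As a rough sketch, assuming the converted key is saved alongside the original (the exact path is illustrative), the builder line from the question would then become:
"ssh_private_key_file": "C:/users/xxxxx/privatekey.pem",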
For more info see: Converting a ppk to pem

Related

Using packer and type qemu in the json file to create a guest kvm vm, but ssh timeout error coming

I have RHEL 8.5 as the KVM host. I want to create a guest VM through the Packer qemu builder, and I have a JSON file where all the configuration is defined.
{
  "builders": [
    {
      "type": "qemu",
      "iso_url": "/var/lib/libvirt/images/test.iso",
      "iso_checksum": "md5:3959597d89e8c20d58c4514a7cf3bc7f",
      "output_directory": "/var/lib/libvirt/images/iso-dir/test",
      "disk_size": "55G",
      "headless": "true",
      "qemuargs": [
        ["-m", "4096"],
        ["-smp", "2"]
      ],
      "format": "qcow2",
      "shutdown_command": "echo 'siedgerexuser' | sudo -S shutdown -P now",
      "accelerator": "kvm",
      "ssh_username": "nonrootuser",
      "ssh_password": "********",
      "ssh_timeout": "20m",
      "vm_name": "test",
      "net_device": "virtio-net",
      "disk_interface": "virtio",
      "http_directory": "/home/azureuser/http",
      "boot_wait": "10s",
      "boot_command": [
        "e inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/anaconda-ks.cfg"
      ]
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "/home/azureuser/service_status_check.sh",
      "destination": "/tmp/service_status_check.sh"
    },
    {
      "type": "file",
      "source": "/home/azureuser/service_check.sh",
      "destination": "/tmp/service_check.sh"
    },
    {
      "type": "file",
      "source": "/home/azureuser/azure.sh",
      "destination": "/tmp/azure.sh"
    },
    {
      "type": "file",
      "source": "/home/azureuser/params.cfg",
      "destination": "/tmp/params.cfg"
    },
    {
      "type": "shell",
      "execute_command": "echo 'siedgerexuser' | {{.Vars}} sudo -E -S bash '{{.Path}}'",
      "inline": [
        "echo copying",
        "cp /tmp/params.cfg /root/",
        "sudo ls -lrt /root/params.cfg",
        "sudo ls -lrt /opt/scripts/"
      ],
      "inline_shebang": "/bin/sh -x"
    },
    {
      "type": "shell",
      "pause_before": "5s",
      "expect_disconnect": true,
      "inline": [
        "echo runningconfigurescript",
        "sudo sh /opt/scripts/configure-env.sh"
      ]
    },
    {
      "type": "shell",
      "pause_before": "200s",
      "inline": [
        "sudo sh /tmp/service_check.sh",
        "sudo sh /tmp/azure.sh"
      ]
    }
  ]
}
This works fine on RHEL 7.9, but the same template gives an SSH timeout error on RHEL 8.4.
When I create a guest VM with virt-install, the VM is created and I can see it in the Cockpit web UI. But when I run packer build, the guest VM never appears in the Cockpit UI while Packer waits for SSH and eventually times out, so I cannot see where the guest VM gets stuck.
Can anyone please help me fix this issue?
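Not from the original post, but one way to get more visibility into where the guest hangs is to temporarily turn off headless mode (so the VM console can be watched, if a graphical session is available on the KVM host) and enable Packer's debug logging; the template file name here is illustrative:
# in the builder, temporarily set "headless": "false", then run:
PACKER_LOG=1 packer build guest-vm.json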

Packer custom image build failed with ssh authentication error

I'm trying to build a custom image for an AWS EKS managed node group. Note: my custom image (Ubuntu) already has MFA and private-key-based authentication enabled.
I cloned the GitHub repository below to build the EKS-related changes.
git clone https://github.com/awslabs/amazon-eks-ami && cd amazon-eks-ami
Next, I made a few changes so I could run the Makefile:
cat eks-worker-al2.json
{
  "variables": {
    "aws_region": "eu-central-1",
    "ami_name": "template",
    "creator": "{{env `USER`}}",
    "encrypted": "false",
    "kms_key_id": "",
    "aws_access_key_id": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_access_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "aws_session_token": "{{env `AWS_SESSION_TOKEN`}}",
    "binary_bucket_name": "amazon-eks",
    "binary_bucket_region": "eu-central-1",
    "kubernetes_version": "1.20",
    "kubernetes_build_date": null,
    "kernel_version": "",
    "docker_version": "19.03.13ce-1.amzn2",
    "containerd_version": "1.4.1-2.amzn2",
    "runc_version": "1.0.0-0.3.20210225.git12644e6.amzn2",
    "cni_plugin_version": "v0.8.6",
    "pull_cni_from_github": "true",
    "source_ami_id": "ami-12345678",
    "source_ami_owners": "00012345",
    "source_ami_filter_name": "template",
    "arch": null,
    "instance_type": null,
    "ami_description": "EKS Kubernetes Worker AMI with AmazonLinux2 image",
    "cleanup_image": "true",
    "ssh_interface": "",
    "ssh_username": "nandu",
    "ssh_private_key_file": "/home/nandu/.ssh/template_rsa.ppk",
    "temporary_security_group_source_cidrs": "",
    "security_group_id": "sg-08725678910",
    "associate_public_ip_address": "",
    "subnet_id": "subnet-01273896789",
    "remote_folder": "",
    "launch_block_device_mappings_volume_size": "4",
    "ami_users": "",
    "additional_yum_repos": "",
    "sonobuoy_e2e_registry": ""
After adding the user and the private key, the build fails with the error below.
Logs:
amazon-ebs: Error waiting for SSH: Packer experienced an authentication error when trying to connect via SSH. This can happen if your username/password are wrong. You may want to double-check your credentials as part of your debugging process. original error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain.
For me, the fix was simply to change the AWS region, or to delete the AWS region setting in Packer.
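A sketch of what that change might look like in the variables block above; the new value is purely illustrative and would need to be a region where the source AMI, subnet and security group actually exist:
"aws_region": "eu-west-1",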

Packer SSH timeout when setting custom VPC, subnet and security group

So I needed to move my Packer builders inside a private VPC and add a locked-down security group that only allowed SSH from a restricted range of IPs, thus:
"builders": [{
"type": "amazon-ebs",
"associate_public_ip_address": false,
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "{{user `aws_region`}}",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "{{user `ami_source_name`}}",
"root-device-type": "ebs"
},
"owners": ["{{user `ami_source_owner_id`}}"],
"most_recent": true
},
"instance_type": "t3.small",
"iam_instance_profile": "{{user `iam_instance_profile`}}",
"ssh_username": "{{user `ssh_username`}}",
"ami_name": "{{user `ami_name_prefix`}}_{{user `ami_creation_date`}}",
"ami_users": "{{user `share_amis_with_account`}}",
"ebs_optimized": true,
"vpc_id": "vpc-123456",
"subnet_id": "subnet-123456",
"security_group_id": "sg-123456",
"user_data_file": "scripts/disable_tty.sh",
"launch_block_device_mappings": [{
"device_name": "{{user `root_device_name`}}",
"volume_size": 10,
"volume_type": "gp2",
"delete_on_termination": true
}],
"tags": {
"packer": "true",
"ansible_role": "{{user `ansible_role`}}",
"builtby": "{{user `builtby`}}",
"ami_name": "{{user `ami_name_prefix`}}_{{user `ami_creation_date`}}",
"ami_name_prefix": "{{user `ami_name_prefix`}}",
"project": "{{user `project`}}"
}
}]
To start with, I added "associate_public_ip_address": false (false is also the default), because every time I ran Packer the host was assigned a public IP address. Even with that set, it still picks up a public IP.
I used a security group that I had already assigned to Jenkins build slaves, which also communicate over port 22, and I haven't had any issue accessing them from any part of my infrastructure.
I get this error:
1562344256,,ui,error,Build 'amazon-ebs' errored: Timeout waiting for SSH.
1562344256,,error-count,1
1562344256,,ui,error,\n==> Some builds didn't complete successfully and had errors:
1562344256,amazon-ebs,error,Timeout waiting for SSH.
1562344256,,ui,error,--> amazon-ebs: Timeout waiting for SSH.
During the period where Packer was waiting for SSH to respond, I was able to run nc -v 1.2.3.5 22 and got a connection, so the security group is allowing communication on port 22 from my IP address.
If I change the security group to 0.0.0.0/0 it connects straight away. But if I can nc to port 22 with the restricted security group, why can Packer not initiate an SSH connection? Is Packer trying to use the public IP address that I cannot, for the life of me, turn off?
I thought it might be helpful to tcpdump the traffic on port 22 to see what was happening, but I have a locked-down laptop that does not allow the installation of that particular handy item.
I can also SSH to the builder from my laptop, but I get a "Too many authentication failures" error and can't log in to see what is going on.
So the reason the Packer builder is getting a public IP comes down to the subnet settings: map_public_ip_on_launch = true.
The answer is to build a new private subnet for the Packer builder, create a NAT gateway in the public subnet, and then route from the private subnet to the NAT gateway with a new route table, roughly as sketched below.
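Not part of the original answer, but an AWS CLI sketch of that setup, with all IDs and CIDR blocks illustrative:
# new private subnet for the Packer builder
aws ec2 create-subnet --vpc-id vpc-123456 --cidr-block 10.0.20.0/24
# NAT gateway in an existing public subnet (needs an Elastic IP allocation)
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-public123 --allocation-id eipalloc-abc123
# route table for the private subnet, with a default route via the NAT gateway
aws ec2 create-route-table --vpc-id vpc-123456
aws ec2 create-route --route-table-id rtb-abc123 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-abc123
aws ec2 associate-route-table --route-table-id rtb-abc123 --subnet-id subnet-private123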

unable to create an OVA in virtualbox using packer with private_key authentication

I am unable to create an OVA using Packer in VirtualBox with id_rsa. From the host machine I am able to SSH to the vbox host using the same private key. The error is:
"Error waiting for SSH: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain"
Using ssh_password the OVA is created successfully, but my objective is to create an OVA using the private key.
{
  "builders": [{
    "type": "virtualbox-ovf",
    "source_path": "/root/Documents/OVA_idrsa.ova",
    "ssh_username": "support",
    "ssh_private_key_file": "id_rsa",
    "ssh_pty": "true",
    "ssh_port": 22,
    "vrdp_bind_address": "0.0.0.0",
    "guest_additions_mode": "disable",
    "virtualbox_version_file": "",
    "headless": true,
    "ssh_skip_nat_mapping": "true",
    "boot_wait": "120s",
    "ssh_wait_timeout": "1000s",
    "shutdown_command": ""
  }]
}
I have tried using ssh_password instead, and it was successful. But with the private key file the issue is recurrent.
Error:
"Error waiting for SSH: ssh: handshake failed: ssh: unable to
authenticate, attempted methods [none publickey], no supported methods
remain"

Fail to create ElasticBeanstalk custom platform with "Unmatched region"

I'm trying to create a custom platform for region ap-northeast-1, following the AWS documentation.
ebp create ends with a failure, and ebp events shows an error indicating that the created AMI is in a different region from the service region.
2018-04-28 00:49:18 INFO Initiated platform version creation for 'NodePlatform_Ubuntu/1.0.0'.
2018-04-28 00:49:22 INFO Creating Packer builder environment 'eb-custom-platform-builder-packer'.
2018-04-28 00:52:39 INFO Starting Packer building task.
2018-04-28 00:52:44 INFO Creating CloudWatch log group '/aws/elasticbeanstalk/platform/NodePlatform_Ubuntu'.
2018-04-28 01:03:48 INFO Successfully built AMI(s): 'ami-5f2f4527' for 'arn:aws:elasticbeanstalk:ap-northeast-1:392559473945:platform/NodePlatform_Ubuntu/1.0.0'
2018-04-28 01:04:03 ERROR Unmatched region for created AMI 'ami-5f2f4527': 'us-west-2' (service region: 'ap-northeast-1').
2018-04-28 01:04:03 INFO Failed to create platform version 'NodePlatform_Ubuntu/1.0.0'.
I used the sample custom platform provided in the AWS documentation and modified only custom_platform.json, changing builders.region and builders.source_ami to match the region of my Custom Platform Builder.
.elasticbeanstalk/config.yml
global:
  application_name: Custom Platform Builder
  branch: null
  default_ec2_keyname: null
  default_platform: null
  default_region: ap-northeast-1
  instance_profile: null
  platform_name: NodePlatform_Ubuntu
  platform_version: null
  profile: eb-cli
  repository: null
  sc: git
  workspace_type: Platform
custom_platform.json
{
  "variables": {
    "platform_name": "{{env `AWS_EB_PLATFORM_NAME`}}",
    "platform_version": "{{env `AWS_EB_PLATFORM_VERSION`}}",
    "platform_arn": "{{env `AWS_EB_PLATFORM_ARN`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "name": "HVM AMI builder",
      "region": "ap-northeast-1",
      "source_ami": "ami-60a4b21c",
      "instance_type": "m3.medium",
      "ssh_username": "ubuntu",
      "ssh_pty": "true",
      "ami_name": "NodeJs running on Ubuntu Server 16.04 LTS (built on {{isotime \"20060102150405\"}})",
      "tags": {
        "eb_platform_name": "{{user `platform_name`}}",
        "eb_platform_version": "{{user `platform_version`}}",
        "eb_platform_arn": "{{user `platform_arn`}}"
      }
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "builder",
      "destination": "/tmp/"
    },
    {
      "type": "shell",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo {{ .Path }}",
      "scripts": [
        "builder/builder.sh"
      ]
    }
  ]
}
It seemed my modifications to custom_platform.json did not take effect.
What I missed was committing the changes...
Though the EB and Packer documentation do not mention anything about VCS or git, it seems Packer uses git to create an archive of the custom platform files, so the changes I made were not included because I had not committed them.
I noticed that ebp create was giving me this warning...
mac.local:NodePlatform_Ubuntu% ebp create
WARNING: You have uncommitted changes.
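Not spelled out in the original answer, but the fix follows directly: commit the modified platform files before creating the platform version again (the commit message is illustrative):
git add custom_platform.json
git commit -m "Use ap-northeast-1 region and matching source AMI"
ebp create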