Error creating artifact: resource not found - packer

I'm new to Packer and I'd like some help with a problem I'm having. I can't seem to find any information about this error. It occurs both on my local computer and in Atlas after a push.
Running Packer v0.8.7.dev.
Please give me a helping hand!
Error:
==> virtualbox-iso: Running post-processor: atlas
virtualbox-iso (atlas): Creating artifact: /
Build 'virtualbox-iso' errored: 1 error(s) occurred:
* Post-processor failed: Error creating artifact: resource not found
==> Some builds didn't complete successfully and had errors:
--> virtualbox-iso: 1 error(s) occurred:
* Post-processor failed: Error creating artifact: resource not found
==> Builds finished but no artifacts were created.
Configuration:
{
"push": {
"name": "",
"vcs": true
},
"variables": {
"atlas_username": "{{env `ATLAS_USERNAME`}}",
"atlas_name": "{{env `ATLAS_NAME`}}"
},
"provisioners": [
{
"type": "shell",
"scripts": [
"scripts/base.sh",
"scripts/virtualbox.sh",
"scripts/vmware.sh",
"scripts/vagrant.sh",
"scripts/dep.sh",
"scripts/cleanup.sh",
"scripts/zerodisk.sh",
"scripts/custom.sh"
],
"override": {
"virtualbox-iso": {
"execute_command": "echo 'vagrant'|sudo -S bash '{{.Path}}'"
},
"vmware-iso": {
"execute_command": "echo 'vagrant'|sudo -S bash '{{.Path}}'"
}
}
}
],
"builders": [
{
"type": "virtualbox-iso",
"boot_command": [
"<esc><esc><enter><wait>",
"/install/vmlinuz noapic preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg ",
"debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
"hostname={{ .Name }} ",
"fb=false debconf/frontend=noninteractive ",
"keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA keyboard-configuration/variant=USA console-setup/ask_detect=false ",
"initrd=/install/initrd.gz -- <enter>"
],
"headless": false,
"boot_wait": "10s",
"disk_size": 20480,
"guest_os_type": "Ubuntu_64",
"http_directory": "http",
"iso_checksum": "c2571c4c2fc17bef1fad9e5db5e7afdb4bd29cd8ab51e42f9c036238c4e54caa",
"iso_checksum_type": "sha256",
"iso_url": "http://ftp.acc.umu.se/mirror/cdimage.ubuntu.com/releases/14.04/release/ubuntu-14.04.3-server-amd64+mac.iso",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"ssh_port": 22,
"ssh_wait_timeout": "10000s",
"shutdown_command": "echo '/sbin/halt -h -p' > shutdown.sh; echo 'vagrant'|sudo -S bash 'shutdown.sh'",
"guest_additions_path": "VBoxGuestAdditions_{{.Version}}.iso",
"virtualbox_version_file": ".vbox_version"
},
{
"type": "vmware-iso",
"boot_command": [
"<esc><esc><enter><wait>",
"/install/vmlinuz noapic preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg ",
"debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
"hostname={{ .Name }} ",
"fb=false debconf/frontend=noninteractive ",
"keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA keyboard-configuration/variant=USA console-setup/ask_detect=false ",
"initrd=/install/initrd.gz -- <enter>"
],
"boot_wait": "10s",
"disk_size": 20480,
"guest_os_type": "Ubuntu-64",
"headless": true,
"http_directory": "http",
"iso_checksum": "af224223de99e2a730b67d7785b657f549be0d63221188e105445f75fb8305c9",
"iso_checksum_type": "sha256",
"iso_url": "http://releases.ubuntu.com/precise/ubuntu-12.04.5-server-amd64.iso",
"skip_compaction": true,
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"ssh_port": 22,
"ssh_wait_timeout": "10000s",
"shutdown_command": "echo '/sbin/halt -h -p' > shutdown.sh; echo 'vagrant'|sudo -S bash 'shutdown.sh'",
"tools_upload_flavor": "linux"
}
],
"post-processors": [
[{
"type": "vagrant",
"keep_input_artifact": true
},
{
"type": "atlas",
"only": ["vmware-iso"],
"artifact": "{{user `atlas_username`}}/{{user `atlas_name`}}",
"artifact_type": "vagrant.box",
"metadata": {
"provider": "vmware_desktop",
"version": "0.0.1"
}
},
{
"type": "atlas",
"only": ["virtualbox-iso"],
"artifact": "{{user `atlas_username`}}/{{user `atlas_name`}}",
"artifact_type": "vagrant.box",
"metadata": {
"provider": "virtualbox",
"version": "0.0.1"
}
}]
]
}

You get
virtualbox-iso (atlas): Creating artifact: /
so it seems the variables from
"variables": {
"atlas_username": "{{env `ATLAS_USERNAME`}}",
"atlas_name": "{{env `ATLAS_NAME`}}"
},
are not set correctly. Make sure they are set, or try hardcoding them at first.
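For example, you can export and verify the environment variables in the shell before running the build (a minimal sketch; template.json is a placeholder for your template file):
export ATLAS_USERNAME=your-atlas-username
export ATLAS_NAME=your-box-name
env | grep ATLAS_            # confirm both variables are set
packer validate template.json
packer build template.json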
Also, as a side note, the latest docs recommend using an atlas_token to access Atlas:
"post-processors": [
{
"type": "atlas",
"only": ["virtualbox-iso"],
"token": "{{user `atlas_token`}}",
"artifact": "hashicorp/foobar",
"artifact_type": "vagrant.box",
"metadata": {
"provider": "virtualbox",
"version": "0.0.1"
"created_at": "{{timestamp}}"
}
}
]
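Since the example reads the token with {{user `atlas_token`}}, declare it in the "variables" section the same way atlas_username is declared above, for instance "atlas_token": "{{env `ATLAS_TOKEN`}}", and export the environment variable before building (a sketch; the value is a placeholder):
export ATLAS_TOKEN=your-atlas-token
packer build template.json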

Related

Using Packer and type qemu in the JSON file to create a guest KVM VM, but getting an SSH timeout error

I have RHEL 8.5 as the KVM host. I want to create a guest VM through the Packer qemu builder, and I have a JSON file where all the configuration is defined.
{
"builders": [
{
"type": "qemu",
"iso_url": "/var/lib/libvirt/images/test.iso",
"iso_checksum": "md5:3959597d89e8c20d58c4514a7cf3bc7f",
"output_directory": "/var/lib/libvirt/images/iso-dir/test",
"disk_size": "55G",
"headless": "true",
"qemuargs": [
[
"-m",
"4096"
],
[
"-smp",
"2"
]
],
"format": "qcow2",
"shutdown_command": "echo 'siedgerexuser' | sudo -S shutdown -P now",
"accelerator": "kvm",
"ssh_username": "nonrootuser",
"ssh_password": "********",
"ssh_timeout": "20m",
"vm_name": "test",
"net_device": "virtio-net",
"disk_interface": "virtio",
"http_directory": "/home/azureuser/http",
"boot_wait": "10s",
"boot_command": [
"e inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/anaconda-ks.cfg"
]
}
],
"provisioners":
[
{
"type": "file",
"source": "/home/azureuser/service_status_check.sh",
"destination": "/tmp/service_status_check.sh"
},
{
"type": "file",
"source": "/home/azureuser/service_check.sh",
"destination": "/tmp/service_check.sh"
},
{
"type": "file",
"source": "/home/azureuser/azure.sh",
"destination": "/tmp/azure.sh"
},
{
"type": "file",
"source": "/home/azureuser/params.cfg",
"destination": "/tmp/params.cfg"
},
{
"type": "shell" ,
"execute_command": "echo 'siedgerexuser' | {{.Vars}} sudo -E -S bash '{{.Path}}'",
"inline": [
"echo copying" , "cp /tmp/params.cfg /root/",
"sudo ls -lrt /root/params.cfg",
"sudo ls -lrt /opt/scripts/"
],
"inline_shebang": "/bin/sh -x"
},
{
"type": "shell",
"pause_before": "5s",
"expect_disconnect": true ,
"inline": [
"echo runningconfigurescript" , "sudo sh /opt/scripts/configure-env.sh"
]
},
{
"type": "shell",
"pause_before": "200s",
"inline": [
"sudo sh /tmp/service_check.sh",
"sudo sh /tmp/azure.sh"
]
}
]
}
This works fine on RHEL 7.9, but the same configuration gives an SSH timeout error on RHEL 8.4.
When I create a guest VM with virt-install, the VM is created and I can see it in the Cockpit web UI. When I start a packer build, however, the build fails with an SSH timeout and the VM is never visible in the Cockpit UI, so I can't see where the guest VM gets stuck.
Can anyone please help me fix this issue?
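One way to see where the guest gets stuck is to run the build with verbose logging, or with breakpoints so you can inspect the VM between steps (a minimal sketch; template.json is a placeholder for your template file, and setting "headless": "false" only helps if the KVM host has a graphical display):
PACKER_LOG=1 packer build template.json
packer build -debug template.json    # pauses before each step and waits for a keypress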

pm2 logs are disappearing after 5 days

Even though I specify "numBackups" as "90", I'm only seeing 5 log files; older ones are being removed.
This is how I start the server:
pm2 start server.js --name app_server --log log/app.log --time
This is my full log4js.json
{
"appenders": {
"server": {
"type": "file",
"filename": "log/app.log",
"pattern": "yyyy-MM-dd",
"numBackups": "90",
"compress": true
}
},
"categories": {
"default": {
"appenders": [
"server"
],
"level": "DEBUG"
}
}
}
The numBackups option has been replaced with the backups attribute, so use "backups": 90 in the appender configuration. The default for backups is 5, which would explain why only five log files are kept.

g++ not detected in VS Code

I'm trying to compile C++ inside VS Code.
I have MinGW installed.
I've followed the steps in this video: https://www.youtube.com/watch?v=rFdJ68WbkdQ
And the steps at the "getting started" docs https://code.visualstudio.com/docs/languages/cpp
Currently, my config looks like this:
{
"configurations": [
{
"name": "Mac",
"includePath": [
"/usr/include",
"/usr/local/include",
"${workspaceRoot}"
],
"defines": [],
"intelliSenseMode": "clang-x64",
"browse": {
"path": [
"/usr/include",
"/usr/local/include",
"${workspaceRoot}"
],
"limitSymbolsToIncludedHeaders": true,
"databaseFilename": ""
},
"macFrameworkPath": [
"/System/Library/Frameworks",
"/Library/Frameworks"
]
},
{
"name": "Linux",
"includePath": [
"/usr/include",
"/usr/local/include",
"${workspaceRoot}"
],
"defines": [],
"intelliSenseMode": "clang-x64",
"browse": {
"path": [
"/usr/include",
"/usr/local/include",
"${workspaceRoot}"
],
"limitSymbolsToIncludedHeaders": true,
"databaseFilename": ""
}
},
{
"name": "Win32",
"includePath": [
"C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/include/*",
"C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.11.25503/atlmfc/include/*",
"C:/Program Files (x86)/Windows Kits/10/Include/10.0.16299.0/um",
"C:/Program Files (x86)/Windows Kits/10/Include/10.0.16299.0/ucrt",
"C:/Program Files (x86)/Windows Kits/10/Include/10.0.16299.0/shared",
"C:/Program Files (x86)/Windows Kits/10/Include/10.0.16299.0/winrt",
"${workspaceRoot}"
],
"defines": [
"_DEBUG",
"UNICODE"
],
"intelliSenseMode": "msvc-x64",
"browse": {
"path": [
"${workspaceRoot}",
"C:\\MinGW\\lib\\gcc\\mingw32\\6.3.0\\include\\c++",
"C:\\MinGW\\bin"
],
"limitSymbolsToIncludedHeaders": true,
"databaseFilename": ""
}
}
],
"version": 3
}
And the "tasks.json" file has the following:
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"label": "build",
"type": "shell",
"command": "‪‪g++",
"args": [
"-g", "Calculator.cpp", "-o","Calculator"
],
"group": {
"kind": "build",
"isDefault": true
},
"problemMatcher":"$gcc"
}
]
}
But when I hit "run main task" it prompts:
"> Executing task: ‪‪g++ -g Calculator.cpp -o Calculator <
'‪‪g++' is not recognized as an internal or external command,
operable program or batch file.
The terminal process terminated with exit code: 1"
How can I get gcc detected?
I'm using VS Code on a Windows 10 machine, by the way.
If you have not added the MinGW bin folder to the Windows PATH, try that first.
If that still does not work, try putting the full path to g++ in "tasks.json" instead of just g++.
Something like this:
C:/MinGW/bin/g++
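As a quick sanity check (assuming MinGW is installed under C:\MinGW), open a new terminal and confirm the compiler can be found:
where g++
g++ --version
If where reports that it cannot find g++, add C:\MinGW\bin to the PATH environment variable and restart VS Code so the updated PATH is picked up.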

Can't mount EFS in the ECS instance

I have this UserData in the configurationConfig resource in my CloudFormation template:
"UserData":{ "Fn::Base64" : {
"Fn::Join" : ["", [
"#!/bin/bash -xv\n",
"yum -y update\n",
"yum -y install aws-cfn-bootstrap\n",
"yum -y install awslogs jq\n",
"#Install NFS client\n",
"yum -y install nfs-utils\n",
"#Install pip\n",
"yum -y install python27 python27-pip\n",
"#Install awscli\n",
"pip install awscli\n",
"#Upgrade to the latest version of the awscli\n",
"#pip install --upgrade awscli\n",
"#Add support for EFS to the CLI configuration\n",
"aws configure set preview.efs true\n",
"#Get region of EC2 from instance metadata\n",
"EC2_AVAIL_ZONE=`curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone`\n",
"EC2_REGION=",{ "Ref": "AWS::Region"} ,"\n",
"mkdir /efs-tmp/\n",
"chown -R ec2-user:ec2-user /efs-tmp/\n",
"DIR_SRC=$EC2_AVAIL_ZONE.",{ "Fn::FindInMap" : [ "FileSystemMap", {"Ref" : "EnvParam"}, "FileSystemID"] },".efs.$EC2_REGION.amazonaws.com\n",
"DIR_TGT=/efs-tmp/\n",
"touch /home/ec2-user/echo.res\n",
"echo ",{ "Fn::FindInMap" : [ "FileSystemMap", {"Ref" : "EnvParam"}, "FileSystemID"] }," >> /home/ec2-user/echo.res\n",
"echo $EC2_AVAIL_ZONE >> /home/ec2-user/echo.res\n",
"echo $EC2_REGION >> /home/ec2-user/echo.res\n",
"echo $DIR_SRC >> /home/ec2-user/echo.res\n",
"echo $DIR_TGT >> /home/ec2-user/echo.res\n",
"#Mount EFS file system\n",
"mount -t nfs4 -o vers=4.1 $DIR_SRC:/ $DIR_TGT >> /home/ec2-user/echo.res\n",
"#Backup fstab\n",
"cp -p /etc/fstab /etc/fstab.back-$(date +%F)\n",
"echo -e \"$DIR_SRC:/ $DIR_TGT nfs4 nfsvers=4.1 0 0 | tee -a /etc/fstab\n",
"docker ps\n",
"service docker stop\n",
"service docker start\n",
"/opt/aws/bin/cfn-init -v",
" --stack ", { "Ref": "AWS::StackName" },
" --resource ContainerInstances",
" --region ", { "Ref" : "AWS::Region" },"\n",
"service awslogs start\n",
"chkconfig awslogs on\n"
]]}
Here is the security group of the ECS container:
"EcsSecurityGroup":{
"Type" : "AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" : "ECS SecurityGroup",
"SecurityGroupIngress" : [
{
"IpProtocol" : "tcp",
"FromPort" : "2049",
"ToPort" : "2049",
"CidrIp" : {"Ref" : "CIDRVPC"}
},
{
"IpProtocol" : "tcp",
"FromPort" : "22",
"ToPort" : "22",
"CidrIp" : "0.0.0.0/0"
}
],
"SecurityGroupEgress" : [
{
"IpProtocol" : "-1",
"FromPort" : "-1",
"ToPort" : "-1",
"CidrIp" : "0.0.0.0/0"
}
],
"VpcId":{ "Ref":"VpcId" }
}
},
After running the template, I SSHed into the instance, waited for the user data to finish executing, and then found this error in /var/log/cloud-init-output.log:
mount.nfs4: Connection timed out
Moreover, the /etc/fstab file does not contain the mount line.
And I can't access the file system, because the folder created for EFS is empty.
Can you tell me where the issue is?
Ensure you have created an EFS security group and allow your EC2 security group in its ingress rules:
"EfsSecurityGroup": {
"Properties": {
"GroupDescription": "EFS security group",
"SecurityGroupIngress": [
{
"FromPort": 2049,
"IpProtocol": "tcp",
"SourceSecurityGroupId": {
"Ref": "YOUR_EC2_SECURITY_GROUP"
},
"ToPort": 2049
}
],
"Tags": [
{
"Key": "Application",
"Value": {
"Ref": "AWS::StackName"
}
},
{
"Key": "Name",
"Value": "efs-sg"
}
],
"VpcId": {
"Ref": "YOUR_VPC_ID"
}
},
"Type": "AWS::EC2::SecurityGroup"
}
Ensure an EFS mount target exists:
"EFSMountTargetYourAZ": {
"Properties": {
"FileSystemId": "EFS_id",
"SecurityGroups": [
{
"Ref": "EFS_SECURITY_GROUP"
}
],
"SubnetId": {
"Ref": "SUBNET_ID"
}
},
"Type": "AWS::EFS::MountTarget"
},
There's a typo (missing closing \") in this line in your script, which is causing the attempted write to /etc/fstab to fail:
echo -e \"$DIR_SRC:/ $DIR_TGT nfs4 nfsvers=4.1 0 0 | tee -a /etc/fstab\n",
This should read:
echo -e \"$DIR_SRC:/ $DIR_TGT nfs4 nfsvers=4.1 0 0\" | tee -a /etc/fstab\n",
You need to make sure that an AWS::EFS::MountTarget resource exists in the availability zone specified. Otherwise, the attempt to mount the filesystem using the DNS name will fail to resolve correctly. See Mounting File Systems and AWS::EFS::FileSystem for further documentation.
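If the mount still times out after fixing the security group and mount target, it can help to test name resolution and NFS connectivity from the instance before retrying the mount by hand (a minimal sketch; fs-12345678 is a placeholder file system ID):
# Resolve the per-AZ EFS endpoint used in the user data
nslookup $EC2_AVAIL_ZONE.fs-12345678.efs.$EC2_REGION.amazonaws.com
# Check that TCP port 2049 (NFS) is reachable through the security groups (requires nc)
nc -zv $EC2_AVAIL_ZONE.fs-12345678.efs.$EC2_REGION.amazonaws.com 2049
# Retry the mount manually
sudo mount -t nfs4 -o nfsvers=4.1 $EC2_AVAIL_ZONE.fs-12345678.efs.$EC2_REGION.amazonaws.com:/ /efs-tmp/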

Packer hangs waiting for (inline) Shell script to be executed

Being new to Packer, I am trying to build my first VirtualBox image with a Packer file, but somehow it hangs on the inline shell provisioning. I cannot figure out what the issue is. I tried to debug, and it hangs on:
virtualbox-iso: Provisioning with shell script: /var/folders/27/p5wvd4l164z3c56378y7pp940000gn/T/packer-shell450560231
My Packer template is as follows:
{
"provisioners": [{
"type": "shell",
"inline": [
"sleep 30",
"sudo apt-get update"
]
}],
"builders": [
{
"type": "virtualbox-iso",
"boot_command": [
"<esc><wait>",
"<esc><wait>",
"<enter><wait>",
"/install/vmlinuz<wait>",
" auto<wait>",
" console-setup/ask_detect=false<wait>",
" console-setup/layoutcode=us<wait>",
" console-setup/modelcode=pc105<wait>",
" debian-installer=en_US<wait>",
" fb=false<wait>",
" initrd=/install/initrd.gz<wait>",
" kbd-chooser/method=us<wait>",
" keyboard-configuration/layout=USA<wait>",
" keyboard-configuration/variant=USA<wait>",
" locale=en_US<wait>",
" netcfg/get_hostname=ubuntu-1404<wait>",
" netcfg/get_domain=acme.com<wait>",
" noapic<wait>",
" preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg<wait>",
" -- <wait>",
"<enter><wait>"
],
"boot_wait": "10s",
"disk_size": 40960,
"guest_os_type": "Ubuntu_64",
"http_directory": "http",
"iso_checksum": "9e5fecc94b3925bededed0fdca1bd417",
"iso_checksum_type": "md5",
"iso_url": "http://releases.ubuntu.com/14.04/ubuntu-14.04.3-server-amd64.iso",
"ssh_username": "packer",
"ssh_password": "packer",
"ssh_port": 22,
"ssh_pty" : "true",
"headless": "false",
"ssh_wait_timeout": "10000s",
"shutdown_command": "echo packer | sudo -S shutdown -P now",
"output_directory": "/Users/marco/Desktop/generated_images/ubuntu",
"vboxmanage": [
[ "modifyvm", "{{.Name}}", "--memory", "512" ],
[ "modifyvm", "{{.Name}}", "--cpus", "1" ]
]
}
]
}
You can get verbose output from Packer by setting PACKER_LOG=1 before the build command; that might help diagnose what is happening in particular scripts. Packer also has a --debug flag that stops the build at breakpoints and lets you log in to the machine.
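For example (a minimal sketch; template.json is a placeholder for your template file):
PACKER_LOG=1 packer build template.json
packer build --debug template.json
With --debug, Packer pauses before each step and waits for a keypress, which gives you a chance to open the VirtualBox console and check whether the provisioner is stuck waiting on sudo or on apt-get.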