Failed to create Elastic Beanstalk custom platform with "Unmatched region"

I'm trying to create a custom platform for region ap-northeast-1 following the AWS documentation.
ebp create ends in failure, and ebp events shows an error indicating that the created AMI is in a different region from the service region.
2018-04-28 00:49:18 INFO Initiated platform version creation for 'NodePlatform_Ubuntu/1.0.0'.
2018-04-28 00:49:22 INFO Creating Packer builder environment 'eb-custom-platform-builder-packer'.
2018-04-28 00:52:39 INFO Starting Packer building task.
2018-04-28 00:52:44 INFO Creating CloudWatch log group '/aws/elasticbeanstalk/platform/NodePlatform_Ubuntu'.
2018-04-28 01:03:48 INFO Successfully built AMI(s): 'ami-5f2f4527' for 'arn:aws:elasticbeanstalk:ap-northeast-1:392559473945:platform/NodePlatform_Ubuntu/1.0.0'
2018-04-28 01:04:03 ERROR Unmatched region for created AMI 'ami-5f2f4527': 'us-west-2' (service region: 'ap-northeast-1').
2018-04-28 01:04:03 INFO Failed to create platform version 'NodePlatform_Ubuntu/1.0.0'.
I used the sample custom platform provided in the AWS documentation and modified only builders.region and builders.source_ami in custom_platform.json to match the region of my Custom Platform Builder.
.elasticbeanstalk/config.yml
global:
  application_name: Custom Platform Builder
  branch: null
  default_ec2_keyname: null
  default_platform: null
  default_region: ap-northeast-1
  instance_profile: null
  platform_name: NodePlatform_Ubuntu
  platform_version: null
  profile: eb-cli
  repository: null
  sc: git
  workspace_type: Platform
custom_platform.json
{
  "variables": {
    "platform_name": "{{env `AWS_EB_PLATFORM_NAME`}}",
    "platform_version": "{{env `AWS_EB_PLATFORM_VERSION`}}",
    "platform_arn": "{{env `AWS_EB_PLATFORM_ARN`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "name": "HVM AMI builder",
      "region": "ap-northeast-1",
      "source_ami": "ami-60a4b21c",
      "instance_type": "m3.medium",
      "ssh_username": "ubuntu",
      "ssh_pty": "true",
      "ami_name": "NodeJs running on Ubuntu Server 16.04 LTS (built on {{isotime \"20060102150405\"}})",
      "tags": {
        "eb_platform_name": "{{user `platform_name`}}",
        "eb_platform_version": "{{user `platform_version`}}",
        "eb_platform_arn": "{{user `platform_arn`}}"
      }
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "builder",
      "destination": "/tmp/"
    },
    {
      "type": "shell",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo {{ .Path }}",
      "scripts": [
        "builder/builder.sh"
      ]
    }
  ]
}
It seems my modifications to custom_platform.json did not take effect.

What I missed was committing the changes...
Though the EB and Packer documentation do not mention anything about VCS or git, it seems Packer uses git to create an archive of the custom platform files, and thus the changes I made were not included in it because I had not committed them.
I noticed that ebp create was giving me this warning...
mac.local:NodePlatform_Ubuntu% ebp create
WARNING: You have uncommitted changes.
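Based on that, the fix is simply to commit before building again. A minimal sketch (the commit message is just an example):

git add .elasticbeanstalk/config.yml custom_platform.json
git commit -m "Point builder at ap-northeast-1"
ebp create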

Failed to start minikube: Error while starting minikube. Error: X Exiting due to MK_USAGE: Container runtime must be set to "containerd" for rootless

I'm getting this error, and I believe the way to solve it is by running minikube start --container-runtime=containerd, but the extension seems to run plain minikube start. So how am I supposed to add the flag?
Here's the launch.json file
{
  "configurations": [
    {
      "name": "Cloud Run: Run/Debug Locally",
      "type": "cloudcode.cloudrun",
      "request": "launch",
      "build": {
        "docker": {
          "path": "Dockerfile"
        }
      },
      "image": "dai",
      "service": {
        "name": "dai",
        "containerPort": 8080,
        "resources": {
          "limits": {
            "memory": "256Mi"
          }
        }
      },
      "target": {
        "minikube": {}
      },
      "watch": true
    }
  ]
}
Cloud Code for VS Code doesn't support such settings at the moment. But you can configure minikube to apply these settings with minikube config set.
The Cloud Run emulation creates a separate minikube profile called cloud-run-dev-internal. So you should be able to run the following:
minikube config set --profile cloud-run-dev-internal container-runtime containerd
You then have to delete that minikube profile for the setting to take effect on your next launch:
minikube delete --profile cloud-run-dev-internal
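Putting the answer together, the whole sequence looks like this (the final config view check is my assumption that it honors --profile; drop it if your minikube version differs):

# set containerd as the runtime for the Cloud Run emulation profile
minikube config set --profile cloud-run-dev-internal container-runtime containerd
# delete the profile so the next launch recreates it with the new runtime
minikube delete --profile cloud-run-dev-internal
# optionally confirm the setting (assumes config view accepts --profile)
minikube config view --profile cloud-run-dev-internal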

Packer custom image build failed with ssh authentication error

I'm trying to build a custom image for an AWS EKS managed node group. Note: my custom image (Ubuntu) already has MFA and private-key-based authentication enabled.
I cloned the GitHub repository below to build the EKS-related changes:
git clone https://github.com/awslabs/amazon-eks-ami && cd amazon-eks-ami
Next, I made a few changes in order to run the Makefile:
cat eks-worker-al2.json
{
  "variables": {
    "aws_region": "eu-central-1",
    "ami_name": "template",
    "creator": "{{env `USER`}}",
    "encrypted": "false",
    "kms_key_id": "",
    "aws_access_key_id": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_access_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "aws_session_token": "{{env `AWS_SESSION_TOKEN`}}",
    "binary_bucket_name": "amazon-eks",
    "binary_bucket_region": "eu-central-1",
    "kubernetes_version": "1.20",
    "kubernetes_build_date": null,
    "kernel_version": "",
    "docker_version": "19.03.13ce-1.amzn2",
    "containerd_version": "1.4.1-2.amzn2",
    "runc_version": "1.0.0-0.3.20210225.git12644e6.amzn2",
    "cni_plugin_version": "v0.8.6",
    "pull_cni_from_github": "true",
    "source_ami_id": "ami-12345678",
    "source_ami_owners": "00012345",
    "source_ami_filter_name": "template",
    "arch": null,
    "instance_type": null,
    "ami_description": "EKS Kubernetes Worker AMI with AmazonLinux2 image",
    "cleanup_image": "true",
    "ssh_interface": "",
    "ssh_username": "nandu",
    "ssh_private_key_file": "/home/nandu/.ssh/template_rsa.ppk",
    "temporary_security_group_source_cidrs": "",
    "security_group_id": "sg-08725678910",
    "associate_public_ip_address": "",
    "subnet_id": "subnet-01273896789",
    "remote_folder": "",
    "launch_block_device_mappings_volume_size": "4",
    "ami_users": "",
    "additional_yum_repos": "",
    "sonobuoy_e2e_registry": ""
After adding the user and private key, the build fails with the error below.
logs
amazon-ebs: Error waiting for SSH: Packer experienced an authentication error when trying to connect via SSH. This can happen if your username/password are wrong. You may want to double-check your credentials as part of your debugging process. original error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain.
For me, the fix was to change the AWS region, or remove the AWS region setting, in the Packer template.
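As a sketch of that suggestion, you can override the template's region at build time instead of editing the JSON, assuming you invoke Packer directly on this template (the region value below is only an example):

packer build -var aws_region=eu-west-1 eks-worker-al2.json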

QuantumLeap, OrionCB and IoTagent-LoRaWAN integration

I was reading the QuantumLeap docs and I was wondering how those Generic Enablers are integrated. I mean, I've deployed the Docker containers and apparently they are all running; in fact, I've been able to create a device in the IoTagent-LoRaWAN with the POST request, which I'm also able to retrieve with a GET request to http://localhost:4061/iot/devices. It is also apparently receiving the info from TTN, as the log shows:
fiware-iot-agent | {"timestamp":"2020-06-24T19:23:04.759Z","level":"info","message":"New message in topic"}
fiware-iot-agent | {"timestamp":"2020-06-24T19:23:04.760Z","level":"info","message":"IOTA provisioned devices:"}
fiware-iot-agent | {"timestamp":"2020-06-24T19:23:04.760Z","level":"info","message":"Decoding CaynneLPP message:AQIBbA=="}
fiware-iot-agent | {"timestamp":"2020-06-24T19:23:04.760Z","level":"error","message":"Could not cast message to NGSI"}
However, there is a last error message that I don't know whether it could cause problems: "level":"error","message":"Could not cast message to NGSI".
Also, I don't know how I should proceed now with OrionCB and QuantumLeap. The QuantumLeap docs talk about creating an OrionCB subscription, but I had understood from the OrionCB docs that subscriptions are created to follow a previously created entity, so should I create both?
Is QuantumLeap storing info from any created subscription in OrionCB? How can I tie an entity to the IoTagent-LoRaWAN device I created?
Thank you all!
Well, it was apparently again a problem with the docker-compose.yml file; it was not deploying the MongoDB container correctly, so OrionCB could not connect to it.
When all containers are deployed, the IoTagent should be able to create a new entity when you add a new device; then creating the proper subscription in OrionCB, pointing the notifications to QuantumLeap, should work:
{
  "description": "Test subscription",
  "subject": {
    "entities": [
      {
        "idPattern": ".*",
        "type": "Room"
      }
    ],
    "condition": {
      "attrs": [
        "temperature"
      ]
    }
  },
  "notification": {
    "http": {
      "url": "http://quantumleap:8668/v2/notify"
    },
    "attrs": [
      "temperature"
    ],
    "metadata": ["dateCreated", "dateModified"]
  },
  "throttling": 5
}
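For reference, a minimal sketch of registering that subscription with Orion, assuming Orion is reachable on its default port 1026 and the JSON above is saved as subscription.json:

curl -X POST http://localhost:1026/v2/subscriptions \
  -H 'Content-Type: application/json' \
  -d @subscription.json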

PowerShell shell hangs when saving results to variable (Azure Pipelines List)

I am trying to save the results of an Azure CLI (az pipelines list) command to a variable, but the shell/script hangs.
If I run the command on its own, it works:
PS C:\> az pipelines list --project PROJECT_NAME --name PIPELINE_NAME --output json
This command is in preview. It may be changed/removed in a future release.
[
  {
    "authoredBy": {
      # ...
    },
    "createdDate": "2019-12-07T00:08:03.620000+00:00",
    "draftOf": null,
    "drafts": [],
    "id": 541,
    "latestBuild": null,
    "latestCompletedBuild": null,
    "metrics": null,
    "name": "PIPELINE_NAME",
    "path": "\\",
    "project": {
      "abbreviation": null,
      "defaultTeamImageUrl": null,
      "description": null,
      "id": "99a1b81a-ca3b-418a-86cf-0965eaba6dab",
      "lastUpdateTime": "2019-12-13T20:54:20.28Z",
      "name": "PROJECT_NAME",
      "revision": 462,
      "state": "wellFormed",
      "url": "https://dev.azure.com/ORGANIZATION_NAME/_apis/projects/99a1b81a-ca3b-418a-86cf-0965eaba6dab",
      "visibility": "private"
    },
    "quality": "definition",
    "queue": {
      "id": 501,
      "name": "Azure Pipelines",
      "pool": {
        "id": 65,
        "isHosted": true,
        "name": "Azure Pipelines"
      },
      "url": "https://dev.azure.com/ORGANIZATION_NAME/_apis/build/Queues/501"
    },
    "queueStatus": "enabled",
    "revision": 30,
    "type": "build",
    "uri": "vstfs:///Build/Definition/541",
    "url": "https://dev.azure.com/ORGANIZATION_NAME/99a1b81a-ca3b-418a-86cf-0965eaba6dab/_apis/build/Definitions/541?revision=30"
  }
]
PS C:\>
However, if I try to assign the results to a variable, the shell/script hangs instead:
PS C:\> $pipelines = az pipelines list --project PROJECT_NAME --name PIPELINE_NAME --output json
This command is in preview. It may be changed/removed in a future release.
The cursor jumps to column 61 and just stays there forever.
What may be the cause of this behaviour? I feel like the preview warning is causing some trouble, but I am not sure how to suppress it.
Any insight is greatly appreciated.
Thanks!
Okay, this is going to sound odd, but this is a rendering issue only: the app hasn't hung at all, it has just stopped the console from outputting correctly, including the prompt after the command finishes.
At the top of your script add the following:
$PSBackgroundColor = $Host.UI.RawUI.BackgroundColor
$PSForegroundColor = $Host.UI.RawUI.ForegroundColor
function Reset-Console {
    $Host.UI.RawUI.BackgroundColor = $PSBackgroundColor
    $Host.UI.RawUI.ForegroundColor = $PSForegroundColor
}
Then after running the command:
Reset-Console
This fixed the issue for me.
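Concretely, applied to the command from the question, that looks like:

$pipelines = az pipelines list --project PROJECT_NAME --name PIPELINE_NAME --output json
Reset-Console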
As mentioned in the previous answer, this happens because the color output of the Azure CLI (in this case the warning text) messes up the terminal.
Since PR [Core] Knack adoption #12604 it is possible to disable colored output for the Azure CLI by setting the environment variable AZURE_CORE_NO_COLOR to True (or alternatively by setting the [core] no_color=True option in ~/.azure/config).
I am using it successfully with version 2.14.2 of the Azure CLI.
From the description of PR [Core] Knack adoption #12604
[Core] PREVIEW: Allow disabling color by setting AZURE_CORE_NO_COLOR
environment variable to True or [core] no_color=True config (#12601)
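A minimal sketch of that approach in PowerShell, setting the variable just for the current session:

# disable Azure CLI colored output so the terminal state isn't corrupted
$env:AZURE_CORE_NO_COLOR = 'True'
$pipelines = az pipelines list --project PROJECT_NAME --name PIPELINE_NAME --output json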

Elastic Beanstalk does not update image from ECR automatically

I have
"Update": "true" in Dockerrun.aws.json,
which should automatically update the image and container on the EC2 instance when I update the image in ECR.
But when I ssh into the instance after pushing a new image, I still see that the container and image are not updated.
[root@ip-10-20-60-125 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED      STATUS      PORTS      NAMES
c8e3bab8da13   258e7bc272bd   "./graphhopper.sh we…"   8 days ago   Up 8 days   8989/tcp   tender_mayer
[root@ip-10-20-60-125 ~]# docker images
REPOSITORY                  TAG      IMAGE ID       CREATED       SIZE
aws_beanstalk/current-app   latest   258e7bc272bd   8 days ago    813MB
openjdk                     8-jdk    b8d3f94869bb   6 weeks ago   625MB
Dockerrun.aws.json has this
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "xxxxx",
    "Key": "xxxxx"
  },
  "Image": {
    "Name": "213074117100.dkr.ecr.us-east-1.amazonaws.com/xxxxx:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8989"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/data",
      "ContainerDirectory": "/data"
    }
  ],
  "Logging": "/var/log/eb",
  "Command": "xxxxx"
}
Is there a setting somewhere other than "Update": "true"?
If I do an eb deploy, it will pull and update. But "Update": "true" should pull and update automatically when I update the image, which is not happening.
From the AWS documentation and the thread "AWS Beanstalk docker image automatic update doesn't work", it seems that "Update": "true" just does a docker pull before docker run; it will not update the container when a new image is pushed.
From my current research, it seems there is no way to automate this process at the moment.
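If you need it to be hands-off, a minimal sketch is to script the redeploy yourself right after pushing the image; the environment name below is a hypothetical placeholder, and this just wraps the eb deploy that is known to work:

# push the new image, then force Elastic Beanstalk to re-pull and restart
docker push 213074117100.dkr.ecr.us-east-1.amazonaws.com/xxxxx:latest
eb deploy my-environment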