aws cdk ecs task scheduling specify existing securitygroup - aws-sdk

When defining an ECS Task Schedule, I can't seem to find a way of specifying an existing security group. Any pointers on where this can be configured using aws cdk?
In the code snippet below, you'll see I am able to create a cron, specify the Docker image to schedule, and create the schedule itself by specifying the existing cluster and VPC. However, there is no option for an existing security group... Is it possible to specify one?
schedule_cron = scaling.Schedule.cron(minute=manifest['schedule']['minute'],
                                      hour=manifest['schedule']['hour'],
                                      day=manifest['schedule']['day'],
                                      month=manifest['schedule']['month'],
                                      year=manifest['schedule']['year'])

image_option = ecs_patterns.ScheduledFargateTaskImageOptions(
    image=img,
    cpu=manifest["resources"]["cpu"],
    memory_limit_mib=manifest["resources"]["memory"],
    log_driver=ecs.AwsLogDriver(log_group=log_group,
                                stream_prefix=manifest["app_name"]),
    secrets=secrets,
    environment=env)

schedule_pattern = ecs_patterns.ScheduledFargateTask(
    self, f"scheduledtask{app_name}",
    schedule=schedule_cron,
    scheduled_fargate_task_image_options=image_option,
    cluster=cluster,
    desired_task_count=manifest["replica_count"],
    vpc=vpc)

The ECS patterns module does not support this yet; the underlying constructs, however, do. You therefore have to define the task definition, the event rule and the event target yourself: the schedule is specified on the rule, and the security group is set on the event target.
Here is an example implementation using TypeScript. Please adjust this to Python using the aws_cdk.aws_events and aws_cdk.aws_events_targets modules; a rough Python sketch of the same approach also follows after the note below.
import cdk = require('@aws-cdk/core');
import ec2 = require('@aws-cdk/aws-ec2');
import ecs = require('@aws-cdk/aws-ecs');
import events = require("@aws-cdk/aws-events");
import event_targets = require("@aws-cdk/aws-events-targets");

const securityGroup = new ec2.SecurityGroup(this, "SecurityGroup", {
  vpc: vpc,
});

const task = new ecs.FargateTaskDefinition(this, "TaskDefinition", {
  family: "ScheduledTask",
  cpu: ..,
  memoryLimitMiB: ..,
});
task.addContainer("app_name", ...);

const rule = new events.Rule(this, "Rule", {
  description: "ScheduledTask app_name Trigger",
  enabled: true,
  schedule: events.Schedule.rate(cdk.Duration.hours(1)),
  targets: [
    new event_targets.EcsTask({
      cluster: cluster,
      taskDefinition: task,
      securityGroup: securityGroup,
    }),
  ],
});
Please note that the EcsTask event target only allows one security group. This issue was raised a while ago on GitHub: https://github.com/aws/aws-cdk/issues/3312
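Since the question uses Python, here is a rough, untested sketch of the same approach with the aws_cdk.aws_events and aws_cdk.aws_events_targets modules (CDK v1-style imports). The security group ID, task sizes and container image below are placeholders, cluster is your existing cluster, and the existing security group is imported with from_security_group_id instead of being created:
from aws_cdk import core as cdk
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_ecs as ecs
from aws_cdk import aws_events as events
from aws_cdk import aws_events_targets as targets

# Import the existing security group instead of creating a new one (placeholder ID)
security_group = ec2.SecurityGroup.from_security_group_id(
    self, "SecurityGroup", "sg-0123456789abcdef0")

task = ecs.FargateTaskDefinition(
    self, "TaskDefinition",
    family="ScheduledTask",
    cpu=256,               # placeholder sizes
    memory_limit_mib=512)
task.add_container(
    "app_name",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"))  # placeholder image

rule = events.Rule(
    self, "Rule",
    description="ScheduledTask app_name Trigger",
    enabled=True,
    schedule=events.Schedule.rate(cdk.Duration.hours(1)),
    targets=[
        targets.EcsTask(
            cluster=cluster,            # your existing cluster
            task_definition=task,
            security_group=security_group,
        )
    ])
The original cron fields can be kept by swapping events.Schedule.rate(...) for events.Schedule.cron(minute=..., hour=..., day=..., month=..., year=...).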

Related

Create a Network Load Balancer on Oracle Cloud Infrastructure with a Reserved IP using Terraform

Using Terraform to set up a Network Load Balancer on Oracle Cloud Infrastructure, it works as expected if created with an ephemeral public IP; however, one created using a reserved public IP does not respond. Here are the exact Terraform resources used to create the load balancer:
resource "oci_core_public_ip" "ip" {
for_each = { for lb in var.load_balancers: lb.subnet => lb if ! lb.private
compartment_id = local.compartment_ocid
display_name = "${var.name}-public-ip"
lifetime = "RESERVED"
lifecycle {
prevent_destroy = true
}
}
resource "oci_network_load_balancer_network_load_balancer" "nlb" {
for_each = { for lb in var.load_balancers: lb.subnet => lb if lb.type == "network" }
compartment_id = local.compartment_ocid
display_name = "${var.name}-network-load-balancer"
subnet_id = oci_core_subnet.s[each.value.subnet].id
is_private = each.value.private
#reserved_ips {
# id = oci_core_public_ip.ip[each.value.subnet].id
#}
}
All of the other resources (security list rules, listeners, backend set and backends, etc.) are created such that the above works. If, however, I uncomment the assignment of reserved_ips to the network load balancer, then it does not work: no response from the load balancer's public IP. Everything is the same except those three lines being uncommented.
Between each test I tear down everything and recreate it with Terraform. It always works with an ephemeral IP and never with the reserved IP. Why? What am I missing? Or does this just not work as advertised?
The Terraform version is v1.3.4 and the provider is oracle/oci version 4.98.0.
The reserved IP is set up correctly; however, the Terraform provider removes its association with the load balancer's private IP. Closer inspection of the Terraform output shows this:
  ~ resource "oci_core_public_ip" "ip" {
        id            = "ocid1.publicip.oc1.uk-london-1.ama...sta"
      - private_ip_id = "ocid1.privateip.oc1.uk-london-1.abw...kya" -> null
        # (11 unchanged attributes hidden)
    }
Manually restoring the association fixes it (until the next terraform run):
$ oci network public-ip update --public-ip-id ocid1.publicip.oc1.uk-london-1.ama...rrq --private-ip-id ocid1.privateip.oc1.uk-london-1.abw...kya
There is a bug ticket about this on Terraform's GitHub.
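One possible mitigation to try until that is resolved, assuming the drift really does come from the oci_core_public_ip resource reconciling private_ip_id as the plan output above suggests, is to have Terraform ignore changes to that attribute. This is an untested sketch:
resource "oci_core_public_ip" "ip" {
  for_each       = { for lb in var.load_balancers : lb.subnet => lb if !lb.private }
  compartment_id = local.compartment_ocid
  display_name   = "${var.name}-public-ip"
  lifetime       = "RESERVED"

  lifecycle {
    prevent_destroy = true
    ignore_changes  = [private_ip_id]  # keep the association made via reserved_ips
  }
}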

Azure Durable Function AppSettings

I'm trying to create an Azure Durable Function, but it's very difficult to find decent guides on this subject. I've set up DI and I try to read the function's settings, but it crashes.
I have set up an Azure Functions project in VS 2019 and added a Durable Orchestrator Function template. I removed all the "static" references from the class and everything seems to work fine until I add the ConfigurationBuilder in the startup file.
Can anyone explain how this should work, or point me to some documentation on configuring durable functions? What should I have in host.json and local.settings.json, and how does this change when I publish it to the portal?
My case is this. The startup file looks like this:
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;

[assembly: FunctionsStartup(typeof(DurableFunctions.Startup))]

namespace DurableFunctions
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            var settings = new ConfigurationBuilder()
                .AddEnvironmentVariables()
                .Build();
        }
    }
}
The host.json is like this
{
  "version": "2.0"
}
The local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}
And this is the output, with the error, that I get when I start the debugger:
[11/8/2019 10:29:04 AM] A host error has occurred during startup operation '8b80bc94-2b98-408b-895f-c5697430acfd'.
[11/8/2019 10:29:04 AM] Microsoft.Azure.WebJobs.Extensions.DurableTask: Value cannot be null.
[11/8/2019 10:29:04 AM] Parameter name: hostConfiguration.
Value cannot be null.
Parameter name: provider
You want to instead override the ConfigureAppConfiguration method in your FunctionsStartup class (https://learn.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection#customizing-configuration-sources).
The following example takes the one provided in the documentation a step further by adding user secrets.
public override void ConfigureAppConfiguration(IFunctionsConfigurationBuilder builder)
{
    FunctionsHostBuilderContext context = builder.GetContext();

    builder.ConfigurationBuilder
        .SetBasePath(context.ApplicationRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: false)
        .AddJsonFile($"appsettings.{context.EnvironmentName}.json", optional: true, reloadOnChange: false)
        .AddUserSecrets(Assembly.GetExecutingAssembly(), true, true)
        .AddEnvironmentVariables();
}
By default, configuration files such as appsettings.json are not automatically copied to the Azure Function output folder. Be sure to review the documentation (https://learn.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection#customizing-configuration-sources) for the modifications required in your .csproj file. Also note that, because of the way the method appends to the existing providers, it is necessary to always end with .AddEnvironmentVariables().
A deeper discussion on configuration in an Azure Function can be found at Using ConfigurationBuilder in FunctionsStartup.
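For reference, the kind of .csproj change that documentation describes is along these lines (a sketch; it assumes appsettings.json sits at the project root):
<ItemGroup>
  <None Update="appsettings.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
  </None>
</ItemGroup>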
Did you get a response to this?
Did you try adding the local settings from the JSON file before the environment variables?
var config = new ConfigurationBuilder()
    .SetBasePath(context.FunctionAppDirectory)
    // This gives you access to your application settings in your local development environment
    .AddJsonFile("local.settings.json", optional: true, reloadOnChange: false)
    .AddJsonFile("secret.settings.json", optional: true, reloadOnChange: false)
    // This is what actually gets you the application settings in Azure
    .AddEnvironmentVariables()
    .Build();
I am having trouble when I deploy: I have set up a variable group, yet the settings are not being picked up.

SSH to Google Compute instance using NodeJS, without gcloud

I'm trying to create an SSH tunnel into a compute instance, from an environment that doesn't have gcloud installed (the App Engine Standard NodeJS environment).
What are the steps needed to do that? How does the gcloud compute ssh command do it? Is there a NodeJS library that already does it?
I created the package gcloud-ssh-tunnel that performs the necessary steps:
Creates a private/public key pair using sshpk
Imports the public key using the OS Login API
Opens the SSH connection using ssh2 (and specifically creates a tunnel, because this was the use case I needed - see the Why? section in the package)
Deletes the public key using the OS Login API (so as not to overflow the account with keys or leave access open)
You can use ssh2 to do that in nodejs.
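For example, a minimal ssh2 sketch that connects to the instance and forwards a local source to a port on it could look like this (the host, username, key path and ports are placeholders, and error handling is omitted):
const { Client } = require('ssh2');
const { readFileSync } = require('fs');

const conn = new Client();

conn.on('ready', () => {
  // Forward a local source to port 5432 on the instance (placeholder ports)
  conn.forwardOut('127.0.0.1', 0, '127.0.0.1', 5432, (err, stream) => {
    if (err) throw err;
    // ... pipe `stream` to/from whatever needs the tunnel ...
    stream.on('close', () => conn.end());
  });
}).connect({
  host: '203.0.113.10',                      // external IP of the instance (placeholder)
  port: 22,
  username: 'my-user',                       // POSIX username on the instance (placeholder)
  privateKey: readFileSync('/path/to/key'),  // placeholder key path
});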
"gcloud compute ssh" generates persistent SSH keys for the user. The public key is stored in project or instance SSH keys metadata, and the Guest Environment creates the necessary local user and places ~/.ssh/authorized_keys in its home directory.
You can manually add your public key to the instance, and then connect to it via ssh using a node ssh library1.
Or you can set a startup script for the instance when you are creating it2.
As Cloud Ace pointed out, you can use the ssh2 module3 for node.js compatibility.
In order to SSH into a GCP instance you have to:
Enable OS Login
Create a service account and assign it the "Compute OS Admin Login" role
Create an SSH key and import it into the service account's login profile
Use that SSH key and the POSIX username
The first 2 steps already link to the documentation.
Create SSH key:
import { generatePrivateKey } from 'sshpk';

const keyPair = generatePrivateKey('ecdsa');
const privateKey = keyPair.toString();
const publicKey = keyPair.toPublic().toString();
Import key:
// OsLoginServiceClient comes from the @google-cloud/os-login package
const osLoginServiceClient = new OsLoginServiceClient({
  credentials: googleCredentials,
});

const [result] = await osLoginServiceClient.importSshPublicKey({
  parent: osLoginServiceClient.userPath(googleCredentials.client_email),
  sshPublicKey: {
    expirationTimeUsec: ((Date.now() + 10 * 60 * 1_000) * 1_000).toString(),
    key: publicKey,
  },
});
SSH using the key:
// The importSshPublicKey response includes the updated login profile
const loginProfile = result.loginProfile;

const ssh = new NodeSSH();
await ssh.connect({
  host,
  privateKey,
  username: loginProfile.posixAccounts[0].username,
});
In this example, I am using node-ssh but you can use anything.
The only other catch is that you need to figure out the public host. Implementation for that looks like this:
const findFirstPublicIp = async (
  googleCredentials: GoogleCredentials,
  googleZone: string,
  googleProjectId: string,
  instanceName: string,
) => {
  const instancesClient = new InstancesClient({
    credentials: googleCredentials,
  });

  const instances = await instancesClient.get({
    instance: instanceName,
    project: googleProjectId,
    zone: googleZone,
  });

  for (const instance of instances) {
    if (!instance || !('networkInterfaces' in instance) || !instance.networkInterfaces) {
      throw new Error('Unexpected result.');
    }

    for (const networkInterface of instance.networkInterfaces) {
      if (!networkInterface || !('accessConfigs' in networkInterface) || !networkInterface.accessConfigs) {
        throw new Error('Unexpected result.');
      }

      for (const accessConfig of networkInterface.accessConfigs) {
        if (accessConfig.natIP) {
          return accessConfig.natIP;
        }
      }
    }
  }

  throw new Error('Could not locate public instance IP address.');
};
Finally, to clean up, you have to call deleteSshPublicKey with the name of the key that you've imported:
const fingerprint = crypto
  .createHash('sha256')
  .update(publicKey)
  .digest('hex');

const sshPublicKey = loginProfile.sshPublicKeys?.[fingerprint];

if (!sshPublicKey) {
  throw new Error('Could not locate SSH public key with a matching fingerprint.');
}

const ssh = new NodeSSH();
await ssh.connect({
  host,
  privateKey,
  username: loginProfile.posixAccounts[0].username,
});

await osLoginServiceClient.deleteSshPublicKey({
  name: sshPublicKey.name,
});
In general, you'd need to reserve and assign a static external IP address to begin with (unless you are trying to SSH from within the same network), and a firewall rule needs to be defined for port tcp/22, which can then be applied via a network tag to the instance that has that external IP assigned.
The other way around works with gcloud app instances ssh:
SSH into the VM of an App Engine Flexible instance
which might be less effort and cost to set up, because a GCP VM usually has gcloud installed.

adonisjs lucid module not found

I used adonis make:model Thing --migration to create and migrate. Therefore I have a Thing.js file in my models folder with the following code in it:
'use strict'

const Model = use('Model')

class Thing extends Model {
}

module.exports = Thing
I then replaced 'Model' with 'Lucid' since I'd like to structure a relational database. But this is the error I get on my terminal when I run server.js: "Cannot find module 'Lucid'".
and this is how it looks inside the start/app.js file:
const providers = [
  '@adonisjs/framework/providers/AppProvider',
  '@adonisjs/framework/providers/ViewProvider',
  '@adonisjs/lucid/providers/LucidProvider',
  '@adonisjs/bodyparser/providers/BodyParserProvider',
  '@adonisjs/cors/providers/CorsProvider',
  '@adonisjs/shield/providers/ShieldProvider',
  '@adonisjs/session/providers/SessionProvider',
  '@adonisjs/auth/providers/AuthProvider',
  '@adonisjs/validator/providers/ValidatorProvider'
]
and at the end:
module.exports = { providers, aceProviders, aliases, commands }
What is the reason for this? How do I fix it?
PS: the project was initialized the typical way (adonis new myprojectsname), so the folder structure is the default.
use('Model') already resolves the Model class from the Lucid provider, so you don't need to change it to 'Lucid' in order to build a relational database. Relationships are defined on models that extend Model, as in the sketch below.
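For example, a relationship is declared as a method on a model that still extends Model. This is a minimal sketch that assumes an App/Models/User model exists and the things table has a user_id foreign key:
'use strict'

const Model = use('Model')

class Thing extends Model {
  // Assumes App/Models/User exists and things has a user_id column
  user () {
    return this.belongsTo('App/Models/User')
  }
}

module.exports = Thing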

How to specify EMR cluster create CLI commands using AWS Java SDK?

OK, this question is where I ended up after trying out some things. I'll first give a brief intro to what I wanted to do and how I got here.
I'm writing a script to start an EMR cluster using the Java AWS SDK. The EMR cluster is to be started inside a VPC and a subnet with a certain id. When I specify the subnet id (code line below ending with // ******), the EMR cluster stays in the STARTING state and does not move ahead for several minutes, eventually giving up and failing. I'm not sure if there's a bug in the implementation of this functionality in the SDK.
try {
    /**
     * Specifying credentials
     */
    String accessKey = EmrUtils.ACCESS_KEY;
    String secretKey = EmrUtils.SECRET_ACCESS_KEY;
    AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);

    /**
     * Initializing emr client object
     */
    emrClient = new AmazonElasticMapReduceClient(credentials);
    emrClient.setEndpoint(EmrUtils.ENDPOINT);

    /**
     * Specifying bootstrap actions
     */
    ScriptBootstrapActionConfig scriptBootstrapConfig = new ScriptBootstrapActionConfig();
    scriptBootstrapConfig.setPath("s3://bucket/bootstrapScript.sh");
    BootstrapActionConfig bootstrapActions = new BootstrapActionConfig(
            "Bootstrap Script", scriptBootstrapConfig);

    RunJobFlowRequest jobFlowRequest = new RunJobFlowRequest()
            .withName("Java SDK EMR cluster")
            .withLogUri(EmrUtils.S3_LOG_URI)
            .withAmiVersion(EmrUtils.AMI_VERSION)
            .withBootstrapActions(bootstrapActions)
            .withInstances(
                    new JobFlowInstancesConfig()
                            .withEc2KeyName(EmrUtils.EC2_KEY_PAIR)
                            .withHadoopVersion(EmrUtils.HADOOP_VERSION)
                            .withInstanceCount(1)
                            .withEc2SubnetId(EmrUtils.EC2_SUBNET_ID) // ******
                            .withKeepJobFlowAliveWhenNoSteps(true)
                            .withMasterInstanceType(EmrUtils.MASTER_INSTANCE_TYPE)
                            .withTerminationProtected(true)
                            .withSlaveInstanceType(EmrUtils.SLAVE_INSTANCE_TYPE));

    RunJobFlowResult result = emrClient.runJobFlow(jobFlowRequest);
    String jobFlowId = result.getJobFlowId();
    System.out.println(jobFlowId);
} catch (Exception e) {
    e.printStackTrace();
    System.out.println("Shutting down cluster");
    if (emrClient != null) {
        emrClient.shutdown();
    }
}
When I do the same thing using the EMR console, the cluster starts, bootstraps and successfully goes into the WAITING state. Is there any other way I can specify the subnet id to start a cluster? I suppose boto allows us to send additional parameters as a string. I found something similar in Java: .withAdditionalInfo(additionalInfo), which is a method of RunJobFlowRequest() and takes a JSON string as an argument. I don't, however, know the key that should be used for the EC2 subnet id in that JSON string.
(Using Python boto is not an option for me; I've faced other showstopping issues with it and had to shift to the AWS Java SDK.)