Create a Network Load Balancer on Oracle Cloud Infrastructure with a Reserved IP using Terraform

Using Terraform to set up a Network Load Balancer on Oracle Cloud Infrastructure works as expected if the load balancer is created with an ephemeral public IP; however, one created using a reserved public IP does not respond. Here are the exact Terraform resources used to create the load balancer:
resource "oci_core_public_ip" "ip" {
  for_each       = { for lb in var.load_balancers : lb.subnet => lb if !lb.private }
  compartment_id = local.compartment_ocid
  display_name   = "${var.name}-public-ip"
  lifetime       = "RESERVED"
  lifecycle {
    prevent_destroy = true
  }
}
resource "oci_network_load_balancer_network_load_balancer" "nlb" {
  for_each       = { for lb in var.load_balancers : lb.subnet => lb if lb.type == "network" }
  compartment_id = local.compartment_ocid
  display_name   = "${var.name}-network-load-balancer"
  subnet_id      = oci_core_subnet.s[each.value.subnet].id
  is_private     = each.value.private
  #reserved_ips {
  #  id = oci_core_public_ip.ip[each.value.subnet].id
  #}
}
All of the other resources (security list rules, listeners, backend set and backends, etc.) are created such that the above works. If, however, I uncomment the assignment of reserved_ips to the network load balancer, then it does not work: there is no response from the load balancer's public IP. Everything else is identical; the only change is those three lines being uncommented.
Between each test I tear down everything and recreate it with Terraform. It always works with an ephemeral IP and never works with the reserved IP. Why? What am I missing? Or does this just not work as advertised?
The Terraform version is v1.3.4 and the oracle/oci provider version is 4.98.0.

The reserved IP is set up correctly; however, the Terraform provider removes its association with the load balancer's private IP. Closer inspection of the Terraform output shows this:
  ~ resource "oci_core_public_ip" "ip" {
        id            = "ocid1.publicip.oc1.uk-london-1.ama...sta"
      - private_ip_id = "ocid1.privateip.oc1.uk-london-1.abw...kya" -> null
        # (11 unchanged attributes hidden)
    }
Manually restoring the association fixes it (until the next Terraform run):
$ oci network public-ip update --public-ip-id ocid1.publicip.oc1.uk-london-1.ama...rrq --private-ip-id ocid1.privateip.oc1.uk-london-1.abw...kya
There is a bug ticket for this on the Terraform OCI provider's GitHub.
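Until the provider bug is fixed, a possible workaround (a sketch, not verified against this setup) is to tell Terraform to ignore drift on the reserved IP's private_ip_id, so that a later plan/apply does not strip the association you restored manually:

```
resource "oci_core_public_ip" "ip" {
  for_each       = { for lb in var.load_balancers : lb.subnet => lb if !lb.private }
  compartment_id = local.compartment_ocid
  display_name   = "${var.name}-public-ip"
  lifetime       = "RESERVED"

  lifecycle {
    prevent_destroy = true
    # Assumption: ignoring this attribute should keep Terraform from
    # detaching the reserved IP from the NLB's private IP on later runs.
    ignore_changes = [private_ip_id]
  }
}
```

You would still need to attach the IP once (for example with the oci network public-ip update command above) after the load balancer is created.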

Related

How to find CPU and memory usage with the docker stats command?

I am using the docker-java API to call the Docker API in my project. I couldn't find a suitable method that lists Docker CPU and memory usage, equivalent to GET /v1.24/containers/redis1/stats HTTP/1.1, with the help of the docker-java API.
Dependency
compile group: 'com.github.docker-java', name: 'docker-java', version: '3.1.2'
Code
public static void execute() {
    DockerClient dockerClient = DockerClientBuilder.getInstance().build();
    dockerClient.statsCmd("containerName");
}
I didn't get any output. How do I execute docker stats with the docker-java API?
This works for me:
public Statistics getNextStatistics() throws ProfilingException {
    AsyncResultCallback<Statistics> callback = new AsyncResultCallback<>();
    client.statsCmd(containerId).exec(callback);
    Statistics stats = null;
    try {
        stats = callback.awaitResult();
        callback.close();
    } catch (RuntimeException | IOException e) {
        // you may want to throw an exception here
    }
    return stats; // this may be null or invalid if the container has terminated
}
DockerClient is where we establish a connection between the Docker engine/daemon and our application.
By default, the Docker daemon is only accessible via the unix:///var/run/docker.sock socket, so unless otherwise configured we communicate with the Docker engine locally over that Unix socket.
We can open a connection in two steps:
DefaultDockerClientConfig.Builder config
= DefaultDockerClientConfig.createDefaultConfigBuilder();
DockerClient dockerClient = DockerClientBuilder
.getInstance(config)
.build();
Because the engine may be exposed differently, the client can also be configured with other connection settings.
For example, the builder accepts a server URL, so we can update the connection value if the engine is available on port 2375:
DockerClient dockerClient
= DockerClientBuilder.getInstance("tcp://docker.baeldung.com:2375").build();
Note that we need to prepend the connection string with unix:// or tcp:// depending on the connection type.

terraform aws_elastic_beanstalk_environment SSL PolicyNames

Using terraform, does anyone know how to set a predefined SSL Security Policy for an ELB, from within the aws_elastic_beanstalk_environment resource?
I've tried various permutations of parameters, branching out from something like the below, but have had no luck.
```
setting {
  name      = "PolicyNames"
  namespace = "aws:elb:listener"
  value     = "ELBSecurityPolicy-TLS-1-2-2017-01"
}
```
Can this be done using the setting syntax?
The following works for a classic ELB; LoadBalancerPorts must also be set to 443 for the predefined policy to take effect.
setting {
  namespace = "aws:elb:policies:sslpolicy"
  name      = "SSLReferencePolicy"
  value     = "ELBSecurityPolicy-TLS-1-2-2017-01"
}
setting {
  namespace = "aws:elb:policies:sslpolicy"
  name      = "LoadBalancerPorts"
  value     = "443"
}
Try this:
setting {
  name      = "SSLReferencePolicy"
  namespace = "aws:elb:policies:policy_name"
  value     = "ELBSecurityPolicy-TLS-1-2-2017-01"
}
SSLReferencePolicy
The name of a predefined security policy that adheres to AWS security best practices and that you want to enable for a SSLNegotiationPolicyType policy that defines the ciphers and protocols that will be accepted by the load balancer. This policy can be associated only with HTTPS/SSL listeners.
Refer:
aws:elb:policies:policy_name
This works:
setting {
  name      = "SSLReferencePolicy"
  namespace = "aws:elb:policies:SSLReferencePolicy"
  value     = "ELBSecurityPolicy-TLS-1-2-2017-01"
}

Bare Metal Cloud - How to set authorized ssh keys for compute instances?

I have successfully provisioned Bare Metal Cloud compute instances using the following code:
public static Instance createInstance(
        ComputeClient computeClient,
        String compartmentId,
        AvailabilityDomain availabilityDomain,
        String instanceName,
        Image image,
        Shape shape,
        Subnet subnet) {
    LaunchInstanceResponse response = computeClient.launchInstance(
            LaunchInstanceRequest.builder()
                    .launchInstanceDetails(
                            LaunchInstanceDetails.builder()
                                    .availabilityDomain(availabilityDomain.getName())
                                    .compartmentId(compartmentId)
                                    .displayName(instanceName)
                                    .imageId(image.getId())
                                    .shape(shape.getShape())
                                    .subnetId(subnet.getId())
                                    .build())
                    .build());
    return response.getInstance();
}
However, I can't SSH into any instances I create via the code above, because there's no parameter on launchInstance to pass in the public key of my SSH keypair.
How can I tell the instance what SSH public key to allow? I know it must be possible somehow since the console UI allows me to provide the SSH public key as part of instance creation.
According to the launch instance API documentation, you need to pass your SSH public key via the ssh_authorized_keys field of the metadata parameter:
Providing Cloud-Init Metadata
You can use the following metadata key names to provide information to Cloud-Init:
"ssh_authorized_keys" - Provide one or more public SSH keys to be
included in the ~/.ssh/authorized_keys file for the default user on
the instance. Use a newline character to separate multiple keys. The
SSH keys must be in the format necessary for the authorized_keys file
The code for this in the Java SDK looks like this:
public static Instance createInstance(
        ComputeClient computeClient,
        String compartmentId,
        AvailabilityDomain availabilityDomain,
        String instanceName,
        Image image,
        Shape shape,
        Subnet subnet) {
    String sshPublicKey = "ssh-rsa AAAAB3NzaC1y...key shortened for example...fdK/ABqxgH7sy3AWgBjfj some description";
    Map<String, String> metadata = new HashMap<>();
    metadata.put("ssh_authorized_keys", sshPublicKey);
    LaunchInstanceResponse response = computeClient.launchInstance(
            LaunchInstanceRequest.builder()
                    .launchInstanceDetails(
                            LaunchInstanceDetails.builder()
                                    .availabilityDomain(availabilityDomain.getName())
                                    .compartmentId(compartmentId)
                                    .displayName(instanceName)
                                    .imageId(image.getId())
                                    .metadata(metadata)
                                    .shape(shape.getShape())
                                    .subnetId(subnet.getId())
                                    .build())
                    .build());
    return response.getInstance();
}
Then the instance will allow you to SSH to it using the SSH keypair for that public key.

How to "start" or "activate" a network with libvirt?

How do you "start" an inactive network using libvirt? With virsh this would be net-start <network>.
I can create a network with virNetworkDefineXML, which will:
Define an inactive persistent virtual network or modify an existing persistent one from the XML description.
(which is the equivalent of virsh net-define), but I don't know how to "start" this newly-created, but inactive network.
I'm using the libvirt-python bindings, but knowing the correct C API would be sufficient.
The API is virNetworkCreate():
Create and start a defined network. If the call succeeds, the network moves from the defined to the running networks pool.
To find this, we can look at the source for virsh. The "net-start" command is defined in tools/virsh-network.c:
static bool
cmdNetworkStart(vshControl *ctl, const vshCmd *cmd)
{
    virNetworkPtr network;
    bool ret = true;
    const char *name = NULL;

    if (!(network = virshCommandOptNetwork(ctl, cmd, &name)))
        return false;

    if (virNetworkCreate(network) == 0) {
        vshPrint(ctl, _("Network %s started\n"), name);
    } else {
        vshError(ctl, _("Failed to start network %s"), name);
        ret = false;
    }

    virNetworkFree(network);
    return ret;
}
In libvirt-python, this means simply calling .create() on the network object returned from .networkDefineXML():
import libvirt

conn = libvirt.open('qemu:///system')

# Define a new persistent, inactive network
xml = open('net.xml', 'r').read()
net = conn.networkDefineXML(xml)

# Set it to auto-start
net.setAutostart(True)

# Start it!
net.create()

How to specify EMR cluster create CLI commands using AWS Java SDK?

OK, this question is where I ended up after trying a few things. I'll first give a brief intro to what I wanted to do and how I got here.
I'm writing a script to start an EMR cluster using the AWS Java SDK. The EMR cluster is to be started inside a VPC and a subnet with a certain id. When I specify the subnet id (the code line below ending with // ******), the EMR cluster stays in the STARTING state and does not move ahead for several minutes, eventually giving up and failing. I'm not sure if there's a bug in the implementation of this functionality in the SDK.
try {
    /**
     * Specifying credentials
     */
    String accessKey = EmrUtils.ACCESS_KEY;
    String secretKey = EmrUtils.SECRET_ACCESS_KEY;
    AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);

    /**
     * Initializing emr client object
     */
    emrClient = new AmazonElasticMapReduceClient(credentials);
    emrClient.setEndpoint(EmrUtils.ENDPOINT);

    /**
     * Specifying bootstrap actions
     */
    ScriptBootstrapActionConfig scriptBootstrapConfig = new ScriptBootstrapActionConfig();
    scriptBootstrapConfig.setPath("s3://bucket/bootstrapScript.sh");
    BootstrapActionConfig bootstrapActions = new BootstrapActionConfig(
            "Bootstrap Script", scriptBootstrapConfig);

    RunJobFlowRequest jobFlowRequest = new RunJobFlowRequest()
            .withName("Java SDK EMR cluster")
            .withLogUri(EmrUtils.S3_LOG_URI)
            .withAmiVersion(EmrUtils.AMI_VERSION)
            .withBootstrapActions(bootstrapActions)
            .withInstances(
                    new JobFlowInstancesConfig()
                            .withEc2KeyName(EmrUtils.EC2_KEY_PAIR)
                            .withHadoopVersion(EmrUtils.HADOOP_VERSION)
                            .withInstanceCount(1)
                            .withEc2SubnetId(EmrUtils.EC2_SUBNET_ID) // ******
                            .withKeepJobFlowAliveWhenNoSteps(true)
                            .withMasterInstanceType(EmrUtils.MASTER_INSTANCE_TYPE)
                            .withTerminationProtected(true)
                            .withSlaveInstanceType(EmrUtils.SLAVE_INSTANCE_TYPE));

    RunJobFlowResult result = emrClient.runJobFlow(jobFlowRequest);
    String jobFlowId = result.getJobFlowId();
    System.out.println(jobFlowId);
} catch (Exception e) {
    e.printStackTrace();
    System.out.println("Shutting down cluster");
    if (emrClient != null) {
        emrClient.shutdown();
    }
}
When I do the same thing using the EMR console, the cluster starts, bootstraps, and successfully goes into the WAITING state. Is there any other way I can specify the subnet id to start a cluster? I believe boto allows sending additional parameters as a string, and I found something similar in Java: .withAdditionalInfo(additionalInfo), a method of RunJobFlowRequest that takes a JSON string as an argument. I don't, however, know the key that should be used for the EC2 subnet id in that JSON string.
(Using Python boto is not an option for me; I've faced other showstopping issues with it and had to switch to the AWS Java SDK.)