I see that there is a gce_lb Ansible module, but it is unclear to me whether I can actually use it to change the instances assigned to an existing LB, or whether the module only creates and destroys LBs.
In contrast, EC2 clearly has one module just for creating and destroying ELBs, and another module explicitly for [de]registering instances to/from an existing ELB.
Currently, the gce_lb module only creates/destroys the LB. It does not support adding/removing instances.
The GCE modules in Ansible are built on top of the Python libcloud library, which does support adding/removing members. I think an approach similar to the one taken by the EC2 modules would be a good solution here as well.
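For reference, a rough sketch of what the underlying libcloud API offers; the credentials, project, balancer name, and member values below are all placeholders, and something like this is what an instance-[de]registration module could wrap:

from libcloud.loadbalancer.base import Member
from libcloud.loadbalancer.providers import get_driver
from libcloud.loadbalancer.types import Provider

# Placeholder credentials/project -- substitute your own
gce = get_driver(Provider.GCE)(
    'service-account@my-project.iam.gserviceaccount.com',
    'key.pem',
    project='my-project',
)

balancer = gce.get_balancer('my-lb')

# Register an existing instance with the LB...
member = Member(id='my-instance', ip='10.240.0.5', port=80)
balancer.attach_member(member)

# ...and deregister it again
balancer.detach_member(member)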
Assumption: Terraform support installed in MS Visual Studio Code.
Since CloudFormation supports JSON templates and Terraform supports JSON, this seems like a yes. However, when I load a CloudFormation template into MS Visual Studio Code and rename it from test.json to test.tf, VS Code doesn't recognize the formatting (well, visually, as the name implies).
I also tried to just Run the test.json and test.tf files, and Code says it doesn't know how to debug JSON. Code also can't find a JSON debugger in the marketplace (which seems a little hard to imagine).
Anyone else have experience with this?
Since CloudFormation supports JSON templates and Terraform supports JSON, this seems like a yes.
This is far from being true.
Although both Terraform and CloudFormation support JSON files, this does not mean that the syntax of those JSON files is understood by both of them. They are entirely different products, developed by different maintainers, and they have different ways of defining and managing the resources you want to provision.
Terraform's AWS provider does have support for creating CloudFormation stacks, via the aws_cloudformation_stack resource; there is more info in the documentation. If you really want to, you may be able to provision resources from CFN templates that way, but it is certainly not accomplished just by renaming a test.json to test.tf.
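A minimal sketch of that escape hatch, assuming the stack name and template path below (both made up):

resource "aws_cloudformation_stack" "example" {
  # Feeds an existing CloudFormation template to Terraform as-is;
  # Terraform manages the stack, CloudFormation manages the resources.
  name          = "example-stack"
  template_body = file("${path.module}/test.json")
}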
It seems that you have misunderstood some things:
Both CloudFormation files (JSON or YML) and Terraform files (TF or JSON) are written in a declarative language; this is true for JSON and YML in general. You can't debug or run these files, as they only describe an infrastructure (or an object in general) and don't implement any logic.
For Terraform you need to install the HashiCorp.terraform extension. This will give you syntax highlighting.
For CloudFormation I recommend the cf-lint extension.
Both in CloudFormation and in Terraform you edit the files (JSON, YML, TF) and use a CLI to deploy your code. Terraform works at a much higher level than CloudFormation. CloudFormation can only be used to deploy stacks, i.e. sets of resources; if you also need to deploy application code, you must have uploaded it to an S3 bucket beforehand and reference it from there. In many situations you also need to create a stack and then update it, or combine two or more stacks in a Stack Set.
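For illustration, the two CLI workflows look roughly like this (stack and file names are made up):

# Terraform: every *.tf file in the working directory forms one configuration
terraform init    # download providers
terraform plan    # preview the changes
terraform apply   # create/update the resources

# CloudFormation: one template per stack, driven through the AWS CLI
aws cloudformation create-stack --stack-name my-stack --template-body file://stack.json
aws cloudformation update-stack --stack-name my-stack --template-body file://stack.json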
Terraform manages all this complexity for you and offers a lot of features on top of CloudFormation. As already said, the JSON file formats aren't the same and can't be used interchangeably. You can provision a CloudFormation stack with Terraform, though, as shown above.
I'm trying to install OpenWhisk on OpenShift. I've followed the official guide and it worked.
Now the point is that my environment would be a multitenant ecosystem, so let's pretend we have two different users (Ux and Uy) who want to run their containers on my OpenWhisk environment.
I'd like to have the following projects in my OpenShift:
Core project, that hosts OpenWhisk's Ingress, Controller, Kafka and CouchDB components (maybe also the Invokers?)
UxPRJ project, that hosts only the containers running actions created by Ux (maybe also the Invokers?)
UyPRJ project, that hosts only the containers running actions created by Uy (maybe also the Invokers?)
[The original post included two diagrams illustrating these possible layouts.]
Is this configuration possible?
Looking around, I wasn't able to find anything like that...
Thank you.
The OpenWhisk load balancer, which assigns functions to invokers, does not segregate users in the way you want, but it is possible to do what you want if you modify the load balancer. The way it works now, there is a list of available invokers which forms the allowed set of invokers for a function assignment. At that point you could take a user-based partitioning into consideration and form the allowed set of invokers differently. There are other ways to realize the partitioning you want as well, but all of them require modifying the OpenWhisk control plane.
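To make the idea concrete, here is a hypothetical sketch, not the actual controller API; Invoker and its project field are made-up names for illustration:

// Illustrative model only: the real controller also tracks invoker
// health, capacity, etc. Here each invoker is pinned to one tenant project.
case class Invoker(id: Int, project: String)

// Form the allowed set from only the invokers of the user's project
def allowedInvokers(healthy: Seq[Invoker], userProject: String): Seq[Invoker] =
  healthy.filter(_.project == userProject)

// Example: an action created by Uy may only land on invoker 1
val invokers = Seq(Invoker(0, "UxPRJ"), Invoker(1, "UyPRJ"))
assert(allowedInvokers(invokers, "UyPRJ").map(_.id) == Seq(1))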
I'm following the official docs to write a Scala script for launching an EMR cluster using the AWS Java SDK. I'm able to identify 3 major steps needed here:
Instantiating an EMR Client
I do this using AmazonElasticMapReduceClientBuilder.defaultClient()
Creating a JobFlowRequest
I create a RunJobFlowRequest object and supply it with JobFlowInstancesConfig (both objects are supplied with appropriate parameters depending on the requirement)
Running JobFlowRequest
This is done by calling emrClient.runJobFlow(runJobFlowRequest) which returns a RunJobFlowResult object
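Roughly, what I have looks like this (names and values are placeholders):

import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder
import com.amazonaws.services.elasticmapreduce.model.{JobFlowInstancesConfig, RunJobFlowRequest}

// 1. Instantiate an EMR client
val emrClient = AmazonElasticMapReduceClientBuilder.defaultClient()

// 2. Create the JobFlowRequest
val instances = new JobFlowInstancesConfig()
  .withInstanceCount(3)
  .withMasterInstanceType("m5.xlarge")
  .withSlaveInstanceType("m5.xlarge")
  .withKeepJobFlowAliveWhenNoSteps(true)

val runJobFlowRequest = new RunJobFlowRequest()
  .withName("my-cluster")
  .withReleaseLabel("emr-5.29.0")
  .withServiceRole("EMR_DefaultRole")
  .withJobFlowRole("EMR_EC2_DefaultRole")
  .withInstances(instances)

// 3. Run the JobFlowRequest
val result = emrClient.runJobFlow(runJobFlowRequest)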
But the returned RunJobFlowResult object doesn't provide any clue as to whether the cluster was launched successfully or not (with all the given configurations).
Now I'm aware that the listClusters() method of the emrClient can be used to get the cluster id of the newly-launched cluster, through which we can query the state of the cluster using a describeCluster() call. However, since I'm using a Scala script to perform all this stuff, I need the process to be automated (looking up the cluster id in the result of listClusters() would have to be done manually).
Is there any way this could be achieved?
You have all the pieces there but haven't quite stitched them together.
The cluster's id can be retrieved from RunJobFlowResult.getJobFlowId(). (It is a string starting with "j-".) Then you can pass this jobFlowId to DescribeCluster.
I don't blame you for your confusion, though, since it's called "jobFlowId" in some methods (mainly older API methods) and "clusterId" in others. They are really the same thing.
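Putting it together, a minimal sketch (the runJobFlowRequest from your question is assumed to be in scope, and the polling interval is arbitrary):

import com.amazonaws.services.elasticmapreduce.model.DescribeClusterRequest

val result = emrClient.runJobFlow(runJobFlowRequest)
val clusterId = result.getJobFlowId          // a string like "j-..."

val describeRequest = new DescribeClusterRequest().withClusterId(clusterId)

// Poll until the cluster is past its startup phases
var state = emrClient.describeCluster(describeRequest).getCluster.getStatus.getState
while (state == "STARTING" || state == "BOOTSTRAPPING") {
  Thread.sleep(30000)  // 30s between polls; tune as needed
  state = emrClient.describeCluster(describeRequest).getCluster.getStatus.getState
}

// WAITING or RUNNING means the launch succeeded;
// TERMINATED_WITH_ERRORS means it did not
println(s"Cluster $clusterId is now $state")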
I have seen the terms "code over configuration" and "configuration over code" many times on Google. I tried searching for them, but still got nothing. Recently I started working with gulp, and again the mystery came up: code over configuration.
Can you please tell me what both of these mean and what the difference between them is?
Since you tagged this with gulp, I'll give you a popular comparison to another tool (Grunt) to show the difference.
Grunt’s tasks are configured in a configuration object inside the Gruntfile, while Gulp’s are coded using a Node style syntax.
taken from here
So basically, with configuration you have to give your tool the information it needs to work the way it thinks it has to work.
If you focus on code, you tell your tool directly what steps it has to complete.
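Here is a hedged sketch of the same minification step in both styles (the plugin choice and file names are made up):

// Gruntfile.js -- configuration over code: you describe WHAT in an
// object, and the tool decides how to run it
module.exports = function (grunt) {
  grunt.initConfig({
    uglify: {
      build: { src: 'src/app.js', dest: 'dist/app.min.js' }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('default', ['uglify']);
};

// gulpfile.js -- code over configuration: you write the steps yourself
const gulp = require('gulp');
const uglify = require('gulp-uglify');

function scripts() {
  return gulp.src('src/app.js')
    .pipe(uglify())
    .pipe(gulp.dest('dist'));
}
exports.default = scripts;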
There's quite a bunch of discussion about which one is better. You'll have to have a read and decide what fits your project best.
Code over configuration (followed by gulp) and its opposite, configuration over code (followed by grunt), are approaches/principles in software development; both gulp and grunt are used for the same thing, automating tasks. The approaches differ in whether you develop programs by writing code directly or by supplying programmer-defined configuration. Each approach has its own context/purpose, and it's not a question of which one is better.
In gulp, each task is a JavaScript function; no configuration is necessarily involved up-front (although functions can normally take configuration values), and you chain multiple functions together to create a build script. Gulp uses Node streams: a stream is basically a continuous flow of data that can be manipulated asynchronously. In grunt, on the other hand, all tasks are configured in a configuration object in a file, and they run in sequence.
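A small sketch of that streaming style (the plugin and paths are illustrative):

const gulp = require('gulp');
const concat = require('gulp-concat');

// Each .pipe() step transforms the files asynchronously as they
// flow through the stream, from source glob to destination folder.
function styles() {
  return gulp.src('src/css/*.css')   // source: a stream of files
    .pipe(concat('bundle.css'))      // transform: concatenate them
    .pipe(gulp.dest('dist/css'));    // sink: write the result
}
exports.styles = styles;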
Reference: https://deliciousbrains.com/grunt-vs-gulp-battle-build-tools/
Because you talked about "code" I'll try and give a different perspective.
While answering a question on figuring out the IP address from inside a Docker container (Docker container IP address from .net project), I saw that there are two possible approaches. One is code:
var ipAddress = HttpContext.Request.HttpContext.Connection.LocalIpAddress;
This will give you the IP address at runtime, but you won't have any control over it.
It can also lead to more code in the future to do something with the IP address, like feeding a load balancer or the like.
I'd prefer configuration over it, such as:
Environment variables with pre-configured IP addresses for each container service, such as:
WEB_API_1_IP=192.168.0.10
WEB_API_2_IP=192.168.0.11
...
NETWORK_SUBNET=192.168.0.0/24
A docker-compose file that ties the environment variables to the IP addresses of the containers, such as:
version: '3.3'
services:
  web_api:
    ...
    networks:
      public_net:
        ipv4_address: ${WEB_API_1_IP}
  ...
And some .NET code that links the two and gives access within the code:
Host.CreateDefaultBuilder(args)
    .ConfigureAppConfiguration((hostingContext, config) =>
    {
        // Pull the pre-configured IP addresses into IConfiguration
        config.AddEnvironmentVariables();
    })
The code we end up writing just reads the configuration, but it gives way better control. Depending on which environment you are running in, you can have different environment files. The subnet, the number of machines: these are all configured options rather than tricky code, which would require more maintenance and be error prone.
In the CloudBees wiki, this page explains how to add a configuration parameter for an app deployment, using cloudbees-web.xml.
But, is the content of:
<appid>APP_ID</appid>
injected as well? How can I retrieve this value from my application's code?
My preference is to avoid coding an application to contain explicit references to the container within which it runs. So I would favour using techniques that do not tie your code to CloudBees (a.k.a. us).
Thus I would use a container-specific descriptor file that configures a context parameter; your application then just reads the context parameter and uses it directly.
There are two techniques for doing this:
Application Environments: personally I love this way... though if you want to deploy the application to your own test environment that you have just spun up yourself, your cloudbees-web.xml will likely be missing the required environment definition... so it is better to use the newer
Configuration Parameters, so that when you need your own test instance you just define the configuration parameters for that test environment and then deploy the exact same artifact to that instance... it also prevents the issue of deploying to the test instance with the production environment turned on.
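Whichever of the two techniques defines the parameter, reading it back is plain Servlet API. A minimal sketch (the parameter name "appId" is only an example, not a CloudBees-defined key):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AppInfoServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Returns null when the parameter is not configured in this container
        String appId = getServletContext().getInitParameter("appId");
        resp.getWriter().println("appId = " + appId);
    }
}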
I am sure one of the RUN@cloud team may well have some other trick, such as a System property that tells you the app id... but keep in mind that when running locally, e.g. using a local jetty/tomcat/bees:run container, your code will then blow up!