Assumption: Terraform support is installed in MS Visual Studio Code.
Since CloudFormation supports JSON templates and Terraform also supports JSON, this seems like a yes. However, when I load a CloudFormation template into MS Visual Studio Code and rename it from test.json to test.tf, VS Code doesn't recognize the formatting, at least not visually, as the name implies it should.
I also tried to just Run the test.json and test.tf files, and Code says it doesn't know how to debug JSON. Code also can't find a JSON debugger in the marketplace (which seems a little hard to imagine).
Anyone else have experience with this?
Since CloudFormation supports JSON templates and Terraform also supports JSON, this seems like a yes.
This is far from being true.
Although both Terraform and CloudFormation support JSON files, this does not mean that the syntax of those JSON files is understood by both of them. They are entirely different products developed by different maintainers, and they have different ways of defining and managing the resources you want to provision.
Terraform's AWS provider has support for creating CloudFormation stacks; there is more info in the documentation. If you really want to, you might be able to provision resources from CFN files, but this is certainly not accomplished just by renaming test.json to test.tf.
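If you go that route, a minimal sketch might look like the following (the resource label, stack name and file name are just placeholders; it assumes your existing CloudFormation template sits next to your Terraform files as test.json):

resource "aws_cloudformation_stack" "example" {
  name          = "example-stack"
  # Reuse the existing CloudFormation template unchanged
  template_body = file("${path.module}/test.json")
}

Terraform then only manages the stack itself; the resources inside it are still defined by and created through CloudFormation.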
It seems that you have misunderstood some things:
Both CloudFormation files (JSON or YML) and Terraform files (TF or JSON) are written in a declarative language. This is true for JSON and YML in general. You can't debug or run these files, as they only describe an infrastructure (or an object in general) and don't implement any logic.
For Terraform you need to install the HashiCorp.terraform extension. This will give you syntax highlighting.
For CloudFormation I recommend the cfn-lint extension.
Both in CloudFormation and in Terraform you edit the files (JSON, YML, TF) and use a CLI to deploy your code. Terraform works at a much higher level than CloudFormation. CloudFormation can only be used to deploy stacks, i.e. sets of resources. If you also need to deploy application code, you must have uploaded it beforehand to an S3 bucket and reference it from there. Also, in many situations you need to create a stack and then update it, or you need to combine two or more stacks in a Stack Set.
Terraform manages all this complexity for you and offers a lot of features on top of CloudFormation. As already said, the JSON file formats aren't the same and can't be used interchangeably. You can provision a CloudFormation stack with Terraform, though.
Is there any tool to generate a .gitlab-ci.yml file, like Jenkins has the job-dsl-plugin to create jobs?
The Jenkins DSL plugin allows me to generate jobs using Groovy, which outputs an XML file that describes a job for Jenkins.
I can use the DSL and a JSON file to generate jobs in Jenkins. What I'm looking for is a tool to help me generate a .gitlab-ci.yml file based on a specification.
The main question I have to ask is: what is your goal?
Just reduce maintenance effort for repeating job snippets:
Sometimes the .gitlab-ci.yml files are pretty similar across a lot of projects, and you want to manage them centrally. Then I recommend taking a look at Having GitLab Projects calling the same gitlab-ci.yml stored in a central location, which shows multiple ways of centralizing your build.
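As a rough sketch (the project path and file name are made up), each project's .gitlab-ci.yml can then shrink to a single include of the centrally maintained template:

include:
  - project: 'my-group/ci-templates'
    ref: main
    file: '/templates/common.gitlab-ci.yml'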
Generate the pipeline configuration because the build is highly flexible:
Actually this is more of a templating task, and it can be achieved in nearly every scripting language you like: plain Bash, Groovy, Python, Go, you name it. In the end the question is what kind of flexibility you strive for and what kind of logic you need for the generation. I will not go into the details of how to generate the .gitlab-ci.yml file, but rather into how to use it in your next step, because this is in my opinion the most crucial part. You can simply generate and commit it, but you can also have GitLab CI generate the file for you and use it in the next job of your pipeline:
setup:
  script:
    - echo ".." # generate your yaml file here, maybe use a custom image
  artifacts:
    paths:
      - generated.gitlab-ci.yml

trigger:
  needs:
    - setup
  trigger:
    include:
      - artifact: generated.gitlab-ci.yml
        job: setup
    strategy: depend
This allows you to generate a child pipeline and execute it - we use this for highly generic builds in monorepos.
See for further reading:
GitLab Jsonnet example - a documentation example for generating YML files within a pipeline
Dynamic child pipelines - documentation for dynamically created pipelines
I'm using a set of T4 templates in most of my MVC projects that create a set of Managers (think repositories), ViewModels and Extensions - utility extension methods such as ToModel(), ToViewModel() and ToSelectList(). So far so good. An enormous amount of basic 'plumbing' code is now written for me.
What I'd really like is an ability to configure variables that are used within those templates from an external file and then have the template use that file when executing.
I know I can run one T4 template from within another, but I can't find a way to add configuration in a separate file.
Presently, I include an 'Entity' table in my database and use that for configuration. It works but it feels dirty to have this in the database.
T4 is just C#/VB.NET code in the end, so you can use pretty much any libraries you want. If you want an external configuration file, you could use Json.NET and a simple JSON file in your project. At the start of your template, use the file I/O in the framework to read your JSON file's contents, pass that to Json.NET and then extract the parameters you need. The most common way to use Json.NET is to serialize and deserialize classes, but it also gives you access to a lower-level, dictionary-like JSON object that you can query with LINQ to get any data you need out of the JSON.
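As a rough sketch of that idea (the file name, key name and assembly reference are placeholders; depending on your setup the assembly directive may need a full path to the Newtonsoft.Json DLL, and hostspecific is needed so the template can resolve paths relative to itself):

<#@ template hostspecific="true" language="C#" #>
<#@ assembly name="Newtonsoft.Json" #>
<#@ import namespace="System.IO" #>
<#@ import namespace="Newtonsoft.Json.Linq" #>
<#
    // Hypothetical config file sitting next to this template
    var configPath = Host.ResolvePath("templateConfig.json");
    var config = JObject.Parse(File.ReadAllText(configPath));

    // Pull out whatever values the generated code needs
    var modelNamespace = (string)config["modelNamespace"];
#>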
But remember there is always more than one way to solve a problem, and this is a problem I have been trying to solve for a while. My preferred solution is an extension that I have created called T4 Awesome. My extension takes a totally different approach to using T4 for scaffolding inside Visual Studio. I add multiple tool windows and context menus around the IDE to make managing and using T4 templates faster and easier. I have a dynamic UI that lets you define simple parameters and pass them to your templates, and it also gives you much more control over the final output file locations. Feel free to check it out. Full disclosure: I charge for this extension, but there is a free community version that should be able to do what you want.
I am about to deploy a Logstash instance that will handle a variety of inputs and perform multiple filter actions. The configuration file will most likely end up having a lot of if-then statements, given the complexity and number of the inputs.
My questions are:
Is there any way to make the configuration file more 'modular'? In a programming sense, I would create functions/subroutines so that I could test them independently. I've thought about dynamically creating mini configuration files that I can use for testing; these mini files could then be combined into one production configuration.
Are there any "best practices" for testing, deploying and managing more complicated Logstash configurations?
Thanks!
There's no support for functions/subroutines per se. I break up different filters into separate files to keep a logical separation and avoid having gigantic files. I also keep inputs and outputs in separate files. That way I can combine all filters with debug inputs/outputs, for example:
input {
  stdin {}
}

output {
  stdout {
    codec => rubydebug
  }
}
and invoke Logstash by hand to inspect the results for a given input. Since filter ordering matters, I'm relying on the fact that Logstash reads configuration files in alphabetical order, so the files are named NN-some-descriptive-name.conf, where NN is an integer.
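For illustration (the directory and file names are made up, and the exact invocation may differ between Logstash versions), a debug run copies the debug input/output above into a scratch directory together with the filter files and points Logstash at it; thanks to the NN prefixes the filters are still applied in the intended order:

logstash-debug/
  00-debug-input.conf        # the stdin input above
  10-some-filter.conf
  20-another-filter.conf
  99-debug-output.conf       # the stdout/rubydebug output above

bin/logstash -f logstash-debug/

You can then paste sample log lines on stdin and inspect the parsed events that come out.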
I've also written a script that automates this process by letting you write a spec with test inputs and the expected resulting messages, and if there's a mismatch it'll bail out with an error and display the diff. I may be able to open source it.
As for deployment, use any configuration management system like Puppet, Chef, SaltStack, Ansible, CFEngine, or similar that you're familiar with. I'm quite happy with Ansible.
As @Magnus Bäck stated, the answer to 1 is no; currently there is no support for functions.
But as for your second question, there is a way to make the Logstash configuration more modular: you can split the configuration into multiple files and point Logstash at the directory containing them.
Check the directory option in the Logstash man page:
-f, --config CONFIG_PATH Load the logstash config from a specific file
or directory. If a directory is given, all
files in that directory will be concatenated
in lexicographical order and then parsed as a
single config file. You can also specify
wildcards (globs) and any matched files will
be loaded in the order described above.
What is a good way to coordinate configuration changes through environments?
In an effort to decouple our code from the environment, we've moved all environmental config to external files. So the application might look for ${application.config.dir}/app.properties, and app.properties could contain:
user.auth.endpoint=http://some.url/user
user.auth.apikey=abcd12314
The problem is, user.auth.endpoint needs to point to a test resource when on test, a staging resource when on the staging environment, and a production resource when on prod.
We could maintain different copies of the config file but this would violate DRY and become very unwieldy (there are 20+ production environments).
What's a good way to manage this? What tools should I be searching for?
Externalizing config is a good idea; you could externalize it all the way to environment variables.
Env vars are easy to change between deploys without changing any code;
unlike config files, there is little chance of them being checked into
the code repo accidentally; and unlike custom config files, or other
config mechanisms such as Java System Properties, they are a language-
and OS-agnostic standard.
From http://12factor.net/config
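As a sketch (the variable names simply mirror the properties in the question, and the test values are made up), each environment exports its own values, for instance in the service's startup script or unit file:

# test box
export USER_AUTH_ENDPOINT=http://test.some.url/user
export USER_AUTH_APIKEY=test-key-123

# production box
export USER_AUTH_ENDPOINT=http://some.url/user
export USER_AUTH_APIKEY=abcd12314

The application then reads them at startup (e.g. System.getenv("USER_AUTH_ENDPOINT") in Java) instead of loading a per-environment properties file.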
I know of three approaches to this.
The first approach is to write, say, a Python "wrapper" script for your application. The script will find out some environmental details, such as hostname, user name and values of environment variables, and then construct the appropriate configuration file (or a set of command-line options) that is passed to the real application.
The second approach is to embed an interpreter for a scripting language (Python, Lua and Tcl come to mind) into your application. This makes it possible for you to write a configuration file in the syntax of that embedded scripting language. In this way, the configuration file can make use of the scripting language's features, such as the ability to query environment variables or execute a shell command (such as hostname) and use if-then-else statements to set variables appropriately.
The third approach (if you are using C++ or Java) is to use the open-source Config4* library (disclaimer: I am the main developer of that). I recommend you read Chapter 2 of the "Config4* Getting Started" manual to see examples of how its flexible syntax can enable a single configuration file to adapt to multiple environments.
You can take a look at http://www.configapp.com. You work with 1 configuration file, and switch/tab between the environments. Internally it's just 1 configuration file, and it handles the environment variables and generates the config file for the specific environment. In Config terminology, you have 1 Prod environment with 20+ instances. You will have a Prod environment configuration and you can tweak the 20+ instances accordingly using a web interface.
You moved environment-specific properties to a separate file, but with Config you don't have to do that. With Config, you can have one configuration file with support for environment variables and common configuration applied to all environments.
Note that I'm part of the Config team.
I'd like to know the most efficient way of handling properties in Scala. I'm tired of having a gazillion property files, XML files and other types of configuration files in Java, and I wonder if there's a "best practice" for handling those more efficiently in Scala.
Why would you have a gazillion property files?
I'm still using the Apache Commons Digester, which works perfectly well in Scala. It's basically a very easy way of making a user-defined XML document map to method calls on a user-defined configurator class. I find it extremely useful when I want to parse some configuration data (as opposed to application properties).
For application properties, you might either use a dependency injection framework (like Spring) or just plain old property files. I'd also be interested to see if Scala offers anything on top of this, though.
EDIT: Typesafe config gives you a simple and powerful solution for configuration - https://github.com/typesafehub/config
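A minimal sketch of that approach (the file location and keys are illustrative): put an application.conf on the classpath and load it with ConfigFactory:

# src/main/resources/application.conf
app {
  name    = "demo"
  timeout = 30
}

// Scala
import com.typesafe.config.ConfigFactory

val config  = ConfigFactory.load()            // picks up application.conf from the classpath
val name    = config.getString("app.name")
val timeout = config.getInt("app.timeout")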
ORIGINAL (possibly not very useful):
Quoting from "Programming in Scala":
"In Scala, you can configure via Scala code itself."
Scala's runtime linking allows classes to be swapped at runtime, and the general philosophy of these languages tends to favour convention over configuration. If you don't want to deal with a gazillion property files, just don't have them.
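As an illustrative sketch of what "configuration via Scala code itself" can look like (all names here are made up), the environment-specific values are just ordinary Scala objects that you pick at build or launch time:

// Plain Scala standing in for a properties file
trait AppConfig {
  def authEndpoint: String
  def apiKey: String
}

object TestConfig extends AppConfig {
  val authEndpoint = "http://test.local/user"
  val apiKey       = "test-key"
}

object ProdConfig extends AppConfig {
  val authEndpoint = "http://prod.example.com/user"
  val apiKey       = "prod-key"
}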
Check out Configgy, which looks like a neat little library. It includes nesting and change notification. It also includes a logging library.
Unfortunately, it didn't compile for me on the Mac instances I tried. Let us know if you have better luck and what you think...
Update: solved Mac compilation problems. See this post.