Abstracting configuration in Pulumi

I've been learning Pulumi (really great tool!) and I have the following scenario. Let's assume that I have a bunch of resources which I'd like to deploy to multiple environments. For each of the environments, I'd like to have different names (for example, the ResourceGroup in dev == "dev", in qa == "qa", etc.). How can I achieve that with Pulumi? What I'd like to avoid is having all the configurations (for all the envs) in one Pulumi config file. With a small number of resources it may work, but with a bigger number, I guess this file will become unreadable. The only solution that comes to my mind is to have a component created for all of the resources and then create a stack for every single environment, which can use the component with the resources. But maybe there's a better option?

Pulumi stacks are designed specifically for this. You can have one Pulumi project that is deployed into multiple environments via stacks.
To get different names per stack:
Create a stack named after the environment, for example:
pulumi stack init dev
Use pulumi.getStack() to get the stack name inside your code.
It will look something like this:
const stack = pulumi.getStack();
const resourceGroup = new azure.core.ResourceGroup("group", {
    name: `group-${stack}`, // use the stack name as part of the resource name
    location: "West US",
});
This example is for JS, but you can find references for other languages in the linked docs.
In addition to stack names, you can also have separate configuration values per stack - https://www.pulumi.com/docs/intro/concepts/project/#stack-settings-file
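For example, each stack gets its own settings file next to the project file. A minimal sketch, assuming a project named `myproject` and a config key `rgName` (both names are illustrative):

```yaml
# Pulumi.dev.yaml: stack settings for the "dev" stack
config:
  myproject:rgName: dev-group
```

In code, `new pulumi.Config().require("rgName")` then reads the value for whichever stack is currently selected.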
See Organizing Projects and Stacks for more info on how to organize your stacks and environments.
Here is a more realistic example of using the stack name as part of a resource name.


Questionnaire tool to create config files

I have an application that needs a configuration file with several inputs which depend on the project that is going to be delivered. Things that are included in this conf file are IPs of databases, activating certain functions depending on the customer's needs, changing the values of some title screens, etc. A short example of such a file could be something like:
postgresdb=192.156.98.98
transactions.enabled=true
application.name="client-1-logistics"
historicaldb=196.125.125.16
....
These files can become large and it might be difficult to find which parameters must be changed, especially if the configuration process has to be done by an external department.
I was looking into some kind of tool or framework that allows you to create some sort of questionnaire by which the user answers yes or no questions and fills out boxes with specific IP's or messages and get as a result the configuration file needed. This would be much tidier as you could group the questions into sections and has the potential of customising the configuration process with more context on the different parameters.
Does anyone know of such a framework? How do you handle this kind of complex configuration process?
The approach I outline below is not exactly what you are looking for, but it might provide some food for thought.
1. Use a template engine (for example, Velocity, or any of the several dozen listed in Wikipedia) to create a templated version of your configuration file, containing lots of boilerplate configuration that won't change, with the occasional ${variable_name} placeholder (the syntax for a placeholder will vary from one template engine to another).
2. Write a small metadata file containing variable_name=value settings.
3. Write a trivial program that: (a) parses the metadata file and loads the variable_name=value settings into a Map (the template engine might refer to the Map as, say, a context object); (b) uses the template engine to parse the template file; (c) merges/evaluates/instantiates the parsed template file with the settings in the Map; and (d) writes the result to the target configuration file.
You might be able to use steps 1 and 3 above without change. It is only step 2 that you need to adapt to your questionnaire requirements. Instead of a questionnaire, perhaps you could give users a document that explains how to write the metadata file.
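A minimal sketch of steps 2 and 3, using a hand-rolled `${...}` substitution for illustration (a real template engine such as Velocity adds loops, conditionals, escaping, and so on):

```javascript
// Step 2: parse a key=value metadata file into a plain object (the "Map").
function parseMetadata(text) {
  const settings = {};
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks/comments
    const eq = trimmed.indexOf("=");
    settings[trimmed.slice(0, eq)] = trimmed.slice(eq + 1);
  }
  return settings;
}

// Step 3: merge the template with the settings; unknown placeholders
// are left untouched so they are easy to spot in the output.
function renderTemplate(template, settings) {
  return template.replace(/\$\{([^}]+)\}/g, (match, name) =>
    name in settings ? settings[name] : match);
}

const metadata = "postgresdb=192.156.98.98\napplication.name=client-1-logistics";
const template = "postgresdb=${postgresdb}\napplication.name=\"${application.name}\"";
console.log(renderTemplate(template, parseMetadata(metadata)));
```

Only `parseMetadata`'s input format would need to change to come from a questionnaire instead of a hand-edited file.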

What is "Code over configuration"?

I have seen these terms many times on Google: code over configuration and configuration over code. I tried searching for them, but still got nothing. Recently I started working with gulp, and again the mystery came up: code over configuration.
Can you please tell me what both of them are and what the difference between them is?
Since you tagged this with gulp, I'll give you a popular comparison to another tool (Grunt) to tell the difference.
Grunt’s tasks are configured in a configuration object inside the
Gruntfile, while Gulp’s are coded using a Node style syntax.
taken from here
So basically with configuration you have to give your tool the information it needs to work like it thinks it has to work.
If you focus on code you tell your tool what steps it has to complete directly.
There's quite a bunch of discussion about which one is better. You'll have to have a read and decide what fits your project best.
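The contrast can be sketched without either tool; everything here is illustrative, not Grunt's or Gulp's actual API:

```javascript
// Configuration over code: describe WHAT you want in a data object,
// and a generic runner interprets it (the Grunt style).
const copyConfig = { copy: { src: ["a.txt"], dest: "build/" } };

function runConfigured(config) {
  const results = [];
  for (const [task, options] of Object.entries(config)) {
    results.push(`${task}: ${options.src.join(",")} -> ${options.dest}`);
  }
  return results;
}

// Code over configuration: say HOW directly, as ordinary function calls
// you can compose and debug like any other code (the Gulp style).
function copy(src, dest) { return `copy: ${src.join(",")} -> ${dest}`; }

console.log(runConfigured(copyConfig)); // [ 'copy: a.txt -> build/' ]
console.log(copy(["a.txt"], "build/")); // copy: a.txt -> build/
```

Both produce the same result; the difference is whether the tool interprets a declarative description or simply runs your functions.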
Code over configuration (followed by gulp) and the opposite, configuration over code (followed by grunt), are approaches/principles in software development; both gulp and grunt are used for the same thing: automating tasks. The terms refer to developing programs according to typical programming conventions versus programmer-defined configurations. Each approach has its own context/purpose, and it's not a question of which one is better.
In gulp, each task is a JavaScript function, with no configuration necessarily involved up-front (although functions can take configuration values), and you chain multiple functions together to create a build script. Gulp uses Node streams; a stream is basically a continuous flow of data that can be manipulated asynchronously. In grunt, however, all the tasks are configured in a configuration object in a file, and they are run in sequence.
Reference: https://deliciousbrains.com/grunt-vs-gulp-battle-build-tools/
Because you talked about "code", I'll try to give a different perspective.
While answering a question on figuring out the IP address from inside a Docker container (Docker container IP address from .net project), there were two possible approaches.
The first is code:
var ipAddress = HttpContext.Request.HttpContext.Connection.LocalIpAddress;
This will give you the IP address at runtime, but you won't have control over it.
It can also lead to more code in the future to do something with the IP address, like feeding a load balancer or the likes.
I'd prefer configuration over it, such as:
An environment file with pre-configured IP addresses for each container service, such as:
WEB_API_1_IP=192.168.0.10
WEB_API_2_IP=192.168.0.11
...
NETWORK_SUBNET=192.168.0.0/24
a docker-compose file that ties the environment variables to the IP addresses of the containers, such as:
version: '3.3'
services:
  web_api:
    ...
    networks:
      public_net:
        ipv4_address: ${WEB_API_1_IP}
...
and some .NET code that links the two and gives access within the code:
Host.CreateDefaultBuilder(args)
    .ConfigureAppConfiguration((hostingContext, config) =>
    {
        config.AddEnvironmentVariables();
    })
The code that we wrote is only about reading the configuration, but it gives way better control. Depending on what environment you are running in, you could have different environment files.
The subnet and the number of machines are all configured options rather than tricky code, which would require more maintenance and be error-prone.

What issues could arise from using class hierarchy to structure the different parts of a configuration setting?

Here is the context of my question. It is typical that one organizes configuration values into different files. In my case, my criteria are easy editing and portability from one server to another. The package is for Internet payments and it is designed so that a single installation of the package can be used for different applications. Also, we expect that an application can have different stages (development, testing, staging and production) on different servers. I use different files for each of the following three categories: the config values that depend only on the application, those that depend only on the server, and those that depend on both. In this way, I can easily move the configuration values that depend only on the application from one server to another, say from development to production. They are edited often, so it is worth it. Similarly, I can edit the values that are specific to the server in a single file without having to maintain redundant copies for the different applications. The term "configuration value" includes anything that must be defined differently in different applications or servers, even functions. If the definition of a function depends on the application or on the server, then it is a part of the configuration process. The term "configuration value" appeared natural to me, even if it includes functions.
Now, here is the question. I wanted the functions to be PHPUnit testable. I use PHP, but perhaps the question makes sense in other languages as well. I decided to store the configuration values as properties and methods in classes and used class hierarchy to organize the different categories. The base class is PaymentConfigServer (depend only on the server). The application dependent values are in PaymentConfigApp extends PaymentConfigServer and those that depend on both are in PaymentConfig extends PaymentConfigApp. The class PaymentConfigApp contains configuration values that depend either on the application or on the server, but the file itself contains values that depend on the application only. Similarly, PaymentConfig contains all conf values, but the file itself contains values that depend on both only. Can this use of class hierarchy lead to issues? I am not looking for discussions about the best approach. I just want to know, if you met a similar situation, what issues I should keep in mind, what conflicts could arise, etc?
Typically, subclasses are used to add or modify functionality rather than remove functionality. Thus, the single-inheritance approach you suggested suffers from a conceptual problem that is likely to result in confusion for anyone who has to maintain the application if/when you get hit by a bus: the base class provides support for server-specific configuration, but then you (pretend to) remove that functionality in the PaymentConfigApp subclass, and (pretend to) re-add the functionality in its PaymentConfig subclass.
I am not familiar with the PHP language, but if it supports multiple inheritance, then I think it would be more logical to have two base classes: PaymentConfigServer and PaymentConfigApp, and then have PaymentConfig inherit from both of those base classes.
Another approach might be to have just a single class in which the constructor takes an enum parameter (or a pair of boolean parameters) that is used to specify whether the class should deal with just server-specific configuration, just application-specific configuration, or both types of configuration.
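That single-class idea might look roughly like this (JavaScript for brevity; the class, scope and value names are all illustrative, not the asker's actual API):

```javascript
// A stand-in for the enum parameter the answer describes.
const Scope = Object.freeze({ SERVER: "server", APP: "app", BOTH: "both" });

// One class; the constructor argument decides which categories of
// configuration it loads, replacing the three-level inheritance chain.
class PaymentConfig {
  constructor(scope) {
    this.values = {};
    if (scope === Scope.SERVER || scope === Scope.BOTH) {
      this.values.dbHost = "localhost"; // server-specific (illustrative value)
    }
    if (scope === Scope.APP || scope === Scope.BOTH) {
      this.values.appName = "payments"; // application-specific (illustrative)
    }
  }
}

console.log(new PaymentConfig(Scope.BOTH).values);
// { dbHost: 'localhost', appName: 'payments' }
```

Each scope stays independently constructible and unit-testable, without a subclass ever pretending to remove what its parent provides.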
By the way, the approach you are suggesting for maintaining configuration data is not one that I have used. If you are interested in reading about an alternative approach, then you can read an answer I gave to another question on StackOverflow.

Is there a preprocessor for json files? [closed]

I have some configuration files in which I store complex object values as serialized JSON. Currently there is a configuration file for each environment (localhost, dev, prod, etc.) and for each installation by client. Most of the values are identical between the configurations for the different environments, but not all. So for three environments and four clients I currently have 12 total files to manage.
If this were a web.config file, there would be web.config transforms that would solve the problem. If this were C#, I'd have compiler preprocessor directives that could be used to substitute the different values based on the current build configuration.
Does anyone know of anything that works basically this way or have some good suggestion on tried and true ways to proceed? What I would like is to reduce the number of files down to a single instance for each installation that can suffice for each environment.
Configuration of configuration always seems a bit overdone to me, but you could use a properties file for the parts that change, and apache ant's <replace> task to do the substitutions. Something like this:
<replace file="configure.json"
         propertyFile="config-of-config.properties">
  <replacefilter token="#token1#"
                 property="property.key"/>
</replace>
Jsonnet from Google is a language with a superset syntax based on JSON, adding high-level language features that help to model data in JSON format. The compilation step produces JSON. I have used it in a project to describe complex deployment environments that inherit from one another at times and that share domain attributes, albeit utilizing them differently from one instance to another.
As an example, an instance contains applications, tenant subscriptions for those applications, contracts, destinations and so forth. The values for all of these attributes are objects that recur throughout environments.
The docs are very thorough; don't miss the std functions, because they make for some very powerful data-rendering capabilities.
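To give a flavour, the inheritance mentioned above looks roughly like this (field values borrowed from the first question on this page; the file name and structure are illustrative):

```jsonnet
// environments.jsonnet: shared defaults in one object,
// each environment inherits and overrides only what differs.
local base = {
  postgresdb: "192.156.98.98",
  transactions: { enabled: true },
};

{
  dev: base { postgresdb: "127.0.0.1" },  // override a single value
  prod: base,                             // take the defaults as-is
}
```

Compiling this with the jsonnet tool emits plain JSON with one fully expanded object per environment, so the 12-file problem collapses into one source file.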
I wrote a Squirrelistic JSON Preprocessor which uses Golang text/template syntax to generate JSON files based on the parameters provided.
A JSON template can include references to other templates and use conditional logic, comments, variables and everything else which the Golang text/template package provides.
This really comes down to your full stack.
If you're talking about some application that runs solely client-side, with no server-side processing, whatsoever, then there's really no such thing as pre-processing.
You can process the data further before actually using it, but that won't mean that it will be processed prior to the page being served -- it means that people have to sit around, waiting for that to happen before the apps which need that data can be initialized.
The benefit of using JSON to begin with is that it's just a data store, quite language-agnostic, and quite widely supported now. So if it's not 100% client-side, there's nothing stopping you from pre-processing in whatever language you're using on the server, and caching those versions of those files to serve (and cache) to users based on their need.
If you really, really need a system to do live processing of config-files, on the client-side, and you've gone through the work of creating app-views which load early, but show the user that they're deferring initialization (ie: "loading..."/spinners), then download a second JSON file, which holds all of the needed implementation-specific data (you'll have 12 of these tiny little files, which should be simple to manage), parse both JSON files into JS objects, and extend the large config object with the additional data in the secondary file.
Please note: use localStorage or some other storage facility to cache this, so that for HTML5 browsers this longer load only happens one time.
There is one: https://www.npmjs.com/package/json-variables
Conceptually, it is a function which takes a string of JSON contents, sprinkled with specially marked variables, and produces a string with those variables resolved. Just like Sass or Less do for CSS, it's used to DRY up the source code.
Here's an example.
You'd put something like this in JSON file:
{
  "firstName": "customer.firstName",
  "message": "Hi %%_firstName_%%",
  "preheader": "%%_firstName_%%, look what's inside"
}
Notice how it's DRY — single source of truth for the firstName value.
json-variables would process it into:
{
  "firstName": "customer.firstName",
  "message": "Hi customer.firstName",
  "preheader": "customer.firstName, look what's inside"
}
That is, Hi %%_firstName_%% would look for firstName at the root level (but equally, it could be a deeper path, for example data1.data2.firstName). Resolving also "bubbles up" to the root level, and you can use custom data structures and more.
Missing pieces of a JSON-processing task puzzle are:
Means to merge multiple JSON files, various ways (object-merge-advanced)
Means to orchestrate actions — Gulp is good if your preferred programming language is JS
Means to get/set values by path (object-path — its notation uses dots only, no brackets: key1.key2.array.2 instead of key1.key2.array[2])
Means to maintain the same set of keys across set of JSON files - you add a key in one, it's added on all others (object-fill-missing-keys)
In the described case, we can take at least two approaches: one-to-many, or many-to-many.
The former: Gulp could be "baking" many JSON files from one or more JSON-like source files, with json-variables DRY-ing up the references.
The latter: alternatively, it could be a "managed" set of JSON files rendered into a set of distribution files — Gulp watches the src folder, runs object-fill-missing-keys to normalise the schemas, maybe even sorting the objects (yes, it's possible: sorted-object).
It all depends on how similar the desired set of JSON files is, how the values are customised, and whether that is done manually or programmatically.

merging 2 html files into a single vm (velocity macro)

I have two HTML files; I used 2 different frameworks to create 2 different web applications, one for smartphones and one for other devices such as tablets.
Now I have to use a Velocity macro and merge these two HTML files into a single VM (Velocity template) that generates 2 outputs depending on a configuration.
I have been searching for methods to do this and I found this: http://www.roseindia.net/apachevelocity/macro-wrap-html.shtml
My question is: do I need to build a Java file just like in the link and then make a VM file, or can I just make a single VM file without making any Java files?
If my question is unclear, let me know and I'll try to explain more.
The Java class shown there is just to demonstrate the template, and all the template does is demonstrate how to use the Velocity #macro directive.
IMO putting both HTML files into a single VM template is a bad idea, because it will be large and difficult to understand, modify, and debug. Instead, consider using the #parse or #include directives.
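A minimal sketch of that #parse approach (the file names and the $deviceType variable are illustrative; $deviceType would be put into the Velocity context from your configuration):

```velocity
## main.vm: delegate to the right fragment instead of inlining both pages
#if ($deviceType == "phone")
#parse("phone.vm")
#else
#parse("tablet.vm")
#end
```

Each HTML file stays a separate, independently editable template, and main.vm only holds the selection logic.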
Alternatively, consider a mechanism at a higher level to serve the appropriate pages directly instead of pushing the decision-making process into the templates themselves; this is arguably the best solution.