The third factor of "The Twelve-Factor App" states that application configuration must be stored in the environment.
For the image of a Quarkus application deployed on OpenShift, what is the best solution? Where should I put the application configuration?
Thanks a lot.
Kind regards.
ConfigMaps are designed for this exact purpose. (The opening line of the documentation is "Config maps allow you to decouple configuration artifacts from image content to keep containerized applications portable.") Secrets are the equivalent functionality for sensitive information.
ConfigMaps are generally what I recommend for most situations because, as you say, they are a universal solution that can be used for all kinds of applications, no matter how they are implemented. Pods have many ways of accessing ConfigMaps, all OpenShift and Kubernetes tools understand how to work with ConfigMaps (including things like CI/CD), and ConfigMaps are native API resources. Because they are so pervasive, many frameworks also have native support for them, including Quarkus, which was your specific example.
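As an illustration, a ConfigMap can hold a Quarkus `application.properties` file and be mounted into the pod so the application reads it at startup. This is only a sketch: the names, image, property, and mount path below are all assumptions, not taken from your setup.

```yaml
# Hypothetical ConfigMap carrying a Quarkus application.properties
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-quarkus-config
data:
  application.properties: |
    greeting.message=Hello from OpenShift
---
# Pod mounting the ConfigMap; the mount path assumes the image's
# working directory is /deployments, so Quarkus can find
# config/application.properties at startup.
apiVersion: v1
kind: Pod
metadata:
  name: my-quarkus-app
spec:
  containers:
  - name: app
    image: quay.io/example/my-quarkus-app:latest
    volumeMounts:
    - name: config
      mountPath: /deployments/config
  volumes:
  - name: config
    configMap:
      name: my-quarkus-config
```

The same data could instead be injected as environment variables via `envFrom`, which maps even more directly onto the twelve-factor "config in the environment" wording.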
That said, you also ask in your comments whether you can use Consul. Absolutely you can; Consul is certified on OpenShift. There are LOTS of third-party solutions, and there are reasons that people choose to use them.
But if you ask the generic question "How do you do this in OpenShift?" I will point you to ConfigMaps. If you want to use something else (like Consul) then how you access and manage that config really has to be a question for that product. It looks like there is a Quarkus extension for Consul, but I don't know anything about it.
I am looking for a way to collect Java exceptions thrown by containers. I know this functionality from the logging system of GKE/GCP and would like to implement a similar logging system in our self-hosted cluster.
I am using Prometheus and Grafana for monitoring metrics.
You need a centralized logging solution. There are some common solutions out there. One of them is the ELK Stack (now named the Elastic Stack).
It has 3 main components:
Elasticsearch: To store the logs, index them, make them searchable etc.
Logstash: To collect the logs from various sources (containers in your case), parse/filter them and push them to other systems. In ELK's case, push them to Elasticsearch.
Kibana: A web GUI to visualize the data in Elasticsearch, allows searching, creating visual graphs and so on.
See the official page of Elastic stack for more information.
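For Java exceptions specifically, the tricky part is that a stack trace spans many lines, so the collector should join those lines into a single event before indexing. A minimal Logstash pipeline sketch is below; the file path, the Elasticsearch host, and the assumption that your log lines start with an ISO timestamp are all placeholders for your actual setup.

```
input {
  file {
    path => "/var/log/containers/*.log"
    # Any line that does not start with a timestamp is treated as a
    # continuation of the previous event, so stack traces stay together.
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
```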
You can also use Fluentd or Fluent Bit instead of Logstash, so it'll be an EFK stack. I personally had pretty good experience with an EFK stack with Fluent Bit.
For another, lighter alternative, you can check out Grafana Loki, which is kind of a logging extension to the popular monitoring setup of Prometheus+Grafana.
If you are already familiar with the Stackdriver solution from GKE, I'd bet your best choice would be to stick with it and install Stackdriver on your self-managed Kubernetes cluster as well:
https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/
I want to build my own PaaS platform based on Cloud Foundry and OpenShift. I want to use some of the functions of these two platforms, and I don't want to deploy them all in the environment. Is this feasible? What similar open source projects could I learn from?
Let me share some resources about OpenShift:
OpenShift Online: the free plan is enough for your first training.
OpenShift hands-on training: awesome practical training; you don't need to prepare your own environment.
OpenShift Documentation: covers both the enterprise product and the open source project, OKD.
If you'd like to deploy on-premises using the open source version of OpenShift, you can review/test/operate OKD (formerly named OpenShift Origin).
I hope this helps you. :^)
In regards to Cloud Foundry, it is just a collection of services. We use Bosh to deploy Cloud Foundry, which knows how to deploy all the services so that they can talk to each other & function cohesively. There's nothing that would prevent you from using a different Bosh configuration (or even totally different tool) to deploy these services in a different way.
You can run projects like Gorouter, UAA, Cloud Controller and Garden stand-alone. The individual project sites typically have instructions for doing this.
Ex:
https://github.com/cloudfoundry/gorouter#start
https://github.com/cloudfoundry/uaa#quick-start
Other components might be a little trickier as they depend on each other. Diego, for example, depends on Garden and is built to send logs through Loggregator. In these cases, you might need to do a little work if you didn't want to use one of the dependent components.
https://github.com/cloudfoundry/diego-design-notes#what-are-all-these-repos-and-what-do-they-do
I would disagree with your comment about these systems being bloated; that depends on your perspective. If you don't need a lot of the features, then I can see why you might think that, but I'd say "overkill" might be a better way to put it.
If you don't need all the functionality that PaaS platforms provide, you could look at other options: Dokku, Kubernetes, Knative, etc... You don't get all the features of CF, but the systems have smaller footprints. If you can live without the extra features, then these might be better options for you.
Hope that helps!
I use Deis Workflow, which is an open source Platform as a Service (PaaS) that makes it easy to deploy and manage applications on our servers.
I understand twelve-factor is the main guideline for Deis Workflow, but is it possible to use it to create services like Postgres, Redis or MySQL?
Some other PaaS services e.g. Dokku and Flynn allow users to create services and link them to the app containers.
Is there a way to achieve the same result in Deis Workflow?
I'm an engineer at Deis, formerly from the Workflow team, and still occasionally involved in it. Great question. As it seems you already caught on to, Workflow is (currently) hyper-focused on 12factor applications. Generally, what we have said is that anyone wishing to do anything more complex than that may wish to "fall back" on "plain Kubernetes," but that doesn't have to be as painful as it might sound when you take Helm into account. Helm is the Kubernetes package manager (and is another Deis product). Helm 2 just went GA today, in fact. It's easy to create your own Helm charts (packages), but even better than that, many charts already exist for common things like Postgres, Redis, and MySQL (all examples you gave). Hope this helps.
I am Anton - one of the maintainers of Hephy, the open source fork of Deis Workflow. https://github.com/teamhephy
Deis Workflow was originally designed with a hyper focus on 12-factor apps and deploying them. We don't see any major changes to that in the coming few months, except the possibility to define multiple services per application namespace. See this PR: https://github.com/teamhephy/controller/pull/71
Aside from all of this, we hope to integrate other services that provide DBaaS (Databases as a Service) and do some blog posts on how to use Hephy Workflow and those services together for a common solution.
I'm working on my first fiware project. I want to make an application consisting on an online store, with some Specific Enablers to provide some functionality. This is all very new to me, and I've been reading a lot these days, but I'm pretty lost and I really wonder if what I'm trying to do is even possible.
Can I use wirecloud to make a mashup application like this? How can I integrate specific enablers with the web application? Is there some kind of enabler to provide online store functionality?
Thanks for your time
WireCloud comes with support for some of the Generic Enablers provided by FIWARE (WStore, Orion, Kurento, ...) and can be used for accessing your Specific Enablers (although you will need to write some code). You can learn more about WireCloud in the available course at the FIWARE Lab Academy.
One of the Generic Enablers that seems useful for your use case is the Store Generic Enabler. The Store is used in FIWARE Lab for distributing/selling services, datasets, and WireCloud components, but it can be configured for other purposes.
Time and again I am faced with the issue of having multiple environments that must be configured individually for an application that would run in all of them (e.g. QA, regional production env's, dev, staging, etc.) and I am wondering what would be the best way to organize different configurations?
Would it be in the database? Different configuration files per environment? Or maybe the same file with different sections/xml tags? How would these be then deployed? Embedded within the app? Or put manually in after installation to be modified in-place?
This question is not technology-specific - I've worked with .net and Java, web-apps and desktop apps and this issue comes up time and again. I'm looking to learn different approaches to maybe adapt a hybrid to address this.
EDIT: There's one caveat that I must point out - when configuration is part of the deployed solution, it is generally installed under root user on the host. In large organizations developers usually don't have a root access to production hosts so any changes to the configuration require a new build and deployment. Needless to say this isn't the nicest approach - especially at organizations that have a very strict release process involving multiple teams and approval levels... (sigh I know!)
Borrowed from Jez Humble and David Farley's book "Continuous Delivery" (page 41), you can do one of the following:
Your build scripts can pull configuration in and incorporate it into your binaries at build time.
Your packaging software can inject configuration at packaging time, such as when creating assemblies, ears, or gems.
Your deployment scripts or installers can fetch the necessary information, or ask the user for it, and pass it to your application at deployment time as part of the installation process.
Your application itself can fetch configuration at startup time or run time.
They consider it bad practice to inject configuration at build and packaging time, because you should be able to deploy the same binary to every environment.
My experience was that you could bake the configuration files for every environment (except sensitive information) into your deployment file (war, jar, zip, etc.), and design your application to take an extra parameter at startup to pick up the right set of configuration files (from your extracted deployment file, or from the local/remote file system if they are sensitive, or from a database) during the application's startup.
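A minimal sketch of that startup-time selection in Java: the environment name is passed as a system property, mapped to a bundled properties file, and loaded from the classpath. The naming convention (`environment.<env>.properties`) and the `env` property name are assumptions for illustration, not a standard.

```java
import java.io.InputStream;
import java.util.Properties;

public class ConfigLoader {
    // Hypothetical naming convention: environment.<env>.properties bundled
    // in the artifact, with environment.properties as the default.
    static String resolveConfigName(String env) {
        return (env == null || env.isEmpty())
                ? "environment.properties"
                : "environment." + env + ".properties";
    }

    // Load the selected file from the classpath; returns empty
    // properties if the file is missing from the artifact.
    static Properties load(String env) throws Exception {
        Properties props = new Properties();
        try (InputStream in = ConfigLoader.class
                .getResourceAsStream("/" + resolveConfigName(env))) {
            if (in != null) {
                props.load(in);
            }
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        // e.g. started with: java -Denv=prod -jar app.jar
        String env = System.getProperty("env", "");
        System.out.println("Loading " + resolveConfigName(env));
    }
}
```

Sensitive values would still come from outside the artifact (a mounted file, environment variables, or a database), keeping the same binary deployable everywhere.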
The question is difficult to answer because it's somewhat vague. There is no technology-agnostic approach to configuration as far as I know. Exactly how configuration is set up will depend on the language/technology in question.
I'm not familiar with .NET, but with Java a popular approach is to set up a Maven build with different profiles, each specific to an environment. You can then define different properties files that have environment-specific values; an example from the above link is:
environment.properties - This is the default configuration and will be packaged in the artifact by default.
environment.test.properties - This is the variant for the test environment.
environment.prod.properties - This is basically the same as the test variant and will be used in the production environment.
You can then build your project as follows:
mvn -Pprod package
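The corresponding profile definitions might look something like the fragment below; the property name `env.file` and how it is consumed (e.g. by resource filtering) are assumptions following the example above, not a fixed convention.

```xml
<!-- pom.xml fragment: one profile per environment. The active profile
     chooses which environment-specific properties file is packaged. -->
<profiles>
  <profile>
    <id>test</id>
    <properties>
      <env.file>environment.test.properties</env.file>
    </properties>
  </profile>
  <profile>
    <id>prod</id>
    <properties>
      <env.file>environment.prod.properties</env.file>
    </properties>
  </profile>
</profiles>
```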
I have good news and bad news.
The good news is that Config4* (of which I am the maintainer) neatly addresses this issue with its support for adaptive configuration. Basically, this is the ability for a configuration file to adapt itself to the environment (including hostname, username, environment variables, and command-line options) in which it is running. Read Chapter 2 of the "Getting Started" manual for details. Don't worry: it is a short chapter.
The bad news is that, currently, Config4* implementations exist only for C++ and Java, so your .Net applications are out of luck. And even with C++ and Java applications, it won't make pragmatic sense to retrofit Config4* into an existing application. Because of this, I'd advise trying to use Config4* only in new applications.
Despite the bad news, I think it is worth your while to read the above-mentioned chapter of the Config4* documentation, because doing so may provide you with ideas that you can adapt to fit your needs.