Multiple containers in one pod - OpenShift

I am migrating an application from OpenShift 2 that consists of a Java (Jetty) web server and a MongoDB database.
Both the web server and MongoDB need access to persistent storage, and the server also needs to reach the database.
As the volume available to me can't (I believe) be accessed by two pods, my current goal is to include both the server and the DB in the same pod as separate containers.
I have tried copying the Mongo container definition into the deployment config for the server, but I just get an error saying the config is invalid, with no description of why.
Is this an approach that could work, and how can I find out why it isn't working?

It is possible to do it if you really need to, but it is not normally recommended for production systems.
In doing it, you are limited to a single replica and cannot scale your application. You also can't use the Rolling deployment strategy and must use Recreate.
For some examples of templates which deploy a database and front end together in the same pod, which you might adapt, see the 'testing' variants of the templates at:
https://github.com/openshift-evangelists/wordpress-quickstart/tree/master/templates
For those templates the build of the application image was done as a separate manual step and they were only handling the deployment, so you will need to incorporate the build configuration into them yourself after you have copied and modified them for your own purposes.
UPDATE 1
Those templates now do include build configurations, as the way they work has been tweaked.
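For reference, below is a minimal sketch of what a two-container deployment config can look like. The names, images and the persistent volume claim are placeholders, not taken from your project, so adjust them to your own setup:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: jetty-mongo                       # placeholder name
spec:
  replicas: 1                             # single replica only; this pod cannot be scaled
  strategy:
    type: Recreate                        # Rolling is not usable with the database in the pod
  selector:
    app: jetty-mongo
  template:
    metadata:
      labels:
        app: jetty-mongo
    spec:
      containers:
        - name: web
          image: my-jetty-app:latest      # placeholder application image
          ports:
            - containerPort: 8080
        - name: mongodb
          image: mongo:3.6                # placeholder MongoDB image
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data
              mountPath: /data/db
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: jetty-mongo-data   # assumes an existing PVC

Because both containers share the pod's network namespace, the web server can reach MongoDB on localhost:27017 without any service definition.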

Related

Entities in FIWARE Orion disappear after some time passes

I created a project from the https://fiware-tutorials.readthedocs.io/en/latest/time-series-data.html tutorial, changed only the entity names and types, and everything works right. But after some time (usually a day) all entities in Orion disappear (although the data in QuantumLeap persists) and I cannot get the entity properties with this command:
curl -X GET \
  --url 'http://localhost:1026/v2/entities?type=Temp'
What is the problem? Is there some restriction on tutorial projects?
The tutorials have been written as an introduction to NGSI, not as a robust architectural solution. The idea is just to get something "quick and dirty" up and running on a developer's machine and various shortcuts have been taken. Indeed the docker-compose files all hold the following disclaimer:
WARNING: Do not deploy this tutorial configuration directly to a production environment
The tutorial docker-compose files have not been written for production deployment and will not
scale. A proper architecture has been sacrificed to keep the narrative focused on the learning
goals, they are just used to deploy everything onto a single Docker machine. All FIWARE components
are running at full debug and extra ports have been exposed to allow for direct calls to services.
They also contain various obvious security flaws - passwords in plain text, no load balancing,
no use of HTTPS and so on.
This is all to avoid the need of multiple machines, generating certificates, encrypting secrets
and so on, purely so that a single docker-compose file can be read as an example to build on,
not use directly.
When deploying to a production environment, please refer to the Helm Repository
for FIWARE Components in order to scale up to a proper architecture:
see: https://github.com/FIWARE/helm-charts/
Perhaps the most relevant factor in answering your question is that there is typically no volume persistence - the tutorials clean up after themselves where possible to avoid leaving data on a user's machine unnecessarily.
If you have lost all your entity data when connecting to Orion, my guess is that the MongoDB database has exited and restarted for some reason. Since there is deliberately no persistent volume set up, all previous entities are lost on that restart.
A solution for persisting volumes and fixing this behaviour can be found in the answers to another question on this site - something like:
version: "3.9"
services:
  mongodb:
    image: mongo:4.4
    ports:
      - 27017:27017
    volumes:
      - type: volume
        source: mongodb_data_volume
        target: /data/db
volumes:
  mongodb_data_volume:
    external: true

What is the best way to use a MySQL server with Docker?

I have a Golang web application backed by a MySQL database. I need to deploy that web application to a number of servers provided by different vendors, so I am going to use Docker images for the deployment. What I need to know is whether it is okay to keep the MySQL server in the same Docker image, or whether I should build a separate Docker image to deploy MySQL on those servers.
A rule of thumb with Docker which you should follow is "one application, one container". It is always best practice to have separate containers for different parts of your application. The main reason is that, down the line, if you want to replace MySQL with some NoSQL database, you can simply kill that container and spin up a new one without worrying about it affecting your Golang application.
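As a rough illustration, a docker-compose file along these lines keeps the two concerns in separate containers; the image names, credentials and ports are placeholders:

version: "3.9"
services:
  app:
    image: my-golang-app:latest        # placeholder; your application image
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=db                     # reach MySQL via its service name
      - DB_USER=app
      - DB_PASSWORD=secret             # use Docker secrets or env files in real deployments
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      - MYSQL_DATABASE=appdb
      - MYSQL_USER=app
      - MYSQL_PASSWORD=secret
      - MYSQL_ROOT_PASSWORD=rootsecret
    volumes:
      - db_data:/var/lib/mysql         # keep data outside the container lifecycle
volumes:
  db_data:

With this layout you can rebuild or replace either container independently, and the database data survives in the named volume.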

Docker image and DB Server integration

We are using Docker images for Spring Boot REST services. The current setup is working fine in production, and we want to use a similar setup in the development environment. The Spring Boot image needs to connect to a database. At this point we have a couple of options:
Have a centralized database server and have the Docker images on each development machine connect to it.
Create a separate database image and have the developers run it alongside the Spring Boot image on the same dev machine.
Option #1 is easier to implement, but if there is a change in the database it may impact the whole development community in the organization. Option #2 mitigates that risk, but it creates a data-sync problem: when someone starts both of these images, how do we make sure the database has all the required data?
I am wondering if there is any other option I should consider or, given these two options, which one makes sense.
I went with option #2; it helps provide an isolated work environment.
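To address the data-sync concern, one way to run option #2 is to ship a seed script with the repository so every developer starts from the same baseline data. A rough sketch, assuming PostgreSQL and placeholder names (swap in whatever database and image you actually use):

version: "3.9"
services:
  api:
    image: my-spring-boot-service:latest     # placeholder; the existing service image
    ports:
      - "8080:8080"
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/devdb
      - SPRING_DATASOURCE_USERNAME=dev
      - SPRING_DATASOURCE_PASSWORD=dev
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=devdb
      - POSTGRES_USER=dev
      - POSTGRES_PASSWORD=dev
    volumes:
      # SQL/shell scripts checked into the repo run on first start,
      # so each developer gets the same required data
      - ./db/init:/docker-entrypoint-initdb.d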

Approach of deploying pods/services on Kubernetes

I know the concepts of Kubernetes. I'm able to do some basic things such as creating a pod and creating a service on top of it. I do this using kubectl commands. When I search for examples I see a lot of .yaml and .json files.
Is the usual approach to create a .yaml or .json file which describes the setup of your pods/services (a bit like a docker-compose file)?
Maybe a strange question, but to me it seems a bit odd that everyone who uses Kubernetes is able to write their own .yaml templates for their applications.
Creating pods and services, scaling pods, etc. can all be done from kubectl using the appropriate commands; it will internally create the necessary config and create the resources.
This is great for trying things out, but not ideal for anything more serious. You want to version control, inspect and evolve your config, so it is better to start with a YAML or JSON config describing what you want to create, and use that with kubectl.
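As a minimal illustration of the config-file approach, a deployment plus a service for a single web app might look like the sketch below (names and images are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: my-app:1.0     # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080

You apply it with kubectl apply -f my-app.yaml, and the file lives in version control next to the application code, so changes to the setup can be reviewed and rolled back like any other change.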

Docker-compose Kubernetes ENV properties interoperability

I'm building my staging environment using docker-compose, for an application that was previously run in Google Cloud using Kubernetes.
My application was configured using ENV properties provided inside the Kubernetes container, and now, after switching to docker-compose, I have a different naming convention for the linked services.
I can think of a few solutions to my problem:
Change my application to support alternative configurations, so it would support both docker-compose and Kubernetes.
Create aliases in docker-compose or Kubernetes so that the configuration would always be available in a single format in both environments, and I would not need to touch my application configuration.
Maybe some other way which I don't see.
I want to go with the second solution, but I don't know exactly how to configure it. Any ideas?
You could use the environment section to define docker-compose variables such as PARAM1=${PARAM2}. In this case, the containers run by docker-compose will see the same variables that Kubernetes provides.
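As a sketch of that aliasing idea (service and variable names here are placeholders; use whatever names your application already reads under Kubernetes):

version: "3.9"
services:
  app:
    image: my-app:latest            # placeholder application image
    environment:
      # Recreate the Kubernetes-style variables the application expects;
      # the values point at the docker-compose service names instead.
      - DB_SERVICE_HOST=db          # compose service name resolves via the compose network
      - DB_SERVICE_PORT=5432
    depends_on:
      - db
  db:
    image: postgres:13              # placeholder database image
    environment:
      - POSTGRES_PASSWORD=dev

This way the application keeps reading the same variable names it used under Kubernetes, and only the compose file knows about the different service naming.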