We are using Docker images for Spring Boot REST services. The current setup is working fine in production, and we want to use a similar setup in the development environment. The Spring Boot image needs to connect to the database. At this point we have a couple of options:
1. Have a centralized database server and have the containers on each development machine connect to it.
2. Create a separate database image and have developers run it alongside the Spring Boot image on the same dev machine.
Option #1 is easier to implement, but if there is a change in the database it may impact the whole development community in the organization. Option #2 mitigates that risk, but it creates a data-sync problem: when someone starts both images, how do we make sure the database has all the required data?
I am wondering whether there is any other option I should consider, or, given these two options, which one makes sense.
I went with option #2; it provides an isolated work environment.
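For anyone choosing option #2, here is a minimal docker-compose sketch of the idea (image names, credentials, and paths are illustrative assumptions, not from our actual setup). The data-sync problem is handled by checking seed SQL scripts into source control and letting the official MySQL image run them on first startup:

version: "3.8"
services:
  db:
    image: mysql:8.0                      # pin the same version as production
    environment:
      MYSQL_ROOT_PASSWORD: devpassword    # dev-only credentials (illustrative)
      MYSQL_DATABASE: appdb
    volumes:
      # the official MySQL image executes any *.sql files found here on first
      # startup, so schema and seed data come straight from source control
      - ./db/init:/docker-entrypoint-initdb.d
  api:
    image: myorg/spring-boot-service:latest   # hypothetical service image
    depends_on:
      - db
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/appdb
      SPRING_DATASOURCE_USERNAME: root
      SPRING_DATASOURCE_PASSWORD: devpassword
    ports:
      - "8080:8080"

Each developer gets a disposable, isolated database, and docker compose down -v followed by docker compose up recreates it with the required data.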
I developed an application using Angular, Spring Boot, and a MySQL database. I want to publish it to Docker Hub, but I am still confused about whether I should create a separate image for each part (Angular, Spring Boot API, and MySQL) or put everything in one Docker image.
I tried dockerizing only the Spring Boot API, but my doubts remain about the whole app.
The backend and frontend can go in the same image. If the backend or frontend is shared with other services, you can consider making separate images; if they are not shared, it doesn't make much sense to build two images, because your frontend does not work without your backend and vice versa.
The database should be in a separate image. It is not part of your application; it is part of your data storage and can easily be shared with other applications.
Good practice is to put them in separate images.
To make your application more flexible, you can define all external access points as environment variables of the image.
That is, define the base URL of your backend as an ENV variable, and the database connection details as ENV variables as well.
After that, you can leverage docker-compose to orchestrate it all.
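As a rough sketch of what that could look like (image names, ports, and variable names are illustrative assumptions; how the Angular container consumes API_BASE_URL depends on your own entrypoint script):

version: "3.8"
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example       # dev-only, illustrative
      MYSQL_DATABASE: appdb
  api:
    image: myuser/springboot-api:1.0     # hypothetical Docker Hub image
    depends_on:
      - db
    environment:
      # database access injected as ENV, per the advice above
      SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/appdb
  web:
    image: myuser/angular-frontend:1.0   # hypothetical Docker Hub image
    depends_on:
      - api
    environment:
      API_BASE_URL: http://api:8080      # backend base URL injected as ENV
    ports:
      - "80:80"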
I'm struggling with finding out how to properly test stuff on my local PC and then transfer that over to production.
So here is my situation:
I have a project in Node.js/TypeScript, and I'm using Prisma in it to manage my database. On my server I run a MySQL database, and for testing on my PC I have always just used SQLite.
But now that I want to use Prisma Migrate (because it's highly recommended for production), I can't, because I use different databases on my PC and on my server. So here is my question: what is the correct way to test with a database during development?
Should I just connect to my server and create a test database there? Use VS Code's SSH feature to code directly on the server and connect to the database from there? Install MySQL on my PC? What's the correct way to do it?
Always use the same brand and same version of database in development and testing that you will eventually deploy to. There are compatibility differences between brands: an SQL query that works on SQLite does not necessarily work the same on MySQL, and vice versa. Even data types and schema definitions aren't all the same between different SQL products.
If you use different SQL databases in development and production, you will waste a bunch of time and increase your gray hair debugging problems in production, as you insist, "it works on my machine."
This is avoidable!
When I develop on my local computer, I usually have an instance of MySQL Server running in a Docker container on my laptop.
I assume any test data on my laptop is temporary. I can easily recreate schema and data at any time, using scripts that are checked into my source control repo, so I don't worry about losing any data. In fact, I feel no hesitation to drop it and recreate it several times a week.
So if I need to upgrade the local database version to match an upgrade on production, I just delete the Docker container and its data, pull the new Docker image version, initialize a new data dir, and reload my test data again.
Every step is scripted, even the Docker pull.
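As a sketch, the scripted part can be as small as one docker-compose file (the version tag, credentials, and paths below are assumptions; pin whatever your production actually runs):

version: "3.8"
services:
  mysql:
    image: mysql:8.0.36                  # pin the exact production version
    environment:
      MYSQL_ROOT_PASSWORD: devonly       # throwaway dev credentials
      MYSQL_DATABASE: testdb
    ports:
      - "3306:3306"
    volumes:
      # schema and seed scripts live in the repo; MySQL runs them against a
      # fresh data dir, so dropping and recreating the database is cheap
      - ./sql/init:/docker-entrypoint-initdb.d

Then docker compose down -v, docker compose pull, docker compose up -d is the whole "delete, pull, initialize, reload" cycle, and Prisma's DATABASE_URL in .env simply points at mysql://root:devonly@localhost:3306/testdb.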
The caveat to my practice is that you can't necessarily duplicate the software if you use cloud databases, for example Amazon Aurora. There's no way to run an Aurora-compatible instance on your laptop (and don't believe the salespeople that Aurora is fully compatible with MySQL; it's not). So you could run a small Aurora instance in a development VPC and connect to that from your app development environment. At least if your internet connection is reliable enough.
By the way, a similar rule applies to all the other technology you use in development: the version of Node.js, Prisma, other NPM dependencies, HTTP and cache servers, etc. Even the operating system can be a source of compatibility issues; you may have to develop in a virtual machine to match the OS to production exactly.
At one past job, I did help the developer team create what we called the "golden image" which was a pre-configured VM with all our software dependencies installed, and we used this golden image for both the developer sandbox VM, and also an AMI from which we launched the production Amazon EC2 instances. So all the developers were guaranteed to have a test environment that matched production exactly. After that, if they had code problems, they could fix it in development and have a much higher confidence it would work after deploying to production.
I have a Golang web application with an associated MySQL database. I need to deploy that web application to a number of servers provided by different vendors, so I am going to use Docker images to deploy it. What I need to know is whether it is okay to keep the MySQL server in the same Docker image, or whether I should build a separate Docker image to deploy MySQL on those servers.
A rule of thumb with Docker which you should follow is "one application, one container." It's always best practice to have separate containers for the different parts of your application. The main reason is that down the line, if you want to replace MySQL with some NoSQL database, you can simply kill that container and spin up a new one without worrying about it affecting your Golang application.
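A minimal sketch of the two-container layout with docker-compose (image names, DSN, and credentials are illustrative, not from the question):

version: "3.8"
services:
  app:
    image: myorg/go-webapp:1.0           # hypothetical Go application image
    depends_on:
      - db
    environment:
      DB_DSN: "user:pass@tcp(db:3306)/appdb"   # Go MySQL driver style DSN
    ports:
      - "8080:8080"
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_USER: user
      MYSQL_PASSWORD: pass
      MYSQL_DATABASE: appdb
    volumes:
      - dbdata:/var/lib/mysql            # data outlives the app container
volumes:
  dbdata:

Because the data lives in its own container and volume, you can rebuild or replace the app container freely without touching the database.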
I am migrating an application from OpenShift 2 which consists of a Java (Jetty) web server and a Mongo database.
Both the web server and Mongo need access to persistent storage, and the server needs to access the database.
As the volume available to me can't (I believe) be accessed by two pods, my current goal is to include both the server and the DB in the same pod as separate containers.
I have tried copying the Mongo container definition into the deployment config for the server, but I just get an error saying the config is invalid, with no description of why.
Is this an approach that could work and how can I find out why it isn't?
It is possible to do if you really need to, but it is not normally recommended for production systems.
In doing so, you are limited to a single replica and cannot scale your application; also, you can't use the Rolling deployment strategy and must use Recreate.
For some examples of templates which deploy a database with front end together in same pod which you might adapt, see the 'testing' variants of the templates at:
https://github.com/openshift-evangelists/wordpress-quickstart/tree/master/templates
For those templates, the build of the application image was done as a separate manual step and they only handle the deployment, so you will need to incorporate the build configuration into them yourself after you have copied and modified them for your own purposes.
UPDATE 1
Those templates do now include build configurations, as I have been tweaking the way they work.
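For orientation, a stripped-down sketch of what such a two-container deployment config can look like (the API version, names, images, and mount paths are illustrative; adapt them to your actual config):

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: webapp
spec:
  replicas: 1                      # co-located containers cannot scale out
  strategy:
    type: Recreate                 # Rolling is unsafe with one shared volume
  selector:
    app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: server
        image: myorg/jetty-app:latest   # hypothetical application image
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: data
          mountPath: /opt/app/data
          subPath: server               # keep the two data sets apart
      - name: mongodb
        image: mongo:3.6                # illustrative version
        volumeMounts:
        - name: data
          mountPath: /data/db
          subPath: mongo
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: webapp-data        # the single claim both containers share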
I'm planning to migrate my application stack to Docker. Let me describe the services I'm currently using:
HAProxy, which is used for SSL termination on all services' connections (HTTP and raw TCP) and forwards traffic to the services below.
Nginx, which serves static files, like updates and some information pages.
Node.js, which runs the main applications.
MySQL (MariaDB), the database used and shared by all the applications.
My question is about the database.
What's the proper way of running MariaDB in this case?
Install and run it inside my container, along with the other services?
Run the official image in a separate container, and link my container to it with the --link option of Docker's run command?
Does the first option have any disadvantage?
The TeamSpeak Docker container uses the second option, and that's what made me question the correct way of running the database in my case, but I feel more inclined to package all the services inside my own image.
Docker Philosophy: Split your application into microservices and use a container for each microservice.
In your case, I recommend a MariaDB container. Using the official (library) image gives you easier update management, but feel free to use your own custom image.
Then add an HAProxy container, an Nginx container, and a Node.js container.
This way you have divided your application into microservices, and you can upgrade, manage, and troubleshoot them more easily, each in an isolated environment.
If you are thinking about delivering your application to end users via Docker, a simple docker-compose file will do the trick for launching the required containers.
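A skeleton of what that compose file could look like (the versions, paths, and the app image are illustrative assumptions):

version: "3.8"
services:
  haproxy:
    image: haproxy:2.8                  # SSL termination in front
    ports:
      - "443:443"
    volumes:
      - ./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    depends_on:
      - nginx
      - app
  nginx:
    image: nginx:1.25                   # static files and info pages
    volumes:
      - ./static:/usr/share/nginx/html:ro
  app:
    image: myorg/node-app:1.0           # hypothetical Node.js application image
    depends_on:
      - db
    environment:
      DB_HOST: db                       # services reach each other by name
  db:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: example    # illustrative dev credentials
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata:

On a compose network the containers resolve each other by service name, so the legacy --link flag isn't needed.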