Docker, ASP.NET Core, DB connection string best practices - MySQL

I've been spending the last week or so attempting to learn Docker and all the things it can do; however, one thing I'm struggling to get my head around is the best practice for managing secrets, especially database connection strings and how these should be stored.
I have a plan in my head where I want to have a Docker setup which will contain an ASP.NET Core website, a MySQL database and a phpMyAdmin front end, and deploy this onto a droplet I have at DigitalOcean.
I've been playing around a little bit and I have a docker-compose.yml file which has the MySQL DB and phpMyAdmin correctly linked together:
version: "3"
services:
db:
image: mysql:latest
container_name: mysqlDatabase
environment:
- MYSQL_ROOT_PASSWORD=0001
- MYSQL_DATABASE=atestdb
restart: always
volumes:
- /var/lib/mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: db-mgr
ports:
- "3001:80"
environment:
- PMA_HOST=db
restart: always
depends_on:
- db
This is correctly creating a MySQL DB for me, and I can connect to it with the running phpMyAdmin front end using root / 0001 as the username/password combo.
I know I would now need to add my ASP.NET Core web app to this, but I'm still stumped by the best way to handle my DB password.
I have looked at Docker swarm/secrets, but I still don't fully understand how this works, especially if I want to check my docker-compose file into Git/SCM. Other things I have read suggest using environment variables, but I still don't understand how that is any different from just checking the connection string into my appsettings.json file, or, for that matter, how this would work in a full CI/CD build pipeline.
This question helped me out a little in getting to this point, but they still have their DB password in their docker-compose file.
It might be that I'm overthinking this.
Any help, guidance or suggestions would be gratefully received.

If you are using Docker Swarm, you can take advantage of the secrets feature and store all your sensitive information, like passwords or even the whole connection string, as a Docker secret.
For each secret that is created, Docker mounts a file inside the container. By default, all secrets are mounted in the /run/secrets folder.
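At its simplest, the application can read such a mounted file directly. A minimal sketch (using the secret name my_connection_secret that is created later in this answer):
// using System.IO;
// Read the connection string from the file Docker Swarm mounts for the secret.
// Trim() removes the trailing newline that `echo` adds when the secret is created.
var connectionString = File.ReadAllText("/run/secrets/my_connection_secret").Trim();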
For a more structured approach, you can create a custom configuration provider that reads the secrets and maps them to configuration values:
public class SwarmSecretsConfigurationProvider : ConfigurationProvider
{
    private readonly IEnumerable<SwarmSecretsPath> _secretsPaths;

    public SwarmSecretsConfigurationProvider(
        IEnumerable<SwarmSecretsPath> secretsPaths)
    {
        _secretsPaths = secretsPaths;
    }

    public override void Load()
    {
        var data = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        foreach (var secretsPath in _secretsPaths)
        {
            if (!Directory.Exists(secretsPath.Path))
            {
                if (!secretsPath.Optional)
                {
                    throw new FileNotFoundException(secretsPath.Path);
                }
                // Nothing is mounted at this path and it is optional, so skip it.
                continue;
            }

            foreach (var filePath in Directory.GetFiles(secretsPath.Path))
            {
                // The file name becomes the configuration key,
                // the file content becomes the configuration value.
                var configurationKey = Path.GetFileName(filePath);
                if (secretsPath.KeyDelimiter != ":")
                {
                    configurationKey = configurationKey
                        .Replace(secretsPath.KeyDelimiter, ":");
                }

                var configurationValue = File.ReadAllText(filePath);
                data.Add(configurationKey, configurationValue);
            }
        }

        Data = data;
    }
}
Then you must add the custom provider to the application configuration:
public static IHostBuilder CreateHostBuilder(string[] args)
{
    return Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            config.AddSwarmSecrets();
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
}
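The snippets above reference a SwarmSecretsPath class and an AddSwarmSecrets() extension method that are not shown. A minimal sketch of what they could look like (the names mirror the code above, but the defaults and the configuration source are assumptions made so the example compiles, not necessarily the original implementation):
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

public class SwarmSecretsPath
{
    public SwarmSecretsPath(string path, bool optional = true, string keyDelimiter = "__")
    {
        Path = path;
        Optional = optional;
        KeyDelimiter = keyDelimiter;
    }

    public string Path { get; }
    public bool Optional { get; }

    // Delimiter in the secret file name that is translated to ':' in the
    // configuration key, e.g. ConnectionStrings__DatabaseConnection.
    // Names that already contain ':' pass through unchanged.
    public string KeyDelimiter { get; }
}

public class SwarmSecretsConfigurationSource : IConfigurationSource
{
    private readonly IEnumerable<SwarmSecretsPath> _secretsPaths;

    public SwarmSecretsConfigurationSource(IEnumerable<SwarmSecretsPath> secretsPaths)
    {
        _secretsPaths = secretsPaths;
    }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
        => new SwarmSecretsConfigurationProvider(_secretsPaths);
}

public static class SwarmSecretsConfigurationExtensions
{
    // Registers /run/secrets (the default Swarm mount point) as a configuration source.
    public static IConfigurationBuilder AddSwarmSecrets(
        this IConfigurationBuilder builder,
        string path = "/run/secrets",
        bool optional = true)
    {
        return builder.Add(new SwarmSecretsConfigurationSource(
            new[] { new SwarmSecretsPath(path, optional) }));
    }
}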
Then, if you create a secret named "my_connection_secret":
$ echo "Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;" \
| docker secret create my_connection_secret -
and map it to your service as ConnectionStrings:DatabaseConnection:
services:
  app:
    secrets:
      - target: ConnectionStrings:DatabaseConnection
        source: my_connection_secret

secrets:
  my_connection_secret:
    external: true
it will be the same as writing it in appsettings.json:
{
  "ConnectionStrings": {
    "DatabaseConnection": "Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;"
  }
}
If you don't want to store the whole connection string as a secret, you can use a placeholder for the password:
Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd={{pwd}};
and use another custom configuration provider to replace it with the password stored as a secret.
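A minimal sketch of that substitution idea (the secret name db_password is assumed for illustration; the blog post linked below shows a proper configuration-provider implementation):
// using System.IO;
// using Microsoft.Extensions.Configuration;

// 'configuration' is the application's already-built IConfiguration instance.
// Swap the {{pwd}} placeholder for the password read from the mounted Swarm secret.
var template = configuration.GetConnectionString("DatabaseConnection");
var password = File.ReadAllText("/run/secrets/db_password").Trim();
var connectionString = template.Replace("{{pwd}}", password);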
On my blog post How to manage passwords in ASP.NET Core configuration files I explain in detail how to create a custom configuration provider that allows you to keep only the password as a secret and update the connection string at runtime. The full source code of this article is hosted at github.com/gabihodoroaga/blog-app-secrets.

Secrets are complicated. I will say that pulling them out into environment variables kicks the problem down the road a bit, especially when you are only using docker-compose (and not something fancier like Kubernetes or Swarm). Your docker-compose.yaml file would look something like this:
environment:
  - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
Compose will pull MYSQL_ROOT_PASSWORD from a .env file or from a command-line/environment variable when you spin up your services. Most CI/CD services provide ways (either through a GUI or through some command-line interface) of encrypting secrets that get mapped to environment variables on the CI server.
Not to say that environment variables are necessarily the best way of handling secrets. But if you do move to an orchestration platform like Kubernetes, there will be a straightforward path to mapping Kubernetes secrets to those same environment variables.
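On the ASP.NET Core side, the same environment variable approach works because the default configuration builder maps environment variables to configuration keys, with a double underscore standing in for the ':' separator. A sketch, assuming an app service and a connection-string key named DatabaseConnection (the server name db and database atestdb come from the compose file in the question):
app:
  environment:
    # Compose substitutes the value from .env or the shell; ASP.NET Core
    # exposes it as the configuration key ConnectionStrings:DatabaseConnection.
    - ConnectionStrings__DatabaseConnection=Server=db;Database=atestdb;Uid=root;Pwd=${MYSQL_ROOT_PASSWORD}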

Related

mkdir /.gitlab-runner: permission denied running GitLab Runner in Kubernetes deployed via Helm

I'm trying to deploy the GitLab Runner (15.7.1) onto an on-premise Kubernetes cluster and getting the following error:
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
This is occurring with both the 15.7.1 image (Ubuntu?) and the alpine3.13-v15.7.1 image. Looking at the deployment, it looks like it should be trying to use /home/gitlab-runner, but for some reason it is trying to use root (/), which is a protected directory.
Anyone else experience this issue or have a suggestion as to what to look at?
I am using the Helm chart (0.48.0) with a copy of the images from Docker Hub (simply moved into a local repository, as internet access is not available from the cluster). Connectivity to GitLab appears to be working, but the error causes the overall startup to fail. Full logs are:
Registration attempt 4 of 30
Runtime platform arch=amd64 os=linux pid=33 revision=6d480948 version=15.7.1
WARNING: Running in user-mode.
WARNING: The user-mode requires you to manually start builds processing:
WARNING: $ gitlab-runner run
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
Created missing unique system ID system_id=r_Of5q3G0yFEVe
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
I have tried the 15.7.1 image, the alpine3.13-v15.7.1 image, and the gitlab-runner-ocp:amd64-v15.7.1 image and searched the values.yaml for anything relevant to the path. Looking at the deployment template, it appears that it ought to be using /home/gitlab-runner as the directory (instead of /) [though the docs suggested it was /home].
As for "what was I expecting", of course I was expecting that it would "just work" :)
So, resolved this (and other) issues with:
Updated helm deployment template to mount an empty volume at /.gitlab-runner
[separate issue] explicitly added builds_dir and environment [per gitlab-org/gitlab-runner#3511 (comment 114281106)].
These two steps appeared to be sufficient to get the Helm chart deployment working.
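For the builds_dir/environment part, one way to express it with the Helm chart is through the runners.config value (a hedged sketch, assuming a chart version that renders runners.config into config.toml; the paths are illustrative). The empty volume at /.gitlab-runner itself was done by editing the chart's deployment template; the manifest-style YAML in the next answer shows the equivalent volume definition.
# values.yaml (sketch)
runners:
  config: |
    [[runners]]
      # run builds under the gitlab-runner home directory instead of a root-owned path
      builds_dir = "/home/gitlab-runner/builds"
      environment = ["HOME=/home/gitlab-runner"]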
You can easily create and mount the emptyDir (in case you are creating the gitlab-runner with a Kubernetes manifest *.yml file):
volumes:
  - emptyDir: {}
    name: gitlab-runner
volumeMounts:
  - name: gitlab-runner
    mountPath: /.gitlab-runner
-------------------- OR --------------------
volumeMounts:
  - name: root-gitlab-runner
    mountPath: /.gitlab-runner
volumes:
  - name: root-gitlab-runner
    emptyDir:
      medium: "Memory"

Rundeck user management with ECS Fargate [Community edition]

I am managing users via the realm.properties file located in the /home/rundeck/server/config directory until an LDAP/AD solution is implemented. Every time I update the ECS/container task, the users I created using the previous container are deleted. I believe this is due to the lifecycle management of the container?
Is there any other way to manage users with Rundeck community?
Thanks.
You can use a realm.properties file as a volume (supported by ECS). That way, you can use a local/persistent custom realm.properties file. Take a look at this example.
The docker-compose.yml file:
services:
  rundeck:
    image: rundeck/rundeck:4.6.1
    environment:
      - RUNDECK_GRAILS_URL=http://localhost:4440
    volumes:
      - ./data/realm.properties:/home/rundeck/server/config/realm.properties
    ports:
      - "4440:4440"
    restart: unless-stopped
The realm.properties local file (stored in the data directory, at the same level as the docker-compose.yml file):
admin:admin,user,admin
user:user,user
bob:bob,user,admin
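For ECS Fargate specifically, where a host bind mount like the one above is not available, the usual way to get a persistent realm.properties is an EFS volume declared in the task definition. A sketch (the file system ID, names and paths are placeholders; note that mounting over /home/rundeck/server/config hides whatever the image ships in that directory, so the EFS directory must be seeded with the full config contents first):
{
  "volumes": [
    {
      "name": "rundeck-config",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-12345678",
        "rootDirectory": "/rundeck/config"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "rundeck",
      "mountPoints": [
        {
          "sourceVolume": "rundeck-config",
          "containerPath": "/home/rundeck/server/config",
          "readOnly": false
        }
      ]
    }
  ]
}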
Other options:
Process Automation (formerly "Rundeck Enterprise") includes a GUI User Management feature.
As you said, LDAP / AD integration.

How do I set different private keys for different environments for Elastic Beanstalk?

I am looking at this article https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-storingprivatekeys.html and I understand how I could store the private key file on the server using S3.
However, I am not sure how I can use a different private key file in each environment.
How do I achieve the above?
You can store the private keys for the different environments in S3, download them all, and then keep only the one you need for your specific environment. For example:
files:
  "/tmp/my_private_key.staging.json":
    mode: "000400"
    owner: webapp
    group: webapp
    authentication: "S3Auth"
    source: https://s3-us-west-1.amazonaws.com/my_bucket/my_private_key.staging.json
  "/tmp/my_private_key.production.json":
    mode: "000400"
    owner: webapp
    group: webapp
    authentication: "S3Auth"
    source: https://s3-us-west-1.amazonaws.com/my_bucket/my_private_key.production.json
container_commands:
  key_transfer_1:
    command: "mkdir -p .certificates"
  key_transfer_2:
    command: "mv /tmp/my_private_key.$APP_ENVIRONMENT.json .certificates/private_key.json"
  key_transfer_3:
    command: "rm /tmp/my_private_key.*"
where you have set APP_ENVIRONMENT as an environment variable to be "staging" or "production", etc.
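One way to set APP_ENVIRONMENT per environment is through the aws:elasticbeanstalk:application:environment namespace in an .ebextensions config file; a sketch, assuming the value "staging" for this environment:
option_settings:
  aws:elasticbeanstalk:application:environment:
    APP_ENVIRONMENT: staging
In practice you would usually set the value per environment with eb setenv APP_ENVIRONMENT=production or in the console rather than committing it to source.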

Docker - Spring Cloud Config Client, Issue with Config Server Discovery

I'm experimenting with the Spring Cloud Netflix stack and Spring Cloud Config Server and clients.
For this, I have set up a minimal example, as shown in the following docker-compose file:
version: '3'
services:
  #Eureka Service
  discovery:
    container_name: discovery
    image: jbprek/discovery:latest
    ports:
      - "8761:8761"
  #Spring cloud config server
  configservice:
    container_name: configserver
    image: jbprek/configserver:latest
    ports:
      - "8888:8888"
    depends_on:
      - discovery
  #Example microservice using discovery and spring cloud config
  constants-service:
    container_name: constants-service
    ports:
      - "8080:8080"
    image: jbprek/constants-service:latest
    depends_on:
      - discovery
      - configservice
The implementations of discovery and configserver are minimal, following various samples, and the full code can be cloned with:
git clone https://prek#bitbucket.org/prek/boot-netflix-problem.git
When the Spring Cloud Config client "constants-service" uses the following configuration in bootstrap.properties:
spring.application.name=constants-service
spring.cloud.config.uri=http://configserver:8888
then everything seems to work fine, including registration with Eureka and retrieval of configuration from configserver.
Then, to look up configserver via discovery and retrieve the configuration for constants-service, I modify the bootstrap.properties file as follows:
spring.application.name=constants-service
#Lookup through Eureka
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.service-id=CONFIGSERVER
The above change prevents "constants-service" from connecting to Eureka: it uses localhost instead of discovery as the Eureka hostname, so both the lookup of the configserver service and the self-registration with Eureka fail.
The application.properties for discovery is:
server.port=8761
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false
The application.properties for configserver is:
server.port=8888
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://discovery:8761/eureka
spring.cloud.config.server.git.uri=https://bitbucket.org/prek/boot-netflix-problem-config-data.git
spring.cloud.config.server.git.clone-on-start=true
spring.cloud.config.server.git.force-pull=true
spring.cloud.config.discovery.enabled=false
And for constants-service it is:
spring.application.name=constants-service
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://discovery:8761/eureka
Can someone advise on the above configuration?
Update
According to the answer provided below by @nmyk, for constants-service, which is both a Eureka (discovery) client and a Spring Cloud Config client, the configuration for both discovery and config should be contained in the bootstrap.properties file. Given the examples mentioned above, the bootstrap.properties file for constants-service could be:
spring.application.name=constants-service
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://discovery:8761/eureka
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.service-id=CONFIGSERVER
You are switching your application to 'Discovery First' mode, so your constants-service should know about Eureka and get the configserver URL from it by name.
The problem is simple: the bootstrap.properties of constants-service does not contain the URL to Eureka; you should move the Eureka client configuration from the git repo (application.properties) to bootstrap.properties.

Using MySQL on Openshift with Symfony 2

I added the MySQL and phpMyAdmin cartridges to my OpenShift PHP app.
After the MySQL cartridge was added, I saw a page which says:
Connection URL: mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/
but I have no idea what it means.
When I access the MySQL database through phpMyAdmin, I see 127.8.111.1 as the DB host, so I configured my Symfony 2 app (parameters.yml):
parameters:
    database_driver: pdo_mysql
    database_host: 127.8.111.1
    database_port: 3306
    database_name: <some_database>
    database_user: admin
    database_password: <some_password>
Now when I access my web page it throws an error, which I believe is related to the MySQL connection. Can someone show me the proper way of doing the above?
EDIT: It seems the MySQL connection works fine, but somehow
Error 101 (net::ERR_CONNECTION_RESET): Unknown error
is thrown.
The code I use, which works very well to make my apps run both on localhost and OpenShift without changing database config parameters every time I move between them, is this:
<?php
# app/config/params.php
if (getenv("OPENSHIFT_APP_NAME") != '') {
    $container->setParameter('database_host', getenv("OPENSHIFT_MYSQL_DB_HOST"));
    $container->setParameter('database_port', getenv("OPENSHIFT_MYSQL_DB_PORT"));
    $container->setParameter('database_name', getenv("OPENSHIFT_APP_NAME"));
    $container->setParameter('database_user', getenv("OPENSHIFT_MYSQL_DB_USERNAME"));
    $container->setParameter('database_password', getenv("OPENSHIFT_MYSQL_DB_PASSWORD"));
}
This tells the app that, if it is running in the OpenShift environment, it needs to load a different username, host, database, etc.
Then you have to import this file (params.php) from your app/config/config.yml file:
imports:
    - { resource: parameters.yml }
    - { resource: security.yml }
    - { resource: params.php }
...
And that's it. You will never have to touch this file or parameters.yml when you move between OpenShift and localhost.
Connection URL: mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/
OpenShift exposes environment variables to your application containing the host and port information for your database. You should reference these environment variables in your configuration instead of hard-coding values. I am not a Symfony expert, but it looks to me like you would need to do the following in order to use this information in your app:
Create a pre-start hook for your application and export variables in Symfony's expected format. Add the following to the .openshift/action_hooks/pre_start_php-5.3 file in your application's git repo:
export SYMFONY__DATABASE__HOST=$OPENSHIFT_MYSQL_DB_HOST
export SYMFONY__DATABASE__PORT=$OPENSHIFT_MYSQL_DB_PORT
Symfony uses this pattern to identify external configuration in the environment, and will make this configuration available for use in your YAML configuration:
parameters:
    database_driver: pdo_mysql
    database_host: "%database.host%"
    database_port: "%database.port%"
EDIT:
Another option to expose this information for use in the YAML configuration is to import a PHP file in your app/config/config.yml:
imports:
    - { resource: parameters.php }
In app/config/parameters.php:
<?php
$container->setParameter('database.host', getenv("OPENSHIFT_MYSQL_DB_HOST"));
$container->setParameter('database.port', getenv("OPENSHIFT_MYSQL_DB_PORT"));