Docker - Spring Cloud Config Client, Issue with Config Server Discovery - spring-cloud-netflix

I'm experimenting with the Spring Cloud Netflix stack and the Spring Cloud Config server and clients.
For this, I have set up a minimal example, as shown in the following docker-compose file.
version: '3'
services:
  #Eureka Service
  discovery:
    container_name: discovery
    image: jbprek/discovery:latest
    ports:
      - "8761:8761"
  #Spring cloud config server
  configservice:
    container_name: configserver
    image: jbprek/configserver:latest
    ports:
      - "8888:8888"
    depends_on:
      - discovery
  #Example microservice using discovery and spring cloud config
  constants-service:
    container_name: constants-service
    ports:
      - "8080:8080"
    image: jbprek/constants-service:latest
    depends_on:
      - discovery
      - configservice
The implementations of discovery and configserver are minimal, following various samples, and the full code can be cloned with:
git clone https://prek@bitbucket.org/prek/boot-netflix-problem.git
When the spring cloud config client "constants-service" uses the following configuration in bootstrap.properties
spring.application.name=constants-service
spring.cloud.config.uri=http://configserver:8888
then everything seems to work fine, including registration with Eureka and retrieval of configuration from configserver.
Then, to look up configserver via discovery and retrieve the configuration in constants-service, I modify the bootstrap.properties file as follows:
spring.application.name=constants-service
#Lookup through Eureka
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.service-id=CONFIGSERVER
The above change prevents "constants-service" from connecting to Eureka: it uses localhost instead of discovery as the Eureka hostname, so both the lookup of the configserver service and the self-registration with Eureka fail.
The application.properties for discovery is:
server.port=8761
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false
The application.properties for configserver is:
server.port=8888
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://discovery:8761/eureka
spring.cloud.config.server.git.uri=https://bitbucket.org/prek/boot-netflix-problem-config-data.git
spring.cloud.config.server.git.clone-on-start=true
spring.cloud.config.server.git.force-pull=true
spring.cloud.config.discovery.enabled=false
And for constants-service:
spring.application.name=constants-service
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://discovery:8761/eureka
Can someone advise on the above configuration?
Update
According to the answer provided below by @nmyk, for constants-service, which is both a Eureka (discovery) client and a Spring Cloud Config client, the configuration for both discovery and config should be contained in the bootstrap.properties file, so given the examples mentioned above, the bootstrap.properties file for constants-service could be:
spring.application.name=constants-service
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://discovery:8761/eureka
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.service-id=CONFIGSERVER

You are switching your application to 'Discovery First' mode, so your constants-service should know about Eureka and get the configserver URL from it by name.
The problem is simple: the bootstrap.properties of constants-service does not contain the URL to Eureka; you should move the Eureka client configuration from the git repo (application.properties) to bootstrap.properties.

Related

Rundeck user management with ECS Fargate [Community edition]

I am managing users via the realm.properties file located in the home/rundeck/server/config directory until an LDAP/AD solution is implemented. Every time I update the ECS/container task, the users I created using the previous container are deleted. I believe this is due to the lifecycle management of the container?
Is there any other way to manage users with Rundeck Community?
Thanks.
You can mount a realm.properties file as a volume (supported by ECS). That way, you can use a local / persistent custom realm.properties file. Take a look at this example:
The docker-compose.yml file.
services:
  rundeck:
    image: rundeck/rundeck:4.6.1
    environment:
      - RUNDECK_GRAILS_URL=http://localhost:4440
    volumes:
      - ./data/realm.properties:/home/rundeck/server/config/realm.properties
    ports:
      - "4440:4440"
    restart: unless-stopped
The local realm.properties file (stored in the data directory, at the same level as the docker-compose.yml file); each entry follows the format username:password,role1,role2.
admin:admin,user,admin
user:user,user
bob:bob,user,admin
Other options:
Process Automation (formerly "Rundeck Enterprise") includes a GUI User Management feature.
As you said, LDAP / AD integration.

How to use Openshift OAuth server as authentication provider for my web app running in openshift cluster?

I am deploying a web application in an OpenShift cluster. I want to use OpenShift authentication to log in to the web application, but I couldn't find documentation on how to use OpenShift authentication for third-party apps deployed in OpenShift. Can anyone give some pointers here?
Here are two sites / repositories describing how to use the oauth-proxy as a sidecar container:
https://linuxera.org/oauth-proxy-secure-applications-openshift/
https://github.com/openshift/oauth-proxy/#using-this-proxy-with-openshift
The gist of it is that you'll need to add the openshift/oauth-proxy container to your Deployment as a sidecar and route your traffic through this additional container:
apiVersion: apps/v1
kind: Deployment
[..]
spec:
  [..]
  template:
    spec:
      containers:
      - <YOUR_APPLICATION_CONTAINER>
      - name: oauth-proxy
        args:
        - -provider=openshift
        - -https-address=:8888
        - -http-address=
        - -email-domain=*
        - -upstream=http://localhost:8080
        - -tls-cert=/etc/tls/private/tls.crt
        - -tls-key=/etc/tls/private/tls.key
        - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
        - -cookie-secret-file=/etc/proxy/secrets/session_secret
        - -openshift-service-account=reversewords
        - -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        - -skip-auth-regex=^/metrics
        image: quay.io/openshift/oauth-proxy:4.6
        ports:
        - name: oauth-proxy
          containerPort: 8888
          protocol: TCP
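The proxy also needs the TLS certificate it references under /etc/tls/private. On OpenShift this is commonly obtained by annotating the Service that fronts the pod; here is a hedged sketch (the name reversewords is taken from the args above, proxy-tls is a hypothetical secret name, and the annotation follows the oauth-proxy README):
apiVersion: v1
kind: Service
metadata:
  name: reversewords
  annotations:
    # OpenShift generates tls.crt/tls.key into this secret;
    # mount it at /etc/tls/private in the oauth-proxy container.
    service.alpha.openshift.io/serving-cert-secret-name: proxy-tls
spec:
  ports:
  - name: oauth-proxy
    port: 8888
    targetPort: 8888
  selector:
    app: reversewords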
You can find full examples in the linked repository or the linked tutorial.

Docker, AspNetCore, DB connection string best practices

I've been spending the last week or so attempting to learn Docker and all the things it can do; however, one thing I'm struggling to get my head around is the best practice for managing secrets, especially database connection strings, and how these should be stored.
I have a plan in my head where I want to have a Docker setup containing an ASP.NET Core website, a MySQL database and a PHPMyAdmin frontend, and to deploy this onto a droplet I have at DigitalOcean.
I've been playing around a little and I have a docker-compose.yml file which has the MySQL DB and PHPMyAdmin correctly linked together:
version: "3"
services:
db:
image: mysql:latest
container_name: mysqlDatabase
environment:
- MYSQL_ROOT_PASSWORD=0001
- MYSQL_DATABASE=atestdb
restart: always
volumes:
- /var/lib/mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: db-mgr
ports:
- "3001:80"
environment:
- PMA_HOST=db
restart: always
depends_on:
- db
This correctly creates a MySQL DB for me, and I can connect to it with the running PHPMyAdmin front end using root / 0001 as the username/password combo.
I know I now need to add my ASP.NET Core web app to this, but I'm still stumped by the best way to handle my DB password.
I have looked at Docker swarm/secrets, but I still don't fully understand how this works, especially if I want to check my docker-compose file into Git/SCM. Other things I have read suggest using environment variables, but I still don't see how that is any different from just checking the connection string into my appsettings.json file, or, for that matter, how this would work in a full CI/CD build pipeline.
This question helped me out a little in getting to this point, but they still have their DB password in their docker-compose file.
It might be that I'm trying to overthink this.
Any help, guidance or suggestions would be gratefully received.
If you are using Docker Swarm, then you can take advantage of the secrets feature and store all your sensitive information, like passwords or even the whole connection string, as a Docker secret.
For each secret that is created, Docker will mount a file inside the container. By default it will mount all the secrets in the /run/secrets folder.
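For example, inside a running service container each secret shows up as a plain file, by default named after the secret (a sketch using the my_connection_secret secret created further down in this answer):
$ docker exec <service-container> cat /run/secrets/my_connection_secret
Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;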
You can create a custom configuration provider to read each secret and map it to a configuration value:
public class SwarmSecretsConfigurationProvider : ConfigurationProvider
{
    private readonly IEnumerable<SwarmSecretsPath> _secretsPaths;

    public SwarmSecretsConfigurationProvider(
        IEnumerable<SwarmSecretsPath> secretsPaths)
    {
        _secretsPaths = secretsPaths;
    }

    public override void Load()
    {
        var data = new Dictionary<string, string>
            (StringComparer.OrdinalIgnoreCase);
        foreach (var secretsPath in _secretsPaths)
        {
            if (!Directory.Exists(secretsPath.Path))
            {
                if (!secretsPath.Optional)
                {
                    throw new FileNotFoundException(secretsPath.Path);
                }
                continue; // skip missing optional paths
            }
            // Each secret is a file: the file name becomes the
            // configuration key, the file content the value.
            foreach (var filePath in Directory.GetFiles(secretsPath.Path))
            {
                var configurationKey = Path.GetFileName(filePath);
                if (secretsPath.KeyDelimiter != ":")
                {
                    configurationKey = configurationKey
                        .Replace(secretsPath.KeyDelimiter, ":");
                }
                var configurationValue = File.ReadAllText(filePath);
                data.Add(configurationKey, configurationValue);
            }
        }
        Data = data;
    }
}
Then you must add the custom provider to the application configuration:
public static IHostBuilder CreateHostBuilder(string[] args)
{
    return Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            config.AddSwarmSecrets();
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
}
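The SwarmSecretsPath class and the AddSwarmSecrets extension used above are not shown in the answer; a minimal sketch of what they might look like (the names, defaults and the "__" delimiter are assumptions, not the blog post's exact code):
public class SwarmSecretsPath
{
    public SwarmSecretsPath(string path, bool optional = true, string keyDelimiter = "__")
    {
        Path = path;
        Optional = optional;
        KeyDelimiter = keyDelimiter;
    }

    public string Path { get; }
    public bool Optional { get; }
    public string KeyDelimiter { get; }
}

public class SwarmSecretsConfigurationSource : IConfigurationSource
{
    private readonly IEnumerable<SwarmSecretsPath> _secretsPaths;

    public SwarmSecretsConfigurationSource(IEnumerable<SwarmSecretsPath> secretsPaths)
    {
        _secretsPaths = secretsPaths;
    }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        return new SwarmSecretsConfigurationProvider(_secretsPaths);
    }
}

public static class SwarmSecretsConfigurationExtensions
{
    // Registers the provider with Docker's default secrets mount point.
    public static IConfigurationBuilder AddSwarmSecrets(
        this IConfigurationBuilder builder,
        string path = "/run/secrets",
        bool optional = true)
    {
        return builder.Add(new SwarmSecretsConfigurationSource(
            new[] { new SwarmSecretsPath(path, optional) }));
    }
}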
Then, if you create a secret with the name "my_connection_secret":
$ echo "Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;" \
| docker secret create my_connection_secret -
and map it to your service as ConnectionStrings:DatabaseConnection:
services:
  app:
    secrets:
      - target: ConnectionStrings:DatabaseConnection
        source: my_connection_secret
it will be the same as writing it in appsettings.json:
{
  "ConnectionStrings": {
    "DatabaseConnection": "Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;"
  }
}
If you don't want to store the whole connection string as a secret, then you can use a placeholder for the password:
Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd={{pwd}};
and use another custom configuration provider to replace it with the password stored as a secret.
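As a rough sketch of that idea (a simplified helper rather than a full configuration provider; the class name and the secret file path /run/secrets/db_password are hypothetical):
public static class PasswordPlaceholderExtensions
{
    // Replaces the {{pwd}} placeholder in a configuration value
    // with the contents of a Docker secret file.
    public static string ResolvePassword(
        this IConfiguration configuration,
        string key,
        string secretFile = "/run/secrets/db_password")
    {
        var raw = configuration[key];
        if (raw == null || !raw.Contains("{{pwd}}"))
        {
            return raw;
        }
        var password = File.ReadAllText(secretFile).Trim();
        return raw.Replace("{{pwd}}", password);
    }
}
// Usage: var cs = configuration.ResolvePassword("ConnectionStrings:DatabaseConnection");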
On my blog post How to manage passwords in ASP.NET Core configuration files I explain in detail how to create a custom configuration provider that allows you to keep only the password as a secret and update the connection string at runtime. The full source code of this article is hosted at github.com/gabihodoroaga/blog-app-secrets.
Secrets are complicated. I will say that pulling them out into environment variables kicks the problem down the road a bit, especially when you are only using docker-compose (and not something fancier like Kubernetes or Swarm). Your docker-compose.yaml file would look something like this:
environment:
  - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
Compose will pull MYSQL_ROOT_PASSWORD from an .env file or a command-line/environment variable when you spin up your services. Most CI/CD services provide ways (either through a GUI or through some command-line interface) of encrypting secrets that get mapped to environment variables on the CI server.
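For example, a minimal .env file next to the compose file (kept out of source control; the value below is just a placeholder) could look like:
# .env - docker-compose reads this file automatically
MYSQL_ROOT_PASSWORD=change-me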
Not to say that environment variables are necessarily the best way of handling secrets. But if you do move to an orchestration platform like Kubernetes, there will be a straightforward path to mapping Kubernetes secrets to those same environment variables.

Securing Short-term-history (STH, aka. comet) with FIWARE-PEP-STEELSKIN

I'm struggling with securing the FIWARE Short Time Historic (STH, aka Comet) using Steelskin, the additional GEi of the PEP Proxy GE (https://github.com/telefonicaid/fiware-pep-steelskin).
We finally came up with a configuration that works perfectly with Orion and Perseo, but it does not properly handle STH calls. It returns:
{
  "name": "ACCESS_DENIED",
  "message": "The user does not have the appropriate permissions to access the selected action"
}
But it handles Orion calls with the given token perfectly. Does anyone have a working configuration on a docker-compose schema?
Our PEP frontend looks like:
pep-sth-fe:
  #image: telefonicaiot/fiware-pep-steelskin:latest
  build: ./fiware-pep-steelskin
  links:
    - sth
    - keystone
    - keypass
  ports:
    - "8666:8666"
    - "11213:11211"
  environment:
    - COMPONENT_PLUGIN=rest
    - TARGET_HOST=sth
    - TARGET_PORT=8666
    - PROXY_USERNAME=pep
    - PROXY_PASSWORD=XXXXXXXX
    - ACCESS_HOST=keypass
    - ACCESS_PORT=7070
    - AUTHENTICATION_HOST=keystone
    - AUTHENTICATION_PORT=5001
According to https://github.com/telefonicaid/fiware-pep-steelskin/blob/master/errorcodes.md, it might be a Keypass configuration issue. Creating and assigning an authorised role to allow queries on the PEP-proxied STH?
Thanks in advance for your help.
Bests!

Using MySQL on Openshift with Symfony 2

I added the MySQL and PHPMyAdmin cartridges to my OpenShift PHP app.
After the MySQL cartridge was added, I saw the page which says:
Connection URL: mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/
but I have no idea what it means.
When I access the MySQL database through PHPMyAdmin,
I see 127.8.111.1 as the DB host, so I configured my Symfony 2 app (parameters.yml):
parameters:
    database_driver: pdo_mysql
    database_host: 127.8.111.1
    database_port: 3306
    database_name: <some_database>
    database_user: admin
    database_password: <some_password>
Now when I access my web page it throws an error, which I believe is related to the MySQL connection. Can someone show me the proper way of doing the above?
EDIT: It seems the MySQL connection works fine, but somehow
Error 101 (net::ERR_CONNECTION_RESET): Unknown error
is thrown.
The code I use, which works very well to make my apps run both on localhost and OpenShift without changing database config parameters every time I move between them, is this:
<?php
# app/config/params.php
if (getenv("OPENSHIFT_APP_NAME") != '') {
    $container->setParameter('database_host', getenv("OPENSHIFT_MYSQL_DB_HOST"));
    $container->setParameter('database_port', getenv("OPENSHIFT_MYSQL_DB_PORT"));
    $container->setParameter('database_name', getenv("OPENSHIFT_APP_NAME"));
    $container->setParameter('database_user', getenv("OPENSHIFT_MYSQL_DB_USERNAME"));
    $container->setParameter('database_password', getenv("OPENSHIFT_MYSQL_DB_PASSWORD"));
}
?>
This tells the app that, if it is in the OpenShift environment, it needs to load a different username, host, database, etc.
Then you have to import this file (params.php) from your app/config/config.yml file:
imports:
    - { resource: parameters.yml }
    - { resource: security.yml }
    - { resource: params.php }
...
And that's it. You will never have to touch this file or parameters.yml when you move between OpenShift and localhost.
Connection URL: mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/
OpenShift exposes environment variables to your application containing the host and port information for your database. You should reference these environment variables in your configuration instead of hard-coding values. I am not a Symfony expert, but it looks to me like you would need to do the following in order to use this information in your app:
Create a pre-start hook for your application and export variables in Symfony's expected format. Add the following to the .openshift/action_hooks/pre_start_php-5.3 file in your application's git repo:
export SYMFONY__DATABASE__HOST=$OPENSHIFT_MYSQL_DB_HOST
export SYMFONY__DATABASE__PORT=$OPENSHIFT_MYSQL_DB_PORT
Symfony uses this pattern to identify external configuration in the environment, and will make this configuration available for use in your YAML configuration:
parameters:
    database_driver: pdo_mysql
    database_host: "%database.host%"
    database_port: "%database.port%"
EDIT:
Another option to expose this information for use in the YAML configuration is to import a php file in your app/config/config.yml:
imports:
    - { resource: parameters.php }
In app/config/parameters.php:
$container->setParameter('database.host', getenv("OPENSHIFT_MYSQL_DB_HOST"));
$container->setParameter('database.port', getenv("OPENSHIFT_MYSQL_DB_PORT"));