Rundeck user management with ECS Fargate [Community edition] - containers

I am managing users via the realm.properties file located in the /home/rundeck/server/config directory until an LDAP/AD solution is implemented. Every time I update the ECS/container task, the users I created in the previous container are deleted. I believe this is due to the lifecycle management of the container?
Is there any other way to manage users with Rundeck community?
Thanks.

You can mount a realm.properties file as a volume (ECS supports volumes). That way, you can use a local, persistent custom realm.properties file. Take a look at this example:
The docker-compose.yml file.
services:
  rundeck:
    image: rundeck/rundeck:4.6.1
    environment:
      - RUNDECK_GRAILS_URL=http://localhost:4440
    volumes:
      - ./data/realm.properties:/home/rundeck/server/config/realm.properties
    ports:
      - "4440:4440"
    restart: unless-stopped
The local realm.properties file (stored in the data directory, at the same level as the docker-compose.yml file).
admin:admin,user,admin
user:user,user
bob:bob,user,admin
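On ECS Fargate itself there is no persistent host path to bind-mount from, so the usual way to get the same effect is to put the file on an EFS volume and mount that into the task. A rough sketch of the relevant task-definition fragment (ECS API field names rendered here as YAML for readability; the file system ID and paths are placeholders, and note that EFS mounts a directory rather than a single file):
volumes:
  - name: rundeck-config
    efsVolumeConfiguration:
      fileSystemId: fs-0123456789abcdef0   # placeholder EFS file system
containerDefinitions:
  - name: rundeck
    image: rundeck/rundeck:4.6.1
    mountPoints:
      - sourceVolume: rundeck-config
        # Mounting over the whole config directory hides the image's other config
        # files, so either seed the EFS directory with them first or mount at a
        # different path and point Rundeck's JAAS configuration at it.
        containerPath: /home/rundeck/server/config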
Other options:
Process Automation (formerly "Rundeck Enterprise") includes a GUI User Management feature.
As you said, LDAP / AD integration.

Related

mkdir /.gitlab-runner: permission denied running GitLab Runner in Kubernetes deployed via Helm

I'm trying to deploy the GitLab Runner (15.7.1) onto an on-premise Kubernetes cluster and getting the following error:
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
This is occurring with both the 15.7.1 image (Ubuntu?) and the alpine3.13-v15.7.1 image. Looking at the deployment, it looks like it should be trying to use /home/gitlab-runner, but for some reason it is trying to use root (/), which is a protected directory.
Anyone else experience this issue or have a suggestion as to what to look at?
I am using the Helm chart (0.48.0) with a copy of the images from Docker Hub (simply moved into a local repository, as internet access is not available from the cluster). Connectivity to GitLab appears to be working, but the error causes the overall startup to fail. Full logs are:
Registration attempt 4 of 30
Runtime platform arch=amd64 os=linux pid=33 revision=6d480948 version=15.7.1
WARNING: Running in user-mode.
WARNING: The user-mode requires you to manually start builds processing:
WARNING: $ gitlab-runner run
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
Created missing unique system ID system_id=r_Of5q3G0yFEVe
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
I have tried the 15.7.1 image, the alpine3.13-v15.7.1 image, and the gitlab-runner-ocp:amd64-v15.7.1 image and searched the values.yaml for anything relevant to the path. Looking at the deployment template, it appears that it ought to be using /home/gitlab-runner as the directory (instead of /) [though the docs suggested it was /home].
As for "what was I expecting", of course I was expecting that it would "just work" :)
So, I resolved this (and other) issues with:
Updated the Helm deployment template to mount an empty volume at /.gitlab-runner (see the values.yaml sketch after this list).
[separate issue] explicitly added builds_dir and environment [per gitlab-org/gitlab-runner#3511 (comment 114281106)].
These two steps appeared to be sufficient to get the Helm chart deployment working.
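For reference, if the chart version you are using exposes the extra volume hooks (recent gitlab-runner charts have top-level volumes and volumeMounts values, but check your chart's values.yaml before relying on them), the same fix can be expressed roughly like this in values.yaml:
# values.yaml (sketch, assuming the chart supports these overrides)
volumeMounts:
  - name: gitlab-runner-home
    mountPath: /.gitlab-runner
volumes:
  - name: gitlab-runner-home
    emptyDir: {}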
You can easily create and mount the emptyDir (in case you are creating the gitlab-runner with a Kubernetes manifest *.yml file):
volumes:
  - name: gitlab-runner
    emptyDir: {}
volumeMounts:
  - name: gitlab-runner
    mountPath: /.gitlab-runner
-------------------- OR --------------------
volumeMounts:
  - name: root-gitlab-runner
    mountPath: /.gitlab-runner
volumes:
  - name: root-gitlab-runner
    emptyDir:
      medium: "Memory"

Docker - Spring Cloud Config Client, Issue with Config Server Discovery

I'm experimenting with Spring Cloud Netflix stack and Spring Cloud Config Server and clients.
For this, I have set up a minimal example, as shown in the following docker-compose file.
version: '3'
services:
  #Eureka Service
  discovery:
    container_name: discovery
    image: jbprek/discovery:latest
    ports:
      - "8761:8761"
  #Spring cloud config server
  configservice:
    container_name: configserver
    image: jbprek/configserver:latest
    ports:
      - "8888:8888"
    depends_on:
      - discovery
  #Example microservice using discovery and spring cloud config
  constants-service:
    container_name: constants-service
    ports:
      - "8080:8080"
    image: jbprek/constants-service:latest
    depends_on:
      - discovery
      - configservice
The implementations of discovery and configserver are minimal, following various samples, and the full code can be cloned with:
git clone https://prek@bitbucket.org/prek/boot-netflix-problem.git
When the spring cloud config client "constants-service" uses the following configuration in bootstrap.properties
spring.application.name=constants-service
spring.cloud.config.uri=http://configserver:8888
then everything seems to work fine including registration with "Eureka" and retrieval of configuration from configserver.
Then, to look up configserver via discovery and retrieve the configuration for constants-service, I modify the bootstrap.properties file as follows:
spring.application.name=constants-service
#Lookup through Eureka
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.service-id=CONFIGSERVER
The above change prevents "constants-service" from connecting to Eureka: it uses localhost instead of discovery as the Eureka hostname, so both the lookup of the configserver service and the self-registration with Eureka fail.
The application.properties for discovery is:
server.port=8761
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false
The application.properties for configserver is:
server.port=8888
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://discovery:8761/eureka
spring.cloud.config.server.git.uri=https://bitbucket.org/prek/boot-netflix-problem-config-data.git
spring.cloud.config.server.git.clone-on-start=true
spring.cloud.config.server.git.force-pull=true
spring.cloud.config.discovery.enabled=false
And for constants-service:
spring.application.name=constants-service
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://discovery:8761/eureka
Can someone advise on the above configuration?
Update
According to the answer provided below by @nmyk, for constants-service, which is both a Eureka (discovery) client and a Spring Cloud Config client, the configuration for both discovery and config should be contained in the bootstrap.properties file, so given the examples mentioned above the bootstrap.properties file for constants-service could be:
spring.application.name=constants-service
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://discovery:8761/eureka
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.service-id=CONFIGSERVER
You are switching your application to 'Discovery First' mode, so your constants-service should know about Eureka and get the configserver URL from it by name.
The problem is simple: the bootstrap.properties of constants-service does not contain the URL to Eureka; you should move the Eureka client configuration from the git repo (application.properties) into bootstrap.properties.
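In other words, the client's bootstrap configuration needs both the Eureka address and the discovery-first settings. As a sketch, here are the same settings from the update above expressed as an equivalent bootstrap.yml:
spring:
  application:
    name: constants-service
  cloud:
    config:
      discovery:
        enabled: true
        service-id: CONFIGSERVER
eureka:
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://discovery:8761/eureka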

Docker, AspNetCore, DB connection string best practices

I've been spending the last week or so attempting to learn Docker and all the things it can do; however, one thing I'm struggling to get my head around is the best practice for managing secrets, especially database connection strings and how these should be stored.
I have a plan in my head where I want to have a Docker image which will contain an ASP.NET Core website, a MySQL database and a phpMyAdmin front end, and deploy this onto a droplet I have at DigitalOcean.
I've been playing around a little bit and I have a docker-compose.yml file which has the MySQL DB and phpMyAdmin correctly linked together:
version: "3"
services:
db:
image: mysql:latest
container_name: mysqlDatabase
environment:
- MYSQL_ROOT_PASSWORD=0001
- MYSQL_DATABASE=atestdb
restart: always
volumes:
- /var/lib/mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin
container_name: db-mgr
ports:
- "3001:80"
environment:
- PMA_HOST=db
restart: always
depends_on:
- db
This is correctly creating a MySQL DB for me and I can connect to it with the running PHPMyAdmin front end using root / 0001 as the username/password combo.
I know I would now need to add my ASP.NET Core web app to this, but I'm still stumped by the best way to handle my DB password.
I have looked at Docker Swarm/secrets, but I still don't fully understand how this works, especially if I want to check my docker-compose file into Git/SCM. Other things I have read have suggested using environment variables, but I still don't see how that is any different from just checking the connection string into my appsettings.json file, or, for that matter, how this would work in a full CI/CD build pipeline.
This question helped me out a little in getting to this point, but they still have their DB password in their docker-compose file.
It might be that I'm trying to overthink this.
Any help, guidance or suggestions would be gratefully received.
If you are using Docker Swarm, then you can take advantage of the secrets feature and store all your sensitive information, like passwords or even the whole connection string, as a Docker secret.
For each secret that is created, Docker will mount a file inside the container. By default, it mounts all the secrets in the /run/secrets folder.
You can create a custom configuration provider to read the secrets and map them to configuration values:
public class SwarmSecretsConfigurationProvider : ConfigurationProvider
{
    private readonly IEnumerable<SwarmSecretsPath> _secretsPaths;

    public SwarmSecretsConfigurationProvider(
        IEnumerable<SwarmSecretsPath> secretsPaths)
    {
        _secretsPaths = secretsPaths;
    }

    public override void Load()
    {
        var data = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        foreach (var secretsPath in _secretsPaths)
        {
            if (!Directory.Exists(secretsPath.Path))
            {
                if (!secretsPath.Optional)
                {
                    throw new FileNotFoundException(secretsPath.Path);
                }
                continue; // optional path that is not present: skip it
            }
            foreach (var filePath in Directory.GetFiles(secretsPath.Path))
            {
                // The secret's file name becomes the configuration key and its
                // contents become the value; a custom delimiter in the name is
                // translated to the ':' section separator.
                var configurationKey = Path.GetFileName(filePath);
                if (secretsPath.KeyDelimiter != ":")
                {
                    configurationKey = configurationKey
                        .Replace(secretsPath.KeyDelimiter, ":");
                }
                var configurationValue = File.ReadAllText(filePath);
                data.Add(configurationKey, configurationValue);
            }
        }
        Data = data;
    }
}
then you must add the custom provider to the application configuration
public static IHostBuilder CreateHostBuilder(string[] args)
{
    return Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            config.AddSwarmSecrets();
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
}
then if you create a secret with name "my_connection_secret"
$ echo "Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;" \
| docker secret create my_connection_secret -
and map it to your service as connectionstrings:DatabaseConnection
services:
  app:
    secrets:
      - target: ConnectionStrings:DatabaseConnection
        source: my_connection_secret
it will be the same as writing it to appsettings.json:
{
  "ConnectionStrings": {
    "DatabaseConnection": "Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;"
  }
}
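Note that for the target/source mapping above to resolve, the stack file also needs a top-level secrets section declaring the secret; a minimal sketch, assuming the secret was created externally with docker secret create as shown above:
secrets:
  my_connection_secret:
    external: true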
If you don't want to store the whole connection string as a secret, then you can use a placeholder for the password
Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd={{pwd}};
and use another custom configuration provider to replace it with the password stored as a secret.
In my blog post How to manage passwords in ASP.NET Core configuration files I explain in detail how to create a custom configuration provider that allows you to keep only the password as a secret and update the connection string at runtime. The full source code for this article is hosted at github.com/gabihodoroaga/blog-app-secrets.
Secrets are complicated. I will say that pulling them out into environment variables kicks the problem down the road a bit, especially when you are only using docker-compose (and not something fancier like kubernetes or swarm). Your docker-compose.yaml file would look something like this:
environment:
  - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
Compose will pull MYSQL_ROOT_PASSWORD from an .env file or a command line/environment variable when you spin up your services. Most CI/CD services provide ways (either through a GUI or through some command line interface) of encrypting secrets that get mapped to environment variables on the CI server.
Not to say that environment variables are necessarily the best way of handling secrets. But if you do move to an orchestration platform, like kubernetes, there will be a straightforward path to mapping kubernetes secrets to those same environment variables.
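For an ASP.NET Core app specifically, the default configuration providers map a double underscore in an environment variable name to the ':' key separator, so a compose entry along these lines (a sketch; the service, server and database names are illustrative) overrides ConnectionStrings:DatabaseConnection without checking the password into source control:
services:
  app:
    environment:
      # "__" becomes ":" in the configuration key, overriding appsettings.json
      - ConnectionStrings__DatabaseConnection=Server=db;Database=atestdb;Uid=root;Pwd=${MYSQL_ROOT_PASSWORD}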

Securing Short-term-history (STH, aka. comet) with FIWARE-PEP-STEELSKIN

I'm struggling to secure FIWARE Short Time Historic (STH, aka Comet) using Steelskin, the additional GEi of the PEP Proxy GE (https://github.com/telefonicaid/fiware-pep-steelskin).
We finally came up with a configuration that works perfectly with Orion and Perseo, but it does not properly handle STH calls. It returns:
{
  "name": "ACCESS_DENIED",
  "message": "The user does not have the appropriate permissions to access the selected action"
}
But it handles Orion calls perfectly with the given token. Does anyone have a working configuration using a docker-compose schema?
Our PEP frontend looks like:
pep-sth-fe:
  #image: telefonicaiot/fiware-pep-steelskin:latest
  build: ./fiware-pep-steelskin
  links:
    - sth
    - keystone
    - keypass
  ports:
    - "8666:8666"
    - "11213:11211"
  environment:
    - COMPONENT_PLUGIN=rest
    - TARGET_HOST=sth
    - TARGET_PORT=8666
    - PROXY_USERNAME=pep
    - PROXY_PASSWORD=XXXXXXXX
    - ACCESS_HOST=keypass
    - ACCESS_PORT=7070
    - AUTHENTICATION_HOST=keystone
    - AUTHENTICATION_PORT=5001
According to: https://github.com/telefonicaid/fiware-pep-steelskin/blob/master/errorcodes.md
It might be a Keypass configuration issue. Should we be creating and assigning an authorised role to allow queries on the PEP-proxied STH?
Thanks in advance for your help.
Bests!

Cronjob of existing Pod

I have a Django app running on OpenShift 3. I need to run certain manage.py commands on a regular basis. In OpenShift 2 I used the Cron gear, and now in OpenShift 3 I want to use the CronJob pod type.
I want to create a pod for the cron job that uses the same source as the Django app, but is not exposed.
For example:
W1 - Django app
D1 - Postgres DB
M1 - django app for manage.py jobs, run as a cronjob pod.
Any help is appreciated.
You want to use a scheduled job.
https://docs.openshift.com/container-platform/3.5/dev_guide/cron_jobs.html
https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
https://blog.openshift.com/openshift-jobs/
Note that at this time (OpenShift 3.5), you have to use batch/v2alpha1 as the API version. Be careful of out of date documentation showing older version labels.
What I am not sure of is how you can easily reference the image associated with the existing imagestream produced when you used the S2I builder to build your application, when you want to use the same image. The base Kubernetes object for this expects you to refer to the image from the image registry. You would thus need to work that out by looking at the imagestream and copying the image registry IP and image details over by hand.
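As a rough sketch of what that looks like once the pull spec has been copied out of the imagestream (the registry address, project and image names below are placeholders; oc describe imagestream <name> shows the real pull spec):
apiVersion: batch/v2alpha1          # API version required at OpenShift 3.5 (see note above)
kind: CronJob
metadata:
  name: djangomanage
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: djangomanage
              # Full pull spec copied by hand from the imagestream
              image: docker-registry.default.svc:5000/myproject/myapp:latest
          restartPolicy: Never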
UPDATE 1
See:
https://stackoverflow.com/a/45227960/128141
for details of how, from OpenShift 3.6, you can have it resolve the imagestream name automatically. That mechanism is still in alpha status in 3.6, but it does work.
I've gotten it to work by specifying the image name in the YAML, but when I then tried to get it to work as part of the template, I ran into an error when trying to use the batch/v1 version on this server:
Cannot create cron job "djangomanage". The API version batch/v1 for kind CronJob is not supported by this server.
My template code is
- apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: djangomanage
  spec:
    schedule: "*/5 * * * *"
    jobTemplate:
      spec:
        template:
          spec:
            containers:
              - name: djangomanage
                image: '${NAME}:latest'
                env:
                  - name: APP_SCRIPT
                    value: "/opt/app-root/src/cron.sh"
            restartPolicy: Never
cron.sh:
python /opt/app-root/src/manage.py
You need to update line 1 with this:
- apiVersion: batch/v1beta1
see link below:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#cronjob-v1beta1-batch