How do I set different private keys for different environments for Elastic Beanstalk? - amazon-elastic-beanstalk

I am looking at this article https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-storingprivatekeys.html and I understand how I can store a private key file on the server using S3.
However, I am not sure how to store a different private key file in each environment.
How do I achieve this?

You can store the private keys in S3 for the different environments, download them all, but then only access the one you need for your specific environment. For example:
files:
  "/tmp/my_private_key.staging.json":
    mode: "000400"
    owner: webapp
    group: webapp
    authentication: "S3Auth"
    source: https://s3-us-west-1.amazonaws.com/my_bucket/my_private_key.staging.json
  "/tmp/my_private_key.production.json":
    mode: "000400"
    owner: webapp
    group: webapp
    authentication: "S3Auth"
    source: https://s3-us-west-1.amazonaws.com/my_bucket/my_private_key.production.json

container_commands:
  key_transfer_1:
    command: "mkdir -p .certificates"
  key_transfer_2:
    command: "mv /tmp/my_private_key.$APP_ENVIRONMENT.json .certificates/private_key.json"
  key_transfer_3:
    command: "rm /tmp/my_private_key.*"
where you have set APP_ENVIRONMENT as an environment variable to be "staging" or "production", etc.
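The effect of the three container commands can be sketched locally; the file names and the APP_ENVIRONMENT value are the ones from the example, and a temp directory stands in for the instance filesystem:

```shell
# Simulate the container_commands above in a scratch directory.
workdir=$(mktemp -d) && cd "$workdir"
mkdir tmp
echo '{"key":"staging"}'    > tmp/my_private_key.staging.json
echo '{"key":"production"}' > tmp/my_private_key.production.json

APP_ENVIRONMENT=staging                 # set per environment in the EB console
mkdir -p .certificates
mv "tmp/my_private_key.$APP_ENVIRONMENT.json" .certificates/private_key.json
rm -f tmp/my_private_key.*              # the unused key does not linger on disk

cat .certificates/private_key.json
```

The application then always reads the same path, .certificates/private_key.json, regardless of environment.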

Related

Need Azure Files shares to be mounted using SAS signatures

Friends, any idea on how to mount an Azure file share using a SAS signature in a container?
I was able to mount an Azure file share using the storage account name and storage account key, but wasn't able to do it using a SAS token.
If you have come across this kind of requirement, please feel free to share your suggestions.
Tried with below command to create secret:
kubectl create secret generic dev-fileshare-sas --from-literal=accountname=######### --from-literal=sasToken="########" --type="azure/blobfuse"
volumes mount conf in container:
- name: azurefileshare
  flexVolume:
    driver: "azure/blobfuse"
    readOnly: false
    secretRef:
      name: dev-fileshare-sas
    options:
      container: test-file-share
      mountoptions: "--file-cache-timeout-in-seconds=120"
Thanks.
To mount a file share, you must use SMB. SMB supports mounting the file share using identity-based authentication (AD DS and Azure AD DS) or the storage account key, but not SAS. A SAS token can only be used when accessing the file share over REST (for example, in Storage Explorer).
This is covered in the FAQ: Frequently asked questions (FAQ) for Azure Files | Microsoft Docs

Cloud function deployment issue

When I deploy the cloud function I get the following error.
I am using Go modules and I am able to build and run all the integration tests from my sandbox.
One of the cloud function's dependencies uses a private GitHub repo.
When I deploy the cloud function:
go: github.com/myrepo/ptrie@v0.1.: git fetch -f origin refs/heads/:refs/heads/ refs/tags/:refs/tags/ in /builder/pkg/mod/cache/vcs/41e03711c0ecff6d0de8588fa6de21a2c351c59fd4b0a1b685eaaa5868c5892e: exit status 128:
fatal: could not read Username for 'https://github.com': terminal prompts disabled
You might want to create a personal access token in GitHub and then configure git to use that token.
That command would look like this:
git config --global url."https://{YOUR TOKEN}:x-oauth-basic@github.com/".insteadOf "https://github.com/"
After that, git should be able to read from your private repo.
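You can verify the rewrite rule without touching your real configuration by pointing HOME at a scratch directory first (MYTOKEN below is a placeholder, not a real token):

```shell
# Use a throwaway HOME so the global gitconfig written here is disposable.
export HOME="$(mktemp -d)"
git config --global url."https://MYTOKEN:x-oauth-basic@github.com/".insteadOf "https://github.com/"

# Ask git which prefix the rule rewrites; any module fetch from
# https://github.com/... will now carry the token instead of prompting.
git config --global --get url."https://MYTOKEN:x-oauth-basic@github.com/".insteadOf
```

Because `go get` shells out to git, this is enough to make the builder fetch the private module non-interactively.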
How about using endly to automate your cloud function build? In this case you would use go mod with vendoring, so your private repo ends up in the vendor folder.
Make sure that you add a .gcloudignore so that go.mod and go.sum are not included:
#.gcloudignore
go.mod
go.sum
The automation workflow with endly that uses private repo with credentials may look like the following
#deploy.yaml
init:
  appPath: $WorkingDirectory(.)
  target:
    URL: ssh://127.0.0.1/
    credentials: localhost
  myGitSecret: ${secrets.private-git}

pipeline:
  secretInfo:
    action: print
    comments: print git credentials (debugging only)
    message: $AsJSON($myGitSecret)

  package:
    action: exec:run
    comments: vendor build for deployment speedup
    target: $target
    checkError: true
    terminators:
      - Password
      - Username
    secrets:
      # secret var alias: secret file, i.e. ~/.secret/private-git.json
      gitSecrets: private-git
    commands:
      - export GIT_TERMINAL_PROMPT=1
      - export GO111MODULE=on
      - unset GOPATH
      - cd ${appPath}/
      - go mod vendor
      - '${cmd[3].stdout}:/Username/? $gitSecrets.Username'
      - '${output}:/Password/? $gitSecrets.Password'

  deploy:
    action: gcp/cloudfunctions:deploy
    '#name': MyFn
    timeout: 540s
    availableMemoryMb: 2048
    entryPoint: MyFn
    runtime: go111
    eventTrigger:
      eventType: google.storage.object.finalize
      resource: projects/_/buckets/${matcherConfig.Bucket}
    source:
      URL: ${appPath}/
Finally check out cloud function e2e testing and deployment automation

Docker, AspNetCore, DB connection string best practices

I've been spending the last week or so learning Docker and all the things it can do, but one thing I'm still struggling to get my head around is the best practice for managing secrets, especially database connection strings and how they should be stored.
I have a plan in my head where I want to have a docker image, which will contain an ASP.NET Core website, MySQL database and a PHPMyAdmin frontend, and deploy this onto a droplet I have at DigitalOcean.
I've been playing around a little bit and I have a docker-compose.yml file which has the MySQL DB and PhpMyAdmin correctly linked together:
version: "3"
services:
  db:
    image: mysql:latest
    container_name: mysqlDatabase
    environment:
      - MYSQL_ROOT_PASSWORD=0001
      - MYSQL_DATABASE=atestdb
    restart: always
    volumes:
      - /var/lib/mysql
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: db-mgr
    ports:
      - "3001:80"
    environment:
      - PMA_HOST=db
    restart: always
    depends_on:
      - db
This is correctly creating a MySQL DB for me and I can connect to it with the running PHPMyAdmin front end using root / 0001 as the username/password combo.
I know I would now need to add my AspNetCore web app to this, but I'm still stumped by the best way to have my DB password.
I have looked at docker swarm/secrets, but I still don't fully understand how this works, especially if I want to check my docker-compose file into Git/SCM. Other things I have read suggest using environment variables, but I don't see how that is any different from just checking the connection string into my appsettings.json file, or, for that matter, how it would work in a full CI/CD build pipeline.
This question helped my out a little getting to this point, but they still have their DB password in their docker-compose file.
It might be that I'm overthinking this.
Any help, guidance or suggestions would be gratefully received.
If you are using Docker Swarm then you can take advantage of the secrets feature and store all your sensitive information, like passwords or even the whole connection string, as a Docker secret.
For each secret that is created, Docker will mount a file inside the container. By default it mounts all the secrets in the /run/secrets folder.
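A quick way to see that mount behavior, simulated with a plain directory standing in for /run/secrets (the path and secret name here are illustrative):

```shell
# Stand-in for the directory Docker mounts secrets into.
secrets=$(mktemp -d)

# Swarm writes one file per secret; the file name is the secret's target name.
printf 'Server=myServer;Pwd=s3cret;' > "$secrets/DatabaseConnection"

# Inside the container, the app simply reads the file.
cat "$secrets/DatabaseConnection"
```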
You can create a custom configuration provider to read each secret file and map it to a configuration value:
public class SwarmSecretsConfigurationProvider : ConfigurationProvider
{
    private readonly IEnumerable<SwarmSecretsPath> _secretsPaths;

    public SwarmSecretsConfigurationProvider(
        IEnumerable<SwarmSecretsPath> secretsPaths)
    {
        _secretsPaths = secretsPaths;
    }

    public override void Load()
    {
        var data = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        foreach (var secretsPath in _secretsPaths)
        {
            if (!Directory.Exists(secretsPath.Path))
            {
                if (!secretsPath.Optional)
                {
                    throw new FileNotFoundException(secretsPath.Path);
                }
                continue; // optional path that is absent: skip it
            }
            foreach (var filePath in Directory.GetFiles(secretsPath.Path))
            {
                var configurationKey = Path.GetFileName(filePath);
                if (secretsPath.KeyDelimiter != ":")
                {
                    configurationKey = configurationKey
                        .Replace(secretsPath.KeyDelimiter, ":");
                }
                var configurationValue = File.ReadAllText(filePath);
                data.Add(configurationKey, configurationValue);
            }
        }
        Data = data;
    }
}
Then you must add the custom provider to the application configuration:
public static IHostBuilder CreateHostBuilder(string[] args)
{
    return Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            config.AddSwarmSecrets();
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
}
then if you create a secret with name "my_connection_secret"
$ echo "Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;" \
| docker secret create my_connection_secret -
and map it to your service as connectionstrings:DatabaseConnection
services:
  app:
    secrets:
      - target: ConnectionStrings:DatabaseConnection
        source: my_connection_secret
it will be the same as writing it to appsettings.json:
{
  "ConnectionStrings": {
    "DatabaseConnection": "Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;"
  }
}
If you don't want to store the whole connection string as a secret, you can use a placeholder for the password:
Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd={{pwd}};
and use another custom configuration provider to replace it with the password stored as secret.
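The replacement itself is just a string substitution; a minimal sketch with sed, where the secret file path stands in for something like /run/secrets/db_password:

```shell
# Secret containing only the password, as it would appear under /run/secrets.
secret=$(mktemp)
printf 'myPassword' > "$secret"

conn='Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd={{pwd}};'

# Splice the secret into the placeholder at startup.
echo "$conn" | sed "s/{{pwd}}/$(cat "$secret")/"
```

The configuration provider does the same thing in process, so the full credential never appears in any checked-in file.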
On my blog post How to manage passwords in ASP.NET Core configuration files I explain in detail how to create a custom configuration provider that allows you to keep only the password as a secret and update the connection string at runtime. The full source code for the article is hosted at github.com/gabihodoroaga/blog-app-secrets.
Secrets are complicated. I will say that pulling them out into environment variables kicks the problem down the road a bit, especially when you are only using docker-compose (and not something fancier like kubernetes or swarm). Your docker-compose.yaml file would look something like this:
environment:
  - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
Compose will pull MYSQL_ROOT_PASSWORD from an .env file or a command line/environment variable when you spin up your services. Most CI/CD services provide ways (either through a GUI or through some command line interface) of encrypting secrets that get mapped to environment variables on the CI server.
Not to say that environment variables are necessarily the best way of handling secrets. But if you do move to an orchestration platform, like kubernetes, there will be a straightforward path to mapping kubernetes secrets to those same environment variables.
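What Compose does with that ${MYSQL_ROOT_PASSWORD} reference can be imitated in plain shell; the password value below is obviously a dummy:

```shell
workdir=$(mktemp -d) && cd "$workdir"

# .env sits next to docker-compose.yml and stays out of source control.
echo 'MYSQL_ROOT_PASSWORD=dummy123' > .env

# Compose loads .env itself; `set -a` plus sourcing mimics that for a local check.
set -a; . ./.env; set +a
echo "resolved: $MYSQL_ROOT_PASSWORD"
```

On a CI server you would skip the .env file entirely and let the pipeline inject MYSQL_ROOT_PASSWORD as an encrypted environment variable.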

Using environment properties with files in elastic beanstalk config files

Working with Elastic Beanstalk .config files is kinda... interesting. I'm trying to use environment properties with the files: configuration option in an Elastc Beanstalk .config file. What I'd like to do is something like:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root
    content: |
      ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY}
To create an /etc/passwd-s3fs file with content something like:
ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd
I.e. use the environment properties defined in the AWS Console (Elastic Beanstalk/Configuration/Software Configuration/Environment Properties) to initialize system configuration files and such.
I've found that it is possible to use environment properties in container_commands, like so:
container_commands:
  000-create-file:
    command: echo ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY} > /etc/passwd-s3fs
However, doing so will require me to manually set owner, group, file permissions etc. It's also much more of a hassle when dealing with larger configuration files than the Files: configuration option...
Anyone got any tips on this?
How about something like this. I will use the word "context" for dev vs. qa.
Create one file per context:
dev-envvars
export MYAPP_IP_ADDR=111.222.0.1
export MYAPP_BUCKET=dev
qa-envvars
export MYAPP_IP_ADDR=111.222.1.1
export MYAPP_BUCKET=qa
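Locally, the context files behave like any sourced shell script (the values are the example ones above):

```shell
workdir=$(mktemp -d) && cd "$workdir"

# Recreate dev-envvars as it would be uploaded to S3.
cat > dev-envvars <<'EOF'
export MYAPP_IP_ADDR=111.222.0.1
export MYAPP_BUCKET=dev
EOF

# This is what the Beanstalk `commands:` entry does after the file is downloaded.
. ./dev-envvars
echo "$MYAPP_BUCKET"
```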
Upload those files to a private S3 folder, s3://myapp/config.
In IAM, add a policy to the aws-elasticbeanstalk-ec2-role role that allows reading s3://myapp/config.
Add the following file to your .ebextensions directory:
envvars.config
files:
  "/opt/myapp_envvars":
    mode: "000644"
    owner: root
    group: root
    # change the source when you need a different context
    #source: https://s3-us-west-2.amazonaws.com/myapp/dev-envvars
    source: https://s3-us-west-2.amazonaws.com/myapp/qa-envvars

Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: myapp

commands:
  # commands execute after files per
  # http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
  10-load-env-vars:
    command: . /opt/myapp_envvars
Per the AWS Developer's Guide, commands "run before the application and web server are set up and the application version file is extracted," and before container-commands. I guess the question will be whether that is early enough in the boot process to make the environment variables available when you need them. I actually wound up writing an init.d script to start and stop things in my EC2 instance. I used the technique above to deploy the script.
Credit for the “Resources” section that allows downloading from secured S3 goes to the May 7, 2014 post that Joshua@AWS made to this thread.
I am gravedigging, but since I stumbled across this in the course of my travels: there is a "clever" way to do what you describe, at least as of 2018 (and since at least 2016). You can retrieve an environment property by key with get-config:
/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY
And likewise all environment properties at once (as JSON by default, or YAML with --output YAML):
/opt/elasticbeanstalk/bin/get-config environment
Example usage in a container command:
container_commands:
  00_store_env_var_in_file_and_chmod:
    command: "/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_KEY | install -D /dev/stdin /etc/somefile && chmod 640 /etc/somefile"
Example usage in a file:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/00_do_stuff.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      YOUR_ENV_VAR=$(/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY)
      echo "Hello $YOUR_ENV_VAR"
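You can dry-run the pipe into install(1) outside Beanstalk by mocking get-config; the function name, key, and value below are made up, since the real binary only exists on EB instances:

```shell
dest=$(mktemp -d)

# Stand-in for: /opt/elasticbeanstalk/bin/get-config environment --key SOME_KEY
get_config() { printf 'super-secret-value'; }

# install -D creates the parent directories and writes stdin to the target file
# with the requested mode, in one step.
get_config | install -D -m 640 /dev/stdin "$dest/etc/somefile"

cat "$dest/etc/somefile"
```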
I was introduced to get-config by Thomas Reggi in https://serverfault.com/a/771067.
I assume that AWS_ACCESS_KEY_ID and AWS_SECRET_KEY are known to you prior to the app deployment.
You can create the file on your workstation and submit it to the Elastic Beanstalk instance along with your code on git aws.push:
$ cd .ebextensions
$ echo 'ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd' > passwd-s3fs
In .config:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root

container_commands:
  10-copy-passwords-file:
    command: "cat .ebextensions/passwd-s3fs > /etc/passwd-s3fs"
You might have to play with the permissions or run cat with sudo. Also, I put the file into .ebextensions as an example; it can be anywhere in your project.
Hope it helps.

Using MySQL on Openshift with Symfony 2

I added MySQL and PhpMyAdmin cartridges to my OpenShift PHP app.
After the MySQL cartridge was added I saw a page which says:
Connection URL: mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/
but I have no idea what that means.
When I access mysql database through PHPMyAdmin,
I see 127.8.111.1 as db host, so I configured my symfony 2 app (parameters.yml):
parameters:
    database_driver: pdo_mysql
    database_host: 127.8.111.1
    database_port: 3306
    database_name: <some_database>
    database_user: admin
    database_password: <some_password>
Now when I access my web page it throws an error, which I believe related to mysql connection. Can someone show me proper way of doing the above?
EDIT: It seems mysql connection works fine, but somehow
Error 101 (net::ERR_CONNECTION_RESET): Unknown error
is thrown.
The code I use, which works very well to make my apps run both on localhost and OpenShift without changing database config parameters every time I move between them, is this:
<?php
# app/config/params.php
if (getenv("OPENSHIFT_APP_NAME") != '') {
    $container->setParameter('database_host', getenv("OPENSHIFT_MYSQL_DB_HOST"));
    $container->setParameter('database_port', getenv("OPENSHIFT_MYSQL_DB_PORT"));
    $container->setParameter('database_name', getenv("OPENSHIFT_APP_NAME"));
    $container->setParameter('database_user', getenv("OPENSHIFT_MYSQL_DB_USERNAME"));
    $container->setParameter('database_password', getenv("OPENSHIFT_MYSQL_DB_PASSWORD"));
}
This tells the app that, if it is running in the OpenShift environment, it needs to load a different username, host, database, and so on.
Then you have to import this file (params.php) from your app/config/config.yml file:
imports:
    - { resource: parameters.yml }
    - { resource: security.yml }
    - { resource: params.php }
...
And that's it. You will never have to touch this file or parameters.yml when you move on openshift or localhost.
Connection URL: mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/
OpenShift exposes environment variables to your application containing the host and port information for your database. You should reference these environment variables in your configuration instead of hard-coding values. I am not a Symfony expert, but it looks to me like you would need to do the following in order to use this information in your app:
Create a pre-start hook for your application and export variables in Symfony's expected format. Add the following to the .openshift/action_hooks/pre_start_php-5.3 file in your application's git repo:
export SYMFONY__DATABASE__HOST=$OPENSHIFT_MYSQL_DB_HOST
export SYMFONY__DATABASE__PORT=$OPENSHIFT_MYSQL_DB_PORT
Symfony uses this pattern to identify external configuration in the environment, and will make this configuration available for use in your YAML configuration:
parameters:
    database_driver: pdo_mysql
    database_host: "%database.host%"
    database_port: "%database.port%"
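The naming convention is mechanical: strip the SYMFONY__ prefix, lowercase what remains, and turn each remaining double underscore into a dot. A sketch of that mapping:

```shell
var=SYMFONY__DATABASE__HOST

# SYMFONY__DATABASE__HOST -> database.host
param=$(echo "${var#SYMFONY__}" | tr 'A-Z' 'a-z' | sed 's/__/./g')
echo "$param"
```

So the exports in the pre-start hook surface as the %database.host% and %database.port% parameters used above.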
EDIT:
Another option to expose this information for use in the YAML configuration is to import a php file in your app/config/config.yml:
imports:
    - { resource: parameters.php }
In app/config/parameters.php:
<?php
// app/config/parameters.php
$container->setParameter('database.host', getenv("OPENSHIFT_MYSQL_DB_HOST"));
$container->setParameter('database.port', getenv("OPENSHIFT_MYSQL_DB_PORT"));