Deploy HashiCorp Vault without persistent storage in OpenShift

How do I deploy HashiCorp Vault in OpenShift without using persistent volumes (PVs)?
I need to deploy the Vault server in the OpenShift cluster as a normal user (not a cluster admin). I followed the URL, but the vault.yaml file in it uses a persistent volume (/vault/file), which requires permissions that my account does not have. So I removed the PV mount paths and changed vault-config.json as below, but I am seeing the error shown further down.
{"backend":
{"file":
{"path": "/tmp/file"}
},
...
...
}
Is it possible to run the Vault server without a PV, i.e. using a local file path (/tmp/file) as the backend storage, as a normal user?
What is the alternative way to deploy HashiCorp Vault in OpenShift without a PV?
Below is the error when running with the PV:
--> Scaling vault-1 to 1
--> FailedCreate: vault-1 Error creating: pods "vault-1-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
error: update acceptor rejected vault-1: pods for rc 'dev-poc-environment/vault-1' took longer than 600 seconds to become available

How do I deploy HashiCorp Vault in OpenShift without using persistent volumes (PVs)?
You can use the in-memory storage backend, as mentioned here. So your Vault config looks something like this:
$ cat config.hcl
disable_mlock = true

storage "inmem" {}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_disable   = 0
  tls_cert_file = "/etc/service/vault-server/vault-server.crt"
  tls_key_file  = "/etc/service/vault-server/vault-server.key"
}

ui                = true
max_lease_ttl     = "7200h"
default_lease_ttl = "7200h"
api_addr          = "http://127.0.0.1:8200"
But with this, data/secrets are not persistent.
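For reference, the container would then start the server against this file with something like the following (the config path here is an assumption):
$ vault server -config=/vault/config/config.hcl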
Another way is to use the file storage backend, so that all secrets are encrypted and stored at the specified path.
The storage stanza then changes to:
storage "file" {
path = "ANY-PATH"
}
POINTS TO BE NOTED HERE:
The path defined must be readable and writable by the Vault process.
It can be any path inside the container, which avoids the dependency on a persistent volume (see the sketch after this list).
But what is the problem with this model? When the container restarts, all the data is lost, because the container itself does not persist it.
No high availability: the file backend does not support high availability.
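If you want the file backend without a PV, an emptyDir volume is one way to get a writable path. A rough sketch of the relevant pod-spec fragment (the volume name and mount path are just examples):
# emptyDir gives the file backend a writable path without requesting a PV/PVC;
# data is still lost when the pod is deleted or rescheduled.
volumes:
  - name: vault-file
    emptyDir: {}
containers:
  - name: vault
    volumeMounts:
      - name: vault-file
        mountPath: /vault/file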
So what should be the ideal solution? Anything that makes our data highly available, which is achieved by using a dedicated storage backend such as a database.
For simplicity, let us take PostgreSQL as the backend storage.
storage "postgresql" {
connection_url = "postgres://user123:secret123!#localhost:5432/vault"
}
So now the config looks something like this:
$ cat config.hcl
disable_mlock = true

storage "postgresql" {
  connection_url = "postgres://vault:vault@vault-postgresql:5432/postgres?sslmode=disable"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_disable   = 0
  tls_cert_file = "/etc/service/vault-server/vault-server.crt"
  tls_key_file  = "/etc/service/vault-server/vault-server.key"
}

ui                = true
max_lease_ttl     = "7200h"
default_lease_ttl = "7200h"
api_addr          = "http://127.0.0.1:8200"
So choosing a dedicated storage backend lets your data persist even if the container restarts.
As you are specifically looking for a solution on OpenShift, create a PostgreSQL container using the provided template and make Vault point to it via the service name, as shown in the config.hcl above.
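For example, a rough sketch using the built-in ephemeral template (the parameter values and service name are just examples):
# The ephemeral template keeps data only for the lifetime of the database pod;
# the persistent template would again require a PVC.
# Vault's postgresql backend also expects its key/value table to exist up front;
# see the Vault documentation for the schema.
oc new-app postgresql-ephemeral \
  -p DATABASE_SERVICE_NAME=vault-postgresql \
  -p POSTGRESQL_USER=vault \
  -p POSTGRESQL_PASSWORD=vault \
  -p POSTGRESQL_DATABASE=postgres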
Hope this helps!

Related

Need Azure Files shares to be mounted using SAS signatures

Friends, any idea on how to mount an Azure file share using a SAS signature in a container?
I was able to mount the Azure file share using the storage account name and storage account key, but wasn't able to do it using a SAS token.
If you have come across this kind of requirement, please feel free to share your suggestions.
I tried the below command to create the secret:
kubectl create secret generic dev-fileshare-sas --from-literal=accountname=######### --from-literal sasToken="########" --type="azure/blobfuse"
Volume mount config in the container:
- name: azurefileshare
  flexVolume:
    driver: "azure/blobfuse"
    readOnly: false
    secretRef:
      name: dev-fileshare-sas
    options:
      container: test-file-share
      mountoptions: "--file-cache-timeout-in-seconds=120"
Thanks.
To mount a file share, you must use SMB. SMB supports mounting the file share using identity-based authentication (AD DS and Azure AD DS) or the storage account key, not SAS. A SAS token can only be used when accessing the file share via REST (for example, Storage Explorer).
This is covered in the FAQ: Frequently asked questions (FAQ) for Azure Files | Microsoft Docs
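For reference, a sketch of mounting the share with the storage account key instead, using the built-in azureFile volume (the secret name is an example; the share name is carried over from the question):
kubectl create secret generic azure-fileshare-key \
  --from-literal=azurestorageaccountname=<account-name> \
  --from-literal=azurestorageaccountkey=<account-key>
and the corresponding volume definition:
- name: azurefileshare
  azureFile:
    secretName: azure-fileshare-key
    shareName: test-file-share
    readOnly: false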

GCP: Terraform is installed on GCP project-A's 'test-instance' instance; using Terraform code, how to deploy/create an instance on project-B?

Terraform is installed on the 'test-instance' VM in GCP project-A; using Terraform, how do I deploy an instance on project-B?
I was able to do it using the gcloud command; does anyone know how to do it with Terraform?
provider "google" {
project = "project-b"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_compute_instance" "vm_instance" {
name = "terraform-instance"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
# A default network is created for all GCP projects
network = "default"
access_config {}
}
}
The problem you are facing is around access control.
You are trying to run Terraform from a VM that lives in Project-A, and the Terraform code wants to create a new VM (or other resource) in Project-B.
By default, the service account attached to the Project-A VM does not have enough rights to create any resource in Project-B. To solve this, you can create a service account at the folder level (or org level) that has permission to create VMs in the required projects, and then attach that service account to the VM that runs Terraform.
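For illustration, a rough sketch of granting such a service account the Compute role on Project-B and attaching it to the Terraform VM (the account email, role, and instance name are assumptions; narrow the role to what you actually need):
# Allow the service account to manage Compute Engine resources in project-b
# (a folder- or org-level binding works the same way via the folders/organizations commands)
gcloud projects add-iam-policy-binding project-b \
    --member="serviceAccount:terraform-runner@project-a.iam.gserviceaccount.com" \
    --role="roles/compute.instanceAdmin.v1"

# Attach that service account to the VM running Terraform (the VM must be stopped first)
gcloud compute instances set-service-account test-instance \
    --zone=us-central1-c \
    --service-account=terraform-runner@project-a.iam.gserviceaccount.com \
    --scopes=cloud-platform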
Hope this helps.
I suggest you use Terraform variables via .tfvars files and multiple Terraform workspaces. You can then switch between workspaces and apply the tfvars for each particular project separately.
e.g.
# variables.tf
variable "project_id" {
  type = string
}
And then use the variable in your terraform config:
# main.tf
provider "google" {
  project = var.project_id
  region  = "us-central1"
  zone    = "us-central1-c"
}
The tfvars will then look like this:
# vars/dev.tfvars
project_id = "my-dev-project"
Full invocation within your workspace (see the docs) can then be done using plan/apply as you would normally do:
terraform workspace select dev
terraform plan -var-file vars/dev.tfvars
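Note that the workspace has to exist before you can select it; create it first if needed:
terraform workspace new dev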

Instance of module depending on another instance of same module in Terraform

I'm trying to figure out a way to make one instance of a module depend on the successful deployment of another instance of the same module. Unfortunately, although resources support it, modules don't seem to support the explicit depends_on switch:
➜ db_terraform git:(master) ✗ terraform plan
Error: module "slave": "depends_on" is not a valid argument
I have these in the root module's main.tf:
module "master" {
source = "./modules/database"
cluster_role = "master"
..
server_count = 1
}
module "slave" {
source = "./modules/database"
cluster_role = "slave"
..
server_count = 3
}
resource "aws_route53_record" "db_master" {
zone_id = "<PRIVZONE>"
name = "master.example.com"
records = ["${module.master.instance_private_ip}"]
type = "A"
ttl = "300"
}
I want master to be deployed first. What I'm trying to do is launch two sets of AWS instances (master and slaves) with a database product installed. Once the master comes up, its IP is used to create a DNS record. Once this is done, the slaves are created and use the IP to "enlist" with the master as part of the cluster. How do I prevent the slaves from coming up concurrently with the master? I'm trying to avoid the slaves failing to connect to the master because the DNS record may not have been created by the time a slave is ready.
I've read recommendations for using a null_resource in this context, but it's not clear to me how it should be used to help my problem.
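For reference, the commonly suggested alternative (a sketch; it assumes the module is changed to accept an extra variable such as master_ip, which mine does not currently define) is to reference a master output from the slave module so Terraform infers the ordering:
module "slave" {
  source       = "./modules/database"
  cluster_role = "slave"
  server_count = 3
  # Referencing an output of module.master forces the master to be created first
  master_ip    = "${module.master.instance_private_ip}"
}
with a matching variable "master_ip" {} declared inside the module.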
Fwiw, here's the content of main.tf in the module.
resource "aws_instance" "database" {
ami = "${data.aws_ami.amazonlinux_legacy.id}"
instance_type = "t2.xlarge"
user_data = "${data.template_file.db_init.rendered}"
count = "${var.server_count}"
}
Thanks in advance for any answers.

Startup script from Bitbucket (https) fail to download, but works if instance is reset

I am programmatically launching a new instance using the Compute Engine API for Go [1] and a tool I made called vmproxy [2].
The problem I have is that if I launch a preemptible VM using a startup-script-url pointing to https://bitbucket.org/ronoaldo/debian-custom/raw/tip/tools/autobuild, the build script fails to download. I can see in the serial console output that the startup script metadata is there, and that curl attempts to download it, but that part fails.
However, if I reset the instance via the Developers Console, the script is properly downloaded and runs nicely.
The code I am using to set up the instance is:
// Ronolinux is a VM Proxy that runs a live systems build on Compute Engine
var (
    Ronolinux = &vmproxy.VM{
        Path: "/",
        Instance: vmproxy.Instance{
            Name:        "ronolinux-buildd",
            Zone:        "us-central1-f",
            Image:       vmproxy.ResourcePrefix + "/debian-cloud/global/images/debian-8-jessie-v20150915",
            MachineType: "n1-standard-1",
            Metadata: map[string]string{
                "startup-script-url": "https://bitbucket.org/ronoaldo/debian-custom/raw/tip/tools/autobuild",
                "shutdown-script": `#!/bin/bash
gsutil cp /var/log/startupscript.log gs://ronoaldo/ronolinux/build-$(date +%Y%m%d%H%M%S).log
`,
            },
            Scopes: []string{storageReadWrite},
        },
    }
)
[1] https://godoc.org/google.golang.org/api/compute/v1
[2] https://godoc.org/ronoaldo.gopkg.net/aetools/vmproxy
If your startup script is not hosted on Cloud Storage, there is a random chance the download will fail. If you look at the serial console output, make sure to scroll horizontally, as it will not wrap long lines. In my case, the error line was very long, and this hid the real end of the message:
(... long curl on-line progress output )
curl: (7) Failed to connect to bitbucket.org port 443: Connection timed out
(...)
Your host must respond within a 10-second timeout. In my case, the first boot usually failed to contact Bitbucket, hence failing to download the script; a VM reset also made things work, as the network latency outside Google Cloud was probably better by then.
I ended up hosting the script on Cloud Storage to avoid these issues.
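For example (the bucket name here is an example), that looks roughly like this, with the instance keeping a storage read scope so the metadata server can fetch the object:
# Upload the script to a bucket, then reference it via a gs:// URL
gsutil cp tools/autobuild gs://my-build-bucket/autobuild
# and in the instance metadata:
#   "startup-script-url": "gs://my-build-bucket/autobuild"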

How to automatically exit/stop the running instance

I have managed to create an instance and SSH into it. However, I have a couple of questions regarding Google Compute Engine.
I understand that I will be charged for the time my instance is running, that is, until I exit the instance. Is my understanding correct?
I wish to run a batch job (a Java program) on my instance. How do I make the instance stop automatically after the job is complete (so that I don't get charged for the additional time it may run)?
If I start the job and disconnect my PC, will the job continue to run on the instance?
Regards,
Asim
Correct, instances are charged for the time they are running (billed per minute, with a 10-minute minimum). Instances run from the time they are started via the API until they are stopped via the API. It doesn't matter whether any user is logged in via SSH or not. For most automated use cases users never log in; programs are installed and started via startup scripts.
You can view your running instances via the Cloud Console to confirm whether any are currently running.
If you want to stop your instance from inside the instance, the easiest way is to start the instance with the compute-rw Service Account Scope and use gcutil.
For example, to start your instance from the command line with the compute-rw scope:
$ gcutil --project=<project-id> addinstance <instance name> --service_account_scopes=compute-rw
(this is the default when manually creating an instance via the Cloud Console)
Later, after your batch job completes, you can remove the instance from inside the instance:
$ gcutil deleteinstance -f <instance name>
You can put the halt command at the end of your batch script (assuming that you write your results to a persistent disk).
After halt, the instance will have a state of TERMINATED and you will not be charged.
See https://developers.google.com/compute/docs/pricing and scroll down to "Instance uptime".
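A sketch of such a script (the jar path and mount point are hypothetical):
#!/bin/bash
# Run the batch job, write the results to an attached persistent disk,
# then power off so the instance stops being billed
java -jar /opt/jobs/batch-job.jar > /mnt/pd0/job-output.log 2>&1
sudo halt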
You can auto-shutdown the instance after model training. Just run a few extra lines of code after the model training is complete.
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)
# Project ID for this request.
project = 'xyz' # Project ID
# The name of the zone for this request.
zone = 'xyz' # Zone information
# Name of the instance resource to stop.
instance = 'xyz' # instance id
request = service.instances().stop(project=project, zone=zone, instance=instance)
response = request.execute()
Add this to your model training script. When the training is complete, the GCP instance automatically shuts down.
More info on official website:
https://cloud.google.com/compute/docs/reference/rest/v1/instances/stop
If you want to stop the instance using a Python script, you can do it this way:
from google.cloud.compute_v1.services.instances import InstancesClient
from google.oauth2 import service_account
instance_client = InstancesClient.from_service_account_file(<location-path>)
zone = <zone>
project = <project>
instance = <instance_id>
instance_client.stop(project=project, instance=instance, zone=zone)
In the above script, I have assumed you are using a service account for authentication. For documentation of the libraries used, you can go here:
https://googleapis.dev/python/compute/latest/compute_v1/instances.html