Is it possible to overwrite the kubeconfig with Terraform's Kubernetes provider? - azure-cli

I want to run terraform and then be able to run kubectl in the CLI right after terraform completes, or is this something you just don't do? I would like to make a script that runs kubectl commands after terraform finishes creating the cluster.
I have the provider block below, and I assume I could write Terraform Kubernetes code against it, but I don't believe it overwrites the kubeconfig file the CLI references.
provider "kubernetes" {
load_config_file = false
host = azurerm_kubernetes_cluster.cluster_1.kube_config.0.host
username = azurerm_kubernetes_cluster.cluster_1.kube_config.0.username
password = azurerm_kubernetes_cluster.cluster_1.kube_config.0.password
client_certificate = base64decode(azurerm_kubernetes_cluster.cluster_1.kube_config.0.client_certificate)
client_key = base64decode(azurerm_kubernetes_cluster.cluster_1.kube_config.0.client_key)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.cluster_1.kube_config.0.cluster_ca_certificate)
}

If I understand correctly, you want to add a context to your kubeconfig file after creating a cluster. Maybe running az aks get-credentials from Terraform after creation will work?
resource "null_resource" "add_context" {
provisioner "local-exec" {
command = "az aks get-credentials --resource-group ${azurerm_kubernetes_cluster.cluster_1.resource_group_name} --name ${azurerm_kubernetes_cluster.cluster_1.name} --overwrite-existing"
}
depends_on = [azurerm_kubernetes_cluster.cluster_1]
}
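Alternatively, if the goal is simply to have kubectl work right after terraform apply, you could write the ready-made kubeconfig that the AKS resource exposes to a file and point kubectl at it. A sketch (the output path is only an example, and note that the credentials also end up in the Terraform state):
resource "local_file" "kubeconfig" {
  # kube_config_raw is the full kubeconfig document returned by AKS
  content  = azurerm_kubernetes_cluster.cluster_1.kube_config_raw
  filename = "${path.module}/kubeconfig_cluster_1"
}
Your post-apply script can then export KUBECONFIG to that file (or pass --kubeconfig to kubectl) instead of touching the CLI's default config.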

Related

GCP: Terraform is installed on a project-A 'test-instance' instance; using Terraform code, how do I deploy/create an instance in project-B?

Terraform is installed on a GCP project-A 'test-instance' instance. Using Terraform, how do I deploy an instance in project-B?
I was able to do it using the gcloud command; does anyone know how to do it with Terraform?
provider "google" {
project = "project-b"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_compute_instance" "vm_instance" {
name = "terraform-instance"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
# A default network is created for all GCP projects
network = "default"
access_config {}
}
}
The problem you are facing is around access control.
You are trying to run Terraform from a VM that lives in project-A, and your Terraform code wants to create a new VM (or other resource) in project-B.
By default, the service account attached to the project-A VM does not have enough rights to create any resources in project-B. To solve this, you can create a service account at the folder level (or org level) that has permission to create VMs in the required projects, and then attach that service account to the VM that runs Terraform.
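If you prefer to keep that grant in Terraform as well, it can be expressed as an IAM member binding on the target project. A sketch (the service account email and role are illustrative, and the same grant could instead be made at folder or organization level):
# allow the service account attached to the Terraform VM to manage instances in project-b
resource "google_project_iam_member" "cross_project_compute" {
  project = "project-b"
  role    = "roles/compute.instanceAdmin.v1"
  member  = "serviceAccount:terraform-sa@project-a.iam.gserviceaccount.com"
}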
Hope this helps.
I suggest you use Terraform variables via .tfvars files and multiple Terraform workspaces. You can then switch between workspaces and apply the tfvars for each project separately.
e.g.
# variables.tf
variable "project_id" {
  type = string
}
And then use the variable in your terraform config:
# main.tf
provider "google" {
  project = var.project_id
  region  = "us-central1"
  zone    = "us-central1-c"
}
The tfvars will then look like this:
# vars/dev.tfvars
project_id = "my-dev-project"
Full invocation within your workspace (see the docs) can then be done using plan/apply as you would normally do:
terraform workspace select dev
terraform plan -var-file vars/dev.tfvars

Deploy HashiCorp Vault without persistent storage in OpenShift

How do I deploy HashiCorp Vault in OpenShift without using persistent volumes (PVs)?
I need to deploy the Vault server in the OpenShift cluster as a normal user (not a cluster admin). I followed the URL, but its vault.yaml file uses a persistent volume (/vault/file), which my account does not have enough permission to create. So I removed the PV mount paths in vault-config.json as shown below, but I am seeing the error further down.
{"backend":
{"file":
{"path": "/tmp/file"}
},
...
...
}
Is it possible, as a normal user, to create the Vault server without a PV, e.g. using a local file path (/tmp/file) as backend storage?
What is the alternative way to deploy Vault in OpenShift without a PV?
Below is the error when running with the PV:
--> Scaling vault-1 to 1
--> FailedCreate: vault-1 Error creating: pods "vault-1-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
error: update acceptor rejected vault-1: pods for rc 'dev-poc-environment/vault-1' took longer than 600 seconds to become available
How do I deploy HashiCorp Vault in OpenShift without using persistent volumes (PVs)?
You can use the in-memory storage backend, as mentioned here. Your Vault config would then look something like this:
$ cat config.hcl
disable_mlock = true

storage "inmem" {}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_disable   = 0
  tls_cert_file = "/etc/service/vault-server/vault-server.crt"
  tls_key_file  = "/etc/service/vault-server/vault-server.key"
}

ui                = true
max_lease_ttl     = "7200h"
default_lease_ttl = "7200h"
api_addr          = "http://127.0.0.1:8200"
But with this, data/secrets are not persistent.
Another way is to add a file path to the storage stanza, so that all secrets are stored, encrypted, at the given path.
Your config then changes to:
storage "file" {
path = "ANY-PATH"
}
POINTS TO BE NOTED HERE:
The defined path must be readable and writable by Vault.
It can be any path inside the container, which avoids the dependency on a persistent volume.
But what is the problem with this model? When the container restarts, all the data is lost, because the container itself does not persist it.
No high availability – the filesystem backend does not support high availability.
So what is the ideal solution? Something that makes the data highly available, which is achieved by using a dedicated storage backend such as a database.
For simplicity, let us take PostgreSQL as the backend storage:
storage "postgresql" {
connection_url = "postgres://user123:secret123!#localhost:5432/vault"
}
So now the config looks something like this:
$ cat config.hcl
disable_mlock = true

storage "postgresql" {
  connection_url = "postgres://vault:vault@vault-postgresql:5432/postgres?sslmode=disable"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_disable   = 0
  tls_cert_file = "/etc/service/vault-server/vault-server.crt"
  tls_key_file  = "/etc/service/vault-server/vault-server.key"
}

ui                = true
max_lease_ttl     = "7200h"
default_lease_ttl = "7200h"
api_addr          = "http://127.0.0.1:8200"
So choosing a proper storage backend lets your data persist even if the container restarts.
As you are specifically looking for a solution in OpenShift, create a PostgreSQL container using the provided template and point Vault at it using the service name, as explained in the config.hcl above.
Hope this helps!

Using Terraform to dump/restore a MariaDB database

No cloud resources are in use here.
I am new to using Terraform.
I am using Terraform 0.12 to install software on a server. That software expects the remote MariaDB database to be empty; emptying it is currently done manually. (The software will cause Terraform to abort if the database is not empty.) Right now it is all dummy data.
I would like to use Terraform to mysqldump the database prior to destroying it, so that the same dump can be restored on a terraform apply. Eventually, the contents of the database need to be preserved between software upgrades.
I have Terraform code to create and destroy the server and install the software. That works fine. The database is handled manually at the moment. When uncommented, the Terraform code to connect to the database works, but I do not have enough experience to do anything beyond that.
provider "mysql" {
endpoint = "10.0.1.2"
username = "terraform"
password = "changeme"
version = "~> 1.6"
}
resource "mysql_database" "default" {
default_character_set = "utf8"
name = "terraform_test_db"
}
You can use destroy-time provisioners to trigger a provisioner action before Terraform tries to destroy the resource.
provider "mysql" {
endpoint = "10.0.1.2"
username = "terraform"
password = "changeme"
version = "~> 1.6"
}
resource "mysql_database" "default" {
name = "terraform_test_db"
default_character_set = "utf8"
provisioner "local-exec" {
when = "destroy"
command = "mysqldump [options] > dump.sql"
}
}
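To cover the restore side on a later terraform apply, the same resource could also carry a creation-time provisioner. A sketch in the same placeholder style (it assumes dump.sql already exists locally when the database resource is re-created):
resource "mysql_database" "default" {
  name                  = "terraform_test_db"
  default_character_set = "utf8"

  # runs once, when the database resource is created: restore the previous dump
  provisioner "local-exec" {
    command = "mysql [options] terraform_test_db < dump.sql"
  }

  # runs just before the database resource is destroyed: save a fresh dump
  provisioner "local-exec" {
    when    = "destroy"
    command = "mysqldump [options] terraform_test_db > dump.sql"
  }
}
Since both provisioners run on the machine where Terraform itself runs, the dump file lives next to your configuration rather than on the server.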

Instance of module depending on another instance of same module in Terraform

I'm trying to figure out a way to make one instance of a module depend on the successful deployment of another instance of the same module. Unfortunately, although resources support it, modules don't seem to support an explicit depends_on argument:
➜ db_terraform git:(master) ✗ terraform plan
Error: module "slave": "depends_on" is not a valid argument
I have these in the root module's main.tf:
module "master" {
source = "./modules/database"
cluster_role = "master"
..
server_count = 1
}
module "slave" {
source = "./modules/database"
cluster_role = "slave"
..
server_count = 3
}
resource "aws_route53_record" "db_master" {
zone_id = "<PRIVZONE>"
name = "master.example.com"
records = ["${module.master.instance_private_ip}"]
type = "A"
ttl = "300"
}
I want master to be deployed first. What I'm trying to do is launch two AWS instances with a database product installed. Once the master comes up, its IP will be used to create a DNS record. Once this is done, the slaves get created and will use the IP to "enlist" with the master as part of the cluster. How do I prevent the slaves from coming up concurrently with the master? I'm trying to avoid slaves failing to connect with master since the DB record may not have been created by the time the slave is ready.
I've read recommendations for using a null_resource in this context, but it's not clear to me how it should be used to help my problem.
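The shape I've seen suggested looks roughly like this (a sketch; master_ready would be a new variable the database module has to accept just to force the ordering), but I'm not sure it is the right approach:
resource "null_resource" "master_ready" {
  # re-created whenever the master's DNS record changes, so it implicitly waits for it
  triggers = {
    master_fqdn = "${aws_route53_record.db_master.fqdn}"
  }
}

module "slave" {
  source       = "./modules/database"
  cluster_role = "slave"
  server_count = 3

  # referenced only to make the slaves wait for the master and its DNS record
  master_ready = "${null_resource.master_ready.id}"
}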
Fwiw, here's the content of main.tf in the module.
resource "aws_instance" "database" {
ami = "${data.aws_ami.amazonlinux_legacy.id}"
instance_type = "t2.xlarge"
user_data = "${data.template_file.db_init.rendered}"
count = "${var.server_count}"
}
Thanks in advance for any answers.

Perform an operation in MySQL once it is launched by Terraform. How can I run a provisioner for RDS?

I have launched an RDS instance using Terraform. Now I want to create a user and a DB inside it, i.e. basically run some queries against it. How can we achieve that?
Thanks
There are two options depending on what you want to change.
You could use the local-exec provisioner.
Basically, you just need to add something like this inside your aws_db_instance definition:
provisioner "local-exec" {
command = "your great command line!"
}
Bear in mind that this option has a big limitation: the provisioner is executed ONLY ONCE, after the resource is first created.
You could also use a specific Terraform provider like MySQL or PostgreSQL.
More info here:
https://www.terraform.io/docs/provisioners/local-exec.html
https://www.terraform.io/docs/providers/mysql/index.html
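For the provider route, a minimal sketch of creating a database and user on the RDS instance (the "app" names and var.app_db_password are illustrative, and it assumes the machine running Terraform can reach the RDS endpoint):
provider "mysql" {
  endpoint = "${aws_db_instance.my_db.endpoint}"
  username = "${aws_db_instance.my_db.username}"
  password = "${var.my_db_password}"
}

resource "mysql_database" "app" {
  name = "app"
}

resource "mysql_user" "app" {
  user               = "app"
  host               = "%"
  plaintext_password = "${var.app_db_password}"
}

resource "mysql_grant" "app" {
  user       = "${mysql_user.app.user}"
  host       = "${mysql_user.app.host}"
  database   = "${mysql_database.app.name}"
  privileges = ["ALL"]
}
Unlike local-exec, these resources are tracked in state, so later changes to the database, user or grant are applied on subsequent runs.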
Another approach, if you want to run the command based on local file changes, is to use a null_resource which triggers when your SQL file has changed.
resource "null_resource" "setup_db" {
depends_on = ["aws_db_instance.my_db"] #wait for the db to be ready
triggers = {
file_sha = "${sha1(file("file.sql"))}"
}
provisioner "local-exec" {
command = "mysql -u ${aws_db_instance.my_db.username} -p${var.my_db_password} -h ${aws_db_instance.my_db.address} < file.sql"
}
}