GCP: Terraform is installed on a 'test-instance' VM in project-A; using Terraform code, how to deploy/create an instance in project-B?

Terraform is installed on the 'test-instance' VM in GCP project-A; using Terraform, how can I deploy an instance in project-B?
I was able to do it using the gcloud command; does anyone know how to do it with Terraform?
provider "google" {
project = "project-b"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_compute_instance" "vm_instance" {
name = "terraform-instance"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
# A default network is created for all GCP projects
network = "default"
access_config {}
}
}

The problem you are facing is around access control.
You are trying to run Terraform from a VM that lives in project-A, and the Terraform code wants to create a new VM (or other resource) in project-B.
By default, the service account attached to the project-A VM does not have enough rights to create any resource in project-B. To solve this, grant a service account permissions at the folder level (or org level) to create VMs in the required projects, and then attach that service account to the VM which runs Terraform.
Hope this helps.
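For example, a minimal sketch of such a grant in Terraform (the service account email and role below are placeholders, and you could equally use google_folder_iam_member to grant the role at folder level instead):

resource "google_project_iam_member" "cross_project_compute" {
  # Grant the service account attached to the project-A VM the right to
  # manage compute instances in project-B.
  project = "project-b"
  role    = "roles/compute.instanceAdmin.v1"
  member  = "serviceAccount:test-instance-sa@project-a.iam.gserviceaccount.com"
}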

I suggest you use Terraform variables with .tfvars files and multiple Terraform workspaces. You can then switch between workspaces and apply the .tfvars file for each project separately.
e.g.
# variables.tf
variable "project_id" {
type = string
}
And then use the variable in your terraform config:
# main.tf
provider "google" {
project = var.project_id
region = "us-central1"
zone = "us-central1-c"
}
The .tfvars file will then look like this:
# vars/dev.tfvars
project_id = "my-dev-project"
Full invocation within your workspace (see the docs) can then be done using plan/apply as you would normally do:
terraform workspace select dev
terraform plan -var-file vars/dev.tfvars
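Alternatively, here is a small sketch (project IDs below are placeholders) that derives the project from the workspace name so you don't have to pass -var-file at all:

locals {
  # Map each workspace to the GCP project it should deploy into.
  project_ids = {
    dev  = "my-dev-project"
    prod = "my-prod-project"
  }
}

provider "google" {
  # terraform.workspace resolves to the currently selected workspace.
  project = local.project_ids[terraform.workspace]
  region  = "us-central1"
  zone    = "us-central1-c"
}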

Related

Github Actions Bicep deployment: "What-if" fails to create Key Vault; "Create" fails with exit code 1

It succeeds when I manually execute the Bicep deployment with the following commands:
az login
az deployment group what-if --resource-group $RESOURCE_GROUP_NAME --template-file ./infrastructure/bicep/main.bicep --parameters ./infrastructure/bicep/params.json
az deployment group create (with the same arguments) fails with exit code 1 and no logged message.
I then create a Service Principal and set it as a GitHub Actions secret, which I supply to my workflow for authentication with Azure/cli:
az ad sp create-for-rbac --name azure-contributor-github-service-principal --role contributor --scope /subscriptions/$SUBSCRIPTION_ID
Then execution of the same deployment, now automated, fails with the following log message:
"Multiple errors occurred: BadRequest. Please see the details.
BadRequest - The specified KeyVault '/subscriptions/***/resourceGroups/<my_rg_name>/providers/Microsoft.KeyVault/vaults/<my_kv_name>' could not be found."
The Bicep script indeed contains a declaration for a KeyVault resource named <my_kv_name>.
To me, it seems that when I use the az CLI and log in with my Azure Portal user account (az login), the CLI is already authorized with Key Vault-related permissions. GitHub, though, using the Service Principal I created especially for that purpose, doesn't have sufficient permissions, even if I create it with --role owner.
I struggle to find more debugging information.
Any idea what I am missing?
UPDATE #1:
Considering 4c74356b41's answer, I added an access policy to my Bicep code that grants the Service Principal permissions on secrets.
Unfortunately, I get the same result.
resource keyVaultAccessPolicyForSecrets 'Microsoft.KeyVault/vaults/accessPolicies@2022-07-01' = {
  name: '${keyVault.name}/policy'
  properties: {
    accessPolicies: [
      {
        applicationId: spPolicyAppId
        objectId: spPolicyObjectId
        tenantId: spPolicyTenantId
        permissions: {
          secrets: [ 'all' ]
        }
      }
    ]
  }
}
UPDATE #2:
I managed to make the Bicep deployment work, but I had to change the file structure. I believe the root cause of the issue is not related to Service Principal permissions on the Key Vault that the script creates. Here is why I think so:
File structure of the Bicep Code:
core.bicep - responsible for the creation of a Container Registry, a Key Vault, and a Key Vault Secret
aca.bicep - responsible for the creation of a Log Analytics Workspace, a Container App Environment, and a Container App (with the configured MS default CA image)
main.bicep - where:
- via the "module" keyword I reference core.bicep, which, as mentioned, creates the Key Vault;
- I declare an existing Key Vault resource, a property of which I need to use as an input parameter to the next module;
- via the "module" keyword I reference aca.bicep, which creates the rest of the resources.
main.bicep:
module core 'core.bicep' = {
  name: 'core'
  params: {
    location: location
    solution: solution
    spPolicyAppId: spPolicyAppId
    spPolicyObjectId: spPolicyObjectId
    spPolicyTenantId: spPolicyTenantId
  }
}

resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' existing = {
  name: core.outputs.KeyVaultName
}

module devAca 'aca.bicep' = {
  name: 'devAca'
  dependsOn: [
    core
  ]
  params: {
    env: 'dev'
    location: location
    project: project
    solution: solution
    containerRegistryName: core.outputs.ContainerRegistryName
    containerRegistryPassword: keyVault.getSecret(core.outputs.ContainerRegistrySecretName)
    imageName: imageName
    imageTag: imageTag
  }
}
With this structure, the deployment throws the error message mentioned above. When I took the code out of the sub-files and replaced the modules with it, the deployment started passing successfully.
Moreover, I removed the Key Vault access policy and the infrastructure still deploys successfully, including the Key Vault and the secret in it.
So my conclusion, for now, is that I am somehow misusing the "module" keyword.
Azure Key Vault has its own data-plane permissions; you need to grant your Service Principal access to secrets\certificates\keys (not sure what you are pulling) in the Key Vault (get\list).
Reading: https://learn.microsoft.com/en-us/azure/key-vault/general/assign-access-policy?tabs=azure-portal
Since you didn't provide the full Key Vault Bicep file, we can't be sure whether it's a Bicep issue or something else. Try the Bicep file below with your parameters; it should create a Key Vault and an access policy. If it fails, then maybe your user (or service principal) is missing some rights in Azure.
Check the App registration's API permissions; you might need Application.ReadWrite.All or an Azure Key Vault permission.
resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' = {
  name: keyVaultName
  location: location
  properties: {
    tenantId: azureTenantId
    accessPolicies: [
      {
        objectId: ownerObjectId
        tenantId: azureTenantId
        permissions: {
          secrets: secretsPermissions
        }
      }
    ]
    sku: {
      name: keyVaultSkuName
      family: 'A'
    }
  }
}

How to add providers in Terraform (AWS)?

This is the error I'm getting:
Failed to query available provider packages
Could not retrieve the list of available versions for provider
hashicorp/mysql: provider registry registry.terraform.io does not have a
provider named registry.terraform.io/hashicorp/mysql
For Terraform >= 0.13 you need to add a required_providers block for any unofficial provider (unofficial meaning not owned by HashiCorp and not in the hashicorp namespace of the registry). There was a MySQL provider supported by HashiCorp, but it has been discontinued (you could potentially still use it if you downgrade to Terraform 0.12).
If you are aware of a community-provided one, a code snippet similar to the Docker example below should suffice:
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}
where source points to the provider's address in the Terraform registry.
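For the MySQL case specifically, a community-maintained fork exists in the registry; as a sketch (verify the exact source address and version on registry.terraform.io before relying on it):

terraform {
  required_providers {
    mysql = {
      # Community fork of the discontinued hashicorp/mysql provider.
      source = "petoju/mysql"
    }
  }
}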

Is it possible to overwrite the kubeconfig with Terraform's Kubernetes provider?

I want to run Terraform and then be able to run kubectl in the CLI right after Terraform completes. Or is this something you don't do? I would want to make a script that runs kubectl commands after Terraform finishes creating the cluster.
I have the following, and I assume I could write Terraform Kubernetes code, but I don't believe it overwrites the kubeconfig file the CLI references.
provider "kubernetes" {
load_config_file = false
host = azurerm_kubernetes_cluster.cluster_1.kube_config.0.host
username = azurerm_kubernetes_cluster.cluster_1.kube_config.0.username
password = azurerm_kubernetes_cluster.cluster_1.kube_config.0.password
client_certificate = base64decode(azurerm_kubernetes_cluster.cluster_1.kube_config.0.client_certificate)
client_key = base64decode(azurerm_kubernetes_cluster.cluster_1.kube_config.0.client_key)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.cluster_1.kube_config.0.cluster_ca_certificate)
}
If I understand correctly, you want to add a context to your kubeconfig file after creating the cluster. Maybe running az aks get-credentials from Terraform after creation will work?
resource "null_resource" "add_context" {
provisioner "local-exec" {
command = "az aks get-credentials --resource-group ${azurerm_kubernetes_cluster.cluster_1.resource_group_name} --name ${azurerm_kubernetes_cluster.cluster_1.name} --overwrite-existing"
}
depends_on = [azurerm_kubernetes_cluster.cluster_1]
}
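Another option, as a sketch (the output path is an assumption), is to write the kubeconfig that the AKS resource already returns to a local file and point kubectl at it:

resource "local_file" "kubeconfig" {
  # kube_config_raw is the full kubeconfig returned by the AKS resource.
  # Note that this file contains cluster credentials, so protect it.
  content         = azurerm_kubernetes_cluster.cluster_1.kube_config_raw
  filename        = "${path.module}/kubeconfig"
  file_permission = "0600"
}

After terraform apply you can set KUBECONFIG to that file (or pass --kubeconfig) and run kubectl directly.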

Deploy HashiCorp Vault without persistent storage in OpenShift

How can I deploy HashiCorp Vault in OpenShift without using persistent volumes (PV)?
In the OpenShift cluster, as a normal user (not a cluster admin), I need to deploy the Vault server. I followed the URL, but it uses a persistent volume (/vault/file) in its vault.yaml, which requires permissions my account does not have. So I removed the PV mount paths in vault-config.json as shown below, but I am seeing the error below.
{"backend":
{"file":
{"path": "/tmp/file"}
},
...
...
}
Is it possible to create the Vault server without a PV, i.e. using a local file path (/tmp/file) as backend storage, as a normal user?
What is the alternative way to deploy Vault in OpenShift without a PV?
Below is the error when running with a PV:
--> Scaling vault-1 to 1
--> FailedCreate: vault-1 Error creating: pods "vault-1-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
error: update acceptor rejected vault-1: pods for rc 'dev-poc-environment/vault-1' took longer than 600 seconds to become available
How to deploy HashiCorp Vault in OpenShift without using persistent volumes (PV)?
You can use the in-memory storage backend, as mentioned here. Your Vault config then looks something like this:
$ cat config.hcl
disable_mlock = true

storage "inmem" {}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_disable   = 0
  tls_cert_file = "/etc/service/vault-server/vault-server.crt"
  tls_key_file  = "/etc/service/vault-server/vault-server.key"
}

ui                = true
max_lease_ttl     = "7200h"
default_lease_ttl = "7200h"
api_addr          = "http://127.0.0.1:8200"
But with this, data/secrets are not persistent.
Another way is to use the file storage backend, so that all the secrets are stored, encrypted, at the configured path.
Your config then changes to:
storage "file" {
path = "ANY-PATH"
}
Points to note here:
- The path defined should have permissions to read/write data/secrets.
- This can be any path inside the container, precisely to avoid the dependency on a persistent volume.
But what is the problem with this model? When the container restarts, all the data is lost, as the container doesn't persist it. There is also no high availability: the filesystem backend does not support it.
So what would be the ideal solution? Anything that makes the data highly available, which is achieved by using dedicated backend storage backed by a database.
For simplicity, let us take PostgreSQL as the backend storage.
storage "postgresql" {
connection_url = "postgres://user123:secret123!#localhost:5432/vault"
}
So now the full config looks something like this:
$ cat config.hcl
disable_mlock = true

storage "postgresql" {
  connection_url = "postgres://vault:vault@vault-postgresql:5432/postgres?sslmode=disable"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_disable   = 0
  tls_cert_file = "/etc/service/vault-server/vault-server.crt"
  tls_key_file  = "/etc/service/vault-server/vault-server.key"
}

ui                = true
max_lease_ttl     = "7200h"
default_lease_ttl = "7200h"
api_addr          = "http://127.0.0.1:8200"
So choosing a database backend helps you persist your data even if the container restarts.
As you are specifically looking for a solution in OpenShift, create a PostgreSQL container using the provided template and point Vault at it using the service name, as explained in the config.hcl above.
Hope this helps!

Instance of module depending on another instance of same module in Terraform

I'm trying to figure out a way to make one instance of a module depend on the successful deployment of another instance of the same module. Unfortunately, although resources support it, modules don't seem to support an explicit depends_on argument:
➜ db_terraform git:(master) ✗ terraform plan
Error: module "slave": "depends_on" is not a valid argument
I have these in the root module: main.tf
module "master" {
source = "./modules/database"
cluster_role = "master"
..
server_count = 1
}
module "slave" {
source = "./modules/database"
cluster_role = "slave"
..
server_count = 3
}
resource "aws_route53_record" "db_master" {
zone_id = "<PRIVZONE>"
name = "master.example.com"
records = ["${module.master.instance_private_ip}"]
type = "A"
ttl = "300"
}
I want master to be deployed first. What I'm trying to do is launch two AWS instances with a database product installed. Once the master comes up, its IP will be used to create a DNS record. Once this is done, the slaves get created and will use the DNS name to "enlist" with the master as part of the cluster. How do I prevent the slaves from coming up concurrently with the master? I'm trying to avoid slaves failing to connect to the master because the DNS record may not have been created by the time a slave is ready.
I've read recommendations for using a null_resource in this context, but it's not clear to me how it should be used to solve my problem.
Fwiw, here's the content of main.tf in the module.
resource "aws_instance" "database" {
ami = "${data.aws_ami.amazonlinux_legacy.id}"
instance_type = "t2.xlarge"
user_data = "${data.template_file.db_init.rendered}"
count = "${var.server_count}"
}
Thanks in advance for any answers.
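One way to get this ordering without depends_on is to make the slave module consume a value that only exists once the master and its DNS record do, so Terraform infers the dependency itself. A sketch, assuming the module is given a new (hypothetical) master_domain variable that the slaves' user_data uses to reach the master:

module "slave" {
  source       = "./modules/database"
  cluster_role = "slave"
  # Referencing the Route53 record's FQDN makes every slave resource wait
  # for the record (and therefore the master instance) to be created first.
  master_domain = "${aws_route53_record.db_master.fqdn}"
  server_count  = 3
}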