Specify az active subscription statically - IaC style - azure-cli

Can the az CLI pick up the current subscription from a Bicep file or a local "ansible.cfg"-style config file? I don't want to set it from the CLI; I want it to be picked up from some local definition.
Right now, the order of deployment commands I need to execute is:
cd ./staging
az login
az account set --subscription <id>
az deployment sub create ...
But az account set --subscription <id> is redundant: I already have subscriptions/environments separated by folders, so whichever folder I am deploying from already determines which subscription I am using; it is statically defined:
staging/main.bicep
targetScope = 'subscription'

var subscriptionId = '00ta4479...'

module poc '../../../main.bicep' = {
  name: 'poc'
  scope: subscription(subscriptionId)
}
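As far as I know, az does not read the target subscription out of the .bicep file itself. Two ways to avoid the explicit az account set step, as a sketch (the location value and the .azure folder name are placeholders):

# Option A: pass the subscription per call via the global --subscription argument
az deployment sub create --subscription <id> --location westeurope --template-file main.bicep

# Option B: give each folder its own CLI state, so the default subscription is set once per folder
export AZURE_CONFIG_DIR="$PWD/.azure"   # e.g. staging/.azure
az login                                # one-time per folder
az account set --subscription <id>      # persisted only in this folder's config dir
az deployment sub create --location westeurope --template-file main.bicep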

Related

Test databases in gitlab ci

I would like to use test databases for feature branches.
Of course it would be best to create a GitLab CI environment on the fly (review apps style) and also create a test database on the target system with the same name. Unfortunately, this is not possible because the MySQL databases in the target system have fixed names, like xxx_1, xxx_2, etc., and this cannot be changed without moving to a different hosting provider.
So I would like to do something like "grab an empty test database from the given xxx_n and then empty it again when the branch is deleted".
How could this be handled with gitlab ci?
Can I set a variable on the project that says "feature branch Y already uses database xxx_4"?
Or should I put a table into the test database to store this information?
Using dynamic environments/variables and stop jobs might be able to do the trick. Stop jobs will run when the environment is "stopped" -- in the case of feature branches without associated MRs, when the feature branch is deleted (or, if there is an open MR for the review app, when the MR is merged or closed).
Can I set a variable on the project that says "feature branch Y already uses database xxx_4"?
One way may be to put the db name directly in the environment name. Then the Environments API keeps track of this.
stages:
  - pre-deploy
  - deploy

determine_database:
  stage: pre-deploy
  image: python:3.9-slim
  script:
    - pip install python-gitlab
    - database_name=$(determine-database)  # determine what database names are not currently in use
    - echo "database_name=${database_name}" > vars.env
  artifacts:
    reports:  # automatically set $database_name variable in subsequent jobs
      dotenv: "vars.env"

deploy_review_app:
  stage: deploy
  environment:
    name: review/$CI_COMMIT_REF_SLUG/$database_name
    on_stop: teardown
  script:
    - echo "deploying review app for $CI_COMMIT_REF with database name configuration $database_name"
    - ...  # steps to actually do the deploy

teardown:  # this will trigger when the environment is stopped
  stage: deploy
  variables:
    GIT_STRATEGY: none  # ensures this works even if the branch is deleted
  when: manual
  script:
    - echo "tearing down test database $database_name"
    - ...  # actual script steps to stop env and cleanup database
  environment:
    name: review/$CI_COMMIT_REF_SLUG/$database_name
    action: "stop"
The implementation of the determine-database command may have to connect to your database to determine what database names are available (or perhaps you have a set of these provisioned in advance). You can then inspect the GitLab environments API to see what database names are still in use (since it's baked into the environment name).
For example, you might have something like this. Here, I am using the python-gitlab API wrapper just because it's most familiar to me, but the same principle can be applied to any method of calling the GitLab REST API.
#!/usr/bin/env python3
import gitlab
import os, sys, random

GITLAB_URL = os.environ['CI_SERVER_URL']
PROJECT_TOKEN = os.environ['MY_PROJECT_TOKEN']  # you generate and add this to your CI/CD variables!
PROJECT_ID = os.environ['CI_PROJECT_ID']
DATABASE_NAMES = ['xxx_1', 'xxx_2', 'xxx_3']  # or determine this programmatically by connecting to the DB

gl = gitlab.Gitlab(GITLAB_URL, private_token=PROJECT_TOKEN)

in_use_databases = []
project = gl.projects.get(PROJECT_ID)
for environment in project.environments.list(state='available', all=True):
    # the in-use database name is the string after the last '/' in the env name
    in_use_db_name = environment.name.split('/')[-1]
    in_use_databases.append(in_use_db_name)

available_databases = [name for name in DATABASE_NAMES if name not in in_use_databases]

if not available_databases:  # bail if all databases are in use
    print('FATAL. no available databases', file=sys.stderr)
    raise SystemExit(1)

# otherwise pick one and output to stdout
db_name = random.choice(available_databases)
# optionally you could prepare the database here, too, instead of relying on the `on_stop` job.
print(db_name)
There is a potential concurrency problem here (two concurrent runs of determine_database on different branches can select the same database before either finishes), but that could be addressed with resource locks.
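For example, GitLab's resource_group keyword serializes jobs that share the same key across pipelines, so only one determine_database job can pick a database at a time; a minimal sketch (the key name is illustrative):

determine_database:
  stage: pre-deploy
  resource_group: database-allocation  # only one job holding this key runs at a time
  # ... rest of the job as shown above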

Azure CLI: az cdn endpoint purge | Not working with Service Principal

My Service Principal has the following two roles on the whole Resource Group Playground
CDN Endpoint Contributor
CDN Profile Contributor
I am trying to run the following commands
az login --service-principal --username="ca85199a-7e86-40eb-b6c8-a774a9edc010" --password="<pwd>" --tenant="<tenant-id>"
az cdn endpoint purge -n mopar --profile-name mopar-poc --content-paths "/*" --resource-group Playgroud --no-wait
I am getting the following error.
AuthorizationFailed: The client 'acd5dfea-f69a-4178-812c-4204963c6959' with object id 'acd5dfea-f69a-4178-812c-4204963c6959' does not have authorization to perform action 'Microsoft.Cdn/profiles/endpoints/purge/action' over scope '/subscriptions/b19669be-bfa2-4e86-b7d4-f1b4d98dd2a5/resourceGroups/Playgroud/providers/Microsoft.Cdn/profiles/mopar-poc/endpoints/mopar' or the scope is invalid. If access was recently granted, please refresh your credentials.
What am I missing here?
The two roles are enough; the command works fine on my side. Please follow the steps below to troubleshoot.
1. Double-check the RBAC roles in the Azure portal; make sure the correct service principal has the correct role at the correct scope.
2. Run az account clear first, then log in again to make sure you are using the correct service principal.
3. Make sure you are logged in to the correct subscription; just use az account set --subscription <subscription-id> after login to set it.
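For step 1, one way to check from the CLI (a sketch; the assignee is the service principal's application ID used in the az login command above):

az role assignment list \
  --assignee "ca85199a-7e86-40eb-b6c8-a774a9edc010" \
  --resource-group Playground \
  --output table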
The error was misleading.
Turns out I had misspelled Playground in the command below.
az cdn endpoint purge -n mopar --profile-name mopar-poc --content-paths "/*" --resource-group Playgroud --no-wait
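That is, with the resource group name spelled correctly:
az cdn endpoint purge -n mopar --profile-name mopar-poc --content-paths "/*" --resource-group Playground --no-wait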

Is it possible to overwrite the kubeconfig with Terraform's Kubernetes provider

I wanted to run terraform and then be able to run kubectl in the CLI right after terraform completes. Or is this something you don't do? I would want to make a script that runs kubectl commands after terraform finishes creating the cluster.
I have this, and I am assuming I could write Terraform Kubernetes code with it, but I don't believe it overwrites the kubeconfig file that the CLI references.
provider "kubernetes" {
load_config_file = false
host = azurerm_kubernetes_cluster.cluster_1.kube_config.0.host
username = azurerm_kubernetes_cluster.cluster_1.kube_config.0.username
password = azurerm_kubernetes_cluster.cluster_1.kube_config.0.password
client_certificate = base64decode(azurerm_kubernetes_cluster.cluster_1.kube_config.0.client_certificate)
client_key = base64decode(azurerm_kubernetes_cluster.cluster_1.kube_config.0.client_key)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.cluster_1.kube_config.0.cluster_ca_certificate)
}
If I understand correctly, you want to add a context inside your kube config file after creating a cluster. Maybe running az aks get-credentials using Terraform after creation will work?
resource "null_resource" "add_context" {
provisioner "local-exec" {
command = "az aks get-credentials --resource-group ${azurerm_kubernetes_cluster.cluster_1.resource_group_name} --name ${azurerm_kubernetes_cluster.cluster_1.name} --overwrite-existing"
}
depends_on = [azurerm_kubernetes_cluster.cluster_1]
}
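An alternative sketch (not part of the answer above): the AKS resource also exports kube_config_raw, which you can write to a file with the hashicorp/local provider and point kubectl at. The filename is illustrative, and the file will contain cluster credentials, so treat it as a secret:

resource "local_file" "kubeconfig" {
  content  = azurerm_kubernetes_cluster.cluster_1.kube_config_raw
  filename = "${path.module}/kubeconfig"
}

After terraform apply, export KUBECONFIG="$PWD/kubeconfig" (or pass --kubeconfig to kubectl) and kubectl will talk to the new cluster without touching ~/.kube/config.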

GCP: Terraform is installed on project-A's 'test-instance' instance; using Terraform code, how to deploy/create an instance on project-B?

Terraform is installed on the 'test-instance' instance in GCP project-A; using Terraform, how do I deploy an instance on project-B?
I was able to do it using the gcloud command; does anyone know how to do it with Terraform?
provider "google" {
project = "project-b"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_compute_instance" "vm_instance" {
name = "terraform-instance"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
# A default network is created for all GCP projects
network = "default"
access_config {}
}
}
The problem you are facing is around access control.
You are trying to run Terraform from a VM that lives in Project-A, and the Terraform code wants to create a new VM (or other resource) in Project-B.
By default, the service account attached to the Project-A VM does not have enough rights to create any resource in Project-B. To solve this, you can create a service account at the Folder level (or Org level) which has permissions to create VMs in the required projects, and then attach that service account to the VM which runs Terraform.
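For example, a minimal sketch of granting such a service account rights on Project-B (the service-account e-mail is a placeholder, roles/compute.instanceAdmin.v1 is one role that allows creating VMs, and whether you bind at the project, folder, or org level depends on your setup):

# Placeholder service-account e-mail; grants VM-creation rights on project-b
gcloud projects add-iam-policy-binding project-b \
  --member="serviceAccount:terraform-sa@project-a.iam.gserviceaccount.com" \
  --role="roles/compute.instanceAdmin.v1"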
Hope this helps.
I suggest you use Terraform Variables using .tfvars files and multiple Terraform Workspaces. You can then switch between workspaces and apply the tfvars for each particular project separately.
e.g.
# variables.tf
variable "project_id" {
  type = string
}
And then use the variable in your terraform config:
# main.tf
provider "google" {
  project = var.project_id
  region  = "us-central1"
  zone    = "us-central1-c"
}
The tfvars will then look like this:
# vars/dev.tfvars
project_id = "my-dev-project"
Full invocation within your workspace (see the docs) can then be done using plan/apply as you would normally do:
terraform workspace select dev
terraform plan -var-file vars/dev.tfvars
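A sketch of the full flow for one environment (the workspace name matches the example above; apply uses the same var file):

terraform workspace new dev                 # one-time creation of the workspace
terraform workspace select dev
terraform plan  -var-file=vars/dev.tfvars
terraform apply -var-file=vars/dev.tfvars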

Adding permissions to a project

I am trying to follow this tutorial https://tensorflow.github.io/serving/serving_inception
But I see this:
$ gcloud container clusters create inception-serving-cluster --num-nodes 5
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Required "container.clusters.create" permission for "projects/tensorflow-serving".
I did not see an option to add permissions to the project anywhere. How do I do this using the CLI or the UI?
EDIT:
I do have the project created
EDIT:
Just saw that it works fine from the cloud shell
Update: Your project's name is tensorflow-serving-1360, so you should be running gcloud container clusters create inception-serving-cluster --num-nodes 5 --project=tensorflow-serving-1360.
The project tensorflow-serving is not owned by you. It is the example project name used in the linked tutorial, but you need to replace it with the name of your own project as described in the line at the beginning of Part 2:
Here we assume you have created and logged in a gcloud project named tensorflow-serving
(Tested on 2019.04.07)
Firstly, check the list of auth accounts:
gcloud auth list
Next set the active account:
gcloud config set account <email_address_from_above_output>
Then, specify the project parameter for the create cluster command:
gcloud container clusters create <cluster_name> --num-nodes=2 --project=<PROJECT_ID>
e.g.
gcloud container clusters create prod-myapp-cluster --num-nodes=2 --project=myapp-20394823094
Expected output:
kubeconfig entry generated for prod-myapp-cluster.
NAME                LOCATION       MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION   NUM_NODES  STATUS
prod-myapp-cluster  asia-south1-a  1.11.7-gke.12   35.5xx.2xx.1xx  n1-standard-1  1.11.7-gke.12  2          RUNNING
1. Get your project name, or create a project if you haven't created one already, at console.cloud.google.com
2. Enable the Kubernetes Engine API in the console
3. Run this command at your command prompt:
gcloud container clusters create bd-serving-cluster --num-nodes 5 --project=tensorflow-serving-264611 \
  --zone=us-central1-f
Replace 'bd' with the name of your serving cluster and 'tensorflow-serving-264611' with the project name you created in step 1; you can choose your preferred zone or use the default 'us-central1-f'.
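If you did need to grant the container.clusters.create permission on a project of your own from the CLI, a sketch (the member e-mail and project ID are placeholders; roles/container.admin is one predefined role that includes that permission):

# Grant a user the Kubernetes Engine Admin role on the project
gcloud projects add-iam-policy-binding <PROJECT_ID> \
  --member="user:you@example.com" \
  --role="roles/container.admin"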