How to solve circular dependencies when deploying a Cloud Endpoints service using Cloud Run in Terraform - google-cloud-functions

I am currently trying to set up Google Cloud Endpoints on Cloud Run so that I can have OpenAPI documentation for my Cloud Functions. I followed the instructions here for a PoC and it worked fine.
Now I have tried to set it up using Terraform 0.12.24.
Service Endpoint
data "template_file" "openapi_spec" {
template = file("../../cloud_functions/openapi_spec.yaml")
vars = {
endpoint_service = local.service_name
feedback_post_target = google_cloudfunctions_function.feedbackPOST.https_trigger_url
}
}
resource "google_endpoints_service" "api-gateway" {
service_name = local.service_name
project = var.project_id
openapi_config = data.template_file.openapi_spec.rendered
depends_on = [
google_project_service.endpoints,
google_project_service.service-usage,
]
}
Cloud Run
locals {
  service_name = "${var.endpoint_service_name}.endpoints.${var.project_id}.cloud.goog"
}

resource "google_cloud_run_service" "api-management" {
  name     = "api-gateway-1233"
  location = "europe-west1"

  template {
    spec {
      containers {
        image = "gcr.io/endpoints-release/endpoints-runtime-serverless:2"
        env {
          name  = "ENDPOINTS_SERVICE_NAME"
          value = local.service_name
        }
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }

  depends_on = [google_project_service.run]
}
If I try to execute my function from the Endpoints portal now, I get the following error:
ENOTFOUND: Error resolving domain "https://function-api-gateway.endpoints.PROJECT_ID.cloud.goog"
which makes total sense, as my Endpoints service should use the host URL of the Cloud Run service, which is given by
google_cloud_run_service.api-management.status[0].url
This means I have to use that URL as the service name in the Endpoints service definition above, and as the host environment variable in the OpenAPI definition. But only once the Endpoints service is created can I apply my Cloud Run service, with the env variable being its own URL.
This is a circular dependency which I do not know how to solve.
Any help is highly appreciated!
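One way to break the cycle (a sketch only, not a verified fix): invert the direction of the dependency. Instead of pointing the Cloud Run env variable at the Endpoints service, derive the Endpoints service name from the Cloud Run URL, which Terraform knows once the service exists. Deploy the ESPv2 container first without ENDPOINTS_SERVICE_NAME, let the Endpoints service consume its URL, and set the variable afterwards in a second step. The trimprefix call is the only new piece; the rest is a modified copy of the resources from the question.
locals {
  # Strip the scheme from the Cloud Run URL to get the bare hostname.
  cloud_run_host = trimprefix(google_cloud_run_service.api-management.status[0].url, "https://")
}

# The Endpoints service is named after the Cloud Run host, so it depends
# on Cloud Run -- and nothing in the Cloud Run resource references it back.
resource "google_endpoints_service" "api-gateway" {
  service_name   = local.cloud_run_host
  project        = var.project_id
  openapi_config = data.template_file.openapi_spec.rendered
}

# Second step, once the Endpoints service exists (manual or scripted):
#   gcloud run services update api-gateway-1233 --region europe-west1 \
#     --set-env-vars ENDPOINTS_SERVICE_NAME=<the cloud_run_host value>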

Related

How to add policy to restrict google cloud function invoker using Terraform

I am trying to create a ServiceAccount that has restricted access to invoke a single cloud function.
resource "google_service_account" "service_account" {
account_id = "service-account-id"
display_name = "Service Account"
}
data "google_iam_policy" "invoker" {
binding {
role = "roles/cloudfunctions.invoker"
members = [
"serviceAccount:${google_service_account.service_account.email}",
]
condition {
expression = "resource.name == projects/project_name/locations/region/functions/function_name"
title = foo
}
}
}
resource "google_cloudfunctions2_function_iam_policy" "binding" {
cloud_function = "projects/project_name/locations/region/functions/function_name"
project = var.common.project_id
location = var.common.default_region
policy_data = data.google_iam_policy.invoker.policy_data
}
However when I apply this change I get an error.
module.handler-build.google_cloudfunctions2_function_iam_policy.binding: Creating...
╷
│ Error: Error setting IAM policy for cloudfunctions2 function "projects/project_name/locations/region/functions/function_name": googleapi: Error 400: Invalid argument: 'An invalid argument was specified. Please check the fields and try again.'
I would try google_cloudfunctions2_function_iam_binding or google_cloudfunctions2_function_iam_member instead, but they don't have a condition expression that I can use. It is important to add the condition so this service account cannot call other cloud functions.
How can I add invoker policy to the service account?
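A thought that may sidestep the condition entirely (a sketch, not a confirmed fix): google_cloudfunctions2_function_iam_member is already scoped to a single function, so granting the invoker role through it restricts the service account to that one function without needing an IAM condition at all. Note also that gen2 functions are served by Cloud Run under the hood, so invocation may additionally require roles/run.invoker on the underlying Cloud Run service; the names below mirror the question's placeholders.
# Sketch: the binding attaches to one specific function resource, so the
# grant cannot leak to other functions even without a condition block.
resource "google_cloudfunctions2_function_iam_member" "invoker" {
  project        = var.common.project_id
  location       = var.common.default_region
  cloud_function = "function_name"
  role           = "roles/cloudfunctions.invoker"
  member         = "serviceAccount:${google_service_account.service_account.email}"
}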

How do I use s3 lifecycle rules in Terraform in a modular form, i.e. referenced in separate JSON?

Currently, I'm specifying lifecycle_rule under my s3 resource:
resource "aws_s3_bucket" "bucket-name" {
bucket = "bucket-name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
...but I imagine there must be a way to make this more modular, like putting the lifecycle rule into a separate JSON so I can reference it for multiple s3 buckets and reduce the need to edit each resource. I know how to do this in general and have done this with other resources as seen here:
resource "aws_iam_policy" "devops-admin-write" {
name = "devops-admin-s3"
description = "Devops-Admin group s3 policy."
policy = file("iam_policies/devops-admin-write.json")
}
...the difference is that "lifecycle_rule" is an argument and not an attribute - and it's not clear to me how to make it work. Google-Fu has not yielded any clear answers either.
You can use dynamic blocks driven by a generic local variable.
Then you only need to change the local variable, and the change is reflected in every place the variable is used.
To make it more maintainable I would suggest building a module and reusing it, or using an existing module.
But the locals + dynamic implementation could look like this:
locals {
  lifecycle_rules = [
    {
      id      = "Expiration Rule"
      enabled = true
      prefix  = "reports/"
      expiration = {
        days = 30
      }
    }
  ]
}

resource "aws_s3_bucket" "bucket-name" {
  bucket = "bucket-name"

  # Each element of local.lifecycle_rules becomes one lifecycle_rule block;
  # inside the block the element is available as lifecycle_rule.value.
  dynamic "lifecycle_rule" {
    for_each = local.lifecycle_rules
    content {
      id      = lifecycle_rule.value.id
      enabled = lifecycle_rule.value.enabled
      prefix  = lifecycle_rule.value.prefix

      expiration {
        days = lifecycle_rule.value.expiration.days
      }
    }
  }
}
This does not check for errors and is not complete, of course; it just implements your example.
You can see a more complete, generic example in our Terraform s3-bucket module.
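A related aside (my addition, not part of the answer above): in AWS provider v4 and later, lifecycle rules moved out of aws_s3_bucket into the standalone aws_s3_bucket_lifecycle_configuration resource, which is arguably exactly the modular form the question asks for. A sketch, assuming provider >= 4.x:
# Sketch for AWS provider v4+: lifecycle rules live in their own resource
# and attach to a bucket by reference, so they can be reused or templated.
resource "aws_s3_bucket_lifecycle_configuration" "expiration" {
  bucket = aws_s3_bucket.bucket-name.id

  rule {
    id     = "Expiration Rule"
    status = "Enabled"

    filter {
      prefix = "reports/"
    }

    expiration {
      days = 30
    }
  }
}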

When using terraform with heroku, is there a way to refresh app state after an addon is attached?

I'm using terraform to set up an app on heroku. The app needs a mysql database and I'm using ClearDB addon for that. When ClearDB is attached to the app, it sets a CLEARDB_DATABASE_URL config variable. The app requires the database url to be in a DATABASE_URL config variable.
So what I'm trying to do is use the heroku_app_config_association resource to copy the value from CLEARDB_DATABASE_URL to DATABASE_URL. The problem is that after the heroku_app resource is created, its state does not yet contain CLEARDB_DATABASE_URL, and after heroku_addon is created, the state of heroku_app is not updated.
This is the content of my main.tf file:
provider "heroku" {}
resource "heroku_app" "main" {
name = var.app_name
region = "eu"
}
resource "heroku_addon" "database" {
app = heroku_app.main.name
plan = "cleardb:punch"
}
resource "heroku_app_config_association" "main" {
app_id = heroku_app.main.id
sensitive_vars = {
DATABASE_URL = heroku_app.main.all_config_vars.CLEARDB_DATABASE_URL
}
depends_on = [
heroku_addon.database
]
}
This is the error that I get:
Error: Missing map element
on main.tf line 22, in resource "heroku_app_config_association" "main":
22: DATABASE_URL = heroku_app.main.all_config_vars.CLEARDB_DATABASE_URL
|----------------
| heroku_app.main.all_config_vars is empty map of string
This map does not have an element with the key "CLEARDB_DATABASE_URL".
The configuration variable is copied successfully when terraform apply is executed a second time. Is there a way to make it work on the first run?
To get the up-to-date state of the heroku_app resource, I used a data source:
data "heroku_app" "main" {
  name = heroku_app.main.name

  depends_on = [
    heroku_addon.database
  ]
}
I could then access the value in the heroku_app_config_association resource:
resource "heroku_app_config_association" "main" {
  app_id = heroku_app.main.id

  sensitive_vars = {
    DATABASE_URL = data.heroku_app.main.config_vars.CLEARDB_DATABASE_URL
  }
}

Quicksight Dashboard Embed url showing us-east-1 not eu-west-1

Problem:
I want to programmatically fetch a QuickSight dashboard embed URL through the SDK (the dashboard lives in eu-west-1), but I get the following results depending on which region I use:
eu-west-1: Error: Operation is being called from endpoint eu-west-1, but your identity region is us-east-1. Please use the us-east-1 endpoint.
us-east-1: No error, but the embed URL is for us-east-1 and results in a us-east-1.quicksight.aws.amazon.com refused to connect error in the browser, e.g.: https://us-east-1.quicksight.aws.amazon.com/embed/XXXXXX&identityprovider=quicksight&isauthcode=true
Example Code:
Note: credentials are inlined here for brevity, but are actually loaded from a profile. I have also tried this with the Java SDK.
const AWS = require('aws-sdk')
const dotenv = require('dotenv').config()

const init = async () => {
  AWS.config.credentials = { accessKeyId: process.env.ACCESS_KEY_ID, secretAccessKey: process.env.SECRET_ACCESS_KEY }
  AWS.config.region = 'us-east-1'
  // AWS.config.region = 'eu-west-1'

  const quicksight = new AWS.QuickSight()
  const embedUrlParams = {
    AwsAccountId: '111122223333',
    DashboardId: '11111111-2222-3333-4444-555555555555',
    IdentityType: 'QUICKSIGHT',
    UserArn: 'arn:aws:quicksight:us-east-1:111122223333:user/default/quicksight-user-1111'
  }
  const embedUrlRes = await quicksight.getDashboardEmbedUrl(embedUrlParams).promise()
  console.log('embedUrlRes', embedUrlRes)
}

init()
CLI:
When I invoke exactly the same call through the CLI, e.g.:
aws quicksight get-dashboard-embed-url --aws-account-id 111122223333 --dashboard-id 11111111-2222-3333-4444-555555555555 --identity-type QUICKSIGHT --user-arn "arn:aws:quicksight:us-east-1:111122223333:user/default/quicksight-user-1111" --profile my-quicksight-profile
I get a perfectly valid embed URL in eu-west-1 that embeds perfectly in the browser:
https://eu-west-1.quicksight.aws.amazon.com/embed/XXXXXXXX&identityprovider=quicksight&isauthcode=true
So:
I imagine that the SDK is not behaving like the CLI with respect to assuming roles, but I've tried that with little success, as has pointing at the QuickSight regional endpoints.
Before I go down the rabbit hole, it would be good to hear whether anyone has experienced the same and how they resolved it.
Thanks!
For people who end up here: when generating an embed link through the SDK, if your dashboard lives in a different region you have to pass that region in the SDK's QuickSight client parameters, something like the following:
// Previous code blocks..
const quicksight = new AWS.QuickSight({ region: targetRegion })
quicksight.getDashboardEmbedUrl(embedUrlParams, function (error, embeddedLink) {})
Also note that you have to whitelist the embedding domain in each region, since QuickSight treats each region as a separate entity.

Trigger a cloud build pipeline using Cloud Function

I'm trying to create a Cloud Function that listens to the cloudbuilds topic and makes an API call to trigger a build. I think I'm missing something in my index.js file (I'm new to Node.js). Can you provide an example of a Cloud Function making an API call to the Cloud Build API?
Here is my function:
const request = require('request')

// Token pasted in manually from the output of:
//   gcloud config config-helper --format='value(credential.access_token)'
const accessToken = '...'

// Exported entry point (background function signature assumed).
module.exports.build = (event, context) => {
  request({
    url: 'https://cloudbuild.googleapis.com/v1/projects/[PROJECT_ID]/builds',
    auth: {
      bearer: accessToken
    },
    method: 'POST',
    json: {
      steps: [{
        name: 'gcr.io/cloud-builders/gsutil',
        args: ['cp', 'gs://adolfo-test-cloudbuilds/cloudbuild.yaml', 'gs://adolfo-test_cloudbuild/cloudbuild.yaml']
      }]
    }
  }, (err, res) => {
    console.log(res.body)
  })
}
I executed the command gcloud config config-helper --format='value(credential.access_token)', copied the token, and put it in the accessToken variable. But this didn't work for me.
Here is the error: { error: { code: 403, message: 'The caller does not have permission', status: 'PERMISSION_DENIED' } }
I had the exact same problem and solved it by writing a small package; you can use it or read the source code:
https://github.com/MatteoGioioso/google-cloud-build-trigger
With this package you can run a pre-configured trigger from Cloud Build.
You can also extend it to call other Cloud Build API endpoints.
As I understand it, the Cloud Build API requires either OAuth2 or a service account. Make sure you grant the right permissions to Cloud Build in the GCP console under IAM; after that you should be able to download the service-account.json file.
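Since the rest of this page is Terraform-flavoured, here is a hedged HCL sketch of that IAM grant (the service account name function_sa is hypothetical): give the function's runtime service account permission to submit builds, so the code never needs a hand-pasted access token.
# Sketch: grant the Cloud Function's runtime service account the ability
# to create builds, instead of embedding a short-lived user token.
resource "google_project_iam_member" "cloudbuild_editor" {
  project = var.project_id
  role    = "roles/cloudbuild.builds.editor"
  member  = "serviceAccount:${google_service_account.function_sa.email}"
}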