When using Terraform with Heroku, is there a way to refresh app state after an addon is attached?

I'm using Terraform to set up an app on Heroku. The app needs a MySQL database, and I'm using the ClearDB addon for that. When ClearDB is attached to the app, it sets a CLEARDB_DATABASE_URL config variable. The app, however, expects the database URL in a DATABASE_URL config variable.
So what I'm trying to do is use a heroku_app_config_association resource to copy the value from CLEARDB_DATABASE_URL to DATABASE_URL. The problem is that after the heroku_app resource is created, its state does not yet contain CLEARDB_DATABASE_URL, and after heroku_addon is created, the state of heroku_app is not updated.
This is the content of my main.tf file:
provider "heroku" {}
resource "heroku_app" "main" {
name = var.app_name
region = "eu"
}
resource "heroku_addon" "database" {
app = heroku_app.main.name
plan = "cleardb:punch"
}
resource "heroku_app_config_association" "main" {
app_id = heroku_app.main.id
sensitive_vars = {
DATABASE_URL = heroku_app.main.all_config_vars.CLEARDB_DATABASE_URL
}
depends_on = [
heroku_addon.database
]
}
This is the error that I get:
Error: Missing map element

  on main.tf line 22, in resource "heroku_app_config_association" "main":
  22:     DATABASE_URL = heroku_app.main.all_config_vars.CLEARDB_DATABASE_URL
    |----------------
    | heroku_app.main.all_config_vars is empty map of string

This map does not have an element with the key "CLEARDB_DATABASE_URL".
The configuration variable is copied successfully when terraform apply is run a second time. Is there a way to make it work on the first run?

To get the up-to-date state of the heroku_app resource, I used a data source:
data "heroku_app" "main" {
name = heroku_app.main.name
depends_on = [
heroku_addon.database
]
}
I could then access the value in the heroku_app_config_association resource:
resource "heroku_app_config_association" "main" {
app_id = heroku_app.main.id
sensitive_vars = {
DATABASE_URL = data.heroku_app.main.config_vars.CLEARDB_DATABASE_URL
}
}
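Putting it all together, main.tf then reads as follows (the same resources as above, just combined; note the depends_on has moved to the data source, which is what defers reading the config vars until after the addon is attached):

provider "heroku" {}

resource "heroku_app" "main" {
  name   = var.app_name
  region = "eu"
}

resource "heroku_addon" "database" {
  app  = heroku_app.main.name
  plan = "cleardb:punch"
}

# Re-read the app after the addon is attached, so its config vars are fresh.
data "heroku_app" "main" {
  name = heroku_app.main.name

  depends_on = [
    heroku_addon.database
  ]
}

resource "heroku_app_config_association" "main" {
  app_id = heroku_app.main.id

  sensitive_vars = {
    DATABASE_URL = data.heroku_app.main.config_vars.CLEARDB_DATABASE_URL
  }
}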

Related

Terraform plan garbles jq/json output, but terraform console doesn't

I've been building a GitHub automation with Terraform to build an S3 bucket with one or more IAM roles as principals. When I assign the roles as JSON to a var (jsonencode/formatlist), testing with terraform console displays the resulting policy perfectly.
But when I run a terraform plan, the JSON is garbled instead, resulting in a badly formed Principal block.
Here's my variable block with the AWS account numbers:
variable "account_num" {
default = [
"123456789011",
"123456789012"
]
}
The Terraform code block looks like this:
"Principal": {
"AWS": ${jsonencode(formatlist("arn:aws:iam::%s:role/role-access", var.account_num))}
},
When I use the terraform console to try this var block with jsonencode/formatlist, it creates the policy block perfectly.
$ terraform console
> jsonencode(formatlist("arn:aws:iam::%s:role/role-access", var.account_num))
["arn:aws:iam::123456789011:role/role-access","arn:aws:iam::123456789012:role/role-access"]
However, in the actual terraform plan, the block is garbled.
+ Principal = {
    + AWS = [
        + <<~EOT
              arn:aws:iam::[
                "123456789011",
                "123456789012"
              ]:role/role-access
          EOT,
      ]
  }
Thanks for any help!
I resolved this by declaring the variable type explicitly as list(string):

variable "account_num" {
  type    = list(string)
  default = []
}
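Presumably the untyped variable was being treated as a single string rather than a list of strings, so formatlist interpolated the whole serialized list into one ARN. As an alternative, building the entire policy with jsonencode sidesteps string templating altogether; a minimal sketch (the statement contents and bucket ARN here are illustrative, not from the original question):

variable "account_num" {
  type    = list(string)
  default = ["123456789011", "123456789012"]
}

locals {
  # jsonencode renders the whole document, so no template quoting is needed.
  bucket_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "s3:GetObject"
      Resource = "arn:aws:s3:::my-bucket/*"
      Principal = {
        AWS = formatlist("arn:aws:iam::%s:role/role-access", var.account_num)
      }
    }]
  })
}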

How do I use S3 lifecycle rules in Terraform in a modular form, i.e. referenced in separate JSON?

Currently, I'm specifying lifecycle_rule under my s3 resource:
resource "aws_s3_bucket" "bucket-name" {
bucket = "bucket-name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
...but I imagine there must be a way to make this more modular, like putting the lifecycle rule into a separate JSON file so I can reference it for multiple S3 buckets and reduce the need to edit each resource. I know how to do this in general and have done it with other resources, as seen here:
resource "aws_iam_policy" "devops-admin-write" {
name = "devops-admin-s3"
description = "Devops-Admin group s3 policy."
policy = file("iam_policies/devops-admin-write.json")
}
...the difference is that lifecycle_rule is a nested block and not an argument that takes a value - and it's not clear to me how to make that work. Google-Fu has not yielded any clear answers either.
You can use dynamic blocks driven by a generic local variable.
Then you only need to change the local variable, and the change is reflected everywhere the variable is used.
To make it more maintainable I would suggest building a module and reusing it, or using an existing module.
But the locals + dynamic implementation could look like this:
locals {
  lifecycle_rules = [
    {
      id      = "Expiration Rule"
      enabled = true
      prefix  = "reports/"
      expiration = {
        days = 30
      }
    }
  ]
}
resource "aws_s3_bucket" "bucket-name" {
bucket = "bucket-name"
dynamic "lifecycle_rule" {
for_each = local.lifecycle_rules
content {
id = lifecycle_rule.each.id
enabled = lifecycle_rule.each.enabled
prefix = lifecycle_rule.each.prefix
expiration {
days = lifecycle_rule.each.expiration.days
}
}
}
}
This does not check for errors and is of course not complete - it just implements your example.
See a more complete, generic example in our terraform s3-bucket module.
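To keep the rules in a separate JSON file, as the question asks, the same dynamic block can be fed from jsondecode(file(...)) instead of a literal list; a sketch, assuming a hypothetical lifecycle_rules.json that holds the same list of objects as the local above:

# lifecycle_rules.json (hypothetical):
# [
#   { "id": "Expiration Rule", "enabled": true, "prefix": "reports/",
#     "expiration": { "days": 30 } }
# ]

locals {
  lifecycle_rules = jsondecode(file("${path.module}/lifecycle_rules.json"))
}

Every bucket resource can then reuse the same dynamic "lifecycle_rule" block unchanged.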

How to solve circular dependencies when deploying a Cloud Endpoints service using Cloud Run in Terraform

I am currently trying to set up Google Cloud Endpoints on Cloud Run to have OpenAPI documentation for my Cloud Functions. I followed the instructions here for a PoC and it worked fine.
Now I have tried to set it up using Terraform 0.12.24.
Service Endpoint
data "template_file" "openapi_spec" {
template = file("../../cloud_functions/openapi_spec.yaml")
vars = {
endpoint_service = local.service_name
feedback_post_target = google_cloudfunctions_function.feedbackPOST.https_trigger_url
}
}
resource "google_endpoints_service" "api-gateway" {
service_name = local.service_name
project = var.project_id
openapi_config = data.template_file.openapi_spec.rendered
depends_on = [
google_project_service.endpoints,
google_project_service.service-usage,
]
}
Cloud Run
locals {
  service_name = "${var.endpoint_service_name}.endpoints.${var.project_id}.cloud.goog"
}

resource "google_cloud_run_service" "api-management" {
  name     = "api-gateway-1233"
  location = "europe-west1"

  template {
    spec {
      containers {
        image = "gcr.io/endpoints-release/endpoints-runtime-serverless:2"

        env {
          name  = "ENDPOINTS_SERVICE_NAME"
          value = local.service_name
        }
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }

  depends_on = [google_project_service.run]
}
If I try to execute my function from the Endpoints portal now, I get the following error:
ENOTFOUND: Error resolving domain "https://function-api-gateway.endpoints.PROJECT_ID.cloud.goog"
which makes total sense, as my Endpoints service should use the host URL of the Cloud Run service, which is given by
google_cloud_run_service.api-management.status[0].url
This means I have to use that URL in the Endpoints service definition above, as the service name and as the host environment variable in the OpenAPI definition.
But only when that is set can I apply my Cloud Run service again, with the env variable being its own URL.
This is a circular dependency that I do not know how to solve.
Any help is highly appreciated!
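For illustration, one way such a cycle is often broken is to create the Cloud Run service first, derive the Endpoints service name from its generated URL, and patch the env var afterwards. The sketch below is an untested assumption, not a verified fix; the null_resource/gcloud step in particular just mirrors the manual redeploy from Google's tutorial:

# Sketch: derive the Endpoints host from the Cloud Run URL instead of
# computing it up front, so the Run service no longer depends on Endpoints.
locals {
  run_host = replace(google_cloud_run_service.api-management.status[0].url, "https://", "")
}

resource "google_endpoints_service" "api-gateway" {
  service_name   = local.run_host
  project        = var.project_id
  openapi_config = data.template_file.openapi_spec.rendered
}

# The ESPv2 container still needs ENDPOINTS_SERVICE_NAME; setting it inside the
# Run resource would recreate the cycle, so patch it out-of-band once the
# Endpoints config exists (assumes gcloud is available where Terraform runs).
resource "null_resource" "set_endpoints_env" {
  triggers = {
    config_id = google_endpoints_service.api-gateway.config_id
  }

  provisioner "local-exec" {
    command = "gcloud run services update api-gateway-1233 --region europe-west1 --update-env-vars ENDPOINTS_SERVICE_NAME=${local.run_host}"
  }
}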

Unable to update Google Drive files using Drive API v3 -- The resource body includes fields which are not directly writable

I am trying to use Google Drive API (v3) to make updates to documents
in Google Drive.
I have read this migration guide:
Google Drive API v3 Migration
I coded it to make a new, empty File() with the details I want to update and then call execute() with that and the file ID.
But I am still getting an error. Can anyone point out what I am doing wrong?
Thanks a lot!
Error:
{
  "code" : 403,
  "errors" : [{
    "domain" : "global",
    "message" : "The resource body includes fields which are not directly writable.",
    "reason" : "fieldNotWritable"
  }],
  "message" : "The resource body includes fields which are not directly writable."
}
Code snippet below:
File newFileDetails = new File();
FileList result = service2.files().list()
    .setPageSize(10)
    .setFields("nextPageToken, files(id, name)")
    .execute();
List<File> files = result.getFiles();
if (files == null || files.size() == 0) {
    System.out.println("No files found.");
} else {
    System.out.println("Files:");
    for (File file : files) {
        if (file.getName().equals("first_sheet")) {
            System.out.printf("%s (%s)\n", file.getName(), file.getId());
            newFileDetails.setShared(true);
            service2.files().update(file.getId(), newFileDetails).execute();
        }
    }
}
I had the same issue and found a solution. The key point is: you must create a new File object without an ID and use it in the update() method. Here is a piece of my code:
val oldMetadata = service!!.files().get(fileId.id).execute()
val newMetadata = File()
newMetadata.name = oldMetadata.name
newMetadata.parents = oldMetadata.parents
newMetadata.description = idHashPair.toDriveString()
val content = ByteArrayContent("application/octet-stream", fileContent)
val result = service!!.files().update(fileId.id, newMetadata, content).execute()
It works. I hope it'll help you.
Referring to https://developers.google.com/drive/v3/reference/files#resource-representations, you can see that shared isn't a writable field. If you think about it, this makes perfect sense. You can share a file by adding a new permission, and you can check if a file has been shared by reading the shared property. But saying a file is shared, other than by actually sharing it, makes no sense.
In code it looks like this:

Drive service = ...; // your own configured Drive service instance
File file = new File(); // from the com.google.api.services.drive.model package
// set your data on the file, e.g.:
file.setName("new name for file");
String fileID = "id of file, which you want to change";
service.files().update(fileID, file).execute();
Trying to change non-writable fields of a remote file and write them back can throw a security exception like the one above. But that is not a solution to your question.
If you want to share a file with another Google account by email, you can switch your app's authorization to a service account and then add the needed email as an owner of the file.
I was doing the same thing. My goal was to share my file programmatically with my Python code.
And yes, I was getting the same error:
"The resource body includes fields which are not directly writable"
I solved this problem by adding the service account's email address of my Virtual Machine (I created it on my Compute Engine dashboard) to the Editors of the file.
Then I ran this Python code on my VM:
from googleapiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials

# Took the json file from my Google Cloud Platform (GCP) → IAM & Admin → Service Accounts:
service_key_file = 'service_key.json'
scope = 'https://www.googleapis.com/auth/drive'

credentials = ServiceAccountCredentials.from_json_keyfile_name(service_key_file, scopes=scope)
driveV3 = build('drive', 'v3', credentials=credentials)

fileId = '1ZP1xZ0WaH8w2yaQTSx99gafNZWawQabcdVW5DSngavQ'  # A spreadsheet file on my GDrive.
newGmailUser = 'testtest@gmail.com'

permNewBody = {
    'role': 'reader',
    'type': 'user',
    'emailAddress': newGmailUser,
}

driveV3.permissions().create(fileId=fileId, body=permNewBody).execute()

print(f"""The file is now shared with this user:
{newGmailUser}

See the file here:
https://docs.google.com/spreadsheets/d/1ZP1xZ0WaH8w2yaQTSx99gafNZWawQabcdVW5DSngavQ""")

How to use custom JSON attributes in Chef recipe

I am new to JSON. I have created custom JSON in AWS OpsWorks and am trying to access it as an attribute in a Chef recipe, but unfortunately it is not picking up the JSON values. My JSON looks like this:
{
  "normal": {
    "filebeat_minjar": {
      "log_path": "/var/log/*.log",
      "hosts": "Some Random Host ID",
      "port": 5000
    }
  }
}
and I am trying to access it in the recipe as:
log = node['filebeat_minjar']['log_path']
hosts = node['filebeat_minjar']['hosts']
port = node['filebeat_minjar']['port']
But it failed. I have also tried it without 'normal'; I got an undefined method `[]' for nil:NilClass error.
Try this way:

log = node['normal']['filebeat_minjar']['log_path']
hosts = node['normal']['filebeat_minjar']['hosts']
port = node['normal']['filebeat_minjar']['port']

or

log = node.normal.filebeat_minjar.log_path
hosts = node.normal.filebeat_minjar.hosts
port = node.normal.filebeat_minjar.port

A JSON object is like a tree; the elements are its branches.
Hope this helps.
Your Chef code is correct, but you need to fix the JSON. You don't need the "normal": {...} in there, Chef and OpsWorks will handle that for you.
The following worked for me.
Custom JSON
{
  "Production": {
    "ApplicationLayer": {
      "DockerTag": "Version1"
    }
  }
}
Called from the Chef recipe:
node[:Production][:ApplicationLayer][:DockerTag]