I'm trying to set project-wide metadata using Google Cloud Deployment Manager, and to the best of my understanding this would look like:
- name: metadata-customer-name
  type: compute.v1.projects
  properties:
    items:
    - key: customer_name
      value: {{properties['project']}}
However, this fails with the error:
Waiting for update operation-...-...-...-...failed.
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation operation-...-...-...-: <ErrorValue
errors: [<ErrorsValueListEntry
code: u'INTERNAL_ERROR'
message: u"Code: '7653413665057094445'">]>
The error code appears to be random each time.
Has anyone successfully set project-wide metadata with Google Cloud Deployment Manager?
This is not available at this time. The Deployment Manager documentation lists the supported resource types, and compute.v1.projects is not one of them.
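If you need this today, a workaround is to set the metadata outside Deployment Manager via the Compute Engine API. A minimal sketch in Python, assuming the google-api-python-client library and application-default credentials (the customer value is a placeholder):

from googleapiclient import discovery
import google.auth

# Application-default credentials for the active project.
credentials, project = google.auth.default()
compute = discovery.build('compute', 'v1', credentials=credentials)

# Fetch the current project-wide metadata so existing keys are preserved
# and we have the fingerprint that setCommonInstanceMetadata requires.
info = compute.projects().get(project=project).execute()
metadata = info['commonInstanceMetadata']
items = metadata.get('items', [])
items.append({'key': 'customer_name', 'value': 'example-customer'})

compute.projects().setCommonInstanceMetadata(
    project=project,
    body={'fingerprint': metadata['fingerprint'], 'items': items},
).execute()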
I have a Node application deployed in GCP.
The application includes code that accesses resources in the AWS cloud.
For this purpose it uses the AWS SDK with ChainableTemporaryCredentials.
The relevant code lines are:
const { ChainableTemporaryCredentials, WebIdentityCredentials } = require('aws-sdk')

const credentials = new ChainableTemporaryCredentials({
  params: {
    RoleArn: `arn:aws:iam::${this.accountId}:role/${this.targetRoleName}`,
    RoleSessionName: this.targetRoleName,
  },
  masterCredentials: new WebIdentityCredentials({
    RoleArn: `arn:aws:iam::${this.proxyAccountId}:role/${this.proxyRoleName}`,
    RoleSessionName: this.proxyRoleName,
    WebIdentityToken: token,
  }),
})
await credentials.getPromise()
The WebIdentityToken was received from Google and looks good.
On the AWS side I created a proxy role (the role referenced by the masterCredentials RoleArn above).
However, at runtime I get the error:
Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
I do not understand this error. Because my application runs in GCP and uses temporary credentials, I do not see why I should need AWS credentials in the form of a credentials file or environment variables like AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY. I thought the whole idea of using ChainableTemporaryCredentials is NOT to have direct AWS credentials. Right?
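For reference, my understanding of what the chain does under the hood is two STS calls starting from the web identity token, with no static AWS keys anywhere. A rough Python equivalent of the JS snippet above (account IDs, role names, and the token are placeholders):

import boto3

sts = boto3.client('sts')

# Step 1: trade the Google-issued web identity token for temporary
# credentials on the proxy role (no stored AWS keys involved).
proxy = sts.assume_role_with_web_identity(
    RoleArn='arn:aws:iam::111111111111:role/proxy-role',
    RoleSessionName='proxy-role',
    WebIdentityToken='<google-jwt>',
)['Credentials']

# Step 2: use those temporary credentials to assume the target role.
sts_proxy = boto3.client(
    'sts',
    aws_access_key_id=proxy['AccessKeyId'],
    aws_secret_access_key=proxy['SecretAccessKey'],
    aws_session_token=proxy['SessionToken'],
)
target = sts_proxy.assume_role(
    RoleArn='arn:aws:iam::222222222222:role/target-role',
    RoleSessionName='target-role',
)['Credentials']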
You can see the public code at:
https://github.com/cloud-carbon-footprint/cloud-carbon-footprint/blob/trunk/packages/aws/src/application/GCPCredentials.ts
and documentation regarding env-variables at:
https://www.cloudcarbonfootprint.org/docs/configurations-glossary/
Any help which leads to understanding of this error message is welcome.
Thomas
Solved it. The message "Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1" was totally misleading. In reality it was a problem with the field names in the GCP JWT token and the policy in AWS. See https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html#ck_aud
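For anyone hitting the same thing: the fix boils down to making the role's trust policy condition match the aud claim of the Google-issued token. A minimal sketch applied with boto3; the client ID and role name are hypothetical placeholders:

import json
import boto3

# Trust policy for the proxy role: allow AssumeRoleWithWebIdentity for
# Google-issued tokens whose audience matches the expected client ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": "accounts.google.com"},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {
                # Must match the aud claim inside the GCP-issued JWT.
                "accounts.google.com:aud": "example-client-id.apps.googleusercontent.com"
            }
        },
    }],
}

iam = boto3.client("iam")
iam.update_assume_role_policy(
    RoleName="my-proxy-role",  # hypothetical role name
    PolicyDocument=json.dumps(trust_policy),
)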
I'm using Pulumi Azure Native for infrastructure as code. I need to create an Azure Web App (based on an App Service Plan) and add some app settings (and connection strings) throughout the code, e.g., an Application Insights instrumentation key, a Blob Storage account name, etc.
I figured out that there is a resource, WebAppApplicationSettings, that can update web app settings:
from pulumi import ResourceOptions
from pulumi_azure_native import web

web_app = web.WebApp(
    'my-web-app-test123',
    ...
)

web.WebAppApplicationSettings(
    'myappsetting',
    name=web_app.name,
    resource_group_name='my-resource-group',
    properties={'mySetting': '123456'},
    opts=ResourceOptions(depends_on=[web_app])
)
It turns out that WebAppApplicationSettings replaces the entire app settings with the value given in the properties parameter, which is not what I need. I need to append a new setting to the existing settings.
So, I tried this:
1. Fetch the existing settings from the web app using list_web_app_application_settings_output
2. Add the new setting to the existing settings
3. Update the app settings using WebAppApplicationSettings
from pulumi import Output, ResourceOptions
from pulumi_azure_native import web

web_app = web.WebApp(
    'my-web-app-test123',
    ...
)

current_app_settings = web.list_web_app_application_settings_output(
    name=web_app.name,
    resource_group_name='my-resource-group',
    opts=ResourceOptions(depends_on=[web_app])
).properties

my_new_setting = {'mySetting': '123456'}

# Build and return a merged dict (dict.update would return None).
new_app_settings = Output.all(current=current_app_settings).apply(
    lambda args: {**args['current'], **my_new_setting}
)

web.WebAppApplicationSettings(
    'myappsetting',
    name=web_app.name,
    resource_group_name='my-resource-group',
    properties=new_app_settings,
    opts=ResourceOptions(depends_on=[web_app])
)
However, this doesn't work either and throws the following error during pulumi up:
Exception: invoke of azure-native:web:listWebAppApplicationSettings failed: invocation of azure-native:web:listWebAppApplicationSettings returned an error: request failed /subscriptions/--------------/resourceGroups/pulumi-temp2/providers/Microsoft.Web/sites/my-web-app-test123/config/appsettings/list: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Web/sites/my-web-app-test123' under resource group 'pulumi-temp2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"
error: an unhandled error occurred: Program exited with non-zero exit code: 1
Is there a way to add a new app setting to an Azure Web App using Pulumi without changing or removing the existing settings?
Here's a suboptimal workaround: App Configuration and Enable Azure Function Dynamic Configuration.
And as far as I can tell it comes with some drawbacks:
cold start time may increase
additional costs
care must be taken to avoid redundant calls (costly)
additional boilerplate code needed for every function app
Maybe there's a better way; I hope there is, but I haven't found one yet either.
After some searching and reaching out to the pulumi-azure-native people, I found an answer:
The Azure REST API doesn't currently support this feature, i.e., updating a single Web App setting independently of the others. So there isn't such a feature in pulumi-azure-native either.
As a workaround, I kept all the app settings I needed to add, update, or remove in a dictionary throughout my Python script, and then passed them to the web.WebAppApplicationSettings class at the end of the script so that they are applied all at once to the Web App resource. This is how I solved my problem.
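A minimal sketch of that pattern; the setting names and values here are hypothetical:

from pulumi import ResourceOptions
from pulumi_azure_native import web

web_app = web.WebApp(
    'my-web-app-test123',
    resource_group_name='my-resource-group',
)

# Accumulate every setting in one dict as the script builds resources...
app_settings = {}
app_settings['MY_FIRST_SETTING'] = 'value-1'
app_settings['MY_SECOND_SETTING'] = 'value-2'

# ...and apply them in a single WebAppApplicationSettings at the very end,
# so the full, final set replaces whatever is on the Web App.
web.WebAppApplicationSettings(
    'myappsetting',
    name=web_app.name,
    resource_group_name='my-resource-group',
    properties=app_settings,
    opts=ResourceOptions(depends_on=[web_app]),
)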
I recently did a deployment to West Europe by mistake. I deleted the resources and thought I could redeploy to UK South; however, whenever I try to redeploy I get the error below:
- At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details. (Code: DeploymentFailed)
- {
    "error": {
      "code": "InvalidDeploymentLocation",
      "message": "Invalid deployment location 'uk south'. The deployment already exists in location 'westeurope'."
    }
  } (Code: Conflict)
CorrelationId: 8c2a4cd6-4409-46c3-9b7c-544134f0f942
The master template calls several nested templates, and I'm having trouble identifying where the issue lies. I have checked that there's no soft delete enabled anywhere, and the resources have definitely been deleted from Azure.
Help..
Thanks in advance :)
Not sure if this is resolved, but here is what I did:
1. Go to Azure Portal -> Subscriptions
2. On the left pane, under Settings, click on Deployments
3. Find the deployment that was done earlier and delete it manually
This should fix your problem.
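If you prefer scripting it, the same cleanup can be done with the Azure SDK for Python; a minimal sketch, assuming a subscription-level deployment and the azure-identity and azure-mgmt-resource packages (the deployment name and subscription ID are placeholders):

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), '<subscription-id>')

# Delete the stale subscription-scope deployment record so a new
# deployment with the same name can target a different location.
client.deployments.begin_delete_at_subscription_scope('my-deployment-name').result()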
I'm new to Google Cloud Dataflow, as is probably obvious from my questions below.
I've got a dataflow application written and can get it to run without issue using my personal credentials both locally and on a GCE instance. However, I can't seem to crack the proper steps to get it to run using the compute engine instance's service credentials or service credentials I've created using the API & AUTH section of the console. I always get a 401 not authorized error when I run.
Here's what I've tried...
1) Created a virtual machine granting access rights to storage, datastore, SQL, and compute engine. My understanding is that this supposedly created a GCE-specific service account that serves as the server's default credentials. These should be used the same way a user's authentication is used when an app runs on this instance. Here's where I get a 401. My question is: where can I see this service account that was supposedly created? Or do I just trust that it exists somewhere?
2) Created service credentials in the API & Auth portion of the Developers Console. Then used gcloud auth activate-service-account and activated that account by pointing the command at the credentials JSON file I downloaded. Kind of like the OAuth round trip when you use gcloud auth login. Here I also get the 401.
3) The last thing was using the service credentials from step 2 separately from the GCE instance: I created an object that implements the CredentialFactory interface and passed it off to the PipelineOptions. However, when it runs, the app now crashes with an error saying that it is looking for a method, fromOptions, that isn't in the CredentialFactory interface. How the options were configured, what the credential factory looked like, and the stack trace from this follow.
I would be happy to utilize any of the above three methods to make use of service credentials, if I could get any of them to work. Any insight you can provide on what I'm doing wrong, steps I'm leaving out, or other unexplored options would be greatly appreciated. The documentation is a little disjointed. If there is a clear step-by-step guide, a link to it would be sufficient. What I've found so far on my own has been of little assistance.
If I can provide any additional information please let me know.
Here's some code that may be helpful and the stack trace I get when the code runs using the credential factory.
Options setup code looks like this:
GcrDataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
        .withValidation()
        .as(GcrDataflowPipelineOptions.class);
options.setKind("Counties");
options.setCredentialFactoryClass(GoogleCredentialProvider.class);
GoogleCredentialProvider.java
Notice that the JSON file I downloaded as part of creating the service account (renamed) is what's loaded as a resource from my app's class path.
import java.io.IOException;
import java.io.InputStream;
import java.security.GeneralSecurityException;
import java.util.Properties;

import com.google.api.client.auth.oauth2.Credential;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.cloud.dataflow.sdk.util.CredentialFactory; // package assumed from the Dataflow SDK 1.x layout

public class GoogleCredentialProvider implements CredentialFactory {

    @Override
    public Credential getCredential() throws IOException, GeneralSecurityException {
        // Pick the environment-specific properties file from the classpath.
        final String env = System.getProperty("gcr_dataflow_env", "local");
        Properties props = new Properties();
        ClassLoader loader = this.getClass().getClassLoader();
        props.load(loader.getResourceAsStream(env + "-gcr-dataflow.properties"));
        // Load the service account JSON key named in the properties file.
        final String credFileName = props.getProperty("gcloud.dataflow.service.account.file");
        InputStream credStream = loader.getResourceAsStream(credFileName);
        return GoogleCredential.fromStream(credStream);
    }
}
Stacktrace:
java.lang.RuntimeException: java.lang.RuntimeException: Unable to find factory method com.scotcro.gcr.dataflow.components.pipelines.GoogleCredentialProvider#fromOptions
at com.google.cloud.dataflow.sdk.runners.dataflow.BasicSerializableSourceFormat.evaluateReadHelper(BasicSerializableSourceFormat.java:268)
at com.google.cloud.dataflow.sdk.io.Read$Bound$1.evaluate(Read.java:123)
at com.google.cloud.dataflow.sdk.io.Read$Bound$1.evaluate(Read.java:120)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner$Evaluator.visitTransform(DirectPipelineRunner.java:684)
at com.google.cloud.dataflow.sdk.runners.TransformTreeNode.visit(TransformTreeNode.java:200)
at com.google.cloud.dataflow.sdk.runners.TransformTreeNode.visit(TransformTreeNode.java:196)
at com.google.cloud.dataflow.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:99)
at com.google.cloud.dataflow.sdk.Pipeline.traverseTopologically(Pipeline.java:208)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner$Evaluator.run(DirectPipelineRunner.java:640)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner.run(DirectPipelineRunner.java:354)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner.run(DirectPipelineRunner.java:76)
at com.google.cloud.dataflow.sdk.Pipeline.run(Pipeline.java:149)
at com.scotcro.gcr.dataflow.app.GcrDataflowApp.run(GcrDataflowApp.java:65)
at com.scotcro.gcr.dataflow.app.GcrDataflowApp.main(GcrDataflowApp.java:49)
Caused by: java.lang.RuntimeException: Unable to find factory method com.scotcro.gcr.dataflow.components.pipelines.GoogleCredentialProvider#fromOptions
at com.google.cloud.dataflow.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:224)
at com.google.cloud.dataflow.sdk.util.InstanceBuilder.build(InstanceBuilder.java:161)
at com.google.cloud.dataflow.sdk.options.GcpOptions$GcpUserCredentialsFactory.create(GcpOptions.java:180)
at com.google.cloud.dataflow.sdk.options.GcpOptions$GcpUserCredentialsFactory.create(GcpOptions.java:175)
at com.google.cloud.dataflow.sdk.options.ProxyInvocationHandler.getDefault(ProxyInvocationHandler.java:288)
at com.google.cloud.dataflow.sdk.options.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:127)
at com.sun.proxy.$Proxy42.getGcpCredential(Unknown Source)
at com.google.cloud.dataflow.sdk.io.DatastoreIO$Source.getDatastore(DatastoreIO.java:335)
at com.google.cloud.dataflow.sdk.io.DatastoreIO$Source.createReader(DatastoreIO.java:320)
at com.google.cloud.dataflow.sdk.io.DatastoreIO$Source.createReader(DatastoreIO.java:186)
at com.google.cloud.dataflow.sdk.runners.dataflow.BasicSerializableSourceFormat.evaluateReadHelper(BasicSerializableSourceFormat.java:259)
... 13 more
You likely do not have the proper credentials. When you execute a Dataflow job from GCE, the service account attached to the instance will be used for validation by Dataflow.
Did you do this when creating your machines?
1. Create a service account for the instance on GCE: https://cloud.google.com/compute/docs/authentication#using
2. Set the required scopes for using Dataflow, such as storage, compute, and BigQuery: https://www.googleapis.com/auth/cloud-platform
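One quick way to verify both points from inside the instance is to query the GCE metadata server, which reports the attached service account and its scopes. A minimal sketch in Python; the check is purely illustrative:

import urllib.request

def metadata(path):
    # The metadata server requires this header on every request.
    req = urllib.request.Request(
        'http://metadata.google.internal/computeMetadata/v1/' + path,
        headers={'Metadata-Flavor': 'Google'},
    )
    return urllib.request.urlopen(req).read().decode()

# Which service account is attached, and which scopes it carries.
print(metadata('instance/service-accounts/default/email'))
print(metadata('instance/service-accounts/default/scopes'))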
I was wondering if WireCloud offers complete support for object storage with the FI-WARE Testbed instead of FI-LAB. I have successfully integrated WireCloud with the Testbed and have developed a set of widgets that are able to upload/download files to specific containers in the Testbed with success. However, the same widgets do not seem to work in FI-LAB, as I get an error 500 when trying to retrieve the auth tokens (also with the well-known object-storage-test widget), containing the following response:
SyntaxError: Unexpected token
at Object.parse (native)
at create (/home/fiware/fi-ware-keystone-proxy/controllers/Token.js:343:25)
at callbacks (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:164:37)
at param (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:138:11)
at pass (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:145:5)
at Router._dispatch (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:173:5)
at Object.router (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:33:10)
at next (/home/fiware/fi-ware-keystone-proxy/node_modules/express/node_modules/connect/lib/proto.js:195:15)
at Object.handle (/home/fiware/fi-ware-keystone-proxy/server.js:31:5)
at next (/home/fiware/fi-ware-keystone-proxy/node_modules/express/node_modules/connect/lib/proto.js:195:15)
I noticed that the token provided in the beginning (to start the transaction) is:
token: Object
id: "%fiware_token%"
Any idea regarding what might have gone wrong?
The WireCloud instance available at FI-WARE's Testbed is always the latest stable version, while the FI-LAB instance is currently outdated; we're working on updating it as soon as possible. One of the things that changed between those versions is the Object Storage API, so, sorry for the inconvenience, you will not be able to use widgets/operators that use the Object Storage in both environments.
Anyway, the response you obtained seems to indicate that the object storage instance you are accessing is not working properly, so you will need to send an email to one of the available mailing lists to get help (fiware-testbed-help or fiware-lab-help), describing what is happening to you (remember to include your account information, as there are several object storage nodes and some can be up while others are down).
Regarding the strange request body:
"token": {
id: "%fiware_token%"
}
This behaviour is normal, as the WireCloud client code has no direct access to the user's IdM token. It's WireCloud's proxy that replaces the %fiware_token% pattern with the correct value.