All Google Cloud Functions fail to deploy - google-cloud-functions

I have two projects in Google Cloud, both using Firebase. In Firebase I'm adding the Trigger Email extension, which needs to deploy a Cloud Function. In one project it succeeds and in the other it fails. I can't seem to deploy ANY function that I write, even the simplest example.
Below is what I'm getting with one of my deploy attempts. Any help is greatly appreciated.
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "status": {
      "code": 13,
      "message": "Function deployment failed due to a health check failure. This usually indicates that your code was built successfully but failed during a test execution. Examine the logs to determine the cause. Try deploying again in a few minutes if it appears to be transient."
    },
    "authenticationInfo": {
      "principalEmail": "xxxxxxxxxxxxxxxxxxx"
    },
    "serviceName": "cloudfunctions.googleapis.com",
    "methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
    "resourceName": "projects/wod-rewards/locations/us-central1/functions/ext-firestore-send-email-processQueue"
  },
  "insertId": "-xxxxxxxxx",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "function_name": "ext-firestore-send-email-processQueue",
      "project_id": "xxxxxxxx",
      "region": "us-central1"
    }
  },
  "timestamp": "2022-02-14T20:39:25.365473Z",
  "severity": "ERROR",
  "logName": "projects/wod-rewards/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operations/xxxxxxxx",
    "producer": "cloudfunctions.googleapis.com",
    "last": true
  },
  "receiveTimestamp": "2022-02-14T20:39:25.706517396Z"
}

You should check your Cloud Functions logs in Google Cloud; it's possible that you have a configuration issue with your billing settings that is blocking some tasks from executing in your functions.
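To rule out the extension itself, it can also help to deploy a trivial function of your own and watch its logs. A minimal sketch (function name, runtime, and region are illustrative):

// index.js — a minimal HTTP function to check whether any deployment succeeds.
// Deploy with: gcloud functions deploy helloWorld --runtime nodejs16 --trigger-http --region us-central1
// Then inspect logs with: gcloud functions logs read helloWorld --region us-central1
exports.helloWorld = (req, res) => {
  res.status(200).send('OK');
};

If even this fails with the same health-check error, the problem is likely project-level (for example billing configuration) rather than the function code.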

Related

APIM logger resource referring to AI in other subscription

Trying to enable Application Insights on an API Management service. The Application Insights instance is in another subscription. The parameter "ApplicationInsightsInstanceRI" contains the full resource ID of the Application Insights instance.
Error:
InvalidResourceType: The resource type could not be found in the namespace 'Microsoft.Insights' for api version '2019-12-01'.
"type": "Microsoft.ApiManagement/service/loggers",
"name": "[concat(parameters('apiManagementServiceName'), '/', parameters('ApplicationInsightsInstanceName'))]",
"dependsOn": ["[resourceId('Microsoft.ApiManagement/service', parameters('apiManagementServiceName'))]"],
"apiVersion": "2018-06-01-preview",
"properties": {
"loggerType": "applicationInsights",
"description": "Logger resources to APIM",
"resourceid": "[parameters('ApplicationInsightsInstanceRI')]"
"credentials": {
"instrumentationKey": "[reference(resourceId('Microsoft.Insights/component', parameters('ApplicationInsightsInstanceName')), '2019-12-01', 'Full').properties.InstrumentationKey]",
Any idea of why this error occurs?
This error is due to an invalid instrumentation key. After specifying the instrumentation key directly in my template, I was able to get the desired result and the API call works fine.
Below is the template that worked for me.
{
  "type": "Microsoft.ApiManagement/service/loggers",
  "apiVersion": "2022-04-01-preview",
  "name": "[concat(parameters('service_HelloWorld_APimanagement_name'), '/sangammigrationmetrics')]",
  "dependsOn": [
    "[resourceId('Microsoft.ApiManagement/service', parameters('service_HelloWorld_APimanagement_name'))]"
  ],
  "properties": {
    "loggerType": "applicationInsights",
    "credentials": {
      "instrumentationKey": "{{<INSTRUMENTATION_KEY>}}"
    },
    "isBuffered": true,
    "resourceId": "[parameters('components_SangamMigrationMetrics_externalid')]"
  }
},
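For what it's worth, the full template containing this resource can be deployed with the Azure CLI (resource group and file name below are placeholders):

az deployment group create --resource-group <resource-group> --template-file template.json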

FluxMonitor locally: FROM address in transaction is wrong

I'm trying to run decentralized-model locally. I've managed to deploy:
Link contract
AggregatorProxy
FluxAggregator
Consumer contract
Oracle node (offchain)
External adapters (coingecko + coinapi)
I'm mainly struggling with the last piece, which is creating a job that uses the FluxMonitor initiator.
I've created the following job, where "0x5379A65A620aEb405C5C5338bA1767AcB48d6750" is the address of the FluxAggregator contract:
{
  "initiators": [
    {
      "type": "fluxmonitor",
      "params": {
        "address": "0x5379A65A620aEb405C5C5338bA1767AcB48d6750",
        "requestData": {
          "data": {
            "from": "ETH",
            "to": "USD"
          }
        },
        "feeds": [
          {
            "bridge": "coinapi_cl_ea"
          },
          {
            "bridge": "coingecko_cl_ea"
          }
        ],
        "threshold": 1,
        "absoluteThreshold": 1,
        "precision": 8,
        "pollTimer": {
          "period": "15m0s"
        },
        "idleTimer": {
          "duration": "1h0m0s"
        }
      }
    }
  ],
  "tasks": [
    {
      "type": "NoOp"
    }
  ]
}
Unfortunately, it doesn't work; it makes my local Ganache fail with this error: "Error: The nonce generation function failed, or the private key was invalid".
I've put Ganache in debug mode in order to log requests to the blockchain, and noticed the following call:
eth_call
{
  "jsonrpc": "2.0",
  "id": 28,
  "method": "eth_call",
  "params": [
    {
      "data": "0xfeaf968c",
      "from": "0x0000000000000000000000000000000000000000",
      "to": "0x5379a65a620aeb405c5c5338ba1767acb48d6750"
    },
    "latest"
  ]
}
The signature of the function is correct:
"latestRoundData()": "feaf968c"
However, what seems weird is that the from address is 0x0. Any idea why my Oracle node doesn't use its key to sign the transaction?
Thanks a lot.
The problem comes from Ganache. In fact, I wrote a Truffle script which:
calls "latestRoundData()" populating the FROM with a valid address
calls "latestRoundData()" populating the FROM with a 0x0 address
Then I ran the script twice:
Connecting to ganache-cli --> the 1st call is successful while the 2nd call fails
Connecting to the Kovan testnet --> both calls are successful
I've just opened an issue for the ganache-cli team: https://github.com/trufflesuite/ganache-cli/issues/840
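For reference, the repro script was along these lines (a sketch, assuming the FluxAggregator artifact is available in the Truffle project):

// repro.js — run with: truffle exec repro.js --network <network>
// Calls latestRoundData() twice: once with a valid FROM, once with the zero address.
const FluxAggregator = artifacts.require("FluxAggregator");

module.exports = async function (callback) {
  try {
    const aggregator = await FluxAggregator.at("0x5379A65A620aEb405C5C5338bA1767AcB48d6750");
    const accounts = await web3.eth.getAccounts();

    // 1st call: valid FROM address — succeeds on both ganache-cli and Kovan
    console.log(await aggregator.latestRoundData({ from: accounts[0] }));

    // 2nd call: zero FROM address — fails on ganache-cli, succeeds on Kovan
    console.log(await aggregator.latestRoundData({ from: "0x0000000000000000000000000000000000000000" }));
  } catch (err) {
    console.error(err);
  }
  callback();
};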

NotAuthorizedOrNotFound when pushing custom metric

When I try to push a custom metric to the Oracle Cloud Monitoring service using the Oracle Cloud CLI, I receive the following error:
ServiceError:
{
  "code": "NotAuthorizedOrNotFound",
  "message": "Authorization failed or requested resource not found.",
  "opc-request-id": "request id",
  "status": 404
}
This occurs when using the Administrator account and when using an instance principal which has monitoring permission.
Here is the JSON that I am pushing to the Monitoring service:
[
  {
    "namespace": "myFirstNamespace",
    "compartmentId": "tenant id",
    "resourceGroup": "myFirstResourceGroup",
    "name": "successRate",
    "dimensions": {
      "resourceId": "ocid1.exampleresource.region1.phx.exampleuniqueID",
      "appName": "myAppA"
    },
    "metadata": {
      "unit": "percent",
      "displayName": "MyAppA Success Rate"
    },
    "datapoints": [
      {
        "timestamp": "2021-06-01T22:19:20Z",
        "value": 83.0
      }
    ]
  }
]
The CLI command that I am using is:
oci monitoring metric-data post --metric-data file://metric-data.json
The OCI CLI command should be:
oci monitoring metric-data post --metric-data file://metric-data.json --endpoint https://telemetry-ingestion.{{ region }}.oraclecloud.com
replacing {{ region }} with your region. The missing piece was the --endpoint https://telemetry-ingestion.{{ region }}.oraclecloud.com parameter: custom metrics must be posted to the telemetry-ingestion endpoint.
This looks like an authorization issue. Please cross-check that the instance principal has all the required permissions assigned, and review the documents Publishing Custom Metrics and Overview of Monitoring.
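For the instance-principal route, a policy along these lines is typically what grants the needed permission (dynamic-group and compartment names are placeholders):

Allow dynamic-group <dynamic-group-name> to use metrics in compartment <compartment-name>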

Google App Engine ERROR Throttling refreshCfg with Gcloud MySQL instance

Continuous "failed connection attempt" errors are occurring with a Google Cloud SQL MySQL instance (public IP) used from Google App Engine.
These are some of the logs:
receiveTimestamp resource.labels.module_id resource.labels.project_id resource.labels.version_id resource.labels.zone resource.type severity textPayload timestamp
2021-06-08T05:48:43.497385728Z zzzzzzzzzz xxxxxxx 4 eu5 gae_app ERROR Throttling refreshCfg(xxxxxxx:europe-west1:yyyyyyyyy): it was only called 80.802µs ago 2021-06-08T05:48:43.494284Z
2021-06-08T05:19:08.394840567Z zzzzzzzzzz xxxxxxx 4 eu5 gae_app ERROR Throttling refreshCfg(xxxxxxx:europe-west1:yyyyyyyyy): it was only called 42.519µs ago 2021-06-08T05:19:08.391909Z
2021-06-08T05:13:42.889911567Z zzzzzzzzzz xxxxxxx 4 eu5 gae_app ERROR Throttling refreshCfg(xxxxxxx:europe-west1:yyyyyyyyy): it was only called 73.279µs ago 2021-06-08T05:13:42.888659Z
2021-06-08T04:47:07.470804269Z zzzzzzzzzz xxxxxxx 4 eu5 gae_app ERROR Throttling refreshCfg(xxxxxxx:europe-west1:yyyyyyyyy): it was only called 85.928µs ago 2021-06-08T04:47:07.467377Z
I tried some different configurations of max_connections, pool_size, pool_timeout with no success.
I have consulted this previous issue and this documentation.
Some help would be appreciated.
More information: the error is always preceded by this entry in the log record:
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {},
"authenticationInfo": {
"principalEmail": "bbbbbbbb#appspot.gserviceaccount.com",
"serviceAccountDelegationInfo": [
{
"firstPartyPrincipal": {
"principalEmail": "app-engine-appserver#prod.google.com"
}
}
]
},
"requestMetadata": {
"callerIp": "2600:1900:2001:12::8",
"requestAttributes": {
"time": "2021-06-09T05:59:27.400680Z",
"auth": {}
},
"destinationAttributes": {}
},
"serviceName": "cloudsql.googleapis.com",
"methodName": "cloudsql.instances.connect",
"authorizationInfo": [
{
"resource": "instances/aaaaaaaaaaa",
"permission": "cloudsql.instances.connect",
"granted": true,
"resourceAttributes": {}
}
],
"resourceName": "instances/aaaaaaaaa",
"request": {
"project": "bbbbbbb",
"#type": "type.googleapis.com/google.cloud.sql.v1beta4.SqlInstancesCreateEphemeralCertRequest",
"instance": "zzzzzzzzz",
"body": {}
},
"response": {
"#type": "type.googleapis.com/google.cloud.sql.v1beta4.SslCert",
"kind": "sql#sslCert"
}
},
"insertId": "-rgtsssssssssss",
"resource": {
"type": "cloudsql_database",
"labels": {
"region": "europe-west1",
"project_id": "bbbbbbbb",
"database_id": "aaaaaaaaaaaaaaa"
}
},
"timestamp": "2021-06-09T05:59:27.381352Z",
"severity": "NOTICE",
"logName":
"projects/demosmf/logs/cloudaudit.googleapis.com%2Factivity",
"receiveTimestamp": "2021-06-09T05:59:27.746071609Z"
I think it has something to do with the management of SSL certificates. I have verified that the application certificates are valid and have not expired.
This error has been reported via Google's Public Issue Tracker.
You can follow the thread I mentioned above to track the progress.

Data Factory: AzureSQL in- and output for pipeline activity type AzureMLBatchExecution

In Azure Data Factory, I’m trying to call an Azure Machine Learning model from a Data Factory pipeline. I want to use an Azure SQL table as input and another Azure SQL table for the output.
First I deployed a Machine Learning (classic) web service. Then I created an Azure Data Factory pipeline, using a linked service (type 'AzureML', with the Request URI and API key of the ML web service) and an input and an output dataset (type 'AzureSqlTable').
Deploying and provisioning succeeded. The pipeline starts as scheduled, but stays 'Running' without any result, and the pipeline activity is not shown in Monitor & Manage: Activity Windows.
On various sites and tutorials, I only find JSON scripts using the activity type 'AzureMLBatchExecution' with blob in- and outputs. I want to use Azure SQL in- and output, but I can't get this working.
Can someone provide a sample JSON-script or tell me what’s possibly wrong with the code below?
Thanks!
{
  "name": "Predictive_ML_Pipeline",
  "properties": {
    "description": "use MyAzureML model",
    "activities": [
      {
        "type": "AzureMLBatchExecution",
        "typeProperties": {},
        "inputs": [
          {
            "name": "AzureSQLDataset_ML_Input"
          }
        ],
        "outputs": [
          {
            "name": "AzureSQLDataset_ML_Output"
          }
        ],
        "policy": {
          "timeout": "02:00:00",
          "concurrency": 3,
          "executionPriorityOrder": "NewestFirst",
          "retry": 1
        },
        "scheduler": {
          "frequency": "Week",
          "interval": 1
        },
        "name": "My_ML_Activity",
        "description": "prediction analysis on ML batch input",
        "linkedServiceName": "AzureMLLinkedService"
      }
    ],
    "start": "2017-04-04T09:00:00Z",
    "end": "2017-04-04T18:00:00Z",
    "isPaused": false,
    "hubName": "myml_hub",
    "pipelineMode": "Scheduled"
  }
}
With a little help from a Microsoft technician, I've got this working. The JSON script above only needed a change in the schedule section:
"start": "2017-04-01T08:45:00Z",
"end": "2017-04-09T18:00:00Z",
A pipeline is active only between its start time and end time. Because the scheduler is set to weekly, the pipeline is triggered at the start of the week, and that date should fall within the start and end dates. For more details about scheduling, see: https://learn.microsoft.com/en-us/azure/data-factory/data-factory-scheduling-and-execution
The Azure SQL Input dataset should look like this:
{
  "name": "AzureSQLDataset_ML_Input",
  "properties": {
    "published": false,
    "type": "AzureSqlTable",
    "linkedServiceName": "SRC_SQL_Azure",
    "typeProperties": {
      "tableName": "dbo.Azure_ML_Input"
    },
    "availability": {
      "frequency": "Week",
      "interval": 1
    },
    "external": true,
    "policy": {
      "externalData": {
        "retryInterval": "00:01:00",
        "retryTimeout": "00:10:00",
        "maximumRetry": 3
      }
    }
  }
}
I added the external and policy properties to this dataset (see script above) and after that, it worked.
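For completeness, the matching output dataset can look like this (a sketch along the same lines; the table name is illustrative, and external/policy are not needed for a dataset the pipeline produces):

{
  "name": "AzureSQLDataset_ML_Output",
  "properties": {
    "published": false,
    "type": "AzureSqlTable",
    "linkedServiceName": "SRC_SQL_Azure",
    "typeProperties": {
      "tableName": "dbo.Azure_ML_Output"
    },
    "availability": {
      "frequency": "Week",
      "interval": 1
    }
  }
}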