AppFlow update_flow error: "Destination object for the destination connector can not be updated"

Has anyone faced this error while calling update_flow for AppFlow?
errorMessage": "An error occurred (ValidationException) when calling the UpdateFlow operation: Update Flow request failed due to:[Destination object for the destination connector can not be updated]",
"errorType": "ValidationException",
"stackTrace": [
" File "/var/task/lambda_function.py", line 7, in lambda_handler\n response = client.update_flow (\n",
" File "/var/runtime/botocore/client.py", line 357, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
" File "/var/runtime/botocore/client.py", line 676, in _make_api_call\n raise error_class(parsed_response, operation_name)\n"
What could be the cause?

This is resolved. AppFlow doesn't allow changing the destination folders via update_flow; you have to set the destination exactly the same as in the existing flow.
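For anyone hitting this, a minimal sketch of the workaround: read the existing flow with describe_flow and echo its destination config back unchanged (the flow name is illustrative, not from the original post):

import boto3

client = boto3.client("appflow")

# Read the flow's current definition so the destination can be passed back as-is.
flow = client.describe_flow(flowName="my-flow")  # hypothetical flow name

client.update_flow(
    flowName="my-flow",
    triggerConfig=flow["triggerConfig"],
    sourceFlowConfig=flow["sourceFlowConfig"],
    # Must match the existing flow exactly; changing the destination folder here
    # triggers "Destination object for the destination connector can not be updated".
    destinationFlowConfigList=flow["destinationFlowConfigList"],
    tasks=flow["tasks"],
)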


Why can't Cloud Function access metadata server in Vertex AI example?

I followed this Vertex AI tutorial. However, at the last step, when the Cloud Function calls the prediction endpoint, it gets the failure below.
This means it could not even access the metadata server, i.e., it is not a permissions failure (though I did check that the myproject#appspot.gserviceaccount.com service account does have the Project Editor role, as specified). It is also an error strictly in Cloud Functions and IAM, not in Vertex AI or other ML systems.
What is going wrong here?
AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x3e0961671dd0>" raised exception!
Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/credentials.py", line 113, in refresh
    request, service_account=self._service_account_email, scopes=scopes
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/_metadata.py", line 263, in get_service_account_token
    token_json = get(request, path, params=params)
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/_metadata.py", line 187, in get
    response,
google.auth.exceptions.TransportError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform from the Google Compute Enginemetadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform\\n'", <google.auth.transport.requests._Response object at 0x3e095a9f4c50>)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/grpc/_plugin_wrapping.py", line 78, in __call__
    context, _AuthMetadataPluginCallback(callback_state, callback))
  File "/env/local/lib/python3.7/site-packages/google/auth/transport/grpc.py", line 101, in __call__
    callback(self._get_authorization_headers(context), None)
  File "/env/local/lib/python3.7/site-packages/google/auth/transport/grpc.py", line 88, in _get_authorization_headers
    self._request, context.method_name, context.service_url, headers
  File "/env/local/lib/python3.7/site-packages/google/auth/credentials.py", line 133, in before_request
    self.refresh(request)
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/credentials.py", line 117, in refresh
    six.raise_from(new_exc, caught_exc)
  File "<string>", line 3, in raise_from
google.auth.exceptions.RefreshError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform from the Google Compute Enginemetadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform\\n'", <google.auth.transport.requests._Response object at 0x3e095a9f4c50>)
Prediction request failed: <class 'google.api_core.exceptions.ServiceUnavailable'>: 503 Getting metadata from plugin failed with error: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform from the Google Compute Enginemetadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform\\n'", <google.auth.transport.requests._Response object at 0x3e095a9f4c50>)
Function execution took 673 ms, finished with status code: 500
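For what it's worth, the call that fails is just a token fetch from the metadata server, so it can be reproduced from inside the function with a plain HTTP request — a minimal sketch (the requests dependency and the "default" service-account alias are assumptions, not part of the tutorial):

import requests

# Ask the metadata server for a token, the same way google-auth does internally.
# "default" resolves to whatever service account the function runs as.
resp = requests.get(
    "http://metadata.google.internal/computeMetadata/v1/instance/"
    "service-accounts/default/token",
    params={"scopes": "https://www.googleapis.com/auth/cloud-platform"},
    headers={"Metadata-Flavor": "Google"},  # required, or the server rejects the request
)
print(resp.status_code, resp.text[:200])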

Google Cloud Function error "OperationError: code=3, message=Function failed on loading user code"

I get an error from time to time when deploying nodejs10 Cloud Functions to GCP. The error seems to go away on its own; I just redeploy the same thing a few times. Does anyone know what causes it? Here's the log:
command: gcloud beta functions deploy exchangeIcon --verbosity debug --runtime nodejs10 --memory 128 --region europe-west1 --timeout 5 --trigger-http --set-env-vars=FUNCTION_REGION=europe-west1,BUILD_DATE=2019-05-09T10:01:05.497Z --entry-point app
DEBUG: Running [gcloud.beta.functions.deploy] with arguments: [--entry-point: "app", --memory: "134217728", --region: "europe-west1", --runtime: "nodejs10", --set-env-vars: "OrderedDict([(u'FUNCTION_REGION', u'europe-west1'), (u'BUILD_DATE', u'2019-05-09T10:01:05.497Z')])", --timeout: "5", --trigger-http: "True", --verbosity: "debug", NAME: "exchangeIcon"]
INFO: Not using a .gcloudignore file.
INFO: Not using a .gcloudignore file.
Deploying function (may take a while - up to 2 minutes)...
..........................................................................failed.
DEBUG: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message:
Traceback (most recent call last):
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 985, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 795, in Run
resources = command_instance.Run(args)
File "/Users/me/Downloads/google-cloud-sdk/lib/surface/functions/deploy.py", line 231, in Run
enable_vpc_connector=True)
File "/Users/me/Downloads/google-cloud-sdk/lib/surface/functions/deploy.py", line 175, in _Run
return api_util.PatchFunction(function, updated_fields)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 300, in CatchHTTPErrorRaiseHTTPExceptionFn
return func(*args, **kwargs)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 356, in PatchFunction
operations.Wait(op, messages, client, _DEPLOY_WAIT_NOTICE)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 126, in Wait
_WaitForOperation(client, request, notice)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 101, in _WaitForOperation
sleep_ms=SLEEP_MS)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 219, in RetryOnResult
result = func(*args, **kwargs)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 65, in _GetOperationStatus
raise exceptions.FunctionsError(OperationErrorToString(op.error))
FunctionsError: OperationError: code=3, message=Function failed on loading user code. Error message:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code.
In my Stackdriver Logging I just see INVALID_ARGUMENT, but nothing else.
The problem stems from your command-line flags not being properly formatted.
--verbosity=debug
is the proper way to write it, with an equals sign between flag and value. The same goes for your other flags, such as --runtime.
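Applied to the command above, that would look like the following (same values as the original, only the = separators added — a sketch of the suggestion, not a verified fix for the intermittent failure):

gcloud beta functions deploy exchangeIcon \
  --verbosity=debug \
  --runtime=nodejs10 \
  --memory=128 \
  --region=europe-west1 \
  --timeout=5 \
  --trigger-http \
  --set-env-vars=FUNCTION_REGION=europe-west1,BUILD_DATE=2019-05-09T10:01:05.497Z \
  --entry-point=app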

Extra character placed in path of DSC config - Azure PS

I'm trying to work out why I'm getting the error below; the path to the configuration.ps1 file should be configuration\configuration.ps1, but the deployment is failing because the path is being read as configuration.0\configuration.ps1.
The whole error message is below. Has anyone else come across this?
"status": "Failed",
"error": {
"code": "ResourceDeploymentFailure",
"message": "The resource operation completed with terminal provisioning state 'Failed'.",
"details": [
{
"code": "VMExtensionProvisioningError",
"message": "VM has reported a failure when processing extension 'CreateADPDC'. Error message: \"The DSC Extension received an incorrect input: An error occurred while
executing script or module 'configuration.ps1': The term 'C:\\Packages\\Plugins\\Microsoft.Powershell.DSC\\2.77.0.0\\bin\\..\\DSCWork\\configuration.0\\configuration.ps1' is not
recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try
again..\nPlease correct the input and retry executing the extension.\"."
}
]
}
}'
At line:4 char:14
+ ... New-AzureRmResourceGroupDeployment -Name "coredeployment1 ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [New-AzureRmResourceGroupDeployment], Exception
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzureResourceGroupDeploymentCmdlet
New-AzureRmResourceGroupDeployment : 18:31:08 - VM has reported a failure when processing extension 'CreateADPDC'. Error message: "The DSC Extension received an incorrect input: An
error occurred while executing script or module 'configuration.ps1': The term
'C:\Packages\Plugins\Microsoft.Powershell.DSC\2.77.0.0\bin\..\DSCWork\configuration.0\configuration.ps1' is not recognized as the name of a cmdlet, function, script file, or operable
program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again..
Please correct the input and retry executing the extension.".
Thanks in advance :)
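For reference, the failing path is assembled by the DSC extension itself as ...\DSCWork\<archive name>.<sequence number>\<script>, from the configuration block in the extension's settings. A hypothetical fragment of the ARM template definition for the 'CreateADPDC' extension might look like this (the URL and archive name are illustrative, not taken from the post):

"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.77",
"settings": {
  "configuration": {
    "url": "https://example.com/dsc/configuration.zip",
    "script": "configuration.ps1",
    "function": "CreateADPDC"
  }
}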

Data Migration From SQL Server to MySQL

I am getting this AttributeError: 'NoneType' object has no attribute 'split' when I try to migrate a SQL Server database to a MySQL database in MySQL Workbench.
These are the log details:
Starting...
Connect to source DBMS...
- Connecting to source...
Connecting to Mssql#sa...
Opening ODBC connection to Driver=sa;DATABASE=;UID=sa;PWD=XXXX...
Connected to Mssql# 11.0.2100.60
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py", line 147, in connect
_connections[connection.__id__]["version"] = getServerVersion(connection)
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py", line 174, in getServerVersion
ver_parts = [ int(part) for part in ver_string.split('.') ] + 4*[ 0 ]
AttributeError: 'NoneType' object has no attribute 'split'
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py", line 174, in getServerVersion
ver_parts = [ int(part) for part in ver_string.split('.') ] + 4*[ 0 ]
AttributeError: 'NoneType' object has no attribute 'split'
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/Resources/libraries/workbench/wizard_progress_page_widget.py", line 65, in run
self.func()
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/migration_source_selection.py", line 406, in task_connect
raise e
SystemError: AttributeError("'NoneType' object has no attribute 'split'"): error calling Python module function DbMssqlRE.getServerVersion
*** ERROR: Error during Connect to source DBMS: AttributeError("'NoneType' object has no attribute 'split'"): error calling Python module function DbMssqlRE.getServerVersion
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/Resources/libraries/workbench/wizard_progress_page_widget.py", line 543, in update_status
task.run()
File "/Applications/MySQLWorkbench.app/Contents/Resources/libraries/workbench/wizard_progress_page_widget.py", line 80, in run
raise e
SystemError: AttributeError("'NoneType' object has no attribute 'split'"): error calling Python module function DbMssqlRE.getServerVersion
*** ERROR: Exception in task 'Connect to source DBMS': SystemError('AttributeError("\'NoneType\' object has no attribute \'split\'"): error calling Python module function DbMssqlRE.getServerVersion',)
Failed
I tried a solution (given below, taken from https://bugs.mysql.com/bug.php?id=66030&thanks=3&notify=195), but it didn't help; I still get the same error. Please help me.
Solution:
We'll need some help from you to diagnose this one. With a text editor, open the /Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py file, and around line 174 you'll find a line that looks like:
ver_string = execute_query(connection, "SELECT SERVERPROPERTY('ProductVersion')").fetchone()[0]
Change that to:
ver_string = execute_query(connection, "SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)").fetchone()[0]
Then save and retry. Thanks!
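For context on why that patch can help: SERVERPROPERTY returns sql_variant, which some ODBC driver setups hand back to Python as None, hence the 'NoneType' error; casting to VARCHAR yields an ordinary string. You can compare the two results directly against the source server (an illustration, not part of the bug report):

-- sql_variant result: some drivers cannot marshal this type and return NULL/None
SELECT SERVERPROPERTY('ProductVersion') AS raw_variant;

-- VARCHAR result: a plain string such as '11.0.2100.60'
SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR(128)) AS cast_string;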

rabbitmq_parameter module JSON argument

I'm trying to use the rabbitmq_parameter Ansible module to set a federation upstream set, generating the set dynamically, with something like this:
- name: Set federation upstream set
  rabbitmq_parameter:
    component: federation-upstream-set
    name: my-upstreams
    vhost: my-vhost
    value: "{{ my_upstream_set }}"
The variable my_upstream_set is defined in a separate host variable file, like so:
my_upstream_set: [{"upstream": "upstream1"}, {"upstream": "upstream2"}]
However, no matter how I generate the value argument, which must be JSON (with or without quotes, with single or double quotes, YAML- or JSON-formatted), I can't get this to work. I get either the task failing with "stderr: Error: JSON decoding error", or the following error:
failed: [myhost] => {"failed": true, "parsed": false}
invalid output was: Traceback (most recent call last):
File "<stdin>", line 1498, in <module>
File "<stdin>", line 142, in main
File "<stdin>", line 104, in set
File "<stdin>", line 88, in _exec
File "<stdin>", line 1351, in run_command
File "/usr/lib/python2.7/posixpath.py", line 261, in expanduser
if not path.startswith('~'):
AttributeError: 'list' object has no attribute 'startswith'
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
I've tried running the task with a hardcoded value (so, directly in the task file) and it works as expected, but I have no way of integrating variables into that. Any idea what I might be doing wrong here? Thanks!
my_upstream_set should be a JSON string; in your case it is a list.
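One common fix is to serialize the variable with Jinja2's to_json filter, so the module receives a JSON string rather than a YAML list — the same task as above, sketched with that one change (untested against your setup):

- name: Set federation upstream set
  rabbitmq_parameter:
    component: federation-upstream-set
    name: my-upstreams
    vhost: my-vhost
    value: "{{ my_upstream_set | to_json }}"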