Is it possible to send a message from the ejabberd server manually? - ejabberd

I am trying to send a message from the ejabberd server to a client manually. I created a module that uses the offline_message_hook to forward the message to FCM and act on the FCM response. If the FCM response contains no error, I try to send an acknowledgement message back to the sender saying that the message was delivered to FCM. I am using Erlang to write modules for the ejabberd server. When I try to send a message like this:
ejabberd_ctl:send_message("normal", From, To, "ack", "101020").
I got this error:
exception error: undefined function ejabberdctl:send_message/5
in function mod_http_offline:create_message/1 (/opt/ejabberd-modules/mod_http_offline.erl, line 36)
in call from ejabberd_hooks:safe_apply/4 (src/ejabberd_hooks.erl, line 236)
in call from ejabberd_hooks:run_fold1/4 (src/ejabberd_hooks.erl, line 217)
in call from ejabberd_sm:route/1 (src/ejabberd_sm.erl, line 146)
in call from ejabberd_router:do_route/1 (src/ejabberd_router.erl, line 399)
in call from ejabberd_router:route/1 (src/ejabberd_router.erl, line 92)
in call from ejabberd_c2s:check_privacy_then_route/2 (src/ejabberd_c2s.erl, line 855)
The error seems to indicate that the function is not accessible from my module, but I don't know which file I need to include to be able to use it.
When I try to send a message like this:
XmlBody = {xmlelement, "message",
           [{"id", []}, {"type", "normal"}, {"from", From}, {"to", To}],
           [{xmlelement, "body", [], [{xmlcdata, <<"Test Message">>}]}]},
ejabberd_router:route(From, To, XmlBody);
I got this error:
exception error: no function clause matching
ejabberd_router:route(<<"e5d6d83c-ea77-4d10-aaac-4e0e38899ac2">>,
<<"67456efc-be57-4cbd-a176-527de2dce19d">>,
{xmlelement,"message",
[{"id",[]},
{"type","normal"},
{"from",
<<"e5d6d83c-ea77-4d10-aaac-4e0e38899ac2">>},
{"to",
<<"67456efc-be57-4cbd-a176-527de2dce19d">>}],
[{xmlelement,"body",[],
[{xmlcdata,<<"Test Message">>}]}]}) (src/ejabberd_router.erl, line 101)
in function mod_http_offline:create_message/1 (/opt/ejabberd-modules/mod_http_offline.erl, line 38)
in call from ejabberd_hooks:safe_apply/4 (src/ejabberd_hooks.erl, line 236)
in call from ejabberd_hooks:run_fold1/4 (src/ejabberd_hooks.erl, line 217)
in call from ejabberd_sm:route/1 (src/ejabberd_sm.erl, line 146)
in call from ejabberd_router:do_route/1 (src/ejabberd_router.erl, line 399)
in call from ejabberd_router:route/1 (src/ejabberd_router.erl, line 92)
in call from ejabberd_c2s:check_privacy_then_route/2 (src/ejabberd_c2s.erl, line 855)
I do not understand this error.
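For reference, the "no function clause matching" error suggests ejabberd_router:route/3 is being called with bare binaries and the old {xmlelement, ...} tuple format, while the ejabberd version in use expects #jid{} records and #xmlel{} packets. A minimal sketch of what that might look like, assuming a release that provides jid:make/3, jid:to_string/1, and the #xmlel{} record (older releases use jlib:make_jid/3 and different header files, so check what your version ships):

%% Include the header that defines #xmlel{} (e.g. fxml.hrl) in your module.
send_ack(FromUser, ToUser, Server, Body) ->
    %% Build proper #jid{} records instead of passing raw binaries.
    From = jid:make(FromUser, Server, <<"">>),
    To = jid:make(ToUser, Server, <<"">>),
    %% Build the stanza as an #xmlel{} record, not an {xmlelement, ...} tuple.
    Packet = #xmlel{name = <<"message">>,
                    attrs = [{<<"type">>, <<"normal">>},
                             {<<"from">>, jid:to_string(From)},
                             {<<"to">>, jid:to_string(To)}],
                    children = [#xmlel{name = <<"body">>,
                                       attrs = [],
                                       children = [{xmlcdata, Body}]}]},
    ejabberd_router:route(From, To, Packet).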

Related

Using multiple gunicorn workers causes the error with status PGRES_TUPLES_OK and no message from the libpq

I have a Flask website that uses the peewee ORM. The connection is managed by FlaskDB.
When I only use 1 gunicorn worker, it works well. But as soon as I use 2 or more gunicorn workers, I start getting this error:
Traceback (most recent call last):
File "/home/user/.local/lib/python3.10/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/home/user/.local/lib/python3.10/site-packages/flask/app.py", line 1519, in full_dispatch_request
return self.finalize_request(rv)
File "/home/user/.local/lib/python3.10/site-packages/flask/app.py", line 1540, in finalize_request
response = self.process_response(response)
File "/home/user/.local/lib/python3.10/site-packages/flask/app.py", line 1888, in process_response
self.session_interface.save_session(self, ctx.session, response)
File "/home/user/project/session.py", line 113, in save_session
saved_session.save()
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 6497, in save
rows = self.update(**field_dict).where(self._pk_expr()).execute()
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 1886, in inner
return method(self, database, *args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 1957, in execute
return self._execute(database)
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 2442, in _execute
cursor = database.execute(self)
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 3112, in execute
return self.execute_sql(sql, params, commit=commit)
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 3096, in execute_sql
with __exception_wrapper__:
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 2873, in __exit__
reraise(new_type, new_type(exc_value, *exc_args), traceback)
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 183, in reraise
raise value.with_traceback(tb)
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 3099, in execute_sql
cursor.execute(sql, params or ())
peewee.DatabaseError: error with status PGRES_TUPLES_OK and no message from the libpq
The peewee documentation states that the connection should be thread-safe, but it seems there are thread-safety issues here.
I use playhouse.pool.PooledPostgresqlDatabase if that matters.
What is the solution to this problem?
I believe this is possibly due to the connection being opened before the workers are forked. I'm not too familiar with gunicorn's worker model, but googling your error reveals similar problems with multiprocessing. Specifically:
When a program uses multiprocessing or fork(), and an Engine object is copied to the child process, Engine.dispose() should be called so that the engine creates brand new database connections local to that fork. Database connections generally do not travel across process boundaries.
That is for SQLAlchemy, but the same should apply to Peewee. Just ensure that all your connections are closed (pool.close_all()) when your workers first start running. Similarly, if you're opening any database connections at module scope, make sure you call close_all() after using them. This way, when your workers start up, they will each have an empty pool of connections.
db = FlaskDB(app)
# ...
db.database.close_all()
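Expanding on the close_all() suggestion, one way to guarantee each forked worker starts with an empty pool is gunicorn's post_fork server hook. A minimal sketch, assuming your Flask app module is named app and exposes the FlaskDB instance as db (adjust the import to your project layout):

# gunicorn.conf.py
def post_fork(server, worker):
    # Close any pooled connections inherited from the master process so
    # this worker opens fresh connections of its own.
    from app import db
    db.database.close_all()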

Why can't Cloud Function access metadata server in Vertex AI example?

I followed this Vertex AI tutorial. However, at the last step, as the Cloud Function calls the prediction endpoint, it gets this failure.
This means it could not even access the metadata server; that is, it is not a permissions failure (though I did check that the myproject#appspot.gserviceaccount.com service account does have the Project Editor role, as specified). It is also an error strictly in Cloud Functions and IAM, not in Vertex AI or other ML systems.
What is going wrong here?
Function execution took 673 ms, finished with status code: 500
Prediction request failed: <class 'google.api_core.exceptions.ServiceUnavailable'>: 503 Getting metadata from plugin failed with error: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform from the Google Compute Enginemetadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform\\n'", <google.auth.transport.requests._Response object at 0x3e095a9f4c50>)
google.auth.exceptions.RefreshError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform from the Google Compute Enginemetadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform\\n'", <google.auth.transport.requests._Response object at 0x3e095a9f4c50>)
File "<string>", line 3, in raise_from
six.raise_from(new_exc, caught_exc)
File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/credentials.py", line 117, in refresh
self.refresh(request)
File "/env/local/lib/python3.7/site-packages/google/auth/credentials.py", line 133, in before_request
self._request, context.method_name, context.service_url, headers
File "/env/local/lib/python3.7/site-packages/google/auth/transport/grpc.py", line 88, in _get_authorization_headers
callback(self._get_authorization_headers(context), None)
File "/env/local/lib/python3.7/site-packages/google/auth/transport/grpc.py", line 101, in __call__
context, _AuthMetadataPluginCallback(callback_state, callback))
File "/env/local/lib/python3.7/site-packages/grpc/_plugin_wrapping.py", line 78, in __call__
Traceback (most recent call last):
The above exception was the direct cause of the following exception:
google.auth.exceptions.TransportError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform from the Google Compute Enginemetadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/myproject#appspot.gserviceaccount.com/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform\\n'", <google.auth.transport.requests._Response object at 0x3e095a9f4c50>)
response,
File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/_metadata.py", line 187, in get
token_json = get(request, path, params=params)
File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/_metadata.py", line 263, in get_service_account_token
request, service_account=self._service_account_email, scopes=scopes
File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/credentials.py", line 113, in refresh
Traceback (most recent call last):
AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x3e0961671dd0>" raised exception!

AppFlow update_flow error: Destination object for the destination connector can not be updated

Has anyone faced this error while calling update_flow for AppFlow?
errorMessage": "An error occurred (ValidationException) when calling the UpdateFlow operation: Update Flow request failed due to:[Destination object for the destination connector can not be updated]",
"errorType": "ValidationException",
"stackTrace": [
" File "/var/task/lambda_function.py", line 7, in lambda_handler\n response = client.update_flow (\n",
" File "/var/runtime/botocore/client.py", line 357, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
" File "/var/runtime/botocore/client.py", line 676, in _make_api_call\n raise error_class(parsed_response, operation_name)\n"
What could be the cause?
This is resolved. AppFlow does not allow changing the destination folders using update_flow; you have to set the destination exactly the same as in the existing flow.
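One way to stay within that restriction is to read the current flow definition and pass the destination back unchanged. A rough sketch with boto3 (the flow name is hypothetical; double-check the parameter names against your boto3 version):

import boto3

client = boto3.client("appflow")
flow_name = "my-flow"  # hypothetical flow name

# Fetch the existing definition so the destination can be reused as-is.
existing = client.describe_flow(flowName=flow_name)

client.update_flow(
    flowName=flow_name,
    triggerConfig=existing["triggerConfig"],
    sourceFlowConfig=existing["sourceFlowConfig"],
    # Pass the destination back unchanged -- AppFlow rejects updates that
    # modify the destination object.
    destinationFlowConfigList=existing["destinationFlowConfigList"],
    tasks=existing["tasks"],
)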

Google Cloud Function error "OperationError: code=3, message=Function failed on loading user code"

I get an error from time to time when deploying nodejs10 Cloud Functions to GCP. The error seems to go away on its own; I just redeploy the same thing a few times. Does anyone know what causes it? Here's the log:
command: gcloud beta functions deploy exchangeIcon --verbosity debug --runtime nodejs10 --memory 128 --region europe-west1 --timeout 5 --trigger-http --set-env-vars=FUNCTION_REGION=europe-west1,BUILD_DATE=2019-05-09T10:01:05.497Z --entry-point app
DEBUG: Running [gcloud.beta.functions.deploy] with arguments: [--entry-point: "app", --memory: "134217728", --region: "europe-west1", --runtime: "nodejs10", --set-env-vars: "OrderedDict([(u'FUNCTION_REGION', u'europe-west1'), (u'BUILD_DATE', u'2019-05-09T10:01:05.497Z')])", --timeout: "5", --trigger-http: "True", --verbosity: "debug", NAME: "exchangeIcon"]
INFO: Not using a .gcloudignore file.
INFO: Not using a .gcloudignore file.
Deploying function (may take a while - up to 2 minutes)...
..........................................................................failed.
DEBUG: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message:
Traceback (most recent call last):
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 985, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 795, in Run
resources = command_instance.Run(args)
File "/Users/me/Downloads/google-cloud-sdk/lib/surface/functions/deploy.py", line 231, in Run
enable_vpc_connector=True)
File "/Users/me/Downloads/google-cloud-sdk/lib/surface/functions/deploy.py", line 175, in _Run
return api_util.PatchFunction(function, updated_fields)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 300, in CatchHTTPErrorRaiseHTTPExceptionFn
return func(*args, **kwargs)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 356, in PatchFunction
operations.Wait(op, messages, client, _DEPLOY_WAIT_NOTICE)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 126, in Wait
_WaitForOperation(client, request, notice)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 101, in _WaitForOperation
sleep_ms=SLEEP_MS)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 219, in RetryOnResult
result = func(*args, **kwargs)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 65, in _GetOperationStatus
raise exceptions.FunctionsError(OperationErrorToString(op.error))
FunctionsError: OperationError: code=3, message=Function failed on loading user code. Error message:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code.
In my Stackdriver Logging I just see INVALID_ARGUMENT, but nothing else.
The problem stems from your command-line flags not being properly formatted.
--verbosity=debug
is the proper way to write this flag. The same applies to your runtime flag.
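Applied to the command from the question, the flags would look something like this (a sketch of the same deploy with only the flag formatting changed):

gcloud beta functions deploy exchangeIcon \
  --verbosity=debug \
  --runtime=nodejs10 \
  --memory=128 \
  --region=europe-west1 \
  --timeout=5 \
  --trigger-http \
  --set-env-vars=FUNCTION_REGION=europe-west1,BUILD_DATE=2019-05-09T10:01:05.497Z \
  --entry-point=app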

Data Migration From SQL Server to MySQL

I am getting this AttributeError: 'NoneType' object has no attribute 'split' when I try to migrate a SQL Server database to a MySQL database in MySQL Workbench.
These are the log details:
Starting...
Connect to source DBMS...
- Connecting to source...
Connecting to Mssql#sa...
Opening ODBC connection to Driver=sa;DATABASE=;UID=sa;PWD=XXXX...
Connected to Mssql# 11.0.2100.60
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py", line 147, in connect
_connections[connection.__id__]["version"] = getServerVersion(connection)
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py", line 174, in getServerVersion
ver_parts = [ int(part) for part in ver_string.split('.') ] + 4*[ 0 ]
AttributeError: 'NoneType' object has no attribute 'split'
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py", line 174, in getServerVersion
ver_parts = [ int(part) for part in ver_string.split('.') ] + 4*[ 0 ]
AttributeError: 'NoneType' object has no attribute 'split'
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/Resources/libraries/workbench/wizard_progress_page_widget.py", line 65, in run
self.func()
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/migration_source_selection.py", line 406, in task_connect
raise e
SystemError: AttributeError("'NoneType' object has no attribute 'split'"): error calling Python module function DbMssqlRE.getServerVersion
*** ERROR: Error during Connect to source DBMS: AttributeError("'NoneType' object has no attribute 'split'"): error calling Python module function DbMssqlRE.getServerVersion
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/Resources/libraries/workbench/wizard_progress_page_widget.py", line 543, in update_status
task.run()
File "/Applications/MySQLWorkbench.app/Contents/Resources/libraries/workbench/wizard_progress_page_widget.py", line 80, in run
raise e
SystemError: AttributeError("'NoneType' object has no attribute 'split'"): error calling Python module function DbMssqlRE.getServerVersion
*** ERROR: Exception in task 'Connect to source DBMS': SystemError('AttributeError("\'NoneType\' object has no attribute \'split\'"): error calling Python module function DbMssqlRE.getServerVersion',)
Failed
I tried a solution (given below, taken from https://bugs.mysql.com/bug.php?id=66030&thanks=3&notify=195), but it didn't help. I still get the same error. Please help me.
solution:
// We'll need some help from you to diagnose this one. With a text editor, open the /Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py file
and around line 174 you'll find a line that looks like:
ver_string = execute_query(connection, "SELECT SERVERPROPERTY('ProductVersion')").fetchone()[0]
Change that to:
ver_string = execute_query(connection, "SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)").fetchone()[0]
Then save and retry. Thanks! //
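Building on that workaround, a slightly more defensive variant of the version-parsing code around line 174 of db_mssql_grt.py might look like the sketch below (based only on the lines shown in the traceback, not the exact upstream code), falling back to zeros when SERVERPROPERTY returns NULL:

ver_string = execute_query(connection,
    "SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)").fetchone()[0]
if ver_string is None:
    # SERVERPROPERTY returned NULL; assume version 0.0.0.0 instead of crashing.
    ver_parts = 4 * [0]
else:
    ver_parts = [int(part) for part in ver_string.split('.')] + 4 * [0]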