Is it possible to use SFTP inside a GCP Cloud function? [duplicate] - google-cloud-functions

I'm trying to send a file from a bucket to an SFTP server using pysftp.
For this I'm using Google Cloud Functions with Python 3.7.
I have tried many different approaches, but I always get an error.
1.- Downloading the file as a string: in this example, to avoid errors with the file content, I upload a file containing "test string" instead of the contents of the bucket file.
import os
import traceback
from base64 import decodebytes
from io import StringIO

import paramiko
import pysftp

def sftp_handler():
    myHostname = "hostname.com"
    myUsername = "username"
    myPassword = "pass"
    try:
        keydata = b"""AAAAB..."""
        key = paramiko.RSAKey(data=decodebytes(keydata))
        cnopts = pysftp.CnOpts()
        cnopts.hostkeys.add('hostname.com', 'ssh-rsa', key)
        with pysftp.Connection(host=myHostname, username=myUsername, password=myPassword, port=22, cnopts=cnopts) as sftp:
            print("Connection successfully established ...")
            remoteFilePath = '/dir1/dir2/dir3/'
            sftp.putfo(StringIO('test string'), os.path.join(remoteFilePath, "OC.csv"))
            # connection closed automatically at the end of the with-block
    except:
        print("Unexpected error in sftp_handler:")
        traceback.print_exc()
The error output (Stackdriver log):
Traceback (most recent call last):
  File "/user_code/main.py", line 36, in sftp_handler
    sftp.putfo(StringIO('xml string'), os.path.join(remoteFilePath, "OC.csv"))
  File "/env/local/lib/python3.7/site-packages/pysftp/__init__.py", line 474, in putfo
    callback=callback, confirm=confirm)
  File "/env/local/lib/python3.7/site-packages/paramiko/sftp_client.py", line 714, in putfo
    with self.file(remotepath, "wb") as fr:
  File "/env/local/lib/python3.7/site-packages/paramiko/sftp_client.py", line 372, in open
    t, msg = self._request(CMD_OPEN, filename, imode, attrblock)
  File "/env/local/lib/python3.7/site-packages/paramiko/sftp_client.py", line 813, in _request
    return self._read_response(num)
  File "/env/local/lib/python3.7/site-packages/paramiko/sftp_client.py", line 865, in _read_response
    self._convert_status(msg)
  File "/env/local/lib/python3.7/site-packages/paramiko/sftp_client.py", line 898, in _convert_status
    raise IOError(text)
OSError
2.- Using a temp file into which I write the content of the file from the bucket
(adding this to the previous code, inside the with-block, plus an extra import tempfile):
with tempfile.NamedTemporaryFile(suffix='.csv', prefix=os.path.basename(__file__)) as tf:
    tf.write(b'Hello world!')
    remoteFilePath = '/dir1/dir2/dir3/'
    sftp.put(os.path.dirname(tf.name), os.path.join(remoteFilePath, "OC.csv"))
The error log:
Traceback (most recent call last):
  File "/user_code/main.py", line 40, in sftp_handler
    sftp.put(os.path.dirname(tf.name), os.path.join(remoteFilePath, "OC.csv"))
  File "/env/local/lib/python3.7/site-packages/pysftp/__init__.py", line 364, in put
    confirm=confirm)
  File "/env/local/lib/python3.7/site-packages/paramiko/sftp_client.py", line 758, in put
    with open(localpath, "rb") as fl:
IsADirectoryError: [Errno 21] Is a directory: '/tmp'
3.- As Martin suggested in his answer:
with pysftp.Connection(host=myHostname, username=myUsername, password=myPassword, port=22, cnopts=cnopts) as sftp:
    print("Connection successfully established ...")
    remoteFilePath = '/dir1/dir2/dir3/'
    sftp.putfo(StringIO('test string'), remoteFilePath + "OC.csv")
I get the following error:
Traceback (most recent call last):
  File "/user_code/main.py", line 36, in sftp_handler
    sftp.putfo(StringIO('test string'), remoteFilePath + "OC.csv")
  File "/env/local/lib/python3.7/site-packages/pysftp/__init__.py", line 474, in putfo
    callback=callback, confirm=confirm)
  File "/env/local/lib/python3.7/site-packages/paramiko/sftp_client.py", line 714, in putfo
    with self.file(remotepath, "wb") as fr:
  File "/env/local/lib/python3.7/site-packages/paramiko/sftp_client.py", line 372, in open
    t, msg = self._request(CMD_OPEN, filename, imode, attrblock)
  File "/env/local/lib/python3.7/site-packages/paramiko/sftp_client.py", line 813, in _request
    return self._read_response(num)
  File "/env/local/lib/python3.7/site-packages/paramiko/sftp_client.py", line 865, in _read_response
    self._convert_status(msg)
  File "/env/local/lib/python3.7/site-packages/paramiko/sftp_client.py", line 898, in _convert_status
    raise IOError(text)
OSError
4.- Downloading the file from the bucket, creating a file while the Cloud Function executes, and sending that file to the SFTP server (not implemented yet, as I'm not sure whether creating a file in a Cloud Function will work).
How could I do this?
Why is it failing?
Note: I also tried using FileZilla and got an error there as well, so this may be something related to the SFTP server:
09:00:46 Status: Connecting to sftp-staging.messagesnetwork.com...
09:01:06 Status: Connected to sftp-staging.messagesnetwork.com
09:01:06 Status: Retrieving directory listing...
09:01:06 Status: Listing directory /
09:01:07 Status: Directory listing of "/" successful
09:01:11 Status: Retrieving directory listing of "/folder1"...
09:01:37 Command: cd "folder1"
09:01:37 Response: New directory is: "/folder1"
09:01:37 Command: ls
09:01:37 Error: Connection timed out after 20 seconds of inactivity
09:01:38 Error: Failed to retrieve directory listing
09:01:38 Status: Disconnected from server
09:01:38 Status: Connecting to sftp-staging.messagesnetwork.com...
09:02:04 Status: Connected to sftp-staging.messagesnetwork.com
09:02:09 Status: Retrieving directory listing of "/folder1"...
09:02:10 Status: Listing directory /folder1
09:02:10 Status: Directory listing of "/folder1" successful
09:02:12 Status: Retrieving directory listing of "/folder1/folder2"...
09:02:21 Status: Listing directory /folder1/folder2
09:02:23 Status: Directory listing of "/folder1/folder2" successful
09:02:24 Status: Retrieving directory listing of "/folder1/folder2/folder3"...
09:02:25 Status: Listing directory /folder1/folder2/folder3
09:02:26 Status: Directory listing of "/folder1/folder2/folder3" successful
09:02:30 Status: Connecting to sftp-staging.messagesnetwork.com...
09:02:41 Status: Connected to sftp-staging.messagesnetwork.com
09:02:45 Status: Starting upload of <...>\Desktop\folder2cf.txt
09:02:59 Command: cd "/folder1/folder2/folder3"
09:02:59 Response: New directory is: "/folder1/folder2/folder3"
09:02:59 Command: put "<...>\Desktop\folder2cf.txt" "folder2cf.txt"
09:02:59 Error: /folder1/folder2/folder3/folder2cf.txt: open for write: failure
09:02:59 Error: File transfer failed
09:02:59 Status: Starting upload of <...>\Desktop\folder2cf.txt
09:02:59 Status: Retrieving directory listing of "/folder1/folder2/folder3"...
09:02:59 Status: Listing directory /folder1/folder2/folder3
09:03:11 Command: put "<...>\Desktop\folder2cf.txt" "folder2cf.txt"
09:03:11 Error: /folder1/folder2/folder3/folder2cf.txt: open for write: failure
09:03:11 Error: File transfer failed
09:03:11 Status: Starting upload of <...>\Desktop\folder2cf.txt
09:03:11 Status: Retrieving directory listing of "/folder1/folder2/folder3"...
09:03:11 Status: Listing directory /folder1/folder2/folder3
09:03:34 Command: put "<...>\Desktop\folder2cf.txt" "folder2cf.txt"
09:03:34 Error: /folder1/folder2/folder3/folder2cf.txt: open for write: failure
09:03:34 Error: File transfer failed
09:03:34 Status: Retrieving directory listing of "/folder1/folder2/folder3"...
09:03:39 Status: Listing directory /folder1/folder2/folder3
09:03:44 Status: Directory listing of "/folder1/folder2/folder3" successful

sftp.putfo(StringIO('test string'), os.path.join(remoteFilePath, "OC.csv"))
I cannot say at the moment whether this is the real problem, but you should not use os.path.join for SFTP paths. SFTP always uses a forward slash, while os.path.join uses the separator of the local operating system, which may be different (particularly on Windows).
The code should be:
remoteFilePath = '/dir1/dir2/dir3/'
sftp.putfo(StringIO('test string'), remoteFilePath + "OC.csv")
sftp.put(os.path.dirname(tf.name), os.path.join(remoteFilePath, "OC.csv"))
Use of os.path.dirname makes no sense here. You want to upload a file, so you have to give the path to the file you are uploading, not the path to its containing folder.
And there's the same problem with os.path.join as before.
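Putting both corrections together, a minimal sketch of the upload (using the placeholder host, credentials and remote directory from the question; whether the server actually allows the write is a separate issue, as the FileZilla log suggests):
import tempfile
from io import StringIO

import pysftp

# placeholders from the question
myHostname = "hostname.com"
myUsername = "username"
myPassword = "pass"
cnopts = pysftp.CnOpts()  # host-key handling as set up in the question

remoteFilePath = '/dir1/dir2/dir3/'

with pysftp.Connection(host=myHostname, username=myUsername, password=myPassword,
                       port=22, cnopts=cnopts) as sftp:
    # variant 1: upload an in-memory string; the remote path is built with a plain '/'
    sftp.putfo(StringIO('test string'), remoteFilePath + "OC.csv")

    # variant 2: write a temp file and upload it by its full local path (tf.name),
    # not by os.path.dirname(tf.name)
    with tempfile.NamedTemporaryFile(suffix='.csv') as tf:
        tf.write(b'Hello world!')
        tf.flush()  # make sure the bytes are on disk before pysftp reads the file
        sftp.put(tf.name, remoteFilePath + "OC.csv")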

Related

web3py EthereumTesterProvider - Basic interactions with a smart contract fail

On the web3py EthereumTesterProvider blockchain, I tested the contract deployment example at https://web3py.readthedocs.io/en/v5/contracts.html,
but I came across two errors.
pip packages (Windows 10):
web3py (5.31.3)
eth-tester (0.8.0b3)
Here is the code:
from web3 import Web3
from solcx import compile_source
from pprint import pprint

# Solidity source code
compiled_sol = compile_source(
    '''
    pragma solidity ^0.8.17;

    contract Greeter {
        string public greeting;

        constructor() public {
            greeting = 'Hello';
        }

        function setGreeting(string memory _greeting) public {
            greeting = _greeting;
        }

        function greet() view public returns (string memory) {
            return greeting;
        }
    }
    ''',
    output_values=['abi', 'bin']
)

# retrieve the contract interface
contract_id, contract_interface = compiled_sol.popitem()

# get bytecode and abi
bytecode = contract_interface['bin']
abi = contract_interface['abi']

# web3.py instance
w3 = Web3(Web3.EthereumTesterProvider())

# set pre-funded account as sender
w3.eth.default_account = w3.eth.accounts[0]

greeter_bin = w3.eth.contract(abi=abi, bytecode=bytecode)

# Submit the transaction that deploys the contract
tx_hash = greeter_bin.constructor().transact()  # <==== first error
# tx_hash = greeter_bin.constructor().transact({'gas': 123456})

# Wait for the transaction to be mined, and get the transaction receipt
tx_receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
pprint(dict(tx_receipt))

greeter_obj = w3.eth.contract(address=tx_receipt.contractAddress, abi=abi)

print(f"{greeter_obj.functions.greet().call() = }")  # <===== second error

tx_hash = greeter_obj.functions.setGreeting('Nihao').transact()
tx_receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print(f"{greeter_obj.functions.greet().call() = }")
1) The first error takes place at the contract deployment:
"TypeError: MockBackend.estimate_gas() takes 2 positional arguments but 3 were given."
I fixed it by adding a dictionary with some gas, but the example from web3.py does not have this parameter. I'd like to know the reason. Did I miss something?
C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\eth_tester\backends\__init__.py:30: UserWarning: Ethereum Tester: No backend was explicitly set, and no *full* backends were available. Falling back to the `MockBackend` which does not support all EVM functionality. Please refer to the `eth-tester` documentation for information on what backends are available and how to set them. Your py-evm package may need to be updated.
warnings.warn(
Traceback (most recent call last):
File "D:\_P\dev\python\blockchain\web3\tester1.py", line 45, in <module>
tx_hash = greeter_bin.constructor().transact()
File "C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\eth_utils\decorators.py", line 18, in _wrapper
return self.method(obj, *args, **kwargs)
.............
File "C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\web3\providers\eth_tester\middleware.py", line 331, in middleware
return make_request(method, [filled_transaction] + list(params)[1:])
File "C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\web3\middleware\formatting.py", line 94, in middleware
response = make_request(method, params)
File "C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\web3\providers\eth_tester\main.py", line 103, in make_request
response = delegator(self.ethereum_tester, params)
File "cytoolz\functoolz.pyx", line 253, in cytoolz.functoolz.curry.__call__
File "cytoolz\functoolz.pyx", line 249, in cytoolz.functoolz.curry.__call__
File "C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\web3\providers\eth_tester\defaults.py", line 66, in call_eth_tester
return getattr(eth_tester, fn_name)(*fn_args, **fn_kwargs)
File "C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\eth_tester\main.py", line 483, in estimate_gas
raw_gas_estimate = self.backend.estimate_gas(raw_transaction, raw_block_number)
TypeError: MockBackend.estimate_gas() takes 2 positional arguments but 3 were given
2) After fixing the first error by adding {'gas': 123456} in transact(), the second error takes place at greeter_obj.functions.greet().call():
"ValueError: Error expected to be a dict."
For this one, I have no clue.
INFO: Could not find files for the specified pattern(s).
C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\eth_tester\backends\__init__.py:30: UserWarning: Ethereum Tester: No backend was explicitly set, and no full backends were available. Falling back to the MockBackend which does not support all EVM functionality. Please refer to the eth-tester documentation for information on what backends are available and how to set them. Your py-evm package may need to be updated.
warnings.warn(
{'blockHash': HexBytes('0xafae7675633fedae22a1f5b9d11066ff78de5947f7b3e2915824823cc65d0e56'),
'blockNumber': 1,
'contractAddress': '0xa0Beb7081fDaF3ed157370836A85eeC20CEc9e04',
'cumulativeGasUsed': 21000,
'effectiveGasPrice': 1000000000,
'from': '0xaBbACadABa000000000000000000000000000000',
'gasUsed': 21000,
'logs': [],
'state_root': b'\x00',
'status': 0,
'to': '',
'transactionHash': HexBytes('0x5193460ead56b33c2fa79b490a6c0f4e0d68e07c712d762afcadc5976148bf1a'),
'transactionIndex': 0,
'type': '0x2'}
Traceback (most recent call last):
File "D:_P\dev\python\blockchain\web3\tester1.py", line 54, in
print(f"{greeter_obj.functions.greet().call() = }")
File "C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\web3\contract.py", line 970, in call
return call_contract_function(
File "C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\web3\contract.py", line 1525, in call_contract_function
return_data = web3.eth.call(
File "C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\web3\module.py", line 57, in caller
result = w3.manager.request_blocking(method_str,
File "C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\web3\manager.py", line 198, in request_blocking
return self.formatted_response(response,
File "C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\web3\manager.py", line 170, in formatted_response
apply_error_formatters(error_formatters, response)
File "C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\web3\manager.py", line 70, in apply_error_formatters
formatted_resp = pipe(response, error_formatters)
File "cytoolz\functoolz.pyx", line 666, in cytoolz.functoolz.pipe
File "cytoolz\functoolz.pyx", line 641, in cytoolz.functoolz.c_pipe
File "C:\Users\Gilles\AppData\Local\Programs\Python\Python310\lib\site-packages\web3_utils\method_formatters.py", line 555, in raise_solidity_error_on_revert
raise ValueError('Error expected to be a dict')
ValueError: Error expected to be a dict
Please note the status of the deployment transaction: 'status': 0.
It failed, but a contractAddress was returned!
As far as the UserWarning is concerned, I also tried passing MockBackend explicitly, without success (though it's the default backend):
from eth_tester import MockBackend
w3 = Web3(Web3.EthereumTesterProvider(MockBackend()))
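For comparison, explicitly selecting the full backend would look roughly like this (a sketch following the eth-tester documentation; it assumes the py-evm extra can be installed, which is exactly what fails below):
# sketch: use the full py-evm backend instead of the MockBackend fallback
# (requires: pip install "eth-tester[py-evm]")
from eth_tester import EthereumTester, PyEVMBackend
from web3 import Web3

w3 = Web3(Web3.EthereumTesterProvider(EthereumTester(backend=PyEVMBackend())))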
Lastly, I tried to install py-evm ("pip install py-evm") to try the PyEVMBackend backend, but the installation failed on the pyethash dependency:
"D:\Program\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\Gilles\AppData\Local\Programs\Python\Python310\include -IC:\Users\Gilles\AppData\Local\Programs\Python\Python310\Include "-ID:\Program\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\include" "-ID:\Program\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\cppwinrt" /Tcsrc/libethash/io_win32.c /Fobuild\temp.win-amd64-cpython-310\Release\src/libethash/io_win32.obj -Isrc/ -std=gnu99 -Wall
cl: Command line warning D9002: unknown option '-std=gnu99' ignored
io_win32.c
c1: fatal error C1083: Cannot open source file: 'src/libethash/io_win32.c': No such file or directory
error: command 'D:\\Program\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> pyethash

Using multiple gunicorn workers causes an error with status PGRES_TUPLES_OK and no message from the libpq

I have a Flask website that uses the peewee ORM. The connection is managed by FlaskDB.
When I only use 1 gunicorn worker, it works well. But as soon as I use 2 or more gunicorn workers, I start getting this error:
Traceback (most recent call last):
File "/home/user/.local/lib/python3.10/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/home/user/.local/lib/python3.10/site-packages/flask/app.py", line 1519, in full_dispatch_request
return self.finalize_request(rv)
File "/home/user/.local/lib/python3.10/site-packages/flask/app.py", line 1540, in finalize_request
response = self.process_response(response)
File "/home/user/.local/lib/python3.10/site-packages/flask/app.py", line 1888, in process_response
self.session_interface.save_session(self, ctx.session, response)
File "/home/user/project/session.py", line 113, in save_session
saved_session.save()
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 6497, in save
rows = self.update(**field_dict).where(self._pk_expr()).execute()
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 1886, in inner
return method(self, database, *args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 1957, in execute
return self._execute(database)
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 2442, in _execute
cursor = database.execute(self)
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 3112, in execute
return self.execute_sql(sql, params, commit=commit)
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 3096, in execute_sql
with __exception_wrapper__:
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 2873, in __exit__
reraise(new_type, new_type(exc_value, *exc_args), traceback)
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 183, in reraise
raise value.with_traceback(tb)
File "/home/user/.local/lib/python3.10/site-packages/peewee.py", line 3099, in execute_sql
cursor.execute(sql, params or ())
peewee.DatabaseError: error with status PGRES_TUPLES_OK and no message from the libpq
The peewee documentation states that the connection should be thread-safe, but it seems there are thread-safety issues here.
I use playhouse.pool.PooledPostgresqlDatabase if that matters.
What is the solution to this problem?
I believe this is possibly due to the connection being opened before the workers are forked. I'm not too familiar with gunicorn's worker model, but googling your error reveals similar problems with multiprocessing. Specifically:
When a program uses multiprocessing or fork(), and an Engine object is copied to the child process, Engine.dispose() should be called so that the engine creates brand new database connections local to that fork. Database connections generally do not travel across process boundaries.
That is for SQLAlchemy, but the same should apply to Peewee. Just ensure that all your connections are closed (pool.close_all()) when your workers first start running. Similarly, if you're opening any database connections at module scope, make sure you call close_all() after using them. This way, when your workers start up, they will each have an empty pool of connections.
db = FlaskDB(app)
# ...
db.database.close_all()
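A sketch of one way to guarantee this with gunicorn's post_fork server hook (the myapp module name is hypothetical; it stands in for wherever your FlaskDB instance lives):
# gunicorn_conf.py -- start with: gunicorn -c gunicorn_conf.py -w 4 myapp:app
from myapp import db  # hypothetical import; db is the FlaskDB instance

def post_fork(server, worker):
    # Runs in each worker right after the fork: close any pooled connections
    # inherited from the master so the worker opens fresh ones of its own.
    db.database.close_all()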

Google Cloud Function error "OperationError: code=3, message=Function failed on loading user code"

I get an error from time to time when deploying nodejs10 cloud functions to GCP. The error seems to go away on its own; I just redeploy the same thing a few times. Anyone know what causes it? Here's the log:
command: gcloud beta functions deploy exchangeIcon --verbosity debug --runtime nodejs10 --memory 128 --region europe-west1 --timeout 5 --trigger-http --set-env-vars=FUNCTION_REGION=europe-west1,BUILD_DATE=2019-05-09T10:01:05.497Z --entry-point app
DEBUG: Running [gcloud.beta.functions.deploy] with arguments: [--entry-point: "app", --memory: "134217728", --region: "europe-west1", --runtime: "nodejs10", --set-env-vars: "OrderedDict([(u'FUNCTION_REGION', u'europe-west1'), (u'BUILD_DATE', u'2019-05-09T10:01:05.497Z')])", --timeout: "5", --trigger-http: "True", --verbosity: "debug", NAME: "exchangeIcon"]
INFO: Not using a .gcloudignore file.
INFO: Not using a .gcloudignore file.
Deploying function (may take a while - up to 2 minutes)...
..........................................................................failed.
DEBUG: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message:
Traceback (most recent call last):
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 985, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 795, in Run
resources = command_instance.Run(args)
File "/Users/me/Downloads/google-cloud-sdk/lib/surface/functions/deploy.py", line 231, in Run
enable_vpc_connector=True)
File "/Users/me/Downloads/google-cloud-sdk/lib/surface/functions/deploy.py", line 175, in _Run
return api_util.PatchFunction(function, updated_fields)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 300, in CatchHTTPErrorRaiseHTTPExceptionFn
return func(*args, **kwargs)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 356, in PatchFunction
operations.Wait(op, messages, client, _DEPLOY_WAIT_NOTICE)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 126, in Wait
_WaitForOperation(client, request, notice)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 101, in _WaitForOperation
sleep_ms=SLEEP_MS)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 219, in RetryOnResult
result = func(*args, **kwargs)
File "/Users/me/Downloads/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 65, in _GetOperationStatus
raise exceptions.FunctionsError(OperationErrorToString(op.error))
FunctionsError: OperationError: code=3, message=Function failed on loading user code. Error message:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code.
In my Stackdriver Logging I just see INVALID_ARGUMENT, but nothing else.
The problem stems from your command-line flags not being properly formatted.
--verbosity=debug
is the proper way to write it. The same applies to your --runtime flag.
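Applied to the command in the question, that would mean writing each flag as flag=value (a sketch that keeps the original values and only changes the flag syntax):
gcloud beta functions deploy exchangeIcon \
  --verbosity=debug --runtime=nodejs10 --memory=128 --region=europe-west1 \
  --timeout=5 --trigger-http \
  --set-env-vars=FUNCTION_REGION=europe-west1,BUILD_DATE=2019-05-09T10:01:05.497Z \
  --entry-point=app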

Airflow: Cannot assign requested address error while using EmailOperator

Unable to receive email on task failure, or even when using EmailOperator.
Hi guys,
I am unable to receive email from my box even after adding the required parameters to send one.
Below is what my default_args looks like:
default_args = {
    'owner': 'phonrao',
    'depends_on_past': False,
    #'start_date': datetime(2019, 3, 28),
    'start_date': airflow.utils.dates.days_ago(2),
    'email': ['phonrao@gmail.com'],
    'email_on_failure': True,
    'email_on_retry': True,
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
    #'on_failure_callback': report_failure,
    #'end_date': datetime(2020, 4, 1),
    #'schedule_interval': '@hourly',
}
I have a few SimpleHttpOperator tasks in between -- those work fine and succeed, but they do not send an email on error (I purposely introduced an error to check whether they send any email). Below is an example of such a task:
t1 = SimpleHttpOperator(
    task_id='t1',
    http_conn_id='http_waterfall',
    endpoint='/update_data',
    method='POST',
    headers={"Content-Type": "application/json"},
    xcom_push=True,
    log_response=True,
    dag=dag,
)
and this is my EmailOperator task:
t2 = EmailOperator(
    dag=dag,
    task_id="send_email",
    to='phonrao@gmail.com',
    subject='Success',
    html_content="<h3>Success</h3>"
)
t2 >> t1
Below is the error from Logs:
[2019-04-02 15:28:21,305] {{base_task_runner.py:101}} INFO - Job 845: Subtask send_email [2019-04-02 15:28:21,305] {{cli.py:520}} INFO - Running <TaskInstance: schedulerDAG.send_email 2019-04-02T15:23:08.896589+00:00 [running]> on host a47cd79aa987
[2019-04-02 15:28:21,343] {{logging_mixin.py:95}} INFO - [2019-04-02 15:28:21,343] {{configuration.py:255}} WARNING - section/key [smtp/smtp_user] not found in config
[2019-04-02 15:28:21,343] {{models.py:1788}} ERROR - [Errno 99] Cannot assign requested address
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 1657, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.6/site-packages/airflow/operators/email_operator.py", line 78, in execute
mime_subtype=self.mime_subtype, mime_charset=self.mime_charset)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/email.py", line 55, in send_email
mime_subtype=mime_subtype, mime_charset=mime_charset, **kwargs)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/email.py", line 101, in send_email_smtp
send_MIME_email(smtp_mail_from, recipients, msg, dryrun)
File "/usr/local/lib/python3.6/site-packages/airflow/utils/email.py", line 121, in send_MIME_email
s = smtplib.SMTP_SSL(SMTP_HOST, SMTP_PORT) if SMTP_SSL else smtplib.SMTP(SMTP_HOST, SMTP_PORT)
File "/usr/local/lib/python3.6/smtplib.py", line 251, in __init__
(code, msg) = self.connect(host, port)
File "/usr/local/lib/python3.6/smtplib.py", line 336, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "/usr/local/lib/python3.6/smtplib.py", line 307, in _get_socket
self.source_address)
File "/usr/local/lib/python3.6/socket.py", line 724, in create_connection
raise err
File "/usr/local/lib/python3.6/socket.py", line 713, in create_connection
sock.connect(sa)
OSError: [Errno 99] Cannot assign requested address
[2019-04-02 15:28:21,351] {{models.py:1817}} INFO - All retries failed; marking task as FAILED
Below is my airflow.cfg
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = localhost
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
# smtp_user = airflow
# smtp_password = airflow
smtp_port = 25
smtp_mail_from = airflow@example.com
Has anyone encountered this issue, and does anyone have suggestions on how I can resolve it?
If your Airflow is running on Kubernetes (installed by the Helm chart), you should take a look in the "airflow-worker-0" pod and make sure the SMTP_HOST, SMTP_USER, ... environment variables are available in the config. For a simple check, open a shell in the airflow-worker container, run python, and try these commands to make sure sending works correctly:
import airflow
airflow.utils.email.send_email('example@gmail.com', 'Airflow TEST HERE', 'This is airflow status success')
I had the same issue; resolving the SMTP environment variables fixed it. Now it works.
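For reference, Airflow also reads any config option from an environment variable named AIRFLOW__SECTION__KEY, so the [smtp] settings can be supplied to the worker pod roughly like this (host and credentials below are placeholders, not values from the question):
AIRFLOW__SMTP__SMTP_HOST=smtp.example.com
AIRFLOW__SMTP__SMTP_PORT=587
AIRFLOW__SMTP__SMTP_STARTTLS=True
AIRFLOW__SMTP__SMTP_SSL=False
AIRFLOW__SMTP__SMTP_USER=airflow
AIRFLOW__SMTP__SMTP_PASSWORD=changeme
AIRFLOW__SMTP__SMTP_MAIL_FROM=airflow@example.com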

Data Migration From SQL Server to MySQL

I am getting this AttributeError: 'NoneType' object has no attribute 'split' when I try to migrate a SQL Server database to MySQL in MySQL Workbench.
These are the log details:
Starting...
Connect to source DBMS...
- Connecting to source...
Connecting to Mssql#sa...
Opening ODBC connection to Driver=sa;DATABASE=;UID=sa;PWD=XXXX...
Connected to Mssql# 11.0.2100.60
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py", line 147, in connect
_connections[connection.__id__]["version"] = getServerVersion(connection)
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py", line 174, in getServerVersion
ver_parts = [ int(part) for part in ver_string.split('.') ] + 4*[ 0 ]
AttributeError: 'NoneType' object has no attribute 'split'
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py", line 174, in getServerVersion
ver_parts = [ int(part) for part in ver_string.split('.') ] + 4*[ 0 ]
AttributeError: 'NoneType' object has no attribute 'split'
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/Resources/libraries/workbench/wizard_progress_page_widget.py", line 65, in run
self.func()
File "/Applications/MySQLWorkbench.app/Contents/PlugIns/migration_source_selection.py", line 406, in task_connect
raise e
SystemError: AttributeError("'NoneType' object has no attribute 'split'"): error calling Python module function DbMssqlRE.getServerVersion
*** ERROR: Error during Connect to source DBMS: AttributeError("'NoneType' object has no attribute 'split'"): error calling Python module function DbMssqlRE.getServerVersion
Traceback (most recent call last):
File "/Applications/MySQLWorkbench.app/Contents/Resources/libraries/workbench/wizard_progress_page_widget.py", line 543, in update_status
task.run()
File "/Applications/MySQLWorkbench.app/Contents/Resources/libraries/workbench/wizard_progress_page_widget.py", line 80, in run
raise e
SystemError: AttributeError("'NoneType' object has no attribute 'split'"): error calling Python module function DbMssqlRE.getServerVersion
*** ERROR: Exception in task 'Connect to source DBMS': SystemError('AttributeError("\'NoneType\' object has no attribute \'split\'"): error calling Python module function DbMssqlRE.getServerVersion',)
Failed
I tried a solution (given below, taken from https://bugs.mysql.com/bug.php?id=66030&thanks=3&notify=195), but it didn't help. I still get the same error. Please help me.
Solution:
// We'll need some help from you to diagnose this one. With a text editor, open the /Applications/MySQLWorkbench.app/Contents/PlugIns/db_mssql_grt.py file
and around line 174 you'll find a line that looks like:
ver_string = execute_query(connection, "SELECT SERVERPROPERTY('ProductVersion')").fetchone()[0]
Change that to:
ver_string = execute_query(connection, "SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)").fetchone()[0]
Then save and retry. Thanks! //