gcloud crashing due to (SSLHandshakeError): [SSL: UNKNOWN_PROTOCOL] - google-compute-engine

I have installed the gcloud SDK on Ubuntu 16.04.6 LTS by following https://cloud.google.com/sdk/docs/downloads-apt-get, and I have also configured the proxy settings as described at https://cloud.google.com/sdk/docs/proxy-settings.
Google Cloud SDK 288.0.0
alpha 2020.04.03
beta 2020.04.03
bq 2.0.56
core 2020.04.03
gsutil 4.49
kubectl 2020.04.03
gcloud init succeeds and gcloud info --run-diagnostics reports no problems. However, gcloud crashes on any other command I run. I have tried the following commands:
1. gcloud compute images list
2. gcloud services list
3. gcloud logging logs list
Here is the message I get after the crash.
ERROR: gcloud crashed (SSLHandshakeError): [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:590)
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
Can someone please help?
P.S. Here is the debug output:
DEBUG: Running [gcloud.compute.images.list] with arguments: [--verbosity: "debug"]
INFO: Refreshing access_token
INFO: Display format: " table(
name,
selfLink.map().scope(projects).segment(0):label=PROJECT,
family,
deprecated.state:label=DEPRECATED,
status
)"
DEBUG: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:590)
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 983, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 809, in Run
display_info=self.ai.display_info).Display()
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/calliope/display.py", line 483, in Display
self._printer.Print(self._resources)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/resource/resource_printer_base.py", line 275, in Print
for resource in resources:
File "/usr/lib/google-cloud-sdk/lib/surface/compute/images/list.py", line 113, in _FilterDeprecated
for image in images:
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/compute/lister.py", line 1065, in __call__
errors=errors):
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/compute/request_helper.py", line 204, in ListJson
for item in _ListCore(requests, http, batch_url, errors, _HandleJsonList):
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/compute/request_helper.py", line 134, in _ListCore
requests=requests, http=http, batch_url=batch_url)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/compute/batch_helper.py", line 106, in MakeRequests
batch_request_callback=batch_checker.BatchCheck)
File "/usr/bin/../lib/google-cloud-sdk/lib/third_party/apitools/base/py/batch.py", line 226, in Execute
batch_http_request.Execute(http)
File "/usr/bin/../lib/google-cloud-sdk/lib/third_party/apitools/base/py/batch.py", line 492, in Execute
self._Execute(http)
File "/usr/bin/../lib/google-cloud-sdk/lib/third_party/apitools/base/py/batch.py", line 449, in _Execute
response = http_wrapper.MakeRequest(http, request)
File "/usr/bin/../lib/google-cloud-sdk/lib/third_party/apitools/base/py/http_wrapper.py", line 356, in MakeRequest
max_retry_wait, total_wait_sec))
File "/usr/bin/../lib/google-cloud-sdk/lib/third_party/apitools/base/py/http_wrapper.py", line 304, in HandleExceptionsAndRebuildHttpConnections
raise retry_args.exc
SSLHandshakeError: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:590)
ERROR: gcloud crashed (SSLHandshakeError): [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:590)

I got it working after some debugging; the firewall turned out to be the issue in my case. I switched to Python 3.5 and ran the same command, gcloud compute images list, which then gave the error "Caught socket error, retrying request to url https://compute.googleapis.com/batch/compute/v1". Adding this URL to the firewall exceptions solved my issue.
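For anyone hitting the same error behind a corporate proxy: [SSL: UNKNOWN_PROTOCOL] typically means the client received something that is not TLS at all, for example a plain-HTTP reply from a proxy or firewall that intercepts the connection. A quick way to sanity-check the proxy settings and the batch endpoint is sketched below; the proxy host and port are placeholders, not values from the question.
gcloud config set proxy/type http
gcloud config set proxy/address proxy.example.com
gcloud config set proxy/port 3128
# verify the Compute batch endpoint mentioned in the error is reachable through the proxy
curl -v --proxy http://proxy.example.com:3128 https://compute.googleapis.com/batch/compute/v1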

Related

I am getting an AttributeError while running train.py in YOLOv5. Can anyone help me with this?

When I run python train.py --img 640 --batch 4 --epochs 5 --data training/dataset.yaml --cfg training/yolov5l.yaml --weights yolov5l.pt for YOLOv5 on my system, I get the following error. Why does this happen?
Traceback (most recent call last):
File "train.py", line 544, in
train(hyp, opt, device, tb_writer)
File "train.py", line 72, in train
wandb_logger = WandbLogger(opt, save_dir.stem, run_id, data_dict)
File "D:\sandra\ai.projects\yolo\yolov5\utils\wandb_logging\wandb_utils.py", line 108, in init
self.data_dict = self.setup_training(opt, data_dict)
File "D:\sandra\ai.projects\yolo\yolov5\utils\wandb_logging\wandb_utils.py", line 139, in setup_training
self.train_artifact_path, self.train_artifact = self.download_dataset_artifact(data_dict.get('train'),
AttributeError: 'str' object has no attribute 'get'
wandb: Waiting for W&B process to finish, PID 22204
wandb: Program failed with code 1.
wandb: Find user logs for this run at: D:\sandra\ai.projects\yolo\yolov5\wandb\offline-run-20210427_130128-jr2z73rr\logs\debug.log
wandb: Find internal logs for this run at: D:\sandra\ai.projects\yolo\yolov5\wandb\offline-run-20210427_130128-jr2z73rr\logs\debug-internal.log
wandb: You can sync this run to the cloud by running:
wandb: wandb sync D:\sandra\ai.projects\yolo\yolov5\wandb\offline-run-20210427_130128-jr2z73rr
Looking at the error traceback, it seems that your dataset configuration file has some missing info or, most likely, is in the wrong format. The file should contain both the training and validation set paths. Here's an example:
train: path/to/train # Notice the spaces
val: path/to/val
...
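For reference, a minimal dataset.yaml for YOLOv5 typically looks something like the sketch below; the paths, class count, and class names are placeholders, not values taken from the question.
train: ../datasets/mydata/images/train   # path to training images
val: ../datasets/mydata/images/val       # path to validation images
nc: 2                                    # number of classes
names: ['person', 'car']                 # class names, one per class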

libvirt.libvirtError: An error occurred, but the cause is unknown

I am using the Xen hypervisor and virt-manager to manage virtual machines. Whenever I create a virtual machine, at the last step, when everything is ready and I click the Create button, I get the following error:
Unable to complete install: 'An error occurred, but the cause is unknown'
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/createvm.py", line 2089, in _do_async_install
guest.installer_instance.start_install(guest, meter=meter)
File "/usr/share/virt-manager/virtinst/install/installer.py", line 542, in start_install
domain = self._create_guest(
File "/usr/share/virt-manager/virtinst/install/installer.py", line 491, in _create_guest
domain = self.conn.createXML(install_xml or final_xml, 0)
File "/usr/lib/python3/dist-packages/libvirt.py", line 4034, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirt.libvirtError: An error occurred, but the cause is unknow
For the Xen driver, you may have more infos in /var/log/libvirt/libxl/libxl-driver.log
Same issue here.
When I checked /var/log/syslog, there was a log entry from libvirtd: "unsupported configuration: emulator '/usr/lib/xen-4.11/bin/qemu-system-i386' not found".
It may be caused by this bug, so we can create a symbolic link with sudo ln -s /usr/bin/qemu-system-i386 /usr/lib/xen-4.11/bin/qemu-system-i386. I hope it works for you too.
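If you want to confirm it is the same problem before creating the link, a quick check might look like this (the xen-4.11 path is the one from the log above and will differ for other Xen versions):
grep -i "emulator" /var/log/syslog               # look for the "not found" message from libvirtd
ls -l /usr/lib/xen-4.11/bin/qemu-system-i386     # the path libvirt expects
which qemu-system-i386                           # where the binary actually lives
sudo ln -s /usr/bin/qemu-system-i386 /usr/lib/xen-4.11/bin/qemu-system-i386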
Check whether you have installed qemu-system-x86-xen. If not, install it:
apt install qemu-system-x86-xen
That helped me.

How do I connect Airflow to SQLite locally?

I'm trying out Airflow for the very first time and want to connect it to a local SQLite database, but I can't seem to get my head around how to actually do it.
I've read Airflow's documentation, set my executor to LocalExecutor, and set sql_alchemy_conn to sqlite:////home/myName/Programs/sqlite3/DatabaseName.db, but it doesn't seem to work, as it throws an
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 21, in <module>
from airflow import configuration
File "/usr/local/lib/python2.7/dist-packages/airflow/__init__.py", line 35, in <module>
from airflow import configuration as conf
File "/usr/local/lib/python2.7/dist-packages/airflow/configuration.py", line 520, in <module>
conf.read(AIRFLOW_CONFIG)
File "/usr/local/lib/python2.7/dist-packages/airflow/configuration.py", line 283, in read
self._validate()
File "/usr/local/lib/python2.7/dist-packages/airflow/configuration.py", line 169, in _validate
self.get('core', 'executor')))
airflow.exceptions.AirflowConfigException: error: cannot use sqlite with the LocalExecutor
error when I try to run airflow initdb. I googled around and tried vipul sharma's solution found here, setting sql_alchemy_conn to mysql://:@localhost:3306/, but it still doesn't work, as it throws an
sqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'myName'@'localhost' (using password: NO)")
error. I know the answer should be really simple, but I really don't understand how to do this, so I hope you can guide me on what to do or read.
Use SequentialExecutor.
"This executor will only run one task instance at a time, can be used for debugging. It is also the only executor that can be used with sqlite since sqlite doesn’t support multiple connections." — Airflow documentation
You don't need to change to LocalExecutor. Change it back to SequentialExecutor, point sql_alchemy_conn to sqlite:////home/myName/Programs/sqlite3/DatabaseName.db, and stop the Airflow services (webserver, scheduler).
Execute airflow initdb, then start the services again.
Hopefully that works.
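Putting that together, the relevant part of airflow.cfg would look roughly like this (the SQLite path is the one from the question; this assumes an Airflow 1.x-style config where both settings live under [core]):
[core]
executor = SequentialExecutor
sql_alchemy_conn = sqlite:////home/myName/Programs/sqlite3/DatabaseName.db
After saving the file:
airflow initdb
airflow webserver
airflow scheduler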

Selenium python library via docker, Chrome error failed to start: exited abnormally

I am trying to run some Python scripts that use the Selenium library from within a Docker container based on miniconda/anaconda, but I keep getting this error: selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally. I am also using a Python wrapper for Xvfb to avoid opening a real Chrome window.
To reproduce this (from a running docker container):
root@304ccd3bae83:/opt# python
Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 18:10:19)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>>
>>> from selenium import webdriver
>>> from xvfbwrapper import Xvfb
>>>
>>> with Xvfb(width=1366, height=768) as xvfb:
... my_driver = webdriver.Chrome('/opt/chromedriver/2.33/chromedriver')
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/opt/conda/lib/python3.6/site-packages/selenium/webdriver/chrome/webdriver.py", line 69, in __init__
desired_capabilities=desired_capabilities)
File "/opt/conda/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 151, in __init__
self.start_session(desired_capabilities, browser_profile)
File "/opt/conda/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 240, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/opt/conda/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 308, in execute
self.error_handler.check_response(response)
File "/opt/conda/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py", line 194, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally
(Driver info: chromedriver=2.33.506092 (733a02544d189eeb751fe0d7ddca79a0ee28cce4),platform=Linux 4.4.0-116-generic x86_64)
According to https://sites.google.com/a/chromium.org/chromedriver/help/chrome-doesn-t-start, it seems one may need to use a stand-alone version of Chrome that works for all users. However, I am not sure how the Docker build works; I guess the Docker image is built as root and all the code inside it is executed as root, so there should not be any issue with different users controlling Chrome.
This Python code works fine on a normal Ubuntu laptop with X Windows. I need to carefully pick matching versions of Chrome and chromedriver; at the moment, checking from within the running Docker container:
root@304ccd3bae83:/opt# /opt/chromedriver/2.33/chromedriver --version
ChromeDriver 2.33.506092 (733a02544d189eeb751fe0d7ddca79a0ee28cce4)
root@304ccd3bae83:/opt# google-chrome-stable --version
Google Chrome 62.0.3202.75
These options helped solve the issue:
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument("--disable-setuid-sandbox")
One of them is needed when you see Chrome failed to start: crashed.
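For context, here is a minimal sketch of how these options are wired up (Selenium 3-style API; the chromedriver path is the one from the question, and with --headless the Xvfb wrapper may not even be necessary):
from selenium import webdriver
from xvfbwrapper import Xvfb

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-setuid-sandbox')

with Xvfb(width=1366, height=768):
    # path from the question; adjust to your chromedriver location
    driver = webdriver.Chrome('/opt/chromedriver/2.33/chromedriver',
                              chrome_options=chrome_options)
    try:
        driver.get('https://www.google.com')
        print(driver.title)
    finally:
        driver.quit()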
Also, make sure there are no zombie chromedriver processes left over from previous executions; use ps aux | grep chromedriver to find the PIDs to kill.
Bear in mind that if you are using the Python multiprocessing library to spawn many processes, each with its own instance of the Chrome browser, then you cannot use Docker (which is supposed to start just one Python process, unless you use something like supervisor); if you try anyway, you may see selenium.common.exceptions.WebDriverException: Message: chrome not reachable.

Tryton ERP MySQL installation

I'm trying to install Tryton ERP with MySQL as the database. It's not quite clear what you are meant to do.
From the config documentation, you simply supply the URI of the database under the [database] section:
[database]
uri = mysql://user:pass@localhost:3306
However, running trytond -v -c /home/user/.config/tryton/3.8/tryton.conf does not seem to get it working. When I try to access 127.0.0.1:8050, where I've got Tryton running, I simply get 127.0.0.1 - - [23/Nov/2015 16:55:10] code 404, message File not found.
One would assume Tryton either creates the database on its own or you need to create it yourself somehow, but I didn't see any documentation about that.
I've also tried adding a database through the Tryton GUI; it encounters the following error:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tryton/gui/window/dbcreate.py", line 65, in server_change
common.refresh_langlist(self.combo_language, host, port)
File "/usr/local/lib/python2.7/dist-packages/tryton/common/common.py", line 253, in refresh_langlist
lang_list = rpc.db_exec(host, port, 'list_lang')
File "/usr/local/lib/python2.7/dist-packages/tryton/rpc.py", line 57, in db_exec
result = getattr(connection.common.db, method)(None, None, *args)
File "/usr/lib/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
File "/usr/local/lib/python2.7/dist-packages/tryton/jsonrpc.py", line 271, in __request
verbose=self.__verbose
File "/usr/lib/python2.7/xmlrpclib.py", line 1273, in request
return self.single_request(host, handler, request_body, verbose)
File "/usr/lib/python2.7/xmlrpclib.py", line 1306, in single_request
return self.parse_response(response)
File "/usr/lib/python2.7/xmlrpclib.py", line 1482, in parse_response
return u.close()
File "/usr/local/lib/python2.7/dist-packages/tryton/jsonrpc.py", line 134, in close
return json.loads(self.data, object_hook=object_hook)
File "/usr/lib/python2.7/dist-packages/simplejson/__init__.py", line 505, in loads
return cls(encoding=encoding, **kw).decode(s)
File "/usr/lib/python2.7/dist-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/lib/python2.7/dist-packages/simplejson/decoder.py", line 389, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
JSONDecodeError: Expecting value: line 1 column 2 (char 1)
I've got the prerequisites that were listed here installed, as well as the MySQL-python package. Should there be anything else?
You should create a database on MySQL with its own tools. Once the database is created, you must initialize it using the following command:
trytond -c <config_file> -d <database name> --all
For the complete reference, see:
http://doc.tryton.org/3.8/trytond/doc/topics/setup_database.html#topics-setup-database.
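For example, assuming a database called tryton and the MySQL user from the config in the question (both are placeholders):
mysql -u user -p -e "CREATE DATABASE tryton CHARACTER SET utf8;"
trytond -c /home/user/.config/tryton/3.8/tryton.conf -d tryton --all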
Once finished, the server will ask for an admin password. Once that is entered, you can connect using the Tryton client with the admin user and the password you set.
In order to access Tryton from a web client, you must install and configure the sao web interface, which can be found at:
https://www.npmjs.com/package/tryton-sao