There are multiple blog posts on how to make Flash work post-EOL for sites or firmware consoles that failed to migrate before the EOL deadline:
downgrade to google-chrome 87.x
build mms.cfg and put it in the appropriate google-chrome config directory
To that end I want to run this patch in a VM (Ubuntu 20.04.1 on VirtualBox 6.1.6) to ensure that my actual machines stay up to date and fully patched. However, I'm finding the patch works on my macOS host but not in the Ubuntu VM: the "run once" button never appears, and the Flash component in the web page shows "download failed".
To remove the potential for typos, I've scripted the steps in Python with a YAML config file. Why does this not work in google-chrome on Ubuntu?
chrome.py
import re, shutil, subprocess, yaml, click
from pathlib import Path
import platform

def cmd(cmd: str = "ls -al", assertfail=True):
    up = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, close_fds=True)
    out = [o.rstrip().decode() for o in up.stdout]
    exitcode = up.wait()
    if assertfail:
        assert exitcode == 0, f"[{exitcode}] {cmd} {out}"
    return exitcode, out

@click.command()
@click.option("--uninstall", default=False, is_flag=True, help="uninstall google chrome")
def main(uninstall):
    click.secho(f"{platform.system()}")
    with open(Path.cwd().joinpath("chrome.yaml")) as f:
        config = yaml.safe_load(f)

    if platform.system() == "Linux":
        if uninstall:
            e, out = cmd("sudo apt-get purge -y google-chrome-stable")
            print(out)
        cdeb = Path.cwd().joinpath(f"chrome-{config['version']}.deb")

        # download required version of chrome if it has not already been downloaded
        if not cdeb.exists():
            e, out = cmd(f"wget --no-verbose -O {cdeb} http://dl.google.com/linux/chrome/deb/pool/main/g/google-chrome-stable/google-chrome-stable_{config['version']}_amd64.deb")
            print(out)

        # check wanted version of chrome is installed
        e, iv = cmd("google-chrome --version", assertfail=False)
        iv = iv[0] if e == 0 else "0.0.0.0"
        vre = re.compile(r"[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+")
        iv = re.search(vre, iv)[0]
        wv = re.search(vre, config["version"])[0]
        click.secho(f"installed: {iv} wanted: {wv} same: {iv == wv}", bg="cyan")
        if iv != wv:
            e, out = cmd(f"sudo apt install -y {cdeb}")
            print(out)

        # make sure required locations of adobe config files exist
        p2 = Path.home().joinpath(".config/google-chrome/Default/Pepper Data/Shockwave Flash/System/")
    elif platform.system() == "Darwin":
        p2 = Path.home().joinpath("Library/Application Support/Google/Chrome/Default/Pepper Data/Shockwave Flash/System/")
    else:
        click.secho(f"unknown operating system {platform.system()}")
        exit(1)
    if not p2.exists():
        p2.mkdir(parents=True)

    # build adobe flash config file
    mmsf = Path.cwd().joinpath("mms.cfg")
    with open(mmsf, "w") as f:
        for l in config["base"]:
            f.write(f"{l}\n")
        for u in config["urls"]:
            for l in [f"{k}={v}{u}\n" for p in config["urlkeys"] for k, v in p.items()]:
                f.write(l)

    # distribute adobe flash config file
    shutil.copy(mmsf, p2)
    click.secho(str(p2.joinpath("mms.cfg")), bg="blue", bold=True, reverse=True)
    with open(p2.joinpath("mms.cfg")) as f:
        click.secho(f.read(), bg="blue")

if __name__ == '__main__':
    main()
chrome.yaml
base:
  - EnableAllowList=1
  - EOLUninstallDisable=1
  - ErrorReportingEnable=1
  # - TraceOutputFileEnable=1
  # - PolicyFileLog=1
  - AllowListPreview=1
  - TraceOutputEcho=1
urls:
  - codegeek.net
  - ultrasounds.com
  - photobox.co.uk
  - secure.photobox.com
  - serving.photos.photobox.com
urlkeys:
  - AllowListUrlPattern: "*://*."
  - WhitelistUrlPattern: "*://*."
  - AllowListUrlPattern: "*://"
  - WhitelistUrlPattern: "*://"
# https://www.ubuntuupdates.org/package/google_chrome/stable/main/base/google-chrome-stable
# 87 - last version with flash bundled
version: 87.0.4280.141-1
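For reference, tracing the write loops in chrome.py against this config, the generated mms.cfg should start like this (truncated here after the first URL):
EnableAllowList=1
EOLUninstallDisable=1
ErrorReportingEnable=1
AllowListPreview=1
TraceOutputEcho=1
AllowListUrlPattern=*://*.codegeek.net
WhitelistUrlPattern=*://*.codegeek.net
AllowListUrlPattern=*://codegeek.net
WhitelistUrlPattern=*://codegeek.net
...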
Related
I use Chrome CDP (the DevTools Protocol) for some task automation.
One has to first start Chrome with CDP enabled:
chromium-browser --remote-debugging-port=9222
and it reports something like:
DevTools listening on ws://127.0.0.1:9222/devtools/browser/3e3152c6-20fc-4cea-a9d2-60e4e6b8ad70
I have to copy the ws://... URL to my config file manually to be able to proceed with my task. I could probably work around this by using Python's subprocess.Popen to launch the browser and extract the URL, but isn't there a way to make this URL configurable, or at least fixed?
Thanks to wOxxOm! It really can be read from http://127.0.0.1:9222/json/version (Documentation).
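A minimal sketch of reading that endpoint with only the standard library (assuming the browser is already running with --remote-debugging-port=9222):
import json
from urllib.request import urlopen

# /json/version exposes browser metadata, including the browser-level WebSocket endpoint
with urlopen("http://127.0.0.1:9222/json/version") as resp:
    info = json.load(resp)

print(info["webSocketDebuggerUrl"])  # e.g. ws://127.0.0.1:9222/devtools/browser/<uuid>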
As an alternative, I wrote a Python script to launch the browser and grab the endpoint as well:
from subprocess import Popen, PIPE

class Browser:
    BANNER = "DevTools listening on "

    def __init__(self, path="/usr/bin/chromium-browser",
                 port=9222, ignore_tls_errors=False):
        cmd = [path, f"--remote-debugging-port={port}"]
        if ignore_tls_errors:
            cmd.append("--ignore-certificate-errors")
        self.process = Popen(cmd, stdout=PIPE, stderr=PIPE, universal_newlines=True)
        # Chrome prints the banner on stderr; read until it appears
        output = ""
        for line in self.process.stderr:
            output += line
            if self.BANNER in output:
                start_pos = output.find(self.BANNER) + len(self.BANNER)
                end_pos = output.find("\n", start_pos)
                self.url = output[start_pos:end_pos]
                break

    def close(self):
        self.process.terminate()

if __name__ == "__main__":
    try:
        b = Browser()
        print("URL:", b.url)
    finally:
        b.close()
I'm building a simple Dash app, including an html.Video() component. The issue is that local video files aren't playing (videos hosted online are working fine).
import dash
from dash import html  # on older Dash versions: import dash_html_components as html
from dash.dependencies import Input, Output
import flask
import os

app = dash.Dash(__name__)
app.layout = html.Video(src="/static/test.mp4", controls=True)

@app.server.route('/static/<path:path>')
def serve_static(path):
    root_dir = os.getcwd()
    return flask.send_from_directory(os.path.join(root_dir, 'static'), path)

if __name__ == '__main__':
    app.run_server(debug=True)
Folder structure:
app.py
/static
    test.mp4
I use OpenCV to create the .mp4 files:
import cv2

def crop_video(vid_file, start_frame, end_frame, fps=30.0):
    vid_name = "/static/test.mp4"
    cap = cv2.VideoCapture(vid_file)
    ret, frame = cap.read()
    h, w, _ = frame.shape
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(vid_name, fourcc, fps, (w, h))
    f = 0
    while ret:
        f += 1
        if start_frame <= f <= end_frame:
            writer.write(frame)
        ret, frame = cap.read()
    writer.release()
    cap.release()
    return vid_name
I've tried the solutions from here, here, here, and here, without any luck.
Figured out the issue. For whatever reason, the .mp4 videos created by OpenCV with the "mp4v" fourcc are not compatible with the html.Video() component in Dash (browsers generally expect H.264 inside .mp4 files). Using ffmpeg to create the video worked as expected. Hope it saves someone a few hours in the future.
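For anyone hitting the same wall, a minimal sketch of the re-encode step (assumes ffmpeg is installed and on PATH, and that the OpenCV writer saved to a hypothetical temporary file test_raw.mp4 first):
import subprocess

# Re-encode the mp4v stream produced by OpenCV into H.264, which browsers can play
subprocess.run(
    ["ffmpeg", "-y", "-i", "static/test_raw.mp4",
     "-vcodec", "libx264", "static/test.mp4"],
    check=True,
)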
I have an existing API in my AWS account. Now I am trying to use Ansible to redeploy the API after introducing resource policy changes.
According to AWS, I need to use the CLI command below to redeploy the API:
- name: deploy API
  command: >
    aws apigateway update-stage --region us-east-1
    --rest-api-id <rest-api-id>
    --stage-name 'stage'
    --patch-operations op='replace',path='/deploymentId',value='<deployment-id>'
Above, the 'deploymentId' is different after every deployment, which is why I am trying to capture it as a variable so the redeployment steps can be automated.
I can get the previous deployment information using the CLI below:
- name: Get deployment information
  command: >
    aws apigateway get-deployments
    --rest-api-id 123454ne
    --region us-east-1
  register: deployment_info
And the output looks like this:
deployment_info.stdout_lines:
- '{'
- ' "items": ['
- ' {'
- ' "id": "abcd",'
- ' "createdDate": 1228509116'
- ' }'
- ' ]'
- '}'
I was using deployment_info.items.id as the deploymentId and couldn't make it work. Now I am stuck on what Ansible expression will pull the id out of the output so it can be used as the deploymentId in the deployment command.
How can I use this id for deploymentId in the deployment command?
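One approach that should work is parsing the registered stdout as JSON (a sketch, assuming deployment_info.stdout holds the JSON shown above; note that .items collides with the built-in dict method in Jinja2, so bracket notation is needed):
- name: Extract the latest deployment id from the CLI output
  set_fact:
    deployment_id: "{{ (deployment_info.stdout | from_json)['items'][0]['id'] }}"

- name: deploy API
  command: >
    aws apigateway update-stage --region us-east-1
    --rest-api-id <rest-api-id>
    --stage-name 'stage'
    --patch-operations op='replace',path='/deploymentId',value='{{ deployment_id }}'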
I created a small Ansible module which you might find useful:
#!/usr/bin/python
# Creates a new deployment for an API GW stage
# See https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-deployments.html
# Based on https://github.com/ansible-collections/community.aws/blob/main/plugins/modules/aws_api_gateway.py

# TODO needed?
# from __future__ import absolute_import, division, print_function
# __metaclass__ = type

import json
import traceback

try:
    import botocore
except ImportError:
    pass  # Handled by AnsibleAWSModule

from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry

def main():
    argument_spec = dict(
        api_id=dict(type='str', required=True),
        stage=dict(type='str', required=True),
        deploy_desc=dict(type='str', required=False, default='')
    )
    module = AnsibleAWSModule(
        argument_spec=argument_spec,
        supports_check_mode=True
    )
    api_id = module.params.get('api_id')
    stage = module.params.get('stage')
    client = module.client('apigateway')

    # Update stage if not in check_mode
    deploy_response = None
    changed = False
    if not module.check_mode:
        try:
            deploy_response = create_deployment(client, api_id, **module.params)
            changed = True
        except (botocore.exceptions.ClientError, botocore.exceptions.EndpointConnectionError) as e:
            msg = "Updating api {0}, stage {1}".format(api_id, stage)
            module.fail_json_aws(e, msg)

    exit_args = {"changed": changed, "api_deployment_response": deploy_response}
    module.exit_json(**exit_args)

retry_params = {"retries": 10, "delay": 10, "catch_extra_error_codes": ['TooManyRequestsException']}

@AWSRetry.jittered_backoff(**retry_params)
def create_deployment(client, rest_api_id, **params):
    result = client.create_deployment(
        restApiId=rest_api_id,
        stageName=params.get('stage'),
        description=params.get('deploy_desc')
    )
    return result

if __name__ == '__main__':
    main()
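A hypothetical playbook task using the module above (assumes the file is saved as library/api_gw_deploy.py so Ansible picks it up as a local module; the module and variable names here are illustrative only):
- name: Create a new deployment for the stage
  api_gw_deploy:
    api_id: "123454ne"
    stage: "stage"
    deploy_desc: "redeploy after resource policy change"
  register: deploy_result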
How do I convert a .pb file into a .tflite file, using Python 3 or in the terminal?
I don't know any of the details of the model. Here is the link of the pb file.
(Edited)
I tried converting the .pb file to a .tflite file with the following code:
import tensorflow.compat.v1 as tf
import numpy as np

graph_def_file = "./models/20170512-110547.pb"

def representative_dataset_gen():
    for _ in range(num_calibration_steps):
        # Get sample input data as a numpy array in a method of your choosing.
        yield [input]

converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file,
                                                      input_arrays=["input", "phase_train"],
                                                      output_arrays=["embeddings"],
                                                      input_shapes={"input": [1, 160, 160, 3], "phase_train": False})
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
print("converting")
open("./models/converted_model.tflite", "wb").write(tflite_model)
print("Done")
Error: I am getting a Segmentation fault (core dumped):
2020-01-20 11:42:18.153263: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory
2020-01-20 11:42:18.153363: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory
2020-01-20 11:42:18.153385: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2020-01-20 11:42:18.905028: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-01-20 11:42:18.906845: E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2020-01-20 11:42:18.906874: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (kalgudi-GA-78LMT-USB3-6-0): /proc/driver/nvidia/version does not exist
2020-01-20 11:42:18.934144: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3616020000 Hz
2020-01-20 11:42:18.934849: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x39aa0f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-01-20 11:42:18.934910: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
Segmentation fault (core dumped)
Without any details of the model, you cannot convert it to a .tflite model. I suggest that you go through this document on post-training quantization again, as there are too many details that would be redundant to repeat here.
Here is an example of post-training quantization of a frozen graph. The model is taken from here (you can see that it's a full tarball of information about the model):
import sys, os, glob
import tensorflow as tf
import pathlib
import numpy as np

if len(sys.argv) != 2:
    print('Usage: ' + sys.argv[0] + ' <frozen_graph_file>')
    exit()

tf.compat.v1.enable_eager_execution()
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.DEBUG)

# Representative data for calibration; random images stand in for a real dataset
def fake_representative_data_gen():
    for _ in range(100):
        fake_image = np.random.random((1, 192, 192, 3)).astype(np.float32)
        yield [fake_image]

frozen_graph = sys.argv[1]
input_array = ['input']
output_array = ['MobilenetV1/Predictions/Reshape_1']

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(frozen_graph, input_array, output_array)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = fake_representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

quant_dir = pathlib.Path(os.getcwd(), 'output')
quant_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = quant_dir/'mobilenet_v1_0.25_192_quant.tflite'
tflite_model_file.write_bytes(tflite_model)
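To sanity-check the resulting file, one can run a single inference through the TFLite interpreter; a sketch (the path matches the output above, and the random uint8 input is only for smoke-testing):
import numpy as np
import tensorflow as tf

# Load the quantized model and run one forward pass on random data
interpreter = tf.lite.Interpreter(model_path="output/mobilenet_v1_0.25_192_quant.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

fake = np.random.randint(0, 256, size=input_details[0]['shape'], dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], fake)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)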
I cannot get the Google Drive files().delete() method to work via the Python API.
It is acting broken.
Here is some info about my setup:
Ubuntu 16.04
Python 3.5.2 (default, Nov 12 2018, 13:43:14)
google-api-python-client (1.7.9)
google-auth (1.6.3)
google-auth-httplib2 (0.0.3)
google-auth-oauthlib (0.3.0)
Below, I list a Python script which can reproduce the bug:
"""
googdrive17.py
This script should delete files named 'hello.txt'
Ref:
https://developers.google.com/drive/api/v3/quickstart/python
https://developers.google.com/drive/api/v3/reference/files
Demo (Ubuntu):
sudo apt install python3-pip
sudo pip3 install --upgrade google-api-python-client
sudo pip3 install --upgrade google-auth-httplib2
sudo pip3 install --upgrade google-auth-oauthlib
python3 googdrive17.py
"""
import pickle
import os.path
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

# I declare a very permissive scope (for training only):
SCOPES = ['https://www.googleapis.com/auth/drive']

creds = None
# The file token.pickle stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first time.
if os.path.exists('token.pickle'):
    with open('token.pickle', 'rb') as fh:
        creds = pickle.load(fh)

# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    else:
        flow = InstalledAppFlow.from_client_secrets_file(
            'credentials.json', SCOPES)
        creds = flow.run_local_server()
    # Save the credentials for the next run
    with open('token.pickle', 'wb') as token:
        pickle.dump(creds, token)

# I create a file so I can upload it:
with open('/tmp/hello.txt', 'w') as fh:
    fh.write("hello world\n")

# From my laptop, I upload a file named hello.txt:
drive_service = build('drive', 'v3', credentials=creds)
file_metadata = {'name': 'hello.txt'}
media = MediaFileUpload('/tmp/hello.txt', mimetype='text/plain')
create_response = drive_service.files().create(body=file_metadata,
                                               media_body=media,
                                               fields='id').execute()
file_id = create_response.get('id')
print('new /tmp/hello.txt file_id:')
print(file_id)

# Q: With googleapiclient, how to filter files list()-response?
# A1: https://developers.google.com/drive/api/v3/reference/files/list
# A2: https://developers.google.com/drive/api/v3/search-files
list_response = drive_service.files().list(
    orderBy="createdTime desc",
    q="name='hello.txt'",
    pageSize=22,
    fields="files(id, name)"
).execute()

items = list_response.get('files', [])
if items:
    for item in items:
        print('I will try to delete this file:')
        print(u'{0} ({1})'.format(item['name'], item['id']))
        del_response = drive_service.files().delete(fileId=item['id'])
        print('del_response.body:')
        print(del_response.body)
    print('I will try to emptyTrash:')
    trash_response = drive_service.files().emptyTrash()
    print('trash_response.body:')
    print(trash_response.body)
else:
    print('hello.txt not found in your google-drive account.')
When I run the script I see output similar to that listed below:
$ python3 googdrive17.py
new /tmp/hello.txt file_id:
1m8nKOfIeB0E5t60F_-9bKwIJds8PSvYY
I will try to delete this file:
hello.txt (1m8nKOfIeB0E5t60F_-9bKwIJds8PSvYY)
del_response.body:
None
I will try to delete this file:
hello.txt (1Ow4fcUBgEYUy3ezYScDKlLSMbp-hyOLT)
del_response.body:
None
I will try to delete this file:
hello.txt (1TiUrLgQdY1Cb9w0UWHjnmj7HZBaFsKcp)
del_response.body:
None
I will try to emptyTrash:
trash_response.body:
None
$
I see that two of the API calls work well:
files.list()
files.create()
Two calls appear broken:
files.delete()
files.emptyTrash()
Perhaps, though, I call them incorrectly?
How about this modification?
First, the official documentation of the Files: delete method and the Files: emptyTrash method says the following:
If successful, this method returns an empty response body.
Because of this, when the file is deleted and the trash is cleared, the returned del_response and trash_response are empty.
Modified script:
From your question, I could understand that files.list() and files.create() work. So I would like to propose modifications for files.delete() and files.emptyTrash(). Please modify your script as follows.
From:
for item in items:
    print('I will try to delete this file:')
    print(u'{0} ({1})'.format(item['name'], item['id']))
    del_response = drive_service.files().delete(fileId=item['id'])
    print('del_response.body:')
    print(del_response.body)
print('I will try to emptyTrash:')
trash_response = drive_service.files().emptyTrash()
print('trash_response.body:')
print(trash_response.body)
To:
for item in items:
    print('I will try to delete this file:')
    print(u'{0} ({1})'.format(item['name'], item['id']))
    del_response = drive_service.files().delete(fileId=item['id']).execute()  # Modified
    print('del_response.body:')
    print(del_response)
print('I will try to emptyTrash:')
trash_response = drive_service.files().emptyTrash().execute()  # Modified
print('trash_response.body:')
print(trash_response)
execute() was added for drive_service.files().delete() and drive_service.files().emptyTrash().
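As a quick verification after the modification, listing with the same query should return no matches (a sketch reusing drive_service from the question's script):
# After the delete calls execute, the same query should come back empty
check = drive_service.files().list(
    q="name='hello.txt'",
    fields="files(id, name)"
).execute()
print(len(check.get('files', [])))  # expected: 0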
References:
Files: delete
Files: emptyTrash
If this was not the result you want, I apologize.