Set license key via environment variable - configuration

I have a problem. I am using ArangoDB 3.8 Enterprise for Windows and I have received a license key.
Unfortunately, I do not know where to set the key. There are some .conf files, e.g. arangod.conf.
Where can I set a license key via an environment variable in ArangoDB 3.8 Enterprise on Windows?
Example
ARANGO_LICENSE_KEY: EVALUATION:<key>
When using Docker, I can set the key initially when starting the ArangoDB container.
docker run -p 8529:8529 -e ARANGO_ROOT_PASSWORD=openSesame -e ARANGO_LICENSE_KEY=EVALUATION:<key> arangodb/enterprise:3.8.6
arangod.conf
# ArangoDB configuration file
#
# Documentation:
# https://www.arangodb.com/docs/stable/administration-configuration.html
#
[database]
directory = #ROOTDIR#var/lib/arangodb3
[server]
# Specify the endpoint for HTTP requests by clients.
# tcp://ipv4-address:port
# tcp://[ipv6-address]:port
# ssl://ipv4-address:port
# ssl://[ipv6-address]:port
# unix:///path/to/socket
#
# Examples:
# endpoint = tcp://0.0.0.0:8529
# endpoint = tcp://127.0.0.1:8529
# endpoint = tcp://localhost:8529
# endpoint = tcp://myserver.arangodb.com:8529
# endpoint = tcp://[::]:8529
# endpoint = tcp://[fe80::21a:5df1:aede:98cf]:8529
#
endpoint = tcp://127.0.0.1:8529
storage-engine = auto
# reuse a port on restart or wait until it is freed by the operating system
# reuse-address = false
authentication = true
# number of maximal server threads. use 0 to make arangod determine the
# number of threads automatically, based on available CPUs
# maximal-threads = 0
# gather server statistics
statistics = true
# the user and group are normally set in the start script
# uid = arangodb
# gid = arangodb
[javascript]
startup-directory = #ROOTDIR#/usr/share/arangodb3/js
app-path = #ROOTDIR#var/lib/arangodb3-apps
# app-path = //arangodb3/apps
# number of V8 contexts available for JavaScript execution. use 0 to
# make arangod determine the number of contexts automatically.
# v8-contexts = 0
[foxx]
# enable Foxx queues in the server
# queues = true
# interval (seconds) to use for polling jobs in Foxx queues
# queues-poll-interval = 1
[log]
level = info
# file = #ROOTDIR#var/log/arangodb3/arangod.log
[cluster]
[rocksdb]
# encryption-keyfile=/your-encryption-file

Related

chrome flash EOL on linux/ubuntu

There are multiple blogs on how to make Flash work post-EOL for sites or firmware consoles that have failed to migrate before EOL.
downgrade to google-chrome 87.x
build mms.cfg and put in appropriate google-chrome config directory
To this end, I want to run this patch in a VM (Ubuntu 20.04.1 on VirtualBox 6.1.6) to ensure that my actual machines stay up to date and fully patched. However, I'm finding the patch works on my macOS host but not in the Ubuntu VM. The "run once" button never appears and the Flash component in the webpage shows "download failed".
To remove the potential for typos, I've coded it in Python with a YAML config file. Why does this not work in google-chrome on Ubuntu?
chrome.py
import re, shutil, subprocess, yaml, click
from pathlib import Path
import platform

def cmd(cmd: str = "ls -al", assertfail=True):
    up = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, close_fds=True)
    str = [o.rstrip().decode() for o in up.stdout]
    exitcode = up.wait()
    if assertfail: assert exitcode == 0, f"[{exitcode}] {cmd} {str}"
    return exitcode, str

@click.command()
@click.option("--uninstall", default=False, is_flag=True, help="uninstall google chrome")
def main(uninstall):
    click.secho(f"{platform.system()}")
    with open(Path.cwd().joinpath("chrome.yaml")) as f: config = yaml.safe_load(f)
    if platform.system()=="Linux":
        if uninstall:
            e, out = cmd("sudo apt-get purge -y google-chrome-stable")
            print(out)
        cdeb = Path.cwd().joinpath(f"chrome-{config['version']}.deb")
        # download required version of chrome if it has not already been downloaded
        if not cdeb.exists():
            e, out = cmd(f"wget --no-verbose -O {cdeb} http://dl.google.com/linux/chrome/deb/pool/main/g/google-chrome-stable/google-chrome-stable_{config['version']}_amd64.deb")
            print(out)
        # check wanted version of chrome is installed
        e, iv = cmd("google-chrome --version", assertfail=False)
        iv = iv[0] if e==0 else "0.0.0.0"
        vre = re.compile("[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+")
        iv = re.search(vre, iv)[0]
        wv = re.search(vre, config["version"])[0]
        click.secho(f"installed: {iv} wanted: {wv} same: {iv==wv}", bg="cyan")
        if iv!=wv:
            e, out = cmd(f"sudo apt install -y {cdeb}")
            print(out)
        # make sure required locations of adobe config files exist
        p2 = Path.home().joinpath(".config/google-chrome/Default/Pepper Data/Shockwave Flash/System/")
    elif platform.system()=="Darwin":
        p2 = Path.home().joinpath("Library/Application Support/Google/Chrome/Default/Pepper Data/Shockwave Flash/System/")
    else:
        click.secho(f"unknown operating system {platform.system()}")
        exit(1)
    if not p2.exists():
        p2.mkdir(parents=True)
    # build adobe flash config file
    mmsf = Path.cwd().joinpath("mms.cfg")
    with open(mmsf, "w") as f:
        for l in config["base"]: f.write(f"{l}\n")
        for u in config["urls"]:
            for l in [f"{k}={v}{u}\n" for p in config["urlkeys"] for k, v in p.items()]: f.write(l)
    # distribute adobe flash config file
    shutil.copy(mmsf, p2)
    click.secho(str(p2.joinpath("mms.cfg")), bg="blue", bold=True, reverse=True)
    with open(p2.joinpath("mms.cfg")) as f:
        click.secho(f.read(), bg="blue")

if __name__ == '__main__':
    main()
chrome.yaml
base:
- EnableAllowList=1
- EOLUninstallDisable=1
- ErrorReportingEnable=1
# - TraceOutputFileEnable=1
# - PolicyFileLog=1
- AllowListPreview=1
- TraceOutputEcho=1
urls:
- codegeek.net
- ultrasounds.com
- photobox.co.uk
- secure.photobox.com
- serving.photos.photobox.com
urlkeys:
- AllowListUrlPattern: "*://*."
- WhitelistUrlPattern: "*://*."
- AllowListUrlPattern: "*://"
- WhitelistUrlPattern: "*://"
# https://www.ubuntuupdates.org/package/google_chrome/stable/main/base/google-chrome-stable
# 87 - last version with flash bundled
version: 87.0.4280.141-1
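For reference, with the YAML above the script should emit an mms.cfg that begins roughly like this (only the first URL shown; the two commented base entries are skipped because they are YAML comments):
EnableAllowList=1
EOLUninstallDisable=1
ErrorReportingEnable=1
AllowListPreview=1
TraceOutputEcho=1
AllowListUrlPattern=*://*.codegeek.net
WhitelistUrlPattern=*://*.codegeek.net
AllowListUrlPattern=*://codegeek.net
WhitelistUrlPattern=*://codegeek.net
and so on for the remaining URLs.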

tinydb: Empty query was evaluated

One of the things the code below does is put different student IDs into a TinyDB database after checking whether the new ID is already present.
The code:
# enroll.py
# USAGE
# python enroll.py --id S1901 --name somename --conf config/config.json

# import the necessary packages
from pyimagesearch.utils import Conf
from imutils.video import VideoStream
from tinydb import TinyDB
from tinydb import where
import face_recognition
import argparse
import imutils
import pyttsx3
import time
import cv2
import os

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--id", required=True,
    help="Unique student ID of the student")
ap.add_argument("-n", "--name", required=True,
    help="Name of the student")
ap.add_argument("-c", "--conf", required=True,
    help="Path to the input configuration file")
args = vars(ap.parse_args())

# load the configuration file
conf = Conf(args["conf"])

# initialize the database and student table objects
db = TinyDB(conf["db_path"])
studentTable = db.table("student")

# retrieve student details from the database
student = studentTable.search(where(args["id"]))

# check if an entry for the student id does *not* exist, if so, then
# enroll the student
if len(student) == 0:
    # initialize the video stream and allow the camera sensor to warmup
    print("[INFO] warming up camera...")
    vs = VideoStream(src=0).start()
    time.sleep(2.0)

    # initialize the number of face detections and the total number
    # of images saved to disk
    faceCount = 0
    total = 0

    # ask the student to stand in front of the camera
    print("{} please stand in front of the camera until you " \
        "receive further instructions".format(args["name"]))

    # initialize the status as detecting
    status = "detecting"

    # create the directory to store the student's data
    os.makedirs(os.path.join(conf["dataset_path"], conf["class"],
        args["id"]), exist_ok=True)

    # loop over the frames from the video stream
    while True:
        # grab the frame from the threaded video stream, resize it (so
        # face detection will run faster), flip it horizontally, and
        # finally clone the frame (just in case we want to write the
        # frame to disk later)
        frame = vs.read()
        frame = imutils.resize(frame, width=400)
        frame = cv2.flip(frame, 1)
        orig = frame.copy()

        # convert the frame from BGR (OpenCV ordering) to dlib
        # ordering (RGB) and detect the (x, y)-coordinates of the
        # bounding boxes corresponding to each face in the input image
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        boxes = face_recognition.face_locations(rgb,
            model=conf["detection_method"])

        # loop over the face detections
        for (top, right, bottom, left) in boxes:
            # draw the face detections on the frame
            cv2.rectangle(frame, (left, top), (right, bottom),
                (0, 255, 0), 2)

            # check if the total number of face detections are less
            # than the threshold, if so, then skip the iteration
            if faceCount < conf["n_face_detection"]:
                # increment the detected face count and set the
                # status as detecting face
                faceCount += 1
                status = "detecting"
                continue

            # save the frame to correct path and increment the total
            # number of images saved
            p = os.path.join(conf["dataset_path"], conf["class"],
                args["id"], "{}.png".format(str(total).zfill(5)))
            cv2.imwrite(p, orig[top:bottom, left:right])
            total += 1

            # set the status as saving frame
            status = "saving"

        # draw the status on to the frame
        cv2.putText(frame, "Status: {}".format(status), (10, 20),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)

        # show the output frame
        cv2.imshow("Frame", frame)
        cv2.waitKey(1)

        # if the required number of faces are saved then break out from
        # the loop
        if total == conf["face_count"]:
            # let the student know that face enrolling is over
            print("Thank you {} you are now enrolled in the {} " \
                "class.".format(args["name"], conf["class"]))
            break

    # insert the student details into the database
    studentTable.insert({args["id"]: [args["name"], "enrolled"]})

    # print the total faces saved and do a bit of cleanup
    print("[INFO] {} face images stored".format(total))
    print("[INFO] cleaning up...")
    cv2.destroyAllWindows()
    vs.stop()

# otherwise, an entry for the student id exists
else:
    # get the name of the student
    name = student[0][args["id"]][0]
    print("[INFO] {} has already been enrolled...".format(
        name))

# close the database
db.close()
ISSUE:
When I run this code for the first time, everything works fine.
>> python3 enroll.py --id S1111 --name thor --conf config/config.json
I get my ID in my JSON file as shown below:
{"student": {"1": {"S1111": ["thor", "enrolled"]}}}
But when I try to put in another ID:
python3 enroll.py --id S1112 --name hulk --conf config/config.json
I get the following error:
ERROR:
Traceback (most recent call last):
File "enroll.py", line 35, in <module>
student = studentTable.search(where(args["id"]))
File "/usr/lib/python3.5/site-packages/tinydb/table.py", line 222, in search
docs = [doc for doc in self if cond(doc)]
File "/usr/lib/python3.5/site-packages/tinydb/table.py", line 222, in <listcomp>
docs = [doc for doc in self if cond(doc)]
File "/usr/lib/python3.5/site-packages/tinydb/queries.py", line 59, in __call__
return self._test(value)
File "/usr/lib/python3.5/site-packages/tinydb/queries.py", line 136, in notest
raise RuntimeError('Empty query was evaluated')
RuntimeError: Empty query was evaluated
If I change my table name from student to something else, then again it will store the ID only the first time and then give the same error. I'm not sure what's wrong here.
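For what it's worth, the traceback points at the query itself rather than the table: where(args["id"]) only selects a field, and on the first run the table is empty, so the incomplete query is never evaluated; once a document exists, TinyDB tries to evaluate it and raises "Empty query was evaluated". A complete query needs a test attached, for example (a minimal sketch, assuming TinyDB 4.x and a hypothetical db.json; the real script would use conf["db_path"] and args["id"]):
from tinydb import TinyDB, where

db = TinyDB("db.json")              # hypothetical path
studentTable = db.table("student")

# where("S1112") alone only names a field; .exists() turns it into a full
# query that matches documents containing that key
student = studentTable.search(where("S1112").exists())
print(student)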

Zabbix Web scenarios variables random number or other function

I need to post a variable with a random number value. How can I generate a random variable in a web scenario? Can I run some script or macro to generate a random value for a scenario or step?
There is no native way to do it; as you guessed, you can make it work with a macro and a custom script.
You can define a {$RANDOM} host macro and use it in the web scenario step as a post field value.
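For example, the step's post field could contain something like this (the field name is just an illustration):
random_value={$RANDOM}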
Then you have to change it periodically with a crontabbed script; here is a Python sample:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Set a random macro to a value.
Provide user from the commandline or from Env var support:
# export ZABBIX_SERVER='https://your_zabbix_host/zabbix/'
# export ZABBIX_USERNAME='admin'
# export ZABBIX_PASSWORD='secretPassword'
$ ./setRandomMacro.py -u admin -p zabbix -Z http://yourzabbix -H yourHost -M '{$RANDOM}'
Connecting to http://yourzabbix
Host yourHost (Id: ----)
{$RANDOM}: current value "17" -> new value "356"
$ ./setRandomMacro.py -u admin -p zabbix -Z http://yourzabbix -H yourHost -M '{$RANDOM}'
Connecting to http://yourzabbix
Host yourHost (Id: ----)
{$RANDOM}: current value "356" -> new value "72"
"""
from zabbix.api import ZabbixAPI
import json
import argparse
import getopt
import sys
import os
import random

# Class for argparse env variable support
class EnvDefault(argparse.Action):
    # From https://stackoverflow.com/questions/10551117/
    def __init__(self, envvar, required=True, default=None, **kwargs):
        if not default and envvar:
            if envvar in os.environ:
                default = os.environ[envvar]
        if required and default:
            required = False
        super(EnvDefault, self).__init__(default=default, required=required,
                                         **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        setattr(namespace, self.dest, values)

def jsonPrint(jsonUgly):
    print(json.dumps(jsonUgly, indent=4, separators=(',', ': ')))

def ArgumentParser():
    parser = argparse.ArgumentParser()
    parser.add_argument('-Z',
                        required=True,
                        action=EnvDefault,
                        envvar='ZABBIX_SERVER',
                        help="Specify the zabbix server URL ie: http://yourserver/zabbix/ (ZABBIX_SERVER environment variable)",
                        metavar='zabbix-server-url')
    parser.add_argument('-u',
                        required=True,
                        action=EnvDefault,
                        envvar='ZABBIX_USERNAME',
                        help="Specify the zabbix username (ZABBIX_USERNAME environment variable)",
                        metavar='Username')
    parser.add_argument('-p',
                        required=True,
                        action=EnvDefault,
                        envvar='ZABBIX_PASSWORD',
                        help="Specify the zabbix password (ZABBIX_PASSWORD environment variable)",
                        metavar='Password')
    parser.add_argument('-H',
                        required=True,
                        help="Hostname",
                        metavar='hostname')
    parser.add_argument('-M',
                        required=True,
                        help="Macro to set",
                        metavar='macro')
    return parser.parse_args()

def main(argv):
    # Parse arguments and build work variables
    args = ArgumentParser()
    zabbixURL = args.Z
    zabbixUsername = args.u
    zabbixPassword = args.p
    hostName = args.H
    macroName = args.M

    # API Connect
    print('Connecting to {}'.format(zabbixURL))
    zapi = ZabbixAPI(url=zabbixURL, user=zabbixUsername,
                     password=zabbixPassword)
    hostObj = zapi.host.get(search={'host': hostName}, output='hostids')
    print('Host {} (Id: {})'.format(hostName, hostObj[0]['hostid']))
    currentMacro = zapi.usermacro.get(
        hostids=hostObj[0]['hostid'], filter={'macro': macroName})
    if (currentMacro):
        newMacroValue = random.randint(1, 1001)
        print('{}: current value "{}" -> new value "{}"'.format(macroName,
            currentMacro[0]['value'], newMacroValue))
        zapi.usermacro.update(
            hostmacroid=currentMacro[0]['hostmacroid'], value=newMacroValue)
    else:
        print('No {} macro found on host {}'.format(macroName, hostName))

if __name__ == "__main__":
    main(sys.argv[1:])
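To keep the value changing, the script can then be run from cron; a possible entry, mirroring the usage shown in the docstring (the path and interval are just an illustration):
*/5 * * * * /usr/local/bin/setRandomMacro.py -u admin -p zabbix -Z http://yourzabbix -H yourHost -M '{$RANDOM}'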

Converting csv file to JSON in flume

I am trying to pass a CSV file from Flume to Kafka. I am able to pass the entire file directly from Flume to Kafka using the following config file.
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe the source
a1.sources.r1.type = exec
a1.sources.r1.command = cat /User/Desktop/logFile.csv
# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = kafkaTopic
a1.sinks.k1.brokerList = localhost:9092
a1.sinks.k1.batchSize = 20
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 10000
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
But I want it to be converted to JSON format before passing it to Kafka for further processing. Can someone please advise me on how to convert the file from CSV to JSON format?
Thanks!!
I think you need to write your own interceptor (see the sketch after the link below). Start by implementing the Interceptor interface, then:
Read the CSV from the Flume event body.
Parse it and compose the JSON.
Put it back into the event body.
Example: https://questforthought.wordpress.com/2014/01/13/using-flume-interceptor-multiplexing/
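As a rough illustration (not taken from the linked post), such an interceptor might look like the sketch below. The package and class names are hypothetical, the CSV is assumed to have a fixed three-column id,name,score layout, and the JSON is assembled by hand to keep the sketch dependency-free:
package com.example.flume;  // hypothetical package

import java.nio.charset.StandardCharsets;
import java.util.List;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

public class CsvToJsonInterceptor implements Interceptor {

    @Override
    public void initialize() {
        // nothing to set up
    }

    @Override
    public Event intercept(Event event) {
        // read the CSV line from the event body (assumes id,name,score order)
        String line = new String(event.getBody(), StandardCharsets.UTF_8).trim();
        String[] cols = line.split(",", -1);

        // compose the JSON and put it back into the event body
        String json = String.format("{\"id\":\"%s\",\"name\":\"%s\",\"score\":\"%s\"}",
                cols[0], cols[1], cols[2]);
        event.setBody(json.getBytes(StandardCharsets.UTF_8));
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> events) {
        for (Event event : events) {
            intercept(event);
        }
        return events;
    }

    @Override
    public void close() {
        // nothing to clean up
    }

    public static class Builder implements Interceptor.Builder {
        @Override
        public Interceptor build() {
            return new CsvToJsonInterceptor();
        }

        @Override
        public void configure(Context context) {
            // no options needed for this sketch
        }
    }
}
The compiled jar would go on the Flume agent's classpath and the interceptor would be attached to the source in the agent config, for example:
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.example.flume.CsvToJsonInterceptor$Builder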

How do I copy a set of csv files from my local directory to HDFS using Flume

How do I copy a set of CSV files from my local directory to HDFS using Flume? I tried using a spooling directory as my source, but it failed to copy. Then I used the following Flume configuration to get my result:
agent1.sources = tail
agent1.channels = MemoryChannel-2
agent1.sinks = HDFS
agent1.sources.tail.type = exec
agent1.sources.tail.command = tail -F /home/cloudera/runs/*
agent1.sources.tail.channels = MemoryChannel-2
agent1.sinks.HDFS.channel = MemoryChannel-2
agent1.sinks.HDFS.type = hdfs
agent1.sinks.HDFS.hdfs.path = hdfs://localhost:8020/user/cloudera/runs
agent1.sinks.HDFS.hdfs.file.Type = DataStream
agent1.channels.MemoryChannel-2.type = memory
I got my files copied to HDFS, but they contain special characters and are of no use to me. My local directory is /home/cloudera/runs and my HDFS target directory is /user/cloudera/runs.
I used the Flume configuration below to get the job done.
#Flume Configuration Starts
# Define a file channel called fileChannel on agent_slave_1
agent_slave_1.channels.fileChannel1_1.type = file
# on linux FS
agent_slave_1.channels.fileChannel1_1.capacity = 200000
agent_slave_1.channels.fileChannel1_1.transactionCapacity = 1000
# Define a source for agent_slave_1
agent_slave_1.sources.source1_1.type = spooldir
# on linux FS
#Spooldir in my case is /home/cloudera/runs
agent_slave_1.sources.source1_1.spoolDir = /home/cloudera/runs/
agent_slave_1.sources.source1_1.fileHeader = false
agent_slave_1.sources.source1_1.fileSuffix = .COMPLETED
agent_slave_1.sinks.hdfs-sink1_1.type = hdfs
#Sink is /user/cloudera/runs_scored under hdfs
agent_slave_1.sinks.hdfs-sink1_1.hdfs.path = hdfs://localhost.localdomain:8020/user/cloudera/runs_scored/
agent_slave_1.sinks.hdfs-sink1_1.hdfs.batchSize = 1000
agent_slave_1.sinks.hdfs-sink1_1.hdfs.rollSize = 268435456
agent_slave_1.sinks.hdfs-sink1_1.hdfs.rollInterval = 0
agent_slave_1.sinks.hdfs-sink1_1.hdfs.rollCount = 50000000
agent_slave_1.sinks.hdfs-sink1_1.hdfs.writeFormat=Text
agent_slave_1.sinks.hdfs-sink1_1.hdfs.fileType = DataStream
agent_slave_1.sources.source1_1.channels = fileChannel1_1
agent_slave_1.sinks.hdfs-sink1_1.channel = fileChannel1_1
agent_slave_1.sinks = hdfs-sink1_1
agent_slave_1.sources = source1_1
agent_slave_1.channels = fileChannel1_1
In your sink, you need to use
agent1.sinks.HDFS.hdfs.fileType = DataStream
instead of
agent1.sinks.HDFS.hdfs.file.Type = DataStream
The rest seems fine.