Missing file (net.xml) in Running Environment Flow - reinforcement-learning

In Flow's Tutorial 01, I executed the following code:
flow_params = dict(
    exp_tag='ring_example',
    env_name=AccelEnv,
    network=RingNetwork,
    simulator='traci',
    sim=sim_params,
    env=env_params,
    net=net_params,
    veh=vehicles,
    initial=initial_config,
    tls=traffic_lights,
)

# number of time steps
flow_params['env'].horizon = 3000
exp = Experiment(flow_params)

# run the sumo simulation
_ = exp.run(1, convert_to_csv=True)
Afterward I got the following error:
Error during start: [Errno 2] No such file or directory: '.../kernel/network/debug/cfg/ring_example_20201208-1332481607405568.58399.net.xml' Retrying in 1 seconds...
How should this file be generated, or where can it be found?

This turned out to be an issue with my file naming convention. Apparently, the command called in
subprocess.call(
    [
        'netconvert -c "' + self.net_path + self.cfgfn +
        '" --output-file="' + self.cfg_path + self.netfn +
        '" --no-internal-links="false"'
    ],
    stdout=subprocess.DEVNULL,
    shell=True)
requires a path without spaces. In my case, my folder was named "Machine Learning", and the space in the path broke the command.
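If renaming the folder is not an option, a possible workaround (a minimal sketch, not Flow's actual code; the paths below are made up) is to shell-quote the generated paths before building the netconvert command, so that spaces such as the one in "Machine Learning" survive:

import subprocess
from shlex import quote

# Hypothetical paths containing a space; in Flow these would come from
# self.net_path + self.cfgfn and self.cfg_path + self.netfn.
cfg_file = '/home/user/Machine Learning/flow/debug/cfg/ring_example.netccfg'
net_file = '/home/user/Machine Learning/flow/debug/cfg/ring_example.net.xml'

subprocess.call(
    'netconvert -c %s --output-file=%s --no-internal-links=false'
    % (quote(cfg_file), quote(net_file)),
    stdout=subprocess.DEVNULL,
    shell=True)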


redis.clients.jedis.exceptions.JedisDataException: ERR Error compiling script (new function): user_script:1: malformed number near

I'm writing a Lua script for Redis and executing it from Spring; the content is as simple as:
local store = redis.call('hget',KEYS[1],'capacity')
print(store)
if store <= 0
then return 0
end
store = store - 1
redis.call('hset',KEYS[1],'capacity',store)
redis.call('sadd',KEYS[2],ARGV[1])
return 1
but when I run this script, an exception is thrown:
redis.clients.jedis.exceptions.JedisDataException: ERR Error compiling script (new function): user_script:1: malformed number near '262b4ca69c1805485d135aa6298c2b00bc7c8c09'
And I tried the following script in redis-cli
eval "local s = tonumber(redis.call('hget',KEYS[1],'capacity')) return s" 1 001
It returns
(integer) 100
The Java code is as follows:
String script = "local store = redis.call('hget',KEYS[1],'capacity')\n" +
        "print(store)\n" +
        "if store <= 0\n" +
        "then return 0\n" +
        "end\n" +
        "store = store - 1\n" +
        "redis.call('hset',KEYS[1],'capacity',store)\n" +
        "redis.call('sadd',KEYS[2],ARGV[1])\n" +
        "return 1\n" +
        "\n";
if (sha == null) {
    sha = jedis.scriptLoad(script);
    System.out.println("sha:" + sha);
}
Object ojb = jedis.eval(sha, 2, id, userName, id);
Now I'm quite confused; any help would be appreciated.
You want to use jedis.evalsha instead of jedis.eval.
The error you are getting is the Redis server trying to interpret 262b4ca69c1805485d135aa6298c2b00bc7c8c09 as an actual script. To invoke a previously loaded script by its SHA1 digest, you use the EVALSHA command.
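The same flow, sketched in Python with redis-py just to illustrate the SCRIPT LOAD / EVALSHA semantics (the key name '001' and the capacity value are assumptions taken from the redis-cli test above):

import redis

r = redis.Redis(host='localhost', port=6379)
r.hset('001', 'capacity', 100)   # seed example data, as in the redis-cli test

script = "local s = tonumber(redis.call('hget', KEYS[1], 'capacity')) return s"
sha = r.script_load(script)      # SCRIPT LOAD returns a SHA1 digest, not a result

print(r.evalsha(sha, 1, '001'))  # correct: invoke the cached script by digest -> 100
# r.eval(sha, 1, '001')          # wrong: EVAL would try to compile the digest
#                                # itself as Lua source -> "malformed number" error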

Device dependency in ZABBIX 4.2

Suppose the following scenario in Zabbix 4.2. We have a core switch, two distribution switches, and 20 access switches, where the distribution switches are connected to the core and 10 access switches are connected to each distribution switch. I am monitoring all of them using SNMP v2c and the official Cisco switches template. The problem is that I cannot easily define device dependency in Zabbix: if a distribution switch goes down, I want an alarm for that device only, not for all the access switches connected to it. I could do this by changing the triggers on each access switch and making them dependent on the corresponding trigger of its distribution switch, but that is too time consuming. What should I do? Any help is appreciated.
You are right, there isn't an easy way to set this kind of dependency.
I had to manage the same situation a while ago and wrote a Python dependency setter that uses a "dependent hostgroup <--> master host" logic.
You can modify it to fit your needs (see masterTargetTriggerDescription and slaveTargetTriggerDescription for the dependency targets). It works, but it contains little error checking: use at your own risk!
import csv
import re
import json
from zabbix.api import ZabbixAPI

# Zabbix Server endpoint
zabbixServer = 'https://yourzabbix/zabbix/'
zabbixUser = 'admin'
zabbixPass = 'zabbix'
zapi = ZabbixAPI(url=zabbixServer, user=zabbixUser, password=zabbixPass)

# Hostgroup variables - to reference IDs while building API parameters
hostGroupNames = []  # list = array
hostGroupId = {}     # dict = associative array

# CSV file for dependency settings - see the format:
"""
Hostgroup;Master
ACCESS_1;DistSwitch1
ACCESS_2;DistSwitch1
ACCESS_5;DistSwitch2
ACCESS_6;DistSwitch2
DIST;CoreSwitch1
"""
fileName = 'dependancy.csv'

masterTargetTriggerDescription = '{HOST.NAME} is unavailable by ICMP'
slaveTargetTriggerDescription = '{HOST.NAME} is unavailable by ICMP|Zabbix agent on {HOST.NAME} is unreachable'

# Read CSV file
hostFile = open(fileName)
hostReader = csv.reader(hostFile, delimiter=';', quotechar='|')
hostData = list(hostReader)

# CSV parsing
for line in hostData:
    hostgroupName = line[0]
    masterName = line[1]
    slaveIds = []

    masterId = zapi.get_id('host', item=masterName, with_id=False, hostid=None)
    hostGroupId = zapi.get_id('hostgroup', item=hostgroupName, with_id=False, hostid=None)
    masterTriggerObj = zapi.trigger.get(hostids=masterId, filter=({'description': masterTargetTriggerDescription}))

    print "Group: " + hostgroupName + " - ID: " + str(hostGroupId)
    print "Master host: " + masterName + " - ID: " + str(masterId)
    print "Master trigger: " + masterTriggerObj[0]['description'] + " - ID: " + str(masterTriggerObj[0]['triggerid'])

    # cycle through slave hosts
    hostGroupObj = zapi.hostgroup.get(groupids=hostGroupId, selectHosts='extend')
    for host in hostGroupObj[0]['hosts']:
        # exclude master
        if host['hostid'] != str(masterId):
            print " - Host Name: " + host['name'] + " - ID: " + host['hostid'] + " - MASTER: " + str(masterId)
            # cycle through all of the slave's triggers
            slaveTargetTriggerObj = zapi.trigger.get(hostids=host['hostid'])
            # print json.dumps(slaveTargetTriggerObj)
            for slaveTargetTrigger in slaveTargetTriggerObj:
                # search for dependency targets
                if re.search(slaveTargetTriggerDescription, slaveTargetTrigger['description'], re.IGNORECASE):
                    print " - Trigger: " + slaveTargetTrigger['description'] + " - ID: " + slaveTargetTrigger['triggerid']
                    # Clear existing dependencies from the trigger, then create the new dependency
                    clear = zapi.trigger.deletedependencies(triggerid=slaveTargetTrigger['triggerid'].encode())
                    result = zapi.trigger.adddependencies(triggerid=slaveTargetTrigger['triggerid'].encode(), dependsOnTriggerid=masterTriggerObj[0]['triggerid'])
    print "----------------------------------------"
    print ""
I updated the code contributed by Simone Zabberoni and rewrote it to work with Python 3, PyZabbix, and YAML.
#!/usr/bin/python3
import re
import yaml

# https://pypi.org/project/py-zabbix/
from pyzabbix import ZabbixAPI

# Zabbix Server endpoint
zabbix_server = 'https://zabbix.example.com/zabbix/'
zabbix_user = 'zbxuser'
zabbix_pass = 'zbxpassword'

# Create ZabbixAPI class instance
zapi = ZabbixAPI(zabbix_server)

# Enable HTTP auth
zapi.session.auth = (zabbix_user, zabbix_pass)

# Login (in case of HTTP auth only the username is needed; the password, if passed, will be ignored)
zapi.login(zabbix_user, zabbix_pass)

# Hostgroup variables - to reference IDs while building API parameters
hostGroupNames = []  # list = array
hostGroupId = {}     # dict = associative array

# YAML file for dependency settings - see the format:
"""
pvebar16 CTs:
  master: pvebar16.example.com
  masterTargetTriggerDescription: 'is unavailable by ICMP'
  slaveTargetTriggerDescription: 'is unavailable by ICMP|Zabbix agent is unreachable for 5 minutes'
"""
fileName = 'dependancy.yml'

with open(fileName) as f:
    hostData = yaml.load(f)

for groupyml in hostData.keys():
    masterTargetTriggerDescription = hostData[groupyml]['masterTargetTriggerDescription']
    slaveTargetTriggerDescription = hostData[groupyml]['slaveTargetTriggerDescription']
    masterName = hostData[groupyml]['master']
    hostgroupName = groupyml
    slaveIds = []

    masterId = zapi.host.get(filter={'host': masterName}, output=['hostid'])[0]['hostid']
    hostGroupId = zapi.hostgroup.get(filter={'name': hostgroupName}, output=['groupid'])[0]['groupid']
    masterTriggerObj = zapi.trigger.get(host=masterName, filter={'description': masterTargetTriggerDescription}, output=['triggerid', 'description'])

    print("Group: " + hostgroupName + " - ID: " + str(hostGroupId))
    print("Master host: " + masterName + " - ID: " + str(masterId))
    print("Master trigger: " + masterTriggerObj[0]['description'] + " - ID: " + str(masterTriggerObj[0]['triggerid']))

    # cycle through slave hosts
    hostGroupObj = zapi.hostgroup.get(groupids=hostGroupId, selectHosts='extend')
    for host in hostGroupObj[0]['hosts']:
        # exclude master
        if host['hostid'] != str(masterId):
            print(" - Host Name: " + host['name'] + " - ID: " + host['hostid'] + " - MASTER: " + str(masterId))
            # cycle through all of the slave's triggers
            slaveTargetTriggerObj = zapi.trigger.get(hostids=host['hostid'])
            for slaveTargetTrigger in slaveTargetTriggerObj:
                # search for dependency targets
                if re.search(slaveTargetTriggerDescription, slaveTargetTrigger['description'], re.IGNORECASE):
                    print(" - Trigger: " + slaveTargetTrigger['description'] + " - ID: " + slaveTargetTrigger['triggerid'])
                    # Clear existing dependencies from the trigger, then create the new dependency
                    clear = zapi.trigger.deletedependencies(triggerid=slaveTargetTrigger['triggerid'])
                    result = zapi.trigger.adddependencies(triggerid=slaveTargetTrigger['triggerid'], dependsOnTriggerid=masterTriggerObj[0]['triggerid'])
    print("----------------------------------------")
    print("")

How do I find out what's buffering the communication from qemu to pexpect?

I have a Python 2 program that runs qemu with a FreeBSD image.
expect()ing lines of output works.
However, expect()ing output whose line is not terminated (such as when waiting for a prompt like login:) does not work; it times out.
I suspect something in the communication between qemu and my program is doing line buffering, but how do I find out which of them it is? Candidates that I can think of:
FreeBSD itself. I find that unlikely: it shows prompts when running interactively, and qemu's -nographic option shouldn't make a difference to the emulated VM (but I may be wrong).
Something in the setup of the pty. I have zero experience with ptys. If that's the issue, this would be a bug in pexpect since pexpect is setting the pty up.
A bug in pexpect.
Something in my own script... but I have no clue what that could be.
For reference, here's the stripped-down code (including download and unpack, should anybody want to play with it):
#! /usr/bin/env python2
import os
import pexpect
import re
import sys
import time


def run(cmd):
    '''Run command, log to stdout, no timeout, return the status code.'''
    print('run: ' + cmd)
    (output, rc) = pexpect.run(
        cmd,
        withexitstatus=1,
        encoding='utf-8',
        logfile=sys.stdout,
        timeout=None
    )
    if rc != 0:
        print('simple.py: Command failed with return code: ' + str(rc))
        exit(rc)


download_path = 'https://download.freebsd.org/ftp/releases/VM-IMAGES/12.0-RELEASE/amd64/Latest'
image_file = 'FreeBSD-12.0-RELEASE-amd64.qcow2'
image_file_xz = image_file + '.xz'

if not os.path.isfile(image_file_xz):
    run('curl -o %s %s/%s' % (image_file_xz, download_path, image_file_xz))
if not os.path.isfile(image_file):
    # Reset image file to initial state
    run('xz --decompress --keep --force --verbose ' + image_file_xz)

#cmd = 'qemu-system-x86_64 -snapshot -monitor none -display curses -chardev stdio,id=char0 ' + image_file
cmd = 'qemu-system-x86_64 -snapshot -nographic ' + image_file
print('interact with: ' + cmd)
child = pexpect.spawn(
    cmd,
    timeout=90,  # FreeBSD takes roughly 60 seconds to boot
    maxread=1,
)
child.logfile = sys.stdout


def expect(pattern):
    result = child.expect([pexpect.TIMEOUT, pattern])
    if result == 0:
        print("timeout: %d reached when waiting for: %s" % (child.timeout, pattern))
        exit(1)
    return result - 1


if False:
    # This does not work: the prompt is not visible, then timeout
    expect('login: ')
else:
    # Workaround, tested to work:
    expect(re.escape('FreeBSD/amd64 (freebsd)'))  # Line before prompt
    time.sleep(1)  # MUCH longer than actually needed, just to be safe
    child.sendline('root')

# This will always time out, and terminate the script
expect('# ')
print('We want to get here but cannot')
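One way to narrow down where the buffering happens (a suggestion, not a confirmed diagnosis) is to temporarily replace the failing expect('login: ') with raw, pattern-free reads from the pty, using the child spawned above. If the unterminated prompt bytes arrive here, the data is reaching pexpect and the problem lies in how it is being waited for; if they never arrive, the buffering is upstream in qemu or the guest:

# Raw-read diagnostic: drain whatever bytes the pty delivers, without
# requiring a newline or a matching pattern.
while True:
    try:
        chunk = child.read_nonblocking(size=1024, timeout=5)
        print('got %d bytes: %r' % (len(chunk), chunk))
    except pexpect.TIMEOUT:
        print('no more data within 5 seconds')
        break
    except pexpect.EOF:
        print('child closed the pty')
        break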

subprocess.popen returning empty string

There was an earlier question on this, but the asker was just overwriting their output and solved their own problem.
I'm using subprocess.Popen to read video information and write the output to a JSON file. It works fine on MOST videos, but on others it returns an empty string, even though the same command runs fine from the command line. I tried it several times and get the data fine through the command line.
Here's the relevant part of the script:
out_prj.write('[')
for m, i in enumerate(files):
    print i
    out_prj.write('{"$type":"BatchProcessor.Job, BatchProcessor","Id":0,"Ver":1.02,"CurrentTask":0,"IsSelected":true,"TaskList":[')
    f_name = os.path.basename(i[0])
    f_json = out_folder + os.sep + "06_Output" + os.sep + os.path.basename(i[0]).split(".")[0] + ".json"
    trans_f = out_folder + os.sep + "04_Video" + os.sep + os.path.basename(i[0]).split(".")[0] + "-tr.ts"
    trans_f_out = out_folder + os.sep + "06_Output" + os.sep + os.path.basename(i[0]).split(".")[0] + "-tr-out.ts"
    ffprobe = 'ffprobe.exe'
    command = [ffprobe, '-v', 'quiet', '-print_format', 'json', '-show_format', '-show_streams', i[0]]
    p = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE, shell=True)
    out, err = p.communicate()
    io = cStringIO.StringIO(out)
    info = json.load(io)
    print info
    filea = open(f_json, 'w')
    filea.write(json.dumps(info))
    filea.close()
    f = open(f_json)
    b = json.load(f)
    print b
    #########################
    ###################
    f_format = str(b['streams'][0]['codec_long_name'])
Your code ignores error messages (the err variable). print err, or don't redirect stderr, to see them.
Unrelated: the JSON handling in your code is insane; most operations are redundant.
To save the output of the subprocess to a file:
import os
from subprocess import check_call

f_json = os.path.join(out_folder, "06_Output",
                      os.path.splitext(f_name)[0] + ".json")
with open(f_json, 'wb', 0) as file:
    check_call(command, stdout=file)
Note: shell=True is not necessary here. If subprocess can't find ffprobe.exe then specify the full path e.g. (use the path appropriate for your system):
ffprobe = r'C:\Program Files\Real\RealPlayer\RPDS\Tools\ffmpeg\ffprobe.exe'
Note: r'' -- a raw string literal is used to avoid doubling the backslashes.
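For the debugging step mentioned above, a minimal sketch (reusing the command list from the question) that surfaces whatever ffprobe writes to stderr instead of silently discarding it:

import subprocess as sp

# Run ffprobe without shell=True and print stderr so failures become visible.
p = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)
out, err = p.communicate()
if p.returncode != 0 or not out:
    print('ffprobe failed with return code %d' % p.returncode)
    print(err)  # the actual error message from ffprobe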

EMR Job Failing

Folks,
The following Python script is terminating with
job state = FAILED
and
Last State Change: Access denied checking streaming input path: s3n://elasticmapreduce/samples/wordcount/input/
Code:
import boto
import boto.emr
from boto.emr.step import StreamingStep
from boto.emr.bootstrap_action import BootstrapAction
import time

S3_BUCKET = "mytesetbucket123asdf"
conn = boto.connect_emr()

step = StreamingStep(
    name='Wordcount',
    mapper='s3n://elasticmapreduce/samples/wordcount/wordSplitter.py',
    reducer='aggregate',
    input='s3n://elasticmapreduce/samples/wordcount/input/',
    output='s3n://' + S3_BUCKET + '/wordcount/output/2013-10-25')

jobid = conn.run_jobflow(
    name="test",
    log_uri="s3://" + S3_BUCKET + "/logs/",
    visible_to_all_users="True",
    steps=[step],)

state = conn.describe_jobflow(jobid).state
print "job state = ", state
print "job id = ", jobid
while state != u'COMPLETED':
    print time.localtime()
    time.sleep(10)
    state = conn.describe_jobflow(jobid).state
    print conn.describe_jobflow(jobid)
    print "job state = ", state
    print "job id = ", jobid

print "final output can be found in s3://" + S3_BUCKET + "/output" + TIMESTAMP
print "try: $ s3cmd sync s3://" + S3_BUCKET + "/output" + TIMESTAMP + " ."
The problem is somewhere in boto... If we specify an IAM user instead of using roles, the job works perfectly. EMR supports IAM roles, of course, and the IAM role we tested with has full rights to execute any task, so it's not a misconfiguration issue...
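If the cluster is supposed to run with IAM roles, one thing worth checking (a sketch under the assumption that your boto 2.x version supports these keyword arguments, not a confirmed fix) is whether the roles are actually passed to run_jobflow; without them the instances may start without the instance profile that grants S3 access. The role names below are the AWS defaults, and the snippet reuses S3_BUCKET and step from the script above:

import boto

conn = boto.connect_emr()
jobid = conn.run_jobflow(
    name="test",
    log_uri="s3://" + S3_BUCKET + "/logs/",
    visible_to_all_users=True,
    job_flow_role="EMR_EC2_DefaultRole",  # instance profile used by the EC2 nodes
    service_role="EMR_DefaultRole",       # role assumed by the EMR service itself
    steps=[step],
)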