subprocess.Popen returning empty string - json

There was an earlier question on this, but the asker was just overwriting their output and solved their own problem.
I'm using subprocess.Popen to read video information and write the output to a JSON file. It works fine on most videos, but on others it returns an empty string, even though the same command runs fine from the command line. I tried it several times and am getting the data fine through the command line.
Here's the relevant part of the script:
# imports used by this snippet (the rest of the script defines files, out_prj and out_folder)
import os
import json
import cStringIO
import subprocess as sp

out_prj.write('[')
for m, i in enumerate(files):
    print i
    out_prj.write('{"$type":"BatchProcessor.Job, BatchProcessor","Id":0,"Ver":1.02,"CurrentTask":0,"IsSelected":true,"TaskList":[')
    f_name = os.path.basename(i[0])
    f_json = out_folder + os.sep + "06_Output" + os.sep + os.path.basename(i[0]).split(".")[0] + ".json"
    trans_f = out_folder + os.sep + "04_Video" + os.sep + os.path.basename(i[0]).split(".")[0] + "-tr.ts"
    trans_f_out = out_folder + os.sep + "06_Output" + os.sep + os.path.basename(i[0]).split(".")[0] + "-tr-out.ts"
    ffprobe = 'ffprobe.exe'
    command = [ffprobe, '-v', 'quiet', '-print_format', 'json', '-show_format', '-show_streams', i[0]]
    p = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE, shell=True)
    out, err = p.communicate()
    io = cStringIO.StringIO(out)
    info = json.load(io)
    print info
    filea = open(f_json, 'w')
    filea.write(json.dumps(info))
    filea.close()
    f = open(f_json)
    b = json.load(f)
    print b
    #########################
    ###################
    f_format = str(b['streams'][0]['codec_long_name'])

Your code ignores the error messages (the err variable). Print err, or don't redirect stderr, to see them.
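For example, a minimal sketch reusing the question's command list and the sp alias (not a full fix, just enough to surface the error):

p = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)
out, err = p.communicate()
if p.returncode != 0 or not out:
    print err  # ffprobe's own message explains why stdout is empty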
Unrelated: the JSON handling in your code is needlessly convoluted: most of the operations are redundant.
To save output of the subprocess to a file:
import os
from subprocess import check_call

f_json = os.path.join(out_folder, "06_Output",
                      os.path.splitext(f_name)[0] + ".json")
with open(f_json, 'wb', 0) as file:
    check_call(command, stdout=file)
Note: shell=True is not necessary here. If subprocess can't find ffprobe.exe, then specify the full path, e.g. (use the path appropriate for your system):
ffprobe = r'C:\Program Files\Real\RealPlayer\RPDS\Tools\ffmpeg\ffprobe.exe'
Note: r'' (a raw string literal) is used to avoid doubling the backslashes.

Related

redis.clients.jedis.exceptions.JedisDataException: ERR Error compiling script (new function): user_script:1: malformed number near

I'm writing a Lua script in Redis and executing it from Spring. The content is as simple as:
local store = redis.call('hget',KEYS[1],'capacity')
print(store)
if store <= 0
then return 0
end
store = store - 1
redis.call('hset',KEYS[1],'capacity',store)
redis.call('sadd',KEYS[2],ARGV[1])
return 1
but when I run this script, an exception is thrown:
redis.clients.jedis.exceptions.JedisDataException: ERR Error compiling script (new function): user_script:1: malformed number near '262b4ca69c1805485d135aa6298c2b00bc7c8c09'
And I tried the following script in redis-cli
eval "local s = tonumber(redis.call('hget',KEYS[1],'capacity')) return s" 1 001
It returns
(integer) 100
And the Java code is as follows:
String script = "local store = redis.call('hget',KEYS[1],'capacity')\n" +
        "print(store)\n" +
        "if store <= 0\n" +
        "then return 0\n" +
        "end\n" +
        "store = store - 1\n" +
        "redis.call('hset',KEYS[1],'capacity',store)\n" +
        "redis.call('sadd',KEYS[2],ARGV[1])\n" +
        "return 1\n" +
        "\n";
if (sha == null) {
    sha = jedis.scriptLoad(script);
    System.out.println("sha:" + sha);
}
Object ojb = jedis.eval(sha, 2, id, userName, id);
Now I'm quite confused, and any help would be appreciated.
You want to use jedis.evalsha instead of jedis.eval.
The error you are getting is the Redis server trying to interpret 262b4ca69c1805485d135aa6298c2b00bc7c8c09 (the SHA1 digest that scriptLoad returned) as an actual script. To invoke a previously loaded script, use the EVALSHA command, i.e. jedis.evalsha(sha, 2, id, userName, id).
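As an aside for non-Java readers, the load-once / run-by-digest flow is the same in any client. A minimal sketch using Python's redis-py (the connection settings and the '001' key are illustrative, reusing the script from the redis-cli test above):

import redis

r = redis.Redis()  # illustrative connection; adjust host/port as needed

script = "local s = tonumber(redis.call('hget', KEYS[1], 'capacity')) return s"
sha = r.script_load(script)      # SCRIPT LOAD: returns the SHA1 digest of the script
print(r.evalsha(sha, 1, '001'))  # EVALSHA: runs the cached script by its digest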

Python JSON: creating a dictionary from a text file, issue printing the file

I was able to take a text file, read each line, create a dictionary per line, update (append) each dictionary, and store it in a JSON file. The issue is that when reading the JSON file back, it will not read correctly. Does the error point to a file-storing issue?
The text file looks like:
84.txt; Frankenstein, or the Modern Prometheus; Mary Wollstonecraft (Godwin) Shelley
98.txt; A Tale of Two Cities; Charles Dickens
...
import json
import re

path = "C:\\...\\data\\"
books = {}
books_json = {}
final_book_json = {}

file = open(path + 'books\\set_of_books.txt', 'r')
json_list = file.readlines()
open(path + 'books\\books_json.json', 'w').close()  # used to clean each test
json_create = []
i = 0
for line in json_list:
    line = line.replace('#', '')
    line = line.replace('.txt', '')
    line = line.replace('\n', '')
    line = line.split(';', 4)
    BookNumber = line[0]
    BookTitle = line[1]
    AuthorName = line[-1]
    if BookNumber == ' 2701':
        BookNumber = line[0]
        BookTitle1 = line[1]
        BookTitle2 = line[2]
        AuthorName = line[3]
        BookTitle = BookTitle1 + ';' + BookTitle2  # needed to combine the title into one to fit the dict format
    books = json.dumps({'AuthorName': AuthorName, 'BookNumber': BookNumber, 'BookTitle': BookTitle})
    books_json = json.loads(books)
    final_book_json.update(books_json)
    with open(path + 'books\\books_json.json', 'a') as out_put:
        json.dump(books_json, out_put)

with open(path + 'books\\books_json.json', 'r') as out_put:
    print(json.load(out_put))
The reported error is: JSONDecodeError: Extra data: line 1 column 133 (char 132) - that position is right between the first "}{". Not sure how JSON should look in a flat-file format? The output file as seen in an editor looks like:

{"AuthorName": " Mary Wollstonecraft (Godwin) Shelley", "BookNumber": " 84", "BookTitle": " Frankenstein, or the Modern Prometheus"}{"AuthorName": " Charles Dickens", "BookNumber": " 98", "BookTitle": " A Tale of Two Cities"}...
I ended up changing the approach and used pandas to read the text and then split the single-cell input.
import pandas as pd

books = pd.read_csv(path + 'books\\set_of_books.txt', sep='\t', names=('r', 't', 'a'))
#print(books.head(10))

# Function to clean the 'raw' (r) input data
def clean_line(cell):
    ...
    return cell

books['r'] = books['r'].apply(clean_line)
books = books['r'].str.split(';', expand=True)
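For completeness, the original "Extra data" error can also be fixed without pandas: json.load expects a single JSON document, while the appending loop produced back-to-back objects ({...}{...}). A minimal sketch reusing the question's names, collecting the per-line dicts into one list and dumping it once:

import json

records = []
for line in json_list:
    # ... parse BookNumber, BookTitle and AuthorName from the line as above ...
    records.append({'AuthorName': AuthorName,
                    'BookNumber': BookNumber,
                    'BookTitle': BookTitle})

with open(path + 'books\\books_json.json', 'w') as out_put:
    json.dump(records, out_put)  # one valid JSON array instead of concatenated objects

with open(path + 'books\\books_json.json', 'r') as out_put:
    print(json.load(out_put))  # parses cleanly now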

Device dependency in ZABBIX 4.2

Suppose the following scenario using Zabbix 4.2. We have a core switch, two distributed switches and 20 access switches, where the distributed switches are connected to the core and 10 access switches are connected to each distributed switch. I am monitoring all of them using SNMP v2c and the official Cisco switch template. Now the problem arises that I cannot easily define device dependencies in Zabbix. By easily, I mean that if a distributed switch goes down, I want the alarm for that device only, not for all the access switches connected to it. I could define it as follows: change the triggers on each access switch and make them dependent on the corresponding trigger of its distributed switch. However, this is too time consuming. What should I do? Any help is appreciated.
You are right, there isn't an easy way to set up this kind of dependency.
I had to manage the same situation a while ago, and I wrote a Python dependency setter which uses a "dependent hostgroup <--> master host" logic.
You can modify it to fit your needs (see masterTargetTriggerDescription and slaveTargetTriggerDescription for the dependency targets). It works but contains little error checking: use at your own risk!
import csv
import re
import json
from zabbix.api import ZabbixAPI

# Zabbix Server endpoint
zabbixServer = 'https://yourzabbix/zabbix/'
zabbixUser = 'admin'
zabbixPass = 'zabbix'
zapi = ZabbixAPI(url=zabbixServer, user=zabbixUser, password=zabbixPass)

# Hostgroup variables - to reference IDs while building API parameters
hostGroupNames = []  # list = array
hostGroupId = {}     # dict = associative array

# CSV file for dep settings - see the format:
"""
Hostgroup;Master
ACCESS_1;DistSwitch1
ACCESS_2;DistSwitch1
ACCESS_5;DistSwitch2
ACCESS_6;DistSwitch2
DIST;CoreSwitch1
"""
fileName = 'dependancy.csv'
masterTargetTriggerDescription = '{HOST.NAME} is unavailable by ICMP'
slaveTargetTriggerDescription = '{HOST.NAME} is unavailable by ICMP|Zabbix agent on {HOST.NAME} is unreachable'

# Read CSV file
hostFile = open(fileName)
hostReader = csv.reader(hostFile, delimiter=';', quotechar='|')
hostData = list(hostReader)

# CSV parsing
for line in hostData:
    hostgroupName = line[0]
    masterName = line[1]
    slaveIds = []
    masterId = zapi.get_id('host', item=masterName, with_id=False, hostid=None)
    hostGroupId = zapi.get_id('hostgroup', item=hostgroupName, with_id=False, hostid=None)
    masterTriggerObj = zapi.trigger.get(hostids=masterId, filter=({'description': masterTargetTriggerDescription}))
    print "Group: " + hostgroupName + " - ID: " + str(hostGroupId)
    print "Master host: " + masterName + " - ID: " + str(masterId)
    print "Master trigger: " + masterTriggerObj[0]['description'] + " - ID: " + str(masterTriggerObj[0]['triggerid'])
    # cycle through slave hosts
    hostGroupObj = zapi.hostgroup.get(groupids=hostGroupId, selectHosts='extend')
    for host in hostGroupObj[0]['hosts']:
        # exclude master
        if host['hostid'] != str(masterId):
            print " - Host Name: " + host['name'] + " - ID: " + host['hostid'] + " - MASTER: " + str(masterId)
            # cycle through all of the slave's triggers
            slaveTargetTriggerObj = zapi.trigger.get(hostids=host['hostid'])
            #print json.dumps(slaveTargetTriggerObj)
            for slaveTargetTrigger in slaveTargetTriggerObj:
                # search for dependency targets
                if re.search(slaveTargetTriggerDescription, slaveTargetTrigger['description'], re.IGNORECASE):
                    print " - Trigger: " + slaveTargetTrigger['description'] + " - ID: " + slaveTargetTrigger['triggerid']
                    # Clear existing dep. from the trigger, then create the new dep.
                    clear = zapi.trigger.deletedependencies(triggerid=slaveTargetTrigger['triggerid'].encode())
                    result = zapi.trigger.adddependencies(triggerid=slaveTargetTrigger['triggerid'].encode(), dependsOnTriggerid=masterTriggerObj[0]['triggerid'])
    print "----------------------------------------"
    print ""
I updated the code contributed by Simone Zabberoni and rewrote it to work with Python 3, PyZabbix, and YAML.
#!/usr/bin/python3
import re
import yaml

# https://pypi.org/project/py-zabbix/
from pyzabbix import ZabbixAPI

# Zabbix Server endpoint
zabbix_server = 'https://zabbix.example.com/zabbix/'
zabbix_user = 'zbxuser'
zabbix_pass = 'zbxpassword'

# Create ZabbixAPI class instance
zapi = ZabbixAPI(zabbix_server)

# Enable HTTP auth
zapi.session.auth = (zabbix_user, zabbix_pass)

# Login (in case of HTTP auth, only the username is needed; the password, if passed, will be ignored)
zapi.login(zabbix_user, zabbix_pass)

# Hostgroup variables - to reference IDs while building API parameters
hostGroupNames = []  # list = array
hostGroupId = {}     # dict = associative array

# YAML file for dep settings - see the format:
"""
pvebar16 CTs:
  master: pvebar16.example.com
  masterTargetTriggerDescription: 'is unavailable by ICMP'
  slaveTargetTriggerDescription: 'is unavailable by ICMP|Zabbix agent is unreachable for 5 minutes'
"""
fileName = 'dependancy.yml'
with open(fileName) as f:
    hostData = yaml.safe_load(f)  # safe_load: yaml.load without a Loader is deprecated in newer PyYAML

for groupyml in hostData.keys():
    masterTargetTriggerDescription = hostData[groupyml]['masterTargetTriggerDescription']
    slaveTargetTriggerDescription = hostData[groupyml]['slaveTargetTriggerDescription']
    masterName = hostData[groupyml]['master']
    hostgroupName = groupyml
    slaveIds = []
    masterId = zapi.host.get(filter={'host': masterName}, output=['hostid'])[0]['hostid']
    hostGroupId = zapi.hostgroup.get(filter={'name': hostgroupName}, output=['groupid'])[0]['groupid']
    masterTriggerObj = zapi.trigger.get(host=masterName, filter={'description': masterTargetTriggerDescription}, output=['triggerid', 'description'])
    print("Group: " + hostgroupName + " - ID: " + str(hostGroupId))
    print("Master host: " + masterName + " - ID: " + str(masterId))
    print("Master trigger: " + masterTriggerObj[0]['description'] + " - ID: " + str(masterTriggerObj[0]['triggerid']))
    # cycle through slave hosts
    hostGroupObj = zapi.hostgroup.get(groupids=hostGroupId, selectHosts='extend')
    for host in hostGroupObj[0]['hosts']:
        # exclude master
        if host['hostid'] != str(masterId):
            print(" - Host Name: " + host['name'] + " - ID: " + host['hostid'] + " - MASTER: " + str(masterId))
            # cycle through all of the slave's triggers
            slaveTargetTriggerObj = zapi.trigger.get(hostids=host['hostid'])
            for slaveTargetTrigger in slaveTargetTriggerObj:
                # search for dependency targets
                if re.search(slaveTargetTriggerDescription, slaveTargetTrigger['description'], re.IGNORECASE):
                    print(" - Trigger: " + slaveTargetTrigger['description'] + " - ID: " + slaveTargetTrigger['triggerid'])
                    # Clear existing dep. from the trigger, then create the new dep.
                    clear = zapi.trigger.deletedependencies(triggerid=slaveTargetTrigger['triggerid'])
                    result = zapi.trigger.adddependencies(triggerid=slaveTargetTrigger['triggerid'], dependsOnTriggerid=masterTriggerObj[0]['triggerid'])
    print("----------------------------------------")
    print("")

parse a multiline text with pyparsing

I would like to parse a multiline text file with content such as:
section1:
key1 val1
key2 val2
section2:
val1
val2
val3
section3:
section4:
somevalue
The section headers (section1, section2, ...) are predefined. The goal is to read the values under the different sections. I'm having trouble using the pyparsing module across multiple lines (the real problem is much more complex than this simple example).
When I use the following code, the parser expects the full list of defined keywords on every line:
# -*- coding: utf-8 -*-
from pyparsing import Literal, ZeroOrMore, LineEnd, ParseException

FileSyntax = None

def Grammar():
    #section1:
    section1 = Literal("section1:").suppress() + ZeroOrMore(LineEnd())
    #section2:
    section2 = Literal("section2:").suppress() + ZeroOrMore(LineEnd())
    #section3:
    section3 = Literal("section3:").suppress() + ZeroOrMore(LineEnd())
    #section4:
    section4 = Literal("section4:").suppress() + ZeroOrMore(LineEnd())
    return section1 + section2 + section3 + section4

def parseFile(filename: str):
    global FileSyntax
    print("\nparse results:\n")
    try:
        TestFile = open(filename)
        testdata = "".join(TestFile.readlines())
        FileSyntax = Grammar()
        FileSyntax.parseString(testdata)
    except ParseException as err:
        print(err.line)
        print(" " * (err.column - 1) + "^")
        print("* " + str(err))
    except Exception:
        import traceback
        traceback.print_exc()

parseFile("testdata.txt")
How can I make the parsing stateful (dependent on the different sections)? Thank you.
If you print out the grammar expression itself, you'll get something like:
{{{{Suppress:("section1:") [LineEnd]...} {Suppress:("section2:") [LineEnd]...}} {Suppress:("section3:") [LineEnd]...}} {Suppress:("section4:") [LineEnd]...}}
That is, you are parsing all the section headers, but not the body of the sections. So you are probably failing on the first line after 'section1:'.
Also, there is no need to call readlines() and then join everything back together. Just call TestFile.read(), or even better, pathlib.Path(test_file_name).read_text().
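One way to actually capture the section bodies, since the headers are known, is SkipTo; a minimal sketch (one option among several, not the poster's code):

from pyparsing import Literal, SkipTo, StringEnd

section1 = Literal("section1:").suppress() + SkipTo("section2:")("body1")
section2 = Literal("section2:").suppress() + SkipTo("section3:")("body2")
section3 = Literal("section3:").suppress() + SkipTo("section4:")("body3")
section4 = Literal("section4:").suppress() + SkipTo(StringEnd())("body4")
grammar = section1 + section2 + section3 + section4

result = grammar.parseString(open("testdata.txt").read())
print(result["body2"].split())  # -> ['val1', 'val2', 'val3']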

I have a JSON file; when the JSON file is changed I need to show the changes in the Python console

I have a local JSON file. I am at a beginner stage with Python, and I need to write code that, while running, continuously checks the JSON file; if there are any changes in the JSON file, I need to show the changes in the Python console.
E.g.,
{
    "a": 2,
    "b": 3
}
If I change "a" to 3, the Python output should be:
change detected at key a and value 3.
I am missing the logic here. Thank you in advance.
Hiya, seems like you've got a project going on. Depending on what values you want to track and how deep in the JSON they are, the task could take more work. Let me see what I can come up with...
import essentials  # pip install mknxgn_essentials (not gonna lie, i made this module. biased...)
import time
import os

recordedjson = essentials.EsFileObject("Tools/Json.json").json
recoredchanges = essentials.EsFileObject("Tools/Changes.json")
if recoredchanges.json == False:  # used to save changes; also makes sure to keep previous logs
    recoredchanges.setjson([])

onchange_record_new_json = True  # makes the new json the one you compare against after each change

while True:
    newjson = essentials.EsFileObject('Tools/Json.json').json
    if recordedjson != newjson:
        changes = 0
        keychanges = 0
        newkeys = 0
        removedkeys = 0
        valuechanges = 0
        print("File has been changed! - Collecting Changes")
        for obj in recordedjson:
            if obj not in newjson:
                print("Key Removed From New Json Key:", obj)
                keychanges += 1
                removedkeys += 1
                changes += 1
            else:
                if recordedjson[obj] != newjson[obj]:
                    print("Value Change!")
                    print("Previous- Key: ", obj)
                    print("# Value:", recordedjson[obj])
                    print(" New- Value:", newjson[obj])
                    valuechanges += 1
                    changes += 1
        # keys present in the new json but not in the recorded one are new keys
        for obj in newjson:
            if obj not in recordedjson:
                print("New Key Introduced- Key:", obj)
                keychanges += 1
                newkeys += 1
                changes += 1
        changelog = "Change Count: " + str(changes)
        changelog += " Key Changes: " + str(keychanges)
        changelog += " New Keys: " + str(newkeys)
        changelog += " Removed Keys: " + str(removedkeys)
        changelog += " Value Changes: " + str(valuechanges)
        print(changelog)
        record = {"Change Time": essentials.EsTimeObject().string}
        record["User Readable Time Change"] = essentials.EsTimeObject().readable
        record["Change Log"] = changelog
        recoredchanges.json.append(record)
        recoredchanges.save()
        time.sleep(10)
        if onchange_record_new_json:
            recordedjson = newjson
    print("Waiting For Change")
    time.sleep(1)
    os.system("cls")  # cls for Windows; clear for Linux and so on...
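If you'd rather not depend on a third-party module, the same polling idea works with just the standard library. A minimal sketch (the file path and poll interval are placeholders) that prints output in the format the question asks for:

import json
import time

path = "Tools/Json.json"  # placeholder path

with open(path) as f:
    previous = json.load(f)

while True:
    time.sleep(1)  # poll interval
    with open(path) as f:
        current = json.load(f)
    if current == previous:
        continue
    for key in previous.keys() - current.keys():
        print("key removed:", key)
    for key in current.keys() - previous.keys():
        print("new key", key, "with value", current[key])
    for key in previous.keys() & current.keys():
        if previous[key] != current[key]:
            print("change detected at key", key, "and value", current[key])
    previous = current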