Apache "server-status" (mod_status) output as in JSON or XML format? - json

Apache "mod_status":
We know that by using "mod_status" we can check Apache's current status. It returns a lot of information, something like this sample page (provided by Apache):
https://www.apache.org/server-status
What I need to do:
I need to parse and then process these results, especially the detailed connections section enabled by the ExtendedStatus directive (inside httpd.conf). The section looks something like:
Srv PID Acc M CPU SS Req Conn Child Slot Client VHost Request
0-24 23433 0/94/338163 _ 208.04 2 0 0.0 1.85 22068.75 221.254.46.37
0-24 23433 0/99/337929 _ 208.93 1 1141 0.0 2.23 19373.00 197.89.161.5
0-24 23433 0/94/337834 _ 206.04 4 0 0.0 3.46 22065.36 114.31.251.82
0-24 23433 0/95/338139 _ 198.94 2 7 0.0 2.74 21101.66 122.252.253.242
0-24 23433 0/111/338215 _ 206.21 3 0 0.0 3.89 19496.71 186.5.109.211
My Question:
Is it possible to get this page (information) in a structured data format, like JSON? (Because I need to parse it via PHP and then do some further processing later.)
I cannot use easy client-side approaches like JavaScript DOM parsers (e.g. jQuery), because the script needs to run locally on the server's Linux command line, not in a browser.
So parsing this via JavaScript (jQuery, etc.) is not really an option. I would rather receive structured data that I can parse easily from PHP, triggering the PHP script via the terminal, like:
# php /www/docroots/parse-server-status.php
Or, at least:
# curl -I http://localhost/parse-server-status.php
Question:
Any idea how to get the JSON or XML out of Apache's Server Status (mod_status), please?
Thanks all.

I don't think there is a way to get JSON out of the standard Apache mod_status.
But there was a discussion on the developer list about this topic.
In short: there is another script that you have to install on your server, and you need mod_lua on the server. Here is the project page:
https://github.com/Humbedooh/server-status
After installing that lua script, you can fetch the status as JSON.
Daniel installed a sample script here:
HTML view: http://httpd.apache.org/server-status
JSON: http://httpd.apache.org/server-status?view=json
Extended JSON: http://httpd.apache.org/server-status?view=json&extended=true (LOT OF DATA :p)
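Once the lua handler is installed, any language can consume the JSON view directly. A minimal sketch (Python here; the URL is an assumption, it presumes the handler is mounted at /server-status on your own host):

import json
import urllib.request

# Fetch the JSON view exposed by the lua server-status handler
# (assumed to be reachable on localhost).
with urllib.request.urlopen("http://localhost/server-status?view=json") as resp:
    status = json.loads(resp.read().decode("utf-8"))
print(status)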

In JavaScript/jQuery (ES6) we can get Apache's machine-readable status with ?auto and parse the content via regular expressions:
$.get('http://localhost/server-status?auto', (d) => {
  const o = {};
  // The first line of the ?auto output is the host name.
  const host = d.substring(0, d.indexOf('\n'));
  // Collect every "Key: value" line into an object, stripping spaces from keys.
  for (const m of d.replace(host, '').matchAll(/^([\w\s]+):\s(.*)$/gm)) {
    o[m[1].replace(/\s/g, '')] = m[2];
  }
  console.log(host, o);
});
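For the original requirement of running on the server's command line rather than in a browser, the same ?auto output can be parsed without jQuery. A minimal sketch in Python (the URL is an assumption; adjust to wherever mod_status is mounted):

import urllib.request

def fetch_status(url="http://localhost/server-status?auto"):
    """Parse mod_status machine-readable (?auto) output into a dict."""
    raw = urllib.request.urlopen(url).read().decode("utf-8")
    status = {}
    for line in raw.splitlines():
        # Each data line has the form "Key: value".
        key, sep, value = line.partition(": ")
        if sep:
            status[key.strip()] = value.strip()
    return status

print(fetch_status())

The same line-splitting approach translates directly to PHP with explode() if you want to keep everything in one PHP script.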

Related

Opensmile: unreadable csv file while extracting prosody features from wav file

I am extracting prosody features from an audio file using the Windows version of openSMILE. It runs successfully and an output CSV is generated. But when I open the CSV, it shows some rows that are not readable. I used this command to extract the prosody features:
SMILEXtract -C \opensmile-3.0-win-x64\config\prosody\prosodyShs.conf -I audio_sample_01.wav -O prosody_sample1.csv
And the CSV output is unreadable binary data.
I even tried the sample wave file in the example audio folder of the openSMILE directory, and the output is the same (not readable). Can someone help me identify where the problem actually is, and how I can fix it?
You need to enable the csvSink component in the configuration file to make it work. The file config\prosody\prosodyShs.conf that you are using does not have this component defined and always writes binary output.
You can verify that it is the standard binary output this way: omit the -O parameter from your command so it becomes SMILEXtract -C \opensmile-3.0-win-x64\config\prosody\prosodyShs.conf -I audio_sample_01.wav and execute it. You will get an output.htk file which is exactly the same as prosody_sample1.csv.
To get CSV output, take a look at the example configuration in opensmile-3.0-win-x64\config\demo\demo1_energy.conf, where a csvSink component is defined.
You can find more information in the official documentation:
Get started page of the openSMILE documentation
The section on configuration files
Documentation for cCsvSink
This is how I solved the issue. First I added the csvSink component to the list of component instances:
instance[csvSink].type = cCsvSink
Next I added the configuration parameters for this instance.
[csvSink:cCsvSink]
reader.dmLevel = energy
filename = \cm[outputfile(O){output.csv}:file name of the output CSV file]
delimChar = ;
append = 0
timestamp = 1
number = 1
printHeader = 1
\{../shared/standard_data_output_lldonly.conf.inc}
Now if you run this file it will throw errors, because reader.dmLevel = energy depends on waveframes. So the final changes would be:
[energy:cEnergy]
reader.dmLevel = waveframes
writer.dmLevel = energy
[int:cIntensity]
reader.dmLevel = waveframes
[framer:cFramer]
reader.dmLevel=wave
writer.dmLevel=waveframes
Further reference on how to write openSMILE configuration files can be found in the official documentation linked above.
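Once csvSink writes text output, the file is an ordinary delimited CSV. A quick sanity check (a sketch, assuming the ; delimiter and header row configured above, and the default output.csv file name):

import csv

# Read the openSMILE CSV output; delimChar = ; and printHeader = 1
# in the config above give us semicolon delimiters and a header row.
with open("output.csv", newline="") as f:
    for row in csv.DictReader(f, delimiter=";"):
        print(row)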

How to convert an N-level dict to yaml in TCL

So I understand from looking around at other answers on this site that this is tricky due to Tcl's type system, but what I want to do is take some arbitrarily shaped dict and convert it into correctly formatted YAML that could then be read by any YAML parser in any other language.
The motivation is a metadata system that users can read and write to over different sessions. Users themselves will know the structure of the data (as they made it), but the system itself won't.
So take the following multi-level dict:
set dict_sensor_status [dict create main ok recieve ok send ok]
set dict_sensor_1 [dict create power 100 signal 80 status $dict_sensor_status]
set dict_sensor_2 [dict create power 75 signal 80]
set dict_parent [dict create sensor_1 $dict_sensor_1 sensor_2 $dict_sensor_2]
puts $dict_parent
sensor_1 {power 100 signal 80 status {main ok recieve ok send ok}} sensor_2 {power 75 signal 80}
I would like that to create correctly formatted YAML, like so:
sensor_1:
  power: 100
  signal: 80
  status:
    main: ok
    recieve: ok
    send: ok
sensor_2:
  power: 75
  signal: 80
From what I can see, the yaml library will just produce a single-layer YAML with long strings, something like this:
sensor_1: power 100 signal 80 status {main ok recieve ok send ok}
sensor_2: power 75 signal 80
This is fine within Tcl, as users could treat that string as a dict and carry on their merry way, but if someone then tries to read this YAML in, say, Python, it will not produce the correct result.
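For example, loading the flat form with PyYAML makes the problem concrete; the whole value comes back as one opaque string instead of a nested mapping:

import yaml  # PyYAML

flat = "sensor_1: power 100 signal 80 status {main ok recieve ok send ok}"
print(yaml.safe_load(flat))
# {'sensor_1': 'power 100 signal 80 status {main ok recieve ok send ok}'}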
The huddle option seems to work for producing correctly formatted YAML, but the current infrastructure I have uses dicts, so I'd rather not change everything.
Is there some way of doing this within Tcl while keeping the data as a dict? Or is there some way to convert the dict to a huddle when writing out the YAML?
Or is this just a doomed endeavour with dicts, and do I need to convert the code to use huddles instead of dicts?
EDIT: alternatively, I'm not married to the idea of a YAML file. Any similarly structured format like JSON would work.
If you're going to JSON, the rl_json package is thoroughly recommended as a production-grade system. For example:
package require rl_json

# Using arrays for convenience; there's several other approaches that would work too
array set s1 {
    power 100
    signal 80
    status,main ok
    status,rcv ok
    status,snd ok
}
array set s2 {
    power 75
    signal 80
}

puts [rl_json::json template {{
    "sensor_1": {
        "power": "~N:s1(power)",
        "signal": "~N:s1(signal)",
        "status": {
            "main": "~S:s1(status,main)",
            "recieve": "~S:s1(status,rcv)",
            "send": "~S:s1(status,snd)"
        }
    },
    "sensor_2": {
        "power": "~N:s2(power)",
        "signal": "~N:s2(signal)"
    }
}}]
That produces this output:
{"sensor_1":{"power":100,"signal":80,"status":{"main":"ok","recieve":"ok","send":"ok"}},"sensor_2":{"power":75,"signal":80}}
There are several other ways to build the output.
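As a quick cross-language sanity check (a sketch, assuming the JSON above has been saved to a hypothetical sensors.json), any standard parser recovers the nesting:

import json

# Load the rl_json output and confirm the nested structure survived.
with open("sensors.json") as f:
    data = json.load(f)
print(data["sensor_1"]["status"]["main"])  # -> ok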
Doing the equivalent for YAML seems to be… complex. These are the two packages you'll need:
yaml package
huddle package
There doesn't seem to be any package like rl_json that wraps up all the complexity nicely.

SSL Certs acceptance using AutoIT

Is it possible to accept SSL certificates in Chrome/Firefox using the AutoIt tool?
https://www.autoitscript.com/site/autoit/
Thanks!
The short answer is yes. You have three options, depending on what you want to do:
Use the FF.au3 and Chrome.au3 UDFs plus other AutoIt automation. (Hard)
Use iUIAutomation plus other AutoIt automation. (A little less hard)
If you just need to get some certificate info, you can use this script. (Pretty easy)
If you go with option 3 you will need to download this UDF and update the WinINetConstants.Au3 file on line 5 from:
Global Const $AU3_UNICODE = Number($AU3_VERSION[2] & "." & $AU3_VERSION[3]) >= 2.13 Or #AutoItUnicode
To
Global Const $AU3_UNICODE = Number($AU3_VERSION[2] & "." & $AU3_VERSION[3]) >= 2.13 Or #AutoItVersion

Where is my output from the ReceiveHTTPFile function in PureBasic?

PureBasic adds a JSON library so it can play nice with web stuff. But I can't figure out what type of output I am getting from the ReceiveHTTPFile() function.
Their documentation is pretty sparse on this subject.
Here is my code.
Procedure GetBitminterData()
  FireUpNetwork = InitNetwork()
  Debug "If the number below is anything other than zero the network library is working."
  Debug FireUpNetwork
  URL$ = "https://bitminter.com/api/pool/stats/"
  FileName$ = "stats.json"
  BitMinterData = ReceiveHTTPFile(URL$, FileName$) ; THIS LINE HERE MEH
  Debug URL$
  Debug BitMinterData
  ; Read JSON data from a string
  ; More importantly parse the bitminter stats.json file from above.
  Input$ = BitMinterData
  If ParseJSON(#JSON_Parse, Input$)
    NewList Numbers()
    ExtractJSONList(JSONValue(#JSON_Parse), Numbers())
    Debug "---------- Extracting values ----------"
    Debug ""
    ForEach Numbers()
      Debug Numbers()
    Next
  EndIf
EndProcedure
The problem is that none of the HTTP functions in PureBasic can handle "https". I tested this with the latest stable PureBasic 5.31: try any URL with "http" and it will work; try "https" and it won't.
There is a libcurl wrapper for PureBasic (https://github.com/Progi1984/RWrappers/tree/master/LibCurl), but it's 5 years old and no longer compatible with PureBasic 5.31.
If you just need something "done", you could still download the "curl" binary (http://curl.haxx.se/download.html#Win32) and execute it from your PureBasic application.
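For example, a call along these lines would fetch the stats file for the code in the question (exact flags depend on your curl build):
# curl -s -o stats.json https://bitminter.com/api/pool/stats/
You could then read stats.json locally and feed its contents to ParseJSON() as in the question.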

Ways to parse JSON using KornShell

I have working code for parsing JSON output in KornShell by treating it as a string of characters. The issue I have is that the vendor keeps changing the position of the field that I am interested in. I understand that in JSON we can parse by key-value pairs.
Is there something out there that can do this? I am interested in a specific field, and I would like to use its value to run checks on the status of another REST API call.
My sample json output is like this:
JSONDATA value :
{
  "status": "success",
  "job-execution-id": 396805,
  "job-execution-user": "flexapp",
  "job-execution-trigger": "RESTAPI"
}
I would need the job-execution-id value to monitor this job through the rest of the script.
I am using the following command to parse it:
RUNJOB=$(print ${DATA} |cut -f3 -d':'|cut -f1 -d','| tr -d [:blank:]) >> ${LOGDIR}/${LOGFILE}
The problem with this is that it relies on fields delimited by :, and the field position has been known to change between vendor releases.
So I am trying to see if I can use a utility that would always give me the key-value pair "job-execution-id": 396805, no matter where it is in the JSON output.
I started looking at jsawk, but it requires the js interpreter to be installed on our machines, which I don't want. Any hint on how to find which RPM I need to solve this?
I am using RHEL5.5.
Any help is greatly appreciated.
The ast-open project has libdss (and a dss wrapper) which supposedly could be used with ksh. Documentation is sparse and is limited to a few messages on the ast-user mailing list.
The regression tests for libdss contain some json and xml examples.
I'll try to find more info.
Python is included by default with CentOS so one thing you could do is pass your JSON string to a Python script and use Python's JSON parser. You can then grab the value written out by the script. An example you could modify to meet your needs is below.
Note that by specifying other dictionary keys in the Python script you can get any of the values you need without having to worry about the order changing.
Python script:
# get_job_execution_id.py
# The try/except is because you'll probably have Python 2.4 on CentOS 5.5,
# and the straight "import json" statement won't work unless you have Python 2.6+.
try:
    import json
except ImportError:
    import simplejson as json
import sys

json_data = sys.argv[1]
data = json.loads(json_data)
job_execution_id = data['job-execution-id']
sys.stdout.write(str(job_execution_id))
KornShell script that executes it:
#!/bin/ksh
# get_job_execution_id.sh
JSON_DATA='{"status":"success","job-execution-id":396805,"job-execution-user":"flexapp","job-execution-trigger":"RESTAPI"}'
EXECUTION_ID=`python get_job_execution_id.py "$JSON_DATA"`
echo $EXECUTION_ID
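If a separate script file is undesirable, the same lookup can be done inline from the shell (a sketch; on Python 2.4 you would need simplejson as noted above):
echo "$JSON_DATA" | python -c 'import json,sys; print(json.load(sys.stdin)["job-execution-id"])'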