Export RTP statistics from wireshark/tshark into XML or CSV

Basically, I would like to export Wireshark's analysis of RTP streams into CSV or XML format so I can read it back for some tests. I can do the following using tshark from the command line:
tshark -r rtp.pcap -q -z rtp,streams
Is there a way to specify an output file and its format? If there is a way to do this through Wireshark directly, that is welcome too.
Note: what I need to store is the overall statistics of all the streams, not the detailed statistics per stream.

You can save the output to a text file using the redirect operator, i.e. > output.txt. This is very basic and difficult to parse, but unfortunately there does not seem to be any way to control the format of that output: the -T/-E/-e combination outputs details from each packet, and the -w option writes a raw capture file.
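Since the redirected text is hard to reuse, one workaround is to re-split the table yourself. Here is a minimal Python sketch, assuming the whitespace-separated layout recent tshark versions print for rtp,streams; the exact columns (and embedded spaces in some of them, such as the loss percentage) vary between versions, so verify against your own output.
#!/usr/bin/env python3
# Sketch: convert `tshark -q -z rtp,streams` text output into CSV rows.
# The column layout is version-dependent; adjust as needed.
import csv
import subprocess
import sys

output = subprocess.run(
    ["tshark", "-r", sys.argv[1], "-q", "-z", "rtp,streams"],
    capture_output=True, text=True, check=True,
).stdout

writer = csv.writer(sys.stdout)
for line in output.splitlines():
    fields = line.split()
    # Data rows start with a source IP address (naive IPv4 heuristic);
    # banner, header and separator lines are skipped.
    if fields and fields[0][0].isdigit():
        writer.writerow(fields)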

Wireshark
Go to Telephony -> RTP -> Show All Streams
You can copy the values to the clipboard as CSV.
See also Wireshark Wiki

Related

Parsing YARN job logs stored in HDFS

Is there any parser I can use to parse the JSON present in YARN job logs (.jhist files) stored in HDFS, in order to extract information from them?
The second line in the .jhist file is the Avro schema for the other JSON records in the file, which means you can create Avro data out of the .jhist file.
For this you could use avro-tools-1.7.7.jar:
# schema is the second line
sed -n '2p;3q' file.jhist > schema.avsc
# removing the first two lines
sed '1,2d' file.jhist > pfile.jhist
# finally converting to avro data
java -jar avro-tools-1.7.7.jar fromjson pfile.jhist --schema-file schema.avsc > file.avro
Now you have Avro data, which you can, for example, import into a Hive table and run queries on.
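If you would rather read the resulting Avro file directly instead of going through Hive, a small sketch using the third-party fastavro package (one option among several; the stock avro Python bindings work too) could look like this:
#!/usr/bin/env python3
# Sketch: iterate over the job-history records in file.avro.
# Assumes fastavro is installed (pip install fastavro).
from fastavro import reader

with open("file.avro", "rb") as fh:
    for record in reader(fh):
        # Each record is a plain dict following the schema from schema.avsc.
        print(record.get("type"), sorted(record.keys()))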
You can also check out Rumen, a job-log parsing tool from the Apache Hadoop ecosystem.
Alternatively, in the web UI, go to the job history and look for the job whose .jhist file you want to read. Hit the Counters link on the left; you will then see an API that gives you all the parameters and their values (such as CPU time in milliseconds), read from the .jhist file itself.
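Those counters are also exposed by the MapReduce History Server REST API, so you can fetch them programmatically. A sketch, where the host, the default port 19888, the job id, and the response field names (taken from the Hadoop History Server REST documentation) all need to be checked against your cluster:
#!/usr/bin/env python3
# Sketch: fetch job counters from the MapReduce History Server REST API.
# Host and job id are placeholders; adjust to your cluster.
import json
import urllib.request

url = ("http://historyserver.example.com:19888"
       "/ws/v1/history/mapreduce/jobs/job_1234567890123_0001/counters")
with urllib.request.urlopen(url) as resp:
    counters = json.load(resp)

for group in counters["jobCounters"]["counterGroup"]:
    for counter in group["counter"]:
        print(group["counterGroupName"], counter["name"],
              counter["totalCounterValue"])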

How to export JMeter results to JSON?

We run load tests with JMeter and would like to export the result data (throughput, latency, requests per second, etc.) to JSON, either to a file or to STDOUT. How can we do that?
JMeter can save the results in CSV format with a header row.
(Do not forget to select Save Field Names; it is OFF by default.)
Then you can use this tool to convert the CSV to JSON:
http://www.convertcsv.com/csv-to-json.htm
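If you would rather not depend on an online converter, a few lines of Python do the same offline (a sketch, assuming the header row is present because Save Field Names was enabled):
#!/usr/bin/env python3
# Sketch: convert a JMeter CSV results file (with field names saved)
# into a JSON array, one object per sample.
import csv
import json
import sys

with open(sys.argv[1], newline="") as fh:
    rows = list(csv.DictReader(fh))

json.dump(rows, sys.stdout, indent=2)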
EDIT
JMeter stores results in XML or CSV format. XML is the default (with a .jtl extension), but it is generally recommended to save results in CSV format.
If you want to convert XML to JSON, you can use:
http://www.utilities-online.info/xmltojson/#.U9O2ifldVBk
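As an offline alternative to that converter, here is a small Python sketch that flattens the sample attributes of an XML .jtl file into JSON (attribute names such as t, lt and lb follow JMeter's XML result format; verify against your files):
#!/usr/bin/env python3
# Sketch: convert an XML .jtl results file into a JSON array.
# Each <httpSample>/<sample> element becomes one object built from
# its attributes (t = elapsed, lt = latency, lb = label, ...).
import json
import sys
import xml.etree.ElementTree as ET

root = ET.parse(sys.argv[1]).getroot()
samples = [dict(el.attrib) for el in root if el.tag in ("httpSample", "sample")]
json.dump(samples, sys.stdout, indent=2)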
If you are planning to use CSV, you can save the result in CSV format automatically. When running your test via the command line, to save the result as CSV for a specific test:
"%JMETER_HOME%\bin\jmeter.bat" -n -t %TESTNAME% -p %PROPERTY_FILE_PATH% -l %RESULT_FILE_PATH% -j %LOG_FILE_PATH% -Djmeter.save.saveservice.output_format=csv
Or you can update jmeter.properties in the bin folder to enable the property below (for any test you run):
jmeter.save.saveservice.output_format=csv
Hope it is clear!
There is no OOTB solution for this, but you could take inspiration from this patch:
https://issues.apache.org/bugzilla/show_bug.cgi?id=53668

Upload Data in MapQuest DMv2 through CSV using Data Manager API call

I need to upload data to MapQuest DMv2 through a CSV file. After going through the documentation, I found the following syntax for uploading data:
http://www.mapquestapi.com/datamanager/v2/upload-data?key=[APPLICATION_KEY]&inFormat=json&json={"clientId": "[CLIENT_ID]","password": "[REGISTRY_PASSWORD]","tableName": "mqap.[CLIENT_ID]_[TABLENAME]","append":true,"rows":[[{"name":"[NAME]","value":"[VALUE]"},...],...]}
This is fair enough if I want to put individual rows in rows[], but there is no mention of the procedure for uploading data through a CSV file, even though it is clearly stated that "CSV, KML, and zipped Shapefile uploads are supported". How can I achieve this through the Data Manager API service?
Use a multipart post to upload the csv instead of the rows. You can see it working here.
I used the curl program to accomplish that. Here is an example curl.exe command line. You can call it from a batch file or, in my case, from a C# program.
curl.exe -F clientId=XXXXX -F password=XXXXX -F tableName=mqap.XXXXX_xxxxx -F append=false --referer http://www.mapquest.com -F "file=@C:\\file.csv" "http://www.mapquestapi.com/datamanager/v2/upload-data?key=KEY&ambiguities=ignore"
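The same multipart POST is easy to reproduce from Python with the third-party requests package, if you prefer a script to a batch file. A sketch, with the key, credentials, table name and file path as placeholders mirroring the curl command above:
#!/usr/bin/env python3
# Sketch: multipart upload of a CSV to MapQuest DMv2, mirroring curl -F.
# Key, credentials, table name and file path are placeholders.
import requests

url = "http://www.mapquestapi.com/datamanager/v2/upload-data"
params = {"key": "YOUR_KEY", "ambiguities": "ignore"}
data = {
    "clientId": "XXXXX",
    "password": "XXXXX",
    "tableName": "mqap.XXXXX_xxxxx",
    "append": "false",
}
with open(r"C:\file.csv", "rb") as fh:
    resp = requests.post(url, params=params, data=data,
                         files={"file": fh},
                         headers={"Referer": "http://www.mapquest.com"})
print(resp.status_code, resp.text)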

Easiest way to convert pcap to JSON

I have a bunch of pcap files created with tcpdump. I would like to store these in a database for easier querying, indexing, etc. I thought MongoDB might be a good choice, because storing a packet as a JSON document, the way Wireshark/TShark presents it, seems natural.
It should be possible to create PDML files with tshark, parse these, and insert them into MongoDB, but I am curious whether someone knows of an existing or other solution.
On the command line (Linux, Windows or macOS), you can use tshark.
e.g.
tshark -r input.pcap -T json >output.json
or with a filter:
tshark -2 -R "your filter" -r input.pcap -T json >output.json
Since you mentioned a set of pcap files, you can also pre-merge them into a single pcap and then export that in one go, if preferred:
mergecap -w output.pcap input1.pcap input2.pcap..
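To get from that JSON export into MongoDB, one straightforward route (a sketch, not the only option; the database and collection names are made up) is to feed tshark's output to the pymongo driver:
#!/usr/bin/env python3
# Sketch: load tshark's JSON export into MongoDB with pymongo.
# Assumes a local mongod; database/collection names are arbitrary.
import json
import subprocess

from pymongo import MongoClient

out = subprocess.run(
    ["tshark", "-r", "input.pcap", "-T", "json"],
    capture_output=True, text=True, check=True,
).stdout

packets = json.loads(out)  # tshark -T json emits a single JSON array
# Note: tshark field names contain dots (e.g. "ip.src"); storing them
# as-is requires MongoDB 3.6 or newer.
coll = MongoClient("mongodb://localhost:27017")["captures"]["packets"]
coll.insert_many(packets)
print("inserted", len(packets), "packets")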
Wireshark has a feature to export its capture files to JSON.
File -> Export Packet Dissections -> As JSON
You could use pcaphar. More info about HAR here.

Print html file with CUPS

Is there a way to explicitly tell the CUPS server that the file you are sending is text/html thus overriding the mime.types lookup?
Yes, there is.
Use this commandline:
lp -d printername -o document-format=text/html file.html
Update (in response to comments)
I provided an exact answer to the OP's question.
However, this (alone) does not guarantee that the file will be successfully printed. To achieve that, CUPS needs a filter which can process input of MIME type text/html.
Such a filter is not provided by CUPS itself. However, it is easy to plug your own filter into the CUPS filtering system, and some Linux distributions ship a filter capable of consuming HTML files and converting them to a printable format.
You can check what happens in such a situation on your system. The cupsfilter command is a helper utility to run available/installed CUPS filters without the need to do actual printing through the CUPS daemon:
touch 1.html
/usr/sbin/cupsfilter --list-filters 1.html
Now on a system with no HTML consuming filter ready, you'd get this response:
cupsfilter: No filter to convert from text/html to application/pdf.
On a different system (like on a Mac), you'll see this:
xhtmltopdf
You can even force input and output MIME types to see which filters CUPS would run automatically when asked to print this file on a printer supporting that particular output MIME type (-i sets the input MIME type, -m the output):
/usr/sbin/cupsfilter \
-i text/html \
-m application/postscript \
--list-filters \
1.html
xhtmltopdf
cgpdftops
Here it would first convert HTML to PDF using xhtmltopdf, then transform the resulting PDF to PostScript using cgpdftops.
If you skip the --list-filters parameter, the command would actually go ahead and do the conversion, running (not just listing) the two filters and emitting the result to <stdout>.
You could write your own CUPS filter based on a shell script. The only other ingredient you need is a command-line tool, such as htmldoc or wkhtmltopdf, which can process HTML input and produce some format that can in turn be consumed by the CUPS filtering chain further down the road.
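For illustration, here is a minimal sketch of such a filter written in Python instead of shell. The wkhtmltopdf dependency is an assumption (htmldoc would work similarly), and you would still have to register the filter with CUPS via mime.types/mime.convs entries. CUPS passes the input file as an optional sixth argument, otherwise on stdin, and expects the converted output on stdout.
#!/usr/bin/env python3
# Sketch of a CUPS filter: read HTML, emit PDF on stdout.
# CUPS invokes filters as: job-id user title copies options [filename]
import subprocess
import sys

if len(sys.argv) > 6:
    html = open(sys.argv[6], "rb").read()
else:
    html = sys.stdin.buffer.read()

# Let wkhtmltopdf read from stdin ("-") and write to stdout ("-").
pdf = subprocess.run(
    ["wkhtmltopdf", "-q", "-", "-"],
    input=html, stdout=subprocess.PIPE, check=True,
).stdout
sys.stdout.buffer.write(pdf)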
Be aware that some (especially JavaScript-heavy) HTML files cannot be successfully processed by simple command-line tools into print-ready formats.
If you need more details about this, just ask another question...