How to convert a ruby file to json? - json

I want to extract information like Name, Description, LHOST, SOURCE from the following file.
https://github.com/rapid7/metasploit-framework/blob/master/modules/auxiliary/docx/word_unc_injector.rb
Could anybody let me know how to do so robustly? One way is to convert the Ruby file into JSON format (JSON can then be processed easily). But I am not sure how to convert a Ruby file to JSON. Could anybody let me know? Thanks.

The following script, rb2json0.rb, works:
#!/usr/bin/env ruby
require 'ripper'
require 'json'
puts Ripper.sexp($stdin.read).to_json()
$ ./rb2json0.rb <<< 'def hello(world) "Hello, #{world}!"; end'
["program",[["def",["@ident","hello",[1,4]],["paren",["params",[["@ident","world",[1,10]]],null,null,null,null,null,null]],["bodystmt",[["string_literal",["string_content",["@tstring_content","Hello, ",[1,18]],["string_embexpr",[["var_ref",["@ident","world",[1,27]]]]],["@tstring_content","!",[1,33]]]]],null,null,null]]]]
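To pull fields such as Name and Description out of the resulting sexp JSON, one option is to walk it looking for hash-literal key/value pairs. Here is a minimal Python sketch; the node names (`assoc_new`, `@tstring_content`) are assumptions based on Ripper's usual output shape, so verify them against what your Ripper version actually emits:

```python
import json

def strings(node):
    """Collect every @tstring_content value under a sexp node."""
    out = []
    if isinstance(node, list):
        if node and node[0] == "@tstring_content":
            out.append(node[1])
        else:
            for child in node:
                out.extend(strings(child))
    return out

def assoc_pairs(node, pairs):
    """Find hash-literal key/value pairs (assoc_new nodes) anywhere in the sexp."""
    if isinstance(node, list):
        if node and node[0] == "assoc_new" and len(node) == 3:
            key = "".join(strings(node[1]))
            if key:
                pairs[key] = "".join(strings(node[2]))
        for child in node:
            assoc_pairs(child, pairs)
    return pairs

# Tiny hand-built fragment resembling the sexp for:  'Name' => 'Demo'
sample = ["assoc_new",
          ["string_literal", ["string_content", ["@tstring_content", "Name", [1, 1]]]],
          ["string_literal", ["string_content", ["@tstring_content", "Demo", [1, 11]]]]]
print(assoc_pairs(sample, {}))  # -> {'Name': 'Demo'}
```

Running the Ripper script over the module and feeding its JSON into `assoc_pairs` should then surface the `'Name' => ...` style entries from `update_info`.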

Related

Convert JSONL to JSON

Is there a way to convert JSONL to JSON in Linux while preserving the full depth of the JSONL file? I found some methods based on jq, but they don't work with the full depth of the JSONL file.
Would something like this work?
#!/bin/sh
echo "[" >"$1.json"
# Append a comma to every line except the last, so the array stays valid JSON.
perl -pe 's/$/,/ unless eof' <"$1" >>"$1.json"
echo "]" >>"$1.json"
I am quite confused as to what you want to do. But when it comes to jq, normally I process things line by line, with each line being an atomic JSON object. Something like
cat file | jq some-options 'some commands' > output.txt
Sometimes I get the output in TSV format and pipe it into awk. jq is very friendly with line-by-line objects.
To convert a large JSON list into line-by-line format, just parse the large object in any programming language and serialize the inner objects back to JSON line by line.
But if you have already parsed the large object, I suggest you do the required processing directly in jq, without serializing the inner objects back...
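As a concrete illustration of that round trip, here is a minimal Python sketch (the function name is just for illustration) that parses each JSONL line and serializes the records back out as one JSON array, preserving arbitrarily nested structure:

```python
import json

def jsonl_to_json(text):
    """Parse JSONL (one JSON value per line) into a single JSON array string,
    keeping the full depth of each record."""
    records = [json.loads(line) for line in text.splitlines() if line.strip()]
    return json.dumps(records)

print(jsonl_to_json('{"a": {"b": [1, 2]}}\n{"a": null}'))
# -> [{"a": {"b": [1, 2]}}, {"a": null}]
```

Because every line is parsed properly rather than edited as text, trailing commas and nesting depth are never an issue.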

How do I get strings from a JSON file?

I'm writing an internationalized desktop program in Vala where I use an external JSON file to store a list of languages.
I'm using gettext for l10n, so if I get a string from the JSON file and do something like _(string_var), I get the translated string. The problem is that I don't know how I can add the strings to the pot file using xgettext or some similar tool.
Any ideas?
If the tool jq (http://stedolan.github.io/jq/) is an option for you, the following might work:
$ curl -s https://raw.githubusercontent.com/chavaone/gnomecat/master/data/languages.json | jq .languages[43].name
"English"
The solution I finally use was to modify the JSON file to use double quoted strings only when I wanted to translate that string. For example:
{
'code' : 'es',
'name' : "Spanish; Castilian",
'pluralform' : 'nplurals=2; plural=(n != 1);',
'default-team-email': 'gnome-es-list@gnome.org'
}
In the previous piece of JSON the only string I wanted to translate was "Spanish; Castilian". Then in POTFILES.in, I just use the gettext/quoted type.
# List of source files containing translatable strings.
# Please keep this file sorted alphabetically.
[encoding: UTF-8]
[type: gettext/quoted]data/languages.json
[type: gettext/quoted]data/plurals.json
[type: gettext/glade]data/ui/appmenu.ui
[...]

Converting Shell Output to json

I want to convert the output of octave execution in shell to json format.
For example if I execute
$ octave --silent --eval 'a=[1,3],b=2'
I get
a =
1 3
b = 2
I want the output formatted as a JSON string, as in
"{'a':[1,3], 'b':2}"
How do I achieve this? It would be great if it were in Node/JS, but anything is fine. I am looking for existing solutions rather than writing my own parsing logic. Any suggestions?
I doubt any such package exists. It's easier to write your own than to wait for one to appear.
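In that spirit, here is a minimal Python sketch of such logic. It assumes only the simple `name = value` scalar and row-vector layout shown in the example above; real Octave output has many more shapes (matrices with multiple rows, strings, column headers) that this does not handle:

```python
import json
import re

def octave_to_json(text):
    """Tiny sketch: turn Octave's 'name = value' echo output into JSON.
    Handles only scalars and single-row vectors, as in the example above."""
    result, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        m = re.match(r'(\w+)\s*=\s*(.*)', line)
        if m:
            name, rest = m.groups()
            if rest:                 # scalar on the same line:  b = 2
                result[name] = float(rest)
                current = None
            else:                    # vector elements follow on later lines
                current = name
                result[name] = []
        elif current is not None:
            result[current].extend(float(tok) for tok in line.split())
    return json.dumps(result)

print(octave_to_json("a =\n\n   1   3\n\nb = 2"))
# -> {"a": [1.0, 3.0], "b": 2.0}
```

Feeding it the captured stdout of the octave command (e.g. via subprocess) would give the JSON string asked for above, modulo the single-vs-double quote difference.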

Ways to parse JSON using KornShell

I have working code for parsing JSON output in KornShell by treating it as a string of characters. The issue I have is that the vendor keeps changing the position of the field that I am interested in. I understand that in JSON we can parse it by key-value pairs.
Is there something out there that can do this? I am interested in a specific field and would like to use it to run checks on the status of another REST API call.
My sample json output is like this:
JSONDATA value :
{
"status": "success",
"job-execution-id": 396805,
"job-execution-user": "flexapp",
"job-execution-trigger": "RESTAPI"
}
I would need the job-execution-id value to monitor this job through the rest of the script.
I am using the following command to parse it:
RUNJOB=$(print ${DATA} | cut -f3 -d':' | cut -f1 -d',' | tr -d '[:blank:]') >> ${LOGDIR}/${LOGFILE}
The problem with this is that it is field-delimited by :. The field position has been known to change between vendor releases.
So I am trying to see if I can use a utility out there that would always give me the key-value pair of "job-execution-id": 396805, no matter where it is in the json output.
I started looking at jsawk, and it requires the js interpreter to be installed on our machines which I don't want. Any hint on how to go about finding which RPM that I need to solve it?
I am using RHEL5.5.
Any help is greatly appreciated.
The ast-open project has libdss (and a dss wrapper) which supposedly could be used with ksh. Documentation is sparse and is limited to a few messages on the ast-user mailing list.
The regression tests for libdss contain some json and xml examples.
I'll try to find more info.
Python is included by default with CentOS so one thing you could do is pass your JSON string to a Python script and use Python's JSON parser. You can then grab the value written out by the script. An example you could modify to meet your needs is below.
Note that by specifying other dictionary keys in the Python script you can get any of the values you need without having to worry about the order changing.
Python script:
# get_job_execution_id.py
# The try/except is because you'll probably have Python 2.4 on CentOS 5.5,
# and a plain "import json" won't work unless you have Python 2.6+.
try:
    import json
except ImportError:
    import simplejson as json

import sys

json_data = sys.argv[1]
data = json.loads(json_data)
job_execution_id = data['job-execution-id']
sys.stdout.write(str(job_execution_id))
Kornshell script that executes it:
#!/bin/ksh
# get_job_execution_id.sh
JSON_DATA='{"status":"success","job-execution-id":396805,"job-execution-user":"flexapp","job-execution-trigger":"RESTAPI"}'
EXECUTION_ID=$(python get_job_execution_id.py "$JSON_DATA")
echo "$EXECUTION_ID"

Parsing JSON DATA in AWS

I still cannot parse JSON data in Linux.
I need a Linux command to parse JSON data into a readable string.
Someone told me to use underscore-cli (https://npmjs.org/package/underscore-cli).
I installed and used it, but the result is still unreadable.
my data:
"2005\u5e7405\u670812\u65e5(\u6728) 02\u664216\u5206"
according to this link
http://json.parser.online.fr/
the result is
"2005年05月12日(木) 02時16分"
Is there any other way to parse this JSON data?
Please help.
Try jq: http://stedolan.github.com/jq/
echo '"2005\u5e7405\u670812\u65e5(\u6728) 02\u664216\u5206"' | ./jq .
"2005年05月12日(木) 02時16分"
jq takes escaped Unicode and outputs it in UTF-8.
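The same decoding is available in Python's standard json module, which turns the \uXXXX escapes back into the characters they represent:

```python
import json

# json.loads decodes the \uXXXX escapes inside a JSON string
# into actual characters; printing then emits them as UTF-8.
s = json.loads('"2005\\u5e7405\\u670812\\u65e5(\\u6728) 02\\u664216\\u5206"')
print(s)  # -> 2005年05月12日(木) 02時16分
```

Any standards-compliant JSON parser will do this decoding; the escapes in the file are just an ASCII-safe encoding of the same string.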