Unix command(s) or Tcl proc to convert hex data to binary with output 1 bit per line - tcl

I have a plain text file with hex data (one 32-bit word per line). Example:
cafef00d
deadbeef
which I need to convert to this:
11001010111111101111000000001101
11011110101011011011111011101111
BUT with only 1 bit per line, starting from the LSB of the first 32-bit word and so on. The final output file will be:
1
0
1
1
... and so on
Is there a unix command (or chain of commands) for this, or can I do it in a Tcl proc?

A Tcl solution...
Assuming you've read the file into a string, the first thing is to convert the hex strings into numbers expressed in binary, LSB first. There are a few ways to do it; here's one (I like scan and format):
set binaryData [lmap hexValue $inputData {
    scan $hexValue "%x" value
    string reverse [format "%b" $value]
}]
For your input, that produces:
10110000000011110111111101010011 11110111011111011011010101111011
We can then convert that to be one digit per line with this:
set oneDigitPerLine [join [split [join $binaryData ""] ""] "\n"]
The innermost join gets rid of the whitespace, the split breaks it up into characters, and the outer join inserts the newline separators. (I'll not produce the result here.)

If you want to do it with Linux commands, try the following:
tac: reverse text lines in a file
fold -w 1: fold a text file, column width 1
sed: replace strings
tac input_file | \
fold -w 1 | \
sed -e 's/0/0000/' | \
sed -e 's/1/0001/' | \
sed -e 's/2/0010/' | \
sed -e 's/3/0011/' | \
sed -e 's/4/0100/' | \
sed -e 's/5/0101/' | \
sed -e 's/6/0110/' | \
sed -e 's/7/0111/' | \
sed -e 's/8/1000/' | \
sed -e 's/9/1001/' | \
sed -e 's/a/1010/' | \
sed -e 's/b/1011/' | \
sed -e 's/c/1100/' | \
sed -e 's/d/1101/' | \
sed -e 's/e/1110/' | \
sed -e 's/f/1111/' | \
fold -w 1 | \
tac
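If the sixteen-step sed ladder feels unwieldy, the same transformation can be sketched as a single awk program. This is a sketch assuming lowercase hex input and plain POSIX awk; the lookup table stands in for the sed substitutions:

```shell
awk '
BEGIN {
    # table of 4-bit patterns, indexed by hex digit value + 1
    split("0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111", bits, " ")
    hex = "0123456789abcdef"
}
{
    # walk the hex digits from last to first so output starts at the LSB
    for (i = length($0); i >= 1; i--) {
        nib = bits[index(hex, substr($0, i, 1))]
        # emit each nibble LSB-first, one bit per line
        for (j = 4; j >= 1; j--) print substr(nib, j, 1)
    }
}' input_file
```

For the sample input this begins 1, 0, 1, 1 (the reversed low nibble of cafef00d), 64 lines in all.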

Another way, using a perl one-liner:
$ perl -nE 'say for split "", reverse sprintf("%032b", hex)' < input.txt
1
0
1
1
...
For each line, this converts the base-16 string to a number, formats it as a 32-bit binary string, reverses it so the LSB comes first, and then prints each individual character on its own line.

Related

How can I sort a Bash output and save it to a sql db

I've been trying to get the data from the command ioreg -r -c "AppleSmartBattery" and save each of its entries to an SQL db
$ ioreg -r -c "AppleSmartBattery"
+-o AppleSmartBattery <class AppleSmartBattery, id 0x1000222c9, registered, ma$
{
"TimeRemaining" = 179
"AvgTimeToEmpty" = 179
"AdapterDetails" = {"FamilyCode"=0}
"ChargingOverride" = 0
"AppleRawCurrentCapacity" = 2373
"InstantTimeToEmpty" = 154
"AppleRawMaxCapacity" = 3811
"ExternalChargeCapable" = No
I would need to save it to an SQL table, where one column is the quoted name and the next one is the value after the equals sign.
I was trying to build a for loop; I got this far but can't figure out how to continue:
batstat=$(ioreg -r -c "AppleSmartBattery")
for i in ${batstat[#]}; do
sed 's/^[^{]*{\([^{}]*\)}.*/\1/' $i
echo $i
done
I would need to accomplish the following
get one single value in quotes "" out each time the for goes by the line
assign the correct value after the equals sign to the respective quoted value
thanks :)
Not that it's impossible, but I think doing this entirely in a shell script is a bit much when there are easier solutions available.
What I'd do here is convert the output to JSON and then use a Node module like JSON-to-SQL to generate the table from the JSON schema, and JSON-SQL to convert the output to an INSERT statement, which you can then use with any Node SQL client, like sql-client.
You can also probably parse the output more cleanly and easily in Node using a module like sh to capture the ioreg command output, but here's what I came up with for converting the command output into valid JSON.
#!/bin/bash
function parseData() {
tail -n +2 $1 | \
sed -re 's/\=/\:/g' | \
sed -re 's/</\"/g' | \
sed -re 's/>/\"/g' | \
sed -re 's/No/false/g' | \
sed -re 's/Yes/true/g' | \
sed -re 's/\(/\[/g' | \
sed -re 's/\)/\]/g' | \
sed '$d' | \
sed '$d' | \
sed 's/$/,/' | \
sed '1 s/\,//' | \
sed '$ s/\,//' | \
sed '52 s/,//'
}
ioreg -r -c "AppleSmartBattery" | parseData
The only issue is if the number of lines in the output ever changes, the 52 in the last line of the parseData function would need to be updated.
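As a sketch of a less position-dependent approach, you could match only the `"key" = value` lines and emit CSV directly, so the line count never matters. The printf lines below stand in for real ioreg output:

```shell
# sample lines standing in for `ioreg -r -c "AppleSmartBattery"` output
printf '%s\n' \
    '      "TimeRemaining" = 179' \
    '      "ExternalChargeCapable" = No' |
    sed -nE 's/^ *"([^"]+)" = (.*)$/\1,\2/p'
```

This prints TimeRemaining,179 and ExternalChargeCapable,No, which is straightforward to feed to an INSERT-statement generator.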

Replace a string if it is not followed by another string

I would like to replace the fortawesome string (if it is not followed by the /fontawesome-common-type string) by the stephane string.
sed -e 's,"#fortawesome(/^fontawesome-common-types+),"#stephaneeybert\1,g'
sed: -e expression #1, char 65: invalid reference \1 on `s' command's RHS
An example input:
"#fortawesome/fontawesome-common-types": "^0.2.32"
"name": "#fortawesome/pro-duotone-svg-icons",
And its expected output:
"#fortawesome/fontawesome-common-types": "^0.2.32"
"name": "#stephane/pro-duotone-svg-icons",
UPDATE: I went with the simple alternative of using an intermediate variable:
EXCLUDE=fontawesome-common-types
BUFFER=EkSkLUdE
cat package/package.json \
| sed -e "s,\"#$REPO_SOURCE/$EXCLUDE,\"#$BUFFER,g" \
| sed -e "s,\"#$REPO_SOURCE,\"#$REPO_DEST,g" \
| sed -e "s,\"#$BUFFER,\"#$REPO_SOURCE/$EXCLUDE,g" \
> package/package.out.json;
sed doesn't support negative lookahead functionality. Other than the obvious perl fallback that supports lookaheads, you may use this awk as a workaround:
awk -F 'fortawesome' -v OFS='stephane' 'NF > 1 {
s = ""
for (i=1; i<NF; ++i)
s = s $i ($(i+1) ~ /^\/fontawesome-common-type/ ? FS : OFS)
$0 = s $i
} 1' file
This awk uses fortawesome as the input field separator and stephane as the OFS.
NF > 1 is true when a line contains fortawesome.
We loop through the fields split by fortawesome, looking ahead at the next field each time.
If the next field starts with /fontawesome-common-type we put FS back unchanged; otherwise we insert OFS.
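As a quick check, here is the same awk applied to the question's two sample lines (the snippet file name is just a placeholder):

```shell
printf '%s\n' \
    '"#fortawesome/fontawesome-common-types": "^0.2.32"' \
    '"name": "#fortawesome/pro-duotone-svg-icons",' > package.snippet

awk -F 'fortawesome' -v OFS='stephane' 'NF > 1 {
    s = ""
    for (i=1; i<NF; ++i)
        s = s $i ($(i+1) ~ /^\/fontawesome-common-type/ ? FS : OFS)
    $0 = s $i
} 1' package.snippet
```

The first line comes out untouched and the second as "name": "#stephane/pro-duotone-svg-icons",.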
Use temporary values:
exclude='fortawesome/fontawesome-common-type';
match='fortawesome';
repl='stephane';
tmpvar='EkSkLUdE';
sed "s#$exclude#$tmpvar#g;s#$match#$repl#g;s#$tmpvar#$exclude#g" file > newfile
All occurrences of $exclude are replaced with $tmpvar, then the real intended matches are replaced with $repl, and finally $tmpvar is changed back to $exclude.
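A runnable sketch of that three-step swap on the question's sample lines (EkSkLUdE is just an unlikely placeholder string):

```shell
printf '%s\n' \
    '"#fortawesome/fontawesome-common-types": "^0.2.32"' \
    '"name": "#fortawesome/pro-duotone-svg-icons",' |
sed 's#fortawesome/fontawesome-common-types#EkSkLUdE#g
     s#fortawesome#stephane#g
     s#EkSkLUdE#fortawesome/fontawesome-common-types#g'
```

Only the second line changes; the excluded path round-trips through the placeholder untouched.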

How to find value of a key in a json response trace file using shell script

I have a response trace file containing below response:
#RESPONSE BODY
#--------------------
{"totalItems":1,"member":[{"name":"name","title":"PatchedT","description":"My des_","id":"70EA96FB313349279EB089BA9DE2EC3B","type":"Product","modified":"2019 Jul 23 10:22:15","created":"2019 Jul 23 10:21:54",}]}
I need to fetch the value of the "id" key in a variable which I can put in my further code.
Expected result is
echo $id - should give me 70EA96FB313349279EB089BA9DE2EC3B value
With valid JSON (remove the first two lines with sed and parse with jq; note that strict jq versions also reject the trailing comma before }] in the body shown above, so you may need to strip it too):
id=$(sed '1,2d' file | jq -r '.member[]|.id')
Output to variable id:
70EA96FB313349279EB089BA9DE2EC3B
I would strongly suggest using jq to parse json.
But given that JSON is mostly compatible with Python dictionary and list literals, this HACK would work too:
$ cat resp
#RESPONSE BODY
#--------------------
{"totalItems":1,"member":[{"name":"name","title":"PatchedT","description":"My des_","id":"70EA96FB313349279EB089BA9DE2EC3B","type":"Product","modified":"2019 Jul 23 10:22:15","created":"2019 Jul 23 10:21:54",}]}
$ awk 'NR==3{print "a="$0;print "print a[\"member\"][0][\"id\"]"}' resp | python
70EA96FB313349279EB089BA9DE2EC3B
$ sed -n '3s|.*|a=\0\nprint a["member"][0]["id"]|p' resp | python
70EA96FB313349279EB089BA9DE2EC3B
Note that this code is:
1. a dirty hack, because your system does not have the right tool (jq), and
2. susceptible to code injection attacks, hence use it ONLY IF you trust the response received from your service.
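A middle ground when jq is unavailable but python3 is: normalize the trailing comma (which makes the body invalid strict JSON) and let the json module do the parsing instead of evaluating the text. Here resp is a stand-in file built from a shortened sample:

```shell
# build a stand-in response file (shortened sample body)
printf '%s\n' '#RESPONSE BODY' '#--------------------' \
    '{"totalItems":1,"member":[{"name":"name","id":"70EA96FB313349279EB089BA9DE2EC3B",}]}' > resp

# line 3 is the body; ',}' is not valid JSON, so normalize it first
id=$(sed -n '3{s/,}/}/g;p}' resp |
    python3 -c 'import json, sys; print(json.load(sys.stdin)["member"][0]["id"])')
echo "$id"   # 70EA96FB313349279EB089BA9DE2EC3B
```

Unlike the eval/python-literal tricks, malformed or hostile input makes json.load fail instead of executing anything.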
Quick and dirty (eval is dangerous; don't use it unless you fully trust the input):
eval $(cat response_file | tail -1 | awk -F , '{ print $5 }' | sed -e 's/"//g' -e 's/:/=/')
It is based on the exact structure you gave, and hoping there is no , in any value before "id".
Or assign it yourself:
id=$(cat response_file | tail -1 | awk -F , '{ print $5 }' | cut -d: -f2 | sed -e 's/"//g')
Note that you can't access the name field with that trick: it is the first item of the member array, so it gets "swallowed" together with the "member":[ prefix in its comma-separated field. You can use an even-uglier hack to retrieve it though:
id=$(cat response_file | tail -1 | sed -e 's/:\[/,/g' -e 's/}\]//g' | awk -F , '{ print $5 }' | cut -d: -f2 | sed -e 's/"//g')
But, if you can, jq is the right tool for that work instead of ugly hacks like that (but if it works...).
When you can't use jq, you can consider
id=$(grep -Eo "[0-9A-F]{32}" file)
This only works when the file looks like what I expect, so you might need to add extra checks, like
id=$(grep "My des_" file | grep -Eo "[0-9A-F]{32}" | head -1)

Variable in sed parsing

I have a question about json and parsing with sed:
Here is what I get in json:
response='{"found":"true","downloadLink":"http:\/\/www.addic7ed.com\/updated\/1\/86593\/2"}'
If I use this:
downloadLink=`echo $response | sed -e 's/^.*"downloadLink"[ ]*:[ ]*"//' -e 's/".*//'`
then downloadLink will contain http:\/\/www.addic7ed.com\/updated\/1\/86593\/2.
I tried to put a variable instead of downloadLink:
downloadLink=`echo $response | sed -e 's/^.*"$value"[ ]*:[ ]*"//' -e 's/".*//'`
But it doesn't seem to work properly. Do you know how to do it?
Single quotes are not expanded in bash. Use double quotes and escape the double quotes already used, like this:
echo $response | sed -e "s/^.*\"$value\"[ ]*:[ ]*\"//" -e 's/".*//'
Rather than using two sed commands, you can capture the value you are interested like this:
echo "$response" | sed -e "s/^.*\"$value\"\s*:\s*\"\([^\"]*\)\".*$/\1/"
The contents of the \( \) are captured and made available as \1. I have chosen to capture [^\"]* (any number of characters that are not a double quote), which works for your example.
I am also using the \s "whitespace" character class rather than [ ], as I believe it is clearer.
Testing it out:
$ echo "$response"
{"found":"true","downloadLink":"http:\/\/www.addic7ed.com\/updated\/1\/86593\/2"}
$ value=downloadLink
$ echo "$response" | sed -e "s/^.*\"$value\"\s*:\s*\"\([^\"]*\)\".*$/\1/"
http:\/\/www.addic7ed.com\/updated\/1\/86593\/2
$ value=found
$ echo "$response" | sed -e "s/^.*\"$value\"\s*:\s*\"\([^\"]*\)\".*$/\1/"
true
By the way, if you're using bash, you can avoid echo $var | sed by using <<<:
sed -e "s/^.*\"$value\"\s*:\s*\"\([^\"]*\)\".*$/\1/" <<<"$response"
Variables are not expanded inside single quotes. You could use double quotes instead, like
sed "s/$variable/newvalue/g" ...
but then you should be extra careful with the contents of $variable, since sed will interpret any special characters in it (like the slash / in this specific example).
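One defensive sketch for that case is to escape the usual sed metacharacters in the variable before substituting (the character class below covers the common cases, not every edge case):

```shell
variable='www.addic7ed.com/updated'
# escape ] [ \ / . * & so sed treats them literally
escaped=$(printf '%s' "$variable" | sed 's/[][\/.*&]/\\&/g')
echo 'link: www.addic7ed.com/updated here' | sed "s/$escaped/REDACTED/"
# -> link: REDACTED here
```

With the escaping step, the slashes and dots in the URL no longer collide with sed's delimiter or regex syntax.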

Parsing JSON array: 'paste' for bash variables?

At first, I parsed an array JSON file with a loop using jshon, but it takes too long.
To speed things up, I thought I could return every value of id from every index, repeat with word (another key), put these into variables, and finally join them together before echoing. I've done something similar with files using paste, but I get an error complaining that the input is too long.
If there is a more efficient way of doing this in bash without too many dependencies, let me know.
I forgot to mention that I want to keep the possibility of colorizing the different parts independently (e.g. a red id). Also, I don't store the JSON; it's piped:
URL="http://somewebsitewithanapi.tld?foo=no&bar=yes"
API=`curl -s "$URL"`
id=`echo $API | jshon -a -e id -u`
word=`echo $API | jshon -a -e word -u | sed 's/bar/foo/'`
red='\e[0;31m' blue='\e[0;34m' #bash colors
echo "${red}$id${x}. ${blue}$word${x}" #SOMEHOW CONCATENATED SIDE-BY-SIDE,
# PRESERVING THE ABILITY TO COLORIZE THEM INDEPENDENTLY.
My input (piped; not a file):
[
{
"id": 1,
"word": "wordA"
},
{
"id": 2,
"word": "wordB"
},
{
"id": 3,
"word": "wordC"
}
]
Tried:
jshon -a -e id -u
That yields:
1
2
3
And:
jshon -a -e text -u
That yields:
wordA
wordB
wordC
Expected result after joining:
1 wordA
2 wordB
3 wordC
4 wordD
You can use the JSON parser jq:
jq '.[] | "\(.id) \(.word)"' jsonfile
It yields:
"1 wordA"
"2 wordB"
"3 wordC"
If you want to get rid of double quotes, pipe the output to sed:
jq '.[] | "\(.id) \(.word)"' jsonfile | sed -e 's/^.\(.*\).$/\1/'
That yields:
1 wordA
2 wordB
3 wordC
UPDATE: See Martin Neal's comment for a solution to remove quotes without an additional sed command.
The paste solution you're thinking of is this:
paste <(jshon -a -e id -u < foo.json) <(jshon -a -e word -u < foo.json)
Of course, you're processing the file twice.
You could also use a language with a JSON library, for example ruby:
ruby -rjson -le '
JSON.parse(File.read(ARGV.shift)).each {|h| print h["id"], " ", h["word"]}
' foo.json
1 wordA
2 wordB
3 wordC
API=$(curl -s "$URL")
# store ids and words in arrays
id=( $(jshon -a -e id -u <<< "$API") )
word=( $(jshon -a -e word -u <<< "$API" | sed 's/bar/foo/') )
red='\e[0;31m';
blue='\e[0;34m'
x='\e[0m'
for (( i=0; i<${#id[@]}; i++ )); do
printf "%s%s%s %s%s%s\n" "$red" "${id[i]}" "$x" \
"$blue" "${word[i]}" "$x"
done
I would go with Birei's solution, but if your output is constrained along the lines of your sample, the following may work (with GNU grep):
paste -d ' ' <(grep -oP '(?<=id": ).*(?=,)' file.txt) \
             <(grep -oP '(?<=word": ").*(?=")' file.txt)
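A one-pass awk alternative to the double grep/paste, keyed to the pretty-printed layout of the sample (foo.json is assumed to hold it; a short version is recreated here):

```shell
# recreate a short version of the sample input
printf '%s\n' '[' '{' '"id": 1,' '"word": "wordA"' '},' \
              '{' '"id": 2,' '"word": "wordB"' '}' ']' > foo.json

awk -F': ' '
    /"id"/   { gsub(/[",]/, "", $2); id = $2 }      # remember the id
    /"word"/ { gsub(/[",]/, "", $2); print id, $2 } # pair it with the word
' foo.json
# -> 1 wordA
#    2 wordB
```

Reading the file once also keeps the id and word pairings in sync even if a record is missing one of the keys' neighbors.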