Exclude words that may or may not end with a slash - language-agnostic

I am trying to exclude certain words from a dictionary file.
# cat en.txt
test
testing
access/p
batch
batch/n
batches
cross
# cat exclude.txt
test
batch
# grep -vf exclude.txt en.txt
access/p
cross
Words like "testing" and "batches" should be included in the results.
expected result:
testing
access/p
batches
cross
The word "batch" may or may not be followed by a slash "/"; there can be one or more tags after the slash (n in this case). But "batches" is a different word and should not match "batch".

I would harness GNU AWK for this task in the following way. Let en.txt content be
test
testing
access/p
batch
batch/n
batches
cross
and exclude.txt content be
test
batch
then
awk 'BEGIN{FS="/"}FNR==NR{arr[$1];next}!($1 in arr)' exclude.txt en.txt
gives output
testing
access/p
batches
cross
Explanation: I inform GNU AWK that / is the field separator (FS). Then, while processing the first file (where the global record number equals the record number within the current file, that is FNR==NR), I simply use the 1st column value as a key in the array arr and go to the next line, so nothing else happens. For the 2nd (and any following) file, I select lines whose 1st column is not (!) one of the keys of the array arr.
(tested in GNU Awk 5.0.1)

Using grep matching whole words:
grep -wvf exclude.txt en.txt
Explanation (from man grep)
-w --word-regexp Select only those lines containing matches that form whole words.
-v --invert-match Invert the sense of matching, to select non-matching lines.
-f FILE --file=FILE Obtain patterns from FILE, one per line.
Output
testing
access/p
batches
cross

Since many words in a dictionary may have a root in one of the words to exclude, we cannot conveniently† use a look-up hash (built from the exclude list), but have to check them all. One way to do that more efficiently is to use an alternation pattern built from the exclude list:
use warnings;
use strict;
use feature 'say';
use Path::Tiny;  # to read ("slurp") a file conveniently

my $excl_file = 'exclude.txt';

my $re_excl = join '|', split /\n/, path($excl_file)->slurp;
$re_excl = qr($re_excl);

while (<>) {
    if ( m{^ $re_excl (?:/.)? $}x ) {
        # say "Skip printing (so filter out): $_";
        next;
    }
    say;
}
This is used as program.pl dictionary-filename and it prints the filtered list.
Here I've assumed that what may follow the root-word to exclude is / followed by one character, (?:/.)?, since examples in the question use that and there is no precise statement on it. The pattern also assumes no spaces around the word.
Please adjust as/if needed for what may actually follow /. For example, it'd be (?:/.+)? for at least one character, (?:/[np])? for any character from a specific list (n or p), (?:/[^xy]+)? for any characters not in the given list, etc.
The qr operator forms a proper regex pattern.
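To see how those variants behave, here is a hedged sketch using perl one-liners (the sample word batch and its tags are assumed from the question):
echo 'batch/np' | perl -ne 'print "match\n" if m{^(?:batch)(?:/.+)?$}'    # match: one or more chars after /
echo 'batch/np' | perl -ne 'print "match\n" if m{^(?:batch)(?:/[np])?$}'  # no match: only one tag char allowed
echo 'batch'    | perl -ne 'print "match\n" if m{^(?:batch)(?:/.+)?$}'    # match: the bare word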
† Can still first strip non-word endings, then use a look-up, then put back those endings
use warnings;
use strict;
use feature 'say';
use Path::Tiny;  # to read a file conveniently

my $excl_file = 'exclude.txt';
my %lu = map { $_ => 1 } path($excl_file)->lines({ chomp => 1 });

while (<>) {
    chomp;
    # [^\w-] protects hyphenated words (or just use \W)
    # Or: s{(/.+)$}{}g; if "/" is the only possibility
    my $ending = s/([^\w-].+)$//g ? $1 : '';
    next if exists $lu{$_};
    $_ .= $ending;
    say;
}
This will be far more efficient, on large dictionaries and long lists of exclude words.
However, it is far more complex and probably fails some (unstated) requirements.
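Either version is run the same way; a hedged usage sketch (script saved under the assumed name filter.pl), whose output matches the expected result from the question:
perl filter.pl en.txt
testing
access/p
batches
cross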

Related

Mask certain file paths in binary files

I have a binary file containing some file paths. If a path starts with a certain string, the rest of the file path (matching [\x20-\x7f]+) should be masked, leaving the general structure and size of the file intact!
So the list of paths to search for is this:
/usr/local/bin/
/home/joe/
Then an occurrence like this in the binary data:
^#^#^#^#/home/joe/documents/hello.docx^#^#^#^#
Should be changed to this:
^#^#^#^#/home/joe/********************^#^#^#^#
What is the best way to do this? Do sed, perl or awk have a way? Or do I have to write a C or PHP program where I find the string and write strlen() number of mask characters in its place?
perl is a good choice for working on binary data. For sed and awk, only the GNU implementations can generally cope with binary data, the other ones would choke on the NUL byte or on long sequences between two newline characters, or on non-terminated lines.
perl -pi.back -e 's{(/usr/local/bin|/home/joe)/\K[\x20-\x7f]+}{
$& =~ s/./*/rg}ge' binary-file
You'd need a not-too-old version of perl for the /r flag (which returns the result of the substitution instead of applying it to the variable) and \K (which resets the start of the matched string).
By default, perl -p works on one line at a time; since the newline character is not part of [\x20-\x7f], that's fine.
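If you want to sanity-check the one-liner before pointing it at a real binary, here is a hedged sketch (the sample file name is assumed):
printf '\0\0\0\0/home/joe/documents/hello.docx\0\0\0\0' > sample.bin
perl -pi.back -e 's{(/usr/local/bin|/home/joe)/\K[\x20-\x7f]+}{$& =~ s/./*/rg}ge' sample.bin
od -c sample.bin    # the path tail should now be asterisks, same length as before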
Here is some perl code that works, though I'm sure it can be optimised. It is a filter, so it reads all of stdin into $data, then for each string in the array @dirs it does a substitution with the pattern. The replacement, however, is not a fixed string but a function call, replace($dir,$1), which is evaluated because of the e modifier on the substitute command.
#!/usr/bin/perl
use strict;

sub replace {
    my ($dir, $rest) = @_;
    $rest =~ s/./*/g;
    return $dir . $rest;
}

my @dirs = ('/usr/local/bin/', '/home/joe/');
my $data = join("", <STDIN>);
foreach my $dir (@dirs) {
    $data =~ s|$dir([\x20-\x7f]+)|replace($dir, $1)|ge;
}
print $data;
The function is given 2 arguments, the directory and the captured part of the pattern. It returns these concatenated after replacing each character in the captured string.
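A hedged usage note (the script name mask.pl is assumed): since it is a filter, run it with redirections so the original file stays untouched.
./mask.pl < binary-file > binary-file.masked
cmp -l binary-file binary-file.masked    # every differing byte should be a masked path character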

Similar strings, different results

I'm creating a Bash script to parse the air pollution levels from the webpage:
http://aqicn.org/city/beijing/m/
There is a lot of stuff in the file, but this is the relevant bit:
"iaqi":[{"p":"pm25","v":[59,21,112],"i":"Beijing pm25 (fine
particulate matter) measured by U.S Embassy Beijing Air Quality
Monitor
(\u7f8e\u56fd\u9a7b\u5317\u4eac\u5927\u4f7f\u9986\u7a7a\u6c14\u8d28\u91cf\u76d1\u6d4b).
Values are converted from \u00b5g/m3 to AQI levels using the EPA
standard."},{"p":"pm10","v":[15,5,69],"i":"Beijing pm10
(respirable particulate matter) measured by Beijing Environmental
Protection Monitoring Center
I want the script to parse and display 2 numbers: the current PM2.5 and PM10 levels, i.e. 59 and 15, the first values of the two "v" arrays above.
CITY="beijing"
AQIDATA=$(wget -q -O - http://aqicn.org/city/$CITY/m/)
PM25=$(awk -v FS="(\"p\":\"pm25\",\"v\":\\\[|,[0-9]+)" '{print $2}' <<< $AQIDATA)
PM100=$(awk -v FS="(\"p\":\"pm10\",\"v\":\\\[|,[0-9]+)" '{print $2}' <<< $AQIDATA)
echo $PM25 $PM100
Even though I can get PM2.5 levels to display correctly, I cannot get PM10 levels to display. I cannot understand why, because the strings are similar.
Anyone here able to explain?
The following approach is based on two steps:
(1) Extracting the relevant JSON;
(2) Extracting the relevant information from the JSON using a JSON-aware tool -- here jq.
(1) Ideally, the web service would provide a JSON API that would allow one to obtain the JSON directly, but as the URL you have is intended for viewing with a browser, some form of screen-scraping is needed. There is a certain amount of brittleness to such an approach, so here I'll just provide something that currently works:
wget -O - http://aqicn.org/city/beijing/m |
gawk 'BEGIN{RS="function"}
$1 ~/getAqiModel/ {
sub(/.*var model=/,"");
sub(/;return model;}/,"");
print}'
(gawk or an awk that supports multi-character RS can be used; if you have another awk, then first split on "function", using e.g.:
sed $'s/function/\\\n/g' # three backslashes )
The output of the above can be piped to the following jq command, which performs the filtering envisioned in (2) above.
(2)
jq -c '.iaqi | .[]
| select(.p? =="pm25" or .p? =="pm10") | [.p, .v[0]]'
The result:
["pm25",59]
["pm10",15]
I think your problem is that you have a single line HTML file that contains a script that contains a variable that contains the data you are looking for.
Your field delimiters are either "p":"pm10","v":[ or a comma followed by some digits.
For pm25 this works, because it is the first, and there are no occurrences of ,21 or something similar before it.
However, for pm10, there are some that are associated with pm25 ahead of it. So the second field is the empty string between ,21 and ,112.
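To make that concrete, here is a hedged sketch with the input reduced to the relevant fragment; with the pm10 field separator, ",21" and ",112" are adjacent separators, so $2 is the empty string:
echo '"p":"pm25","v":[59,21,112],"p":"pm10","v":[15,5,69]' |
  awk -v FS='("p":"pm10","v":\\[|,[0-9]+)' '{printf "second field: <%s>\n", $2}'
second field: <>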
@karakfa has a hack that seems to work -- but he doesn't explain very well why it works.
What he does is use awk's record separator (which is usually a newline) and set it to any of :, ,, or [. So in your case, one of the records would be "pm25", because it is preceded by a colon, which is a separator, and followed by a comma, also a separator.
Once it hits the matching content ("pm25") it sets a counter to 4. Then, for this and the following records, it counts the counter down: "pm25" itself, "v", the empty string between : and [, until it reaches one at the record holding the number you want to output. 4 && !3 is false, 3 && !2 is false, 2 && !1 is false, but 1 && !0 is true. Since there is no action block, awk simply prints this record, which is the value you want.
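A hedged trace of the records that RS='[:,[]' produces around the match (input reduced to a fragment):
printf '"iaqi":[{"p":"pm25","v":[59,21,112]' |
  awk -v RS='[:,[]' '{printf "NR=%d: <%s>\n", NR, $0}'
Record 4 is "pm25"; counting the counter down over records 4, 5, and 6 ("pm25", "v", and the empty string) lands on record 7, which is the value 59.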
A more robust approach would probably be to use XPath to find the script, then a JSON parser or similar to get the value.
chw21's helpful answer explains why your approach didn't work.
peak's helpful answer is the most robust, because it employs proper JSON parsing.
If you don't want to or can't use third-party utility jq for JSON parsing, I suggest using sed rather than awk, because awk is not a good fit for field-based parsing of this data.
$ sed -E 's/^.*"pm25"[^[]+\[([0-9]+).+"pm10"[^[]+\[([0-9]+).*$/\1 \2/' <<< "$AQIDATA"
59 15
The above should work with both GNU and BSD/OSX sed.
To read the result into variables:
read pm25 pm10 < \
<(sed -E 's/^.*"pm25"[^[]+\[([0-9]+).+"pm10"[^[]+\[([0-9]+).*$/\1 \2/' <<< "$AQIDATA")
Note how I've chosen lowercase variable names, because it's best to avoid all upper-case variables in shell programming, so as to avoid conflicts with special shell and environment variables.
If you can't rely on the order of the values in the source string, use two separate sed commands:
pm25=$(sed -E 's/^.*"pm25"[^[]+\[([0-9]+).*$/\1/' <<< "$AQIDATA")
pm10=$(sed -E 's/^.*"pm10"[^[]+\[([0-9]+).*$/\1/' <<< "$AQIDATA")
awk to the rescue!
If you have to, you can use this hacky way using smart counters with hand-crafted delimiters. Setting RS instead of FS transfers looping through fields to awk itself. Multi-char RS is not available for all awks (gawk supports it).
$ awk -v RS='[:,[]' '$0=="\"pm25\""{c=4} c&&!--c' file
59
$ awk -v RS='[:,[]' '$0=="\"pm10\""{c=4} c&&!--c' file
15

Expect: extract specific string from output

I am navigating a Java-based CLI menu on a remote machine with expect inside a bash script and I am trying to extract something from the output without leaving the expect session.
Expect command in my script is:
expect -c "
spawn ssh user@host
expect \"#\"
send \"java cli menu command here\r\"
expect \"java cli prompt\"
send \"java menu command\"
"
###I want to extract a specific string from the above output###
Expect output is:
Id Name
-------------------
abcd 12  John Smith
I want to extract abcd 12 from the above output into another expect variable for further use within the expect script. So that's the 3rd line, first field, using a double-space delimiter. The awk equivalent would be: awk -F '  ' 'NR==3 {print $1}'
The big issue is that the environment through which I am navigating with Expect is, as I stated above, a Java CLI based menu so I can't just use awk or anything else that would be available from a bash shell.
Getting out from the Java menu, processing the output and then getting in again is not an option as the login process lasts for 15 seconds so I need to remain inside and extract what I need from the output using expect internal commands only.
You can use a regexp in expect itself directly with the -re flag. Thanks to Donal for pointing out the single quote and double quote issues. I have given the solution using both ways.
I have created a file with the content as follows,
Id Name
-------------------
abcd 12  John Smith
This is nothing but your java program's console output. I have tested this on my system, i.e. I just simulated your program's output with cat. Just replace the cat command with your java program's commands. Simple. :)
Double Quotes :
#!/bin/bash
expect -c "
spawn ssh user#domain
expect \"password\"
send \"mypassword\r\"
expect {\\\$} { puts matched_literal_dollar_sign}
send \"cat input_file\r\"; # Replace this code with your java program commands
expect -re {-\r\n(.*?)\s\s}
set output \$expect_out(1,string)
#puts \$expect_out(1,string)
puts \"Result : \$output\"
"
Single Quotes :
#!/bin/bash
expect -c '
    spawn ssh user@domain
    expect "password"
    send "mypasswordhere\r"
    expect "\\\$" { puts matched_literal_dollar_sign}
    send "cat input_file\r"; # Replace this code with your java program commands
    expect -re {-\r\n(.*?)\s\s}
    set output $expect_out(1,string)
    #puts $expect_out(1,string)
    puts "Result : $output"
'
As you can see, I have used {-\r\n(.*?)\s\s}. Here the braces prevent any variable substitutions. In your output, we have a 2nd line full of hyphens, then a newline, then your 3rd line content. Let's decode the regex used.
-\r\n matches one literal hyphen and a newline together. This will match the last hyphen in the 2nd line and the newline, which in turn takes us to the 3rd line. Then .*? will match the required output (i.e. abcd 12) until it encounters a double space, which is matched by \s\s.
You might be wondering why I need the parentheses: they are used to capture sub-match patterns.
In general, expect saves the whole matched string in expect_out(0,string) and buffers all the matched/unmatched input in expect_out(buffer). Each sub-match is saved under subsequent numbering, such as expect_out(1,string), expect_out(2,string) and so on.
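Here is a minimal hedged sketch of that numbering which you can run locally (it spawns echo instead of your ssh session):
expect -c '
    spawn echo "abcd 12  John Smith"
    expect -re {(\S+\s\S+)\s\s}
    puts "whole match : $expect_out(0,string)"
    puts "sub match 1 : $expect_out(1,string)"
'
The whole match (index 0) includes the trailing double space, while sub-match 1 is just abcd 12.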
As Donal pointed out, it is better to use single quote's approach since it looks less messy. :)
It is not required to escape the \r with the backslash in case of double quotes.
Update :
I have changed the regexp from -\r\n(\w+\s+\w+)\s\s to -\r\n(.*?)\s\s.
This way it matches your requirement: any number of letters and single spaces, until the first occurrence of a double space in the output.
Now, let's come to your question. You mentioned that you tried -\r\n(\w+)\s\s. But there is a problem there with \w+: remember that \w+ will not match a space character, and your output has some spaces in it before the double space.
The use of regexp will matter based on your requirements on the input string which is going to get matched. You can customize the regular expressions based on your needs.
Update version 2 :
What is the significance of .*?? In regular expressions, * is a greedy operator and ? is our life saver. Let us consider this string:
Stackoverflow is already overflowing with number of users.
Now, see the effect of the regular expression .*flow as below.
* matches any number of characters. More precisely, it matches the longest string possible while still allowing the pattern itself to match. So, due to this, the .* in the pattern matched the characters Stackoverflow is already over, and flow in the pattern matched the text flow in the string.
Now, in order to make the .* match only up to the first occurrence of the string flow, we add the ? to it. It helps the pattern behave in a non-greedy manner.
Now, coming back to your question: if we had used .*\s\s, it would match the whole line, since it tries to match as much as possible. This is the common behavior of regular expressions.
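A hedged demonstration with perl one-liners (using the sample sentence from above):
s='Stackoverflow is already overflowing with number of users.'
echo "$s" | perl -ne 'print "$&\n" if /.*flow/'     # greedy: Stackoverflow is already overflow
echo "$s" | perl -ne 'print "$&\n" if /.*?flow/'    # non-greedy: Stackoverflow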
Update version 3:
Structure your code in the following way.
x=$(expect -c "
spawn ssh user#host
expect \"password\"
send \"password\r\"
expect {\\\$} { puts matched_literal_dollar_sign}
send \"cat input\r\"
expect -re {-\r\n(.*?)\s\s}
if {![info exists expect_out(1,string)]} {
puts \"Match did not happen :(\"
exit 1
}
set output \$expect_out(1,string)
#puts \$expect_out(1,string)
puts \"Result : \$output\"
")
y=$?
# $x now contains the output from the 'expect' command, and $y contains the
# exit status
echo $x
echo $y;
If the flow completed properly, the exit code will be 0; otherwise it will be 1. This way, you can check the return value in the bash script.
Have a look at the Tcl documentation to learn more about the info exists command.

Compare CSV files

I am currently using a Windows utility called TableTexCompare.
This tool can take 2 CSV files and compare them. The nice thing about it is that it can make the comparison even if the records of the 2 files are not sorted in the same order or the fields are not positioned in the same order.
As such, the following 2 files would result in a successful comparison
(File1.csv)
FirstName,LastName,Age
Mona,Sax,30
Max,Payne,43
Jack,Lupino,50
(File2.csv)
FirstName,Age,LastName
Max,43,Payne
Jack,50,Lupino
Mona,30,Sax
What I am looking for is to do the same thing from the command-line with just 1 difference:
I would like the comparison to be performed in one direction only, i.e. if File2.csv is as follows (a subset of File1.csv), the comparison should pass
(File2.csv)
FirstName,Age,LastName
Jack,50,Lupino
I do not particularly care if it's going to be in some programming language, a dedicated CLI tool or a shell script (e.g. using awk). I have some experience with Java and Groovy but would like to be pointed in some initial direction.
I can offer a Python solution:
import csv

with open("file1.csv") as f1, open("file2.csv") as f2:
    r1 = list(csv.DictReader(f1))
    r2 = csv.DictReader(f2)
    for item in r2:
        if item not in r1:
            print("r2 is not a subset of r1!")
            break
This is actually a bit more verbose than necessary in Python (but easier to understand); I personally would have used a generator expression:
import csv

with open("file1.csv") as f1, open("file2.csv") as f2:
    r1 = list(csv.DictReader(f1))
    r2 = csv.DictReader(f2)
    if all(item in r1 for item in r2):
        print("r2 is a subset of r1")
If you can afford to do a case insensitive comparison, and if there are no duplicates within File2.csv that must be matched within File1.csv, and if File1.csv does not contain \\ or \", then all you need is a simple FINDSTR command.
The following will list lines in File2.csv that do not appear in File1.csv:
findstr /vxig:"File1.csv" "File2.csv"
If all you want is an indication whether File1.csv is a superset of File2.csv, then
findstr /vxig:"File1.csv" "File2.csv" >nul && (echo File1 is NOT a superset of File2) || (echo File1 IS a superset of File2)
The search should not have to be case insensitive, except there is a nasty FINDSTR bug: it may fail to find matches when there are multiple case sensitive literal search strings of varying size. The case insensitive option avoids the bug. See Why doesn't this FINDSTR example with multiple literal search strings find a match? for more info.
The search will not work properly if File2.csv contains \\ or \" because FINDSTR will treat them as \ and " respectively. See What are the undocumented features and limitations of the Windows FINDSTR command? for more info. The accepted answer has sections describing FINDSTR escape sequences about half way down.
You can take a look at q - Text as a Database, which allows executing SQL directly on CSV files, including joins. This will allow doing the compare easily, and much more, such as matching specific columns for equality, and getting specific columns from rows that don't match, etc.
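For instance, a hedged sketch of the one-directional check (the SQL and flags are assumed from q's documented interface: -H consumes the header line, -d sets the delimiter); empty output would mean File2 is a subset of File1:
q -H -d , "SELECT f2.* FROM File2.csv f2
           LEFT JOIN File1.csv f1
             ON  f2.FirstName = f1.FirstName
             AND f2.LastName  = f1.LastName
             AND f2.Age       = f1.Age
           WHERE f1.FirstName IS NULL"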
Full disclosure - It's my own open source tool.
Harel Ben-Attia

Match any character (including newlines) in sed

I have a sed command that I want to run on a huge, terrible, ugly HTML file that was created from a Microsoft Word document. All it should do is remove any instance of the string
style='text-align:center; color:blue;
exampleStyle:exampleValue'
The sed command that I am trying to modify is
sed "s/ style='[^']*'//" fileA > fileB
It works great, except that whenever there is a new line inside of the matching text, it doesn't match. Is there a modifier for sed, or something I can do to force matching of any character, including newlines?
I understand that regexps are terrible at XML and HTML, blah blah blah, but in this case, the string patterns are well-formed in that the style attributes always start with a single quote and end with a single quote. So if I could just solve the newline problem, I could cut down the size of the HTML by over 50% with just that one command.
In the end, it turned out that Sinan Ünür's perl script worked best. It was almost instantaneous, and it reduced the file size from 2.3 MB to 850k. Good ol' Perl...
sed goes over the input file line by line, which means, as I understand it, what you want is not possible in sed.
You could use the following Perl script (untested), though:
#!/usr/bin/perl
use strict;
use warnings;

{
    local $/; # slurp mode
    my $html = <>;
    $html =~ s/ style='[^']*'//g;
    print $html;
}
__END__
A one liner would be:
$ perl -e 'local $/; $_ = <>; s/ style=\047[^\047]*\047//g; print' fileA > fileB
Sed reads the input line by line, so it is not simple to process more than one line at a time... but it is not impossible either; you need to make use of sed branching. The following will work; I have commented it to explain what is going on (not the most readable syntax!):
sed "# if the line matches 'style='', then branch to label,
# otherwise process next line
/style='/b style
b
# the line contains 'style', try to do a replace
: style
s/ style='[^']*'//
# if the replace worked, then process next line
t
# otherwise append the next line to the pattern space and try again.
N
b style
" fileA > fileB
You could remove all CR/LF using tr, run sed, and then import into an editor that auto-formats.
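That could look something like this hedged sketch (file names assumed from the question):
tr -d '\r\n' < fileA > fileA.oneline    # join the whole document into one line
sed "s/ style='[^']*'//g" fileA.oneline > fileB
Then open fileB in an editor that can re-indent and auto-format the HTML.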
Another way:
$ cat toreplace.txt
I want to make \
this into one line
I also want to \
merge this line
$ sed -e 'N;N;s/\\\n//g;P;D;' toreplace.txt
Output:
I want to make this into one line
I also want to merge this line
The N loads another line, P prints the pattern space up to the first newline, and D deletes the pattern space up to the first newline.
You can try this:
awk '/style/ && /exampleValue/ {
    gsub(/style.*exampleValue\047/,"")
}
/style/ && !/exampleValue/ {
    gsub(/style.* /,"")
    f=1
}
f && /exampleValue/ {
    gsub(/.*exampleValue\047 /,"")
    f=0
}
1
' file
Output:
# more file
this is a line
style='text-align:center; color:blue; exampleStyle:exampleValue'
this is a line
blah
blah
style='text-align:center; color:blue;
exampleStyle:exampleValue' blah blah....
# ./test.sh
this is a line
this is a line
blah
blah
blah blah....
Remove XML elements across several lines
My use case was pretty much the same, but I needed to match opening and closing tags from XML elements and remove them completely --including whatever was inside.
<xmlTag whatever="parameter that holds in the tag header">
<whatever_is_inside/>
<InWhicheverFormat>
<AcrossSeveralLines/>
</InWhicheverFormat>
</xmlTag>
Still, sed works on one single line. What we do here is trick it into appending subsequent lines to the current one so we can edit all the lines we like, then rewrite the output (\n is a legal char you can output with sed to divide lines again).
Inspired by the answer from @beano, and another answer on Unix StackExchange, I built up my working sed "program":
sed -s --in-place=.back -e '/\(^[ ]*\)<xmlTag/{ # whenever you encounter the xmlTag
$! { # do
:begin # label to return to
N; # append next line
s/\(^[ ]*\)<\(xmlTag\)[^·]\+<\/\2>//; # Attempt substitution (elimination) of pattern
t end # if substitution succeeds, jump to :end
b begin # unconditional jump to :begin to append yet another line
:end # label to mark the end
}
}' myxmlfile.xml
Some explanations:
I match <xmlTag without closing the > because my XML element contains parameters.
What precedes <xmlTag is a very helpful piece of RegExp to match any existing indentation: \(^[ ]*\) so you can later output it with just \1 (even if it was not needed this time).
The addition of ; in several places is so that sed will understand that the command (N, s or whichever) ends there and following character(s) are another command.
most of my trouble was trying to find a RegExp that would match "anything in between". I finally settled on anything but · (i.e. [^·]\+), counting on not having that char in any of the data files. I needed to escape the + because it is special for GNU sed.
my original files remain as .back, just in case something goes wrong (tests would still fail after a bad modification), and they are flagged easily by version control for removal in bulk.
I use this kind of sed-automation to evolve the .XML files that we use with serialized data to run our unit and integration tests. Whenever our classes change (lose or gain fields), the data have to be updated. I do that with a single 'find' that executes a sed-automation on the files containing the modified class. We hold hundreds of XML data files.