Fastest way to iterate through records in a file and replace contents in another file - language-agnostic

Need suggestions on the fastest way to read from a file containing a list of values (~30k lines) and use the values to search and replace contents in another file (~500k lines) in Linux.
Currently I am iterating through the list file, forming 'sed -e' commands and then executing them. However, it is taking ~1 hour to complete.
I'm looking to reduce the time taken by maybe 50%.
Here is a snippet of the code that I am currently using:
declare -a sedArgs
while IFS="" read -r line; do
    IFS=',' read -r col1 col2 col3 col4 <<< "${line}"
    col2=$(echo "${col2}" | sed 's/\*/\\*/g; s/\./\\./g')
    col4=$(echo "${col4}" | sed 's/\*/\\*/g; s/\./\\./g')
    sedArgs+=("-e")
    sedArgs+=("s|${col2}${col1}|${col4}${col3}|g")
done < list.txt
sed -i "${sedArgs[@]}" target.txt
list.txt example:
OLDVAL1,1234,NEWVAL1,1222
OLDVAL2,2234,NEWVAL2,2222
target.txt example:
CUST1 OLDVAL1 1234 00000000000000000
CUST2 OLDVAL2 2234 00000000000000000

Build a hash with old-new pairs from the first file and use it for replacements
use warnings;
use strict;
use feature 'say';
use Path::Tiny; # for convenience to read a file
my $repl_data_file = 'list.txt';
my %repl = map { (split /,/)[0,2] } path($repl_data_file)->lines;
while (<>) {
    s{\S+ \s+\K (\S+) (.*)}{ ($repl{$1}//$1) . $2 }ex;
    print;
}
The <> operator reads, line by line, the files whose names are given on the command line, so use this as prog.pl target.txt > new_target.txt (with the output redirected to a file).
I make some assumptions since the description is sparse: list.txt holds the old and new values in its first and third columns, and the value to be replaced sits in the second column of the target file.
This should take mere seconds on the described files (sizes of 30k vs 500k lines).
I use Path::Tiny for convenience. It is a very useful module to have and easy to install, but here is an alternative with builtin tools only:
my %repl =
    map { (split /,/)[0,2] }
    do  { open my $fh, '<', $repl_data_file or die $!; <$fh> };

exclude words that may or may not end with a slash

I am trying to exclude certain words from a dictionary file.
# cat en.txt
test
testing
access/p
batch
batch/n
batches
cross
# cat exclude.txt
test
batch
# grep -vf exclude.txt en.txt
access/p
cross
Words like "testing" and "batches" should be included in the results.
expected result:
testing
access/p
batches
cross
This is because the word "batch" may or may not be followed by a slash "/"; there can be one or more tags after the slash (n in this case). But the word "batches" is a different word and should not match "batch".
I would harness GNU AWK for this task in the following way. Let en.txt content be
test
testing
access/p
batch
batch/n
batches
cross
and exclude.txt content be
test
batch
then
awk 'BEGIN{FS="/"}FNR==NR{arr[$1];next}!($1 in arr)' exclude.txt en.txt
gives output
testing
access/p
batches
cross
Explanation: I inform GNU AWK that / is the field separator (FS). Then, when processing the first file (where the global record number equals the record number within the file, that is FNR==NR), I simply use the 1st column value as a key in the array arr and go to the next line, so nothing else happens. For the 2nd file (and any following files, if present), I select lines whose 1st column is not (!) one of the keys of the array arr.
(tested in GNU Awk 5.0.1)
Using grep matching whole words:
grep -wvf exclude.txt en.txt
Explanation (from man grep)
-w, --word-regexp Select only those lines containing matches that form whole words.
-v, --invert-match Invert the sense of matching, to select non-matching lines.
-f FILE, --file=FILE Obtain patterns from FILE, one per line.
Output
testing
access/p
batches
cross
Since there are many words in a dictionary that may have a root in one of the words to exclude, we cannot conveniently† use a look-up hash (built from the exclude list), but have to check all of them. One way to do that more efficiently is to use an alternation pattern built from the exclude list:
use warnings;
use strict;
use feature 'say';
use Path::Tiny; # to read ("slurp") a file conveniently
my $excl_file = 'exclude.txt';
my $re_excl = join '|', split /\n/, path($excl_file)->slurp;
$re_excl = qr($re_excl);
while (<>) {
    chomp;  # drop the newline so say doesn't add blank lines
    if ( m{^ $re_excl (?:/.)? $}x ) {
        # say "Skip printing (so filter out): $_";
        next;
    }
    say;
}
This is used as program.pl dictionary-filename and it prints the filtered list.
Here I've assumed that what may follow the root-word to exclude is / followed by one character, (?:/.)?, since examples in the question use that and there is no precise statement on it. The pattern also assumes no spaces around the word.
Please adjust as/if needed for what may actually follow /. For example, it'd be (?:/.+)? for at least one character, (?:/[np])? for any character from a specific list (n or p), (?:/[^xy]+)? for one or more characters not in the given list, etc.
The qr operator forms a proper regex pattern.
† Can still first strip non-word endings, then use a look-up, then put back those endings
use Path::Tiny; # to read a file conveniently
my %lu = map { $_ => 1 } path($excl_file)->lines({ chomp => 1 });
while (<>) {
    chomp;
    # [^\w-] protects hyphenated words (or just use \W)
    # Or: s{(/.+)$}{}g; if "/" is the only possibility
    s/([^\w-].+)$//g;
    next if exists $lu{$_};
    $_ .= $1 if $1;
    say;
}
This will be far more efficient on large dictionaries and long lists of exclude words.
However, it is also far more complex and probably fails some (unstated) requirements.

match text in a csv file, for the first X lines and the last X results, and get a value in Lua

I'm translating a Bash script to a Lua program. In the Bash script there is a line:
mapfile -t vol < <( cat csv_file | head -$id | grep locateme | tail -3 | cut -f6 -d\,)
the result of that is:
vol[0]=22
vol[1]=33
vol[2]=44
the csv_file is like:
16,a,b,c,d,9,16,0,3,65,0,0,locateme
16,a,b,c,d,11,16,0,3,65,0,0,notme
16,a,b,c,d,22,16,0,3,65,0,0,locateme
16,a,b,c,d,33,16,0,3,65,0,0,locateme
16,a,b,c,d,32,16,0,3,65,0,0,notme
16,a,b,c,d,44,16,0,3,65,0,0,locateme
I need a table with the same results as in Bash:
vol[1]=22
vol[2]=33
vol[3]=44
Please, I have no idea how to start with this.
Instead of a Bash array you're going to use a Lua table.
local vol = {}
You'll need a generic for loop and the file:lines(...) iterator. It is a good idea to read through the whole io library.
This will allow you to get each line of the csv file as a string for further processing.
Now you'll need Lua's string library. There are multiple ways to do this. One option is to use another generic for loop with string.gmatch and a suitable string pattern that captures the value you're interested in.
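Putting those pieces together, here is a minimal sketch of what that could look like, not a definitive translation: the file name and the id limit are placeholders you'd swap for your own values, io.lines is used instead of opening a handle and calling file:lines on it, and "locateme" is matched as plain text.

local csv_file = "csv_file"  -- placeholder: path to your csv file
local id = 6                 -- placeholder: only consider the first id lines, like head -$id

local matches = {}
local n = 0
for line in io.lines(csv_file) do            -- read the file line by line
    n = n + 1
    if n > id then break end
    if line:find("locateme", 1, true) then   -- plain-text match, like grep locateme
        local fields = {}
        for field in line:gmatch("[^,]+") do -- split the line on commas
            fields[#fields + 1] = field
        end
        matches[#matches + 1] = fields[6]    -- 6th column, like cut -f6 -d,
    end
end

local vol = {}                               -- keep only the last 3 matches, like tail -3
for i = math.max(1, #matches - 2), #matches do
    vol[#vol + 1] = matches[i]
end

for i, v in ipairs(vol) do
    print(("vol[%d]=%s"):format(i, v))
end

With the sample csv_file above and id covering all six lines, vol comes out as vol[1]=22, vol[2]=33, vol[3]=44, the same as the Bash version.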

Search in large csv files

The problem
I have thousands of csv files in a folder. Every file has 128,000 entries with four columns in each line.
From time to time (twice a day) I need to compare a list (10,000 entries) against all the csv files. If one of the entries is identical to the third or fourth column of one of the csv files, I need to write the whole csv row to an extra file.
Possible solutions
Grep
#!/bin/bash
getArray() {
    array=()
    while IFS= read -r line
    do
        array+=("$line")
    done < "$1"
}
getArray "entries.log"
for e in "${array[@]}"
do
    echo "$e"
    /bin/grep $e ./csv/* >> found
done
This seems to work, but it takes forever. After almost 48 hours the script had checked only 48 entries out of about 10,000.
MySQL
The next try was to import all csv files into a MySQL database. But there I ran into problems with my table at around 50,000,000 entries.
So I wrote a script which created a new table after 49,000,000 entries, and with that I was able to import all csv files.
I tried to create an index on the second column, but it always failed (timeout). Creating the index before the import wasn't possible either; it slowed the import down to days instead of only a few hours.
The select statement was horrible, but it worked. Much faster than the "grep" solution, but still too slow.
My question
What else can I try to search within the csv files?
To speed things up I copied all csv files to an SSD. But I hope there are other ways.
This is unlikely to offer you meaningful benefits, but here are some improvements to your script:
use the built-in mapfile to slurp a file into an array:
mapfile -t array < entries.log
use grep with a file of patterns and appropriate flags.
I assume you want to match items in entries.log as fixed strings, not as regex patterns.
I also assume you want to match whole words.
grep -Fwf entries.log ./csv/*
This means you don't have to grep the thousands of csv files thousands of times (once for each item in entries.log). Actually, this alone should give you a real, meaningful performance improvement.
This also removes the need to read entries.log into an array at all.
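Putting that together, the whole script could plausibly collapse into a single command. A minimal sketch, assuming the list really is entries.log and the csv files live under ./csv/ as in your script; -h is added here on the assumption that only the matching csv rows, not the file names, should end up in found:

#!/bin/bash
# -F: treat the patterns as fixed strings, -w: match whole words only,
# -h: suppress the "filename:" prefix, -f: read the patterns from entries.log
grep -Fwhf entries.log ./csv/* >> found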
In awk, assuming all the csv files change; otherwise it would be wise to keep track of the already-checked files. But first, some test material:
$ mkdir test # the csvs go here
$ cat > test/file1 # has a match in 3rd
not not this not
$ cat > test/file2 # no match
not not not not
$ cat > test/file3 # has a match in 4th
not not not that
$ cat > list # these we look for
this
that
Then the script:
$ awk 'NR==FNR{a[$1];next} ($3 in a) || ($4 in a){print >> "out"}' list test/*
$ cat out
not not this not
not not not that
Explained:
$ awk '                      # awk
NR==FNR {                    # process the list file
    a[$1]                    # hash list entries to a
    next                     # next list item
}
($3 in a) || ($4 in a) {     # if 3rd or 4th field entry in hash
    print >> "out"           # append whole record to file "out"
}' list test/*               # first list then the rest of the files
The script hashes all the list entries into a and reads through the csv files looking for 3rd and 4th field entries in the hash, outputting the whole record when there is a match.
If you test it, let me know how long it ran.
You can build a patterns file and then use xargs and grep -Ef to search for all patterns in batches of csv files, rather than one pattern at a time as in your current solution:
# prepare patterns file
while read -r line; do
    printf '%s\n' "^[^,]+,[^,]+,$line,[^,]+$" # find value in third column
    printf '%s\n' "^[^,]+,[^,]+,[^,]+,$line$" # find value in fourth column
done < entries.log > patterns.dat
find /path/to/csv -type f -name '*.csv' -print0 | xargs -0 grep -hEf patterns.dat > found.dat
find ... - emits a NUL-delimited list of all csv files found
xargs -0 ... - passes the file list to grep, in batches

how to remove carriage returns in a txt file

I recently received some data as 99 pipe-delimited txt files; however, in some of them (I'll use dataaddress.txt as an example) there is a return in the address, e.g.
14 MakeUp Road
Hull
HU99 9HU
It is coming out on 3 rows rather than one; bear in mind there is data before and after this address, separated by pipes. It just seems to be this address issue which is causing me problems loading the txt file correctly using SSIS.
Rather than go back to the source, I wondered if there was a way to manipulate the txt file to remove these carriage returns while not affecting the row-ending returns, if that makes sense.
I would use sed or awk. I will show you how to do this with awk, because it is more platform-independent. If you do not have awk, you can download a mawk binary from http://invisible-island.net/mawk/mawk.html.
The idea is as follows: tell awk that your record separator is something different, not a carriage return or line feed. I will use a comma.
Then use a regular expression to replace the string that you do not like.
Here is a test file I created. Save it as test.txt:
1,Line before ...
2,Broken line ... 14 MakeUp Road
Hull
HU99 9HU
3,Line after
And call awk as follows:
awk 'BEGIN { RS = ","; ORS=""; s=""; } $0 != "" { gsub(/MakeUp Road[\n\r]+Hull[\n\r]+HU99 9HU/, "MakeUp Road Hull HU99 9HU"); print s $0; s="," }' test.txt
I suggest that you save the awk code into a file named cleanup.awk. Here is the better formatted code with explanations.
BEGIN {
    # This block is executed at the beginning of the file
    RS = ","; # Tell awk our records are separated by comma
    ORS = ""; # Tell awk not to use a record separator in the output
    s = "";   # We will print this as the record separator in the output
}
{
    # This block is executed for each line.
    # Remember, our "lines" are separated by commas.
    # For each line, use a regular expression to replace the bad text.
    gsub(/MakeUp Road[\n\r]+Hull[\n\r]+HU99 9HU/, "MakeUp Road Hull HU99 9HU");
    # Print the replaced text - the $0 variable represents the line text.
    print s $0; s = ","
}
Using the awk file, you can execute the replacement as follows:
awk -f cleanup.awk test.txt
To process multiple files, you can create a bash script:
for f in *.txt; do
    # Execute the cleanup.awk program for each file.
    # Save the cleaned output to a file in the directory ../clean
    awk -f cleanup.awk "$f" > ../clean/"$f"
done
You can use sed to remove the line feed and carriage return characters:
sed ':a;N;$!ba;s/MakeUp Road[\n\r]\+/MakeUp Road /g' test.txt | sed ':a;N;$!ba;s/Hull[\n\r]\+/Hull /g'
Explanation:
:a create a label 'a'
N append the next line to the pattern space
$! if not the last line, ba branch (go to) label 'a'
s substitute command, \n represents new line, \r represents carriage return, [\n\r]+ - match new line or carriage return in a sequence as many times as they occur (at least one), /g global match (as many times as it can)
sed will loop through steps 1 to 3 until it reaches the last line, accumulating all the lines in the pattern space, where sed will then substitute all the \n characters

Similar strings, different results

I'm creating a Bash script to parse the air pollution levels from the webpage:
http://aqicn.org/city/beijing/m/
There is a lot of stuff in the file, but this is the relevant bit:
"iaqi":[{"p":"pm25","v":[59,21,112],"i":"Beijing pm25 (fine
particulate matter) measured by U.S Embassy Beijing Air Quality
Monitor
(\u7f8e\u56fd\u9a7b\u5317\u4eac\u5927\u4f7f\u9986\u7a7a\u6c14\u8d28\u91cf\u76d1\u6d4b).
Values are converted from \u00b5g/m3 to AQI levels using the EPA
standard."},{"p":"pm10","v":[15,5,69],"i":"Beijing pm10
(respirable particulate matter) measured by Beijing Environmental
Protection Monitoring Center
I want the script to parse and display 2 numbers: the current PM2.5 and PM10 levels (59 and 15 in the excerpt above).
CITY="beijing"
AQIDATA=$(wget -q http://aqicn.org/city/$CITY/m/ -O -)
PM25=$(awk -v FS="(\"p\":\"pm25\",\"v\":\\\[|,[0-9]+)" '{print $2}' <<< $AQIDATA)
PM100=$(awk -v FS="(\"p\":\"pm10\",\"v\":\\\[|,[0-9]+)" '{print $2}' <<< $AQIDATA)
echo $PM25 $PM100
Even though I can get PM2.5 levels to display correctly, I cannot get PM10 levels to display. I cannot understand why, because the strings are similar.
Anyone here able to explain?
The following approach is based on two steps:
(1) Extracting the relevant JSON;
(2) Extracting the relevant information from the JSON using a JSON-aware tool -- here jq.
(1) Ideally, the web service would provide a JSON API that would allow one to obtain the JSON directly, but as the URL you have is intended for viewing with a browser, some form of screen-scraping is needed. There is a certain amount of brittleness to such an approach, so here I'll just provide something that currently works:
wget -O - http://aqicn.org/city/beijing/m |
gawk 'BEGIN{RS="function"}
      $1 ~ /getAqiModel/ {
        sub(/.*var model=/,"");
        sub(/;return model;}/,"");
        print}'
(gawk or an awk that supports multi-character RS can be used; if you have another awk, then first split on "function", using e.g.:
sed $'s/function/\\\n/g' # three backslashes )
The output of the above can be piped to the following jq command, which performs the filtering envisioned in (2) above.
(2)
jq -c '.iaqi | .[]
| select(.p? =="pm25" or .p? =="pm10") | [.p, .v[0]]'
The result:
["pm25",59]
["pm10",15]
I think your problem is that you have a single-line HTML file that contains a script that contains a variable that contains the data you are looking for.
Your field delimiters are either "p":"pm25","v":[ (respectively "p":"pm10","v":[) or a comma followed by some digits.
For pm25 this works, because it is the first, and there are no occurrences of ,21 or something similar before it.
However, for pm10, there are some (those associated with pm25) ahead of it. So the second field contains the empty string between ,21 and ,112.
#karakfa has a hack that seems to work -- but he doesn't explain very well why it works.
What he does is take awk's record separator (which is usually a newline) and set it to any of :, ,, or [. So in your case, one of the records would be "pm25", because it is preceded by a colon, which is a separator, and followed by a comma, also a separator.
Once it hits the matching content ("pm25") it sets a counter to 4. Then, for this and the following records, it counts the counter down: "pm25" itself, "v", the empty string between : and [, until it reaches one at the record holding the number you want to output. 4 && !3 is false, 3 && !2 is false, 2 && !1 is false, but 1 && !0 is true. Since there is no action block, awk simply prints this record, which is the value you want.
A more robust approach would probably be to use XPath to find the script, then use some JSON parser or similar to get the value.
chw21's helpful answer explains why your approach didn't work.
peak's helpful answer is the most robust, because it employs proper JSON parsing.
If you don't want to or can't use third-party utility jq for JSON parsing, I suggest using sed rather than awk, because awk is not a good fit for field-based parsing of this data.
$ sed -E 's/^.*"pm25"[^[]+\[([0-9]+).+"pm10"[^[]+\[([0-9]+).*$/\1 \2/' <<< "$AQIDATA"
59 15
The above should work with both GNU and BSD/OSX sed.
To read the result into variables:
read pm25 pm10 < \
<(sed -E 's/^.*"pm25"[^[]+\[([0-9]+).+"pm10"[^[]+\[([0-9]+).*$/\1 \2/' <<< "$AQIDATA")
Note how I've chosen lowercase variable names, because it's best to avoid all upper-case variables in shell programming, so as to avoid conflicts with special shell and environment variables.
If you can't rely on the order of the values in the source string, use two separate sed commands:
pm25=$(sed -E 's/^.*"pm25"[^[]+\[([0-9]+).*$/\1/' <<< "$AQIDATA")
pm10=$(sed -E 's/^.*"pm10"[^[]+\[([0-9]+).*$/\1/' <<< "$AQIDATA")
awk to the rescue!
If you have to, you can use this hacky way, using smart counters with hand-crafted delimiters. Setting RS instead of FS transfers the looping through fields to awk itself. A multi-character (regex) RS is not available in all awks (gawk supports it).
$ awk -v RS='[:,[]' '$0=="\"pm25\""{c=4} c&&!--c' file
59
$ awk -v RS='[:,[]' '$0=="\"pm10\""{c=4} c&&!--c' file
15