Bash for loop picking up filenames and a column from read -r and gnu plot - mysql

The top part of the following script works great: the .dat files are created via the MySQL command and work perfectly with gnuplot (via the command line). The problem is getting the bottom part (gnuplot) to work correctly. I'm pretty sure I have a couple of problems in the code: variables and the array. I need to call each .dat file (plot), put the title in the graph (from the title in customers.txt) and name the output (.png).
Any guidance would be appreciated. Thanks a lot -- RichR
#!/bin/bash
set -x
databases=""
titles=""
while read -r ipAddr dbName title; do
    dbName=$(echo "$dbName" | sed -e 's/pacsdb//')
    rm -f "$dbName.dat"
    touch "$dbName.dat"
    databases=("$dbName.dat")
    titles="$titles $title"
    while read -r period; do
        mysql -uroot -pxxxx -h "$ipAddr" "pacsdb$dbName" -se \
            "SELECT COUNT(*) FROM tables WHERE some.info BETWEEN $period;" >> "$dbName.dat"
    done < periods.txt
done < customers.txt
for database in "${databases[@]}"; do
    gnuplot << EOF
set a bunch of options
set output "/var/www/$dbName.png"
plot "$dbName.dat" using 2:xtic(1) title "$titles"
EOF
done
exit 0
customers.txt example line:
192.168.179.222 pacsdbgibsonia "Gibsonia Animal Hospital"
Error output:
+ for database in '"${databases[@]}"'
+ gnuplot
line 0: warning: Skipping unreadable file ".dat"
line 0: No data in plot
+ exit 0

To initialise the databases array:
databases=()
To append $dbName.dat to the databases array:
databases+=("$dbName.dat")
To retrieve dbName, remove the suffix pattern .dat:
dbName=${database%.dat}
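Putting those three fixes together, here is a minimal corrected sketch of the whole loop. The credentials, SQL and gnuplot options are the question's placeholders; keeping the titles in a parallel array so each plot gets its own title is my assumption about the intent:
#!/bin/bash
databases=()                            # initialise as an array, not a string
titles=()                               # assumed: one title per database
while read -r ipAddr dbName title; do
    dbName=${dbName#pacsdb}             # strip the prefix without forking sed
    : > "$dbName.dat"                   # create/truncate the data file
    databases+=("$dbName.dat")          # append, do not overwrite
    titles+=("$title")                  # title keeps its quotes from customers.txt
    while read -r period; do
        mysql -uroot -pxxxx -h "$ipAddr" "pacsdb$dbName" -se \
            "SELECT COUNT(*) FROM tables WHERE some.info BETWEEN $period;" >> "$dbName.dat"
    done < periods.txt
done < customers.txt
i=0
for database in "${databases[@]}"; do
    dbName=${database%.dat}             # recover the name from the file name
    gnuplot << EOF
set a bunch of options
set output "/var/www/$dbName.png"
plot "$database" using 2:xtic(1) title ${titles[i]}
EOF
    ((i++))
done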

Related

Storing aws ssm parameter as variable in bash script [duplicate]

I have a pretty simple script that is something like the following:
#!/bin/bash
VAR1="$1"
MOREF='sudo run command against $VAR1 | grep name | cut -c7-'
echo $MOREF
When I run this script from the command line and pass it the arguments, I am not getting any output. However, when I run the commands contained within the $MOREF variable, I am able to get output.
How can one take the results of a command that needs to be run within a script, save it to a variable, and then output that variable on the screen?
In addition to backticks `command`, command substitution can be done with $(command) or "$(command)", which I find easier to read, and allows for nesting.
OUTPUT=$(ls -1)
echo "${OUTPUT}"
MULTILINE=$(ls \
-1)
echo "${MULTILINE}"
Quoting (") does matter to preserve multi-line variable values; it is optional on the right-hand side of an assignment, as word splitting is not performed, so OUTPUT=$(ls -1) would work fine.
$(sudo run command)
If you're going to use an apostrophe, you need `, not '. This character is called a "backtick" (or "grave accent"):
#!/bin/bash
VAR1="$1"
VAR2="$2"
MOREF=`sudo run command against "$VAR1" | grep name | cut -c7-`
echo "$MOREF"
Some Bash tricks I use to set variables from commands
Sorry, this is a long answer. But as Bash is a shell, where the main goal is to run other Unix commands and react to their result code and/or output (commands are often piped into filters, etc.), storing command output in variables is something basic and fundamental.
Therefore, depending on:
compatibility (POSIX)
kind of output (filter(s))
number of variables to set (split or interpret)
execution time (monitoring)
error trapping
repeatability of the request (see long-running background processes, further down)
interactivity (considering user input while reading from another input file descriptor)
did I miss something?
First simple, old (obsolete), and compatible way
myPi=`echo '4*a(1)' | bc -l`
echo $myPi
3.14159265358979323844
Compatible, second way
As nesting could become heavy, parentheses were implemented for this:
myPi=$(bc -l <<<'4*a(1)')
Using backticks in scripts is to be avoided today.
Nested sample:
SysStarted=$(date -d "$(ps ho lstart 1)" +%s)
echo $SysStarted
1480656334
bash features
Reading more than one variable (with Bashisms)
df -k /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/dm-0 999320 529020 401488 57% /
If I just want the Used value:
array=($(df -k /))
you could see an array variable:
declare -p array
declare -a array='([0]="Filesystem" [1]="1K-blocks" [2]="Used" [3]="Available" [4]="Use%" [5]="Mounted" [6]="on" [7]="/dev/dm-0" [8]="999320" [9]="529020" [10]="401488" [11]="57%" [12]="/")'
Then:
echo ${array[9]}
529020
But I often use this:
{ read -r _;read -r filesystem size using avail prct mountpoint ; } < <(df -k /)
echo $using
529020
(The first read -r _ just drops the header line.) Here, in only one command, you populate 6 different variables (shown in alphabetical order):
declare -p avail filesystem mountpoint prct size using
declare -- avail="401488"
declare -- filesystem="/dev/dm-0"
declare -- mountpoint="/"
declare -- prct="57%"
declare -- size="999320"
declare -- using="529020"
Or
{ read -a head; varnames=(${head[@]//[K1% -]});
read ${varnames[@],,} ; } < <(LANG=C df -k /)
Then:
declare -p varnames ${varnames[@],,}
declare -a varnames=([0]="Filesystem" [1]="blocks" [2]="Used" [3]="Available" [4]="Use" [5]="Mounted" [6]="on")
declare -- filesystem="/dev/dm-0"
declare -- blocks="999320"
declare -- used="529020"
declare -- available="401488"
declare -- use="57%"
declare -- mounted="/"
declare -- on=""
Or even:
{ read _ ; read filesystem dsk[{6,2,9}] prct mountpoint ; } < <(df -k /)
declare -p mountpoint dsk
declare -- mountpoint="/"
declare -a dsk=([2]="529020" [6]="999320" [9]="401488")
(Note that Used and Blocks are switched there: read ... dsk[6] dsk[2] dsk[9] ...)
This will work with associative arrays too: read _ disk[total] disk[used] ...
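For instance, a minimal sketch, assuming the array is declared associative first (the element names are quoted so the brackets are not glob-expanded):
declare -A disk
{ read -r _ ; read -r _ 'disk[total]' 'disk[used]' 'disk[avail]' _ ; } < <(df -k /)
declare -p disk
declare -A disk=([avail]="401488" [used]="529020" [total]="999320" )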
Other related samples: parsing xrandr output (see "end of Firefox tab by bash in a size of x% of display size?" and, at AskUbuntu.com, "Parsing xrandr output").
Dedicated fd using an unnamed fifo
There is an elegant way! In this sample, I will read the /etc/passwd file:
users=()
while IFS=: read -u $list user pass uid gid name home bin ;do
((uid>=500)) &&
printf -v users[uid] "%11d %7d %-20s %s\n" $uid $gid $user $home
done {list}</etc/passwd
Using this way (... read -u $list; ... {list}<inputfile) leaves stdin free for other purposes, like user interaction.
Then
echo -n "${users[@]}"
1000 1000 user /home/user
...
65534 65534 nobody /nonexistent
and
echo ${!users[@]}
1000 ... 65534
echo -n "${users[1000]}"
1000 1000 user /home/user
This could be used with static files, or even /dev/tcp/xx.xx.xx.xx/yyy (with x for the IP address or hostname and y for the port number), or with the output of a command:
{
read -u $list -a head # read header in array `head`
varnames=(${head[@]//[K1% -]}) # drop illegal chars for variable names
while read -u $list ${varnames[@],,} ;do
((pct=available*100/(available+used),pct<10)) &&
printf "WARN: FS: %-20s on %-14s %3d <10 (Total: %11u, Use: %7s)\n" \
"${filesystem#*/mapper/}" "$mounted" $pct $blocks "$use"
done
} {list}< <(LANG=C df -k)
And of course with inline documents:
while IFS=\; read -u $list -a myvar ;do
echo ${myvar[2]}
done {list}<<"eof"
foo;bar;baz
alice;bob;charlie
$cherry;$strawberry;$memberberries
eof
Practical sample: parsing CSV files
As this answer is already long enough, for this paragraph I will just refer you to this answer to How to parse a CSV file in Bash?, where I read a file by using an unnamed fifo, with syntax like:
exec {FD}<"$file" # open unnamed fifo for read
IFS=';' read -ru $FD -a headline
while IFS=';' read -ru $FD -a row ;do ...
... but using the Bash loadable CSV module.
On my website, you may find the same script, reading a CSV as an inline document.
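For instance, a minimal self-contained sketch along those lines, assuming a semicolon-separated file named data.csv (the file name is only illustrative):
exec {FD}<"data.csv"                 # open the file on a new fd; its number lands in FD
IFS=';' read -ru "$FD" -a headline   # first line: column names
while IFS=';' read -ru "$FD" -a row ;do
    echo "${headline[0]}: ${row[0]}" # pair the first column name with each row's value
done
exec {FD}<&-                         # close the fd once done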
Sample function for populating some variables:
#!/bin/bash
declare free=0 total=0 used=0 mpnt='??'
getDiskStat() {
{
read _
read _ total used free _ mpnt
} < <(
df -k ${1:-/}
)
}
getDiskStat $1
echo "$mpnt: Tot:$total, used: $used, free: $free."
Note: the declare line is not required; it is just there for readability.
About sudo cmd | grep ... | cut ...
shell=$(cat /etc/passwd | grep $USER | cut -d : -f 7)
echo $shell
/bin/bash
(Please avoid the useless cat! This is just one fork less:
shell=$(grep $USER </etc/passwd | cut -d : -f 7)
Every pipe (|) implies a fork, where another process has to be run, accessing the disk, making library calls, and so on.
So using sed, for example, limits the subprocesses to only one fork:
shell=$(sed </etc/passwd "s/^$USER:.*://p;d")
echo $shell
And with Bashisms:
For many actions, mostly on small files, Bash can do the job itself:
while IFS=: read -a line ; do
[ "$line" = "$USER" ] && shell=${line[6]}
done </etc/passwd
echo $shell
/bin/bash
or
while IFS=: read loginname encpass uid gid fullname home shell;do
[ "$loginname" = "$USER" ] && break
done </etc/passwd
echo $shell $loginname ...
Going further about variable splitting...
Have a look at my answer to How do I split a string on a delimiter in Bash?
Alternative: reducing forks by using backgrounded long-running tasks
In order to prevent multiple forks like
myPi=$(bc -l <<<'4*a(1)')
myRay=12
myCirc=$(bc -l <<<" 2 * $myPi * $myRay ")
or
myStarted=$(date -d "$(ps ho lstart 1)" +%s)
mySessStart=$(date -d "$(ps ho lstart $$)" +%s)
These work fine, but running many forks is heavy and slow.
And commands like date and bc could do many operations, line by line!
See:
bc -l <<<$'3*4\n5*6'
12
30
date -f - +%s < <(ps ho lstart 1 $$)
1516030449
1517853288
So we could use a long-running background process to do many jobs, without having to initiate a new fork for each request.
You could have a look at how reducing forks made Mandelbrot bash improve from more than eight hours to less than 5 seconds.
Under Bash, there is the coproc keyword for this:
coproc bc -l
echo 4*3 >&${COPROC[1]}
read -u $COPROC answer
echo $answer
12
echo >&${COPROC[1]} 'pi=4*a(1)'
ray=42.0
printf >&${COPROC[1]} '2*pi*%s\n' $ray
read -u $COPROC answer
echo $answer
263.89378290154263202896
printf >&${COPROC[1]} 'pi*%s^2\n' $ray
read -u $COPROC answer
echo $answer
5541.76944093239527260816
As bc is ready and running in the background, and its I/O is ready too, there is no delay and nothing to load, open, or close before or after the operation; only the operation itself! This becomes a lot quicker than having to fork to bc for each operation!
Side effect: while bc stays running, it holds all its registers, so some variables or functions could be defined at an initialisation step, as the first writes to ${COPROC[1]}, just after starting the task (via coproc).
Into a function: newConnector
You may find my newConnector function on GitHub.com or on my own site. (Note: on GitHub, there are two files; on my site, function and demo are bundled into one unique file, which could be sourced for use or just run for the demo.)
Sample:
source shell_connector.sh
tty
/dev/pts/20
ps --tty pts/20 fw
PID TTY STAT TIME COMMAND
29019 pts/20 Ss 0:00 bash
30745 pts/20 R+ 0:00 \_ ps --tty pts/20 fw
newConnector /usr/bin/bc "-l" '3*4' 12
ps --tty pts/20 fw
PID TTY STAT TIME COMMAND
29019 pts/20 Ss 0:00 bash
30944 pts/20 S 0:00 \_ /usr/bin/bc -l
30952 pts/20 R+ 0:00 \_ ps --tty pts/20 fw
declare -p PI
bash: declare: PI: not found
myBc '4*a(1)' PI
declare -p PI
declare -- PI="3.14159265358979323844"
The function myBc lets you use the background task with simple syntax.
Then for date:
newConnector /bin/date '-f - +%s' @0 0
myDate '2000-01-01'
946681200
myDate "$(ps ho lstart 1)" boottime
myDate now now
read utm idl </proc/uptime
myBc "$now-$boottime" uptime
printf "%s\n" ${utm%%.*} $uptime
42134906
42134906
ps --tty pts/20 fw
PID TTY STAT TIME COMMAND
29019 pts/20 Ss 0:00 bash
30944 pts/20 S 0:00 \_ /usr/bin/bc -l
32615 pts/20 S 0:00 \_ /bin/date -f - +%s
3162 pts/20 R+ 0:00 \_ ps --tty pts/20 fw
From there, if you want to end one of the background processes, you just have to close its fd:
eval "exec $DATEOUT>&-"
eval "exec $DATEIN>&-"
ps --tty pts/20 fw
PID TTY STAT TIME COMMAND
4936 pts/20 Ss 0:00 bash
5256 pts/20 S 0:00 \_ /usr/bin/bc -l
6358 pts/20 R+ 0:00 \_ ps --tty pts/20 fw
This is not needed, though, because all fds are closed when the main process finishes.
As they have already indicated to you, you should use `backticks`.
The alternative proposed, $(command), works as well and is also easier to read. Note that $(command) is standardised by POSIX, so only truly ancient Bourne shells lack it;
if your scripts have to be portable to such very old Unix systems, you may prefer the old backticks notation.
I know three ways to do it:
Functions are suitable for such tasks:
func (){
ls -l
}
Invoke it by saying func.
Another suitable solution could be eval:
var="ls -l"
eval $var
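One caution about this approach: eval executes its whole argument as shell code, so it must never see untrusted input:
var='ls -l; echo injected'   # anything smuggled into the string...
eval $var                    # ...runs as well: the echo executes too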
The third one is using variables directly:
var=$(ls -l)
OR
var=`ls -l`
You can get the output of the third solution in a good way:
echo "$var"
And also in a nasty way:
echo $var
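A quick demonstration of the difference:
var=$(printf 'a  b\nc')
echo "$var"    # prints two lines, the double space preserved
echo $var      # word splitting collapses it to: a b c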
Just to be different:
MOREF=$(sudo run command against $VAR1 | grep name | cut -c7-)
When setting a variable make sure you have no spaces before and/or after the = sign. I literally spent an hour trying to figure this out, trying all kinds of solutions! This is not cool.
Correct:
WTFF=`echo "stuff"`
echo "Example: $WTFF"
Will fail with error "stuff: not found" or similar:
WTFF= `echo "stuff"`
echo "Example: $WTFF"
If you want to do it with multiline/multiple commands, then you can do this:
output=$( bash <<EOF
# Multiline/multiple command/s
EOF
)
Or:
output=$(
# Multiline/multiple command/s
)
Example:
#!/bin/bash
output="$( bash <<EOF
echo first
echo second
echo third
EOF
)"
echo "$output"
Output:
first
second
third
Using a heredoc, you can simplify things pretty easily by breaking your long single-line code down into a multiline one. Another example:
output="$( ssh -p $port $user#$domain <<EOF
# Breakdown your long ssh command into multiline here.
EOF
)"
You need to use either
$(command-here)
or
`command-here`
Example
#!/bin/bash
VAR1="$1"
VAR2="$2"
MOREF="$(sudo run command against "$VAR1" | grep name | cut -c7-)"
echo "$MOREF"
If the command that you are trying to execute fails, it will write its output to the error stream, which will then be printed to the console.
To avoid this, you must redirect the error stream:
result=$(ls -l something_that_does_not_exist 2>&1)
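If you also need to know whether the command failed, capture $? immediately after the substitution:
result=$(ls -l something_that_does_not_exist 2>&1)
status=$?      # exit code of ls, not of the assignment itself
echo "exit=$status output=$result"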
This is another way, and it is good to use with some text editors that are unable to correctly highlight intricate code:
read -r -d '' str < <(cat somefile.txt)
echo "${#str}"
echo "$str"
You can use backticks (also known as grave accents) or $().
Like:
OUTPUT=$(some_command);
OUTPUT=`some_command`;
Both have the same effect, but OUTPUT=$(some_command) is more readable and the more modern form.
Here are two more ways:
Please keep in mind that whitespace is very important in Bash. So if you want your command to run, use it as is, without introducing any more spaces.
The following assigns harshil to L and then prints it
L=$"harshil"
echo "$L"
The following assigns the output of the command tr to L2. tr is being operated on another variable, L1.
L2=$(echo "$L1" | tr '[:upper:]' '[:lower:]')
Mac/OS X nowadays comes with old Bash versions, i.e. GNU bash, version 3.2.57(1)-release (arm64-apple-darwin21). In this case, one can use:
new_variable="$(some_command)"
A concrete example:
newvar="$(echo $var | tr -d '123')"
Note the $( ) used for command substitution, as opposed to ${ }, which is parameter expansion.
Some may find this useful.
Integer values in variable substitution, where the trick is using the $(( )) double parentheses:
N=3
M=3
COUNT=$((N-1))
ARR[0]=3
ARR[1]=2
ARR[2]=4
ARR[3]=1
while (( COUNT < ${#ARR[@]} ))
do
ARR[$COUNT]=$((ARR[COUNT]*M))
(( COUNT=$COUNT+$N ))
done

Visually update size of file during mysql dump via tunnel

I have a bash script called copydata which does the following to do a MySQL dump of specific tables from our production MySQL server to a local file, and then push it into my local MySQL database.
#!/bin/sh
#set up tunnel
ssh -f -i ~/.ssh/ec2-eu-keypair.pem -o CompressionLevel=9 -o ExitOnForwardFailure=yes -L 3307:elr2.our-id.eu-west-1.rds.amazonaws.com:3306 username@example.com
echo "Dumping tables \"$#\" to /tmp/data.sql"
#dump tables to local file
mysqldump -u root -h 127.0.0.1 -pmypass -P 3307 live_db_name --extended-insert --single-transaction --default-character-set=utf8 --skip-set-charset $@ > /tmp/data.sql
pv /tmp/data.sql | mysql -u root local_db_name --default-character-set=utf8 --binary-mode --force
So, it is called like copydata table1 table2
It works, but the mysqldump part can take a very long time, and it would be nice to have some visual feedback on progress. One thing which occurred to me is that I could show the size of /tmp/data.sql while the dump is in progress: if I just keep running the following in a separate tab, for example, I can see it going up at a rate of approx 2 MB per second:
ls -lh /tmp/data.sql
Can I add the above command, or something similar, to the above script so that I can see the file size updating while i'm waiting for the mysqldump line to complete?
Thanks to @YuriLachin in the comments, I did the following:
added an & to the mysqldump line, so it becomes asynchronous, i.e. the script carries on to the next line while the mysqldump continues in the background
added this line, to repeatedly call ls -lh on the local file:
pid=$!; while [ -d "/proc/$pid" ] ; do echo -n "$(ls -lh /tmp/data.sql)"\\r; sleep 1; done
Let's break that down, to aid my own learning as much as anything else:
#get the process id of that last backgrounded task (the mysqldump) so we
#can tell when it's finished running
pid=$!
#while it *is* still running
while [ -d "/proc/$pid" ] ; do
#get the size of the file, with ls, but do it inside an echo command.
#Wrapping it like this allows us to use the `-n` option which means "omit newline",
#or don't go onto the next line. Then, at the end, do \\r which is a carriage return,
#meaning 'go back to the start of the current line', so the next line will
#overwrite the first one.
#Now it updates in place rather than spewing out loads of lines.
echo -n "$(ls -lh /tmp/data.sql)"\\r
#then do nothing for 1 second, to avoid wasting cpu time.
sleep 1
done
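Putting that together, the tail of the script would look something like this sketch. Beyond the accepted fix, the only additions are wait, so the import cannot start before the dump has finished and its exit status is collected, and printf in place of echo -n ... \\r, which handles both the missing newline and the carriage return portably:
mysqldump -u root -h 127.0.0.1 -pmypass -P 3307 live_db_name --extended-insert \
    --single-transaction --default-character-set=utf8 --skip-set-charset "$@" > /tmp/data.sql &
pid=$!
while [ -d "/proc/$pid" ] ; do
    printf '%s\r' "$(ls -lh /tmp/data.sql)"   # redraw the size on the same line
    sleep 1
done
wait "$pid"                                   # reap the dump and collect its exit status
pv /tmp/data.sql | mysql -u root local_db_name --default-character-set=utf8 --binary-mode --force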

Trying to populate video file data to mysql database

I'm trying, on a Mac, to loop through all videos in a directory and add the details (duration, hash and size) of each file to a MySQL db. But for some reason it fails on the mysql part every time.
If I take the mysql query generated by the script and run it on the mysql db, it works fine. Can anyone help at all?
#!/bin/bash
OrDir="/Volumes/Misc/video"
find "$OrDir" -type f -exec /bin/bash -c \
'name=$(basename "$1")
name=${name%.*}
duration=$( ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 -sexagesimal "$1")
hash=$( md5 "$1" | cut -f 4 -d " ")
size=$( stat -f%z "$1")
QUERY="UPDATE Video SET Duration=\"$duration\", Hash=\"$hash\", Bytes=$size WHERE Name=\"$name\" "
echo "$QUERY \n"
mysql --host=**.**.**.** --user=**** --password=****** **** << EOF
$QUERY;
EOF
' _ {} \;
The *s are omitting sensitive data and the values are correct (they're used in another shell script, with the same method, that runs fine on the same server).
This is just part of a larger script which will be combined once this works properly.
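One hedged suggestion, since the actual error isn't shown: pass the statement through mysql's -e option instead of a here-document inside the single-quoted bash -c body. That removes any sensitivity to where the EOF delimiter lands within the -c string:
mysql --host=**.**.**.** --user=**** --password=****** **** -e "$QUERY"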

Running a .sh file on windows to recover single table from Mysql .sql file

I have a backup of a MySQL database and I just need one table from it in a hurry.
It's 4 GB, and I've tried opening it with programs like Vim, which didn't go well; I guess it's too big. Even so, trying to extract one table from so much text would be difficult.
So I came across this:
http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script
which explains how to do it with a shell script. And I found out that with http://cygwin.com you can run shell scripts on Windows; I'm running Windows 8.1.
I'm not really clear on what the steps are:
I run Cygwin and get into the shell window.
I put my database file and mysqldumpsplitter.sh in the C:\cygwin64\usr\mysql folder I created.
Then I go to /usr/mysql and run this:
sh mysqldumpsplitter.sh mydatabase.sql tbl_activity
tbl_activity is the table I'm trying to access, and mydatabase.sql is the SQL backup.
But when I run that, I get:
mysqldumpsplitter.sh: line 5: tput: command not found
mysqldumpsplitter.sh: line 6: tput: command not found
mysqldumpsplitter.sh: line 7: tput: command not found
mysqldumpsplitter.sh: line 8: tput: command not found
mysqldumpsplitter.sh: line 9: tput: command not found
mysqldumpsplitter.sh: line 10: tput: command not found
mysqldumpsplitter.sh: line 11: tput: command not found
mysqldumpsplitter.sh: line 12: tput: command not found
mysqldumpsplitter.sh: line 13: tput: command not found
mysqldumpsplitter.sh: line 14: tput: command not found
0 Table extracted from mydatabase.sql at .
Lines 5-14 are below:
txtund=$(tput sgr 0 1) # Underline
txtbld=$(tput bold) # Bold
txtred=$(tput setaf 1) # Red
txtgrn=$(tput setaf 2) # Green
txtylw=$(tput setaf 3) # Yellow
txtblu=$(tput setaf 4) # Blue
txtpur=$(tput setaf 5) # Purple
txtcyn=$(tput setaf 6) # Cyan
txtwht=$(tput setaf 7) # White
txtrst=$(tput sgr0) # Text reset
While I could potentially get access to an Ubuntu machine and run this (I assume it will work better there), I would have to wait hours for the 4 GB .sql dump to upload, and I'm hoping to do this quickly. Is running this on Windows simply a hack, and should I switch to Ubuntu to run it instead?
Full .sh file here, since it's small:
#!/bin/sh
# http://kedar.nitty-witty.com
#SPLIT DUMP FILE INTO INDIVIDUAL TABLE DUMPS
# Text color variables
txtund=$(tput sgr 0 1) # Underline
txtbld=$(tput bold) # Bold
txtred=$(tput setaf 1) # Red
txtgrn=$(tput setaf 2) # Green
txtylw=$(tput setaf 3) # Yellow
txtblu=$(tput setaf 4) # Blue
txtpur=$(tput setaf 5) # Purple
txtcyn=$(tput setaf 6) # Cyan
txtwht=$(tput setaf 7) # White
txtrst=$(tput sgr0) # Text reset
TARGET_DIR="."
DUMP_FILE=$1
TABLE_COUNT=0
if [ $# = 0 ]; then
echo "${txtbld}${txtred}Usage: sh MyDumpSplitter.sh DUMP-FILE-NAME${txtrst} -- Extract all tables as a separate file from dump."
echo "${txtbld}${txtred} sh MyDumpSplitter.sh DUMP-FILE-NAME TABLE-NAME ${txtrst} -- Extract single table from dump."
echo "${txtbld}${txtred} sh MyDumpSplitter.sh DUMP-FILE-NAME -S TABLE-NAME-REGEXP ${txtrst} -- Extract tables from dump for specified regular expression."
exit;
elif [ $# = 1 ]; then
#Loop for each tablename found in provided dumpfile
for tablename in $(grep "Table structure for table " $1 | awk -F"\`" {'print $2'})
do
#Extract table specific dump to tablename.sql
sed -n "/^-- Table structure for table \`$tablename\`/,/^-- Table structure for table/p" $1 > $TARGET_DIR/$tablename.sql
TABLE_COUNT=$((TABLE_COUNT+1))
done;
elif [ $# = 2 ]; then
for tablename in $(grep -E "Table structure for table \`$2\`" $1| awk -F"\`" {'print $2'})
do
echo "Extracting $tablename..."
#Extract table specific dump to tablename.sql
sed -n "/^-- Table structure for table \`$tablename\`/,/^-- Table structure for table/p" $1 > $TARGET_DIR/$tablename.sql
TABLE_COUNT=$((TABLE_COUNT+1))
done;
elif [ $# = 3 ]; then
if [ $2 = "-S" ]; then
for tablename in $(grep -E "Table structure for table \`$3" $1| awk -F"\`" {'print $2'})
do
echo "Extracting $tablename..."
#Extract table specific dump to tablename.sql
sed -n "/^-- Table structure for table \`$tablename\`/,/^-- Table structure for table/p" $1 > $TARGET_DIR/$tablename.sql
TABLE_COUNT=$((TABLE_COUNT+1))
done;
else
echo "${txtbld}${txtred} Please provide proper parameters. ${txtrst}";
fi
fi
#Summary
echo "${txtbld}$TABLE_COUNT Table extracted from $DUMP_FILE at $TARGET_DIR${txtrst}"
Try the program UltraEdit: it opens files without buffering the entire content. You can use the evaluation version for 30 days, I believe.
Strangely enough, this is the only program (Windows/Linux) I know of which does not buffer an entire file. It has helped me on many occasions.
I would not go that long way; I would just use what I have at hand. I guess you know the table structure and need the data only, so I would use something like the following in cmd:
C:\tmp>findstr "^INSERT INTO your_table" < mydatabase.sql > filtered.sql
To check how the INSERT statements look in the file, you might run something like:
C:\tmp>findstr "INSERT INTO" < mydatabase.sql | more
Then exit with Ctrl+C.
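As for the tput errors in the script itself: they are cosmetic (those variables only colour the output), and tput normally comes from Cygwin's ncurses package. A sketch of a guard that lets the script degrade gracefully when tput is missing:
if command -v tput >/dev/null 2>&1; then
    txtbld=$(tput bold)      # Bold
    txtred=$(tput setaf 1)   # Red
    txtrst=$(tput sgr0)      # Text reset
else
    txtbld= txtred= txtrst=  # no tput: plain, uncoloured output
fi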

How do I name the output textfile to YYYYMMDD based on the system date?

How do I name the output textfile to YYYYMMDD based on the system date?
sqlcmd -S DataBBB -i c:\scripts\followup.sql
-o %DATE:~4,2%_%DATE:~7,2%_%DATE:~-4%.txt -s ; -W -u
Now the output text file is 01_31_2012.txt.
How can I change it to 2012_01_31.txt?
Tried it under Windows 7 Premium (German); it may depend on the OS and the local time format.
sqlcmd -S DataBBB -i c:\scripts\followup.sql
-o %date:~-4%_%date:~3,2%_%date:~0,2%.txt -s;
You have to edit this part for your system:
%date:~-4%_%date:~3,2%_%date:~0,2%.txt
The statement uses the command line's internal %date% variable with the substring extension :~start,length, so you can build the filename from different parts of the date variable.
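For comparison, under Cygwin or any POSIX shell, the same locale-independent name is a one-liner with date:
outfile=$(date +%Y_%m_%d).txt   # e.g. 2012_01_31.txt, regardless of locale settings
echo "$outfile"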