I need to print the lines which are newly added to a file.
My code looks as follows:
proc dd {} {
    global line_number
    set line_number 0
    set a [open "pkg.v" r]
    #global count
    while {[gets $a line] >= 0} {
        incr line_number
        global count
        set count [.fr.lst2 size]
        puts "enter $count"
        if {[eof $a]} {
            #.fr.lst2 insert end "$line"
            # set count [.fr.lst2 size]
            close $a
        } elseif {$count > 0} {
            .fr.lst2 delete 0 end
            if {$count+1} {
                .fr.lst2 insert end "$line"
                puts "i am $count"
            }
        } else {
            .fr.lst2 insert end "$line"
            puts "i am not"
        }
    }
    puts "$count"
}
Assuming we're talking about lines written to the end of a log file on any Unix system (Linux, OS X, etc.), this is trivially done with the help of tail:
# Make a pipeline to read from 'tail -f' (note: pipelines are made with
# [open |...], not [exec])
set mypipe [open |[list tail -f $theLogfile] r]
# Make the pipe non-blocking; usually a good idea for anything advanced
fconfigure $mypipe -blocking 0
# Handle data being available by calling a procedure which will read it.
# The procedure takes two arguments, and we use [list] to build the callback
# script itself (good practice in Tcl coding)
fileevent $mypipe readable [list processLine $mypipe .fr.lst2]

proc processLine {pipeline widget} {
    if {[gets $pipeline line] >= 0} {
        # This is probably too simplistic for you; adapt as necessary
        $widget insert end $line
    } elseif {[eof $pipeline]} {
        # Check for EOF *after* [gets] fails!
        close $pipeline
    }
}
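Note that fileevent callbacks only fire while the Tcl event loop is running. Tk applications run it automatically; in a plain tclsh script you have to enter it yourself, for example with vwait (a minimal sketch):

# Enter the event loop; 'forever' is just a variable that is never set,
# so this effectively services events indefinitely
vwait forever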
set filePointer [open "fileName" "r"]
set fileWritePointer [open "fileNameWrite" "w"]
set lines [split [read $filePointer] "\n"]
close $filePointer
set length [llength $lines]
for {set i 0} {$i < $length} {incr i} {
    set line [lindex $lines $i]
    if {[regexp "Matching1" $line]} {
        puts $fileWritePointer $line
    }
    if {[regexp "Matching" $line]} {
        puts $fileWritePointer $line
    }
}
close $fileWritePointer
I am reading all the lines of the file at once, splitting on the newline character, and then processing one line at a time inside the for loop.
After some syntax checks using regexp on each line, I dump only the selected lines into a new file using the syntax below.
puts $fileWritePointer $line
My file has around 2 million lines of code.
Many regexp matches like this are present in the script, roughly around 1.5.
Without knowing why the code is slow (or what exactly you're measuring against as a baseline), it's hard to be sure what will accelerate it. However, you can try switching to streaming processing:
set fin [open "fileName"]
set fout [open "fileNameWrite" "w"]
while {[gets $fin line] >= 0} {
    if {[regexp "Matching1" $line]} {
        puts $fout $line
    }
    if {[regexp "Matching" $line]} {
        puts $fout $line
    }
}
close $fout
close $fin
You should make sure that your regular expressions are constant values for the duration of the processing, to avoid recompiling them for every line (which would be very slow). Those constant values can be stored in variables, as long as the variables are used without anything being appended to them:
set RE1 "Matching1"
set RE2 "Matching"
# Note: these variables are NOT assigned to below! They are just used!
set fin [open "fileName"]
set fout [open "fileNameWrite" "w"]
while {[gets $fin line] >= 0} {
    # Added "--" to make sure that the REs are never interpreted as anything else
    if {[regexp -- $RE1 $line]} {
        puts $fout $line
    }
    if {[regexp -- $RE2 $line]} {
        puts $fout $line
    }
}
close $fout
close $fin
You might also get extra speed by choosing the right channel encodings, putting all this code in a procedure (so Tcl can byte-compile it), etc. As noted, it's hard to be sure what the best thing to try is without knowing why the code is actually slow, and that in part depends on the system on which it is being run.
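For instance, moving the loop into a procedure lets the byte-compiler work on the body; a minimal sketch (the proc name and parameters are illustrative):

proc filterFile {inName outName RE1 RE2} {
    set fin [open $inName]
    set fout [open $outName "w"]
    while {[gets $fin line] >= 0} {
        if {[regexp -- $RE1 $line]} {
            puts $fout $line
        }
        if {[regexp -- $RE2 $line]} {
            puts $fout $line
        }
    }
    close $fout
    close $fin
}
filterFile "fileName" "fileNameWrite" "Matching1" "Matching"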
Do you actually need regular expression matching? String matching is likely to be faster.
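For example, if the patterns really are fixed substrings, a plain [string first] search avoids the regexp engine entirely; a sketch under that assumption (note that any line containing "Matching1" also contains "Matching", so a single test covers both patterns from the question, writing each matching line once):

set fin [open "fileName"]
set fout [open "fileNameWrite" "w"]
while {[gets $fin line] >= 0} {
    # [string first] returns the index of the substring, or -1 if absent
    if {[string first "Matching" $line] >= 0} {
        puts $fout $line
    }
}
close $fout
close $fin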
Can more than one match be made against the same line, and if so, do you really need the line to be written once for each match? If not, you can speed things up by skipping the remaining match attempts once one has succeeded:
if {[regexp -- $RE1 $line]} {
    puts $fout $line
} elseif {[regexp -- $RE2 $line]} {
    puts $fout $line
} elseif { ... } {
or
if {
    [regexp -- $RE1 $line] ||
    [regexp -- $RE2 $line] ||
    ...
} then {
    puts $fout $line
}
or
# In a switch, a body of "-" means "fall through to the next body"
switch -regexp -- $line \
    $RE1 - \
    $RE2 - \
    ... - \
    default {
        puts $fout $line
    }
# Prints the string to a file
puts $chan "$timestamp - Running test: $test"
# Prints the string on the console
puts "$timestamp - Running test: $test"
Is there a way I can send the output of puts to the screen and to a log file at the same time? Currently I have both of the above lines, one after the other, in my script to achieve this.
Or is there some other solution in Tcl?
Use the following proc instead of puts:
proc multiputs {args} {
    if {[llength $args] == 0} {
        error "Usage: multiputs ?channel ...? string"
    } elseif {[llength $args] == 1} {
        set channels stdout
    } else {
        set channels [lrange $args 0 end-1]
    }
    set str [lindex $args end]
    foreach ch $channels {
        puts $ch $str
    }
}
Examples:
# print on stdout only
multiputs "1"
# print on stderr only
multiputs stderr "2"
set brieflog [open brief.log w]
set fulllog [open detailed.log w]
# print on stdout and in the log files
multiputs stdout $brieflog $fulllog "3"
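And when the logging is finished, close the log channels as usual:

close $brieflog
close $fulllog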
This isn't something I've used extensively, but it seems to work (Tcl 8.6+ only):
You need the tcl::transform::observe channel-transform package (from Tcllib):
package require tcl::transform::observe
Open a log file for writing and set buffering to none:
set f [open log.txt w]
chan configure $f -buffering none
Register stdout as a receiver:
set c [::tcl::transform::observe $f stdout {}]
Anything written to the channel $c will now go to both the log file and stdout.
puts $c foobar
Note that it would seem to make more sense to have the channel transformation on top of stdout, with the channel to the log file as receiver, but I haven't been able to make that work.
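If you later want to stop the logging, the transform can be popped off the channel again; a sketch, assuming nothing else was stacked on the channel in the meantime:

# Remove the observe transform, then close the log file
chan pop $c
close $f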
Documentation: chan, open, package, puts, set, tcl::transform::observe (package)
I have a program, written in Vimscript, that checks whether two files are the same. It makes a system call to diff to verify whether they differ.
I need something similar in Tcl, but without resorting to external commands or system calls. I don't need the differences or a comparison between the files, just a return of 1 if both files have the same content or 0 if the contents differ.
proc comp_file {file1 file2} {
    # optimization: check file size first
    set equal 0
    if {[file size $file1] == [file size $file2]} {
        set fh1 [open $file1 r]
        set fh2 [open $file2 r]
        set equal [string equal [read $fh1] [read $fh2]]
        close $fh1
        close $fh2
    }
    return $equal
}
if {[comp_file /tmp/foo /tmp/bar]} {
    puts "files are equal"
}
For a straight binary comparison, you can just work a chunk at a time. (4 kB per chunk is probably quite enough, though you can pick larger values; I/O overhead will dominate in any case.) The simplest way to express this is with a loop inside a try…finally, which guarantees both channels are closed even though the loop returns from mid-body (requires Tcl 8.6):
proc sameContent {file1 file2} {
    set f1 [open $file1 "rb"]
    set f2 [open $file2 "rb"]
    try {
        while 1 {
            if {[read $f1 4096] ne [read $f2 4096]} {
                return 0
            } elseif {[eof $f1]} {
                # The same if we got to EOF at the same time
                return [eof $f2]
            } elseif {[eof $f2]} {
                return 0
            }
        }
    } finally {
        close $f1
        close $f2
    }
}
Otherwise, to make code that works in older versions of Tcl, we can take advantage of the fact that we can test whether a variable has been set; this keeps the logic fairly simple, though the code is quite a lot less clear:
proc sameContent {file1 file2} {
    set f1 [open $file1]
    fconfigure $f1 -translation binary
    set f2 [open $file2]
    fconfigure $f2 -translation binary
    while {![info exists same]} {
        if {[read $f1 4096] ne [read $f2 4096]} {
            set same 0
        } elseif {[eof $f1]} {
            # The same if we got to EOF at the same time
            set same [eof $f2]
        } elseif {[eof $f2]} {
            set same 0
        }
    }
    close $f1
    close $f2
    return $same
}
Both are invoked in the same way:
if {[sameContent "./foo.txt" "some/dir/bar.txt"]} {
    puts "They're the same contents, byte-for-byte"
} else {
    puts "A difference was found"
}
I am calling a proc through fileevent; that proc returns a line of data. How do I receive this data?
I have written the following code to receive data from a pipe whenever data is available. I don't want to block by calling gets directly.
proc GetData1 {chan} {
    if {[gets $chan line] >= 0} {
        return $line
    }
}

proc ReadIO {chan {timeout 2000}} {
    set x 0
    after $timeout {set x 1}
    fconfigure $chan -blocking 0
    fileevent $chan readable [list GetData1 $chan]
    after cancel set x 3
    vwait x
    # Do something based on how the vwait finished...
    switch $x {
        1 { puts "Time out" }
        2 { puts "Got Data" }
        3 { puts "App Cancel" }
        default { puts "Time out2 x=$x" }
    }
    # How to return data from here which is returned from GetData1?
}
ReadIO $io 5000
# ** How to get data here which is returned from GetData1 ? **
There are probably as many ways of doing this as there are Tcl programmers. Essentially, you shouldn't use return to pass the data back from your fileevent handler, because the handler isn't called in a way that lets you get at its return value.
Here are a few possible approaches.
Disclaimer: none of these is tested, and I'm prone to typing mistakes, so treat them with a little care!
1) Get your fileevent handler to write to a global variable:
proc GetData1 {chan} {
    if {[gets $chan line] >= 0} {
        append ::globalLine $line \n
    }
}
.
.
.
ReadIO $io 5000
# ** The input line is in globalLine in the global namespace **
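If you also want the vwait in ReadIO to report "Got Data", the handler can set the variable being waited on as well; a sketch extending the handler above (vwait treats an unqualified name as a global, so setting ::x wakes it):

proc GetData1 {chan} {
    if {[gets $chan line] >= 0} {
        append ::globalLine $line \n
        set ::x 2   ;# wakes the vwait with the "Got Data" code
    }
}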
2) Pass the name of a global variable to your fileevent handler, and save the data there:
proc GetData2 {chan varByName} {
    if {[gets $chan line] >= 0} {
        upvar #0 $varByName var
        append var $line \n
    }
}
fileevent $chan readable [list GetData2 $chan inputFromChan]
.
.
.
ReadIO $chan 5000
# ** The input line is in ::inputFromChan **
A good choice for the variable here might be an array element indexed by $chan, e.g. fileevent $chan readable [list GetData2 $chan input($chan)]
3) Define some kind of class to look after your channels, which stashes the data away internally and has a method to return it:
oo::class create fileeventHandler {
    variable m_buffer m_chan
    constructor {chan} {
        set m_chan $chan
        fileevent $m_chan readable [list [self object] handle]
        set m_buffer {}
    }
    method data {} {
        return $m_buffer
    }
    method handle {} {
        if {[gets $m_chan line] >= 0} {
            append m_buffer $line \n
        }
    }
}
.
.
.
set handlers($chan) [fileeventHandler new $chan]; # Save object address for later use
ReadIO $io 5000
# Access the input via [$handlers($chan) data]
How can I split a huge file into n smaller files using Tcl? The file name to split and the number of files to create are given on the command line. Here is what I have so far:
proc splitter {file no} {
    set lnum 0
    set file_open [open $file r]
    while {[gets $file_open line] >= 0} {
        incr lnum
    }
    puts "$lnum"
    set num [expr {$lnum / $no}]
    close $file_open
}
Here is one way to split text files, which has the advantage of not holding much in memory at once. (You can also split binary files, but then you need to use read instead of gets, and also to consider whether there are record boundaries in the data; text is mostly simpler.)
#!/usr/bin/env tclsh8.5
proc splitter {filename fileCount} {
    set targetFileSize [expr {[file size $filename] / $fileCount}]
    set n 0
    set fin [open $filename]
    while {[gets $fin line] >= 0} {
        if {![info exists fout]} {
            set fout [open $filename.split_[incr n] w]
        }
        puts $fout $line
        if {[tell $fout] > $targetFileSize} {
            close $fout
            unset fout
        }
    }
    if {[info exists fout]} {
        close $fout
    }
    close $fin
}
splitter {*}$argv; # Connect to the outside command line
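If the code above is saved as (say) splitter.tcl, it can then be run from the shell like this (the file name and count are illustrative):

tclsh8.5 splitter.tcl huge.txt 10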
Use the global argv variable (a list) to access the command-line parameters.
After you read the file to count the lines, instead of closing and reopening the file handle, you can seek back to the start of the file (sketched below).
If you're on *nix, have you considered using exec to call out to split? (Also sketched below.)
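The last two ideas as minimal sketches, reusing the variable names from your code (the second assumes a POSIX split on the PATH):

# Rewind to the beginning of the file instead of closing and reopening it
seek $file_open 0 start

# Let 'split' do the work; pieces are named $file.split_aa, $file.split_ab, ...
set linesPerFile [expr {($lnum + $no - 1) / $no}]
exec split -l $linesPerFile $file "$file.split_"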