Reading the wrong number of bytes from a binary file - tcl

I have the following code:
set myfile "the path to my file"
set fsize [file size $myfile]
set fp [open $myfile r]
fconfigure $fp -translation binary
set data [read $fp $fsize]
close $fp
puts $fsize
puts [string bytelength $data]
And it shows that the bytes read are different from the bytes requested. The bytes requested match what the filesystem shows; the actual bytes read are 22% more (requested 29300, got 35832). I tested this on Windows, with Tcl 8.6.

Use string length. Don't use string bytelength. It gives the “wrong” answers, or rather it answers a question you probably don't want to ask.
More Depth
The string bytelength command returns the length in bytes of the data in Tcl's internal almost-UTF-8 encoding. Unless you're working with Tcl's C API directly, you have no sensible use for that value, and even C code can easily get it without this command. For ASCII text the length and the byte-length are the same, but for binary data, or for text containing NULs or characters above U+007F (the Unicode character equivalent to ASCII DEL), the values will differ. By contrast, the string length command knows how to handle binary data correctly, and will report the number of bytes in the byte-string that you read in. We plan to deprecate the string bytelength command, as it turns out to be a bug in someone's code almost every time it is used.
(I'm guessing that your input data actually has 6532 bytes outside the range 1–127 in it; those bytes use a two-byte representation in almost-UTF-8. Fortunately, Tcl doesn't actually convert into that format until it needs to, and instead uses a compact array of bytes in this case; you're forcing the conversion by asking for the string bytelength.)
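A minimal sketch of the difference (assuming Tcl 8.6, where string bytelength still exists):

```tcl
# Four raw bytes; two of them (0xC3, 0xFF) take two bytes each in Tcl's
# internal almost-UTF-8 encoding, so bytelength over-counts them.
set data [binary format c* {0x41 0x42 0xC3 0xFF}]
puts [string length $data]      ;# 4 -- the real number of bytes
puts [string bytelength $data]  ;# 6 -- size of the internal representation
```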
Background Information
The question of “how much memory is actually being used by Tcl to read this data” is quite hard to answer, because Tcl will internally mutate data to hold it in the form that is most efficient for the operations you are applying to it. Because Tcl's internal types are all precisely transparent (i.e., conversions to and from them don't lose information), we deliberately don't talk about them much except from an optimisation perspective; as a programmer, you're supposed to pretend that Tcl has no types other than strings of Unicode characters.
You can peel the veil back a bit with the tcl::unsupported::representation command (introduced in 8.6). Don't use the types it reports for decisions in your code, as that is really not something guaranteed by the language, but it does let you see a lot more about what is really going on under the covers. Just remember, the values that you see are not the same as the values that Tcl's implementation thinks about; reasoning in terms of the values you see (without that magic command) will keep you writing code that is correct.
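For example (a sketch; 8.6+ only, and the exact wording of representation's output is version-specific and not guaranteed):

```tcl
set data [binary format c* {0 1 2 255}]       ;# four raw bytes
puts [tcl::unsupported::representation $data]
;# typically reports a value of type "bytearray"
puts [string length $data]                    ;# 4 -- the true byte count
```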

Related

How to control reading of bits in XOR data frames?

I'm trying to learn to read the XOR data frames used in web sockets in Tcl.
I was reading the HTTP requests using:
chan configure $sock -buffering line -blocking 0 -encoding iso8859-1 -translation crlf
chan event $sock readable [list ReadLine $sock]
[catch {chan gets $sock line} len]
Now, after the socket is opened, I use chan configure $sock -translation binary to read the component bits of the XOR frame, but I'm confused about -buffering and -buffersize.
I also changed the chan event to not get a full line but instead chan read numChars; however, the readable event seems to fire for every character, or again after each character is read.
Should the various segments of bits be read directly from the channel or should larger pieces be read from the channel into variables and then the bits separated from those pieces?
What is the proper channel configuration in order to read the bits in a controlled manner?
Also, it reads here https://www.tcl.tk/man/tcl/TclCmd/chan.html#M35 that in non-blocking mode chan read may not read all the requested characters. What is to be done? Count them and read again until I get them all?
Thank you.
The -buffering and -buffersize options are used to manage the output side of the channel, i.e., when you write data to the socket with puts (or chan puts; it's an alternate name for the same thing). They're not used for input.
When you have the channel in binary mode, the characters you read and write correspond one-to-one with the bytes. You probably shouldn't use gets (chan gets) on binary data; read (chan read) is more likely to be appropriate. (For writing, the -nonewline option to puts is virtually mandatory.)
When you read a non-blocking channel with a number of characters/bytes requested, you can get up to that amount of data. If the request can be satisfied from what is in the read buffer, that is used and no request to the underlying file descriptor is made. If the request can be partially satisfied with buffered data, that is used first, and only then is a request made for more data; if that request produces more data than needed, the excess is stored in the buffer (you can see how much with chan pending, but that's not normally important for binary channels). However, if that one non-blocking request does not deliver enough data to give you what you asked for, read returns anyway: you have a short read. A short read doesn't necessarily mean that you're at the end of the channel; use chan eof and chan blocked to find out more (especially if you get the special case of a zero-length read). Being blocked might also not mean that you're at the end of a message within a higher-level protocol; more data may be coming, but it hasn't reached the OS yet (which is why you need a framing protocol on top of TCP; websockets are one such framing protocol).
Counting the data is easy: string length.
tl;dr: In non-blocking mode, the maximum amount that read of a binary channel can return is whatever is currently in the input buffers plus whatever is obtained from one non-blocking read of the file descriptor. In blocking mode, read will wait until the requested amount of data is available or definitely not available (end-of-file), performing multiple reads of the file descriptor if necessary.
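One common way to cope with short reads is to accumulate bytes in a buffer and only act once a complete frame has arrived. The following is only a sketch under assumed names (buf and ProcessFrame are hypothetical); the handler is re-armed automatically because it stays registered with chan event:

```tcl
# Registered as: chan event $sock readable [list ReadFrame $sock $need]
proc ReadFrame {sock need} {
    global buf
    # Ask only for the bytes still missing; a short read just returns less.
    append buf [read $sock [expr {$need - [string length $buf]}]]
    if {[string length $buf] < $need} {
        if {[eof $sock]} { close $sock }
        return                      ;# short read: wait for the next event
    }
    set frame $buf
    set buf ""
    ProcessFrame $frame             ;# hypothetical handler for a full frame
}
```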

Is there such a thing as "non-binary" data?

When you get down to the bare metal, all data is stored in bits, which are binary (1 or 0). However, I sometimes see terms like "binary file", which implies the existence of files that aren't binary. The same goes for things like base64 encoding, which Wikipedia describes as a "binary-to-text encoding scheme". But if I'm not mistaken, text is also stored in a binary format on the hardware, so isn't base64 encoding ultimately converting binary to binary? Is there some other definition of "binary" I am unaware of?
You are right that deep down, everything is a binary file. However, at its base, a binary file is intended to be read as an array of bytes, where each byte has a value between 0 and 255. A text file is intended to be read as an array of characters.
When, in Python, I open a file with open("myfile", "r"), I am telling it that I expect the underlying file to contain characters, and that Python should do the necessary processing to give me characters. It may convert multiple bytes into a single character. It may canonicalize all possible newline combinations into just a single newline character. Some characters have multiple byte representations, but all will give me the same character.
When I open a file with open("myfile", "rb"), I literally want the file read byte by byte, with no interpretation of what it is seeing.
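The same distinction exists in Tcl, the language used elsewhere on this page. A small self-contained sketch (the file name is made up):

```tcl
# Write a few non-ASCII characters so the text and byte counts differ.
set f [open demo.txt w]
fconfigure $f -encoding utf-8
puts -nonewline $f "h\u00e9llo"            ;# héllo
close $f

set tf [open demo.txt r]
fconfigure $tf -encoding utf-8             ;# decode bytes into characters
set text [read $tf]
close $tf

set bf [open demo.txt rb]                  ;# raw bytes, no decoding at all
set bytes [read $bf]
close $bf

puts [string length $text]    ;# 5 characters
puts [string length $bytes]   ;# 6 bytes: é takes two bytes in UTF-8
```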

Convert UTF-8 to ANSI in tcl

proc pub:write { nick host handle channel arg } {
set fid [open /var/www/test.txt w]
puts $fid "█████████████████████████████████████████████████████████████████"
puts $fid "██"
close $fid
}
When I open it in a web browser, the result is:
█████████████████████████████████████████████████████████████████
but it should be:
█████████████████████████████████████████████████████████████████
Welcome to the yawning pit of complexity that is string encodings. You've got to get two things right to make what you're trying to do work. READ EVERYTHING BELOW BEFORE MAKING CHANGES as it all interacts horribly.
The character needs to be written to the file using the right encoding. This is done by configuring the encoding on the channel, which defaults to a system-specific value that is usually but not always right.
I'm taking a very wild guess that an encoding like “cp437 DOSLatinUS” is the right one.
fconfigure $fid -encoding cp437
However, Tcl's usually pretty good at picking the right thing to do by default.
Also, there's a huge number of different encodings. Some are very similar to each other and picking which one to use is a bit of a black art. The usual best bet is to stick with utf8 when possible, and otherwise to use the correct encoding (defined by protocol or by the system) and take a vast amount of care. This is really complicated!
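You can explore what is available, and how a character round-trips, with the encoding command (a sketch; the exact list of names varies by build):

```tcl
puts [lsort [encoding names]]            ;# every encoding this Tcl knows about
puts [encoding convertto utf-8 \u2588]   ;# the three raw bytes of █ in UTF-8
puts [encoding convertto cp437 \u2588]   ;# a single byte (0xDB) in cp437
```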
You've also got to get the character into Tcl correctly in the first place. This means that the character has to be encoded in the source file, and Tcl has to read that file with the right encoding. Since the file is being written by another program (your editor usually) there's all sorts of potential for trouble. If you can discover what encoding is being used there (usually a matter of complete guesswork) then you can use the -encoding option to tclsh or source to allow Tcl to figure out what is going on.
Alternatively, stick with the ASCII subset in your source as that's pretty reliably handled the same whatever encoding is in use. You do this by converting each █ to the Tcl escape sequence \u2588. At least like that, you can be sure that you're only hunting down problems with the output encoding.
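Putting those two pieces together, a minimal sketch (the file path here is shortened, and utf-8 is an assumption; substitute cp437 or whatever encoding your web server actually declares):

```tcl
set fid [open test.txt w]
fconfigure $fid -encoding utf-8       ;# must match what the browser expects
puts $fid [string repeat \u2588 65]   ;# \u2588 is the █ character
close $fid
```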
When debugging this thing, only change one thing at a time before retesting as there's a lot of bits that can go wrong and poison what is going on in ways that produce weird results downstream. I advise trying the escape sequence first as that at least means that you know that the input data is correct; once you know that you're not pushing garbage in, you can try hunting down whether you're actually getting problems with getting garbage out and what to do about it.
Finally, be aware that mixing in networking in this makes the problems about ten times harder…

In relative terms, how fast should TCL on Windows 10 be?

I have the latest TCL build from Active State installed on a desktop and laptop both running Windows 10. I'm new to TCL and a novice developer and my reason for learning TCL is to enhance my value on the F5 platform. I figured a good first step would be to stop the occasional work I do in VBScript and port that to TCL. Learning the language itself is coming along alright, but I'm worried my project isn't viable due to performance. My VBScripts absolutely destroy my TCL scripts in performance. I didn't expect that outcome as my understanding was TCL was so "fast" and that's why it was chosen by F5 for iRules etc.
So the question is, am I doing something wrong? Is the port for Windows just not quite there? Perhaps I misunderstood the way in which TCL is fast and it's not fast for file parsing applications?
My test application is a firewall log parser. Take a log with 6 million hits and find the unique src/dst/port/policy entries and count them; split up into accept and deny. Opening the file and reading the lines is fine, TCL processes 18k lines/second while VBScript does 11k. As soon as I do anything with the data, the tide turns. I need to break the four pieces of data noted above from the line read and put them in an array. I've "split" the line, done a for-next to read and match each part of the line, that's the slowest. I've done a regexp with subvariables that extracts all four elements in a single line, and that's much faster, but it's twice as slow as doing four regexps with a single variable and then cleaning the excess data from the match away with trims. But even this method is four times slower than VBScript with ad-hoc splits/for-next matching and trims. On my desktop, I get 7k lines/second with TCL and 25k with VBScript.
Then there's the array, I assume because my 3-dimensional array isn't a real array that searching through 3x as many lines is slowing it down. I may try to break up the array so it's looking through a third of the data currently. But the truth is, by the time the script gets to the point where there's a couple hundred entries in the array, it's dropped from processing 7k lines/second to less than 2k. My VBscript drops from about 25k lines to 22k lines. And so I don't see much hope.
I guess what I'm looking for in an answer, for those with TCL experience and general programming experience, is TCL natively slower than VB and other scripts for what I'm doing? Is it the port for Windows that's slowing it down? What kind of applications is TCL "fast" at or good at? If I need to try a different kind of project than reading and manipulating data from files I'm open to that.
edited to add code examples as requested:
while { [gets $infile line] >= 0 } {
    # some other commands I'm cutting out for the sake of space, they don't contribute to slowness
    regexp {srcip=(.*)srcport.*dstip=(.*)dstport=(.*)dstint.*policyid=(.*)dstcount} $line -> srcip dstip dstport policyid
    # the above was unexpectedly slow; the fastest way to extract data I've found so far:
    regexp {srcip=(.*)srcport} $line srcip
    set srcip [string trim $srcip "cdiloprsty="]
    regexp {dstip=(.*)dstport} $line dstip
    set dstip [string trim $dstip "cdiloprsty="]
    regexp {dstport=(.*)dstint} $line dstport
    set dstport [string trim $dstport "cdiloprsty="]
    regexp {policyid=(.*)dstcount} $line a policyid
    set policyid [string trim $policyid "cdiloprsty="]
Here is the array search that really bogs down after a while:
set start [array startsearch uList]
while {[array anymore uList $start]} {
    incr f
    # "key" returns the NAME of the association and uList(key) the VALUE associated with the name
    set key [array nextelement uList $start]
    if {$uCheck == $uList($key)} {
        ## puts "$key CONDITION MET"
        set flag true
        adduList $uCheck $key $flag2
        set flag2 false
        break
    }
}
Your question is still a bit broad in scope.
F5 has published some commentary on why they chose Tcl and how it is fast for their specific use cases. That is actually quite different from a log-parsing use case: they do all the heavy lifting in C code (via custom commands) and use Tcl mostly as a fast dispatcher and for a bit of flow control, and Tcl is really good at that compared to various other languages.
For things like log parsing, Tcl is often beaten in performance by languages like Python and Perl in simple benchmarks. There are a variety of reasons for that, here are some of them:
Tcl uses a different regexp engine (an automaton-based, DFA-style one), which is more robust for nasty patterns but slower for simple patterns.
Tcl has a more abstract I/O layer than, for example, Python, and usually converts the input to Unicode, which has some overhead unless you disable it (via fconfigure).
Tcl has proper multithreading instead of a global lock, which costs around 10–20% performance for single-threaded use cases.
So how to get your code fast(er)?
Try a more specific regular expression, those greedy .* patterns are bad for performance.
Try to use string commands instead of regexp; a few string first calls followed by string range could be faster than a regexp for these simple patterns.
Use a different structure for that array, you probably want either a dict or some form of nested list.
Put your code inside a proc, do not put it all in a toplevel script and use local variables instead of globals to make the bytecode faster.
If you want, use one thread for reading lines from file and multiple threads for extracting data, like a typical producer-consumer pattern.
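A sketch combining several of these suggestions (the field layout is assumed from the question's regexps, and names like countLine are made up): extract the fields with string first/string range inside a proc, and count occurrences with a flat dict key instead of scanning an array:

```tcl
proc countLine {line countsVar} {
    upvar 1 $countsVar counts
    foreach field {srcip dstip dstport policyid} {
        set start [string first "$field=" $line]
        if {$start < 0} return
        incr start [string length "$field="]
        set end [string first " " $line $start]   ;# assume space-delimited fields
        if {$end < 0} { set end [string length $line] }
        set $field [string range $line $start [expr {$end - 1}]]
    }
    # One hash-table update per line: O(1) instead of a linear array scan.
    dict incr counts "$srcip,$dstip,$dstport,$policyid"
}
```

Because a dict lookup is a hash-table access, counting a duplicate line stays fast no matter how many unique entries have been seen, which is where the array-search loop above degrades.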

Excel does not display currency symbol(for example ¥) generated in my tcl code

I actually am generating an MS Excel file with the currencies and if you see the file I generated (tinyurl.com/currencytestxls), opening it in the text editor shows the correct symbol but somehow, MS Excel does not display the symbol. I am guessing there is some issue with the encoding. Any thoughts?
Here is my tcl code to generate the symbol:
set yen_val [format %c 165]
Firstly, this does produce a Yen symbol (I put the format string in double quotes here just for clarity of formatting):
format "%c" 165
You can then pass it around just fine. The problem is likely to come when you try to output it; when Tcl writes a string to the outside world (with the possible exception of the terminal on Windows, as that's tricky) it encodes that string into a definite byte sequence. The default encoding is the one reported by:
encoding system
But you can see what it is and change it for any channel (if you pass in the new name):
fconfigure $theChannel -encoding $theEncoding
For example, on my system (which uses UTF-8, which can handle any character):
% fconfigure stdout -encoding
utf-8
% puts [format %c 165]
¥
If you use an encoding that cannot represent a particular character, the replacement character for that encoding is used instead. For many encodings, that's a “?”. When you are sending data to another program (including to a web server or to a browser over the internet) it is vital that both sides agree on what the encoding of the data is. Sometimes this agreement is by convention (e.g., the system encoding), sometimes it is defined by the protocol (HTTP headers have this clearly defined), and sometimes it is done by explicitly transferred metadata within the content itself.
If you're writing a CSV file to be ingested by Excel, use either the “unicode” or the “utf-8” encoding and make sure you put the byte-order mark in correctly. Tcl doesn't write BOMs automatically (because it's the wrong thing to do in some cases). To write a BOM, do this as the first thing when you start writing the file:
puts -nonewline $channel "\ufeff"
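A minimal sketch of a CSV that Excel should open with the Yen sign intact (the file name is made up):

```tcl
set ch [open prices.csv w]
fconfigure $ch -encoding utf-8
puts -nonewline $ch "\ufeff"           ;# BOM: write it before anything else
puts $ch "item,price"
puts $ch "widget,[format %c 165]100"   ;# ¥100
close $ch
```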