I'm very new to Tcl and I hope to become proficient in this language, so I thought I should ask whether there is a reason why some example code passes stdout explicitly and some just uses puts on its own. The book doesn't seem to explain this. Was the IDE updated to automatically assume stdout?
The puts command writes to stdout if you don't provide it with a channel name (and has worked this way since… well, since forever). If you want, you can put it in or leave it out; it doesn't matter, but might make things clearer if done one way or the other. Let it be your choice. The only mandatory argument to puts is the string to write.
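For instance, these two calls are equivalent; both write the line to standard output:

puts "Hello, world"
puts stdout "Hello, world"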
In performance terms, it's slightly faster to omit the channel name, but you should disregard that: the additional cost of the lookup is absolutely measly by comparison with the cost of doing I/O at all. Clarity of expression is the only valid reason for picking one over the other.
There is a case where it matters though. If you want to write code that writes to standard output by default, but where you can override it to write to a file (or a socket or a pipe or …) then it's much easier to have the channel name stored in a variable (one that is set to stdout by default, of course). That way, all your code is the same except for one bit of cleanup: it's not a good idea to close stdout in the normal course of things. That's easy to avoid with code like this:
# Also don't want to close stdin or stderr...
if {![string match std* $channel]} {
    close $channel
}
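Put together, a minimal sketch of that pattern (the logToFile flag and output.txt filename are placeholders for however you choose the destination):

set logToFile 0          ;# placeholder flag: set to 1 to write to a file instead
set channel stdout       ;# write to standard output by default
if {$logToFile} {
    set channel [open output.txt w]
}

puts $channel "some output"

# Also don't want to close stdin or stderr...
if {![string match std* $channel]} {
    close $channel
}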
I know I am being really picky here, but I'd like to throw this out in case I am off in my interpretation of the Tcl man page. Actually, I wish I were wrong here, as you'll see from the story below.
So for every new TCL developer, we recommend reading the famous "11 rules" (now it is 12 rules).
Yesterday I was asked this question: why does the following script fail?
# puts "hello
world!"
Of course it fails, I said: the first line is taken as a comment, which leaves world!" as the command.
But, the newbie said, the manpage indicates that the script is parsed in a certain order:
As rule #2, Evaluation, states, the command is first parsed into words.
As rule #4, Double quotes, states, a newline is taken as-is when parsing a double-quoted word. This makes hello and world! into one word, with a newline in between.
Rule #10, Comments, does state that everything up to the next newline is ignored, but after the above processing, that newline should be the second newline, the one after world!.
I see he had a point.
It would make more sense to move the comment section much earlier in the man page, maybe to the second section. That order change would indicate that comment recognition precedes the word-tokenizing process.
What do you think?
Again, I have no intention of asking for the manpage to be changed; I just want to make sure I am not missing anything in interpreting the bible.
[UPDATE]
To the people suggesting this question be closed as not a technical question: it is the same as if my colleague came here asking why that script fails even though his understanding of the Tcl man page indicates it is a good script.
Again, I am not asking to change the man page.
Let me re-phrase my question - when you are asked this same question, what flaw do you see in his reasoning?
[UPDATE2]
Thanks Donal. I think this is what I learned: the Tcl parser goes one character at a time; there is no look-ahead.
This is another example:
puts [#haha]
Such a script fails in tclsh for the same reason: the Tcl parser does not break the script down first and then parse only the string embedded inside the matching brackets; instead it recognizes # as the start of a comment and ignores everything after it.
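For instance, a small sketch of where # does and does not start a comment:

puts "a # inside a quoted string is just a character"   ;# this one is a comment
set x 5; # after the semicolon a new command starts, so this is a comment too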
The rules in the Tcl(n) manual page describe pretty precisely the parser that Tcl uses. Requests to change it substantively are usually denied as they tend to have far-reaching consequences and interact with each other trickily. Verifying that a reordering of the rules is not substantive is a non-trivial task, as they correspond to quite a bit of code (our parser and a chunk of our bytecode compiler).
Adding non-normative sections (e.g., EXAMPLES) is easier.
Update based on your updated question
The problem with the reasoning is that the rules are a whole, not really a layered set of parts. They do interact with each other. (The one that usually trips people up is the interaction between the brace rule and the comment rule when inside a braced string such as a procedure body.) Comments really are true comments, and extend up to the end of the line (allowing for backslash-newline sequences) but not beyond, but they only start at places where commands start, not at other places with a # character, and that's the genuinely tricky bit.
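For instance, a deliberately broken sketch of that brace/comment interaction:

# This fails on purpose: the } inside the comment below is counted while
# Tcl scans for the end of the braced procedure body, so the body ends
# early and proc receives extra arguments.
proc broken {} {
    # this comment contains an unmatched } character
    puts "hello"
}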
Unfortunately, the way that the Tcl parser works is a bit different to the way that programmers think, but most of the time it's pretty good at pretending to work in a “reasonable fashion”. The tricky edge cases don't actually come up too often other than when dealing with the brace-comment interaction mentioned above. The other cases which I hit tend to be either with a switch (resolvable by just putting the comment in the arm) or with long literal lists of things where I want to comment some sections of the list; in that latter case, I actually post-process the string before using it as a list.
set exampleList {
    a b c
    d e f
    # Not really a comment but I want to use it like one!
    g h i
    j k l
}
# Convert “comment” lines to empty lines
regsub -all -line "^\\s*#.*$" $exampleList "" exampleList
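Likewise, a sketch of the switch case mentioned above: keep the comment inside an arm's body rather than between the pattern/body pairs, where it would be mistaken for a pattern:

set value alpha
switch -- $value {
    alpha {
        # Comments are fine here, inside the arm's script
        puts "got alpha"
    }
    default {
        puts "got something else"
    }
}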
The general advantage of Tcl's rules is that it is actually pretty easy to embed other languages within Tcl, precisely because Tcl only treats # (and other characters) as special in well-defined contexts. As long as you can have the embedded language be one that uses balanced braces — and that's almost all of them in practice — then embedding it is utterly trivial. The other cases have to use backslashes and/or double quotes and are pretty ugly, but are also a minuscule fraction of all the embedding cases.
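As a sketch of what that embedding looks like (run_sql is a hypothetical command defined here just for illustration):

proc run_sql {script} {
    # A real implementation would hand the text to a database; this stub
    # just shows that Tcl passed the braced block through verbatim.
    puts "would execute: $script"
}
run_sql {
    SELECT name, age FROM users WHERE age > 30;
}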
Your colleague's problem is that he's looking at the whole script in one go, whereas the Tcl parser handles one character at a time and doesn't do meaningful amounts of lookahead. It's just some dumb code.
I have created an expect script, and I realize that the expect -exact lines are constructed to match lines containing lots of unnecessary output. For example, when I execute a PostgreSQL restore command, all the psql output gets included in the expect -exact line.
What other syntax can be used to trim the expect -exact lines? Can they be removed in some cases?
Tools that generate scripts by watching activity in a session are definitely prone to thinking far too much is significant. They simply do not understand what matters. Typically, the way to get rid of the unnecessarily-expected pieces is to just remove them with a text editor. The places to watch out for are where you're coming up to a send (or exp_send; there's a few variations); it's a good idea to always have an expect of something before a send, but what that should be needs some smarts on your part.
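For example, a generated script fragment might look like the first form below, and the hand-trimmed replacement like the second; the pg_restore message and the mydb=# prompt here are hypothetical placeholders:

# As generated: matches a long, fragile chunk of output verbatim
expect -exact "pg_restore: creating TABLE public.users\r\nmydb=# "
send "\\dt\r"

# After trimming by hand: anchor only on the short, stable prompt
expect "mydb=# "
send "\\dt\r"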
Later on, you may want to write scripts that can handle multiple conditions under which to take action by expecting several things at once. That's when you write things like this:
expect {
    "abc" {
        send "foo\r"
    }
    "def" {
        send "bar\r"
    }
}
There's a lot more complexity to automating access to an application than there appears to be at first because applications often are more complex than they appear to be at first.
I'm a big fan of abbrev-mode and I'd like something a bit similar: you start typing, and as soon as you enter some punctuation (or even just a space) it invokes a function (only if I type the space after a special abbreviation, of course, just like abbrev-mode does).
I definitely do NOT want to execute some function every single time I hit space...
So instead of expanding the abbreviation using abbrev-mode, it would run a function of my choice.
Of course it needs to be compatible with abbrev-mode, which I use all the time.
How can I get this behavior?
One approach could be to use pre-abbrev-expand-hook. I don't use abbrev mode myself, but it rather sounds as if you could re-use the abbrev mode machinery this way, and simply define some 'abbreviations' which expand to themselves (or to nothing?), and then you catch them in that hook and take whatever action you wish to.
The expand library is apparently related, and that provides expand-expand-hook, which may be another alternative?
edit: Whoops; pre-abbrev-expand-hook is obsolete since 23.1
abbrev-expand-functions is the correct variable to use:
Wrapper hook around `expand-abbrev'.
The functions on this special hook are called with one argument:
a function that performs the abbrev expansion. It should return
the abbrev symbol if expansion took place.
See M-x find-function RET expand-abbrev RET for the code, and you'll also want to read C-h f with-wrapper-hook RET to understand how this hook is used.
EDIT:
Your revised question adds some key details that my answer didn't address. phils has provided one way to approach this issue. Another would be to use yasnippet. You can include arbitrary lisp code in your snippet templates, so you could do something like this:
# -*- mode: snippet -*-
# name: foobars
# key: fbf
# binding: direct-keybinding
# --
`(foo-bar-for-the-win)`
You'd need to ensure your function didn't return anything, or it would be inserted in the buffer. I don't use abbrev-mode, so I don't know if this would introduce conflicts. yas/snippet takes a bit of experimenting to get it running, but it's pretty handy once you get it set up.
Original answer:
You can bind space to any function you like. You could bind all of the punctuation keys to the same function, or to different functions.
(define-key your-mode-map " " 'your-choice-function)
You probably want to do this within a custom mode map, so you can return to normal behaviour when you switch modes. Globally setting space to anything but self-insert would be unhelpful.
Every abbrev is composed of several elements. Among the main elements are the name (e.g. "fbf"), the expansion (any string you like), and the hook (a function that gets called). In your case it sounds like you want the expansion to be the empty string and simply specify your foo-bar-for-the-win as the hook.
I $P(GIH,,24)= S $P(GIH,,24)="C" S
What is the meaning of two S's in above MUMPS statement?
Let me start out by saying that the original statement is NOT valid Standard MUMPS, InterSystems Caché, or GT.M code. Even broadly guessing what was originally meant, the final S on the line isn't something you would do in MUMPS. A single S could be a SET command, but you still don't have any arguments telling which variable should be assigned, or what value should be assigned to it.
The rest of my reply is trying to figure out what it could have meant.
Your question seems to have been broken by some software, either on Stack Overflow or in the cut-and-paste process used to put it here:
I saw:
I $P(GIH,,24)= S $P(GIH,,24)="C" S
What is the meaning of two S's in above MUMPS statement?
It is hard to figure out what you meant, since it would require hypothesizing where quotes might be and which ones could have been deleted by the transmission of the question.
First of all, let's do something we can guess is reasonable. $P is usually an abbreviation for the built-in (intrinsic) function $PIECE, an I standing alone is probably an IF command,
and an S standing alone is probably a SET command. This runs into a problem with your example, because the format of a line of MUMPS code is COMMAND COMMAND-ARGUMENT.
Aside note: I also just tried to put the text COMMAND-ARGUMENT in "angle brackets", i.e. with a less-than character at the beginning of the word and a greater-than character at the end. The text COMMAND-ARGUMENT just disappeared, which means that Stack Overflow sees it as HTML markup. I notice there is a Code marker at the top of this edit window, which may or may not help.
If we do the expansions to the code above, we get:
IF $PIECE(GIH,,24)= SET $PIECE(GIH,,24)="C" SET
When we expand the final S, it looks like a SET command, but without any set-argument.
Note: if this were on a Caché system, we might have an example of the extra spaces allowed by Caché, which are not allowed in Standard MUMPS, i.e. the S may have been the right-hand side of the equality operator in the IF command. This would only make sense if Caché also allowed the argument of the SET command to appear in code without an actual SET command.
i.e.:
IF $PIECE(GIH,,24)=S $PIECE(GIH,,24)="C" SET
We still would have to deal with the two commas in a row in the $PIECE intrinsic function. Currently, using two commas in a row to indicate a missing argument is only allowed in programmer-written code, not when using built-in functions. So this might be a place where we can guess what you meant, or originally pasted in.
If we put in double quotes, we run into the problem that the $PIECE function (which separates a string based on a delimiter) would have a quoted string of zero length as its second argument, which is just as erroneous as having an empty argument.
So if we hypothesize a quoted string that has angle brackets, we would get something this for your original line:
IF $PIECE(GIH,"<something>",24)="<something>" SET $PIECE(GIH,"<something>",24)="C" SET
Note: I just saw that the Code marker allows the use of grave accents to keep a line from being taken as HTML, which is good, since the grave accent is not a character used in MUMPS coding.
As has been mentioned on another reply, the SET-$PIECE-ARGUMENT form is used to change the data stored in a database at a particular delimited substring location.
So this code might be fine for guessing, but it has gone far afield of what you may or may not have done. So I'm stopping now until we get feedback that this is even close to what you wanted. As I said at the first, this is still not quite valid code.
This is pretty bizarre, but what I think is going on is:
I $P(GIH,<null>,24)=<null>
Calling $PIECE with the second argument null will replace the entire string with the value you're assigning, which, in this case, is also null. It looks like a convoluted way of clearing the value of GIH and permitting control to flow into the following SET statement. I seriously doubt that $PIECE sets the $T flag, though, which means that calling this as the condition for the IF operator probably isn't working the way you want it to.
S $P(GIH,,24)="C"
The next statement looks a lot like the first -- replace the entirety of GIH with "C".
S
I don't think the last SET is valid MUMPS.
Why this isn't written as follows is beyond me:
s GIH="C"
Hope that helps!
Maybe Intersystems Caché handles this syntax differently, but that code results in a syntax error when I try it in Caché. There may be other versions of MUMPS for which that is valid, but I don't think it is.
As others have pointed out, this statement is not valid; it appears pieces are missing.
But S is the SET command in MUMPS.
Here is what a statement like this might look like:
I $P(GIH,"^",24)="P" S $P(GIH,"^",24)="C" S UPDATEFLG=1
in this case GIH might look something like:
GIH=256^^^42^^^^Mike^^^^^^^^^^^^^^^^P^^^
which would make this evaluate to TRUE:
I $P(GIH,"^",24)="P"
so after:
S $P(GIH,"^",24)="C"
GIH will be:
GIH=256^^^42^^^^Mike^^^^^^^^^^^^^^^^C^^^
then it would set the variable UPDATEFLG=1
Hope this helps :-)
Is it a requirement of good TCL code? What would happen if we don't use the "unset" keyword in a script? Any ill-effects I should know about?
I'm inheriting some legacy code and the errors that come about due to "unset"-ing non-existent variables are driving me up the wall!
It's possible to determine whether a variable exists before using it, using the info exists command. If you stop using unset, be sure that you don't upset the logic of the program somewhere else.
There's no Tcl-specific reason to unset a variable, that is, it's not going to cause a memory leak or run out of variable handles or anything crazy like that. Using unset may be a defensive programming practice, because it prevents future use of a variable after it's no longer relevant. Without knowing more about the exact code you're working with, it's hard to give more detailed info.
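A minimal sketch of that defensive use of unset and info exists:

set result [expr {2 + 2}]
puts "result is $result"

# Once the value is no longer relevant, remove it so later code
# cannot accidentally rely on a stale value.
unset result

if {[info exists result]} {
    puts "result is still set"
} else {
    puts "result has been unset"
}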
In addition to the other responses, if your Tcl version is new enough, you can also use:
unset -nocomplain foo
That'll unset foo if it exists, but won't complain if it doesn't.
Depending on the system's resources, you may get an "unable to allocate bytes" error when your script stores huge amounts of data in variables and arrays; it'll break once the cache or RAM is full, saying "unable to allocate XXXXXXXX bytes".
Make sure you're not storing that much data in variables; otherwise, unset them once you are done with the respective datasets (variables).
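A rough sketch of that advice (the array name and sizes here are arbitrary):

# Build a large array, use it, then release the memory it holds
for {set i 0} {$i < 100000} {incr i} {
    set bigData($i) [string repeat x 100]
}
# ... work with bigData here ...
unset bigData   ;# frees the storage once the dataset is no longer needed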
A note, since I don't seem to be able to comment on the "info exists" answer above;
I use this form often:
if { [info exists pie] && $pie eq "ThisIsWhatIWantInPie" } {
    puts "I found what I wanted in pie."
} else {
    puts "Pie did not exist, but I still did not error: Tcl's short-circuit\
 evaluation sees that the conditional failed on the info exists test and\
 does not continue on to the comparison."
}
In addition to the other responses, I want to add that if you want to ignore the errors raised by unsetting a non-existent variable, you can use catch.
#!/usr/bin/env tclsh
catch {unset newVariable}