I've been working with Tcl/Tk, recently started using it with my automation applications, and I'm hungry for knowledge.
To continue the long line of hidden-feature questions, I would like to know any hidden or handy features of Tcl/Tk, or any easy methods for achieving big operations.
My favorite "hidden or handy feature" is how quoting works in Tcl. I like to use the phrase "quoting is a tool, not a rule". I also like to say "you only need curly braces when you need curly braces".
While most languages have rules for which block delimiters must be used for certain things (for example, in C you must use {} to designate a block), Tcl is not so stringent.
With Tcl, you can choose whatever quoting characters give you the effect you need. There are certainly best practices, but in the end you get to pick the quoting character that best lets you get the job done.
That means, for example, you can define a procedure in many ways, including:
proc foo {args} {
.... body here ....
}
proc foo "args" "
.... body here ....
"
proc foo args [some code that returns the body]
... and so on. The same goes for conditional statements, loops and everything else. (For the uninitiated: curly braces are roughly equivalent to the shell single quote, double quotes are like the shell double quote, and square brackets are like the shell backtick.)
Now, many people look at that and say WTF? But it really gives a lot of power to the programmer. We often get questions in comp.lang.tcl along the lines of "if I do 'this {and $that}', how do I get $that to be expanded?". The answer follows the old joke: Patient: "Doctor, it hurts when I do this." Doctor: "Don't do that." That is, if you don't like the behavior you get with one set of delimiters, choose some other delimiter. Just because an if statement is normally constructed with curly braces doesn't mean it must be constructed with curly braces.
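For instance, a minimal sketch of how the delimiter you pick controls substitution:
set that "world"

# Braces suppress substitution: prints the literal text "hello $that"
puts {hello $that}

# Double quotes allow substitution: prints "hello world"
puts "hello $that"

# Square brackets run a command and substitute its result, like a shell backtick
puts "2 + 2 is [expr {2 + 2}]"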
That's my favorite "hidden" feature of Tcl. It's not hidden -- it's right on the wonderfully complete yet concise Tcl(n) man page, but the ramifications aren't clear until you fully grok how Tcl works.
When a marketing guy at Sun declared that Tcl was "Enterprise Ready", the developers added the following feature:
$ tclsh
% clock format [clock seconds] -format %Q
Stardate 63473.2
Another non-obvious feature is that unrecognised commands fall through to a handler called "unknown", which you can redefine. E.g., to have unknown commands treated as expressions to evaluate:
$ tclsh
% 2+2
invalid command name "2+2"
% proc unknown args {uplevel 1 [linsert $args 0 expr]}
% 2+2
4
More examples can be found at the wiki page on Radical Language Modification.
Tcl's [trace] command allows you to intercept reads and writes to any variable. This allows you to implement an observer on any variable, and to add automatic range checking of arbitrary complexity to any variable (as if you were accessing the variable via a setter/getter). You could also create auto-incrementing variables using this technique.
proc varlimit_re {re var key op} {
    upvar $var v
    if {[regexp -- $re $v] <= 0} {
        error "$var out of range"
    }
}

trace add variable ::myvar {write} [list varlimit_re {^[A-H]\d{3}-[0-9a-f]+$}]
If you try to set 'myvar' to anything that doesn't match the regular expression, you will get a runtime error.
A handy feature which is not hidden but tends not to be obvious to people coming from other languages is that you can define your own control structures (or even redefine the existing ones if you want to live dangerously). There are examples on the Tcl Wiki.
All of Tcl's "keywords" are regular Tcl commands, including control structures like [for], [foreach], [while], etc. This means that you can extend the language by writing new control structures in pure Tcl code.
For example, the try/on/trap structure has been implemented in Tcl 8.6a using only Tcl code. Similarly, tcllib contains control::do, a do/while control structure.
A lot of this is made possible through the [upvar] and [uplevel] commands, which allow you to access variables or execute code in a different stack frame.
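For instance, here is a simplified sketch of a do/while loop written in pure Tcl (not tcllib's actual control::do implementation):

# The body runs once before the condition is first tested; both are
# evaluated in the caller's stack frame via uplevel.
proc do {body while condition} {
    if {$while ne "while"} {
        error "usage: do body while condition"
    }
    uplevel 1 $body
    while {[uplevel 1 [list expr $condition]]} {
        uplevel 1 $body
    }
}

# Usage: prints 0, 1 and 2
set i 0
do {
    puts $i
    incr i
} while {$i < 3}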
Tcl is such a simple, open language there are very few hidden features. It's all exposed for the programmer to extend and adapt.
IMHO the greatest hidden feature of Tcl is its C API. Using this, it's really easy to wrap a core C program or subsystem and write a GUI or other functionality in Tcl. While this feature is not unique to Tcl, Tcl was designed to do this from the ground up and the C API is particularly easy to work with.
The second greatest hidden feature is the packer, the grand-daddy of all geometry managers. With this, a GUI can have sizeable windows with a surprisingly small amount of code. It's important to note that Tcl/Tk had geometry management at least 10 years before .NET came out.
The third greatest feature of Tcl is the ability to extend the language, either through the C API or with commands defined in Tcl. Not quite LISP macros, but quite flexible nonetheless. Expect is a very good example of an application built around extending the base Tcl language to make a domain-specific scripting language.
EDIT: well, bugger me, Xt really did have a geometry manager, although I agree with Nat in that it's somewhat more painful than pack ;-}
[array names] answers one of the first questions newbies ask: how to iterate over an array.
Also, the fact that a single foreach can iterate over multiple lists, e.g. foreach key1 $list1 key2 $list2 {...}, even if the lists are of different sizes.
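A small sketch of both forms:

# Two lists in lockstep; the shorter list is padded with empty strings
foreach name {a b c} value {1 2} {
    puts "$name -> $value"
}

# Two loop variables consuming one list in pairs
foreach {key val} {x 1 y 2} {
    puts "$key = $val"
}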
You should not put comments between the cases of a switch (this is not a cool feature but a gotcha most developers do not understand): inside the braced body, a line starting with # is parsed as a pattern/body pair, not as a comment.
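A sketch of the gotcha and the safe alternative:

# WRONG: inside the braced body, "#" is not a comment marker; the words
# are consumed as pattern/body pairs, so this "comment" corrupts the pairing.
switch $x {
    a { puts "got a" }
    # this is not a comment
    b { puts "got b" }
}

# SAFE: put comments inside the action bodies instead.
switch $x {
    a {
        # a comment here is fine
        puts "got a"
    }
    b { puts "got b" }
}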
The rename command can rename (or, given an empty target name, delete) any command, including built-ins.
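For example, a small sketch that wraps a built-in command:

# Move the real exit out of the way, then install a logging replacement
rename exit _real_exit
proc exit {{code 0}} {
    puts "exiting with status $code"
    _real_exit $code
}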
I think the time command is wonderful. It's not exactly hidden but that doesn't stop people from asking "which function is faster" once in a while in comp.lang.tcl.
Anytime you want to know "how long does this take?" or "which method is faster?" you just call it via the "time" command. No creating of objects, no math, no overhead, exceptionally simple. Other languages have a similar feature, though some are a bit less elegant.
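For example (the reported timing is illustrative and varies by machine):

% time {lsort {5 3 1 4 2}} 10000
2.1 microseconds per iteration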
The well documented C API also allowed easy integration in Perl. My experience with Tcl/Tk goes back to 1995, but in 2000 or so, I discovered Perl/Tk and never looked back.
And lately, the Tcl and Tkx Perl packages give us modern-looking GUIs. And the two aforementioned modules, while not trivial, involve relatively little code considering what they allow one to do across language boundaries. And that can be attributed directly to the excellent API (and the power of Perl, obviously).
I have a couple of questions about adding options/switches (with and without parameters) to procedures/commands. I see that tcllib has cmdline, and Ashok Nadkarni's book on Tcl recommends the parse_args package, stating that using Tcl to handle the arguments is much slower than this package's C implementation. The Nov. 2016 paper on parse_args states that Tcl script methods are, or can be, 50 times slower.
Are Tcl methods really significantly slower? Is there some minimum threshold number of options to be reached before using a package?
Is there any reason to use parse_args (not in tcllib) over cmdline (in tcllib)?
Can both be easily included in a starkit?
Is this included in 8.7a now? (I'd like to use 8.7a but I'm using Manjaro Linux and am afraid that adding it outside the package manager will cause issues that I won't know how to resolve or even just "undo").
Thank you for considering my questions.
Are Tcl methods really significantly slower? Is there some minimum threshold number of options to be reached before using a package?
Potentially. Procedures have overhead to do with managing the stack frame and so on, and code implemented in C can avoid a number of overheads due to the way values are managed in current Tcl implementations. The difference is much more profound for numeric code than for string-based code, as the cost of boxing and unboxing numeric values is quite significant (strings are always boxed in all languages).
As for which is the one to use, it really depends on the details, as you are trading off flexibility for speed. I've never known it to be a problem for command-line parsing.
(If you ask me, fifty options isn't really that many, except that it's quite a lot to pass on an actual command line. It might be easier to design a configuration file format — perhaps a simple Tcl script! — and then to just pass the name of that in as the actual argument.)
Is there any reason to use parse_args (not in tcllib) over cmdline (in tcllib)?
Performance? Details of how you describe things to the parser?
Can both be easily included in a starkit?
As long as any C code is built with Tcl stubs enabled (typically not much more than defining USE_TCL_STUBS and linking against the stub library), it can go in a starkit as a loadable library. Using the stubbed build means that the compiled code doesn't assume exactly which version of the Tcl library is present or what its path is; those are assumptions that are usually wrong with a starkit.
Tcl-implemented packages can always go in a starkit. Hybrid packages need a little care for their C parts, but are otherwise pretty easy.
Many packages either always build in stubbed mode or have a build configuration option to do so.
Is this included in 8.7a now? (I'd like to use 8.7a but I'm using Manjaro Linux and am afraid that adding it outside the package manager will cause issues that I won't know how to resolve or even just "undo").
We think we're about a month from the feature freeze for 8.7, and builds seem stable in automated testing so the beta phase will probably be fairly short. The list of what's in can be found here (filter for 8.7 and Final). However, bear in mind that we tend to feel that if code can be done in an extension then there's usually no desperate need for it to be in Tcl itself.
Weird question, but here it is. What are the programming concepts that were "automated" by modern languages? What I mean are the concepts you had to handle manually before. Here is an example: I have just read that in C you have to manage memory manually, whereas with "modern" languages the compiler/interpreter/language itself takes care of it. Do you know of any others, or aren't there any more?
Optimizations.
For the longest time, it was necessary to do this by hand. Now most compilers can do it infinitely better than any human ever could.
Note: This is not to say that hand-optimizations aren't still done, as pointed out in the comments. I'm merely saying that a number of optimizations that used to be done by hand are automatic now.
I think writing machine code deserves a mention.
Data collections
Hash tables, linked lists, resizable arrays, etc
All these had to be done by hand before.
Iteration over a collection:
foreach (string item in items)
{
// Do item
}
Database access, look at the ActiveRecord pattern in Ruby.
The evil goto.
Memory management, anybody? I know it's more efficient to allocate and deallocate your own memory explicitly, but it also leads to Buffer Overruns when it's not done correctly, and it's so time consuming - a number of modern languages will allocate and garbage collect automatically.
Event System
Before, you had to implement the Observer pattern all by yourself. Today (at least in .NET) you can simply use "delegates" and "events".
Line Numbers
No more BASIC, no more Card Decks!
First on the list: extension methods. They facilitate fluent programming.
Exceptions: compartmentalizing what the program is trying to do (the try block) and what it will do if something fails (the catch block) makes for saner programming. Before, you always had to be alert, interspersing error handling between your program statements.
Properties make the language very component-centric, very modern. But sadly that would make Java not modern.
Lambdas: functions that capture variables, whereas before we only had function pointers. This also precludes the need for nested functions (Pascal has nested functions). See the sketch after this list.
Convenient looping over collections, i.e. foreach, whereas before you had to make a design pattern for obj->MoveNext, obj->Eof.
goto-less programming using modern constructs like break, continue, return. I remember that in Turbo Pascal 5 there was no break or continue; VB6 has Exit Do/For (analogous to break), but it doesn't have continue.
C#-wise, I love the differentiation of out and ref, so the compiler can catch programming errors earlier.
Reflection and attributes, making programs/components able to discover each other's functionality and invoke it at runtime. In the C of old (I don't know about modern C; it's been a long time since I used it), these things were inconceivable.
Remote method invocation, like WebServices, Remoting, WCF, RMI. Gone are the days of low-level TCP/IP plumbing for communication between computers.
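To illustrate the lambda point above, a small sketch (Python, purely for illustration):

# The inner function captures `rate` from the enclosing scope: a closure.
# A bare function pointer has no way to carry this extra state around.
def make_multiplier(rate):
    def apply(amount):
        return amount * rate
    return apply

double = make_multiplier(2)
print(double(21))  # prints 42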
Declarations
In single-pass-compiler languages like C and C++, declarations had to precede usage of a function. More modern languages use multi-pass compilers and don't need declarations anymore. Unfortunately, C and C++ were defined in such a poor way that they can't deprecate declarations now that multi-pass compilers are feasible.
Pattern matching and match expressions
In modern languages you can use pattern matching, which is more powerful than a switch statement, nested if statements or ternary operations:
E.g., this F# expression returns a string depending on the value of myList:
match myList with
| [] -> "empty list"
| 2::3::tl -> "list starts with elements 2 and 3"
| _ -> "other kind of list"
in C# you would have to write an equivalent expression that is less readable/maintainable:
(myList.Count == 0) ? "empty list" :
(myList[0] == 2 && myList[1] == 3 ? "list starts with elements 2 and 3" :
"other kind of list")
If you go back to assembly, you can find many more, like the concept of classes: you could mimic them to a certain extent, but they were quite difficult to achieve. Or even having multiple statements in a single line: saying "int c = 5 + 10 / 20;" is actually many different "instructions" put into a single line.
Pretty much anything you can think of beyond simple assembly (concepts such as scope, inheritance and polymorphism, overloading, "operators") is something that has been automated by modern languages, compilers and interpreters.
Some languages support Dynamic Typing, like Python! That is one of the best things ever (for certain fields of applications).
Functions.
It used to be that you needed to manually push all the arguments onto the stack, jump to the piece of code where you defined your function, and then manually take care of its return values. Now you just write a function!
Programming itself
With some modern IDE (e.g. Xcode/Interface Builder) a text editor or an address book is just a couple of clicks away.
Also, built-in functions for sorting (such as bubble sort, quicksort, ...).
Especially in Python, 'containers' are powerful data structures that in other high-level and modern programming languages require a couple of lines to implement. You can find many examples of this kind in the Python documentation.
Multithreading
Native support for multithreading in the language (like in java) makes it more "natural" than adding it as an external lib (e.g. posix thread layer in C). It helps in understanding the concept because there are many examples, documentation, and tools available.
Good string types mean you have to worry less about messing up your string code.
Most languages other than C (and occasionally C++, depending on how C-like it is being) have much nicer strings than C-style arrays of char with a '\0' at the end: easier to work with, and a lot less worry about buffer overflows, etc. C strings suck.
I probably have not worked with C strings enough to pass such a harsh judgment (OK, not that harsh, but I'd like to be harsher), but after reading about the saner-seeming Pascal string arrays, which used the zeroth element to mark the length of the string, and a bunch of flamewars over which strcpy/strcat is better to use (the older strn* security-enhancement efforts, the OpenBSD strl*, or the Microsoft str*_s), I have just come to really dislike them.
Type inference
In languages like F# or Haskell, the compiler infers types, making the programming task much easier:
Java: float length = ComputeLength(new Vector3f(x,y,z));
F#: let length = ComputeLength(new Vector3f(x,y,z))
Both programs are equivalent and statically typed. But the F# compiler knows, for instance, that ComputeLength returns a float, so it automagically deduces the type of length, etc.
A whole bunch of the Design Patterns. Many of the design patterns, such as Observer (as KroaX mentioned), Command, Strategy, etc. exist to model first-class functions, and so many more languages have begun to support them natively. Some of the patterns related to other concepts, such as Lock and Iterator, have also been built into languages.
Dynamic libraries
Dynamic libraries automatically share common code between processes, saving RAM and speeding up start times.
Plugins
A very clean and efficient way to extend functionality.
Data Binding. Sure cuts down on wiring to manually shuffle data in and out of UI elements.
OS shell scripting. bash/sh or, even worse, batch scripts can to a large extent be replaced with Python/Perl/Ruby (especially for long scripts; less so, at least currently, for some of the core OS stuff).
You get most of the same ability to throw out a few lines of script to glue stuff together while still working in a "real" programming language.
Context Switching.
Most modern programming languages use the native threading model instead of cooperative threads. Cooperative threads had to actively yield control to let the next thread work, with native threads this is handled by the operating system.
As an example (pseudo code):
volatile bool run = true;

void thread1()
{
    while (run)
    {
        doHeavyWork();
        yield();
    }
}

void thread2()
{
    run = false;
}
On a cooperative system thread2 would never run without the yield() in the while loop.
Variable Typing
Ruby, Python and AS will let you declare variables without a type if that's what you want. Let me worry about whether this particular object's implementation of Print() is the right one, is what I say.
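A quick sketch of that attitude in Python (the class and method names are just for illustration):

class Invoice:
    def print_me(self):
        print("printing an invoice")

class Report:
    def print_me(self):
        print("printing a report")

# No declared types: any object with a print_me() method will do, and the
# right implementation is picked at run time.
for doc in (Invoice(), Report()):
    doc.print_me()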
How about built-in debuggers?
How many here remember "the good old days", when we had to add print lines all over the program to figure out what was happening?
Stupidity
That's a thing I've gotten a lot of help with from modern programming languages. Some programming languages are a mess to start with, so you don't need to spend effort shuffling things around without reason. Good modern libraries enforce stupidity by forcing the programmer inside their framework and into writing redundant code. Java especially helps me enormously with stupidity by forcing me inside the OOP box.
Auto Type Conversion.
This is something that I don't even realize the language is doing for me, except when I get errors for wrong type conversions.
So, in modern languages you can:
Dim Test as integer = "12"
and everything should work fine...
Try to do something like that in a C compiler for embedded programming, for example. You have to do all the type conversions manually! That is a lot of work.
In Unix shell programming the pipe operator is an extremely powerful tool. With a small set of core utilities, a systems language (like C) and a scripting language (like Python) you can construct extremely compact and powerful shell scripts, that are automatically parallelized by the operating system.
Obviously this is a very powerful programming paradigm, but I haven't seen pipes as first class abstractions in any language other than a shell script. The code needed to replicate the functionality of scripts using pipes seems to always be quite complex.
So my question is why don't I see something similar to Unix pipes in modern high-level languages like C#, Java, etc.? Are there languages (other than shell scripts) which do support first class pipes? Isn't it a convenient and safe way to express concurrent algorithms?
Just in case someone brings it up, I looked at the F# pipe-forward operator (forward pipe operator), and it looks more like a function application operator. It applies a function to data, rather than connecting two streams together, as far as I can tell, but I am open to corrections.
Postscript: While doing some research on implementing coroutines, I realize that there are certain parallels. In a blog post Martin Wolf describes a similar problem to mine but in terms of coroutines instead of pipes.
Haha! Thanks to my Google-fu, I have found an SO answer that may interest you. Basically, the answer is going against the "don't overload operators unless you really have to" argument by overloading the bitwise-OR operator to provide shell-like piping, resulting in Python code like this:
for i in xrange(2,100) | sieve(2) | sieve(3) | sieve(5) | sieve(7):
print i
What it does, conceptually, is pipe the list of numbers from 2 to 99 (xrange(2, 100)) through a sieve function that removes multiples of a given number (first 2, then 3, then 5, then 7). This is the start of a prime-number generator, though generating prime numbers this way is a rather bad idea. But we can do more:
for i in xrange(2,100) | strify() | startswith(5):
print i
This generates the range, then converts all of them from numbers to strings, and then filters out anything that doesn't start with 5.
The post shows a basic parent class that allows you to overload two methods, map and filter, to describe the behavior of your pipe. So strify() uses the map method to convert everything to a string, while sieve() uses the filter method to weed out things that aren't multiples of the number.
It's quite clever, though perhaps that means it's not very Pythonic, but it demonstrates what you are after and a technique to get it that can probably be applied easily to other languages.
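For the curious, here is a minimal sketch of that parent-class idea in Python 3 (the names are illustrative; the original post's code differs in its details):

class Pipe:
    """Base class: subclasses override map() and/or filter()."""
    def __ror__(self, left):
        # `left | self` invokes this with the left-hand iterable
        for item in left:
            if self.filter(item):
                yield self.map(item)

    def filter(self, item):
        return True   # keep everything by default

    def map(self, item):
        return item   # pass items through unchanged by default

class sieve(Pipe):
    def __init__(self, n):
        self.n = n
    def filter(self, item):
        # keep n itself, drop its other multiples
        return item == self.n or item % self.n != 0

class strify(Pipe):
    def map(self, item):
        return str(item)

class startswith(Pipe):
    def __init__(self, prefix):
        self.prefix = str(prefix)
    def filter(self, item):
        return item.startswith(self.prefix)

# Prints the primes below 100: every composite < 100 has a factor <= 7
for i in range(2, 100) | sieve(2) | sieve(3) | sieve(5) | sieve(7):
    print(i)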
You can do pipelining-type parallelism quite easily in Erlang. Below is a shameless copy/paste from my blog post of Jan 2008.
Also, Glasgow Parallel Haskell allows for parallel function composition, which amounts to the same thing, giving you implicit parallelisation.
You already think in terms of pipelines - how about "gzcat foo.tar.gz | tar xf -"? You may not have known it, but the shell is running the unzip and untar in parallel - the stdin read in tar just blocks until data is sent to stdout by gzcat.
Well, a lot of tasks can be expressed in terms of pipelines, and if you can do that then getting some level of parallelisation is simple with David King's helper code (even across erlang nodes, i.e. machines):
pipeline:run([pipeline:generator(BigList),
              {filter, fun some_filter/1},
              {map, fun some_map/1},
              {generic, fun some_complex_function/2},
              fun some_more_complicated_function/1,
              fun pipeline:collect/1]).
So basically what he's doing here is making a list of the steps - each step being implemented in a fun that accepts as input whatever the previous step outputs (the funs can even be defined inline, of course). Go check out David's blog entry for the code and a more detailed explanation.
The magrittr package provides something similar to F#'s pipe-forward operator in R:
rnorm(100) %>% abs %>% mean
Combined with the dplyr package, it makes a neat data-manipulation tool:
iris %>%
  filter(Species == "virginica") %>%
  select(-Species) %>%
  colMeans
You can find something like pipes in C# and Java, for example, where you take a connection stream and put it inside the constructor of another connection stream.
So, you have in Java:
new BufferedReader(new InputStreamReader(System.in));
You may want to look up chaining input streams or output streams.
Thanks to all of the great answers and comments, here is a summary of what I learned:
It turns out that there is an entire paradigm related to what I am interested in, called flow-based programming. A good example of a language designed specially for flow-based programming is Hartmann pipelines. Hartmann pipelines generalize the idea of streams and pipes used in Unix and other OSes to allow for multiple input and output streams (rather than just a single input stream and two output streams). Erlang contains powerful abstractions that make it easy to express concurrent processes in a manner that resembles pipes. Java provides PipedInputStream and PipedOutputStream, which can be used with threads to achieve the same kind of abstraction in a more verbose manner.
I think the most fundamental reason is because C# and Java tend to be used to build more monolithic systems. Culturally, it's just not common to even want to do pipe-like things -- you just make your application implement the necessary functionality. The notion of building a multitude of simple tools and then gluing them together in arbitrary ways just isn't common in those contexts.
If you look at some of the scripting languages, like Python and Ruby, there are some pretty good tools for doing pipe-like things from within those scripts. Check out the Python subprocess module, for example, which allows you to do things like:
import subprocess

proc = subprocess.Popen('cat -',
                        shell=True,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
stdout_value = proc.communicate('through stdin to stdout')[0]
print '\tpass through:', stdout_value
Are you looking at the F# |> operator? I think you actually want the >> operator.
Usually you just don't need it, and programs run faster without it.
Basically, piping is the consumer/producer pattern. And it's not that hard to write those consumers and producers, because they don't share much data.
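For instance, a rough sketch of a producer/consumer pipeline using plain Python generators:

# Each stage lazily pulls from the previous one, much like processes
# connected by a pipe (though single-threaded, with no OS parallelism).
def produce(n):
    for i in range(n):
        yield i

def double(items):
    for i in items:
        yield i * 2

def consume(items):
    return sum(items)

print(consume(double(produce(10))))  # prints 90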
Piping for Python: pypes
Mozart-OZ can do pipes using ports and threads.
Objective-C has the NSPipe class. I use it quite frequently.
I've had a lot of fun building pipeline functions in Python. I have a library I wrote; I put the contents and a sample run here. The best fit for me has been XML processing, described in this Wikipedia article.
You can do pipe-like operations in Java by chaining/filtering/transforming iterators.
You can use Google's Guava Iterators.
I will say that even with the very helpful Guava library and static imports, it still ends up being a lot of Java code.
In Scala it's quite easy to make your own pipe operator.
Streaming libraries based on coroutines have existed in Haskell for quite some time now. Two popular examples are conduit and pipes.
Both libraries are well-written and well-documented, and are relatively mature. The Yesod web framework is based on conduit, and it's pretty damn fast. Yesod is competitive with Node on performance, even beating it in a few places.
Interestingly, both of these libraries are single-threaded by default. This is because the single motivating use case for pipelines is servers, which are I/O bound.
Since R added a pipe operator today, it's worth mentioning that Julia has had a pipe all along:
help?> |>
search: |>
|>(x, f)
Applies a function to the preceding argument. This allows for easy function chaining.
Examples
≡≡≡≡≡≡≡≡≡≡
julia> [1:5;] |> x->x.^2 |> sum |> inv
0.01818181818181818
If you're still interested in an answer...
You can look at Factor, or the older Joy and Forth, for the concatenative paradigm.
In-arguments and out-arguments are implicit, dumped to a stack. Then the next word (function) takes that data and does something with it.
The syntax is postfix.
"123" print
where print takes one argument: whatever is on the stack.
You can use my library in python: github.com/sspipe/sspipe
In Mathematica, you can use //
for example
f[g[h[x,parm1],parm2]]
quite a mess.
could be written as
x // h[#, parm1]& // g[#, parm2]& // f
(# and & together form a lambda in Mathematica)
In JavaScript, there seems to be a pipe operator |> coming soon:
https://github.com/tc39/proposal-pipeline-operator
I just came across an idea in The Structure And Interpretation of Computer Programs:
Data is just dumb code, and code is just smart data
I fail to understand what it means. Can someone help me understand it better?
This is one of the fundamental lessons of SICP and one of the most powerful ideas of computer science. It works like this:
What we think of as "code" doesn't actually have the power to do anything by itself. Code defines a program only within a context of interpretation -- outside of that context, it is just a stream of characters. (Really a stream of bits, which is really a stream of electrical impulses. But let's keep it simple.) The meaning of code is defined by the system within which you run it -- and this system just treats your code as data that tells it what you wanted to do. C source code is interpreted by a C compiler as data describing an object file you want it to create. An object file is treated by the loader as data describing some machine instructions you want to queue up for execution. Machine instructions are interpreted by the CPU as data defining the sequence of state transitions it should undergo.
Interpreted languages often contain mechanisms for treating data as code, which means you can pass code into a function in some form and then execute it -- or even generate code at run time:
#!/usr/bin/perl
# Note that the above line explicitly defines the interpretive context for the
# rest of this file. Without the context of a Perl interpreter, this script
# doesn't do anything.
sub foo {
    my ($expression) = @_;
    # $expression is just a string that happens to be valid Perl
    print "$expression = " . eval("$expression") . "\n";
}

foo("1 + 1 + 2 + 3 + 5 + 8");              # sum of first six Fibonacci numbers
foo(join(' + ', map { $_ * $_ } (1..10))); # sum of first ten squares
Some languages, like Scheme, have a concept of "first-class functions", which means that you can treat a function as data and pass it around without evaluating it until you really want to.
The upshot is that the division between "code" and "data" is pretty much arbitrary, a function of perspective only. The lower the level of abstraction, the "smarter" the code has to be: it has to contain more information about how it should be executed. On the other hand, the more information the interpreter supplies, the more dumb the code can be, until it starts to look like data with no smarts at all.
One of the most powerful ways to write code is as a simple description of what you need: Data which will be turned into code describing how to get you what you need by the interpretive context. We call this "declarative programming".
For a concrete example, consider HTML. HTML does not describe a Turing-complete programming language. It is merely structured data. Its structure contains some smarts that let it control the behavior of its interpretive context -- but not a lot of smarts. On the other hand, it contains more smarts than the paragraphs of text that appear on an average web page: Those are pretty dumb data.
In the context of security: Due to buffer overflows, what you thought of as data and thus harmless (such as an image) can become executed as code and p0wn your machine.
In the context of software development: Many developers are very afraid of "hardcoding" things and very keen on extracting parameters that might have to change into configuration files. This is often based on the idea that config files are just "data" and thus can be changed easily (perhaps by customers) without raising the issues (compilation, deployment, testing) that changing anything in the code would.
What these developers don't realize is that since this "data" influences the behaviour of the program, it really is code; it could break the program and the only reason not to require complete testing after such a change is that, if done correctly, the configurable values have a very specific, well-documented effect and any invalid value or a broken file structure will be caught by the program.
However, what all too often happens is that the config file structure becomes a programming language in its own right, complete with control flow and everything - one that's badly documented, has a quirky syntax and parser and which only the most experienced developers in the team can touch without breaking the application completely.
So, in a language like Scheme, even code is treated as first class data. You can treat functions and lambda expressions much like you treat other code, say passing them into other functions and lambda expressions. I recommend continuing with the text as this will all become quite clear.
This is something you come to understand from writing a compiler.
One common step in compilers is to transform the program into an abstract syntax tree. The representation will often look like a tree, such as [+, 2, 3], where + is the root and 2, 3 are the children.
Lisp languages simply treat this as their data. So there is no separation between data and code: both are lists that look like AST trees.
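A toy sketch in Python makes the point: the same nested list is inert data until it is handed to an evaluator:

# A tiny evaluator for Lisp-style nested-list "ASTs" (illustrative only)
def evaluate(node):
    if not isinstance(node, list):
        return node                     # a leaf is just a number
    op, *args = node
    values = [evaluate(a) for a in args]
    if op == '+':
        return sum(values)
    if op == '*':
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError("unknown operator: " + op)

# The "code" ['+', 2, ['*', 3, 4]] is an ordinary list we can build,
# inspect and transform like any other data -- and also run:
print(evaluate(['+', 2, ['*', 3, 4]]))  # prints 14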
Code is definitely data, but data is definitely not always code. Let's take a basic example: customer name. It has nothing to do with code; it's a functional (essential), as opposed to a technical (accidental), aspect of an application.
You could probably say that any technical/accidental data is code and that functional/essential data is not.
What language(s) have comments with side effects? In essence, comments which are not comments....
English. Do I win?
DOS Batch Shell programming
The REM (remark) statement allows you to put in a comment, but it has the side effect of setting the ERRORLEVEL variable to 0.
In a sense, it makes the last operation a success.
I don't know how a comment can fail, but if it does, you are covered.
I can think of several places where comments aren't really comments.
HTML and script tags (providing support for browsers that don't allow or support scripts).
And then, considerably more obscurely:
IBM Informix 4GL (I4GL) and 4J's Genero (successor to Informix Dynamic 4GL, D4GL). The notation '--#' was used by D4GL to include material only applicable to D4GL; I4GL would see that as a comment. The inverse notation was '--#', which looked like a comment to D4GL but was treated as active material by I4GL.
And, even more obscurely:
I wrote an I4GL file which was dual-languaged, exploiting I4GL's multiple comment facilities. Material starting '#' (hash) marked the start of a comment outside of strings - up to the next newline, as does '--' (double-dash). Also, '{...}' (braces) enclose multiline comments.
The top of the source file was actually a shell script, mostly enclosed in '{...}' which is, of course, perfectly legitimate in shell. The shell script was a data-driven code generator that copied itself to the top of the output, and then generated about 100 functions which were all depressingly similar but slightly different (in a language without templates or a pre-processor). The code had to validate what was in the database for a given ship against incoming data from an external source (Lloyds of London, in fact), to see what had changed since the last time the external data was received. Non-trivial comparison work, especially since it had to deal with database (SQL) nulls.
The file was not really a Quine program, but it had some points in common with it. In particular, you could feed the script broken I4GL code and the regenerated file would be perfect again, basically because it ignored the existing I4GL code.
Haskell can turn the usual comments-in-code paradigm upside down by having code in comments; so can Mathematica and the like. Literate programming is a nice feature of the more mathematically inclined languages.
I also find that annotations in Java are like comments with behaviour.
Then of course there are "polyglots" -- programs which can be compiled/executed in multiple languages. Usually these rely on the fact that the same line is a comment in one language, but an actual line of code in another.
QBasic has a use of comments all its own: REM $STATIC or REM $DYNAMIC set how arrays are allocated.
Another example: When web browsers parse comments <!-- -- -->in<!-- -- -->correctly.
CSS for clever cross-browser hacks. Of course, I wouldn't really call CSS a language.
Just stumbled upon this old question and my first thought was javadoc comments.