Performance comparison: one argument or a list of arguments? (Tcl)

I am defining a new Tcl command whose implementation is in C++. The command queries a data stream, and its syntax is something like this:
mycmd <arg1> <arg2> ...
The idea is that this command takes a list of arguments and returns a list holding the corresponding data for each argument.
My colleague commented that it is best to take just a single argument and, when multiple values are needed, call the command multiple times.
There are some other points of discussion, but the one thing we cannot agree on is performance.
I think my version (a list of arguments) should be quicker: when we want multiple values, we pay the cost of going through the Tcl interpreter only once.
His reasoning is new to me:
the function implementation is cached;
accessing a Tcl function is quicker than accessing Tcl data.
Is this reasoning sound?

If you use Tcl_EvalObjv to invoke the command, you won't go through the Tcl interpreter. The cost will be one hash-table lookup (or less, if you reuse the Tcl_Obj* containing the command name) and then you'll be in the implementation of the command. Otherwise, constructing a list Tcl_Obj* (e.g., with Tcl_NewListObj) and then calling Tcl_EvalObj is nearly as cheap, as that's a special case because the list construction code is guaranteed to produce lists that are also substitution-free commands.
Building a normal string and passing that through Tcl_Eval (or Tcl_EvalObj) is significantly slower, as that has to be parsed. (OTOH, passing the same Tcl_Obj* through Tcl_EvalObj multiple times in a row will be faster as it will be compiled internally to bytecode.)
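You can see the same distinction at the script level: a command assembled with list is substitution-free and needs no real parsing, while an equivalent concatenated string has to be parsed and can split values unexpectedly. A minimal sketch, with puts standing in for any command:
set x {hello world}
eval [list puts $x]   ;# pure-list command: prints "hello world"
eval "puts $x"        ;# re-parsed as a string; $x splits into two words, so this errors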
Accessing into values (i.e., into Tcl_Obj* references) is pretty fast, provided the internal representation of those values matches the type that the access function requires. If there's a mismatch, an internal type conversion function may be called, and those are often relatively expensive. To understand internal representations, here are a few for you to think about:
string — array of unicode characters
integer — a C long (except when you spill over into arbitrary precision work)
list — array of Tcl_Obj* references
dict — hash table that maps Tcl_Obj* to Tcl_Obj*
script — bytecoded version
command — pointer to the implementation function
OK, those aren't the exact types (there's often other bookkeeping data too) but they're what you should think of as the model.
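In Tcl 8.6 and later you can actually watch a value's internal representation change (“shimmer”) with an unsupported introspection command; a quick interactive sketch:
set v 12345
incr v 0                                ;# arithmetic gives the value an integer rep
::tcl::unsupported::representation $v   ;# reports an "int" internal representation
llength $v                              ;# treating it as a list converts it again
::tcl::unsupported::representation $v   ;# now reports a "list" internal representation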
As to “which is fastest”, the only sane way to answer the question is to try it and see which is fastest for real: the answer will depend on too many factors for anyone without the actual code to predict it. If you're calling from Tcl, the time command is perfect for this sort of performance analysis work (it is what it is designed for). If you're calling from C or C++, use that language's performance measurement idioms (which I don't know, but would search Stack Overflow for).
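For instance, the two designs being debated here could be compared at the script level with something like this (mycmd is the hypothetical command from the question):
puts [time {mycmd a b c} 10000]
puts [time {mycmd a; mycmd b; mycmd c} 10000]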
Myself? I advise writing the API to be as clear and clean as possible. Describe the actual operations, and don't distort everything to try to squeeze an extra 0.01% of performance out.


Why is my %h is List = 1,2; a valid assignment?

While finalizing my upcoming Raku Advent Calendar post on sigils, I decided to double-check my understanding of the type constraints that sigils create. The docs describe sigil type constraints with the table below:
Sigil   Type constraint          Default type
$       Mu (no type constraint)  Any
&       Callable                 Callable
@       Positional               Array
%       Associative              Hash
Based on this table (and my general understanding of how sigils and containers work), I strongly expected this code
my %percent-sigil is List = 1,2;
my @at-sigil is Map = :k<v>;
to throw an error.
Specifically, I expected that is List would attempt to bind the %-sigiled variable to a List, and that this would throw an X::TypeCheck::Binding error – the same error that my %h := 1,2 throws.
But it didn't error. The first line created a List that seemed perfectly ordinary in every way, other than the sigil on its variable. And the second created a seemingly normal Map. Neither of them secretly had Scalar intermediaries, at least as far as I could tell with VAR and similar introspection.
I took a very quick look at the World.nqp source code, and it seems at least plausible that discarding the % type constraint with is List is intended behavior.
So, is this behavior correct/intended? If so, why? And how does that fit in with the type constraints and other guarantees that sigils typically provide?
(I have to admit, seeing a %-sigiled variable that doesn't support Associative indexing kind of shocked me…)
I think this is a grey area, somewhere between DIHWIDT (Doctor, It Hurts When I Do This) and an oversight in implementation.
Thing is, you can create your own class and use that in the is trait. Basically, that overrides the default type with which the object will be created: Hash for the % sigil and Array for the @ sigil. As long as you provide the interface methods, it (currently) works. For example:
class Foo {
    method AT-KEY($) { 42 }
}
my %h is Foo;
say %h<a>; # 42
However, if you want to pass such an object as an argument to a sub with a % sigil in the signature, it will fail because the class did not consume the Associative role:
sub bar(%) { 666 }
say bar(%h);
===SORRY!=== Error while compiling -e
Calling bar(Foo) will never work with declared signature (%)
I'm not sure why the test for Associative (for the % sigil) and Positional (for the @ sigil) is not enforced at compile time with the is trait. I would assume it was an oversight, maybe something to be fixed in 6.e.
Quoting the Parameters and arguments section of the S06 specification/speculation document about the related issue of binding arguments to routine parameters:
Array and hash parameters are simply bound "as is". (Conjectural: future versions ... may do static analysis and forbid assignments to array and hash parameters that can be caught by it. This will, however, only happen with the appropriate use declaration to opt in to that language version.)
Sure enough, the Rakudo compiler implemented some rudimentary static analysis (in its AOT compilation optimization pass) that normally (but see footnote 3 in this SO answer) insists on binding @ routine parameters to values that do the Positional role, and % ones to Associatives.
I think this has been the case since the first official Raku-supporting release of Rakudo in 2016, but regardless, I'm pretty sure the "appropriate use declaration" is any language version declaration, including none. If your/our druthers are static typing for the win for @ and % sigils, and I think they are, then that's presumably very appropriate!
Another source is the IRC logs. A quick search got me nothing.
Hmm. Let's check the blame for the above verbiage so I can find when it was last updated and maybe spot contemporaneous IRC discussion. Oooh.
That is an extraordinary read.
"oversight" isn't the right word.
I don't have time tonight to search the IRC logs to see what led up to that commit, but I daresay it's interesting. The previous text was talking about a PL design I really liked the sound of in terms of immutability, such that code could become increasingly immutable by simply swapping out one kind of scalar container for another. Very nice! But reality is important, and Jonathan switched the verbiage to the implementation reality. The switch toward static typing certainty is welcome, but has it seriously harmed the performance and immutability options? I don't know. Time for me to go to sleep and head off for seasonal family visits. Happy holidays...

Having Multiple Commands for Calling a Specific Programming Language: To Provide a Delimiter-less Option or Not?

After re-reading the off/on topic lists, I'm still not certain if this question is best posted to this site, so apologies in advance, if it is not.
Overview:
I am working on a project that mixes several programming languages and we are trying to determine important considerations for the command used to call one in particular.
For definiteness, I will list the specific languages; however, I think the principles ought to be general, so familiarity with these specific languages is not really essential.
Specific Context
Specifically, we are using Maxima, KaTeX, Markdown and HTML. While building the prototype, we have used the following (I believe standard) conventions:
KaTeX delimited by $ $ or $$ $$;
HTML delimited by < > </ > pairs;
Markdown works anywhere in the body, except within KaTeX or Maxima environments;
The only non-standard convention we used during this design phase was to call on Maxima using \comp{<Maxima commands>}. This command works within all the other environments (which is desired).
Now that we are ready to start using the platform, it has become apparent that this temporary command for calling Maxima is cumbersome for our users. The vast majority of use cases involve simply calling a single variable or function, e.g.
As such, we have $\eval{function-name()}(\eval{variable-name})$
as opposed to actually using Maxima for computation, e.g.
Here, it is clear that $\eval{a} + \eval{b} = \eval{a+b}$
(where \eval{a+b} would return the actual sum, as calculated by Maxima).
As such, our users would prefer a delimiter-less command option for invoking a single variable or function, e.g. \#<variable-name-in-Maxima> and \#<function-name>(<argument>) (where # is some reserved character not used in the other languages), while also having a delimited alternative for the (much less frequent) cases where they actually want to use Maxima for computation; perhaps something like \#{a+b}.
However, we have a general sense that this is not a best practice, even though we can't foresee any specific issue.
"Research" / Comparisons:
Indeed, there is precedent for delimiter-less expressions for single arguments, like x^2 (on any calculator) or Knuth's a \over b in TeX (which persists in LaTeX, with \frac12 being parsed as \frac{1}{2}).
IIRC, Knuth's point was that this delimiter-less notation was more semantic (and so, in his view, preferable), and because delimiters can be added, ambiguity can be avoided whenever the need arises: e.g. x^{22}, {a+b}\over{c+d} and \frac{12}{3}.
The Question, Proper:
Can anyone point to or explain actual shortcomings / risks associated with a dual solution like:
\#<var>, \#<function>(<arg>) and,
\#[<extended expression>],
(where # is a reserved (& escapable) character), for calling one language amongst others, as opposed to only using a delimited command?
Any alternative suggestions for how to achieve the ease-of-use and more semantic code enabled by the above solution, while keeping the code unambiguous would be very much welcome and appreciated.

Is everything really a string in TCL?

And what is it, if it isn't?
Everything I've read about TCL states that everything is just a string in it. There can be some other types and structures inside of an interpreter (for performance), but at TCL language level everything must behave just like a string. Or am I wrong?
I'm using an IDE for FPGA programming called Vivado. TCL automation is actively used there. (TCL version is still 8.5, if it helps)
Vivado's TCL scripts rely on some kind of "object oriented" system. Web search doesn't show any traces of this system elsewhere.
In this system objects are usually obtained from internal database with "get_*" commands. I can manipulate properties of these objects with commands like get_property, set_property, report_property, etc.
But these objects seem to be something more than just a string.
I'll try to illustrate:
> set vcu [get_bd_cells /vcu_0]
/vcu_0
> puts "|$vcu|"
|/vcu_0|
> report_property $vcu
Property Type Read-only Value
CLASS string true bd_cell
CONFIG.AXI_DEC_BASE0 string false 0
<...>
> report_property "$vcu"
Property Type Read-only Value
CLASS string true bd_cell
CONFIG.AXI_DEC_BASE0 string false 0
<...>
But:
> report_property "/vcu_0"
ERROR: [Common 17-58] '/vcu_0' is not a valid first class Tcl object.
> report_property {/vcu_0}
ERROR: [Common 17-58] '/vcu_0' is not a valid first class Tcl object.
> report_property /vcu_0
ERROR: [Common 17-58] '/vcu_0' is not a valid first class Tcl object.
> puts |$vcu|
|/vcu_0|
> report_property [string range $vcu 0 end]
ERROR: [Common 17-58] '/vcu_0' is not a valid first class Tcl object.
So, my question is: what exactly is this "valid first class Tcl object"?
Clarification:
This question might seem like asking for help with Vivado scripting, but it is not. (I was even in doubt about adding [vivado] to tags.)
I can just live and script with these mystic objects.
But it would be quite useful (for me, and maybe for others) to better understand their inner workings.
Is this "object system" a dirty hack? Or is it a perfectly valid TCL usage?
If it's valid, where can I read about it?
If it is a hack, how is it (or can it be) implemented? Where exactly does the string end and the object start?
Related:
Part of this answer can be considered an opinion in favor of the "hack" version, but it is quite shallow with respect to my question.
A first class Tcl value is a sequence of characters, where those characters are drawn from the Basic Multilingual Plane of the Unicode specification. (We're going to relax that BMP restriction in a future version, but that's not yet in a version we'd recommend for use.) All other values are logically considered to be subtypes of that. For example, binary strings have the characters come from the range [U+000000, U+0000FF], and integers are ASCII digit sequences possibly preceded by a small number of prefixes (e.g., - for a negative number).
In terms of implementation, there's more going on. For example, integers are usually implemented using 64-bit binary values in the endianness that your system uses (but can be expanded to bignums when required) inside a value boxing mechanism, and the string version of the value is generated on demand and cached while the integer value doesn't change. Floating point numbers are IEEE double-precision floats. Lists are internally implemented as an array of values (with smartness for handling allocation). Dictionaries are hash tables with linked lists hanging off each of the hash buckets. And so on. THESE ARE ALL IMPLEMENTATION DETAILS! As a programmer, you can and should typically ignore them totally. What you need to know is that if two values are the same, they will have the same string, and if they have the same string, they are the same in the other interpretation. (Values with different strings can also be equal for other reasons: for example, 0xFF is numerically equal to 255 — hex vs decimal — but they are not string equal. Tcl's true natural equality is string equality.)
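A quick interactive sketch of that last point:
expr {0xFF == 255}                    ;# 1: numerically equal
string equal 0xFF 255                 ;# 0: different strings, so different values
string equal [expr {0xF0 + 0xF}] 255  ;# 1: both render as the string "255"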
True mutable entities are typically represented as named objects: only the name is a Tcl value. This is how Tcl's procedures, classes, I/O system, etc. all work. You can invoke operations on them, but you can only see inside to a limited extent.
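For example, an I/O channel is such a named entity; the Tcl value you pass around is only its name (a minimal sketch; the file path is made up and the exact channel name varies):
set ch [open /tmp/demo.txt w]
puts $ch          ;# prints something like "file3": the value is just a name
puts $ch hello    ;# using the name to operate on the underlying channel
close $ch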
Vivado TCL is not TCL. Vivado does not really document the language they call TCL; they refer you to the real TCL language documentation. Where Vivado TCL and TCL differ, you are left on your own without help. TCL was a poor choice of scripting language given the very large databases involved, so they had to bastardize it to get it half functional. You are better off getting help on the Xilinx forums than in general TCL forums. Why they went with TCL rather than Python is beyond anyone's comprehension.

Iterating over a string in Vimscript, or parsing a JSON file

So I'm creating a vim script that needs to load and parse a JSON file into a local object graph. I searched and I couldn't find any native way to process a JSON file, and I don't want to add any dependencies to the script. So I wrote my own function to parse the JSON string (gotten from the file), but it's really slow. At the moment, I iterate through each character in the file like so:
let len = strlen(jsonString) - 1
let i = 0
while i < len
  let c = strpart(jsonString, i, 1)
  let i += 1
  " A lot of code to process the file....
  " Note: I've tried short-cutting the process by searching for the enclosing
  " double quote when I come across an initial double quote (also taking into
  " account the escaping '\' character). It doesn't help.
endwhile
I've also tried this method:
for c in split(jsonString, '\zs')
  " Do a lot of parsing ....
endfor
For reference, a file with ~29,000 characters takes about 4 seconds to process, which is unacceptable.
Is there a better way to iterate over a string in vim script?
Or better yet, have I missed a native function to parse JSON?
Update:
I asked for no dependencies because I:
Didn't want to deal with them.
Genuinely wanted some ideas on the best way to do this without relying on someone else's work.
Sometimes just like to do things manually, even though the problem has already been solved.
I'm not against plugins or dependencies at all, it's just that I'm curious. Thus the question.
I ended up creating my own function to parse the JSON file. I was creating a script that could parse the package.json file associated with node.js modules. Because of this, I could rely on a fairly consistent format and quit the processing whenever I'd retrieved the information I needed. This usually cut out large chunks of the file since most developers put the largest chunk of the file, their "readme" section, at the end. Because the package.json file is strictly defined, I left the process somewhat fragile. It assumed a root dictionary { } and actively looks for certain entries. You can find the script here: https://github.com/ahayman/vim-nodejs-complete/blob/master/after/ftplugin/javascript.vim#L33.
Of course, this doesn't answer my own question. It's only the solution to my unique problem. I'll wait a few days for new answers and pick the best one before the bounty ends (already set an alarm on my phone).
The simplest solution with the least dependencies is just using the built-in json_decode vim function (available in recent versions of Vim):
let dict = json_decode(jsonString)
Even though Vim's origins date back a long way, its internal string()/eval() representation is so close to JSON that eval() is likely to work, unless you need special characters.
You can look up the implementation here, which even supports true/false/null if you want:
https://github.com/MarcWeber/vim-addon-json-encoding
Better to use that library (vim-addon-manager makes it easy to install dependencies).
Now it depends on your data whether this is good enough.
Benjamin Klein posted your question to vim_use, which is why I'm replying.
The best and fastest replies happen if you subscribe to the Vim mailing list.
Go to vim.sf.net and follow the community link.
You cannot expect the Vim community to scrape Stack Overflow.
I've added the keywords "json" and "parsing" to that little code so that it can be found more easily.
If this solution does not work for you, you can try the many :h if_* bindings, or write an external script which extracts the information you're looking for, or which turns JSON into Vim's dictionary representation so it can be read by eval(), escaping the special characters you care about correctly.
If you seek a completely correct solution, omitting dependencies is one of the worst things you can do. The eval() variant mentioned by @MarcWeber is one of the fastest, but it has its disadvantages:
Using the solution for securing eval() that I mentioned in a comment means it is no longer the fastest: it makes eval() slower by more than an order of magnitude (0.02s vs 0.53s in my test).
It does not respect surrogate pairs.
It cannot be used to verify that you have correct JSON: it accepts some strings (e.g. "\<C-o>") that are not JSON strings and it allows trailing commas.
It fails to give normal error messages. It fails badly if you use the vam#VerifyIsJSON I mentioned in point 1.
It fails to load floating-point values like 1e10 (vim requires numbers to look like 1.0e10, but JSON allows numbers like 1e10: note “and/or” in the first paragraph).
All of the above statements (except the first) also apply to the vim-addon-json-encoding mentioned by @MarcWeber, because it uses eval(). There are some other possibilities:
Fastest and most correct is using Python: pyeval('json.loads(vim.eval("varname"))'). It is not faster than eval(), but it is the fastest of the other possibilities (0.04s in my test: approximately two times slower than eval()).
Note that I use pyeval() here. If you want a solution for a vim version that lacks this functionality, it will no longer be one of the fastest.
Use my json.vim plugin. It has the advantage of slightly better error reporting than the failed vam#VerifyIsJSON (though slightly worse than eval()), and it correctly loads floating-point numbers. It can be used to verify strings (it does not accept "\<C-a>"), but it loads lists with trailing commas just fine. It does not support surrogate pairs. It is also very slow: in the test I used (a 279702-character string) it takes 11.59s to load. json.vim tries to use python if possible, though.
For the best error reporting you can take yaml.vim and purge the YAML support out of it, leaving only JSON (I once did the same thing for pyyaml, though in python: see the markedjson library used in powerline: it is pyyaml minus the YAML stuff, plus classes with marks). But this variant is even slower than json.vim and should only be used if the main thing you need is error reporting: 207 seconds for loading the same 279702-character string.
Note that the only variant mentioned that satisfies both requirements, “no dependencies” and “no python”, is eval(). If you are not fine with its disadvantages, you have to throw away one or both of those requirements, or copy-paste code. If you take speed into account, only two candidates are left: eval() and Python. If you want to parse JSON fast you really must use C, and only these solutions spend most of their time in functions written in C.
Most other interpreters (Ruby/Perl/Tcl) do not have a pyeval() equivalent, so they will be slower even if their JSON implementation is written in C. Some others (Lua, Racket (mzscheme)) do have a pyeval() equivalent, but e.g. luaeval('{}') is zero, meaning that you will have to add an additional step of explicitly and recursively converting objects into vim dictionaries and lists (e.g. luaeval('vim.dict({})')), which will impact performance. I cannot say anything about mzeval(), but I have never heard of anybody actually using Racket (mzscheme) with vim.

Tcl and records (structs)

Package struct::record from Tcllib provides a means of emulating record types. But record instances are commands in the current namespace, not variables in the current scope, which means there is no garbage collection for record instances. Passing the name of a record instance to a procedure means passing it by reference, not by value. It is possible to pass the string representation of the record as a parameter, but that requires creating another instance in the procedure, configuring it, and deleting it by hand, which is annoying. I wonder about the rationale behind this design. A simple alternative would be to provide lisp-style records: a set of construction, access and modification procedures, with records represented as lists.
The struct::record implementation is, from my viewpoint, an oo-style implementation. If you're searching for a data-style implementation (like lisp) where the commands are totally separate from the data, you might want to look at the dict command.
I'll note that oo-style and data-style are really not good descriptions, but they were the best I could think of offhand.
You most certainly can do it “the Lisp way”.
proc mkFooBarRecord {foo bar} {
    # Keep index #0 for a "type" for easier debugging
    return [list "fooBarRecord" $foo $bar]
}
proc getFoo {fooBarRecord} {
    if {[lindex $fooBarRecord 0] ne "fooBarRecord"} {error "not fooBarRecord"}
    return [lindex $fooBarRecord 1]
}
# Etc.
That works quite well. (Write it in C and you can make it more efficient too.) Mind you, as a generic data structure, it seems that many people prefer Tcl 8.5's dictionaries. There are many ways to use them; here's one:
proc mkFooBarRecord {foo bar} {
    return [dict create "type" fooBarRecord "foo" $foo "bar" $bar]
}
proc getFoo {fooBarRecord} {
    dict with fooBarRecord {
        if {$type ne "fooBarRecord"} {error "not fooBarRecord"}
        return $foo
    }
}
As for the whole structures versus objects debate, Tcl tends to regard objects as state with operations (leading to a natural presentation as a command, a fairly heavyweight concept) whereas structures are pure values (and so lightweight). Having written a fair chunk on this, I really don't know what's best in general; I work on a case-by-case basis. If you are going with “structures”, also consider whether you should have collections that represent fields across many structures (equivalent to using column-wise storage instead of row-wise storage in a database) as that can lead to more efficient handling in some cases.
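A minimal sketch of that column-wise idea (the field names are made up):
# Row-wise: one dict per record
set rows [list [dict create foo 1 bar x] [dict create foo 2 bar y]]
# Column-wise: one list per field; the list index is the record number
set fooColumn {1 2}
set barColumn {x y}
# Aggregating one field now walks a single flat list instead of many dicts
set total 0
foreach f $fooColumn {incr total $f}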
Also consider using a database; SQLite integrates extremely well with Tcl, is reasonably efficient, and supports in-memory databases if you don't want to futz around with disk files.
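A minimal in-memory sketch with the sqlite3 Tcl package (the table and column names are illustrative):
package require sqlite3
sqlite3 db :memory:
db eval {CREATE TABLE rec (foo INTEGER, bar TEXT)}
db eval {INSERT INTO rec VALUES (42, 'hello')}
db eval {SELECT foo, bar FROM rec} {
    puts "foo=$foo, bar=$bar"
}
db close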
I will not answer your question directly, because I have not used Tcl for many years and I never used this kind of struct, but I can point you to two places that are very likely to provide a good answer:
The Tcl'ers Wiki http://wiki.tcl.tk
The Tcl IRC channel on Freenode
At the time I used Tcl they proved to be invaluable resources.