Need MUMPS Sample Code - mumps

I am working on an analysis tool for which I need MUMPS sample code. Can anyone provide me with live MUMPS code or sample code? Suggestions for links to such code would also be appreciated.

This is some MUMPS I wrote for fun. I guess if your tool can analyze this, it works:
Q N R,Q,C,D,E,W,B,G,H,S,T,U,V,F,L,P,N,J,A S N=$G(N),Q='N,F=Q+Q,P=F+F,W=$L($T(Q))
S W=$E(W,Q),S='N_+N,W=W-F*S,L=$G(L),R=$C(Q_F_P),R(F)=$C(F+Q_F),R(P)=$C(W-F) W #
S T=$E($T(Q+F),F,W\S)_$C(W+S+F) X T S B=$P(T,$C(P_P),F),C=B\(W*W),D=B-(C*W*W)\W
F G=S-Q:F:S+F+Q S E=B-(C*W*W+(D*W)),H=$E($T(Q),G),#H=$S(#H<S:'Q,Q:N)_#H,T=C_D_E
F A=Q:Q:W\S S J=$E(T,A),C(F)=$S(J>(F+Q)&(J<(S-F)):Q,Q:+N),C(P)=$S(J#F:Q,Q:+N) D
.S C(Q)=$S(J<(S-F):+N,Q:Q),C(F+Q)=$S(J>Q&(J<(S-F))&(J'=(P+'L))&(J'=(P)):Q,Q:+N)
.S H('L)=L F S H(N?.E)=$O(C(H('$G(N)))) Q:H('+L)=L S F(A,H('L))=C(H(W[(W\S)))
F U=Q:Q:P W !,R F V=Q:Q:P+F W $S(F(V,U):'Q,Q:$C(P_(W\S))) W:'(V#F) $C('N_F_F+F)
W !!,R(F)_C_R(P)_D_R(P)_E_R(F) X $RE($E($T(Q),Q+F,P+Q))_R(P)_'N W # G:N=L Q+F Q
look ma, no literals!
This outputs a binary clock:
:D Q^ROU
|..|..|..|
|..|..|.0|
|..|.0|0.|
|..|00|..|
00:13:24

GitHub actually hosts many MUMPS projects, but they unfortunately get tagged as Objective-C or MATLAB, so it is not easy to search for MUMPS code there. Here are some projects I know are written at least partially in MUMPS:
OSEHRA
Reynard GT.M Server
GT.M Term Size
GT.M POSIX Extension
Tetris in MUMPS
Juicy MUMPS Example
GT.M PCRE Extension
GT.M Digest Extension
DataBallet
Source KIDS
Software development tools for MUMPS

I don't think any of this will be enough for analysis purposes, but there are a lot of small examples at M[UMPS] by Example. There are also some lengthy samples on the MUMPS Wikipedia page. I don't know whether they are standalone or not; I haven't tested them myself.

VistA is an open-source EMR for the Veterans Administration written in MUMPS. You can download it from the VistA wiki here: OpenVistA Download Page
I haven't tried to download it myself, so you may need to install MUMPS to get access to the source. Good luck!

Look here:
http://www.faqs.org/faqs/m-technology-faq/part2/
Scroll down to (or search for) the section heading "Appendix 6" (without the double-quotes).
HTH
Nathan

Here is a sample piece of code that loops through a global, traverses it, and prints the data to the terminal.
TESTLOG ; walk every level of ^TCLOG and print the UPDATE entries
 S TC=""
 F  S TC=$O(^TCLOG(TC)) Q:TC=""  D
 . S LogDT=""
 . F  S LogDT=$O(^TCLOG(TC,LogDT)) Q:LogDT=""  D
 . . S Type=""
 . . F  S Type=$O(^TCLOG(TC,LogDT,Type)) Q:Type=""  D
 . . . Q:Type'="UPDATE"  ; only UPDATE nodes are of interest
 . . . S LogData=$G(^TCLOG(TC,LogDT,"UPDATE"))
 . . . W !,LogData
 Q
And see the link below for more examples:
http://www.vistapedia.com/index.php/MUMPS_Code_Examples

Here's "hello world": w "Hello world!",!
The w is an abbreviation of write; either is acceptable, but the abbreviation is more idiomatic. The ! is a format control that writes a newline.
Here's a Fibonacci implementation, first without abbreviations and then with them:
innerFibonacci(value,cache)
 if $data(cache(value))=1 quit cache(value)
 set cache(value)=$$innerFibonacci(value-1,.cache)+$$innerFibonacci(value-2,.cache)
 quit cache(value)
fibonacci(value)
 new cache
 set cache(0)=1
 set cache(1)=1
 quit $$innerFibonacci(value,.cache)
Here's the same thing with the more idiomatic abbreviations:
innerFibonacci(value,cache)
 i $d(cache(value))=1 q cache(value)
 s cache(value)=$$innerFibonacci(value-1,.cache)+$$innerFibonacci(value-2,.cache)
 q cache(value)
fibonacci(value)
 n cache
 s cache(0)=1
 s cache(1)=1
 q $$innerFibonacci(value,.cache)
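You'd call it as an extrinsic function; for example, from the direct-mode prompt, assuming you saved the routine as FIB (a name I've made up):
 w $$fibonacci^FIB(10)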
Now - recursion in MUMPS is a pretty dangerous thing to do, because MUMPS implementations won't automatically convert tail recursion to iteration - so this could easily blow the stack for a large value.
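If stack depth worries you, here's a minimal iterative sketch (fibIter is my own label, not part of the code above; it keeps the same fib(0)=fib(1)=1 convention):
fibIter(value)
 new i,a,b,t
 set a=1,b=1
 for i=2:1:value set t=a+b,a=b,b=t
 quit b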
Here's a little more "MUMPS-y" example, one that actually leverages MUMPS' single data structure, which is essentially a sorted array whose indices can be numbers or strings. Prefixing these arrays with ^ saves them to disk. The $ names are functions built into the language. The q: is a postcondition on the quit command, meaning 'quit if person is equal to ""'.
Here it is without abbreviations, then with:
peopleFoodCombinations(people,food)
 new person
 for  set person=$order(people(person)) quit:person=""  do
 . set ^PEOPLE(person,"favoriteFood")=$get(food(person))
 quit
Now with abbrevs:
peopleFoodCombinations(people,food)
 n person
 f  s person=$o(people(person)) q:person=""  d
 . s ^PEOPLE(person,"favoriteFood")=$g(food(person))
 q
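For completeness, here's a hypothetical call site (the names and food values are invented); the leading dots pass the arrays by reference:
 new people,food
 set people("alice")="",people("bob")=""
 set food("alice")="pizza",food("bob")="sushi"
 do peopleFoodCombinations(.people,.food)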


Fuzzing command line arguments [argv]

I have a binary I've been trying to fuzz with AFL. The only thing is, AFL only fuzzes stdin and file inputs, and this binary takes input through its arguments: pass_read [input1] [input2]. I was wondering if there are any methods/fuzzers that allow fuzzing in this manner?
I do not have the source code, so making a harness is not really applicable.
Michal Zalewski, the creator of AFL, states in this post:
AFL doesn't support argv fuzzing, because TBH, it's just not horribly useful in
practice. There is an example in experimental/argv_fuzzing/ showing how to do it
in a general case if you really want to.
Link to the mentioned example on GitHub: https://github.com/google/AFL/tree/master/experimental/argv_fuzzing
There are some instructions in the file argv-fuzz-inl.h (haven't tried myself).
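From memory, the header is meant to be used inside the target's main, roughly like this (so it does require source access, unlike the preload approach mentioned further down; check the comments in argv-fuzz-inl.h for the authoritative usage):
#include "argv-fuzz-inl.h"

int main(int argc, char **argv) {
  /* rebuild argc/argv from stdin so AFL's stdin-based fuzzing reaches the arguments */
  AFL_INIT_ARGV();
  /* ... original program logic using argv[1], argv[2] ... */
  return 0;
}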
Bash-only solution
As an example, let's generate 10 random strings and store them in a file:
cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 10 | head -n 10 > string-file.txt
Next, let's read two lines at a time from string-file.txt and pass them to our application:
# {handle} lets bash pick a free file descriptor and store its number in $handle
exec {handle}< string-file.txt
while read -r string1 <&"$handle" ; do
  read -r string2 <&"$handle"
  pass_read "$string1" "$string2" >> crash_file.txt
done
exec {handle}<&-
We then have any crashes stored in crash_file.txt for further analysis.
This may not be the most elegant solution, but perhaps it gives you an idea of other possibilities if no existing tool fulfills your requirements.
I looked at the AFLplusplus repo on GitHub. Inside AFLplusplus/utils/argv_fuzzing/ there is a Makefile; if you run it, you will get a .so file (a shared library) that you can use to do argv fuzzing, even if you only have the binary. Note that you must use AFL_PRELOAD. You can read more in the README.
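For illustration, the invocation would look roughly like this (argvfuzz64.so is the name I remember the build producing; check your actual Makefile output, and note the seed files should contain the argument list in the NUL-separated format the README describes):
AFL_PRELOAD=/path/to/argvfuzz64.so afl-fuzz -i in -o out -- ./pass_read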

Pari/Gp directing output

Is there an easy, convenient way to direct output in Pari/GP to a file? My aim is to get the full decimal expansion of 2^400000-1, either on screen or in a text file.
(23:37) gp > 2^400000-1
%947 = 996014342993......(4438 digits)......609762267975[+++]
GP terminal output gives this, which is not the goal. Basic output redirection does not work either. Any ideas? Thanks.
(23:38) gp > 2^400000-1 > output.txt
There is a manual online, but it does not say much about output, except for the variable TeXstyle, and I am unsure how to work with that.
Quick and easy is to just do print(2^400000-1), and then you can cut and paste. Otherwise, write(filename, 2^400000-1) if you want it in a file.
Some other possibilities:
writebin(filename,2^400000-1) writes the object's binary structure to a file: this is faster than traditional output (which implies a binary-to-decimal conversion), and loading it into another session will be faster as well. This is useful for a huge atomic write.
C-style output: fileopen, then successive filewrite calls, allows many writes to a file referenced by a descriptor (which avoids re-opening / flushing / closing the file after each write). This is useful for a large write operation done through many tiny writes to a given file, e.g., character by character.
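To make that concrete, here is a short GP sketch of the three variants (the file names are placeholders):
\\ one-shot: write the full decimal expansion to a text file
write("output.txt", 2^400000-1)

\\ binary dump; load it back later with read("dump.bin")
writebin("dump.bin", 2^400000-1)

\\ C-style: keep one descriptor open across many small writes
n = fileopen("output.txt", "w");
filewrite(n, 2^400000-1);
fileclose(n);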

In relative terms, how fast should TCL on Windows 10 be?

I have the latest TCL build from ActiveState installed on a desktop and a laptop, both running Windows 10. I'm new to TCL and a novice developer, and my reason for learning TCL is to enhance my value on the F5 platform. I figured a good first step would be to stop the occasional work I do in VBScript and port that to TCL. Learning the language itself is coming along alright, but I'm worried my project isn't viable due to performance. My VBScripts absolutely destroy my TCL scripts in performance. I didn't expect that outcome, as my understanding was that TCL was so "fast" and that's why it was chosen by F5 for iRules etc.
So the question is, am I doing something wrong? Is the port for Windows just not quite there? Perhaps I misunderstood the way in which TCL is fast and it's not fast for file parsing applications?
My test application is a firewall log parser. Take a log with 6 million hits and find the unique src/dst/port/policy entries and count them, split up into accept and deny. Opening the file and reading the lines is fine: TCL processes 18k lines/second while VBScript does 11k. As soon as I do anything with the data, the tide turns. I need to break the four pieces of data noted above out of the line read and put them in an array. I've "split" the line and done a for-next to read and match each part of the line; that's the slowest. I've done a regexp with subvariables that extracts all four elements in a single line, and that's much faster, but it's twice as slow as doing four regexps with a single variable each and then cleaning the excess data from the match away with trims. But even this method is four times slower than VBScript with ad-hoc splits/for-next matching and trims. On my desktop, I get 7k lines/second with TCL and 25k with VBScript.
Then there's the array. I assume that because my 3-dimensional array isn't a real array, searching through 3x as many entries is slowing it down. I may try to break up the array so it's looking through a third of the data it currently holds. But the truth is, by the time the script gets to the point where there are a couple hundred entries in the array, it's dropped from processing 7k lines/second to less than 2k. My VBScript drops from about 25k lines to 22k. And so I don't see much hope.
I guess what I'm looking for in an answer, for those with TCL experience and general programming experience, is TCL natively slower than VB and other scripts for what I'm doing? Is it the port for Windows that's slowing it down? What kind of applications is TCL "fast" at or good at? If I need to try a different kind of project than reading and manipulating data from files I'm open to that.
edited to add code examples as requested:
while { [gets $infile line] >= 0 } {
    # ... some other commands cut for the sake of space; they don't contribute to the slowness ...
    regexp {srcip=(.*)srcport.*dstip=(.*)dstport=(.*)dstint.*policyid=(.*)dstcount} $line -> srcip dstip dstport policyid
    # the above was unexpectedly slow; the fastest way to extract data I've found so far:
    regexp {srcip=(.*)srcport} $line srcip
    set srcip [string trim $srcip "cdiloprsty="]
    regexp {dstip=(.*)dstport} $line dstip
    set dstip [string trim $dstip "cdiloprsty="]
    regexp {dstport=(.*)dstint} $line dstport
    set dstport [string trim $dstport "cdiloprsty="]
    regexp {policyid=(.*)dstcount} $line a policyid
    set policyid [string trim $policyid "cdiloprsty="]
Here is the array search that really bogs down after a while:
set start [array startsearch uList]
while {[array anymore uList $start]} {
    incr f
    # "key" returns the NAME of the association; uList(key) is the VALUE associated with that name
    set key [array nextelement uList $start]
    if {$uCheck == $uList($key)} {
        ##puts "$key CONDITION MET"
        set flag true
        adduList $uCheck $key $flag2
        set flag2 false
        break
    }
}
Your question is still a bit broad in scope.
F5 has published some commentary on why they chose Tcl and how it is fast for their specific use cases. That is actually quite different from a log-parsing use case, as they do all the heavy lifting in C code (via custom commands) and use Tcl mostly as a fast dispatcher and for a bit of flow control. And Tcl is really good at that compared to various other languages.
For things like log parsing, Tcl is often beaten in performance by languages like Python and Perl in simple benchmarks. There are a variety of reasons for that; here are some of them:
Tcl uses a different regexp style (DFA), which is more robust for nasty patterns, but slower for simple patterns.
Tcl has a more abstract I/O layer than, for example, Python, and usually converts the input to unicode, which has some overhead if you do not disable it (via fconfigure; a one-line example follows this list).
Tcl has proper multithreading instead of a global lock, which costs around 10-20% performance for single-threaded use cases.
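For instance, here's a minimal sketch of that fconfigure call, assuming the log file is plain 8-bit text (pick the encoding that matches your data):
# read the channel with a fixed 8-bit encoding to skip the unicode conversion
fconfigure $infile -encoding iso8859-1 -translation lf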
So how to get your code fast(er)?
Try a more specific regular expression; those greedy .* patterns are bad for performance.
Try to use string commands instead of regexp; a string first followed by string range could be faster than a regexp for these simple patterns.
Use a different structure for that array; you probably want either a dict or some form of nested list (see the sketch after this list).
Put your code inside a proc instead of a toplevel script, and use local variables instead of globals to make the bytecode faster.
If you want, use one thread for reading lines from the file and multiple threads for extracting data, like a typical producer-consumer pattern.
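To illustrate the second and third suggestions, here is a hedged sketch (the helper names extractField and countEntries are mine, the field markers are copied from your regexps, and the accept/deny split is omitted for brevity; untested against your real log format):
# pull out the text between two literal markers using string first / string range
proc extractField {line startTag endTag} {
    set s [string first $startTag $line]
    if {$s < 0} {return ""}
    incr s [string length $startTag]
    set e [string first $endTag $line $s]
    if {$e < 0} {return ""}
    return [string trim [string range $line $s [expr {$e - 1}]]]
}

# count unique src/dst/port/policy combinations in a dict keyed by one flat string
proc countEntries {infile} {
    set counts [dict create]
    while {[gets $infile line] >= 0} {
        set src  [extractField $line "srcip="    "srcport"]
        set dst  [extractField $line "dstip="    "dstport"]
        set port [extractField $line "dstport="  "dstint"]
        set pol  [extractField $line "policyid=" "dstcount"]
        # constant-time hash update replaces the linear scan over the array
        dict incr counts "$src|$dst|$port|$pol"
    }
    return $counts
}
The dict incr line is the key change: it replaces the linear array search with a constant-time hash update, so throughput should no longer degrade as entries accumulate.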

Difference tcl script tkconsole to load gro file in VMD

My problem is simple: I'm trying to write a Tcl script that uses $grofile instead of writing the file name every time I need it.
So, what I did in TkConsole was:
% set grofile "file.gro"
% mol load gro ${grofile}
and, indeed, I succeeded in loading the file.
In the script I have the same lines, but I still get these errors:
wrong # args: should be "set varName ?newValue?"
can't read "grofile": no such variable
I tried to solve my problem with
% set grofile [./file.gro]
and I got these errors:
invalid command name "./file.gro"
can't read "grofile": no such variable
I also tried
% set grofile [file ./file.gro r]
and I got the first error, again.
I haven't found any simple way to avoid using the explicit name of the file I want to load. It seems like you can only use the most trivial but tedious way:
mol load file.gro
mol addfile file.xtc
and so on and so on...
Can you help me with a brief explanation of why I can load the file and use the variable in the TkConsole, but not in the Tcl script?
Also, if you can see where my mistake is, I will appreciate it.
I apologize if this is basic, but I could not find any answer. Thanks.
Here is the head of my script:
set grofile "sim.part0001_protein_lipid.gro"
set xtcfile "protein_lipid.xtc"
set intime "0-5ms"
set system "lower"
source view_change_render.tcl
source cg_bonds.tcl
mol load gro $grofile xtc ${system}_${intime}_${xtcfile}
It was solved, thanks for your help.
You may think you've typed the same thing, but you haven't. I'm guessing that your real filename has spaces in it, and that you've not put double-quotes around it. That will confuse set as Tcl's general parser will end up giving set more arguments than it expects. (Tcl's general parser does not know that set only takes one or two arguments, by very long standing policy of the language.)
So you should really do:
set grofile "file.gro"
Don't leave the double quotes out if you have a complicated name.
Also, this won't work:
set grofile [./file.gro]
because […] is used to indicate running something as a command and using the result of that. While ./file.gro is actually a legal command name in Tcl, it's… highly unlikely.
And this won't work:
set grofile [file ./file.gro r]
Because the file command requires a subcommand as its first argument. The word you give is not one of the standard file subcommands, and none of them accept those arguments anyway; they look suitable for open (though that returns a channel handle suitable for use with commands like gets and read).
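If what you actually wanted was to read the file's contents into a variable, the standard pattern is this (generic Tcl, nothing VMD-specific):
# open the file, slurp its whole contents, and close the channel
set chan [open "./file.gro" r]
set contents [read $chan]
close $chan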
The TkConsole is actually pretty reasonable as quick-and-dirty terminal emulations go (given that it omits a lot of the complicated cases). The real problem is that you're not being consistently accurate about what you're really typing; that matters hugely in most programming languages, not just Tcl. You need to learn to be really exacting; cut-n-paste when creating a question helps a lot.

What does "the composition of UNIX byte streams" mean?

On the opening page of the book "Lisp in Small Pieces", there is a paragraph that goes like this:
Based on the idea of "function", an idea that has matured over
several centuries of mathematical research, applicative languages are
omnipresent in computing; they appear in various forms, such as the
composition of Un*x byte streams, the extension language for the Emacs
editor, as well as other scripting languages.
Can anyone elaborate a bit on "the composition of Unix byte streams"? What does it mean, and how is it related to applicative/functional programming?
Thanks,
/bruin
My guess is that this is a reference to something like a pipe under Unix/Linux.
cal | wc
The symbol | is what creates a pipe between two applications; a pipe is a feature provided by the kernel, so you can use pipes wherever the applications are written against this kind of kernel API.
In this example, cal is just the utility that prints a calendar, and wc is a utility that counts lines, words, and characters in the input you pass to it. Here the input is the result of piping cal into wc, which makes things easier for you because it's more functional: you only care about what each application does. You don't care, for example, about what the name of an argument is or where to allocate a temporary file to store the input/output in between.
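To make the function-composition analogy concrete: cal | wc reads like the application wc(cal()), and longer pipelines compose more stages the same way. A sketch (access.log is a made-up file name; each stage maps one byte stream to another):
# "top 10 most frequent first fields", composed from small stream functions
cut -d' ' -f1 access.log | sort | uniq -c | sort -rn | head -n 10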
Without pipes, you would have to do something like
cal > temp.txt
wc temp.txt
rm temp.txt
to obtain pretty much the same information. Also this second solution could possibly generate problems, for example what if temp.txt already exists ? Following what kind of rationale you will tell to your script to pick a name for your temporary file ? What if another process modifies your file in between the 2 calls to cal and wc ?