.tbc to .tcl file - tcl

This is a strange question, and I searched but couldn't find any satisfactory answer.
I have a compiled Tcl file, i.e. a .tbc file. Is there a way to convert this .tbc file back to a .tcl file?
I read here that someone mentioned ::tcl_traceCompile and said it could be used to disassemble the .tbc file. But being a novice Tcl user, I am not sure whether this is possible or, more to the point, how exactly to use it.
I know that the Tcl compiler doesn't compile all statements, so those statements can be seen as-is in the .tbc file, but can we get the whole Tcl source back from a .tbc file?
Any comment would be great.

No, or at least not without a lot of work; you're doing something that quite a bit of effort was put in to prevent (the TBC format is intended for protecting commercial code from prying eyes).
The TBC file format is an encoding of Tcl's bytecode, which is not normally saved at all; the TBC stands for Tcl ByteCode. The TBC format data is only produced by one tool, the commercial “Tcl Compiler” (originally written by either Sun or Scriptics; the tool dates from about the time of the transition), which really is a leveraging of the built-in compiler that every Tcl system has together with some serialization code. It also strips as much of the original source code away as possible. The encoding used is unpleasant; you want to avoid writing your own loader of it if you can, and instead use the tbcload extension to do the work.
You'll then need to use it with a custom build of Tcl that disables a few defensive checks so that you can disassemble the loaded code with the tcl::unsupported::disassemble command (which normally refuses to take apart anything coming from tbcload); that command exists from Tcl 8.5 onwards. After that, you'll have to piece together what the code is doing from the bytecodes; I'm not aware of any tools for doing that at all, but the bytecodes are mostly fairly high level so it's not too difficult for small pieces of code.
There's no manual page for disassemble; it's formally unsupported after all! However, that wiki page I linked to should cover most of the things you need to get started.
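To get a feel for the output, tcl::unsupported::disassemble works out of the box on code you do have the source for; the custom build is only needed for tbcload-originated code. For example:
proc greet {name} {
    return "Hello, $name!"
}
puts [tcl::unsupported::disassemble proc greet]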

I can say partially "yes", and conditionally too. The condition is that the original Tcl code is written in a namespace, with the procs defined within the namespace's curly braces. Then you can source the .tbc file in tkcon/wish and inspect the code using info procs and the namespace command. Of course, you need to know the namespace name, but that can also be found.
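A hedged sketch of that inspection; the file name myapp.tbc and the namespace ::myns are invented, and the tbcload package must be available for the source to work:
package require tbcload
source myapp.tbc
puts [namespace children ::]    ;# discover candidate namespace names
puts [info procs ::myns::*]     ;# list the procs defined inside one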

The use of packages to parse command arguments employing options/switches?

I have a couple of questions about adding options/switches (with and without parameters) to procedures/commands. I see that tcllib has cmdline, and Ashok Nadkarni's book on Tcl recommends the parse_args package, stating that using Tcl itself to handle the arguments is much slower than this package's C implementation. The Nov. 2016 paper on parse_args states that Tcl script methods can be 50 times slower.
Are Tcl methods really significantly slower? Is there some minimum threshold number of options to be reached before using a package?
Is there any reason to use parse_args (not in tcllib) over cmdline (in tcllib)?
Can both be easily included in a starkit?
Is this included in 8.7a now? (I'd like to use 8.7a but I'm using Manjaro Linux and am afraid that adding it outside the package manager will cause issues that I won't know how to resolve or even just "undo").
Thank you for considering my questions.
Are Tcl methods really significantly slower? Is there some minimum threshold number of options to be reached before using a package?
Potentially. Procedures have overhead to do with managing the stack frame and so on, and code implemented in C can avoid a number of overheads due to the way values are managed in current Tcl implementations. The difference is much more profound for numeric code than for string-based code, as the cost of boxing and unboxing numeric values is quite significant (strings are always boxed in all languages).
As for which is the one to use, it really depends on the details, as you are trading off flexibility for speed. I've never known it to be a problem for command-line parsing.
(If you ask me, fifty options isn't really that many, except that it's quite a lot to pass on an actual command line. It might be easier to design a configuration file format — perhaps a simple Tcl script! — and then to just pass the name of that in as the actual argument.)
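For instance, a hedged sketch of such a configuration file, itself just a Tcl script (all names invented):
# config.tcl -- sourced by the tool at startup
set cfg(inputDir)  /data/runs
set cfg(maxJobs)   8
set cfg(verbose)   1
The application then does source config.tcl (ideally in a safe interpreter) and reads the cfg array, instead of parsing dozens of switches.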
Is there any reason to use parse_args (not in tcllib) over cmdline (in tcllib)?
Performance? Details of how you describe things to the parser?
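For comparison, a minimal cmdline sketch from tcllib (the option names here are invented):
package require cmdline
set options {
    {verbose        "print progress messages"}
    {level.arg 1    "set the detail level"}
}
array set params [cmdline::getoptions argv $options]
puts "verbose=$params(verbose) level=$params(level)"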
Can both be easily included in a starkit?
As long as any C code is built with Tcl stubs enabled (typically not much more than defining USE_TCL_STUBS and linking against the stub library), it can go in a starkit as a loadable library. Using the stubbed build means that the compiled code doesn't assume exactly which version of the Tcl library is present or what its path is; those are assumptions that are usually wrong with a starkit.
Tcl-implemented packages can always go in a starkit. Hybrid packages need a little care for their C parts, but are otherwise pretty easy.
Many packages either always build in stubbed mode or have a build configuration option to do so.
Is this included in 8.7a now? (I'd like to use 8.7a but I'm using Manjaro Linux and am afraid that adding it outside the package manager will cause issues that I won't know how to resolve or even just "undo").
We think we're about a month from the feature freeze for 8.7, and builds seem stable in automated testing so the beta phase will probably be fairly short. The list of what's in can be found here (filter for 8.7 and Final). However, bear in mind that we tend to feel that if code can be done in an extension then there's usually no desperate need for it to be in Tcl itself.

Need help downloading and reading a zipped CSV file in memory with Clojure

I have an external site from which I want to download a zipped CSV file. Currently, I'm downloading it unzipped, saving it to disk, then unzipping it, saving the unzipped file to disk, and then reading the unzipped file with the CSV reader. There are a lot of useless steps in this process that can be trimmed out, so I set out to do just that.
This amazing answer helped me to get myself going. I tried to use the first option linked there (GZIPInputStream), but I get a "Not GZIP format" error, so I suppose I have to go to the second option.
This is my current code, and it does what I want it to do:
(require '[clj-http.client]
         '[clojure.data.csv]
         '[clojure.java.io])
(import 'java.util.zip.ZipInputStream)

(defn download-zipped-stream! []
  (:body (clj-http.client/get "https://www.example.com" {:as :stream})))

(with-open [stream (ZipInputStream. (download-zipped-stream!))]
  (.getNextEntry stream)  ;; position the stream on the first zip entry
  (doall (clojure.data.csv/read-csv (clojure.java.io/reader stream) :separator \;)))
I literally got to this by trial and error. There are mainly three things I'd like to change / understand about this code.
Ideally, I would want to break my code into two parts: one to download and unzip the content, returning a stream - the reason being that I want to decide later whether to read it as a CSV directly or write it to disk (I don't want to lose this option, because, during development, it is much easier to read a pre-downloaded CSV file than to download the big content every single time). It turns out that, if I try to access the stream outside of the with-open call, I get a "stream closed" error (which, from what I understand, makes total sense).
In the above code, I have to call this .getNextEntry, or I get an empty list. As someone who is striving to write functional code, this bothers me because, from what I can understand, I'm dealing with state here - my stream object looks mutable, which is something I really don't want. Is there a way to work around this step and straight-up not have it there?
I tried to call the read-csv method directly on the stream object, but read-csv doesn't really know how to handle ZipInputStreams, apparently. Seeing this, I simply and hopefully threw an io/reader call in between, and it worked. I don't know if this is the best approach, though. Is it correct?
I'm quite new to Clojure, and I'm completely clueless about Java in general, so, as you can see, my knowledge of those stream objects is pretty limited. I tried to read something about them on the Java side, but I quit because I was not sure how much of it would be useful for someone learning Clojure, so any pointers are also appreciated.
I think you are on the right approach. Suggestions to consider:
Consider using wget to manually download the *.csv.gz file to your local disk. Then, just open that local file instead of using clj-http.client/get.
I haven't played much with ZipInputStream, but if using .getNextEntry() seems to be required, just go with it.
The examples for read-csv show using a Reader to give access to the input file, so this is the expected behavior.
This template project shows how I like to organize a Clojure project & source code. Be sure to peruse the list of documentation provided.
Don't forget to utilize cljdoc.org for looking up Clojure library API docs. For example, see the API docs for data.csv.
Update
You may also want to review this answer.
Use https://github.com/techascent/tech.ml.dataset, optionally with https://scicloj.github.io/tablecloth/index.html (a dplyr-like API for TMD).
It also has the advantage of being extremely fast and able to handle datasets that can't fit in memory, and it talks SQL, Arrow, et al. Join the conversation about it here:
https://clojurians.zulipchat.com/#narrow/stream/151924-data-science/topic/tech.2Eml.2Edataset

How do I find where a function is declared in Tcl?

I think this is more of a Tcl configuration question rather than a Tcl coding question...
I inherited a whole series of Tcl scripts that are used within a simulation tool that my company built in-house. In my scripts, I'm finding numerous instances where there are function calls to functions that don't seem to be declared anywhere. How can I trace the path to these phantom functions?
For example, rather than use source, someone built a custom include function that they named INCLUDE. Tclsh obviously balks when I try to run the scripts there, but within my simulation software they run fine.
I've tried grep-ing through the entire simulation software for INCLUDE, but I'm not having any luck. Are there any other obvious locations outside the simulation software where a Tcl function might be defined?
The possibilities:
Within your software (you have checked for this).
Within some other package included by the software. Check whether the environment variable TCLLIBPATH is set, and also whether the simulation software sets TCLLIBPATH itself. It holds a list of directories to search for Tcl packages, and you will need to search the packages that are located outside the main source tree. The locations may also be specified in a pkgIndex.tcl file, so check any pkgIndex.tcl files and look for locations outside the main source tree.
Within an unknown command handler (a sketch follows below). This could be in your software or within some other package. You should be able to find some code that processes the INCLUDE statement.
Within a binary package. These are shared libraries that are loaded by Tcl. If this is the case, there should be some C code used to build the shared library that can be searched.
Since you say there are numerous instances of unknown functions, my first guess is that you have not found all the directories that packages are loaded from. But an "unknown" command handler is also a possibility.
Edit: One more possibility I forgot. Check whether your software sets the auto_path variable, and check any directories added to auto_path for other packages.
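For illustration, here is roughly how an unknown command handler can supply a command like INCLUDE on the fly; the details are invented, not your tool's actual code:
rename unknown _original_unknown
proc unknown {args} {
    if {[lindex $args 0] eq "INCLUDE"} {
        # invented behaviour: treat INCLUDE as a plain source
        return [uplevel 1 [list source {*}[lrange $args 1 end]]]
    }
    uplevel 1 [list _original_unknown {*}$args]
}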
This isn't a great answer for you, but I suspect it is the best you're going to get...
The procedure could be defined in a great many places. Your best bet for finding it is to use a tool like findstr (on Windows) or grep -R (on POSIX platforms) to search across all the relevant source files. But that still might not help! It might not be a procedure but instead a general command, which could be implemented in C rather than as a procedure, or it could be defined in a packaged application archive (which are usually awkward to look inside). There are other types of script-implemented command too, which could make things awkward. Generally, searching and investigating is your best bet, but it might not work.
Tcl doesn't really differentiate strongly between different types of command except in some introspection operations. If you're lucky, you could find that info body tells you the definition of the procedure (and info args and info default tell you about the arguments) but that won't help with other command types at all. Tcl 8.7 will include a command (info cmdtype) that would help a lot with narrowing down what to do next, but that's no use to you now and it definitely doesn't exist in older versions.
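If you can reach an interactive prompt inside the simulation tool itself (where INCLUDE works), the introspection commands are worth a try:
puts [info commands INCLUDE]     ;# does the command exist in this interp?
if {[llength [info procs INCLUDE]]} {
    puts [info args INCLUDE]     ;# if it's a proc, its arguments...
    puts [info body INCLUDE]     ;# ...and its definition
}
puts $auto_path                  ;# directories searched for packages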

How should I manage too many TCL procedures?

My team and I have been working on a flow of multiple tools for a while now.
We keep adding new procedures, either to an existing file or in a new file in the same directory. There are also a lot of nested procedures; one calls others.
The number of procedures will only keep growing, and the flow involves at least 10 people who love to do their own things.
My question is, how would we go about managing all these procedures in a tidy manner?
We assume you follow good practices with general software engineering (keeping files in source control, etc.) as without those you're stuck anyway.
Tcl doesn't really support nested procs; you can call proc from inside another procedure, but it doesn't do any kind of scoping.
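A quick demonstration of that point: the inner proc below ends up global, not scoped inside the outer one.
proc outer {} {
    proc inner {} { return "hi" }   ;# created when outer first runs
}
outer
puts [info procs inner]             ;# prints "inner" -- it's a global proc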
You should be thinking in terms of dividing up your code into pieces of “coherent API”. What exactly that is depends on your application, but it is only rarely a single procedure; a particular dialog box or screen is a much more useful unit. That might end up as one procedure, but it's often several related ones.
Once you've identified these coherent pieces, they form the contents of what you put in a file, typically one coherent piece per file, though if the file is rather long when you do that, using a group of files instead (probably in their own directory) makes a lot of sense. At the same time, you probably should make the variables and commands defined by each coherent piece be all in a Tcl namespace, which isolates the piece a little bit from the rest of the world, mostly to stop code from treading on the toes of other code.
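As a hedged sketch (all names invented), one coherent piece living in its own namespace might look like:
namespace eval ::mytool::report {
    namespace export render
    variable defaultWidth 80

    proc render {data} {
        variable defaultWidth
        # ...format $data into lines at most $defaultWidth wide...
    }
}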
Now that you've done that, and if you've got what you think is a stable API to your coherent piece, you can make that piece into a Tcl package. That's just done by giving it a higher-level name and version number; you put this in one of your files in the coherent piece:
package provide YourPackageName 1.0
and then (usually in the same directory) you make a pkgIndex.tcl file with contents like this:
package ifneeded YourPackageName 1.0 [list source [file join $dir yourFilename.tcl]]
That is, it says that to get YourPackageName version 1.0 in a Tcl interpreter, you source the file $dir/yourFilename.tcl; the $dir is a convenience in package index files that refers to the directory containing the current package index file. Then the rest of your code can stop thinking about “reading the right files”, and start thinking in terms of “use this defined API”. (This is great if you then choose to start implementing the package using a mixture of Tcl and C or even pure C code; a change to the index file to use load of the right thing and everything else can be oblivious.) It does that by doing:
package require YourPackageName
# Give the version if necessary, of course
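(Should the piece later gain a compiled part, only the index line needs changing; a hedged sketch:
package ifneeded YourPackageName 1.1 [list load [file join $dir yourlib[info sharedlibextension]]]
and every caller's package require stays exactly the same.)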
Then write yourself some documentation (even if it is just listing the entry point commands into the package) and tests, and you've migrated to being a very well behaved piece of code indeed.
There are some additional techniques that can help you in some cases with making coherent pieces. In particular, if you're using an OO system like TclOO, iTcl, or XOTcl, each class is almost certainly a candidate coherent piece. Also, it's sometimes better to put several related coherent pieces together in a package. However, there's absolutely no hard and fast rule on that.
Finally, Tcl uses a bunch of techniques to find packages, but they mostly come down to looking using the auto_path global variable. In your application main script, it's best (if the rest of your code is mostly in the library directory) to use something like this as one of the first steps:
lappend auto_path [file join [file dirname [info script]] library]
You can also gather the contents of many pkgIndex.tcl files in one place, provided you take into account any pathname changes needed from moving things around.
Regarding Tcl, you can look into creating packages and namespaces, as described above. Let me know if that helps; more details can be provided.

How to analyze binary file?

I have a binary file. I don't know how it's formatted; I only know it comes from Delphi code.
Is there any way to analyze a binary file?
Is there any "pattern" for analyzing and deserializing the binary content of a file with an unknown format?
Try these:
Deserialize the data: analyze how your exe was compiled (try File Analyzer). Try to deserialize the binary data with the language discovered, then serialize it into a language-independent format (such as XML) that every programming language can understand.
Analyze the binary data: try saving several versions of the file with small variations and use a diff program to analyze the meaning of every bit with a hex editor. Use this in conjunction with binary hacking techniques (like How to Crack a Binary File Format by Frans Faase).
Reverse engineer the application: try recovering code using reverse engineering tools for the programming language used to build the app (found with File Analyzer). Otherwise, use a disassembler analysis tool like IDA Pro.
For my hobby project I had to reverse engineer some old game files. My approaches were:
Have a good hex editor.
Look for readable words in the binary file and note how they are distributed. If the distance between them is constant, you know it is a listing.
Look for 2-3 consecutive zero bytes; they might indicate an int32 value.
Some dwords might be pointers into the file.
Try to identify reoccurring patterns in the file.
Seeing lots of C0-CF might indicate RLE compressed data.
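If you'd rather script the poking around than do it all by eye in the hex editor, any language with binary unpacking will do; a hedged Tcl sketch (the filename and offset are invented):
set f [open mystery.bin rb]
set data [read $f]
close $f
binary scan [string range $data 0 15] H32 hex
puts "first 16 bytes: $hex"
binary scan $data @8i value       ;# little-endian int32 at offset 8 (a guess)
puts "dword at offset 8: $value"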
I've developed Hexinator (Windows & Linux) and Synalyze It! (macOS) exactly for this purpose. These applications let you view binary files as in other hex editors, but additionally you can create a "grammar" with the specifics of a binary file format. The grammar contains all the building blocks and is used to parse the file automatically.
Thus you can keep the knowledge you gain in the analysis and apply it to multiple files simultaneously. You can also color-code the bits and pieces of file formats for a quick overview in the hex editor.
The parsing results are displayed in a tree view where you can also modify the files easily (applying endianness et cetera).
Reverse engineering a binary file when you have some idea of what it represents is a very time consuming process. If you have no idea what it is then it will be even harder.
It is possible though, but you have to have a pretty good reason for doing so.
The first step would be to open it up in a hex editor of your choice and see if you can find any English text to point you in the direction of what the file is even supposed to represent. From there, Google "reverse engineering binary files"; there are much more knowledgeable people than me who have written guides about it.
The "strings" program from GNU binutils is very useful. It will print the strings of printable characters in a file, quite often giving a clue to what a file contains or a program does.
If the data represents serialized Delphi objects, you should start reading about the Delphi serialization process. If that's the case, I think your best bet would be to load it using Delphi and continue your analysis from the IDE. Some information about Delphi serialization can be found here.
EDIT: if the file does contain serialized Delphi objects, then you should write a small Delphi program that loads it and "converts" the data yourself to something neutral, like XML. If you manage to do this, you should check whether Delphi supports serializing to XML directly. Then you could access those objects from any language.
The unix "file" command is really useful - I don't know if there is anything like it in windows. You run it like this:
file myfile.ext
And it spits out a text description based on the magic numbers and data contained therein.
It is probably included in Cygwin.
If you have access to the application that creates the file, you can apply changes to the application, then save the file and see the effects (Keep in mind that numbers are probably stored in little endian):
First, create the file repeatedly. If the files are not binary-equal, the current date/time is probably stored in the area where the differences occur.
Maybe you want to repeat that with the software running under different environments, to see if OS version etc are stored, but this is rather unusual.
Next you can try to change single variables and create several files that only differ in the value of this variable. This helps you identify where this variable is stored.
That way you can also exclude variables that are not stored in the file: If you change them, but the files created are identical, they are not stored.
In order to test the hypotheses you worked out with the steps above, edit one of the files and have the application read it.
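To automate the comparison step, any diff tool over hex dumps works; as a hedged sketch, a byte-level comparison in Tcl (the filenames are placeholders):
proc diffBytes {fileA fileB} {
    set cha [open $fileA rb]; set a [read $cha]; close $cha
    set chb [open $fileB rb]; set b [read $chb]; close $chb
    set n [expr {min([string length $a], [string length $b])}]
    for {set i 0} {$i < $n} {incr i} {
        set ca [string index $a $i]
        set cb [string index $b $i]
        if {$ca ne $cb} {
            puts [format "offset %d: %02x -> %02x" $i [scan $ca %c] [scan $cb %c]]
        }
    }
}
diffBytes version1.bin version2.bin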
If you don't have access to the application itself, I suggest that you forget about it and find another way to solve your problem. There is a very high probability that it will be faster...
If file does not give a meaningful answer, you may want to try TRiD by Marco Pontello to determine whether your data is stored in a known format.
Get the Delphi application and open it in the IDA Pro freeware version; find where it writes the file, and decode how it writes the file that way.
Unless it's plain text.
Do you know the program that uses it? If so, you can hook that program's write-to-file function and get an idea of what data it's writing, the size of the data, and where.
More Info: http://www.codeproject.com/KB/DLL/Win32APIHooking_Trouble.aspx
Unlike traditional hex editors which only display the raw hex bytes of a file, 010 Editor can also parse a file into a hierarchical structure using a Binary Template. The results of running a Binary Template are much easier to understand and edit than using just the raw hex bytes.
http://www.sweetscape.com/010editor/
Try to open it in a hex editor and analyse.