Tcl: no such file or directory

I am running the script nmrCube.tcl to generate a 3D box from NMR data.
I initially had a problem with a library, which is now sorted out.
While running the script I get the following error (even though the file is indeed there):
Error in startup script: couldn't read file "“./nmrCube.tcl”": no such file or directory

Tcl regards “curly quotes” as entirely ordinary characters. They're not alphanumerics or one of Tcl's metacharacters, so they follow the same basic rules as characters like / and . and so on.
You probably don't want to use them in a Tcl script except in text for display to the user. You might want to use the "straight quotes" instead, which are metacharacters for Tcl. If your editor insists on converting those to fancy quotes, find another text editor. (You'd have problems using it for virtually any other programming language as well.)
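If the fancy quotes have already crept into a script (or into a command line you copied from a web page), one fix is to normalize them back mechanically. Here's a minimal Python sketch that does that; the helper is hypothetical, not part of any Tcl tooling, and the default filename is just the one from the question:

import sys

# Map "smart" quotes back to their straight ASCII equivalents.
SMART_TO_STRAIGHT = {
    "\u201c": '"',  # left double quotation mark
    "\u201d": '"',  # right double quotation mark
    "\u2018": "'",  # left single quotation mark
    "\u2019": "'",  # right single quotation mark
}

def straighten_quotes(path):
    with open(path, encoding="utf-8") as f:
        text = f.read()
    for smart, straight in SMART_TO_STRAIGHT.items():
        text = text.replace(smart, straight)
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

if __name__ == "__main__":
    straighten_quotes(sys.argv[1] if len(sys.argv) > 1 else "nmrCube.tcl")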

Why does my GTFS data contain "invisible" line breaks?

So I've been looking for a way to import GTFS data into an SQL database for my application. I found a solution available on GitHub.
But this is written in Python. I don't think I can use it directly in my Windows application. Please correct me if I am wrong here.
Still, I have no issues with understanding the logic behind the solution and creating my own 'parser'.
So I opened the GTFS data file "calendar_dates.txt" in Notepad and found its content confusing. It looked like:
service_id,date,exception_type1,20151012,11,20151111,12,20150822,12,20150829,12.....
You can see that it's confusing when there are no line breaks.
But when I pasted the data here to show you, it automatically formatted to:
service_id,date,exception_type
1,20151012,1
1,20151111,1
2,20150822,1
2,20150829,1
2
Now it clearly makes sense! (There are line breaks in between for parsing.)
But I don't understand. Is Notepad showing it wrong? How do I see the data "properly" then, in order to write my own parser?
Most likely your GTFS data is written with UNIX end-of-line characters (linefeed only) as opposed to MS-DOS/Windows characters (carriage return followed by linefeed). This is permitted by the GTFS spec, which says:
Each line must end with a CRLF or LF linebreak character.
Most application software available for Windows, including Notepad, recognizes only Windows end-of-line characters, and opening a file created on UNIX will show the entire contents as a single line, as you've observed. However, tools like Notepad++ that are meant for developers, as well as most programming libraries (such as those meant to parse CSV files), are usually smart enough to recognize both formats and handle them appropriately.
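For example, Python's built-in csv module copes with both conventions (a minimal sketch; opening with newline="" is the documented way to let the reader handle line endings itself):

import csv

# The csv module recognizes both LF and CRLF line endings.
with open("calendar_dates.txt", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        print(row)  # e.g. ['1', '20151012', '1']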
Wikipedia has more information about end-of-line representations across operating systems if you're interested.
Finally, I'll mention that I've recently posted to GitHub my own GTFS-to-SQLite loading tool, which is written in C and uses libcsv to parse GTFS data. If you're developing in a lower-level language than Python, you may find it useful as an example.
First of all, copy your related GTFS file (routes, shapes, etc.) and paste it into an online text editor (for example: http://www.editpad.org/).
Then copy it from the online text editor and paste it back into your original .txt file.

OpenLDAP #SUFFIX# notation

I've started researching LDAP and have been following a tutorial to at least start familiarizing myself with it. While doing that, I noticed an odd (to me) notation in the /usr/share/slapd/slapd.conf file on my computer, namely #SUFFIX# and other tokens surrounded by the same sign. I think this is supposed to be interpreted as some kind of variable or substitution point, but searching on Google turned up nothing, as it ignores special characters.
How should I interpret this and where can I change it?
Sounds like you're talking about the slapd.conf template from the Debian package. #SUFFIX# and the other tokens in that file are replaced at configure time by the maintainer scripts.
(The package also contains a similar template for dynamic configuration; slapd.conf is no longer used.)
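To illustrate the idea (a hypothetical sketch only; the real maintainer scripts are shell, and the token names beyond #SUFFIX# are made up here), the substitution is essentially a search-and-replace over the template:

import re

# Hypothetical values; in reality they come from the debconf answers
# given when the package is configured.
values = {
    "SUFFIX": "dc=example,dc=com",
    "BACKEND": "mdb",
}

with open("/usr/share/slapd/slapd.conf") as f:
    template = f.read()

# Replace every #TOKEN# with its configured value,
# leaving unknown tokens untouched.
config = re.sub(r"#([A-Z]+)#", lambda m: values.get(m.group(1), m.group(0)), template)
print(config)

So the file as shipped isn't valid slapd configuration by itself; it's only a template until those tokens are filled in.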

How to edit built-in command behavior

I want to edit find_under_expand (Ctrl+D) to treat hyphenated words as single words. So when I try to replace all instances of var a, it shouldn't match the substring "a" in words like a-b, which it currently does.
I'm assuming find_under_expand wraps your current selection in regex boundaries like this: \ba\b
I need it to wrap in something like this: \b(?<!-)a(?!-)\b
Is the find_under_expand command's source available to edit? Or do I have to rewrite the whole thing? I'm not sure where to begin.
Sublime's commands are implemented in one of several ways: as macros, as plugins, and internally as part of the compiled program (probably as C++). The default macros and plugins can be found in the Packages/Default directory in ST2 (where Packages is the directory opened when selecting Preferences -> Browse Packages...), or zipped in the Installed Packages/Default.sublime-package file in ST3, extractable using @skuroda's excellent PackageResourceViewer plugin, available via Package Control. Macros have .sublime-macro extensions, while plugins are written in Python and have .py extensions.
I searched all through the Default package in ST3 (things are generally the same as in ST2), and was unable to find a macro or .py file that included the find_under_expand command, or FindUnderExpand, which is the convention when naming command classes in plugins. Therefore, I strongly suspect that this command is internal to Sublime, probably written in C++ and linked into the executable or in a .dll|.dylib|.so library.
So, it doesn't look like there's an existing file that you could easily modify to adjust for your negative lookahead/lookbehind patterns (I assume that's what those are, my regex is a bit rusty...). Instead, you'll have to implement your own plugin from scratch that reads the "word_separators" value in your settings file, which the current implementation of find_under_expand doesn't seem to be doing, judging from your previous question and my own testing. Theoretically, this shouldn't be too terribly difficult - you can just open up a quick panel where the user enters the pattern/regex to be searched for, and you can just iterate through the current view looking for matches and highlighting/selecting them.
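For instance, here's a minimal sketch of such a plugin (untested, and the command name is made up; view.word, view.find_all, and the selection calls are standard Sublime plugin API):

import re

import sublime
import sublime_plugin

class SelectExactWordCommand(sublime_plugin.TextCommand):
    # Hypothetical stand-in for find_under_expand that refuses to
    # match inside hyphenated words.
    def run(self, edit):
        view = self.view
        word = view.substr(view.word(view.sel()[0]))
        if not word.strip():
            return
        # \b alone matches inside "a-b"; these lookarounds also
        # reject matches that touch a hyphen.
        pattern = r"(?<![\w-])" + re.escape(word) + r"(?![\w-])"
        regions = view.find_all(pattern)
        if regions:
            view.sel().clear()
            for region in regions:
                view.sel().add(region)

Save it under Packages/User and bind a key to "select_exact_word" to try it out.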
Good luck!

.tbc to .tcl file

This is a strange question, and I searched but couldn't find a satisfactory answer.
I have a compiled Tcl file, i.e. a .tbc file. Is there a way to convert this .tbc file back to a .tcl file?
I read here that someone mentioned ::tcl_traceCompile and said it could be used to disassemble the .tbc file. But being a novice Tcl user, I am not sure whether that is possible, let alone how exactly to use it.
I know that the Tcl compiler doesn't compile all statements, so those statements can be seen directly in the .tbc file, but can we get the whole Tcl source back from a .tbc file?
Any comment would be great.
No, or at least not without a lot of work; you're doing something that quite a bit of effort was put in to prevent (the TBC format is intended for protecting commercial code from prying eyes).
The TBC file format is an encoding of Tcl's bytecode, which is not normally saved at all; the TBC stands for Tcl ByteCode. The TBC format data is only produced by one tool, the commercial “Tcl Compiler” (originally written by either Sun or Scriptics; the tool dates from about the time of the transition), which really is a leveraging of the built-in compiler that every Tcl system has together with some serialization code. It also strips as much of the original source code away as possible. The encoding used is unpleasant; you want to avoid writing your own loader of it if you can, and instead use the tbcload extension to do the work.
You'll then need to use it with a custom build of Tcl that disables a few defensive checks so that you can disassemble the loaded code with the tcl::unsupported::disassemble command (which normally refuses to take apart anything coming from tbcload); that command exists from Tcl 8.5 onwards. After that, you'll have to piece together what the code is doing from the bytecodes; I'm not aware of any tools for doing that at all, but the bytecodes are mostly fairly high level so it's not too difficult for small pieces of code.
There's no manual page for disassemble; it's formally unsupported after all! However, that wiki page I linked to should cover most of the things you need to get started.
I can say partially "yes", conditionally. The condition is that the original Tcl code is written in a namespace, with procs defined within the namespace's curly braces. Then you can source the .tbc file in tkcon/wish and inspect the code using the info procs and namespace commands. Of course, you need to know the namespace name; however, that too can be found.

How to analyze a binary file?

I have a binary file. I don't know how it's formatted; I only know it comes from Delphi code.
Is there any way to analyze a binary file?
Is there any "pattern" for analyzing and deserializing the binary content of a file with an unknown format?
Try these:
Deserialize the data: analyze how your exe was compiled (try File Analyzer). Try to deserialize the binary data with the language discovered. Then serialize it in an XML format (language-independent) that every programming language can understand.
Analyze the binary data: try saving several versions of the file with small variations, and use a diff program together with a hex editor to work out the meaning of every bit (see the sketch after this list). Use this in conjunction with binary hacking techniques (like How to Crack a Binary File Format by Frans Faase).
Reverse engineer the application: try recovering code using reverse engineering tools for the programming language used to build the app (found with File Analyzer). Otherwise, use a disassembler analysis tool like IDA Pro Disassembler.
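A minimal Python sketch of that diffing step (the filenames are hypothetical):

# Report the offsets at which two saved versions of a file differ,
# to locate the bytes that encode the field you changed.
def diff_offsets(path_a, path_b):
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    for offset, (x, y) in enumerate(zip(a, b)):
        if x != y:
            print("offset 0x%08x: %02x -> %02x" % (offset, x, y))
    if len(a) != len(b):
        print("lengths differ: %d vs %d bytes" % (len(a), len(b)))

diff_offsets("save_v1.bin", "save_v2.bin")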
For my hobby project I had to reverse engineer some old game files. My approaches were:
Have a good hex editor.
Look for readable words in the binary file. Note their distribution: if the distance between them is constant, you know it is a listing.
Look for 2-3 consecutive zero bytes; they might indicate an int32 value (see the sketch after this list).
Some dwords might be pointers into the file.
Try to identify recurring patterns in the file.
Seeing lots of bytes in the C0-CF range might indicate RLE-compressed data.
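As an example of checking the int32/pointer hypotheses programmatically (a sketch only; the filename and the scanned range are made up):

import struct

with open("unknown.bin", "rb") as f:
    data = f.read()

# Interpret the first few dwords as little-endian int32 values and
# flag any that would be a plausible offset into the file itself.
for pos in range(0, min(32, len(data) - 3), 4):
    (value,) = struct.unpack_from("<i", data, pos)
    hint = "(plausible file offset)" if 0 <= value < len(data) else ""
    print("0x%04x: %12d %s" % (pos, value, hint))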
I've developed Hexinator (Windows & Linux) and Synalyze It! (macOS) exactly for this purpose. These applications let you view binary files as in other hex editors, but additionally you can create a "grammar" describing the specifics of a binary file format. The grammar contains all the building blocks and is used to parse the file automatically.
Thus you can keep the knowledge you gain in the analysis and apply it to multiple files simultaneously. You can also color-code the bits and pieces of file formats for a quick overview in the hex editor.
The parsing results are displayed in a tree view where you can also modify the files easily (applying endianness et cetera).
Reverse engineering a binary file when you have some idea of what it represents is a very time-consuming process. If you have no idea what it is, then it will be even harder.
It is possible though, but you have to have a pretty good reason for doing so.
The first step would be to open it up in a hex editor of your choice and see if you can find any English text to point you in the direction of what the file is even supposed to represent. From there, Google "reverse engineering binary files"; there are much more knowledgeable people than me who have written guides about it.
The "strings" program from GNU binutils is very useful. It will print the strings of printable characters in a file, quite often giving a clue to what a file contains or a program does.
If the data represents serialized Delphi objects, you should start reading about the Delphi serialization process. If that's the case, I think your best bet would be to load it using Delphi and continue your analysis from the IDE. Some information about Delphi serialization can be found here.
EDIT: if the file does contain serialized Delphi objects, then you should write a small Delphi program that loads it and "converts" the data yourself to something neutral, like XML. If you manage to do this, you should check whether Delphi supports serializing to XML. Then you could access those objects from any language.
The unix "file" command is really useful - I don't know if there is anything like it in windows. You run it like this:
file myfile.ext
And it spits out a text description based on the magic numbers and data contained therein.
Probably it is contained within cygwin.
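The core idea is simple enough to sketch yourself in Python (file(1) knows vastly more signatures than the handful shown here; the filename is from the example above):

# Compare a file's leading bytes against a few well-known magic numbers.
MAGIC = {
    b"MZ": "DOS/Windows executable",
    b"PK\x03\x04": "ZIP archive",
    b"\x89PNG": "PNG image",
    b"%PDF": "PDF document",
}

with open("myfile.ext", "rb") as f:
    head = f.read(8)

for magic, description in MAGIC.items():
    if head.startswith(magic):
        print(description)
        break
else:
    print("unknown format")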
If you have access to the application that creates the file, you can make changes in the application, then save the file and see the effects (keep in mind that numbers are probably stored in little-endian format):
First, create the file repeatedly. If the files are not binary-equal, the current date/time is probably stored in the area where the differences occur.
You may want to repeat that with the software running under different environments, to see if the OS version etc. is stored, but this is rather unusual.
Next, you can try to change single variables and create several files that differ only in the value of this variable. This helps you identify where this variable is stored.
That way you can also rule out variables that are not stored in the file: if you change them but the files created are identical, they are not stored.
In order to test the hypotheses you worked out with the steps above, edit one of the files and have the application read it.
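That editing step can be as simple as patching a single byte (a sketch; the offset, value, and filenames are hypothetical):

# Flip one byte at a suspected offset and save the result as a copy,
# then open the copy in the application to test the hypothesis.
offset, new_value = 0x40, 0x01
with open("original.bin", "rb") as f:
    data = bytearray(f.read())
data[offset] = new_value
with open("patched.bin", "wb") as f:
    f.write(data)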
If you don't have access to the application itself, I suggest that you forget about it and find another way to solve your problem. There is a very high probability that it will be faster...
If file does not give a meaningful answer, you may want to try TRiD by Marco Pontello to determine whether your data is stored in a known format.
Get the Delphi application and open it in the freeware version of IDA Pro; find where it writes the file, and decode how it writes the file that way.
Unless it's plain text, of course.
Do you know the program that uses it? If so, you can hook that program's write-to-file function and get an idea of what data it's writing, the size of the data, and where.
More Info: http://www.codeproject.com/KB/DLL/Win32APIHooking_Trouble.aspx
Unlike traditional hex editors which only display the raw hex bytes of a file, 010 Editor can also parse a file into a hierarchical structure using a Binary Template. The results of running a Binary Template are much easier to understand and edit than using just the raw hex bytes.
http://www.sweetscape.com/010editor/
Try to open it in a hex editor and analyze it.