I'm new to Tcl and I have a script that is wrapped using freewrapTCLSH.exe
At first, when started, the program complained about not finding a package. I edited the line that seems to "include" it to:
lappend auto_path ../../lib/crc
This worked fine and the .exe started without issues. But when I moved the exe to another folder, it started complaining again. I thought that once the exe was created everything would be self-contained, but it doesn't seem to handle this very well.
At first the entire path to the lib was hard-coded into the script, and then everything worked fine. But since we can't rely on the exe always being built in the same folder, this had to be changed.
Any ideas on how to get around this annoying problem?
A relative path like ../../lib/crc is interpreted against the current working directory each time packages are searched for. Having such a path in your ::auto_path is almost always not what you want.
I use [file dirname [info script]] to get the directory of the currently sourced Tcl file, add the relative path to some lib/crc with file join, and make sure I end up with a full pathname using file normalize. The result of file normalize is what I add to ::auto_path (or remember for future use in some other way):
lappend ::auto_path [file normalize [file join [file dirname [info script]] ../mylib]]
It might be obvious, but still: info script returns the path of the file currently being sourced, not a path somehow remembered from when the file containing the call was sourced. If you want to get the current script's location, make sure you do it at the right time (e.g. do it at the top level).
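For example, a minimal sketch of capturing the location at the top level and using it later (the ../mylib path is just a placeholder, as above):

# Captured while this file is being sourced; inside a proc called later,
# info script would return an empty string instead.
set ::here [file dirname [file normalize [info script]]]

proc setupPackages {} {
    # Uses the value captured at source time, not info script itself.
    lappend ::auto_path [file normalize [file join $::here ../mylib]]
}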
You should deliver the required package (and that package's dependencies) inside your exe.
Usually this only involves copying the directory of the required package into the lib folder in your VFS.
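As a rough sketch of what using the wrapped copy could look like at runtime (the mount point of the wrapped archive and the package name are assumptions; check the freewrap documentation for where wrapped files actually appear):

# Assumption: the wrapper exposes its archive at the executable's own path,
# so the exe can be treated like a directory containing lib/crc.
set vfsRoot [info nameofexecutable]
lappend ::auto_path [file join $vfsRoot lib]
package require crc32   ;# hypothetical package name living under lib/crc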
I have a folder for Octave M-files in C:\Users\Dropbox\Octave, under which are various subfolders by function category (normal distribution, chisq...). I just started making those subfolders and they will keep changing (adding, removing, reshuffling) as time goes on.
I would just like to set that folder as root and have Octave search for functions recursively there, just like you set a classpath in Java and JVM searches all folders there.
I used addpath(genpath('C:\Users\Dropbox\Octave')), but the paths generated are then fixed, not reflecting subsequent subfolder changes.
Shall I add addpath(genpath('C:\Users\Dropbox\Octave')) to the .octaverc file?
I think there is some confusion here. There are several ways to interact with the path, but for the most part these do not result in permanent changes, unless you save this somehow.
Simply adding a path for an existing octave session will not result in any permanent changes to the usual path that octave initialises at startup. Therefore when you say:
I used addpath(genpath('C:\Users\Dropbox\Octave')), but the paths generated are then fixed, not reflecting subsequent subfolder changes.
this makes no sense, because as soon as you exit your octave session, those added paths are gone altogether and do not appear in later octave sessions.
It is more likely that at some point you added these paths, and then used the savepath command, which resulted in your custom paths being added to your .octaverc file.
If that is the case, then yes, you can expect that octave will not "update" what was written in your .octaverc file, unless you call savepath again with an updated path definition.
If you would like the addpath(genpath('C:\Users\Dropbox\Octave')) command you mentioned to be called every time octave starts, so that the current/updated directory structure is loaded, then yes, the best way to do it would be to add that command to your .octaverc file. Make sure you remove the lines in your .octaverc that refer to the previous changes made by savepath. Note that there may be several levels of octaverc files that you need to check (see the relevant page in the manual).
Alternatively, you could simply make sure that this line appears in every script that intends to make use of those files.
While you may consider this last approach tedious, programmatically it is the most recommended one, since it makes dependencies clear in your code. This is especially important if you ever plan to share your code (and doubly so if you'd like it to be matlab compatible).
PS. All the above mostly applies to matlab too, with the exception that a) matlab's savepath saves path information in a file called pathdef.m, rather than directly in your startup files, and b) matlab uses startup.m instead of .octaverc as startup files. Also, if you don't care about doing this programmatically, matlab provides pathtool, which is a graphical interface for adding / saving directories to the matlab path.
I am trying to map libraries using a do file in ModelSim PE 10.4a and am having trouble making them local to the project. For example, I don't want to hard-code the commands that change directories to compile sources into a working directory there, but I would be okay with providing a path to the .do/.tcl file that defines a static library or something similar. For the Xilinx core libs, the compiled sources don't move and I don't need to recompile, so I can just use a hard mapping. However, I am developing some stuff for a project and want a nice way to map libraries and compile them. For unit tests, I don't mind using this hard-coded method. However, for projects where these locations may change, or where the directories are far away from my libs, what is a better way of doing what I have done below?
Below is how I compile my library (do_map_lfsr.do)
# 0) Create work directory for modelsim
vlib LFSR_lib
# 2) Compile files in use order
#vcom -93 -work work src/*.vhd
vcom -93 -work LFSR_lib GaloisLfsrBody.vhd
vcom -93 -work LFSR_lib LfsrPack.vhd
Below is the method I use to run this do file from the location of my testbench
# 1a) map/compile libs
# trying to find better way to do this
cd ../
do do_map_lfsr.do
cd unit_test/
vmap -modelsim_quiet LFSR_lib ../LFSR_lib
Is there a fancy way of finding and recompiling my libraries using .do/.tcl files and then mapping them for my development outside unit tests? Is there a way of defining a static library or something that doesn't disappear when I change directories?
For finding the files, the Tcllib find command (in the fileutil package) should be very useful.
package require fileutil

# Filter: accept only .do and .tcl files.
proc needsRecompiling {name} {
    return [expr {[file extension $name] in {.do .tcl}}]
}

foreach filename [fileutil::find . needsRecompiling] {
    if {[file extension $filename] eq ".do"} {
        # Process the .do file here
    } else {
        # Process the .tcl file here
    }
}
Of course, that assumes you can process each file independently; the order of listing isn't guaranteed (and likely depends on the order of entries in the underlying directory node in the filesystem, and things like that). If you need to do more complex things, like matching up files with different extensions, you'll need to write more code.
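If the processing order matters, one option is to sort the list first; a sketch, assuming .do files are run with ModelSim's do command and plain .tcl files can simply be sourced:

foreach filename [lsort -dictionary [fileutil::find . needsRecompiling]] {
    if {[file extension $filename] eq ".do"} {
        do $filename        ;# ModelSim's command for running a .do script
    } else {
        source $filename
    }
}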
I have a tcl script named main.tcl in a folder called App. One of the lines in the script uses a command from the twapi module (that line is actually in a proc, and I'm trying to minimize the app to the system tray when a user closes the app through the 'X' window button):
package require twapi
# ... code here
set hand [twapi::load_icon_from_file tclkit.ico]
# ... code here
The file tclkit.ico is in the same directory as the script (i.e. in the folder App).
When main.tcl is run through wish, the script works without any issues, but after wrapping it into an executable through the command line,
> tclkit sdx.kit wrap App -runtime tclsh863.exe
the executable raises an error, namely that the icon file could not be found:
The system cannot find the file specified.
The system cannot find the file specified.
while executing
"LoadImage $hmod $path $type $opts(width) $opts(height) $flags"
(procedure "twapi::_load_image" line 18)
invoked from within
"twapi::load_icon_from_file tclkit.ico"
(procedure "min_to_tray" line 2)
invoked from within
"min_to_tray"
(command for "WM_DELETE_WINDOW" window manager protocol)
The current workaround is to have a copy of the tclkit.ico file in the same directory as the .exe, but I want to avoid that as much as possible and have only the standalone .exe file. I tried using the full path with:
set hand [twapi::load_icon_from_file [file join [pwd] App.exe tclkit.ico]]
which normally works when I want to read a file (.txt, .png, etc.) from within the .exe, but this did not work either.
So basically, is there a way to enable the .exe to load the .ico file from within itself or another workaround that will not require some dependence on a file outside the .exe app?
The core issue is that the relevant Windows API actually takes a filename, and not something that is easier to wrap loading-from-archive around (such as a buffer). This means that you have to copy the file out of the archive somewhere and then pass that name to the system call. This is in fact what Tcl does internally for load when it's pulling the DLL from a source that isn't directly visible to the OS; it doesn't do it automatically for TWAPI though, as that library takes the philosophical position of being just a thin wrapper and letting the caller handle the consequences (which does mean you can easily do more tricks, provided you're inventive).
I suggest copying the file to a temporary file somewhere (i.e., the standard location for these things; Tcl 8.6 has file tempfile to help with this sort of trick) and then passing the full filename into the TWAPI call. I think everywhere in the Windows API that you could pass a simple filename in, you can also pass a full filename. (That's actually very convenient…)
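A minimal sketch of that approach (the icon's location inside the wrapped app and the helper's name are assumptions):

package require twapi

# Capture the wrapped script's directory at source time; info script is
# only meaningful while the file is actually being sourced.
set ::appDir [file dirname [file normalize [info script]]]

proc load_wrapped_icon {} {
    set src [file join $::appDir tclkit.ico]
    # file tempfile (Tcl 8.6) opens a channel and puts a real on-disk path
    # into tmpName, which the Windows LoadImage call can see.
    set out [file tempfile tmpName tclkit.ico]
    fconfigure $out -translation binary
    set in [open $src rb]
    fcopy $in $out
    close $in
    close $out
    return [twapi::load_icon_from_file $tmpName]
}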
I am trying to zip the contents of a folder in SSIS. There are files and folders in the source folder and I need to zip them all individually. I can get the files to zip fine; my problem is the folders.
I have to use 7-Zip to create the zipped packages.
Can anyone point me to a good tutorial? I haven't been able to implement any of the samples that I have found.
Thanks
This is how I have configured it.
It's easy to configure, but the trick is in constructing the Arguments. Though you see the Arguments as static in the screenshot, they actually come from a variable, and that variable is set in the Arguments expression of the Execute Process Task.
I presume you will have this Execute Process Task in a Foreach File Enumerator with Traverse Subfolders checked.
Once you have this basic setup in place, all you need to do is work on building the arguments to do the zipping, how you want them. A good place to find all the command line arguments is here.
Finally, the only issue I ran into was not providing a working directory in the command-line arguments for 7-Zip. The package ran fine in my dev environment but failed when running on the server via a SQL job. This was because 7-Zip didn't have access to the 'Temp' folder on the SQL Server, which it uses by default as the working directory. I got around this problem by specifying the working directory at the end of the command-line arguments, using the -w switch:
For example:
a -t7z DestinationFile.7z SourceFile -wS:YourTempDirectoryToWhichTheSQLAgentHasRights
I have a file named test7.tcl:
namespace eval ::dai {
    variable name "ratzip"
    variable birthday "1982"

    proc hello {} {
        variable name
        variable birthday
        puts "Hello, I am $name birthday is $birthday"
    }
}
and I want to source this file into another file, called test8.tcl in this way:
source test7.tcl
::dai::hello
but it gives me an error: couldn't read file "test7.tcl": no such file or directory
But the two files are in the same folder. What happened?
To source a file that is in the same directory as the currently executing script, use this:
source [file join [file dirname [info script]] "test7.tcl"]
Note that this doesn't work inside procedures defined in the outer script (test8.tcl in your case), because they're typically called after the source has finished. If that's the case for you, the simplest fix is to save the output of info script in a variable in your outer script (or just source all files immediately instead of lazily, which is ultimately the best approach).
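A short sketch of saving it in a variable (the variable name is arbitrary):

# At the top level of test8.tcl, while it is being sourced:
set ::scriptDir [file dirname [file normalize [info script]]]

proc loadHelpers {} {
    # Safe to call later; uses the directory captured at source time.
    source [file join $::scriptDir test7.tcl]
}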
Use source [file join [file dirname [info script]] test7.tcl] -- that way you'll be sourcing the target file by its full pathname constructed from the full pathname of the file executing source; this will work no matter what your current directory is during the execution.
The path of the file to be sourced is resolved relative to the current working directory, not relative to the path of test8.tcl. E.g. use the absolute path:
source /path/to/test7.tcl