Is there any way to get GDE (Global Directory Editor) Raw unformatted values as output - mumps

I'm writing a utility in JavaScript to interact with GT.M's GDE (Global Directory Editor). One of the things it needs to accomplish is to run the GDE SHOW -ALL command and get a listing of all information in the Global Directory. However, the SHOW command formats the values for display in a terminal environment, which is unusable for my purpose without extensive parsing.
Does anyone know of any way to get the Global Directory information unformatted (perhaps in key-value pairs / ZWRITE format)?
If GDE can't provide that, is there any way, or an algorithm, to read the actual Global Directory file to get that information?

GDE has a command that prints out the list of commands needed to regenerate the Global Directory. That output can be easily parsed.
> $gtm_dist/mumps -run GDE show -command -file="gde.cmd"
> head -6 gde.cmd
TEMPLATE -REGION -NOAUTODB
TEMPLATE -REGION -COLLATION_DEFAULT=0
TEMPLATE -REGION -EPOCHTAPER
TEMPLATE -REGION -NOINST_FREEZE_ON_ERROR
TEMPLATE -REGION -JOURNAL=(ALLOCATION=2048,AUTOSWITCHLIMIT=8386560,BEFORE_IMAGE,BUFFER_SIZE=2312,EXTENSION=2048)
> tail -6 gde.cmd
ADD -REGION DEFAULT -DYNAMIC_SEGMENT=DEFAULT
!
ADD -SEGMENT DEFAULT -FILE_NAME="mumps.dat"
!
LOCKS -REGION=DEFAULT
!
Note that TEMPLATE corresponds to configuration that is common to all or most regions; any exception is recorded under the individual region/segment.
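Assuming the gde.cmd format shown above, here is a minimal parsing sketch that turns the ADD lines into whitespace-separated object/name/qualifier triples (the sample input is copied from the listing above; a real file would come from the SHOW -COMMAND invocation):

```shell
# Sample gde.cmd content, copied from the listing above.
cat > gde.cmd <<'EOF'
ADD -REGION DEFAULT -DYNAMIC_SEGMENT=DEFAULT
!
ADD -SEGMENT DEFAULT -FILE_NAME="mumps.dat"
!
LOCKS -REGION=DEFAULT
EOF

# Keep only the ADD lines and split off the leading "-OBJECT NAME -" part.
grep '^ADD ' gde.cmd |
  sed -E 's/^ADD -([A-Z_]+) ([A-Z0-9_]+) -(.*)$/\1 \2 \3/'
```

This prints `REGION DEFAULT DYNAMIC_SEGMENT=DEFAULT` and `SEGMENT DEFAULT FILE_NAME="mumps.dat"`, which is already close to key-value form; qualifiers with parenthesized sub-lists (like -JOURNAL=(...)) would need one more splitting pass.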
Also note that the latest GT.M release (V7.0-000) removed the ^%DSEWRAP utility (suggested in the answer below).

You can try ^%DSEWRAP against the main global file.
E.g.: https://github.com/shabiel/Kernel-GTM/blob/master/Kernel/Routines/ZISHGUX.m#L216
--Sam

How to load user-specific configuration for CMake project

I like to use a configuration file that sets several cached variables. The purpose is to reuse it for every project running on a machine, or to select different library versions for testing or other special purposes.
I can achieve it with a CMake file like this one:
set(path_to_lib_one path/to/lib/one)
set(option1 dont_want_to_bother_setting_this_option)
set(option2 that_option_have_to_be_set_again)
And call include(myConfigfile).
But I would like to know if there is a cache-like way of doing it, and what the best practices are for managing user/setup-specific configurations.
Use the initial cache option offered by CMake. You store your options in the right format (set with CACHE) and call
cmake -C <cacheFile> <pathToSourceDir>
Self-contained example
The CMakeLists.txt looks like
cmake_minimum_required(VERSION 3.2)
project(blabla)
message("${path_to_lib_one} / ${option1} / ${option2}")
and you want to pre-set the three variables. The cacheFile.txt looks like
set(path_to_lib_one path/to/lib/one CACHE FILEPATH "some path")
set(option1 "dont_want_to_bother_setting_this_option" CACHE STRING "some option 1")
set(option2 42 CACHE STRING "and an integer")
and your CMake call (from a build directory below the source directory) is
cmake -C cacheFile.txt ..
The output is
loading initial cache file ../cacheFile.txt
[..]
path/to/lib/one / dont_want_to_bother_setting_this_option / 42
Documentation:
https://cmake.org/cmake/help/latest/manual/cmake.1.html#options
Load external cache files
Additionally, CMake offers a way to read in a cache file that was created by another project. The command is load_cache. You can use it either to just read the variables from the external cache or to copy them into the cache of the current project.
Documentation: https://cmake.org/cmake/help/latest/command/load_cache.html
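A minimal load_cache sketch, assuming another project's build tree lives at ../otherBuild (a made-up path) and its cache defines option1:

```cmake
# Read option1 from ../otherBuild/CMakeCache.txt without touching our own
# cache; READ_WITH_PREFIX stores it locally as ext_option1.
load_cache(../otherBuild READ_WITH_PREFIX ext_ option1)
message("external option1 = ${ext_option1}")
```
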

TCL help: How to check for unmounted/bad disks before reading a file

I need help here.
I have a list of directory/file paths, and my program reads through every one.
Somehow one of the directories is on an unmounted/bad disk, which causes my program to hang when it tries to open a file there using the command below.
catch {set directory_fid [open $filePath r]}
So, how can I check the directory status before reading/opening the file? I want to skip the file if there is no response within a certain time and continue reading the next file.
*file isdir $dir does not work either.
*There is no response when I run ls -dir in Unix either.
Before you start down this path, I would review your requirements and see if there's any easier way to handle this. It would be better to fix the mounts so that they don't cause a hang condition if an access attempt is made.
The main problem is that for the directories you are checking, you need to know the corresponding mount point. If you don't know the mount point, it's hard to tell whether the directory you want to check will cause any hangs when you try to access it.
First, you would have to parse /etc/fstab to get a list of possible filesystem mount points (assuming a Linux system; if not, there will be an equivalent file).
Second, to see what is currently mounted you need the di Tcl extension (wiki page) (or main page w/download links). (*). Using this extension, you can get a list of mounted filesystems.
# the load only needs to be done once...
set ext [info sharedlibextension]
set lfn [file normalize [file join [file dirname [info script]] diskspace$ext]]
load $lfn
# there are various options that can be passed to the `diskspace`
# command that will change which filesystems are listed.
set fsdata [diskspace -f {}]
set fslist [dict keys $fsdata]
Now you have a list of possible mount points, and you know which are mounted.
Third, you need to figure out which mount point corresponds to the directory you want to check. For example, if you have:
/user/bll/source/stuff.c
You need to check for /user/bll/source, then /user/bll, then /user, then / as possible mount points.
There's a huge assumption here that the file or any of its parent directories are not symlinked to another place.
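The candidate walk in this third step can be sketched with plain dirname (the path is the made-up example above):

```shell
# Generate candidate mount points for a path, deepest first,
# by repeatedly stripping the last component.
path=/user/bll/source/stuff.c
d=$(dirname "$path")
while [ "$d" != "/" ]; do
  echo "$d"          # candidate mount point
  d=$(dirname "$d")
done
echo /               # the root is always a mount point
```

This prints /user/bll/source, /user/bll, /user, and / in that order; each candidate would then be looked up in the list of mounted filesystems.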
Once you determine the probable mount point, you can check if it is mounted:
if { $mountpoint in $fslist } {
...
} else {
# better skip this one, the probable mount point is not mounted.
}
As you can see, this is a lot of work. It's fragile.
Better to fix the mounts so they don't hang.
(*) I wrote di and the di Tcl extension. This is a portable solution. You can of course use exec to run df or mount, but there are other issues (parsing, portability, determining which filesystems to use) if you use the more manual method.

/etc/dpkg/buildflags.conf example?

dpkg-buildflags mentions /etc/dpkg/buildflags.conf file that can be used to configure dpkg-buildpackage. I cannot find any example of what the file should look like though. How could I for example make it pass --disable-static to --configure?
As the man page explains, that file is used to set or modify compilation build flags, which are those passed to the preprocessors/compilers/linkers (cpp/gcc/ld for example, but other languages are also supported). An example content could be:
APPEND CFLAGS -ggdb -O3
STRIP CXXFLAGS -O2
I don't know of any --configure option; I guess you mean the configure script usually found in projects using autotools. There is no global option to pass to it, because which build system a package uses is specific to that source package. If you need to pass that option, you'll need to modify the package's debian/rules file.
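For the debian/rules route, a common pattern with debhelper's dh sequencer is an override target (a sketch; the target and tool names are standard debhelper, but check your package's actual rules file):

```make
#!/usr/bin/make -f
%:
	dh $@

# Pass --disable-static through to ./configure for an autotools package.
override_dh_auto_configure:
	dh_auto_configure -- --disable-static
```
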
Or propose to the debian-policy list the addition of a new DEB_BUILD_OPTIONS tag to disable static libraries globally, which would then need to be supported by every source package producing them.

Setting different Hex-Filenames in MPLAB X for different project configurations

I want to set different hex file names for different configurations of a project. In detail I want to have a release configuration where compiler optimization is turned on and a debug configuration where optimization is turned off.
So far I have discovered the possibility to add a second configuration to the project, where I can set a different optimization level. The binary for the other configuration is automatically compiled into another directory, but the name of the resulting hex file stays the same. I tried to change the "ImageName" macro under the "Building" options for the configuration, but it is read-only, and the makefiles containing these macros are automatically regenerated, so changing them manually is futile.
Is there any way to separate these two builds (one with optimization and one without) by the name of the resulting file? I don't want to release a build without optimization by accident, since this is really critical in my current project, as I have already experienced.
Use the Execute this line after build option. It is right above the Macros section (Right Click > Properties > Conf:[name] > Building). Commands you type there will be inserted into the auto-generated makefile (nbproject/Makefile-$CONF.mk) and executed at the end of the build process.
Example:
To copy the output hex file to "out_dir" and tag it with the configuration, use this line:
${MKDIR} out_dir && ${CP} ${ImagePath} out_dir && ${MV} out_dir/${ImageName} out_dir/${ConfName}_${IMAGE_TYPE}.${OUTPUT_SUFFIX}
This line creates "out_dir/", copies the hex file into the "out_dir" folder, and then renames the copy to configuration-name_build-type.hex.

Solution to programmatically generate an export script from a directory hierarchy

I often have the following scenario: in order to reproduce a bug for reporting, I create a small sample project, sometimes a maven multi module project. So there may be a hierarchy of directories and it will usually contain a few small text files. Standard procedure would of course be to create a zip file and send that. But on some mailing lists attachments are not allowed, and so I am looking for a way to automatically create an installation script, that I can post to such mailing lists.
Basically I would be happy with a Unix-only flavor that creates mkdir statements to create the directories and >> redirections to write the file contents. (Actually, apart from the relative path delimiters, the Windows and Unix versions could probably be identical.)
Does such a tool exist somewhere? If not, I'll probably write one in java, but I'm happy to accept solutions in all kinds of languages.
(The tool could run under windows or unix, but the target platform for the generated scripts should be either unix or configurable)
I think you're looking for shar, which creates a shell archive (shell script that when run produces a given directory hierarchy). It is available on most systems; you can use GNU sharutils if you don't already have it.
Normal usage for packing up a directory tree would be something like:
shar `find somedirectory -print` > archive.sh
If you're using GNU sharutils, and want to create "vanilla" archives which use only the most portable of shell builtins, mkdir, and sed, then you should invoke it as shar -V. You can remove some more extra baggage from the scripts by using -xQ; -x to remove checks for existing files, and -Q to remove verbose output from the archive.
shar -VxQ `find somedir -print` > archive.sh
If you really want something even simpler, here's a dirt-simple version of shar as a shell script. It takes file names on standard input instead of as arguments, for simplicity and to be a little more robust.
#!/bin/sh
# simpleshar: read file names on stdin, write a self-extracting script to stdout
while IFS= read -r filename
do
    if test -d "$filename"
    then
        echo "mkdir -p '$filename'"
    else
        echo "sed 's/^X//' <<'EOF' > '$filename'"
        sed 's/^/X/' < "$filename"
        echo 'EOF'
    fi
done
Invoke as:
find somedir -print | simpleshar > archive.sh
You still need to invoke sed, as you need some way of ensuring that no line in the here document begins with the delimiter, which would close the document and cause later lines to be interpreted as part of the script. I can't think of a really good way to solve this quoting problem using only shell builtins, so you will have to rely on sed (which is standard on any Unix-like system, and has been for practically forever).
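The X-prefix trick is easy to see in isolation: a data line reading exactly "EOF" would end the here document early, but prefixed as "XEOF" it is ordinary content, and stripping the prefix on unpack restores the original.

```shell
# Two-line demo of the quoting trick used by the script above.
printf 'EOF\nsome data\n' > f
sed 's/^/X/' f                   # packed: XEOF, Xsome data
sed 's/^/X/' f | sed 's/^X//'    # unpacked: original lines restored
```
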
If your problem is filters that reject non-text files:
In times long forgotten, we used uuencode to get past 8-bit-stripping relays.
Could that be a way to get past attachment-eating mailboxes these days?
So why not zip and uuencode?
(Or base64, which is uuencode's younger cousin.)
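Following that suggestion, a sketch using base64 instead of uuencode (directory and file names are made up; tar stands in for zip so only common Unix tools are needed):

```shell
# Pack a sample tree and encode it as mail-safe text.
mkdir -p somedir
echo 'hello' > somedir/file.txt
tar czf sample.tgz somedir
base64 sample.tgz > sample.tgz.b64   # paste this text into the mail
# The recipient reverses it with:
#   base64 -d sample.tgz.b64 | tar xzf -
```
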