Program consisting of multiple executable files - language-agnostic

If you're familiar with the internals of git (even a little bit), you may know that the git command is just a frontend to many other executables (that is, git is composed of multiple executables rather than a single binary file). If you look inside git's exec directory (git --exec-path), you'll see that it contains a lot of git-* executables.
How does this "multiple-executable" architecture work, and is there any example of a program (ideally in C/C++) that consists of multiple executables?

One way of doing it could be to execute shell commands, for example with Python's os.system() or system() in C/C++.
Say we have an app named app, consisting of three executables: app-cmd1, app-cmd2, and app itself.
I could program app so that it calls app-cmd1 when I run app cmd1, and likewise app-cmd2 when I run app cmd2. Note the lack of the - hyphen.
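Here is a minimal sketch in C of how such a front end could dispatch to its sub-executables, using execvp rather than system(), which is roughly how git launches its external git-* commands (the app/app-cmd1 names follow the hypothetical example above):

#include <stdio.h>
#include <unistd.h>

/* Hypothetical "app" front end: "app cmd1 ..." runs "app-cmd1 ...". */
int main(int argc, char *argv[])
{
    char exe[256];

    if (argc < 2) {
        fprintf(stderr, "usage: app <command> [args...]\n");
        return 2;
    }

    /* Build the sub-command name, e.g. "cmd1" -> "app-cmd1". */
    snprintf(exe, sizeof(exe), "app-%s", argv[1]);

    /* Hand the remaining arguments to the sub-command. */
    argv[1] = exe;

    /* Replace this process with the sub-command found on PATH. */
    execvp(exe, &argv[1]);

    /* Only reached if exec failed (e.g. unknown command). */
    perror(exe);
    return 127;
}

Installed this way, app, app-cmd1, and app-cmd2 just need to be on the PATH (or the front end can build the path from its own install location, as git does with its exec directory).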

Related

Need to obfuscate/wrap txt files in Tcl based project flow

I have a Tcl-based project in a Linux environment, where Tcl scripts are used to create the project, run it, and perform error analysis. Once the run is complete, a set of algorithms (in txt format) are fed back to the flow for error correction.
To hide the txt files, I need to obfuscate/wrap them for delivery to the customer so as not to reveal the algorithms in the files. Could someone please suggest a utility/tool that can obfuscate/wrap the txt files and interface them to the project flow so that Tcl can read the files automatically without user intervention?
One of many ways is to use a tool that builds a stand-alone executable, for example the freewrap utility:
http://freewrap.sourceforge.net/
It's regularly updated, modern, and easy to use on Linux and Windows.

How do I find where a function is declared in Tcl?

I think this is more of a Tcl configuration question than a Tcl coding question...
I inherited a whole series of Tcl scripts that are used within a simulation tool that my company built in-house. In my scripts, I'm finding numerous calls to functions that don't seem to be declared anywhere. How can I trace the path to these phantom functions?
For example, rather than use source, someone built a custom include command that they named INCLUDE. Tclsh obviously balks when I try to run the scripts there, but with my simulation software they run fine.
I've tried grep-ing through the entire simulation software for INCLUDE, but I'm not having any luck. Are there any other obvious locations outside the simulation software where a Tcl function might be defined?
The possibilities:
Within your software (you have checked for this).
Within some other package included by the software. Check whether the environment variable TCLLIBPATH is set, and also whether the simulation software itself sets TCLLIBPATH. This is a list of directories to search for Tcl packages, and you will need to search the packages that are located outside of the main source tree. Another possibility is that the locations are specified in a pkgIndex.tcl file; check any pkgIndex.tcl files and look for locations outside the main source tree.
Within an unknown command handler. This could be in your software or within some other package. You should be able to find some code that processes the INCLUDE statement.
Within a binary package. These are shared libraries that are loaded by Tcl. If this is the case, there should be some C code used to build the shared library that can be searched (see the sketch after this list).
Since you say there are numerous instances of unknown functions, my first guess is that you have not found all the directories where packages are loaded from, but an unknown command handler is also a possibility.
Edit: One more possibility I forgot. Check whether your software sets the auto_path variable, and check any directories added to auto_path for other packages.
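If it does turn out to be a binary package, here is a minimal sketch of what the relevant C code usually looks like (the package and function names below are invented for illustration). The string passed to Tcl_CreateObjCommand is exactly what you would grep the package's C sources for:

#include <tcl.h>

/* Hypothetical implementation of an INCLUDE command in a binary package. */
static int
IncludeObjCmd(ClientData clientData, Tcl_Interp *interp,
              int objc, Tcl_Obj *const objv[])
{
    if (objc != 2) {
        Tcl_WrongNumArgs(interp, 1, objv, "filename");
        return TCL_ERROR;
    }
    /* ... resolve the path however the tool wants, then source it ... */
    return Tcl_EvalFile(interp, Tcl_GetString(objv[1]));
}

/* Package entry point, called by [load]; the name follows the library name. */
int
Simtools_Init(Tcl_Interp *interp)
{
    if (Tcl_InitStubs(interp, "8.5", 0) == NULL) {
        return TCL_ERROR;
    }
    /* "INCLUDE" is the command name to search for in the C sources. */
    Tcl_CreateObjCommand(interp, "INCLUDE", IncludeObjCmd, NULL, NULL);
    return Tcl_PkgProvide(interp, "simtools", "1.0");
}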
This isn't a great answer for you, but I suspect it is the best you're going to get...
The procedure could be defined in a great many places. Your best bet for finding it is to use a tool like findstr (on Windows) or grep -R (on POSIX platforms) to search across all the relevant source files. But that still might not help! It might not be a procedure but instead a general command, which could be implemented in C rather than as a procedure, or it could be defined in a packaged application archive (which are usually awkward to look inside). There are other types of script-implemented command too, which could make things awkward. Generally, searching and investigating is your best bet, but it might not work.
Tcl doesn't really differentiate strongly between different types of command except in some introspection operations. If you're lucky, you could find that info body tells you the definition of the procedure (and info args and info default tell you about the arguments) but that won't help with other command types at all. Tcl 8.7 will include a command (info cmdtype) that would help a lot with narrowing down what to do next, but that's no use to you now and it definitely doesn't exist in older versions.

What are the output files of the VxWorks Workbench kernel configuration GUI

I'm trying to generate a VxWorks 6.9.4.8 kernel configuration that is identical to another kernel workbench project. The Workbench 3.3.6 only allows GUI configuration.
Is there an underlying kernel configuration file, produced by the GUI, which can be replaced?
After updating the kernel configuration using the Workbench GUI, I see the following files have changed:
linkSyms.c,
prjComps.h,
prjConfig.c, and
prjParams.h
I guess my question is: which one, if any, uniquely identifies the kernel as built?
prjComps.h will contain all the component names, as chosen in your kernel configuration GUI.
The first step in creating a new kernel configuration based on another kernel configuration is to use the GUI configurator to add the missing components to prjComps.h. It helps to use a diff tool like Beyond Compare and keep reducing the differences by adding/removing components. Remember not to edit this file directly, but only via the GUI configurator, as the tool calculates the dependent components and adds/removes them as well.
The second step is to create the new prjParams.h in the same way.
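For reference, prjComps.h itself is just a generated C header that lists the selected components as preprocessor definitions, roughly like this (the particular components shown here are only illustrative):

/* prjComps.h - generated by the kernel configuration tool (illustrative excerpt) */
#define INCLUDE_KERNEL
#define INCLUDE_NETWORK
#define INCLUDE_TELNET_CLIENT
#define INCLUDE_SHELL

which is why diffing two projects' prjComps.h files shows exactly which components differ.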
Workbench actually allows you to edit the kernel configuration from the command line via the vxprj tool in VxWorks 6.9 (this tool has been replaced by "wrtool" in VxWorks 7): you can right-click on the image project and choose 'Open Wind River VxWorks 6.9 Development Shell'.
If you want to add a component, e.g. the telnet client (INCLUDE_TELNET_CLIENT), you can use the following command
vxprj component add INCLUDE_TELNET_CLIENT
To remove a component
vxprj component remove INCLUDE_TELNET_CLIENT
For more on the vxprj tool, you can look up the documentation in Workbench itself.
The project configuration is held in a handful of files in the kernel project directory.
These are:
.project
.cproject
.wrproject
projectname.wpj
Files such as prjComps.h, prjParams.h, and prjConfig.c are all generated by the configuration tool; however, these are not configuration files themselves. Instead, they are generated C code that contains, amongst other things, a list of selected components.
These files are also re-generated, I believe, when you rebuild the project.
As such, these are not really the authoritative source you are interested in.
For this, you need to look at the project files. In terms of a list of components, the most interesting is the .wpj file, which contains amongst other things a list of explicitly and implicitly included components.
The explicitly included components are those you manually selected in the Kernel Configuration GUI, the implicitly included are those that were then included to satisfy dependencies.
This distinction can sometimes make comparing kernel configurations tricky; in that case you may want to fall back on the generated files, e.g. prjComps.h, but you should always remember that this is a representation of the configuration, not the source.
The .project etc. configuration files are big and complex, but a decent diff tool such as Beyond Compare can make comparisons of the project directories fairly easy.
Thanks for the clue, #endTunnel. I looked at that file, and noticed that a few files get modified when I save my GUI selections.
prjComps.h - all the components #included in the kernel build
prjParams.h - the additional parameters set for the enabled components
prjConfig.c - the configuration and initialization calls for each module included.
'linkSyms.c' also gets modified. Not sure how that is used, yet.
I can now use diff to compare kernel configurations, and perhaps even duplicate a configuration (haven't tried that yet).

How to allow web-component-tester to run tests stored with my components

I am experimenting with the framework to build an SPA using Polymer. This will include a large number of custom elements at various levels in the overall application hierarchy. I would like to use web-component-tester to run the module tests on them.
web-component-tester seems to be opinionated about where the tests will be stored - in a separate test directory, where it will run all files found.
I am of an opposite opinion. I would like to store tests in the same directory as the element definition. I would like to differentiate tests by naming them xxx.test.html (or possibly xxx.test.js). I also want to run different "sets" of tests controlled by gulp some of which will be watching for changes and then running the tests (for the app side of my project) and some of which will be elements that use core-ajax to unit test my server side scripts. These will more than likely be in a totally different directory hierarchy (my dist directory) and will be served by a proper web server.
I "think" the "suite" config option wct-conf.js file in my project root might be how I can define this, or alternatively a wct command with some file globs. Unfortunately web-component-tester's README is somewhat confusing on any detail and when you have your own web server it says "You'll need to save WCT's browser.js in order to go this route." What does that mean?
Can someone enlighten me on how can get WCT to run each of the elements/**/*.test.html files as its own "suite" ( I actually intend to use describe, it format - but I assume I still need to use the term suite).
Can someone also explain what I need to do the browser.js when I have my own web server.
I ran some experiments and did a bit of debugging with node-inspector. Firstly, the command line overrides the suites parameter in the config file:
wct app/elements/**/*.test.html
does find all my module tests if I have them stored with the elements, and it ignores the contents of the wct.conf.js file's suites parameter.
Putting the same value (i.e. app/elements/**/*.test.html) in the wct.conf.js file's suites parameter does the same job; in that mode, gulp test:local also works correctly.
So to run different tests for module and distribution, I just need to set up wct.conf.js for my module tests, and set up gulp to run a command line pointing at the correct location of the other test files.
I still haven't understood the instructions for running with your own web server.

Command line parameters or configuration file?

I'm developing a tool that will perform several types of analysis, and each analysis can have different levels of thoroughness. This app will have a fair amount of options to be given before it starts. I started out with a configuration file, since the number of analysis types was small. As the number of options grew, I created more configuration files. Then I started mixing in some command line parameters, since some of the options could only be flags. Now I've mixed a bunch of command line parameters with configuration files and feel I need to refactor.
My question is, When and why would you use command line parameters instead of configuration files and vice versa?
Is it perhaps related to the language you use, personal preference, etc.?
EDIT: I'm developing a Java app that will run on Windows and Mac. I don't have a GUI for now.
Command line parameters are useful for quickly overriding some setting from the configuration file. Command line parameters are also useful when there are not many of them. For your case, I'd suggest that you expose the parameter presets on the command line.
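A rough sketch of that pattern in C (your app is in Java, but the idea is the same; the config file name, keys, and flags below are all invented): built-in defaults are overridden by the config file, which in turn is overridden by command line parameters.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical settings, layered: defaults < config file < command line. */
struct settings {
    int  thoroughness;          /* e.g. 1..3 */
    char analysis[64];          /* e.g. "static", "dynamic" */
};

/* Read simple "key=value" lines from an optional config file. */
static void load_config(const char *path, struct settings *s)
{
    char key[64], value[64];
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return;                                   /* no file: keep defaults */
    while (fscanf(f, " %63[^=]=%63s", key, value) == 2) {
        if (strcmp(key, "thoroughness") == 0)
            s->thoroughness = atoi(value);
        else if (strcmp(key, "analysis") == 0)
            snprintf(s->analysis, sizeof(s->analysis), "%s", value);
    }
    fclose(f);
}

int main(int argc, char *argv[])
{
    struct settings s = { 1, "static" };          /* built-in defaults  */
    load_config("analyzer.conf", &s);             /* config file layer  */

    for (int i = 1; i < argc - 1; i++) {          /* command line layer */
        if (strcmp(argv[i], "--thoroughness") == 0)
            s.thoroughness = atoi(argv[++i]);
        else if (strcmp(argv[i], "--analysis") == 0)
            snprintf(s.analysis, sizeof(s.analysis), "%s", argv[++i]);
    }

    printf("analysis=%s thoroughness=%d\n", s.analysis, s.thoroughness);
    return 0;
}

Run as ./analyzer --thoroughness 3, the command line wins over whatever analyzer.conf says, which is exactly the quick-override use case described above.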
Command line arguments:
Pros:
concise - no extra config files to maintain
great interaction with bash scripts - e.g. variable substitution, variable reference, bash math, etc.
Cons:
it could get very long as the options become more complex
formatting is inflexible - beyond what command line parsing utilities give you for high-level switches, anything more complex (e.g. nested structured information) requires custom syntax, such as parsing with regexes, and the structure can be quite rigid - structured formats like JSON or YAML are hard to specify at the command line level
Configuration files:
Pros:
it can be very large, as large as you need it to be
formatting is more flexible - you can use JSON, YAML, INI, or any other structural format to represent the information in a more human consumable way
Cons:
inflexible to interact with bash variable substitution, references, and math - if you want the config file to be "generic" and reusable, you probably have to define your own substitution rules, whereas this is the biggest advantage of command line arguments; variable math is difficult (if not impossible) in config files - you have to define your own "operators" in the config files, or rely on a separate bash script to carry out the math and perform your custom variable substitution before the "generic" config file becomes concretely usable.
for all that it takes to have a generic config file (with custom variable substitution rules) ready, a bash script is still needed to carry out the actual substitution, and your application still has to implement all the substitution logic; so either you have config files with no variable substitution, which means you "hard code" and repeat the config file for different scenarios, or the custom substitution rules make your in-app config file logic much more complex.
In my use case, I value being able to do variable substitution/reference (as well as bash math) in the bash scripts more, since I'm using the same binary to start many server nodes with different responsibilities in a server backend cluster, and I use the bash scripts as a sort of container - really as a config file - to start the many different nodes with differing command line arguments.
My vote: both, à la mysqld.exe.
What environment/platform? On Windows you'd rather use a config file, or even a configuration panel/window in the GUI.
I place configuration that doesn't really change in a configuration file. Configuration that changes often I pass on the command line.