Multiple versions of Tcl

Are there any particular things to think about when building and installing (globally) a new version of Tcl from source, besides relinking /usr/local/bin/tclsh and wish to the new versions?
I know that the interpreter executables tclsh and wish are installed with different names, but what about the include and library files? When I build eggdrop, will it link with the latest version? How about the man pages - are the old ones overwritten by the new ones?

The usual approach for this case is to configure the build so that it's installed under a single directory (the Windows approach), say, under /opt/tcltk/8.6 (there is a short build sketch after the list of downsides below). You're then guaranteed against clashes with other versions, and deinstallation is just a matter of running rm -rf on that single directory. This approach has its downsides though:
You'll have to link (some) installed third-party Tcl libraries under your new hierarchy. This is because Tcl derives the set of paths to look for libraries from its own location.
/opt/tcltk/8.6/bin won't be listed in $PATH.
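A minimal sketch of that kind of single-prefix build (version numbers and paths are just examples):

cd tcl8.6.x/unix
./configure --prefix=/opt/tcltk/8.6
make && make install    # the install step may need root

# Tk is built the same way, pointed at the same prefix:
cd ../../tk8.6.x/unix
./configure --prefix=/opt/tcltk/8.6 --with-tcl=/opt/tcltk/8.6/lib
make && make install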
With certain OSes, another (possibly more sensible) approach is to do a "backport", that is, to take the source package of the required Tcl/Tk version and make it build for the installed version of the OS; then install the resulting packages in a normal way. On systems where various versions of Tcl/Tk are co-installable (for instance, Debian and its derivatives), this possibly provides the most sensible solution.
As for manual pages in the latter case: in Debian they end up in a separate documentation package whose installation is optional, so you just pick one of the available documentation packages and install it.

In terms of having multiple versions present, this is a normal thing to do (set the --prefix option to configure when building) and has been so for quite a while. You'll probably want to avoid having multiple patchlevels of a single version if you can, but having, say, 8.4, 8.5 and 8.6 co-installed is entirely OK. You'll want to have the different installations in different directories too, and you're right that the unversioned tclsh name should be linked to the version you normally want (though I just use the versioned executable names instead).
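For example, if each version was configured with its own versioned prefix, switching the unversioned names is just a matter of re-pointing a pair of symlinks (the paths below are illustrative):

ln -sf /usr/local/tcl8.6/bin/tclsh8.6 /usr/local/bin/tclsh
ln -sf /usr/local/tcl8.6/bin/wish8.6 /usr/local/bin/wish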
The only way to have the manpages coexist nicely is to have them installed in separate directory trees and to update the MANPATH environment variable to point to the right one (unless you've got a man executable that will take paths to manpages directly; some do, some don't, and that is hardly as convenient). If you can bear being online, we've got official HTML builds of the documentation hosted at http://www.tcl.tk/man/ which include all significant versions going back quite a long way.
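For example, assuming the man pages were installed under the prefix chosen at configure time:

export MANPATH=/opt/tcltk/8.6/man:$MANPATH
man tclsh
# or, with a man implementation that accepts an explicit path:
man -M /opt/tcltk/8.6/man tclsh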


NativeScript, Code Sharing and different environments

Note: this is not a dupe of this or this other question. Read on: this question is specific to the Code-Sharing template.
I am doing some pretty basic experiments with NativeScript, Angular and the code sharing templates (see: @nativescript/schematics).
Now I am doing some exploration / PoC work on how different "build configurations" are supported by the framework. To be clear, I am looking for a simple (and hopefully official) way to have the application use a different version of a specific file (let's call it configuration.ts) based on the current platform (web/ios/android) and environment (development/production/staging?).
Doing the first part is obviously trivial; after all, that is the prime purpose of the code-sharing schematics. So, different versions of the same file are identified by different extensions. This page explains things pretty simply.
What is less clear to me is whether the framework/template supports any similar convention-based rule that can be used to switch between debug/release (or, even better, development/staging/production) versions of a file. Think, for example, of a config.ts file that contains different parameters based on the environment.
I have done some research on the topic, but I was unable to find a conclusive answer:
the old and now retired documentation for the AppBuilder platform mentions a (.debug. and .release.) naming convention for files. I don't think this works anymore.
other sources mention passing parameters during the call to tns build / tns run and then fetching them via a webpack env variable. See here. This may work, but seems oddly convoluted (there is a sketch of this approach after this list)
a third option that gets mentioned is to use hooks to customize the build (or use a plugin that should do the same)
lastly, for some odd reason, @nativescript/schematics seems to generate a default project that contains two files called environment.ts and environment.prod.ts. I suspect those only work for the web version of the project (read: ng serve); I wasn't able to get the mobile compiler to recognize files that end with debug.ts, prod.ts or release.ts
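For what it's worth, here is a rough sketch of the --env approach mentioned above. The flag name "environment" and the "staging" value are made up for illustration, and the values are only forwarded to the webpack configuration when building with --bundle:

# pass a custom value through to webpack at build time
tns run android --bundle --env.environment=staging
# webpack.config.js receives it as env.environment and could use it, for
# example, to alias configuration.ts to a configuration.staging.ts file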
While it may be possible that what I am trying to do just isn't supported (yet?), the general confusion and dissenting opinions on the matter make me think I may be missing something, somewhere.
In case this IS somehow supported, I also wonder how it would integrate with the NativeScript Sidekick app that is often suggested as a tool to ease the build/run process of NativeScript applications (there is no way to specify additional parameters for the tns commands that Sidekick automates; the only options available are switching between debug/release mode), but this is probably better left for another question.
Environment files are not yet supported; passing environment variables from the build command could be the viable solution for now.
But of course, you may write your own schematics if you'd like immediate support for environment files.
I did not look into sharing environment files between web and mobile yet; I do like Manoj's suggestion regarding modifying the schematics, but I'll have to cross that bridge when I get there, I guess. I might have an answer to your second question regarding Sidekick. The latest version does support a "Webpack" build option which seems to pass the --bundle parameter to tns. The caveat is that this option seems to be more sensitive to TypeScript errors, even relatively benign ones, so you have to be careful and make sure to fix them all prior to building. In my case I had to lock the version of @types/jasmine in package.json to "2.8.6" in order to avoid an incompatibility between that and the version of TypeScript that Sidekick's cloud solution is using. Another hint is to check "Clean Build" after npm dependency changes are made. Good luck!

What's the difference between the MariaDB glibc and non-glibc tarballs?

I want to download MariaDB as a gzipped tarball, but I found that there are several files that could be downloaded, such as mariadb-10.2.6-linux-x86_64.tar.gz, mariadb-10.2.6-linux-glibc_214-x86_64.tar.gz (requires GLIBC_2.14+) and mariadb-10.2.6-linux-systemd-x86_64.tar.gz (for systems with systemd).
I don't know what the difference between them is.
First, please note that the tarballs are generic, but not universal. Even though there seem to be many of them, there are still far fewer than there are supported systems and flavors. None of the tarballs is guaranteed to work on any particular system. The most common problem is the absence of certain libraries which the MariaDB server, client programs or plugins are linked with.
Back to the actual question, the main difference is highlighted in the package names/comments.
mariadb-10.2.6-linux-glibc_214-x86_64.tar.gz (requires GLIBC_2.14+) -- the binaries built on reasonably modern systems. The package most likely contains more plugins/engines, because some of them have requirements for modern compilers and libraries; but it can only be run on systems that have glibc 2.14 or higher (a quick version check follows this list).
mariadb-10.2.6-linux-systemd-x86_64.tar.gz (for systems with systemd) -- the package with systemd support. It's important if you actually install the service and run it this way. If you just keep the binaries locally and start them manually, it shouldn't matter.
mariadb-10.2.6-linux-x86_64.tar.gz -- the package provided mostly for legacy/compatibility purposes, for older systems which have still not been EOL-ed. Generally it has somewhat better chances of running successfully on an arbitrary system, but you need to check whether it contains everything you need, as that might not be the case.
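If you are not sure which of these your system can run, a quick way to check the installed glibc version on the target machine (assuming a glibc-based distribution) is:

getconf GNU_LIBC_VERSION
# or
ldd --version | head -n 1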

What is "vendoring"?

What is "vendoring" exactly? How would you define this term?
Does it mean the same thing in different programming languages? Conceptually speaking, not looking at the exact implementation.
Based on this answer
Defined here for Go as:
Vendoring is the act of making your own copy of the 3rd party packages
your project is using. Those copies are traditionally placed inside
each project and then saved in the project repository.
The context of this answer is the Go language, but the concept still applies.
If your app depends on certain third-party code to be available you could declare a dependency and let your build system install the dependency for you.
If however the source of the third-party code is not very stable you could "vendor" that code. You take the third-party code and add it to your application in a more or less isolated way. If you take this isolation seriously you should "release" this code internally to your organization/working environment.
Another reason for vendoring is if you want to use certain third-party code but you want to change it a little bit (a fork in other words). You can copy the code, change it, release it internally and then let your build system install this piece of code.
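A rough sketch of what that looks like in practice for Go (the package path is a placeholder):

# copy the dependency into your tree and commit it
mkdir -p vendor/github.com/example/somepkg
cp -r "$GOPATH/src/github.com/example/somepkg/." vendor/github.com/example/somepkg/
git add vendor/ && git commit -m "Vendor somepkg"

# or, with Go modules, let the toolchain populate vendor/ from go.mod
go mod vendor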
Vendoring means putting a dependency into your project folder (vs. depending on it globally) AND committing it to the repo.
For example, running cp /usr/local/bin/node ~/yourproject/vendor/node and committing it to the repo would "vendor" the Node.js binary – all devs on the project would use this exact version. This is not commonly done for Node itself, but e.g. Yarn 2 ("Berry") is used like this (and only like this; they don't even install the binary globally).
The committing act is important. As an example, node_modules are already installed in your project but only committing them makes them "vendored". Almost nobody does that for node_modules but e.g. PnP + Zero Installs of Yarn 2 are actually built around vendoring – you commit .yarn/cache with many ZIP files into the repo.
"Vendoring" inherently brings tradeoffs between repo size (longer clone times, more data transferred, local storage requirements etc.) and reliability / reproducibility of installs.
Summarizing other, (too?) long answers:
Vendoring is hard-coding the often forked version of a dependency.
This typically involves static linking or some other copy but it doesn't have to.
Right or wrong, the term "hard-coding" has an old and bad reputation, so you won't find it near projects that openly vendor; however, I can't think of a more accurate term.
As far as I know the term comes from Ruby on Rails.
It describes a convention to keep a snapshot of the full set of dependencies in source control, in directories that contain package name and version number.
The earliest occurrence of vendor as a verb I found is the vendor everything post on err the blog (2007, a bit before the author co-founded GitHub). That post explains the motivation and how to add dependencies. As far as I understand the code and commands, there was no special tool support for calling the directory vendor at that time (patches and code snippets were floating around).
The err blog post links to earlier ones with the same convention, like this fairly minimal way to add vendor subdirectories to the Rails import path (2006).
Earlier articles referenced from the err blog, like this one (2005), seemed to use the lib directory, which didn't make the distinction between own code and untouched snapshots of dependencies.
The goal of vendoring is more reproducibility, better deployment, the kind of things people currently use containers for; as well as better transparency through source control.
Other languages seem to have picked up the concept as is; one related concept is lockfiles, which define the same set of dependencies in a more compact form, involving hashes and remote package repositories. Lockfiles can be used to recreate the vendor directory and detect any alterations. The lockfile concept may have come from the Ruby gems community, but don't quote me on that.
The solution we’ve come up with is to throw every Ruby dependency in vendor. Everything. Savvy? Everyone is always on the same page: we don’t have to worry about who has what version of which gem. (we know) We don’t have to worry about getting everyone to update a gem. (we just do it once) We don’t have to worry about breaking the build with our libraries. […]
The goal here is simple: always get everyone, especially your production environment, on the same page. You don’t want to guess at which gems everyone does and does not have. Right.
There’s another point lurking subtlety in the background: once all your gems are under version control, you can (probably) get your app up and running at any point of its existence without fuss. You can also see, quite easily, which versions of what gems you were using when. A real history.

kbskit build for different linux flavours

I am creating a kbskit for my Tcl executable application as follows on SuSE:
./kbs.tcl -builddir=85 -r -mk-bi -bi="itcl3.4 itk3.4 iwidgets4.0.2 img1.4.1" install kbskit8.5
cp 85/bin/kbsmk8.5-bi kbsmk8.5-bi-run
./kbsmk8.5-bi sdx.kit wrap sim -runtime kbsmk8.5-bi-run
The application will be used on several flavours of Linux like Red Hat, Ubuntu etc. I am trying my best to test it myself under many combinations. Nevertheless, I would like to know whether someone thinks this would or wouldn't work seamlessly across different platforms, since I won't be able to cover all combinations exhaustively.
A Linux/x86 kbskit is at least reasonable to run on that collection of platforms. Unfortunately, the only way to be sure is to try. It should work, but if your script refers to files in a particular location and another platform (or deployment!) puts them elsewhere, then things will fail. The other thing that might go wrong is if there are significant incompatibilities in the small number of system libraries that Tcl uses, especially the C library; I do not know whether such problems exist, but I suspect they're not a major problem in practice.
You can try using the platform package (a standard part of Tcl since at least 8.5) to report what platform you're dealing with. That's the usual level of granularity you need to pay attention to.
package require platform
puts [platform::identify]

What should NOT be under source control?

It would be nice to have a more or less complete list over what files and/or directories that shouldn't (in most cases) be under source control. What do you think should be excluded?
Suggestions so far:
In general
Config files with sensitive information (passwords, private keys etc.)
Thumbs.db, .DS_Store and desktop.ini
Editor backups: *~ (emacs)
Generated files (for instance DoxyGen output)
C#
bin\*
obj\*
*.exe
Visual Studio
*.suo
*.ncb
*.user
*.aps
*.cachefile
*.backup
_UpgradeReport_Files
Java
*.class
Eclipse
I don't know, and this is what I'm looking for right now :-)
Python
*.pyc
Temporary files
.*.sw?
*~
Anything that is generated. Binary, bytecode, code/documents generated from XML.
From my commenters, exclude:
Anything generated by the build, including code documentations (doxygen, javadoc, pydoc, etc.)
But include:
3rd party libraries that you don't have the source for OR don't build.
FWIW, at my work for a very large project, we have the following under ClearCase:
All original code
Qt source AND built debug/release
(Terribly outdated) specs
We do not have built modules for our software. A complete binary is distributed every couple weeks with the latest updates.
OS specific files, generated by their file browsers such as
Thumbs.db and .DS_Store
Some other typical Visual Studio files/folders are
*.cachefile
*.backup
_UpgradeReport_Files
My Tortoise global ignore pattern, for example, looks like this:
bin obj *.suo *.user *.cachefile *.backup _UpgradeReport_Files
Files that get built should not be checked in.
I would approach the problem a different way; what things should be included in source control? You should only source control those files that:
( need revision history OR are created outside of your build but are part of the build, install, or media ) AND
can't be generated by the build process you control AND
are common to all users that build the product (no user config)
The list includes things like:
source files
make, project, and solution files
other build tool configuration files (not user related)
3rd party libraries
pre-built files that go on the media like PDFs & documents
documentation
images, videos, sounds
description files like WSDL, XSL
Sometimes a build output can be a build input. For example, an obfuscation rename file may be an output and an input to keep the same renaming scheme. In this case, use the checked-in file as the build input and put the output in a different file. After the build, check out the input file and copy the output file into it and check it in.
The problem with using an exclusion list is that you will never know all the right exclusions and might end up source controlling something that shouldn't be source controlled.
As Corey D has said, anything that is generated, and specifically anything generated by the build process and development environment, is a good candidate. For instance:
Binaries and installers
Bytecode and archives
Documents generated from XML and code
Code generated by templates and code generators
IDE settings files
Backup files generated by your IDE or editor
Some exceptions to the above could be:
Images and video
Third party libraries
Team specific IDE settings files
Take third-party libraries: if you need to ship one, or your build depends on one, it wouldn't be unreasonable to put it under source control, especially if you don't have the source. Also consider that some source control systems aren't very efficient at storing binary blobs, and you probably won't be able to take advantage of the system's diff tools for those files.
Paul also makes a great comment about generated files, and you should check out his answer:
Basically, if you can't reasonably expect a developer to have the exact version of the exact tool they need, there is a case for putting the generated files in version control.
With all that being said, ultimately you'll need to consider what you put under source control on a case-by-case basis. Defining a hard list of what to put and what not to put under it will only work for some, and probably only for so long. And of course, the more files you add to source control, the longer it will take to update your working copy.
Anything that can be generated by the IDE, build process or binary executable process.
An exception:
4 or 5 different answers have said that generated files should not go under source control. That's not quite true.
Files generated by specialist tools may belong in source control, especially if particular versions of those tools are necessary.
Examples:
parsers generated by bison/yacc/antlr,
autotools files such as configure or Makefile.in, created by autoconf, automake, libtool etc,
translation or localization files,
files generated by expensive tools, where it might be cheaper to install those tools on only a few machines.
Basically, if you can't reasonably expect a developer to have the exact version of the exact tool they need, there is a case for putting the generated files in version control.
This exception is discussed by the svn guys in their best practices talk.
Temp files from editors.
.*.sw?
*~
etc.
desktop.ini is another Windows file I've seen sneak in.
Config files that contain passwords or any other sensitive information.
Actual config files, such as web.config in ASP.NET, because people can have different settings. Usually the way I handle this is by having a web.config.template that is in SVN. People get it, make the changes they want and rename it to web.config.
Aside from this and what you said, be careful of sensitive files containing passwords (for instance).
Avoid all the annoying files generated by Windows (Thumbs.db) or Mac OS (.DS_Store).
*.bak produced by WinMerge.
additionally:
Visual Studio
*.ncb
The best way I've found to think about it is as follows:
Pretend you've got a brand-new, store-bought computer. You install the OS and updates; you install all your development tools including the source control client; you create an empty directory to be the root of your local sources; you do a "get latest" or whatever your source control system calls it to fetch out clean copies of the release you want to build; you then run the build (fetched from source control), and everything builds.
This thought process tells you why certain files have to be in source control: all of those necessary for the build to work on a clean system. This includes .designer.cs files, the outputs of T4 templates, and any other artifact that the build will not create.
Temp files, config for anything other than global development, and sensitive information.
Things that don't go into source control come in 3 classes
Things totally unrelated to the project (obviously)
Things that can be found on installation media, and are never changed (e.g. 3rd-party APIs).
Things that can be mechanically generated, via your build process, from things that are in source control (or from things in class 2).
Whatever the language:
cache files
generally, imported files should not go in either (like images uploaded by users in a web application)
temporary files; even the ones generated by your OS (like Thumbs.db under Windows) or IDE
config files with passwords? Depends on who has access to the repository.
And for those who don't know about it: svn:ignore is great!
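For example, to ignore generated files in the current directory (the pattern list here is just an illustration):

svn propset svn:ignore "bin
obj
*.pyc
Thumbs.db" .
svn commit -m "Ignore generated files"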
If you have a runtime environment for your code (e.g. dependency libraries, specific compiler versions, etc.), do not put the packages into source control. My approach is brutal, but effective. I commit a makefile whose role is to download (via wget) the stuff, unpack it, and build my runtime environment.
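A minimal sketch of what such a makefile's recipe might run (the URL, package name and install prefix are placeholders):

# typically hidden behind a makefile target such as "make deps"
wget https://example.org/somelib-1.2.3.tar.gz
tar xzf somelib-1.2.3.tar.gz
( cd somelib-1.2.3 && ./configure --prefix=/opt/myapp-runtime && make && make install )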
I have a particular .c file that does not go in source control.
The rule is nothing in source control that is generated during the build process.
The only known exception is if a tool requires an older version of itself to build (bootstrap problem). In that case you will need a known good bootstrap copy in source control so you can build from blank.
Going out on a limb here, but I believe that if you use task lists in Visual Studio, they are kept in the .suo file. This may not be a reason to keep them in source control, but it is a reason to keep a backup somewhere, just in case...
A lot of time has passed since this question was asked, and I think a lot of the answers, while relevant, don't have hard details on .gitignore at a per-language or per-IDE level.
GitHub came out with a very useful, community-collaborated list of .gitignore files for all sorts of projects and IDEs that is worth taking a look at.
Here's a link to that git repo: https://github.com/github/gitignore
To answer the question, here are the related examples for:
C# -> see Visual Studio
Visual Studio
Java
Eclipse
Python
There are also OS-specific .gitignore files:
Windows
OS X
Linux
So, assuming you're running Windows and using Eclipse, you can just concatenate Eclipse.gitignore and Windows.gitignore into a .gitignore file in the top-level directory of your project. Very nifty stuff.
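For instance, assuming a local clone of that repo (the OS- and editor-specific files live under its Global/ directory, so the exact paths may differ):

cat gitignore/Global/Eclipse.gitignore gitignore/Global/Windows.gitignore > .gitignore
git add .gitignore && git commit -m "Add .gitignore for Eclipse on Windows"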
Don't forget to add the .gitignore to your repo and commit it!
Chances are, your IDE already handles this for you. Visual Studio does anyway.
And for the .gitignore files: if you see any files or patterns missing in a particular .gitignore, you can open a PR on that file with the proposed change. Take a look at the commit and pull request trackers for ideas.
I always use www.gitignore.io to generate a proper .gitignore file.
Opinion: everything can be in source control, if you need to, unless it brings significant repository overhead such as frequently changing or large blobs.
3rd-party binaries and hard-to-generate (in terms of time) files that speed up your deployment process are all OK.
The main purpose of source control is to match one coherent system state to a revision number. If it were possible, I'd freeze the entire universe with the code: build tools and the target operating system.