What's the difference between the glibc and non-glibc MariaDB tarballs?

I want to download MariaDB as a gzipped tarball, but I found that there are several files that can be downloaded, such as mariadb-10.2.6-linux-x86_64.tar.gz, mariadb-10.2.6-linux-glibc_214-x86_64.tar.gz (requires GLIBC_2.14+), and mariadb-10.2.6-linux-systemd-x86_64.tar.gz (for systems with systemd).
What is the difference between them?

First, please note that tarballs are generic, but not universal. Even though there seem to be many of them, there are still far fewer than there are supported systems and flavors. None of the tarballs is guaranteed to work on any particular system. The most common problem is the absence of certain libraries that the MariaDB server, client programs, or plugins are linked against.
Back to the actual question, the main difference is highlighted in the package names/comments.
mariadb-10.2.6-linux-glibc_214-x86_64.tar.gz (requires GLIBC_2.14+) -- binaries built on a reasonably modern system. This package most likely contains more plugins/engines, because some of them require modern compilers and libraries; but it can only be run on systems that have glibc 2.14 or higher (a quick way to check your system's version is shown after this list).
mariadb-10.2.6-linux-systemd-x86_64.tar.gz (for systems with systemd) -- the package with systemd support. It's important if you actually install the service and run it this way. If you just keep the binaries locally and start them manually, it shouldn't matter.
mariadb-10.2.6-linux-x86_64.tar.gz -- the package provided mostly for legacy/compatibility purposes, for older systems which are not yet EOL-ed. Generally it has somewhat better chances of running successfully on an arbitrary system, but you need to check whether it contains everything you need, as that might not be the case.
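If you're not sure which glibc your system provides, either of these commands should tell you on a typical Linux box (the output wording varies slightly between distributions):
ldd --version | head -n 1        # first line names the glibc version, e.g. "... 2.17"
getconf GNU_LIBC_VERSION         # prints e.g. "glibc 2.14"
If the reported version is 2.14 or higher, the glibc_214 tarball is an option; otherwise stay with the plain one.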


What is "vendoring"?

What is "vendoring" exactly? How would you define this term?
Does it mean the same thing in different programming languages? Conceptually speaking, not looking at the exact implementation.
Based on this answer, which defines it for Go as:
Vendoring is the act of making your own copy of the 3rd party packages
your project is using. Those copies are traditionally placed inside
each project and then saved in the project repository.
The context of this answer is in the Go language, but the concept still applies.
If your app depends on certain third-party code being available, you could declare a dependency and let your build system install it for you.
If, however, the source of the third-party code is not very stable, you could "vendor" that code. You take the third-party code and add it to your application in a more or less isolated way. If you take this isolation seriously, you should "release" this code internally to your organization/working environment.
Another reason for vendoring is if you want to use certain third-party code but you want to change it a little bit (a fork in other words). You can copy the code, change it, release it internally and then let your build system install this piece of code.
Vendoring means putting a dependency into your project folder (vs. depending on it globally) AND committing it to the repo.
For example, running cp /usr/local/bin/node ~/yourproject/vendor/node and committing it to the repo would "vendor" the Node.js binary – all devs on the project would use this exact version. This is not commonly done for node itself, but e.g. Yarn 2 ("Berry") is used like this (and only like this; they don't even install the binary globally).
The act of committing is important. As an example, node_modules are already installed in your project, but only committing them makes them "vendored". Almost nobody does that for node_modules, but e.g. PnP + Zero-Installs of Yarn 2 are actually built around vendoring – you commit .yarn/cache with many ZIP files into the repo.
"Vendoring" inherently brings tradeoffs between repo size (longer clone times, more data transferred, local storage requirements etc.) and reliability / reproducibility of installs.
Summarizing other, (too?) long answers:
Vendoring is hard-coding the often forked version of a dependency.
This typically involves static linking or some other copy but it doesn't have to.
Right or wrong, the term "hard-coding" has an old and bad reputation, so you won't find it near projects that openly vendor; however, I can't think of a more accurate term.
As far as I know the term comes from Ruby on Rails.
It describes a convention to keep a snapshot of the full set of dependencies in source control, in directories that contain package name and version number.
The earliest occurrence of vendor as a verb I found is the vendor everything post on err the blog (2007, a bit before the author co-founded GitHub). That post explains the motivation and how to add dependencies. As far as I understand the code and commands, there was no special tool support for calling the directory vendor at that time (patches and code snippets were floating around).
The err blog post links to earlier ones with the same convention, like this fairly minimal way to add vendor subdirectories to the Rails import path (2006).
Earlier articles referenced from the err blog, like this one (2005), seemed to use the lib directory, which didn't make the distinction between one's own code and untouched snapshots of dependencies.
The goal of vendoring is more reproducibility, better deployment, the kind of things people currently use containers for; as well as better transparency through source control.
Other languages seem to have picked up the concept as is; one related concept is lockfiles, which define the same set of dependencies in a more compact form, involving hashes and remote package repositories. Lockfiles can be used to recreate the vendor directory and detect any alterations. The lockfile concept may have come from the Ruby gems community, but don't quote me on that.
The solution we’ve come up with is to throw every Ruby dependency in vendor. Everything. Savvy? Everyone is always on the same page: we don’t have to worry about who has what version of which gem. (we know) We don’t have to worry about getting everyone to update a gem. (we just do it once) We don’t have to worry about breaking the build with our libraries. […]
The goal here is simple: always get everyone, especially your production environment, on the same page. You don’t want to guess at which gems everyone does and does not have. Right.
There’s another point lurking subtly in the background: once all your gems are under version control, you can (probably) get your app up and running at any point of its existence without fuss. You can also see, quite easily, which versions of what gems you were using when. A real history.
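Returning to the lockfile point above, here is a hedged Bundler sketch of the same "vendor everything" idea in modern Ruby (Bundler's cache commands do the copying for you):
bundle package                     # copy the locked .gem files into vendor/cache
git add Gemfile.lock vendor/cache
git commit -m "Vendor all gems"
bundle install --local             # later installs resolve from vendor/cache, no network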

list of gentoo binary packages

Installing Gentoo on my old laptop is painful work, as the weekly update can make the poor CPU extra hot.
To stick with Gentoo with less emerge effort, I decided to use binary packages for large packages, e.g. Chrome, Firefox, LibreOffice, etc.
Just wondering if there is a list of packages that provide binary ebuilds in the repo, so that I can quickly identify these packages and swap them for binary ones?
eix *-bin will give you all the -bin packages in Portage. There is also PORTAGE_BINHOST, which is probably what you are looking for.
If you have a different machine, you could prebuild binary packages for the laptop and just merge them there. Another alternative would be icecream or distcc for offloading the work to a different box, which does the compiling for you.
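As a hedged illustration (the -bin package names below exist in the main Gentoo tree, but check your repo; the binhost URL is a placeholder):
eix '*-bin'                            # list packages whose name ends in -bin
emerge --ask www-client/firefox-bin    # swap the source-built firefox for the binary build
emerge --ask app-office/libreoffice-bin
# with a binhost configured in /etc/portage/make.conf:
# PORTAGE_BINHOST="https://example.com/packages"
# emerge --ask --getbinpkg some-category/some-package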

kbskit build for different linux flavours

I am creating a kbskit for my Tcl executable application as follows on SUSE:
./kbs.tcl -builddir=85 -r -mk-bi -bi="itcl3.4 itk3.4 iwidgets4.0.2 img1.4.1" install kbskit8.5
cp 85/bin/kbsmk8.5-bi kbsmk8.5-bi-run
./kbsmk8.5-bi sdx.kit wrap sim -runtime kbsmk8.5-bi-run
The application will be used on several flavours of Linux like Red Hat, Ubuntu, etc. I am trying my best to test it myself under many combinations. Nevertheless, I would like to know whether someone thinks this would or wouldn't work seamlessly across different platforms, since I won't be able to cover all combinations exhaustively.
A Linux/x86 kbskit can reasonably be expected to run on that collection of platforms. Unfortunately, the only way to be sure is to try. It should work, but if your script refers to files in a particular location and another platform (or deployment!) puts them elsewhere, then things will fail. The other thing that might go wrong is if there are significant incompatibilities in the small number of system libraries that Tcl uses, especially the C library; I do not know whether such problems exist, but I suspect they're not a major problem in practice.
You can try using the platform package (a standard part of Tcl since at least 8.5) to report what platform you're dealing with. That's the usual level of granularity you need to pay attention to.
package require platform
puts [platform::identify]
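On a typical 64-bit Linux system this prints an identifier along the lines of linux-glibc2.17-x86_64 (the glibc number depends on the machine), which conveniently captures the C-library concern mentioned above.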

Multiple versions of Tcl

Are there any particular things to think about when building and installing (globally) a new version of Tcl from source, besides relinking /usr/local/bin/tclsh and wish to the new versions?
I know that the interpreter executables tclsh and wish are installed with different names, but what about the include and library files? When I build eggdrop, will it link with the latest version? How about the man pages - are the old ones overwritten by the new ones?
The usual approach for this case is to configure the build so that it's installed under a single directory (the Windows approach), say, under /opt/tcltk/8.6; a minimal build sketch follows the list below. You're then guaranteed against clashes with other versions, and deinstallation is a matter of running rm -rf on that single directory. This approach has its downsides though:
You'll have to link (some) installed third-party Tcl libraries under your new hierarchy. This is because Tcl derives the set of paths to look for libraries from its own location.
/opt/tcltk/8.6/bin won't be listed in $PATH.
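A minimal sketch of that single-directory build from the Tcl sources (the version and prefix are just the example from above):
cd tcl8.6.x/unix                     # the unix subdirectory holds the configure script
./configure --prefix=/opt/tcltk/8.6
make
make install
/opt/tcltk/8.6/bin/tclsh8.6          # run the new interpreter via its versioned name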
With certain OSes, another (possibly more sensible) approach is to do a "backport", that is, to take the source package of the required Tcl/Tk version and make it build for the installed version of the OS; then install the resulting packages in a normal way. On systems where various versions of Tcl/Tk are co-installable (for instance, Debian and its derivatives), this possibly provides the most sensible solution.
As to manual pages in the latter case, in Debian, they just end up being packaged in a separate package, installation of which is not required; so you just select one of the available documentation packages and install it.
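For Debian-style systems, the backport workflow is roughly the following (hedged; the exact source package name depends on the release):
apt-get source tcl8.6                # fetch the source package of the wanted version
apt-get build-dep tcl8.6             # install its build dependencies
cd tcl8.6-*/ && dpkg-buildpackage -us -uc
# then install the resulting .deb files with dpkg -i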
In terms of having multiple versions present, this is a normal thing to do (do this by setting the --prefix option to configure when building) and has been so for quite a while. You'll probably want to avoid having multiple patchlevels of a single version if you can, but having, say, 8.4, 8.5 and 8.6 co-installed is entirely OK. You'll want to have the different installations in different directories too, and you're right about linking the unversioned tclsh name to the one you want normally (though I just use the versioned executable name instead).
The only way to have the manpages coexist nicely is to have them installed in separate directory trees and to update the MANPATH environment variable to point to the right one (unless you've got a man executable that will take paths to manpages directly — some do, some don't — and that is hardly as convenient). If you can bear being online, we've got official HTML builds of the documentation hosted at http://www.tcl.tk/man/ which includes all significant versions going back quite a long way.
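For the MANPATH route, a small example (assuming the per-version layout from above; Tcl installs its manpages under the prefix's man directory by default):
export MANPATH=/opt/tcltk/8.6/man:$MANPATH
man tclsh                            # now resolves to the 8.6 pages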

compatibility when changing tcl/tk interpreter from ActiveTcl to tclkit

Since ActiveTcl will become a paid product, I want to change to a free interpreter like Tclkit.
What is the main difference between these two interpreters? Do I need to modify my source on a large scale, or just modify some modules?
Both are Tcl interpreters, and if you have the same version (as reported by info patchlevel) then you have the same language. There are very few differences indeed. Those differences (a quick version check is sketched after this list):
ActiveTcl comes with more third-party packages than Tclkit (though you can use kit-built libraries or build your own packages with both). This is what you'd expect from the kind of full-service Tcl distribution that it is.
Tclkit tends to come with support for fewer character sets and timezones; you can add these back in if you need them. This is because the Tclkit distribution was designed to be used in much more embedded situations (and, originally, to fit on a floppy disk; that's mostly irrelevant now that nobody has floppies any more).
There are differences in startup, library locations, etc. Of course.
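A quick sanity check from the shell (assuming a tclkit binary downloaded into the current directory; both tclsh and tclkit execute a script fed on stdin):
echo 'puts [info patchlevel]' | tclsh      # ActiveTcl's tclsh on $PATH
echo 'puts [info patchlevel]' | ./tclkit   # the standalone tclkit
If both print the same patchlevel, your scripts should behave the same, modulo the packaging differences listed above.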
That said, the commercial tools built on top of the ActiveTcl platform (notably the ActiveState TDK) can actually produce packaged software using what they term basekits, which are effectively tclkits. They use the same packaging technology, the same file format. (The name is different for branding reasons, and they might have slightly different sets of default-packaged goodies.)
Myself, I use ActiveTcl and Tclkit on the same system. (I also compile my own builds of Tcl direct from source, but then you'd expect that as I'm a developer of Tcl itself.) ActiveTcl is very convenient for when I just want to write code, and Tclkit is nice for when I'm distributing an app to other people in my organization.