What is the best way to create a custom OpenShift cartridge?
Looking at documentation and examples, I am seeing a lot of old-school compile-from-source installation of the component that the cartridge needs to run.
Some examples:
https://www.openshift.com/blogs/lightweight-http-serving-using-nginx-on-openshift
https://github.com/boekkooi/openshift-diy-nginx-php/blob/master/.openshift/action_hooks/build_nginx
https://github.com/razorinc/redis-openshift-example/blob/master/.openshift/action_hooks/build
These, and a ton of others, compile from source.
I need to create some custom cartridges on my project, but doing it this way feels wrong.
Is there any reason I can't use yum and Puppet/Augeas to do the building, instead of curl, make and sed?
Or is this the best practice? If so, why are we still doing this 2000-style?
I'll do my best to explain this as clearly as I can. Feel free to let me know if I need to explain anything in more detail.
I'm assuming you're creating a custom binary cartridge (i.e. a language cartridge such as Ruby, Python, etc.). Since none of the nodes have that binary installed on the system, the custom cartridge you're creating will need to provide that binary and its libraries.
When you install a package with yum, it's going to install files into several different directories (/etc, /usr, /var, etc.). Since you're creating a cartridge that will be copied over to several nodes, you'll need to package all of those items in a way that lets them be copied to a node and executed without having to install them to the system.
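If you want to lean on yum rather than compiling, one middle ground is to let yum/rpm do the downloading and then unpack the payload into the cartridge's own tree rather than installing it system-wide. A rough sketch (the package name is just an example and $CARTRIDGE_DIR is a placeholder for your cartridge root):
# fetch the RPM without installing it (yumdownloader ships with yum-utils)
yumdownloader --destdir /tmp nginx
# unpack its payload under the cartridge instead of /usr, /etc, ...
mkdir -p "$CARTRIDGE_DIR" && cd "$CARTRIDGE_DIR"
rpm2cpio /tmp/nginx-*.rpm | cpio -idmv
The catch is that binaries from distro RPMs usually have their install prefix (/usr, /etc) compiled in, which is one reason the examples you found build from source with a relocatable prefix.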
As for docs, I would suggest taking a look at these:
https://www.openshift.com/developers/download-cartridges
https://www.openshift.com/blogs/new-openshift-cartridge-format-part-1
https://www.openshift.com/blogs/new-openshift-cartridge-format-part-2
Both Docker and containerd provide Golang clients that expose interfaces for operations such as listing, exporting, or tagging images. How can this be done in CRI-O?
e.g.:
github.com/containerd/containerd
and
github.com/docker/docker/client
It seemed logical to me that such an option would exist for such a simple need. I searched around, and it seems to be a wanted but as yet unfulfilled feature, judging by these issues 1 2 3. There is some sense to this, since crictl was intended as a debugging tool for CRI-O and not a container management tool.
From personal experience, if you're open to switching from Docker, Podman could be an option for such operations. It's a daemonless alternative to Docker and CRI-O, and it employs other open-source tools to achieve its goals:
buildah - handles building and manipulating container images
skopeo - registry-specific tasks relating to container image handling (probably the first candidate for your use case, even by itself)
If you want to stick to the popular CLI commands, Podman is your guy; if you want to go as minimalist as possible, using skopeo directly could be an option.
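For illustration, here are the kinds of commands these tools offer for the operations you mentioned (image names and tags are placeholders):
podman images                                           # list images
podman tag localhost/myimage:latest myrepo/myimage:v1   # tag an image
podman save -o myimage.tar myrepo/myimage:v1            # export an image to a tarball
skopeo inspect docker://docker.io/library/nginx:latest  # inspect a remote image without pulling it
skopeo copy docker://docker.io/library/nginx:latest docker-archive:nginx.tar   # copy/export without a local daemon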
hope this helps you in your decision-making process ;)
The other day I was looking into whether it is possible to sync my Chef workstation with the Chef server for easy management and visualization of all cookbook components. I already tried looking for a solution and didn't find any good info on this topic. So my questions are:
Is it possible to do?
Is it a good solution? If not, can you recommend a better one?
If it's viable, how can I do it?
Normally cookbooks already live in source control, so this isn't really a typical request. You can use the knife download command to pull cookbook data back from the server, but probably not in a format you want. tl;dr: go the other direction, git -> Chef server.
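For illustration (the cookbook name is a placeholder), the two directions look roughly like this:
# server -> workstation: one-off export of what's on the server
knife download cookbooks/
# workstation (git) -> server: the usual workflow
knife cookbook upload my_cookbook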
I've been an avid user of CGI.pm since the previous millennium so I was a bit surprised when it disappeared from my old Ubuntu server when I upgraded it recently. My short-term fix was sudo cpan install CGI, but a quick web search to find out why it was missing in the first place revealed CGI::Alternatives which explains why it has gone and offers some suggestions for alternatives. For my purposes, HTML::Tiny looks best for replacing my programmatic HTML generation, but Alternatives is strangely silent on the subject of HTTP headers and CGI parameters.
I broadened my search and found lighter alternatives to CGI.pm on PerlMonks, where one response suggests CGI::Simple, but the recommendation is less than whole-hearted: "its not quite as up to date as CGI.pm".
So is CGI::Simple the way to go, or is there a better alternative?
Please don't spend time suggesting "rewrite everything using framework XXX". I really don't have the time or energy for that. I'm happy to replace all my HTML generation with HTML::Tiny, so I'm looking for something with a similar (or lower!) amount of rework to replace header() and param().
You're missing the point if you're looking for an alternative that provides header and param.
The argument for the removal of CGI.pm from core (but not from CPAN) is that you shouldn't have to deal with CGI yourself; you should be using a framework that handles this for you.
If you don't agree with this — if you're looking for an equivalent to header and param — go ahead and keep using CGI.pm.
If you do agree, CGI::Simple is no better than CGI.pm.
As others have said, there's no reason not to use CGI together with HTML::Tiny. So that's the answer to your question. For the last five years that I was using CGI, my programs all started something like:
use CGI qw[param header];
which is the approach you're talking about here.
If you wait a year or two, the plan is for the HTML generation functions to be removed from the main module, so your problems will all go away at that point.
But that's not what I'd do in your situation. I'd switch to using PSGI and Plack. You said that you don't want anyone to suggest a new framework, so I'm not going to do that. Plack isn't a framework, it's a toolbox for writing PSGI applications. Certainly, I'd use a framework like Dancer, but you don't have to. You can happily use Plack without any of the frameworks built on top of it.
You'll still get most of the advantages of PSGI. You'll be able to deploy your applications in any way you like. You'll have access to all the awesome Plack middleware. Testing your program will be far easier.
When you're using "raw" Plack, the equivalent of CGI::param is Plack::Request::parameters and the equivalent of CGI::header is Plack::Response::headers.
So there are three answers to your question.
Carry on using CGI.pm. Just stop using the HTML generation functions and replace them with HTML::Tiny.
Use raw PSGI/Plack and bring your web development into the 21st century
Use one of Perl's many great web frameworks.
Unfortunately, you don't seem to like any of those answers.
The issue with CGI.pm is not that it's going away, merely that it will no longer be distributed as part of the core Perl distribution. However, that doesn't mean you have to install it from CPAN. On your Ubuntu system you can just do:
sudo apt-get install libcgi-pm-perl
and you'll be off and running with the same old CGI you know and love :-)
The correct answer to my question is that use CGI::Simple is better than use CGI qw(header param) because it loads faster.
Answers along the lines of "Use Plack, it's the future of Perl for websites" weren't helpful to me because I didn't have time to learn a new programming paradigm or to discover how to reconfigure my web server to make it work, no matter how insistent the Plack Evangelists were that I was wrong in what I was trying to do.
I've now had a bit of time to wade through the links to documentation and presentation slides I was offered and I can see what they were getting at, but one failing in what I've read so far is the lack of a concise end-to-end working example to help get my head around things ... so here's what I knocked together to get me started (and, no, I haven't finished yet!). I hope that others beginning the journey from CGI to PSGI will find this useful to help get them underway...
First you need to install Plack. I'm running an Ubuntu 14.04 installation so it was simply a matter of running sudo apt-get install libplack-perl. The generic way is to install Task::Plack from CPAN.
Next you need to know where your cgi-bin directory is located. You ought to know already if you're a CGI die-hard! Since I'm running Apache mine is defined in /etc/apache2/conf-available/serve-cgi-bin.conf by ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/.
Now for the magic. We're going to create a CGI script that runs a PSGI app, handing it data from the CGI environment. This is good for experimentation and testing but NOT for deployment, as you don't get any of the speed benefit that PSGI can give you (for that you need something like Plack::Handler::Apache2, Plack::Handler::FCGI or mod_psgi in Apache, or a dedicated PSGI server such as Starman or Starlet, or one of the other handlers mentioned on PlackPerl.org). Create /usr/lib/cgi-bin/psgi-cgi.pl with the following contents and make it executable:-
#!/usr/bin/perl
use strict;
use warnings;
use Plack::Util;
use Plack::Handler::CGI;

# Apache's Action handler puts the path of the requested .psgi file in
# PATH_TRANSLATED; load that app and run it under the CGI adapter.
my $app = Plack::Util::load_psgi($ENV{PATH_TRANSLATED});
Plack::Handler::CGI->new->run($app);
Next we need to tell Apache to pass PSGI app files to this handler. I did this by creating /etc/apache2/conf-available/psgi-cgi.conf containing:-
Action psgi-cgi /cgi-bin/psgi-cgi.pl
AddHandler psgi-cgi .psgi
then loaded it into my Apache server by running sudo a2enconf psgi-cgi and sudo service apache2 reload. Basically you need to get these lines into your httpd.conf file and restart the server.
Finally, my first PSGI script, which I created in my server's DocumentRoot as /var/www/html/hello.psgi:-
use strict;
use warnings;
use Plack::Request;

# the PSGI application: a coderef that receives the PSGI environment hashref
my $app = sub {
    my $env = shift;
    my $req = Plack::Request->new($env);
    my $par = $req->parameters;    # Hash::MultiValue of GET and POST parameters
    return [
        200,
        [ 'Content-Type', 'text/plain' ],
        [ "Hello world!\n",
          map("$_ = " . join(", ", $par->get_all($_)) . "\n", sort keys %$par),
        ]
    ];
};
The application is a coderef which returns a 3-element arrayref: the first element is the HTTP status code, the second is the name/value pairs for the HTTP header, and the third is the body of the response (which could be generated using HTML::Tiny for a web page). The first two elements answer the question of what you need instead of the CGI::header function - nothing! (though for more complex handling you'll need Plack::Response::headers). The example also shows how to replace CGI::param - use Plack::Request::parameters, which returns a Hash::MultiValue object containing the values of URL (GET) and BODY (POST) parameters, including the ones with multiple values.
Finally, a test:-
$ wget -q -O- 'http://localhost/hello.psgi?a=1&a=2&a=3&b=1&b=4'
Hello world!
a = 1, 2, 3
b = 1, 4
I hope this is useful to other CGI die-hards in taking their first steps towards PSGI proficiency, and I hope the Plack Evangelists will acknowledge that it takes a lot of reading and comprehension to get even this far.
CGI::Minimal would be a good option; it is much lighter than CGI and CGI::Simple, but it lacks the advanced methods that CGI and CGI::Simple provide.
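For completeness, a minimal sketch of how it's used (from memory, so check the module's docs; note that it leaves header output to you):
use CGI::Minimal;
my $cgi  = CGI::Minimal->new;
my $name = $cgi->param('name');
print "Content-Type: text/plain\r\n\r\n";
print "Hello, $name\n";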
What is "vendoring" exactly? How would you define this term?
Does it mean the same thing in different programming languages? Conceptually speaking, not looking at the exact implementation.
Based on this answer, which defines it for Go as:
Vendoring is the act of making your own copy of the 3rd party packages
your project is using. Those copies are traditionally placed inside
each project and then saved in the project repository.
The context of this answer is in the Go language, but the concept still applies.
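For instance, in a module-aware Go project the copy is made and committed like this:
go mod vendor        # copies the module's dependencies into ./vendor
git add vendor
git commit -m "vendor third-party dependencies"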
If your app depends on certain third-party code to be available you could declare a dependency and let your build system install the dependency for you.
If, however, the source of the third-party code is not very stable, you could "vendor" that code. You take the third-party code and add it to your application in a more or less isolated way. If you take this isolation seriously, you should "release" this code internally to your organization/working environment.
Another reason for vendoring is if you want to use certain third-party code but you want to change it a little bit (a fork in other words). You can copy the code, change it, release it internally and then let your build system install this piece of code.
Vendoring means putting a dependency into your project folder (vs. depending on it globally) AND committing it to the repo.
For example, running cp /usr/local/bin/node ~/yourproject/vendor/node and committing it to the repo would "vendor" the Node.js binary – all devs on the project would use this exact version. This is not commonly done for Node itself, but e.g. Yarn 2 ("Berry") is used like this (and only like this; they don't even install the binary globally).
The committing act is important. As an example, node_modules may already be installed in your project, but only committing them makes them "vendored". Almost nobody does that for node_modules, but e.g. PnP + Zero-Installs in Yarn 2 are actually built around vendoring – you commit .yarn/cache, with many ZIP files, into the repo.
"Vendoring" inherently brings tradeoffs between repo size (longer clone times, more data transferred, local storage requirements etc.) and reliability / reproducibility of installs.
Summarizing other, (too?) long answers:
Vendoring is hard-coding the often forked version of a dependency.
This typically involves static linking or some other copy but it doesn't have to.
Right or wrong, the term "hard-coding" has an old and bad reputation, so you won't find it near projects that openly vendor; however, I can't think of a more accurate term.
As far as I know the term comes from Ruby on Rails.
It describes a convention to keep a snapshot of the full set of dependencies in source control, in directories that contain package name and version number.
The earliest occurrence of vendor as a verb I found is the vendor everything post on err the blog (2007, a bit before the author co-founded GitHub). That post explains the motivation and how to add dependencies. As far as I understand the code and commands, there was no special tool support for calling the directory vendor at that time (patches and code snippets were floating around).
The err blog post links to earlier ones with the same convention, like this fairly minimal way to add vendor subdirectories to the Rails import path (2006).
Earlier articles referenced from the err blog, like this one (2005), seemed to use the lib directory, which didn't make the distinction between own code and untouched snapshots of dependencies.
The goal of vendoring is more reproducibility and better deployment (the kind of things people currently use containers for), as well as better transparency through source control.
Other languages seem to have picked up the concept as is; one related concept is lockfiles, which define the same set of dependencies in a more compact form, involving hashes and remote package repositories. Lockfiles can be used to recreate the vendor directory and detect any alterations. The lockfile concept may have come from the Ruby gems community, but don't quote me on that.
The solution we’ve come up with is to throw every Ruby dependency in vendor. Everything. Savvy? Everyone is always on the same page: we don’t have to worry about who has what version of which gem. (we know) We don’t have to worry about getting everyone to update a gem. (we just do it once) We don’t have to worry about breaking the build with our libraries. […]
The goal here is simple: always get everyone, especially your production environment, on the same page. You don’t want to guess at which gems everyone does and does not have. Right.
There’s another point lurking subtlety in the background: once all your gems are under version control, you can (probably) get your app up and running at any point of its existence without fuss. You can also see, quite easily, which versions of what gems you were using when. A real history.
Are there any particular things to think about when building and installing (globally) a new version of Tcl from source, besides relinking /usr/local/bin/tclsh and wish to the new versions?
I know that the interpreter executables tclsh and wish are installed with different names, but what about the include and library files? When I build eggdrop, will it link with the latest version? How about the man pages - are the old ones overwritten by the new ones?
The usual approach for this case is to configure the build so that it's installed under a single directory (the Windows approach), say, under /opt/tcltk/8.6. You're then guaranteed against clashes with other versions and deinstallation is a matter of running rm -rf on that single directory. This approach has its downsides though:
You'll have to link (some) installed third-party Tcl libraries under your new hierarchy. This is because Tcl derives the set of paths to look for libraries from its own location.
/opt/tcltk/8.6/bin won't be listed in $PATH.
With certain OSes, another (possibly more sensible) approach is to do a "backport", that is, to take the source package of the required Tcl/Tk version and make it build for the installed version of the OS; then install the resulting packages in a normal way. On systems where various versions of Tcl/Tk are co-installable (for instance, Debian and its derivatives), this possibly provides the most sensible solution.
As to manual pages in the latter case: in Debian they just end up in a separate package whose installation is not required, so you simply select one of the available documentation packages and install it.
In terms of having multiple versions present, this is a normal thing to do (do this by setting the --prefix option to configure when building) and has been so for quite a while. You'll probably want to avoid having multiple patchlevels of a single version if you can, but having, say, 8.4, 8.5 and 8.6 co-installed is entirely OK. You'll want to have the different installations in different directories too, and you're right about linking the unversioned tclsh name to the one you want normally (though I just use the versioned executable name instead).
The only way to have the manpages coexist nicely is to have them installed in separate directory trees and to update the MANPATH environment variable to point to the right one (unless you've got a man executable that will take paths to manpages directly — some do, some don't — and that is hardly as convenient). If you can bear being online, we've got official HTML builds of the documentation hosted at http://www.tcl.tk/man/ which includes all significant versions going back quite a long way.
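For example (version numbers and paths are illustrative), a versioned side-by-side installation from the source tarball, plus the MANPATH tweak mentioned above, might look like:
cd tcl8.6.13/unix
./configure --prefix=/opt/tcl/8.6
make
make test        # optional
make install
export MANPATH=/opt/tcl/8.6/man:$MANPATH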