Uninstall of dependent operators - OpenShift

The olm-book talks about dependency resolution at install time; can you please also describe how uninstall/cleanup works?
Does OLM keep a reference count for dependent operators when one is a shared dependency?
How should the dependent operators be cleaned up, and when should this be done?
olm-book: https://operator-framework.github.io/olm-book/docs/operator-dependencies-and-requirements.html

I can explain how the operator-sdk cleanup command works when uninstalling an operator bundle/packagemanifest managed by OLM.
The first step of the process is to identify and remove the Subscription, so that we stop any upgrades or further installs. Next, we get the ClusterServiceVersion (CSV) for the operator being uninstalled from the Subscription itself. When OLM creates resources, it adds owner references/owner labels to every namespaced/cluster-scoped resource specified in the CSV. Hence, when a CSV is deleted, the OLM garbage collector identifies the dependent resources through those owner references and deletes them.
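For reference, the invocation itself is a one-liner; the package name and namespace below are placeholders, not anything from the question:

# Remove the Subscription, CSV, and related OLM resources for the
# operator package "my-operator" installed in namespace "operators".
operator-sdk cleanup my-operator --namespace operators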
If you want to clean up the operator manually, I would suggest just removing the ClusterServiceVersion referenced in the InstallPlan and leaving it to OLM and the Kubernetes garbage collector to delete the resources.
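If you script it, a minimal sketch of that manual route could look like this, assuming a Subscription named "my-operator" in namespace "operators" (both placeholders):

# Record which CSV the Subscription installed before removing it.
CSV=$(kubectl get subscription my-operator -n operators \
  -o jsonpath='{.status.installedCSV}')

# Delete the Subscription first so no upgrade re-installs anything.
kubectl delete subscription my-operator -n operators

# Deleting the CSV lets the garbage collector remove the owned resources.
kubectl delete clusterserviceversion "$CSV" -n operators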


How can I hide VM-related registry keys and processes from malware VM-detection techniques?

I built a malware analysis test lab and used Pafish to detect the analysis environment, and I want to patch some of the faults it reports. How can I hide the registry keys and processes from malware VM-detection checks?
In Windows, there are many points where the operating system enables programs to intercept calls to operating system functions (these are called "hooks"). For example, a program can "hook" the calls to the file system functions that return the entries in a directory. Normally, a program hooks a function to monitor and measure performance, or perhaps to add an additional level of validation.
A rootkit or sandbox can use a hook to check every value returned by the function, and skip any value that represents a part of the rootkit. In the case of the directory enumerator, when the next file to be returned is part of the rootkit, it is skipped - the file becomes "invisible".
Similarly, a hook on the function that returns registry values can hide a registry entry that you don't want the sandboxed app to see.

In what scenario is the --srcdir config option valid?

I am working on a configuration program - it isn't autoconf, but I'm trying (as much as possible) to get it so that ./configure files that use it can be interfaced in a similar manner to those that are made with autoconf -- and that means (as much as possible) supporting the same variable options.
Only there's one option that makes no sense to me. I mean, yes, I am fully clear on what the option means, I just can't conceive of a single scenario in which someone would be well-advised to use that option - except for one scenario in which I'm equally curious why ./configure scripts can't auto-detect the information it would provide.
The option I am referring to is the "--srcdir" option. The reason it so befuddles me is that the only scenario I can imagine in which the source-code files won't be in your present-working-directory (or relative to your present-working-directory as the configure script is programmed to expect) is if the "configure" script itself isn't in your present-working-directory - and in that one scenario, I really am unable to imagine why the "configure" script can't extrapolate the source-directory from the name it is invoked by - and instead has to have that --srcdir option to give it that information.
For example, let's say your program's source-code is located in the "awesome/software" directory. That means that, from where you are, the "configure" script would be "awesome/software/configure". Why can't the "configure" script deduce that the source-directory is "awesome/software" just from the fact that it is invoked by the name "awesome/software/configure", and instead require me to add a separate command-line option of: --srcdir=awesome/software
And if this is not the kind of scenario where one would need to specify the --srcdir option (or if it is not the only kind of such scenario), can someone describe to me any other kind of scenario where the person installing a program would be well-advised to alter the "srcdir" variable from its default?
The option I am referring to is the "--srcdir" option. ...
the only scenario I can imagine in which the source-code files won't be in your present-working-directory (or relative to your present-working-directory as the configure script is programmed to expect) is if the "configure" script itself isn't in your present-working-directory
Right.
and in that one scenario, I really am unable to imagine why the "configure" script can't extrapolate the source-directory from the name it is invoked by - and instead has to have that --srcdir option to give it that information.
I'm not sure it's required. The configure script will attempt to guess the location of srcdir:
# Find the source files, if location was not specified.
if test -z "$srcdir"; then
  ac_srcdir_defaulted=yes
  # Try the directory containing this script, then the parent directory.
  ...
So if it's in neither of those places, this will fail, hence the need for --srcdir. Maybe this is (or was?) needed where there's some kind of performance differential: the sources are stored on a "slow" drive, the build happens on a "fast" drive, and configure itself runs faster from the "fast" drive, so it needs to live there as well...
At any rate, --srcdir is just a variable assignment, so it's not hard to do.
Why can't the "configure" script deduce that the source-directory is "awesome/software" just from the fact that it is invoked by the name "awesome/software/configure"
The configure source seems to do that without specifying --srcdir, but I have not tried it.
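For what it's worth, the usual pattern looks like this, assuming the package's makefiles support out-of-tree builds; all paths here are made up:

# Out-of-tree build: configure guesses srcdir from its invocation path.
mkdir build && cd build
../awesome/software/configure

# Explicit override, for when that guess fails (say, the script was
# copied out of the source tree, or the sources live on read-only media):
sh ./configure --srcdir=/mnt/cdrom/awesome/software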

How can I prevent two Jenkins projects/builds from running concurrently?

I have two Jenkins projects that share a database. They must not be run simultaneously. Strictly speaking, there is no particular dependency between them beyond non-concurrency, but at the moment I partially manage this constraint by running one "downstream" of the other. This works most of the time, but not always. If a source-control change happens while the second is running, the first will start up again, and they'll be running concurrently and will probably both fail miserably.
This is similar, but not identical, to How to prevent certain Jenkins jobs from running simultaneously? The difference is that I don't have a "number of threads" problem -- I'm already only running at most one thread of any given project at any one time, even in the case where two (different-project) builds stomp each other. This seems to rule out all the several suggestions in that thread.
The Locks and Latches plugin should resolve your problem. Create a lock and have both jobs use the same lock. That will prevent the jobs from running concurrently.
Install the plugin in "Manage Jenkins: Manage Plugins."
Define (provide a name for) your lock(s) in "Manage Jenkins: Configure System."
For each job you want to participate in the exclusion,
in the job's "Configure: Build Environment" section, check "Locks",
and pick your lock name from the drop-down list.
The Lockable Resources Plugin. Simple and working well for me as of May 2016.
Install the plugin.
In Manage Jenkins > Configure System go to Lockable Resources Manager.
Select Add Lockable Resource.
Enter a value for the Name field and hit Save.
Warning: Do not enter spaces in Name field.
In Jenkins > job_name > Configure > General,
Select checkbox: This build requires lockable resources.
Enter the name or names in the Resources field.
Start a build.
Under build #number select Locked Resources.
You should see something like: This build has locked the following resources: resource_name - resource_description.
Start a different build which uses the same resource.
You will see the Build Queue in the Jenkins status menu showing the job name.
The hover text shows Started by, Waiting for resources resources_list, and Waiting for time.
(also resource tags/labels can be used)
(The original answer included a screenshot of the Job Configuration page for users who could not find "This build requires lockable resources": when the checkbox is not selected, you should only see the unchecked "[_] This build requires lockable resources" entry.)
EDIT: The information below is effective as of 04/10/2014.
Exclusion plugin, https://wiki.jenkins-ci.org/display/JENKINS/Exclusion-Plugin - very useful if a few builds use the same resource, e.g. a test database. All you need to do is update the configuration of all jobs using this resource, and as a result they will never run in parallel but will wait for the others to complete.
Taken from: http://www.kaczanowscy.pl/tomek/2012-07/jenkins-plugins-part-iii-towards-continuous-delivery
This plugin does block two or more jobs from running in parallel.
To test, do this for job1
Configure
Under Build Environment check "Add resource to manage exclusion."
Then Add -> New Resource -> Name -> lock
Under Build -> Add build step
Critical Block Start
Add build step -> Add whatever you want to add. (Add a sleep 15 to make sure the job runs long enough to observe the blocking; see the sketch after this list.)
Add build step -> Critical block end
Repeat the above steps for job2, making sure you use the same lock name, 'lock'.
Manually build both jobs concurrently.
Monitor the run progress under Jenkins -> Exclusion administration.
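For the placeholder build step above, an "Execute shell" step like this sketch works; the echoes are only markers for the console log:

# Keep the job busy long enough to watch the second job being blocked.
echo "entering critical section: $(date)"
sleep 15
echo "leaving critical section: $(date)"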
1 December 2021
Use the Build Blocker plugin; install it from Manage Jenkins > Plugin Manager.
For example, you have two pipelines React-build and React-tests:
Go to React-build -> Configure -> Block build.
If you don't want React-tests to run concurrently with the current React-build job, add it to the blocking list.
Regular expressions can also be used; e.g., to avoid concurrent builds for all projects starting with React-, add React-.* to the list.
Replace React-tests with any pipeline name you don't want to run in parallel; blocking can be applied with the global or node-level options.
When any blocked job is started together with the configured React-build job, it is moved to the pending state.

Tcl extensions: Life Cycle of extensions' ClientData

Non-trivial native extensions will require per-interpreter data
structures that are dynamically allocated.
I am currently using Tcl_SetAssocData, with a key corresponding
to the extension's name and an appropriate deletion routine,
to prevent this memory from leaking away.
However, Tcl_PkgProvideEx also allows one to record such
information. This information can be retrieved by
Tcl_PkgRequireEx. Associating the extension's data structures
with its package seems more natural than in the "grab-bag"
AssocData; yet the Pkg*Ex routines do not provide an
automatically invoked deletion routine. So I think I need
to stay with the AssocData approach.
For which situations were the Pkg*Ex routines designed?
Additionally, the Tcl Library allows one to install
ExitHandlers and ThreadExitHandlers. Paraphrasing the
manual, this is for flushing buffers to disk etc.
Are there any other situations requiring use of ExitHandlers?
When Tcl calls exit, are Tcl_PackageUnloadProcs called?
The whole-extension ClientData is intended for extensions that want to publish their own stub table (i.e., an organized list of functions that represent an exact ABI) that other extensions can build against. This is a very rare thing to want to do; leave it at NULL if you don't want it (and contact the Tcl core developers' mailing list directly if you do; we've got quite a bit of experience in this area). Since it is for an ABI structure, it is strongly expected to be purely static data and so doesn't need deletion. Dynamic data should be sent through a different mechanism (e.g., via the Tcl interpreter or through calling functions via the ABI).
Exit handlers (which can be registered at multiple levels) are things that you use when you have to delete some resource at an appropriate time. The typical points of interest are when an interpreter (a Tcl_Interp structure) is being deleted, when a thread is being deleted, and when the whole process is going away. What resources need to be specially deleted? Well, it's usually obvious: file handles, database handles, that sort of thing. It's awkward to answer in general as the details matter very much: ask a more specific question to get tailored advice.
However, package unload callbacks are only called in response to the unload command. Like package load callbacks, they use “special function symbol” registration, and if they are absent then the unload command will refuse to unload the package. Most packages do not use them. The use case is where there are very long-lived processes that need to have extra upgradeable functionality added to them.
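To see that from a shell, here is a sketch assuming a hypothetical extension libexample that exports both Example_Init and Example_Unload (without the latter, unload refuses to unload it):

tclsh <<'EOF'
load ./libexample[info sharedlibextension] Example
# Only the unload command below triggers Example_Unload; a plain
# exit would terminate the process without ever calling it.
unload ./libexample[info sharedlibextension]
EOF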

Mercurial command-line "API" reference?

I'm working on a Mercurial GUI client that interacts with hg.exe through the command line (the preferred high-level API, as I understand it).
However, I am having trouble determining the possible outputs of each command. I can see several outputs by simulating situations, but I was wondering if there is a complete reference of the possible outputs for each command.
For instance, for the command hg fetch, some possible outputs are:
pulling from https://User@server.com/Repo
searching for changes
no changes found
if there are no changes, or:
abort: outstanding uncommitted changes
or one of several other messages, depending on the situation.
I would like to structure my program to handle as many of these cases as possible, but it's hard for me to know in advance what they all are.
Is there a documented reference for the command-line? I have not been able to find one with The Google.
Look through the translation strings file. Then you'll know you have every message handled and be able to see what parts of it vary.
Also, fetch is just a convenience wrapper around pull/update/merge. If you're invoking Mercurial programmatically, you probably want to keep those three very different concepts separate when you run it, so you know which part failed. In your example above it's the 'update' that is failing: the 'pull' would have succeeded, and the separate 'update' failure would allow you to provide the user with a better message.
(fetch is an abomination, which is part of why it's disabled by default)
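In shell terms the decomposition looks like this (the URL is a placeholder); checking each exit status tells you which phase broke:

# Run the three phases separately instead of one "hg fetch".
hg pull https://server.example/Repo || { echo "pull failed" >&2; exit 1; }
hg update || { echo "update failed" >&2; exit 1; }
# merge only applies when the pull brought in a second head:
hg merge || { echo "merge failed or nothing to merge" >&2; exit 1; }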
Is this what you were looking for: https://www.mercurial-scm.org/wiki/MercurialBook ?
Mercurial 1.9 brings a command server: a stable (in the sense that the API doesn't change much) and low-overhead (there is no need to run an hg process for every command) way to drive Mercurial. The communication is done via a pipe.
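Starting it looks like the sketch below (/path/to/repo is a placeholder); client libraries such as python-hglib speak this protocol for you:

# Start a command server bound to one repository; it talks a
# length-prefixed protocol over stdin/stdout.
hg serve --cmdserver pipe -R /path/to/repo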