I'm using a provided OpenShift PaaS deployment of a Grafana app image.
I'd like to add a plugin to that Grafana. This is done either by adding certain files to the file system or by invoking a grafana-cli command.
I managed to do it manually on a single pod by accessing it through the oc CLI. What I don't know is how to make it persistent: I would like it to be applied whenever an OpenShift pod is instantiated. I found no other way than providing a custom image for that.
Is there a supported way of adding files to an existing predefined image?
Or of invoking a command on a pod after deployment? I tried the post-deployment hook, but it appears that the filesystem is not there yet (or I don't know how to use this hook).
A post-deployment lifecycle hook runs in its own container, with its own file system, not in the container of the application. You want to look at a postStart hook instead.
$ oc explain dc.spec.template.spec.containers.lifecycle
RESOURCE: lifecycle <Object>

DESCRIPTION:
     Actions that the management system should take in response to container
     lifecycle events. Cannot be updated.

     Lifecycle describes actions that the management system should take in
     response to container lifecycle events. For the PostStart and PreStop
     lifecycle handlers, management of the container blocks until the action is
     complete, unless the container process fails, in which case the handler is
     aborted.

FIELDS:
   postStart    <Object>
     PostStart is called immediately after a container is created. If the
     handler fails, the container is terminated and restarted according to its
     restart policy. Other management of the container blocks until the hook
     completes. More info:
     http://kubernetes.io/docs/user-guide/container-environment#hook-details

   preStop      <Object>
     PreStop is called immediately before a container is terminated. The
     container is terminated after the handler completes. The reason for
     termination is passed to the handler. Regardless of the outcome of the
     handler, the container is eventually terminated. Other management of the
     container blocks until the hook completes. More info:
     http://kubernetes.io/docs/user-guide/container-environment#hook-details
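Applied to the Grafana case, here is a minimal sketch of what that postStart hook could look like in the deployment config; the plugin name is only an example, and the exact grafana-cli invocation depends on the image:

spec:
  template:
    spec:
      containers:
        - name: grafana
          # ... image, ports, volumes, etc. ...
          lifecycle:
            postStart:
              exec:
                command:
                  - grafana-cli
                  - plugins
                  - install
                  - grafana-piechart-panel   # example plugin name

Because the hook runs inside the application container on every pod start, the plugin is simply reinstalled each time a pod is created, which sidesteps the persistence problem without building a custom image. Note that postStart is not guaranteed to run before the container's entrypoint, so depending on the image the Grafana process may need to pick the plugin up on its next scan or restart.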
I am using golang to programmatically create and destroy one-off Compute Engine instances using the Compute Engine API.
I can create an instance just fine, but what I'm really having trouble with is launching a container on startup.
You can do it from the Console UI.
But as far as I can tell it's extremely hard to do programmatically, especially with Container Optimized OS as the base image. I tried a startup script that does docker pull us-central1-docker.pkg.dev/project/repo/image:tag, but it fails because you need to run gcloud auth configure-docker us-central1-docker.pkg.dev first for that to work, and COOS has neither gcloud nor a package manager to install it.
All my workarounds seem hacky:
Manually create a VM template that has the desired container and create instances of the template
Put the container in an external registry like Docker Hub (not acceptable)
Use Ubuntu instead of COOS with a package manager so I can programmatically install gcloud, docker, and the container on startup
Use COOS to pull down an image from Docker Hub containing gcloud, then do some sort of docker-in-docker mount to pull the real image down
Am I missing something or is it just really cumbersome to deploy a container to a compute engine instance without using gcloud or the Console UI?
To have a Compute Engine instance start a container when the instance boots, one has to define metadata describing the container. When COOS starts, it appears to run an agent called konlet, which can be found here:
https://github.com/GoogleCloudPlatform/konlet
If we look at the documentation for this, it says:
The agent parses container declaration that is stored in VM instance metadata under gce-container-declaration key and starts the container with the declared configuration options.
Unfortunately, I haven't found any formal documentation for the structure of this metadata, but I did find two possible approaches:
Decipher the source code of konlet and break it apart to find out how the metadata maps to what is passed when the Docker container is started
or
Create a Compute Engine instance by hand with the desired container definition and start it. SSH into the instance and retrieve the current metadata. We can read about retrieving metadata here:
https://cloud.google.com/compute/docs/metadata/overview
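For the second approach, reading the declaration back from inside the instance is straightforward with the documented metadata-server endpoint:

# Run inside the instance over SSH; prints the container declaration
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/gce-container-declaration"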
It turns out, it's not too hard to pull down a container from Artifact Registry in Container Optimized OS:
Run docker-credential-gcr configure-docker --registries [region]-docker.pkg.dev
See: https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#accessing_private_images_in_or
So what you can do is put the above line, along with docker pull [image] and docker run ..., into a startup script, along the lines of the sketch below. You can specify a startup script when creating an instance using the metadata field: https://cloud.google.com/compute/docs/instances/startup-scripts/linux#api
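A minimal sketch of such a startup script; the region, project, repository, and image are all placeholders:

#! /bin/bash
# Authenticate Docker against Artifact Registry (the credential helper ships with COOS)
docker-credential-gcr configure-docker --registries us-central1-docker.pkg.dev
# Pull and run the container (image path is a hypothetical example; run flags depend on your app)
docker pull us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest
docker run -d us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest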
This seems the least hacky way of provisioning an instance with a container programmatically.
You mentioned you used docker-credential-gcr to solve your problem. I tried the same in my startup script:
docker-credential-gcr configure-docker --registries us-east1-docker.pkg.dev
But it returns:
ERROR: Unable to save docker config: mkdir /root/.docker: read-only file system
Is there some other step needed? Thanks.
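One possible workaround, a sketch under the assumption that docker-credential-gcr honours the standard DOCKER_CONFIG variable (the path is just an example of a writable location on COOS):

# /root is read-only on COOS, so point Docker's config somewhere writable
export DOCKER_CONFIG=/var/lib/docker-config   # hypothetical writable path
mkdir -p "$DOCKER_CONFIG"
docker-credential-gcr configure-docker --registries us-east1-docker.pkg.dev
# Tell docker itself to use the same config directory
docker --config "$DOCKER_CONFIG" pull us-east1-docker.pkg.dev/project/repo/image:tag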
I recently ran into the other side of these limitations (and asked a question on the topic).
Basically, I wanted to provision a COOS instance without launching a container. I was unable to, so I just launched a container from a base image and then, later in my CI/CD pipeline, Dockerized my app, uploaded it to Artifact Registry, and replaced the base image on the COOS instance with my newly built app.
The metadata I provided to launch the initial base image as a container:
spec:
  containers:
    - image: blairnangle/python3-numpy-ta-lib:latest
      name: containervm
      securityContext:
        privileged: false
      stdin: false
      tty: false
      volumeMounts: []
  restartPolicy: Always
  volumes: []
I'm a Terraform fanboi, so the metadata exists within some Terraform configuration. I have a public project with the code that achieves this if you want to take a proper look: blairnangle/dockerized-flask-on-gce.
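If you are not using Terraform, the same metadata can be attached when creating the instance with plain gcloud; a sketch, with the spec above saved to a local file (instance and file names are placeholders):

# container-spec.yaml contains the spec shown above
gcloud compute instances create my-instance \
    --image-family cos-stable --image-project cos-cloud \
    --metadata-from-file gce-container-declaration=container-spec.yaml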
I am facing the following case and haven't found a clear answer.
Preconditions:
I have a Kubernetes cluster
there are some options related to my application (for example, debug_level=Error)
there are pods running, and each of them uses that configuration (ENV, mount path, or CLI args)
later I need to change the value of some option (the same debug_level, Error -> Debug)
The Q is:
how should I notify my pods that the configuration has changed?
Earlier we could just send a HUP signal directly to the exact process, or call systemctl reload app.service.
What are the best practices for this use-case?
Thanks.
I think this is something you could achieve using sidecar containers. The sidecar could monitor for changes in the configuration and send the signal to the appropriate process. More info here: http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html
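A minimal sketch of that pattern, assuming your cluster supports shareProcessNamespace (needed so the sidecar can signal the app's process); images, names, and paths are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-reloader
spec:
  shareProcessNamespace: true        # lets the sidecar see and signal the app process
  containers:
    - name: app
      image: example/my-app:latest   # hypothetical app image
      volumeMounts:
        - name: config
          mountPath: /etc/app
    - name: config-reloader
      image: busybox:1.36
      command: ["sh", "-c"]
      args:
        - |
          # Naive poll: when the mounted config changes, HUP the app process.
          old=""
          while true; do
            new=$(md5sum /etc/app/app.conf | cut -d" " -f1)
            if [ -n "$old" ] && [ "$new" != "$old" ]; then
              pkill -HUP my-app
            fi
            old=$new
            sleep 5
          done
      volumeMounts:
        - name: config
          mountPath: /etc/app
  volumes:
    - name: config
      configMap:
        name: app-config             # hypothetical ConfigMap holding app.conf

With the config mounted from a ConfigMap, editing the ConfigMap eventually updates the mounted file, and the sidecar turns that change into the HUP you would previously have sent by hand.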
Tools like kubediff or kube-applier can compare your Kubernetes YAML files to what's running on the cluster.
https://github.com/weaveworks/kubediff
https://github.com/box/kube-applier
I need to create a VM instance in Google Compute Engine with a startup script that takes 30 minutes, but it never finishes; it stops around 10 minutes after the instance boots. Is there a timeout? Is there another alternative to accomplish what I need to do? Thanks!
Given the additional clarification in the comments:
My script downloads another script and then executes it, and what that script does is download some big files, and then compute some values based on latitude/longitude. Then, when the process is finished, the VM is destroyed.
My recommendation would be to run the large download and processing asynchronously rather than synchronously. The reason is that if it's synchronous, it's part of the VM startup (in the critical path), and the VM monitoring infrastructure notices that the VM is not completing its startup phase within a reasonable amount of time and terminates it.
Instead, take the heavy-duty processing out of the critical path and do it in the background, i.e., asynchronously.
In other words, the startup script currently probably looks like:
# Download the external script
curl [...] -o /tmp/script.sh
# Run the file download, computation, etc. and shut down the VM.
/tmp/script.sh
I would suggest converting this to:
# Download the external script
curl [...] -o /tmp/script.sh
# Run the file download, computation, etc. in the background;
# the script itself still shuts down the VM when it finishes.
nohup /tmp/script.sh &
What this does is start the heavy processing in the background, but also disconnect it from the parent process such that it is not automatically terminated when the parent process (the actual startup script) is terminated. We want the main startup script to terminate so that the entire VM startup phase is marked completed.
For more info, see the Wikipedia page on nohup.
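One refinement worth considering (the log path is just an example): redirect the background script's output to a file, so you can follow its progress after the startup phase completes:

# Capture the background script's output for later inspection
nohup /tmp/script.sh > /var/log/script.log 2>&1 &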
We have a custom plugin for Hudson which uploads the output of a build onto a remote machine. We have just started looking into using a Hudson slave to improve throughput of builds, but the projects which use the custom plugin are failing to deploy with FileNotFoundExceptions.
From what we can see, the plugin is being run on the master even when the build is happening on the slave. The file that is not being found does exist on the slave but not on the master.
Questions:
Can plugins be run on slaves? If so, how? Is there a way to identify a plugin as being 'serializable'? If Hudson slaves can't run plugins, how does the SVN checkout happen?
Some of the developers here think that the solution to this problem is to make the Hudson master's workspace a network drive and let the slave use that same workspace - is this as bad an idea as it seems to me?
Firstly, go Jenkins! ;)
Secondly, you are correct — the code is being executed on the master. This is the default behaviour of a Hudson/Jenkins plugin.
When you want to run code on a remote node, you need to get a reference to that node's VirtualChannel, e.g. via the Launcher that's probably passed into your plugin's main method.
The code to be run on the remote node should be encapsulated in a Callable — this is the part that needs to be serialisable, as Jenkins will automagically serialise it, pass it to the node via its channel, execute it and return the result.
This also hides the distinction between master and slave — even if the build is actually running on the master, the "callable" code will transparently run on the correct machine.
For example:
@Override
public boolean perform(AbstractBuild<?, ?> build, Launcher launcher,
        BuildListener listener) throws InterruptedException, IOException {
    // This method is being run on the master...

    // Define what should be run on the slave for this build
    Callable<String, IOException> task = new Callable<String, IOException>() {
        public String call() throws IOException {
            // This code will run on the build slave
            return InetAddress.getLocalHost().getHostName();
        }
    };

    // Get a "channel" to the build machine and run the task there
    String hostname = launcher.getChannel().call(task);

    // Much success...
    return true;
}
See also FileCallable, and check out the source code of other Jenkins plugins with similar functionality.
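For file-oriented work there is an analogous pattern: FilePath.act() ships a FileCallable to whichever node actually holds the file. A rough sketch (the file name is hypothetical, and newer Jenkins versions also require implementing a role-check method):

// Reads the first line of a workspace file on whichever node holds the workspace
FilePath workspace = build.getWorkspace();
String firstLine = workspace.child("output.txt").act(
        new FilePath.FileCallable<String>() {
            public String invoke(File f, VirtualChannel channel) throws IOException {
                // Runs on the node where the file actually lives
                return new BufferedReader(new FileReader(f)).readLine();
            }
        });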
I would recommend making your plugin work properly rather than using the network-share solution. :)
I'm forced to use a custom closed-source lib in my web app deployed in Tomcat 6. It is logging a lot of exceptions to my stdout log (catalina.out) via printStackTrace and then rethrowing them for me to handle. Is there a way to prevent or reroute logging of exceptions from a specific package deployed in the webapp?
e.printStackTrace() prints to the console via System.err.
In Tomcat, catalina.sh has this line, which redirects all console output to catalina.out; this applies to the Tomcat server as a whole:

>> "$CATALINA_BASE"/logs/catalina.out 2>&1 &
So in short, if you can't tinker with the source code to use log4j, you could try sending this output to another file within catalina.sh, but again that would not be package-specific as you want;
it would just bloat another file in the same manner.
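A sketch of that tweak inside catalina.sh (the target file name is just an example):

# In catalina.sh: point console output at a separate file instead of catalina.out
>> "$CATALINA_BASE"/logs/console-noise.out 2>&1 &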
How about wrapping that call in a try/catch block, catching the exception before your app dies, and logging it with log4j (or any other logging mechanism)?
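A minimal sketch of that, with hypothetical names for the library class and the method being called; note this records the rethrown exception in your own log, though the lib's own printStackTrace output still reaches stderr:

import org.apache.log4j.Logger;

public class LibWrapper {
    private static final Logger LOG = Logger.getLogger(LibWrapper.class);

    public void doWork() {
        try {
            // Hypothetical call into the closed-source library
            new ClosedSourceLib().process();
        } catch (RuntimeException e) {
            // The lib has already printed the stack trace itself; at least
            // record the failure in our own, properly routed log as well.
            LOG.error("Closed-source lib call failed", e);
            throw e;
        }
    }
}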