I have faced the following case and haven't found a clear answer.
Preconditions:
I have kubernetes cluster
there are some options related to my application (for example debug_level=Error)
there are pods running, and each of them consumes that configuration (via ENV, a mount path, or CLI args)
later I need to change the value of some option (the same debug_level: Error -> Debug)
The Q is:
how should I notify my Pods that the configuration has changed?
Earlier we could just send a HUP signal directly to the process, or call systemctl reload app.service.
What are the best practices for this use-case?
Thanks.
I think this is something you could achieve using sidecar containers. The sidecar could monitor for changes in the configuration and send the signal to the appropriate process. More info here: http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html
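A minimal sketch of that sidecar idea, assuming the configuration is mounted from a ConfigMap and the app reloads on SIGHUP (all names, images, paths and the config key are illustrative; shareProcessNamespace lets the sidecar signal the app's process):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-watcher
spec:
  shareProcessNamespace: true
  volumes:
    - name: app-config
      configMap:
        name: app-config          # illustrative ConfigMap holding a "config" key
  containers:
    - name: app
      image: registry.example.com/my-app:latest    # illustrative image
      volumeMounts:
        - name: app-config
          mountPath: /etc/my-app
    - name: config-watcher
      image: busybox
      volumeMounts:
        - name: app-config
          mountPath: /etc/my-app
      command: ["sh", "-c"]
      args:
        - |
          # naive poll: when the mounted file changes, HUP the app process
          last=$(md5sum /etc/my-app/config | cut -d' ' -f1)
          while true; do
            sleep 10
            cur=$(md5sum /etc/my-app/config | cut -d' ' -f1)
            if [ "$cur" != "$last" ]; then
              kill -HUP "$(pidof my-app)" || true
              last="$cur"
            fi
          done

Note that the kubelet refreshes ConfigMap-backed volumes with some delay, so the change is not picked up instantly.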
Tools like kubediff or kube-applier can compare your Kubernetes YAML files to what's running on the cluster.
https://github.com/weaveworks/kubediff
https://github.com/box/kube-applier
I am using golang to programmatically create and destroy one-off Compute Engine instances using the Compute Engine API.
I can create an instance just fine, but what I'm really having trouble with is launching a container on startup.
You can do it from the Console UI.
But as far as I can tell it's extremely hard to do programmatically, especially with Container Optimized OS (COOS) as the base image. I tried a startup script that does docker pull us-central1-docker.pkg.dev/project/repo/image:tag, but it fails because you first need to run gcloud auth configure-docker us-central1-docker.pkg.dev for that to work, and COOS has neither gcloud nor a package manager to install it.
All my workarounds seem hacky:
Manually create a VM template that has the desired container and create instances of the template
Put container in external registry like docker hub (not acceptable)
Use Ubuntu instead of COOS with a package manager so I can programmatically install gcloud, docker, and the container on startup
Use COOS to pull down an image from dockerhub containing gcloud, then do some sort of docker-in-docker mount to pull it down
Am I missing something or is it just really cumbersome to deploy a container to a compute engine instance without using gcloud or the Console UI?
To have a Compute Engine instance start a container when the instance starts, one has to define metadata describing the container. When COOS starts, it appears to run an application called konlet, which can be found here:
https://github.com/GoogleCloudPlatform/konlet
If we look at the documentation for this, it says:
The agent parses container declaration that is stored in VM instance metadata under gce-container-declaration key and starts the container with the declared configuration options.
Unfortunately, I haven't found any formal documentation for the structure of this metadata, but I did find two possible approaches:
Decipher the source code of konlet and break it apart to find out how the metadata maps to what is passed when the docker container is started
or
Create a Compute Engine instance by hand with the desired container definitions and then start it. SSH into the instance and retrieve the current metadata. We can read about retrieving metadata here:
https://cloud.google.com/compute/docs/metadata/overview
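For example (a hedged sketch; instance name, zone and file name are placeholders), once you have captured that metadata into a local file, it should be possible to attach it to a new instance under the gce-container-declaration key like this:

gcloud compute instances create my-container-vm \
  --zone us-central1-a \
  --image-family cos-stable \
  --image-project cos-cloud \
  --metadata-from-file gce-container-declaration=container-declaration.yaml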
It turns out, it's not too hard to pull down a container from Artifact Registry in Container Optimized OS:
Run docker-credential-gcr configure-docker --registries [region]-docker.pkg.dev
See: https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#accessing_private_images_in_or
So what you can do is put the above line along with docker pull [image] and docker run ... into a startup script. You can specify a startup script when creating an instance using the metadata field: https://cloud.google.com/compute/docs/instances/startup-scripts/linux#api
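A hedged sketch of such a startup script (region, project, repository and image are placeholders; note the follow-up comment below about the credential helper needing a writable Docker config directory):

#! /bin/bash
# authenticate Docker against Artifact Registry, then pull and run the image
docker-credential-gcr configure-docker --registries us-central1-docker.pkg.dev
docker pull us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest
docker run -d us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest

The script can then be attached when creating the instance, e.g. gcloud compute instances create ... --metadata-from-file startup-script=startup.sh.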
This seems the least hacky way of provisioning an instance with a container programmatically.
You mentioned you used docker-credential-gcr to solve your problem. I tried the same in my startup script:
docker-credential-gcr configure-docker --registries us-east1-docker.pkg.dev
But it returns:
ERROR: Unable to save docker config: mkdir /root/.docker: read-only file system
Is there some other step needed? Thanks.
I recently ran into the other side of these limitations (and asked a question on the topic).
Basically, I wanted to provision a COOS instance without launching a container. I was unable to, so I just launched a container from a base image and then later in my CI/CD pipeline, Dockerized my app, uploaded it to Artifact Registry and replaced the base image on the COOS instance with my newly built app.
The metadata I provided to launch the initial base image as a container:
spec:
  containers:
    - image: blairnangle/python3-numpy-ta-lib:latest
      name: containervm
      securityContext:
        privileged: false
      stdin: false
      tty: false
      volumeMounts: []
  restartPolicy: Always
  volumes: []
I'm a Terraform fanboi, so the metadata exists within some Terraform configuration. I have a public project with the code that achieves this if you want to take a proper look: blairnangle/dockerized-flask-on-gce.
Where is the Openshift Master and Node Host Files in v4.6
Previously, in v3, these were hosted at:
Master host files at /etc/origin/master/master-config.yaml
Node host files at /etc/origin/node/node-config.yaml
In OCPv4 the kubelet configuration is managed dynamically, so instead of reading a configuration file on the node hosts as in OCPv3, you can check the current kubelet configuration using the following procedures.
Further information is here: Generating a file that contains the current configuration.
You can check it using the above referenced procedure (generate the configuration file) or with the oc CLI as follows.
$ oc get --raw /api/v1/nodes/${NODE_NAME}/proxy/configz | \
jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
These files no longer exist in the same form as in OCP 3. To change anything on the machines themselves, you'll need to create MachineConfigs, as CoreOS is an immutable operating system. If you change anything manually on the filesystem and reboot the machine, your changes will typically be reset.
To modify worker nodes, often the setting you are looking for can be configured via a kubeletConfig: Managing nodes - Modifying Nodes. Note that only certain settings can be changed; others cannot be changed at all.
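For example, a hedged sketch of such a KubeletConfig (the pool label and maxPods value are illustrative; it applies only to MachineConfigPools carrying the matching label):

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-kubelet-tuning
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: enabled
  kubeletConfig:
    maxPods: 500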
For the master config, it depends on what you want to do: you might change a setting via a machineConfigPool, or, for example, edit API server settings via oc edit apiserver cluster. So it depends on what you actually want to change.
(I'm still learning; apologies in advance if I've misunderstood some basics.)
In OpenShift, I have a pipeline that uses oc new-build (sourceCodeGitAddress) and oc start-build to create my deployment. If I rerun the pipeline, though, it fails because it says (rightfully so) that there is already a BuildConfig and ImageStream. Is there a better way to run this pipeline so it automatically updates / builds / pushes / etc. to the build config and image stream?
Okay, I think I got it. I think I just need a separate pipeline that just runs the start-build and it will re-pull the source code and deploy. At least, that's what I'm seeing in the logs. Please feel free to correct me!
The BuildConfig can be re-triggered via webhook triggers, or you have to exclude the new-build operation from the pipeline (or only run it when the BuildConfig does not already exist; see the sketch below).
In addition, oc apply can be rerun idempotently.
More about BC start
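A hedged sketch of an idempotent pipeline step (APP and GIT_URL are placeholders):

# only create the BuildConfig/ImageStream on the first run
if ! oc get bc "$APP" >/dev/null 2>&1; then
  oc new-build "$GIT_URL" --name "$APP"
fi
# subsequent runs just rebuild from the latest source
oc start-build "$APP" --follow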
I have an mrjob configuration that includes loading a large file from S3 into HDFS. I would like to include these commands in the configuration file, but it seems that all bootstrap commands execute on all of the nodes in the cluster. This is overkill and might also create synchronization problems.
Is there some way to include startup commands for the master node only in the mrjob configuration or is the only solution to SSH into the head node after the cluster is up to perform these operations?
Yoav
Well, you could have your steps start with a mapper and set mapred.map.tasks=1 in your jobconf. I've never tried it, but it seems like it should work.
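A hedged sketch of what that could look like in mrjob.conf (assuming the EMR runner; mapred.map.tasks is only a hint to Hadoop, and a global jobconf applies to every step unless overridden per step):

runners:
  emr:
    jobconf:
      mapred.map.tasks: 1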
Another suggestion:
Use a filesystem or zookeeper for coordination:
if get_exclusive_lock_on_resource(filesystem_path_or_zookeeper_path):
    # this node won the lock and does the expensive bit once
    do_the_expensive_bit()
    release_lock(filesystem_path_or_zookeeper_path)
else:
    # every other node just waits for the winner to finish
    while expensive_bit_not_complete():
        sleep(10)
All right, all you activemq gurus out there...
Currently activemq requires a configuration file before it runs. It appears from its debug output message:
$ ./activemq start -h
INFO: Using default configuration (you can configure options in one of these file: /etc/default/activemq /home/user_name/.activemqrc)
that you can only put it in one of those two locations. Does anybody know if this is the case? Is there some command line parameter to specify its location?
Thanks!
-roger-
Yes, it is possible. Here are 3 possible answers.
If classpath is setup properly:
activemq start xbean:myconfig.xml
activemq start xbean:file:./conf/broker1.xml
Not using the classpath:
activemq start xbean:file:C:/ActiveMQ/conf/broker2.xml
reference:
http://activemq.apache.org/activemq-command-line-tools-reference.html
I have not been able to find the answer to this and I struggled with it myself for a while, but I've found a bit of a workaround. When you use bin/activemq create, you can create a runnable instance that will have its own bin, conf, and data directories. Then you have more control over that runnable instance, and the .activemqrc becomes less important.
See this for detail on the create option : http://activemq.apache.org/unix-shell-script.html
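For example (a hedged sketch; the instance path is illustrative, and the wrapper script is typically named after the instance):

./bin/activemq create /opt/brokers/mybroker
/opt/brokers/mybroker/bin/mybroker start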
Try this:
bin/activemq start xbean:/home/user/activemq.xml
Note that if the xml file includes other files like jetty.xml, then those need to be in that directory as well.
If using a recent 5.6 SNAPSHOT you can set the env var ACTIVEMQ_CONF to point to the location where you have the config files
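For example (a sketch; the path is illustrative):

export ACTIVEMQ_CONF=/home/user/my-activemq-conf
bin/activemq start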
In the /bin/activemq script, under # CONFIGURATION # (For using instances), you can add or remove any file destinations you'd like.
Be careful though: it uses the first file it finds and ignores the others. Read more here:
Unix configuration
Happy coding!