API error (500): manifest unknown: manifest unknown - openshift

It fails to pull the image when it is referenced by its SHA256 digest identifier.

Unfortunately this is a side-effect of DockerHub removing backwards compatibility for Docker 1.9 daemons. When images are pushed using Docker 1.10, pull-by-id will fail for older daemons (which includes OpenShift masters importing metadata from the Hub). You can work around this by pulling the centos image and pushing it to the internal registry.
At the current time, using Docker 1.9 on your hosts will avoid this issue.

You can apply a workaround for this issue by removing the Image Change Trigger and removing the hash from the image attribute in the container spec (a sketch of the container spec change follows the build config example below).

Modify the build config:
strategy:
  dockerStrategy:
    from:
      kind: ImageStreamTag
      name: mysql-56-centos7
Replace it with:
strategy:
  dockerStrategy:
    from:
      kind: DockerImage
      name: docker.io/centos/mysql-56-centos7:latest
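For the container spec half of the workaround, here is a minimal sketch of what dropping the digest might look like in a deployment config (the container name and image are illustrative, not taken from the question):
spec:
  template:
    spec:
      containers:
        - name: mysql
          # before (fails on older daemons): image pinned by SHA256 digest
          # image: docker.io/centos/mysql-56-centos7@sha256:<digest>
          # after: reference the image by tag instead
          image: docker.io/centos/mysql-56-centos7:latest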

Related

mkdir /.gitlab-runner: permission denied running GitLab Runner in Kubernetes deployed via Helm

I'm trying to deploy the GitLab Runner (15.7.1) onto an on-premise Kubernetes cluster and getting the following error:
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
This is occurring with both the 15.7.1 image (Ubuntu?) and the alpine3.13-v15.7.1 image. Looking at the deployment, it looks like it should be trying to use /home/gitlab-runner, but for some reason it is trying to use root (/), which is a protected directory.
Anyone else experience this issue or have a suggestion as to what to look at?
I am using the Helm chart (0.48.0) with a copy of the images from DockerHub (simply moved into a local repository, as internet access is not available from the cluster). Connectivity to GitLab appears to be working, but the error causes the overall startup to fail. Full logs are:
Registration attempt 4 of 30
Runtime platform arch=amd64 os=linux pid=33 revision=6d480948 version=15.7.1
WARNING: Running in user-mode.
WARNING: The user-mode requires you to manually start builds processing:
WARNING: $ gitlab-runner run
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
Created missing unique system ID system_id=r_Of5q3G0yFEVe
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
I have tried the 15.7.1 image, the alpine3.13-v15.7.1 image, and the gitlab-runner-ocp:amd64-v15.7.1 image and searched the values.yaml for anything relevant to the path. Looking at the deployment template, it appears that it ought to be using /home/gitlab-runner as the directory (instead of /) [though the docs suggested it was /home].
As for "what was I expecting", of course I was expecting that it would "just work" :)
So, resolved this (and other) issues with:
Updated the Helm deployment template to mount an emptyDir volume at /.gitlab-runner
[separate issue] Explicitly added builds_dir and environment [per gitlab-org/gitlab-runner#3511 (comment 114281106)]; see the sketch below.
These two steps appeared to be sufficient to get the Helm chart deployment working.
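For the second step, a minimal sketch of the runner configuration fragment, assuming the chart exposes a runners.config value for the config.toml template (the key name and both paths are assumptions, not from the original report):
runners:
  config: |
    [[runners]]
      # assumed writable path for builds; adjust to your environment
      builds_dir = "/home/gitlab-runner/builds"
      # assumed HOME override, per the linked issue comment
      environment = ["HOME=/home/gitlab-runner"]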
You can easily create and mount the emptyDir (in case you are creating the gitlab-runner with a Kubernetes manifest *.yml file):
volumes:
  - emptyDir: {}
    name: gitlab-runner
volumeMounts:
  - name: gitlab-runner
    mountPath: /.gitlab-runner
-------------------- OR --------------------
volumeMounts:
  - name: root-gitlab-runner
    mountPath: /.gitlab-runner
volumes:
  - name: root-gitlab-runner
    emptyDir:
      medium: "Memory"

Cronjob of existing Pod

I have a Django app running on OpenShift 3. I need to run certain manage.py commands on a regular basis. In OpenShift 2 I used the Cron gear, and now in OpenShift 3 I want to use the CronJob pod type.
I want to create a pod for the cronjob that uses the same source as the Django app, but does not expose it.
For example:
W1 - Django app
D1 - Postgres DB
M1 - django app for manage.py jobs, run as a cronjob pod.
Any help is appreciated.
You want to use a scheduled job.
https://docs.openshift.com/container-platform/3.5/dev_guide/cron_jobs.html
https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
https://blog.openshift.com/openshift-jobs/
Note that at this time (OpenShift 3.5), you have to use batch/v2alpha1 as the API version. Be careful of out of date documentation showing older version labels.
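A minimal sketch of a cron job manifest for that API version (the name, schedule, and image are illustrative placeholders):
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: django-manage              # illustrative name
spec:
  schedule: "*/15 * * * *"         # illustrative schedule
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: django-manage
              # placeholder; see the note on imagestreams below
              image: <registry>/<project>/<image>:latest
          restartPolicy: Never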
What I am not sure of is how you can easily reference the image associated with the existing imagestream that was produced when you used the S2I builder to build your application, if you want to use that same image. The base Kubernetes object for this expects you to refer to the image from the image registry. You would thus need to work that out by looking at the imagestream and copying the image registry IP and image details over by hand.
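As a sketch, the hand-copied reference would end up looking something like this (the registry address and project are placeholders; the real repository can be read from the imagestream's status.dockerImageRepository field via oc get imagestream <name> -o yaml):
containers:
  - name: django-manage
    # placeholder values; copy the real registry IP and repository by hand
    image: 172.30.1.1:5000/myproject/myapp:latest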
UPDATE 1
See:
https://stackoverflow.com/a/45227960/128141
for details of how, from OpenShift 3.6, you can have it resolve the imagestream name automatically. That mechanism still has alpha status in 3.6, but it does work.
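For reference, that alpha mechanism works by annotating the pod template so that bare imagestream tag names get resolved; a rough sketch, assuming the annotation name used in that release (it may differ in later versions):
jobTemplate:
  spec:
    template:
      metadata:
        annotations:
          # alpha in OpenShift 3.6: resolve imagestream tags in this pod spec
          alpha.image.policy.openshift.io/resolve-names: "*"
      spec:
        containers:
          - name: djangomanage
            image: djangomanage:latest   # resolved against the project's imagestream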
I've gotten it to work by specifying the image name in the YAML, but when I then tried to make it part of the template, I ran into an error when trying to use the batch/v1 version on this server:
Cannot create cron job "djangomanage". The API version batch/v1 for kind CronJob is not supported by this server.
My template code is:
- apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: djangomanage
  spec:
    schedule: "*/5 * * * *"
    jobTemplate:
      spec:
        template:
          spec:
            containers:
              - name: djangomanage
                image: '${NAME}:latest'
                env:
                  - name: APP_SCRIPT
                    value: "/opt/app-root/src/cron.sh"
            restartPolicy: Never
cron.sh:
python /opt/app-root/src/manage.py
You need to update the first line (the apiVersion) to this:
- apiVersion: batch/v1beta1
see link below:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#cronjob-v1beta1-batch

Container-VM Image with GPD Volumes fails with "Failed to get GCE Cloud Provider. plugin.host.GetCloudProvider returned <nil> instead"

I am currently trying to switch from the "Container-Optimized Google Compute Engine Images" (https://cloud.google.com/compute/docs/containers/container_vms) to the "Container-VM" image (https://cloud.google.com/compute/docs/containers/vm-image/#overview). In my containers.yaml, I define a volume and a container that uses the volume.
apiVersion: v1
kind: Pod
metadata:
  name: workhorse
spec:
  containers:
    - name: postgres
      image: postgres:9.5
      imagePullPolicy: Always
      volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: postgres-storage
      gcePersistentDisk:
        pdName: disk-name
        fsType: ext4
This setup worked fine with the "Container-Optimized Google Compute Engine Images", but fails with the "Container-VM" image. In the logs, I can see the following error:
May 24 18:33:43 battleship kubelet[629]: E0524 18:33:43.405470 629 gce_util.go:176]
Error getting GCECloudProvider while detaching PD "disk-name":
Failed to get GCE Cloud Provider. plugin.host.GetCloudProvider returned <nil> instead
Thanks in advance for any hint!
This happens only when the kubelet is run without the --cloud-provider=gce flag. Unless something else is going on, the problem depends on how GCP launches Container-VMs.
Please get in touch with the Google Cloud Platform team.
Note, if this happens to you when using GCE: add the --cloud-provider=gce flag to the kubelet on all your workers. This only applies to 1.2 cluster versions because, if I'm not mistaken, there is an ongoing attach/detach redesign targeted for 1.3 clusters which will move this business logic out of the kubelet.
In case anyone is interested in the attach/detach redesign, here is the corresponding GitHub issue: https://github.com/kubernetes/kubernetes/issues/20262

Can I configure scaling for an ElasticBeanstalk instance at deploy time?

Can I configure scaling for an ElasticBeanstalk instance at deploy time?
There seems to be no option to do this, so I have to go in after it is deployed and change it in the configuration settings.
If you use ebextensions, then yes. You can use autoscaling-related option settings in your .ebextensions/01-scaling.config file.
You may want to use the "MinSize" and "MaxSize" option settings in the "aws:autoscaling:asg" namespace.
Example config file:
option_settings:
  - namespace: aws:autoscaling:asg
    option_name: MinSize
    value: 3
  - namespace: aws:autoscaling:asg
    option_name: MaxSize
    value: 6
Your options are then bundled with your application, they are applied as part of environment startup, and you do not have to do a second deployment.
Documentation on option settings here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html

AWS Beanstalk Application Health Check

We use Beanstalk to deploy node applications, and it works very well. I've created a couple of config files in an .ebextensions directory to apply configuration info to our apps when we load them up. Again, this mostly works well. The one thing that does not is defining the application health check URL; I can't get it to take effect.
One odd thing about it: it seems to be the only parameter I have come across so far that has spaces in it, and I'm wondering about that. I have tried enclosing the values in quotes, just to see if that is the problem, but it still doesn't work.
Has anyone done this before, and can you tell me whether it works, and whether there is something syntactically incorrect here? As I said, the rest of the params get set correctly in Beanstalk; just the last one doesn't. Note: #environment# gets replaced by a grunt script before this gets deployed.
Here's the config file:
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: NODE_ENV
    value: #environment#
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: NodeVersion
    value: 0.10.10
  - namespace: aws:autoscaling:trigger
    option_name: LowerThreshold
    value: 40
  - namespace: aws:autoscaling:trigger
    option_name: MeasureName
    value: CPUUtilization
  - namespace: aws:autoscaling:trigger
    option_name: UpperThreshold
    value: 60
  - namespace: aws:autoscaling:trigger
    option_name: Unit
    value: Percent
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /load_balance_test
Adding this worked for me:
# .ebextensions/healthcheckurl.config
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /health
  - namespace: aws:elasticbeanstalk:environment:process:default
    option_name: HealthCheckPath
    value: /health
I discovered the second setting by running eb config, which gives a nice overview of the environment settings that can be overridden with option_settings in .ebextensions/yet-another.config files.
The spaces in this property name are weird, but it works when used with the alternative shorthand syntax for options:
option_settings:
  aws:elasticbeanstalk:application:
    Application Healthcheck URL: /
I use CloudFormation for EB, and in CF the syntax for that parameter is very strange. If that config file works the same as CF, the following string should work for you:
HTTP:80/load_balance_test
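For comparison, in a CloudFormation template that option would look roughly like this (a sketch; the resource and application names are illustrative):
Resources:
  MyEBEnvironment:                      # illustrative resource name
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: my-app           # illustrative application name
      OptionSettings:
        - Namespace: aws:elasticbeanstalk:application
          OptionName: Application Healthcheck URL
          Value: HTTP:80/load_balance_test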
If you're using Terraform, then just make sure you have spaces in the name, and it will work fine:
setting {
  namespace = "aws:elasticbeanstalk:application"
  name      = "Application Healthcheck URL"
  value     = "/api/health"
}
I just tried it, and it worked for me. Only the format specified in the original question worked for me, i.e.:
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /api/v1/health/
You also might want to set the health check type to ELB instead of the default EC2.
This is how I configured mine:
$ cat .ebextensions/0090_healthcheckurl.config
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: "ELB"
      HealthCheckGracePeriod: "600"
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /_status