Install fluentd on OpenShift without Elasticsearch?

On OpenShift 4.3, I want to configure fluentd to forward logs to an external syslog server. Can I install only fluentd, without installing Elasticsearch and the rest of the stack?
Thanks,
Weiren

Yes, you can install only fluentd by specifying just the collection part when deploying the ClusterLogging custom resource:
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      fluentd: {}
      type: fluentd
  managementState: Managed
Note that later versions of OpenShift also allow you to specify log forwarding (LogForwarding). More information on how to deploy ClusterLogging can be found in the documentation: https://docs.openshift.com/container-platform/4.3/logging/cluster-logging-deploying.html#cluster-logging-deploy-clo-cli_cluster-logging-deploying
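Since the question is specifically about sending logs to an external syslog server, here is a minimal sketch of what that forwarding looks like on newer releases with the ClusterLogForwarder API; the output name and the syslog URL are placeholders, not values from the question:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    # Placeholder endpoint; replace with your own syslog server
    - name: remote-syslog
      type: syslog
      url: 'udp://syslog.example.com:514'
  pipelines:
    # Send application logs to the syslog output defined above
    - name: app-to-syslog
      inputRefs:
        - application
      outputRefs:
        - remote-syslog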

Related

Using JSON Patch on Kubernetes yaml file

I'm trying to use JSON Patch on one of my Kubernetes YAML files.
apiVersion: accesscontextmanager.cnrm.cloud.google.com/v1beta1
kind: AccessContextManagerServicePerimeter
metadata:
  name: serviceperimetersample
spec:
  status:
    resources:
      - projectRef:
          external: "projects/12345"
      - projectRef:
          external: "projects/123456"
    restrictedServices:
      - "storage.googleapis.com"
    vpcAccessibleServices:
      allowedServices:
        - "storage.googleapis.com"
        - "pubsub.googleapis.com"
      enableRestriction: true
  title: Service Perimeter created by Config Connector
  accessPolicyRef:
    external: accessPolicies/0123
  description: A Service Perimeter Created by Config Connector
  perimeterType: PERIMETER_TYPE_REGULAR
I need to add another project to the perimeter (spec/status/resources).
I tried using the following command:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op": "add", "path": "/spec/status/resources/-/projectRef", "value": {"external": {"projects/01234567"}}}]'
But it resulted in error:
The request is invalid: the server rejected our request due to an error in our request
I'm pretty sure my path is not correct because of the nested structure. I'd appreciate any help on this.
Thank you.
I don't have the CustomResource you're using so I can't test this, but I think this should work:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op":"add","path":"/spec/status/resources/2","value":{"projectRef":{"external":"projects/01234567"}}}]'
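If you would rather not hard-code the array index, RFC 6902 (JSON Patch) also defines the "-" token, which appends to the end of an array and is supported by kubectl's json patch type. A sketch using the project ID from the question:
# Append a new projectRef to the end of spec.status.resources
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op":"add","path":"/spec/status/resources/-","value":{"projectRef":{"external":"projects/01234567"}}}]'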

How to scrape metrics from bitnami/mariadb helm deployment with kube-prometheus-stack

I am using the bitnami/mariadb Helm chart. I tried to enable metrics and the ServiceMonitor as shown below, in the primary and secondary sections of the values.yaml file, but the sidecar container and ServiceMonitor are not being created.
metrics:
  enabled: true
  image:
    registry: docker.io
    repository: bitnami/mysqld-exporter
    tag: 0.14.0-debian-11-r37
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9104'
  serviceMonitor:
    enabled: true
    interval: 30s
    selector:
      release: prometheus
I have enabled metrics this way in other Bitnami charts, but I am not sure why it is not working here.
K8s version: 1.21
MariaDB app version: 10.5.11-debian-10-r25
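One thing worth checking: the bitnami/mariadb chart may expect the metrics and serviceMonitor settings at the top level of values.yaml rather than nested under primary or secondary. That is an assumption about this chart's layout, so compare with the chart's own values.yaml. A sketch of that placement:
# Top-level metrics block in values.yaml (not nested under primary/secondary)
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    interval: 30s
    selector:
      release: prometheus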

KNative serving is not showing Ready after installing on Openshift

I followed the link https://docs.openshift.com/container-platform/4.1/serverless/installing-openshift-serverless.html to install Knative Serving on top of OpenShift v4.1. After installing all the OpenShift operators, control plane, member roll, etc. as described in the link, I expected to see the Serving component running by executing:
C:\Knative installation>oc get knativeserving/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'
But the above returns nothing; it just drops back to the prompt.
Below is the output of getting the Serving resource:
C:\Knative installation>oc get knativeserving/knative-serving -n knative-serving
NAME              VERSION   READY   REASON
knative-serving
C:\Knative installation>oc get knativeserving/knative-serving -n knative-serving -o yaml
apiVersion: serving.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"serving.knative.dev/v1alpha1","kind":"KnativeServing","metadata":{"annotations":{},"name":"knative-serving","namespace":"knative-serving"}}
  creationTimestamp: "2020-01-12T10:53:42Z"
  generation: 1
  name: knative-serving
  namespace: knative-serving
  resourceVersion: "63660251"
  selfLink: /apis/serving.knative.dev/v1alpha1/namespaces/knative-serving/knativeservings/knative-serving
  uid: cc4b330f-3529-11ea-83ef-0272cb600f74
What could be wrong? I believe Knative Serving did not install correctly, but I am not sure how to debug it. I uninstalled and reinstalled several times, with no luck.
I also tried to proceed and deploy a service with Knative Serving (ref: https://docs.openshift.com/container-platform/4.1/serverless/getting-started-knative-services.html), but applying the very first resource already fails.
service.yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Go Sample v1"
Applying service.yaml returns an error:
C:\start Knative service> oc apply --filename service.yaml
error: unable to recognize "service.yaml": no matches for kind "Service" in version "serving.knative.dev/v1alpha1"
Any help is appreciated. Thanks.
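A minimal sketch of commands for narrowing this down, assuming the standard knative-serving namespace created by the OpenShift Serverless operator:
# Check whether the Serving control-plane pods ever started
oc get pods -n knative-serving
# Check whether the Knative CRDs (including services.serving.knative.dev) were registered
oc get crd | grep knative
# Look at the operator's view of the KnativeServing install
oc describe knativeserving knative-serving -n knative-serving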

Kubernetes installation error in flannel step

I am installing Kubernetes using kubeadm on a GCP CentOS VM, and I am getting the following error in the flannel step.
Error:
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
What changes should I make in order to fix this?
Use the flannel YAML from the official documentation:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
As @suren correctly mentions, the issue is with apiVersion: extensions/v1beta1.
In the latest YAML it looks like this:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
...
That's a versioning issue between the DaemonSet manifest and your Kubernetes cluster. You are using extensions/v1beta1, but DaemonSets have been promoted to apps/v1.
If you already have the api-server running, try kubectl explain daemonset; it will tell you which apiVersion your cluster expects for DaemonSets.
If not, just download the flannel file, edit it, replace apiVersion: extensions/v1beta1 with apiVersion: apps/v1, and it should work.
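For example, a quick local edit along those lines (a sketch; note that apps/v1 DaemonSets also require spec.selector.matchLabels, which the newer flannel manifests shown above already include):
# Download the manifest, switch the DaemonSet apiVersion, then apply it
curl -LO https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
sed -i 's|extensions/v1beta1|apps/v1|g' kube-flannel.yml
kubectl apply -f kube-flannel.yml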

OpenShift freezes at nextjs build phase with message "Creating an optimized production build ..."

I'm trying OpenShift Online with the two free evaluation months, but my evaluation stalled very early: I'm unable to get my app online because the build freezes at the compile phase. It seems to be a RAM issue, but since I don't have control over the RAM, does anyone have suggestions?
These are the logs just before it hangs up:
added 915 packages from 491 contributors and audited 8414 packages in 63.205s
found 0 vulnerabilities
npm timing npm Completed in 63927ms
npm info ok
---> Building in production mode
npm info it worked if it ends with ok
npm info using npm@6.9.0
npm info using node@v10.16.0
npm info lifecycle weally@0.1.0~prebuild: weally@0.1.0
npm info lifecycle weally@0.1.0~build: weally@0.1.0
> weally@0.1.0 build /opt/app-root/src
> next build
Creating an optimized production build ...
The process has been running for three days now. It seems to be doing something, because the processor graph shows ups and downs, but memory is locked at 530 MB.
Here's my build config:
kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
  name: weallynode
  namespace: weally
  selfLink: /apis/build.openshift.io/v1/namespaces/weally/buildconfigs/weallynode
  uid: 8c6cf6ec-aa2d-11e9-9f6f-0a580a810073
  resourceVersion: '1319487'
  creationTimestamp: '2019-07-19T14:00:21Z'
  labels:
    app: weallynode
spec:
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: 'weallynode:latest'
  resources: {}
  successfulBuildsHistoryLimit: 5
  failedBuildsHistoryLimit: 2
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        namespace: openshift
        name: 'nodejs:10'
  postCommit: {}
  source:
    type: Git
    git:
      uri: 'https://1vu@bitbucket.org/1vu/weally.git'
    sourceSecret:
      name: weallygit
  triggers:
    - type: ImageChange
      imageChange:
        lastTriggeredImageID: >-
          image-registry.openshift-image-registry.svc:5000/openshift/nodejs@sha256:9dce2f60b87b2eea351213c3d142716c0a70c894df8b3d2d4425b4933b8f6221
    - type: ConfigChange
  runPolicy: Serial
status:
  lastVersion: 6
Could you check the resources section of your BuildConfig? Refer to Setting Build Resources for more details.
apiVersion: "v1"
kind: "BuildConfig"
metadata:
name: "sample-build"
spec:
resources:
limits:
cpu: "100m"
memory: "256Mi"
If the limits section is configured with specific sizes, you can change them.
I hope it helps.
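Since this build looks memory-starved rather than over-provisioned, raising the limit on the weallynode BuildConfig is the more likely direction; a sketch, where the 2Gi and 1-CPU figures are assumptions to experiment with, not recommendations from the answer:
# Raise the build resource limits on the existing BuildConfig
oc patch bc/weallynode -n weally --type=merge -p '{"spec":{"resources":{"limits":{"cpu":"1","memory":"2Gi"}}}}'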