Upgrading K8S cluster from v1.2.0 to v1.3.0 - json

I have 1 master and 4 minions, all running version 1.2.0, and I am planning to upgrade them to 1.3.0. I want this done with minimal downtime.
So I did the following on one minion:
systemctl stop kubelet
yum update kubernetes-1.3.0-0.3.git86dc49a.el7
systemctl start kubelet
Once I bring the service back up, I see the following error:
Mar 28 20:36:55 csdp-e2e-kubernetes-minion-6 kubelet[9902]: E0328 20:36:55.215614 9902 kubelet.go:1222] Unable to register node "172.29.240.169" with API server: the body of the request was in an unknown format - accepted media types include: application/json, application/yaml
Mar 28 20:36:55 csdp-e2e-kubernetes-minion-6 kubelet[9902]: E0328 20:36:55.217612 9902 event.go:198] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"172.29.240.169.14b01ded8fb2d07b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"172.29.240.169", UID:"172.29.240.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientDisk", Message:"Node 172.29.240.169 status is now: NodeHasSufficientDisk", Source:api.EventSource{Component:"kubelet", Host:"172.29.240.169"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63626321182, nsec:814949499, loc:(*time.Location)(0x4c8a780)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63626330215, nsec:213372890, loc:(*time.Location)(0x4c8a780)}}, Count:1278, Type:"Normal"}': 'the body of the request was in an unknown format - accepted media types include: application/json, application/yaml' (will not retry!)
Mar 28 20:36:55 csdp-e2e-kubernetes-minion-6 kubelet[9902]: E0328 20:36:55.246100 9902 event.go:198] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"172.29.240.169.14b01ded8fb2fc88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"172.29.240.169", UID:"172.29.240.169", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.29.240.169 status is now: NodeHasSufficientMemory", Source:api.EventSource{Component:"kubelet", Host:"172.29.240.169"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63626321182, nsec:814960776, loc:(*time.Location)(0x4c8a780)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63626330215, nsec:213381138, loc:(*time.Location)(0x4c8a780)}}, Count:1278, Type:"Normal"}': 'the body of the request was in an unknown format - accepted media types include: application/json, application/yaml' (will not retry!)
Is v1.2.0 incompatible with v1.3.0?
It seems the issue is a request-format incompatibility: the API server only accepts application/json and application/yaml.
From the master's standpoint:
[root@kubernetes-master ~]# kubectl get nodes
NAME             STATUS     AGE
172.29.219.105   Ready      3h
172.29.240.146   Ready      3h
172.29.240.168   Ready      3h
172.29.240.169   NotReady   3h
The node that I upgraded is in NotReady state.

As per the documentation, you must upgrade your master components (kube-scheduler, kube-apiserver, and kube-controller-manager) before your node components (kubelet, kube-proxy). That is most likely what you are hitting here: a v1.3 kubelet defaults to talking to the API server in protobuf, which a v1.2 API server does not understand, hence "the body of the request was in an unknown format - accepted media types include: application/json, application/yaml".
https://kubernetes.io/docs/getting-started-guides/ubuntu/upgrades/
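A minimal sketch of that order, assuming the same yum packaging as in the question (the kubernetes-master/kubernetes-node package names are illustrative; use whatever your repo actually provides):

# On the master first: stop the control plane, upgrade, restart
systemctl stop kube-apiserver kube-controller-manager kube-scheduler
yum update kubernetes-master-1.3.0-0.3.git86dc49a.el7   # hypothetical package name
systemctl start kube-apiserver kube-controller-manager kube-scheduler

# Then one minion at a time, to keep downtime minimal
kubectl drain 172.29.240.169        # move pods off the node
systemctl stop kubelet kube-proxy   # on that minion
yum update kubernetes-node-1.3.0-0.3.git86dc49a.el7     # hypothetical package name
systemctl start kubelet kube-proxy
kubectl uncordon 172.29.240.169     # make the node schedulable again

Draining before the upgrade is what keeps downtime minimal: pods are rescheduled onto the remaining minions while one node is being updated.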

Related

OpenShift upgrade error 4.11.x -> 4.12.2 Marking Degraded due to: unexpected on-disk state validating against rendered-worker

I'm administering a RHEL OpenShift cluster. I'm upgrading from 4.10.x -> 4.11.x -> 4.12.2.
There are 3 masters and 7 worker nodes.
All 3 masters are updated.
3 of the 8 workers are updated.
Thus far the upgrade is stuck on worker0 with:
oc logs machine-config-daemon-4bs9x -n openshift-machine-config-operator
< snip >
I0216 21:00:08.555947 3136 daemon.go:1255] Current config: rendered-worker-8ebd95b2c00a22992daf1248ebc5640f
I0216 21:00:08.555986 3136 daemon.go:1256] Desired config: rendered-worker-263c6ea5fafb6f1da35a31749a1180d7
I0216 21:00:08.555992 3136 daemon.go:1258] state: Degraded
I0216 21:00:08.566365 3136 update.go:2089] Running: rpm-ostree cleanup -r
Deployments unchanged.
I0216 21:00:08.647332 3136 update.go:2104] Disk currentConfig rendered-worker-263c6ea5fafb6f1da35a31749a1180d7 overrides node's currentConfig annotation rendered-worker-8ebd95b2c00a22992daf1248ebc5640f
I0216 21:00:08.651201 3136 daemon.go:1564] Validating against pending config rendered-worker-263c6ea5fafb6f1da35a31749a1180d7
E0216 21:00:10.291740 3136 writer.go:200] Marking Degraded due to: unexpected on-disk state validating against rendered-worker-263c6ea5fafb6f1da35a31749a1180d7: expected target osImageURL "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916d3c75fb02ee", have "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17" ("b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f")
I've had this problem before and followed the Red Hat solutions, running the following command. But this time it is failing.
oc debug node/worker0.xx.com
sh-4.4# chroot /host
sh-4.4# rpm-ostree status
State: idle
Deployments:
* db83d20cf09a263777fcca78594b16da00af8acc245d29cc2a1344abc3f0dac2
Version: 412.86.202301311551-0 (2023-01-31T15:54:05Z)
sh-4.4#
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee"
I0216 21:02:54.449270 3962714 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-821872843 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916d3c75fb02ee
I0216 21:03:48.349962 3962714 rpm-ostree.go:209] Previous pivot: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17
I0216 21:03:49.926169 3962714 rpm-ostree.go:246] No com.coreos.ostree-commit label found in metadata! Inspecting...
I0216 21:03:49.926234 3962714 rpm-ostree.go:412] Running captured: ostree refs --repo /run/mco-machine-os-content/os-content-821872843/srv/repo
error: error running ostree refs --repo /run/mco-machine-os-content/os-content-821872843/srv/repo: exit status 1
error: opening repo: opendir(/run/mco-machine-os-content/os-content-821872843/srv/repo): No such file or directory
sh-4.4#
After a reboot and a retry, I'm now getting:
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916375fb02ee"
I0217 19:10:06.928154 1443914 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-903744214 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee
error: "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee" is not a valid image reference: invalid checksum digest length
W0217 19:10:07.176459 1443914 run.go:45] nice failed: running nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-903744214 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee failed: error: "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee" is not a valid image reference: invalid checksum digest length
: exit status 1; retrying...
^C
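Note: a sha256 digest must be exactly 64 hex characters, which is what "invalid checksum digest length" is complaining about. The digest in the daemon log is 64 characters, while the one pasted into the pivot command above is shorter; a quick check:

echo -n f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916d3c75fb02ee | wc -c   # 64 - digest from the MCO log
echo -n f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916375fb02ee | wc -c     # 62 - digest used in the pivot command

Re-running the pivot with the full digest copied verbatim from the "expected target osImageURL" error message avoids that particular failure.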
I tried this:
/run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916375fb02ee"
expecting this result (from a previous upgrade problem):
sh-4.4# chroot /host
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17"
I0208 21:50:00.408235 2962835 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-3432684387 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17
I0208 21:50:29.727695 2962835 rpm-ostree.go:353] Running captured: rpm-ostree status --json
I0208 21:50:29.780350 2962835 rpm-ostree.go:261] Previous pivot: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c252d64354d207cd7fb2a6e2404e611a29bf214f63a97345dee1846055c15d8
I0208 21:50:31.456928 2962835 rpm-ostree.go:293] Pivoting to: 411.86.202301242231-0 (b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f)
I0208 21:50:31.456966 2962835 rpm-ostree.go:325] Executing rebase from repo path /run/mco-machine-os-content/os-content-3432684387/srv/repo with customImageURL pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17 and checksum b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f
I0208 21:50:31.457048 2962835 update.go:1972] Running: rpm-ostree rebase --experimental /run/mco-machine-os-content/os-content-3432684387/srv/repo:b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f --custom-origin-url pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17 --custom-origin-description Managed by machine-config-operator
0 metadata, 0 content objects imported; 0 bytes content written
Staging deployment... done
Upgraded:
NetworkManager 1:1.30.0-16.el8_4 -> 1:1.36.0-12.el8_6
< snip>
zlib 1.2.11-18.el8_4 -> 1.2.11-19.el8_6
Removed:
ModemManager-glib-1.10.8-2.el8.x86_64
libmbim-1.20.2-1.el8.x86_64
libqmi-1.24.0-1.el8.x86_64
openvswitch2.16-2.16.0-108.el8fdp.x86_64
redhat-release-coreos-410.84-2.el8.x86_64
Added:
WALinuxAgent-udev-2.3.0.2-2.el8_6.3.noarch
glibc-gconv-extra-2.28-189.5.el8_6.x86_64
libbpf-0.4.0-3.el8.x86_64
openvswitch2.17-2.17.0-67.el8fdp.x86_64
redhat-release-8.6-0.1.el8.x86_64
redhat-release-eula-8.6-0.1.el8.x86_64
shadow-utils-subid-2:4.6-16.el8.x86_64
Run "systemctl reboot" to start a reboot
sh-4.4# systemctl reboot
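Note: when the machine-config-daemon marks a node Degraded for "unexpected on-disk state", one possible way to let it recover (an assumption on my part, based on the MCD's force-file mechanism; verify against Red Hat guidance for your exact version) is to tell it to skip on-disk validation once:

oc debug node/worker0.xx.com
sh-4.4# chroot /host
sh-4.4# touch /run/machine-config-daemon-force   # MCD skips validation and re-applies the desired rendered config

The daemon should then re-pivot to the desired image on its next sync and reboot the node.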

Unable to start nginx-ingress-controller: Readiness and Liveness probes failed

I have installed it using the instructions at this link, following the "Install NGINX using NodePort" option.
When I do ks logs -f ingress-nginx-controller-7f48b8-s7pg4 -n ingress-nginx I get:
W0304 09:33:40.568799 8 client_config.go:614] Neither --kubeconfig nor --master was
specified. Using the inClusterConfig. This might not work.
I0304 09:33:40.569097 8 main.go:241] "Creating API client" host="https://10.96.0.1:443"
I0304 09:33:40.584904 8 main.go:285] "Running in Kubernetes cluster" major="1" minor="23" git="v1.23.1+k0s" state="clean" commit="b230d3e4b9d6bf4b731d96116a6643786e16ac3f" platform="linux/amd64"
I0304 09:33:40.911443 8 main.go:105] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0304 09:33:40.916404 8 main.go:115] "Enabling new Ingress features available since Kubernetes v1.18"
W0304 09:33:40.918137 8 main.go:127] No IngressClass resource with name nginx found. Only annotation will be used.
I0304 09:33:40.942282 8 ssl.go:532] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0304 09:33:40.977766 8 nginx.go:254] "Starting NGINX Ingress controller"
I0304 09:33:41.007616 8 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"1a4482d2-86cb-44f3-8ebb-d6342561892f", APIVersion:"v1", ResourceVersion:"987560", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
E0304 09:33:42.087113 8 reflector.go:138] k8s.io/client-go#v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource
E0304 09:33:43.041954 8 reflector.go:138] k8s.io/client-go#v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource
E0304 09:33:44.724681 8 reflector.go:138] k8s.io/client-go#v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource
E0304 09:33:48.303789 8 reflector.go:138] k8s.io/client-go#v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource
E0304 09:33:59.113203 8 reflector.go:138] k8s.io/client-go#v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource
E0304 09:34:16.727052 8 reflector.go:138] k8s.io/client-go#v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource
I0304 09:34:39.216165 8 main.go:187] "Received SIGTERM, shutting down"
I0304 09:34:39.216773 8 nginx.go:372] "Shutting down controller queues"
E0304 09:34:39.217779 8 store.go:178] timed out waiting for caches to sync
I0304 09:34:39.217856 8 nginx.go:296] "Starting NGINX process"
I0304 09:34:39.218007 8 leaderelection.go:243] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
I0304 09:34:39.219741 8 queue.go:78] "queue has been shutdown, failed to enqueue" key="&ObjectMeta{Name:initial-sync,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},}"
I0304 09:34:39.219787 8 nginx.go:316] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0304 09:34:39.242501 8 leaderelection.go:253] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I0304 09:34:39.242807 8 queue.go:78] "queue has been shutdown, failed to enqueue" key="&ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},}"
I0304 09:34:39.242837 8 status.go:84] "New leader elected" identity="ingress-nginx-controller-7f48b8-s7pg4"
I0304 09:34:39.252025 8 status.go:204] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-7f48b8-s7pg4" node="fbcdcesdn02"
I0304 09:34:39.255282 8 status.go:132] "removing value from ingress status" address=[]
I0304 09:34:39.255328 8 nginx.go:380] "Stopping admission controller"
I0304 09:34:39.255379 8 nginx.go:388] "Stopping NGINX process"
E0304 09:34:39.255664 8 nginx.go:319] "Error listening for TLS connections" err="http: Server closed"
2022/03/04 09:34:39 [notice] 43#43: signal process started
I0304 09:34:40.263361 8 nginx.go:401] "NGINX process has stopped"
I0304 09:34:40.263396 8 main.go:195] "Handled quit, awaiting Pod deletion"
I0304 09:34:50.263585 8 main.go:198] "Exiting" code=0
When I do ks describe pod ingress-nginx-controller-7f48b8-s7pg4 -n ingress-nginx I get:
Name: ingress-nginx-controller-7f48b8-s7pg4
Namespace: ingress-nginx
Priority: 0
Node: fxxxxxxxx/10.XXX.XXX.XXX
Start Time: Fri, 04 Mar 2022 08:12:57 +0200
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
pod-template-hash=7f48b8
Annotations: kubernetes.io/psp: 00-k0s-privileged
Status: Running
IP: 10.244.0.119
IPs:
IP: 10.244.0.119
Controlled By: ReplicaSet/ingress-nginx-controller-7f48b8
Containers:
controller:
Container ID: containerd://638ff4d63b7ba566125bd6789d48db6e8149b06cbd9d887ecc57d08448ba1d7e
Image: k8s.gcr.io/ingress-nginx/controller:v0.48.1@sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899
Image ID: k8s.gcr.io/ingress-nginx/controller@sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--election-id=ingress-controller-leader
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 04 Mar 2022 11:33:40 +0200
Finished: Fri, 04 Mar 2022 11:34:50 +0200
Ready: False
Restart Count: 61
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-7f48b8-s7pg4 (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zvcnr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-zvcnr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 23m (x316 over 178m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
Warning BackOff 8m52s (x555 over 174m) kubelet Back-off restarting failed container
Normal Pulled 3m54s (x51 over 178m) kubelet Container image "k8s.gcr.io/ingress-nginx/controller:v0.48.1@sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899" already present on machine
When I try to curl the health endpoints I get Connection refused:
The state of the pods shows that they are both not ready:
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-4hzzk 0/1 Completed 0 3h30m
ingress-nginx-controller-7f48b8-s7pg4 0/1 CrashLoopBackOff 63 (91s ago) 3h30m
I have tried to increase the values for initialDelaySeconds in /etc/nginx/nginx.conf, but when I attempt to exec into the container (ks exec -it -n ingress-nginx ingress-nginx-controller-7f48b8-s7pg4 -- bash) I also get an error: error: unable to upgrade connection: container not found ("controller")
I am not really sure where I should be looking in the overall setup.
I have installed using instructions at this link for the Install NGINX using NodePort option.
The problem is that you are using outdated k0s documentation:
https://docs.k0sproject.io/v1.22.2+k0s.1/examples/nginx-ingress/
You should use this link instead:
https://docs.k0sproject.io/main/examples/nginx-ingress/
Following the current documentation link installs the controller-v1.0.0 version on your Kubernetes cluster. This matters because the v0.48.1 controller you deployed still watches the v1beta1 Ingress API, which was removed in Kubernetes 1.22; on your v1.23.1+k0s cluster those watches fail (the "Failed to watch *v1beta1.Ingress" errors in your logs), the controller never becomes ready, and the kubelet restarts it via the failing probes.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/baremetal/deploy.yaml
The result is:
$ sudo k0s kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-dw2f4 0/1 Completed 0 11m
ingress-nginx-admission-patch-4dmpd 0/1 Completed 0 11m
ingress-nginx-controller-75f58fbf6b-xrfxr 1/1 Running 0 11m
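To confirm the API mismatch on your side, you can check which networking API versions the cluster still serves (on Kubernetes 1.23 only v1 should be listed):

$ kubectl api-versions | grep networking.k8s.io
networking.k8s.io/v1

The v1.0.0 controller speaks networking.k8s.io/v1, which is why it starts cleanly here.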

How to connect MySQL with Logstash?

This file worked well when I was on Windows, but now I must restart my project on Linux and I don't know how to do it. Below is logstash.conf.
Command: ./logstash -f logstash.conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/test"
    jdbc_user => "root"
    jdbc_password => "password"
    jdbc_driver_library => "/usr/share/java/mysql-connector-java-5.1.45.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "* * * * *"
    statement => "SELECT * FROM Pro WHERE last_modificate > :sql_last_value"
    use_column_value => true
    tracking_column => "last_modificate"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    action => "update"
    document_id => "%{id}"
    doc_as_upsert => true
    index => "blog"
    document_type => "pro"
  }
}
And below is the error:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-02-06 16:24:38.441 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-02-06 16:24:38.495 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.6.0"}
[WARN ] 2019-02-06 16:24:58.845 [Converge PipelineAction::Create<main>] elasticsearch - You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost:9200], doc_as_upsert=>true, action=>"update", index=>"blog", id=>"0d9b8021264f8db7c25bca76842096f28d088e42d8e84a573b39874bc2c38c19", document_id=>"%{id}", document_type=>"pro", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0d66fa34-7e13-432a-9405-8084af971c1a", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>false, ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[INFO ] 2019-02-06 16:24:58.963 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2019-02-06 16:25:00.378 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[WARN ] 2019-02-06 16:25:01.241 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"}
[INFO ] 2019-02-06 16:25:02.692 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2019-02-06 16:25:02.705 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2019-02-06 16:25:02.805 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[INFO ] 2019-02-06 16:25:02.881 [Ruby-0-Thread-5: :1] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2019-02-06 16:25:03.044 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2019-02-06 16:25:03.726 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5e9c3d30 run>"}
[INFO ] 2019-02-06 16:25:03.868 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-02-06 16:25:05.345 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
Wed Feb 06 16:26:05 CET 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Feb 06 16:26:06 CET 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[ERROR] 2019-02-06 16:26:06.396 [Ruby-0-Thread-15: :1] jdbc - Unable to connect to database. Tried 1 times {:error_message=>"Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'"}
{ 2014 rufus-scheduler intercepted an error:
2014 job:
2014 Rufus::Scheduler::CronJob "* * * * *" {}
2014 error:
2014 2014
2014 Sequel::DatabaseConnectionError
2014 Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'
2014 com.mysql.jdbc.SQLError.createSQLException(com/mysql/jdbc/SQLError.java:965)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:3973)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:3909)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:873)
2014 com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(com/mysql/jdbc/MysqlIO.java:1710)
2014 com.mysql.jdbc.MysqlIO.doHandshake(com/mysql/jdbc/MysqlIO.java:1226)
2014 com.mysql.jdbc.ConnectionImpl.coreConnect(com/mysql/jdbc/ConnectionImpl.java:2188)
2014 com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(com/mysql/jdbc/ConnectionImpl.java:2219)
2014 com.mysql.jdbc.ConnectionImpl.createNewIO(com/mysql/jdbc/ConnectionImpl.java:2014)
2014 com.mysql.jdbc.ConnectionImpl.<init>(com/mysql/jdbc/ConnectionImpl.java:776)
2014 com.mysql.jdbc.JDBC4Connection.<init>(com/mysql/jdbc/JDBC4Connection.java:47)
2014 java.lang.reflect.Constructor.newInstance(java/lang/reflect/Constructor.java:423)
The error is self-explanatory:
2014 Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'
Kindly check the configuration credentials. On Linux you have to go to the /usr/share/logstash directory and then run the following command:
sudo bin/logstash -f /etc/logstash/conf.d/yourfilename.conf
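A quick way to verify the credentials outside Logstash (a sketch; the test database and root user are taken from the config above) is to attempt the same login with the mysql client, or to create a dedicated, least-privilege user for the pipeline:

mysql -u root -p -h 127.0.0.1 test
-- or, in a mysql shell with sufficient privileges, create a user just for Logstash:
CREATE USER 'logstash'@'localhost' IDENTIFIED BY 'secret';
GRANT SELECT ON test.* TO 'logstash'@'localhost';
FLUSH PRIVILEGES;

The SSL warnings in the log are unrelated to the access-denied error; per the warning text itself, they can be silenced by appending ?useSSL=false to jdbc_connection_string.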

Autoscaling Deployment with custom metrics on Openshift 1.5.0

Is there any possibility to autoscale a deployment with OpenShift Origin 1.5.0 (Kubernetes 1.5.2) using custom metrics?
The Kubernetes documentation states that autoscaling with custom metrics has been supported since version 1.2. That looks true, because the OpenShift horizontal pod autoscaler (HPA) does try to gather the metrics and calculate the desired replica count. But my configuration fails to achieve this. Please help me find what I am doing wrong.
So, what happens:
I have set up metrics as recommended in the latest Origin docs (all steps passed): https://docs.openshift.org/latest/install_config/cluster_metrics.html;
I have an app, which is deployed with a Deployment kind object;
this app exposes custom metrics with an HTTP JSON endpoint;
custom metrics are being collected and stored - this is shown in the OpenShift Origin UI in the Metrics tab of the corresponding pod;
after I create the HPA, a warning about collecting custom metrics appears; it says something like 'Failed collecting custom metrics, did not recieve metrics for any ready pods';
I create the HPA with API version 1 and include the annotation alpha/target.custom-metrics.podautoscaler.kubernetes.io: '{"items":[{"name":"requests_count", "value": "10"}]}' (see the sketch after this list);
if I request the deployed heapster app through the master proxy, I receive something like this:
{
  "metadata": {},
  "items": [
    {
      "metadata": {
        "name": "resty-1722683747-kmbw0",
        "namespace": "availability-demo",
        "creationTimestamp": "2017-05-24T09:50:24Z"
      },
      "timestamp": "2017-05-24T09:50:00Z",
      "window": "1m0s",
      "containers": [
        {
          "name": "resty",
          "usage": {
            "cpu": "0",
            "memory": "2372Ki"
          }
        }
      ]
    }
  ]
}
As you can see, there really are no custom metrics in there, even though my custom metric is named requests_count.
What steps should I take to succeed in implementing custom-metrics autoscaling?
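For reference, the HPA described above as a minimal sketch (the annotation, names, and namespace are the ones from this question; minReplicas/maxReplicas are illustrative, and the scale target matches the /apis/extensions/v1beta1 .../deployments/resty/scale call in the master log below):

cat <<'EOF' | oc create -f -
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: resty
  namespace: availability-demo
  annotations:
    alpha/target.custom-metrics.podautoscaler.kubernetes.io: '{"items":[{"name":"requests_count", "value": "10"}]}'
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: resty
  minReplicas: 1
  maxReplicas: 5
EOF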
[Screenshot: custom metrics being collected and exposed via the OpenShift Console UI]
UPDATE:
In the OpenShift master log the warning looks like this:
I0524 10:17:47.537985 1 panics.go:76] GET /apis/extensions/v1beta1/namespaces/availability-demo/deployments/resty/scale: (3.379724ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.543354 1 panics.go:76] GET /api/v1/proxy/namespaces/openshift-infra/services/https:heapster:/apis/metrics/v1alpha1/namespaces/availability-demo/pods?labelSelector=app%3Dresty: (4.830135ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.553255 1 panics.go:76] GET /api/v1/namespaces/availability-demo/pods?labelSelector=app%3Dresty: (8.864864ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.559909 1 panics.go:76] GET /api/v1/namespaces/availability-demo/pods?labelSelector=app%3Dresty: (5.725342ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.560977 1 panics.go:76] PATCH /api/v1/namespaces/availability-demo/events/resty.14c14bbf8b89534c: (6.385846ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.565418 1 panics.go:76] GET /api/v1/proxy/namespaces/openshift-infra/services/https:heapster:/api/v1/model/namespaces/availability-demo/pod-list/resty-1722683747-kmbw0/metrics/custom/requests_count?start=2017-05-24T10%3A12%3A47Z: (5.015336ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.569843 1 panics.go:76] GET /api/v1/namespaces/availability-demo/pods?labelSelector=app%3Dresty: (4.040029ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.575530 1 panics.go:76] PUT /apis/autoscaling/v1/namespaces/availability-demo/horizontalpodautoscalers/resty/status: (4.894835ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.575856 1 horizontal.go:438] Successfully updated status for resty
W0524 10:17:47.575890 1 horizontal.go:104] Failed to reconcile resty: failed to compute desired number of replicas based on Custom Metrics for Deployment/availability-demo/resty: failed to get custom metric value: did not recieve metrics for any ready pods
UPDATE: I found the request that the HPA issues to heapster through the proxy to gather custom metrics. This request always returns an empty metrics array:
GET /api/v1/proxy/namespaces/openshift-infra/services/https:heapster:/api/v1/model/namespaces/availability-demo/pod-list/availability-example-1694583826-55hqh/metrics/custom/requests_count?start=2017-05-25T13%3A14%3A24Z HTTP/1.1
Host: kubernetes-master:8443
Authorization: Bearer hpa-agent-token
And it returns
{"items":[{"metrics":[],"latestTimestamp":"0001-01-01T00:00:00Z"}]}
UPDATE: It turns out that the HPA requests heapster through the proxy, and heapster in turn requests the Kubernetes "summary" API. Then the question is: why does the Kubernetes "summary" API not answer with metrics for the above-mentioned request, even though the metrics exist?
This might be a wild guess, but I had the issue myself on a self-made cluster. The two things I ran into were token issues, where the certificate of my HA master setup was not set up correctly, and another issue regarding my kube-dns. I'm not sure whether this is applicable to OpenShift.

DKIM hmailserver and NameCheap Setup

I've been trying to set up my hMailServer with DKIM.
I was following this guide -> https://www.hmailserver.com/forum/viewtopic.php?t=29402
And I created my keys with this site -> https://www.port25.com/dkim-wizard/
Domain name: linnabary.us
DomainKey Selector: dkim
Key size: 1024
I created a PEM file:
-----BEGIN RSA PRIVATE KEY-----
<key>
-----END RSA PRIVATE KEY-----
Saved it and loaded it into hmailserver
When I set this up on NameCheap I selected TXT Record, set my host as @, and put this line in, minus the key of course:
v=DKIM1; k=rsa; p=<KEY>
Now when I test with -> http://www.isnotspam.com
It says my DKIM key is as follows:
----------------------------------------------------------
DKIM check details:
----------------------------------------------------------
Result: invalid
ID(s) verified: header.From=admin@linnabary.us
Selector=
domain=
DomainKeys DNS Record=._domainkey.
I was wondering if I am making any obvious errors in my record.
Edit:
The email contains the following line:
dkim-signature: v=1; a=rsa-sha256; d=linnabary.us; s=dkim;
This is what the setup looks like on NameCheap:
And here is the next test email:
This message is an automatic response from isNOTspam's authentication verifier service. The service allows email senders to perform a simple check of various sender authentication mechanisms. It is provided free of charge, in the hope that it is useful to the email community. While it is not officially supported, we welcome any feedback you may have at .
Thank you for using isNOTspam.
The isNOTspam team
==========================================================
Summary of Results
==========================================================
SPF Check : pass
Sender-ID Check : pass
DKIM Check : invalid
SpamAssassin Check : ham (non-spam)
==========================================================
Details:
==========================================================
HELO hostname: [69.61.241.46]
Source IP: 69.61.241.46
mail-from: admin@linnabary.us
Anonymous To: ins-a64wsfm3@isnotspam.com
---------------------------------------------------------
SPF check details:
----------------------------------------------------------
Result: pass
ID(s) verified: smtp.mail=admin@linnabary.us
DNS record(s):
linnabary.us. 1799 IN TXT "v=spf1 a mx ip4:69.61.241.46 ~all"
----------------------------------------------------------
Sender-ID check details:
----------------------------------------------------------
Result: pass
ID(s) verified: smtp.mail=admin@linnabary.us
DNS record(s):
linnabary.us. 1799 IN TXT "v=spf1 a mx ip4:69.61.241.46 ~all"
----------------------------------------------------------
DKIM check details:
----------------------------------------------------------
Result: invalid
ID(s) verified: header.From=admin@linnabary.us
Selector=
domain=
DomainKeys DNS Record=._domainkey.
----------------------------------------------------------
SpamAssassin check details:
----------------------------------------------------------
SpamAssassin 3.4.1 (2015-04-28)
Result: ham (non-spam) (4.6 points, 10.0 required)
pts rule name description
---- ---------------------- -------------------------------
* 3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
* [score: 1.0000]
* -0.0 SPF_HELO_PASS SPF: HELO matches SPF record
* -0.0 SPF_PASS SPF: sender matches SPF record
* 0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
* [score: 1.0000]
* 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily
* valid
* 0.8 RDNS_NONE Delivered to internal network by a host with no rDNS
* 0.0 T_DKIM_INVALID DKIM-Signature header exists but is not valid
X-Spam-Status: Yes, hits=4.6 required=-20.0 tests=BAYES_99,BAYES_999,
DKIM_SIGNED,RDNS_NONE,SPF_HELO_PASS,SPF_PASS,T_DKIM_INVALID autolearn=no
autolearn_force=no version=3.4.0
X-Spam-Score: 4.6
To learn more about the terms used in the SpamAssassin report, please search
here: http://wiki.apache.org/spamassassin/
==========================================================
Explanation of the possible results (adapted from
draft-kucherawy-sender-auth-header-04.txt):
==========================================================
"pass"
the message passed the authentication test.
"fail"
the message failed the authentication test.
"softfail"
the message failed the authentication test, and the authentication
method has either an explicit or implicit policy which doesn't require
successful authentication of all messages from that domain.
"neutral"
the authentication method completed without errors, but was unable
to reach either a positive or a negative result about the message.
"temperror"
a temporary (recoverable) error occurred attempting to authenticate
the sender; either the process couldn't be completed locally, or
there was a temporary failure retrieving data required for the
authentication. A later retry may produce a more final result.
"permerror"
a permanent (unrecoverable) error occurred attempting to
authenticate the sender; either the process couldn't be completed
locally, or there was a permanent failure retrieving data required
for the authentication.
==========================================================
Original Email
==========================================================
From admin@linnabary.us Wed Apr 12 17:41:22 2017
Return-path: <admin@linnabary.us>
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on isnotspam.com
X-Spam-Flag: YES
X-Spam-Level: ****
X-Spam-Report:
* 3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
* [score: 1.0000]
* -0.0 SPF_HELO_PASS SPF: HELO matches SPF record
* -0.0 SPF_PASS SPF: sender matches SPF record
* 0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
* [score: 1.0000]
* 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily
* valid
* 0.8 RDNS_NONE Delivered to internal network by a host with no rDNS
* 0.0 T_DKIM_INVALID DKIM-Signature header exists but is not valid
X-Spam-Status: Yes, hits=4.6 required=-20.0 tests=BAYES_99,BAYES_999,
DKIM_SIGNED,RDNS_NONE,SPF_HELO_PASS,SPF_PASS,T_DKIM_INVALID autolearn=no
autolearn_force=no version=3.4.0
Envelope-to: ins-a64wsfm3@isnotspam.com
Delivery-date: Wed, 12 Apr 2017 17:41:22 +0000
Received: from [69.61.241.46] (helo=linnabary.us)
by localhost.localdomain with esmtp (Exim 4.84_2)
(envelope-from <admin@linnabary.us>)
id 1cyMGg-0007x2-1Q
for ins-a64wsfm3@isnotspam.com; Wed, 12 Apr 2017 17:41:22 +0000
dkim-signature: v=1; a=rsa-sha256; d=linnabary.us; s=dkim;
c=relaxed/relaxed; q=dns/txt; h=From:Subject:Date:Message-ID:To:MIME-Version:Content-Type:Content-Transfer-Encoding;
bh=Ns4aRUgWUtil4fiVnvitgeV+q1K/smEYtRGN497S5Ew=;
b=Nc2Kzrzas0QqMpWM4fnF5o5wLWlWYFxlGlAipe+85H9cwGgc4hvEKUj1UvgB6I2VHUbJ0OGN/sJO9tjWgwlGypaUuW7Q8x/iI0UtC6cn7X6ZLHT+K6A2A6MdoyR1NF4xxvqPadcmcQwnrY0Tth4ycydpQMlBCZS30sc1qUjUrN0=
Received: from [192.168.1.12] (Aurora [192.168.1.12])
by linnabary.us with ESMTPA
; Wed, 12 Apr 2017 13:41:28 -0400
To: ins-a64wsfm3@isnotspam.com
From: Admin <admin@linnabary.us>
Subject: Welcome to Linnabary
Message-ID: <8e8be6cd-6354-aeb9-b577-2b0efc25a1a1#linnabary.us>
Date: Wed, 12 Apr 2017 13:41:28 -0400
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101
Thunderbird/45.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-DKIM-Status: invalid (pubkey_unavailable)
I honestly have no idea what I should put in here in order to protect
myself from filters, so I'm just making it up as I go.
- Tad
The Host value for your TXT entry should just be dkim._domainkey. Currently your domain key is located at dkim._domainkey.linnabary.us.linnabary.us, so you're not supposed to add the domain here.
That's why the response to the test email says X-DKIM-Status: invalid (pubkey_unavailable): the public key can't be found where it is supposed to be.
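You can verify the fix from any machine with dig; after correcting the Host value, the first query should return your v=DKIM1 record and the second should return nothing:

dig +short TXT dkim._domainkey.linnabary.us
dig +short TXT dkim._domainkey.linnabary.us.linnabary.us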