How to attach a floating IP using the present environment on openstack - openstack-nova

I'm a newbie with OpenStack Heat templates. I searched but didn't find a relevant answer to my question. Here is my Heat template YAML file:
heat_template_version: newton
description: Simple template to deploy a single compute instance with an attached volume
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: instance-name
      flavor: std.cpu1ram1
      block_device_mapping_v2:
        - device_name: vda
          image: RHEL-7.4
          volume_size: 30
          delete_on_termination: true
      networks:
        - network: network-name.admin-network
      security_groups: [security-name.group-sec-default]
  my_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10
  my_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: my_instance }
      volume_id: { get_resource: my_volume }
      mountpoint: /dev/vdb
This Heat file works, but I don't know how to attach a floating IP to "my_instance". I'm able to do it in Horizon and it works without problems. Under the Horizon interface, I have to choose "Router_dmz" as the pool, which creates and allocates the floating IP. As I understand it, the floating IP address should be associated with "network-name.admin-network". I read a lot of documentation, and I don't know if I have to use the OS::Neutron::FloatingIPAssociation resource or OS::Nova::FloatingIPAssociation. I tried it on my side and had no issue.

I found that this works for me:
heat_template_version: newton
description: Simple template to deploy a single compute instance
resources:
  floating_ip:
    type: OS::Nova::FloatingIP
    properties:
      pool: string_of_pool_of_public_network
  my_instance:
    type: OS::Nova::Server
    properties:
      name: instance-name
      flavor: std.cpu1ram1
      block_device_mapping_v2:
        - device_name: vda
          image: RHEL-7.4
          volume_size: 30
          delete_on_termination: true
      networks:
        - network: network-name.admin-network
      security_groups: [security-name.group-sec-default]
  association:
    type: OS::Nova::FloatingIPAssociation
    properties:
      floating_ip: { get_resource: floating_ip }
      server_id: { get_resource: my_instance }
But this solution is deprecated, and I didn't manage to do the same with Neutron.
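For what it's worth, the non-deprecated Neutron approach attaches the floating IP to a port rather than to the server. A minimal sketch, assuming an explicit port resource; the network and external-network names below are placeholders, not values confirmed by the question:
heat_template_version: newton
description: Floating IP via Neutron resources (sketch)
resources:
  my_port:
    type: OS::Neutron::Port
    properties:
      network: network-name.admin-network
  my_instance:
    type: OS::Nova::Server
    properties:
      name: instance-name
      flavor: std.cpu1ram1
      image: RHEL-7.4
      networks:
        - port: { get_resource: my_port }
  my_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      # the external network backing the pool, e.g. the one behind Router_dmz
      floating_network: public-network-name
  my_fip_association:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: my_floating_ip }
      port_id: { get_resource: my_port }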


EKS 1.22 update - ingress and alb not working

After updating the EKS cluster to 1.22, all websites are down. Pods are OK, but all the networking is broken.
I don't know how to fix the ingresses and the load balancer.
I have tried updating the deprecated API versions for ingress-kong and internal-ingress-kong.
I can't find the YAML file for alb-ingress-controller, but when I check the last-applied configuration it is based on the new API.
I have manually updated the Docker image of the ALB controller from 1.1.8 to 2.4.1:
Name:                   alb-ingress-controller
Namespace:              default
CreationTimestamp:      Thu, 03 Sep 2020 02:05:01 +0000
Labels:                 app=alb-ingress-controller
                        app.kubernetes.io/name=alb-ingress-controller
                        git_version=54709a8bd94f795b1184b0c8336e9a6ec8aee807
                        name=alb-ingress-controller
                        version=20200909005829
Annotations:            deployment.kubernetes.io/revision: 9
Selector:               app.kubernetes.io/name=alb-ingress-controller
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=alb-ingress-controller
                    app.kubernetes.io/name=alb-ingress-controller
                    git_version=54709a8bd94f795b1184b0c8336e9a6ec8aee807
                    name=alb-ingress-controller
                    version=20200909005829
  Annotations:      kubectl.kubernetes.io/restartedAt: 2022-04-14T19:19:01Z
  Service Account:  alb-ingress-controller
  Containers:
   alb-ingress-controller:
    Image:      docker.io/amazon/aws-alb-ingress-controller:v2.4.1
    Port:       <none>
    Host Port:  <none>
    Args:
      --watch-namespace=default
      --ingress-class=alb-ingress-controller
      --cluster-name=staging-trn
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      False   MinimumReplicasUnavailable
OldReplicaSets:  <none>
NewReplicaSet:   alb-ingress-controller-c46ff7bd9 (1/1 replicas created)
Events:          <none>
I'm new to Kubernetes and AWS.
I think I have updated the deprecated APIs in all places, but the errors still point to the old APIs.
Error on the ingresses:
E0415 07:54:29.332371 1 reflector.go:153] pkg/mod/k8s.io/client-go#v0.17.4/tools/cache/reflector.go:105: Failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)
Error on the ALB controller:
{"level":"error","ts":1650009210.0149224,"logger":"setup","msg":"unable to create controller","controller":"TargetGroupBinding","error":"no matches for kind \"TargetGroupBinding\" in version \"elbv2.k8s.aws/v1beta1\""}
I have created the missing TargetGroupBinding CRD:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.5.0
  creationTimestamp: null
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: targetgroupbindings.elbv2.k8s.aws
spec:
  group: elbv2.k8s.aws
  names:
    kind: TargetGroupBinding
    listKind: TargetGroupBindingList
    plural: targetgroupbindings
    singular: targetgroupbinding
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: TargetGroupBinding is the Schema for the TargetGroupBinding API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: TargetGroupBindingSpec defines the desired state of TargetGroupBinding
            properties:
              networking:
                description: networking provides the networking setup for ELBV2 LoadBalancer
                  to access targets in TargetGroup.
                properties:
                  ingress:
                    description: List of ingress rules to allow ELBV2 LoadBalancer
                      to access targets in TargetGroup.
                    items:
                      properties:
                        from:
                          description: List of peers which should be able to access
                            the targets in TargetGroup. At least one NetworkingPeer
                            should be specified.
                          items:
                            description: NetworkingPeer defines the source/destination
                              peer for networking rules.
                            properties:
                              ipBlock:
                                description: IPBlock defines an IPBlock peer. If specified,
                                  none of the other fields can be set.
                                properties:
                                  cidr:
                                    description: CIDR is the network CIDR. Both IPV4
                                      or IPV6 CIDR are accepted.
                                    type: string
                                required:
                                - cidr
                                type: object
                              securityGroup:
                                description: SecurityGroup defines a SecurityGroup
                                  peer. If specified, none of the other fields can
                                  be set.
                                properties:
                                  groupID:
                                    description: GroupID is the EC2 SecurityGroupID.
                                    type: string
                                required:
                                - groupID
                                type: object
                            type: object
                          type: array
                        ports:
                          description: List of ports which should be made accessible
                            on the targets in TargetGroup. If ports is empty or unspecified,
                            it defaults to all ports with TCP.
                          items:
                            properties:
                              port:
                                anyOf:
                                - type: integer
                                - type: string
                                description: The port which traffic must match. When
                                  NodePort endpoints(instance TargetType) is used,
                                  this must be a numerical port. When Port endpoints(ip
                                  TargetType) is used, this can be either numerical
                                  or named port on pods. if port is unspecified, it
                                  defaults to all ports.
                                x-kubernetes-int-or-string: true
                              protocol:
                                description: The protocol which traffic must match.
                                  If protocol is unspecified, it defaults to TCP.
                                enum:
                                - TCP
                                - UDP
                                type: string
                            type: object
                          type: array
                      required:
                      - from
                      - ports
                      type: object
                    type: array
                type: object
              serviceRef:
                description: serviceRef is a reference to a Kubernetes Service and
                  ServicePort.
                properties:
                  name:
                    description: Name is the name of the Service.
                    type: string
                  port:
                    anyOf:
                    - type: integer
                    - type: string
                    description: Port is the port of the ServicePort.
                    x-kubernetes-int-or-string: true
                required:
                - name
                - port
                type: object
              targetGroupARN:
                description: targetGroupARN is the Amazon Resource Name (ARN) for
                  the TargetGroup.
                type: string
              targetType:
                description: targetType is the TargetType of TargetGroup. If unspecified,
                  it will be automatically inferred.
                enum:
                - instance
                - ip
                type: string
            required:
            - serviceRef
            - targetGroupARN
            type: object
          status:
            description: TargetGroupBindingStatus defines the observed state of TargetGroupBinding
            properties:
              observedGeneration:
                description: The generation observed by the TargetGroupBinding controller.
                format: int64
                type: integer
            type: object
        type: object
    additionalPrinterColumns:
    - jsonPath: .spec.serviceRef.name
      description: The Kubernetes Service's name
      name: SERVICE-NAME
      type: string
    - jsonPath: .spec.serviceRef.port
      description: The Kubernetes Service's port
      name: SERVICE-PORT
      type: string
    - jsonPath: .spec.targetType
      description: The AWS TargetGroup's TargetType
      name: TARGET-TYPE
      type: string
    - jsonPath: .spec.targetGroupARN
      description: The AWS TargetGroup's Amazon Resource Name
      name: ARN
      priority: 1
      type: string
    - jsonPath: .metadata.creationTimestamp
      name: AGE
      type: date
Your Ingress resources should be updated to the new API version:
apiVersion: networking.k8s.io/v1
Please see the examples here:
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
To find your Ingress resources, run the following:
kubectl get ingress --all-namespaces
then make the modification as mentioned above.
Please note that the backend configuration in the Ingress resource also needs some modification due to the API change.
Also note that from version 1.18 you are able to bind Ingress resources to a controller using the spec.ingressClassName field. If omitted, the Ingress will work only if the IngressClass that the ingress controller implements is set as the default.
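For illustration, here is a minimal Ingress in the new networking.k8s.io/v1 format, showing the reworked backend syntax and the ingressClassName field; the resource name, class name, and service name below are placeholders, not values from the question:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # placeholder name
spec:
  ingressClassName: alb        # placeholder; must match your controller's IngressClass
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:             # v1 replaces the old serviceName/servicePort fields
            name: my-service
            port:
              number: 80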

How to create a Kubernetes secret as a JSON object and load it into the Kubernetes environment as JSON

I need to pass a JWK as a Kubernetes environment variable to my app.
I created a file to store my key like so:
cat deploy/keys/access-signature-public-jwk
{
algorithm = "RS256"
jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
Which is then used to create a Kubernetes secret like so:
kubectl create secret generic intimations-signature-public-secret --from-file=./deploy/keys/access-signature-public-jwk
Which is then retrieved into a Kubernetes environment variable as:
- name: ACCESS_SIGNATURE_PUBLIC_JWK
  valueFrom:
    secretKeyRef:
      name: intimations-signature-public-secret
      key: access-signature-public-jwk
And passed to the application.conf of the application like so:
pac4j.lagom.jwt.authenticator {
  signatures = [
    ${ACCESS_SIGNATURE_PUBLIC_JWK}
  ]
}
The pac4j library expects the config pac4j.lagom.jwt.authenticator as a JSON object, but I get the following exception when I run the app:
at com.typesafe.config.impl.SimpleConfig.getHomogeneousWrappedList(SimpleConfig.java:452)
at com.typesafe.config.impl.SimpleConfig.getObjectList(SimpleConfig.java:460)
at com.typesafe.config.impl.SimpleConfig.getConfigList(SimpleConfig.java:465)
at org.pac4j.lagom.jwt.JwtAuthenticatorHelper.parse(JwtAuthenticatorHelper.java:84)
at com.codingkapoor.holiday.impl.core.HolidayApplication.jwtClient$lzycompute(HolidayApplication.scala
Pod description:
Name:           holiday-deployment-55b86f955d-9klk2
Namespace:      default
Priority:       0
Node:           minikube/192.168.99.103
Start Time:     Thu, 28 May 2020 12:42:50 +0530
Labels:         app=holiday
                pod-template-hash=55b86f955d
Annotations:    <none>
Status:         Running
IP:             172.17.0.5
IPs:
  IP:           172.17.0.5
Controlled By:  ReplicaSet/holiday-deployment-55b86f955d
Containers:
  holiday:
    Container ID:   docker://18443cfedc7fd39440f5fa6f038f36c58cec1660a2974e6432500e8c7d51f5e6
    Image:          codingkapoor/holiday-impl:latest
    Image ID:       docker://sha256:6e0ddcf41e0257755b7e865424671970091d555c4bad88b5d896708ded139eb7
    Port:           8558/TCP
    Host Port:      0/TCP
    State:          Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Thu, 28 May 2020 22:49:24 +0530
      Finished:     Thu, 28 May 2020 22:49:29 +0530
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Thu, 28 May 2020 22:44:15 +0530
      Finished:     Thu, 28 May 2020 22:44:21 +0530
    Ready:          False
    Restart Count:  55
    Liveness:       http-get http://:management/alive delay=20s timeout=1s period=10s #success=1 #failure=10
    Readiness:      http-get http://:management/ready delay=20s timeout=1s period=10s #success=1 #failure=10
    Environment:
      JAVA_OPTS:                     -Xms256m -Xmx256m -Dconfig.resource=prod-application.conf
      APPLICATION_SECRET:            <set to the key 'secret' in secret 'intimations-application-secret'>  Optional: false
      MYSQL_URL:                     jdbc:mysql://mysql/intimations_holiday_schema
      MYSQL_USERNAME:                <set to the key 'username' in secret 'intimations-mysql-secret'>  Optional: false
      MYSQL_PASSWORD:                <set to the key 'password' in secret 'intimations-mysql-secret'>  Optional: false
      ACCESS_SIGNATURE_PUBLIC_JWK:   <set to the key 'access-signature-public-jwk' in secret 'intimations-signature-public-secret'>  Optional: false
      REFRESH_SIGNATURE_PUBLIC_JWK:  <set to the key 'refresh-signature-public-jwk' in secret 'intimations-signature-public-secret'>  Optional: false
      REQUIRED_CONTACT_POINT_NR:     1
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kqmmv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-kqmmv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kqmmv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                    From               Message
  ----     ------   ----                   ----               -------
  Normal   Pulled   5m21s (x23 over 100m)  kubelet, minikube  Container image "codingkapoor/holiday-impl:latest" already present on machine
  Warning  BackOff  27s (x466 over 100m)   kubelet, minikube  Back-off restarting failed container
I was wondering if there is any way to pass the environment variable as a JSON object instead of a string. Please suggest. TIA.
First, the file access-signature-public-jwk is not a valid JSON file. You should update it to be valid JSON:
{
  "algorithm" : "RS256",
  "jwk" : {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
Steps I followed to validate.
kubectl create secret generic token1 --from-file=jwk.json
Mount the secret into the pod.
env:
- name: JWK
  valueFrom:
    secretKeyRef:
      name: token
      key: jwk.json
Exec into the pod and check the env variable JWK:
$ echo $JWK
{ "algorithm" : "RS256", "jwk" : {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"} }
Copy the content to a file
echo $JWK > jwk.json
Validate the file
$ jsonlint-php jwk.json
Valid JSON (jwk.json)
If I use the file you have given and follow the same steps, it gives a JSON validation error. Also, env variables are always strings; you have to convert them into the required types in your code.
$ echo $JWK
{ algorithm = "RS256" jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"} }
$ echo $JWK > jwk.json
$ jsonlint-php jwk.json
jwk.json: Parse error on line 1:
{ algorithm = "RS256"
-^
Expected one of: 'STRING', '}'
Although this is not a direct answer, here is an alternate solution to the problem.
As #hariK pointed out, environment variables are always strings, and in order to consume them as JSON we would need to convert the env var read as a string into JSON.
However, in my case this was not a viable solution because I was using a lib that expects a Config object and not a JSON object directly, which would have meant a lot of work converting string -> json -> Config. Plus, this approach is inconsistent with how the Config object is built in the development scenarios, i.e., json -> Config. See here.
The framework I am using to build this app is based on Play Framework, which allows you to modularize application configs in separate files and then club the required pieces together in a desired config file, as shown below. You can read about it in more detail here.
application.conf
include "/opt/conf/app1.conf"
include "/opt/conf/app2.conf"
This allowed me to make use of the "Using Secrets as files from a Pod" feature of Kubernetes.
Basically, I created a small config file that contains part of my main application configuration file, as shown below:
cat deploy/keys/signature-public-jwk
pac4j.lagom.jwt.authenticator {
  signatures = [
    {
      algorithm = "RS256"
      jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
    }
  ]
}
Then I created a Kubernetes secret and mounted it as a volume in the deployment so that it appears in the pod as a file:
kubectl create secret generic signature-public-secret --from-file=./deploy/secrets/signature-public-jwks.conf
# deployment yaml
spec:
  containers:
  - name: employee
    image: "codingkapoor/employee-impl:latest"
    volumeMounts:
    - name: signature-public-secret-conf
      mountPath: /opt/conf/signature-public-jwks.conf
      subPath: signature-public-jwks.conf
      readOnly: true
  volumes:
  - name: signature-public-secret-conf
    secret:
      secretName: signature-public-secret
Use this mounted file location in the application.conf to include it:
include file("/opt/conf/signature-public-jwks.conf")
Notice that the mountPath and the file location in the application.conf are the same.
Advantages of this approach:
The solution is consistent across the development, test, and production environments, as we can provide JSON instead of a string to the lib, as explained above.
Secrets shouldn't be passed as environment variables anyway! You can read more about it here.

Invalid type for io.k8s.api.core.v1.ConfigMapEnvSource got "array" expected "map"

I have a Kubernetes CronJob manifest file in which I have defined environment variables. I'm generating the YAML using a shell script, but when applying the YAML with kubectl create -f I'm getting the following validation error:
You have a mistake in the syntax.
There are two approaches: using valueFrom for individual values, or envFrom for multiple values.
valueFrom is used inside the env attribute. valueFrom will inject the value of a key from the referenced ConfigMap:
spec:
  template:
    spec:
      containers:
      - name: ad-sync
        image: foo.azurecr.io/foobar/ad-sync
        command: ["dotnet", "AdSyncService.dll"]
        args: []
        env:
        - name: AdSyncService
          valueFrom:
            configMapKeyRef:
              name: ad-sync-service-configmap
              key: log_level
envFrom is used directly inside the container attribute. envFrom will inject all ConfigMap keys as environment variables:
spec:
  template:
    spec:
      containers:
      - name: ad-sync
        image: foo.azurecr.io/foobar/ad-sync
        command: ["dotnet", "AdSyncService.dll"]
        envFrom:
        - configMapRef:
            name: ad-sync-service-configmap
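For what it's worth, the "got \"array\", expected \"map\"" error usually means configMapRef itself was given a list. A guess at the broken shape next to the valid one, reusing the ConfigMap name from the examples above:
# Invalid: configMapRef holds a list ("array"), which produces the error above
envFrom:
- configMapRef:
  - name: ad-sync-service-configmap
# Valid: configMapRef must be a map with a name key
envFrom:
- configMapRef:
    name: ad-sync-service-configmap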

Openshift: Change in trigger behavior between 3.6.1 and 3.7.1

I have two OpenShift environments that I publish to from a GitLab pipeline. Let's call the first one DEV; it runs OpenShift v3.7.1+c2ce2c0-1. The second one is INT and runs v3.6.1+008f2d5. Recently DEV got upgraded from 3.6.1 to 3.7.1, and after that I noticed a strange change in the behavior of redeployment triggers.
In short, what I see is that existing deployments in the DEV environment are triggered to redeploy with an "Image changed" message when an unchanged deployment config template is applied and the Docker image also remains unchanged. This means that, for example, a MongoDB or Jenkins deployment recreates all containers and loses all data with every run of the CI pipeline.
Yes, theoretically there is the possibility to use persistent volumes, but the OpenShift installations are not under my control. The point here is that redeployments happen when neither the image nor the configuration changes.
The command that I am using from GitLab is this:
oc process -f openshift-mongodb-ephemeral.yml -v MONGODB_DATABASE=mydb -v DATABASE_SERVICE_NAME=mongodb -l template=northbound-mongodb | oc apply -f -
Here is one of the deployment config templates:
apiVersion: v1
kind: Template
labels:
  template: mongodb-ephemeral-template
metadata:
  annotations:
    description: |-
      MongoDB database service, without persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.
      WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing
    iconClass: icon-mongodb
    openshift.io/display-name: MongoDB (Ephemeral)
    tags: database,mongodb
  creationTimestamp: 2017-03-14T11:25:13Z
  name: mongodb-ephemeral
  resourceVersion: "483"
  selfLink: /oapi/v1/namespaces/openshift/templates/mongodb-ephemeral
  uid: e41b7f8e-08a8-11e7-9120-000d3a266151
objects:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    ports:
    - name: mongo
      nodePort: 0
      port: 27017
      protocol: TCP
      targetPort: 27017
    selector:
      name: ${DATABASE_SERVICE_NAME}
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    replicas: 1
    selector:
      name: ${DATABASE_SERVICE_NAME}
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          name: ${DATABASE_SERVICE_NAME}
      spec:
        containers:
        - capabilities: {}
          env:
          - name: MONGODB_USER
            value: ${MONGODB_USER}
          - name: MONGODB_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: myproject-secrets
          - name: MONGODB_ADMIN_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-admin-password
                name: myproject-secrets
          - name: MONGODB_DATABASE
            value: ${MONGODB_DATABASE}
          image: ' '
          imagePullPolicy: IfNotPresent
          livenessProbe:
            initialDelaySeconds: 30
            tcpSocket:
              port: 27017
            timeoutSeconds: 1
          name: mongodb
          ports:
          - containerPort: 27017
            protocol: TCP
          readinessProbe:
            exec:
              command:
              - /bin/sh
              - -i
              - -c
              - mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD
                --eval="quit()"
            initialDelaySeconds: 3
            timeoutSeconds: 1
          resources:
            limits:
              memory: ${MEMORY_LIMIT}
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
          volumeMounts:
          - mountPath: /var/lib/mongodb/data
            name: ${DATABASE_SERVICE_NAME}-data
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - emptyDir:
            medium: ""
          name: ${DATABASE_SERVICE_NAME}-data
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - mongodb
        from:
          kind: ImageStreamTag
          name: mongodb:${MONGODB_VERSION}
          namespace: ${NAMESPACE}
        lastTriggeredImage: ""
      type: ImageChange
    - type: ConfigChange
  status: {}
parameters:
- description: Maximum amount of memory the container can use.
  displayName: Memory Limit
  name: MEMORY_LIMIT
  required: true
  value: 512Mi
- description: The OpenShift Namespace where the ImageStream resides.
  displayName: Namespace
  name: NAMESPACE
  value: openshift
- description: The name of the OpenShift Service exposed for the database.
  displayName: Database Service Name
  name: DATABASE_SERVICE_NAME
  required: true
  value: mongodb
- description: Name of the MongoDB database accessed.
  displayName: MongoDB Database Name
  name: MONGODB_DATABASE
  required: true
  value: sampledb
- description: Version of MongoDB image to be used (2.4, 2.6, 3.2 or latest).
  displayName: Version of MongoDB Image
  name: MONGODB_VERSION
  required: true
  value: "3.2"
- description: Name of user to access MongoDB.
  displayName: MongoDB user
  name: MONGODB_USER
  required: true
  value: "mongouser"
The oc process and oc apply commands get executed with each run of the GitLab CI pipeline. The pipeline triggers whenever someone merges into e.g. the develop branch. I would like to keep this behavior because it guarantees that if someone changes the configuration of MongoDB, Jenkins, etc., those get updated and redeployed automatically (in which case a loss of data is acceptable).
Does anybody know what change in OpenShift could have prompted this change in behavior, and how to achieve the old behavior again?
Credit goes to Graham Dumpleton, who answered my question, albeit through a comment. Since I did not hear back from him, I am leaving this answer myself for completeness' sake.
After removing the entries
creationTimestamp
lastTriggeredImage
status
image
from the deployment config template, the unwanted redeployments stopped. I do not know which entry specifically started triggering the behavior in OpenShift 3.7.1, but in any case these four entries are information generated during an OpenShift deployment process and should not be in a template to begin with.
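For reference, a sketch of what the triggers section of the template above looks like once the generated fields are stripped (lastTriggeredImage removed here; creationTimestamp, status, and the placeholder image entry dropped elsewhere in the template):
triggers:
- imageChangeParams:
    automatic: true
    containerNames:
    - mongodb
    from:
      kind: ImageStreamTag
      name: mongodb:${MONGODB_VERSION}
      namespace: ${NAMESPACE}
  type: ImageChange
- type: ConfigChange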

Openstack heat template: interfaces with fixed IPs are not configured automatically

I assign a fixed IP to an interface in the heat template.
private_port_1:
  type: OS::Neutron::Port
  properties:
    network: { get_param: private_net }
    fixed_ips: [{"subnet": { get_param: private_subnet }, "ip_address": { get_param: private_ip_1 }}]
my_vm_123:
  type: OS::Nova::Server
  properties:
    image: { get_param: image_name }
    flavor: { get_param: flavor_name }
    name: { get_param: vm_name }
    networks:
      - network: { get_param: public_net }
      - port: { get_resource: private_port_1 }
The VM is successfully instantiated and its private IP (private_ip_1) is shown in the Horizon GUI. However, eth1 appears to be down, and /etc/network/interfaces contains configuration only for the public eth0.
I can work around this by manually populating /etc/network/interfaces and turning eth1 on in the user_data: part. The question is: is this the way it should be, or is there something wrong with my Heat template or OpenStack that prevents eth1 from being configured automatically?
Thanks!
Michael.
Yes, it is the way it should be.
OpenStack (Nova, Neutron) sets up the VM and provides the right connectivity. However, the operating system running in the VM has to bring up the interfaces. Default cloud-init images are hardcoded to bring up only eth0 (with DHCP), so you have to explicitly bring up eth1 in your VM.
You can use the user_data property of the OS::Nova::Server resource type to run custom scripts when bringing up a VM. I had a similar use case where I needed to bring up eth1 automatically. You can see how I achieved this at https://github.com/ypraveen/openstack-installer/blob/master/vm-heat-template/devstack.yaml
Line 33 shows the usage of user_data.
Lines 41-45 in the init script bring up eth1: https://github.com/ypraveen/openstack-installer/blob/master/vm-heat-template/devstack_vm_init.sh
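To make that concrete, here is a minimal user_data sketch that brings up eth1 with DHCP; the interface name, the DHCP choice, and the /etc/network/interfaces path are assumptions about a Debian/Ubuntu-style guest image, not something Heat dictates:
my_vm_123:
  type: OS::Nova::Server
  properties:
    image: { get_param: image_name }
    flavor: { get_param: flavor_name }
    name: { get_param: vm_name }
    networks:
      - network: { get_param: public_net }
      - port: { get_resource: private_port_1 }
    user_data_format: RAW
    user_data: |
      #!/bin/bash
      # Default cloud-init only configures eth0, so configure eth1 ourselves
      cat >> /etc/network/interfaces <<EOF
      auto eth1
      iface eth1 inet dhcp
      EOF
      ifup eth1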