CloudFormation: Create Elastic Beanstalk without Load Balancer

How can I create an Elastic Beanstalk environment without a load balancer? I can't find any option to disable it in the aws:autoscaling:launchconfiguration namespace.
It is possible to turn off load balancing manually from the AWS Management Console.
This is my CloudFormation stack template:
WsServerEBApp:
  Type: 'AWS::ElasticBeanstalk::Application'
  Properties:
    ApplicationName: WS Server
    Description: App for Websocket Server that will hold EB Environments
WsServerEBEnvironemt:
  Type: 'AWS::ElasticBeanstalk::Environment'
  Properties:
    EnvironmentName: staging
    ApplicationName: !Ref WsServerEBApp
    CNAMEPrefix: staging
    SolutionStackName: 64bit Amazon Linux 2017.03 v2.7.3 running Docker 17.03.1-ce
    VersionLabel: !Ref WsServerEBAppVersion
    OptionSettings:
      - Namespace: 'aws:autoscaling:launchconfiguration'
        OptionName: EC2KeyName
        Value: !Ref KeyName
      - Namespace: 'aws:autoscaling:launchconfiguration'
        OptionName: IamInstanceProfile
        Value: aws-elasticbeanstalk-ec2-role
      - Namespace: 'aws:autoscaling:launchconfiguration'
        OptionName: SecurityGroups
        Value: launch-wizard-1  # OptionSettings values must be strings, not lists
WsServerEBAppVersion:
  Type: 'AWS::ElasticBeanstalk::ApplicationVersion'
  Properties:
    ApplicationName: !Ref WsServerEBApp

I did it by setting the EnvironmentType option to SingleInstance:
WsServerEBEnvironemt:
  Type: 'AWS::ElasticBeanstalk::Environment'
  Properties:
    ....
    OptionSettings:
      ....
      - Namespace: aws:elasticbeanstalk:environment
        OptionName: EnvironmentType
        Value: SingleInstance
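To confirm the change took effect, the environment type can be read back with the AWS CLI (a standard describe call; the application and environment names below are taken from the template above):
aws elasticbeanstalk describe-configuration-settings \
  --application-name "WS Server" \
  --environment-name staging \
  --query "ConfigurationSettings[0].OptionSettings[?OptionName=='EnvironmentType']"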

Related

EKS 1.22 update - ingress and alb not working

After updating the EKS cluster to 1.22, all websites are down. The pods are fine, but none of the networking works, and I don't know how to fix the ingresses and the load balancer.
I have tried updating the deprecated API versions for ingress-kong and internal-ingress-kong.
I can't find the YAML file for alb-ingress-controller, but when I check its last-applied configuration it is based on the new API.
I have manually updated the controller's Docker image from 1.1.8 to 2.4.1:
Name:                   alb-ingress-controller
Namespace:              default
CreationTimestamp:      Thu, 03 Sep 2020 02:05:01 +0000
Labels:                 app=alb-ingress-controller
                        app.kubernetes.io/name=alb-ingress-controller
                        git_version=54709a8bd94f795b1184b0c8336e9a6ec8aee807
                        name=alb-ingress-controller
                        version=20200909005829
Annotations:            deployment.kubernetes.io/revision: 9
Selector:               app.kubernetes.io/name=alb-ingress-controller
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=alb-ingress-controller
                    app.kubernetes.io/name=alb-ingress-controller
                    git_version=54709a8bd94f795b1184b0c8336e9a6ec8aee807
                    name=alb-ingress-controller
                    version=20200909005829
  Annotations:      kubectl.kubernetes.io/restartedAt: 2022-04-14T19:19:01Z
  Service Account:  alb-ingress-controller
  Containers:
   alb-ingress-controller:
    Image:      docker.io/amazon/aws-alb-ingress-controller:v2.4.1
    Port:       <none>
    Host Port:  <none>
    Args:
      --watch-namespace=default
      --ingress-class=alb-ingress-controller
      --cluster-name=staging-trn
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      False   MinimumReplicasUnavailable
OldReplicaSets:  <none>
NewReplicaSet:   alb-ingress-controller-c46ff7bd9 (1/1 replicas created)
Events:          <none>
I'm new to Kubernetes and AWS. I think I have updated the deprecated APIs everywhere, but the errors still point to the old APIs.
Error on the ingresses:
E0415 07:54:29.332371 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)
Error on the ALB controller:
{"level":"error","ts":1650009210.0149224,"logger":"setup","msg":"unable to create controller","controller":"TargetGroupBinding","error":"no matches for kind \"TargetGroupBinding\" in version \"elbv2.k8s.aws/v1beta1\""}
I have created the missing TargetGroupBinding CRD:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.5.0
  creationTimestamp: null
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: targetgroupbindings.elbv2.k8s.aws
spec:
  group: elbv2.k8s.aws
  names:
    kind: TargetGroupBinding
    listKind: TargetGroupBindingList
    plural: targetgroupbindings
    singular: targetgroupbinding
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          description: TargetGroupBinding is the Schema for the TargetGroupBinding API
          properties:
            apiVersion:
              description: 'APIVersion defines the versioned schema of this representation
                of an object. Servers should convert recognized schemas to the latest
                internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
              type: string
            kind:
              description: 'Kind is a string value representing the REST resource this
                object represents. Servers may infer this from the endpoint the client
                submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
              type: string
            metadata:
              type: object
            spec:
              description: TargetGroupBindingSpec defines the desired state of TargetGroupBinding
              properties:
                networking:
                  description: networking provides the networking setup for ELBV2 LoadBalancer
                    to access targets in TargetGroup.
                  properties:
                    ingress:
                      description: List of ingress rules to allow ELBV2 LoadBalancer
                        to access targets in TargetGroup.
                      items:
                        properties:
                          from:
                            description: List of peers which should be able to access
                              the targets in TargetGroup. At least one NetworkingPeer
                              should be specified.
                            items:
                              description: NetworkingPeer defines the source/destination
                                peer for networking rules.
                              properties:
                                ipBlock:
                                  description: IPBlock defines an IPBlock peer. If specified,
                                    none of the other fields can be set.
                                  properties:
                                    cidr:
                                      description: CIDR is the network CIDR. Both IPV4
                                        or IPV6 CIDR are accepted.
                                      type: string
                                  required:
                                    - cidr
                                  type: object
                                securityGroup:
                                  description: SecurityGroup defines a SecurityGroup
                                    peer. If specified, none of the other fields can
                                    be set.
                                  properties:
                                    groupID:
                                      description: GroupID is the EC2 SecurityGroupID.
                                      type: string
                                  required:
                                    - groupID
                                  type: object
                              type: object
                            type: array
                          ports:
                            description: List of ports which should be made accessible
                              on the targets in TargetGroup. If ports is empty or unspecified,
                              it defaults to all ports with TCP.
                            items:
                              properties:
                                port:
                                  anyOf:
                                    - type: integer
                                    - type: string
                                  description: The port which traffic must match. When
                                    NodePort endpoints(instance TargetType) is used,
                                    this must be a numerical port. When Port endpoints(ip
                                    TargetType) is used, this can be either numerical
                                    or named port on pods. if port is unspecified, it
                                    defaults to all ports.
                                  x-kubernetes-int-or-string: true
                                protocol:
                                  description: The protocol which traffic must match.
                                    If protocol is unspecified, it defaults to TCP.
                                  enum:
                                    - TCP
                                    - UDP
                                  type: string
                              type: object
                            type: array
                        required:
                          - from
                          - ports
                        type: object
                      type: array
                  type: object
                serviceRef:
                  description: serviceRef is a reference to a Kubernetes Service and
                    ServicePort.
                  properties:
                    name:
                      description: Name is the name of the Service.
                      type: string
                    port:
                      anyOf:
                        - type: integer
                        - type: string
                      description: Port is the port of the ServicePort.
                      x-kubernetes-int-or-string: true
                  required:
                    - name
                    - port
                  type: object
                targetGroupARN:
                  description: targetGroupARN is the Amazon Resource Name (ARN) for
                    the TargetGroup.
                  type: string
                targetType:
                  description: targetType is the TargetType of TargetGroup. If unspecified,
                    it will be automatically inferred.
                  enum:
                    - instance
                    - ip
                  type: string
              required:
                - serviceRef
                - targetGroupARN
              type: object
            status:
              description: TargetGroupBindingStatus defines the observed state of TargetGroupBinding
              properties:
                observedGeneration:
                  description: The generation observed by the TargetGroupBinding controller.
                  format: int64
                  type: integer
              type: object
          type: object
      additionalPrinterColumns:
        - jsonPath: .spec.serviceRef.name
          description: The Kubernetes Service's name
          name: SERVICE-NAME
          type: string
        - jsonPath: .spec.serviceRef.port
          description: The Kubernetes Service's port
          name: SERVICE-PORT
          type: string
        - jsonPath: .spec.targetType
          description: The AWS TargetGroup's TargetType
          name: TARGET-TYPE
          type: string
        - jsonPath: .spec.targetGroupARN
          description: The AWS TargetGroup's Amazon Resource Name
          name: ARN
          priority: 1
          type: string
        - jsonPath: .metadata.creationTimestamp
          name: AGE
          type: date
The Ingress resources should be updated to the new API version:
apiVersion: networking.k8s.io/v1
See the examples here:
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
To find the Ingress resources in your cluster, run:
kubectl get ingress --all-namespaces
then apply the modification mentioned above; a sketch of the result is shown below.
Note that the backend configuration in the Ingress resource also needs some changes because of the API change.
Also note that from Kubernetes 1.18 onwards you can bind an Ingress to a controller using the spec.ingressClassName field. If it is omitted, the Ingress only works if the IngressClass implemented by the ingress controller is set as the default.
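For illustration, here is a minimal sketch of an Ingress migrated to the new API version (the resource and service names are placeholders, and the class name depends on how your controller is registered):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: alb-ingress-controller  # binds the Ingress to a controller (1.18+); omit only if a default IngressClass exists
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix       # pathType is required in networking.k8s.io/v1
            backend:
              service:             # the v1 backend syntax replaces serviceName/servicePort
                name: my-service
                port:
                  number: 80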

Getting error while creating ELB in CloudFormation

AWSTemplateFormatVersion: 2010-09-09
Resources:
  MyLoadBalancer:
    Type: 'AWS::ElasticLoadBalancingV2::LoadBalancer'
    Properties:
      IpAddressType: ipv4
      AvailabilityZones:
        - ap-southeast-1
      Name: mytestingELB
      Scheme: internet-facing
      Type: application
      SecurityGroups:
        - !Ref sg-**********
      Subnets:
        - !Ref subnet-******
        - !Ref subnet-*******
    Metadata:
      'AWS::CloudFormation::Designer':
        id: 3f17841f-7296-4aeb-a464-94dbbf6542fd
  'AWS::ElasticLoadBalancingV2::TargetGroup':
    Properties:
      HealthCheckIntervalSeconds: '30'
      HealthCheckPath: /
      HealthCheckProtocol: HTTP
      HealthCheckTimeoutSeconds: '5'
      HealthyThresholdCount: '5'
      Matcher:
        HttpCode: '200'
      Name: testingtargetgroup
      Port: '80'
      Protocol: HTTP
      TargetType: instance
      UnhealthyThresholdCount: '2'
      VpcId: !Ref vpc-******
I am getting the error: Template is not valid: Template format error: Unresolved resource dependencies [subnet-******, sg-**********, subnet-*******, vpc-******] in the Resources block of the template.
Please help me fix this.
If vpc-******, subnet-******, sg-********** are actual IDs of your existing VPC, subnets, and security group, then you do not need !Ref to reference them.
Just provide them without !Ref, e.g.
SecurityGroups:
  - sg-**********
Subnets:
  - subnet-******
  - subnet-*******
VpcId: vpc-******
New template version:
AWSTemplateFormatVersion: 2010-09-09
Resources:
  MyLoadBalancer:
    Type: 'AWS::ElasticLoadBalancingV2::LoadBalancer'
    Properties:
      IpAddressType: ipv4
      Name: mytestingELB
      Scheme: internet-facing
      Type: application
      SecurityGroups:
        - sg-**********
      Subnets:
        - subnet-******
        - subnet-*******
  MyTargetGroup:
    Type: 'AWS::ElasticLoadBalancingV2::TargetGroup'
    Properties:
      HealthCheckIntervalSeconds: '30'
      HealthCheckPath: /
      HealthCheckProtocol: HTTP
      HealthCheckTimeoutSeconds: '5'
      HealthyThresholdCount: '5'
      Matcher:
        HttpCode: '200'
      Name: testingtargetgroup
      Port: '80'
      Protocol: HTTP
      TargetType: instance
      UnhealthyThresholdCount: '2'
      VpcId: vpc-******
The MyTargetGroup resource was added with a proper Type line, the literal IDs are used without !Ref, and AvailabilityZones and Metadata were removed.
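If you would rather keep !Ref, the alternative is to declare the IDs as typed template parameters and supply the real values when creating the stack. A sketch (the parameter names are illustrative):
Parameters:
  LoadBalancerSG:
    Type: AWS::EC2::SecurityGroup::Id
  LoadBalancerSubnets:
    Type: List<AWS::EC2::Subnet::Id>
  TargetVpc:
    Type: AWS::EC2::VPC::Id
Resources:
  MyLoadBalancer:
    Type: 'AWS::ElasticLoadBalancingV2::LoadBalancer'
    Properties:
      Name: mytestingELB
      Scheme: internet-facing
      Type: application
      SecurityGroups:
        - !Ref LoadBalancerSG
      Subnets: !Ref LoadBalancerSubnets  # a List<...> parameter can be assigned directly
  MyTargetGroup:
    Type: 'AWS::ElasticLoadBalancingV2::TargetGroup'
    Properties:
      Name: testingtargetgroup
      Port: 80
      Protocol: HTTP
      VpcId: !Ref TargetVpc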

AWS CloudFormation error with network interface

I get this error whenever I launch my stack:
Network interfaces and an instance-level security groups may not be specified on the same request (Service: AmazonEC2; Status Code: 400; Error Code: InvalidParameterCombination; Request ID:....)
and the status in the AWS console is ROLLBACK_COMPLETE.
How can I solve this error?
EC2Instance:
  Type: 'AWS::EC2::Instance'
  Properties:
    SecurityGroups:
      - !Ref SecurityGroup
    KeyName: !Ref EC2Key
    AvailabilityZone: us-east-2a
    ImageId: ami-01410f0e8f8b1acca
    InstanceType: t2.micro
    NetworkInterfaces:
      - DeviceIndex: '0'
        SubnetId: !Ref PublicSubnet
Is there a specific reason why you want to specify a network interface? If all you need to accomplish is to deploy the instance into a specific subnet, just drop the NetworkInterfaces part and specify the subnet on the instance itself:
EC2Instance:
  Type: 'AWS::EC2::Instance'
  Properties:
    SecurityGroups:
      - !Ref SecurityGroup
    KeyName: !Ref EC2Key
    AvailabilityZone: us-east-2a
    ImageId: ami-01410f0e8f8b1acca
    InstanceType: t2.micro
    SubnetId: !Ref PublicSubnet
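One caveat worth knowing: per the EC2 documentation, the SecurityGroups property takes security group names and applies to EC2-Classic or the default VPC. For an instance launched into a specific subnet of a non-default VPC, you would normally use SecurityGroupIds instead. A sketch, assuming SecurityGroup is an AWS::EC2::SecurityGroup with VpcId set (in which case !Ref resolves to the group ID):
EC2Instance:
  Type: 'AWS::EC2::Instance'
  Properties:
    SecurityGroupIds:             # group IDs, used for instances in a VPC
      - !Ref SecurityGroup
    KeyName: !Ref EC2Key
    ImageId: ami-01410f0e8f8b1acca
    InstanceType: t2.micro
    SubnetId: !Ref PublicSubnet   # the subnet already pins the AZ, so AvailabilityZone can be dropped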

OpenShift - timeout expired waiting for volumes to attach or mount for pod

I'm having some difficulty deploying an OpenShift template, specifically with attaching a persistent volume. The template is meant to deploy Jira and a MySQL database for persistence. I have the following persistent volume configuration deployed:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysqlpv0003
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /var/nfs/mysql
    server: 192.168.0.171
  persistentVolumeReclaimPolicy: Retain
Where 192.168.0.171 is a valid, working NFS server. My aim is to use this persistent volume as storage for the MySQL server. The template I'm trying to deploy is as follows:
---
apiVersion: v1
kind: Template
labels:
  app: jira-persistent
  template: jira-persistent
message: |-
  The following service(s) have been created in your project: ${NAME}, ${DATABASE_SERVICE_NAME}.
metadata:
  annotations:
    description: Deploys an instance of Jira, backed by a mysql database
    iconClass: icon-perl
    openshift.io/display-name: Jira + Mysql
    openshift.io/documentation-url: https://github.com/sclorg/dancer-ex
    openshift.io/long-description: Deploys an instance of Jira, backed by a mysql database
    openshift.io/provider-display-name: ABXY Games, Inc.
    openshift.io/support-url: abxygames.com
    tags: quickstart,JIRA
    template.openshift.io/bindable: 'false'
  name: jira-persistent
objects:
# Database secrets
- apiVersion: v1
  kind: Secret
  metadata:
    name: "${NAME}"
  stringData:
    database-password: "${DATABASE_PASSWORD}"
    database-user: "${DATABASE_USER}"
    keybase: "${SECRET_KEY_BASE}"
# application service
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      description: Exposes and load balances the application pods
      service.alpha.openshift.io/dependencies: '[{"name": "${DATABASE_SERVICE_NAME}",
        "kind": "Service"}]'
    name: "${NAME}"
  spec:
    ports:
    - name: web
      port: 8080
      targetPort: 8080
    selector:
      name: "${NAME}"
# application route
- apiVersion: v1
  kind: Route
  metadata:
    name: "${NAME}"
  spec:
    host: "${APPLICATION_DOMAIN}"
    to:
      kind: Service
      name: "${NAME}"
# application image
- apiVersion: v1
  kind: ImageStream
  metadata:
    annotations:
      description: Keeps track of changes in the application image
    name: "${NAME}"
# Application buildconfig
- apiVersion: v1
  kind: BuildConfig
  metadata:
    annotations:
      description: Defines how to build the application
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: "${NAME}"
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: "${NAME}:latest"
    source:
      contextDir: "${CONTEXT_DIR}"
      git:
        ref: "${SOURCE_REPOSITORY_REF}"
        uri: "${SOURCE_REPOSITORY_URL}"
      type: Git
    strategy:
      dockerStrategy:
        env:
        - name: CPAN_MIRROR
          value: "${CPAN_MIRROR}"
        dockerfilePath: Dockerfile
      type: Source
    triggers:
    - type: ImageChange
    - type: ConfigChange
    - github:
        secret: "${GITHUB_WEBHOOK_SECRET}"
      type: GitHub
# application deployConfig
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      description: Defines how to deploy the application server
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: "${NAME}"
  spec:
    replicas: 1
    selector:
      name: "${NAME}"
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: "${NAME}"
        name: "${NAME}"
      spec:
        containers:
        - env:
          - name: DATABASE_SERVICE_NAME
            value: "${DATABASE_SERVICE_NAME}"
          - name: MYSQL_USER
            valueFrom:
              secretKeyRef:
                key: database-user
                name: "${NAME}"
          - name: MYSQL_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: "${NAME}"
          - name: MYSQL_DATABASE
            value: "${DATABASE_NAME}"
          - name: SECRET_KEY_BASE
            valueFrom:
              secretKeyRef:
                key: keybase
                name: "${NAME}"
          - name: PERL_APACHE2_RELOAD
            value: "${PERL_APACHE2_RELOAD}"
          image: " "
          livenessProbe:
            httpGet:
              path: "/"
              port: 8080
            initialDelaySeconds: 30
            timeoutSeconds: 3
          name: jira-mysql-persistent
          ports:
          - containerPort: 8080
          readinessProbe:
            httpGet:
              path: "/"
              port: 8080
            initialDelaySeconds: 3
            timeoutSeconds: 3
          resources:
            limits:
              memory: "${MEMORY_LIMIT}"
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - jira-mysql-persistent
        from:
          kind: ImageStreamTag
          name: "${NAME}:latest"
      type: ImageChange
    - type: ConfigChange
# database persistentvolumeclaim
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: "${VOLUME_CAPACITY}"
# database service
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      description: Exposes the database server
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    ports:
    - name: mysql
      port: 3306
      targetPort: 3306
    selector:
      name: "${DATABASE_SERVICE_NAME}"
# database deployment config
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      description: Defines how to deploy the database
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: "${DATABASE_SERVICE_NAME}"
  spec:
    replicas: 1
    selector:
      name: "${DATABASE_SERVICE_NAME}"
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: "${DATABASE_SERVICE_NAME}"
        name: "${DATABASE_SERVICE_NAME}"
      spec:
        containers:
        - env:
          - name: MYSQL_USER
            valueFrom:
              secretKeyRef:
                key: database-user
                name: "${NAME}"
          - name: MYSQL_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: "${NAME}"
          - name: MYSQL_DATABASE
            value: "${DATABASE_NAME}"
          image: " "
          livenessProbe:
            initialDelaySeconds: 30
            tcpSocket:
              port: 3306
            timeoutSeconds: 1
          name: mysql
          ports:
          - containerPort: 3306
          readinessProbe:
            exec:
              command:
              - "/bin/sh"
              - "-i"
              - "-c"
              - MYSQL_PWD='${DATABASE_PASSWORD}' mysql -h 127.0.0.1 -u ${DATABASE_USER}
                -D ${DATABASE_NAME} -e 'SELECT 1'
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            limits:
              memory: "${MEMORY_MYSQL_LIMIT}"
          volumeMounts:
          - mountPath: "/var/lib/mysql/data"
            name: "${DATABASE_SERVICE_NAME}-data"
        volumes:
        - name: "${DATABASE_SERVICE_NAME}-data"
          persistentVolumeClaim:
            claimName: "${DATABASE_SERVICE_NAME}"
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - mysql
        from:
          kind: ImageStreamTag
          name: mysql:5.7
          namespace: "${NAMESPACE}"
      type: ImageChange
    - type: ConfigChange
parameters:
- description: The name assigned to all of the frontend objects defined in this template.
  displayName: Name
  name: NAME
  required: true
  value: jira-persistent
- description: The OpenShift Namespace where the ImageStream resides.
  displayName: Namespace
  name: NAMESPACE
  required: true
  value: openshift
- description: Maximum amount of memory the JIRA container can use.
  displayName: Memory Limit
  name: MEMORY_LIMIT
  required: true
  value: 512Mi
- description: Maximum amount of memory the MySQL container can use.
  displayName: Memory Limit (MySQL)
  name: MEMORY_MYSQL_LIMIT
  required: true
  value: 512Mi
- description: Volume space available for data, e.g. 512Mi, 2Gi
  displayName: Volume Capacity
  name: VOLUME_CAPACITY
  required: true
  value: 1Gi
- description: The URL of the repository with your application source code.
  displayName: Git Repository URL
  name: SOURCE_REPOSITORY_URL
  required: true
  value: https://github.com/stpork/jira.git
- description: Set this to a branch name, tag or other ref of your repository if you
    are not using the default branch.
  displayName: Git Reference
  name: SOURCE_REPOSITORY_REF
- description: Set this to the relative path to your project if it is not in the root
    of your repository.
  displayName: Context Directory
  name: CONTEXT_DIR
- description: The exposed hostname that will route to the jira service, if left
    blank a value will be defaulted.
  displayName: Application Hostname
  name: APPLICATION_DOMAIN
  value: ''
- description: Github trigger secret. A difficult to guess string encoded as part
    of the webhook URL. Not encrypted.
  displayName: GitHub Webhook Secret
  from: "[a-zA-Z0-9]{40}"
  generate: expression
  name: GITHUB_WEBHOOK_SECRET
- displayName: Database Service Name
  name: DATABASE_SERVICE_NAME
  required: true
  value: database
- displayName: Database Username
  from: user[A-Z0-9]{3}
  generate: expression
  name: DATABASE_USER
- displayName: Database Password
  from: "[a-zA-Z0-9]{8}"
  generate: expression
  name: DATABASE_PASSWORD
- displayName: Database Name
  name: DATABASE_NAME
  required: true
  value: sampledb
- description: Set this to "true" to enable automatic reloading of modified Perl modules.
  displayName: Perl Module Reload
  name: PERL_APACHE2_RELOAD
  value: ''
- description: Your secret key for verifying the integrity of signed cookies.
  displayName: Secret Key
  from: "[a-z0-9]{127}"
  generate: expression
  name: SECRET_KEY_BASE
- description: The custom CPAN mirror URL
  displayName: Custom CPAN Mirror URL
  name: CPAN_MIRROR
  value: ''
When run, the deployment for the MySQL server eventually fails with the following error:
Unable to mount volumes for pod "database-1-qvv86_test3(54f01c55-6885-11e9-bc42-3a342852673a)": timeout expired waiting for volumes to attach or mount for pod "test3"/"database-1-qvv86". list of unmounted volumes=[database-data default-token-8hjgv]. list of unattached volumes=[database-data default-token-8hjgv]
The persistent volume claim binds to the persistent volume successfully, but as far as I can tell the pod is not attaching to that volume. The template is being deployed in a fresh project, the PV is freshly created, and the NFS export is empty. I can't see any errors in how the pod references the persistent volume claim. I'm not sure why this error occurs; I'm just learning templates and am clearly missing something. Does anyone see what I'm missing?
The issue was in my NFS permissions. Here is the working content of my /etc/exports file:
/var/nfs *(rw,root_squash,no_wdelay)
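As a side note, after editing /etc/exports the NFS server must re-read it before clients see the change; on a typical Linux NFS server the commands are:
sudo exportfs -ra   # re-export everything listed in /etc/exports
sudo exportfs -v    # verify the active exports and their options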

OpenShift: Change in trigger behavior between 3.6.1 and 3.7.1

I have two OpenShift environments that I publish to from a GitLab pipeline. Let's call the first one DEV; it runs OpenShift v3.7.1+c2ce2c0-1. The second one, INT, runs v3.6.1+008f2d5. Recently DEV was upgraded from 3.6.1 to 3.7.1, and after that I noticed a strange change in the behavior of redeployment triggers.
In short, existing deployments in the DEV environment are triggered to redeploy with an "Image changed" message when an unchanged deployment config template is applied and the Docker image also remains unchanged. This means that, for example, a MongoDB or Jenkins deployment recreates all its containers and loses all data with every run of the CI pipeline.
Yes, theoretically I could use persistent volumes, but the OpenShift installations are not under my control. The point here is that redeployments happen when neither the image nor the configuration changes.
The command that I am using from GitLab is this:
oc process -f openshift-mongodb-ephemeral.yml -v MONGODB_DATABASE=mydb -v DATABASE_SERVICE_NAME=mongodb -l template=northbound-mongodb | oc apply -f -
Here is one of the deployment config templates:
apiVersion: v1
kind: Template
labels:
  template: mongodb-ephemeral-template
metadata:
  annotations:
    description: |-
      MongoDB database service, without persistent storage. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/mongodb-container/blob/master/3.2/README.md.
      WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing
    iconClass: icon-mongodb
    openshift.io/display-name: MongoDB (Ephemeral)
    tags: database,mongodb
  creationTimestamp: 2017-03-14T11:25:13Z
  name: mongodb-ephemeral
  resourceVersion: "483"
  selfLink: /oapi/v1/namespaces/openshift/templates/mongodb-ephemeral
  uid: e41b7f8e-08a8-11e7-9120-000d3a266151
objects:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    ports:
    - name: mongo
      nodePort: 0
      port: 27017
      protocol: TCP
      targetPort: 27017
    selector:
      name: ${DATABASE_SERVICE_NAME}
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    creationTimestamp: null
    name: ${DATABASE_SERVICE_NAME}
  spec:
    replicas: 1
    selector:
      name: ${DATABASE_SERVICE_NAME}
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          name: ${DATABASE_SERVICE_NAME}
      spec:
        containers:
        - capabilities: {}
          env:
          - name: MONGODB_USER
            value: ${MONGODB_USER}
          - name: MONGODB_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-password
                name: myproject-secrets
          - name: MONGODB_ADMIN_PASSWORD
            valueFrom:
              secretKeyRef:
                key: database-admin-password
                name: myproject-secrets
          - name: MONGODB_DATABASE
            value: ${MONGODB_DATABASE}
          image: ' '
          imagePullPolicy: IfNotPresent
          livenessProbe:
            initialDelaySeconds: 30
            tcpSocket:
              port: 27017
            timeoutSeconds: 1
          name: mongodb
          ports:
          - containerPort: 27017
            protocol: TCP
          readinessProbe:
            exec:
              command:
              - /bin/sh
              - -i
              - -c
              - mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD
                --eval="quit()"
            initialDelaySeconds: 3
            timeoutSeconds: 1
          resources:
            limits:
              memory: ${MEMORY_LIMIT}
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
          volumeMounts:
          - mountPath: /var/lib/mongodb/data
            name: ${DATABASE_SERVICE_NAME}-data
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - emptyDir:
            medium: ""
          name: ${DATABASE_SERVICE_NAME}-data
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - mongodb
        from:
          kind: ImageStreamTag
          name: mongodb:${MONGODB_VERSION}
          namespace: ${NAMESPACE}
        lastTriggeredImage: ""
      type: ImageChange
    - type: ConfigChange
  status: {}
parameters:
- description: Maximum amount of memory the container can use.
  displayName: Memory Limit
  name: MEMORY_LIMIT
  required: true
  value: 512Mi
- description: The OpenShift Namespace where the ImageStream resides.
  displayName: Namespace
  name: NAMESPACE
  value: openshift
- description: The name of the OpenShift Service exposed for the database.
  displayName: Database Service Name
  name: DATABASE_SERVICE_NAME
  required: true
  value: mongodb
- description: Name of the MongoDB database accessed.
  displayName: MongoDB Database Name
  name: MONGODB_DATABASE
  required: true
  value: sampledb
- description: Version of MongoDB image to be used (2.4, 2.6, 3.2 or latest).
  displayName: Version of MongoDB Image
  name: MONGODB_VERSION
  required: true
  value: "3.2"
- description: Name of user to access MongoDB.
  displayName: MongoDB user
  name: MONGODB_USER
  required: true
  value: "mongouser"
The oc process and oc apply commands are executed on every run of the GitLab CI pipeline, which triggers whenever someone merges into e.g. the develop branch. I would like to keep this behavior, because it guarantees that if someone changes the configuration of MongoDB, Jenkins, etc., those services are updated and redeployed automatically (in which case a loss of data is acceptable).
Does anybody know what change in OpenShift could have prompted this change in behavior, and how to get the old behavior back?
Credit goes to Graham Dumpleton, who answered my question, albeit in a comment. Since I did not hear back from him, I am leaving this answer myself for completeness' sake.
After removing the entries
creationTimestamp
lastTriggeredImage
status
image
from the deployment config template, the unwanted redeployments stopped; a sketch of the cleaned-up trigger section follows below. I do not know which entry specifically started triggering the behavior in OpenShift 3.7.1, but in any case these four entries are generated during the OpenShift deployment process and should not be in a template to begin with.
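For illustration, this is roughly what the trigger section of the template above looks like once the server-generated fields are stripped (creationTimestamp, status, and the empty image field are removed elsewhere in the template):
triggers:
- imageChangeParams:
    automatic: true
    containerNames:
    - mongodb
    from:
      kind: ImageStreamTag
      name: mongodb:${MONGODB_VERSION}
      namespace: ${NAMESPACE}
    # lastTriggeredImage removed: the cluster maintains it at runtime, and
    # re-applying it with an empty value can register as an image change
  type: ImageChange
- type: ConfigChange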