OpenStack Heat template: interfaces with fixed IPs are not configured automatically - openstack-nova

I assign a fixed IP to an interface in the Heat template:
  private_port_1:
    type: OS::Neutron::Port
    properties:
      network: { get_param: private_net }
      fixed_ips: [{"subnet": { get_param: private_subnet }, "ip_address": { get_param: private_ip_1 }}]

  my_vm_123:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_name }
      flavor: { get_param: flavor_name }
      name: { get_param: vm_name }
      networks:
        - network: { get_param: public_net }
        - port: { get_resource: private_port_1 }
The VM is instantiated successfully and its private IP (private_ip_1) is shown in the Horizon GUI. However, eth1 appears to be down, and /etc/network/interfaces contains configuration only for the public eth0.
I can work around this by manually populating /etc/network/interfaces and bringing eth1 up in the user_data: section. The question is: is this the way it should be, or is there something wrong with my Heat template or OpenStack setup that prevents eth1 from being configured automatically?
Thanks!
Michael.

Yes, it is the way it should be.
OpenStack (Nova, Neutron) sets up the VM and provides the right connectivity. However, the operating system running in the VM has to bring up the interfaces itself. Default cloud-init images are hardcoded to bring up only eth0 (with DHCP), so you have to bring up eth1 explicitly in your VM.
You can use the user_data property of the OS::Nova::Server resource type to run custom scripts when bringing up a VM. I had a similar use case where I needed to bring up eth1 automatically; you can see how I achieved this at https://github.com/ypraveen/openstack-installer/blob/master/vm-heat-template/devstack.yaml
Line 33 shows the usage of user_data.
You can check the lines 41-45 in the init script that bring up eth1: https://github.com/ypraveen/openstack-installer/blob/master/vm-heat-template/devstack_vm_init.sh
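For illustration, here is a minimal user_data sketch of that workaround (my own, not taken from the linked repo; it assumes a Debian/Ubuntu guest that manages networking via /etc/network/interfaces, as in the question):

  my_vm_123:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_name }
      flavor: { get_param: flavor_name }
      name: { get_param: vm_name }
      networks:
        - network: { get_param: public_net }
        - port: { get_resource: private_port_1 }
      user_data_format: RAW
      user_data: |
        #!/bin/bash
        # cloud-init only brings up eth0 by default, so configure eth1 here.
        # As long as DHCP is enabled on the subnet, Neutron hands eth1 the
        # fixed IP reserved for the port, so private_ip_1 need not be
        # hardcoded inside the guest.
        cat >> /etc/network/interfaces <<'EOF'

        auto eth1
        iface eth1 inet dhcp
        EOF
        ifup eth1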

Related

Is it possible to configure OpenShift coreDNS with a "hosts" statement like for vanilla Kubernetes?

I want to resolve a particular external test hostname to its IP on my OpenShift (4.10.32) cluster. Note that this is needed for some workloads deployed by operators, so I cannot (I think) update the deployment YAMLs generated by the operator to add hostAliases, for example.
I tried to add a hosts statement to my DNS config the way others do on vanilla Kubernetes, see here or here.
In other words, I'd like my coreDNS ConfigMap (generated by dns.operator/default), which looks like this:
oc get configmap/dns-default -n openshift-dns -o yaml

apiVersion: v1
data:
  Corefile: |
    .:5353 {
        bufsize 512
        errors
        log . {
            class error
        }
        health {
            lameduck 20s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus 127.0.0.1:9153
        forward . /etc/resolv.conf {
            policy sequential
        }
        cache 900 {
            denial 9984 30
        }
        reload
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2022-10-13T14:47:45Z"
  labels:
    dns.operator.openshift.io/owning-dns: default
  name: dns-default
  namespace: openshift-dns
  ownerReferences:
  - apiVersion: operator.openshift.io/v1
    controller: true
    kind: DNS
    name: default
    uid: ef6eb600-c5ec-4cf5-a4f0-ca654b613e16
  resourceVersion: "2918009"
  uid: cf3cd62e-fef0-4c52-a7e0-6d8d8bf149ed
to be updated so that the coreDNS config part would look like this (given that I want to add a host hostname.my-domain that resolves to IP 1.2.3.4):
Corefile: |
  .:5353 {
      bufsize 512
      errors
      log . {
          class error
      }
      health {
          lameduck 20s
      }
      ready
      hosts {
          1.2.3.4 hostname.my-domain
          fallthrough
      }
      kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      }
      prometheus 127.0.0.1:9153
      forward . /etc/resolv.conf {
          policy sequential
      }
      cache 900 {
          denial 9984 30
      }
      reload
  }
Thanks in advance for any hints on how to do that, or on other ways to achieve what I'd like to do.
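For reference, a hedged sketch of the operator-level knob (an assumption about the 4.10 API, not a confirmed solution to the hosts use case): the DNS operator owns dns-default and reverts direct ConfigMap edits, and to my knowledge it does not expose CoreDNS's hosts plugin. What it does expose is spec.servers on the operator.openshift.io/v1 DNS resource, which forwards a zone to an upstream resolver you control; the host entry would then live on that resolver:

apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
  - name: my-domain-override
    zones:
    - my-domain
    forwardPlugin:
      upstreams:
      - 10.0.0.53   # assumed: a resolver you run that answers for my-domain

This can be edited in place with oc edit dns.operator/default; note it forwards queries for the zone rather than adding hosts entries directly.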

How to create a kubernetes secret as a JSON object and load it in the kubernetes environment as JSON

I need to pass a JWK as a Kubernetes environment variable to my app.
I created a file to store my key like so:
cat deploy/keys/access-signature-public-jwk
{
algorithm = "RS256"
jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
Which is then used to create a kubernetes secret like so:
kubectl create secret generic intimations-signature-public-secret --from-file=./deploy/keys/access-signature-public-jwk
Which is then retrieved into a kubernetes environment variable as:
- name: ACCESS_SIGNATURE_PUBLIC_JWK
  valueFrom:
    secretKeyRef:
      name: intimations-signature-public-secret
      key: access-signature-public-jwk
And passed to the application.conf of the application like so:
pac4j.lagom.jwt.authenticator {
  signatures = [
    ${ACCESS_SIGNATURE_PUBLIC_JWK}
  ]
}
The pac4j library expects the config pac4j.lagom.jwt.authenticator as a JSON object, but I get the following exception when I run the app:
com.typesafe.config.ConfigException$WrongType: env variables: signatures has type list of STRING rather than list of OBJECT
    at com.typesafe.config.impl.SimpleConfig.getHomogeneousWrappedList(SimpleConfig.java:452)
    at com.typesafe.config.impl.SimpleConfig.getObjectList(SimpleConfig.java:460)
    at com.typesafe.config.impl.SimpleConfig.getConfigList(SimpleConfig.java:465)
    at org.pac4j.lagom.jwt.JwtAuthenticatorHelper.parse(JwtAuthenticatorHelper.java:84)
    at com.codingkapoor.holiday.impl.core.HolidayApplication.jwtClient$lzycompute(HolidayApplication.scala
POD Description

Name:           holiday-deployment-55b86f955d-9klk2
Namespace:      default
Priority:       0
Node:           minikube/192.168.99.103
Start Time:     Thu, 28 May 2020 12:42:50 +0530
Labels:         app=holiday
                pod-template-hash=55b86f955d
Annotations:    <none>
Status:         Running
IP:             172.17.0.5
IPs:
  IP:           172.17.0.5
Controlled By:  ReplicaSet/holiday-deployment-55b86f955d
Containers:
  holiday:
    Container ID:   docker://18443cfedc7fd39440f5fa6f038f36c58cec1660a2974e6432500e8c7d51f5e6
    Image:          codingkapoor/holiday-impl:latest
    Image ID:       docker://sha256:6e0ddcf41e0257755b7e865424671970091d555c4bad88b5d896708ded139eb7
    Port:           8558/TCP
    Host Port:      0/TCP
    State:          Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Thu, 28 May 2020 22:49:24 +0530
      Finished:     Thu, 28 May 2020 22:49:29 +0530
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Thu, 28 May 2020 22:44:15 +0530
      Finished:     Thu, 28 May 2020 22:44:21 +0530
    Ready:          False
    Restart Count:  55
    Liveness:       http-get http://:management/alive delay=20s timeout=1s period=10s #success=1 #failure=10
    Readiness:      http-get http://:management/ready delay=20s timeout=1s period=10s #success=1 #failure=10
    Environment:
      JAVA_OPTS:                     -Xms256m -Xmx256m -Dconfig.resource=prod-application.conf
      APPLICATION_SECRET:            <set to the key 'secret' in secret 'intimations-application-secret'>  Optional: false
      MYSQL_URL:                     jdbc:mysql://mysql/intimations_holiday_schema
      MYSQL_USERNAME:                <set to the key 'username' in secret 'intimations-mysql-secret'>  Optional: false
      MYSQL_PASSWORD:                <set to the key 'password' in secret 'intimations-mysql-secret'>  Optional: false
      ACCESS_SIGNATURE_PUBLIC_JWK:   <set to the key 'access-signature-public-jwk' in secret 'intimations-signature-public-secret'>  Optional: false
      REFRESH_SIGNATURE_PUBLIC_JWK:  <set to the key 'refresh-signature-public-jwk' in secret 'intimations-signature-public-secret'>  Optional: false
      REQUIRED_CONTACT_POINT_NR:     1
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kqmmv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-kqmmv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kqmmv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                    From               Message
  ----     ------   ----                   ----               -------
  Normal   Pulled   5m21s (x23 over 100m)  kubelet, minikube  Container image "codingkapoor/holiday-impl:latest" already present on machine
  Warning  BackOff  27s (x466 over 100m)   kubelet, minikube  Back-off restarting failed container
I was wondering if there is any way to pass the environment variable as a JSON object instead of a string. Please suggest. TIA.
First, the file access-signature-public-jwk is not a valid JSON file. You should update it to be valid JSON:
{
  "algorithm" : "RS256",
  "jwk" : {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
Steps I followed to validate:
kubectl create secret generic token1 --from-file=jwk.json
Mount the secret into the pod.
env:
- name: JWK
  valueFrom:
    secretKeyRef:
      name: token1
      key: jwk.json
exec into the pod and check the env variable JWK:
$ echo $JWK
{ "algorithm" : "RS256", "jwk" : {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"} }
Copy the content to a file
echo $JWK > jwk.json
Validate the file
$ jsonlint-php jwk.json
Valid JSON (jwk.json)
If I use the file you have given and follow the same steps, it gives a JSON validation error. Also, env variables are always strings; you have to convert them into the required types in your code.
$ echo $JWK
{ algorithm = "RS256" jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"} }
$ echo $JWK > jwk.json
$ jsonlint-php jwk.json
jwk.json: Parse error on line 1:
{ algorithm = "RS256"
-^
Expected one of: 'STRING', '}'
Although not a direct answer, here is an alternate solution to this problem.
As #hariK pointed out, environment variables are always strings, and in order to consume them as JSON we would need to convert the env var read as a string into JSON.
However, in my case this was not a viable solution, because I was using a lib that expects a Config object rather than a JSON object directly, which would have meant a lot of work: converting string -> json -> Config. Plus, this approach is inconsistent with how the Config object is built in development scenarios, i.e., json -> Config. See here.
The framework I am using to build this app is based on Play Framework, which lets you modularize application configs in separate files and then club the required pieces together into a desired config file, as shown below. You can read about it in more detail here.
application.conf
include "/opt/conf/app1.conf"
include "/opt/conf/app2.conf"
This allowed me to make use of the Using Secrets as files from a Pod feature of Kubernetes.
Basically, I created a small config file that contains a part of my main application configuration file, as shown below:
cat deploy/keys/signature-public-jwk
pac4j.lagom.jwt.authenticator {
  signatures = [
    {
      algorithm = "RS256"
      jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
    }
  ]
}
Then I created a kubernetes secret and mounted it as a volume in the deployment so that it appears in the pod as a file:
kubectl create secret generic signature-public-secret --from-file=./deploy/secrets/signature-public-jwks.conf
# deployment yaml
spec:
  containers:
  - name: employee
    image: "codingkapoor/employee-impl:latest"
    volumeMounts:
    - name: signature-public-secret-conf
      mountPath: /opt/conf/signature-public-jwks.conf
      subPath: signature-public-jwks.conf
      readOnly: true
  volumes:
  - name: signature-public-secret-conf
    secret:
      secretName: signature-public-secret
Use this mounted file location in the application.conf to include it:
include file("/opt/conf/signature-public-jwks.conf")
Notice that the mountPath and the file location in the application.conf are the same.
Advantages of this approach:
- The solution is consistent across the development, test, and production environments, as we can hand the lib json instead of a string, as explained above.
- Secrets shouldn't be passed as environment variables anyway! You can read more about it here.

Build an Image from Dockerfile using Pipeline with Openshift/jenkins-client-plugin

I am trying to set up an OpenShift Jenkins pipeline to:
Get source from git. The source includes a Dockerfile
Run tests
Build an image using the Dockerfile
Push the image to an Imagestream
I have created a BuildConfig with the Docker strategy, which actually works without the pipeline. The problem is that I don't know how to run tests.
If I try to kick off the build from the pipeline, it does not trigger the build.
Am I on the right path? What would be the best approach for projects like this? Should I keep relying on OpenShift-specific tools, or move to plain Kubernetes?
I am using https://github.com/openshift/jenkins-client-plugin for the pipeline.
Here is my BuildConfig:
kind: "BuildConfig"
apiVersion: "v1"
metadata:
name: "front-end-build"
spec:
runPolicy: "Serial"
nodeSelector:
hostname: "vhkdld518"
source:
git:
uri: "https://kraporta#bitbucket/scm/~kraporta/test-kube.git"
strategy:
dockerStrategy:
from:
kind: "ImageStreamTag"
name: "nginx:alpine"
dockerfilePath: Dockerfile
output:
to:
kind: "ImageStreamTag"
name: "front-end:latest"
Here is the pipeline:
node {
    stage('build') {
        openshift.withCluster() {
            openshift.withProject() {
                echo "Using project: ${openshift.project()}"
                def builds = openshift.selector("bc", "front-end-build").related('builds')
                builds.describe()
                timeout(5) {
                    builds.untilEach(1) {
                        it.describe()
                        echo "Inside loop: ${it}"
                        return (it.object().status.phase == "Complete")
                    }
                }
            }
        }
    }
}
Thanks a lot!
I know it's an old question, but I'd like to keep this as a reference.
The issue:
There's a missing step: you haven't started the build process, so there are no builds!
The solution:
We need to start the build process before fetching the builds. Here:
def builds = openshift.selector("bc", "front-end-build").related('builds')
becomes (there are many ways this can be done):
def buildConfig = openshift.selector("bc", "front-end-build")
openshift.startBuild("front-end-build") // start the build process
def builds = buildConfig.related('builds')
Check: https://github.com/openshift/jenkins-client-plugin#watching-and-waiting-of-course

How to attach a floating IP using the present environment on OpenStack

I'm a newbie with OpenStack Heat templates. I searched but didn't find a relevant answer to my question. Here is my Heat template YAML file:
heat_template_version: newton

description: Simple template to deploy a single compute instance with an attached volume

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: instance-name
      flavor: std.cpu1ram1
      block_device_mapping_v2:
        - device_name: vda
          image: RHEL-7.4
          volume_size: 30
          delete_on_termination: true
      networks:
        - network: network-name.admin-network
      security_group:
        - security_group: [security-name.group-sec-default]

  my_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10

  my_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: my_instance }
      volume_id: { get_resource: my_volume }
      mountpoint: /dev/vdb
This Heat file works, but I don't know how to attach a floating IP to "my_instance". I'm able to do it in Horizon and it works without problems. In the Horizon interface, I have to choose "Router_dmz" as the pool, which creates and allocates the floating IP. As I understand it, the floating IP address should be associated with "network-name.admin-network". I've read a lot of documentation, and I don't know whether I should use the OS::Neutron::FloatingIPAssociation resource or OS::Nova::FloatingIPAssociation. I tried on my side and had no success.
I found this works for me:
heat_template_version: newton

description: Simple template to deploy a single compute instance

resources:
  floating_ip:
    type: OS::Nova::FloatingIP
    properties:
      pool: string_of_pool_of_public_network

  my_instance:
    type: OS::Nova::Server
    properties:
      name: instance-name
      flavor: std.cpu1ram1
      block_device_mapping_v2:
        - device_name: vda
          image: RHEL-7.4
          volume_size: 30
          delete_on_termination: true
      networks:
        - network: network-name.admin-network
      security_group:
        - security_group: [security-name.group-sec-default]

  association:
    type: OS::Nova::FloatingIPAssociation
    properties:
      floating_ip: { get_resource: floating_ip }
      server_id: { get_resource: my_instance }
But this solution is deprecated; I had no success with the Neutron resources.
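For reference, here is a hedged sketch of the non-deprecated Neutron equivalent (the network names are placeholders based on the question; note that OS::Neutron::FloatingIPAssociation attaches the floating IP to a port rather than to the server):

resources:
  my_port:
    type: OS::Neutron::Port
    properties:
      network: network-name.admin-network

  my_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: public-network-name   # the external network, not a Nova pool name

  my_association:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: my_floating_ip }
      port_id: { get_resource: my_port }

The server would then reference the port in its networks list (- port: { get_resource: my_port }) instead of the network name.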

App deployed on Kubernetes cannot be accessed from Internet

I am new to Kubernetes & Docker. I created a simple Node.js application and deployed it on Bluemix Kubernetes, but I am unable to access the application from the internet. The IP & port mentioned in Kubernetes are not accessible. Can somebody help me?
I tried http://10.76.193.146:31972, but it did not go through. I am not sure if this is the public IP, as it's a 10. series address.
I also tried the public IP (http://184.173.1.79:31972) mentioned in the Bluemix Kubernetes cluster, but that too failed.
These are the steps I followed.
Created a Node.js app locally. It ran as desired on the local machine:
// Load the http module to create an http server.
var http = require('http');
// Configure our HTTP server to respond with Hello World to all requests.
var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World\n");
});
// Listen on port 8000, IP defaults to 127.0.0.1
server.listen(8000);
// Put a friendly message on the terminal
console.log("Server running at http://127.0.0.1:8000/");
---------- package.json

{
  "name": "helloworld-nodejs",
  "version": "0.0.1",
  "description": "First Docker",
  "main": "app.js",
  "scripts": {
    "start": "PORT=8000 node ./app.js"
  },
  "author": "",
  "license": "ISC"
}
Created a docker container locally and ran it. It worked properly.
Uploaded the docker image to the Bluemix registry as:
registry.ng.bluemix.net/testkubernetes/helloworld-nodejs:0.0.1
Created the Pod and Service in Kubernetes using the following YAML files:
---------- Pod YAML file

apiVersion: v1
kind: Pod
metadata:
  name: helloworld-nodejs
  labels:
    name: helloworld-nodejs
spec:
  containers:
  - name: helloworld-nodejs
    image: registry.ng.bluemix.net/testkubernetes/helloworld-nodejs:0.0.1
    ports:
    - containerPort: 8000
---------- Service YAML

apiVersion: v1
kind: Service
metadata:
  name: helloworld-nodejs
  labels:
    name: helloworld-nodejs
spec:
  type: NodePort
  selector:
    name: helloworld-nodejs
  ports:
  - port: 8080
The application gets deployed properly and is running, which I can confirm from the logs.
Result of the kubectl get services & kubectl get nodes commands (screenshots not included).
Since your service's port is different from your pod's containerPort, you will have to specify targetPort in your service.
spec:
  type: NodePort
  selector:
    name: helloworld-nodejs
  ports:
  - port: 8080
    targetPort: 8000
According to the Kubernetes documentation on targetPort, it is the:

    Number or name of the port to access on the pods targeted by the service. ... If this is not specified, the value of the 'port' field is used (an identity map).
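As a usage sketch (the nodePort value here is an assumption for illustration; Kubernetes assigns one from the 30000-32767 range unless you set it), the full Service would look like:

apiVersion: v1
kind: Service
metadata:
  name: helloworld-nodejs
spec:
  type: NodePort
  selector:
    name: helloworld-nodejs
  ports:
  - port: 8080        # service port inside the cluster
    targetPort: 8000  # must match the pod's containerPort
    nodePort: 31972   # assumed; omit to let Kubernetes assign one

The app is then reachable at http://<node-public-ip>:<nodePort>, i.e. the 184.173.1.79 address from the question rather than the private 10.x one; kubectl get svc helloworld-nodejs shows the assigned nodePort.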