Openshift new-app from local Docker image

I'm trying to deploy on an Openshift Origin Pod an image which is available in my Docker repository:
$ docker images
REPOSITORY    TAG      IMAGE ID       CREATED       SIZE
tomcat9demo   latest   535da1774da0   9 weeks ago   350.2 MB
When running:
$ oc new-app tomcat9demo --name tomcatdemo
It apparently finds the image in the Docker repository:
--> Found Docker image 535da17 (9 weeks old) from for "tomcat9demo:latest"
* This image will be deployed in deployment config "tomcat9demo"
* Port 8080/tcp will be load balanced by service "tomcat9demo"
* Other containers can access this service through the hostname "tomcat9demo"
* WARNING: Image "tomcat9demo:latest" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources with label app=tomcat9demo ...
deploymentconfig "tomcat9demo" created
service "tomcat9demo" created
--> Success
However, the pod status shows an error. It seems there's a problem with the image pull:
$ oc get pods
NAME                  READY   STATUS         RESTARTS   AGE
tomcat9demo-1-zrj98   0/1     ErrImagePull   0          16s
$ oc status
Error from server: the server could not find the requested resource
Do I need something else to grab the image from my local Docker repository?

You need to specify the option --docker-image for it to point to your local image repo. Example:
oc new-app tomcat9demo --docker-image tomcat9demo

You should add a tag to the local docker image.
docker build -t image_name .
docker tag image_name:<current tag, or latest> image_name:<new tag>
oc new-app image_name:<new tag> --name=app_name
And you should see:
Found Docker image <your docker image id> (19 minutes old) from for "image_name:<new tag>"

Refer to this awesome YouTube video; with its help I was able to successfully deploy my local Docker image onto the dedicated OpenShift platform's Docker registry:
Push local docker images to openshift registry - minishift
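For reference, the flow shown there looks roughly like this (a hedged sketch assuming a minishift-based cluster, the default developer login, and a project named myproject):
# Authenticate Docker against the cluster's integrated registry
oc login -u developer
docker login -u developer -p "$(oc whoami -t)" "$(minishift openshift registry)"
# Tag the local image into the project's namespace and push it
docker tag tomcat9demo "$(minishift openshift registry)/myproject/tomcat9demo"
docker push "$(minishift openshift registry)/myproject/tomcat9demo"
# The push creates an image stream that new-app can deploy from
oc new-app --image-stream=tomcat9demo --name=tomcatdemo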
Hope it helps.

Related

OpenShift deploy an application from a private registry by using the "oc new-app" command

In OpenShift, I want to deploy an application using a Docker image that is located on a private Docker registry. To do this I ran the following command from the terminal, using the OpenShift Container Platform command-line interface (oc CLI):
oc new-app --docker-image=myregistry.com/mycompany/myimage --name=private --insecure-registry=true
I received a 407 proxy authentication error when I ran the above command, because pulling the image from my private registry requires authentication. I have a secret for this authentication, but I don't know how to add the secret to the above command.
Could you help me, please? Or is there another way?
Finally, I solved it. The problem was missing steps while creating the secret for the private Docker registry. The full set of steps is:
1) If you do not already have a Docker credentials file for the secured registry, you can create a secret by running:
$ oc create secret docker-registry <pull_secret_name> \
--docker-server=<registry_server> \
--docker-username=<user_name> \
--docker-password=<password> \
--docker-email=<email>
2) To use a secret for pulling images for Pods, you must add the secret to your service account:
$ oc secrets link default <pull_secret_name> --for=pull
3) To use a secret for pushing and pulling build images, the secret must be mountable inside of a Pod. You can do this by running:
$ oc secrets link builder <pull_secret_name>
https://docs.openshift.com/container-platform/4.1/openshift_images/managing-images/using-image-pull-secrets.html
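Putting it all together for the command in the question, the sequence would look something like this (illustrative values from the question; the secret name myregistry-secret is hypothetical):
# 1) Create the pull secret for the private registry
oc create secret docker-registry myregistry-secret \
--docker-server=myregistry.com \
--docker-username=<user_name> \
--docker-password=<password> \
--docker-email=<email>
# 2) Let the default service account use it for pulls
oc secrets link default myregistry-secret --for=pull
# 3) Re-run the original deployment command
oc new-app --docker-image=myregistry.com/mycompany/myimage --name=private --insecure-registry=true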

to deploy docker on apache2 server

I have an HTML application that runs on an apache2 server, and I want to dockerize it so that it runs in a Docker container using the apache2 package. I tried, but the docker build failed. I don't want to use the nginx server; please help me with Apache.
Here is the Dockerfile content in the HTML application:
FROM apache2:2.4.18
WORKDIR /var/www/html/startapp
COPY . /var/www/docker
Then I tried to build this with docker using
sudo docker build -t startapp .
It returns:
Sending build context to Docker daemon 335.6MB
Step 1/3 : FROM apache2:2.4.18
pull access denied for apache2, repository does not exist or may require 'docker login'
If it's not possible with apache2, is there a chance to build it with a LAMPP server on Ubuntu 16.04?
Try replacing the base image (the one you are using is not available on the default Docker registry).
FROM httpd:2.4
Take a look at https://hub.docker.com/_/httpd/ for more information.
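For example, a minimal Dockerfile along these lines should build (a sketch; the official httpd image serves files from /usr/local/apache2/htdocs/ by default):
FROM httpd:2.4
# Copy the application's files into the image's default document root
COPY . /usr/local/apache2/htdocs/
After that, sudo docker build -t startapp . should complete, since httpd:2.4 exists on Docker Hub.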
It seems like you are trying to use a non-official Docker image for Apache. You can either build the apache2 image from its Dockerfile if you have it, or log in to the private repository that holds the apache2 image if you have its credentials. Otherwise, use the official Apache Docker image.

Openshift stable release install centos 7

Can someone please let me know which recent version of openshift-ansible (origin) is stable enough to install on CentOS 7?
I am looking for successful multi-node install experiences and any tips that were used.
Thanks
The latest stable release is 3.9:
git clone https://github.com/openshift/openshift-ansible
cd openshift-ansible
git checkout release-3.9
and follow the Advanced Installation guide
https://docs.openshift.org/latest/install_config/install/advanced_install.html
It is now working.
After enabling openshift_repos_enable_testing=true, I had not run the prerequisites playbook before the deploy_cluster playbook, which was why it was still failing to find the packages.
I believe that the v3.11.0 version of OpenShift OKD/Origin (the latest 3.x release at the time) meets your needs. This answer contains a complete roadmap for installing OpenShift OKD/Origin as a single-node cluster service.
Some information transposed from the OKD website about OpenShift OKD/Origin...
The Community Distribution of Kubernetes that powers Red Hat OpenShift. Built around a core of OCI container packaging and Kubernetes container cluster management, OKD is also augmented by application lifecycle management functionality and DevOps tooling. OKD provides a complete open source container application platform.
OKD is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. OKD is a sibling Kubernetes distribution to Red Hat OpenShift.
OKD embeds Kubernetes and extends it with security and other integrated concepts. OKD is also referred to as Origin in github and in the documentation.
If you are looking for enterprise-level support, or information on partner certification, Red Hat also offers Red Hat OpenShift Container Platform.
So I recommend starting with OpenShift OKD/Origin using the roadmap below to install on CentOS 7. Then you can explore other possibilities ("multi-node", for example).
However, if you want to test the OpenShift (OKD) 4.X application, the guide and the right way to do this is at this link: Install the OpenShift (OKD) 4.X cluster (UPI/"bare-metal"). It is a long road with a reasonable level of complexity.
PLUS:
Information about OpenShift Ansible on GitHub and Red Hat Ansible;
You can take a look at the OpenShift Installer (NOT OKD/Origin!).
OpenShift Origin (OKD) - Open source container application platform:
OpenShift is a family of containerization software products developed by Red Hat. Its flagship product is the OpenShift Container Platform - an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux. The family's other products provide this platform through different environments: OKD serves as the community-driven upstream (akin to the way that Fedora is upstream of Red Hat Enterprise Linux), OpenShift Online is the platform offered as software as a service, and Openshift Dedicated is the platform offered as a managed service.
The OpenShift Console has developer and administrator oriented views. Administrator views allow one to monitor container resources and container health, manage users, work with operators, etc. Developer views are oriented around working with application resources within a namespace. OpenShift also provides a CLI that supports a superset of the actions that the Kubernetes CLI provides.
OpenShift Origin (OKD) is the community-driven version of OpenShift (non-enterprise-level). That means you can host your own PaaS (Platform as a Service) for free and with almost no hassle.
[Ref(s).: https://en.wikipedia.org/wiki/OpenShift ,
https://www.openshift.com/blog/openshift-ecosystem-get-started-openshift-origin-gitlab ]
Setup Local OpenShift Origin (OKD) Cluster on CentOS 7
All commands in this setup must be performed with the "root" user.
Update CentOS 7
Updating your CentOS 7 server...
yum -y update
Install and Configure Docker
OpenShift requires the Docker engine on the host machine for running containers. Install Docker and other dependencies on CentOS 7 using the commands below...
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install git-core
yum -y install wget
yum -y install yum-utils
yum -y install device-mapper-persistent-data
yum -y install lvm2
yum -y install docker-ce
yum -y install docker-ce-cli
yum -y install containerd.io
Add logged in user account to docker group...
usermod -aG docker $USER
newgrp docker
Create necessary folders...
mkdir "/etc/docker"
mkdir "/etc/containers"
Create "registries.conf" file with an insecure registry parameter ("172.30.0.0/16") to the Docker daemon...
tee "/etc/containers/registries.conf" << EOF
[registries.insecure]
registries = ['172.30.0.0/16']
EOF
Create "daemon.json" file with configurations...
tee "/etc/docker/daemon.json" << EOF
{
"insecure-registries": [
"172.30.0.0/16"
]
}
EOF
We need to reload systemd and restart the Docker daemon after editing the config...
systemctl daemon-reload
systemctl restart docker
Enable Docker to start at boot...
systemctl enable docker
Then enable "IP forwarding" on your system...
tee "/etc/sysctl.d/ip_forward.conf" << EOF
net.ipv4.ip_forward=1
EOF
sysctl -w net.ipv4.ip_forward=1
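You can verify the setting took effect (it should print net.ipv4.ip_forward = 1)...
sysctl net.ipv4.ip_forward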
Configure Firewalld.
Add the necessary firewall permissions...
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=8443/tcp --permanent
firewall-cmd --zone=public --add-port=53/udp --permanent
firewall-cmd --zone=public --add-port=8053/udp --permanent
firewall-cmd --reload
NOTE: This allows containers access to the OpenShift master API (8443/tcp) and DNS (53/udp) endpoints, and adds other permissions.
Download OpenShift
Download the OpenShift binaries from GitHub and move them to the "/usr/local/bin/" folder...
wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
tar -zxvf openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
cd ./openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit
mv ./oc /usr/local/bin/
mv ./kubectl /usr/local/bin/
rm -rf ./openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit*
Verify installation of OpenShift client utility...
oc version
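For the v3.11.0 binaries you should see output along these lines (exact build hashes may differ)...
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0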
Start OpenShift Origin (OKD) Local Cluster
Now bootstrap a local single server OpenShift Origin cluster by running the following command...
oc cluster up --public-hostname="<YOUR_SERVER_IP_OR_NAME>"
... or...
oc cluster up --public-hostname="$(ip route get 1 | awk '{print $NF;exit}')"
The command above gets the primary IP address of the local machine dynamically.
[Ref(s).: https://stackoverflow.com/a/25851186/3223785 ]
TIP: In case of error, try performing the command oc cluster down and repeating the command above.
NOTE: Insufficient hardware (mainly CPU and RAM) will cause the command above to time out.
IMPORTANT: If the parameter --public-hostname="<YOUR_SERVER_IP_OR_NAME>" is not provided, then calls to the web service ("web console") at the URL <YOUR_SERVER_IP_OR_NAME> will be redirected to the local IP "127.0.0.1".
[Ref(s).: https://github.com/openshift/origin/issues/19699 , https://github.com/openshift/origin/issues/19699#issuecomment-854069124 , https://github.com/openshift/origin/issues/20726 ,
https://github.com/openshift/origin/issues/20726#issuecomment-498078849 , https://hayardillasenlared.blogspot.com/2020/06/instalar-openshift-origin-ubuntu.html , https://www.a5idc.net/helpview_526.html , https://thecodeshell.wordpress.com/ , https://www.techrepublic.com/article/how-to-install-openshift-origin-on-ubuntu-18-04/ ]
The command above will...
Start the OKD cluster listening on the informed interface (<YOUR_SERVER_IP_OR_NAME>:8443);
Start a web console listening on all interfaces at "/console" (<YOUR_SERVER_IP_OR_NAME>:8443);
Launch Kubernetes system components;
Provision a registry, a router, initial templates, and a default project.
The OpenShift cluster will run as an all-in-one container on a Docker host.
On a successful installation, you should get output similar to below...
[...]
Login to server ...
Creating initial project "myproject" ...
Server Information ...
OpenShift server started.
The server is accessible via web console at:
https://<YOUR_SERVER_IP_OR_NAME>:8443
You are logged in as:
User: developer
Password: <any value>
To login as administrator:
oc login -u system:admin
TIPS:
There are a number of options which can be applied when setting up Openshift Origin. View them with oc cluster up --help;
Command model using custom options...
MODEL
oc cluster up --public-hostname="<PUBLIC_HOSTNAME_OR_IP>" --routing-suffix="<PUBLIC_HOSTNAME_OR_IP>.<SUFFIX>"
EXAMPLE
oc cluster up --public-hostname="192.168.56.124" --routing-suffix="192.168.56.124.nip.io"
;
The OpenShift Origin cluster configuration files will be located inside the "~/openshift.local.clusterup" directory. The "~" is the logged-in user's home directory.
If your cluster setup was successful the command...
oc cluster status
... will give you a positive output like this...
Web console URL: https://<YOUR_SERVER_IP_OR_NAME>:8443/console/
Config is at host directory
Volumes are at host directory
Persistent volumes are at host directory /root/openshift.local.clusterup/openshift.local.pv
Data will be discarded when cluster is destroyed
Run OpenShift as a single node cluster service on system startup
Create OpenShift service file...
read -r -d '' FILE_CONTENT << 'HEREDOC'
BEGIN
[Unit]
Description=OpenShift oc cluster up service
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/bash -c "/usr/local/bin/oc cluster up --public-hostname=\"$(ip route get 1 | awk '{print $NF;exit}')\""
ExecStop=/usr/bin/bash -c "/usr/local/bin/oc cluster down"
Restart=no
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=occlusterup
User=root
Type=oneshot
RemainAfterExit=yes
TimeoutSec=300
[Install]
WantedBy=multi-user.target
END
HEREDOC
echo -n "${FILE_CONTENT:6:-3}" > '/etc/systemd/system/openshift.service'
NOTE: For some reason without the workaround /usr/bin/bash -c "<SOME_COMMAND>" we were unable to start the OpenShift cluster. Additional information about parameters for the oc cluster up command can be seen in the references immediately below.
[Ref(s).: https://avinetworks.com/docs/18.1/avi-vantage-openshift-installation-guide/ ,
https://github.com/openshift/origin/issues/7177#issuecomment-391478549 ,
https://github.com/minishift/minishift/issues/1910#issuecomment-375031172 ]
[Ref(s).: https://tobru.ch/openshift-oc-cluster-up-as-systemd-service/ , https://eenfach.de/gitblit/blob/RedHatTraining!agnosticd.git/af831991c7c752a1215cfc4cff6a028e31f410d7/ansible!configs!rhte-oc-cluster-vms!files!oc-cluster.service.j2 ]
Start and enable (start at boot) the OpenShift service and see the log output in sequence...
systemctl enable openshift.service
systemctl start openshift.service
journalctl -u openshift.service -f --no-pager | less
Using OpenShift OKD/Origin Admin Console
OKD includes a web console which you can use for creation and other management actions. This web console is accessible on the server IP/hostname on port 8443 via HTTPS...
https://<IP_OR_HOSTNAME>:8443/console
NOTE: You should see an OpenShift Origin page with username and password form (USERNAME: developer / PASSWORD: developer ).
Deploy a test application in the Cluster
Login to Openshift cluster as "regular developer" user (USERNAME: developer / PASSWORD: developer )...
oc login
TIP: You begin logged in as "developer".
Create a test project using oc "new-project" command...
MODEL
oc new-project <PROJECT_NAME> --display-name="<PROJECT_DISPLAY_NAME>" --description="<PROJECT_DESCRIPTION>"
EXAMPLE
oc new-project test-project --display-name="Test Project" --description="My cool Test Project."
NOTE: All commands below involving the "deployment-example" parameter value will be linked to "test-project", because after creating this project it is selected as the project for the subsequent settings. To confirm this, log in as administrator using the oc login -u system:admin command and see the output of the oc status command. For more information, see the oc project <PROJECT_NAME> command in the "Some OpenShift Origin Cluster Useful Commands" section.
Tag an application image from Docker Hub registry...
oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest
Deploy application to OpenShift...
MODEL
oc new-app <DEPLOYMENT_NAME>
EXAMPLE
oc new-app "deployment-example"
Allow external access to the deployed application...
MODEL
oc expose "svc/<DEPLOYMENT_NAME>"
EXAMPLE
oc expose "svc/deployment-example"
Show application deployment status...
oc status
Show pods status...
oc get pods
Get service detailed information...
oc get svc
Test Application local access...
NOTE: See <CLUSTER_IP> on command oc get svc output above.
curl http://<CLUSTER_IP>:8080
See external access route to the deployed application...
oc get routes
Test external access to the application...
Open the URL <HOST_PORT> in your browser.
MODEL
http://<HOST_PORT>
EXAMPLE
http://deployment-example-test-project.192.168.56.124.nip.io
NOTES:
See <HOST_PORT> in the oc get routes output;
The wildcard DNS record *.<IP_OR_HOSTNAME>.nip.io points to the OpenShift Origin server IP address (see the quick check below).
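A quick way to check the wildcard resolution, assuming dig (from the bind-utils package) is installed; it should return the server IP (192.168.56.124 in the example above)...
dig +short deployment-example-test-project.192.168.56.124.nip.io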
Delete test project...
MODEL
oc delete project "<PROJECT_NAME>"
EXAMPLE
oc delete project "test-project"
[Ref(s).: https://docs.openshift.com/container-platform/4.2/applications/projects/working-with-projects.html#deleting-a-project-using-the-CLIprojects ]
Delete test deployment...
MODEL
oc delete all -l app="<DEPLOYMENT_NAME>"
EXAMPLE
oc delete all -l app="deployment-example"
Check pods status after deleting the project and the deployment...
oc get pods
TIP: Completely recreate the cluster...
oc cluster down
rm -rf ~/openshift.local.clusterup
. It may be necessary to reboot the server to delete the above folder;
. The "~" is the logged-in user's home directory.
Some OpenShift Origin Cluster Useful Commands
To login as an administrator use...
oc login -u system:admin
As the administrator ("system:admin") user you can see information such as node status...
oc get nodes
To get more detailed information about a specific node, including the reason for the current condition...
MODEL
oc describe node "<NODE_NAME>"
EXAMPLE
oc describe node "localhost"
To display a summary of the resources you created...
oc status
Select a project to perform CLI operations...
oc project "<PROJECT_NAME>"
NOTE: The selected project will be used in all subsequent operations that manipulate project-scoped content.
[Ref(s).: https://docs.openshift.com/container-platform/4.2/applications/projects/working-with-projects.html#viewing-a-project-using-the-CLI_projects ]
To return to the "regular developer" user (USERNAME: developer / PASSWORD: developer )...
oc login
To check who is the logged in user...
oc whoami
Thanks! =D

when I push image into openshift registry, is it private?

When I push a Docker image into the OpenShift registry, is it private? Looking at the About page in my OpenShift console, it says I can push to the registry.rh-us-east-1.openshift.com registry. When I log in, I can push without problems:
oc whoami -t # to get token
docker login -u <username> -p <token> registry.rh-us-east-1.openshift.com
docker tag xyz registry.rh-us-east-1.openshift.com/xyz/xyz
docker push registry.rh-us-east-1.openshift.com/xyz/xyz
And the question is: if I do not share my username/token with anybody, is the image in that registry private (i.e. it cannot be accessed by anybody except my OpenShift Online account)?
Correct, it will only be visible to you as a user when you log into the image registry using docker login and to the service accounts in your OpenShift project which need to be able to pull the image from the image registry to deploy it.
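If you later do want another user to be able to pull the image without sharing your token, you can grant access explicitly; for example (a sketch using the standard system:image-puller role, with otheruser and the xyz project as placeholders):
oc policy add-role-to-user system:image-puller otheruser -n xyz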

Connecting to percona docker from a java docker container

I know there have been many similar questions, but none of them are what I want. I'm following this because I specifically need 5.5, at least for now. My java project (which accesses mysql) is in a container I built with
docker build -t projectname-testing .
The Dockerfile is pretty standard; it just copies over a built tarball and extracts it to a specific folder. The CMD is a shell script, run_dev_server.sh, that just launches the server with dev configurations rather than production ones.
I created a percona docker container with the command given in the link with
docker run --name projectname-mysql-server -e MYSQL_ROOT_PASSWORD="" -d percona:5.5
So now, the way I see it, I just need to link the two as mentioned in the link:
docker run -p 3306:3306 --name projectname-local --link projectname-mysql-server projectname-testing
Which gives me
docker: Error response from daemon: Cannot link to a non running container: /projectname-mysql-server AS /projectname-local/projectname-mysql-server.
ERRO[0000] error getting events from daemon: net/http: request canceled
Which isn't very helpful and doesn't tell me what happened. Am I understanding this process wrong? What should I be doing?
First of all, I would recommend using the official Percona docker image from Docker Hub, instead of building your own image. The official image has a 5.5 version; https://hub.docker.com/_/percona/
You can either extend this image if you need specific changes (such as a custom configuration), for example;
FROM percona:5.5
COPY my-config.cnf /etc/mysql/conf.d/
Important: I notice you are publishing port 3306 (-p 3306:3306). Publishing a port makes it publicly accessible on the host's network interface. You should only do this if you have external software that needs to connect to the database. If only your application needs access to the database, publishing the port is not needed, because containers can connect with each other through the Docker container-container network, which is "private" and not reachable from outside the host.
The --link option on the default network is a legacy option that is still around for backward compatibility, but should not be used for most situations. The --link option has a number of limitations;
legacy links are not dynamic; it's not possible to replace a linked container without re-creating all containers linked to that container
restarting a linked container can break the link, with no option to re-establish a link
legacy links are uni-directional
environment variables are shared between containers, which can easily lead to leaking (e.g.) credentials to other containers.
Docker 1.9 introduced custom docker networks, which allow containers on the same network to discover each other by name, and which address the limitations above.
A simple example;
create a network for your application;
docker network create mynet
create a database container, and attach it to the network; there is no need to publish its ports for other containers to connect to it. (I'm using an nginx image here, just to illustrate the concept);
docker run -d --name db --network mynet nginx:alpine
create an "application" container and attach it to the same network; doing so
allows it to communicate with the db container over that network;
docker run -dit --name app --network mynet alpine sh
The application container can now connect to the db container, using its name
as hostname (db); to illustrate this, open a shell in the app container, install curl and connect to http://db:80;
docker exec -it app sh
/ # apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ca-certificates (20161130-r1)
(2/4) Installing libssh2 (1.7.0-r2)
(3/4) Installing libcurl (7.52.1-r3)
(4/4) Installing curl (7.52.1-r3)
Executing busybox-1.25.1-r0.trigger
Executing ca-certificates-20161130-r1.trigger
OK: 5 MiB in 15 packages
/ # curl http://db:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
You can read more about networks (also how to dynamically attach and detach a container from a network) in the "docker container networking" section of the documentation: https://docs.docker.com/engine/userguide/networking/
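Applied to the containers from the question, the same pattern would look roughly like this (a sketch reusing the names from the question; the Java application would then reach MySQL at hostname projectname-mysql-server on port 3306):
# Create a user-defined network for the application
docker network create projectname-net
# Start the database on that network; no -p needed for container-to-container access
docker run -d --name projectname-mysql-server --network projectname-net \
-e MYSQL_ROOT_PASSWORD="" percona:5.5
# Start the application container on the same network
docker run -d --name projectname-local --network projectname-net projectname-testing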