I have OpenShift 3.9 installed in one AWS region (Ohio), with Jenkins installed in it. I have a pipeline that takes Java code from GitHub, binds it with JBoss, and deploys it to the project test within the same cluster. It works fine: the pod is created, the app binds with JBoss, and I can access the application. Now I want to deploy this application to different clusters, either within the same region or across regions. Is there a way to achieve this?
You can use the oc command line tool in your Jenkins pipeline to deploy to a different cluster. For a related example, see the GitLab review apps example that targets an OpenShift cluster: its CI pipeline deploys the required artifacts to an OpenShift cluster using oc and the appropriate credentials.
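A minimal sketch of what that could look like inside an sh step of your pipeline, assuming the second cluster's API URL and a service-account token are stored as Jenkins credentials (all names below are placeholders):

    # log in to the target cluster using a token held in Jenkins credentials
    oc login https://api.other-cluster.example.com:8443 --token="$TARGET_CLUSTER_TOKEN"
    oc project test

    # trigger the same build and deployment there, e.g. via an existing BuildConfig
    oc start-build myapp --follow
    oc rollout status dc/myapp

The point is simply that oc is pointed at a different API server, so the same pipeline steps work against any cluster the supplied credentials can reach.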
I have an OpenShift 3.11 project using PHP and I would like to execute a script to configure the pod after it is deployed. The first thing the script will do is to create a symbolic link from the pod to the PV named /storage so that I can display the reports in the reports directory on the PV in a browser. I would also like to copy an image from my image directory - the image will indicate whether the application is running on the development system, the test system or production. The name of the appropriate image will be held in a config map which is tailored to each system.
I did consider OpenShift's Pod-based Lifecycle Hooks, but they appear to run in a separate pod from the application being deployed, so the symbolic link would not be created in the application pod. The OpenShift documentation mentions that you can change your image's ENTRYPOINT; the example shows running a Java application, however, and I still require the PHP and Apache image to be deployed, in addition to creating the symbolic link and copying the image.
Is it possible to perform post-deployment configuration of a pod in OpenShift, and if so, how is it done?
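For illustration, the kind of ENTRYPOINT wrapper script I have in mind is something like the following (the paths and the ENVIRONMENT_IMAGE variable are just examples); my difficulty is combining this with the standard PHP and Apache image's own start-up:

    #!/bin/bash
    # create the symbolic link to the PV so the reports are visible to the web server
    ln -s /storage/reports /opt/app-root/src/reports

    # copy the environment-specific image whose name is held in the config map
    cp "/opt/app-root/src/images/${ENVIRONMENT_IMAGE}" /opt/app-root/src/environment.png

    # hand control back to the image's normal start-up (for s2i images this is
    # typically the script at /usr/libexec/s2i/run)
    exec /usr/libexec/s2i/run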
I'm building an application in Python and using GitHub Actions to automate testing on push. However, I now want to connect my app to an existing MySQL database.
From searching the Marketplace, Google and YouTube, I can see the following options:
Use the MySQL supplied with the GitHub Actions Ubuntu virtual environment.
Set up a new MySQL database inside the GitHub Actions VM.
Set up MySQL inside a Docker container and connect to it from another Docker container containing my app.
What I can't see is how to connect out of the GitHub Actions VM to an existing database on my network. Is it possible, and should I expect to find a pre-built action for this in the Marketplace?
Sorry for such an obtuse question: old, out-of-date programmer new to both CI/CD and containerisation. Thank you.
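To make that concrete, what I am picturing is a workflow step that runs something like the commands below, with the host and credentials of my existing database supplied as repository secrets; my doubt is whether the GitHub-hosted runner can reach a database on my own network at all (all names here are placeholders):

    # commands a workflow step would run; DB_HOST, DB_USER, DB_PASSWORD and
    # DB_NAME are exposed to the step as environment variables from secrets
    mysql --host="$DB_HOST" --user="$DB_USER" --password="$DB_PASSWORD" "$DB_NAME" \
          -e "SELECT 1;"    # simple connectivity check
    python -m pytest        # then run the test suite against that database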
I have a load-balanced Elastic Beanstalk (EB) environment running a PHP application on an Apache server.
We have successfully deployed the identical software to a test environment in this AWS account, as a pre-production test. This went as expected, and the software was updated with each CLI deployment.
I cloned this environment in order to deploy the production instance. Generally, deploying the application via EB CLI results in a healthy instance. I say generally because occasionally this shows as degraded - to fix this, I select the latest application version and deploy it to the instance via the admin interface. This feels like a workaround because the console already shows the correct version as the one deployed.
The problem I am having now is in changing the environment variables to point to the production database. When I change these via the Configuration > Software section, no changes are stored. When I hit 'Apply', the environment starts to transition; when this completes, the instance health has degraded and the changes made to the configuration have not persisted.
I don't really see a pattern here, and it's behaving in a way that differs from the way the test instance did - I had no problems there.
Any suggestions on how to get past this?
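For reference, the change I am trying to make is the equivalent of setting the environment properties with the EB CLI, roughly like this (names and values are placeholders):

    # point the environment at the production database
    eb setenv RDS_HOSTNAME=prod-db.example.com RDS_DB_NAME=appdb RDS_USERNAME=app
    # confirm what the environment currently holds
    eb printenv

So far I have only made this change through the console (Configuration > Software), not with the CLI.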
I am looking for a way to install AppDynamics in an OpenShift cluster.
I am unable to find proper documentation on how to install it and which tools need to be installed.
Should my application's Dockerfile also include any images related to AppDynamics?
If anyone is familiar with this, please share some steps or provide references to the documentation.
Old docs: https://docs.appdynamics.com/22.2/en/infrastructure-visibility/monitor-containers-with-docker-visibility/use-docker-visibility-with-red-hat-openshift
New docs: https://docs.appdynamics.com/22.2/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent
Note that there is no single prescribed way to instrument; you need to make some decisions.
For example (from the second doc link):
The first decision is whether to use the officially released pre-built AppDynamics Operator images published on DockerHub and the Red Hat Registry, or to build a custom AppDynamics Operator image. See Build the Custom Cluster Agent Image.
The second decision is whether to use the officially released pre-built Cluster Agent images published on DockerHub and the Red Hat Registry, or to build a custom Cluster Agent image. See Cluster Agent Container Image.
The third decision is whether to install the Cluster Agent using the Kubernetes CLI or the Cluster Agent Helm Chart. See Install the Cluster Agent with the Kubernetes CLI and Install the Cluster Agent with Helm Charts.
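For example, if you go the Helm route, the installation reduces to something along these lines; treat the repository URL, chart name and namespace as placeholders to confirm against the current AppDynamics docs, and put your controller URL, access key and account name in values.yaml:

    # add the AppDynamics charts repository and install the Cluster Agent
    helm repo add appdynamics-charts https://appdynamics.github.io/appdynamics-charts
    kubectl create namespace appdynamics
    helm install cluster-agent appdynamics-charts/cluster-agent \
         --namespace appdynamics \
         -f values.yaml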
I tested installing a MongoDB shard on Kubernetes with Helm, but I found that those Helm charts do not really produce a proper MongoDB sharded cluster. The charts correctly create pods with names like mongos-1, mongod-server-1 and mongod-shard-1, which looks like a correct shard cluster configuration, but the appropriate mongos and mongod server instances are not created on the corresponding pods. They just create a plain mongod instance in each pod, with no connection between them. Do I need to add scripts that execute commands similar to rs.addShard(config)? I ran into the same problem when installing a MySQL cluster using Helm.
What I want to know is: is it generally inappropriate to run a MySQL/MongoDB cluster on Kubernetes? Should the database be installed independently, or deployed on Kubernetes?
Yes, you can deploy MongoDB instances on Kubernetes clusters.
Use a standalone instance for testing and development, and a replica set for production-like deployments.
Also, to make things easier, you can use the MongoDB Enterprise Kubernetes Operator:
The Operator enables easy deploys of MongoDB into Kubernetes clusters, using our management, monitoring and backup platforms, Ops Manager and Cloud Manager. By installing this integration, you will be able to deploy MongoDB instances with a single simple command.
This guide references the official MongoDB documentation with the necessary details regarding:
Install Kubernetes Operator
Deploy Standalone
Deploy Replica Set
Deploy Sharded Cluster
Edit Deployment
Kubernetes Resource Specification
Troubleshooting Kubernetes Operator
Known Issues for Kubernetes Operator
So that is basically all you need to know on this topic.
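To give a feel for it, once the Operator is running, deploying a replica set comes down to applying a single resource, roughly like this (field names follow the Enterprise Operator docs; the mongodb namespace, the project ConfigMap and the credentials Secret are things you create beforehand, so adjust the names to your setup):

    kubectl apply -n mongodb -f - <<'EOF'
    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: my-replica-set
    spec:
      type: ReplicaSet
      members: 3
      version: "4.2.2"
      opsManager:
        configMapRef:
          name: my-project        # ConfigMap pointing at Ops/Cloud Manager
      credentials: my-credentials # Secret holding the Ops Manager API key
    EOF

The Operator then creates and wires up the StatefulSet, services and agents for you, which is exactly the glue that the generic Helm charts you tried were missing.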
Please let me know if that helped.