I have a production Golang app running on OpenShift using this cartridge (https://github.com/smarterclayton/openshift-go-cart) with a MySQL database. The cartridge has had some updates that I would like to pull into my app.
Is it possible to redeploy the base cartridge into my gears without deleting the whole application?
If your repository contains .openshift/markers/hot_deploy, then when you perform a git push, OpenShift will not rebuild the application and will instead perform a hot deployment.
See the Hot Deploying Applications section of the user guide, as well as this blog post (which, somehow, contains more specific details about where the marker file goes).
I have an OpenShift 3.11 project using PHP, and I would like to execute a script to configure the pod after it is deployed. The first thing the script will do is create a symbolic link from the pod to the PV named /storage, so that I can display the reports in the reports directory on the PV in a browser. I would also like to copy an image from my image directory; the image will indicate whether the application is running on the development, test, or production system. The name of the appropriate image will be held in a config map which is tailored to each system.
I did consider OpenShift's pod-based lifecycle hooks, but they appear to run in a separate pod from the application that is deployed, so the symbolic link would not be created in the application pod. The OpenShift documentation mentions that you can change your image's ENTRYPOINT. The example shows running a Java application; however, I still require the PHP and Apache image to be deployed, in addition to creating the symbolic link and copying the image.
Is it possible to perform post-deployment configuration of a pod in OpenShift, and if so, how is it done?
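To make the goal concrete, here is a minimal sketch of the ENTRYPOINT-style approach mentioned above, expressed as a command override in the pod template of the deployment config, so the setup runs inside the application pod itself before Apache starts. Every name below (container, image, paths, the s2i run script, PVC, and config map) is an assumption for illustration, not a detail taken from the question.

```yaml
# Hypothetical sketch only: wrap the image's normal start-up so the symbolic link
# and the environment image are put in place inside the application pod itself.
# This is the pod template portion of the DeploymentConfig.
spec:
  template:
    spec:
      containers:
        - name: php-app                          # assumed container name
          image: my-php-apache-image             # the existing PHP/Apache image
          command: ["/bin/sh", "-c"]
          args:
            - |
              # link the reports directory on the PV into the web root
              ln -sfn /storage/reports /opt/app-root/src/reports
              # copy the environment banner image named in the config map
              cp "/opt/app-root/src/images/$(cat /config/environment-image)" \
                 /opt/app-root/src/images/current.png
              # hand over to the image's normal PHP/Apache start script
              exec /usr/libexec/s2i/run
          volumeMounts:
            - name: storage
              mountPath: /storage                # the PV from the question
            - name: env-config
              mountPath: /config                 # config map naming the image
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: storage                   # assumed PVC name
        - name: env-config
          configMap:
            name: environment-settings           # assumed config map name
```

A postStart lifecycle hook on the same container is another place to run the same two commands after the container starts, without replacing the start command.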
I'm building an application in Python and using GitHub Actions to automate the testing 'on push'. However, I now want to connect my app to an existing MySQL database.
From searching the Marketplace, Google and YouTube, I can see the following options:
Use the MySQL supplied with the GitHub Actions Ubuntu virtual environment.
Set up a new MySQL database inside the GitHub Actions VM.
Set up MySQL inside a Docker container and connect to it from another Docker container containing my app.
What I can't see is how to connect out of the GitHub Actions VM to an existing database on my network. Is it possible, and should I expect to see a pre-built action for this in the Marketplace?
Sorry for such an obtuse question: old, out-of-date programmer new to both CI/CD and containerisation. Thank you.
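For what it's worth, here is a minimal sketch of a workflow that talks to an existing MySQL server instead of one created inside the VM. Connecting to an existing server is ordinary client networking rather than something that needs a dedicated Marketplace action: the job needs network access to the database (for a server on a private network, that usually means a self-hosted runner) plus credentials stored as repository secrets. The secret names, runner label, and test command below are assumptions.

```yaml
# Hypothetical sketch: a job that runs tests against an existing, external MySQL server.
name: tests
on: push

jobs:
  test:
    # self-hosted so the job can reach a database on a private network;
    # use ubuntu-latest instead if the database is reachable from GitHub's runners
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.x"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run tests against the existing database
        env:
          DB_HOST: ${{ secrets.DB_HOST }}          # assumed secret names
          DB_USER: ${{ secrets.DB_USER }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
          DB_NAME: ${{ secrets.DB_NAME }}
        run: python -m pytest
```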
I have a Flask app that fetches data from an external source only at application startup.
I need to refresh the app data, and the easiest way (without changing the app logic) would be to restart the deployment daily (scale the pod to zero and back up again).
Is it possible to achieve this in the deployment configuration (DeploymentConfig)?
OpenShift version is v3.11.104.
This seems to be solvable by running the app as an OpenShift CronJob, configuring the CronJob to run every day, and setting the concurrencyPolicy parameter to Replace.
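A minimal sketch of what that might look like on OpenShift 3.11; everything here other than concurrencyPolicy: Replace (image name, port, schedule) is an assumption:

```yaml
# Hypothetical sketch: run the Flask app as a daily CronJob. With
# concurrencyPolicy: Replace, each new run replaces the still-running previous
# pod, so the app restarts and re-fetches its data every day.
apiVersion: batch/v1beta1          # CronJob API version in OpenShift 3.11
kind: CronJob
metadata:
  name: flask-app
spec:
  schedule: "0 4 * * *"            # once a day at 04:00
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: flask-app
              image: my-flask-app:latest   # assumed image
              ports:
                - containerPort: 8080
          restartPolicy: Never
```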
I'm trying to hook into an application created event in OpenShift - if such an event exists.
The reason being, I would like to have a command run (ideally in a new pod) for creating a database schema. It doesn't make sense to have this in the application image, as I only need it to run once - when the application is created.
I have looked into pod lifecycle hooks (https://docs.openshift.com/enterprise/3.1/dev_guide/deployments.html#pod-based-lifecycle-hook), however these hooks run every time there is a new deployment, so this is also too often for my use case.
Is there a way to have an image run just once when an OpenShift application is created?
You're on the right track in the comments here. In the OpenShift v2 days the same scenario existed with lifecycle hooks.
For our WordPress Quickstart in OpenShift v2, for instance, we would check to see if the database was created yet on every new deployment. If not, we initialized an empty database with the same name as the app (in this case letting WordPress create the schema afterwards, but it's the same idea needed here): OpenShift v2 WordPress deploy action hook
In OpenShift v3, there are a few ways to implement a similar lifecycle hook, but the common pattern we're using in our templates now is to leverage the ability to execute a new pod to run database setup steps just prior to the deployment phase: OpenShift v3 CakePHP pre deploy lifecycle hook
Following this pattern, you would put the code that generates your database schema in a file in your source repo (like the v3 CakePHP migrate-database.sh) and execute that script with a pre-deploy lifecycle hook (via execNewPod), checking first whether the database/schema already exists (select * from someknowntable limit 1) before loading the schema.
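Schematically, the hook sits in the strategy section of the DeploymentConfig; something along these lines, with the container name, script, and environment variable as assumptions:

```yaml
# Hypothetical sketch of a pre-deploy lifecycle hook. The hook runs in a new pod
# based on the application's image just before each deployment rolls out; the
# script itself is responsible for skipping work when the schema already exists.
spec:
  strategy:
    type: Rolling
    rollingParams:
      pre:
        failurePolicy: Abort             # stop the deployment if schema setup fails
        execNewPod:
          containerName: myapp           # container whose image the hook pod reuses
          command:
            - ./migrate-database.sh      # schema/migration script kept in the source repo
          env:
            - name: DATABASE_SERVICE_NAME
              value: mysql               # assumed service name
```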
I was looking at a README file that raised some questions about database persistence on OpenShift.
Note: Every time you push, everything in your remote repo dir gets recreated; please store long term items (like an sqlite database) in the OpenShift data directory, which will persist between pushes of your repo. The OpenShift data directory is accessible relative to the remote repo directory (../data) or via an environment variable OPENSHIFT_DATA_DIR.
https://github.com/ryanj/nodejs-custom-version-openshift/blob/master/README#L24
However, I could find no confirmation of this on the OpenShift website. Is this README out of date? I'd rather not test this, so it would be much appreciated if anyone had any firsthand knowledge they'd be willing to share.
Yep, that README file is up to date regarding SQLite. All gears have SQLite installed on them. Data should be stored in the persistent storage directory on your gear. This does not apply to MySQL/MongoDB/PostgreSQL, as those databases are add-on cartridges pre-configured to use persistent storage, whereas SQLite is simply installed and available for use.
See the first notice found in the OpenShift Origin documentation here: https://docs.openshift.org/origin-m4/oo_cartridge_guide.html
Specifically:
Cartridges and Persistent Storage: Every time you push, everything in your remote repo directory is recreated. Store long term items (like an sqlite database) in the OpenShift data directory, which will persist between pushes of your repo. The OpenShift data directory can be found via the environment variable $OPENSHIFT_DATA_DIR.
The official OpenShift Django QuickStart shows the design pattern you should follow for adding SQLite to your application via the deploy action hook. See: https://github.com/openshift/django-example/blob/master/.openshift/action_hooks/deploy