I have a Flask app that fetches data from an external source only at application startup.
I need to refresh the app's data, and the easiest way (without changing the app logic) would be to restart the deployment daily (scale the pods down to zero and back up again).
Is it possible to achieve this in the deployment configuration (DeploymentConfig)?
OpenShift version is v3.11.104.
This seems to be solvable by running the restart from an OpenShift CronJob: configure the CronJob to run once a day and set its concurrencyPolicy parameter to Replace.
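A minimal sketch of that approach, assuming a DeploymentConfig named myapp and a dedicated service account; the names, image, and the nightly 03:00 schedule are illustrative:

```
# One-time setup: a service account allowed to scale the DeploymentConfig
oc create serviceaccount restart-sa
oc policy add-role-to-user edit -z restart-sa

# CronJob that scales the app down and back up every night
cat <<'EOF' | oc apply -f -
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: daily-restart
spec:
  schedule: "0 3 * * *"
  concurrencyPolicy: Replace   # a still-running job is replaced, never overlapped
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: restart-sa
          restartPolicy: Never
          containers:
          - name: restart
            image: registry.redhat.io/openshift3/ose-cli:v3.11   # any image with the oc client
            command:
            - /bin/sh
            - -c
            - oc scale dc/myapp --replicas=0 && oc scale dc/myapp --replicas=1
EOF
```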
I have an OpenShift 3.11 project using PHP and I would like to execute a script to configure the pod after it is deployed. The first thing the script will do is create a symbolic link named /storage from the pod to the PV, so that I can display the reports in the reports directory on the PV in a browser. I would also like to copy an image from my image directory - the image will indicate whether the application is running on the development system, the test system or production. The name of the appropriate image will be held in a config map which is tailored to each system.
I did consider OpenShift's Pod Based Lifecycle Hooks, but they appear to run in a separate pod from the application that is deployed, so the symbolic link would not be created in the application pod. The OpenShift documentation mentions that you can change your image's ENTRYPOINT. The example shows running a Java application; however, I still require the PHP and Apache image to be deployed, in addition to creating the symbolic link and copying the image.
Is it possible to perform post-deployment configuration of a pod in OpenShift, and if so, how is it done?
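One hedged sketch of the ENTRYPOINT approach: a wrapper script baked into the image that does the setup inside the application pod itself and then execs the original start command. The mount point, image directory, the ENV_IMAGE variable (populated from the config map), and the s2i run script path are all assumptions:

```
#!/bin/sh
# entrypoint-wrapper.sh -- runs as the container's ENTRYPOINT; names are illustrative.

# 1. Symlink /storage to the PV mount so reports on the PV are browsable
ln -sfn /mnt/reports /storage

# 2. Copy the environment indicator image; $ENV_IMAGE comes from the config map
cp "/opt/app-root/images/${ENV_IMAGE}" /opt/app-root/src/env-indicator.png

# 3. Hand off to the image's original PHP/Apache start command
exec /usr/libexec/s2i/run
```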
I have a load balanced EB environment, running a PHP application on an Apache server.
We have successfully deployed the identical software to a test environment in this AWS account, as a pre-production test. This went as expected, and updated the software with each CLI deployment.
I cloned this environment in order to deploy the production instance. Generally, deploying the application via EB CLI results in a healthy instance. I say generally because occasionally this shows as degraded - to fix this, I select the latest application version and deploy it to the instance via the admin interface. This feels like a workaround because the console already shows the correct version as the one deployed.
The problem I am having now is in changing the environment variables to point to the production database. When I change these via the Configuration > Software section, no changes are stored. When I hit 'Apply', the environment starts to transition. When this is complete, the instance health has degraded and the changes made to the configuration have not persisted.
I don't really see a pattern here, and it's behaving in a way that differs from the way the test instance did - I had no problems there.
Any suggestions on how to get past this?
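Not a fix in itself, but one way to narrow this down is to set the variables from the EB CLI rather than the console and watch the events while the environment transitions; if CLI-set values persist, the problem is specific to the console flow. The variable names and values here are placeholders:

```
# Set the production DB settings from the CLI (placeholder names/values)
eb setenv RDS_HOSTNAME=prod-db.example.com RDS_USERNAME=app RDS_PASSWORD=secret

# Follow the events while the environment updates
eb events -f

# Read the variables back and check health once the update finishes
eb printenv
eb health
```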
I'm working on an architecture to deploy my webapp. I would like to use Google Managed Instance Groups because I have some strict requirements. I was wondering:
Which is the best web container to deploy in a distributed environment?
I'm familiar with Tomcat; is Tomcat OK to deploy in an instance group?
My webapp running on Tomcat will generate logs that are stored on the machine currently hosting Tomcat. How should I handle distributed application logs?
I don't want to lose information, and I would like a single view of all the logs of my webapp even though it is distributed. Is that possible?
Thanks
I have used Tomcat in GCP for over a year and it has worked without problems behind the load balancer. To solve the logs issue, you must use an agent to send the logs to Stackdriver: https://cloud.google.com/logging/docs/view/service/agent-logs
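A rough sketch of the agent setup on each Tomcat instance; the install script is the documented one, but the Tomcat log path and the fluentd source below are assumptions to adapt:

```
# Install the Stackdriver Logging agent
curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
sudo bash install-logging-agent.sh

# Have the agent tail Tomcat's log (adjust the path to your install)
sudo tee /etc/google-fluentd/config.d/tomcat.conf <<'EOF'
<source>
  @type tail
  path /opt/tomcat/logs/catalina.out
  pos_file /var/lib/google-fluentd/pos/tomcat.pos
  format none
  tag tomcat-catalina
</source>
EOF
sudo service google-fluentd restart
```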
I have a production Golang app running on OpenShift using this cartridge (https://github.com/smarterclayton/openshift-go-cart) with a MySQL database. The cartridge has had some updates which I would like to pull into my app.
Is it possible to redeploy the base cartridge into my gears without deleting the whole application?
If your repository contains .openshift/markers/hot_deploy, then when you perform a git push, OpenShift will not rebuild the application and will perform a hot deployment instead.
See the Hot Deploying Applications section of the user guide, as well as this blog post (which somehow contains more specific details about where the marker file goes).
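A quick sketch of creating the marker in your application repository:

```
mkdir -p .openshift/markers
touch .openshift/markers/hot_deploy
git add .openshift/markers/hot_deploy
git commit -m "Enable hot deployment"
git push
```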
I have installed Tomcat 6.0 and MySQL 5.5 on an Amazon Linux instance.
Now I want to deploy a WAR file to that Tomcat and a .sql file to the MySQL running on the Amazon instance. I am new to Amazon services. Please give details about the procedure.
Please help me with that. Thanks in advance.
The simple way is to use scp or rsync to upload the files and restart Tomcat.
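A minimal sketch of that approach; the host, key, service name, paths, and database details are placeholders to adapt:

```
# Copy the artifacts to the instance (placeholder host and key)
scp -i mykey.pem myapp.war schema.sql ec2-user@<instance-ip>:/tmp/

# Deploy the WAR and load the schema on the instance
ssh -i mykey.pem ec2-user@<instance-ip> <<'EOF'
sudo service tomcat6 stop
sudo cp /tmp/myapp.war /var/lib/tomcat6/webapps/     # webapps path varies by install
sudo service tomcat6 start
mysql -u root -p'<password>' mydb < /tmp/schema.sql  # placeholder credentials/database
EOF
```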
However, if you have many servers or WARs, or an even more complicated situation, consider other approaches:
use Jenkins to deploy
write your own deploy script with python-fabric
You should design your own deploy process to overcome the difficulties you meet.
In my case, every EC2 instance is a spot instance, created by scripts or autoscaling.
We need to keep every new spot instance up to date, using the latest software and JARs to run our web crawler.
Our design is very simple: just a script that downloads files from S3 and unzips them (a sketch follows these steps):
the EC2 spot instance completes booting
it runs the software-update script
it runs the software fetched by the updater
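A hedged sketch of such an update script (bucket, archive, and paths are illustrative), which each new instance could run from its user data at boot:

```
#!/bin/sh
# update.sh -- fetch the latest build from S3 and launch it; names are illustrative

# Download and unpack the newest release
mkdir -p /opt/app
aws s3 cp s3://my-bucket/releases/crawler-latest.zip /opt/app/crawler.zip
unzip -o /opt/app/crawler.zip -d /opt/app/current

# Start the freshly downloaded software
/opt/app/current/run.sh
```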
In your case, there are some key points you haven't figured out yet:
How many EC2 instances should be updated?
How does an EC2 instance know it needs to update?
(many other points)
What is the best way to deploy your WARs? It depends on your situation.