OpenShift database and data directory

I was looking at a README file that raised some questions about database persistence on OpenShift.
Note: Every time you push, everything in your remote repo dir gets recreated.
Please store long term items (like an sqlite database) in the OpenShift
data directory, which will persist between pushes of your repo.
The OpenShift data directory is accessible relative to the remote repo
directory (../data) or via an environment variable OPENSHIFT_DATA_DIR.
https://github.com/ryanj/nodejs-custom-version-openshift/blob/master/README#L24
However, I could find no confirmation of this on the OpenShift website. Is this README out of date? I'd rather not test this, so it would be much appreciated if anyone had any firsthand knowledge they'd be willing to share.

Yep, that README file is up to date regarding SQLite. All gears have SQLite installed on them. Data should be stored in the persistent storage directory on your gear. This does not apply to MySQL/MongoDB/PostgreSQL, as those databases are add-on cartridges pre-configured to use persistent storage, whereas SQLite is simply installed and available for use.
See the first notice found in the OpenShift Origin documentation here: https://docs.openshift.org/origin-m4/oo_cartridge_guide.html
Specifically:
Cartridges and Persistent Storage: Every time you push, everything in
your remote repo directory is recreated. Store long term items (like
an sqlite database) in the OpenShift data directory, which will
persist between pushes of your repo. The OpenShift data directory can
be found via the environment variable $OPENSHIFT_DATA_DIR.
The official OpenShift Django QuickStart shows the design pattern you should follow for adding SQLite to your application via the deploy action hook. See: https://github.com/openshift/django-example/blob/master/.openshift/action_hooks/deploy
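As a rough illustration of that pattern, here is a minimal deploy action hook sketch; the database file name db.sqlite3 and the wsgi/ path are assumptions for illustration, not details taken from the quickstart:

    #!/bin/bash
    # .openshift/action_hooks/deploy -- minimal sketch, not the actual quickstart hook.
    # Keep the SQLite file in $OPENSHIFT_DATA_DIR so it survives git pushes.
    set -e

    DB_FILE="$OPENSHIFT_DATA_DIR/db.sqlite3"

    # On the very first deploy, seed the persistent copy from the file shipped in the repo.
    if [ ! -f "$DB_FILE" ] && [ -f "$OPENSHIFT_REPO_DIR/wsgi/db.sqlite3" ]; then
        cp "$OPENSHIFT_REPO_DIR/wsgi/db.sqlite3" "$DB_FILE"
    fi

    # Point the freshly recreated repo directory at the persistent copy.
    ln -sf "$DB_FILE" "$OPENSHIFT_REPO_DIR/wsgi/db.sqlite3"

The application then opens the database through that symlink (or directly via $OPENSHIFT_DATA_DIR), so a git push can recreate the repo directory without touching the data.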

Related

OpenShift post deployment initialisation script

I have an OpenShift 3.11 project using PHP and I would like to execute a script to configure the pod after it is deployed. The first thing the script will do is to create a symbolic link from the pod to the PV named /storage so that I can display the reports in the reports directory on the PV in a browser. I would also like to copy an image from my image directory - the image will indicate whether the application is running on the development system, the test system or production. The name of the appropriate image will be held in a config map which is tailored to each system.
I did consider OpenShift's Pod-Based Lifecycle Hooks, but they appear to run in a separate pod from the application that is deployed, so the symbolic link would not be created in the application pod. The OpenShift documentation mentions that you can change your image's ENTRYPOINT, but the example shows running a Java application, whereas I still need the PHP and Apache image to be deployed in addition to creating the symbolic link and copying the image.
Is it possible to perform post deployment configuration of a pod in OpenShift and if so how is it done?
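One way to read the ENTRYPOINT suggestion is to wrap the image's normal start command in a small script that does the per-pod setup first. A rough sketch, assuming the PV is mounted at /storage, the web root is /var/www/html, the config map is exposed as an environment variable named ENVIRONMENT_IMAGE, and the image's original start script is /usr/libexec/s2i/run (all of these names are assumptions, not taken from the project in the question):

    #!/bin/bash
    # wrapper-entrypoint.sh -- rough sketch only; all paths and names are assumed.

    # Link the reports directory on the PV into the web root so Apache can serve it.
    ln -sfn /storage/reports /var/www/html/reports

    # Copy the environment-specific banner image named in the config map.
    cp "/opt/app-root/src/images/${ENVIRONMENT_IMAGE}" /var/www/html/environment.png

    # Hand off to the image's normal start command so PHP and Apache still run.
    exec /usr/libexec/s2i/run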

OpenShift free account: setting up a local environment

I am using the free plan from OpenShift PaaS for my applications. I want to set up an OpenShift environment on my local machine so that I don't run into issues while setting up my app in the live environment. I will configure the app locally and then push the code to the production server, i.e. the OpenShift server.
Is this possible with the free plan? I know you can set up OpenShift Origin on your local machine using Vagrant/Docker, but I am not sure whether I will be able to push my changes to the server using it.
You can tie your Vagrant/Docker instance on your local machine to GitHub and pull from there. However, since your local machine's IP is unlikely to be exposed on the internet, GitHub cannot see you, so the webhook from GitHub will not work to trigger automatic rebuilds.
Still, that's probably a minor thing if you are just experimenting. With a few clicks, you can go to the build and trigger a rebuild manually.

Amazon AWS CloudFormation JSON template to assign the LAMP www/html folder permissions to ec2-user

I have created a JSON template to build the Amazon AWS LAMP stack with RDS (free tier) and successfully created the stack. But when I tried to move files into the /var/www/html folder, the ec2-user seemed to have no permission to do so. I know how to change the permissions with the help of SSH, but my intention is to create a template that sets up the stack (hosting environment) without using any SSH client.
I also know how to add a file or copy a zipped source to /var/www/html with the CloudFormation JSON templating. What I need to do is just create the environment and later upload the files using an FTP client and the database using Workbench or something similar. Please help me attain my goal, which I will share publicly for AWS beginners who are not familiar with setting things up over SSH.
The JSON template is a bit lengthy and so here is the link to the code http://pasted.co/803836f5
Use the CloudFormation init metadata (AWS::CloudFormation::Init) instead of UserData.
That way you can run commands on the server, such as pulling files down from S3 and then running gzip to expand them.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
Tar files and distribution packages like .deb or .rpm include the file permissions and ownership for directories, so you could build a tar or custom .rpm file that sets ec2-user as the owner.
Alternatively, whatever script installs Apache could also run a few commands to set the owner of /var/www/html to ec2-user.
Of course, you might run into trouble with the user/group that Apache runs under, and end up able to upload with FTP but unable to read with Apache. It needs some thought, and possibly adding ec2-user to the apache group, or FTPing as the apache user, or some other combination that gives the httpd server read access and the SSH user write access.
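A rough sketch of the kind of commands cfn-init could run, written here as a plain shell script; in the template they would go in the commands section of the AWS::CloudFormation::Init metadata, and the bucket name and archive name below are made up for illustration:

    #!/bin/bash
    # Sketch only: bucket, archive name and paths are invented for illustration.

    # Pull the site archive from S3 and unpack it into the web root.
    aws s3 cp s3://my-example-bucket/site.tar.gz /tmp/site.tar.gz
    tar -xzf /tmp/site.tar.gz -C /var/www/html

    # Let ec2-user own the files for later FTP/SFTP uploads while Apache keeps read access.
    chown -R ec2-user:apache /var/www/html
    chmod -R 0755 /var/www/html
    usermod -a -G apache ec2-user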

Rebuild OpenShift cartridge without deleting app

I have a production Golang app running on OpenShift using this cartridge (https://github.com/smarterclayton/openshift-go-cart) with a MySQL database. The cartridge has had some updates which I would like to pull into my app.
Is it possible to redeploy the base cartridge into my gears without deleting the whole application?
If your repository contains .openshift/markers/hot_deploy, then when you perform a git push OpenShift will not rebuild the application and will perform a hot deployment instead.
See the Hot Deploying Applications section of the user guide, as well as this blog post (which somehow contains more specific details about where the marker file goes).
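For reference, the marker is just an empty file committed to the repository, something like:

    # Create the hot_deploy marker so subsequent pushes skip the full rebuild.
    touch .openshift/markers/hot_deploy
    git add .openshift/markers/hot_deploy
    git commit -m "Enable hot deployment"
    git push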

Using version control with SSIS packages (saving 'sensitive' data)

We are a team working on a bunch of SSIS packages, which we share using version control (SVN). We have three ways of saving sensitive data in these packages:
not storing them at all
storing them with a user key
storing them with a password
However, each of these options is inconvenient when testing packages saved and committed by another developer. For each such package, one has to update the credentials, no matter how the sensitive data was persisted.
Is there a better way to collaborate on SSIS packages?
Since my workplace uses file deployment, I use "Don't save sensitive." To make development easier, we also store config files with the packages in our version control system, and the connection strings for the development environment are stored in those config files. The config files live in a commonly named folder, so if I retrieve them into my common config file area, I can open any of our project packages and they will work for me in development. When the packages are deployed, they are secured by the deployment team on a machine where developers do not have access, and the config file values for connection strings are changed to match the production environment.
We do something similar using database deployment. Each environment has a configuration database, and every package references a single XML config file in a common file path on every server/workstation, e.g., "C:\SSISConfig". This XML config file has one entry that points to the appropriate config database for that environment. All of the rest of the SSIS configurations are stored in that config database. The config database in production is only accessible by the admin group; the developers do not have access. When new packages and configurations are deployed to prod, the connection strings are updated by the admin group. The packages are all set to "Don't save sensitive".