Where does a web service store its data in OpenShift 3? And how can I browse the file system?
In OpenShift 3 there is no persistent storage provided by default. You will need to claim a persistent volume and then mount it at whatever directory you desire in the container for your application.
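For example, a single oc set volume command can create the claim and mount it in one step (a minimal sketch, assuming a deployment configuration named myapp and a 1Gi claim):

    # Claim a 1Gi persistent volume and mount it at /storage in the app's pods.
    oc set volume dc/myapp --add --name=storage \
        --type=pvc --claim-size=1Gi --claim-name=myapp-data \
        --mount-path=/storage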
To view the contents of the directory, use oc rsh or the terminal window for a pod in the web console to get shell access, then change into the directory and list its contents.
To transfer files into the persistent volume, you can use the oc rsync command.
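Browsing and transferring together might look like this (the pod name and paths are hypothetical):

    $ oc rsh myapp-1-abcde                        # shell access to the running pod
    sh-4.2$ ls -la /storage                       # inspect the persistent volume
    sh-4.2$ exit
    $ oc rsync ./reports myapp-1-abcde:/storage   # copy a local directory into the volume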
You can find a tutorial on transferring files in and out of containers at https://learn.openshift.com
Related
I have an OpenShift 3.11 project using PHP and I would like to execute a script to configure the pod after it is deployed. The first thing the script will do is to create a symbolic link from the pod to the PV named /storage so that I can display the reports in the reports directory on the PV in a browser. I would also like to copy an image from my image directory - the image will indicate whether the application is running on the development system, the test system or production. The name of the appropriate image will be held in a config map which is tailored to each system.
I did consider OpenShift's pod-based lifecycle hooks, but they appear to run in a separate pod from the application that is deployed, so the symbolic link would not be created in the application pod. The OpenShift documentation mentions you can change your image's ENTRYPOINT. The example shows running a Java application; however, I still require the PHP and Apache image to be deployed in addition to creating the symbolic link and copying the image.
Is it possible to perform post-deployment configuration of a pod in OpenShift, and if so, how is it done?
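As an illustration of the ENTRYPOINT approach the question mentions, a wrapper script could perform the setup and then hand off to the image's normal start command. This is a hypothetical sketch: the paths, the ENV_IMAGE variable (assumed to be populated from the config map), and the S2I run script location are all assumptions, not a confirmed recipe.

    #!/bin/bash
    # Link the reports directory on the PV (mounted at /storage) into the web root.
    ln -sfn /storage/reports /opt/app-root/src/reports
    # Copy the environment indicator image named by the config map value.
    cp "/opt/app-root/src/images/${ENV_IMAGE}" /opt/app-root/src/current-env.png
    # Hand off to the original PHP and Apache start script so the app still runs.
    exec /usr/libexec/s2i/run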
Background:
I've deployed a Spring Boot app to the OpenShift platform and would like to know how to handle persistent storage in OpenShift 3.
I've subscribed to the free plan and have access to the console.
I can use the oc command, but access seems limited under my user for commands like oc get pv and others.
Question
How can I get finer control over my PVC (persistent volume claim) on OpenShift 3?
Ideally, I want a shell so I can list the files on that volume.
Thanks in advance for your help!
Solution
1. Add storage to your pod.
2. Use oc rsh <my-pod> to get shell access to the pod.
3. cd /path-to-your-storage/
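A session might look like this (the pod name and mount path are hypothetical):

    $ oc get pvc                  # list the claims in your own project
    $ oc rsh my-pod-1-abcde       # open a remote shell in the pod
    sh-4.2$ ls -la /path-to-your-storage/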
The oc get pv command can only be run by a cluster admin because it shows all the declared persistent volumes available in the cluster as a whole.
All you need to know is that in OpenShift Online Starter, you have access to claim one persistent volume. The access mode of that persistent volume is ReadWriteOnce, or RWO.
A persistent volume is not yours until you make a claim and so have a persistent volume claim (pvc) in your project. In order to be able to see what is in the persistent volume, it has to be mounted against a pod, or in other words, in use by an application. You can then get inside of the pod and use normal UNIX commands to look at what is inside the persistent volume.
For more details on persistent volumes, I suggest reading the chapter about storage in the free eBook at:
https://www.openshift.com/deploying-to-openshift/
Is it possible to mount a WebDAV file system in OpenShift? I have SSH access but no root access, so I can't install davfs or actually mount anything. I just need access to a WebDAV folder from OpenShift; how can I do this?
OpenShift Online does not currently support WebDAV; you will need to use SFTP or SCP to upload/download files from your gear(s).
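For example, an SCP transfer into a gear's persistent data directory might look like this (a sketch; the gear UUID, app name, and namespace are placeholders you can read off rhc apps):

    # Copy a local directory into the gear's app-root/data directory.
    scp -r ./reports/ 5a1b2c3d4e@myapp-mynamespace.rhcloud.com:app-root/data/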
I have created a JSON template to create the Amazon AWS LAMP stack with RDS (free tier) and successfully created the stack. But when I tried to move files to the /var/www/html folder, it seems the ec2-user has no permission there. I know how to change permissions with the help of SSH, but my intention is to create a template that sets up the stack (hosting environment) without using any SSH client.
I also know how to add a file or copy a zipped source to /var/www/html with CloudFormation JSON templating. What I need to do is just create the environment, then later upload the files using an FTP client and the database using Workbench or something similar. Please help me attain this goal, which I will share publicly for AWS beginners who are not familiar with setting things up over SSH.
The JSON template is a bit lengthy and so here is the link to the code http://pasted.co/803836f5
Use the AWS::CloudFormation::Init metadata instead of UserData.
That way you can run commands on the server such as pulling down files from S3 and then running gzip to expand them.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
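For instance, the commands section of the AWS::CloudFormation::Init metadata could run something like the following on first boot (a sketch; the bucket and archive names are hypothetical, and it assumes the AWS CLI is on the AMI and the instance role can read the bucket):

    # Fetch the site archive from S3 and unpack it into the web root.
    aws s3 cp s3://my-bucket/site.tar.gz /tmp/site.tar.gz
    tar -xzf /tmp/site.tar.gz -C /var/www/html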
Tar files and distribution package files like .deb or .rpm include the file permissions for directories, so you could build a tar or custom .rpm file that records ec2-user as the owner.
Alternatively, whatever scripting element installs Apache could also run a set of updates to set the owner of /var/www/html to ec2-user.
Of course, you might run into trouble with the user/group that Apache runs under: able to upload with FTP but not able to read with Apache. It needs some thought, and possibly adding ec2-user to the apache group, FTPing as the apache user, or some other combination that gives the httpd server read access and the SSH user write access.
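One possible scheme (an assumption-laden sketch, not the only workable combination): give ec2-user ownership so FTP/SFTP uploads work, and give the apache group read access for serving.

    # Let ec2-user write the web root while the apache group can read it.
    usermod -a -G apache ec2-user
    chown -R ec2-user:apache /var/www/html
    chmod -R u=rwX,g=rX,o= /var/www/html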
I was looking at a README file that raised some questions about database persistence on OpenShift.
Note: Every time you push, everything in your remote repo dir gets recreated
please store long term items (like an sqlite database) in the OpenShift
data directory, which will persist between pushes of your repo.
The OpenShift data directory is accessible relative to the remote repo
directory (../data) or via an environment variable OPENSHIFT_DATA_DIR.
https://github.com/ryanj/nodejs-custom-version-openshift/blob/master/README#L24
However, I could find no confirmation of this on the OpenShift website. Is this README out of date? I'd rather not test this, so it would be much appreciated if anyone had firsthand knowledge they'd be willing to share.
Yep, that README file is up to date regarding SQLite. All gears have SQLite installed on them. Data should be stored in the persistent storage directory on your gear. This does not apply to MySQL/MongoDB/PostgreSQL, as those databases are add-on cartridges pre-configured to use persistent storage, whereas SQLite is simply installed and available for use.
See the first notice found in the OpenShift Origin documentation here: https://docs.openshift.org/origin-m4/oo_cartridge_guide.html
Specifically:
Cartridges and Persistent Storage: Every time you push, everything in
your remote repo directory is recreated. Store long term items (like
an sqlite database) in the OpenShift data directory, which will
persist between pushes of your repo. The OpenShift data directory can
be found via the environment variable $OPENSHIFT_DATA_DIR.
The official OpenShift Django QuickStart shows the design pattern you should follow for adding SQLite to your application via the deploy action hook. See: https://github.com/openshift/django-example/blob/master/.openshift/action_hooks/deploy
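The shape of that pattern is roughly the following (a simplified sketch of a deploy action hook, not the quickstart's exact script; the database filename is an example):

    #!/bin/bash
    # .openshift/action_hooks/deploy
    # Keep the SQLite file in $OPENSHIFT_DATA_DIR so it survives git pushes,
    # then link it into the freshly recreated repo directory on every deploy.
    if [ ! -f "$OPENSHIFT_DATA_DIR/db.sqlite3" ]; then
        touch "$OPENSHIFT_DATA_DIR/db.sqlite3"
    fi
    ln -sf "$OPENSHIFT_DATA_DIR/db.sqlite3" "$OPENSHIFT_REPO_DIR/db.sqlite3"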