Can you use OpenShift to deploy non-web applications?

I cannot seem to find a direct and recent answer to this question. From the supported languages and frameworks, it appears that OpenShift is used only to deploy web applications, but I want to make sure. Can you only use OpenShift to deploy web applications?

You can find more information at:
https://www.openshift.com/
On that page it says:
Run multiple languages, frameworks, and databases on the same platform and take advantage of the docker eco-system.
That statement links to:
https://www.openshift.com/features/technologies.html
where it lists various language builders provided as well as images for database products.
If you can package something up in a container image, then generally you can run it. The caveat is that OpenShift by default doesn't allow you to run containers as root; it assigns an arbitrary uid for the container to run as. Many images on Docker Hub follow poor practices and expect to be run as root, so they will not usually work out of the box. On an OpenShift system where you have no admin rights, and especially on a multi-user system, you will not be able to run anything as root, nor would you be given that ability. So if you build your own images from scratch, just follow good practices for how the image is set up.
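As a rough illustration of those good practices, here is a minimal Dockerfile sketch following OpenShift's arbitrary-uid guidelines; the base image, package, and paths are only examples:

```dockerfile
# Sketch of an image that tolerates OpenShift's arbitrary, non-root uid.
FROM centos:7

# Install the application (nginx here is only an example)
RUN yum install -y epel-release && \
    yum install -y nginx && \
    yum clean all

# OpenShift runs the container under an arbitrary uid that belongs to the
# root *group* (gid 0), so make writable paths group-owned by gid 0 and
# group-writable, rather than relying on a fixed user owning them.
RUN chgrp -R 0 /var/log/nginx /var/lib/nginx && \
    chmod -R g=u /var/log/nginx /var/lib/nginx

# Switch to a non-root user and bind above port 1024; low ports need root.
# (nginx must also be configured to listen on 8080 instead of 80.)
USER 1001
EXPOSE 8080

CMD ["nginx", "-g", "daemon off;"]
```

The key idea is simply that nothing in the image assumes a particular username or uid at runtime.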

Related

How can I automate the setup of development tools for new hires?

I am looking for a way to have some kind of "image" (VM, Vagrant box, Docker container...?) with all the development tools needed to work on our software project, like a configured IDE (e.g. Eclipse or PyCharm) and build and deployment tools.
After a bit of searching I found surprisingly little about this topic, while there is plenty about development environments that mirror the production one. Almost every source I found assumes development tools are installed on the host, while deployment happens in a virtualized environment.
The first thing that comes to my mind is a virtual machine of some sort, maybe provisioned in an automated way (Packer + Ansible maybe). I have also seen some blog posts about running GUI applications in Docker containers via X.org.
Is there a better way? How did you solve the problem?
The ultimate goal is to let new hires be productive in hours instead of days.
UPDATE: After some research, I am currently evaluating:
Development in a Virtual Machine
Development with the support of Docker containers
Cloud IDEs
Have your IT department make an image of a development laptop, and use a Confluence page to document how to tweak the image to the needs of the individual developer. Then use Docker images to set up any servers they will need; these can run on the laptops. You can use Docker Swarm to spin up many containers if you need to.
I prefer to have dev tools installed on the host so everyone can do it their own way; I don't want to force anyone onto a specific tool.
If you want to go the other route and give your new hires a ready-to-use dev box, I would go with Vagrant in GUI mode plus provisioning scripts. For example, the JHipster project has a nice dev box with many tools pre-installed; after you install Vagrant, VirtualBox (or VMware), and Git on your host, you're ready in minutes.
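A sketch of that Vagrant approach follows; the box name and provisioning script are placeholders, not part of any real project:

```shell
# One-time setup for a new hire: get a Vagrantfile and bring the box up.
vagrant init ubuntu/trusty64
vagrant up          # downloads the box and runs provisioning

# Inside the Vagrantfile you would enable the GUI and hook in provisioning:
#   config.vm.provider "virtualbox" do |vb|
#     vb.gui    = true
#     vb.memory = 4096
#   end
#   config.vm.provision "shell", path: "install-dev-tools.sh"
```

In practice the team keeps the Vagrantfile and `install-dev-tools.sh` in version control, so a new hire only runs `vagrant up`.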

How best to deploy this multi-tier app?

We currently have an application that runs on one dedicated server. I'd like to move it to OpenShift. It has:
A public-facing web app written in PHP
A Java app for administrators running on WildFly
A MySQL database
A filesystem containing lots of images and documents that must be accessible to both the Java and PHP apps. A third party FTPs a data file to the server every day, and a Perl script loads it into the database and the filesystem.
A Perl script occasionally runs ffmpeg to generate videos, reading images from and writing videos to the filesystem.
Is OpenShift a good solution for this, or would it be better to use AWS directly (for instance because it has dedicated file system components)?
The shared file system will definitely be the biggest issue here, but you could get around it fairly easily by setting up your applications to use Amazon S3 or some other shared cloud file system.
As for the rest of the application, if I were setting this up I would:
Set up a scaled PHP application. Even if you set the scaling to just one gear, this allows you to put the MySQL database on its own gear, and even choose a different size for it, such as medium web gears (running PHP) and a large gear running MySQL. It also allows your WildFly gear to access the database, since the database gear will have an FQDN (fully qualified domain name) that any application on your account can reach. Keep in mind, however, that it will use a non-standard port instead of 3306.
Then you can set up your WildFly server at whatever size you want, but keep in mind that the MySQL connection variables will not be there; you will have to put them into your Java application manually.
As for the Perl script, depending on how intensive it is, you could run it on its own gear of whatever size with some extra storage, or you could co-locate it with either the PHP or Java application as a cron job. You can have it store the files on Amazon S3 and pull them down/upload them as it runs the ffmpeg operations on them. Since OpenShift is also hosted on Amazon (in the US-EAST region), these operations should be pretty fast, as long as you also put your S3 bucket in US-EAST.
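The gear layout above could be created with the `rhc` client along these lines; the cartridge names and versions are illustrative and may differ on your account (WildFly in particular was a community cartridge on OpenShift v2):

```shell
# Scaled PHP app: -s lets MySQL land on its own gear when the cartridge
# is added, instead of sharing the web gear
rhc app create mainsite php-5.4 -s
rhc cartridge add mysql-5.5 -a mainsite

# Separate app for the WildFly admin side; wire it to the database by
# hand, since the MySQL env vars only exist inside the mainsite gears
rhc app create admin jboss-wildfly-8
rhc env set DB_HOST=<mysql-fqdn> DB_PORT=<port> -a admin

# Run the Perl loader periodically with the cron cartridge
rhc cartridge add cron-1.4 -a mainsite
```

The `<mysql-fqdn>` and `<port>` values come from the environment variables shown when the MySQL cartridge is added.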
Those are my thoughts; I hope they help. Feel free to ask questions if you have them. You can also visit http://help.openshift.com and, under "Contact Us", click "Submit a request"; reference this StackOverflow question so I know what you are talking about, and we can discuss solutions for any questions you might have.

Why Docker (and container-based technologies) are useful

Lately I have been looking into docker and the usefulness it can provide to a SaaS company. I have spent some time learning about how to containerize apps and learning briefly about what docker and containers are. I have some problems understanding the usefulness of this technology. I have watched some videos from dockercon and it seems like everyone is talking about how docker makes deployment easy and how you deploy in your dev environment is guaranteed to run in production. However, I have some questions:
Deploying containers directly to production from the dev environment means that developers should develop inside the same containers that will run in production. This is practically impossible, because developers like to develop on their fancy Macs with IDEs. Developers will revolt if they are told to SSH into containers and develop their code there. So how does that work in companies that currently use Docker?
If we assume the development workflow will not change, developers will develop locally and push their code to a repository. So where does "containerizing the app" fit within that workflow?
Also, if developers do not develop within containers, then the "what you develop is what you deploy and is guaranteed to work" assumption is violated. If that is the case, then the only benefit I can see Docker offering is isolation, which is the same thing virtualization offers, of course with lower overhead. So my question here would be: is the low overhead the only advantage Docker has over virtualization, or are there other things I don't see?
You can write the code outside of a container and transfer it into the container in many different ways. Some examples include:
Code locally and include the source when you run docker build, using an ADD or COPY statement in the Dockerfile
Code locally and push your code to a source code repository like GitHub and then have the build process pull the code into the container as part of docker build
Code locally and mount your local source code directory as a shared volume with the container.
The first two allow you to have exactly the same build process in production and development. The last example is not suitable for production, but can be quickly converted to it with an ADD or COPY statement (i.e. the first example).
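A sketch of the first and third options side by side; the base image, file names, and paths are illustrative:

```dockerfile
# Option 1: bake the source into the image at build time, so the exact
# bits you tested are what ships to production
FROM python:2.7
COPY . /app
WORKDIR /app
CMD ["python", "app.py"]
```

For option 3, you keep the same image but mount the working tree over the baked-in copy during development, e.g. `docker run -v "$(pwd)":/app myapp`, so edits on the host are visible inside the container immediately without rebuilding.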
In the Docker workflow, developers produce both source code (stored and shared in a repository like Git or Mercurial) and a ready-to-run container image, which gets stored and shared via a registry like https://registry.hub.docker.com or a local registry.
The containerized running code that you develop and test is exactly what can go to production. That's one advantage. In addition, you get isolation, interesting container-to-container networking, and integration with a growing class of devops tools for creating, maintaining and deploying containers.

Provide a shell in an HTML page on that web server

I have a Linux-based web server running Fedora, on which I have created and hosted a couple of HTML pages.
I have CLI tools on this server that provide information, and they must be accessible to all users from their browsers.
I haven't started yet; these are my requirements:
How do I expose the server's shell (Bash) via an HTML page? What software makes this possible?
Can I provide an auto-login enabled shell?
I just want to avoid multiple users having to open SSH sessions to the server. I can also provide instructions and terminal access hand in hand using HTML pages.
ShellInABox provides a colored terminal interface in the browser via Ajax (see its homepage). Since it runs as a separate web server, you may need to link your users to a different port on your site. There are surely more alternatives (other projects like it) out there.
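For example, starting the daemon might look like this; the flags below are from the shellinaboxd man page, but check the version on your system, and treat the account details as placeholders:

```shell
# Serve a login prompt over the browser on port 4200 (ShellInABox default)
shellinaboxd -b --port=4200

# Auto-login variant: drop every visitor straight into a guest account
# (service format is url-path:user:group:cwd:command).
# Only do this on a sandboxed machine!
shellinaboxd -b --port=4200 -t \
    -s '/:guest:guest:/home/guest:/bin/bash'
```

The `-t` flag disables SSL; in production you would instead give it a certificate or put it behind a TLS-terminating proxy.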
The following advice applies regardless of whether you use ShellInABox or continue to provide SSH access.
If you don't fully know and trust all your users, then assume at least one of them is a whizzkid cracker determined to crash or break into your system. The first thing he may try is logging in and running a forkbomb.
You should therefore do your best to sandbox users, so they cannot harm the system or each other. Restrict their access privileges (file/folder/network access) to only what is needed to achieve the tasks you allow. SELinux and AppArmor have facilities for this. You can find some more sandboxing techniques here and here. Docker is a new system that may be worth investigating.
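A simple first line of defence against the forkbomb scenario is a per-user process limit; here is a sketch (the account name and numbers are arbitrary):

```shell
# Cap the number of processes the current shell and its children may
# spawn. Put this in the guest account's shell profile, or enforce it
# system-wide in /etc/security/limits.conf with a line like:
#   guest  hard  nproc  100
ulimit -u 100

# A forkbomb started under this limit runs out of process slots instead
# of taking down the whole machine.
echo "process limit: $(ulimit -u)"
```

This is only one resource; `ulimit` and limits.conf can similarly cap memory, file sizes, and open files for the sandboxed account.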
It would be very wise to host your login server on a separate or virtual machine, distinct from your main web server, so that any user who does manage to break out of the sandbox will not be on the same machine as your other services. (But note he will still be inside your LAN!) User-mode Linux is a less secure alternative, and chroot is worse still, but better than nothing!
If users should be able to save files, then I would recommend giving each user a separate account, especially if their files should persist between sessions. As a workaround for auto-login you could, of course, provide a guest account with password guest555 for all users, but then a malicious user could bother others by deleting files or putting nasty stuff in the shell startup scripts. (I certainly don't recommend guest/guest, because crackers regularly scan the net for SSH servers hosting that account!)

What can I do as an OpenShift user?

I'm currently using a virtual server and want to try OpenShift. But I don't really understand yet how it works. Do I get root access to my "webspace"? Can I choose the server OS (e.g. Debian 7)? Can I install/uninstall software (nginx, PHP 5.5, the PHP CodeSniffer PEAR package, etc.)? Can I use one gear for multiple websites?
It's not clear from your line of questioning which part of OpenShift you are not understanding, so I will try to lay out the architecture and provide documentation to get you started.
OpenShift is a Red Hat-developed product (so it's going to be easiest to get started on RHEL or Fedora), but it can also run on other Linux systems (you may need to piece the components together yourself, but it can be done).
This is covered under "building your own live CD" on the community site; however, it has not been done for you by the OpenShift community.
There are two starting places for OpenShift, and they depend on what you are trying to use OpenShift for: as a PaaS hosting solution, or as a hosted PaaS solution.
For a PaaS hosting solution, a good starting point is the Origin page, as it provides VMs and install instructions for OpenShift's community product.
Because OpenShift is a PaaS solution, these components (see the architecture links below), when cobbled together, provide users with an application space, to which they do not have root access.
https://www.openshift.com/products/architecture
https://www.openshift.com/wiki/architecture-overview
As the administrator of the box you would have root access, but your end users would not.
For a hosted PaaS solution, a good starting place is OpenShift Online, which is Red Hat's hosted offering based on the OpenShift Origin project.
Get started by creating an account.
With an online account you can start using the hosted solution very quickly by trying some of the quickstarts. Be sure to read the full set of OpenShift documentation, and install the client tools.
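Getting started with the hosted service then looks roughly like the following; the app name and cartridge are only examples:

```shell
# Install and configure the client tools (distributed as a Ruby gem)
gem install rhc
rhc setup                  # log in, upload your SSH key, pick a namespace

# Create and deploy a first application
rhc app create demo php-5.4
cd demo
# ...edit the code...
git push                   # each app is a git repo; pushing triggers a deploy
```

The `rhc app create` step also clones the application's git repository locally, which is why a plain `git push` is all a deploy takes.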