PHP, Apache and MySQL in the same Docker container (same Dockerfile) - mysql

I need an image with all three services in the same container. I know it is not good practice; I know Docker addresses scalability, redundancy and availability by splitting services into separate containers, and for all my other projects I do separate them, but for this specific project I need all three services running in the same container.
So far I can run Apache or MySQL, but not both at the same time: I have hit issues with the entrypoints and with MySQL permissions, and I still haven't managed to get everything up and running together in one container.
Has anyone faced this issue? Maybe some documentation I have missed?
Thank you

I've solved this before for a purely-local dev testing environment (because as you stated, this isn't normally a good practice).
What I did was start my Docker image from Alpine Linux and then install PHP, MySQL, and Nginx on top of it. You'll also need to COPY your source files into the container and set the appropriate permissions with chmod or chown.
Alternatively, you can use a volume as well, but depending on your OS you might run into permissions issues unless you create the same user/group in your container that owns the files on your local system.
If you'd like some inspiration, you can view the Dockerfile for the container I mentioned above.
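If it helps, a rough sketch of that kind of Dockerfile looks something like the following (the package names, paths and the start.sh wrapper are illustrative assumptions, not my exact file):

    # Single-container PHP + MariaDB + Nginx image on Alpine (illustrative sketch)
    FROM alpine:3.12
    # exact package names depend on the Alpine/PHP versions you target
    RUN apk add --no-cache nginx mariadb mariadb-client php7-fpm php7-mysqli
    # bake the application source into the image and fix ownership
    COPY ./src /var/www/html
    RUN chown -R nobody:nobody /var/www/html
    # hypothetical wrapper that initialises MariaDB, then starts
    # mariadb, php-fpm and nginx and keeps the container in the foreground
    COPY start.sh /start.sh
    RUN chmod +x /start.sh
    CMD ["/start.sh"]

The important part is the CMD: a container only keeps running while its foreground process does, so you need a wrapper script or a supervisor (e.g. supervisord) that launches and babysits all three services.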

Related

Can you use openshift to deploy non-web applications?

I cannot seem to find a direct and recent answer to this question. It appears to me that OpenShift is only used to deploy web applications (judging by the languages supported, etc.), but I want to make sure. Can you only use OpenShift to deploy web applications?
You can find more information at:
https://www.openshift.com/
On that page it says:
Run multiple languages, frameworks, and databases on the same platform and take advantage of the docker eco-system.
That statement links to:
https://www.openshift.com/features/technologies.html
where it lists various language builders provided as well as images for database products.
If you can package something up in a container image, then generally you can run it. The caveat is that OpenShift by default doesn't allow you to run containers as root and will assign a uid for them to run as. Many images on Docker Hub follow poor practices and expect to be run as root; these will not usually work out of the box. On an OpenShift system where you have no admin rights, and especially on a multi-user system, you will not be able to run anything as root, nor would you be given that ability, so if you are building your own images from scratch you just need to follow good practices in how the image is set up.
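As a rough illustration of what "good practices" means here, an image that tolerates the arbitrary uid OpenShift assigns usually makes its writable paths group-0 owned and group-writable, along these lines (the directory layout and run.sh script are just examples, not a prescribed structure):

    FROM alpine:3.12
    # hypothetical app layout; the point is the permissions, not the paths
    COPY run.sh /app/run.sh
    # group 0 owns everything the app must write to, and the group gets the
    # same permissions as the owner, so an arbitrary uid can still write
    RUN chmod +x /app/run.sh && \
        mkdir -p /app/data && \
        chgrp -R 0 /app && \
        chmod -R g=u /app
    # declare a non-root user; OpenShift will substitute its own uid anyway
    USER 1001
    CMD ["/app/run.sh"]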

How to keep a data store portable in Docker?

I am in the process of changing my development environment to Docker and I'm pretty happy so far, but I have one basic question. First let me describe the setup I've landed on.
I'm using the example of an environment for web development.
I'm organizing every service in its own container, so
a PHP container that talks to the mysql container and has a data container (called app) for the source.
nginx links to the PHP container and serves the files from the data container (app).
app is basically the same container as PHP (to save space) and mounts my project folder into the container. app then serves the project folder to the other containers.
then there is a mysql container that has its own data container called data
a phpmyadmin container that talks to the mysql container
and finally there is data, the data container for the DB.
I'm not sure the benefits of this split are clear to everyone (you could put everything into one container, after all), so here they are:
Mounting the project folder from my host machine into the Docker container lets me use my favorite editor and gives me continuous development.
Decoupling the database engine from its store gives you the freedom to change the engine but keep the data. (And of course you don't have to install any programming tools apart from an editor and Docker.)
My goal is to have the whole setup highly portable, so having the latest version of my project code on the host system and not living inside a container is a huge plus. I organize the setup described above in a docker-compose.yml file inside my project folder, so I can just copy that whole project folder to a different machine, run docker-compose up, and be up and running.
I actually have it in my Dropbox and can switch machines just like that. Pretty sweet.
But there is one drawback. The DB store is not portable as it lies somewhere in the Virtualbox file system. I tried mounting the data store into the host OS but that doesn't really work. The files are there but I get various errors when I try to read or write to it.
I guess my question is if there is a best practice to keep the database store in sync (or highly portable) between different dev machines.
I'd nix the data containers and switch over to named volumes. Data containers haven't been needed for quite a while, despite some outdated documentation indicating otherwise.
Named volumes let you select from a variety of volume drivers that make it possible to mount the data from outside sources (including NFS, gluster, and flocker). It also removes the requirement to pick a container that won't have a significant disk overhead, allows you to mount the folders in any location in each container, and separates container management from data management (so a docker rm -v $(docker ps -aq) doesn't nuke your data).
The named volume is as easy to create as giving the volume a name on the docker run, e.g. docker run -v app-data:/app myapp. And then you can list them with docker volume ls.
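Since your setup already lives in a docker-compose.yml, a minimal sketch of the named-volume version might look like this (service names, images and paths are placeholders for your own setup):

    version: '2'
    services:
      mysql:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: example   # placeholder credential
        volumes:
          - db-data:/var/lib/mysql       # named volume instead of a data container
      app:
        image: php:7-fpm
        volumes:
          - ./src:/var/www/html          # bind mount of the project folder for live editing
    volumes:
      db-data: {}                        # managed by Docker; survives docker rm

The named volume itself still lives inside the local Docker engine, so to move the data between machines you would combine this with a volume driver or a plain dump-and-restore of the database.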

Is it possible to use OpenShift without using rhc?

I am trying to get an application running on OpenShift, but after trying to create an SSH key on Ubuntu using ssh-keygen I ran into permission problems. I see no need for the rhc client if all it does is automate this process while bloating my computer (a laptop) with a Ruby installation.
It would be better to have an alternative for Ubuntu (Linux) users. Is that possible, or do I have to go the rhc way?
You can get a long way without the rhc command-line tool. Obviously you can create your SSH key yourself and add/manage it via the OpenShift website. You can also create your application there and add cartridges. When it comes to starting the app, you can usually do that by just pushing your git repository. Last but not least, you can ssh onto your OpenShift gear and do a lot from there, for example view the log files.
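For example, the whole round trip without rhc looks roughly like this (the git remote name and the gear's SSH address are placeholders you would take from the OpenShift website):

    # create a key pair and paste the public key into the OpenShift web console
    ssh-keygen -t rsa -f ~/.ssh/openshift_rsa
    # deploy by pushing to the application's git URL shown in the console
    git push openshift master
    # log in to the gear directly, e.g. to look at log files
    ssh 1234abcd@myapp-mynamespace.rhcloud.com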
That said, the rhc client is your one-stop client for all this (and more). So even if you might not need it right now, and some tasks are in fact easier without it, I would still recommend installing it. A lot of information and tutorials use rhc, and without enough experience you will not know how to achieve a given task in a different way.

Why docker (Container-based technologies) are useful

Lately I have been looking into Docker and the usefulness it can provide to a SaaS company. I have spent some time learning how to containerize apps and learning briefly what Docker and containers are. I have some problems understanding the usefulness of this technology. I have watched some videos from DockerCon and it seems like everyone is talking about how Docker makes deployment easy and how what you deploy in your dev environment is guaranteed to run in production. However, I have some questions:
Deploying containers directly to production from the dev environment means that developers should develop inside containers, the same containers that will run in production. This is practically not possible because developers like to develop on their fancy Macs with IDEs; they will revolt if told to ssh into containers and write their code inside. So how does that work in companies that currently use Docker?
If we assume that the development workflow will not change, developers will develop locally and push their code into a repo or something. So where does "containerizing the app" fit within that workflow?
Also, if developers do not develop within containers, then the "what you develop is what you deploy and is guaranteed to work" assumption is violated. If that is the case, I can only see that the sole benefit Docker offers is isolation, which is the same thing virtualization offers, only with lower overhead. So my question here would be: is the low overhead the only advantage Docker has over virtualization, or are there other things I don't see?
You can write the code outside of a container and transfer it into the container in many different ways. Some examples include:
Code locally and include the source when you docker build by using an ADD or COPY statement as part of the Dockerfile
Code locally and push your code to a source code repository like GitHub and then have the build process pull the code into the container as part of docker build
Code locally and mount your local source code directory as a shared volume with the container.
The first two allow you to have exactly the same build process in production and development. The last example would not be suitable for production, but could be quickly converted to production with an ADD statement (i.e. the first example)
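As a rough sketch of the first and third options (the base image and paths are just examples):

    # Option 1: bake the code into the image at build time
    FROM php:7.4-apache
    COPY ./src /var/www/html
    # Option 3: for development, skip the COPY and bind mount instead:
    #   docker run -v "$PWD/src":/var/www/html -p 8080:80 myapp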
In the Docker workflow, developers can create both source code (which gets stored and shared in a repository like git, mercurial, etc.) and a ready-to-run container image, which gets stored and shared via a registry like https://registry.hub.docker.com or a local registry.
The containerized running code that you develop and test is exactly what can go to production. That's one advantage. In addition you get isolation, interesting container-to-container networking, and tool integration with a growing class of devops tools for creating, maintaining and deploying containers.
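Concretely, that build-and-share loop looks something like this (myorg/myapp is a placeholder image name):

    # build the image from the Dockerfile in the current directory
    docker build -t myorg/myapp:1.0 .
    # share it via Docker Hub or a local/private registry
    docker push myorg/myapp:1.0
    # production (or a teammate) pulls and runs exactly the image that was tested
    docker pull myorg/myapp:1.0
    docker run -d myorg/myapp:1.0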

Are there any disadvantages to using Bitnami vs a native server stack?

I have read about the advantages of using a BitNami stack for LAMP development; now I am wondering if there are any drawbacks to using BitNami vs manually installing PHP, MySQL, and Apache separately. I use Mac OS, but I would be interested in how it applies to both Mac and Windows. Any thoughts?
I am one of the developers of BitNami. Whether to use a native stack or a BitNami stack depends on what you are trying to do. Installing the individual items separately should be exactly the same as running our installer, and the whole reason we put the installers together is so you would not have to :) In the case of Mac, one of the advantages of BitNami is that you can have more up-to-date components and multiple installations. A disadvantage / difference is that the applications and paths will be different from the typical ones, so if you are using third-party tutorials or documentation, they may not work right away.
There are 3 common drawbacks to Bitnami vs. a native LEMP/LAMP stack:
File paths. Because Bitnami is a container-style approach to web stacks, it installs everything in Ubuntu (or whatever Linux distro) under the /opt/bitnami directory. So many developers who are used to customizing their stack using nano or vim (via the Bash shell) quickly discover that they first have to figure out where all the different configuration files of their stack modules reside. Even after you figure those out, most of the online tutorials and documentation you might find will not apply to your stack.
Lockdown. This could be seen as either an advantage or a disadvantage, depending on your perspective (and situation). The entire point of using a containerized approach is to have more control of the stack environment, which can improve compatibility, predictability, security, and so on. However, as #team-life mentioned, this can quickly become frustrating when you are trying to use "standard" Bash shell commands or even the MySQL CLI, e.g. when trying to analyze or replicate your stack. To put it simply, logging into the shell on a server where Bitnami is installed is not in fact logging into the actual shell :)
Upgrades. At the end of the day, Bitnami (and other containers, like Docker) add another "layer" to your stack, and thus more bloat. For some users this "bloat" is justifiable, even preferable (for example, very large companies that require across-the-board uniformity). But what many developers discover with Bitnami and containers is that upgrading your stack can be rather janky. For all the alleged advantages in terms of environment "stability", it turns out that upgrading can actually introduce quite a bit of instability and unpredictability, often to the extent of canceling out the benefits. As #domi mentioned, all upgrades run through Bitnami (not Ubuntu mirrors, etc.), meaning you are bound to their versions and release schedules; you are also often required to completely reinstall the stack...
Ultimately, containers are a recent trend that has become very popular among so-called "enterprise" and "corporate" in-house teams, but they might not be the best fit for smaller agencies or independent developers to embrace.
That is why native LEMP stacks like SlickStack (my project) are gaining momentum.
This Reddit thread has a few other AWS-specific comments as well.
BitNami uses paths that are very different from the industry-standard ones, so if you log in to a server to do some task, it will take you a lot of time to understand their custom folder structure. And that's a big drawback. When you log in to a Unix server, you know where the files and paths are; maybe there are one or two standard layouts. BitNami uses a completely different one, and chaos ensues.
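To give a feel for it, here are a few typical examples (exact paths vary by stack and version, so treat these as illustrative rather than authoritative):

    sudo /opt/bitnami/ctlscript.sh status      # instead of: service apache2 status
    ls /opt/bitnami/apache2/conf/httpd.conf    # instead of: /etc/apache2/apache2.conf
    ls /opt/bitnami/mysql/my.cnf               # instead of: /etc/mysql/my.cnf
    ls /opt/bitnami/apps/myapp/htdocs          # instead of: /var/www/html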
I'm a happy bitnami stack user. It's a great stack. I can describe many advantages.
The drawback of using a Bitnami stack is the update cycle. For example, on a Debian/Ubuntu based system you cannot use the standard apt-get update/upgrade.
That means some security updates might not reach your system as fast as they would through your standard cron (automated periodic) update mechanism.
To upgrade the system you will need to create a backup, install a new stack, then import the backup into the new stack, which might not be an ideal procedure.
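In practice that migration usually boils down to a dump-and-restore, roughly like this (the parallel install location and the database name are placeholders):

    # on the existing stack: dump the application database
    /opt/bitnami/mysql/bin/mysqldump -u root -p myapp_db > myapp_db.sql
    # install the new stack alongside it, then import the dump there
    /opt/new-bitnami/mysql/bin/mysql -u root -p myapp_db < myapp_db.sql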
Some people categorize that as not suitable for a production environment.
Bitnami: ease of use, validated components, a known good working configuration.
Disadvantage: patches and updates. You cannot update packages for security the way you can with a native install. Any bulletins must be addressed by the Bitnami team, who may (or will) roll out an update to address the issue. Bitnami updates are full-stack upgrades, meaning you can't just upgrade a single component (PHP, for example); you need to upgrade the whole Bitnami stack, and the often recommended method is to back up your application database, install a parallel Bitnami stack that has the latest updates, then restore or migrate to the new installation.
Some will tell you that you can shoehorn patches into Bitnami stacks, but it's not at all recommended, will lead you off the stack, and will most likely cause you downstream issues.
Bitnami evidently does not support certain commands from its MySQL command line. I'm finding this very frustrating. Here is some of what I found out.
It puts you into its own bash shell (bash-4.2#).
SHOW MASTER STATUS at the mysql> prompt returns nothing; it doesn't seem to work.
rcmysql start or stop doesn't work from the mysql> prompt; you have to exit to the shell and run ctlscript.sh instead, which is a pain.
Just to get to the command line you have to run ./use_lampstack.
I'm guessing that they are giving us a very pared-down group of MySQL commands because there will be less for them to support and less for people to jack up.
So this came up for me because I was trying to set up replication. I was following directions from someone who had a "regular" install, and it was difficult to follow because most of the commands he suggested didn't work from the Bitnami mysql> command line. So while I really like the uniformity of Bitnami and its modular nature, I have run into a snag trying to set up replication.
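For reference, the dance I ended up with looks roughly like this, run from the stack's installation directory (exact paths depend on where you installed the stack):

    ./use_lampstack                       # drop into the Bitnami bash-4.2# shell
    sudo ./ctlscript.sh status            # service control happens here,
    sudo ./ctlscript.sh restart mysql     # not from the mysql> prompt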