Lately I have been looking into Docker and the value it can provide to a SaaS company. I have spent some time learning how to containerize apps and learning briefly about what Docker and containers are. I still have trouble understanding the usefulness of this technology. I have watched some videos from DockerCon, and it seems like everyone is talking about how Docker makes deployment easy and how whatever you deploy in your dev environment is guaranteed to run in production. However, I have some questions:
Deploying containers directly to production from the dev environment implies that developers should develop inside containers, the very same containers that will run in production. In practice that is not possible, because developers like to develop on their fancy Macs with IDEs. Developers will revolt if they are told to SSH into containers and write their code inside them. So how does that work in companies that currently use Docker?
If we assume that the development workflow will not change, developers will develop locally and push their code to a repo or something similar. So where does "containerizing the app" fit within the workflow?
Also, if developers do not develop within containers, then the "what you develop is what you deploy and is guaranteed to work" assumption is violated. If that is the case, then the only benefit I can see Docker offering is isolation, which is the same thing virtualization offers, of course with lower overhead. So my question here would be: is the low overhead the only advantage Docker has over virtualization, or are there other things I don't see?
You can write the code outside of a container and transfer it into the container in many different ways. Some examples include:
Code locally and include the source when you docker build by using an ADD or COPY statement as part of the Dockerfile
Code locally and push your code to a source code repository like GitHub and then have the build process pull the code into the container as part of docker build
Code locally and mount your local source code directory as a shared volume with the container.
The first two allow you to have exactly the same build process in production and development. The last example would not be suitable for production, but could be quickly converted for production with an ADD or COPY statement (i.e. the first example).
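To make the first and third options concrete, here is a rough sketch; the image name, paths, and Dockerfile details are assumptions for illustration, not from any particular project:

# Options 1/2: the Dockerfile contains something like
#   FROM node:20-alpine
#   WORKDIR /app
#   COPY . .            # pulls the local source into the image at build time
#   CMD ["node", "server.js"]
# and the image is built and run from it:
docker build -t myapp .
docker run --rm myapp

# Option 3 (development only): run the same image but mount the local
# source over /app so edits on the host appear in the container instantly.
docker run --rm -v "$(pwd)":/app myapp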
In the Docker workflow, developers create both source code (which gets stored and shared in a repository like Git or Mercurial) and a ready-to-run container image, which gets stored and shared via a registry like https://registry.hub.docker.com or a local registry.
The containerized code that you develop and test is exactly what can go to production. That's one advantage. In addition, you get isolation, container-to-container networking, and integration with a growing class of DevOps tools for creating, maintaining, and deploying containers.
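A rough sketch of that flow (the image name, tag, and registry are placeholders):

# Build the image from the project's Dockerfile
docker build -t myorg/myapp:1.0 .

# Push it to Docker Hub (or tag it for a local registry instead)
docker push myorg/myapp:1.0

# Anyone with access can now pull and run exactly that artifact
docker pull myorg/myapp:1.0
docker run -d myorg/myapp:1.0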
I have inherited an in-house Read the Docs installation (serving docs for our internal Git server), where it was a known issue that the build volume would eventually fill up. Now it has filled up again, and we would like to find a proper solution. We currently run on OpenShift, and to my understanding the build job runs "next to" the web server and communicates through shared volumes, including a build volume.
It appears that the problem is that old builds (notably pull request builds) are not deleted but stay forever on the build volume. I am not a Django programmer, so I am unfamiliar with this kind of application, which makes the spelunking challenging.
Is this a simple cleanup setting that my ex-colleague missed, or where should I look in the sources? The last thing he did before leaving was upgrade to 6.0.
I cannot seem to find a direct and recent answer to this question. It appears to me that OpenShift is used only to deploy web applications (judging by the supported languages, etc.), but I want to make sure. Can you only use OpenShift to deploy web applications?
You can find more information at:
https://www.openshift.com/
On that page it says:
Run multiple languages, frameworks, and databases on the same platform and take advantage of the docker eco-system.
That statement links to:
https://www.openshift.com/features/technologies.html
where it lists various language builders provided as well as images for database products.
If you can package something up in a container image, then generally you can run it. The caveat is that OpenShift by default doesn't allow you to run containers as root and will assign the container an arbitrary uid to run as. Many images on Docker Hub follow poor practices and expect to be run as root; these will not usually work out of the box. On an OpenShift system where you have no admin rights, and especially on a multi-user system, you will not be given the ability to run things as root, so you just need to follow good practices for how your image is set up if you are building your own images from scratch.
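As a rough illustration of what "good practices" means here, a Dockerfile along these lines usually survives OpenShift's arbitrary-uid policy; the base image, paths, and uid are assumptions for the example:

docker build -t myapp -f - . <<'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY . .
# Make the app directory usable by an arbitrary uid: OpenShift runs the
# container as a random uid that belongs to the root group (gid 0).
RUN chgrp -R 0 /app && chmod -R g=u /app
# Declare a non-root user so the image never assumes root at runtime.
USER 1001
CMD ["python", "app.py"]
EOF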
I am in the process of changing my development environment to Docker, and I'm pretty happy so far, but I have one basic question. First, let me describe the kind of setup I've landed on.
I'm using the example of an environment for web development.
I'm organizing every service in its own container, so
a PHP container that talks to a MySQL container and has a data container (called app) for the source code.
an nginx container that links to the PHP container and serves the files from the data container (app).
app is basically the same image as the PHP container (to save space) and mounts my project folder into the container; app then exposes the project folder to the other containers.
then there is a MySQL container, which has its own data container called data,
a phpMyAdmin container that talks to the MySQL container,
and finally there is data, the data container for the DB.
I'm not sure the benefits are clear to everyone, so here they are (because you could, after all, put everything into one container...).
Mounting the project folder from my host machine into the Docker container lets me use my favorite editor and gives me continuous development.
Decoupling the database engine from its data store gives you the freedom to change the engine but keep the data. (And of course you don't have to install any programming stuff apart from an editor and Docker.)
My goal is to have the whole setup highly portable, so having the latest version of my project code on the host system, not living inside a container, is a huge plus. I am organizing the setup described above in a docker-compose.yml file inside my project folder, so I can just copy that whole project folder to a different machine, run docker-compose up, and be up and running.
I actually have it in my Dropbox and can switch machines just like that. Pretty sweet.
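For reference, a stripped-down sketch of the kind of docker-compose.yml I am describing; the image tags, ports, and paths here are illustrative rather than my exact file:

cat > docker-compose.yml <<'EOF'
version: "2"
services:
  app:                        # data container for the source (same image as php)
    image: php:7.2-fpm
    command: "true"
    volumes:
      - .:/var/www/html       # project folder mounted from the host
  php:
    image: php:7.2-fpm
    volumes_from:
      - app
    links:
      - mysql
  nginx:
    image: nginx:stable
    ports:
      - "8080:80"
    volumes_from:
      - app
    links:
      - php
  data:                       # data container for the DB store
    image: mysql:5.7
    command: "true"
    volumes:
      - /var/lib/mysql
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes_from:
      - data
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_HOST: mysql
    links:
      - mysql
EOF
docker-compose up -d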
But there is one drawback: the DB store is not portable, as it lies somewhere in the VirtualBox file system. I tried mounting the data store into the host OS, but that doesn't really work. The files are there, but I get various errors when I try to read from or write to them.
I guess my question is whether there is a best practice for keeping the database store in sync (or highly portable) between different dev machines.
I'd nix the data containers and switch over to named volumes. Data containers haven't been needed for quite a while, despite some outdated documentation indicating otherwise.
Named volumes let you select from a variety of volume drivers, which makes it possible to mount the data from outside sources (including NFS, Gluster, and Flocker). They also remove the need to pick an image that won't add significant disk overhead, let you mount the folders at any location in each container, and separate container management from data management (so a docker rm -v $(docker ps -aq) doesn't nuke your data).
A named volume is as easy to create as giving the volume a name on the docker run, e.g. docker run -v app-data:/app myapp, and you can then list your volumes with docker volume ls.
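A rough end-to-end example, using a MySQL container purely for illustration:

# Create the volume (docker run would also create it on first use)
docker volume create db-data

# Mount it by name; Docker decides where the data physically lives
docker run -d --name db -v db-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret mysql:5.7

# Volumes survive container removal and can be listed, inspected, and backed up
docker volume ls
docker volume inspect db-data

In a docker-compose.yml the equivalent is a top-level volumes: entry that the service references by name, which is what makes the DB store portable between machines.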
I am looking for a way to have some kind of "image" (VM, Vagrant box, Docker container...?) with all the development tools needed to work on our software project, like a configured IDE (e.g. Eclipse or PyCharm) plus build and deployment tools.
After a bit of searching I found surprisingly little about this topic, but plenty about development environments that mirror the production one. Almost every source I found assumes the development tools are installed on the host, while deployment happens in a virtualized environment.
The first thing that comes to my mind is a virtual machine of some sort, maybe provisioned in an automated way (Packer + Ansible maybe). I have also seen some blog posts about running GUI applications in Docker containers via X.org.
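(For context, the approach those posts describe boils down to sharing the host's X11 socket with the container, roughly like this; the image name is a placeholder:)

# Loosen X access control for local clients (host side, X.org only)
xhost +local:

# Run the GUI app with the host's display and X socket shared in
docker run --rm -it \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  some-ide-image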
Is there a better way? How did you solve the problem?
The ultimate goal is to let new hires be productive in hours instead of days.
UPDATE: After some research, I am currently evaluating:
Development in a Virtual Machine
Development with the support of Docker containers
Cloud IDEs
Have your IT department make an image of a development laptop, and then use a Confluence page for tweaking the image to the needs of the individual developer. Then use Docker images for setting up any servers they will need; these can be run on the laptops. You can use Docker Swarm to spin up many containers if you need to.
I prefer to have dev tools installed on the host so everyone can do things their own way; I don't want to force anyone onto a specific tool.
If you want to go the other route and give your new hires a ready-to-use dev box, I would go with Vagrant working in GUI mode plus provisioning scripts. For example, the JHipster project has a nice dev box; it's pretty handy because they have many tools to install, and after you install Vagrant, VirtualBox (or VMware), and Git on your host you're ready in minutes.
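A minimal sketch of that approach; the box name, resources, and provisioning script are assumptions, and the real value lives in the provisioning script that installs the IDE and toolchain:

cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.provider "virtualbox" do |vb|
    vb.gui    = true          # show the VM window instead of running headless
    vb.memory = 4096
  end
  # Installs the IDE, SDKs, build tools, etc. (hypothetical script)
  config.vm.provision "shell", path: "provision.sh"
end
EOF
vagrant up    # a new hire runs this once and gets a ready dev VM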
We are looking for a way to use GitHub on an internal system that we are developing at work. We have developed it in PHP and MySQL, with a fair bit of jQuery/Ajax, on a Windows Server VM running IIS. Other staff can access the frontend over the network using the IP address.
There are currently three people working on it, and at the moment we edit the files directly on the VM because we need the code to keep communicating with the database so we can check that our changes work. There is no option to install anything like WAMP on our individual machines, and there are the usual group policy restrictions, so the only access we have to a database is via the VM. We have been working with copies of the files/folders and the database, but there is always the risk that merging these later would be a massive task.
I do use GitHub at home (mainly the desktop app, though I can just about get by with the command line as long as I have a list of the commands in front of me) to sync between my PC and laptop via GitHub.com, and I believe the issues we get with several people needing to update the same file would be eliminated by using it here at work.
However, there are some queries we need to ensure we have straight in our heads before putting forward a request.
Is what we are asking for viable? Can several branches on the same server be worked on at the same time, or would this only work on individual machines?
Given that our network is fairly restricted, is there any way that we can work on the files on our own machines and connect to the database hosted on the VM? I believe that an IDE will allow us to run PHP files on a standard machine (although a request for Eclipse is now around 6 weeks old and there is still no confirmation that we will get it any time soon), but will this also allow us to connect to that database?
The stuff we do is not overly sensitive, but the company would certainly not want it out in a public repository (and would also be unlikely to pay for a premium GitHub account), so we would need to branch/pull/merge directly from our machines to the VM.
Does anyone have any advice, suggestions, or solutions? Although GitHub would be the preferred option, as I already use it, we are open to any suggestion that lets three people on different machines work simultaneously on a central system while ensuring that we do not overwrite or affect each other's work.
Setting up a Git repo on Windows is not trivial and may require a fair bit of work. You could try SVN instead; it is fairly straightforward to install on Windows and has a gentler learning curve than Git. I am not saying SVN is better or worse than Git, just that it may be better suited to your needs. We have a similar setup and we use TortoiseSVN (https://tortoisesvn.net/) as a client. SVN also has branches and so on.
SVN for server side repository https://subversion.apache.org/
If you would still prefer Git on Windows, check this out: https://www.linkedin.com/pulse/step-guide-setup-secure-git-remote-repository-windows-nivedan-bamal
1) It is possible to work on many branches and then merge them into a single branch. That's the preferred Git development workflow. You can do the same with SVN.
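If the Git route is chosen, here is a minimal sketch of a self-hosted setup, assuming the VM exposes a network share; the hostname and paths are placeholders:

# On the VM (e.g. in Git Bash): create a bare repository to act as the shared central repo
git init --bare /c/repos/project.git

# On each developer's machine: clone it over the network share
git clone //vm-hostname/repos/project.git
cd project

# Each person works on their own branch and merges when ready
git checkout -b my-feature
git commit -am "describe the change"
git push origin my-feature

This keeps everything on the internal network, so no public (or paid private) GitHub repository is needed, and the same remote can also be used from graphical clients.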