I have two containers that need to run on the same machine.
The first container is the server and the second one is the agent.
Server image tag name: local-wptserver
Agent image tag name: local-wptagent
I can get it to work locally, so I am now trying to deploy it to the cloud (Azure) for the team's consumption.
This is how I am running it locally:
docker run -d -p 4000:80 local-wptserver
docker run -d -p 4001:80 --network="host" -e "SERVER_URL=http://localhost:4000/work/" -e "LOCATION=EastUS_wptdriver" local-wptagent
This basically sets up the server and the agent to talk to each other so once I start making API calls to the server it schedules the job with the agent and returns me the results.
However, since my images are now in Azure Container Registry, how do I get the containers to instantiate with those extra parameters (--network="host" -e "SERVER_URL=http://localhost:4000/work/" -e "LOCATION=EastUS_wptdriver") when they get deployed to a web app?
Is this something I can add in the Dockerfile, prior to creating the image? If yes, how?
Note: I am using the same Azure App Service plan to ensure that the two web apps (built from two different repositories in the Azure Container Registry: server and agent) are on the same machine.
When you want to deploy multiple containers in an Azure Web App, you need to use a docker-compose file to deploy them. You can follow the steps in the example.
Note that the --network option is not supported by docker-compose in an Azure Web App; you can see all supported compose options here. But don't worry, the containers can still communicate with each other through their ports.
Since you pushed the images to Azure Container Registry, you should use ACR as the docker registry. So you need to set the environment variables for your ACR; the details are here, and there is a lot more you can learn in the link I provided.
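For example, a minimal docker-compose sketch for the two images could look like this (the registry name <your_acr_name> is a placeholder, and the agent reaches the server via the compose service name wptserver instead of localhost, since each service gets its own hostname on the shared network):
version: '3.3'
services:
  wptserver:
    image: <your_acr_name>.azurecr.io/local-wptserver:latest
    ports:
      - "4000:80"
  wptagent:
    image: <your_acr_name>.azurecr.io/local-wptagent:latest
    ports:
      - "4001:80"
    environment:
      - SERVER_URL=http://wptserver/work/
      - LOCATION=EastUS_wptdriver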
When I create a container (but do not run it yet) with docker container create ... (not with docker run), and I include the option --network my_network_name, will the container be connected to the network I specified when I later start it?
If the answer is 'no', then it means --network my_network_name does not have any real effect.
More specifically, if I create a container with:
docker container create --name mysql_container --network my_network mysql
and then start it with:
docker container start -ai mysql_container
will mysql_container be automatically connected to my_network?
From the Docker docs:
Network drivers
Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:
bridge: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate. See bridge networks.
host: For standalone containers, remove network isolation between the container and the Docker host, and use the host’s networking directly. See use the host network.
overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers. See overlay networks.
macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack. See Macvlan networks.
none: For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services. See disable container networking.
Network plugins: You can install and use third-party network plugins with Docker. These plugins are available from Docker Hub or from third-party vendors. See the vendor’s documentation for installing and using a given network plugin.
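As a quick way to verify the behaviour yourself, you can create the network, create and start the container, and then inspect it (a sketch using the names from the question; MYSQL_ROOT_PASSWORD is only set because the mysql image requires it):
docker network create my_network
docker container create --name mysql_container --network my_network -e MYSQL_ROOT_PASSWORD=secret mysql
docker container start mysql_container
docker inspect -f '{{json .NetworkSettings.Networks}}' mysql_container
# the output should list my_network, i.e. the network given at "create" time is attached when the container starts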
I am a user of OpenShift Online and OKD, and I am facing a similar issue in both places. Please have a look.
I have created a project.
I launched PHP from the Developer's Catalog option. Along with other details, I entered my project's git URL, and the project was cloned successfully. Now it only needs to connect to a MySQL database.
In Pods, I deployed a MySQL image from the 'Deploy Image' option. It launched successfully.
When I make a MySQL connection from the PHP pod to the MySQL pod, it does not connect; the connection times out.
How should I make the connection?
Note:
I do not have a datastore option to launch MySQL from the Developer's Catalog in OpenShift Online; that is why I am launching the MySQL image from 'Deploy Image'.
As you mentioned, you are using OpenShift Online and OKD and are facing the issue in both places.
You cannot create MySQL from the Developer's Catalog because the OpenShift Online catalog currently does not provide the MySQL template via the web interface directly, but you can deploy the MySQL template using the oc CLI instead. Database deployment is simplified when using templates.
Once logged in with the oc CLI, running
oc new-app -L
will list all of the templates that you are used to seeing in the web console, including mysql-persistent. Then, you can specify all the template parameters via the oc CLI, e.g.:
oc new-app mysql-persistent -p MYSQL_USER=<desired_DB_username> -p MYSQL_PASSWORD=<mysql_password> -p MYSQL_DATABASE=<desired_database_name>
If you'd like to see all the supported template parameters, you can use
oc process <template_name> --parameters -n openshift
or, for a more detailed output,
oc describe template <template_name> -n openshift
Once the app is launched successfully, you can find its hostname in the services list and connect to it from your PHP pod after defining that hostname in your PHP configuration file.
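For example (a sketch, assuming the template created a service named mysql; you can confirm the actual name with the command below):
oc get svc
# the PHP pod in the same project can then reach the database at host "mysql" on port 3306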
I have an application that uses a docker-compose file to stand up in a Docker environment. Is there a way I can port/publish this multi-container application to IBM Bluemix?
The IBM Containers service presently comes in two distinct flavors. The first is Container Groups (backed by docker containers; the service also supports docker-compose files).
Your comment above seems to indicate that you want to create a docker container; you can do that from this service too. If you want to run docker-machine, you will not be able to do that on the first flavor with container groups, nor on the Kubernetes service (which is currently still in beta).
The second, newer flavor of the service is container orchestration backed by Kubernetes and managed by SoftLayer. You can use this in much the same way you use docker-compose, except your docker container cloud is managed by Kubernetes rather than by you, the user.
Sure! Try out this tutorial to get started:
https://console.ng.bluemix.net/docs/containers/container_single_ui.html#container_compose_intro
I have several questions regarding Docker.
First my project:
I have a blog on a shared host and want to move it to the cloud to have the whole server side in my own hands and the possibility to scale the server to my needs.
My first intention was to set up a nice Ubuntu 14 LTS server with nginx, PHP 7 and MySQL. But I think it's not that easy to transfer such a server to another cloud, e.g. from GCE to AWS. I then thought about using Docker, as a friend told me how easy it is to set up containers and how easy it is to move them from one server to another.
I then read a lot about docker but stumbled upon a few things I wondered about.
In my understanding, Docker just runs services like PHP, MySQL or similar, but doesn't hold data, right?
Where would I store all the data like the database, nginx.conf, php.ini and all the files I want to serve with nginx (i.e. /var/www/)?
Are they stored on the host system? If yes, then it would not be easier to move a Docker setup than to move a whole server, would it?
Do I really have an advantage in using Docker to serve a WordPress blog or another website using MySQL and so on?
Thanks in advance
Your data is either stored on the host machine or attached to the docker containers remotely (using a network-attached block device).
When you store your data on the host machine, you have a number of options.
The data can be 'inside' one of your containers (e.g. your mysql databases live inside your mysql container).
You can mount one or more directories from your host machine inside your containers. So then the data lives on your host.
You can create Docker volumes or Docker volume containers that are used to store your data. These volumes or volume containers are mounted inside the container with your application. The data then lives in directories managed by Docker.
For details of these options, see the Docker volumes documentation; a short sketch of options 2 and 3 follows below.
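A quick sketch of options 2 and 3 with the mysql image (pick one of the two run commands; the paths, names and password are just examples):
# option 2: bind-mount a host directory into the container
docker run -d --name mysql -v /srv/mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.6
# option 3: let Docker manage the data in a named volume
docker volume create mysql-data
docker run -d --name mysql -v mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.6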
The last option is that you mount remote storage to your docker containers. Flocker is one of the options you have for this.
At my work I've set up a host (i.e. server) that runs a number of services in docker containers. The data for each of these services 'lives' in a Docker data volume container.
This way, the data and the services are completely separated. That allows me to start, stop, upgrade and delete the containers that are running my services without affecting the data.
I have also made separate Docker containers that are started by cron and these back up the data from the data volume containers.
For mysql, the backup container connects to the mysql container and executes mysqldump remotely.
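Such a backup step could look roughly like this (only a sketch; it assumes the MySQL container is reachable as host mysql on a network named my_network, and the credentials and paths are placeholders):
docker run --rm --network my_network -v /srv/backups:/backups mysql:5.6 \
  sh -c 'mysqldump -h mysql -u root -psecret --all-databases > /backups/dump.sql'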
I can also run the (same) containers that are running my services on my development machine, using the data that I backed up from the production server.
This is useful, for instance, to test upgrading mysql from 5.6 to 5.7.
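Restoring such a dump into a newer MySQL container on the development machine could then look like this (again only a sketch; names, paths and the password are assumptions):
docker run -d --name mysql57-test -v /srv/backups:/backups -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
# wait until the server inside the container is ready, then load the dump
docker exec mysql57-test sh -c 'mysql -u root -psecret < /backups/dump.sql'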
I'm planning to migrate my application stack to Docker. Let me describe the services I'm currently using:
HAProxy, which is used for SSL termination on all services' connections (HTTP and raw TCP), and forwards traffic to the services below.
Nginx, which serves static files, like updates and some information pages.
Node.js, which runs the main applications.
MySQL (MariaDB), the database used and shared by all the applications.
My question is about the database.
What's the proper way of running MariaDB in this case?
Install and run it inside my container, along with the other services?
Run the official image in a separate container, and link my container to it with the --link option of Docker's run command?
Does the first option have any disadvantage?
The TeamSpeak docker container uses the second option, and that's what made me question the correct way of running the database in my case, though I feel more inclined to package all the services inside my own image.
Docker Philosophy: Split your application into microservices and use a container for each microservice.
In your case, I recommend a MariaDB container; using the official (library) image gives you easier update management, but feel free to use your own custom image.
Then add an HAProxy container, an nginx container and a Node.js container.
This way you divide your application into microservices, and you can upgrade, manage and troubleshoot them more easily in an isolated environment.
If you are thinking about delivering your application to end users via Docker, a simple docker-compose file will do the trick for easily launching the required containers.
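A minimal sketch of such a compose file for this stack (image tags, ports, paths and the build context are assumptions, just to illustrate the layout):
version: '3'
services:
  db:
    image: mariadb
    environment:
      - MYSQL_ROOT_PASSWORD=secret
    volumes:
      - db-data:/var/lib/mysql
  app:
    build: ./app                 # your Node.js application
    depends_on:
      - db
  web:
    image: nginx
    volumes:
      - ./static:/usr/share/nginx/html:ro
  haproxy:
    image: haproxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
volumes:
  db-data:
Each service can then be upgraded or restarted independently, e.g. with docker-compose up -d db.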