Run GitLab Runner connected to 2 Docker networks - gitlab-ci-runner

I need two network interfaces for some tests of my library.
GitLab is used as a CI/CD server.
I can use docker network create (twice) + docker run -itd + docker network connect + docker attach to launch a Docker container connected to the 2 networks.
But I couldn't find any way to configure GitLab Runner (via .gitlab-ci.yml or somehow else) for this need.
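For clarity, this is roughly the manual sequence I mean (the network and image names net_a, net_b and my-test-image are just placeholders):
# Create the two user-defined networks.
docker network create net_a
docker network create net_b
# Launch the container attached to the first network.
docker run -itd --name test-container --network net_a my-test-image
# Connect the already-running container to the second network.
docker network connect net_b test-container
# Attach to it; the container now has an interface on both networks.
docker attach test-container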
Any help (or just additional info) is greatly appreciated.
Thank you.

Related

When creating (not running) a Docker container, does assigning it to a network have any real effect?

When I create a container (but do not run it yet) with docker container create ... (not with docker run), and I include the option --network my_network_name, will the container be connected to that network when I later start it?
If you say 'no', then it means --network my_network_name does not have any real effect.
More specifically, if I create a container with:
docker container create --name mysql_container --network my_network mysql
then when I start it with:
docker container start -ai mysql_container
will mysql_container be automatically connected to my_network?
From the Docker docs:
Network drivers
Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:
bridge: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate. See bridge networks.
host: For standalone containers, remove network isolation between the container and the Docker host, and use the host’s networking directly. See use the host network.
overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers. See overlay networks.
macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack. See Macvlan networks.
none: For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services. See disable container networking.
Network plugins: You can install and use third-party network plugins with Docker. These plugins are available from Docker Hub or from third-party vendors. See the vendor’s documentation for installing and using a given network plugin.
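Coming back to the question above, a quick way to check this empirically is to create the container with --network, start it, and then inspect which networks it is attached to. A rough sketch (the MYSQL_ROOT_PASSWORD value is just a placeholder the official mysql image needs in order to start):
docker network create my_network
# Create, but do not yet run, the container on my_network.
docker container create --name mysql_container --network my_network -e MYSQL_ROOT_PASSWORD=example mysql
# Start it and list the networks the running container is attached to.
docker container start mysql_container
docker inspect -f '{{json .NetworkSettings.Networks}}' mysql_container
# Expected output: a JSON object keyed by "my_network", i.e. the --network
# flag given at create time does take effect when the container is started.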

Running two docker containers on one Azure App service plan

I have two containers that need to run on the same machine.
The first container is the server and the second one is the agent.
Server image tag name: local-wptserver
Agent image tag name: local-wptagent
I can get it to work locally so I am trying to deploy it to the cloud (Azure) for the team's consumption.
This is how I am running it locally:
docker run -d -p 4000:80 local-wptserver
docker run -d -p 4001:80 --network="host" -e "SERVER_URL=http://localhost:4000/work/" -e "LOCATION=EastUS_wptdriver" local-wptagent
This basically sets up the server and the agent to talk to each other so once I start making API calls to the server it schedules the job with the agent and returns me the results.
However, since my images are now in Azure Container Registry, how do I get the containers to instantiate with those extra parameters (--network="host" -e "SERVER_URL=http://localhost:4000/work/" -e "LOCATION=EastUS_wptdriver") when they get deployed to a web app?
Is this something I can add in the Dockerfile, prior to creating the image? If yes, how?
Note: I am using the same Azure App Service plan to ensure that the two web apps (built from two different repositories in the Azure Container Registry: server and agent) are on the same machine.
When you want to deploy multiple containers in an Azure Web App, you need to use a docker-compose file to deploy the multiple containers. You can follow the steps in the example.
Note that the --network option is not supported in docker-compose on Azure Web App; you can see all supported compose options here. But don't worry: the containers can communicate with each other through their ports.
Since you push the images to Azure Container Registry, you should use the ACR as the Docker registry, so you need to set the environment variables for your ACR; the details are here. There are also many more details you could learn from the link I provided.
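To make that answer concrete, a docker-compose.yml along these lines could replace the two docker run commands; the registry prefix myregistry.azurecr.io and the rewritten SERVER_URL (pointing at the server service by name instead of --network="host" plus localhost) are assumptions to adapt, not a verified Azure configuration:
# Sketch only: "myregistry" and the service-name-based SERVER_URL are assumptions.
cat > docker-compose.yml <<'EOF'
version: '3'
services:
  server:
    image: myregistry.azurecr.io/local-wptserver
    ports:
      - "4000:80"
  agent:
    image: myregistry.azurecr.io/local-wptagent
    ports:
      - "4001:80"
    environment:
      # Reach the server by its service name on port 80 instead of --network="host".
      - SERVER_URL=http://server/work/
      - LOCATION=EastUS_wptdriver
EOF
# For a local check before uploading the compose file to the Web App:
docker-compose up -d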

Is it possible to containerize a cluster?

Is it possible to containerize a Minishift or Minikube cluster, so that I can docker run -it the container and oc/kubectl get the resources inside?
The Dockerfile could be like:
FROM alpine:latest
RUN **minishift installation**
ENTRYPOINT ["minishift", "start"]
We currently have a product that has a minishift cluster in a VM, so I was wondering if we can transition it from VM to Container.
Theoretically, yes, this might be possible, but it should not be done this way.
While I do understand running Minikube or Minishift inside a virtual machine, I cannot understand why you would like to run it inside a container. Both of those are just one-node, easy-to-use Kubernetes or OpenShift distributions.
If you already have a cluster, why not use it to run the app that you want?
By deploying Minikube or Minishift into a container, you are creating a huge overhead for your application.
You might be interested in reading the blog Running systemd within a Docker Container. It is a bit dated, but it might be what you are looking for. Daniel Walsh also posted a 2019 update to that 2014 post; you can find it here.

Docker Portable deployment across machines: where to start

I need to install a new Linux server on a VPS, for running MySQL, Apache, PHP and some PHP applications.
In the future I might need to move this server to another machine (for example, when I want to move from the VPS to a machine of my own in colocation).
I understand that with Docker it is possible to just copy the whole server installation to another machine, without the need to reinstall everything.
But what is the easiest way to do this? What actions do I need to take when installing the new server? I guess I need to install Linux and put the rest in a Docker installation, but I am not sure. Does anyone know a step-by-step guide?
I am new to Docker, and I get overwhelmed by all the tools for scaling Docker containers in production.
I want to use Plesk as well. Plesk supports Docker, so perhaps that is an easy way to go.
1) Create a Dockerfile in which you describe the actions you need to perform on the image; you can find examples in the official documentation https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
2) Build an image from the Dockerfile
3) Register on Docker Hub
4) Push your image to Docker Hub
5) When you set up the new server, you just need to pull your image from the hub (a shell sketch of these steps follows below)
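A compressed sketch of those steps, with a hypothetical Docker Hub account (myaccount) and image name (my-lamp-server):
# 2) Build the image from the Dockerfile in the current directory.
docker build -t myaccount/my-lamp-server:latest .
# 3)+4) Log in to Docker Hub (after registering at hub.docker.com) and push the image.
docker login
docker push myaccount/my-lamp-server:latest
# 5) On the new server, pull the image and run it.
docker pull myaccount/my-lamp-server:latest
docker run -d -p 80:80 myaccount/my-lamp-server:latest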

Host multicontainer application on Bluemix?

I have an application that uses a docker-compose file to stand up in a Docker environment. Is there a way I can port/publish this multi-container application to IBM Bluemix?
The IBM Containers service presently has two distinct flavors you can use. The first is Container Groups (backed by Docker containers; the service also supports docker-compose files).
Your comment above seems to indicate that you want to create a Docker container? You can do that from the service too. If you want to run docker-machine, you will not be able to do that on the first service with Container Groups, or on the Kubernetes service (currently; it is still in beta).
The second, newer version of the service is container orchestration backed by Kubernetes and managed by SoftLayer. You can use it in much the same way you use docker-compose, except your Docker container cloud is managed by Kubernetes rather than by you, the user.
Sure! Try out this tutorial to get started:
https://console.ng.bluemix.net/docs/containers/container_single_ui.html#container_compose_intro