I'm porting my docker environment to rancher server 1.0.0.
I have a wordpress container which is linked to a mysql container.
Each one is in a separate stack: one stack for the wordpress container and one for the mysql container.
Previously, linking between those two containers was achieved using a docker-compose.yml for my wordpress container containing:
wordpress:
  external_links:
    - mysql:mysql
This was working perfectly before, but it no longer works when those containers run inside a Rancher server.
The documentation about the internal DNS service is not clear to me:
http://docs.rancher.com/rancher/rancher-services/internal-dns-service/
In rancher, my stack is named mysql and my service mysql.
I have tried to link using:
wordpress:
  external_links:
    - mysql.mysql:mysql
But this does not work either.
Those two containers are in a custom catalog. Right now, the only way to make this work is to create and start the two services, then fix the linking afterwards by upgrading the wordpress service.
Any ideas?
Am I missing something?
Thanks a lot!
Here is the solution:
Instead of:
external_links:
  - mysql.mysql:mysql
Use the following syntax to link to a service in another stack:
external_links:
  - mysql/mysql:mysql
Or more generically:
external_links:
  - stack_name/service_name:alias_name
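Putting it together, a minimal docker-compose.yml for the wordpress service could look like this sketch (the image name is an assumption; the stack and service names match the setup described above):

```yaml
wordpress:
  image: wordpress
  external_links:
    # stack_name/service_name:alias_name
    - mysql/mysql:mysql
```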
Hope this helps!
I am using golang to programmatically create and destroy one-off Compute Engine instances using the Compute Engine API.
I can create an instance just fine, but what I'm really having trouble with is launching a container on startup.
You can do it from the Console UI:
But as far as I can tell it's extremely hard to do it programmatically, especially with Container-Optimized OS as the base image. I tried a startup script that does docker pull us-central1-docker.pkg.dev/project/repo/image:tag, but it fails because you first need to run gcloud auth configure-docker us-central1-docker.pkg.dev for that to work, and COOS has neither gcloud nor a package manager to install it.
All my workarounds seem hacky:
Manually create a VM template that has the desired container and create instances of the template
Put container in external registry like docker hub (not acceptable)
Use Ubuntu instead of COOS with a package manager so I can programmatically install gcloud, docker, and the container on startup
Use COOS to pull down an image from dockerhub containing gcloud, then do some sort of docker-in-docker mount to pull it down
Am I missing something or is it just really cumbersome to deploy a container to a compute engine instance without using gcloud or the Console UI?
To have a Compute Engine instance start a container when it boots, one has to define metadata describing the container. When COOS starts, it runs an agent called konlet, which can be found here:
https://github.com/GoogleCloudPlatform/konlet
If we look at the documentation for this, it says:
The agent parses container declaration that is stored in VM instance metadata under gce-container-declaration key and starts the container with the declared configuration options.
Unfortunately, I haven't found any formal documentation for the structure of this metadata. While I couldn't find documentation, I did find two possible solutions:
Decipher the source code of konlet and break it apart to find out how the metadata maps to what is passed when the docker container is started
or
Create a Compute Engine instance by hand with the desired container definitions and then start it. SSH into the instance and retrieve the current metadata. We can read about retrieving metadata here:
https://cloud.google.com/compute/docs/metadata/overview
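For example, from inside the running instance the declaration can be read back from the metadata server (the gce-container-declaration key comes from the konlet docs quoted above; this only works on a Compute Engine VM):

```shell
# Query the instance metadata server from inside the VM
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/gce-container-declaration"
```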
It turns out, it's not too hard to pull down a container from Artifact Registry in Container Optimized OS:
Run docker-credential-gcr configure-docker --registries [region]-docker.pkg.dev
See: https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#accessing_private_images_in_or
So what you can do is put the above line along with docker pull [image] and docker run ... into a startup script. You can specify a startup script when creating an instance using the metadata field: https://cloud.google.com/compute/docs/instances/startup-scripts/linux#api
This seems the least hacky way of provisioning an instance with a container programmatically.
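A sketch of such a startup script, assuming the steps above (the image path and port mapping are placeholders, not values from the question):

```shell
#!/bin/bash
# Startup script for a Container-Optimized OS instance.
# Authenticate Docker against Artifact Registry using the
# pre-installed docker-credential-gcr helper.
docker-credential-gcr configure-docker --registries us-central1-docker.pkg.dev

# Pull and run the (placeholder) image.
docker pull us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest
docker run -d --restart=always -p 80:8080 \
  us-central1-docker.pkg.dev/my-project/my-repo/my-image:latest
```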
You mentioned you used docker-credential-gcr to solve your problem. I tried the same in my startup script:
docker-credential-gcr configure-docker --registries us-east1-docker.pkg.dev
But it returns:
ERROR: Unable to save docker config: mkdir /root/.docker: read-only file system
Is there some other step needed? Thanks.
I recently ran into the other side of these limitations (and asked a question on the topic).
Basically, I wanted to provision a COOS instance without launching a container. I was unable to, so I just launched a container from a base image and then later in my CI/CD pipeline, Dockerized my app, uploaded it to Artifact Registry and replaced the base image on the COOS instance with my newly built app.
The metadata I provided to launch the initial base image as a container:
spec:
  containers:
    - image: blairnangle/python3-numpy-ta-lib:latest
      name: containervm
      securityContext:
        privileged: false
      stdin: false
      tty: false
      volumeMounts: []
  restartPolicy: Always
  volumes: []
I'm a Terraform fanboi, so the metadata exists within some Terraform configuration. I have a public project with the code that achieves this if you want to take a proper look: blairnangle/dockerized-flask-on-gce.
I am trying to move my project's CI to GitHub Actions. For integration tests I need to start and access a redis container. I am using info from this
article.
So the code looks like this:
build-artifacts:
  name: Build artifacts
  runs-on: ubuntu-latest
  services:
    redis:
      image: redis:3.2.12
      ports:
        - 6379:6379
I can access redis using localhost:6379, but I can't access it using redis:6379. The article does not help. What am I doing wrong?
Thank you in advance.
So I figured out what the problem was.
The Docker network works only if you run your job inside a container, and I had not.
Here is example https://github.com/actions/example-services/blob/989ef69ed164330bee413f11ce9332d76f943af7/.github/workflows/mongodb-service.yml#L19
And a quote:
runs all of the steps inside the specified container rather than on the VM host.
Because of this the network configuration changes from host based network to a container network.
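In other words, giving the job a container key puts the steps on the Docker network, so the service hostname resolves. A minimal sketch (the container image is an assumption):

```yaml
build-artifacts:
  name: Build artifacts
  runs-on: ubuntu-latest
  # Running the steps inside a container switches the job onto the
  # Docker network, so the redis service is reachable as redis:6379.
  container: node:14
  services:
    redis:
      image: redis:3.2.12
  steps:
    - run: echo "redis is reachable at redis:6379"
```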
You need to host an external redis database, because containers in GitHub Actions are isolated.
On the other hand, you can prepare a docker container with everything you need for testing and then run the tests inside it.
You can take a look here https://github.com/gonsandia/github-action-deploy
It's a custom action where you define the dockerfile and the scripts to run.
I'm learning Github Actions and designing a workflow with a job that requires a Service Container.
The documentation states that configuration must specify "The Docker image to use as the service container to run the action. The value can be the Docker base image name or a public docker Hub or registry". All of the examples in the docs use publicly-available Docker images, however I want to create a Service Container from a Dockerfile contained within my repo.
Is it possible to use a local Dockerfile to create a Service Container?
Because the job depends on a Service Container, that image must exist when the job begins, and therefore the image cannot be created by an earlier step in the same job. The image could be built in a separate job, but because jobs execute on separate runners, I believe that Job 2 will not have access to the image created in Job 1. If this is true, could I follow this approach, using upload/download-artifact to provide Job 1's image to Job 2?
If all else fails, I could have Job 1 create the image and upload it to Docker Hub, then have Job 2 download it from Docker Hub, but surely there is a better way.
The GitHub Actions host machine (runner) is a fully loaded Linux machine, with everything everybody needs already installed.
You can easily launch multiple containers - either your own images, or public images - by simply running docker and docker-compose commands.
My advice to you is: Describe your service(s) in a docker-compose.yml file, and in one of your GitHub Actions steps, simply do docker-compose up -d.
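A minimal sketch of that advice (the checkout step and the contents of docker-compose.yml are assumptions):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build and start the services described in docker-compose.yml
      - run: docker-compose up -d --build
      # ... run tests against the started services ...
      - run: docker-compose down
```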
You can create a docker image with a Dockerfile or docker-compose.yml residing inside the repo. Refer to this public gist; it might be helpful.
Instead of building multiple docker images, you can use docker-compose. Docker Compose is the preferred way to deal with this kind of scenario.
I have a problem installing Cygnus using Docker. Simply put, I cannot understand where I should map my specific agent.conf.
Image i am using is from here.
When I try to map an agent.conf containing my specific setup into the container, the container starts and runs but the file fails to copy. Not only that, any change I make to the file inside the container won't stay; it returns to its previous default state.
I have no issues with grouping_rules.conf using the same approach, though.
I used docker and docker compose, both with the same results.
The path to which I try to copy it: /opt/apache-flume/conf/agent.conf
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi
Can someone who managed to run it with their own config tell me whether I misunderstood the location of agent.conf? This is weird: I have used many docker images and never had an issue copying from my machine into a docker container.
Thanks in advance.
** EDIT **
Link of agent.conf
Did you copy the agent.conf file to your directory before starting the container?
As you can see here, when you define a volume with the "-v" option, docker mounts the content of the host path over the container path at the mount point. Therefore, you must first provide the agent.conf file on your host.
The reason is that when using a "bind mounted" directory from the
host, you're telling docker that you want to take a file or directory
from your host and use it in your container. Docker should not modify
those files/directories, unless you explicitly do so. For example, you
don't want -v /home/user/:/var/lib/mysql to result in your
home-directory being replaced with a MySQL database.
If you do not have access to the agent.conf file, you can download the template in the source code from the official cygnus github repo here. You can also copy it once the docker container is running, using the docker cp option:
docker cp <containerId>:/file/path/within/container /host/path/target
Keep in mind that you will have to edit the agent.conf file to configure it according to the database you are using. You can find in the official doc how to configure Cygnus to use different sinks like MongoDB, MySQL, etc.
I hope I have been helpful.
Best regards!
I'm on a MacBook Pro laptop running docker-machine (0.5.0) and docker-compose (1.5.0) to get my containers going.
This means I'm using docker-machine to create my VirtualBox boot2docker-driven HOST machines, which run my docker daemon and host all my containers.
I think I'm missing something critical with the concept of HOSTS and VOLUME, as they refer to Docker and the documentation.
This is my docker-compose.yml file (web simply builds the php:5.6-apache image):
web:
  restart: "always"
  build: ./docker-containers/web
  ports:
    - "8080:80"
  volumes:
    - ./src:/var/www/html
  links:
    - mysql:mysql
mysql:
  restart: "always"
  image: mysql:5.7
  volumes_from:
    - data
  ports:
    - "3306:3306"
  environment:
    - MYSQL_ROOT_PASSWORD=XXX
data:
  restart: "no"
  image: mysql:5.7
  volumes:
    - /var/lib/mysql
  command: "true"
Docker Compose file documention for volumes is here: http://docs.docker.com/compose/compose-file/
It states for volumes - Mount paths as volumes, optionally specifying a path on the host machine (HOST:CONTAINER), or an access mode (HOST:CONTAINER:ro).
HOST in this case refers to my VM created by docker-machine, correct? Or my local macbook file system? Mounting a path on my VM to a container?
Under web I declare:
volumes:
  - ./src:/var/www/html
and this maps the ./src folder on my local MacBook file system to my web container. If my understanding is correct, though, shouldn't it be mapping the ./src folder on my VM to /var/www/html within the web container?! In theory, I think I should be required to COPY my local Mac ./src folder to my VM first, and only then make this volume declaration. It seems docker-compose is magically doing it all at once, though? Confused.
Lastly, we can see that I'm creating a data-only container to persist my mysql data. I've declared:
volumes:
  - /var/lib/mysql
Shouldn't this create a /var/lib/mysql folder on my HOST boot2docker VM, which I could then navigate to on the VM, yes/no? When I use docker-machine to ssh into my machine and navigate to /var/lib, there is NO mysql folder at all?! Why is it not being created? Is there something wrong with my configuration? :/
Thanks in advance! Any explanations as to what I'm doing wrong here would be greatly appreciated!
Ok, there are a couple of points that need to be addressed here.
Let's start with what a docker volume is (try not to think about your MacBook or the vagrant machine at this point; just be mindful of the fact that containers use a different filesystem, wherever it may reside):
Maybe imagine it like this: in and of itself, every volume in Docker is just a part of the internal file system Docker uses.
Containers can use these volumes as if they were "small harddrives" that can be mounted by them and also shared between them (or mounted by two of them at the same time, like mounting a super fast version of some ftp server to two clients or whatever :P ).
In principle you can declare these volumes (still not thinking about your computer/vagrant itself, just the containers ;) ) via the Dockerfile's VOLUME instruction.
Standard example, run one webserver container like so:
FROM nginx
VOLUME /www
Now everything that goes into /www can in theory be mounted and unmounted from a container and also mounted to multiple containers.
Now Nginx alone is boring, so we want to have php run over the files that nginx stores to produce some more fun content. => We need to mount that volume into some php-fpm container.
Ergo in our compose file we'd do this
web:
  image: nginx
php:
  image: php-fpm
  volumes_from:
    - web
=> voila! Every folder declared by a VOLUME instruction in the nginx/web container will be visible in the php one. Important point to note here: whatever is in nginx's /www will override whatever php has in /www.
If you put the :ro, php can't even write to that folder :)
Now moving close to your issue, there's a second way to declare volumes, that does not require them being declared in the Dockerfile. This can be done by mounting volumes from the host (in this case your vagrant/boo2docker thingy). Let's discuss this as though we're running on a native Linux first.
If you were to put something like:
volumes:
  - /home/myuser/folder:/folder
in your docker-compose.yml, then this means that /home/myuser/folder will now be mounted into the container. It will override whatever the container has in /folder and, just like the /www above, also be accessible from the thing that declared it, in this case the Linux machine the docker daemon is running on.
So much for the theory :) In practice, you probably just need the following advice to get your stuff going:
The way boot2docker/docker-machine/kitematic and all these things deal with the issue is simple: they first mount a volume from the vagrant machine into the docker containers, and then also mount this thing into your Mac file system, hoping it will all work out :P
Now for the practical problem all of us using this on Mac are facing (or just trying to help their coworkers into the world of sweet sweet Docker :P): permissions. I mean, think about it: root or some other user handles files in the container, the user vagrant might handle files in the vagrant host, and then your Mac user "skalfyfan" handles those files on the Mac. They all have different user ids and whatnot => many problems ensue, somewhat depending on what you're actually running in Docker. MySQL and Apache are especially painful, because they do not run as root within the container. This means they often have trouble writing to the Mac file system.
Before trying the second approach below, simply try putting your container volumes under your Mac home directory. As I have found over time, this resolves issues with MySQL in most cases.
Btw: no need to declare full paths to volumes; ./folder is fine and is read relative to the place your docker-compose.yml resides!
Just put the compose yml in your Mac user's folder, that's all that matters. No chmod 777 -R :P will help you here; it just needs to be under your home folder :)
Still, some apps (Apache for example) will give you a hard time. The fact that the user id of whatever runs in the container differs from your Mac user id will make your life hell. To get around this, you need to adjust the user id as well as the user group in a way that doesn't conflict with your Mac's permissions. The group you want on a Mac is staff; a UID that works would be, for example, 1000.
Hence you could put this at the end of your Dockerfile:
RUN usermod -u 1000 www-data
RUN usermod -G staff www-data
or
RUN usermod -u 1000 mysql
RUN usermod -G staff mysql
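For the Apache case, a sketch of where these lines would live (the base image is an assumption, matching the php:5.6-apache image mentioned in the question):

```dockerfile
FROM php:5.6-apache
# Align the container's web user with a Mac-friendly UID and the staff group
RUN usermod -u 1000 www-data && usermod -G staff www-data
```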
So as you have now learnt:
In theory I think I should be required to COPY my local mac file
system folder ./src to my VM first, and then I do this volume
declaration. It seems docker-compose is magically doing it all at once
though?
Right on, it does that :)
Lastly, we can see that I'm creating a data-only container to persist
my mysql data. I've declared:
volumes:
  - /var/lib/mysql
This one you got wrong :) As explained, if you don't give a host folder, Docker will persist this path, but only for this container, and everything stays within the docker file system. Nothing is written to the host at all! Writing to the host only happens if you give a host folder before the container folder!
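To make the distinction concrete: to actually see the data on the host, the data container would need a host folder before the container path (./mysql-data here is an illustrative assumption, not from the question):

```yaml
data:
  restart: "no"
  image: mysql:5.7
  volumes:
    # host folder before container folder, so data is written to the host
    - ./mysql-data:/var/lib/mysql
  command: "true"
```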
Hope this helped :)