I have a problem installing Cygnus using Docker: I simply cannot understand where I should map my specific agent.conf.
The image I am using is from here.
When I try to map an agent.conf containing my specific setup into the container, it starts and runs but the file is not copied. On top of that, any change I make to the file inside the container does not persist; it reverts to the previous default state.
I have no issues with grouping_rules.conf using the same approach.
I used both docker and docker-compose, with the same results.
The path I try to map it to is /opt/apache-flume/conf/agent.conf:
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi
Can someone who managed to run it with their own config tell me whether I have misunderstood the location of agent.conf, or something else? This is strange; I have used many Docker images and never had an issue copying a file from my machine into a container.
Thanks in advance.
** EDIT **
Link to agent.conf
Did you copy the agent.conf file to your directory before starting the container?
As you can see here, when you define a volume with the "-v" option, Docker mounts the content of the host path into the container at the mount point. Therefore, you must first provide the agent.conf file on your host.
The reason is that when using a "bind mounted" directory from the host, you're telling docker that you want to take a file or directory from your host and use it in your container. Docker should not modify those files/directories, unless you explicitly do so. For example, you don't want -v /home/user/:/var/lib/mysql to result in your home-directory being replaced with a MySQL database.
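Concretely, that setup might look roughly like this (the host path is taken from the question, and my-agent.conf is just a placeholder for your edited file):
# place your edited agent.conf on the host first
mkdir -p /home/igor/Documents/cygnus
cp my-agent.conf /home/igor/Documents/cygnus/agent.conf
# then bind mount it over the default file inside the container
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi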
If you do not have access to the agent.conf file, you can download the template from the source code in the official Cygnus GitHub repo here. You can also copy it out of the container once it is running, using docker cp:
docker cp <containerId>:/file/path/within/container /host/path/target
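For example, assuming the running Cygnus container is named cygnus (the name is just a placeholder):
docker cp cygnus:/opt/apache-flume/conf/agent.conf /home/igor/Documents/cygnus/agent.conf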
Keep in mind that you will have to edit the agent.conf file to configure it for the database you are using. The official docs explain how to configure Cygnus to use different sinks such as MongoDB, MySQL, etc.
I hope I have been helpful.
Best regards!
Related
I am running ejabberd using this docker image "https://github.com/processone/docker-ejabberd/tree/master/ecs".
I am wondering what the path of .erlang.cookie is inside the container. I am trying to set up a cluster across different hosts.
I can't find it in /home/ejabberd. I also tried setting the ERLANG_COOKIE environment variable when running docker, but I still can't find the file in /home/ejabberd.
You already found where the erlang cookie file is generated and available.
Alternatively, you can use the ERLANG_COOKIE environment variable to set the cookie value and not worry about the file at all. See https://github.com/processone/docker-ejabberd/tree/master/ecs#clustering-example
It is in the $HOME directory when you log in to the container as the root user; in my case /home/ejabberd. The file is hidden, so use ls -a to list it.
To log in to the container as the root user, use --user root with docker exec.
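As a rough sketch (the container name and image tag here are assumptions, adjust them to your setup):
# list the hidden cookie file as root
docker exec -it --user root ejabberd ls -a /home/ejabberd
# or skip the file entirely and set the cookie value via the environment variable
docker run -d --name ejabberd -e ERLANG_COOKIE=dummySecretCookie ejabberd/ecs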
I am starting out with programming and am currently working with Docker Containers.
One of the containers is a webserver that takes an input from another container and displays an output on a web page on localhost.
I was wondering if it would be possible to change some comments on the webpage that is part of the container, and if so, how to go about it?
PS: Pretty new to all this, so please forgive me if I'm asking something really basic
It depends on your strategy. If you need the content to change dynamically when you change the code, you should mount the directory into the container via a docker command or a docker-compose file. If it is static, copy the files in via the Dockerfile.
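A rough sketch of both options, using nginx purely as a placeholder for whatever webserver image you actually have:
# dynamic: mount the page content from the host so edits show up on refresh
docker run -d -p 8080:80 -v /home/user/site:/usr/share/nginx/html nginx
# static: bake the (edited) files into the image via COPY in the Dockerfile, then rebuild and run
docker build -t my-webserver .
docker run -d -p 8080:80 my-webserver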
It is strange that you are a beginner at programming and are already working with docker containers. But now you are here.
Find out if the files you want to edit are part of a container ('baked in') or if they get mounted at container runtime.
If they are baked in, you would go to the bakery (docker build ...) and modify files so that you get modified containers.
If they are mounted at runtime (docker run -v ...) find out where they get mounted from and modify the files over there.
Baked in files cannot be changed just like that, so they reflect an immutable installation. The other files can be changed at runtime. There is no right or wrong, choose the pattern depending on what you want to achieve. That is where the strategy comes into play.
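One way to find out which case you are in is to inspect the running container's mounts (the container name is a placeholder):
# an empty list means nothing is mounted, i.e. the files are baked in
docker inspect -f '{{ json .Mounts }}' my-webserver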
I'm learning Github Actions and designing a workflow with a job that requires a Service Container.
The documentation states that configuration must specify "The Docker image to use as the service container to run the action. The value can be the Docker base image name or a public docker Hub or registry". All of the examples in the docs use publicly-available Docker images, however I want to create a Service Container from a Dockerfile contained within my repo.
Is it possible to use a local Dockerfile to create a Service Container?
Because the job depends on a Service Container, that image must exist when the job begins, and therefore the image cannot be created by an earlier step in the same job. The image could be built in a separate job, but because jobs execute on separate runners, I believe that Job 2 will not have access to an image created in Job 1. If this is true, could I follow this approach, using upload/download-artifact to provide Job 1's image to Job 2?
If all else fails, I could have Job 1 create the image and upload it to Docker Hub, then have Job 2 download it from Docker Hub, but surely there is a better way.
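For reference, the save/load mechanics that approach would rely on look roughly like this (image and file names are placeholders, and the upload/download-artifact steps in between are omitted); whether GitHub Actions will then accept the loaded image as a service container is exactly the open question:
# Job 1: build from the repo's Dockerfile and write the image to a tarball
docker build -t my-service:ci .
docker save my-service:ci -o my-service.tar
# Job 2: after downloading the artifact, load the image into the runner's daemon
docker load -i my-service.tar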
The GitHub Actions host machine (runner) is a fully loaded Linux machine, with everything everybody needs already installed.
You can easily launch multiple containers - either your own images, or public images - by simply running docker and docker-compose commands.
My advice to you is: Describe your service(s) in a docker-compose.yml file, and in one of your GitHub Actions steps, simply do docker-compose up -d.
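Such a step boils down to a couple of commands, assuming a docker-compose.yml at the repository root that builds your service from the local Dockerfile:
# build the image(s) defined in docker-compose.yml from the Dockerfile(s) in the repo
docker-compose build
# start the service(s) in the background so later steps can reach them
docker-compose up -d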
You can create a docker image with the Dockerfile or docker-compose.yml residing inside the repo. Refer to this public gist; it might be helpful.
Instead of building multiple docker images, you can use docker-compose. It is the preferred way to deal with this kind of scenario.
I want to move containers from one host to another. The containers have updated data in their filesystem, so I do not want to move the original images (docker save) but containers (using docker export).
So I use
docker export l4bnode > l4bnode.tar
on the old host, copy the file to new host, and import image
cat l4bnode.tar | docker import - andi/l4bnode
on the new one. But it looks like all the configuration data I had in the Dockerfile (and that I could also specify/had specified on the command line when running the container) is lost. I tried
docker run andi/l4bnode
and get
docker: Error response from daemon: No command specified.
Using docker inspect, I see that all data on the imported image is empty, though it is set on the exported running container. I mainly am missing startup command, working directory, environment variables and exposed ports (some of which I have to change then due to the migration and new environment).
How can I apply the original configuration on the new host, or preferably, migrate it properly?
You can commit the current container state as a new image, then use save/load on that new image.
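A minimal sketch of that flow, reusing the names from the question (the hostname is a placeholder):
# on the old host: freeze the container's current state as an image
docker commit l4bnode andi/l4bnode:migrated
# save preserves the image metadata (CMD, ENV, EXPOSE, workdir, ...), unlike export
docker save andi/l4bnode:migrated > l4bnode-image.tar
scp l4bnode-image.tar newhost:
# on the new host: load and run it
docker load < l4bnode-image.tar
docker run andi/l4bnode:migrated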
That being said, this is something you should generally try to avoid. Runtime data should be kept in volumes, and any configuration changes should happen via Dockerfile rebuilds.
If you look at Dockerfiles, they often contain lines like this:
sed 's/main$/main universe/' -i /etc/apt/sources.list
I think it is difficult to set things up like this.
Is it possible to launch a default OS image, then enter it interactively with a shell, do some modifications, and then print out the diff (a filesystem diff)?
That diff could then be used as the Dockerfile to recreate the image.
But maybe I am missing something, since I am new to docker.
You can create docker images several ways.
I tend to have two windows open when I create a new docker image. One for my docker run -i -t centos bash, where I am writing all my commands to get it the way I want, and the other one with the Dockerfile, so I can put in whatever I do.
When it comes to config files, I put them in files/folders that match the ones on the image.
For example, if I change /etc/something/file.conf, I will create the file as etc/something/file.conf in the same directory as my Dockerfile, and then use Docker's ADD command to add it whenever I do a build.
This works perfectly, since I can have all this in a git repository with a README.md containing the info I need for running/building the image.
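A tiny sketch of that layout, sticking with the /etc/something/file.conf example (the Dockerfile is written via a heredoc here only to keep everything in one snippet):
# mirror the in-image path next to the Dockerfile
mkdir -p etc/something
cp file.conf etc/something/file.conf
# a minimal Dockerfile that bakes the config into the image
cat > Dockerfile <<'EOF'
FROM centos
ADD etc/something/file.conf /etc/something/file.conf
EOF
docker build -t my-configured-image .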
The other thing you can do is run docker ps -a after you are done with the changes you wanted to capture, get the ID of the container you just configured, and commit it as a new image with docker commit. You can then tag this new image, or start it with docker run abc0123 bash just like you would a normal docker image.
The problem with this is that you won't be able to easily rebuild it next time without carrying the whole image around.
Dockerfiles with ADD are the way to go!
If you do not want to run sed (which is used to preserve the default file and make only minimal changes to it), you can simply ADD the modified file.
For that you can docker run -it --rm thebaseimage /bin/sh (or any other shell that is provided) and edit the file in place. Then just copy it out of the container (or docker export it) and use it in your build.
The downside of ADD vs RUN sed… is that, if something changes in a new version of your base image, you will overwrite those changes.
The Dockerfile is (mostly) equivalent to a series of docker run and docker commit commands. You wouldn't want to look at the docker diff to see what files changed -- you'd want to see what docker run commands had occurred. You could get these from your host shell history and process these into a Dockerfile.
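As a small illustration of that equivalence (image and names are placeholders): each interactive session you commit corresponds to RUN lines you could reconstruct from your shell history:
# interactive session: make your changes by hand, then commit the container
docker run -it --name work ubuntu bash
#   ... inside: sed 's/main$/main universe/' -i /etc/apt/sources.list ; exit
docker commit work my-image:step1
# the Dockerfile equivalent of that session, recovered from the shell history:
#   FROM ubuntu
#   RUN sed 's/main$/main universe/' -i /etc/apt/sources.list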