I want to see the folder structure in OpenShift 3. Is it possible to SSH in? I see that rsync can copy files in and out, but how do I list the contents?
To access the container which is running your application, use the oc rsh command. This will give you an interactive shell in which you can use normal Unix commands to change directory, list files, etc.
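For example, a minimal session might look like this (replace POD with the name reported by oc get pods):
oc rsh POD
ls -la
cd /tmp && ls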
Consider reading the free eBook at https://www.openshift.com/promotions/for-developers.html and working through the exercises at https://learn.openshift.com to learn more about using OpenShift. You can also find various blog posts at blog.openshift.com.
If your container doesn't include a shell, you can also use oc exec to run commands directly. Here is an example for a specific container, running a command with arguments (note the double dash):
oc exec -it POD -c CONTAINER -- ls -lrt /tmp/
I am running ejabberd using this Docker image: https://github.com/processone/docker-ejabberd/tree/master/ecs
I am wondering what the path of .erlang.cookie is inside the container; I was trying to set up a cluster across different hosts.
I can't find it in /home/ejabberd. I tried setting the ERLANG_COOKIE environment variable when running docker, but I still can't find it in /home/ejabberd.
You already found where the erlang cookie file is generated and available.
Alternatively, you can use the ERLANG_COOKIE environment variable to set the cookie value and not care about the file at all. See https://github.com/processone/docker-ejabberd/tree/master/ecs#clustering-example
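A minimal sketch of that approach (the image name ejabberd/ecs and the cookie value are assumptions here; use whatever image and secret you actually run):
docker run -d --name ejabberd -e ERLANG_COOKIE=dummySecretCookie ejabberd/ecs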
It is in the $HOME directory when you log in to the container as the root user; in my case /home/ejabberd. The file is hidden, so use ls -a to list it.
To log in to the container as the root user, use --user root with docker exec.
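For example (CONTAINER is a placeholder for your container name or ID):
docker exec --user root -it CONTAINER ls -a /home/ejabberd
docker exec --user root -it CONTAINER sh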
I have a problem installing CYGNUS using Docker as the source; I simply cannot understand where I should map my specific agent.conf.
The image I am using is from here.
When I try to map an agent.conf which has my specific setup into the container, it starts and runs but fails to copy the file; and not only that, any change I make to the file inside the container won't stay, it reverts to the previous default state.
Meanwhile, I have no issues with grouping_rules.conf using the same approach.
I used docker and docker-compose, both with the same results.
The path to which I try to copy is /opt/apache-flume/conf/agent.conf
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi
Can someone who managed to run it with their own config tell me whether I have misunderstood the location of agent.conf, or something else? This is weird: I have used many Docker images and never had an issue where I was not able to copy from my machine into a Docker container.
Thanks in advance.
** EDIT **
Link to agent.conf
Did you copy the agent.conf file to your host directory before starting the container?
As you can see here, when you define a volume with the -v option, Docker makes the content of the host path available inside the container at the mount point. Therefore, you must first provide the agent.conf file on your host.
The reason is that when using a "bind mounted" directory from the host, you're telling Docker that you want to take a file or directory from your host and use it in your container. Docker should not modify those files/directories, unless you explicitly do so. For example, you don't want -v /home/user/:/var/lib/mysql to result in your home directory being replaced with a MySQL database.
If you do not have access to the agent.conf file, you can download the template from the source code in the official Cygnus GitHub repo here. You can also copy it out once the Docker container is running, using the docker cp command:
docker cp <containerId>:/file/path/within/container /host/path/target
Keep in mind that you will have to edit the agent.conf file to configure it according to the database you are using. You can find in the official docs how to configure Cygnus to use different sinks such as MongoDB, MySQL, etc.
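Putting it together, a rough sketch of the workflow (the container name and local paths are illustrative; the in-container path is the one from the question):
docker run -d --name cygnus fiware/cygnus-ngsi
docker cp cygnus:/opt/apache-flume/conf/agent.conf ./agent.conf
# edit ./agent.conf for your sink (MySQL, MongoDB, ...), then start a new container with it mounted
docker run -v $(pwd)/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi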
I hope I have been helpful.
Best regards!
I'm using the mysql image as an example, but the question is generic.
The password used to launch mysqld in Docker is not visible in docker ps; however, it is visible in docker inspect:
sudo docker run --name mysql-5.7.7 -e MYSQL_ROOT_PASSWORD=12345 -d mysql:5.7.7
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b98afde2fab7 mysql:5.7.7 "/entrypoint.sh mysq 6 seconds ago Up 5 seconds 3306/tcp mysql-5.7.7
sudo docker inspect b98afde2fab75ca433c46ba504759c4826fa7ffcbe09c44307c0538007499e2a
"Env": [
"MYSQL_ROOT_PASSWORD=12345",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"MYSQL_MAJOR=5.7",
"MYSQL_VERSION=5.7.7-rc"
]
Is there a way to hide or obfuscate environment parameters passed when launching containers? Alternatively, is it possible to pass sensitive parameters by reference to a file?
Weirdly, I'm just writing an article on this.
I would advise against using environment variables to store secrets, mainly for the reasons Diogo Monica outlines here; they are visible in too many places (linked containers, docker inspect, child processes) and are likely to end up in debug info and issue reports. I don't think using an environment variable file will help mitigate any of these issues, although it would stop values getting saved to your shell history.
Instead, you can pass in your secret in a volume, e.g.:
$ docker run -v $(pwd)/my-secret-file:/secret-file ....
If you really want to use an environment variable, you could pass it in as a script to be sourced, which would at least hide it from inspect and linked containers (e.g. CMD source /secret-file && /run-my-app).
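A minimal sketch of that idea (the file name, variable, and application path are illustrative, and it assumes bash is available in the image):
# contents of my-secret-file, bind-mounted into the container as /secret-file
export MYSQL_ROOT_PASSWORD=12345
# in the Dockerfile, source it before starting the application
CMD ["bash", "-c", "source /secret-file && /run-my-app"]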
The main drawback with using a volume is that you run the risk of accidentally checking the file into version control.
A better, but more complicated solution is to get it from a key-value store such as etcd (with crypt), keywhiz or vault.
You say "Alternatively, is it possible to pass sensitive parameters by reference to a file?", extract from the doc http://docs.docker.com/reference/commandline/run/ --env-file=[] Read in a file of environment variables.
I ran through the Fig Python/Django tutorial on Fedora 20 (Docker 1.0.0), but it failed and tripped an AVC denial in SELinux when django-admin.py attempted to create the project files.
I reviewed the policy; I can see that setting the docker_var_lib_t context on my code dir would permit Docker to write there (although I've just spied docker_share_t in the policy, which looks a better fit permissions-wise: no chr/blk devices in that context).
Code directory locations are not predictable, so setting a system-wide policy (via semanage fcontext) doesn't seem the best way forward; I'd need to introduce some kind of convention.
Is there any way to automatically set this context on volumes mounted from a host?
You can set the following context on the directory:
chcon -Rt svirt_sandbox_file_t $HOME/code/export
then run your docker command as
docker run --rm -it -v $HOME/code/export:/exported:ro image /foo/bar
If you look at Dockerfiles, they often contain lines like this:
sed 's/main$/main universe/' -i /etc/apt/sources.list
I think it is difficult to set up things like this.
Is it possible to launch a default OS image, then enter it interactively with a shell, make some modifications, and then print out the diff (a filesystem diff)?
The diff could then be used as the basis of the Dockerfile for recreating the image.
But maybe I am missing something, since I am new to docker.
You can create Docker images in several ways.
I tend to have two windows open when I create a new Docker image: one for my docker run -i -t centos bash, where I write all the commands to get it the way I want, and the other with the Dockerfile, so I can record whatever I do.
When it comes to config files, I put them in files/folders that match the ones on the image.
For example, if I change /etc/something/file.conf, I will create the file at etc/something/file.conf in the same directory as my Dockerfile, and then use Docker's ADD command to add it whenever I do a build.
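A minimal sketch of that layout (the base image and paths are just examples):
FROM centos
# etc/something/file.conf sits next to the Dockerfile, mirroring the path inside the image
ADD etc/something/file.conf /etc/something/file.conf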
This works perfectly, since I can have all this in a git repository with a README.md containing the info I need for running/building the image.
The other thing you can do is to run docker ps -a after you are done with the changes you wanted to base an image on, get the ID of the container you just configured, and use docker commit to turn that container into an image. You can tag this new image, or start it with docker run abc0123 bash just like you would a normal Docker image.
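For example (the container ID and the image name/tag are illustrative):
docker ps -a
docker commit abc0123 myapp:configured
docker run -it myapp:configured bash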
The problem with this is that you won't be able to easily rebuild it next time without carrying the whole image around.
Dockerfiles with ADD is the way to go!
If you do not want to run sed (which is used to preserve the default file and make only minimal changes to it), you can simply ADD the modified file.
For that you can docker run -it --rm thebaseimage /bin/sh (or any other shell that is provided), edit the file in place, then copy it outside the container (or docker export it) and use it in your build.
The downside of ADD vs RUN sed… is that, if something changes in a new version of your base image, you will overwrite those changes.
The Dockerfile is (mostly) equivalent to a series of docker run and docker commit commands. You wouldn't want to look at the docker diff to see what files changed -- you'd want to see what docker run commands had occurred. You could get these from your host shell history and process these into a Dockerfile.
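A rough sketch of that idea (the grep pattern and the sample command are illustrative):
history | grep 'docker run'
# an experiment such as "docker run ubuntu sed 's/main$/main universe/' -i /etc/apt/sources.list"
# becomes a Dockerfile line such as "RUN sed 's/main$/main universe/' -i /etc/apt/sources.list"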