Docker on Mac: Unable to run MySQL - mysql

I am using Docker for the very first time. On running the command make kickoff, I get this error:
myapp_php_apache_engine_dev is up-to-date
Starting myapp_mysql_dev
ERROR: for mysql Cannot start service mysql: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:53: mounting \\\\\\\"/Applications/MAMP/htdocs/clients/codingmachine/myapp/mysql/custom-my.cnf\\\\\\\" to rootfs \\\\\\\"/mnt/sda1/var/lib/docker/aufs/mnt/2ab6b2578ad9f8da2d453aefa5cd9b288fee30dd2d73efc3048627cf0861d55a\\\\\\\" at \\\\\\\"/mnt/sda1/var/lib/docker/aufs/mnt/2ab6b2578ad9f8da2d453aefa5cd9b288fee30dd2d73efc3048627cf0861d55a/etc/mysql/mysql.cnf\\\\\\\" caused \\\\\\\"not a directory\\\\\\\"\\\"\"\n"
ERROR: Encountered errors while bringing up the project.
make: *** [up] Error 1

When running Docker Toolbox, the docker daemon runs in a VirtualBox VM. The daemon, and the containers that run inside that VM, therefore don't have access to files on your host (Mac OS X).
When you bind-mount a directory from your host into the container (so that the container can access those files), files are always mounted from the host that the daemon runs on; in your case, the VirtualBox VM.
Docker Toolbox uses a "trick" to allow you to mount files from your host: files inside the /Users/ directory are shared with the VirtualBox VM using the VirtualBox "guest additions". This means that when you run:
docker run -v /Users/myname/somedir:/var/www/html -d nginx
The docker daemon mounts the /Users/myname/somedir directory from the VM into the container. Due to the guest additions "trick", this path actually is shared with your OS X machine, so the container "sees" the files from your OS X machine.
However, any directory outside of the /Users/ directory is not shared between the OS X machine and the VM. If you try to bind-mount a path that does not exist inside the VM, docker creates an empty directory (it assumes you want to mount a directory, because it has no way to tell if it should be a directory or a file), and mounts that directory inside the container.
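A quick way to see this behavior under Docker Toolbox is to bind-mount a file from a path that is not shared with the VM; the path and image below are only for illustration:

docker run --rm -v /Applications/foo.cnf:/tmp/foo.cnf busybox ls -ld /tmp/foo.cnf
# prints a "d" (directory) entry for /tmp/foo.cnf, even though the
# host path pointed at a file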
In your example, you try to bind-mount:
/Applications/MAMP/htdocs/clients/codingmachine/myapp/mysql/custom-my.cnf
Inside the container at:
/etc/mysql/mysql.cnf
The /Applications directory is not shared with the VM, so docker creates an empty directory named custom-my.cnf inside the VM, then tries to mount that directory at /etc/mysql/mysql.cnf inside the container. This fails because you cannot mount a directory on top of a file, so Linux produces a "not a directory" error.
To resolve your issue:
Move the files you're trying to mount somewhere inside your /Users/ directory. Note that the VirtualBox guest additions always mount files/directories as if they're owned by user 1000 and group 1000; mysql may therefore not have write access to those files.
If you have a modern Mac running OS X 10.10 or later, check out Docker for Mac, which allows you to share any directory with Docker and does not have the "permissions" issue: https://docs.docker.com/docker-for-mac/troubleshoot/#/volume-mounting-requires-file-sharing-for-any-project-directories-outside-of-users
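For the first option, a minimal sketch (the /Users/... destination is a made-up example; use whatever location suits your project):

# move the config somewhere under /Users/ so the VM can see it
mkdir -p /Users/myname/myapp/mysql
cp /Applications/MAMP/htdocs/clients/codingmachine/myapp/mysql/custom-my.cnf /Users/myname/myapp/mysql/
# mount the file onto the file path inside the container
docker run -v /Users/myname/myapp/mysql/custom-my.cnf:/etc/mysql/mysql.cnf -d mysql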

Related

location of .erlang.cookie in ejabberd docker is missing

I am running ejabberd using this Docker image: https://github.com/processone/docker-ejabberd/tree/master/ecs
Which is the path for .erlang.cookie inside the container? I am trying to set up a cluster on different hosts.
I can't find it in the /home/ejabberd location. I tried setting the ERLANG_COOKIE environment variable when running docker, but I still can't find it in /home/ejabberd.
You already found where the erlang cookie file is generated and available.
Alternatively, you can use the ERLANG_COOKIE environment variable to set the cookie value and not care about the file at all. See https://github.com/processone/docker-ejabberd/tree/master/ecs#clustering-example
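For example, assuming the ejabberd/ecs image from the linked repository (the container name and cookie value here are placeholders):

docker run -d --name ejabberd -e ERLANG_COOKIE=dummySecretCookie ejabberd/ecs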
It is in the $HOME directory when you log in to the container as the root user; in my case /home/ejabberd. The file is hidden, so use ls -a to list it.
To log in to the container as the root user, use --user root with docker exec.
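For example (container name assumed to be ejabberd):

docker exec -it --user root ejabberd ls -a /home/ejabberd
# .erlang.cookie should show up in the listing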

Couldn't load the external resource at: file:/var/lib/neo4j/import with Neo4j docker image

I am trying to load nodes from a CSV into Neo4j; however, every time I try to do this I get this error:
Neo.ClientError.Statement.ExternalResourceFailed: Couldn't load the external resource at: file:/var/lib/neo4j/import/events.csv
My events.csv file is in the /var/lib/neo4j/import directory with 777 permissions. The query I try to run looks like this:
USING PERIODIC COMMIT 500 LOAD CSV WITH HEADERS FROM "file:///events.csv" AS line
CREATE (e:Event { event_id: toInteger(line.event_id),
created: line.created,
description: line.description })
I set up Neo4j using the latest version of docker image. What might be wrong with file permissions or file location?
A Docker container cannot access files on the host machine unless you mount those files into the container.
The solution is to bind-mount the directory into your container when calling the docker run command:
docker run -v /var/lib/neo4j/import:/var/lib/neo4j/import ... <IMAGE> <CMD>
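A fuller invocation might look like this (the port mappings are Neo4j's usual HTTP and Bolt defaults; adjust the image tag to your setup):

docker run -d \
  -p 7474:7474 -p 7687:7687 \
  -v /var/lib/neo4j/import:/var/lib/neo4j/import \
  neo4j:latest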

How to set htpasswd for oauth in master config for minishift (v1.11.0) (Openshift Origin)

I'm trying to activate authentication via htpasswd in my minishift 1.11.0 installation. I cannot find the master config file to set the values described in the documentation for OpenShift Origin. I've searched in the minishift VM via minishift ssh and in the minishift folders in my home folder on my Windows 7 host.
How can I activate htpasswd for minishift 1.11.0?
EDIT:
I found the master-config.yaml in the folder /var/lib/minishift/openshift.local.config/master/. I changed the content under oauthConfig as described in the OpenShift documentation:
https://docs.openshift.org/latest/install_config/configuring_authentication.html
The .htpasswd file is located in the same folder and referenced in the master config with its absolute path.
But when I stop and start minishift again, the starting process ends with the following error:
-- Starting OpenShift container ...
Starting OpenShift using container 'origin'
FAIL
Error: could not start OpenShift container "origin"
Details:
No log available from "origin" container
minishift : Error during 'cluster up' execution: Error starting the cluster.
At line:1 char:1
+ minishift start --vm-driver=virtualbox
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (Error during 'c...ng the cluster.:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
EDIT 2:
I suspect that OpenShift directly uses the htpasswd tool to verify users' passwords. I was not able to install htpasswd in the boot2docker VM that minishift uses, so the initialization of the container fails (yum is also not installed by default).
Is it possible to install htpasswd in boot2docker? If yes, where can I get the package?
I think I have found the problem. While experimenting, I switched to the CentOS image for minishift with the corresponding flag at startup:
minishift start --iso-url=centos
When I wanted to patch the configuration into the master with minishift openshift config set, it failed and rolled back. Searching the logs (with minishift logs) turned up this line:
error: Invalid MasterConfig /var/lib/origin/openshift.local.config/master/master-config.yaml
oauthConfig.identityProvider[0].provider.file: Invalid value: "/var/lib/minishift/openshift.local.config/master/.htpasswd": could not read file: stat /var/lib/minishift/openshift.local.config/master/.htpasswd: no such file or directory
OpenShift couldn't find the HTPasswd file, because for OpenShift the master-config.yaml file lies in
/var/lib/origin/openshift.local.config/master
and not in
/var/lib/minishift/openshift.local.config/master
as I had written in the config file. The latter is the path of the files as seen by the minishift VM itself (e.g. when using minishift ssh), but the OpenShift instance that runs inside it sees only the former. I only had to update the file path in the master config.
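For reference, the relevant part of master-config.yaml then looks roughly like this (the provider name and surrounding fields follow the OpenShift Origin docs; the important part is the corrected file path):

oauthConfig:
  identityProviders:
  - name: htpasswd_provider
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /var/lib/origin/openshift.local.config/master/.htpasswd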
I haven't checked whether this also solves the problem for the boot2docker ISO, but I think this must have been the problem. HTPasswd really doesn't need to be installed in the VM for this to work; you just need the file with your users and passwords to be reachable from the VM.
PS: I also noticed a strange side effect. One user was already defined when I switched to HTPasswd. I defined him in the password file as well, but when trying to log in with this username via the web console I got the error that the user could not be created. All other usernames work correctly. Maybe I have to delete him from some internal user directory before adding him to HTPasswd.
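If you hit the same thing, removing the stale user and identity records with the standard oc commands may help; this is an untested guess, and the username and provider name below are examples:

oc delete user johndoe
oc delete identity htpasswd_provider:johndoe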

Access file from chrome node docker

I am using selenium grid docker to run my automation testsuites.
I have configured the Chrome node Docker image to run my test cases in the browser.
In my test cases I download some files from the web. They get downloaded to /home/seluser/Downloads/ inside the container, and I need their content, which is dynamic. How can I access a file downloaded in the Docker container from the machine where my automation is running?
One way to do this would be to mount the /home/seluser/Downloads path somewhere on your host machine. Change your docker run command to add something like -v /path/on/host:/home/seluser/Downloads and all of the saved files will be accessible on your host machine.
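For a typical Selenium Grid setup that could look like the following; the selenium/node-chrome image and hub link reflect the standard grid docs, so adjust them to your configuration:

docker run -d \
  --link selenium-hub:hub \
  -v /path/on/host:/home/seluser/Downloads \
  selenium/node-chrome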
You can use the docker cp command:
docker cp <name of the container>:/home/seluser/Downloads copied_files
where copied_files is a directory on the host into which the files will be copied.
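You can also copy a single file out; the container and file names here are examples:

docker cp chrome_node_1:/home/seluser/Downloads/report.csv ./report.csv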

Managing the selinux context of a file created on the host via a Docker container's volume

I ran through the fig Python/Django tutorial on Fedora 20 (docker 1.0.0), but it failed and tripped an AVC denial in SELinux when django-admin.py attempted to create the project files.
I reviewed the policy; I can see that setting the docker_var_lib_t context on my code dir would permit docker to write there (although I've just spied docker_share_t in the policy, which looks a better fit permissions-wise: no chr/blk devices in that context).
Code directory locations are not predictable, so setting a system-wide policy (via semanage fcontext) doesn't seem the best way forward; I'd need to introduce some kind of convention.
Is there any way to automatically set this context on volumes mounted from a host?
You can set the following context on the directory:
chcon -Rt svirt_sandbox_file_t $HOME/code/export
then run your docker command as:
docker run --rm -it -v $HOME/code/export:/exported:ro image /foo/bar
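To check that the label was applied (exact output varies by distribution):

ls -dZ $HOME/code/export
# expect the svirt_sandbox_file_t type in the output, e.g.
# unconfined_u:object_r:svirt_sandbox_file_t:s0 /home/you/code/export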