Gunicorn always listening at http://127.0.0.1:8000 - gunicorn

I have set up my Django application on WebFaction and now I am trying to move to using Gunicorn to serve it. When I set up my files and config, everything seems to work except that Gunicorn is always listening at 127.0.0.1:8000.
My configuration is as below.
supervisord.conf
[unix_http_server]
file=/home/devana/tmp/supervisor.sock
[supervisord]
logfile=/home/devana/tmp/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/home/devana/webapps/devana/etc/supervisord.pid
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///home/devana/tmp/supervisor.sock
[include]
files = /home/devana/webapps/devana/etc/supervisord/*.ini
Supervisor.ini
[program:devana]
command=/home/devana/webapps/devana/scripts/start_server
directory=/home/devana/webapps/devana/csiop/
user=devana
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile = /home/devana/tmp/gunicorn_supervisor.log
start_server
NAME="devana" # Name of the application
DJANGODIR=/home/devana/webapps/devana/csiop # Django project directory
SOCKFILE=/home/devana/webapps/devana/run/gunicorn.sock # we will communicate using this unix socket
USER=devana # the user to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=devana.settings.production # which settings should Django use
DJANGO_WSGI_MODULE=devana.wsgi # WSGI module name
BIND=2.14.5.58:31148 # IP and port number provided by webfaction
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/devana/webapps/devana/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER \
--log-level=debug \
--bind=$BIND
Now when I run the '../bin/supervisord' command, Gunicorn starts, but it is listening at 127.0.0.1:8000 instead of the bind address I provided, and I am not able to open my website at http://mywebsite.com.
Could someone point out what I am doing wrong?

I found the problem. Instead of using a single BIND variable containing both IP and port, I separated them into two different variables and used --bind=$IP:$PORT. That seems to work.
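A minimal sketch of that change in start_server, reusing the WebFaction IP and port from the question:
IP=2.14.5.58    # IP provided by webfaction
PORT=31148      # port provided by webfaction
exec /home/devana/webapps/devana/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER \
--log-level=debug \
--bind=$IP:$PORT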

If gunicorn listens on 127.0.0.1:8000, it is probably the default being applied because the supplied -b / --bind parameter could not be applied.
In my case, I ran gunicorn via Docker and had the following directive in my Dockerfile to run as default command:
CMD ["gunicorn", "config.wsgi", "--bind 0.0.0.0:8000"] # listening on 127.0.0.1:8000
CMD ["gunicorn", "config.wsgi", "--bind", "0.0.0.0:8000"] # listening on 0.0.0.0:8000
I'm not sure what was broken in your case but if someone from the future stumbles upon this: check how the --bind value is passed to gunicorn.
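One way to check, a sketch assuming a hypothetical image name: inspect how the CMD was tokenized.
docker inspect -f '{{json .Config.Cmd}}' myimage
# ["gunicorn","config.wsgi","--bind 0.0.0.0:8000"]  <- flag and value fused into one token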


How to use external files as arguments in Docker images [duplicate]

I am trying to mount a host directory into a Docker container so that any updates done on the host are reflected in the Docker containers.
Where am I going wrong? Here is what I did:
kishore$ cat Dockerfile
FROM ubuntu:trusty
RUN apt-get update
RUN apt-get -y install git curl vim
CMD ["/bin/bash"]
WORKDIR /test_container
VOLUME ["/test_container"]
kishore$ tree
.
├── Dockerfile
└── main_folder
    ├── tfile1.txt
    ├── tfile2.txt
    ├── tfile3.txt
    └── tfile4.txt
1 directory, 5 files
kishore$ pwd
/Users/kishore/tdock
kishore$ docker build --tag=k3_s3:latest .
Uploading context 7.168 kB
Uploading context
Step 0 : FROM ubuntu:trusty
---> 99ec81b80c55
Step 1 : RUN apt-get update
---> Using cache
---> 1c7282005040
Step 2 : RUN apt-get -y install git curl vim
---> Using cache
---> aed48634e300
Step 3 : CMD ["/bin/bash"]
---> Running in d081b576878d
---> 65db8df48595
Step 4 : WORKDIR /test_container
---> Running in 5b8d2ccd719d
---> 250369b30e1f
Step 5 : VOLUME ["/test_container"]
---> Running in 72ca332d9809
---> 163deb2b1bc5
Successfully built 163deb2b1bc5
Removing intermediate container b8bfcb071441
Removing intermediate container d081b576878d
Removing intermediate container 5b8d2ccd719d
Removing intermediate container 72ca332d9809
kishore$ docker run -d -v /Users/kishore/main_folder:/test_container k3_s3:latest
c9f9a7e09c54ee1c2cc966f15c963b4af320b5203b8c46689033c1ab8872a0ea
kishore$ docker run -i -t k3_s3:latest /bin/bash
root@0f17e2313a46:/test_container# ls -al
total 8
drwx------  2 root root 4096 Apr 29 05:15 .
drwxr-xr-x 66 root root 4096 Apr 29 05:15 ..
root@0f17e2313a46:/test_container# exit
exit
kishore$ docker -v
Docker version 0.9.1, build 867b2a9
I don't know how to check the boot2docker version.
Questions / issues I'm facing:
How do I link the main_folder to the test_container folder present inside the docker container?
I need to make this happen automatically. How do I do that without really using the run -d -v command?
What happens if boot2docker crashes? Where are the Docker files stored (apart from the Dockerfile)?
There are a couple of ways you can do this. The simplest is to use the Dockerfile ADD instruction like so:
ADD . /path/inside/docker/container
However, any changes made to this directory on the host after building the image will not show up in the container. This is because when building an image, Docker compresses the directory into a tar and bakes that build context into the image permanently.
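For example, assuming the layout from the question (Dockerfile next to main_folder), the ADDed files are frozen at build time, so you must rebuild to pick up host changes:
kishore$ docker build --tag=k3_s3:latest .            # re-run after every host change
kishore$ docker run --rm k3_s3:latest ls -al /test_container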
The second way is the one you attempted, which is to mount a volume. To stay as portable as possible, you cannot map a host directory to a docker container directory within a Dockerfile, because the host directory can change depending on which machine you are running on. To map a host directory to a docker container directory you need to use the -v flag with docker run, e.g.:
# Run a container using the `alpine` image, mount the `/tmp`
# directory from your host into the `/container/directory`
# directory in your container, and run the `ls` command to
# show the contents of that directory.
docker run \
-v /tmp:/container/directory \
alpine \
ls /container/directory
The asker was using Docker version 0.9.1, build 867b2a9; I will give you an answer for Docker version >= 17.06.
What you want, keeping the local directory synchronized with a container directory, is accomplished by mounting the volume with type bind. This binds the source (your system) and the target (the docker container) directories. It's almost the same as mounting a directory on Linux.
According to the Docker documentation, the appropriate flag to use is now --mount instead of -v. Here's its documentation:
--mount: Consists of multiple key-value pairs, separated by commas. Each key/value pair takes the form of a <key>=<value> tuple. The --mount syntax is more verbose than -v or --volume, but the order of the keys is not significant, and the value of the flag is easier to understand.
The type of the mount, which can be bind, volume, or tmpfs. (We are going to use bind)
The source of the mount. For bind mounts, this is the path to the file or directory on the Docker daemon host. May be specified as source or src.
The destination takes as its value the path where the file or directory will be mounted in the container. May be specified as destination, dst, or target.
So, to mount the current directory (source) on /test_container (target) we are going to use:
docker run -it --mount src="$(pwd)",target=/test_container,type=bind k3_s3
If these mount parameters contain spaces you must put quotes around them. When I know they don't, I use `pwd` instead:
docker run -it --mount src=`pwd`,target=/test_container,type=bind k3_s3
You will also have to deal with file permission, see this article.
You can use the -v option from the CLI; this facility is not available via the Dockerfile:
docker run -t -i -v <host_dir>:<container_dir> ubuntu /bin/bash
where host_dir is the directory on the host which you want to mount.
You don't need to worry about the directory in the container: if it doesn't exist, Docker will create it.
If you make any changes in host_dir on the host machine (under root privilege), they will be visible in the container, and vice versa.
2 successive mounts:
I guess many people here might be using boot2docker; the reason you don't see anything is that you are mounting a directory from boot2docker, not from your host.
You basically need 2 successive mounts:
the first one to mount a directory from your host into the boot2docker VM
the second to mount the new directory from boot2docker into your container, like this:
1) Mount local system on boot2docker
sudo mount -t vboxsf hostfolder /boot2dockerfolder
2) Mount boot2docker file on linux container
docker run -v /boot2dockerfolder:/root/containerfolder -i -t imagename
Then when you ls inside the containerfolder you will see the content of your hostfolder.
Is it possible that you use Docker on OS X via boot2docker or something similar?
I've had the same experience: the command is correct but nothing (sensible) is mounted in the container anyway.
As it turns out, it's already explained in the Docker documentation. When you type docker run -v /var/logs/on/host:/var/logs/in/container ..., then /var/logs/on/host is actually mapped from the boot2docker VM image, not your Mac.
You'll have to pipe the shared folder through your VM to your actual host (the Mac in my case).
For those who want to mount a folder in the current directory:
docker run -d --name some-container -v ${PWD}/folder:/var/folder ubuntu
I'm just experimenting with getting my SailsJS app running inside a Docker container to keep my physical machine clean.
I'm using the following command to mount my SailsJS/NodeJS application under /app:
cd my_source_code_folder
docker run -it -p 1337:1337 -v $(pwd):/app my_docker/image_with_nodejs_etc
[UPDATE] As of ~June 2017, Docker for Mac takes care of all the annoying parts of this where you have to mess with VirtualBox. It lets you map basically everything on your local host using the /private prefix. More info here. [/UPDATE]
All the current answers talk about Boot2docker. Since that's now deprecated in favor of docker-machine, this works for docker-machine:
First, ssh into the docker-machine vm and create the folder we'll be mapping to:
docker-machine ssh $MACHINE_NAME "sudo mkdir -p \"$VOL_DIR\""
Now share the folder to VirtualBox:
WORKDIR=$(basename "$VOL_DIR")
vboxmanage sharedfolder add "$MACHINE_NAME" --name "$WORKDIR" --hostpath "$VOL_DIR" --transient
Finally, ssh into the docker-machine again and mount the folder we just shared:
docker-machine ssh $MACHINE_NAME "sudo mount -t vboxsf -o uid=\"$U\",gid=\"$G\" \"$WORKDIR\" \"$VOL_DIR\""
Note: for UID and GID you can basically use whatever integers as long as they're not already taken.
This is tested as of docker-machine 0.4.1 and docker 1.8.3 on OS X El Capitan.
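Putting it together, a minimal end-to-end sketch; the machine name, folder, and uid/gid values here are illustrative assumptions:
MACHINE_NAME=default
VOL_DIR="$HOME/projects"   # host folder to share
U=1000 ; G=50              # any free uid/gid inside the VM
docker-machine ssh $MACHINE_NAME "sudo mkdir -p \"$VOL_DIR\""
WORKDIR=$(basename "$VOL_DIR")
vboxmanage sharedfolder add "$MACHINE_NAME" --name "$WORKDIR" --hostpath "$VOL_DIR" --transient
docker-machine ssh $MACHINE_NAME "sudo mount -t vboxsf -o uid=\"$U\",gid=\"$G\" \"$WORKDIR\" \"$VOL_DIR\""
docker run --rm -v "$VOL_DIR":/projects alpine ls /projects   # the share is now usable as a volume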
Using the command line:
docker run -it --name <WHATEVER> -p <LOCAL_PORT>:<CONTAINER_PORT> -v <LOCAL_PATH>:<CONTAINER_PATH> -d <IMAGE>:<TAG>
Using docker-compose.yaml:
version: '2'
services:
  cms:
    image: <IMAGE>:<TAG>
    ports:
      - <LOCAL_PORT>:<CONTAINER_PORT>
    volumes:
      - <LOCAL_PATH>:<CONTAINER_PATH>
Assume:
IMAGE: k3_s3
TAG: latest
LOCAL_PORT: 8080
CONTAINER_PORT: 8080
LOCAL_PATH: /volume-to-mount
CONTAINER_PATH: /mnt
Examples:
First create /volume-to-mount (skip if it exists):
$ mkdir -p /volume-to-mount
docker-compose -f docker-compose.yaml up -d
version: '2'
services:
  cms:
    image: ghost-cms:latest
    ports:
      - 8080:8080
    volumes:
      - /volume-to-mount:/mnt
Verify your container:
docker exec -it CONTAINER_ID ls -la /mnt
docker run -v /host/directory:/container/directory -t IMAGE-NAME /bin/bash
docker run -v /root/shareData:/home/shareData -t kylemanna/openvpn /bin/bash
On my system I've corrected the answer from nhjk; it works flawlessly when you add the -t flag.
On Mac OS, to mount a folder /Users/<name>/projects/ from your Mac at the root of your container:
docker run -it -v /Users/<name>/projects/:/projects <container_name> bash
ls /projects
If the host is Windows 10 then instead of forward slashes, use backslashes:
docker run -it -p 12001:80 -v c:\Users\C\Desktop\dockerStorage:/root/sketches
Make sure the host drive is shared (C in this case). In my case I got a prompt asking for share permission after running the command above.
For Windows 10 users, it is important to have the mount point inside the C:/Users/ directory. I tried for hours to get this to work. This post helped, but it was not obvious at first because the solution for Windows 10 is a comment on the accepted answer. This is how I did it:
docker run -it -p 12001:80 -v //c/Users/C/Desktop/dockerStorage:/root/sketches \
<your-image-here> /bin/bash
Then to test it, you can do echo TEST > hostTest.txt inside your container. You should be able to see this new file in the local host folder at C:/Users/C/Desktop/dockerStorage/.
As of Docker 18-CE, you can use docker run -v /src/path:/container/path to do 2-way binding of a host folder.
There is a major catch here though if you're working with Windows 10/WSL and have Docker-CE for Windows as your host and the docker-ce client tools in WSL. WSL knows about the entire / filesystem while your Windows host only knows about your drives. Inside WSL, you can use /mnt/c/projectpath, but if you try to docker run -v ${PWD}:/projectpath, you will find on the host that /projectpath/ is empty, because on the host /mnt means nothing.
If you work from /c/projectpath though and THEN do docker run -v ${PWD}:/projectpath, you WILL find that in the container, /projectpath reflects /c/projectpath in real time. There are no errors or any other way to detect this issue other than seeing empty mounts inside your guest.
You must also be sure to "share the drive" in the Docker for Windows settings.
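A sketch of the symptom described above, with illustrative paths (assumes /c is bound to the Windows C: drive inside WSL):
cd /mnt/c/projectpath
docker run --rm -v ${PWD}:/projectpath alpine ls /projectpath   # empty: the Windows host has no /mnt/c
cd /c/projectpath
docker run --rm -v ${PWD}:/projectpath alpine ls /projectpath   # shows your files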
Jul 2015 update: boot2docker now supports direct mounting. You can use -v /var/logs/on/host:/var/logs/in/container directly from your Mac prompt, without double mounting.
I've been having the same issue.
My command line looked like this:
docker run --rm -i --name $NAME -v `pwd`:/sources:z $NAME
The problem was with 'pwd', so I changed that to $(pwd):
docker run --rm -i --name $NAME -v $(pwd):/sources:z $NAME
How do I link the main_folder to the test_container folder present inside the docker container?
Your command below is correct, unless you're on a Mac using boot2docker (depending on future updates), in which case you may find the folder empty. See mattes' answer for a tutorial on correcting this.
docker run -d -v /Users/kishore/main_folder:/test_container k3_s3:latest
I need to make this run automatically, how to do that without really
using the run -d -v command.
You can't really get away from using these commands; they are intrinsic to the way Docker works. You would be best off putting them into a shell script to save yourself writing them out repeatedly.
What happens if boot2docker crashes? Where are the docker files stored?
If you manage to use the -v arg and reference your host machine then the files will be safe on your host.
If you've used 'docker build -t myimage .' with a Dockerfile then your files will be baked into the image.
Your Docker images, I believe, are stored in the boot2docker VM. I found this out when my images disappeared after I deleted the VM from VirtualBox. (Note: I don't know how VirtualBox works, so the images might still be hidden somewhere else, just not visible to Docker.)
Had the same problem. Found this in the docker documentation:
Note: The host directory is, by its nature, host-dependent. For this reason, you can’t mount a host directory from Dockerfile, the VOLUME instruction does not support passing a host-dir, because built images should be portable. A host directory wouldn’t be available on all potential hosts.
So, mounting a read/write host directory is only possible with the -v parameter in the docker run command, as the other answers point out correctly.
I found that any directory lying under a system directory like /var, /usr, /etc could not be mounted in the container.
The directory should be in user space. The -v switch instructs the docker daemon to mount a local directory into the container, for example:
docker run -t -d -v /{local}/{path}:/{container}/{path} --name {container_name} {imagename}
Here's an example with a Windows path:
docker run -P -it --name organizr --mount src="/c/Users/MyUserName/AppData/Roaming/DockerConfigs/Organizr",dst=/config,type=bind organizrtools/organizr-v2:latest
As a side note, during all of this hair pulling, having to wrestle with figuring out and retyping paths over and over again, I decided to whip up a small AutoHotkey script to convert a Windows path to a "Docker Windows" formatted path. This way, all I have to do is copy any Windows path that I want to use as a mount point to the clipboard, press the "Apps Key" on the keyboard, and it'll format it into a path format that Docker appreciates.
For example:
Copy this to your clipboard:
C:\Users\My PC\AppData\Roaming\DockerConfigs\Organizr
press the Apps Key while the cursor is where you want it on the command-line, and it'll paste this there:
"/c/Users/My PC/AppData/Roaming/DockerConfigs/Organizr"
Saves a lot of time for me. Here it is for anyone else who may find it useful.
; --------------------------------------------------------------------------------------------------------------
;
; Docker Utility: Convert a Windows Formatted Path to a Docker Formatted Path
; Useful for (example) when mounting Windows volumes via the command-line.
;
; By: J. Scott Elblein
; Version: 1.0
; Date: 2/5/2019
;
; Usage: Cut or Copy the Windows formatted path to the clipboard, press the AppsKey on your keyboard
; (usually right next to the Windows Key), it'll format it into a 'docker path' and enter it
; into the active window. Easy example usage would be to copy your intended volume path via
; Explorer, place the cursor after the "-v" in your Docker command, press the Apps Key and
; then it'll place the formatted path onto the line for you.
;
; TODO:: I may or may not add anything to this depending on needs. Some ideas are:
;
; - Add a tray menu with the ability to do some things, like just replace the unformatted path
; on the clipboard with the formatted one rather than enter it automatically.
; - Add 'smarter' handling so that it first confirms that the clipboard text is even a path in
; the first place. (would need to be able to handle Win + Mac + Linux)
; - Add command-line handling so the script doesn't need to always be in the tray, you could
; just pass the Windows path to the script, have it format it, then paste and close.
; Also, could have it just check for a path on the clipboard upon script startup, if found
; do its job, then exit the script.
; - Add an 'all-in-one' action, to copy the selected Windows path, and then output the result.
; - Whatever else comes to mind.
;
; --------------------------------------------------------------------------------------------------------------
#NoEnv
SendMode Input
SetWorkingDir %A_ScriptDir%
AppsKey::
; Create a new var, store the current clipboard contents (should be a Windows path)
NewStr := Clipboard
; Rip out the first 2 chars (should be a drive letter and colon) & convert the letter to lowercase
; NOTE: I could probably replace the following 3 lines with a regexreplace, but atm I'm lazy and in a rush.
tmpVar := SubStr(NewStr, 1, 2)
StringLower, tmpVar, tmpVar
; Replace the uppercase drive letter and colon with the lowercase drive letter and colon
NewStr := StrReplace(NewStr, SubStr(NewStr, 1, 2), tmpVar)
; Replace backslashes with forward slashes
NewStr := StrReplace(NewStr, "\", "/")
; Replace all colons with nothing
NewStr := StrReplace(NewStr, ":", "")
; Remove the last char if it's a trailing forward slash
NewStr := RegExReplace(NewStr, "/$")
; Append a leading forward slash if not already there
if RegExMatch(NewStr, "^/") == 0
NewStr := "/" . NewStr
; If there are any spaces in the path ... wrap in double quotes
if RegExMatch(NewStr, " ") > 0
NewStr := """" . NewStr . """"
; Send the result to the active window
SendInput % NewStr
To get this working in Windows 10 I had to open the Docker Settings window from the system tray and go to the Shared Drives section.
I then checked the box next to C. Docker asked for my desktop credentials to gain authorisation to write to my Users folder.
Then I ran the docker container following examples above and also the example on that settings page, attaching to /data in the container.
docker run -v c:/Users/<user.name>/Desktop/dockerStorage:/data -other -options
boot2docker together with VirtualBox Guest Additions
How to mount /Users into boot2docker
https://medium.com/boot2docker-lightweight-linux-for-docker/boot2docker-together-with-virtualbox-guest-additions-da1e3ab2465c
tl;dr Build your own custom boot2docker.iso with VirtualBox Guest
Additions (see link) or download
http://static.dockerfiles.io/boot2docker-v1.0.1-virtualbox-guest-additions-v4.3.12.iso
and save it to ~/.boot2docker/boot2docker.iso.
Note that in Windows you'll have to provide the absolute path.
Host: Windows 10
Container: Tensorflow Notebook
Below worked for me.
docker run -t -i -v D:/projects/:/home/chankeypathak/work -p 8888:8888 jupyter/tensorflow-notebook /bin/bash
I had the same issues; I was trying to mount the C:\Users\ folder on Docker.
This is how I did it with the Docker Toolbox command line:
$ docker run -it --name <containername> -v /c/Users:/myVolData <imagename>
You can also do this with the Portainer web application, for a different visual experience.
First pull the Portainer image:
docker pull portainer/portainer
Then create a volume for Portainer:
docker volume create portainer_data
Also create a Portainer container:
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
You will be able to access the web app with your browser at this URL: "http://localhost:9000". At the first login, you will be prompted to set your Portainer admin credentials.
In the web app, follow these menus and buttons: (Container > Add container > Fill settings > Deploy Container)
I had trouble creating a "mount" volume with Portainer, and I realized I had to click "bind" when creating my container's volume. Below is an illustration of the volume binding settings that worked for my container creation with a mounted volume bound to the host.
P.S.: I'm using Docker 19.035 and Portainer 1.23.1
I had the same requirement, to mount a host directory from a container, and I used a volume mount command. During testing I noticed that it creates files inside the container too, but after some digging I found that they are just symbolic links, and the actual file system used is from the host machine.
Quoting from the Official Website:
Make sure you don’t have any previous getting-started containers running.
Run the following command from the app directory.
x86-64 Mac or Linux device:
docker run -dp 3000:3000 \
-w /app -v "$(pwd):/app" \
node:12-alpine \
sh -c "yarn install && yarn run dev"
Windows (PowerShell):
docker run -dp 3000:3000 `
-w /app -v "$(pwd):/app" `
node:12-alpine `
sh -c "yarn install && yarn run dev"
Apple silicon Mac or another ARM64 device:
docker run -dp 3000:3000 \
-w /app -v "$(pwd):/app" \
node:12-alpine \
sh -c "apk add --no-cache python2 g++ make && yarn install && yarn run dev"
Explaining:
-dp 3000:3000 - same as before. Run in detached (background) mode and create a port mapping
-w /app - sets the "working directory" or the current directory that the command will run from
-v "$(pwd):/app" - bind mount the current directory from the host into the /app directory in the container
node:12-alpine - the image to use. Note that this is the base image for our app from the Dockerfile
sh -c "yarn install && yarn run dev" - the command. We're starting a shell using sh (alpine doesn't have bash) and running yarn install to install all dependencies and then running yarn run dev. If we look in the package.json, we'll see that the dev script is starting nodemon.

Accessing environment variables in Docker containers linked with --link

I'm setting up the development environment for my application inside Docker containers, at the moment I have these containers:
myapp-data - Holds application source code and log files
myapp-phpfpm - Runs the php5-fpm process for Nginx
myapp-nginx - Runs the Nginx web server that serves the application
This setup works beautifully, I'm really happy with it. But my application needs a MySQL database to connect to, so I'm using the official MySQL image, and running it like so:
sudo docker run --name myapp-mysql -e "MYSQL_ROOT_PASSWORD=iamroot" -e "MYSQL_USER=redacted" -e "MYSQL_PASSWORD=redacted" -e "MYSQL_DATABASE=redacted" -d mysql
This also works great. But my myapp-phpfpm container needs to be linked to the myapp-mysql container in order to expose MySQL's connection details to my application. So I restart my myapp-phpfpm container:
sudo docker run --privileged=true --name myapp-phpfpm --volumes-from myapp-data --link myapp-mysql:mysql -d readr/phpfpm
So now my myapp-phpfpm container is linked to my myapp-mysql container so I should be able to access the database within my PHP application.
The problem is I can't. The environment variables don't exist inside the PHP application. If I do:
die(var_dump(`printenv`));
I don't get the MySQL environment variables. To try to debug I did a whoami to find out what user PHP is running as, which is www-data. I then created a bash process inside the container, used su www-data to become the www-data user and did printenv there. Sure enough, the MySQL environment variables do exist there:
MYSQL_PORT_3306_TCP_PORT=3306
MYSQL_PORT_3306_TCP=tcp://172.17.1.118:3306
MYSQL_ENV_MYSQL_ROOT_PASSWORD=iamroot
... etc ...
So, how can I access the environment variables that Docker exposes about my myapp-mysql container within PHP?
I solved this by creating a custom start.sh script that then gets called from my Dockerfile:
#!/bin/bash
# Function to update the fpm configuration to make the service environment variables available
function setEnvironmentVariable() {
if [ -z "$2" ]; then
echo "Environment variable '$1' not set."
return
fi
# Check whether variable already exists
if grep -q $1 /etc/php5/fpm/pool.d/www.conf; then
# Reset variable
sed -i "s/^env\[$1.*/env[$1] = $2/g" /etc/php5/fpm/pool.d/www.conf
else
# Add variable
echo "env[$1] = $2" >> /etc/php5/fpm/pool.d/www.conf
fi
}
# Grep for variables that look like MySQL (MYSQL)
for _curVar in `env | grep MYSQL | awk -F = '{print $1}'`;do
# awk has split them by the equals sign
# Pass the name and value to our function
setEnvironmentVariable ${_curVar} ${!_curVar}
done
# start php-fpm
exec /usr/sbin/php5-fpm
This then adds the environment variables to the PHP5-FPM config so they can be accessed from within PHP scripts.
php-fpm by default clears all environment variables; see /etc/php5/fpm/pool.d/www.conf:
; Setting to "no" will make all environment variables available to PHP code
; via getenv(), $_ENV and $_SERVER.
; Default Value: yes
;clear_env = no
You can fix this by uncommenting that setting from your Dockerfile:
RUN sed -i -e "s/;clear_env\s*=\s*no/clear_env = no/g" /etc/php5/fpm/pool.d/www.conf
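A quick check after rebuilding, a sketch reusing the container name and pool path from the question:
docker exec myapp-phpfpm grep clear_env /etc/php5/fpm/pool.d/www.conf   # expect: clear_env = no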
I'd recommend using something like fig and just passing the env vars to both containers at startup. If you really want to, you could docker inspect any container from any other container if you bind-mount the Docker socket, then do something like this:
docker inspect -f {{.Config.Env}} myapp-mysql
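A sketch of that bind-mount approach; using the official docker CLI image here is an assumption:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker \
docker inspect -f '{{.Config.Env}}' myapp-mysql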
The problem may not be the environment variables - it may be your PHP installation.
TL;DR: environment variables that are accessible when you're running your application under Apache & PHP may not be available if you're using nginx or lighttpd and FastCGI.
The longer version
Here's the way I understand it (and it's probably wrong or incomplete because my experience with this is quite limited). Because PHP is not running as part of the web server under nginx with FastCGI, it does not have access to the shell in which the server was started and therefore does not have access to the environment variables in that shell.
The solution is to declare the variables you're interested in as part of the configuration. This answer is kind of terse, but it contains the basic answer to this problem.
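For php-fpm specifically, a minimal sketch of such a declaration, assuming the php5 pool config path from the earlier answers and the variable names from the question:
cat >> /etc/php5/fpm/pool.d/www.conf <<'EOF'
env[MYSQL_PORT_3306_TCP] = $MYSQL_PORT_3306_TCP
env[MYSQL_ENV_MYSQL_ROOT_PASSWORD] = $MYSQL_ENV_MYSQL_ROOT_PASSWORD
EOF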

dockerfile - Unable to connect error

So here's what I have to do: I need to set up some containers automatically using Docker. One of them is like this: Debian Squeeze with limited CPU shares and limited memory (1 CPU share and 512 MB memory), preinstalled apache2, build-essential, php5, mysql-server-5.5, openssh-server, and with some ports opened (8000 for Apache and 1500 for MySQL). So I created the following dockerfile:
FROM debian:squeeze
MAINTAINER Name < email : >
# Update the repository sources list
RUN apt-get update
# Install apache, PHP, and supplimentary programs. curl and lynx-cur are for debugging the container.
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install apache2 build-essential php5 mysql-server openssh-server libapache2-mod-php5 php5-mysql php5-gd php-pear php-apc php5-curl curl lynx-cur
# Enable apache mods.
RUN a2enmod php5
RUN a2enmod rewrite
# Manually set up the apache environment variables
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
EXPOSE 80
# Copy site into place.
ADD www /var/www/site
# Update the default apache site with the config we created.
ADD apache-config.conf /etc/apache2/sites-enabled/000-default.conf
# By default, simply start apache.
CMD /usr/sbin/apache2ctl -D FOREGROUND
#CMD [ "mysqladmin -u root password mysecretpasswordgoeshere"]
EXPOSE 3306
the content of apache-config.conf is this:
<VirtualHost *:80>
ServerAdmin me@mydomain.com
DocumentRoot /var/www/site
<Directory /var/www/site/>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order deny,allow
Allow from all
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
and in www folder i put a php file with this code:
<?php
$connect=mysql_connect("localhost:1500","root","") or die("Unable to Connect");
?>
to test the connection to the MySQL server.
Then I build all this into an image like this:
sudo docker build --rm --tag="tag_name" .
and then I run the image like this:
sudo docker run -c=1 -m="512m" --net=bridge -p 8000:80 -p 1500:3306 -d --name="container_name" tag_name
It seems to work: the Apache server works when I access localhost:8000/site in my browser, but it shows "Unable to Connect". What am I doing wrong?
And another problem is that the container is running but I can't attach to it. I run this command
sudo docker attach CONTAINER_ID
and then nothing happens; I can't do anything else from there. What am I doing wrong?
I have to build a few more dockerfiles similar to this one to create containers. All of those must be hosted on a ZFS file system, and I have to configure a container repository of 50 GB based on it. What does this mean and how do I do that?
I'm sorry for my English, it's not my native language :(
Thank you in advance
MySQL issue
in the PHP code
$connect=mysql_connect("localhost:1500","root","") or die("Unable to Connect");
localhost refers to the container's own IP address. Since there is no MySQL server running in that container, the connection will fail.
In this gist, I've changed your example a bit to have the container start both MySQL and Apache (I assume this was your original intent) using the following instruction: CMD bash -c '(mysqld &); /usr/sbin/apache2ctl -D FOREGROUND', and changed the PHP code to connect to the MySQL server on localhost:3306.
Docker attach
The docker attach command is meant to allow you to interact with the process currently running in the foreground of a container. Unless that process is a shell, it won't provide you with a shell in that container.
Take this example:
Start a container running a shell process:
docker run -it --rm base bash
You are now in interactive mode in your container and can play around with the shell running in the foreground in that container:
root@de8f16a13571:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin selinux srv sys tmp usr var
If you now exit the shell by typing exit, the shell process will end, and as that was the process running in the foreground in the container, that container will stop.
root@de8f16a13571:/# exit
exit
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Now start a new container named test running bash again:
docker run -it --name test base bash
Verify you can interact with it and detach from it by hitting Ctrl+p followed by Ctrl+q. You end up back in the Docker host shell.
Verify that the container named test is still running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
81f0f1094f4a base:latest "bash" 6 seconds ago Up 5 seconds test
You can then use the docker attach command to attach to the bash program in the container:
docker attach test
root@81f0f1094f4a:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin selinux srv sys tmp usr var
ZFS
And regarding ZFS, I don't know what all that means either. Also note that having 3 questions at once makes it difficult for the community to come up with a single answer that answers all 3; maybe consider posting a new question for those.
Please comment if my assumptions about how you run MySQL or what your intent is with docker attach are wrong.

Google Compute Engine: how to set hostname permanently?

How do I set the hostname of an instance in GCE permanently? I can set it via hostname, but after a reboot it is gone again.
I tried to feed in metadata (hostname:f.q.d.n), but that did not do the job, although it should work via metadata (https://github.com/GoogleCloudPlatform/compute-image-packages/tree/master/google-startup-scripts).
Anybody have an idea?
The simplest way to achieve it is to create a simple script, and that's what I have done.
I have stored the hostname in the instance metadata, and I retrieve it every time the system restarts in order to set the hostname, using a cron job.
$ gcloud compute instances add-metadata <instance> --metadata hostname=<new_hostname>
$ sudo crontab -e
And this is the line that must be appended to the crontab:
@reboot hostname $(curl --silent "http://metadata.google.internal/computeMetadata/v1/instance/attributes/hostname" -H "Metadata-Flavor: Google")
After these steps, every time you restart your instance it will have the hostname <new_hostname>.
You can check it in the prompt or with the command: hostname
You need to remove the file /etc/dhcp/dhclient.d/google_hostname.sh
rm -rf /etc/dhcp/dhclient.d/google_hostname.sh
rm -rf /etc/dhcp/dhclient-exit-hooks.d/google_set_hostname
It's worth noting that this script is needed in order to run gcloud beta compute instances create with the --hostname flag. If this script is absent on a base image, new VM instances will preserve the source hostname/FQDN!
Edit rc.local
sudo nano /etc/rc.local
Add your line under the rest:
hostname your.hostname.com
Make sure to run the following afterwards so the script is executed:
chmod +x /etc/rc.d/rc.local
Reboot, and profit.
That isn't possible. Please take a look at this answer. The following article explains that the hostname is part of the default metadata entries and it is not possible to manually edit any of the default metadata pairs. As such, you would need to use a script or something else to change the hostname every time the system restarts; otherwise it will automatically get re-synced with the metadata server on every reboot.
You can find information on startup scripts for GCE in this article. You can visit this one for info on how to apply the script to an instance.
You can also create a simple startup-script to do the job:
$ gcloud compute instances add-metadata <instance-name> --zone <instance-zone> --metadata startup-script='#! /bin/bash
hostname <hostname>'
Notice that if you already have a startup-script, you need to append the command below to your existing startup-script; otherwise you will replace the whole startup-script:
$ hostname instance-name
I managed to set the hostname on GCE running CentOS.
Source: desantolo.com
Click EDIT on your instance
Go to "Custom metadata" section
Add hostname + your.hostname.tld (change "your.hostname.tld" to your actual hostname)
run curl --silent "http://metadata.google.internal/computeMetadata/v1/instance/attributes/hostname" -H "Metadata-Flavor: Google"
run sudo env EDITOR=nano crontab -e to edit the crontab
add the line @reboot hostname $(curl --silent "http://metadata.google.internal/computeMetadata/v1/instance/attributes/hostname" -H "Metadata-Flavor: Google")
On your keyboard Ctrl + X
On your keyboard hit Y
On your keyboard hit Enter
run reboot
after the system has rebooted, run hostname and see if your changes applied
Good luck!
If anyone finds this solution does not work for them on a GCE instance, then I suggest you try using exit hooks as described by Google Support.
In fact, some distributions of Linux like CentOS and Debian use the dhclient-script script to configure the network parameters of the machine. This script is invoked from time to time by dhclient, which is the dynamic host configuration protocol client and provides a means for configuring one or more network interfaces using the DHCP protocol, BOOTP protocol, or, if these protocols fail, by statically assigning an address.
The following text is a quote from the man (manual) page of
dhclient-script:
After all processing has completed, /usr/sbin/dhclient-script checks for the presence of an executable /etc/dhcp/dhclient-exit-hooks script, which if present is invoked using the '.' command. The exit status of dhclient-script will be passed to dhclient-exit-hooks in the exit_status shell variable, and will always be zero if the script succeeded at the task for which it was invoked. The rest of the environment as described previously for dhclient-enter-hooks is also present. The /etc/dhcp/dhclient-exit-hooks script can modify the value of exit_status to change the exit status of dhclient-script.
That being said, by taking a look into the code snippet of
dhclient-script, we can see the script checks for the existence of an
executable /etc/dhcp/dhclient-up-hooks script and all scripts in
/etc/dhcp/dhclient-exit-hooks.d/ directory.
ETCDIR="/etc/dhcp"
exit_with_hooks() {
    exit_status="${1}"

    if [ -x ${ETCDIR}/dhclient-exit-hooks ]; then
        . ${ETCDIR}/dhclient-exit-hooks
    fi

    if [ -d ${ETCDIR}/dhclient-exit-hooks.d ]; then
        for f in ${ETCDIR}/dhclient-exit-hooks.d/*.sh ; do
            if [ -x ${f} ]; then
                . ${f}
            fi
        done
    fi

    exit ${exit_status}
}
Therefore, in order to modify the hostname of your Linux VM you can create a custom script with a .sh extension and place it in the /etc/dhcp/dhclient-exit-hooks.d/ directory. If this directory does not exist, you can create it. The content of the custom script will be:
hostname YourFQDN
Be sure to make this new .sh file executable:
chmod +x YourFQDN.sh
Source: (https://groups.google.com/d/msg/gce-discussion/olG_nXZ-Jaw/Y9HMl4mlBwAJ)
I'm not sure I understand Adrián's answer. It seems overly complex, since you have to run a script on each boot; why not just use hostname?
vi /etc/rc.local
add:
hostname your_hostname
That's it. Tested and working. No need to fiddle with metadata and such.
Non-cron/metadata/script solution.
Edit /etc/dhclient-(network-interface).conf, or create it if it doesn't exist.
Example:
sudo nano /etc/dhclient-eth0.conf
Then add the following line, replacing the desired FQDN between the double quotes:
supersede host-name "hostname.domain-name";
This persists between reboots, and both hostname and hostname -f work as intended.
Tested on Debian.
The dhclient sets the hostname using DHCP
You can override this by creating a custom hook script in /etc/dhcp/dhclient-exit-hooks.d/custom_set_hostname that reads the hostname from /etc/hostname:
if [ -f "/etc/hostname" ]; then
new_host_name=$(cat /etc/hostname)
fi
The script must have the execute permission.
It's important to set the new_host_name variable rather than calling the hostname command directly, as any call to the hostname command will be overridden by another hook or by dhclient-script itself, which uses this variable.
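Putting this answer together, a minimal sketch using the hook path and name from above:
sudo tee /etc/dhcp/dhclient-exit-hooks.d/custom_set_hostname >/dev/null <<'EOF'
if [ -f /etc/hostname ]; then
    new_host_name=$(cat /etc/hostname)
fi
EOF
sudo chmod +x /etc/dhcp/dhclient-exit-hooks.d/custom_set_hostname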
When creating a VM, you can specify a custom FQDN hostname as an optional parameter. This feature is currently in Beta.
$ gcloud beta compute instances create INSTANCE_NAME --hostname example.hostname
This should work across OSes, and prevent the need for workaround scripts.
More info in the docs.
-- Sirui (Product Manager, Google Compute Engine)
In my CentOS VMs I found that the script /etc/dhcp/dhclient.d/google_hostname.sh, installed by the google-compute-engine RPM, actually changed the hostname. This happens when the instance gets its IP address during boot.
While it's not the long-term solution I really want, for now I simply deleted this script. The hostname I set with hostnamectl now persists after a reboot.
The script is likely to be in exactly the same place in Debian/Ubuntu VMs, but of course I don't run any of those.
There is a hack you can use to achieve this, as I did. Just do:
sudo chattr +i /etc/hosts
This command actually makes the file "(i)mmutable", which means even root can't change it (unless root does chattr -i /etc/hosts first, of course).
As above, you can undo this with sudo chattr -i /etc/hosts
Cheers!
An easy way to fix this is to set up a startup script with custom metadata.
Key: startup-script
Value:
#! /bin/bash
hostname <desired hostname>

Setting dynamic path in the redis.conf using the Environment variable

I have an environment variable MY_HOME which holds the path to a directory, /home/abc.
Now, I have a redis.conf file in which I need to set this path, like this:
redis.conf
pidfile $MY_HOME/local/var/pids/redis.pid
logfile $MY_HOME/local/var/log/redis.log
dir $MY_HOME/local/var/lib/redis/
like we do on the command line, so that my config file picks up the path based on the environment variable.
Because Redis can read its config from stdin, I do something very similar to what @jolestar suggested. I put placeholder variables in my redis.conf and then replace them using sed in my Redis launcher. For example:
==========
$MY_HOME/redis/redis.conf
==========
...
pidfile {DIR}/pids/server{n}.pid
port 123{n}
...
Then I have a script to start Redis:
==========
runredis.sh
==========
DIR=$MY_HOME/redis
for n in {1..4}; do
echo "starting redis-server #$n ..."
sed -e "s/{n}/$n/g" -e "s/{DIR}/$DIR/g" < $DIR/redis.conf | redis-server -
done
I've been using this approach forever and it works out well.
This is not supported by Redis; however, it's achievable with the use of envsubst (which is installed by default on almost all modern distros) to substitute in the values of environment variables before running redis-server:
envsubst '$HOME:$MY_HOME' < ~/.tmpl_redis.conf > ~/.redis.conf && redis-server ~/.redis.conf
# or
envsubst '$MY_HOME' < ~/.tmpl_redis.conf > ~/.redis.conf && redis-server !#:5 # 5th argument from the previous command
I also looked for a solution, but the Redis config has no support for environment variables.
I think there are 2 methods:
start up Redis with a script; the script reads the environment variables and rewrites the Redis config (sketched below).
start up Redis from the command line, and pass the env var values as parameters.
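A minimal sketch of the first method, assuming a template file redis.conf.template that contains literal $MY_HOME placeholders:
#!/bin/bash
# expand $MY_HOME in the template, then start Redis with the result
sed "s|\$MY_HOME|$MY_HOME|g" "$MY_HOME/redis.conf.template" > /tmp/redis.conf
exec redis-server /tmp/redis.conf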