I am trying to set up a Keycloak server inside a Docker container. I want it to
use a MySQL database whose data lives on the host machine, managed by a MySQL
instance that also runs in a Docker container. However, I cannot get this to work.
So far I have tried the following:
# Create network for keycloak
docker network create edci-network
# First start up MySQL server…
docker run \
--name edci-keycloak-mysql \
-d \
--net edci-network \
-e MYSQL_DATABASE=edci-keycloak \
-e MYSQL_USER=edci-keycloak \
-e MYSQL_PASSWORD=password \
-v /path/to/local/database:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=root_password \
mysql
# … then run Keycloak with token exchange enabled.
docker run \
--name edci-keycloak \
-d \
-p 9000:8080 \
--net edci-network \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=admin \
-e DB_ADDR=edci-keycloak-mysql \
-e DB_PASSWORD=password \
-e JAVA_OPTS_APPEND="
-Dkeycloak.profile.feature.token_exchange=enabled
-Dkeycloak.profile.feature.admin_fine_grained_authz=enabled
" \
quay.io/keycloak/keycloak:15.0.2
However, as the server starts up, the Keycloak logs report
Using H2 database
What am I doing wrong here? The MySQL example on the Keycloak Docker Hub page
does not work as-is either.
Note that using Docker Compose is not an option here, so answers relying on it
won't work for me. Thanks for any assistance.
Keycloak container logs: https://pastebin.com/b56cmxBJ.
You are not using the predefined values (e.g. the Keycloak container expects the DB name keycloak), so you need to configure all DB details (the DB_* environment variables) explicitly:
# Create network for keycloak
docker network create edci-network
# First start up MySQL server…
docker run \
--name edci-keycloak-mysql \
-d \
--net edci-network \
-e MYSQL_DATABASE=edci-keycloak \
-e MYSQL_USER=edci-keycloak \
-e MYSQL_PASSWORD=password \
-v /path/to/local/database:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=root_password \
mysql
# … then run Keycloak with token exchange enabled.
docker run \
--name edci-keycloak \
-d \
-p 9000:8080 \
--net edci-network \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=admin \
-e DB_VENDOR=mysql \
-e DB_ADDR=edci-keycloak-mysql \
-e DB_DATABASE=edci-keycloak \
-e DB_USER=edci-keycloak \
-e DB_PASSWORD=password \
-e JAVA_OPTS_APPEND="
-Dkeycloak.profile.feature.token_exchange=enabled
-Dkeycloak.profile.feature.admin_fine_grained_authz=enabled
" \
quay.io/keycloak/keycloak:15.0.2
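After starting the container, you can confirm which database Keycloak picked up by grepping its startup logs (the exact wording may differ between versions); if the change took effect, the "Using H2 database" line from the question should no longer appear:
# Check which database Keycloak reports at startup
docker logs edci-keycloak 2>&1 | grep -i database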
Running MySQL/MariaDB in a Docker container:
docker run -p 3306:3306 --name $(DATABASE_NAME) -v /tmp/mysql:/var/lib/mysql -e MYSQL_DATABASE=$(DATABASE_NAME) -e MYSQL_USER=$(DATABASE_USER) -e MYSQL_ROOT_PASSWORD=$(DATABASE_PASSWORD) -d mariadb:latest > /dev/null
And then running Django version 4 locally with:
manage.py runserver 127.0.0.1:8000
Error
django.db.utils.OperationalError: (2013, "Lost connection to MySQL server at 'reading initial communication packet', system error: 2")
I can successfully connect to the database with MySQL Workbench as well as the command:
mysql -h 127.0.0.1 -P 3306 -u root -p <database>
I am launching Django and the MySQL/MariaDB Docker container from a Makefile.
Makefile
SHELL := /bin/bash
.PHONY: dj-start-local
dj-start-local: start-mysql
PYTHONPATH=. django_project/src/manage.py runserver 127.0.0.1:8000
.PHONY: start-mysql
start-mysql:
docker run -p 3306:3306 --name $(DATABASE_NAME) \
-v /tmp/mysql:/var/lib/mysql \
-e MYSQL_DATABASE=$(DATABASE_NAME) \
-e MYSQL_USER=$(DATABASE_USER) \
-e MYSQL_ROOT_PASSWORD=$(DATABASE_PASSWORD) \
-d mariadb:latest > /dev/null
Use the healthcheck.sh script that ships in the container. Set MARIADB_MYSQL_LOCALHOST_USER=1 to create a mysql@localhost user that the script can use to access the database.
The healthcheck then waits until the database is fully started, regardless of how long that takes.
Makefile:
.PHONY: start-mariadb
start-mariadb:
docker run -p 3306:3306 --name $(DATABASE_NAME) \
-e MARIADB_DATABASE=$(DATABASE_NAME) \
-e MARIADB_USER=$(DATABASE_USER) \
-e MARIADB_PASSWORD=$(DATABASE_PASSWORD) \
-e MARIADB_ROOT_PASSWORD=$(DATABASE_PASSWORD) \
-e MARIADB_MYSQL_LOCALHOST_USER=1 \
-v /tmp/mysql:/var/lib/mysql \
-d mariadb:latest
while ! docker exec $(DATABASE_NAME) healthcheck.sh --su=mysql --connect --innodb_initialized; do sleep 1; done
docker exec --user mysql $(DATABASE_NAME) mariadb -e 'select "hello world"'
Test run:
$ make start-mariadb
docker run -p 3306:3306 --name dd \
-e MARIADB_DATABASE=dd \
-e MARIADB_USER=dd \
-e MARIADB_PASSWORD=dd \
-e MARIADB_ROOT_PASSWORD=dd \
-e MARIADB_MYSQL_LOCALHOST_USER=1 \
-d mariadb:latest
53066fffa293ed061743024e387bd7fb6f1c664efd603c7a657ba88e307be308
while ! docker exec dd healthcheck.sh --su=mysql --connect --innodb_initialized; do sleep 1; done
healthcheck connect failed
healthcheck connect failed
healthcheck connect failed
healthcheck connect failed
docker exec --user mysql dd mariadb -e 'select "hello world"'
hello world
hello world
Note: I added MARIADB_PASSWORD, since without it the database/user would not be created.
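With this target in place, the dj-start-local target from the question only needs to depend on start-mariadb instead of start-mysql, so Django is not launched until the healthcheck loop has passed. A minimal sketch, reusing the paths from the question's Makefile:
.PHONY: dj-start-local
dj-start-local: start-mariadb
	PYTHONPATH=. django_project/src/manage.py runserver 127.0.0.1:8000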
The issue may be due to a race condition, where Django is attempting to connect to the database before it is ready. Try waiting a few seconds after starting the Docker container.
Simple Solution
Makefile
.PHONY: start-mysql
start-mysql:
docker run -p 3306:3306 --name $(DATABASE_NAME) \
-v /tmp/mysql:/var/lib/mysql \
-e MYSQL_DATABASE=$(DATABASE_NAME) \
-e MYSQL_USER=$(DATABASE_USER) \
-e MYSQL_ROOT_PASSWORD=$(DATABASE_PASSWORD) \
-d mariadb:latest
sleep 4
Better Solution using healthcheck.sh
In his answer, Dan Black pointed out that the MariaDB Docker container includes a healthcheck.sh script.
If you only want to check if the MariaDB container can receive connections, then healthcheck.sh can be run without specifying a user (avoiding authorization complications).
The Makefile below will check for a database connection every second until the database is ready.
Makefile (advanced version)
.PHONY: mysql-start
mysql-start: mysql-stop
docker run -p 3306:3306 --name $(DATABASE_NAME) \
-v /tmp/mysql:/var/lib/mysql \
-e MYSQL_DATABASE=$(DATABASE_NAME) \
-e MYSQL_USER=$(DATABASE_USER) \
-e MYSQL_ROOT_PASSWORD=$(DATABASE_PASSWORD) \
-d mariadb:latest
while ! docker exec $(DATABASE_NAME) healthcheck.sh --connect 2> /dev/null; do sleep 1; done
I am setting up MySQL + Joomla in Docker. Every time I delete the old container and create a new Joomla container with the same parameters, I have to go through the Joomla installation again. How can I make the Joomla website persistent, so that I don't have to redo the installation in the web browser every time I create a new Joomla container?
I tried creating an image of the already installed Joomla container, but that did not help.
I also mounted the /var/www/html folder of the Joomla container, but the problem remains.
These are the commands I use to create both containers:
mysql:
docker run \
--name JoomlaDB \
-d \
-p 3306:3306 \
-e MYSQL_USER=JoomlaUser \
-e MYSQL_PASSWORD=password \
-e MYSQL_ROOT_PASSWORD=password \
-e MYSQL_DATABASE=joomla \
-v db_volume:/var/lib/mysql \
mysql:5.7
Joomla:
docker run \
-d \
-p 8080:80 \
-e JOOMLA_DB_HOST=JoomlaDB:3306 \
-e JOOMLA_DB_USER=JoomlaUser \
-e JOOMLA_DB_PASSWORD=password \
-e JOOMLA_DB_NAME=joomla \
--link JoomlaDB:mysql \
-v joomla_volume:/var/www/html \
joomla
Thanks in advance for the answers!
Hello, I have created a MySQL image and started a container with these commands:
docker run --name db-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:latest --> run a container with MySQL
docker pull mysql --> pull the MySQL image
docker run --name db_mysql -e MYSQL_ROOT_PASSWORD=1234 -e MYSQL_DATABASE=mami -p 3306:3306 -d mysql
I executed them, but after that I don't know what to do: how do I create a database inside this container, and how do I set up the backup job?
If someone could help me with step-by-step instructions, that would be great.
You could use the cron service on your host system to run the mysqldump command described in the documentation for the mysql Docker image.
Crontab example for running the command every night at 2:00 am:
00 02 * * * /usr/bin/docker exec db-mysql sh -c 'exec mysqldump --all-databases -uroot -p"my-secret-pw"' > /some/path/on/your/host/all-databases.sql
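To restore from such a dump, the documentation for the mysql image describes piping the file back into the container; roughly (reusing the dump path from the crontab entry above):
# Restore all databases from the nightly dump
docker exec -i db-mysql sh -c 'exec mysql -uroot -p"my-secret-pw"' < /some/path/on/your/host/all-databases.sql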
Alternatively, you could run another container designed just for this task, such as databack/mysql-backup (formerly deitch/mysql-backup):
docker run --name db-mysql -d \
-e MYSQL_ROOT_PASSWORD=my-secret-pw \
-e MYSQL_USER=my-user \
-e MYSQL_PASSWORD=my-user-password \
-e MYSQL_DATABASE=my-db \
mysql:latest
docker run -d --restart=always \
--name=db-backup \
-e DB_DUMP_BEGIN=0200 \
-e DB_SERVER=db-mysql \
-e DB_USER=my-user \
-e DB_PASS=my-user-password \
-e DB_NAMES=my-db \
-e DB_DUMP_TARGET=/db \
-v /somewhere/on/your/host/:/db \
databack/mysql-backup
You also need to make sure the /somewhere/on/your/host/ folder is writable by users of group 1005:
sudo chgrp 1005 /somewhere/on/your/host/
sudo chmod g+rwX /somewhere/on/your/host/
This container also needs a way to connect to your db-mysql container. For that, create a Docker network and connect both containers to it:
docker network create mysql-backup-net
docker network connect mysql-backup-net db-backup
docker network connect mysql-backup-net db-mysql
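You can verify that both containers are attached with:
# Both db-backup and db-mysql should appear under "Containers"
docker network inspect mysql-backup-net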
Is it possible to have several Flume agents (sinks) under the same configuration file (agent.conf)?
I think so. It is a matter of including all the per-sink configuration in the same agent.conf file. There is an example here.
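As a rough, partial sketch (the agent name and property values below are illustrative, following the usual Flume/Cygnus conventions rather than a tested configuration), a single agent.conf simply declares several sinks, each bound to its own channel:
# One agent, two sinks, each with its own channel
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = mysql-sink mongo-sink
cygnus-ngsi.channels = mysql-channel mongo-channel
cygnus-ngsi.sinks.mysql-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMySQLSink
cygnus-ngsi.sinks.mysql-sink.channel = mysql-channel
cygnus-ngsi.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMongoSink
cygnus-ngsi.sinks.mongo-sink.channel = mongo-channel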
The preferred way for FIWARE is to use Docker. So, let's imagine we need a Cygnus instance and we want the data to be "sinked" to both MongoDB and MySQL.
A good practice would be to write a Docker Compose file to build the application, but in this case I'll show how to deploy each required container separately.
We want to deploy a MySQL instance so Cygnus can store data in it. We can do it this way:
sudo docker run --name mysql_showcases \
-e MYSQL_ROOT_PASSWORD=root \
-e MYSQL_DATABASE=dbcygnus \
-e MYSQL_USER=cygnus \
-e MYSQL_PASSWORD=cygnus \
-e MYSQL_ROOT_HOST='%' \
-p 3306:3306 -it -v /data/mysql:/var/lib/mysql -d -h mysql mysql/mysql-server:5.5
We want to deploy a MongoDB so Cygnus can also store data in it. We can do it this way:
sudo docker run --name mongo_showcases -v /data/mongodb:/data/db -d \
-h mongo mongo:3.6
Finally, we can deploy Cygnus in a container linked with both previous containers:
docker run -d --name cygnus_showcases --link mysql_showcases --link mongo_showcases \
-p 8081:8081 -p 5050:5050 \
-e CYGNUS_MYSQL_HOST=mysql_showcases -e CYGNUS_MYSQL_PORT=3306 \
-e CYGNUS_MYSQL_USER=root -e CYGNUS_MYSQL_PASS=root \
-e CYGNUS_MONGO_HOSTS=mongo_showcases:27017 \
fiware/cygnus-ngsi
So, we have deployed Cygnus in Docker, and it will store data in both a MongoDB and a MySQL database. We can also provide more environment variables to configure other sinks to store data in.
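To quickly check that data is arriving in MySQL, you can open a client inside its container, reusing the credentials set above:
# List the tables Cygnus has created in dbcygnus
docker exec -it mysql_showcases mysql -ucygnus -pcygnus dbcygnus -e 'SHOW TABLES;'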
I am very new to Docker and I wonder whether there is a way to run multiple MySQL instances/containers at the same time. I have tried the following:
For the master:
docker run -d -v /var/projects/test/db_master:/var/lib/mysql --name db_master -p 2222:22 -e ROOT_PASS="mypass" tutum/ubuntu:trusty
For the slaves:
docker run -d -v /var/projects/test/db_slave:/var/lib/mysql --name db_slave -p 2322:22 -e ROOT_PASS="mypass" tutum/ubuntu:trusty
docker run -d -v /var/projects/test/db_slave_5_hours:/var/lib/mysql --name db_slave_5_hours -p 2522:22 -e ROOT_PASS="mypass" tutum/ubuntu:trusty
Run MySQL Master and Slave containers:
docker run \
-d \
--volumes-from db_master \
-p 3706:3306 \
-e MYSQL_PASS=admin \
-e REPLICATION_MASTER=true -e REPLICATION_USER=admin -e REPLICATION_PASS=admin \
--name mysql \
tutum/mysql
docker run -d \
--volumes-from db_slave \
-e REPLICATION_SLAVE=true \
-e MYSQL_PASS=admin \
-p 3806:3306 \
--link mysql:mysql \
--name mysql_slave \
tutum/mysql
docker run -d \
--volumes-from db_slave_5_hours \
-e REPLICATION_SLAVE=true \
-e MYSQL_PASS=admin \
-e REPLICATION_DELAY=18000 \
-p 4006:3306 \
--link mysql:mysql \
--name mysql_slave_5_hours \
tutum/mysql
The second slave just times out and exits after 13 attempts to start MySQL, as specified in run.sh, and it does not matter which slave I start first.
Thank you in advance.