Docker on Windows Data Persistence - Host Mapping vs Data Volume - mysql

I'm very new to Docker and after reading about data volumes I'm still somewhat confused by the behaviour I'm seeing.
In my compose file I had an entry for mysql like this:
db:
  image: mysql
  restart: always
  volumes:
    - ./database:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: p4ssw0rd!
  networks:
    - back
This mapped ./database to /var/lib/mysql. The database files were created, and I could start WordPress, install it, and add a post. The problem was that it never persisted any created data. If I restarted Docker and executed:
docker-compose up -d
The database was empty.
Changing this to:
db:
  image: mysql
  restart: always
  volumes:
    - db_data:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: p4ssw0rd!
  networks:
    - back
And adding in a volume like this:
volumes:
  db_data:
Now the data persists in the Docker data volume and restarting works. Any data created during the last run is still present.
How would I get this to work using the host mapped directory?
Am I right in thinking the second example using volumes is the way to go?

Docker volumes on Windows work a bit differently than on Linux. On Windows, Docker runs a VM and the Docker engine is set up inside that VM, so while it looks like you are running Docker commands locally on Windows, the actual work happens inside the VM. Take a host-path mount like this:
docker run -v d:/data:/data alpine ls /data
For that to work, you first need to share the D: drive in the Docker settings. You can find a detailed article explaining the steps here:
https://rominirani.com/docker-on-windows-mounting-host-directories-d96f3f056a2c
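Once the drive is shared, a host-path mapping in the compose file would look roughly like the sketch below (the d:/wordpress/database path is just a placeholder for wherever the data should live on the shared drive). That said, if the named-volume approach in your second example works, it is generally the more reliable option for database files on Docker for Windows.
db:
  image: mysql
  restart: always
  volumes:
    # absolute Windows path on the shared drive (placeholder path)
    - "d:/wordpress/database:/var/lib/mysql"
  environment:
    MYSQL_ROOT_PASSWORD: p4ssw0rd!
  networks:
    - back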

Related

How to migrate schema using Flyway into a docker container database to create an isolated testing environment?

I want to migrate the schemas I have for the tables in my database into a Docker container I've made. This is my first time using Flyway, and it seems entirely possible for it to migrate the schema over into my database. I want my unit tests, when run locally, to run against this Docker test container rather than the public database.
Here is what I have so far, inside my docker-compose.yaml file:
db:
  image: mysql:latest
  network_mode: bridge
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: test
    MYSQL_DATABASE: test
  ports:
    - "3306:3306"
When I run docker-compose up on my terminal, the container appears to build and is shown in the Docker UI. And when I run docker ps in another terminal I see the server is spun up.
Then when I go to the MySQL Workbench UI and log in with the credentials for the container:
Hostname: localhost
Port: 3306
Username: root
Password: test
I get a boilerplate MySQL database. What I want to do now is migrate the schema of another database to replace the boilerplate one.
I have Flyway installed and everything set up on my machine, but I don't know how to make the connection between the Docker container database I just made and the schema I have.
Here is my flyway file so far:
flyway.url=jdbc:mysql://localhost:3306/db
flyway.user=root
flyway.password=test
flyway.baselineOnMigrate=false
flyway.defaultSchema=config_data
flyway.schemas=config_data, kol_data, baw_data, ter_data
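Assuming this lives in a file such as flyway.conf next to a sql/ directory with the migration scripts (both names are just my guess, not something I've verified), I believe the invocation would be something like:
flyway -configFiles=flyway.conf -locations=filesystem:./sql migrate
One thing I noticed is that my JDBC URL points at a database called db, while the container's MYSQL_DATABASE is test, so presumably those need to match.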
I also found this layout for the docker-compose.yaml file online, which might work too, but I'm not sure how to integrate it into my setup:
version: '3'
services:
  flyway:
    image: flyway/flyway:6.3.1
    command: -configFiles=/flyway/conf/flyway.config -locations=filesystem:/flyway/sql -connectRetries=60 migrate
    volumes:
      - ${PWD}/sql_versions:/flyway/sql
      - ${PWD}/docker-flyway.config:/flyway/conf/flyway.config
    depends_on:
      - postgres
  postgres:
    image: postgres:12.2
    restart: always
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=example-username
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db-name
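I'm guessing that, adapted to my MySQL setup, it would look something like the sketch below, but I haven't verified it (the sql_versions directory name is copied from the example above, and the credentials come from my compose file):
version: '3'
services:
  flyway:
    image: flyway/flyway:6.3.1
    # if the MySQL JDBC driver is not bundled in this image, jdbc:mariadb://db:3306/test is a common fallback
    command: -url=jdbc:mysql://db:3306/test -user=root -password=test -locations=filesystem:/flyway/sql -connectRetries=60 migrate
    volumes:
      - ${PWD}/sql_versions:/flyway/sql
    depends_on:
      - db
  db:
    image: mysql:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: test
      MYSQL_DATABASE: test
    ports:
      - "3306:3306"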
All in all, I simply want to take the schema from one database and put it into my Docker container so that I can run unit tests against it.
Thanks

How can I use local mysql data in a mysql container created through docker-compose

My company has been working with local setups of our MySQL database for years. We have recently decided to adopt a containerized approach to local development, and we want the database to run in a container as well. The issue is that, because all of our data is already set up locally, we want to be able to just use the same data in the MySQL container. I have tried using volumes to mount the directory storing all the MySQL data into the container, to no avail. Has anyone had success with doing this?
db part of docker-compose.yml:
db:
  image: mysql:5.6
  container_name: mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_ALLOW_EMPTY_PASSWORD: 1
  volumes:
    - /usr/local/mysql/data:/var/lib/mysql
I am able to get mysql running fine and am able to connect to it easily from my local machine, but when I connect, none of the local databases that already exist are there. Is there something that I'm overlooking?
@yourknightmares,
So I just ran a test and it worked for me. Here is what I did:
docker-compose.yml
version: "3.9"
services:
nginx:
image: nginx:latest
ports:
- "9999:9999"
command: tail -f /dev/null
volumes:
- "/etc/nginx/nginx.conf:/opt/nginx/nginx.conf"
On my host machine, I have the file at /etc/nginx/nginx.conf, then:
$ docker-compose up -d
$ docker exec -it 02ba7032d699 bash
root@02ba7032d699:/# cat /opt/nginx/nginx.conf
#hello
The file was mounted just fine from the host to the container. I would suggest you do the same exercise, just for troubleshooting purposes. Also, have you looked at the container logs with docker logs container_id?
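For the compose file in your question the container_name is set to mysql, so checking the logs would look like one of these:
$ docker logs mysql
$ docker-compose logs db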

Why is my TeamCity internal NuGet feed missing part of its URL?

My TeamCity server seems to be using a broken URL for its built-in NuGet feed.
I'm running it in a docker container using the official JetBrains image. I'm not behind a reverse proxy. I have configured the "server URL" setting.
I can use the feed in Visual Studio using the full URL (unauthenticated guest access) and it all works great. It's adding packages from build artifacts, and Visual Studio can pull them.
It's just that the TeamCity property that is supposed to contain the feed URL is broken, as shown in the screen shot. So my builds are failing like this:
/usr/share/dotnet/sdk/3.1.302/NuGet.targets(128,5): error : Unable to load the service index for source http://teamcity:8111/guestAuth/app/nuget/feed/TigraOss/TigraOSS/v3/index.json.
Those are internally generated and not something I've edited, so I'm a bit confuzzled. Any ideas on how to fix this? (I've tried restarting the server, obviously).
Update
I think this might be due to the fact that everything is running in docker containers. A bit later in the parameters screen (off the bottom of the screen shot above) is another line:
teamcity.serverUrl http://teamcity:8111
I think this is coming from my docker-compose.yml file:
agent:
  image: jetbrains/teamcity-agent
  container_name: teamcity-agent
  restart: unless-stopped
  privileged: true
  user: "root"
  environment:
    - SERVER_URL=http://teamcity:8111
    - AGENT_NAME=ubuntu-ovh-vps-tigra
    - DOCKER_IN_DOCKER=start
  volumes:
    - agentconfig:/data/teamcity_agent/conf
    - agentwork:/opt/buildagent/work
    - agentsystem:/opt/buildagent/system
    - agent1_volumes:/var/lib/docker
I tried changing the SERVER_URL value in my docker-compose.yml file and restarting the agent container, but it looks like once the agent config file is created, the value is sticky and I need to go in and hand-edit that.
Now I have the agent using the full FQDN of the server, so we'll see if that works.
I think this is caused by my complicated docker-in-docker build. I am running TeamCity server and the linux build agent in docker containers built with docker-compose. Here's my docker-compose.yml file with secrets removed:
version: '3'
services:
  db:
    image: mariadb
    container_name: teamcity-db
    restart: unless-stopped
    env_file: .env
    volumes:
      - mariadb:/var/lib/mysql
    command: --default-authentication-plugin=mysql_native_password
  teamcity:
    depends_on:
      - db
    image: jetbrains/teamcity-server
    container_name: teamcity
    restart: unless-stopped
    environment:
      - TEAMCITY_SERVER_MEM_OPTS="-Xmx750m"
    volumes:
      - datadir:/data/teamcity_server/datadir
      - logs:/opt/teamcity/logs
    ports:
      - "8111:8111"
  agent:
    image: jetbrains/teamcity-agent
    container_name: teamcity-agent
    restart: unless-stopped
    privileged: true
    user: "root"
    environment:
      SERVER_URL: http://fully.qualified.name:8111
      AGENT_NAME: my-agent-name
      DOCKER_IN_DOCKER: start
    volumes:
      - agentconfig:/data/teamcity_agent/conf
      - agentwork:/opt/buildagent/work
      - agentsystem:/opt/buildagent/system
      - agent1_volumes:/var/lib/docker
volumes:
  mariadb:
  datadir:
  logs:
  agentconfig:
  agentwork:
  agentsystem:
  agent1_volumes:
networks:
  default:
When I first created everything, I had the SERVER_URL variable set to http://teamcity:8111. This works because Docker makes the compose service name resolvable as a host name, and the server's service name is also teamcity, so that host is resolvable within the Docker composition.
The problem comes when doing a build step inside yet another container. I am building .NET Core and the .NET SDK is not installed on the machine, so I have to run the build using the .NET Core SDK container. The agent passes in the URL of the NuGet feed, which points to the Docker service name, and the build container can't "see" that host name. I'm not sure why not. I tried passing --network teamcity_default as a command-line argument to docker run, but it says that network doesn't exist.
I found two ways to get things to work.
1. Edit the build step to use the FQDN of the NuGet feed, and don't use the TeamCity built-in parameter %teamcity.nuget.feed.guestAuth.feed-id.v3%. I don't like this solution much because it sets me up for breakage in the future.
2. Find the Docker volume where the TeamCity agent config is stored. In my case, it was /var/lib/docker/volumes/teamcity_agentconfig/_data. Edit the buildAgent.properties file and set serverUrl=http\://fully.qualified.name\:8111, then run docker-compose restart agent. After that you can safely use %teamcity.nuget.feed.guestAuth.feed-id.v3% in containerized builds.
I haven't tested this, but I think you may be able to avoid all this in the first place by using a fully-qualified server name in the docker-compose.yml file. However you have to do this right from the start, because the moment you run docker-compose up the agent config filesystem is created and becomes permanent.
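For reference, that would mean setting the agent's environment like this from day one (a sketch of just the relevant lines; fully.qualified.name stands in for your real server name):
agent:
  image: jetbrains/teamcity-agent
  environment:
    # use the server's FQDN from the start, not the compose service name
    SERVER_URL: http://fully.qualified.name:8111
  volumes:
    # buildAgent.properties ends up in this volume and keeps whatever URL it saw first
    - agentconfig:/data/teamcity_agent/conf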

MySQL in Docker mounted directory permissions are messed up on external SSD

I'm running MySQL inside a Docker container and I'm mounting a directory from the host inside the container to have the database persisted: ./db:/var/lib/mysql.
Since the data got really big, I had to move everything to my external SSD. Now it seems that the permissions of the mounted directory are messed up. When I let MySQL container initialize the ./db directory, it's all good. But if I stop the containers, remove the external SSD, and connect it back and spin up the containers again, the MySQL container keeps restarting, logging things like:
chown: changing ownership of '/var/lib/mysql/._binlog.000004': Operation not permitted
I'm running docker on my mac.
ls -l: drwxrwxrwx# 1 amir staff 131072 May 5 21:25 db.
docker -v: Docker version 19.03.8, build afacb8b.
docker-compose -v: docker-compose version 1.25.4, build 8d51620a
docker-compose.yml:
version: "3"
services:
db:
image: mysql
volumes:
- ./db:/var/lib/mysql
restart: always
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: password
networks:
- projectnetwork
networks:
projectnetwork:
Any hints to how I can solve this problem would be greatly appreciated :) Thank you!
Alright, I'm not sure if this is the best way to solve this, but I can get around the issue with this at the moment. Please let me know if this is totally dumb and there are better solutions.
I tried running the container with docker itself and passing --user "$(id -u):$(id -g)" and it worked.
Unfortunately, we can't use shell command substitution in a docker-compose file, so I had to create my own script that sets an environment variable and runs docker-compose:
DOCKER_COMPOSE_USER=$(id -u):$(id -g) docker-compose up -d
And in docker-compose.yml:
user: ${DOCKER_COMPOSE_USER}
That did the trick!
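Put together, the relevant part of the compose file would look roughly like this (a sketch based on the service above; the value comes from the wrapper command shown earlier):
services:
  db:
    image: mysql
    # e.g. 501:20 on macOS, injected via DOCKER_COMPOSE_USER
    user: ${DOCKER_COMPOSE_USER}
    volumes:
      - ./db:/var/lib/mysql
An alternative to the wrapper script would be to put DOCKER_COMPOSE_USER in an .env file next to docker-compose.yml, since Compose reads that automatically for variable substitution.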

Moving Wordpress site to Docker: Error establishing DB connection

I've been making new sites with WordPress & Docker recently and have a reasonable grasp of how it all works, and I'm now looking to move some established sites into Docker.
Ive been following this guide:
https://stephenafamo.com/blog/moving-wordpress-docker-container/
I have everything set up as it should be, but when I go to my domain.com:1234 I get the error message 'Error establishing a database connection'. I have changed DB_HOST to 'mysql' in wp-config.php as advised, and all the DB details from the site I'm bringing in are correct.
I have attached to the MySQL container and checked that the database is there with the right user, and also made sure the password is correct via the mysql CLI.
SELinux is set to permissive and I haven't changed any directory/file ownership or permissions; directories are all 755 and files 644 as they should be.
Edit: I should mention that database/data and everything under that seem to be owned by user/group 'polkitd input' instead of root.
Docker logs aren't really telling me much either apart from the 500 error messages for the WP container when I browse the site on port 1234 (as expected though).
This is the docker-compose file:
version: '2'
services:
  example_db:
    image: mysql:latest
    container_name: example_db
    volumes:
      - ./database/data:/var/lib/mysql
      - ./database/initdb.d:/docker-entrypoint-initdb.d
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password123 # any random string will do
      MYSQL_DATABASE: mydomin_db # the name of your mysql database
      MYSQL_USER: my domain_me # the name of the database user
      MYSQL_PASSWORD: password123 # the password of the mysql user
  example:
    depends_on:
      - example_db
    image: wordpress:php7.1 # we're using the image with php7.1
    container_name: example
    ports:
      - "1234:80"
    restart: always
    links:
      - example_db:mysql
    volumes:
      - ./src:/var/www/html
Suggestions most welcome as I'm out of ideas!
With the new version of docker-compose it will look like this (if you don't want to use PhpMyAdmin you can leave it out):
version: '3.7'
volumes:
  wp-data:
networks:
  wp-back:
services:
  db:
    image: mysql:5.7
    volumes:
      - wp-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: rootPassword
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wp-user
      MYSQL_PASSWORD: wp-pass
    ports:
      - 8889:3306
    networks:
      - wp-back
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_HOST: db
      MYSQL_USER: wp-user
      MYSQL_PASSWORD: wp-pass
      MYSQL_ROOT_PASSWORD: rootPassword
    ports:
      - 3001:80
    networks:
      - wp-back
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - 8888:80
      - 443:443
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wp-user
      WORDPRESS_DB_PASSWORD: wp-pass
    volumes:
      - ./wordpress-files:/var/www/html
    container_name: wordpress-site
    networks:
      - wp-back
The database volume is a named volume, wp-data, while the WordPress HTML is a bind mount to ./wordpress-files in your current directory.
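To try it out (ports taken from the compose file above, assuming nothing else on the host is already using them):
docker-compose up -d
# WordPress:  http://localhost:8888
# phpMyAdmin: http://localhost:3001
# MySQL is exposed to the host on port 8889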
Make sure that the wp-config.php file defines the same credentials for the DB user and DB password as the docker-compose.yml file. I had a similar problem: I deleted all the files and re-installed, and saw that docker-compose up -d would start everything, but the MySQL settings in wp-config.php were not the same as those defined for Docker. Once I changed them accordingly it eventually started working.
Please take a look at the following compose file. I have tried and tested it and it works fine.
version: '2'
services:
  db:
    image: mysql:latest
    container_name: db_server
    volumes:
      - ./database/data:/var/lib/mysql
      - ./database/initdb.d:/docker-entrypoint-initdb.d
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password123 # any random string will do
      MYSQL_DATABASE: udb_test # the name of your mysql database
      MYSQL_USER: me_prname # the name of the database user
      MYSQL_PASSWORD: password123 # the password of the mysql user
  example:
    depends_on:
      - db
    image: wordpress:php7.1 # we're using the image with php7.1
    container_name: wp-web
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: me_prname
      WORDPRESS_DB_PASSWORD: password123
      WORDPRESS_DB_NAME: udb_test
    ports:
      - "1234:80"
    restart: always
    volumes:
      - ./src:/var/www/html
Let me know if you encounter further issues.
If you want it all in one container you can refer to this repo:
https://github.com/akshayshikre/lamp-alpine/tree/development
It starts from the lamp-alpine image, then MySQL, PHP, and Apache2 (the LAMP stack) are installed, and a local WordPress demo site and database are copied in for demo purposes.
If you do not want any continuous-integration part, ignore the .circleci folder.
Check the docker-compose file and Dockerfile; environment variables are in the .env file.
I'll share my approach with you.
Show the running versions, just to check that all is well on your side:
$ docker --version && docker-compose --version
Run the Docker Compose file:
$ docker-compose -f docker-compose.yml up -d
After waiting a moment for everything to start, show the running containers; the WordPress container is the one listening on port 8000:
$ docker ps
you will see the name of your WordPress container on the table as follows if you have followed the steps listed on their site
https://hub.docker.com/_/wordpress
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
xxxxxxxxxxxx wordpress:latest "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:8000->80/tcp cms_wordpress_1
xxxxxxxxxxxx mysql:5.7 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 3306/tcp, 33060/tcp cms_db_1
If you check your browser at the address localhost:8000, you will get the message "error establishing DB connection".
launch bash inside the Wordpress container
$ docker exec -it cms_wordpress_1 bash
apt update fails as there is no connectivity
$ apt update
Open up a new terminal and show the current firewalld configuration:
$ sudo cat /etc/firewalld/firewalld-workstation.conf | grep 'FirewallBackend'
currently set to 'nftables'
set value to 'iptables'
$ sudo sed -i 's/FirewallBackend=nftables/FirewallBackend=iptables/g' /etc/firewalld/firewalld-workstation.conf
Confirm the new value:
$ sudo cat /etc/firewalld/firewalld-workstation.conf | grep 'FirewallBackend'
Restart the firewalld service to apply the change:
$ sudo systemctl restart firewalld.service
Refresh the running WordPress session in your browser and it should be good.
good work.
In some cases, a probable cause of this issue is that you created volumes using docker-compose up and then, when you ran docker-compose down, expected the volumes to be deleted along with everything else, but this is not how it works.
From the doc you could read this:
For data that needs to persist between updates, use host or named volumes.
It implicitly means that named volumes will not get deleted by down. So what happens is: you do an up, add a row to a table, and then do a subsequent down; on the next up you get the same old volume, and querying the same table gives you the same row you created previously.
What does this have to do with the error "Error establishing DB connection", you may ask. To answer that, let's assume one scenario: what if you changed some MySQL passwords in the docker-compose file in between running the down command and the second up command?
MySQL keeps its credentials just like any other data in its tables, so when you do the second up, Docker loads the old volume (the one created by the first up). The old credential information is therefore used by MySQL, and Docker never even gets the opportunity to insert your new information (the values you changed in the docker-compose file) into the administration tables. So, obviously, you will be rejected.
The solution is thus very simple. To fix it, either run:
docker-compose down -v
to remove the named volumes when running the down, or run:
docker volume rm [volname]
if you've done the down before, and now you want to delete the named volumes.
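If you're not sure of the volume name, you can list them first; Compose-created volumes are prefixed with the project (directory) name, so the name below is just an example:
$ docker volume ls
$ docker volume rm myproject_db_data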
If you follow this tutorial, https://stephenafamo.com/blog/moving-wordpress-docker-container/, your site will not work properly, because it doesn't restore the database. You need to restore the .sql dump file that exists in the initdb.d directory manually, using this command:
cat backup.sql | docker exec -i CONTAINER /usr/bin/mysql -u root --password=root DATABASE
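To confirm the restore worked, something like this should list the restored tables (same CONTAINER and DATABASE placeholders as above):
docker exec -i CONTAINER /usr/bin/mysql -u root --password=root -e "SHOW TABLES" DATABASE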
I also got stuck on this, and my CSS is not working properly. Please let me know if you have any new ideas.