Orion APIs authentication through Keycloak - fiware

I want to add authentication to my Orion APIs through my Keycloak IdM.
I know it is possible to use Orion together with PEP Proxy Wilma and Keyrock for this task, and a possible workaround could be to integrate Keyrock with Keycloak, as described at this link (from 7 years ago).
Do you have any news or suggestions about this?
Thank you in advance.

There is a (relatively new) solution available. Instead of Wilma, you can use the Kong API Gateway as a PEP proxy with the FIWARE PEP-Plugin. That way, authentication (and authorization) can be delegated to Keycloak. You can find more on that in these two presentations:
https://github.com/wistefan/presentations/blob/main/summit-gran-canaria/kong/FGS22-Kong.pdf
https://github.com/wistefan/presentations/blob/main/summit-gran-canaria/keycloak/FGS22-Keycloak.pdf

And the kong.yml file is:
_format_version: "2.1"
_transform: true
services:
  - host: "orion_ip"
    name: "orion"
    path: "/v2"
    port: 1026
    protocol: http
    routes:
      - name: orion
        paths:
          - /orion
        strip_path: true
    plugins:
      - name: pep-plugin
        config:
          authorizationendpointtype: Keycloak
          authorizationendpointaddress: https://keycloak_ip
          keycloakrealm: myrealm
          keycloakclientid: clientid
          keycloakclientsecret: clientsecret
          keycloackadditionalclaims:
            "http.fiware-servicepath": "fiware-servicepath"
            "http.fiware-service": "fiware-service"

I found all the params you need to run the Docker image (in PowerShell):
docker run -d --name kong-dbless `
-v "$(pwd):/kong/declarative/" `
-e "KONG_DATABASE=off" `
-e "KONG_DECLARATIVE_CONFIG=/kong/declarative/kong.yml" `
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" `
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" `
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" `
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" `
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001" `
-e KONG_LICENSE_DATA `
-e "KONG_LOG_LEVEL=info" `
-e "KONG_PLUGINS=bundled,pep-plugin" `
-e "KONG_PLUGINSERVER_NAMES=pep-plugin" `
-e "KONG_PLUGINSERVER_PEP_PLUGIN_QUERY_CMD=/go-plugins/pep-plugin -dump" `
-e "KONG_PLUGINSERVER_PEP_PLUGIN_START_CMD=/go-plugins/pep-plugin" `
-p 8000:8000 `
-p 8001:8001 `
quay.io/fiware/kong:0.3.3
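
To sanity-check the whole chain, a client first requests a token from Keycloak and then calls Orion through the Kong route with it. A rough Python sketch, reusing the placeholder values from the kong.yml above (on older Keycloak versions the token endpoint is prefixed with /auth):
import requests

# Placeholder values matching the kong.yml above - adjust to your deployment.
KEYCLOAK_URL = "https://keycloak_ip"
REALM = "myrealm"
CLIENT_ID = "clientid"
CLIENT_SECRET = "clientsecret"
KONG_URL = "http://localhost:8000"  # Kong proxy port published by the docker run above

# 1. Obtain an access token from Keycloak (client credentials grant).
token_resp = requests.post(
    f"{KEYCLOAK_URL}/realms/{REALM}/protocol/openid-connect/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# 2. Call Orion through the /orion route; the pep-plugin validates the token.
resp = requests.get(
    f"{KONG_URL}/orion/entities",
    headers={
        "Authorization": f"Bearer {access_token}",
        # These headers correspond to the additional-claims mapping above;
        # adjust them to whatever your Keycloak client actually issues.
        "Fiware-Service": "my-service",
        "Fiware-ServicePath": "/my-path",
    },
)
print(resp.status_code, resp.text)
A request without a valid token should be rejected by the plugin before it ever reaches Orion.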

Related

GitHub Actions for Automatic Tag Not Working

One of my GitHub Actions for automatic tagging is not working and I can't figure out why.
Here is my tag.yml:
name: 'tag'
on:
  push:
    branches:
      - main
jobs:
  tag:
    runs-on: ubuntu-latest
    steps:
      - name: 'Checkout'
        uses: actions/checkout@v2.4.0
      - name: 'Tag'
        uses: anothrNick/github-tag-action@1.36.0
        with:
          repo-token: "${{ secrets.GITHUB_TOKEN }}"
The error I get is this:
Warning: Unexpected input(s) 'repo-token', valid inputs are ['entryPoint', 'args']
Run anothrNick/github-tag-action@1.36.0
/usr/bin/docker run --name a72c5b92e429db40e09e9b93f3e458fdb9_f74ce8 --label 9916a7 --workdir /github/workspace --rm -e INPUT_REPO-TOKEN -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_RUN_ATTEMPT -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_REF_NAME -e GITHUB_REF_PROTECTED -e GITHUB_REF_TYPE -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_ARCH -e RUNNER_NAME -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/terraform-provider-mirantis/terraform-provider-mirantis":"/github/workspace" 9916a7:2c5b92e429db40e09e9b93f3e458fdb9
*** CONFIGURATION ***
DEFAULT_BUMP: minor
WITH_V: false
RELEASE_BRANCHES: master,main
CUSTOM_TAG:
SOURCE: .
DRY_RUN: false
INITIAL_VERSION: 0.0.0
TAG_CONTEXT: repo
PRERELEASE_SUFFIX: beta
VERBOSE: true
Is master a match for main
Is main a match for main
pre_release = false
From https://github.com/Richard-Barrett/terraform-provider-mirantis
* [new tag] v1.0-beta -> v1.0-beta
fatal: ambiguous argument '0.0.0': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
Merge pull request #14 from Richard-Barrett/Richard-Barrett-patch6 automating terraform with release and goreleaser
minor
Bumping tag 0.0.0.
New tag 0.1.0
2022-01-16T04:39:36Z: **pushing tag 0.1.0 to repo Richard-Barrett/terraform-provider-mirantis
"message": "Bad credentials",
"documentation_url": "https://docs.github.com/rest"
}
Error: Tag was not created properly.
Here is the public Repo it is affiliated with: https://github.com/Richard-Barrett/terraform-provider-mirantis
Any advice...?
You are misusing this action; it should be:
- name: 'Tag'
  uses: anothrNick/github-tag-action@1.36.0
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
All parameters are passed as environment variables, as described here.
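If you want to double-check that the run actually created the tag, you can list the repository's tags through the GitHub REST API afterwards; a small Python sketch (the repo is taken from the question, the token is a placeholder):
import requests

REPO = "Richard-Barrett/terraform-provider-mirantis"
TOKEN = "<a personal access token, or the workflow's GITHUB_TOKEN>"

resp = requests.get(
    f"https://api.github.com/repos/{REPO}/tags",
    headers={"Authorization": f"token {TOKEN}"},
)
resp.raise_for_status()
print([tag["name"] for tag in resp.json()])  # names of the tags currently on the repo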

Github Action not getting triggered

I want to use a GitHub Action to trigger a Jenkins build when a PR to the develop branch is merged and it contains changes in the frontend/ dir. I have the following file in .github/workflows/ of the repo:
name: Trigger Jenkins Build [ Build-Portal ]
on:
  push:
    branches: [ develop ]
    paths: 'frontend/**'
    types: [closed]
jobs:
  build:
    name: Triggering Jenkins Build [ Build-Portal ]
    runs-on: ubuntu-latest
    if: github.event.pull_request.merged == true
    steps:
      - name: Trigger Build-Portal
        uses: actions/trigger-jenkins@develop
        with:
          jenkins_url: "http://jenkins.example.net:8080/"
          jenkins_user: ${{ secrets.JENKINS_USER }}
          jenkins_token: ${{ secrets.JENKINS_USER_TOKEN }}
          job_name: "Build-Portal"
          job_params: '{"FRESH_BUILD":"True", "UI":"True", "BUILD_BRANCH":"develop", "DEPLOY_DEV":"True"}'
          job_timeout: "3600" # Default 30 sec. (optional)
But I don't see it being triggered under the /actions tab.
I updated the above workflow to:
name: Trigger Jenkins Build [ Build-Portal ]
on:
  pull_request:
    branches: [ develop ]
    types: [ closed ]
jobs:
  trigger:
    name: Triggering Jenkins Build [ Build-Portal ]
    runs-on: ubuntu-latest
    steps:
      - name: Triggering Jenkins Build
        uses: appleboy/jenkins-action@master
        if: github.event.pull_request.merged == true
        with:
          url: "http://jenkins.example.net:8080/"
          user: ${{ secrets.JENKINS_USER }}
          token: ${{ secrets.JENKINS_USER_TOKEN }}
          job: "Build-Portal"
With these changes, the workflow runs, but it fails with a missing Jenkins config:
Run appleboy/jenkins-action@master
with:
url: http://jenkins.example.net:8080/
job: Build-Portal
/usr/bin/docker run --name a33c102b3a03f81d04bc7936bf41daf1ca949_52143d --label 8a33c1 --workdir /github/workspace --rm -e INPUT_URL -e INPUT_USER -e INPUT_TOKEN -e INPUT_JOB -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/Build-Portal/Build-Portal":"/github/workspace" 8a33c1:02b3a03f81d04bc7936bf41daf1ca949
2021/06/16 14:16:24 missing jenkins config
What am I missing here?
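For reference, the call this kind of action ultimately makes can be reproduced directly against Jenkins to check that the URL, user, and token are valid; a rough Python sketch with the placeholder values from the workflow above (depending on Jenkins' security settings a CSRF crumb may also be required):
import requests

JENKINS_URL = "http://jenkins.example.net:8080"
JOB_NAME = "Build-Portal"
USER = "<value of secrets.JENKINS_USER>"
API_TOKEN = "<value of secrets.JENKINS_USER_TOKEN>"

# Jenkins queues a parameterized build via /job/<name>/buildWithParameters.
resp = requests.post(
    f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
    auth=(USER, API_TOKEN),
    params={
        "FRESH_BUILD": "True",
        "UI": "True",
        "BUILD_BRANCH": "develop",
        "DEPLOY_DEV": "True",
    },
)
print(resp.status_code)  # 201 means the build was queued
If this direct call fails as well, the problem is with the Jenkins credentials or URL rather than with the workflow trigger.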

Ansible playbook gets stuck trying to perform mysql operations

I'm trying to perform a simple "SHOW DATABASES;" command with an Ansible playbook, and it gets stuck when it executes the command.
I tried different options:
- hosts: servers
  become: true
  vars_prompt:
    - name: "db_passw"
      prompt: "DB root password?"
  tasks:
    - name: Configure Database
      shell: |
        mysql -u root -p {{ db_passw }} < ~/sql_query.sql
Also
- hosts: servers
  become: true
  vars_prompt:
    - name: "db_passw"
      prompt: "DB root password?"
  tasks:
    - name: Configure Database
      shell: |
        mysql -u root -p {{ db_passw }} -e "SHOW DATABASES;"
And I also tried, just as a test, putting the password in explicitly:
- hosts: servers
  become: true
  vars_prompt:
    - name: "db_passw"
      prompt: "DB root password?"
  tasks:
    - name: Configure Database
      shell: |
        mysql -u root -p <root_passw> -e "SHOW DATABASES;"
But it always gets stuck at the same point. I tried executing the above mysql commands in the remote machine's shell and they work with no problems. I also tried executing other commands before the mysql ones, and they run fine.
Is there any problem between Ansible and MySQL?
I know that there is a MySQL module for Ansible, but its functionality is too limited.
Thank you
Works for me. The play below
- hosts: dbserver
  tasks:
    - command: |
        mysql -e "SHOW DATABASES;"
      register: result
    - debug:
        var: result
gives:
TASK [debug]
ok: [dbserver] => {
    "result": {
        "changed": true,
        "cmd": [
            "mysql",
            "-e",
            "SHOW DATABASES;"
        ],
        "delta": "0:00:00.010958",
        "end": "2019-04-04 16:19:58.359100",
        "failed": false,
        "rc": 0,
        "start": "2019-04-04 16:19:58.348142",
        "stderr": "",
        "stderr_lines": [],
        "stdout":
        ...
It's not necessary to specify "-u root" when "become: true". I haven't set a MySQL password for root.

Connection refused when trying to connect to local mysql database from docker container

I have a problem connecting to a local MySQL database from a Docker container. I'm using docker-compose with two services in containers; the database is not in a container.
I have this docker-compose file:
version: '2'
services:
  web:
    build:
      context: ./
      dockerfile: web-dev.dockerfile
    volumes:
      - ./:/var/www
    ports:
      - "8080:80"
    links:
      - app
    network_mode: "bridge"
    dns:
      - 10.0.50.6
  app:
    build:
      context: ./
      dockerfile: app-dev.dockerfile
    volumes:
      - ./:/var/www
    network_mode: "bridge"
    dns:
      - 10.0.50.6
The web container is an nginx service with this Dockerfile:
FROM nginx:1.10
ADD ./vhost.dev.conf /etc/nginx/conf.d/default.conf
WORKDIR /var/www
and this configuration file:
server {
    listen 80;
    index index.php index.html;
    root /var/www/formapp/public;
    location / {
        try_files $uri /index.php?$args;
    }
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
The app container is an app service with this Dockerfile:
FROM php:7-fpm
ENV USER=pasquale
RUN apt-get update && apt-get install -y libmcrypt-dev mysql-client \
openssl zip unzip git nano wget libaio-dev iputils-ping
RUN mkdir -p /opt/oracle/instantclient_10_2
# Download files from in oracle folder:
# https://agora.zanichelli.it/downloads/basic-10.2.0.5.0-linux-x64.zip
# https://agora.zanichelli.it/downloads/sdk-10.2.0.5.0-linux-x64.zip
ADD oracle/basic-10.2.0.5.0-linux-x64.zip /opt/oracle/basic-10.2.0.5.0-linux-x64.zip
ADD oracle/sdk-10.2.0.5.0-linux-x64.zip /opt/oracle/sdk-10.2.0.5.0-linux-x64.zip
RUN unzip /opt/oracle/basic-10.2.0.5.0-linux-x64.zip -d /opt/oracle \
&& unzip /opt/oracle/sdk-10.2.0.5.0-linux-x64.zip -d /opt/oracle \
&& ln -s /opt/oracle/instantclient_10_2/libclntsh.so.10.1 /opt/oracle/instantclient_10_2/libclntsh.so \
&& ln -s /opt/oracle/instantclient_10_2/libclntshcore.so.10.1 /opt/oracle/instantclient_10_2/libclntshcore.so \
&& ln -s /opt/oracle/instantclient_10_2/libocci.so.10.1 /opt/oracle/instantclient_10_2/libocci.so
ADD oracle/tns-admin/tnsnames.ora /opt/oracle/instantclient_10_2/network/admin/tnsnames.ora
ENV LD_LIBRARY_PATH /opt/oracle/instantclient_10_2/
RUN docker-php-ext-install pdo_mysql \
&& pecl install mcrypt-1.0.1 \
&& docker-php-ext-enable mcrypt \
&& pecl install mongodb \
&& docker-php-ext-enable mongodb \
&& docker-php-ext-configure pdo_oci --with-pdo-oci=instantclient,/opt/oracle/instantclient_10_2,10.2 \
&& echo 'instantclient,/opt/oracle/instantclient_10_2' | pecl install oci8 \
&& docker-php-ext-install pdo_oci \
&& docker-php-ext-enable oci8
RUN yes | pecl install xdebug \
&& echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_enable=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.default_enable=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_connect_back=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_autostart=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_handler="dbgp"' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_port=9000' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_host=0.0.0.0' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_log=/var/www/xdebug.log' >> /usr/local/etc/php/conf.d/xdebug.ini
RUN mkdir -p /home/$USER
RUN groupadd -g 1000 $USER
RUN useradd -u 1000 -g $USER $USER -d /home/$USER
RUN chown $USER:$USER /home/$USER
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
WORKDIR /var/www
USER $USER
Launching the containers with docker-compose -f docker-compose.dev.yml up --build -d works fine, and the docker ps command gives me this output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7202e2862ef5 190todb_web "nginx -g 'daemon of…" 9 hours ago Up About an hour 443/tcp, 0.0.0.0:8080->80/tcp 190todb_web_1_3c628ae1c69b
d45c04d353d5 190todb_app "docker-php-entrypoi…" 9 hours ago Up About an hour 9000/tcp 190todb_app_1_dd2ac7028b87
I installed a Lumen app in my project folder formapp and then created a seeder to insert fake data into my database. From bash, if I run php artisan db:seed in /projectfolder/formapp, the seeder works and I get this output:
Seeding: UsersTableSeeder
Database seeding completed successfully.
Then I created a route to access my users table from the lumen app:
$router->get('users', function () use ($router) {
    return User::all();
});
My Lumen .env file is this:
APP_NAME=Lumen
APP_ENV=local
APP_KEY=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9
APP_DEBUG=true
APP_URL=http://localhost
APP_TIMEZONE=UTC
LOG_CHANNEL=stack
LOG_SLACK_WEBHOOK_URL=
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=form_db
DB_USERNAME=root
DB_PASSWORD=radiohead
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
JWT_SECRET=JhbGciOiJIUzI1N0eXAiOiJKV1QiLC
But if I try to connect from http://localhost:8080/users, I get this Lumen error:
SQLSTATE[HY000] [2002] Connection refused (SQL: select * from `users`)
I tried changing the DB_HOST, but I cannot solve the problem:
0.0.0.0 (SQLSTATE[HY000] [2002] Connection refused (SQL: select * from `users`));
172.17.0.1 (SQLSTATE[HY000] [1130] Host '172.17.0.2' is not allowed to connect to this MySQL server (SQL: select * from `users`));
172.17.0.1 is my docker0 inet address.
How can I configure my project to make this work?
PS: the Lumen app is up and running; it is only the DB connection that is not working.
You cannot use localhost when the database and the app are not on the same host (the container's localhost is not the host machine's localhost).
You need to allow access with something like:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'172.17.0.2' IDENTIFIED BY '<password>';
You can replace 172.17.0.2 with the wildcard %.
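To verify that the grant is actually in place, you can attempt a direct connection from inside the app container to the docker0 gateway; a minimal sketch with PyMySQL (assuming PyMySQL is available wherever you run it; the mysql CLI installed in the app image works just as well):
import pymysql

# 172.17.0.1 is the docker0 gateway address seen from the container (from the question).
try:
    conn = pymysql.connect(
        host="172.17.0.1",
        port=3306,
        user="root",
        password="radiohead",   # DB_PASSWORD from the .env above
        database="form_db",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print("connected:", cur.fetchone())
    conn.close()
except pymysql.err.OperationalError as exc:
    # error 2003 -> nothing reachable on that address/port (bind-address or firewall)
    # error 1130 -> reachable, but the GRANT for this client host is missing
    print("connection failed:", exc)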

django cannot connect mysql in docker-compose

I'm very new to Docker. I am using Docker version 17.09.1-ce (build 19e2cf6) and docker-compose version 1.18.0 (build 8dd22a9). I am trying to run Django with MariaDB in Docker through docker-compose, but I always get this error:
django.db.utils.OperationalError: (2003, 'Can\'t connect to MySQL
server on \'mariadb55\' (111 "Connection refused")')
I can connect to the db correctly after running docker-compose up db, both locally and remotely, and I can even run python manage.py runserver 0.0.0.0:6001 correctly in an Anaconda virtual environment and connect to the db service in Docker by setting the parameters of the settings.py file like below:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'test',
        'USER': 'belter',
        # 'HOST': 'mariadb55',
        'HOST': '127.0.0.1',
        'PORT': '3302',
        'PASSWORD': 'belter_2017',
        'default-character-set': 'utf8',
        'OPTIONS': {
            'sql_mode': 'traditional',
        }
    }
}
This is my docker-compose.yml file
version: '3'
services:
  db:
    image: mariadb:5.5
    restart: always
    environment:
      - MYSQL_HOST=localhost
      - MYSQL_PORT=3306
      - MYSQL_ROOT_HOST=%
      - MYSQL_DATABASE=test
      - MYSQL_USER=belter
      - MYSQL_PASSWORD=belter_2017
      - MYSQL_ROOT_PASSWORD=123456_abc
    volumes:
      - /home/belter/mdbdata/mdb55:/var/lib/mysql
    ports:
      - "3302:3306"
  web:
    image: onlybelter/django_py35
    command: python3 manage.py runserver 0.0.0.0:6001
    volumes:
      - /mnt/data/www/mysite:/djcode
    ports:
      - "6001:6001"
    depends_on:
      - db
    links:
      - db:mariadb55
I have tried almost everything I can find, but still cannot figure it out. Any help would be nice!
What I have tried:
Docker compose mysql connection failing
Linking django and mysql containers using docker-compose
Django connection to postgres by docker-compose
Finally, I figured it out!
The key point is, just as @SangminKim said, that I need to use 3306 (not 3302) in settings.py, and use db as the HOST (not 127.0.0.1).
So this is my docker-compose.yml file now:
version: '3'
services:
  db:
    image: mariadb:5.5
    restart: always
    environment:
      - MYSQL_HOST=localhost
      - MYSQL_PORT=3306  # cannot change this port to another number
      - MYSQL_ROOT_HOST=%
      - MYSQL_DATABASE=test
      - MYSQL_USER=belter
      - MYSQL_PASSWORD=belter_2017
      - MYSQL_ROOT_PASSWORD=123456_abc
    volumes:
      - /home/belter/mdbdata/mdb55:/var/lib/mysql
    ports:
      - "3302:3306"
  web:
    image: onlybelter/django_py35
    command: python3 manage.py runserver 0.0.0.0:6001
    volumes:
      - .:/djcode
    ports:
      - "6001:6001"
    depends_on:
      - db
So now we can connect to this Docker MySQL with mysql -h 127.0.0.1 -P 3302 -u root -p directly in the shell, but we have to use db and 3306 in the Django settings.py file:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'test',
        'USER': 'belter',
        # 'HOST': 'mariadb55',
        'HOST': 'db',    # <---
        'PORT': '3306',  # <---
        'PASSWORD': 'belter_2017',
        'default-character-set': 'utf8',
        'OPTIONS': {
            'sql_mode': 'traditional',
        }
    }
}
And we can still check whether this port is open by running an extra command in the docker-compose.yml file:
...
  web:
    image: onlybelter/django_py35
    command: /bin/sh -c "python check_db.py --service-name mysql --ip db --port 3306"
    volumes:
      - .:/djcode
...
Here is the check_db.py file:
# check_db.py
import socket
import time
import argparse

""" Check if port is open, avoid docker-compose race condition """
parser = argparse.ArgumentParser(description='Check if port is open, avoid\
 docker-compose race condition')
parser.add_argument('--service-name', required=True)
parser.add_argument('--ip', required=True)
parser.add_argument('--port', required=True)
args = parser.parse_args()

# Get arguments
service_name = str(args.service_name)
port = int(args.port)
ip = str(args.ip)

# Infinite loop
while True:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    result = sock.connect_ex((ip, port))
    if result == 0:
        print("{0} port is open! Bye!".format(service_name))
        break
    else:
        print("{0} port is not open! I'll check it soon!".format(service_name))
        time.sleep(3)
By the way, this is my Dockerfile for building django-py35:
FROM python:3.5-alpine
MAINTAINER Xin Xiong "xiongxin20008@126.com"
ENV PYTHONUNBUFFERED 1
RUN set -e; \
apk add --no-cache --virtual .build-deps \
gcc \
libc-dev \
linux-headers \
mariadb-dev \
python3-dev \
postgresql-dev \
freetype-dev \
libpng-dev \
g++ \
;
RUN mkdir /djcode
WORKDIR /djcode
ENV REFRESHED_AT 2017-12-25
ADD requirements.txt /djcode/
RUN pip install --no-cache-dir -r /djcode/requirements.txt
RUN pip install uwsgi
ADD . /djcode/ # copy . to /djcode/
EXPOSE 6001
See more details from here: https://github.com/OnlyBelter/django-compose
You should use the container name instead of localhost (or 127.0.0.1) in your settings.py file. Try giving the db service a container name in the docker-compose.yml file using the container_name attribute, and replace the host name in settings.py with the value of container_name. (Make sure that they are on the same network that Docker Compose creates for you.)
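A quick way to confirm that the service/container name resolves inside the Compose network is to look it up from the web container, e.g. via docker-compose exec web python; a tiny sketch (assuming Python is available in that image):
import socket

# "db" is the service name from the docker-compose.yml above; inside the
# Compose network it resolves to the db container's address.
print(socket.gethostbyname("db"))
If the name resolves but the connection still fails, the usual culprit is pointing Django at the host-mapped port (3302) instead of the container port (3306).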
Run the MySQL container with this:
docker run --name mysql-latest \
-p 3306:3306 -p 33060:33060 \
-e MYSQL_ROOT_HOST='%' -e MYSQL_ROOT_PASSWORD='strongpassword' \
-d mysql/mysql-server:latest
Make sure MYSQL_ROOT_HOST='%' is set; it means root can connect from any IP.