How can I control the start sequence of an ASP.NET Web API service and a MySQL DB service? - mysql

I have a problem. I have tried a few methods from Google to control the service start sequence, for example using wait-for-it.sh or writing shell scripts, but none of them work for me.
I'm using macOS, and my project is an ASP.NET Core Web API on .NET 6 that uses Entity Framework to connect and migrate to a MySQL database.
Dockerfile
# Get base SDK image from Microsoft
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /app
# Copy the csproj file and restore any dependencies
COPY *.csproj ./
RUN dotnet restore
# Copy the project files and build our release
COPY . ./
RUN dotnet publish -c Release -o out
# Generate runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
EXPOSE 80
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "SuperHeroAPI.dll"]
docker-compose.yml
version: '3.4'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"
    container_name: super
    depends_on:
      - db
    restart: always
    environment:
      - DBHOST=db
      - ASPNETCORE_ENVIRONMENT=Development
  db:
    image: mysql:8.0.29
    container_name: db
    environment:
      MYSQL_ROOT_PASSWORD: 123456
      MYSQL_DATABASE: apidb
    ports:
      - "3306:3306"
Program.cs
var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddControllers();
builder.Services.AddCors();

var host = builder.Configuration["DBHOST"] ?? "db";
var port = builder.Configuration["DBPORT"] ?? "3306";
var password = builder.Configuration["DBPASSWORD"] ?? "123456";

builder.Services.AddDbContext<DataContext>(options =>
{
    options.UseMySql($"server={host};userid=root;pwd={password};port={port};database=apidb", new MySqlServerVersion("8.0.29"));
});

var dataContext = builder.Services.BuildServiceProvider().GetService<DataContext>();
dataContext.Database.Migrate();

// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddScoped<ISuperHeroDao, SuperHeroImpl>();
builder.Services.AddScoped<SuperHeroService>();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
app.Run();
DataContext.cs
using Microsoft.EntityFrameworkCore;

namespace SuperHeroAPI.Data
{
    public class DataContext : DbContext
    {
        public DataContext(DbContextOptions<DataContext> options) : base(options) { }

        public DbSet<SuperHero> SuperHeroes { get; set; }
    }
}
Based on all the code above, I can finally start my API service, but the API restarts repeatedly until the db service has started successfully. I don't want that: I want the db service to start first and, only once it has succeeded, start the api service, so the api doesn't restart repeatedly.
I'm sorry, I'm not a native English speaker, so maybe I can't express my idea clearly. I would appreciate any useful solutions and ideas.
I expect the db service to start first and succeed before the api service starts. I've tried wait-for-it.sh and shell scripts; neither works.
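One common fix for exactly this (a sketch, assuming the Compose file above; note that the long-form depends_on with condition: service_healthy is not part of the classic version: '3.x' schema, so it needs either the 2.x file format or a Docker Compose version implementing the Compose Specification) is to give db a healthcheck and have api wait for it:

```yaml
services:
  api:
    # ... build, ports, environment as above ...
    depends_on:
      db:
        condition: service_healthy   # start api only after db reports healthy
  db:
    image: mysql:8.0.29
    # ... environment, ports as above ...
    healthcheck:
      # mysqladmin ping exits 0 once the server accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-p123456"]
      interval: 10s
      timeout: 5s
      retries: 10
```

With this in place, the api container is not started until the MySQL server actually accepts connections, so the migration in Program.cs no longer triggers the restart loop.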

Related

Problems connecting Cloud Run Application to Cloud SQL using Spring boot

I am trying to connect a Spring application (using Kotlin and Gradle) to a Google Cloud SQL instance and database. I am getting the error message
java.lang.RuntimeException: [<project-name>:europe-west1:<db-instance>] The Cloud SQL Instance does not exist or your account is not authorized to access it. Please verify the instance connection name and check the IAM permissions for project "<project-name>"
I have followed the guide on how to connect carefully, but to no avail.
Relevant files
src/main/resources/application.yml
server:
  port: ${PORT:8080}
spring:
  liquibase:
    change-log: classpath:liquibase/db.changelog.xml
    contexts: production
  cloud:
    appId: <project-id>
    gcp:
      sql:
        instance-connection-name: <instance-connection-name>
        database-name: <db-name>
  jpa:
    hibernate:
      dialect: org.hibernate.dialect.MySQL8Dialect
      default_schema: <schema>
      show_sql: true
      ddl-auto: none
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    continue-on-error: true
    initialization-mode: always
    url: jdbc:mysql:///<db-name>?cloudSqlInstance=<instance-connection-name>&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user=<user>&password=<password>
    username: <user>
    password: <password>
---
spring:
  config:
    activate:
      on-profile: dev
  jpa:
    hibernate:
      ddl-auto: create-drop
    database-platform: org.hibernate.dialect.H2Dialect
  datasource:
    url: jdbc:h2:mem:mydb
    username: sa
    password: password
    driverClassName: org.h2.Driver
  cloud:
    gcp:
      sql:
        enabled: false
build.gradle.kts
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile
plugins {
    id("org.springframework.boot") version "2.6.5"
    id("io.spring.dependency-management") version "1.0.11.RELEASE"
    kotlin("jvm") version "1.6.10"
    kotlin("plugin.spring") version "1.6.10"
    kotlin("plugin.allopen") version "1.4.32"
    kotlin("plugin.jpa") version "1.4.32"
    kotlin("kapt") version "1.4.32"
}

allOpen {
    annotation("javax.persistence.Entity")
    annotation("javax.persistence.Embeddable")
    annotation("javax.persistence.MappedSuperclass")
}

group = "com.<company>"
version = "0.0.1-SNAPSHOT"
java.sourceCompatibility = JavaVersion.VERSION_17

repositories {
    mavenCentral()
}

dependencies {
    implementation("org.springframework.boot:spring-boot-starter-web:2.6.5")
    implementation("org.springframework.boot:spring-boot-starter-webflux:2.6.5")
    implementation("org.springframework.boot:spring-boot-starter-data-jpa:2.6.5")
    implementation("org.springframework.cloud:spring-cloud-gcp-starter-sql-mysql:1.2.8.RELEASE")
    implementation("org.jetbrains.kotlin:kotlin-reflect:1.6.10")
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8:1.6.10")
    implementation("com.fasterxml.jackson.dataformat:jackson-dataformat-xml:2.13.2")
    implementation("com.fasterxml.jackson.core:jackson-annotations:2.13.2")
    implementation("com.fasterxml.jackson.core:jackson-core:2.13.2")
    implementation("com.fasterxml.jackson.core:jackson-databind:2.13.2.2")
    implementation("com.fasterxml.jackson.module:jackson-module-kotlin:2.13.2")
    implementation("com.fasterxml.jackson.datatype:jackson-datatype-jsr310:2.13.2")
    implementation("org.hibernate:hibernate-core:5.6.7.Final")
    implementation("javax.persistence:javax.persistence-api:2.2")
    implementation("commons-codec:commons-codec:1.15")
    implementation("io.github.microutils:kotlin-logging-jvm:2.1.21")
    implementation("ch.qos.logback:logback-classic:1.2.11")
    implementation("com.google.cloud.sql:mysql-socket-factory-connector-j-8:1.4.4")
    runtimeOnly("com.h2database:h2:2.1.210")
    runtimeOnly("org.springframework.boot:spring-boot-devtools:2.6.5")
    testImplementation("org.springframework.boot:spring-boot-starter-test:2.6.5")
}

tasks.withType<KotlinCompile> {
    kotlinOptions {
        freeCompilerArgs = listOf("-Xjsr305=strict")
        jvmTarget = "17"
    }
}

tasks.withType<Test> {
    useJUnitPlatform()
}
Dockerfile
FROM openjdk:17-alpine
ENV USER=appuser
# <placeholder> Replace context path for your own application
ENV JAVA_HOME=/opt/openjdk-17 \
    HOME=/home/$USER \
    CONTEXT_PATH=/aws-service-baseline
RUN adduser -S $USER
# <placeholder> Add additional packages for the docker container here
RUN apk add --no-cache su-exec
# <placeholder> Replace baseline.jar with your application's JAR file (defined in build.gradle.kts)
COPY Docker/runapp.sh build/libs/<application-name>-0.0.1-SNAPSHOT.jar $HOME/
RUN chmod 755 $HOME/*.sh && \
    chown -R $USER $HOME
WORKDIR /home/$USER
CMD [ "./runapp.sh" ]
Docker/runapp.sh
#!/bin/sh
set -e
# The module to start.
# <placeholder> Replace this with your own modulename (from module-info)
APP_JAR="<application-name>-0.0.1-SNAPSHOT.jar"
JAVA_PARAMS="-XshowSettings:vm"
echo " --- RUNNING $(basename "$0") $(date -u "+%Y-%m-%d %H:%M:%S Z") --- "
set -x
/sbin/su-exec "$USER:1000" "$JAVA_HOME/bin/java" "$JAVA_PARAMS $JAVA_PARAMS_OVERRIDE" -jar -Dserver.port=$PORT "$APP_JAR"
GCP details
I have made sure the SQL instances connection is added to the Cloud Run Revisions. The IAM roles for the compute service account also seem to be right. See images
IAM: https://i.stack.imgur.com/yYaC5.png
Database: https://i.stack.imgur.com/NErad.png
Cloud Run connection https://i.stack.imgur.com/fKTSZ.png
Additional details
When running ./gradlew bootRun on my local machine (with GCP credentials present), the app works properly with an SQL connection. It also works after building the JAR file and running the JAR directly. It does not work out of the box when running in Docker, but if I add the GCP credentials to the Docker container locally, it connects to the database.
Does anyone have any suggestions on what might be wrong? Any help much appreciated!
I have tried connecting locally and locally in a Docker container.
Figured it out! Human error, of course. The Cloud Run service was initially configured with another service account, not the default Compute Engine service account.

FIWARE - IoT Agent - Data for Orion

I have installed FIWARE on my machine (Ubuntu 18.04) and am currently trying to work with the IoT Agent, using HTTPBindings.js. My data is sent via LoRaWAN, and I've changed the parseData function to use my own data "protocol" (id=1&temp=12&humidity=10), which brings me here with two questions for someone more experienced who can help me:
function parseData(req, res, next) {
  let data;
  let error;
  let payload;
  let obj;
  try {
    let newPayload = new Buffer.from(payload, "base64").toString("ascii");
    var ps = newPayload.split("&").reduce((accum, x) => {
      const kv = x.split("=");
      return { ...accum, ...{ [kv[0]]: kv[1] } };
    }, {});
    data = ulParser.parse(newPayload.replace(/&/g, "|").replace(/=/g, "|"));
  } catch (e) {
    error = e;
  }
  if (error) {
    next(error);
  } else {
    req.ulPayload = data;
    config.getLogger().debug(context, 'Parsed data: [%j]', data);
    next();
  }
}
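For what it's worth, the key/value parsing step above can be exercised on its own with a hard-coded sample payload (a sketch; in the real handler the payload arrives base64-encoded in the request):

```javascript
// Split "id=1&temp=12&humidity=10" into an object, as the reduce above does
const payload = "id=1&temp=12&humidity=10"; // sample measure in the question's format
const parsed = payload.split("&").reduce((accum, x) => {
  const kv = x.split("=");
  return { ...accum, [kv[0]]: kv[1] };
}, {});
console.log(parsed); // { id: '1', temp: '12', humidity: '10' }
```

This confirms the splitting logic itself; the remaining work in the handler is feeding the result through the Ultralight parser so the agent can forward it to Orion.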
After changing this function, I can't get the data to be updated in orion/v2/entities. Could someone explain to me how this works?
How can I add a proxy to use in Wirecloud? I've created one using the FIWARE servers, but testing on my own machine, I don't have this.
Thank you in advance.
Configuring NGSI Proxy
The ngsi-proxy is configured using environment variables and a port.
ngsi-proxy:
  image: fiware/ngsiproxy:1.2.0
  hostname: ngsi-proxy
  container_name: wc-ngsi-proxy
  networks:
    default:
      ipv4_address: 172.18.1.14
  expose:
    - "8100"
  ports:
    - "8100:8100"
  environment:
    - PORT=8100
    - TRUST_PROXY_HEADERS=0
The NGSI proxy config in the Wirecloud widget is then http://<host>:<port> - in this case http://ngsi-proxy:8100
Testing HTTP Binding connectivity
Incoming HTTP measures can be controlled by the IOTA_HTTP_PORT Environment variable:
iot-agent:
  image: fiware/iotagent-ul:${ULTRALIGHT_VERSION}
  hostname: iot-agent
  container_name: fiware-iot-agent
  depends_on:
    - mongo-db
    - orion
  networks:
    - default
  ports:
    - "4041:4041"
    - "7896:7896"
  expose:
    - "7896"
  environment:
    # ..etc
    - IOTA_NORTH_PORT=4041
    - IOTA_LOG_LEVEL=DEBUG
    - IOTA_HTTP_PORT=7896
    - IOTA_PROVIDER_URL=http://iot-agent:4041
If you ramp up the debug and expose the relevant port, you should be able to send measures directly to your custom IoT Agent and see some sort of response (probably an error) - this can help track down your coding issue.
You can always add additional debug logging to the custom IoT Agent to see how the conversion is working.
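For example, with the stack above running, a test measure in the question's key=value format could be posted straight to the HTTP binding with curl (a sketch; the API key and device id are placeholders that must match a provisioned service group and device, and note the stock Ultralight agent expects attr|value syntax, so this only works against the customised parseData):

```shell
# POST a measure to the IoT Agent's exposed south port (7896)
curl -X POST \
  'http://localhost:7896/iot/d?k=<api-key>&i=<device-id>' \
  -H 'Content-Type: text/plain' \
  -d 'id=1&temp=12&humidity=10'
```

With IOTA_LOG_LEVEL=DEBUG set, the agent's logs should then show either the parsed measure or the exact error thrown inside the custom binding.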

How to run Sequelize migrations inside Docker

I'm trying to dockerize my NodeJS API together with a MySQL image. Before the initial run, I want to run Sequelize migrations and seeds so the tables are up and ready to be served.
Here's my docker-compose.yaml:
version: '3.8'
services:
  mysqldb:
    image: mysql
    restart: unless-stopped
    environment:
      MYSQL_ROOT_USER: myuser
      MYSQL_ROOT_PASSWORD: mypassword
      MYSQL_DATABASE: mydb
    ports:
      - '3306:3306'
    networks:
      - app-connect
    volumes:
      - db-config:/etc/mysql
      - db-data:/var/lib/mysql
      - ./db/backup/files/:/data_backup/data
  app:
    build:
      context: .
      dockerfile: ./Dockerfile
    image: node-mysql-app
    depends_on:
      - mysqldb
    ports:
      - '3030:3030'
    networks:
      - app-connect
    stdin_open: true
    tty: true
volumes:
  db-config:
  db-data:
networks:
  app-connect:
    driver: bridge
Here's my app's Dockerfile:
FROM node:lts-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3030
ENV PORT 3030
ENV NODE_ENV docker
RUN npm run db:migrate:up
RUN npm run db:seeds:up
CMD [ "npm", "start" ]
And here's my default.db.json that the Sequelize migration uses (shortened):
{
  "development": {
  },
  "production": {
  },
  "docker": {
    "username": "myuser",
    "password": "mypassword",
    "database": "mydb",
    "host": "mysqldb",
    "port": "3306",
    "dialect": "mysql"
  }
}
Upon running compose up, the DB installs well and the image deploys, but when it reaches RUN npm run db:migrate:up (which translates into npx sequelize-cli db:migrate) I get the error:
npx: installed 81 in 13.108s
Sequelize CLI [Node: 14.17.0, CLI: 6.2.0, ORM: 6.6.2]
Loaded configuration file "default.db.json".
Using environment "docker".
ERROR: getaddrinfo EAI_AGAIN mysqldb
npm ERR! code ELIFECYCLE
npm ERR! errno 1
If I change the "host" in the default.db.json to "127.0.0.1", I get ERROR: connect ECONNREFUSED 127.0.0.1:3306 in place of the ERROR: getaddrinfo EAI_AGAIN mysqldb.
What am I doing wrong, and what host should I specify so the app can see the MySQL container? Should I remove the network? Should I change ports? (I tried combinations of both, to no avail so far.)
I solved my issue by using Docker Compose Wait. Essentially, it adds a wait loop that polls the DB container and, only when it's up, runs migrations and seeds the DB.
My next problem was that those seeds ran every time the container was run. I solved that by instead running a script that runs the seeds and touches a semaphore file; if the file already exists, it skips the seeds.
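That seed-once guard can be sketched as a small shell wrapper (the semaphore path is a placeholder, and the echo stands in for the real seed command such as npx sequelize-cli db:seed:all):

```shell
#!/bin/sh
# seed-once.sh: run the seed step only on the first container start
SEMAPHORE="${SEMAPHORE:-/tmp/.seeded}"   # hypothetical marker file location
if [ ! -f "$SEMAPHORE" ]; then
  echo "seeding"            # placeholder for the real seed command
  touch "$SEMAPHORE"        # mark seeding as done for future starts
else
  echo "seeds already applied, skipping"
fi
```

For a containerised setup, the semaphore should live on a persistent volume so it survives container restarts.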
The following configuration worked for me. I am adding the .env and Sequelize configuration along with the MySQL database and Docker setup. And finally, don't forget to run docker-compose up --build. Cheers 🎁
.env
DB_NAME="testdb"
DB_USER="root"
DB_PASS="root"
DB_HOST="mysql"
.sequelizerc (now we can use config.js rather than config.json for Sequelize)
const path = require('path');

module.exports = {
  'config': path.resolve('config', 'config.js')
}
config.js
require("dotenv").config();

module.exports = {
  development: {
    username: process.env.DB_USER,
    password: process.env.DB_PASS,
    database: process.env.DB_NAME,
    host: process.env.DB_HOST,
    dialect: "mysql"
  },
  test: {
    username: process.env.DB_USER,
    password: process.env.DB_PASS,
    database: process.env.DB_NAME,
    host: process.env.DB_HOST,
    dialect: "mysql"
  },
  production: {
    username: process.env.DB_USER,
    password: process.env.DB_PASS,
    database: process.env.DB_NAME,
    host: process.env.DB_HOST,
    dialect: "mysql"
  }
}
database-connection with sequelize
import Sequelize from 'sequelize';
import dbConfig from './config/config';

const conf = dbConfig.development;

const sequelize = new Sequelize(
  conf.database,
  conf.username,
  conf.password,
  {
    host: conf.host,
    dialect: "mysql",
    operatorsAliases: 0,
    logging: 0
  }
);
sequelize.sync();

(async () => {
  try {
    await sequelize.authenticate();
    console.log("Database connection setup successfully!");
  } catch (error) {
    console.log("Unable to connect to the database", error);
  }
})();

export default sequelize;
global.sequelize = sequelize;
docker-compose.yaml
version: "3.8"
networks:
  proxy:
    name: proxy
services:
  mysql:
    image: mysql
    networks:
      - proxy
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=testdb
    healthcheck:
      test: "mysql -uroot -p$$MYSQL_ROOT_PASSWORD -e 'SHOW databases'"
      interval: 10s
      retries: 3
  api:
    build: ./node-backend
    networks:
      - proxy
    ports:
      - 3000:3000
    depends_on:
      mysql:
        condition: service_healthy
Dockerfile
FROM node:16
WORKDIR /api
COPY . /api
RUN npm i
EXPOSE 3000
RUN chmod +x startup.sh
RUN npm i -g sequelize-cli
RUN npm i -g nodemon
ENTRYPOINT [ "./startup.sh" ]
startup.sh
#!/bin/bash
npm run migrate-db
npm run start
I found a really clean solution and wanted to share it. First of all, I used docker-compose, so if you are using only Docker it might not help.
First things first, I created a Dockerfile that looks like this. I am using TypeScript, so if you are using JS you don't need to install TypeScript and build!
FROM node:current-alpine
WORKDIR /app
COPY . ./
COPY .env.development ./.env
RUN npm install
RUN npm install -g typescript
RUN npm install -g sequelize-cli
RUN npm install -g nodemon
RUN npm run build
RUN rm -f .npmrc
RUN cp -R res/ dist/
RUN chmod 755 docker/entrypoint.sh
EXPOSE 8000
EXPOSE 3000
EXPOSE 9229
CMD ["sh", "-c","--","echo 'started';while true; do sleep 1000; done"]
Up to here it is standard. In order to do things in the right order, I need a docker-compose file and an entrypoint file. The entrypoint file is a file that runs when your containers start. Here is the docker-compose file.
version: '3'
services:
  app:
    build:
      context: ..
      dockerfile: docker/Dockerfile.development
    entrypoint: docker/development-entrypoint.sh
    ports:
      - 3000:3000
    env_file:
      - ../.env.development
    depends_on:
      - postgres
  postgres:
    image: postgres:alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=test
    volumes:
      - ./docker_postgres_init.sql:/docker-entrypoint-initdb.d/docker_postgres_init.sql
As you can see, I am using PostgreSQL for the DB. My Dockerfile, docker-compose file, and entrypoint file are all in a folder called docker, which is why the paths start with docker; change them according to your file structure. Last and best is the entrypoint file. It is really simple.
#!/bin/sh
echo "Starting get ready!!!"
sequelize db:migrate
nodemon ./dist/index.js
Of course, change the path of the index.js file according to your settings.
Hope it helps!

ECONNREFUSED when trying to connect NodeJS app to MySQL image via docker-compose

I have a project that uses NodeJS as a server (with ExpressJS) and MySQL to handle databases. To load them both together, I am using Docker. Although this project includes a ReactJS client (I have a client folder for React and a server folder for NodeJS), I have tested communication between the server and client, and it works. Here is the code that pertains to both the server and mysql services:
docker-compose.yml
mysql:
  image: mysql:5.7
  environment:
    MYSQL_HOST: localhost
    MYSQL_DATABASE: sampledb
    MYSQL_USER: gfcf14
    MYSQL_PASSWORD: xxxx
    MYSQL_ROOT_PASSWORD: root
  ports:
    - 3307:3306
  restart: unless-stopped
  volumes:
    - /var/lib/mysql
    - ./db/greendream.sql:/docker-entrypoint-initdb.d/greendream.sql
.
.
.
server:
  build: ./server
  depends_on:
    - mysql
  expose:
    - 8000
  environment:
    API_HOST: "http://localhost:3000/"
    APP_SERVER_PORT: 8000
  ports:
    - 8000:8000
  volumes:
    - ./server:/app
  links:
    - mysql
  command: yarn start
Then there is the Dockerfile for the server:
FROM node:10-alpine
RUN mkdir -p /app
WORKDIR /app
COPY package.json /app
COPY yarn.lock /app
RUN yarn install
COPY . /app
CMD ["yarn", "start"]
In the server's package.json, the script start is simply this: "start": "nodemon index.js"
And the file index.js that gets executed is this:
const express = require('express');
const cors = require('cors');
const mysql = require('mysql');

const app = express();

const con = mysql.createConnection({
  host: 'localhost',
  user: 'gfcf14',
  password: 'xxxx',
  database: 'sampledb',
});

app.use(cors());

app.listen(8000, () => {
  console.log('App server now listening on port 8000');
});

app.get('/test', (req, res) => {
  con.connect(err => {
    if (err) {
      res.send(err);
    } else {
      res.send(req.query);
    }
  })
});
So all I want to do for now is confirm that a connection takes place. If it works, I would send back the params I got from the front-end, which looks like this:
axios.get('http://localhost:8000/test', {
  params: {
    test: 'hi',
  },
}).then((response) => {
  console.log(response.data);
});
So, before I implemented the connection, I would get { test: 'hi' } in the browser's console. I expect to get that as soon as the connection is successful, but what I get instead is this:
{
address: "127.0.0.1"
code: "ECONNREFUSED"
errno: "ECONNREFUSED"
fatal: true
port: 3306
syscall: "connect"
__proto__: Object
}
I thought that maybe I have the wrong privileges, but I also tried it using root as user and password, but I get the same. Weirdly enough, if I refresh the page I don't get an ECONNREFUSED, but a PROTOCOL_ENQUEUE_AFTER_FATAL_ERROR (with a fatal: false). Why would this happen if I am using the right credentials? Please let me know if you have spotted something I may have missed
In your mysql.createConnection method, you need to provide the MySQL host. The MySQL host is not localhost, as MySQL has its own container with its own IP. The best way to achieve this is to externalize your MySQL host and allow docker-compose to resolve the mysql service name (in your case, mysql) to its internal IP, which is what we need. Basically, your NodeJS server will connect to the internal IP of the mysql container.
Externalize the mysql host in nodejs server:
const con = mysql.createConnection({
  host: process.env.MYSQL_HOST_IP,
  ...
});
Add this in your server service in docker-compose:
environment:
  MYSQL_HOST_IP: mysql # the name of the mysql service in your docker-compose file, which will be resolved to the internal IP of the mysql container
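Putting it together, the connection options could look like the following sketch (the fallback values mirror the compose file above; note the container-internal port is 3306 even though the host maps it to 3307):

```javascript
// Build the MySQL connection options from environment variables,
// falling back to the compose service name and credentials shown above.
const dbOptions = {
  host: process.env.MYSQL_HOST_IP || 'mysql', // Compose DNS resolves the service name
  port: 3306, // container-internal port (mapped to 3307 on the host)
  user: process.env.MYSQL_USER || 'gfcf14',
  password: process.env.MYSQL_PASSWORD || 'xxxx',
  database: process.env.MYSQL_DATABASE || 'sampledb',
};
console.log(dbOptions.host);
// In the server: const con = mysql.createConnection(dbOptions);
```

The key design point is that the hostname is the Compose service name, not localhost and not the host-mapped port, because the server container connects over the Compose network.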

Flask + MySQL + PHP + Docker-Compose = Pain

I'm trying to build a simple To-do app with docker-compose, having 3 containers: one Flask REST API with SQLAlchemy and Marshmallow, one PHP page to call my REST API, and one MySQL database. The error I'm getting is:
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003,
"Can't connect to MySQL server on 'db' ([Errno 111] Connection refused)")
(Background on this error at: http://sqlalche.me/e/e3q8)
And I can't call my REST API from my PHP container.
Here is the code with the important lines:
my docker-compose.yml:
version: '3.1' # compose version
services:
  flaskapi-service:
    build:
      context: ./restapi # relative to docker-compose file directory
      dockerfile: DOCKERFILE
    volumes:
      - ./restapi:/usr/src/app # mounting
    ports:
      - 5001:5001 # host:container
    depends_on:
      - db
    restart: on-failure
  db:
    image: mysql:latest
    restart: always
    environment:
      MYSQL_USER: username
      MYSQL_PASSWORD: password
      MYSQL_DATABASE: todo
      MYSQL_ROOT_PASSWORD: password
    ports:
      - "3306:3306"
  php-page:
    build:
      context: ./frontend
      dockerfile: DOCKERFILE
    volumes:
      - ./frontend:/var/www/html # mount
    ports:
      - 5002:80 # host:container
    depends_on:
      - flaskapi-service
my rest-api:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_marshmallow import Marshmallow
import pymysql

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://username:password@db:3306/todo'

# order matters: ORM before serialization tool
db = SQLAlchemy(app)
ma = Marshmallow(app)
how im calling the rest-api from php:
<?php
$date_time = date('Y-m-d H:i:s');
$data = array(
  'description' => $_POST['description'],
  'deadline' => $_POST['deadline'],
  'createdAt' => $date_time,
  'finished' => 'false'
);
$encodedJSON = json_encode($data);

// Initiate cURL handle
$ch = curl_init();
curl_setopt_array($ch, array(
  CURLOPT_URL => 'http://flaskapi-service/todo',
  CURLOPT_POST => 1,
  CURLOPT_POSTFIELDS => $encodedJSON,
  CURLOPT_HTTPHEADER => array('Content-Type: application/json')
));
$result = curl_exec($ch);
echo $result;
curl_close($ch);
header('Location: index.php');
?>
Any help is appreciated, and the correct answer will be marked as accepted.
Instead of "db" in your SQLALCHEMY_DATABASE_URI, you want the actual address at which you can find the db container from your server.
You can, for example, find out what your docker machine IP is using
$ docker-machine ip
And then replace that IP in your URI, e.g.
'mysql+pymysql://username:password@192.168.0.123:3306/todo'
They might even be on the same host if all containers are on the same machine, so give this a shot:
'mysql+pymysql://username:password@localhost:3306/todo'
You can find other ways to determine the IP address of your MySQL instance in this discussion.
I solved this by downgrading from the latest MySQL image to 5.7:
mysql:5.7
I don't have any clue why this works, since the returned error must be something like a misconfiguration of my Docker networking and shouldn't have anything to do with MySQL. Maybe this helps someone.
Edit: always check whether your database container is up before trying to connect, since docker-compose starts all containers simultaneously.
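Since Compose starts all containers simultaneously, one generic way to cope on the application side is a small retry helper that keeps attempting the connection until MySQL is ready (a sketch, not specific to this Flask app; the connect callable would wrap e.g. the SQLAlchemy engine connect or a pymysql connection):

```python
import time


def wait_for(connect, attempts=10, delay=2.0):
    """Call `connect` until it succeeds, sleeping `delay` seconds between tries."""
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as exc:  # e.g. OperationalError while the db boots
            last_error = exc
            time.sleep(delay)
    raise last_error


# Hypothetical usage: a flaky connection that succeeds on the third try
state = {"calls": 0}

def fake_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("db not ready yet")
    return "connected"

print(wait_for(fake_connect, attempts=5, delay=0.01))  # connected
```

In the real app, the return value would be the live connection, and the raised error after the final attempt surfaces the underlying failure instead of masking it.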