Spring Boot app scale-up with MySQL

I have created a Spring Boot app with a MySQL database, then Dockerised and deployed it. Below is my docker-compose.yml:
version: '2'
services:
  seat_reservation_service:
    image: springio/seat_reservation_service
    ports:
      - "8090:8090"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
  seat_reservation_sql:
    image: mysql:5.7
    ports:
      - 33306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=seat-reservation-query
This is my Spring application.yml file:
server:
  port: 8090
spring:
  profiles: docker
  main:
    banner-mode: 'off'
  datasource:
    url: jdbc:mysql://seat_reservation_sql:3306/seat-reservation-query?useSSL=false
    username: root
    password: root
    validation-query: SELECT 1
    test-on-borrow: true
  jpa:
    show_sql: false
    hibernate:
      ddl-auto: update
      dialect: org.hibernate.dialect.MySQL5
    properties:
      hibernate:
        cache:
          use_second_level_cache: false
          use_query_cache: false
        generate_statistics: false
  data:
    rest:
      base-path: /api/
  rabbitmq:
    host: rabbitmq-1
    username: test
    password: password
logging:
  level:
    org.springframework: false
    org.hibernate: ERROR
  path: logs/prod/
axon:
  amqp:
    exchange: SeatReserveEvents
  eventhandling:
    processors:
      statistics.source: statisticsQueue
My problem is that I need more replicas of the seat_reservation_service service. If I scale seat_reservation_service up, all replicas refer to the same database. According to microservice architecture I need a separate database for each replica. How can I do that?
If I use an in-memory database it works.

According to microservice architecture I need a separate database for each replica. How can I do that?
This "rule" refers to microservice types, not to instances of the same microservice. So you can scale seat_reservation_service and seat_reservation_sql separately. For example, you could have 4 instances of seat_reservation_service and 3 instances of seat_reservation_sql (1 master and 2 slaves, or a Galera cluster).
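As a rough sketch (assuming a docker-compose version recent enough to support --scale): drop the fixed host port from the service, since every replica would otherwise try to bind host port 8090, and scale from the CLI. All replicas keep resolving the same seat_reservation_sql service name:
services:
  seat_reservation_service:
    image: springio/seat_reservation_service
    # no host port mapping here: replicas cannot all bind host port 8090,
    # so put a reverse proxy in front or publish one port per replica
    environment:
      - SPRING_PROFILES_ACTIVE=docker
# scale with: docker-compose up --scale seat_reservation_service=4
# (older docker-compose releases: docker-compose scale seat_reservation_service=4)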

Related

Spring in Kubernetes tries to reach DB at pod IP

I'm facing an issue while deploying a Spring API which should connect to a MySQL database.
I am deploying a standalone MySQL using the [bitnami helm chart][1] with the following values:
primary:
  service:
    type: ClusterIP
  persistence:
    enabled: true
    size: 3Gi
    storageClass: ""
  extraVolumes:
    - name: mysql-passwords
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: mysql-spc
  extraVolumeMounts:
    - name: mysql-passwords
      mountPath: "/vault/secrets"
      readOnly: true
  configuration: |-
    [mysqld]
    default_authentication_plugin=mysql_native_password
    skip-name-resolve
    explicit_defaults_for_timestamp
    basedir=/opt/bitnami/mysql
    plugin_dir=/opt/bitnami/mysql/lib/plugin
    port=3306
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    datadir=/bitnami/mysql/data
    tmpdir=/opt/bitnami/mysql/tmp
    max_allowed_packet=16M
    bind-address=0.0.0.0
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
    log-error=/opt/bitnami/mysql/logs/mysqld.log
    character-set-server=UTF8
    collation-server=utf8_general_ci
    slow_query_log=0
    slow_query_log_file=/opt/bitnami/mysql/logs/mysqld.log
    long_query_time=10.0
    [client]
    port=3306
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    default-character-set=UTF8
    plugin_dir=/opt/bitnami/mysql/lib/plugin
    [manager]
    port=3306
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
auth:
  createDatabase: true
  database: api-db
  username: api
  usePasswordFiles: true
  customPasswordFiles:
    root: /vault/secrets/db-root-pwd
    user: /vault/secrets/db-pwd
    replicator: /vault/secrets/db-replica-pwd
serviceAccount:
  create: false
  name: social-app
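For context, values like these are typically applied with helm install; the release name below is an assumption, inferred from the social-mysql.default.svc.cluster.local hostname the deployment uses:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install social-mysql bitnami/mysql -f values.yaml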
I use the following deployment, which runs a Spring API (with Vault secret injection):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: social-api
  name: social-api
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: social-api
  template:
    metadata:
      labels:
        app: social-api
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: 'social'
    spec:
      serviceAccountName: social-app
      containers:
        - image: quay.io/paulbarrie7/social-network-api
          name: social-network-api
          command:
            - java
          args:
            - -jar
            - "-DSPRING_DATASOURCE_URL=jdbc:mysql://social-mysql.default.svc.cluster.local/api-db?useSSL=false"
            - "-DSPRING_DATASOURCE_USERNAME=api"
            - "-DSPRING_DATASOURCE_PASSWORD=$(cat /secrets/db-pwd)"
            - "-DJWT_SECRET=$(cat /secrets/jwt-secret)"
            - "-DS3_BUCKET=$(cat /secrets/s3-bucket)"
            - -Dlogging.level.root=DEBUG
            - -Dspring.datasource.hikari.maximum-pool-size=5
            - -Dlogging.level.com.zaxxer.hikari.HikariConfig=DEBUG
            - -Dlogging.level.com.zaxxer.hikari=TRACE
            - social-network-api-1.0-SNAPSHOT.jar
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: aws-credentials
              mountPath: "/root/.aws"
              readOnly: true
            - name: java-secrets
              mountPath: "/secrets"
              readOnly: true
      volumes:
        - name: aws-credentials
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: aws-secret-spc
        - name: java-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: java-spc
The credentials are OK: when I run an interactive mysql pod I can connect to the database. However, name resolution for the Spring API seems wrong, since I get the error:
java.sql.SQLException: Access denied for user 'api'@'10.24.0.194' (using password: YES)
which is confusing, since 10.24.0.194 is the API pod address and not the mysql pod or service address, and I can't work out why.
Any idea?
[1]: https://artifacthub.io/packages/helm/bitnami/mysql
Thanks to David's suggestion I succeeded in solving my problem.
Actually there were two issues in my configs.
First, the secrets were indeed misinterpreted: Kubernetes does not run a shell for args, so the $(cat /secrets/...) expressions were passed to the JVM literally instead of being substituted. I therefore changed my command/args to run through a shell:
command:
  - "/bin/sh"
  - "-c"
args:
  - |
    DB_USER=$(cat /secrets/db-user)
    DB_PWD=$(cat /secrets/db-pwd)
    JWT=$(cat /secrets/jwt-secret)
    BUCKET=$(cat /secrets/s3-bucket)
    java -jar \
      -DSPRING_DATASOURCE_URL=jdbc:mysql://social-mysql.default.svc.cluster.local/api-db?useSSL=false \
      "-DSPRING_DATASOURCE_USERNAME=$DB_USER" \
      "-DSPRING_DATASOURCE_PASSWORD=$DB_PWD" \
      "-DJWT_SECRET=$JWT" \
      "-DS3_BUCKET=$BUCKET" \
      -Dlogging.level.root=DEBUG \
      social-network-api-1.0-SNAPSHOT.jar
And the memory resources were also set too low, so I changed them to:
resources:
  limits:
    cpu: 100m
    memory: 400Mi
  requests:
    cpu: 100m
    memory: 400Mi

Kubernetes - NodeJs MySQL pod does not connect with MySQL pod

I have a MySQL pod up and running. I opened a terminal for this pod and created a database and a user.
create database demodb;
create user demo identified by 'Passw0rd';
grant all on demodb.* to 'demo';
I have this Deployment to launch a NodeJs client for the MySQL pod. This is on my local minikube installation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demos
spec:
  selector:
    matchLabels:
      app: demos
  template:
    metadata:
      labels:
        app: demos
    spec:
      containers:
        - name: demo-db
          image: 172.30.1.1:5000/demo/db-demos:0.1.0
          resources:
            limits:
              memory: "128Mi"
              cpu: "200m"
          ports:
            - containerPort: 4000
              name: probe-port
---
apiVersion: v1
kind: Service
metadata:
  name: demos
spec:
  selector:
    app: demos
  ports:
    - name: probe-port
      port: 4001
      targetPort: probe-port
The Dockerfile for the image passes the environment variables for the NodeJs client to use.
FROM node:alpine
ADD . .
RUN npm i
WORKDIR /app
ENV PROBE_PORT 4001
ENV MYSQL_HOST "mysql.demo.svc"
ENV MYSQL_PORT "3306"
ENV MYSQL_USER "demo"
ENV MYSQL_PASSWORD "Passw0rd"
ENV MYSQL_DATABASE "demodb"
CMD ["node", "index.js"]
And, the NodeJs client connects as follows.
const mysql = require('mysql')
const connection = mysql.createConnection({
  host: process.env.MYSQL_HOST,
  port: process.env.MYSQL_PORT,
  user: process.env.MYSQL_USER,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_DATABASE
});
connection.connect((err) => {
  if (err) {
    console.log('Database connection failed. ' + err.message)
  } else {
    console.log('Database connected.')
  }
});
The database connection keeps failing with the message Database connection failed. connect ENOENT tcp://172.30.88.64:3306. The TCP/IP address shown in this message is correct, i.e. it matches the service mysql.demo.svc of the running MySQL pod.
In the MySQL configuration files I don't see bind-address, which should mean that MySQL accepts connections from everywhere. I am creating the user without the location qualifier, i.e. the user is 'demo'@'%'. The connection is obviously not through sockets, as I am passing host and port values for the connection.
What am I missing?
I got it working as follows.
const mysql = require('mysql')
const connection = mysql.createConnection({
  host: process.env.MYSQL_HOST,
  // port: process.env.MYSQL_PORT,
  user: process.env.MYSQL_USER,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_DATABASE
});
connection.connect((err) => {
  if (err) {
    console.log('Database connection failed. ' + err.message)
  } else {
    console.log('Database connected.')
  }
});
That's right; I removed the port number from the options. The likely reason this works: for a Service named mysql, Kubernetes injects Docker-link-style environment variables such as MYSQL_PORT=tcp://172.30.88.64:3306 into the pod, overriding the plain "3306" from the Dockerfile, which is exactly the tcp://... string in the error. This example from RedHat is the closest I have seen.
Also, I created the user with mysql_native_password, as that is the only plugin mechanism supported by the NodeJs client. See here.
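For reference, creating such a user explicitly might look like the following (user, host and password taken from the question; the IDENTIFIED WITH clause only matters on MySQL 8+, where caching_sha2_password is the default plugin):
CREATE USER 'demo'@'%' IDENTIFIED WITH mysql_native_password BY 'Passw0rd';
GRANT ALL ON demodb.* TO 'demo'@'%';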

How to set up kubernetes for Spring and MySql

I followed this tutorial: https://medium.com/better-programming/kubernetes-a-detailed-example-of-deployment-of-a-stateful-application-de3de33c8632
I created a mysql pod and a backend pod, but the application gets the error com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure.
pod mysql: Running
pod backend: CrashLoopBackOff
Dockerfile
FROM openjdk:14-ea-8-jdk-alpine3.10
ADD target/credit-0.0.1-SNAPSHOT.jar .
EXPOSE 8200
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-Dspring.profiles.active=container","-jar","/credit-0.0.1-SNAPSHOT.jar"]
credit-deployment.yml
# Define 'Service' to expose backend application deployment
apiVersion: v1
kind: Service
metadata:
  name: to-do-app-backend
spec:
  selector: # backend application pod labels should match these
    app: to-do-app
    tier: backend
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8080
  type: LoadBalancer # use NodePort if you are not running Kubernetes on a cloud
---
# Configure 'Deployment' of backend application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: to-do-app-backend
  labels:
    app: to-do-app
    tier: backend
spec:
  replicas: 2 # number of replicas of the back-end application to be deployed
  selector:
    matchLabels: # backend application pod labels should match these
      app: to-do-app
      tier: backend
  template:
    metadata:
      labels: # must match 'Service' and 'Deployment' labels
        app: to-do-app
        tier: backend
    spec:
      containers:
        - name: to-do-app-backend
          image: gitim21/credit_repo:1.0 # docker image of backend application
          env: # setting environment variables
            - name: DB_HOST # database host address from configMap
              valueFrom:
                configMapKeyRef:
                  name: db-conf # name of configMap
                  key: host
            - name: DB_NAME # database name from configMap
              valueFrom:
                configMapKeyRef:
                  name: db-conf
                  key: name
            - name: DB_USERNAME # database username from Secret
              valueFrom:
                secretKeyRef:
                  name: db-credentials # Secret name
                  key: username
            - name: DB_PASSWORD # database password from Secret
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
          ports:
            - containerPort: 8080
application.yml
spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      idle-timeout: 10000
    platform: mysql
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
    url: jdbc:mysql://${DB_HOST}/${DB_NAME}
  jpa:
    hibernate:
      naming:
        physical-strategy: org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
I placed the application.yml file in the application's "resources" folder.
EDIT
Name: mysql-64c7df597c-s4gbt
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/192.168.8.160
Start Time: Thu, 12 Sep 2019 17:50:18 +0200
Labels: app=mysql
pod-template-hash=64c7df597c
tier=database
Annotations: <none>
Status: Running
IP: 172.17.0.5
Controlled By: ReplicaSet/mysql-64c7df597c
Containers:
mysql:
Container ID: docker://514d3f5af76f5e7ac11f6bf6e36b44ee4012819dc1cef581829a6b5b2ce7c09e
Image: mysql:5.7
Image ID: docker-pullable://mysql@sha256:1a121f2e7590f949b9ede7809395f209dd9910e331e8372e6682ba4bebcc020b
Port: 3306/TCP
Host Port: 0/TCP
Args:
--ignore-db-dir=lost+found
State: Running
Started: Thu, 12 Sep 2019 17:50:19 +0200
Ready: True
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'password' in secret 'db-root-credentials'> Optional: false
MYSQL_USER: <set to the key 'username' in secret 'db-credentials'> Optional: false
MYSQL_PASSWORD: <set to the key 'password' in secret 'db-credentials'> Optional: false
MYSQL_DATABASE: <set to the key 'name' of config map 'db-conf'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rgsmp (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-rgsmp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rgsmp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49m default-scheduler Successfully assigned default/mysql-64c7df597c-s4gbt to minikube
Normal Pulled 49m kubelet, minikube Container image "mysql:5.7" already present on machine
Normal Created 49m kubelet, minikube Created container mysql
Normal Started 49m kubelet, minikube Started container mysql
Name: to-do-app-backend-8669b5467-hrr9q
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/192.168.8.160
Start Time: Thu, 12 Sep 2019 18:27:45 +0200
Labels: app=to-do-app
pod-template-hash=8669b5467
tier=backend
Annotations: <none>
Status: Running
IP: 172.17.0.7
Controlled By: ReplicaSet/to-do-app-backend-8669b5467
Containers:
to-do-app-backend:
Container ID: docker://1eb8453939710aed7a93cddbd5046f49be3382858aa17d5943195207eaeb3065
Image: gitim21/credit_repo:1.0
Image ID: docker-pullable://gitim21/credit_repo@sha256:1fb2991394fc59f37068164c72263749d64cb5c9fe741021f476a65589f40876
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 12 Sep 2019 18:51:25 +0200
Finished: Thu, 12 Sep 2019 18:51:36 +0200
Ready: False
Restart Count: 9
Environment:
DB_HOST: <set to the key 'host' of config map 'db-conf'> Optional: false
DB_NAME: <set to the key 'name' of config map 'db-conf'> Optional: false
DB_USERNAME: <set to the key 'username' in secret 'db-credentials'> Optional: false
DB_PASSWORD: <set to the key 'password' in secret 'db-credentials'> Optional: false
DB_PORT: <set to the key 'port' in secret 'db-credentials'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rgsmp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-rgsmp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rgsmp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned default/to-do-app-backend-8669b5467-hrr9q to minikube
Normal Pulled 23m (x5 over 25m) kubelet, minikube Container image "gitim21/credit_repo:1.0" already present on machine
Normal Created 23m (x5 over 25m) kubelet, minikube Created container to-do-app-backend
Normal Started 23m (x5 over 25m) kubelet, minikube Started container to-do-app-backend
Warning BackOff 50s (x104 over 25m) kubelet, minikube Back-off restarting failed container
First and foremost, make sure that you fulfil all the requirements described in the article.
When deployment objects (e.g. pods, services) are created, environment variables are injected from the configMaps and secrets created earlier. This deployment uses the image kubernetesdemo/to-do-app-backend, which is created in step one. Make sure you created the configMap and secrets beforehand; otherwise, delete the objects created during deployment, create the configMap and secret, and then run the deployment config file once again. A sketch of both objects follows.
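A minimal sketch of those two objects, using the names and keys the deployment references (all values are placeholders to replace with your own):
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-conf            # referenced by configMapKeyRef in the deployment
data:
  host: mysql              # placeholder: the name of your MySQL Service
  name: testdb             # placeholder: your database name
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials     # referenced by secretKeyRef in the deployment
type: Opaque
stringData:
  username: testuser       # placeholder
  password: testpassword   # placeholder
Apply both (kubectl apply -f) before creating the deployment.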
Another possibility: if you get the
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
error, it means that the DB isn't reachable at all. This can have one or more of the following causes:
1. IP address or hostname in the JDBC URL is wrong.
2. Hostname in the JDBC URL is not recognized by the local DNS server.
3. Port number is missing or wrong in the JDBC URL.
4. DB server is down.
5. DB server doesn't accept TCP/IP connections.
6. DB server has run out of connections.
7. Something in between Java and the DB is blocking connections, e.g. a firewall or proxy.
I assume that if your mysql pod is running, your DB server is running, so cause 4 (DB server is down) can be ruled out.
To solve the one or the other, follow these pieces of advice:
1. Verify and test them with ping. Refresh DNS, or use the IP address in the JDBC URL instead (see the JDBC URL sketch after this list).
2. Check if it is based on the my.cnf of the MySQL DB.
3. Start the DB once again. Check if mysqld was started without the --skip-networking option.
4. Restart the DB, and fix your code accordingly so that it closes connections in finally.
5. Disable the firewall and/or configure the firewall/proxy to allow/forward the port.
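For causes 1-3 in particular, it often helps to spell out host and port explicitly in the datasource URL via the Service's in-cluster DNS name; the service name and namespace here are assumptions to adapt:
spring:
  datasource:
    url: jdbc:mysql://mysql.default.svc.cluster.local:3306/${DB_NAME}?useSSL=false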
You can find a similar error discussed here: communication-error.

Connect JavaScript running in docker container to MySQL database running on another docker container

I'm currently running a local instance of RocketChat and the RocketBot using docker-compose and a corresponding docker-compose.yaml file:
I use the standard mysql module like this:
var con = mysql.createConnection({
  host: '<placeholder>',
  user: 'root',
  port: '3306',
  password: '<placeholder>',
});
The host, user, port and password are gathered by running the inspect command on the container running the MySQL server. MySQL itself does work, as I can run it, make changes to it, and even connect to it using MySQL Workbench. I get this error:
rosbot_1 | [Tue Jun 18 2019 18:42:06 GMT+0000 (UTC)] ERROR Error: connect ETIMEDOUT
rosbot_1 | at Connection._handleConnectTimeout (/home/hubot/node_modules/mysql/lib/Connection.js:412:13)
I have no idea how to proceed now. How can I connect from the bot served by docker-compose to the MySQL container using JavaScript?
EDIT:
docker-compose.yaml:
version: '2.1'
services:
  mongo:
    image: mongo:3.2
    hostname: 'mongo'
    volumes:
      - ./db/data:/data/db
      - ./db/dump:/dump
    command: mongod --smallfiles --oplogSize 128 --replSet rs0
  mongo-init-replica:
    image: mongo:3.2
    command: 'mongo mongo/rocketchat --eval "rs.initiate({ _id: ''rs0'', members: [ { _id: 0, host: ''localhost:27017'' } ]})"'
    links:
      - mongo:mongo
  rocketchat:
    image: rocketchat/rocket.chat:latest
    hostname: 'rocketchat'
    volumes:
      - ./rocketchat/uploads:/app/uploads
    environment:
      - PORT=3000
      - ROOT_URL=http://localhost:3000
      - MONGO_URL=<placeholder>
      - MONGO_OPLOG_URL=<placeholder>
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000"]
      interval: 30s
      timeout: 10s
      retries: 5
    links:
      - mongo:mongo
    ports:
      - 3000:3000
  <placeholder>:
    image: <placeholder>
    hostname: "<placeholder>"
    environment:
      - ROCKETCHAT_URL=<placeholder>
      - ROCKETCHAT_ROOM=""
      - ROCKETCHAT_USER=<placeholder>
      - ROCKETCHAT_PASSWORD=<placeholder>
      - ROCKETCHAT_AUTH=<placeholder>
      - BOT_NAME=<placeholder>
      - LISTEN_ON_ALL_PUBLIC=true
      - EXTERNAL_SCRIPTS=hubot-help,hubot-seen,hubot-links,hubot-diagnostics,hubot-pugme,hubot-reload
      - PENTEXT_PATH=/home/hubot/pentext
      - ADDITIONAL_PACKAGES=mysql,lodash
      - RESPOND_TO_LIVECHAT=true
      - RESPOND_TO_DM=true
    depends_on:
      rocketchat:
        condition: service_healthy
    links:
      - rocketchat:rocketchat
    volumes:
      - <placeholder>
    ports:
      - 3001:3001
Normally, you can connect to another container by using the container name as the hostname.
If you have a container with MySQL, the container name (in this example 'db') is the hostname for reaching the MySQL container (you can also set hostname: 'mysqlhostname' to specify a different name):
db:
  image: mysql
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: mypass
    MYSQL_DATABASE: mydb
In your rocketchat container you should add some environment variables for the MySQL root password and database, to make them available to your container:
rocketchat:
  image: rocketchat/rocket.chat:latest
  hostname: 'rocketchat'
  volumes:
    - ./rocketchat/uploads:/app/uploads
  environment:
    - PORT=3000
    - ROOT_URL=http://localhost:3000
    - MONGO_URL=<placeholder>
    - MONGO_OPLOG_URL=<placeholder>
    - MYSQL_ROOT_PASSWORD=mypass
    - MYSQL_DATABASE=mydb
    - MYSQL_HOSTNAME=db
  ...
  links:
    - rocketchat:rocketchat
    - db:db
And then, use the host name and the environment variables to create your connection:
var con = mysql.createConnection({
  host: 'db', // or process.env.MYSQL_HOSTNAME
  user: 'root',
  port: '3306',
  password: 'mypass', // or process.env.MYSQL_ROOT_PASSWORD
});

Codeception DB module Exception

I'm trying to connect to my DB in Codeception. I provided the following configurations in my api.suite.dist.yml and codeception.dist.yml files (I didn't know where to put the configuration, so I provided it in both).
Here is my api.suite.dist.yml:
class_name: ApiTester
modules:
  enabled:
    - PhpBrowser:
        url: http://192.168.1.143
    - REST:
        depends: PhpBrowser
        url: https://dev-tv.dna.fi/api/user/guest/epg
    - \Helper\Api
    - Db:
        dsn: 'mysql:host=127.0.0.1;dbname=db'
        user: 'username'
        password: 'passsword'
And here is my codeception.dist.yml:
actor: Tester
paths:
  tests: tests
  log: tests/_output
  data: tests/_data
  support: tests/_support
  envs: tests/_envs
settings:
  bootstrap: _bootstrap.php
  colors: true
  memory_limit: 1024M
extensions:
  enabled:
    - Codeception\Extension\RunFailed
modules:
  config:
    Db:
      dsn: 'mysql:host=127.0.0.1;dbname=db'
      user: 'username'
      password: 'password'
And this is the response I get:
[Codeception\Exception\ModuleException]
Db: SQLSTATE[28000] [1045] Access denied for user 'webapiuser'@'localhost' (using password: YES) while creating PDO connection
run [-c|--config CONFIG] [--report] [--html [HTML]] [--xml [XML]] [--tap [TAP]] [--json [JSON]] [--colors] [--no-colors] [--silent] [--steps] [-d|--debug] [--coverage [COVERAGE]] [--coverage-html [COVERAGE-HTML]] [--coverage-xml [COVERAGE-XML]] [--coverage-text [COVERAGE-TEXT]] [--no-exit] [-g|--group GROUP] [-s|--skip SKIP] [-x|--skip-group SKIP-GROUP] [--env ENV] [-f|--fail-fast] [--no-rebuild] [--] [] []
Don't use codeception.yml; configuration in api.suite.yml is enough.
Make sure you use the right credentials.
My acceptance.suite.yml
class_name: WebGuy
modules:
  enabled:
    - Db
  config:
    Db:
      dsn: mysql:host=127.0.0.1;dbname=mydbname
      user: myuser
      password: mypass
      populate: false
      cleanup: false