Subscription at Orion-LD fails with MongoDB error - fiware

I am trying to get a subscription from Orion-LD to QuantumLeap and CrateDB running. Unfortunately it seems that MongoDB throws an error (Runtime Error (string field 'csf' is missing in BSONObj) or Runtime Error (string field 'name' is missing in BSONObj)) when Orion-LD tries to access the subscription. The result is that the data can't be passed to QuantumLeap for further processing.
...
time=Friday 24 Jun 11:03:59 2022.081Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=safeMongo.cpp[145]:getStringField | msg=Runtime Error (string field 'name' is missing in BSONObj <{ _id: "urn:ngsi-ld:Subscription:62b578ea4567412cdf07306e", expiration: 2147483647.0, reference: "http://172.18.1.5:8668/v2/notify", custom: false, mimeType: "application/json", throttling: 0.0, servicePath: "/", description: "Notify me of temperature", status: "active", entities: [ { id: "", isPattern: "", type: "https://uri.fiware.org/ns/data-models#WeatherObserved", isTypePattern: false } ], attrs: [ "https://uri.fiware.org/ns/data-models#temperature" ], metadata: [], blacklist: false, ldContext: "http://172.18.1.2/datamodels.context-ngsi.jsonld", createdAt: 1656060138.058298, modifiedAt: 1656060138.058298, conditions: [ "https://uri.fiware.org/ns/data-models#temperature" ], expression: { q: "https://uri=fiware=org/ns/data-models#temperature<100", mq: "", geometry: "", coords: "", georel: "", geoproperty: "" }, format: "normalized" }> from caller setName:280)
time=Friday 24 Jun 11:03:59 2022.081Z | lvl=ERROR | corr=N/A | trans=N/A | from=N/A | srv=N/A | subsrv=N/A | comp=Orion | op=safeMongo.cpp[145]:getStringField | msg=Runtime Error (string field 'csf' is missing in BSONObj <{ _id: "urn:ngsi-ld:Subscription:62b578ea4567412cdf07306e", expiration: 2147483647.0, reference: "http://172.18.1.5:8668/v2/notify", custom: false, mimeType: "application/json", throttling: 0.0, servicePath: "/", description: "Notify me of temperature", status: "active", entities: [ { id: "", isPattern: "", type: "https://uri.fiware.org/ns/data-models#WeatherObserved", isTypePattern: false } ], attrs: [ "https://uri.fiware.org/ns/data-models#temperature" ], metadata: [], blacklist: false, ldContext: "http://172.18.1.2/datamodels.context-ngsi.jsonld", createdAt: 1656060138.058298, modifiedAt: 1656060138.058298, conditions: [ "https://uri.fiware.org/ns/data-models#temperature" ], expression: { q: "https://uri=fiware=org/ns/data-models#temperature<100", mq: "", geometry: "", coords: "", georel: "", geoproperty: "" }, format: "normalized" }> from caller setCsf:302)
...
Before that, I created and then validated the subscription:
Creation (POST to http://localhost:1026/ngsi-ld/v1/subscriptions/):
{
  "description": "Notify me of temperature",
  "type": "Subscription",
  "entities": [{"type": "WeatherObserved"}],
  "watchedAttributes": ["temperature"],
  "notification": {
    "attributes": ["temperature"],
    "format": "normalized",
    "endpoint": {
      "uri": "http://172.18.1.5:8668/v2/notify",
      "accept": "application/json"
    }
  },
  "@context": "http://172.18.1.2/datamodels.context-ngsi.jsonld"
}
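For reference, I POST the payload above roughly like this (a sketch; the file name subscription.jsonld is an assumption, and because the @context travels in the request body the application/ld+json content type is needed):
curl -X POST 'http://localhost:1026/ngsi-ld/v1/subscriptions/' \
  -H 'Content-Type: application/ld+json' \
  -d @subscription.jsonld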
Validation (GET on http://localhost:1026/ngsi-ld/v1/subscriptions/):
[
  {
    "id": "urn:ngsi-ld:Subscription:62b578ea4567412cdf07306e",
    "type": "Subscription",
    "description": "Notify me of temperature",
    "entities": [
      {
        "type": "WeatherObserved"
      }
    ],
    "watchedAttributes": [
      "temperature"
    ],
    "q": "https://uri.fiware.org/ns/data-models#temperature<100",
    "notification": {
      "attributes": [
        "temperature"
      ],
      "format": "normalized",
      "endpoint": {
        "uri": "http://172.18.1.5:8668/v2/notify",
        "accept": "application/json"
      }
    },
    "@context": "http://172.18.1.2/datamodels.context-ngsi.jsonld"
  },
...
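For completeness, that listing comes from a plain GET, along the lines of:
curl 'http://localhost:1026/ngsi-ld/v1/subscriptions/' \
  -H 'Accept: application/ld+json'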
So the subscription does exist, and the Orion Context Broker actively tries to access it, judging by the repeated MongoDB error in the Orion-LD logs (docker logs -f <orion ld container>).
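For what it's worth, the stored document can also be inspected directly in MongoDB. A sketch, assuming Orion's default database name (orion) and subscriptions collection (csubs):
docker exec -ti db-mongo mongo --quiet orion --eval 'db.csubs.find().pretty()'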
Could this be related to an old MongoDB driver? I found a similar issue: https://github.com/telefonicaid/fiware-orion/issues/3070.
Here is my actual docker-compose file:
version: "3.8"
services:
########
# CORE #
########
# -> Orion: context broker as central component
orion:
labels:
org.test: 'fiware'
image: fiware/orion-ld:${ORION_VERSION}
hostname: orion
container_name: fiware-orion
depends_on:
- mongo-db
networks:
default:
ipv4_address: 172.18.1.3
ports:
- "${ORION_PORT}:${ORION_PORT}"
command: -dbhost mongo-db -logLevel DEBUG -noCache
healthcheck:
test: curl --fail -s http://orion:${ORION_PORT}/version || exit 1
interval: 5s
# -> Context: provide ngsi-ld context file for smart data models
ld-context:
labels:
org.test: 'fiware'
image: httpd:alpine
hostname: context
container_name: fiware-ld-context
ports:
- "3004:80"
networks:
default:
ipv4_address: 172.18.1.2
volumes:
- ./context:/usr/local/apache2/htdocs/
healthcheck:
test: (wget --server-response --spider --quiet http://172.18.1.2/datamodels.context-ngsi.jsonld 2>&1 | awk 'NR==1{print $$2}'| grep -q -e "200") || exit 1
##################
# DATA MANGEMENT #
##################
# Quantum Leap: is persisting Short Term History to Crate-DB
quantumleap:
labels:
org.test: 'fiware'
image: orchestracities/quantumleap:${QUANTUMLEAP_VERSION}
hostname: quantumleap
container_name: fiware-quantumleap
depends_on:
- crate-db
- redis-db
networks:
default:
ipv4_address: 172.18.1.5
ports:
- "${QUANTUMLEAP_PORT}:${QUANTUMLEAP_PORT}"
environment:
- CRATE_HOST=crate-db
- REDIS_HOST=redis-db
- REDIS_PORT=${REDIS_PORT}
- LOGLEVEL=DEBUG
healthcheck:
test: curl --fail -s http://quantumleap:${QUANTUMLEAP_PORT}/version || exit 1
#################
# VISUALIZATION #
#################
# -> Grafana: Visualize Time Series data
grafana:
labels:
org.test: 'fiware'
image: grafana/grafana:6.1.6
container_name: grafana
depends_on:
- crate-db
networks:
default:
ipv4_address: 172.18.1.8
ports:
- "3003:3000"
environment:
- GF_INSTALL_PLUGINS=https://github.com/orchestracities/grafana-map-plugin/archive/master.zip;grafana-map-plugin,grafana-clock-panel,grafana-worldmap-panel
volumes:
- grafana:/var/lib/grafana
#############
# DATABASES #
#############
# -> MongoDB: database of Orion
mongo-db:
labels:
org.test: 'fiware'
image: mongo:${MONGO_DB_VERSION}
hostname: mongo-db
container_name: db-mongo
expose:
- "${MONGO_DB_PORT}"
ports:
- "${MONGO_DB_PORT}:${MONGO_DB_PORT}" # localhost:27017 # localhost:27017
networks:
default:
ipv4_address: 172.18.1.4
volumes:
- mongo-db:/data
healthcheck:
test: |
host=`hostname --ip-address || echo '127.0.0.1'`;
mongo --quiet $host/test --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)' && echo 0 || echo 1
interval: 5s
# -> CreateDB: database to store time-series data
crate-db:
labels:
org.test: 'fiware'
image: crate:${CRATE_VERSION}
hostname: crate-db
container_name: db-crate
networks:
default:
ipv4_address: 172.18.1.6
ports:
# Admin UI
- "4200:4200"
# Transport protocol
- "4300:4300"
command: crate -Cauth.host_based.enabled=false -Ccluster.name=democluster -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
environment:
- CRATE_HEAP_SIZE=2g # see https://crate.io/docs/crate/howtos/en/latest/deployment/containers/docker.html#troubleshooting
volumes:
- crate-db:/data
# -> Redis: Normally used to efficiently store key value pairs
redis-db:
labels:
org.test: 'fiware'
image: redis:${REDIS_VERSION}
hostname: redis-db
container_name: db-redis
networks:
default:
ipv4_address: 172.18.1.7
ports:
- "${REDIS_PORT}:${REDIS_PORT}" # localhost:6379
volumes:
- redis-db:/data
healthcheck:
test: |
host=`hostname -i || echo '127.0.0.1'`;
ping=`redis-cli -h "$host" ping` && [ "$ping" = 'PONG' ] && echo 0 || echo 1
interval: 10s
# NETWORKS
networks:
default:
labels:
org.test: 'fiware'
ipam:
config:
- subnet: 172.18.1.0/24
# VOLUMES
volumes:
mongo-db: ~
context: ~
grafana: ~
crate-db: ~
redis-db: ~
And my related .env-file:
# Project name
COMPOSE_PROJECT_NAME=fiware
# Orion variables
ORION_PORT=1026
ORION_VERSION=1.0.0
# MongoDB variables
MONGO_DB_PORT=27017
MONGO_DB_VERSION=4.4
# QuantumLeap Variables
QUANTUMLEAP_VERSION=0.8.3
QUANTUMLEAP_PORT=8668
# CrateDB Version
CRATE_VERSION=4.6
# RedisDB Version
REDIS_PORT=6379
REDIS_VERSION=6
I would really appreciate further help, as this issue has been blocking me for some days. Thank you in advance!

Would you be so kind as to create an issue on Orion-LD's GitHub?
This seems like a bug report and is not really suited for Stack Overflow, which we use for questions.
Bugs are to be reported as issues: https://github.com/FIWARE/context.Orion-LD/issues.
Create the issue and I promise to look into it ASAP.

Related

Docker swarm stack mysql/mysql-cluster not resolving service names

I'm trying to set up a mysql-cluster on a Docker swarm setup.
Given that we have 3 nodes (1 manager, 2 workers), we are trying to install it on the manager node.
This is the my.cnf file (correctly read)
[mysqld]
ndbcluster
ndb-connectstring=management1
user=mysql
skip_name_resolve
[mysql_cluster]
ndb-connectstring=management1
This is the mysql-cluster.cnf file (correctly read)
[ndbd default]
NoOfReplicas=2
DataMemory=80M
[ndb_mgmd]
HostName=management1
DataDir=/var/lib/mysql-cluster
[ndbd]
HostName=ndb1
DataDir=/var/lib/mysql-cluster
[ndbd]
HostName=ndb2
DataDir=/var/lib/mysql-cluster
[mysqld]
HostName=mysql1
Docker compose file (deployed from a git repository via Portainer), executed with e.g.:
docker stack deploy --compose-file docker-compose.yml vossibility
version: '3.3'
services:
  management1:
    image: mysql/mysql-cluster
    command: ndb_mgmd
    networks:
      - "meroex-network"
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
  ndb1:
    image: mysql/mysql-cluster
    command: ndbd
    networks:
      - "meroex-network"
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
  ndb2:
    image: mysql/mysql-cluster
    command: ndbd
    networks:
      - "meroex-network"
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
  mysql1:
    image: mysql/mysql-cluster
    ports:
      - "3306:3306"
    restart: always
    command: mysqld
    depends_on:
      - "management1"
      - "ndb1"
      - "ndb2"
    networks:
      - "meroex-network"
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  meroex-network:
    external: true
The network is an overlay network with a /24 subnet:
[
    {
        "Name": "meroex-network",
        "Id": "vs7lmefftygiqkzfxf9u4dqxi",
        "Created": "2021-10-07T06:29:10.608882532+08:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.3.0/24",
                    "Gateway": "10.0.3.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            ...
            "lb-meroex-network": {
                "Name": "meroex-network-endpoint",
                "EndpointID": "a82dd38ffeb66e3a365140b51d8614fdf08ca0f0ffb01c8262a16bde49c891ad",
                "MacAddress": "02:42:0a:00:03:34",
                "IPv4Address": "10.0.3.52/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4099"
        },
        "Labels": {},
        "Peers": [
            ...
        ]
    }
]
When deploying the stack we receive the following error in the management1 service:
2021-10-07 00:03:34 [MgmtSrvr] ERROR -- at line 33: Could not resolve hostname [node 1]: management1
2021-10-07 00:03:34 [MgmtSrvr] ERROR -- Could not load configuration from '/etc/mysql-cluster.cnf'
I'm stuck on why the service names are not resolved in this case. I have numerous other Spring Boot apps that communicate with each other via their service names.
It might be that the name lookup for some hostnames happens before any address has been assigned and published by the name server.
The management server will by default verify all hostnames appearing in the configuration; if some lookup fails, the management server will fail to start.
Since MySQL Cluster 8.0.22 there is a configuration parameter that allows the management server to start without having successfully verified all hostnames, suitable for environments where hosts appear on demand, possibly with a new IP address each time.
Try adding the following to your mysql-cluster.cnf:
[tcp default]
AllowUnresolvedHostnames=1
See manual: https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster-tcp-definition.html
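With that in place, the top of mysql-cluster.cnf would look roughly like this (a sketch combining the file above with the new section; section order is not significant):
[ndbd default]
NoOfReplicas=2
DataMemory=80M

[tcp default]
AllowUnresolvedHostnames=1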

Can't connect to flask app inside a docker container from host [duplicate]

This question already has answers here:
Deploying a minimal flask app in docker - server connection issues
(8 answers)
Closed 1 year ago.
I'm trying to run a Flask application and a MySQL database by running docker-compose up on my computer. Flask is running on port 5000.
if __name__ == "__main__":
    app.run(port=5000, debug=True)
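Per the linked duplicate, the fix likely boils down to binding the dev server to all interfaces instead of the loopback default; a sketch:
if __name__ == "__main__":
    # 0.0.0.0 makes the Flask dev server reachable through the published container port
    app.run(host="0.0.0.0", port=5000, debug=True)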
The Docker container responds properly when I use the docker exec command, but I can't get any response from the host using the URL http://localhost:5000.
The curl -X GET <url> command is giving the following output:
curl: (56) Recv failure: Connection reset by peer
The docker ps command is giving the following output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bffa59c471f6 customer_transaction_app "/bin/sh -c 'python …" About an hour ago Up About an hour 0.0.0.0:5000->5000/tcp customer_transaction_app_1
ad60c2830ac0 mysql "docker-entrypoint.s…" About an hour ago Up About an hour 33060/tcp, 0.0.0.0:32001->3306/tcp customertransaction_db_host
Here is the Dockerfile:
FROM python:3.8
EXPOSE 5000
COPY requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
CMD python main.py
Here is the docker-compose.yml file:
version: "2"
services:
app:
build: ./
depends_on:
- db
ports:
- "5000:5000"
db:
container_name: customertransaction_db_host
image: mysql
restart: always
ports:
- "32001:3306"
volumes:
- customertransaction-db-vol:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: 123456
MYSQL_DATABASE: customertransaction_db
MYSQL_USER: user
MYSQL_PASSWORD: 123456
volumes:
customertransaction-db-vol: {}
Both containers reside in the Docker network customer_transaction_default. The docker network inspect command gives the following output:
[
    {
        "Name": "customer_transaction_default",
        "Id": "4b5b20f503af0026a2f1ef185436c9a8e3d9c2ece690e93ece0e6b12f7821edb",
        "Created": "2021-06-20T17:52:15.603679073+05:30",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.24.0.0/16",
                    "Gateway": "172.24.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ad60c2830ac0f7e270daf03334ea8a8170200e92c2bc43492c378bd1d89cd3ac": {
                "Name": "customertransaction_db_host",
                "EndpointID": "de4597a1f58d711640f71a6169111f9842c7c5d74320825657a2518d07f36504",
                "MacAddress": "02:42:ac:18:00:02",
                "IPv4Address": "172.24.0.2/16",
                "IPv6Address": ""
            },
            "bffa59c471f6762bb802fcee37db356cf2c7a59f4f88192e3546dd10ad9dbb2d": {
                "Name": "customer_transaction_app_1",
                "EndpointID": "a3ded03e28343921d799c0efc334034028821c231e1469d4359cd387c7f43f70",
                "MacAddress": "02:42:ac:18:00:03",
                "IPv4Address": "172.24.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Since you are trying to connect to a server in your Docker network, you should change the host in the connection string you use to connect to MySQL to the name of the container you want to reach.
In your case, you have to change localhost to "customertransaction_db_host".
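A sketch of what that could look like (the variable name and URI style are illustrative; user, password, and database come from the compose file above):
# hypothetical connection setting; adapt to however the app actually configures MySQL
DATABASE_URI = "mysql://user:123456@customertransaction_db_host:3306/customertransaction_db"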

Unable to read mysql through Jupyter in docker containers (error: DatabaseError: 2005 (HY000): Unknown MySQL server host 'localhost:3306' (22))

My code is as follows:
import mysql.connector
mydb = mysql.connector.connect(host="localhost:3306",user="root",password="example")
print("Connected")
Docker compose files for mysql and jupyter:
# Use root/example as user/password credentials
version: '3.1'
services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
      AWS_ACCESS_KEY_ID: "A"
      AWS_SECRET_ACCESS_KEY: "k"
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
jupyter:
version: "3"
services:
  pyspark:
    image: "jupyter/all-spark-notebook"
    volumes:
      - c:/code/pyspark-data:/home/jovyan
    ports:
      - 8888:8888
    environment:
      AWS_ACCESS_KEY_ID: "5H"
      AWS_SECRET_ACCESS_KEY: "0oRBJk"
I have also created a network and kept all containers on that one network, using this command:
docker network connect mynetwork 929cd60b08df
Error received while executing in the jupyter network:
DatabaseError: 2005 (HY000): Unknown MySQL server host 'localhost:3306' (22)
What have I tried:
mysql error 2005 - Unknown MySQL server host 'localhost'(11001)
The entry
127.0.0.1 localhost
is already there in the /etc/hosts file. Don't know what to do now.
When you create a network you need to connect both containers to it:
docker network create mynetwork
docker network connect mynetwork db_container_id
docker network connect mynetwork pyspark_container_id
Then get its config:
docker network inspect mynetwork
It gives the IP addresses on this bridge network:
[
    {
        "Name": "t11",
        "Id": "bb203079ab3e48badacb3bb53181dd6871b2f60f22b4079729bc069e1739bbe0",
        "Created": "2021-05-21T04:46:48.591105541Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.28.0.0/16",
                    "Gateway": "172.28.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "3ff678e8ddeb600b1af38a06dda053c8bf6544b136bfca91363379df1d40c69a": {
                "Name": "t1_db_1",
                "EndpointID": "deefd7c8a187b9513fd074f9632f961312a3bdaa8aa568be6223f3d301389808",
                "MacAddress": "02:42:ac:1c:00:03",
                "IPv4Address": "172.28.0.3/16",
                "IPv6Address": ""
            },
            "9c95f68416586d3f1a87376a4a669df834d7b25eb22e3b3f33afb0ce1918d6cc": {
                "Name": "t2_pyspark_1",
                "EndpointID": "0fcbf15530f5b0a526c2036d26890e087f3becdd6eae1896e58a6a67efe2f676",
                "MacAddress": "02:42:ac:1c:00:02",
                "IPv4Address": "172.28.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
You need the IP of the db container; in this example it's 172.28.0.3.
Go with it:
mydb = mysql.connector.connect(port="3306", host="172.28.0.3", user="root", password="example")
print("Connected")
cursor = mydb.cursor()
query = ("select 1;")
cursor.execute(query)
for r in cursor:
    print(r)
cursor.close()
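Alternatively, since both containers sit on the same user-defined network (which provides DNS resolution by container name), the hard-coded IP can be swapped for the container name, e.g.:
# "t1_db_1" is the db container's name from the network inspect output above
mydb = mysql.connector.connect(port="3306", host="t1_db_1", user="root", password="example")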

Bookshelf.js Not connecting to the correct host?

I am working on an Express.js application that has been dockerized. Here is the config information I give to Bookshelf:
{
    "database_dev": {
        "client": "mysql",
        "connection": {
            "host": "DB",
            "database": "TERRA_DEV",
            "user": "dev",
            "port": "3306",
            "password": "goon",
            "charset": "utf8",
            "host": ""
        }
    },
    "database_test": {
        "client": "mysql",
        "connection": {
            "host": "DB",
            "database": "TERRA_TEST",
            "user": "tester",
            "port": "3306",
            "password": "goon",
            "charset": "utf8",
            "host": ""
        }
    },
....
Here is the docker-compose.test.yml I am running to try to execute my tests:
version: '2'
volumes:
services:
  sut:
    build: .
    command: npm test
    depends_on:
      - web
  web:
    build: .
    command: "npm start"
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "3000:3000"
    depends_on:
      - redis
      - DB
    networks:
      - web_sql_bridge
  redis:
    image: 'bitnami/redis:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
  DB:
    image: mysql:5.7
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD="goon"
      - MYSQL_DATABASE="TERRA_TEST"
      - MYSQL_DATABASE="TERRA_DEV"
      - MYSQL_USER="tester"
      - MYSQL_PASSWORD="goon"
    ports:
      - "3306:3306"
    networks:
      - web_sql_bridge
volumes:
  appconf:
networks:
  web_sql_bridge:
    driver: bridge
Here is the error
Error: connect ECONNREFUSED 127.0.0.1:3306
sut_1 | at Object._errnoException (util.js:1022:11)
sut_1 | at _exceptionWithHostPort (util.js:1044:20)
sut_1 | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1198:14)
sut_1 | --------------------
sut_1 | at Protocol._enqueue (/TerraServer/node_modules/mysql/lib/protocol/Protocol.js:145:48)
sut_1 | at Protocol.handshake (/TerraServer/node_modules/mysql/lib/protocol/Protocol.js:52:23)
sut_1 | at Connection.connect (/TerraServer/node_modules/mysql/lib/Connection.js:130:18)
So basically, as you can see above, I have a config.json with some database information that I read in a JS file and give to bookshelf.js as an argument. This is used in my mocha tests, where it throws the error I posted. The error indicates it is trying to connect to 127.0.0.1. Why is bookshelf.js trying to connect to 127.0.0.1 when I give it DB?
I found my problem. It turns out I needed to run:
docker system prune -a
When I built a second time it would not update the files, so my config file was stale in the image but not in the repo the code is pulled from.
I also removed the quotes around the DB environment variables:
DB:
  image: mysql:5.7
  restart: always
  environment:
    - MYSQL_ROOT_PASSWORD=goon
    - MYSQL_DATABASE=TERRA_TEST
    - MYSQL_DATABASE=TERRA_DEV
    - MYSQL_USER=tester
    - MYSQL_PASSWORD=goon
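An alternative to pruning everything is forcing a clean rebuild of the images, e.g.:
docker-compose -f docker-compose.test.yml build --no-cache
docker-compose -f docker-compose.test.yml up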

exec: \"mysql\": executable file not found in $PATH": unknown

I've created a container based on the mysql:5.7 image. Then I set the password with this:
docker run --name mysql -v $(pwd):/src -e MYSQL_ROOT_PASSWORD=password -d mysql:5.7
Then I deployed my modules using docker-compose up -d on my docker-compose.yml file. Apparently, it raised an
exec: \"mysql\": executable file not found in $PATH": unknown
error, and my 2 other modules keep restarting with errors that point to mysql. I can import Python files in the Django shell and run mysql -u root -p, but I cannot use the imports when the app can't connect to the database.
Things I did based on my research:
I've set my Windows 10 environment variables to point at
C:\Program Files\MySQL\MySQL Server 8.0\bin where mysql.exe resides. It still didn't work.
mysql-init.txt (REF: https://dev.mysql.com/doc/refman/8.0/en/resetting-permissions.html): This runs the mysql prompt successfully.
grant all privileges on *.* to root@localhost;
ALTER USER 'root'@'localhost' IDENTIFIED BY 'password';
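Per the linked reference, a file like this takes effect when the server is started with the --init-file option (the path here is a placeholder):
mysqld --init-file=/path/to/mysql-init.txt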
docker-compose.yml
version: '2'
services:
  # Mysql
  mysql:
    image: mysql:5.7
    restart: always
    hostname: mysql
    container_name: mysql
    environment:
      - MYSQL_USER=root
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DB=bitpal
    ports:
      - "3306:3306"
  # Redis
  redis:
    image: redis:latest
    restart: always
    hostname: redis
    container_name: redis
    ports:
      - "6379:6379"
  # Django web server
  bitpal:
    image: python:3.5
    restart: always
    hostname: bitpal
    container_name: bitpal
    working_dir: /bitpal
    command: ./bin/start_dev.sh
    volumes:
      - ./bitpal:/bitpal
      - ./etc/config:/etc/config
      - ./log:/log
    ports:
      - "80:80"
    links:
      - mysql
      - redis
    depends_on:
      - mysql
    environment:
      # Database
      - DB_NAME=bitpal
      - DB_USER=root
      - DB_PASSWORD=password
      - DB_HOST=mysql
      - DB_PORT=3306
  # Celery worker
  worker:
    image: python:3.5
    restart: always
    container_name: worker
    command: bash -c "./bin/install.sh && ./bin/celery_worker.sh"
    working_dir: /bitpal
    volumes:
      - ./bitpal:/bitpal
      - ./etc/config:/etc/config
      - ./log:/log
    links:
      - mysql
      - redis
    depends_on:
      - redis
  # Bitshares websocket listener
  websocket_listener:
    image: python:3.5
    restart: always
    container_name: websocket_listener
    command: bash -c "./bin/install.sh && ./bin/websocket_listener.sh"
    working_dir: /bitpal
    volumes:
      - ./bitpal:/bitpal
      - ./etc/config:/etc/config
      - ./log:/log
    links:
      - mysql
      - redis
    depends_on:
      - redis
  # Nginx
  nginx:
    image: nginx:1.12.1
    container_name: nginx
    ports:
      - "8000:80"
    volumes:
      - ./bitpal:/home/bitpal/bitpal/bitpal
      - ./nginx:/etc/nginx/conf.d
    depends_on:
      - bitpal
Dockerfile
FROM python:3.5
RUN mkdir -p /bitpal
WORKDIR /bitpal
EXPOSE 80
ADD requirement.txt /bitpal/
RUN python3.5 -m pip install -r /bitpal/requirement.txt
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'bitpal',
        'USER': 'root',
        'PASSWORD': 'password',
        'HOST': SECRETS['db']['default']['hostname'],
        'PORT': '3306',
        'OPTIONS': {'autocommit': SECRETS['db']['default']['commit']}
    }
}
docker version
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:06:28 2018
OS/Arch: windows/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.04.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 3d479c0
Built: Tue Apr 10 18:23:35 2018
OS/Arch: linux/amd64
Experimental: false
docker info
Containers: 6
Running: 3
Paused: 0
Stopped: 3
Images: 6
Server Version: 18.04.0-ce
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 46
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.93-boot2docker
Operating System: Boot2Docker 18.04.0-ce (TCL 8.2.1); HEAD : b8a34c0 - Wed Apr 11 17:00:55 UTC 2018
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 995.6MiB
Name: default
ID: I7DZ:5SQN:EOBV:PJOE:YHNK:RSXK:F6EH:4J7P:LSTI:CR2M:E2MV:VI27
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
docker inspect mysql:5.7
[
    {
        "Id": "sha256:0d16d0a97dd13a8ca0c0e205ce1f31f64d9d32048379eb322749442bff35f144",
        "RepoTags": [
            "mysql:5.7"
        ],
        "RepoDigests": [
            "mysql@sha256:f030e84582d939d313fe2ef469b5c65ffd0f7dff3b4b98e6ec9ae2dccd83dcdf"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2018-05-04T23:41:21.907294662Z",
        "Container": "7fed895363c6e67ba9d52eaea107a1267d63e2d5ad46c567926d6897c7175624",
        "ContainerConfig": {
            "Hostname": "7fed895363c6",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "3306/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "GOSU_VERSION=1.7",
                "MYSQL_MAJOR=5.7",
                "MYSQL_VERSION=5.7.22-1debian9"
            ],
            "Cmd": [
                "/bin/sh",
                "-c",
                "#(nop) ",
                "CMD [\"mysqld\"]"
            ],
            "ArgsEscaped": true,
            "Image": "sha256:71f4997ae44aead33eefe93100990253bca456f7b63dc5aad5baa936e7c14c46",
            "Volumes": {
                "/var/lib/mysql": {}
            },
            "WorkingDir": "",
            "Entrypoint": [
                "docker-entrypoint.sh"
            ],
            "OnBuild": [],
            "Labels": {}
        },
        "DockerVersion": "17.06.2-ce",
        "Author": "",
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "3306/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "GOSU_VERSION=1.7",
                "MYSQL_MAJOR=5.7",
                "MYSQL_VERSION=5.7.22-1debian9"
            ],
            "Cmd": [
                "mysqld"
            ],
            "ArgsEscaped": true,
            "Image": "sha256:71f4997ae44aead33eefe93100990253bca456f7b63dc5aad5baa936e7c14c46",
            "Volumes": {
                "/var/lib/mysql": {}
            },
            "WorkingDir": "",
            "Entrypoint": [
                "docker-entrypoint.sh"
            ],
            "OnBuild": [],
            "Labels": null
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 371961246,
        "VirtualSize": 371961246,
        "GraphDriver": {
            "Data": null,
            "Name": "aufs"
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:d626a8ad97a1f9c1f2c4db3814751ada64f60aed927764a3f994fcd88363b659",
                "sha256:0fea3c8895d3871f5c577987f09735ae0889811efbad0dfde4d57851a4d40c00",
                "sha256:ed9fd767a1ff316e1a5896e150cede3ba2bcfe406d2135a6bc6306295cc479af",
                "sha256:f9dfc87a2e756c25245847a95ced53a6c065405b417d7635a28aa88235b30786",
                "sha256:5081cf9eb26642b5373aaa6eea7e16b6caefc3a495cf8fa0f296df48d8651f2f",
                "sha256:0404d129c384e4f45e5ae6a8d89c388a323bc0dda82ea45c6e4d0a442ea1e4b0",
                "sha256:98bb41f25d3307bc5c124529cfde7c3c27a0d612d918e91a36fc1e852e2e629c",
                "sha256:c11f67aad663de23cb77fbbbc6f7bee656e95e72cc820733aed6430a618738ab",
                "sha256:d2f1dc45f8bf45758eee7bc59fe94e9f251c415ef8d08540529d1a004772ee9e",
                "sha256:01df4e5c105921d20d800c1250c66009d472eb5628817bfa2c9523df5c53e03c",
                "sha256:4f840ea0733fe23b9fda79ff521a2e6e8112a615df5db064570502af19c08511"
            ]
        },
        "Metadata": {
            "LastTagTime": "0001-01-01T00:00:00Z"
        }
    }
]
Ok, I just got it resolved. Apparently, I forgot to create the DB and migrate the new changes. Here's how I did it.
1. Removed the nginx module (my worker (celery) and websocket_listener modules are okay to keep restarting): docker rm -f nginx
2. Ran in detached mode: docker-compose up -d mysql redis
3. Used bash under the mysql container: docker exec -ti mysql bash
4. Entered credentials (I had initially set them in mysql-init.txt, REF: https://dev.mysql.com/doc/refman/8.0/en/resetting-permissions.html): mysql -u root -p
5. Created the DB: CREATE DATABASE bitpal;
6. Imported a certain .sql file that was missing in my modules.
7. Ran bitpal in detached mode: docker-compose up -d bitpal
8. Used bash under the bitpal container: docker exec -ti bitpal bash
9. Ran the migrations: python manage.py migrate
10. Used bash under the mysql container: docker exec -ti mysql bash
11. Lastly, ran the SQL under the bitpal container: mysql -u root -p bitpal < <.sql file>
And it worked.
Here is the best option for the above issue; I did a lot of searching on the same issue:
sudo apt-get install --reinstall mariadb-server
sudo apt-get install --reinstall php-mysql
service mysql status
service mysql start