I am currently working on a team project utilizing Docker with Apache Mesos/Marathon. To deploy MySQL Docker containers on Mesos/Marathon, we have to create a JSON file with port mapping. I have searched everywhere on the internet and just can't find any sample JSON file to look at for port mapping. Has anyone done this before?
Here's some example Marathon JSON for using Docker's bridged networking mode:
{
  "id": "bridged-webapp",
  "cmd": "python3 -m http.server 8080",
  "cpus": 0.5,
  "mem": 64.0,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "python:3",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "servicePort": 9000, "protocol": "tcp" },
        { "containerPort": 161, "hostPort": 0, "protocol": "udp" }
      ]
    }
  }
}
See the "Bridged Networking Mode" section in
https://mesosphere.github.io/marathon/docs/native-docker.html for more details.
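The same pattern works for a MySQL container; only the image, the port mapping, and the environment change. Here is a minimal sketch (the root password is a placeholder, and it assumes Marathon's REST API is reachable on localhost:8080), posted to Marathon's /v2/apps endpoint:
# Minimal sketch of a bridged MySQL app definition for Marathon.
# Assumptions: Marathon listens on localhost:8080; the root password is a placeholder.
cat > mysql-app.json <<'EOF'
{
  "id": "mysql",
  "cpus": 0.5,
  "mem": 512.0,
  "instances": 1,
  "env": { "MYSQL_ROOT_PASSWORD": "changeme" },
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mysql:5.6",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 3306, "hostPort": 0, "servicePort": 10001, "protocol": "tcp" }
      ]
    }
  }
}
EOF
curl -X POST -H "Content-Type: application/json" http://localhost:8080/v2/apps -d @mysql-app.json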
I don't know where to look anymore; maybe someone has an idea of what's going wrong?
I created an MQTT subscription on my Orion Context Broker:
{
  "description": "Subscription to notify of all WaterQualityObserved changes",
  "subject": {
    "entities": [
      {
        "idPattern": ".*",
        "type": "WaterQualityObserved"
      }
    ],
    "condition": {
      "attrs": []
    }
  },
  "notification": {
    "mqtt": {
      "url": "mqtt://127.0.0.1:1883",
      "topic": "water-quality-observed-changed"
    }
  }
}
I have both my Orion Context Broker and Mosquitto MQTT broker running locally in Docker containers.
I get this when listing the subscriptions in my Orion CB:
[
{
"id": "633bf12fe929777b6a60242b",
"description": "MQTT subscription to notify of all WaterQualityObserved changes",
"status": "active",
"subject": {
"entities": [
{
"idPattern": ".*",
"type": "WaterQualityObserved"
}
],
"condition": {
"attrs": []
}
},
"notification": {
"timesSent": 3,
"lastNotification": "2022-10-04T08:47:55.000Z",
"attrs": [],
"onlyChangedAttrs": false,
"attrsFormat": "normalized",
"mqtt": {
"url": "mqtt://127.0.0.1:1883",
"topic": "water-quality-observed-changed",
"qos": 0
},
"lastFailure": "2022-10-04T08:47:55.000Z",
"failsCounter": 3,
"covered": false
}
}
]
As you can see, "timesSent" increases when I PATCH the entity, but so do "lastFailure" and "failsCounter".
The strange thing is it worked before!
Any idea what I’m doing wrong?
Thanks.
Guy
The "The strange thing is it worked before!" sentence make me think it has to do with connectivity between container. I'd suggest to review all the involved connectivity (Orion -> MQTT broker, MQTT broker -> your MQTT subscriber). If that doesn't help, a re-deploy of all the docker containers could help.
I am trying to get the following set up working:
My local machine OS = Linux
I am building a docker mysql container on this local machine
I plan to seed the database within the container, and then run tests locally (on my local Linux machine) against this container (which I will spin up on my Linux machine too)
Unfortunately, when running my tests and trying to connect to the container, the gateway IP of the bridge network that Compose creates by default is inaccessible.
My docker-compose.yaml file is as follows
version: "3.4"
services:
  integration-test-mysql:
    image: mysql:8.0
    container_name: ${MY_SQL_CONTAINER_NAME}
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
    ports:
      - "3306:3306"
    volumes:
      # - ./src/db:/usr/src/db # Mount db folder so we can run seed files etc.
      - ./seed.sql:/docker-entrypoint-initdb.d/seed.sql
    network_mode: bridge
    healthcheck:
      test: "mysqladmin -u root -p$MYSQL_ROOT_PASSWORD -h 127.0.0.1 ping --silent 2> /dev/null || exit 1"
      interval: 5s
      timeout: 30s
      retries: 5
      start_period: 10s
    entrypoint: sh -c "
      echo 'CREATE SCHEMA IF NOT EXISTS gigs;' > /docker-entrypoint-initdb.d/init.sql;
      /usr/local/bin/docker-entrypoint.sh --default-authentication-plugin=mysql_native_password
      "
When running docker network ls I see the following:
docker network ls
NETWORK ID NAME DRIVER SCOPE
42a11ef835dd bridge bridge local
c7453acfbc98 host host local
48572c69755a integration_default bridge local
bd470f8620fd none null local
So the integration_default network was created. Then if I inspect this network:
docker network inspect integration_default
[
{
"Name": "integration_default",
"Id": "48572c69755ae1bbc1448ab203a01d81be4300da12c97a9c4f1142872b878387",
"Created": "2022-09-28T00:48:20.504251612Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.27.0.0/16",
"Gateway": "172.27.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"79e897decb4f0ae5836c018d82e78997e8ac2f615b399362a307cc7f585c0875": {
"Name": "integration-test-mysql-host",
"EndpointID": "1f7798554029cc2d07f7ba44d057c489b678eac918f7916029798b42585eda41",
"MacAddress": "02:42:ac:1b:00:02",
"IPv4Address": "172.27.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "integration",
"com.docker.compose.version": "2.7.0"
}
}
]
Comparing this to the default bridge
docker inspect bridge
[
{
"Name": "bridge",
"Id": "42a11ef835dd1b2aec3ecea57211bb2753e0ebd4a2a115ace8b7df3075e97d5a",
"Created": "2022-09-27T21:54:44.239215269Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Interestingly, running ping 172.17.0.1 on my Linux machine works fine, but ping 172.27.0.1 fails to return anything.
UPDATE
I have got it working now. By specifying network_mode: bridge in my docker-compose file I was able to use the default bridge network, which was accessible from my local machine as I mentioned.
However, I would like to know why creating my own network didn't work here. Does anyone know why this was the case?
Docker networks are meant to be hidden, and you should let Docker do its job unless there is a good reason not to.
The correct way to interact with a service is through its open ports. Those ports are mapped onto the host, so talking to host:port is like talking to the app inside the container.
So when you say that you can't ping your container from the host, it is because Docker is doing its job well. "Fixing" this breaks the isolation of the container and makes it available to other services that shouldn't have access to it.
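In practice that means connecting from the host to the published port, not to the container or gateway IP. With the compose file above publishing 3306:3306, a quick check from the Linux host could look like this (it assumes a mysql client or netcat is installed on the host; the password comes from the same MYSQL_ROOT_PASSWORD variable the compose file uses):
# Talk to the published port on the host instead of pinging the bridge gateway.
mysql -h 127.0.0.1 -P 3306 -u root -p"$MYSQL_ROOT_PASSWORD" -e "SHOW DATABASES;"
# Or just verify the port is reachable at all:
nc -zv 127.0.0.1 3306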
I cloned the official Superset repository:
git clone https://github.com/apache/incubator-superset.git
then added the MySQL client to ./docker/requirements-local.txt and rebuilt:
cd incubator-superset
touch ./docker/requirements-local.txt
echo "mysqlclient==1.4.6" >> ./docker/requirements-local.txt
docker-compose build --force-rm
docker-compose up -d
After that, I created the MySQL container:
docker run --detach --network="incubator-superset_default" --name=vedasupersetmysql --env="MYSQL_ROOT_PASSWORD=vedashri" --publish 6603:3306 mysql
Then I connected MySQL to the Superset bridge network.
The bridge network is as follows:
docker inspect incubator-superset_default
[
{
"Name": "incubator-superset_default",
"Id": "56db7b47ecf0867a2461dddb1219c64c1def8cd603fc9668d80338a477d77fdb",
"Created": "2020-12-08T07:38:47.94934583Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.19.0.0/16",
"Gateway": "172.19.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"07a6e0d5d87ea3ccb353fa20a3562d8f59b00d2b7ce827f791ae3c8eca1621cc": {
"Name": "superset_db",
"EndpointID": "0dd4781290c67e3e202912cad576830eddb0139cb71fd348019298b245bc4756",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
},
"096a98f22688107a689aa156fcaf003e8aaae30bdc3c7bc6fc08824209592a44": {
"Name": "superset_worker",
"EndpointID": "54614854caebcd9afd111fb67778c7c6fd7dd29fdc9c51c19acde641a9552e66",
"MacAddress": "02:42:ac:13:00:05",
"IPv4Address": "172.19.0.5/16",
"IPv6Address": ""
},
"34e7fe6417b109fb9af458559e20ce1eaed1dc3b7d195efc2150019025393341": {
"Name": "superset_init",
"EndpointID": "49c580b22298237e51607ffa9fec56a7cf155065766b2d75fecdd8d91d024da7",
"MacAddress": "02:42:ac:13:00:06",
"IPv4Address": "172.19.0.6/16",
"IPv6Address": ""
},
"5716e0e644230beef6b6cdf7945f3e8be908d7e9295eea5b1e5379495817c4d8": {
"Name": "superset_app",
"EndpointID": "bf22dab0714501cc003b1fa69334c871db6bade8816724779fca8eb81ad7089d",
"MacAddress": "02:42:ac:13:00:04",
"IPv4Address": "172.19.0.4/16",
"IPv6Address": ""
},
"b09d2808853c54f66145ac43bfc38d4968d28d9870e2ce320982dd60968462d5": {
"Name": "superset_node",
"EndpointID": "70f00c6e0ebf54b7d3dfad1bb8e989bc9425c920593082362d8b282bcd913c5d",
"MacAddress": "02:42:ac:13:00:07",
"IPv4Address": "172.19.0.7/16",
"IPv6Address": ""
},
"d08f8a2b090425904ea2bdc7a23b050a1327ccfe0e0b50360b2945ea39a07172": {
"Name": "superset_cache",
"EndpointID": "350fd18662e5c7c2a2d8a563c41513a62995dbe790dcbf4f08097f6395c720b1",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"e21469db533ad7a92b50c787a7aa026e939e4cf6d616e3e6bc895a64407c1eb7": {
"Name": "vedasupersetmysql",
"EndpointID": "d658c0224d070664f918644584460f93db573435c426c8d4246dcf03f993a434",
"MacAddress": "02:42:ac:13:00:08",
"IPv4Address": "172.19.0.8/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "incubator-superset",
"com.docker.compose.version": "1.26.0"
}
}
]
How should I form the SQLAlchemy URI?
I have tried
mysql://user:password@8088:6603/database-name
But it shows a connection error when I enter this URI.
If there is any related documentation, that would also help.
The issue is not related to Superset or the network. You configured the network correctly, but the default-authentication-plugin was not set on the MySQL Docker image. Because of this, the error shown on the console was:
Plugin caching_sha2_password could not be loaded:
To reproduce:
from sqlalchemy import create_engine
engine = create_engine('mysql://root:sample@172.19.0.5/mysql')
engine.connect()
error logs:
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1045, 'Plugin caching_sha2_password could not be loaded: /usr/lib/x86_64-linux-gnu/mariadb19/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory')
(Background on this error at: http://sqlalche.me/e/13/e3q8)
To resolve the issue:
Start the MySQL container with --default-authentication-plugin=mysql_native_password:
docker run --detach --network="incubator-superset_default" --name=mysql --env="MYSQL_ROOT_PASSWORD=sample" --publish 3306:3306 mysql --default-authentication-plugin=mysql_native_password
Superset already has a user-defined bridge network, so you can use either format:
mysql://root:sample@mysql/mysql
mysql://root:sample@172.19.0.5/mysql
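If the connection still fails, you can verify from inside the Compose network that the recreated MySQL container is reachable by name (container name mysql, password sample, and network incubator-superset_default all follow the docker run command above; a throwaway mysql:8.0 container is used here only to get a client):
# Connect by container name over the user-defined bridge network.
docker run --rm --network incubator-superset_default mysql:8.0 \
  mysql -h mysql -P 3306 -u root -psample -e "SELECT VERSION();"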
I don't have a lot of experience with Docker, but I don't think you should use 8088 as the host for your MySQL database.
Try using mysql://user:password@172.19.0.8:6603/database-name as the URI.
I have DC/OS up and running. I created a service and I am able to access it through ip:port, but when I try to do the same with marathon-lb I just can't reach it. I tried curl http://marathon-lb.marathon.mesos:10000/ (10000 being the port number) and I still get connection refused.
Here is my JSON for the service:
{
  "id": "/nginx-external",
  "cmd": null,
  "cpus": 0.1,
  "mem": 65,
  "disk": 0,
  "instances": 1,
  "acceptedResourceRoles": [],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "nginx:1.7.7",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 2000,
          "servicePort": 10000,
          "protocol": "tcp",
          "labels": {}
        }
      ],
      "privileged": false,
      "parameters": [],
      "forcePullImage": true
    }
  },
  "healthChecks": [
    {
      "gracePeriodSeconds": 10,
      "intervalSeconds": 2,
      "timeoutSeconds": 10,
      "maxConsecutiveFailures": 10,
      "portIndex": 0,
      "path": "/",
      "protocol": "HTTP",
      "ignoreHttp1xx": false
    }
  ],
  "labels": {
    "HAPROXY_GROUP": "external"
  },
  "portDefinitions": [
    {
      "port": 10000,
      "protocol": "tcp",
      "name": "default",
      "labels": {}
    }
  ]
}
Can anyone help?
Both accessing it from outside the cluster using public-ip:10000 (see here for finding the public IP) and from inside the cluster using curl http://marathon-lb.marathon.mesos:10000/ worked fine. Note that you need marathon-lb installed (dcos package install marathon-lb) and that marathon-lb.marathon.mesos can only be resolved from inside the cluster.
In order to debug marathon-lb issues I usually check the HAProxy stats first: https://dcos.io/docs/1.9/networking/marathon-lb/marathon-lb-advanced-tutorial/#deploy-an-external-load-balancer-with-marathon-lb
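For reference, a rough sketch of those checks from a terminal (the public agent IP is a placeholder, and 9090 is marathon-lb's default stats port, so adjust if you changed it):
# Install marathon-lb if it is not running yet.
dcos package install marathon-lb
# HAProxy stats page served by marathon-lb on the public agent.
curl "http://<public-agent-ip>:9090/haproxy?stats"
# The nginx service itself, exposed on its servicePort from outside the cluster.
curl "http://<public-agent-ip>:10000/"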
From outside the cluster
From inside the cluster
core#ip-10-0-4-343 ~ $ curl http://marathon-lb.marathon.mesos:10000/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
In my environment I am running mesos-slave, mesos-master, marathon, and mesos-dns in standalone mode.
I deployed a MySQL app to Marathon to run as a Docker container.
The MySQL app configuration is as follows.
{
  "id": "mysql",
  "cpus": 0.5,
  "mem": 512,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mysql:5.6.27",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 3306,
          "hostPort": 32000,
          "protocol": "tcp"
        }
      ]
    }
  },
  "constraints": [
    [
      "hostname",
      "UNIQUE"
    ]
  ],
  "env": {
    "MYSQL_ROOT_PASSWORD": "password"
  },
  "minimumHealthCapacity": 0,
  "maximumOverCapacity": 0.0
}
Then I deployed an app called mysqlclient. The mysqlclient app needs to connect to the mysql app.
The mysqlclient app config is as follows.
{
  "id": "mysqlclient",
  "cpus": 0.3,
  "mem": 512.0,
  "cmd": "/scripts/create_mysql_dbs.sh",
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mysqlclient:latest",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 3306,
          "hostPort": 0,
          "protocol": "tcp"
        }
      ]
    }
  },
  "env": {
    "MYSQL_ENV_MYSQL_ROOT_PASSWORD": "password",
    "MYSQL_PORT_3306_TCP_ADDR": "mysql.marathon.slave.mesos.",
    "MYSQL_PORT_3306_TCP_PORT": "32000"
  },
  "minimumHealthCapacity": 0,
  "maximumOverCapacity": 0.0
}
My mesos-dns config.json is as follows:
{
  "zk": "zk://127.0.0.1:2181/mesos",
  "masters": ["127.0.0.1:5050"],
  "refreshSeconds": 60,
  "ttl": 60,
  "domain": "mesos",
  "port": 53,
  "resolvers": ["127.0.0.1"],
  "timeout": 5,
  "httpon": true,
  "dnson": true,
  "httpport": 8123,
  "externalon": true,
  "listener": "127.0.0.1",
  "SOAMname": "ns1.mesos",
  "SOARname": "root.ns1.mesos",
  "SOARefresh": 60,
  "SOARetry": 600,
  "SOAExpire": 86400,
  "SOAMinttl": 60,
  "IPSources": ["mesos", "host"]
}
I can ping the service name mysql.marathon.slave.mesos. from the host machine, but when I try to ping from the mysql Docker container I get host unreachable. Why can't the Docker container resolve the host name?
I tried setting the dns parameter on the apps, but it did not work.
EDIT:
I can ping mysql.marathon.slave.mesos. from the master/slave hosts, but I cannot ping it from the mysqlclient Docker container; it says unreachable. How can I fix this?
I'm not sure what your actual question is; guessing, I think you want to know how you can resolve a Mesos DNS service name to an actual endpoint from the MySQL client.
If so, you can use my mesosdns-resolver bash script to get the endpoint from Mesos DNS:
mesosdns-resolver.sh -sn mysql.marathon.mesos -s <IP_ADDRESS_OF_MESOS_DNS_SERVER>
You can use this in your create_mysql_dbs.sh script (whatever it does) to get the actual IP address and port where your mysql app is running.
You can pass in an environment variable like
"MYSQL_ENV_SERVICE_NAME": "mysql.marathon.mesos"
and then use it like this in the image/script
mesosdns-resolver.sh -sn $MYSQL_ENV_SERVICE_NAME -s <IP_ADDRESS_OF_MESOS_DNS_SERVER>
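Inside the image/script that could look roughly like the following sketch. It assumes mesosdns-resolver.sh is bundled in the image and prints the endpoint as host:port, and the CREATE DATABASE statement is only an example of what create_mysql_dbs.sh might do:
#!/bin/sh
# Resolve the service name to an endpoint via Mesos DNS (assumed output: "host:port").
ENDPOINT=$(mesosdns-resolver.sh -sn "$MYSQL_ENV_SERVICE_NAME" -s <IP_ADDRESS_OF_MESOS_DNS_SERVER>)
HOST="${ENDPOINT%%:*}"
PORT="${ENDPOINT##*:}"
# Example only: run whatever initialization the script is meant to do.
mysql -h "$HOST" -P "$PORT" -u root -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD" \
  -e "CREATE DATABASE IF NOT EXISTS example_db;"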
Also, please note that Marathon is not necessarily the right tool for running one-off operations (I assume you initialize your DBs with the second app). Chronos would be a better choice for this.