DCOS connection refused on marathon-lb - containers

I have DC/OS up and running. I created a service and I am able to access it through ip:port, but when I try to do the same through marathon-lb I just can't reach it. I tried curl http://marathon-lb.marathon.mesos:10000/ (10000 being the service port) and I still get connection refused.
Here is my JSON for the service:
{
  "id": "/nginx-external",
  "cmd": null,
  "cpus": 0.1,
  "mem": 65,
  "disk": 0,
  "instances": 1,
  "acceptedResourceRoles": [],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "nginx:1.7.7",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 2000,
          "servicePort": 10000,
          "protocol": "tcp",
          "labels": {}
        }
      ],
      "privileged": false,
      "parameters": [],
      "forcePullImage": true
    }
  },
  "healthChecks": [
    {
      "gracePeriodSeconds": 10,
      "intervalSeconds": 2,
      "timeoutSeconds": 10,
      "maxConsecutiveFailures": 10,
      "portIndex": 0,
      "path": "/",
      "protocol": "HTTP",
      "ignoreHttp1xx": false
    }
  ],
  "labels": {
    "HAPROXY_GROUP": "external"
  },
  "portDefinitions": [
    {
      "port": 10000,
      "protocol": "tcp",
      "name": "default",
      "labels": {}
    }
  ]
}
Can anyone help?

Both accessing it from outside the cluster using public-ip:10000 (see here for finding the public IP) and from inside the cluster using curl http://marathon-lb.marathon.mesos:10000/ worked fine for me. Note that you need to have marathon-lb installed (dcos package install marathon-lb), and that marathon-lb.marathon.mesos can only be resolved from inside the cluster.
In order to debug marathon-lb issues, I usually check the HAProxy stats first: https://dcos.io/docs/1.9/networking/marathon-lb/marathon-lb-advanced-tutorial/#deploy-an-external-load-balancer-with-marathon-lb
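As a sketch of that first check (assuming marathon-lb runs on a public agent; `<public-agent-ip>` is a placeholder you need to substitute, and 9090 is marathon-lb's default stats port):

```shell
# Hypothetical check: marathon-lb serves an HAProxy stats page on port 9090.
curl "http://<public-agent-ip>:9090/haproxy?stats"
# The CSV variant is easier to grep for your service port:
curl -s "http://<public-agent-ip>:9090/haproxy?stats;csv" | grep 10000
```

If the frontend for port 10000 is missing from the stats output, marathon-lb never picked the app up (check the HAPROXY_GROUP label); if it is present but has no healthy backend, the health check is the place to look.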
From outside the cluster
From inside the cluster
core@ip-10-0-4-343 ~ $ curl http://marathon-lb.marathon.mesos:10000/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>


Fiware Orion MQTT notification not working (anymore)

I don’t know where to look anymore, maybe someone has an idea what’s going wrong?
I created an MQTT subscription on my Orion Context Broker:
{
  "description": "Subscription to notify of all WaterQualityObserved changes",
  "subject": {
    "entities": [{
      "idPattern": ".*",
      "type": "WaterQualityObserved"
    }],
    "condition": {
      "attrs": []
    }
  },
  "notification": {
    "mqtt": {
      "url": "mqtt://127.0.0.1:1883",
      "topic": "water-quality-observed-changed"
    }
  }
}
I have both my Orion Context Broker and Mosquitto MQTT broker running locally in Docker containers.
I get this when listing the subscriptions in my Orion CB:
[
  {
    "id": "633bf12fe929777b6a60242b",
    "description": "MQTT subscription to notify of all WaterQualityObserved changes",
    "status": "active",
    "subject": {
      "entities": [
        {
          "idPattern": ".*",
          "type": "WaterQualityObserved"
        }
      ],
      "condition": {
        "attrs": []
      }
    },
    "notification": {
      "timesSent": 3,
      "lastNotification": "2022-10-04T08:47:55.000Z",
      "attrs": [],
      "onlyChangedAttrs": false,
      "attrsFormat": "normalized",
      "mqtt": {
        "url": "mqtt://127.0.0.1:1883",
        "topic": "water-quality-observed-changed",
        "qos": 0
      },
      "lastFailure": "2022-10-04T08:47:55.000Z",
      "failsCounter": 3,
      "covered": false
    }
  }
]
As you can see, "timesSent" increments when I PATCH the entity, but so does "failsCounter".
The strange thing is it worked before!
Any idea what I’m doing wrong?
Thanks.
Guy
The "The strange thing is it worked before!" sentence makes me think it has to do with connectivity between containers. I'd suggest reviewing all of the involved connectivity (Orion -> MQTT broker, MQTT broker -> your MQTT subscriber). If that doesn't help, a re-deploy of all the Docker containers could help.
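One connectivity detail worth checking: inside the Orion container, 127.0.0.1 refers to the Orion container itself, not to the host where Mosquitto is published. A minimal sketch of a setup where the broker is reachable by name (the container and network names here are illustrative, not from the question):

```shell
# Hypothetical setup: a shared user-defined Docker network lets containers
# resolve each other by container name.
docker network create fiware
docker run -d --name mosquitto --network fiware -p 1883:1883 eclipse-mosquitto
docker run -d --name orion --network fiware -p 1026:1026 fiware/orion
```

With something like this, the subscription's notification URL would be "mqtt://mosquitto:1883" instead of "mqtt://127.0.0.1:1883".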

Failed to retrieve function source code when deploying a cloud function from a repository on a different project

I am trying to deploy a Cloud Function from a Cloud Source Repository placed in a different project, but getting the following error: Failed to retrieve function source code (see full proto below).
Project-A contains the cloud function and service accounts listed below.
Project-B contains the source repository.
I have successfully deployed the function on Project-B.
I've tried giving the following service accounts the Source Repository Administrator role on the Cloud Source Repository, but that did not help:
{project_A_number}@cloudservices.gserviceaccount.com
{project_A_number}-compute@developer.gserviceaccount.com
{project_A_number}@cloudbuild.gserviceaccount.com
Project-A@appspot.gserviceaccount.com
I have also tried disabling the Cloud Functions API on Project-A and turning it back on again.
I am not sure what is going wrong - if anyone has a clue as to where to further look, I would appreciate it - thanks in advance!
The deployment creates two entries in monitoring - a NOTICE followed by an ERROR:
The ERROR log:
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "status": {
      "code": 5,
      "message": "Failed to retrieve function source code"
    },
    "authenticationInfo": {
      "principalEmail": "***@***.**"
    },
    "serviceName": "cloudfunctions.googleapis.com",
    "methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
    "resourceName": "projects/Project-A/locations/europe-west1/functions/pubsub-to-gcs"
  },
  "insertId": "-vmfbt4cd54",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "function_name": "pubsub-to-gcs",
      "region": "europe-west1",
      "project_id": "Project-A"
    }
  },
  "timestamp": "2021-10-20T12:21:45.352043Z",
  "severity": "ERROR",
  "logName": "projects/Project-A/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operations/cm9ldHotbGlmZS1kYXRhLXRlc3QvZXVyb3BlLXdlc3QxL3B1YnN1Yi10by1nY3MvVEhFbUQtLTZITWM",
    "producer": "cloudfunctions.googleapis.com",
    "last": true
  },
  "receiveTimestamp": "2021-10-20T12:21:45.781856467Z"
}
The NOTICE log (logged right before the ERROR):
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "authenticationInfo": {
      "principalEmail": "***@****.**"
    },
    "requestMetadata": {
      "callerIp": "35.205.252.75",
      "callerSuppliedUserAgent": "google-cloud-sdk gcloud/360.0.0 command/gcloud.functions.deploy invocation-id/917d697431e84b91bfa2bd9f9cc4f302 environment/devshell environment-version/None interactive/True from-script/False python/3.7.3 term/screen (Linux 5.4.144+),gzip(gfe),gzip(gfe)",
      "requestAttributes": {
        "time": "2021-10-20T12:21:44.909430Z",
        "auth": {}
      },
      "destinationAttributes": {}
    },
    "serviceName": "cloudfunctions.googleapis.com",
    "methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
    "authorizationInfo": [
      {
        "resource": "projects/Project-A/locations/europe-west1/functions/pubsub-to-gcs",
        "permission": "cloudfunctions.functions.update",
        "granted": true,
        "resourceAttributes": {}
      }
    ],
    "resourceName": "projects/Project-A/locations/europe-west1/functions/pubsub-to-gcs",
    "request": {
      "@type": "type.googleapis.com/google.cloud.functions.v1.UpdateFunctionRequest",
      "function": {
        "timeout": "60s",
        "status": "UNKNOWN",
        "serviceAccountEmail": "Project-A@appspot.gserviceaccount.com",
        "availableMemoryMb": 256,
        "name": "projects/Project-A/locations/europe-west1/functions/pubsub-to-gcs",
        "runtime": "python39",
        "labels": {
          "deployment-tool": "cli-gcloud"
        },
        "entryPoint": "pubsub-to-gcs",
        "updateTime": "2021-10-20T12:21:40.149Z",
        "sourceRepository": {
          "url": "https://source.developers.google.com/projects/Project-B/repos/my-repo/moveable-aliases/master/paths/my-folder"
        },
        "httpsTrigger": {},
        "ingressSettings": "ALLOW_ALL",
        "versionId": "1"
      },
      "updateMask": "eventTrigger,httpsTrigger,runtime,sourceRepository"
    },
    "resourceLocation": {
      "currentLocations": [
        "europe-west1"
      ]
    }
  },
  "insertId": "1xdbim3e16pgu",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "function_name": "pubsub-to-gcs",
      "region": "europe-west1",
      "project_id": "Project-A"
    }
  },
  "timestamp": "2021-10-20T12:21:44.650257Z",
  "severity": "NOTICE",
  "logName": "projects/Project-A/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operations/cm9ldHotbGlmZS1kYXRhLXRlc3QvZXVyb3BlLXdlc3QxL3B1YnN1Yi10by1nY3MvVEhFbUQtLTZITWM",
    "producer": "cloudfunctions.googleapis.com",
    "first": true
  },
  "receiveTimestamp": "2021-10-20T12:21:45.832588036Z"
}
Turns out it wasn't an IAM issue: I had tried deploying the function from the UI, but that's not possible when deploying from a source repo in a different project.
Deploying with gcloud functions deploy solved the issue.
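For reference, a cross-project CLI deploy might look like the sketch below (function name, region, runtime, and repo path are taken from the logs above; the trigger flag is an assumption based on the httpsTrigger in the request):

```shell
# Sketch: function runs in Project-A, source lives in Project-B's
# Cloud Source Repository.
gcloud functions deploy pubsub-to-gcs \
  --project Project-A \
  --region europe-west1 \
  --runtime python39 \
  --trigger-http \
  --source https://source.developers.google.com/projects/Project-B/repos/my-repo/moveable-aliases/master/paths/my-folder
```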

Cache images received from Firebase Storage in PWA application

I have an Angular application with PWA configured. Besides caching assets/images, I would also like to cache the images from Firebase Storage once they have been loaded while online.
My application uses the Cloud Firestore database with data persistence enabled. When I need to load the authenticated user's avatar while offline, it tries to load it through the photoURL field, but since it is offline the image cannot be fetched, so nothing is displayed, which is a poor experience for the user.
In my code I load the image as follows:
<img class="avatar mr-0 mr-sm-16" src="{{ (user$ | async)?.photoURL || 'assets/images/avatars/profile.svg' }}">
I would like it, when offline, to look up the previously loaded image somewhere in the cache.
It would be very annoying to have to call some method to store the image in the cache every time I load one; I know it is possible, but I do not know how to do it.
Is it possible to do this through the ngsw-config.json configuration file?
ngsw-config.json:
{
  "index": "/index.html",
  "assetGroups": [
    {
      "name": "app",
      "installMode": "prefetch",
      "resources": {
        "files": [
          "/favicon.ico",
          "/index.html",
          "/*.css",
          "/*.js"
        ],
        "urls": [
          "https://fonts.googleapis.com/css?family=Muli:300,400,600,700"
        ]
      }
    }, {
      "name": "assets",
      "installMode": "lazy",
      "updateMode": "prefetch",
      "resources": {
        "files": [
          "/assets/**",
          "/*.(eot|svg|cur|jpg|png|webp|gif|otf|ttf|woff|woff2|ani)"
        ]
      }
    }
  ]
}
Yes, it's possible; I tried it and it works for me. I have a PWA with Ionic and Angular 7, and in my 'ngsw-config.json' I used this config:
{
  "index": "/index.html",
  "assetGroups": [{
    "name": "app",
    "installMode": "prefetch",
    "resources": {
      "files": [
        "/favicon.ico",
        "/index.html",
        "/*.css",
        "/*.js"
      ]
    }
  }, {
    "name": "assets",
    "installMode": "lazy",
    "updateMode": "prefetch",
    "resources": {
      "files": [
        "/assets/**",
        "/*.(eot|svg|cur|jpg|png|webp|gif|otf|ttf|woff|woff2|ani)"
      ]
    }
  }],
  "dataGroups": [{
    "name": "api-freshness",
    "urls": [
      "https://firebasestorage.googleapis.com/v0/b/mysuperrpwapp.appspot.com/"
    ],
    "cacheConfig": {
      "maxSize": 100,
      "maxAge": "180d",
      "timeout": "10s",
      "strategy": "freshness"
    }
  }]
}
This article explains well how it works and which strategies you can use:
https://medium.com/progressive-web-apps/a-new-angular-service-worker-creating-automatic-progressive-web-apps-part-1-theory-37d7d7647cc7
It was very important during testing to have a valid https connection for the service worker to start. Once offline, you can see in DevTools that the file is served from the service worker.
(Screenshot: test image served from the service worker)
Just set a long cache lifetime on the stored object's metadata:
storage.ref("pics/yourimage.jpg").updateMetadata({ 'cacheControl': 'private, max-age=15552000' }).subscribe(e => { });
and in your ngsw-config.json
"assetGroups": [{
  "name": "app",
  "installMode": "prefetch",
  "resources": {
    "files": [
      "/favicon.ico",
      "/index.html",
      "/*.css",
      "/*.js"
    ],
    "urls": [
      "https://firebasestorage.googleapis.com/v0/b/*"
    ]
  }
}]

Apache Mesos,MESOS-DNS, MARATHON and Docker

In my environment, mesos-slave, mesos-master, marathon, and mesos-dns are running in standalone mode.
I deployed a MySQL app to Marathon to run as a Docker container.
The MySQL app configuration is as follows:
{
  "id": "mysql",
  "cpus": 0.5,
  "mem": 512,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mysql:5.6.27",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 3306,
          "hostPort": 32000,
          "protocol": "tcp"
        }
      ]
    }
  },
  "constraints": [
    [
      "hostname",
      "UNIQUE"
    ]
  ],
  "env": {
    "MYSQL_ROOT_PASSWORD": "password"
  },
  "minimumHealthCapacity": 0,
  "maximumOverCapacity": 0.0
}
Then I deploy an app called mysqlclient, which needs to connect to the mysql app.
The mysqlclient app config is as follows:
{
  "id": "mysqlclient",
  "cpus": 0.3,
  "mem": 512.0,
  "cmd": "/scripts/create_mysql_dbs.sh",
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mysqlclient:latest",
      "network": "BRIDGE",
      "portMappings": [{
        "containerPort": 3306,
        "hostPort": 0,
        "protocol": "tcp"
      }]
    }
  },
  "env": {
    "MYSQL_ENV_MYSQL_ROOT_PASSWORD": "password",
    "MYSQL_PORT_3306_TCP_ADDR": "mysql.marathon.slave.mesos.",
    "MYSQL_PORT_3306_TCP_PORT": "32000"
  },
  "minimumHealthCapacity": 0,
  "maximumOverCapacity": 0.0
}
My mesos-dns config.json is as follows:
{
  "zk": "zk://127.0.0.1:2181/mesos",
  "masters": ["127.0.0.1:5050"],
  "refreshSeconds": 60,
  "ttl": 60,
  "domain": "mesos",
  "port": 53,
  "resolvers": ["127.0.0.1"],
  "timeout": 5,
  "httpon": true,
  "dnson": true,
  "httpport": 8123,
  "externalon": true,
  "listener": "127.0.0.1",
  "SOAMname": "ns1.mesos",
  "SOARname": "root.ns1.mesos",
  "SOARefresh": 60,
  "SOARetry": 600,
  "SOAExpire": 86400,
  "SOAMinttl": 60,
  "IPSources": ["mesos", "host"]
}
I can ping the service name mysql.marathon.slave.mesos. from the host machine, but when I try to ping it from the mysql Docker container I get host unreachable. Why can't the Docker container resolve the host name?
I tried setting the dns parameter on the apps, but that did not work.
EDIT:
I can ping mysql.marathon.slave.mesos. from the master/slave hosts, but not from the mysqlclient Docker container; it says unreachable. How can I fix this?
Not sure what your actual question is; my guess is that you want to know how to resolve a Mesos DNS service name to an actual endpoint from the MySQL client.
If so, you can use my mesosdns-resolver bash script to get the endpoint from Mesos DNS:
mesosdns-resolver.sh -sn mysql.marathon.mesos -s <IP_ADDRESS_OF_MESOS_DNS_SERVER>
You can use this in your create_mysql_dbs.sh script (whatever it does) to get the actual IP address and port where your mysql app is running.
You can pass in an environment variable like
"MYSQL_ENV_SERVICE_NAME": "mysql.marathon.mesos"
and then use it like this in the image/script
mesosdns-resolver.sh -sn $MYSQL_ENV_SERVICE_NAME -s <IP_ADDRESS_OF_MESOS_DNS_SERVER>
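Since the mesos-dns config above has the HTTP API enabled ("httpon": true on "httpport": 8123), you can also query it directly to sanity-check what a service name resolves to (the host below is a placeholder):

```shell
# Mesos-DNS REST API: SRV records (IP plus port) and A records for the app.
curl "http://<mesos-dns-host>:8123/v1/services/_mysql._tcp.marathon.mesos"
curl "http://<mesos-dns-host>:8123/v1/hosts/mysql.marathon.mesos"
```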
Also, please note that Marathon is not necessarily the right tool for running one-off operations (I assume you initialize your DBs with the second app). Chronos would be a better choice for this.
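Regarding the dns parameter that reportedly didn't work: it needs to be a Docker parameter on the container, and it must point at an address where Mesos-DNS is actually reachable from containers; the config shown binds the listener to 127.0.0.1, which a bridged container cannot reach. A hedged sketch of the container section (the 10.0.0.10 address is a placeholder for the host's routable IP):

```json
"container": {
  "type": "DOCKER",
  "docker": {
    "image": "mysqlclient:latest",
    "network": "BRIDGE",
    "parameters": [
      { "key": "dns", "value": "10.0.0.10" }
    ]
  }
}
```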

Deploying Docker containers with port-mapping on Mesos/Marathon

I am currently working on a team project using Docker with Apache Mesos/Marathon. To deploy MySQL Docker containers on Mesos/Marathon, we have to create a JSON file with port mappings. I have searched everywhere on the internet and just can't find any sample JSON file to look at. Has anyone done this before?
Here's some example Marathon JSON for using Docker's bridged networking mode:
{
  "id": "bridged-webapp",
  "cmd": "python3 -m http.server 8080",
  "cpus": 0.5,
  "mem": 64.0,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "python:3",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "servicePort": 9000, "protocol": "tcp" },
        { "containerPort": 161, "hostPort": 0, "protocol": "udp" }
      ]
    }
  }
}
See the "Bridged Networking Mode" section in
https://mesosphere.github.io/marathon/docs/native-docker.html for more details.