I'm trying to set up the following environment on Google Cloud and have 3 major problems with it:
Database Cluster
- 3 nodes
- one port open to the world, a few ports open to the compute cluster
Compute Cluster
- 5 nodes
- communicates with the database cluster
- two ports open to the world
- runs Docker containers
a) The database cluster runs fine and I have the configuration port open to the world, but I don't know how to limit the other ports to only the compute cluster.
I managed to get the first Pod and Replication Controller running on the compute cluster and created a service to expose the container to the world:
controller:
{
"id": "api-controller",
"kind": "ReplicationController",
"apiVersion": "v1beta1",
"desiredState": {
"replicas": 2,
"replicaSelector": {
"name": "api"
},
"podTemplate": {
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "apiController",
"containers": [{
"name": "api",
"image": "gcr.io/my/api",
"ports": [{
"name": "api",
"containerPort": 3000
}]
}]
}
},
"labels": {
"name": "api"
}
}
}
}
service:
{
"id": "api-service",
"kind": "Service",
"apiVersion": "v1beta1",
"selector": {
"name": "api"
},
"containerPort": "api",
"protocol": "TCP",
"port": 80,
"selector": { "name": "api" },
"createExternalLoadBalancer": true
}
b) The container exposes port 3000, the service port 80. Where's the connection between the two?
The firewall works with labels. I want 4-5 different pods running in my compute cluster, with 2 of them having ports open to the world. There can be 2 or more containers running on the same instance. The labels, however, are specific to the nodes, not the containers.
c) Do I expose all nodes with the same firewall configuration? I can't assign labels to containers, so I'm not sure how to expose, for example, the api service.
I'll try my best to answer all of your questions as best I can.
First off, you will want to upgrade to v1 of the Kubernetes API, because v1beta1 and v1beta3 will no longer be available after Aug. 5th:
https://cloud.google.com/container-engine/docs/v1-upgrade
Also, use YAML. It's so much less verbose ;)
--
Now on to the questions you asked:
a) I'm not sure I completely understand what you are asking here, but it sounds like running the services in the same cluster (with resource limits) would be much easier than trying to deal with cross-cluster networking.
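That said, if you do keep two clusters, GCE firewall rules can restrict a port to traffic coming from the compute nodes by matching on instance tags. A minimal sketch, assuming the compute nodes are tagged compute-node and the database nodes db-node (both tag names, and port 5432, are placeholders):
gcloud compute firewall-rules create db-from-compute \
    --allow tcp:5432 \
    --source-tags compute-node \
    --target-tags db-node
This opens the database port only to instances carrying the compute-node tag instead of to 0.0.0.0/0.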
b) You need to specify a targetPort so that the service knows which port to use on the container. This should match port 3000 from your replication controller. See the docs for more info.
{
"kind": "Service",
"apiVersion": "v1",
"metadata: {
"labels": [{
"name": "api-service"
}],
},
"spec": {
"selector": {
"name": "api"
},
"ports": [{
"port": 80,
"targetPort": 3000
}],
"type": "LoadBalancer"
}
}
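For reference, the same service in YAML (equivalent to the JSON above):
kind: Service
apiVersion: v1
metadata:
  name: api-service
  labels:
    name: api-service
spec:
  selector:
    name: api
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer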
c) Yes. In Kubernetes, kube-proxy accepts traffic on any node and routes it to the appropriate node or local pod. You don't need to map the load balancer to, or write firewall rules for, the specific nodes that happen to be running your pods (that set can actually change if you do a rolling update!). kube-proxy will route traffic to the right place even if your service is not running on the node that received it.
Related
I have an ARM template that deploys APIs to an API Management instance.
Here is an example of one API:
{
"properties": {
"authenticationSettings": {
"subscriptionKeyRequired": false
},
"subscriptionKeyParameterNames": {
"header": "Ocp-Apim-Subscription-Key",
"query": "subscription-key"
},
"apiRevision": "1",
"isCurrent": true,
"subscriptionRequired": true,
"displayName": "DDD.CRM.PostLeadRequest",
"serviceUrl": "https://test1/api/FuncCreateLead?code=XXXXXXXXXX",
"path": "CRMAPI/PostLeadRequest",
"protocols": [
"https"
]
},
"name": "[concat(variables('ApimServiceName'), '/mms-crm-postleadrequest')]",
"type": "Microsoft.ApiManagement/service/apis",
"apiVersion": "2019-01-01",
"dependsOn": []
}
When I am deploying this to different environments, I would like to be able to substitute the service URL depending on the environment. I'm wondering what the best approach is.
Can I read in a config file or something like that?
At the time of deployment I have a variable that tells me the environment, so I can base decisions on that. I'm just not sure of the best way to do it.
See the docs on ARM template parameters: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates#parameters. Parameters can be specified in a separate file, so you keep a single template but have environment-specific parameter files.
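As a minimal sketch (the parameter name serviceUrl is my choice), declare a parameter in the template and reference it where the URL currently sits:
"parameters": {
"serviceUrl": {
"type": "string",
"metadata": { "description": "Environment-specific backend URL" }
}
}
and in the API resource:
"serviceUrl": "[parameters('serviceUrl')]"
Then each environment gets its own parameter file, e.g. test.parameters.json:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"serviceUrl": { "value": "https://test1/api/FuncCreateLead?code=XXXXXXXXXX" }
}
}
At deploy time, pass the file that matches your environment variable (e.g. -TemplateParameterFile with New-AzResourceGroupDeployment).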
I have built a Docker application using docker-compose which includes MySQL. I have pushed those containers to Azure and wanted to deploy them to an edge device using Azure IoT Edge. For this I deployed the application container and the MySQL container to the edge device; the application is running, but MySQL is not running on the edge device after deployment.
Here are the container create options that I have given for the MySQL module.
Is it because I am using root as the user, which is refusing connections from a different client?
{
"Env": [
"ACCEPT_EULA=Y",
"MSSQL_ROOT_PASSWORD=root"
],
"HostConfig": {
"PortBindings": {
"13306/tcp": [
{
"HostPort": "13306"
}
],
"32000/tcp": [
{
"HostPort": "32000"
}
]
},
"Mounts": [
{
"Type": "volume",
"Source": "sqlVolume",
"Target": "/var/lib/mysql"
}
]
}
}
Tearing my hair out. I learnt lots from my previous mistakes (Cannot connect remotely to EC2 MySQL installation), and I have now configured everything identically (AFAICT, outputs below), but I cannot get Heroku to connect to my new AWS RDS MySQL instance! My old instances are fine.
One concern I have is that there is conflicting info out there about how to use wildcards in the GRANT statements.
The Heroku RDS article (https://devcenter.heroku.com/articles/amazon-rds) says:
GRANT USAGE ON *.* TO 'username'@'%';
BUT https://www.flydata.com/blog/access-denied-issue-amazon-rds/ suggests a different syntax using '%':
GRANT USAGE ON `%`.* TO `username`@`%` IDENTIFIED BY 'pwd';
to no effect.
So:
all instances created with same security group
security group has inbound access (and works for 2 other instances)
GRANT access (as per my original 2 instances)
Tried new suggested syntax of % not *
I have tried:
with or without SSL
creating a new security group
Security groups (all instances use the same security group across my 3 environments, but one cannot be reached from Heroku):
$ grep sg- aws_instance.txt
"VpcSecurityGroupId": "sg-c8ce36b4"
"VpcSecurityGroupId": "sg-c8ce36b4"
"VpcSecurityGroupId": "sg-c8ce36b4"
Security group config
Visually I can see the inbound config: MYSQL,TCP,3306,0.0.0.0/0
{
"DBSecurityGroups": [
{
"DBSecurityGroupDescription": "default",
"IPRanges": [
{
"Status": "authorized",
"CIDRIP": "0.0.0.0/32"
},
{
"Status": "authorized",
"CIDRIP": "0.0.0.0/0"
},
{
"Status": "authorized",
"CIDRIP": "87.1.1.1/32"
}
],
"OwnerId": "xxxxxxx",
"DBSecurityGroupArn": "arn:aws:rds:us-east-1:xxxxxxx:secgrp:default",
"EC2SecurityGroups": [
{
"Status": "authorized",
"EC2SecurityGroupName": "default",
"EC2SecurityGroupOwnerId": "xxxxxxxxx",
"EC2SecurityGroupId": "sg-2aca2f43"
}
],
"DBSecurityGroupName": "default"
},
{
"VpcId": "vpc-a7d034c1",
"DBSecurityGroupDescription": "Inbound DB only",
"IPRanges": [],
"OwnerId": "xxxxxx",
"DBSecurityGroupArn": "arn:aws:rds:us-east-1:xxxxxxx:secgrp:mysecuritygroupdbonly",
"EC2SecurityGroups": [],
"DBSecurityGroupName": "mysecuritygroupdbonly"
}
]
}
I am just starting with BlueMix and in my space I have:
- a Cloud Integration service, using a Basic Secure Connection, for which I have created an API endpoint; in that Cloud Integration service I have added the corresponding API by importing a Swagger 1.2 file, and published that customAPI to my organization;
- a pretty simple node.js application.
From the Cloud Integration service > API view, I can get the URLs for the different resources (for instance http://endpoint_ip:endpoint_port/api/version/path_to_resource), so I can hardcode these URLs in my node.js application and it works.
But if I bind the Cloud Integration service and even the customAPI to my node.js application, I don't get any information in VCAP_SERVICES about the endpoint URL; but I have seen examples of VCAP_SERVICES where the API URL is available.
Below is my VCAP_SERVICES:
{"CloudIntegration": [
{
"name": "Cloud Integration-b9",
"label": "CloudIntegration",
"plan": "cloudintegrationplan",
"credentials": {
"userid": "apiuser#CloudIntegration",
"password": "S!2w3e40",
"apis": [
{
"name": "Catalog Manager API",
"desc": "Catalog Manager API",
"resource": ""
}
]
}
}
]
}
What I am trying to achieve is to avoid hardcoding URLs in my application, since I can bind a BlueMix service to it, and perhaps get info from the environment.
Am I doing something wrong? Or is that not the way it is supposed to work?
Also, I don't really get why there is nothing in VCAP_SERVICES.CloudIntegration[0].credentials.apis[0].resource, even though my customAPI specifies resources.
@Rick
Make sure you "publish" your API after configuring the Cloud Integration service. Then service credentials will reflect the changes:
"CloudIntegration": [
{
"name": "Cloud Integration-v5",
"label": "CloudIntegration",
"plan": "cloudintegrationplan",
"credentials": {
"userid": "apiuser#CloudIntegration",
"password": "S!2w3e40",
"apis": [
{
"name": "SwaggerPetStore",
"desc": "SwaggerPetStore",
"resource": "http",
"baseurl": "http://mypypatchank.mybluemix.net"
}
]
}
}
]
In the same way, if you use the API Management service, you will have a corresponding VCAP_SERVICES entry:
"Swagger Petstore v1 : Sandbox 551b2dcf0cf2521d98d061d4 prod": [
{
"name": "Swagger Petstore v1 : Sandbox prod-w0",
"label": "Swagger Petstore v1 : Sandbox 551b2dcf0cf2521d98d061d4 prod",
"plan": "plan1 : Sandbox prod",
"credentials": {
"clientID": "55cfe3fa-ff59-474c-a1b6-46d3cc9871",
"clientSecret": "uK3xM3eF4cA1qF7yW8mC2lP6wS6aG7sQ5cL2yJ4sC6iS1dE7",
"url": "https://api.eu.apim.ibmcloud.com/garciatemx1ibmcom/sb/api"
}
}
]
Since your goal is "to avoid hardcoding URLs in my application, since I can bind a BlueMix service to it, and perhaps get info from the environment", I would suggest using a user-provided service.
The following command creates a user-provided service and starts an interactive prompt for you to enter the API URL and a password. You can add more parameters if you need them.
cf cups servicename -p "url, password"
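Then bind the service to your application and restage it (the application name myapp is a placeholder):
cf bind-service myapp servicename
cf restage myapp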
After restaging, you can access these parameters in your Node.js application easily with the cfenv module:
var cfenv = require("cfenv");
var appEnv = cfenv.getAppEnv();
var myService = appEnv.getService("servicename");
//use myService.credentials.url to access the url value.
//use myService.credentials.password to access the password value.
The user-provided service's VCAP_SERVICES entry looks like:
{
"user-provided": [
{
"name": "servicename",
"label": "user-provided",
"credentials": {
"url": "myURL",
"password": "myPassword"
}
}
]
}
I'm playing around with Google's managed VM feature and finding you can fairly easily create some interesting setups. However, I have yet to figure out whether it's possible to use persistent disks to mount a volume in the container, and not having this feature seems to limit the usefulness of managed VMs for stateful containers such as databases.
So the question is: how can I mount the persistent disk that Google creates for my Compute engine instance, to a container volume?
Attaching a persistent disk to a Google Compute Engine instance
Follow the official persistent-disk guide:
Create a disk
Attach it to an instance, either at instance creation time or while the instance is running
Use the tool /usr/share/google/safe_format_and_mount to mount the device file /dev/disk/by-id/google-...
As noted by Faizan, use docker run -v /mnt/persistent_disk:/container/target/path to include the volume in the docker container; a full sketch of these steps follows below
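Putting those steps together as a sketch (the disk, instance, and image names are placeholders; adjust the zone to yours):
# 1. create a persistent disk
gcloud compute disks create my-data-disk --size 200GB --zone us-central1-a
# 2. attach it to a running instance, pinning the device name
gcloud compute instances attach-disk my-instance --disk my-data-disk --device-name my-data-disk --zone us-central1-a
# 3. on the instance: format (if new) and mount it
sudo mkdir -p /mnt/persistent_disk
sudo /usr/share/google/safe_format_and_mount /dev/disk/by-id/google-my-data-disk /mnt/persistent_disk
# 4. run the container with the mounted directory as a volume
docker run -v /mnt/persistent_disk:/container/target/path <docker_image>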
Referencing a persistent disk in Google Container Engine
In this method, you specify the volume declaratively (after initializing it as mentioned above...) in the Replication Controller or Pod declaration. The following is a minimal excerpt of a replication controller JSON declaration. Note that the volume has to be declared read-only here, because a persistent disk may be mounted read-write by at most one instance at a time; with multiple replicas, every instance must mount it read-only.
{
"id": "<id>",
"kind": "ReplicationController",
"apiVersion": "v1beta1",
"desiredState": {
"replicas": 3,
"replicaSelector": {
"name": "<id>"
},
"podTemplate": {
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "<id>",
"containers": [
{
"name": "<id>",
"image": "<docker_image>",
"volumeMounts": [
{
"name": "persistent_disk",
"mountPath": "/pd",
"readOnly": true
}
],
...
}
],
"volumes": [
{
"name": "persistent_disk",
"source": {
"persistentDisk": {
"pdName": "<persistend_disk>",
"fsType": "ext4",
"readOnly": true
}
}
}
]
}
},
"labels": {
"name": "<id>"
}
}
},
"labels": {
"name": "<id>"
}
}
If your persistent disk is already attached and mounted on the instance, I believe you can use it as a data volume with your docker container. The Docker documentation explains the steps for managing data in containers.
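For instance, the data-volume-container pattern from those docs, sketched with hypothetical names (dbstore, db1) and the stock mysql image:
# create a container that only owns the volume
docker create -v /var/lib/mysql --name dbstore mysql /bin/true
# run the database re-using that volume
docker run -d --volumes-from dbstore --name db1 -e MYSQL_ROOT_PASSWORD=secret mysql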