I am new to EKS with Fargate. I am using a sample provided by AWS:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-namespace.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-deployment.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-service.yaml
$ kubectl apply -f 2048-ingress.yaml
I am using only a Fargate profile. After deploying this, I am getting an error in the ingress controller:
E1113 13:48:19.338173 1 controller.go:217]
kubebuilder/controller "msg"="Reconciler error" "error"="failed to
build LoadBalancer configuration due to failed to resolve 2 qualified
subnet with at least 8 free IP Addresses for ALB. Subnets must
contains these tags: 'kubernetes.io/cluster/thingcenter': ['shared' or
'owned'] and 'kubernetes.io/role/elb': ['' or '1']. See
https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/config/#subnet-auto-discovery for more details. Resolved qualified subnets: '[]'"
"controller"="alb-ingress-controller"
"request"={"Namespace":"2048-game","Name":"2048-ingress"}
So, according to the AWS docs, I followed the steps given in this link:
vpc-subnet
That is, I have tagged the private subnet with these key/value pairs as shown in the image below. The cluster name is thingcenter.
Is this the correct place to tag, or is there anything I missed? Please share your ideas.
How many subnets do you have?
According to the ALB-ingress-controller docs, you must have at least two subnets in different AZs. Based on the screenshot, there is only one subnet.
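If the subnets already exist, the tags can also be added from the CLI. This is only a sketch: the subnet IDs below are placeholders, and for an internet-facing ALB the tagged subnets should be public ones in at least two different AZs.
$ aws ec2 create-tags \
    --resources subnet-0aaa1111 subnet-0bbb2222 \
    --tags Key=kubernetes.io/cluster/thingcenter,Value=shared \
           Key=kubernetes.io/role/elb,Value=1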
My structure
Kubernetes cluster on GKE
Ingress controller deployed using helm
An application which returns a list of IP ranges (note: the list gets updated periodically):
curl https://allowed.domain.com
172.30.1.210/32,172.30.2.60/32
The secured application, which is the part that is not working
What am I trying to do?
Have my clients' IPs available from my API endpoint, which is done:
curl https://allowed.domain.com
172.30.1.210/32,172.30.2.60/32
Deploy my example app with an ingress that pulls from https://allowed.domain.com and only allows those IPs to access the app
What did I try that didn't work?
Deploy the application with the include feature of nginx:
nginx.ingress.kubernetes.io/configuration-snippet: |
include /tmp/allowed-ips.conf;
deny all;
Yes, it's working, but the problem is that when /tmp/allowed-ips.conf gets updated, the ingress config doesn't.
I also tried to use an if condition to pull the IPs from the endpoint and deny access if the user is not in the list:
nginx.ingress.kubernetes.io/configuration-snippet: |
set $deny_access off;
if ($remote_addr !~ (https://2ce8-73-56-131-204.ngrok.io)) {
set $deny_access on;
}
I am using the nginx.ingress.kubernetes.io/whitelist-source-range annotation, but that is not what I am looking for.
None of the options are working for me.
From the official docs of ingress-nginx controller:
The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app)
After the nginx ingress resource is initially created, the ingress controller assembles the nginx.conf file and uses it for routing traffic. The Nginx web server does not automatically reload its configuration when nginx.conf or other config files change.
So, you can work around this problem in several ways:
update the k8s Ingress resource with the new IP addresses and then apply the changes to the Kubernetes cluster (kubectl apply / kubectl patch / something else), for your options 2 and 3; a sketch of this approach follows the list.
run nginx -s reload inside an ingress Pod to reload the nginx configuration, for your option 1 with the included allow-list file:
$ kubectl exec ingress-nginx-controller-xxx-xxx -n ingress-nginx -- nginx -s reload
try to write a Lua script (there is a good example for Nginx+Lua+Redis here and here). You should have a good understanding of nginx and Lua to estimate whether it is worth trying.
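For the first approach, a minimal sketch (assuming the endpoint returns a comma-separated list of CIDRs, and using a hypothetical ingress named example-app in the default namespace) could look like:
$ IPS=$(curl -s https://allowed.domain.com)
$ kubectl -n default annotate ingress/example-app --overwrite \
    nginx.ingress.kubernetes.io/whitelist-source-range="$IPS"
You would still need to run this periodically (e.g. from a CronJob), since the annotation only changes when you re-apply it.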
Sharing what I implemented at my workplace. We had a managed monitoring tool called Site24x7. The tool pings our server from their VMs with dynamic IPs, and we had to automate whitelisting those IPs on GKE.
nginx.ingress.kubernetes.io/configuration-snippet allows you to set arbitrary Nginx configurations.
Set up a K8s CronJob resource on the specific namespace.
The CronJob runs a shell script, which
fetches the list of IPs to be allowed (curl, getent, etc.)
generates a set of NGINX configurations (= the value for nginx.ingress.kubernetes.io/configuration-snippet)
runs a kubectl command which overwrites the annotation of the target ingresses.
Example shell/bash script:
#!/bin/bash
site24x7_ip_lookup_url="site24x7.enduserexp.com"
site247_ips=$(getent ahosts $site24x7_ip_lookup_url | awk '{print "allow "$1";"}' | sort -u)
ip_whitelist=$(cat <<-EOT
# ---------- Default whitelist (Static IPs) ----------
# Office
allow vv.xx.yyy.zzz;
# VPN
allow aa.bbb.ccc.ddd;
# ---------- Custom whitelist (Dynamic IPs) ----------
$site247_ips # Here!
deny all;
EOT
)
for target_ingress in $TARGET_INGRESS_NAMES; do
kubectl -n $NAMESPACE annotate ingress/$target_ingress \
--overwrite \
nginx.ingress.kubernetes.io/satisfy="any" \
nginx.ingress.kubernetes.io/configuration-snippet="$ip_whitelist" \
description="*** $(date '+%Y/%m/%d %H:%M:%S') NGINX annotation 'configuration-snippet' updated by cronjob $CRONJOB_NAME ***"
done
The shell/bash script can be stored as a ConfigMap and mounted into the CronJob's Pod.
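For example (names are placeholders, and the script above is assumed to be saved locally as update-whitelist.sh):
$ kubectl -n ingress-nginx create configmap whitelist-updater-script \
    --from-file=update-whitelist.sh
The CronJob's Pod spec then mounts this ConfigMap as a volume and runs the script with an image that has bash and kubectl available.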
My problem is to create 5k instances and retrieve their public IP addresses.
Specifically for zone us-west1-a I can create a group of 50 instances by the following:
gcloud compute instance-groups managed create test --base-instance-name morning --size 50 --template benchmark-template-micro --zone us-west1-a
Questions:
How do I specify the startup script to run on each created instance? I can't find it here.
How do I get the public IP addresses of those created instances?
The startup script can be assigned to the instance template used by the group; see here.
One can obtain information about the group with gcloud compute instance-groups managed describe.
There are no public IP addresses unless you assign external IP addresses to the instances.
As mentioned by Martin, the startup-script is configured in the instance template.
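A rough sketch of creating such a template (reusing the template name from the question; the machine type and the local startup.sh file are placeholder choices):
$ gcloud compute instance-templates create benchmark-template-micro \
    --machine-type=f1-micro \
    --metadata-from-file startup-script=startup.sh
Every instance the managed group creates from this template will then run startup.sh on boot.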
Unfortunately, there is no API that lists the IP addresses of the instances in the group. There are, however, APIs (and gcloud commands) to get the list of instances and the IP addresses of individual instances. Here is an example to fetch this information from the command line:
gcloud compute instance-groups list-instances $INSTANCE_GROUP --uri \
| xargs -I '{}' gcloud compute instances describe '{}' \
--flatten networkInterfaces[].accessConfigs[] \
--format 'csv[no-heading](name,networkInterfaces.accessConfigs.natIP)'
To speed this up, you may want to use the -P flag of xargs to parallelize the instance describe requests.
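For example (the -P value of 20 is an arbitrary choice):
gcloud compute instance-groups list-instances $INSTANCE_GROUP --uri \
| xargs -P 20 -I '{}' gcloud compute instances describe '{}' \
--flatten networkInterfaces[].accessConfigs[] \
--format 'csv[no-heading](name,networkInterfaces.accessConfigs.natIP)'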
Since all instances in the group have the same prefix, you can also just do a list search by prefix. Although this may pull in other instances that use the same prefix even if they are not part of the instance group:
gcloud compute instances list --filter="name ~ ^${PREFIX}" \
--flatten networkInterfaces[].accessConfigs[] \
--format 'csv[no-heading](name,networkInterfaces.accessConfigs.natIP)'
I am following Tutorial: Creating Kubernetes clusters in IBM Bluemix Container Service, but when I try to bind a service to my cluster I get:
$ bx cs cluster-service-bind kub_cluster myns cloudant
FAILED
Error storing cluster namespace secret (E0025)
Incident ID: ebdbdd0d-5d6a-4373-8e54-b7dd84733a29
I have a worker node:
$ bx cs workers kub_cluster
which lists one worker in State 'normal' and Status 'Ready'.
I tried with different services (messageHub and Cloudant) and different names for the namespace. These are services I already have. Does anyone know how to get around this?
I was able to test this out following the same guide. I used the tone analyzer service. For testing I used the default namespace.
Are you able to see the namespace you are using when you list the available Kubernetes namespaces? The "myns" option needs to be a Kubernetes namespace.
$ kubectl get namespaces
This should print out the default namespace as well as other system namespaces + any namespaces you created.
Earlier in the guide a namespace is set up for the Docker registry; it is possible that you are using that namespace.
Other instances of this issue appear to be related to the status of the cluster. It looks like your cluster has an available node (normal and ready), so it should be able to store the secret in an available namespace.
You might be missing the specific namespace in your cluster.
You can create one by calling:
kubectl create namespace <your namespace>
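For example, using the namespace name from the question (and then retrying the bind):
$ kubectl create namespace myns
$ bx cs cluster-service-bind kub_cluster myns cloudant
Once the namespace exists in the cluster, the bind command should be able to store the service credentials as a secret in it.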
Hi, I am trying to list compute instances in a specific network and subnetwork, and I can't seem to get the filtering right. For example, I have a network named "prod-net" with a subnetwork named "app-central". When I run the search I just get "Listed 0 items".
~ gcloud compute instances list --filter='network:prod-net'
Listed 0 items.
Any suggestions?
The --filter flag doesn't operate on the table data, but rather the underlying rich resource object. To see this object, run gcloud compute instances list --format=json.
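The networkInterfaces portion of that output looks roughly like this (trimmed; the values are placeholders):
"networkInterfaces": [
  {
    "network": "https://www.googleapis.com/compute/v1/projects/my-project/global/networks/prod-net",
    "subnetwork": "https://www.googleapis.com/compute/v1/projects/my-project/regions/us-central1/subnetworks/app-central",
    ...
  }
]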
What you're looking for in this case is:
$ gcloud compute instances list --filter='networkInterfaces.network=prod-net'
(I switched the : to = because the former means "contains" and the latter means an exact match. See gcloud topic filters for more).
You can indeed filter GCE instances by subnetwork using gcloud.
You need to filter by networkInterfaces.subnetwork, and the literal value to compare with is the full subnet resource URL, not just the subnet name.
The "resource URL" for your subnet can be obtained by:
gcloud compute networks subnets list <YOUR_SUBNET_NAME> --format=flattened
Example:
$ gcloud compute networks subnets list sg-zk-1 --project my-gcp-project --format=flattened
---
creationTimestamp: 2017-04-20T02:22:17.853-07:00
gatewayAddress: 10.9.19.33
id: 6783412628763296550
ipCidrRange: 10.9.19.32/28
kind: compute#subnetwork
name: sg-zk-1
network: valkyrie
privateIpGoogleAccess: True
region: asia-southeast1
selfLink: https://www.googleapis.com/compute/v1/projects/my-gcp-project/regions/asia-southeast1/subnetworks/sg-zk-1
In the above example, the subnet-name is sg-zk-1.
The corresponding resource URL for the subnet is the value of the selfLink which is https://www.googleapis.com/compute/v1/projects/my-gcp-project/regions/asia-southeast1/subnetworks/sg-zk-1.
Now that I have the subnet_url I can filter the instances belonging to it:
$ subnet_url="https://www.googleapis.com/compute/v1/projects/my-gcp-project/regions/asia-southeast1/subnetworks/sg-zk-1"
$ gcloud compute instances list --filter="networkInterfaces.subnetwork=${subnet_url}"
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
sg-zookeeper-4 asia-southeast1-b n1-standard-2 10.9.19.37 RUNNING
sg-zookeeper-5 asia-southeast1-b n1-standard-2 10.9.19.38 RUNNING
sg-zookeeper-1 asia-southeast1-a n1-standard-2 10.9.19.34 RUNNING
sg-zookeeper-2 asia-southeast1-a n1-standard-2 10.9.19.35 RUNNING
sg-zookeeper-3 asia-southeast1-a n1-standard-2 10.9.19.36 RUNNING
I am trying to create an autoscaled container cluster on GKE.
When I use the "--enable-autoscaling" option (as the documentation indicates here: https://cloud.google.com/container-engine/docs/clusters/operations#create_a_cluster_with_autoscaling):
$ gcloud container clusters create mycluster --zone $GOOGLE_ZONE --num-nodes=3 --enable-autoscaling --min-nodes=2 --max-nodes=5
but the MIG (Managed Instance Group) is not displayed as 'autoscaled', as shown by both the web interface and the result of the following command:
$ gcloud compute instance-groups managed list
NAME SIZE TARGET_SIZE AUTOSCALED
gke-mycluster... 3 3 no
Why?
Then I tried the other way indicated in the Kubernetes docs (http://kubernetes.io/docs/admin/cluster-management/#cluster-autoscaling), but got an error, apparently caused by the '=true':
$ gcloud container clusters create mytestcluster --zone=$GOOGLE_ZONE --enable-autoscaling=true --min-nodes=2 --max-nodes=5 --num-nodes=3
usage: gcloud container clusters update NAME [optional flags]
ERROR: (gcloud.container.clusters.update) argument --enable-autoscaling: ignored explicit argument 'true'
Is the doc wrong on this?
Here are my gcloud version results:
$ gcloud version
Google Cloud SDK 120.0.0
beta 2016.01.12
bq 2.0.24
bq-nix 2.0.24
core 2016.07.29
core-nix 2016.03.28
gcloud
gsutil 4.20
gsutil-nix 4.18
kubectl
kubectl-linux-x86_64 1.3.3
One last detail: the autoscaler seems to be 'on' in the description of the cluster:
$ gcloud container clusters describe mycluster | grep auto -A 3
- autoscaling:
enabled: true
maxNodeCount: 5
minNodeCount: 2
Can anyone explain this behaviour, please?
Kubernetes cluster autoscaling does not use the Managed Instance Group autoscaler. It runs a cluster-autoscaler controller on the Kubernetes master that uses Kubernetes-specific signals to scale your nodes. The code is in the autoscaler repo if you want more info. That is also why the MIG shows AUTOSCALED as 'no' even though cluster autoscaling is enabled on the cluster.
I've also sent out a PR to fix the invalid flag usage in the autoscaling docs. Thanks for catching that!
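For reference, --enable-autoscaling is a boolean flag, so the working form of that second command simply drops the '=true':
$ gcloud container clusters create mytestcluster --zone=$GOOGLE_ZONE \
    --enable-autoscaling --min-nodes=2 --max-nodes=5 --num-nodes=3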