Changing an ApiGateway restapi stage deployment via the cli or sdk - aws-sdk

I have a system in place for creating new deployments, but I would like to be able to change a stage to use a previous deployment. You can do this via the AWS console, but it appears it's not an option for v1 API Gateways via the SDK or CLI?

This can be done via the CLI for v1 APIs. You will have to run two commands: get-deployments and update-stage. Get the deployment ID from the output of the first and use it in the second.
$ aws apigateway get-deployments --rest-api-id $API_ID
$ aws apigateway update-stage --rest-api-id $API_ID --stage-name $STAGE_NAME --patch-operations op=replace,path=/deploymentId,value=$DEPLOYMENT_ID
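If you want to script this in one go, here is a sketch using a JMESPath --query filter (the description value 'my-previous-deploy' is only a placeholder; use whatever identifies the deployment you want to roll back to):
$ DEPLOYMENT_ID=$(aws apigateway get-deployments --rest-api-id $API_ID --query "items[?description=='my-previous-deploy'].id | [0]" --output text)
$ aws apigateway update-stage --rest-api-id $API_ID --stage-name $STAGE_NAME --patch-operations op=replace,path=/deploymentId,value=$DEPLOYMENT_ID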

How to allow IP dynamically using ingress controller

My structure
Kubernetes cluster on GKE
Ingress controller deployed using helm
An application which will return a list of IP ranges (note: it will get updated periodically)
curl https://allowed.domain.com
172.30.1.210/32,172.30.2.60/32
Secured application which is not working
What am I trying to do?
Have my clients' IPs in my API endpoint, which is done:
curl https://allowed.domain.com
172.30.1.210/32,172.30.2.60/32
Deploy my example app with an ingress so it can pull from https://allowed.domain.com and allow people to access the app
What did I try that didn't work?
Deploy the application with the include feature of nginx:
nginx.ingress.kubernetes.io/configuration-snippet: |
include /tmp/allowed-ips.conf;
deny all;
Yes, it's working, but the problem is that when /tmp/allowed-ips.conf gets updated, the ingress config doesn't.
I tried to use an if condition to pull the IPs from the endpoint and deny access if the user is not in the list:
nginx.ingress.kubernetes.io/configuration-snippet: |
set $deny_access off;
if ($remote_addr !~ (https://2ce8-73-56-131-204.ngrok.io)) {
set $deny_access on;
}
I am using the nginx.ingress.kubernetes.io/whitelist-source-range annotation, but that is not what I am looking for.
None of the options are working for me.
From the official docs of ingress-nginx controller:
The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app)
After the nginx ingress resource is initially created, the ingress controller assembles the nginx.conf file and uses it for routing traffic. The Nginx web server does not auto-reload its configuration when nginx.conf or other config files are changed.
So, you can work around this problem in several ways:
update the k8s ingress resource with new IP addresses and then apply the changes to the Kubernetes cluster (kubectl apply / kubectl patch / something else); this covers your options 2 and 3 (a patch sketch follows this list).
run nginx -s reload inside the ingress Pod to reload the nginx configuration; this covers your option 1 with the included allowed-list file.
$ kubectl exec ingress-nginx-controller-xxx-xxx -n ingress-nginx -- nginx -s reload
try to write a Lua script (there are good examples for Nginx+Lua+Redis here and here). You should have a good understanding of nginx and Lua to estimate whether it is worth trying.
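A minimal sketch of the first workaround, assuming placeholder names my-namespace and my-app for the ingress; the whitelist-source-range annotation is used here only to illustrate the mechanism, and the same patch works for any annotation:
$ ALLOWED_IPS=$(curl -s https://allowed.domain.com)
$ kubectl -n my-namespace patch ingress my-app --type=merge -p "{\"metadata\":{\"annotations\":{\"nginx.ingress.kubernetes.io/whitelist-source-range\":\"$ALLOWED_IPS\"}}}"
The ingress controller watches ingress objects, so changing the annotation makes it regenerate and reload nginx.conf without any manual reload.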
Sharing what I implemented at my workplace. We had a managed monitoring tool called Site24x7. The tool pings our server from their VMs with dynamic IPs, and we had to automate the whitelisting of those IPs at GKE.
nginx.ingress.kubernetes.io/configuration-snippet allows you to set arbitrary Nginx configurations.
Set up a K8s CronJob resource in the specific namespace.
The CronJob runs a shell script, which
fetches the list of IPs to be allowed (curl, getent, etc.)
generates a set of NGINX configurations (= the value for nginx.ingress.kubernetes.io/configuration-snippet)
runs a kubectl command which overwrites the annotation of the target ingresses.
Example shell/bash script:
#!/bin/bash
site24x7_ip_lookup_url="site24x7.enduserexp.com"
site247_ips=$(getent ahosts $site24x7_ip_lookup_url | awk '{print "allow "$1";"}' | sort -u)
ip_whitelist=$(cat <<-EOT
# ---------- Default whitelist (Static IPs) ----------
# Office
allow vv.xx.yyy.zzz;
# VPN
allow aa.bbb.ccc.ddd;
# ---------- Custom whitelist (Dynamic IPs) ----------
$site247_ips # Here!
deny all;
EOT
)
for target_ingress in $TARGET_INGRESS_NAMES; do
  kubectl -n $NAMESPACE annotate ingress/$target_ingress \
    --overwrite \
    nginx.ingress.kubernetes.io/satisfy="any" \
    nginx.ingress.kubernetes.io/configuration-snippet="$ip_whitelist" \
    description="*** $(date '+%Y/%m/%d %H:%M:%S') NGINX annotation 'configuration-snippet' updated by cronjob $CRONJOB_NAME ***"
done
The shell/bash script can be stored as a ConfigMap to be mounted on the CronJob resource.
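A minimal sketch of wiring this together, assuming the script above is saved locally as update-whitelist.sh, the namespace is my-namespace, and an image that ships both bash and kubectl (bitnami/kubectl is used here only as an example); the referenced ServiceAccount is also an assumption and needs RBAC permission to annotate ingresses:
$ kubectl -n my-namespace create configmap ip-whitelist-script --from-file=update-whitelist.sh
$ cat <<'EOF' | kubectl -n my-namespace apply -f -
apiVersion: batch/v1          # use batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: ip-whitelist-updater
spec:
  schedule: "*/15 * * * *"    # refresh the allow-list every 15 minutes
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ip-whitelist-updater   # assumed SA with rights to annotate ingresses
          restartPolicy: OnFailure
          containers:
          - name: updater
            image: bitnami/kubectl:latest            # any image with bash + kubectl works
            command: ["bash", "/scripts/update-whitelist.sh"]
            env:
            - name: NAMESPACE
              value: my-namespace
            - name: TARGET_INGRESS_NAMES
              value: my-app-ingress
            - name: CRONJOB_NAME
              value: ip-whitelist-updater
            volumeMounts:
            - name: script
              mountPath: /scripts
          volumes:
          - name: script
            configMap:
              name: ip-whitelist-script
EOF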

How to view Routes pod in OpenShift

I have created a route for my service in OpenShift:
oc get routes
NAME HOST/PORT PATH SERVICES PORT
simplewebserver simpleweb.apps.devcluster.os.fly.com simplewebserver 9999
When I ran the command curl http://simpleweb.apps.devcluster.os.fly.com/world
it failed to access my web service. I suspect my route has some problem, but I could not see any route debug information.
My question is: how do I find the router pod in OpenShift, or how do I find some router activity information when I access the route?
You can check the router logs in the logs container of the router pods. In our OCP cluster I could see the router pods in the openshift-ingress namespace.
oc get pods -n openshift-ingress
NAME READY STATUS RESTARTS AGE
router-default-5f9c4b6cb4-12121a 2/2 Running 0 40h
router-default-5f9c4b6cb4-12133a 2/2 Running 0 40h
To get the logs, use the command below:
oc logs -f <router_pod_name> -c logs -n openshift-ingress
Also make sure HAProxy logs are enabled to find out which URLs are getting hit via the router.
https://access.redhat.com/solutions/3397701
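For reference, on recent OpenShift 4.x versions one way to enable HAProxy access logging (into a sidecar container named logs) is to patch the default IngressController; treat this as a sketch and check the linked solution for the exact approach for your cluster version:
$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"logging":{"access":{"destination":{"type":"Container"}}}}}'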
As there is limited information about your problem, here are a few things you can try.
Try to curl using the port:
curl -kv http://simpleweb.apps.devcluster.os.fly.com:9999
Access the logs of the pod for which the route was created. Check that the service simplewebserver is using the correct selector to route the traffic to the pod.
Do an oc describe service simplewebserver to see the selectors being used.
Check if any network policy is blocking the external traffic.
Check if you can access the target pod using that service from within the same namespace. You can do that by rsh'ing into a pod and then accessing the service using:
curl -kv http://servicename.projectname.svc.cluster.local
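For example, a quick way to verify the selector matches a running pod and to test the service from inside the namespace (the pod name is a placeholder, and port 9999 is assumed from the route output above):
$ oc get endpoints simplewebserver
$ oc rsh <any-running-pod>
$ curl -kv http://simplewebserver.<projectname>.svc.cluster.local:9999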

Get Error storing cluster namespace secret (E0025) trying to bind service to a cluster

I am following Tutorial: Creating Kubernetes clusters in IBM Bluemix Container Service but when I try to bind a service to my cluster I get:
$ bx cs cluster-service-bind kub_cluster myns cloudant
FAILED
Error storing cluster namespace secret (E0025)
Incident ID: ebdbdd0d-5d6a-4373-8e54-b7dd84733a29
I have a worker node:
$ bx cs workers kub_cluster
will list one in State 'normal' and Status 'Ready'.
I tried with different services (messageHub and Cloudant) and different names for the namespace. These are services I already have. Does anyone know how to get around this?
I was able to test this out following the same guide. I used the Tone Analyzer service. For testing I used the default namespace.
Are you able to see the namespace you are using when you list the available Kubernetes namespaces? The option "myns" will need to be a Kubernetes namespace.
$ kubectl get namespaces
This should print out the default namespace as well as other system namespaces, plus any namespaces you created.
Earlier in the guide a namespace is set up for the Docker registry; it is possible that you are using that namespace.
Other instances of this issue appear to be related to the status of the cluster. It looks like your cluster has an available node (normal and ready), so it should be able to store the secret in an available namespace.
You might be missing the specific namespace in your cluster.
You can create one by calling:
kubectl create namespace <your namespace>
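A minimal sequence, reusing the names from your question:
$ kubectl create namespace myns
$ bx cs cluster-service-bind kub_cluster myns cloudant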

Enable autoscaling on GKE cluster creation

I am trying to create an autoscaled container cluster on GKE.
When I use the "--enable-autoscaling" option (as the documentation indicates here: https://cloud.google.com/container-engine/docs/clusters/operations#create_a_cluster_with_autoscaling):
$ gcloud container clusters create mycluster --zone $GOOGLE_ZONE --num-nodes=3 --enable-autoscaling --min-nodes=2 --max-nodes=5
but the MIG (Managed Instance Group) is not displayed as 'autoscaled', as shown by both the web interface and the result of the following command:
$ gcloud compute instance-groups managed list
NAME SIZE TARGET_SIZE AUTOSCALED
gke-mycluster... 3 3 no
Why?
Then, I tried the other way indicated in the Kubernetes docs (http://kubernetes.io/docs/admin/cluster-management/#cluster-autoscaling) but got an error, apparently caused by the '=true':
$ gcloud container clusters create mytestcluster --zone=$GOOGLE_ZONE --enable-autoscaling=true --min-nodes=2 --max-nodes=5 --num-nodes=3
usage: gcloud container clusters update NAME [optional flags]
ERROR: (gcloud.container.clusters.update) argument --enable-autoscaling: ignored explicit argument 'true'
Is the doc wrong on this?
Here are my gcloud version results:
$ gcloud version
Google Cloud SDK 120.0.0
beta 2016.01.12
bq 2.0.24
bq-nix 2.0.24
core 2016.07.29
core-nix 2016.03.28
gcloud
gsutil 4.20
gsutil-nix 4.18
kubectl
kubectl-linux-x86_64 1.3.3
One last detail: the autoscaler seems to be 'on' in the description of the cluster:
$ gcloud container clusters describe mycluster | grep auto -A 3
- autoscaling:
enabled: true
maxNodeCount: 5
minNodeCount: 2
Any idea to explain this behaviour, please?
Kubernetes cluster autoscaling does not use the Managed Instance Group autoscaler. It runs a cluster-autoscaler controller on the Kubernetes master that uses Kubernetes-specific signals to scale your nodes. The code is in the autoscaler repo if you want more info.
I've also sent out a PR to fix the invalid flag usage in the autoscaling docs. Thanks for catching that!
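If you want to confirm the Kubernetes-level autoscaler is active (rather than looking at the MIG), a couple of hedged checks; the status ConfigMap and the cluster-autoscaler events only exist on newer cluster versions, so treat these as a sketch:
$ gcloud container clusters describe mycluster | grep -A 3 autoscaling
$ kubectl get events --all-namespaces | grep -i cluster-autoscaler
$ kubectl -n kube-system describe configmap cluster-autoscaler-status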

Adding permissions to a project

I am trying to follow this tutorial: https://tensorflow.github.io/serving/serving_inception
But I see this:
$ gcloud container clusters create inception-serving-cluster --num-nodes 5
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Required "container.clusters.create" permission for "projects/tensorflow-serving".
I did not see an option to add permissions to the project anywhere. How do I do this using the CLI or the UI?
EDIT:
I do have the project created
EDIT:
Just saw that it works fine from the cloud shell
Update: Your project's name is tensorflow-serving-1360, so you should be running gcloud container clusters create inception-serving-cluster --num-nodes 5 --project=tensorflow-serving-1360.
The project tensorflow-serving is not owned by you. It is the example project name used in the linked tutorial, but you need to replace it with the name of your own project as described in the line at the beginning of Part 2:
Here we assume you have created and logged in a gcloud project named
tensorflow-serving
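If you do not want to pass --project on every command, you can also set it as the default project for gcloud (the project ID below is the one from the update above):
$ gcloud projects list
$ gcloud config set project tensorflow-serving-1360
$ gcloud container clusters create inception-serving-cluster --num-nodes 5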
(Tested on 2019.04.07)
Firstly, check the list of auth accounts:
gcloud auth list
Next set the active account:
gcloud config set account <email_address_from_above_output>
Then, specify the --project parameter for the create cluster command:
gcloud container clusters create <cluster_name> --num-nodes=2 --project=<PROJECT_ID>
e.g.
gcloud container clusters create prod-myapp-cluster --num-nodes=2 --project=myapp-20394823094
Expected output:
kubeconfig entry generated for prod-myapp-cluster.
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
prod-myapp-cluster asia-south1-a 1.11.7-gke.12 35.5xx.2xx.1xx n1-standard-1 1.11.7-gke.12 2 RUNNING
Get your project name, or create a project if you have not created one already, at console.cloud.google.com
Enable the Kubernetes Engine API in the console
Run this command at your command prompt:
gcloud container clusters create bd-serving-cluster --num-nodes 5 --project=tensorflow-serving-264611 \
--zone=us-central1-f
Replace 'bd' with the name of your serving cluster and 'tensorflow-serving-264611' with the project name you created in step 1. You can choose your preferred zone or use the default 'us-central1-f'.