Istio: How to modify the h2UpgradePolicy globally?

I want to upgrade all incoming HTTP/1.1 connections to HTTP/2 in Istio. I understand how to achieve this via destination rules for a particular namespace and pod.
However, I want to upgrade all connections in the service mesh from HTTP/1.1 to HTTP/2. The documentation itself recommends this when the Istio sidecar is auto-injected:
if sidecar is installed on all pods in the mesh, then this should be set to UPGRADE.
Can I update the "istio" ConfigMap in the "istio-system" namespace?
If yes, what would the entry look like?
If not, please suggest how I can achieve this with minimal effort.

Indeed, you set it in the istio ConfigMap, and it would look like this:
apiVersion: v1
data:
  mesh: |-
    accessLogEncoding: TEXT
    accessLogFile: /dev/stdout
    accessLogFormat: ""
    h2UpgradePolicy: UPGRADE # <- here
    defaultConfig:
      concurrency: 2
      configPath: ./etc/istio/proxy
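If you prefer to change it in place, editing the ConfigMap directly should work (assuming the default installation, where the mesh settings live in the istio ConfigMap in istio-system):
$ kubectl -n istio-system edit configmap istio
Depending on your Istio version, you may need to restart the workloads (or istiod) for the proxies to pick up the new mesh config.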
Now, it is a little tricky to see that it works. I sent four requests; two of them without the h2UpgradePolicy parameter, and two of them with h2UpgradePolicy: UPGRADE. But all four of my requests looked like this from the client:
$ kubectl exec -it curler -- curl -I demo.istio
Defaulting container name to curler.
Use 'kubectl describe pod/curler -n default' to see all of the containers in this pod.
HTTP/1.1 200 OK
server: envoy
date: Mon, 22 Jun 2020 13:05:53 GMT
content-type: text/html
content-length: 612
last-modified: Tue, 26 May 2020 15:00:20 GMT
etag: "5ecd2f04-264"
accept-ranges: bytes
x-envoy-upstream-service-time: 1
I sent the requests from outside the mesh, as from within I was getting HTTP/2 by default. In my case mTLS was disabled, but that's irrelevant here.
To see that it works, you would check the logs of the downstream proxy:
...
[2020-06-22T13:03:03.942Z] "HEAD / HTTP/1.1" 200 - "-" "-" 0 0 0 0 "-" "curl/7.59.0" "a7c32d21-dcea-95da-b7c1-67c5783a1641" "demo.istio" "127.0.0.1:80" inbound|80|http|demo.istio.svc.cluster.local 127.0.0.1:33180 192.168.72.186:80 192.168.66.168:34814 outbound_.80_._.demo.istio.svc.cluster.local default
[2020-06-22T13:03:05.245Z] "HEAD / HTTP/1.1" 200 - "-" "-" 0 0 0 0 "-" "curl/7.59.0" "409b3432-365f-94fe-87cd-8a85b586b42d" "demo.istio" "127.0.0.1:80" inbound|80|http|demo.istio.svc.cluster.local 127.0.0.1:60952 192.168.72.186:80 192.168.66.168:34830 outbound_.80_._.demo.istio.svc.cluster.local default
[2020-06-22T13:03:36.732Z] "HEAD / HTTP/2" 200 - "-" "-" 0 0 0 0 "-" "curl/7.59.0" "45dd94e5-6f29-9114-b09f-bda065dfd1eb" "demo.istio" "127.0.0.1:80" inbound|80|http|demo.istio.svc.cluster.local 127.0.0.1:33180 192.168.72.186:80 192.168.66.168:35120 outbound_.80_._.demo.istio.svc.cluster.local default
[2020-06-22T13:03:38.743Z] "HEAD / HTTP/2" 200 - "-" "-" 0 0 0 0 "-" "curl/7.59.0" "79e72286-f247-9ed0-b510-2819a886c7f9" "demo.istio" "127.0.0.1:80" inbound|80|http|demo.istio.svc.cluster.local 127.0.0.1:33180 192.168.72.186:80 192.168.66.168:35120 outbound_.80_._.demo.istio.svc.cluster.local default
VERY IMPORTANT: to make it work, the Service in front of the downstream peer must have a named port, and it must be called http:
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  ports:
  - name: http # <- this parameter is mandatory to upgrade to HTTP2
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
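For completeness, the per-service alternative mentioned in the question is a DestinationRule carrying the same setting under trafficPolicy. A minimal sketch (the name and host are just examples, adjust them to your service):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: demo-h2-upgrade
spec:
  host: demo.default.svc.cluster.local # example host
  trafficPolicy:
    connectionPool:
      http:
        h2UpgradePolicy: UPGRADE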

Related

Ingress rewrite string is being ignored

The requirement is to access the burger service in https://meals.food.com/burger2.
The context path within the app is /burger.
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /burger/$2
spec:
  rules:
  - host: meals.food.com
    http:
      paths:
      - backend:
          service:
            name: burger
            port:
              number: 80
        path: /burger2(/|$)(.*)
        pathType: Prefix
Upon checking the ingress controller logs:
[05/Jan/2022:13:54:11 +0000] "GET // HTTP/1.1" 304 0 "-" "Mozilla/5.0
(X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/96.0.4664.110 Safari/537.36" 957 0.002 [anotherservice-80] []
x.x.x.x:80 0 0.002 304 230200x023
Is my ingress config correct?
My suspicion is that something is altering the request between the browser and the ingress controller.
Your ingress config looks OK; I do not see any errors in it. It will act as follows:
The example address meals.food.com/burger2/blah-blah-blah will be rewritten to meals.food.com/burger/blah-blah-blah. If that was your intention, the config is fine.
However, you got an HTTP 304 code.
The HTTP 304 Not Modified client redirection response code indicates that there is no need to retransmit the requested resources. It is an implicit redirection to a cached resource. This happens when the request method is safe, like a GET or a HEAD request, or when the request is conditional and uses a If-None-Match or a If-Modified-Since header.
The equivalent 200 OK response would have included the headers Cache-Control, Content-Location, Date, ETag, Expires, and Vary.
In other words
When the browser has a cached copy but does not know whether it is still the latest version of the resource, it sends a conditional validation request, communicating the last modification date (If-Modified-Since) or entity tag (If-None-Match) to the server.
The server then checks these headers against the current resource. If nothing has changed, it sends back HTTP 304 and the browser uses its cached copy. If the file has been modified, the server responds with HTTP 200 and the full resource, and the browser saves a new copy.
In your case it looks as if someone tried to download the same (unchanged) resource multiple times and therefore got the code 304. If so, everything is fine.
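If you want to rule caching out while testing the rewrite, a plain curl request (which sends no conditional headers, so it cannot get a 304) will show what the backend actually returns; the sub-path below is just an example:
$ curl -v https://meals.food.com/burger2/blah-blah-blah
A 200 here, together with the rewritten path in the backend logs, would confirm the ingress is doing its job.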

Nginx Ingress Controller trailing slash with HTTPS redirect

I'm trying to redirect requests from HTTP to HTTPS using an Ingress with Nginx Ingress Controller. My app is written in Django v3.0.7, my Nginx Controller is v0.46.0 and k8s v1.19.8.
I have the following ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: INGRESS-NAME
  namespace: INGRESS-NS
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /api/v1/$1/
    cert-manager.io/cluster-issuer: "ISSUER-NAME"
    nginx.ingress.kubernetes.io/permanent-redirect-code: '308'
spec:
  tls:
  ...
  rules:
  - host: MY-DOMAIN
    http:
      paths:
      - path: /api/v1/?(.*)
        pathType: Prefix
        backend:
          service:
            name: SVC-NAME
            port:
              number: SVC-PORT
Requests to https://.../api/v1/get-token/ raise this error:
[05/May/2021:20:39:49 +0000] "POST /api/v1/get-token// HTTP/1.1" 404 => the POST gets an extra / at the end. But the same request over HTTP, or to https://.../api/v1/get-token (no trailing /), works fine.
If I remove the
annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /api/v1/$1/
the redirect removes the trailing / and turns POST into GET for every HTTP POST request, causing a 405 Method Not Allowed, as shown in the Nginx logs:
[05/May/2021:20:54:52 +0000] "POST /api/v1/get-token HTTP/1.1" 308 164
[05/May/2021:20:54:53 +0000] "POST /api/v1/get-token HTTP/1.1" 301 0
[05/May/2021:20:54:53 +0000] "GET /api/v1/get-token/ HTTP/1.1" 405
but an HTTP POST request works fine with http://.../api/v1/get-token// (two trailing slashes).
Is there a way to solve this problem? The 308 HTTP -> HTTPS redirect is important, so I can't remove it, but is there a way to force requests to have one, and only one, trailing /? Thanks.
There are two problems here
Problem #1
annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /api/v1/$1/
causes requests sent to https://.../api/v1/get-token/ to end with HTTP 404 Not Found, but https://.../api/v1/get-token works fine.
Why?
Because the trailing / at the end of the nginx.ingress.kubernetes.io/rewrite-target: /api/v1/$1/ rewrite is appended to a capture group that already ends in /, so the URL becomes /api/v1/get-token//, which leads to a resource that does not exist.
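Tracing one request through the rewrite makes this concrete (assuming the capture group grabs everything after the optional slash):
request path:    /api/v1/get-token/
path regex:      /api/v1/?(.*)   -> $1 = "get-token/"
rewrite-target:  /api/v1/$1/     -> /api/v1/get-token//   (404)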
What to do about it?
Change path key to /api/v1/?(.*\b)/. I'm not 100% sure it will work, but it's worth a try.
or
Remove trailing / from rewrite.
Now, doing that causes problem #2.
Problem #2
Requests to https://.../api/v1/get-token end with 405 Method Not Allowed.
Why?
The first redirection works fine (HTTP 308); however, the request is then redirected again with HTTP 301.
MDN article on HTTP 301 states:
Even if the specification requires the method (and the body) not to be altered when the redirection is performed, not all user-agents align with it - you can still find this type of bugged software out there. It is therefore recommended to use the 301 code only as a response for GET or HEAD methods and to use the 308 Permanent Redirect for POST methods instead, as the method change is explicitly prohibited with this status.
Basically HTTP 301 causes POST to become GET, and GET is not allowed, hence HTTP 405.
What to do about it?
Make sure not to redirect requests twice, especially with HTTP 301.
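Putting the first suggestion from Problem #1 back into the manifest, the relevant part might look like the sketch below (untested, as noted above; the annotations and backend not shown are unchanged from your original Ingress):
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /api/v1/$1/
spec:
  rules:
  - host: MY-DOMAIN
    http:
      paths:
      - path: /api/v1/?(.*\b)/ # capture stops before the final /, so the rewrite adds exactly one trailing slash
        pathType: Prefix
The idea is that the rewritten URL always ends in exactly one /, so the second redirect (the HTTP 301 that appends a slash) never has to fire.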

Varnish vcl_backend_response not called

I came across inconsistencies with Varnish 4.x documentation.
According to the documentation, vcl_backend_response will be called each time an object is fetched from the backend and the backend doesn't return an error (and is alive and healthy).
However, I noticed that many of the 'fetch' requests (resulting from misses) are not going through this subroutine at all.
Here is an example output:
- VCL_call HASH
- VCL_return lookup
- VCL_call MISS
- VCL_return fetch
- Link bereq 294915 fetch
- Timestamp Fetch: 1504198046.101306 0.003644 0.003644
- RespProtocol HTTP/1.1
- RespStatus 200
- RespReason OK
- RespHeader Server: nginx/1.10.3 (Ubuntu)
- RespHeader Date: Thu, 31 Aug 2017 16:47:26 GMT
- RespHeader Vary: Accept-Encoding
- RespHeader Last-Modified: Thu, 31 Aug 2017 16:42:58 GMT
- RespHeader Expires: Thu, 31 Aug 2017 16:47:26 GMT
- RespHeader Cache-Control: public, max-age=90, s-maxage=332
- RespHeader Pragma: cache
- RespHeader X-Lift-Version: 2.6.3
- RespHeader X-Frame-Options: SAMEORIGIN
- RespHeader Content-Encoding: gzip
- RespHeader Content-Type: application/json; charset=utf-8
- RespHeader X-Varnish: 294914
- RespHeader Age: 0
- RespHeader Via: 1.1 varnish-v4
- VCL_call DELIVER
- RespHeader grace: none
- VCL_return deliver
as opposed to a flow that adheres to the documentation:
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 22 boot.default 127.0.0.1 9919 127.0.0.1 22536
- Timestamp Bereq: 1504198040.603601 0.000222 0.000222
- Timestamp Beresp: 1504198040.659723 0.056345 0.056123
- BerespProtocol HTTP/1.1
- BerespStatus 200
- BerespReason OK
- BerespHeader Server: nginx/1.10.3 (Ubuntu)
- BerespHeader Date: Thu, 31 Aug 2017 16:47:20 GMT
- BerespHeader Content-Type: text/html;charset=utf-8
- BerespHeader Transfer-Encoding: chunked
- BerespHeader Connection: keep-alive
- BerespHeader Vary: Accept-Encoding
- BerespHeader Last-Modified: Thu, 31 Aug 2017 16:42:58 GMT
- BerespHeader Expires: Thu, 31 Aug 2017 16:47:20 GMT
- BerespHeader Cache-Control: public, max-age=21600, s-maxage=21600
- BerespHeader Pragma: cache
- BerespHeader X-Lift-Version: 2.6.3
- BerespHeader X-Frame-Options: SAMEORIGIN
- BerespHeader Content-Encoding: gzip
- TTL RFC 21600 10 -1 1504198041 1504198041 1504198040 1504198040 21600
- VCL_call BACKEND_RESPONSE
So, according to the Varnish log, when there is a miss and the object is fetched from the backend, vcl_backend_response is not invoked.
Unless I'm missing something - this contradicts the documentation.
There are no inconsistencies in the documentation. What you see is the normal behaviour since Varnish 4.0. During a cache miss, you'll see (at least) two transactions in varnishlog:
(1) the one handling the client request (usual flow is vcl_recv → vcl_hash → vcl_miss → vcl_deliver), i.e. the first log excerpt in your question; and
(2) the one handling the request to the backend side (usual flow is vcl_backend_fetch → vcl_backend_response), i.e. the second log excerpt in your question.
That's why you don't see references to vcl_backend_response in the client transaction.
Obviously both transactions are connected. At some point the client transaction creates and waits for completion of the backend transaction. That's shown in the log of the client transaction:
- ...
- Link bereq 294915 fetch
- ...
That means a backend transaction was created (VXID 294915) in order to fetch an object from the origin.
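If you want to see both sides of a miss together, grouping the log by request will nest the backend transaction (with its VCL_call BACKEND_RESPONSE record) under the client transaction that spawned it; for example:
varnishlog -g request
With the default vxid grouping, the two transactions are only connected through the Link record shown above.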

Google Compute Engine health checks failing

I have a node.js app on two VM instances that I'm trying to load balance with network load balancing. To test that my servers are up and serving, I point the health check at '/health.txt' on my app's internal listening port. The two instances are configured identically, with the same tags, firewall rules, etc., but the health check fails continuously for one of them. I can perform the check with curl from my internal network or from outside and it works fine on both instances, yet the network load balancer always reports one instance as down.
Using ngrep on the healthy instance, I see:
T 169.254.169.254:65374 -> my.pub.ip.addr:3000 [S]
#
T my.pub.ip.addr:3000 -> 169.254.169.254:65374 [AS]
#
T 169.254.169.254:65374 -> my.pub.ip.addr:3000 [A]
#
T 169.254.169.254:65374 -> my.pub.ip.addr:3000 [AP]
GET /health.txt HTTP/1.1.
Host: my.pub.ip.addr:3000.
.
#
T my.pub.ip.addr:3000 -> 169.254.169.254:65374 [A]
#
T my.pub.ip.addr:3000 -> 169.254.169.254:65374 [AP]
HTTP/1.1 200 OK.
X-Powered-By: NitroPCR.
Accept-Ranges: bytes.
Date: Fri, 14 Nov 2014 20:00:40 GMT.
Cache-Control: public, max-age=86400.
Last-Modified: Thu, 24 Jul 2014 17:58:46 GMT.
ETag: W/"2198506076".
Content-Type: text/plain; charset=UTF-8.
Content-Length: 13.
Connection: keep-alive.
.
#
T 169.254.169.254:65374 -> my.pub.ip.addr:3000 [AR]
But on the instance that GCE claims is unhealthy, I see this:
T 169.254.169.254:61179 -> my.pub.ip.addr:3000 [S]
#
T 169.254.169.254:61179 -> my.pub.ip.addr:3000 [S]
#
T 169.254.169.254:61180 -> my.pub.ip.addr:3000 [S]
#
T 169.254.169.254:61180 -> my.pub.ip.addr:3000 [S]
#
T 169.254.169.254:61180 -> my.pub.ip.addr:3000 [S]
But if I curl the same file from my healthy instance to the 'unhealthy' instance, the 'unhealthy' instance responds fine.
I got this working again after making contact with the Google Compute Engine team. There is a service process on a GCE VM that needs to run at boot and keep running while the VM is alive. The process is named google-address-manager, and it should run at runlevels 0-6. For some reason this service had stopped and would not start when one of my VMs boots/reboots. Starting the service manually worked. Here are the steps we went through to determine what was wrong (this is a Debian VM):
sudo ip route list table all
This will display your route table. In the table, there should be a route to your Load Balancer Public IP:
local lb.pub.ip.addr dev eth0 table local proto 66 scope host
If there is not, check that google-address-manager is running:
sudo service google-address-manager status
If it is not running, start it:
sudo service google-address-manager start
If it starts ok, check your route table, and you should now have a route to your load balancer IP. You can also manually add this route:
sudo /sbin/ip route add to local lb.pub.ip.addr/32 dev eth0 proto 66
We have still not resolved why the address manager stopped and does not start on boot, but at least the LB pool is healthy.
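As a follow-up, on a sysvinit-based Debian image it may also be worth checking that the boot links for the service are still in place; this is only a guess at a mitigation, since the root cause was never found:
sudo update-rc.d google-address-manager defaults   # recreate the rc.d links if they are missing
sudo update-rc.d google-address-manager enable     # re-enable them if they were disabled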

Unable to tunnel to vmc mysql service due to memory limitation

I am unable to tunnel to my free hosted instance of a Rails app on Cloud Foundry infrastructure.
When I run 'vmc tunnel mysql-service', I get the below:
1: none
2: mysql
3: mysqldump
Which client would you like to start?> 2
Opening tunnel on port 10000... FAILED
CFoundry::AccountNotEnoughMemory: 600: Not enough memory capacity, you're allowed: 2048M
For more information, see ~/.vmc/crash
Checking the ~/.vmc/crash logs I see:
Time of crash:
2013-03-13 18:16:54 -0400
CFoundry::AccountNotEnoughMemory: 600: Not enough memory capacity, you're allowed: 2048M
<<<
REQUEST: PUT https://api.cloudfoundry.com/apps/caldecott
REQUEST_HEADERS:
Authorization : bearer eyJhbGciOiJSUzI1NiJ9.eyJleHAiOjEzNjM4MTc3OTgsInVzZXJfbmFtZSI6ImhzdWVpbmczQGdtYWlsLmNvbSIsInNjb3BlIjpbImNsb3VkX2NvbnRyb2xsZXIucmVhZCIsIm9wZW5pZCIsInBhc3N3b3JkLndyaXRlIl0sImVtYWlsIjoiaHN1ZWluZzNAZ21haWwuY29tIiwiYXVkIjpbIm9wZW5pZCIsImNsb3VkX2NvbnRyb2xsZXIiLCJwYXNzd29yZCJdLCJqdGkiOiJkMzZjNDI3MS02ZDJkLTRjN2EtOThmYS1kNzc2MjhiZDFiNmMiLCJ1c2VyX2lkIjoiODY0OWZkMzEtY2JiNy00N2YyLTkyNmItODM5Y2MzNWFlMTlmIiwiY2xpZW50X2lkIjoidm1jIn0.Lt1Bw7mBP55Hi9MIPTn90s0RXkJcJwGZXZcqDep4BBnnwjrAOAPQPGlIwBA-Ovy9K5BazMXqnQCOv8kxpK8o4wo3vG6RAJPvF7p76JgZDq0C_n_PUV1LaxGrldnpc2PLawR0FHHChb7tKCJP4cf26lK8A8vg5GEwi8HWO5OJCERI-3CKKiGJB5mVj2rWGmE39-ihAWmT5LpS5jAEZ-XVvo4VDEKknJ8SQC6693FzdCZ2AJBHkAgNxRoCsBtvkxOgKkspI-IkcaMZx884BT24cGbseZ5XY3bj6ZjAb499AfbIFe97Hme4axtpWo8qn1grkrJxyI3gmYAVMHVgo1M1IQ
Content-Length : 310
Content-Type : application/json
REQUEST_BODY: {"name":"caldecott","instances":1,"state":"STARTED","staging":{"model":"sinatra","stack":"ruby19"},"resources":{"memory":64,"disk":2048,"fds":256},"env":["CALDECOTT_AUTH=43ae7176-67f6-41ac-8cff-bf21b4249a49"],"uris":["caldecott-d9149.cloudfoundry.com"],"services":["mysql-service"],"console":null,"debug":null}
RESPONSE: [403]
RESPONSE_HEADERS:
cache-control : no-cache
connection : keep-alive
content-type : application/json; charset=utf-8
date : Wed, 13 Mar 2013 22:16:54 GMT
keep-alive : timeout=20
server : nginx
transfer-encoding : chunked
x-ua-compatible : IE=Edge,chrome=1
RESPONSE_BODY:
{
"code": 600,
"description": "Not enough memory capacity, you're allowed: 2048M"
}
>
cfoundry-0.5.2/lib/cfoundry/baseclient.rb:156:in `handle_error_response'
cfoundry-0.5.2/lib/cfoundry/baseclient.rb:135:in `handle_response'
cfoundry-0.5.2/lib/cfoundry/baseclient.rb:85:in `request'
cfoundry-0.5.2/lib/cfoundry/baseclient.rb:74:in `put'
cfoundry-0.5.2/lib/cfoundry/v1/model_magic.rb:55:in `block (2 levels) in define_client_methods'
cfoundry-0.5.2/lib/cfoundry/v1/model.rb:91:in `update!'
cfoundry-0.5.2/lib/cfoundry/v1/app.rb:131:in `update!'
cfoundry-0.5.2/lib/cfoundry/v1/app.rb:121:in `start!'
tunnel-vmc-plugin-0.2.2/lib/tunnel-vmc-plugin/tunnel.rb:173:in `start_helper'
tunnel-vmc-plugin-0.2.2/lib/tunnel-vmc-plugin/tunnel.rb:89:in `create_helper'
tunnel-vmc-plugin-0.2.2/lib/tunnel-vmc-plugin/tunnel.rb:28:in `open!'
tunnel-vmc-plugin-0.2.2/lib/tunnel-vmc-plugin/plugin.rb:41:in `block in tunnel'
interact-0.5.2/lib/interact/progress.rb:98:in `with_progress'
tunnel-vmc-plugin-0.2.2/lib/tunnel-vmc-plugin/plugin.rb:40:in `tunnel'
mothership-0.5.1/lib/mothership/base.rb:66:in `run'
mothership-0.5.1/lib/mothership/command.rb:72:in `block in invoke'
What actions should I take to resolve this?
To offer further background, below are a few details about the environment my app is running in:
vmc stats logoff
Using manifest file manifest.yml
Getting stats for logoff... OK
instance cpu memory disk
0 0.1% 74.2K of 2G 63.3M of 2G
vmc env logoff
Using manifest file manifest.yml
Getting env for logoff... OK
vmc services
Getting services... OK
name service version
mysql-service mysql 5.1
This is because you have used all of your allotted 2 GB of RAM. To tunnel to a service, vmc needs to deploy a small Ruby application called Caldecott, which uses 64 MB. So, in short, you need to free up 64 MB!