grpc server and gateway shutdown order - grpc-go

My gRPC service uses grpc-gateway to serve HTTP requests.
To make the service shut down gracefully, is there an order I need to pay attention to? I.e., is the shutdown order
A. gRPC service -> gateway
B. gateway -> gRPC service
The only article/document I can find is here, which recommends A but doesn't explain why. My own reasoning for A would be that we need the gateway to stay alive to route the outstanding gRPC requests, but that isn't supported by any documentation.
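The reasoning for order A can be sketched with illustrative stubs (these are not the real grpc-go or net/http APIs, just placeholders for the ordering):

```python
# Sketch of shutdown order A (gRPC server first, then gateway), following
# the question's own reasoning: a graceful stop waits for outstanding RPCs,
# and the gateway must stay up while those in-flight requests drain.
# GrpcServerStub / GatewayStub are illustrative, not real library types.

class GrpcServerStub:
    def __init__(self, log):
        self.log = log

    def graceful_stop(self):
        # analogous to grpc.Server.GracefulStop in grpc-go: stop accepting
        # new RPCs, block until outstanding RPCs (still routed through the
        # live gateway) have finished
        self.log.append("grpc: drained and stopped")


class GatewayStub:
    def __init__(self, log):
        self.log = log

    def shutdown(self):
        # analogous to http.Server.Shutdown on the grpc-gateway mux
        self.log.append("gateway: stopped")


def shutdown_in_order_a(grpc_server, gateway):
    grpc_server.graceful_stop()  # 1. drain outstanding gRPC requests first
    gateway.shutdown()           # 2. then tear down the HTTP gateway


log = []
shutdown_in_order_a(GrpcServerStub(log), GatewayStub(log))
print(log)
```

The key property is simply that the gateway outlives the gRPC server's drain phase.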

Related

Exposing a Postgres / Patroni db on OpenShift to the outside world

I am planning to run an SSIS ETL job that has a SQL Server as the SOURCE db, on a physical on-premise machine, and the DESTINATION db (Postgres/Patroni) running on the OpenShift platform as pods/containers. The issue I am facing now is that a DB hosted on OpenShift cannot be exposed via a TCP port. As per a few articles online, OpenShift only allows HTTP traffic via "routes". Is this assumption right? If yes, how in the real world do people run ETL, bulk data transfer, or migration to a DB on OpenShift from outside? I am worried about using HTTP since I feel it's not efficient for ETL. A few folks mentioned using OC PORT FORWARDING, but for a production app, how would OpenShift port forwarding be stable? Please share your comments.
In a production environment it is questionable whether you want to expose your database to the public internet. Normally you would rather go with a site-to-site VPN.
That aside, it is correct that OCP uses routes for most use cases, which expose an HTTP(S) endpoint. If you need plain TCP, however, you can create a service of type LoadBalancer.
The regular setup with a route is stacked like
route --> service --> pods, where the service is commonly of type ClusterIP.
With a service of type LoadBalancer, you eliminate the route and directly expose a TCP service.
If you run on a public cloud, OCP takes care of the leftover requirements for you, namely creating a load balancer with your cloud provider. In the case of AWS, for example, OCP would create an ELB (Elastic Load Balancer) for you.
You can find more information in the documentation.
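A LoadBalancer service for the Patroni primary might look roughly like this (service name and labels are illustrative; check your own Patroni pod labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: patroni-external     # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: patroni             # must match your Patroni pod labels
    role: master             # Patroni labels the current primary
  ports:
    - name: postgres
      port: 5432
      targetPort: 5432
      protocol: TCP
```

On a supported public cloud, OCP then provisions the external load balancer for this service automatically.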

Retries from Ingressgateway

Istio has VirtualService resources for pods with istio-proxy sidecars, but what about the istio-ingressgateway pod itself? How do I enable retries from the istio-ingressgateway pod?
The use case is that I am seeing 503 errors during downscaling and want the ingressgateway to retry for a specific destination.
https://istio.io/docs/concepts/traffic-management/
Basically, the Istio mesh's ingress communication model runs from an external load balancer through istio-ingressgateway to the logical traffic-management CRD components, which define network routes, authentication/authorization aspects, and service-to-service interactions.
The Istio Gateway, backed by the edge istio-ingressgateway service, describes the essential information about ports and protocols for HTTP/HTTPS/TCP connections entering the service mesh and how further routing scenarios are managed; the istio-ingressgateway itself does not decide on traffic flow or target application endpoints.
The retries concept in Istio is enclosed in routing rules and composed within the VirtualService resource, which defines how network requests are re-attempted and how long each attempt may take when the initial call fails.
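So to get retries on traffic entering through the ingressgateway, you bind a VirtualService with a retry policy to the gateway. A sketch (host, gateway, and service names are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service            # illustrative name
spec:
  hosts:
    - my-service
  gateways:
    - my-gateway              # bind this route to the ingress gateway
  http:
    - route:
        - destination:
            host: my-service
      retries:
        attempts: 3
        perTryTimeout: 2s
        retryOn: gateway-error,connect-failure,refused-stream
```

The `retryOn` conditions cover the upstream failures typically seen as 503s during downscaling.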
When the istio-ingressgateway Pod starts, it retrieves the discovery data about Envoy sidecars from Pilot, approaching the desired state through pilot-agent specific flags.
However, I couldn't reproduce the reported 503 error while down-scaling istio-ingressgateway replicas in Istio 1.3.

How to make ELB pass protocol to node.js process (Elastic Beanstalk)

I have ELB balancing TCP traffic to my Node.js processes. When ELB is balancing TCP connections it does not send the X-Forwarded-Proto header like it does with http connections. But I still need to know if the connection is using SSL/TLS so I can respond with a redirect from my Node process if it is not a secure connection.
Is there a way to make ELB send this header when balancing TCP connections?
Thanks
You can configure the proxy protocol for your ELB to get connection-related information. In the case of HTTP, the ELB adds headers carrying the client information; in the case of TCP, however, AWS ELB simply passes the bytes through without any modification. This causes the back-end server to lose the client connection information, as is happening in your case.
To enable the proxy protocol for your ELB, you will have to do it via the API; there is currently no way to do it via the UI.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
The above doc is a step-by-step guide on how to do this; I don't want to paste the same here as that information might change over time.
EDIT:
As it turns out, Amazon implements only Version 1 of the proxy protocol, which does not give away SSL information. It does, however, give the port number the client requested, so a convention can be adopted along the lines of "if the request came in on port 443, then it was SSL". I don't like it, as it is indirect, requires hardcoding, and needs coordination between devops and developers... but it seems to be the only way for now. Let's hope AWS ELB starts supporting Version 2 of the proxy protocol, which does carry SSL info, soon.
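The port-based workaround can be sketched like this: parse the proxy protocol v1 line the ELB prepends to the TCP stream and treat destination port 443 as the (indirect) SSL signal. Function names here are illustrative:

```python
# Proxy protocol v1 prepends a single text line to the connection, e.g.
#   PROXY TCP4 <client-ip> <proxy-ip> <client-port> <dest-port>\r\n
# Version 1 carries no TLS info, so the destination port is the only
# (hardcoded, indirect) hint that the original request was SSL.

def parse_proxy_v1(line: str) -> dict:
    parts = line.strip().split()
    if len(parts) != 6 or parts[0] != "PROXY":
        raise ValueError("not a proxy protocol v1 header")
    _, proto, src_ip, dst_ip, src_port, dst_port = parts
    return {
        "proto": proto,
        "src": (src_ip, int(src_port)),
        "dst": (dst_ip, int(dst_port)),
    }

def looks_like_tls(header: dict) -> bool:
    # the hardcoded port check the answer above complains about
    return header["dst"][1] == 443

hdr = parse_proxy_v1("PROXY TCP4 203.0.113.7 10.0.0.1 54321 443\r\n")
print(looks_like_tls(hdr))  # True
```

In the Node process you would read this line off the socket before the HTTP payload and redirect when the check fails.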

Howto install the api gateway client certificate into Elastic beanstalk

I have a scalable application on Elastic Beanstalk running on Tomcat. I read that in front of Tomcat there is an Apache server acting as a reverse proxy. I guess I have to install the client certificate on Apache and configure it to accept only requests authenticated with this certificate, but I have no idea how to do that.
Can you help me?
After much research I found a solution. Given how difficult it was to discover, I want to share my experience with you.
My platform on Elastic Beanstalk is Tomcat 8 with a load balancer.
To use the client certificate (at the time of writing) you have to terminate HTTPS on the instance:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance.html
then
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-tomcat.html
I used this configuration to use both client and server certificates (it seems it doesn't work with the client certificate alone):
SSLEngine on
SSLCertificateFile "/etc/pki/tls/certs/server.crt"
SSLCertificateKeyFile "/etc/pki/tls/certs/server.key"
SSLCertificateChainFile "/etc/pki/tls/certs/GandiStandardSSLCA2.pem"
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLProtocol All -SSLv2 -SSLv3
SSLHonorCipherOrder On
SSLVerifyClient require
SSLVerifyDepth 1
SSLCACertificateFile "/etc/pki/tls/certs/client.crt"
And one last thing: API Gateway doesn't work with a self-signed certificate (thanks to Client certificates with AWS API Gateway), so you have to buy one from a CA.
SSLCACertificateFile "/etc/pki/tls/certs/client.crt"
This is where you should point to the API Gateway-provided client-side certificate.
You might have to configure the ELB's listener for vanilla TCP on the same port instead of HTTPS, i.e. TCP pass-through at your ELB; your instance then needs to handle the SSL itself in order to accept only requests that present a valid client certificate.

Rabbitmq listen to UDP connection

Is there a way to have RabbitMQ listen for UDP connections and put those packets into some sort of default queue which can then be pulled from by a standard client? Would ActiveMQ or ZeroMQ be better for this?
Consider using a simple proxy in front to receive incoming UDP packets and send them off to RabbitMQ via AMQP. E.g. in Python you can set up a UDP server and then use the Pika AMQP library to speak to your RabbitMQ server.
Cheers!
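The receive side of that proxy is just a stdlib UDP socket; a minimal sketch, with the actual AMQP publish stubbed out (a real version would call something like Pika's `channel.basic_publish` there):

```python
import socket

def publish(body: bytes) -> None:
    # Placeholder for an AMQP publish, e.g. with Pika:
    #   channel.basic_publish(exchange="udp", routing_key="", body=body)
    # Broker wiring is left out of this sketch.
    print("forwarding", len(body), "bytes")

def recv_and_forward(sock: socket.socket) -> bytes:
    """Receive one UDP datagram and hand it to publish()."""
    data, _addr = sock.recvfrom(65535)  # max UDP payload
    publish(data)
    return data

# usage: bind a datagram socket and loop over recv_and_forward(sock)
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("0.0.0.0", 9999))
#   while True:
#       recv_and_forward(sock)
```

Each datagram becomes one AMQP message, which a standard client can then pull from the bound queue.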
Someone also built a udp-exchange plugin for RabbitMQ.
I haven't personally used it, but it seems like it would do the job without you having to write your own UDP-to-AMQP forwarder:
https://github.com/tonyg/udp-exchange
Here's the excerpt:
Extends RabbitMQ Server with support for a new experimental exchange type, x-udp.
Each created x-udp exchange listens on a specified UDP port for incoming messages, and relays them on to the queues bound to the exchange. It also takes messages published to the exchange and relays them on to a specified IP address and UDP port.