How to implement service discovery in AWS ECS?

I'm planning to use Prometheus in an ECS cluster for monitoring, but it seems that Prometheus does not support ECS service discovery natively, unlike EC2.
I've been searching but haven't found enough information. I would really appreciate it if anyone could share any information. Thanks in advance.

I'm not quite sure if I understood the problem correctly.
You can create a private hosted zone in Route 53 and associate it with a particular VPC that has DNS resolution and DNS hostnames enabled.
With this, you can launch your ECS instances in that VPC and create a service whose task has a name such as prometheus.local, or your application (app.local) that you want to monitor from Prometheus. With that kind of setup, you can use http://app.local:port to pull monitoring information by name instead of by private IP.
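For illustration, here is a rough sketch of that setup with the AWS CLI; the VPC ID, hosted zone ID, record name, target IP and port below are all placeholders you would adapt to your environment:

```bash
# Create a private hosted zone "local" and associate it with the VPC
# (placeholder VPC ID and region).
aws route53 create-hosted-zone \
  --name local \
  --vpc VPCRegion=eu-west-1,VPCId=vpc-0123456789abcdef0 \
  --caller-reference "ecs-sd-$(date +%s)"

# Point a record such as app.local at the task / container instance IP.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0EXAMPLE \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
    "Name":"app.local","Type":"A","TTL":60,
    "ResourceRecords":[{"Value":"10.0.1.23"}]}}]}'

# From inside the VPC, Prometheus (or anything else) can now reach the target by name.
curl http://app.local:8080/metrics
```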
Ref: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-considerations.html
Hope this helps.
Thanks,
Sridhar

Related

Exposing a Postgres / Patroni db on OpenShift to the outside world

I am planning to run an SSIS ETL job which has a SQL Server as the SOURCE db; this is on a physical on-premises machine, and the DESTINATION db (Postgres/Patroni) is running on the OpenShift platform as pods/containers. The issue I am facing now is that the DB hosted on OpenShift cannot be exposed via a TCP port. As per a few articles online, OpenShift only allows HTTP traffic via "routes". Is this assumption right? If yes, how in the real world do people run ETL, bulk data transfer, or migration to a DB on OpenShift from outside? I am worried about using HTTP since I feel it's not efficient for ETL. A few folks mentioned using oc port forwarding, but for a production app, how would OpenShift port forwarding be stable? Please share your comments.
In a production environment it is a little questionable whether you want to expose your database to the public internet. Normally you would rather go with a site-to-site VPN.
That aside, it is correct that OCP uses routes for most use cases, which then expose an HTTP(S) endpoint. If you need plain TCP, however, you can create a service of type LoadBalancer.
The regular setup with a route is stacked like
route --> service --> pods, where the service is commonly of type ClusterIP.
With a service of type LoadBalancer, you eliminate the route and directly expose a TCP service.
If you run on a public cloud, OCP takes care of the leftover requirements for you, namely creating a load balancer with your cloud provider. In the case of AWS, for example, OCP would create an ELB (Elastic Load Balancer) for you.
You can find more information in the documentation.
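As a minimal sketch, such a LoadBalancer service could look like the following; the service name, namespace and selector labels are placeholders you would match to your Patroni deployment:

```bash
# Expose the Patroni/Postgres pods over plain TCP via a LoadBalancer service.
oc apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: patroni-external
  namespace: my-db-namespace
spec:
  type: LoadBalancer
  selector:
    app: patroni          # must match your Postgres/Patroni pod labels
  ports:
    - name: postgres
      port: 5432
      targetPort: 5432
EOF

# The cloud provider then provisions the load balancer; its address shows up here:
oc get svc patroni-external -n my-db-namespace
```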

Server vs Serverless for REST API

I have a REST API that I was thinking about deploying using a Serverless model. My data is in an AWS RDS server that needs to be put in a VPC for security reasons. To allow a Lambda to access the RDS, I need to configure the lambda to be in a VPC, but this makes cold starts an average of 8 seconds longer according to articles I read.
The REST API is for a website so an 8 second page load is not acceptable.
Is there any way I can use a serverless model to implement my REST API, or should I just use a regular EC2 server?
Unfortunately, this is not yet released, but let's hope it is a matter of weeks or months now. At re:Invent 2018, AWS introduced Remote NAT for Lambda, to be available this year (2019).
For now you have to either expose RDS to the outside (directly or through a tunnel), which is a security issue, or create Lambda ENIs in the VPC.
In order to keep your Lambdas "warm", you can create a scheduled "ping" mechanism. You can find an example of this pattern in Yan Cui's article.
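As a rough sketch of that scheduled ping with the AWS CLI (the function name, account ID, region and rule name below are placeholders):

```bash
# Fire an event every 5 minutes.
aws events put-rule \
  --name keep-warm \
  --schedule-expression "rate(5 minutes)"

# Allow CloudWatch Events to invoke the function.
aws lambda add-permission \
  --function-name my-api-function \
  --statement-id keep-warm \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:eu-west-1:123456789012:rule/keep-warm

# Point the rule at the function; the input payload lets the handler
# recognise a warm-up call and return early.
aws events put-targets \
  --rule keep-warm \
  --targets '[{"Id":"1","Arn":"arn:aws:lambda:eu-west-1:123456789012:function:my-api-function","Input":"{\"warmup\": true}"}]'
```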

AWS authentication to Vault

We're using Vault to store our application secrets and config. When our app (Java) starts, a script does all the magic of getting the secrets and config from Vault and storing them locally for the application to read. The script authenticates to Vault using an AWS IAM role.
Now we're getting to a situation where the application needs to read secrets from Vault on the go, not just on startup. For that purpose, I need it to be able to do the authentication on pretty much every request. It's worth mentioning that the app might also run on a developer machine, so whatever authentication is used, it needs to work on the EC2 instance as well as in the local development environment.
I'm currently leaning towards creating a username and password, store them in Vault for the application to get when starting up. Then the application could use that username/password to authenticate to Vault when it needs.
I'm also considering AppRole, but can't really see any real advantage to it over simple user/password setup.
What's the best solution for this use case? Any advice would be highly appreciated!
Thanks,
Yosi
The AWS recommendation for storing secrets is to use AWS Systems Manager Parameter Store.
Software running on an Amazon EC2 instance with an assigned Role can use those credentials to access the Parameter Store to retrieve application secrets.
The Parameter Store can also be used outside of EC2, but some AWS credentials will still be needed to authenticate to the Parameter Store.
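For illustration, a minimal sketch with the AWS CLI; the parameter name is a placeholder, and the same calls (or the equivalent SDK calls) work on a developer machine with ordinary AWS credentials:

```bash
# Store the secret once, encrypted as a SecureString.
aws ssm put-parameter \
  --name /myapp/prod/db-password \
  --type SecureString \
  --value 's3cr3t'

# At runtime, the instance role (or local AWS credentials on a dev machine)
# is used to read and decrypt it -- no username/password bootstrap needed.
aws ssm get-parameter \
  --name /myapp/prod/db-password \
  --with-decryption \
  --query Parameter.Value \
  --output text
```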

Cloud SQL with Autoscaler access

I am stuck at one thing regarding CloudSQL.
I have my WordPress app running on GCE, and I created an instance group so I can utilise the autoscaler.
For the DB, I am using Cloud SQL.
The point where I am stuck is the "Authorised networks" setting in Cloud SQL, as it accepts only IPv4 public IPs.
When autoscaling happens, how do I know what IP will be attached to the new instance, and how will my instance know where the DB is?
I can hard-code the Cloud SQL IP as a CNAME, but from the Cloud SQL side I am not able to figure out how to provide access, short of making my DB access completely open.
Please let me know what point I am missing.
I also tried the Cloud SQL proxy, but that doesn't come with a service in Linux... I hope you can understand my situation. Let me know if you have any ideas to share on this.
Thank you
The recommended way is to use a second generation instance and the Cloud SQL Proxy. You'll need to configure the proxy on Linux and start it using service account credentials, as outlined at the provided link.
Another way is to use a startup script in your GCE instance template, so you can get your new instance's external IP address and add it to the Cloud SQL instance's authorized networks by using the gcloud sql instances patch command. The IP can be removed from the authorized networks in the same way using a shutdown script. The external IP address of a GCE VM instance can be retrieved from metadata by running:
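As a rough sketch of the proxy setup on a Linux VM (project, region, instance name and key file path are placeholders; this uses the classic cloud_sql_proxy binary):

```bash
# Download and run the Cloud SQL Proxy.
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy

# Listen on localhost:3306 and authenticate with a service account key.
./cloud_sql_proxy \
  -instances=my-project:us-central1:my-wordpress-db=tcp:3306 \
  -credential_file=/opt/keys/cloudsql-client.json

# WordPress then connects to 127.0.0.1:3306 instead of a public IP.
```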
$ curl "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" -H "Metadata-Flavor: Google"
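Putting the two pieces together, a startup script along these lines could work; the Cloud SQL instance name is a placeholder, and note that --authorized-networks replaces the existing list, so in practice you would first read the current entries (e.g. with gcloud sql instances describe) and merge this IP into them:

```bash
#!/bin/bash
# Sketch of a startup script for the GCE instance template.

# Fetch this VM's external IP from the metadata server.
MY_IP=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" \
  -H "Metadata-Flavor: Google")

# Authorize it on the Cloud SQL instance (placeholder instance name).
gcloud sql instances patch my-cloudsql-instance \
  --authorized-networks="$MY_IP" --quiet
```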

Cloud SQL Connection + Auto Scaling

Per this, Cloud SQL requires the external IP address of the client in order to allow connections to it. The other suggested way is the SQL proxy, with a big disclaimer that the method may change over time.
Question: If I am autoscaling Compute Engine VMs running webservers, do I need to assign them all external IPs and then go set those in the Cloud SQL instance? Or am I missing something huge? Noob question perhaps; thanks for reading through.
The recommended way is to use the Cloud SQL proxy (but if you really don't want to use it, you would need to add static IPs to your GCE VMs and whitelist them on the Cloud SQL instance).
Also, you can set up a single VM instance with cloud_sql_proxy listening on your subnet interface (for example), which makes it possible for any new VM instance to connect through the one running the proxy.
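As a rough sketch of that shared-proxy idea (project, region, instance name and key file are placeholders; firewall rules should still restrict which instances can reach the proxy VM):

```bash
# Run the proxy on one VM and let it listen on all interfaces, so other
# instances in the subnet connect to <proxy-vm-internal-ip>:3306.
./cloud_sql_proxy \
  -instances=my-project:us-central1:my-db=tcp:0.0.0.0:3306 \
  -credential_file=/opt/keys/cloudsql-client.json
```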