I have purchased a domain (deepalgorithm.net), but I am curious how I would route all traffic that visits this domain to my Amazon Elastic Beanstalk environment, which is running my web application.
Secondly, how can I make it so that when users type "deepalgorithm.net" it takes them to my web application?
You need to put an Elastic Load Balancer in front of your application and point a record in Route 53 to the ELB:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html
Since you have the domain in Route 53, there is an entire section in the AWS docs devoted to setting up the routing:
Routing traffic to an AWS Elastic Beanstalk environment
The process involves creating a Route 53 record that points to your Elastic Beanstalk domain:
Creating an Amazon Route 53 record that routes traffic to your Elastic Beanstalk environment
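As a concrete sketch of that doc page, the alias record can also be created from the AWS CLI. All names here are placeholders: `Z1EXAMPLE` stands for your own hosted zone ID, and the `AliasTarget.HostedZoneId` must be the region-specific Elastic Beanstalk hosted zone ID from the AWS docs (the value shown is the one documented for us-east-1; verify it for your region).

```shell
# Hypothetical example: substitute your hosted zone ID, environment CNAME,
# and the Elastic Beanstalk hosted zone ID for your region.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "deepalgorithm.net",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z117KPS5GTRQ2G",
          "DNSName": "my-env.us-east-1.elasticbeanstalk.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```

An alias A record at the zone apex is what lets the bare "deepalgorithm.net" (with no www) resolve to the environment.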
Right now, I am managing URLs and their redirection using Nginx hosted on a physical machine. URL redirection is achieved by pointing to different load balancers (HAProxy), as specified in the Nginx vhost file.
Is there any option available in GCP that provides this redirection without using Nginx or Apache? Also, what are the alternatives to HAProxy in GCP?
From what I understand, you have a few services (and maybe some static content) serving traffic to the Internet via HAProxy, which is doing the load balancing.
Based on that, I assume that someone who goes to "yourservice.com/example1" is routed by the load balancer to service1, while someone who requests "yourservice.com/static1" is served static content by a different service, and so on.
GCP has exactly the service you're asking for: HTTP(S) load balancing, which can do URL/content-based routing. You can also move your current services to Google Compute Engine (as virtual machines) or Google Kubernetes Engine, which will run your services as containers.
Both GCE and GKE also support autoscaling, if that's something you need.
The load balancing provided by GCP can do all of the content-based balancing that the HAProxy setup you're using now does.
If you're using some internal load balancing now, I believe that could also be handled by one load balancer, which would simplify your configuration: just the VMs or containers running your services, plus a single load balancer.
You can read more about load balancing concepts and specifically about setting up GCP to do that.
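To make the Nginx-vhost-to-GCP mapping concrete, here is a rough sketch of how the path-based routing described above looks with `gcloud`. All resource names (`web-map`, `service1-backend`, `static-backend`, `example1-backend`) are placeholders, and the backend services are assumed to already exist:

```shell
# Sketch: a URL map that plays the role of the Nginx vhost file.
# Requests not matching any rule go to the default backend service.
gcloud compute url-maps create web-map \
    --default-service service1-backend

# Path rules replace the per-location proxy_pass entries in Nginx:
# /static1/* goes to the static backend, /example1/* to service1's peer.
gcloud compute url-maps add-path-matcher web-map \
    --path-matcher-name path-matcher-1 \
    --default-service service1-backend \
    --path-rules "/static1/*=static-backend,/example1/*=example1-backend"
```

The URL map is then attached to a target HTTP(S) proxy and a global forwarding rule, which together replace the HAProxy frontend.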
I'm a bit confused about how the Elastic Beanstalk URL works. From what I understand, Elastic Beanstalk is just a management layer. Yet I have a Route 53 alias that seems to point to the URL of an Elastic Beanstalk environment, which in turn has a load balancer. I'm confused about how the request can go from Route 53 to an Elastic Beanstalk URL; shouldn't it go to the load balancer?
I have an autoscaling instance group, and I need to set up a proxy/load balancer that takes requests and sends them to the instance group.
I thought of using a load balancer, but I need to handle both HTTP(S) and plain TCP requests.
Is there some way (or some workaround) to solve this?
EDIT: The problem is that in the TCP load balancer settings, I can set the backend service (the managed instance group I need to use) for only one port.
For your use case, no single load balancing configuration available on Google Cloud Platform will serve the purpose. At the same time, since you are using a managed instance group (autoscaling), it cannot be used as the backend for two different load balancers.
As per my understanding, the closest you can get is to use network load balancing (TCP) and install SSL certificates at the instance level to handle HTTPS requests.
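A rough sketch of that workaround with `gcloud` (all names are placeholders, and the managed instance group `my-group` is assumed to already exist): a network (TCP) load balancer built on a target pool can have several forwarding rules for different ports pointing at the same instances, so both 80 and 443 reach the group, with TLS terminated on the instances themselves.

```shell
# Target pool that the forwarding rules will send traffic to.
gcloud compute target-pools create my-pool --region us-central1

# Point the existing managed instance group at the target pool so that
# autoscaled instances are added to the pool automatically.
gcloud compute instance-groups managed set-target-pools my-group \
    --target-pools my-pool --zone us-central1-a

# One forwarding rule per port; both deliver raw TCP to the same instances.
gcloud compute forwarding-rules create fr-http \
    --region us-central1 --ports 80 --target-pool my-pool
gcloud compute forwarding-rules create fr-https \
    --region us-central1 --ports 443 --target-pool my-pool
```

Since the network load balancer is pass-through, the HTTPS traffic on port 443 is just TCP to it; the certificates live on the instances.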
I've set up my SSL cert in AWS on EC2 using an Elastic IP address and Elastic Load Balancing. It costs me about $20 per month to run this.
Does anyone have cheaper suggestions?
It depends on what you are using your EC2 instance for. If it's a web service, look at API Gateway in front of a Lambda function for a serverless architecture. If it's a static website, consider hosting it in an S3 bucket.
Let's Encrypt would be the ideal solution for your case. https://letsencrypt.org/ offers free SSL certificates that you can generate, import into ACM, and attach to your ELB.
OR
If you prefer to terminate SSL directly on your EC2 instance, you can install the certificates in your Apache (httpd) web server.
Refer: https://www.godaddy.com/help/apache-install-a-certificate-centos-5238
https://www.youtube.com/watch?v=_a4wRsT6LaI
Use certificates from AWS Certificate Manager and you won't pay anything; they are free when used with supported services such as ELB and CloudFront. https://aws.amazon.com/certificate-manager/pricing/
You can use AWS CloudFront as the gateway to your application, which can use AWS Certificate Manager-issued SSL certificates for free. There are no upfront commitments, and you pay only for usage (for details, see CloudFront Pricing). You can connect your EC2 instance to CloudFront as the origin to receive traffic.
This will also give you higher performance by caching static content, reducing the load on your backend and further reducing costs at scale.
I am using the Google Load-Balancer with the CDN option enabled.
When I set up the backend configuration for the load balancer, I created a backend with instances in US-Central, US-West and US-East.
Everything is working great, except all traffic is being routed only to the US-West backend service.
Does the load-balancer option route traffic to the closest backend service?
I see that there is an advanced menu in the load balancer for creating forwarding rules, target proxies and more.
Is there something I need to do to make my load balancer route to the backend closest to the client?
If a user is in Florida and the CDN does not have the file, will they get routed to the US-East VM instance?
If that is not possible, it seems like having only a US-Central server would be better than having US-Central, US-East and US-West. That way, East Coast misses are not going to the West Coast to get the file; everything will pull from the central location.
Unless there is a way to route traffic from the load balancer to the closest VM instance, it seems as if the only solution would be to create different load balancers with the CDN enabled and use DNS routing to point to the CDN pool that is closest.
That setup would use three different CDN IP addresses, three Compute Engine IP addresses, and DNS latency or geolocation routing. If a user is in Florida, route them to the Google Load Balancer CDN on the East Coast.
I'm not sure that would be a good solution on top of the Anycast ip routing. It seems like overkill.
Thank you for listening and any help or guidance would be appreciated.
"By default, to distribute traffic to instances, Google Compute Engine picks an instance based on a hash of the source IP and port and the destination IP and port."
Similar question: Google compute engine load balancing not routing properly. Except in my case, all traffic in a live environment is going to the same VM instance.
I am using the Google CDN Frontend Anycast ip address.
I think Elving is right and there may be a misconfiguration. Here is a screenshot of the VM instances in Google Cloud. It says the two instances aren't in use.
Here is another picture of the instance groups. I don't see a clear way to attach the instances to the instance groups.
The load balancer will automatically route traffic to the nearest instance group with capacity. You don't need to do anything other than configure your backend service to use multiple instance groups.
There's more information at https://cloud.google.com/compute/docs/load-balancing/http/.
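As a rough illustration of "configure your backend service to use multiple instance groups" (all resource names and zones here are placeholders; the instance groups are assumed to already exist), each region's group is added as a backend of the same global backend service:

```shell
# One global backend service with an instance group per region; the
# HTTP(S) load balancer then sends each request to the closest group
# that has available capacity.
gcloud compute backend-services add-backend web-backend --global \
    --instance-group us-central-group --instance-group-zone us-central1-a
gcloud compute backend-services add-backend web-backend --global \
    --instance-group us-east-group --instance-group-zone us-east1-b
gcloud compute backend-services add-backend web-backend --global \
    --instance-group us-west-group --instance-group-zone us-west1-a
```

If a group shows as "not in use", it is usually because it was never added as a backend like this, which would explain all traffic landing on a single region.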