Cheapest method to set up HTTPS?

I've set up my SSL cert in AWS on EC2 using an Elastic IP address and Elastic Load Balancing. It costs me about $20 per month to run this.
Does anyone have cheaper suggestions?

It depends on what you are using your EC2 instance for. If it's for a web service, look at API Gateway in front of a Lambda function for a serverless architecture. If it's for a website and the site is static, consider hosting it in an S3 bucket.
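For the web-service route, here is a minimal sketch of what the Lambda side could look like behind an API Gateway proxy integration (the handler name and response body are just placeholders). API Gateway terminates HTTPS for you, so the function never touches certificates:

```python
import json

def handler(event, context):
    """Minimal Lambda handler for an API Gateway proxy integration.

    HTTPS is terminated at API Gateway; the function only builds
    the proxy-integration response shape API Gateway expects.
    """
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "hello over HTTPS"}),
    }
```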

Let's Encrypt would be the ideal solution for your case. https://letsencrypt.org/ offers free SSL certificates that you can generate, import into ACM, and attach to your ELB (a boto3 sketch of the import step follows the links below).
Alternatively, if you prefer to serve HTTPS directly from your EC2 instance, you can install the certificates in your Apache (httpd) web server.
Refer: https://www.godaddy.com/help/apache-install-a-certificate-centos-5238
https://www.youtube.com/watch?v=_a4wRsT6LaI
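For the ACM import route mentioned above, a rough sketch of the import step with boto3, assuming you've already run a Let's Encrypt client and have the PEM files on disk (the file paths and domain are just examples):

```python
import boto3

# ACM certificates used by CloudFront must live in us-east-1;
# for a regional ELB, use the load balancer's own region instead.
acm = boto3.client("acm", region_name="us-east-1")

# Example paths: wherever your Let's Encrypt client wrote the PEM files.
with open("/etc/letsencrypt/live/example.com/cert.pem", "rb") as f:
    cert = f.read()
with open("/etc/letsencrypt/live/example.com/privkey.pem", "rb") as f:
    key = f.read()
with open("/etc/letsencrypt/live/example.com/chain.pem", "rb") as f:
    chain = f.read()

response = acm.import_certificate(
    Certificate=cert,
    PrivateKey=key,
    CertificateChain=chain,
)
print(response["CertificateArn"])  # attach this ARN to your ELB listener
```

Keep in mind that Let's Encrypt certificates expire after 90 days, so you would need to re-run this import on every renewal.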

Use certificates from AWS Certificate Manager and you won't pay anything; they are free. https://aws.amazon.com/certificate-manager/pricing/
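If you go this route, a minimal sketch of requesting a certificate through boto3 (the domain name is a placeholder, and DNS validation requires adding the CNAME record ACM gives you back):

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")

# Request a free public certificate; ACM only issues it once you
# prove ownership of the domain via the DNS validation record.
response = acm.request_certificate(
    DomainName="www.example.com",              # placeholder domain
    ValidationMethod="DNS",
    SubjectAlternativeNames=["example.com"],   # optional extra name
)
print(response["CertificateArn"])
```

The catch is that ACM certificates can only be attached to managed AWS endpoints such as ELB, CloudFront, or API Gateway; you cannot export them to run on an EC2 instance's own web server.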

You can use AWS CloudFront as the gateway to your application, and CloudFront can use AWS Certificate Manager issued SSL certificates for free. There are no upfront commitments and you pay only for usage (for details, see CloudFront pricing). You can connect your EC2 instance to CloudFront as an origin to receive traffic.
This will also give you higher performance by caching static content while reducing the load on your backend, further reducing costs at scale.
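A hedged sketch of wiring this up with boto3, assuming you already have an ACM certificate ARN and your instance's public DNS name (the ARN, domain names, and caller reference below are all placeholders):

```python
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "my-ec2-https-2019",  # any unique string
        "Comment": "HTTPS termination in front of an EC2 origin",
        "Enabled": True,
        "Aliases": {"Quantity": 1, "Items": ["www.example.com"]},
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "ec2-origin",
                # Placeholder: your instance's public DNS name
                "DomainName": "ec2-203-0-113-10.compute-1.amazonaws.com",
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    # CloudFront talks plain HTTP to the origin here;
                    # viewers still get HTTPS from CloudFront.
                    "OriginProtocolPolicy": "http-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "ec2-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {
                "QueryString": True,
                "Cookies": {"Forward": "all"},
            },
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "MinTTL": 0,
        },
        "ViewerCertificate": {
            # Placeholder ARN; the certificate must be in us-east-1
            "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/abc",
            "SSLSupportMethod": "sni-only",
            "MinimumProtocolVersion": "TLSv1.1_2016",
        },
    }
)
print(response["Distribution"]["DomainName"])  # point your DNS here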

Related

Exposing a Postgres / Patroni DB on OpenShift to the outside world

I am planning to run an SSIS ETL job that has a SQL Server as the SOURCE DB, which is on a physical on-premise machine, while the DESTINATION DB (Postgres/Patroni) runs on the OpenShift platform as pods/containers. The issue I am facing is that the DB hosted on OpenShift cannot be exposed via a TCP port. According to a few articles online, OpenShift only allows HTTP traffic via "routes". Is this assumption right? If so, how do people in the real world run ETL, bulk data transfer, or migration to a DB on OpenShift from outside? I am wary of using HTTP since I feel it's not efficient for ETL. A few folks mentioned using OC PORT FORWARDING, but how stable would OpenShift port forwarding be for a production app? Please share your comments.
In a production environment it is a little questionable whether you want to expose your database to the public internet. Normally you would rather go with a site-to-site VPN.
That aside, it is correct that OCP uses routes for most use cases, which then expose an HTTP(S) endpoint. If you need plain TCP, however, you can create a service of type LoadBalancer.
The regular setup with a route is stacked like
route --> service --> pods, where the service is commonly of type ClusterIP.
With a service of type LoadBalancer, you eliminate the route and directly expose a TCP service.
If you run on a public cloud, OCP takes care of the leftover requirements for you, namely creating a load balancer with your cloud provider. In the case of AWS, for example, OCP would create an ELB (Elastic Load Balancer) for you.
You can find more information in the documentation
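As an illustration, such a LoadBalancer service for the Patroni primary could be created with the Kubernetes Python client roughly like this (the namespace and selector labels are placeholders; check which labels your Patroni pods actually carry):

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="patroni-external"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        # Placeholder labels: Patroni typically labels the current
        # primary, so the LB only routes writes to the right pod.
        selector={"app": "patroni", "role": "master"},
        ports=[client.V1ServicePort(protocol="TCP", port=5432, target_port=5432)],
    ),
)

v1.create_namespaced_service(namespace="my-db-namespace", body=service)
```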

In Google Cloud Platform, is there any service that can replace a web server (Nginx or Apache) and a load balancer (HAProxy)?

Right now, I am managing URLs and their redirection using Nginx hosted on a physical machine. URL redirection is achieved by pointing to different load balancers (HAProxy) as specified in the Nginx vhost file.
Is there any option available in GCP to provide these redirection services without using Nginx or Apache? Also, please let me know what the alternatives to HAProxy are in GCP.
From what I understand, you have a few services (and maybe some static content) served via HAProxy (which is doing the load balancing) to the Internet.
Based on that, I assume that someone who goes to "yourservice.com/example1" is redirected by the load balancer to service1, and someone who types "yourservice.com/static1" is served static content by a different service, etc.
GCP has exactly the service you're asking for: its HTTP(S) Load Balancing can do URL/content-based load balancing. You can also move your current services to Google Compute Engine (as virtual machines) or Google Kubernetes Engine, which will run your services as containers.
Also, GCE and GKE can do autoscaling if that's what you need.
The load balancing provided by GCP can do all the content-based balancing that the HAProxy you're using now does.
If you're using some internal load balancing now, I believe it could also be done with one load balancer, which could simplify your configuration (just some VMs or containers running your services and one load balancer).
You can read more in the GCP documentation about load balancing concepts and specifically about setting this up.
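As a rough sketch with the google-cloud-compute Python client (the project ID, map name, and backend-service paths below are all placeholders, and the backend services themselves must already exist), a content-based URL map looks something like this:

```python
from google.cloud import compute_v1

PROJECT = "my-project"  # placeholder project ID
# Placeholder paths to backend services you created beforehand
DEFAULT_BACKEND = f"projects/{PROJECT}/global/backendServices/service1"
STATIC_BACKEND = f"projects/{PROJECT}/global/backendServices/static1"

url_map = compute_v1.UrlMap(
    name="content-router",
    default_service=DEFAULT_BACKEND,
    host_rules=[
        compute_v1.HostRule(hosts=["yourservice.com"], path_matcher="paths"),
    ],
    path_matchers=[
        compute_v1.PathMatcher(
            name="paths",
            default_service=DEFAULT_BACKEND,
            # Route static content to a separate backend; everything
            # else falls through to the default service.
            path_rules=[
                compute_v1.PathRule(paths=["/static1/*"], service=STATIC_BACKEND),
            ],
        ),
    ],
)

compute_v1.UrlMapsClient().insert(project=PROJECT, url_map_resource=url_map)
```

This URL map plays the role your Nginx vhost file plays today: hostname and path rules deciding which backend pool receives each request.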

Server vs Serverless for REST API

I have a REST API that I was thinking about deploying using a serverless model. My data is in an AWS RDS server that needs to be put in a VPC for security reasons. To allow a Lambda function to access the RDS instance, I need to configure the Lambda to be in the VPC, but this makes cold starts an average of 8 seconds longer, according to articles I have read.
The REST API is for a website, so an 8-second page load is not acceptable.
Is there any way I can use a serverless model to implement my REST API, or should I just use a regular EC2 server?
Unfortunately, this is not yet released, but let us hope that it is a matter of weeks/months now. At re:Invent 2018, AWS announced Remote NAT for Lambda, to be available this year (2019).
For now you have to either expose RDS to the outside (directly or through a tunnel), which is a security issue, or create Lambda ENIs in the VPC.
To keep your Lambdas "warm", you can create a scheduled "ping" mechanism. You can find an example of this pattern in Yan Cui's article.
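A minimal sketch of that warm-up pattern (this assumes a CloudWatch Events scheduled rule as the ping source; the response body is a placeholder):

```python
import json

def handler(event, context):
    # A scheduled CloudWatch Events rule pings the function every few
    # minutes; short-circuit those invocations so they stay cheap and
    # keep the ENI-attached container alive.
    if event.get("source") == "aws.events":
        return {"warmed": True}

    # A normal API Gateway proxy request falls through to real work.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "real response"}),
    }
```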

Implementing SSL in Java-Based Web Applications

I have a Java-based web application deployed on Amazon EC2. It handles transactions involving confidential information. I have a MySQL server that I installed myself on the same Amazon instance, and the web application accesses the database via localhost. In Security Groups, I have created a custom rule so that port 8080 (Tomcat) can be accessed only via localhost.
Considering these, do I still need SSL to make sure the transactions are secured?
It depends. Are you comfortable with plain text inside the datacenter? Then don't bother with SSL.
Are you worried about that traffic being sniffed locally (tcpdump) or by a malicious source (for instance, if data were being rerouted from the switch between EC2 instances)? Use SSL.
There's a trend of large companies making sure to encrypt local traffic.
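If you do decide to encrypt the database hop as well, here is a small illustration in Python of what a TLS-enabled MySQL connection looks like (equivalent options exist for JDBC in your Java app; the credentials and paths are placeholders, and the MySQL server must first be configured with matching certificates):

```python
import mysql.connector

# Placeholder credentials/paths; requires MySQL to be started with
# ssl-ca/ssl-cert/ssl-key options pointing at matching certificates.
conn = mysql.connector.connect(
    host="127.0.0.1",
    user="app",
    password="secret",
    database="appdb",
    ssl_ca="/etc/mysql/certs/ca.pem",
    ssl_verify_cert=True,  # reject servers whose cert doesn't match the CA
)

cursor = conn.cursor()
cursor.execute("SELECT VERSION()")
print(cursor.fetchone())
conn.close()
```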

Can a JSF/PrimeFaces application be hosted on AWS?

Is it possible to use AWS services to host an application built with the following technologies?
JSF 2 / PrimeFaces 3
Tomcat 6
MySQL 5
Apart from these, I need email services, a blog, etc., as part of a conventional Java-based package. Is this possible in AWS?
Presently I am using a hosting provider, and my domain is also registered with them, so how can I point the domain to the AWS-hosted website? Is this possible?
I can answer most of your questions. Yes, it's possible to host an app with those technologies on AWS. You can host any application on an AWS server, as it's just like any other server, but you must configure everything yourself unless you are using a customized AMI.
I wouldn't recommend using AWS to send out email, however. In my experience, a lot of spammers have abused the AWS system, so email newsletters and the like sent from an AWS server may be treated more strictly by other email servers' spam filters. It's best to use a third-party solution for sending out bulk email.
As for your last question, "how can I point the domain to point to the AWS hosted website": that is too involved to answer fully here, and I would suggest hiring someone experienced with DNS to manage the transition. I would recommend moving your DNS hosting to Amazon's Route 53 service. Then you can easily manage your DNS and other AWS services from one console.
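Once the zone is in Route 53, pointing the domain at an AWS-hosted site is a single record change. A hedged boto3 sketch (the hosted zone ID, domain, and IP address are placeholders):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",  # placeholder zone ID from Route 53
    ChangeBatch={
        "Comment": "Point the apex at the EC2 instance's Elastic IP",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",  # placeholder domain
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],  # placeholder IP
            },
        }],
    },
)
```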
Good luck