Is it possible to run a Spring Cloud AWS application on Elastic Beanstalk? - amazon-elastic-beanstalk

I'm trying to run a web application of "hello world" complexity on Elastic Beanstalk. I have no problem doing this with Spring Boot on Elastic Beanstalk.
But when I try to use Spring Cloud AWS, I encounter a myriad of problems. The reference guide never mentions that running on Beanstalk is possible, so perhaps I am barking up the wrong tree?
The root problem I seem to encounter is the stackResourceRegistryFactoryBean blowing up when trying to identify the "stack" being used - i.e. the CloudFormation stack. But I'm using Elastic Beanstalk, not CloudFormation. The root exception is:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.cloud.aws.core.env.stack.config.StackResourceRegistryFactoryBean]: Factory method 'stackResourceRegistryFactoryBean' threw exception; nested exception is java.lang.IllegalAccessError: tried to access class org.springframework.cloud.aws.core.env.stack.config.AutoDetectingStackNameProvider from class org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration$StackAutoDetectConfiguration
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:189)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588)
... 89 more
Caused by: java.lang.IllegalAccessError: tried to access class org.springframework.cloud.aws.core.env.stack.config.AutoDetectingStackNameProvider from class org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration$StackAutoDetectConfiguration
at org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration$StackAutoDetectConfiguration.stackResourceRegistryFactoryBean(ContextStackAutoConfiguration.java:71)
at org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration$StackAutoDetectConfiguration$$EnhancerBySpringCGLIB$$432c7658.CGLIB$stackResourceRegistryFactoryBean$0(<generated>)
at org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration$StackAutoDetectConfiguration$$EnhancerBySpringCGLIB$$432c7658$$FastClassBySpringCGLIB$$47c6e7d2.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
...
There are tags present on the generated EC2 instance for "aws:cloudformation:stack-id" and "aws:cloudformation:stack-name", if that is relevant, and my understanding is that Beanstalk uses CloudFormation stacks behind the scenes. I've tried manually specifying the name of the stack via @EnableStackConfiguration, but since the name is generated I'd rather not do this, even if it did work.
So my questions are:
1) Is it possible to run a Spring Cloud AWS-based application on Elastic Beanstalk?
2) If so, are there any special steps required? For example, I already discovered the one about CloudFormation read access being required on the role.
3) Is there a way to disable the part of Spring Cloud AWS that attempts to obtain resource names from the stack? At this point my app doesn't need this.
thanks in advance,
k

Ok, so over time I've answered my own questions on this topic.
First, Elastic Beanstalk uses CloudFormation behind the scenes, so this is why there is a "stack".
Next, Spring Cloud AWS tries to make connections to DBs and such easier by binding to other resources that may have been created in the same stack. This is reasonable - if you are expecting it. If not, as @barthand says, it is probably better to turn this feature off with cloud.aws.stack.auto=false than to have the app fail to start up.
Third, when using Elastic Beanstalk, you have the opportunity to associate an execution role with your instance - otherwise the code on your instance isn't able to do much of anything with the AWS SDK. To explore the resources in a CloudFormation stack, Spring Cloud AWS needs to make some API calls, and by default these are not allowed. To allow them, I added these permissions to the role:
"Statement": [
  {
    "Effect": "Allow",
    "Action": [
      "cloudformation:DescribeStacks",
      "cloudformation:DescribeStackEvents",
      "cloudformation:DescribeStackResource",
      "cloudformation:DescribeStackResources",
      "cloudformation:GetTemplate",
      "cloudformation:List*"
    ],
    "Resource": "*"
  }
]
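As a sketch, the same statement can be built and attached to the instance role programmatically with boto3 (the role name and policy name below are placeholders, not from my setup, and the attach call assumes the caller has iam:PutRolePolicy):

```python
import json

def cloudformation_read_policy():
    """Read-only CloudFormation actions Spring Cloud AWS needs to
    discover stack resources (same statement as above)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "cloudformation:DescribeStacks",
                    "cloudformation:DescribeStackEvents",
                    "cloudformation:DescribeStackResource",
                    "cloudformation:DescribeStackResources",
                    "cloudformation:GetTemplate",
                    "cloudformation:List*",
                ],
                "Resource": "*",
            }
        ],
    }

policy_document = json.dumps(cloudformation_read_policy(), indent=2)
print(policy_document)

# To attach it to the Beanstalk instance role (names are placeholders):
# import boto3
# boto3.client("iam").put_role_policy(
#     RoleName="aws-elasticbeanstalk-ec2-role",
#     PolicyName="cloudformation-read",
#     PolicyDocument=policy_document,
# )
```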
So to answer my original questions:
1) Yes, it is definitely possible (and easy!) to run a Spring Cloud AWS application on Elastic Beanstalk.
2) Special requirements: the permissions on the associated role need to be opened up to include the CloudFormation read operations above, or...
3) ...stack detection can be disabled entirely with cloud.aws.stack.auto=false.
Hope this info helps someone in the future.

spring-cloud-aws seems to assume by default that you're running your app inside your own custom CloudFormation stack.
In the case of Elastic Beanstalk, you simply need to tell spring-cloud-aws not to attempt to obtain information about the stack automatically:
cloud.aws.stack.auto = false
Not sure why this is not mentioned in the docs. For basic Java apps, Elastic Beanstalk seems like an obvious choice.
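For a Spring Boot application this is a one-line entry in application.properties (or the YAML equivalent):

```properties
# Skip automatic CloudFormation stack detection on startup
cloud.aws.stack.auto=false
```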

Related

How to use MySQL to store Spring Cloud Gateway sessions

I am trying to set up two instances of Spring Cloud Gateway working with an IdP to authenticate via the OAuth2 Authorization Code flow. The load balancer in front of the two gateway instances is round-robin.
One of the challenges is sharing the session between the two Spring Cloud Gateway instances. Without session sharing, after the user successfully logs in at the IdP and is redirected to instance #2, there is a problem, because instance #2 doesn't have the session information generated by instance #1, which redirected the user to the login page. And I don't want to use sticky sessions in the load balancer.
I can manage to use Redis to store the sessions by using dependencies org.springframework.boot:spring-boot-starter-data-redis and org.springframework.session:spring-session-data-redis, but I failed to use MySQL to do it (and I have to use MySQL to do it for cost consideration).
I tried this sample, which uses spring-boot-starter-web, and it is able to use MySQL to store the sessions.
But when I replaced the dependency org.springframework.boot:spring-boot-starter-web with org.springframework.cloud:spring-cloud-starter-gateway:3.1.0 and configured spring.session.store-type: jdbc, I encountered the error below:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.boot.autoconfigure.session.SessionAutoConfiguration$ReactiveSessionRepositoryValidator': Invocation of init method failed; nested exception is org.springframework.boot.autoconfigure.session.SessionRepositoryUnavailableException: No session repository could be auto-configured, check your configuration (session store type is 'jdbc')
Any idea to solve it? Many thanks!

AWS Fargate CannotPullContainerError

I'm using AWS ECS to practice, and I hit this issue when launching my task; it failed with the reason:
CannotPullContainerError: inspect image has been retried 5 time(s): failed to resolve ref "docker.io/vaultwarden/server:latest": failed to do request: Head https://registry-1.docker.io/v2/vaultwarden/server/manifests/latest: dial tcp 3.226.210.61:443: ...
In the task definition I'm using the image URL from Docker Hub, which I believe is an official image (docker.io/vaultwarden/server:latest).
After doing some research, I checked my VPC, subnet (it can access the public internet), and IAM, and tried both enabling and disabling auto-assign public IP when creating the service. I also compared my task definition with my colleague's and can't tell the difference, but his works fine.
After that, I tried deleting everything - cluster, service, task definition, etc. - and even switched my AWS account to another region (from California to Oregon) and tried from scratch.
I still get the same error. Has my account been blocked for some reason, or can anyone provide some help? I really appreciate it!

Is there documentation regarding exceptions thrown by the Kubernetes API server? It would be good to have it in Java, but any language will do

We have a use case to monitor Kubernetes clusters, and I am trying to find the list of exceptions thrown by Kubernetes that reflect the status of the k8s server (in a namespace) when trying to submit a job from the UI.
Example: if the k8s server throws a ClusterNotFound exception, that means we cannot submit any more jobs to that API server.
Is there such a comprehensive list?
I came across this in Go. Will this be it? Does Java have something like this?
The file you are referencing is part of a Kubernetes library used by many Kubernetes components to validate the fields of API requests. As all Kubernetes components are written in Go, and I couldn't find any plans to port Kubernetes to Java, it's unlikely that a Java version of that file exists.
However, there is an officially supported Kubernetes client library written in Java, so you can check for the proper modules to validate API requests and process API responses in the java-client repository or on the javadoc site.
For example, the objects used to contain proper or improper HTTP replies from the Kubernetes apiserver are V1Status and ApiException (repository link).
Please consider checking the java-client usage examples for a better understanding.
A detailed Kubernetes RESTful API reference can be found on the official page.
For example: Deployment create request.
If you are really interested in Kubernetes cluster monitoring and logging aspects, consider reading the following articles first:
Metrics For Kubernetes System Components
Kubernetes Control Plane monitoring with Datadog
How to monitor Kubernetes control plane
Logging Architecture
A Practical Guide to Kubernetes Logging

Is there a way to have multiple external IP addresses with Elastic Beanstalk?

I'm using Amazon Elastic Beanstalk with a VPC and I want to have multiple environments (workers) with different IP addresses. I don't need them to be static, I would actually prefer them to change regularly if possible.
Is there a way to have multiple environments with dynamic external IP addresses?
It's hard to understand the use case for wanting to change the instance IP addresses of an Elastic Beanstalk environment. The fundamental advantage a managed service like Elastic Beanstalk provides is abstraction over the underlying architecture of a deployment. You are given a CNAME to access the environment's (your application's) API, and you shouldn't rely on the internal IP addresses or load balancer URLs for anything, as they can be added or removed by the Beanstalk service at will.
That being said, there is a way that you can achieve having changing IPs for the underlying instances.
Elastic Beanstalk Rebuild Environment destroys the existing resources including EC2s and creates new resources resulting in your instances having new IP addresses. This would work given that a scheduled downtime (of a few minutes depending on your resources) is not a problem for this use case.
You can use one of the following two ways to schedule an environment rebuild.
Solution 1:
You can schedule the environment rebuild using a simple Lambda function:
import boto3

# IDs of the Elastic Beanstalk environments to rebuild
env_ids = ['e-awsenvidid']
client = boto3.client('elasticbeanstalk')

def handler(event, context):
    try:
        for env_id in env_ids:
            response = client.rebuild_environment(EnvironmentId=env_id.strip())
            if response:
                print('Rebuilding environment %s' % env_id)
            else:
                print('Failed to rebuild environment %s' % env_id)
    except Exception as e:
        print('Invalid environment ID: %s' % e)
In order to do this you will have to create an IAM role with the required permissions.
You can find a comprehensive guide in this AWS Guide.
Solution 2:
You can use a cron job to rebuild the environment using aws-cli. You can follow the steps below to achieve this.
Create EC2 instance
Create IAM Role with permission to rebuild environment
The following example policy would work
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "elasticbeanstalk:RebuildEnvironment"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Attach the IAM role to the EC2 instance
Add a cron job using command crontab -e
The following example cron job rebuilds the environment at 12:00 AM on the 1st of every month:
0 0 1 * * aws elasticbeanstalk rebuild-environment --environment-name my-environment-name
Save the cronjob and exit.
It is not recommended to rebuild the environment unnecessarily, but as of now there is no explicit way to achieve this particular requirement. So hope this helps!
Further Reading:
https://docs.aws.amazon.com/cli/latest/reference/elasticbeanstalk/rebuild-environment.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-management-rebuild.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html
https://awspolicygen.s3.amazonaws.com/policygen.html

Issue when trying to connect to the cluster after updating the version of Java SDK

We are experiencing the issue when trying to connect to the cluster after updating the version of Java SDK.
The setup of the system is as follows:
We have a web application that uses the Java SDK and a Couchbase cluster. In between we have a VIP (Virtual IP Address). We realise that isn't ideal, but we're not able to change it immediately, since the VIP was mandated by Tech Ops. The VIP is basically only there to reroute the initial request on application startup. That way we can make modifications on the cluster and ensure that when the application starts it can find the cluster regardless of the actual nodes in the cluster and their IPs.
Prior to the issue we used Java SDK version 1.4.4. Our application would start and the Java SDK would initiate a request on port 8091 to the VIP. Please note that port 8091 is the only port open on the VIP. The VIP would reroute the request to one of the nodes currently in the cluster, and that node would respond to the Java SDK. At that point the Java SDK would discover all the nodes in the cluster and the application would run fine. During uptime, if we added or removed a node from the cluster, the Java SDK would update automatically and everything would run without issue.
In the last sprint we updated the Java SDK to version 2.1.3. Now our application starts and the Java SDK initiates a request on port 11210 to the VIP. Since this port is not open, the request fails and the Java SDK throws an exception:
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:93)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:108)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:99)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:89)
No further request would be made on any port.
It appears the order in which ports are used has changed between versions. Could somebody please confirm, or dispute, that the order in which ports are used for cluster discovery has changed between versions? Also, could somebody please provide some advice on how we could resolve the issue? We are trying to understand the client's behavior: if we opened all those ports on the VIP, would the client then function correctly and at full performance?
The issue is happening in our production environment, which we cannot use for testing potential solutions since that would interfere with our products.
In v2.x of the Java SDK, it defaults to port 11210 to get the cluster map to bootstrap the application. This is actually a huge improvement, as the map now comes from the managed cache and not the cluster manager (8091). The SDK should fall back to 8091 if it cannot get the map on 11210, though. Regardless, you really want to get that map from 11210, trust me. It cleans up a lot of problems.
To resolve this long term and follow Couchbase best practices, upgrade to the Java 2.2.x SDK, get rid of the VIP entirely, and go with a DNS SRV record instead. That gives you one DNS entry for the SDK connection object, and you just manage the node list in DNS. It works great. I say SDK 2.2 because the DNS SRV record solution is fully supported there; in 2.1 it is experimental. VIPs are specifically recommended against by Couchbase these days. In older versions of the SDKs it was fine to do this, and it helped limit the number of connections from the app to the DB nodes, but that is no longer necessary and can actually be a bad thing.
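To illustrate the DNS SRV idea (the hostnames below are made up): the client resolves a single SRV name and derives the node list from the returned records, trying lower priority values first. A simplified sketch of that ordering logic — note that real clients do a weighted random pick within a priority per RFC 2782, while this sketch just sorts by weight:

```python
# Hypothetical SRV records for _couchbase._tcp.example.com:
# (priority, weight, port, target) -- hostnames are made up.
records = [
    (20, 0, 11210, "cb3.example.com"),
    (10, 50, 11210, "cb1.example.com"),
    (10, 60, 11210, "cb2.example.com"),
]

def bootstrap_order(srv_records):
    """Order SRV records for bootstrap: ascending priority first,
    then higher weight within the same priority."""
    return sorted(srv_records, key=lambda r: (r[0], -r[1]))

ordered = [target for (_, _, _, target) in bootstrap_order(records)]
print(ordered)  # the priority-10 hosts come before the priority-20 host
```

Managing the node list then becomes a matter of editing those SRV records in DNS, with no application redeploys.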
In addition to Kirk's long-term answer (which I also advise you to follow), a shorter-term solution may be to deactivate the 11210 bootstrapping (carrier bootstrap) through the CouchbaseEnvironment by calling bootstrapCarrierEnabled(false) on the builder.
I can't guarantee that it will work with a VIP even after that, but it may be worth a try if you're in a hurry.