I am trying to set up two instances of Spring Cloud Gateway to work with an IdP for authentication using the OAuth2 Authorization Code flow. The load balancer in front of the two gateway instances distributes requests in a round-robin fashion.
One of the challenges is sharing the session between the two Spring Cloud Gateway instances: without session sharing, after the user successfully logs in at the IdP and is redirected to instance #2, that instance runs into problems because it does not have the session information generated by instance #1, which originally redirected the user to the login page. I also don't want to use sticky sessions at the load balancer.
I managed to store the sessions in Redis by using the dependencies org.springframework.boot:spring-boot-starter-data-redis and org.springframework.session:spring-session-data-redis, but I failed to do the same with MySQL (and I have to use MySQL for cost reasons).
I tried this sample, which uses spring-boot-starter-web, and it is able to use MySQL to store the sessions.
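For reference, this is roughly what that JDBC-backed session configuration looks like in a servlet-based setup; the datasource values below are placeholders, not the sample's actual settings:

# Sketch of a JDBC-backed Spring Session setup with MySQL (placeholder credentials)
spring:
  session:
    store-type: jdbc
    jdbc:
      initialize-schema: always   # let Spring Session create its SPRING_SESSION tables on startup
  datasource:
    url: jdbc:mysql://localhost:3306/sessions_db
    username: session_user
    password: change-me
    driver-class-name: com.mysql.cj.jdbc.Driver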
But when I replaced the dependency org.springframework.boot:spring-boot-starter-web with org.springframework.cloud:spring-cloud-starter-gateway:3.1.0 and configured spring.session.store-type: jdbc, I encountered the error below:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.boot.autoconfigure.session.SessionAutoConfiguration$ReactiveSessionRepositoryValidator': Invocation of init method failed; nested exception is org.springframework.boot.autoconfigure.session.SessionRepositoryUnavailableException: No session repository could be auto-configured, check your configuration (session store type is 'jdbc')
Any idea how to solve this? Many thanks!
We are running CAS Server 5.2.2 with two authentication handlers. One of them is a custom authentication handler in which we are able to handle any kind of scenario. The other one is set up to delegate authentication to another identity provider which validates several types of credentials (digital certificates, ...).
Our problem is with the delegated authentication: the integration via OAuth2 works fine and we release Principal attributes using a Groovy script. Our new case is the following:
After a successful delegated authentication we need to evaluate some attributes and, depending on their values, mark the final authentication as failed.
How can we achieve this? We have tried throwing an exception (registered in application.properties) from the Groovy script that releases the Principal attributes, but it does not seem to work.
I've been trying to deploy an ASP.NET Boilerplate application to Azure for the last couple of days and I haven't been able to do so. I keep stumbling upon the same error (shown in the edit at the end of this post).
First I create a Web App + SQL resource and then publish my project to it from inside Visual Studio using the right-click -> deploy option, importing the publish profile I got from Azure. I make sure the database connection string in appsettings.production.json is correct, but I can't seem to connect to the database. If I try to connect to the database through SQL Server Management Studio I get a similar error, which suggests something is wrong with the database itself.
Do I need to create a separate SQL database in Azure that makes use of the SQL database server that comes with the Web App + MySQL resource?
Any help would be greatly appreciated! I have been spending way too much time on this problem already.
This is a screenshot of all the resources in the Azure portal; they were created while creating the Web App + MySQL resource:
Edit: This is the error I am getting:
Win32Exception: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Unknown location
SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)
Microsoft.Data.ProviderBase.DbConnectionPool.CheckPoolBlockingPeriod(Exception e)
InvalidOperationException: An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call.
Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerExecutionStrategy.Execute<TState, TResult>(TState state, Func<DbContext, TState, TResult> operation, Func<DbContext, TState, ExecutionResult<TResult>> verifySucceeded)
Win32Exception: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Since you mentioned you can't access the database in any other way, here are step-by-step instructions.
You can get your connection string from the Azure resource panel.
Navigate to the Connection security tab and enable access from other Azure resources. This way the Web App will be able to access the database server inside the Azure network. I also added my client IP so I could test the DB connection from my PC later. Click Save at the top when you are done.
On the same page there is a setting to configure the SSL connection. The certificate can be downloaded as described at https://learn.microsoft.com/uk-ua/azure/mysql/howto-configure-ssl; I used this file: https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem. Optionally, you could disable this feature for now.
Now we have everything needed to test the DB connection. I will show it using the HeidiSQL client.
Click Open and you should see your databases.
According to this, you need to append the CACertificateFile parameter to the connection string to use SSL, so you need to put the certificate file into your repository and provide its relative path in the connection string.
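Purely as an illustration (the server, user and database values are placeholders; check the linked documentation for the exact parameter names your connector expects), such a connection string could look like:

Server=myserver.mysql.database.azure.com;Port=3306;Database=mydb;Uid=myuser@myserver;Pwd=<password>;SslMode=Required;CACertificateFile=BaltimoreCyberTrustRoot.crt.pem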
This was all caused by a stupid mistake: in the Azure portal I selected the Web App + MySQL resource instead of the Web App + SQL resource. For the ASP.NET Boilerplate project I am using, you need a Web App + SQL resource.
I am trying to terminate a running ATP instance using the following API from the OCI Java SDK:
// Build and send the terminate request for the given DB system OCID
TerminateDbSystemRequest terminateDbSystemRequest =
        TerminateDbSystemRequest.builder().dbSystemId(dbSystemId).build();
databaseClient.terminateDbSystem(terminateDbSystemRequest);
An exception is caught:
Exception in thread "main" com.oracle.bmc.model.BmcException: (404, NotAuthorizedOrNotFound, false) Authorization failed or requested resource not found.
I can stop and start the ATP instance successfully; I'm just wondering which class should be used to terminate the ATP instance.
That is the right code to perform the action you are trying to perform using the OCI Java SDK, but it looks like you are hitting one of the two issues below:
dbSystemId is not a valid DB System identifier. One way to confirm whether you have the correct dbSystemId value is to check whether you can stop and start the DB System using the same value.
dbSystemId is a valid DB System identifier, but the credentials you are using to terminate the DB System do not have the proper permissions to do so. One way to confirm whether this is a permissions issue is to check whether, using the same account whose credentials you are using from the Java SDK, you can terminate the DB System from the OCI web console (a sample policy statement is sketched below).
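For example, a policy statement along these lines would grant the needed permission (the group and compartment names here are made up):

Allow group DbAdmins to manage db-systems in compartment MyCompartment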
I'm trying to run a web application of "hello world" complexity on Elastic Beanstalk. I have no problem doing this with Spring Boot on Elastic Beanstalk.
But when I try to use Spring Cloud AWS, I encounter a myriad of problems. The reference guide never mentions that running on Beanstalk is possible, so perhaps I am barking up the wrong tree?
The root problem I seem to encounter is the stackResourceRegistryFactoryBean blowing up when trying to identify the "stack" being used - i.e. the CloudFormation stack. But I'm using Elastic Beanstalk, not CloudFormation. The root exception is:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.cloud.aws.core.env.stack.config.StackResourceRegistryFactoryBean]: Factory method 'stackResourceRegistryFactoryBean' threw exception; nested exception is java.lang.IllegalAccessError: tried to access class org.springframework.cloud.aws.core.env.stack.config.AutoDetectingStackNameProvider from class org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration$StackAutoDetectConfiguration
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:189)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588)
... 89 more
Caused by: java.lang.IllegalAccessError: tried to access class org.springframework.cloud.aws.core.env.stack.config.AutoDetectingStackNameProvider from class org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration$StackAutoDetectConfiguration
at org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration$StackAutoDetectConfiguration.stackResourceRegistryFactoryBean(ContextStackAutoConfiguration.java:71)
at org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration$StackAutoDetectConfiguration$$EnhancerBySpringCGLIB$$432c7658.CGLIB$stackResourceRegistryFactoryBean$0(<generated>)
at org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration$StackAutoDetectConfiguration$$EnhancerBySpringCGLIB$$432c7658$$FastClassBySpringCGLIB$$47c6e7d2.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
...
There are tags present on the generated EC2 instance for "aws:cloudformation:stack-id" and "aws:cloudformation:stack-name", if that is relevant, and my understanding is that Beanstalk uses CloudFormation stacks behind the scenes. I've tried manually specifying the name of the stack via @EnableStackConfiguration, but since the name is generated I'd rather not do this, even if it did work.
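For reference, this is roughly what that attempt looked like (the stack name below is made up; the real one is generated by Beanstalk):

// Sketch: pinning the CloudFormation stack name manually (stack name is illustrative)
import org.springframework.cloud.aws.context.config.annotation.EnableStackConfiguration;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableStackConfiguration(stackName = "awseb-e-abc123-stack")
public class StackConfig {
}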
So my questions are:
1) Is it possible to run a Spring Cloud AWS-based application on Elastic Beanstalk?
2) If so, are there any special steps required? For example, I already discovered the one about CloudFormation read access being required on the role.
3) Is there a way to disable the part of Spring Cloud AWS that attempts to obtain resource names from the stack? At this point my app doesn't need this.
thanks in advance,
k
Ok, so over time I've answered my own questions on this topic.
First, Elastic Beanstalk uses CloudFormation behind the scenes, so this is why there is a "stack".
Next, Spring Cloud AWS tries to make connections to DBs and such easier by binding to other resources that may have been created in the same stack. This is reasonable - if you are expecting it. If not, as @barthand says, it is probably better to turn this feature off with cloud.aws.stack.auto=false than have the app fail to start up.
Third, when using Elastic Beanstalk, you have the opportunity to associate an execution role with your instance - otherwise the code running on your instance isn't able to do much of anything with the AWS SDK. To explore the resources in a CloudFormation stack, Spring Cloud AWS makes some API calls, which by default are not allowed. To allow them, I added these permissions to the role:
"Statement": [
{
"Effect": "Allow",
"Action": [
"cloudformation:DescribeStacks",
"cloudformation:DescribeStackEvents",
"cloudformation:DescribeStackResource",
"cloudformation:DescribeStackResources",
"cloudformation:GetTemplate",
"cloudformation:List*"
],
"Resource": "*"
}
]
So to answer my original questions:
Yes, it is definitely possible (and easy!) to run a Spring Cloud AWS application on Elastic Beanstalk.
Special requirements: the associated role needs to allow the CloudFormation read operations listed above, or...
Disable stack auto-detection using cloud.aws.stack.auto=false
Hope this info helps someone in the future
spring-cloud-aws seems to assume by default that you're running your app using your custom CloudFormation template.
In the case of Elastic Beanstalk, you simply need to tell spring-cloud-aws not to try to obtain information about the stack automatically:
cloud.aws.stack.auto = false
Not sure why it is not mentioned in the docs. For basic Java apps, Elastic Beanstalk seems to be an obvious choice.
I have an EJB application running on a GlassFish server which stores data in a MySQL DB that I call the global DB.
I have two identical remote Swing applications, which are standalone applications accessing the EJBs using RMI. They have their own local DBs in case the connection is lost.
My aim is to implement the two-phase commit protocol, i.e. to make one participant the coordinator and the others participants.
One method I could think of was to implement this using JMS, i.e. send a message across a queue and make the remote clients listen to these messages and take appropriate action.
I do this by sending a message on a button click in one of the Swing applications.
The problem is that even though I have implemented MessageListener, the onMessage() method of the other client does not receive any messages.
Each remote client has the following properties set:
props.setProperty("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
props.setProperty("java.naming.factory.url.pkgs", "com.sun.enterprise.naming");
props.setProperty("java.naming.factory.state", "com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl");
props.setProperty("org.omg.CORBA.ORBInitialHost", "localhost");
props.setProperty("org.omg.CORBA.ORBInitialPort", "3700");
This is to connect to the GlassFish server and access the ConnectionFactory and Queue which I have already configured.
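For illustration, this is roughly how the client side looks: it looks up the ConnectionFactory and Queue over JNDI and registers the listener (the JNDI names below are placeholders for the resources I configured):

// Sketch of the remote client's consumer setup (uses javax.jms.* and javax.naming.InitialContext)
InitialContext ctx = new InitialContext(props);
ConnectionFactory connectionFactory = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
Queue queue = (Queue) ctx.lookup("jms/MyQueue");

Connection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(queue);
consumer.setMessageListener(myMessageListener); // my MessageListener implementation with onMessage()
connection.start(); // the connection must be started before any messages are delivered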
Is it because only applications running on the server are allowed to receive messages, and not remote applications?
Any suggestions for a topology for 2PC are welcome.
For this, we used JMS to exchange the messages between these systems, i.e. one system acts as the coordinator and initiates the process by sending a message on the queue, and the others respond accordingly by sending a message back on the queue.
Since you are using EJB, you can use JTA to manage the transaction. It is a standard implementation of the two-phase commit protocol, and JMS supports JTA too.
Here are my steps:
Configure the trans-attribute to Required/Mandatory/Supports, depending on your needs.
In your client, get a UserTransaction by doing a JNDI lookup against the EJB server.
Start the transaction from the client.
Commit/roll back the transaction on the client side.
This is the so-called "client-owned transaction" design pattern. I suggest you read the book javatransactionsbook
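A rough sketch of how this could look on the client, assuming a remote EJB interface (here called GlobalDbRemote) and an already-created JMS producer and session; the JNDI name for UserTransaction may differ for a standalone GlassFish client:

// Sketch: the client owns a JTA transaction spanning the EJB call and the JMS send (names are illustrative)
public void runCoordinatorStep(InitialContext ctx, GlobalDbRemote globalDbBean,
                               MessageProducer producer, Session session, String payload) throws Exception {
    UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
    utx.begin();
    try {
        globalDbBean.updateGlobalData(payload);               // EJB call joins the transaction (trans-attribute Required/Mandatory/Supports)
        producer.send(session.createTextMessage("prepare"));  // coordinator's message to the participants' queue
        utx.commit();                                         // DB work and JMS send commit together
    } catch (Exception e) {
        utx.rollback();                                       // undo both on any failure
        throw e;
    }
}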