Exception caught when trying to terminate an Always Free ATP database instance - oci-java-sdk

I am trying to terminate a running ATP instance using the following API from the OCI Java SDK:
TerminateDbSystemRequest terminateDbSystemRequest =
TerminateDbSystemRequest.builder().dbSystemId(dbSystemId).build();
databaseClient.terminateDbSystem(terminateDbSystemRequest);
but an exception is thrown:
Exception in thread "main" com.oracle.bmc.model.BmcException: (404, NotAuthorizedOrNotFound, false) Authorization failed or requested resource not found.
I can stop and start the ATP instance successfully; I'm just wondering which class should be used to terminate it.

That is the right code for this action in the OCI Java SDK, but it looks like you are hitting one of the two issues below:
dbSystemId is not a valid DB System identifier. One way to check is to verify that you can stop and start the DB System using the same value.
dbSystemId is a valid DB System identifier, but the credentials you are using to terminate the DB System do not have the proper permissions to do so. One way to confirm whether this is a permissions issue is to check whether, using the same account whose credentials the Java SDK is using, you can terminate the DB System from the OCI web console.
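If it turns out to be a permissions issue, an IAM policy along these lines would grant the terminate permission (the group and compartment names here are placeholders):
Allow group DatabaseAdmins to manage db-systems in compartment MyCompartment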

Related

Kubectl: how to wait for an operator package manifest to be created

How do I wait, using the oc command, for an operator package manifest to become available?
I am trying this
❯ oc wait --for=condition=ready packagemanifest/example-manifest -n openshift-marketplace
Error from server (MethodNotAllowed): the server does not allow this method on the requested resource
This is failing because the package manifest does not expose a ready condition in its status.
This error may occur when the server is configured in a way that does not allow you to perform a specific action for a particular URL.
Check the possible solutions below:
1) kubectl resolves resources via API discovery. Check whether you have created two resources with conflicting names, one of which is not listable.
2) Check whether your Application Default Credentials are configured for a different user than your own credentials.
3) Also make sure that your application credentials environment variable isn't pointing somewhere unexpected.
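Since packagemanifests do not expose a condition that oc wait can watch, a simple polling loop along these lines (same resource and namespace as in the question) is a common workaround:
until oc get packagemanifest/example-manifest -n openshift-marketplace >/dev/null 2>&1; do sleep 5; done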

How to use MySQL to store Spring Cloud Gateway sessions

I am trying to set up two instances of Spring Cloud Gateway to work with an IdP to authenticate using the OAuth2 Authorization Code flow. The load balancer in front of the two gateway instances distributes requests round-robin.
One of the challenges is sharing the session between the two Spring Cloud Gateway instances (without session sharing, after the user successfully logs in at the IdP and is redirected to instance #2, it will run into problems because instance #2 doesn't have the session information generated by instance #1, which redirected the user to the login page). And I don't want to use sticky sessions in the load balancer.
I managed to use Redis to store the sessions with the dependencies org.springframework.boot:spring-boot-starter-data-redis and org.springframework.session:spring-session-data-redis, but I failed to do the same with MySQL (and I have to use MySQL for cost reasons).
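For reference, the working Redis setup was configured roughly like this (host and port are placeholders for my environment):
spring:
  session:
    store-type: redis
  redis:
    host: localhost
    port: 6379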
I tried this sample, which uses spring-boot-starter-web, and it is indeed able to use MySQL to store the sessions.
But when I replaced the dependency org.springframework.boot:spring-boot-starter-web with org.springframework.cloud:spring-cloud-starter-gateway:3.1.0 and configured spring.session.store-type: jdbc, I encountered the error below:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.boot.autoconfigure.session.SessionAutoConfiguration$ReactiveSessionRepositoryValidator': Invocation of init method failed; nested exception is org.springframework.boot.autoconfigure.session.SessionRepositoryUnavailableException: No session repository could be auto-configured, check your configuration (session store type is 'jdbc')
Any idea how to solve this? Many thanks!

SQLException when deploying ASP.NET Boilerplate application to Azure

I've been trying to deploy an ASP.NET Boilerplate application to Azure for the last couple of days and I haven't been able to do so. I keep stumbling upon the error shown in the edit below.
First I create a Web App + SQL resource and then publish my project to it from inside Visual Studio using the right-click -> deploy option, importing the publication profile I got from Azure. I make sure the database connection string in appsettings.production.json is correct, but I can't seem to connect to the database. If I try to connect to the database through SQL Server Management Studio I get a similar error, so something seems to be wrong with the database itself.
Do I need to create a separate SQL database in Azure that makes use of the SQL database server that comes with the Web App + MySQL resource?
Any help would be greatly appreciated! I have been spending way too much time on this problem already.
This is a screenshot of all the resources in the Azure portal; these were created while creating the Web App + MySQL resource:
Edit: This is the error I am getting:
Win32Exception: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. Unknown location
SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)
Microsoft.Data.ProviderBase.DbConnectionPool.CheckPoolBlockingPeriod(Exception e)
InvalidOperationException: An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call.
Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerExecutionStrategy.Execute<TState, TResult>(TState state, Func<DbContext, TState, TResult> operation, Func<DbContext, TState, ExecutionResult<TResult>> verifySucceeded)
Win32Exception: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Since you mentioned you can't access the DB in any other way, here is a step-by-step instruction.
You can get your connection string from the Azure resource panel.
Navigate to the Connection security tab and enable access from other Azure resources. This way the Web App will be able to access the database server inside the Azure network. I also added my client IP to test the DB connection from my PC later. Click Save at the top when you are done.
On the same page there is a setting called Configure SSL connection. The certificate can be downloaded from https://learn.microsoft.com/uk-ua/azure/mysql/howto-configure-ssl; I used this file: https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem. Optionally, you could disable this feature for now.
Now we have everything we need to test the DB connection. I will show it using the HeidiSQL client.
Click Open and see your databases.
According to this, you need the CACertificateFile parameter appended to the connection string to use SSL, so you need to put the file into your repository and provide its relative path in the connection string.
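The resulting connection string might then look roughly like this (server, database, and credentials are placeholders):
Server=myserver.mysql.database.azure.com;Port=3306;Database=mydb;Uid=myuser@myserver;Pwd=mypassword;SslMode=Required;CACertificateFile=BaltimoreCyberTrustRoot.crt.pem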
This was all caused by a stupid mistake: in the Azure portal I selected the Web App + MySQL resource instead of the Web App + SQL resource. For the ASP.NET Boilerplate project I am using, you need a Web App + SQL resource.

How do you start a Dataflow template from a Compute Engine instance?

From my workstation I can fire templated Dataflow jobs with the gcloud dataflow jobs command. The required authorization to insert a new job comes from my workstation, where I'm logged in.
On the Compute Engine instance I rely on its service account, the one with (number)-compute@. Within the IAM section I enabled Dataflow Admin, Dataflow Developer and Dataflow Worker for this service account, to be safe.
I even added Cloud Dataflow Service Agent when I came across that one.
Then I try to start a Dataflow job from the command line, but I get an error about insufficient authentication scopes: ERROR: (gcloud.dataflow.jobs.run) PERMISSION_DENIED: Request had insufficient authentication scopes.
If I do a gcloud auth login with my personal account, of course, it works.
Somehow I'm missing the proper permissions to set on the attached service account.
Is there a guideline I missed? Can somebody please point me into the right direction?
The error message indicates that the instance's access scopes are not set up properly. To launch a job from a GCE VM, the VM must have the compute.read-only, compute, or cloud-platform scope for the project.
The way to verify it is to use the command "gcloud compute instances describe [instance] --zone=[zone]" and look for "scopes".
This document and this existing question may provide useful guidelines for you.
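For example (the instance name and zone are placeholders; note that an instance must be stopped before its access scopes can be changed):
gcloud compute instances describe my-instance --zone=us-central1-a
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute instances set-service-account my-instance --zone=us-central1-a --scopes=cloud-platform
gcloud compute instances start my-instance --zone=us-central1-a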

Implement two-phase commit protocol between an EJB application (running on GlassFish) and Swing applications

I have an EJB application running on a GlassFish server which stores data in a MySQL DB, which I call the global DB.
I have two identical remote Swing applications, which are stand-alone applications accessing the EJBs using RMI. They have their own local DBs in case of a lost connection.
My aim is to implement the two-phase commit protocol, i.e. to make one node the coordinator and the others participants.
One method I could think of was to implement this using JMS, i.e. send a message across a queue and make the remote clients listen to these messages and take appropriate action.
I do this by sending a message on a button click in one of the Swing applications.
The problem is that even though I have implemented MessageListener, the onMessage() method never receives any message for the other client.
Each remote client has the following properties set:
props.setProperty("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
props.setProperty("java.naming.factory.url.pkgs", "com.sun.enterprise.naming");
props.setProperty("java.naming.factory.state", "com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl");
props.setProperty("org.omg.CORBA.ORBInitialHost", "localhost");
props.setProperty("org.omg.CORBA.ORBInitialPort", "3700");
This is to connect to the GlassFish server and access the ConnectionFactory and Queue which I have already configured.
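For reference, the receiving side looks roughly like this; the JNDI names jms/MyConnectionFactory and jms/MyQueue are placeholders for whatever is configured on the server. One classic cause of onMessage() never firing is a missing Connection.start() call:
import java.util.Properties;
import javax.jms.*;
import javax.naming.InitialContext;

public class RemoteListener {
    public static void main(String[] args) throws Exception {
        // Same JNDI properties as shown above
        Properties props = new Properties();
        props.setProperty("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
        props.setProperty("org.omg.CORBA.ORBInitialHost", "localhost");
        props.setProperty("org.omg.CORBA.ORBInitialPort", "3700");
        InitialContext ctx = new InitialContext(props);

        // JNDI names are placeholders; use the ones configured in GlassFish
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(message -> System.out.println("Received: " + message));

        // Without this call, delivery never begins and onMessage() is never invoked
        connection.start();
    }
}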
Is it because only applications running on the server are allowed to receive messages, and not remote applications?
Any suggestions for a 2PC topology are welcome.
For this, we used JMS to exchange the messages between these systems, i.e. one node acts as coordinator and initiates the process by sending a message on the queue, and the others respond accordingly by sending a message back on the queue.
Since you are using EJB, you can use JTA to manage transactions; it is a standard implementation of the two-phase commit protocol, and JMS supports JTA too.
Here are my steps:
Configure the trans-attribute to Required/Mandatory/Supports, depending on your needs.
In your client, get the UserTransaction by a JNDI lookup from the EJB server.
Start the transaction from the client.
Commit/roll back the transaction at the client side, as sketched below.
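A minimal sketch of these steps, assuming the same JNDI properties as in the question; the "UserTransaction" lookup name can vary by GlassFish version:
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class ClientOwnedTransaction {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // assumes JNDI properties as in the question

        // JNDI name of the transaction manager proxy; may vary by GlassFish version
        UserTransaction utx = (UserTransaction) ctx.lookup("UserTransaction");

        utx.begin();
        try {
            // Call one or more EJB business methods here. With trans-attribute
            // Required/Mandatory/Supports they enlist in this client-owned transaction.
            utx.commit(); // JTA performs the two-phase commit across all enlisted resources
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}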
This is the so-called "client-owned transaction" design pattern. I suggest you read the book javatransactionsbook.