Elastic Beanstalk: Error while deploying during a node update from Node 12 to 14 - amazon-elastic-beanstalk

ERROR
Failed to deploy configuration.
ERROR
To enable immutable config and application deployments together, you must set both the DeploymentPolicy option (aws:elasticbeanstalk:command namespace) and the RollingUpdateType option (aws:autoscaling:updatepolicy:rollingupdate namespace) to Immutable.
I have set both the DeploymentPolicy and RollingUpdateType options to Immutable, but it is still failing during deployment.
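For reference, the two options the error names map to an .ebextensions config along these lines (a minimal sketch; the file name is hypothetical, and both values must literally be Immutable):

# .ebextensions/immutable.config (hypothetical file name)
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable
  aws:autoscaling:updatepolicy:rollingupdate:
    RollingUpdateType: Immutable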

How to connect to JBoss EAP 7.3 using VisualVM in OpenShift

I am trying to connect to the application with VisualVM, but VisualVM is unable to connect to it. Below is the environment:
JBoss EAP 7.3
Java 11
OpenShift
I have tried to configure it in different ways, but all failed:
Config try 1:
Set a few system properties via a CLI script file, so that they are applied first (file contents below):
echo *** Adding system property for VisualVM ***
batch
/system-property=jboss.modules.system.pkgs:add(value="org.jboss.byteman,com.manageengine,org.jboss.logmanager")
/system-property=java.util.logging.manager:add(value="org.jboss.logmanager.LogManager")
run-batch
I can see that the above commands executed successfully and the properties are present in the JBoss config (I verified this using the JBoss CLI).
JAVA_TOOLS_OPTIONS: -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=3000 -Dcom.sun.management.jmxremote.rmi.port=3001 -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Xbootclasspath/a:/opt/eap/modules/system/layers/base/org/jboss/log4j/logmanager/main/log4j-jboss-logmanager-1.2.0.Final-redhat-00001.jar -Xbootclasspath/a:/opt/eap/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.1.14.Final-redhat-00001.jar
Result:
- java.lang.RuntimeException: WFLYCTL0079: Failed initializing module org.jboss.as.logging
- Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: WFLYLOG0078: The logging subsystem requires the log manager to be org.jboss.logmanager.LogManager. The subsystem has not be initialized and cannot be used. To use JBoss Log Manager you must add the system property "java.util.logging.manager" and set it to "org.jboss.logmanager.LogManager"
Config try 2:
JAVA_TOOL_OPTIONS= -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=3000 -Dcom.sun.management.jmxremote.rmi.port=3001 -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Djboss.modules.system.pkgs=org.jboss.byteman,org.jboss.logmanager -Xbootclasspath/a:/opt/eap/modules/system/layers/base/org/jboss/log4j/logmanager/main/log4j-jboss-logmanager-1.2.0.Final-redhat-00001.jar -Xbootclasspath/a:/opt/eap/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.1.14.Final-redhat-00001.jar
Result:
WARNING: Failed to instantiate LoggerFinder provider; Using default.
java.lang.IllegalStateException: The LogManager was not properly installed (you must set the "java.util.logging.manager" system property to "org.jboss.logmanager.LogManager")
Config try 3:
• Modified standalone.conf, putting all the required configuration in this file (sketched below).
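The additions were roughly along these lines (a sketch reconstructed from the same flags as Config try 2; the exact lines in my file may differ slightly):

# standalone.conf additions (reconstruction)
JAVA_OPTS="$JAVA_OPTS -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=3000 -Dcom.sun.management.jmxremote.rmi.port=3001"
JAVA_OPTS="$JAVA_OPTS -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
JAVA_OPTS="$JAVA_OPTS -Xbootclasspath/a:/opt/eap/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.1.14.Final-redhat-00001.jar"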
Result:
WARNING: Failed to instantiate LoggerFinder provider; Using default.
java.lang.IllegalStateException: The LogManager was not properly installed (you must set the "java.util.logging.manager" system property to "org.jboss.logmanager.LogManager")
Kindly suggest the correct configuration.

Spring Cloud Data Flow Kubernetes Deployment fails for external database - Cannot load driver class: com.mysql.cj.jdbc.Driver

I am following the developer guide to deploy SCDF to a minikube cluster on my local machine, using the Helm chart approach. I was able to get it working with the defaults, which deploy a MariaDB instance in the cluster. I wanted to change it to use an external MySQL DB running in a Docker container on my machine (outside the cluster), so I followed the recommendations to change values.yaml to enable the external DB and set the attributes for the external DB connection (URI, DB name, user/password, etc.).
Then I deployed using "helm install my-release -f values.yaml bitnami/spring-cloud-dataflow".
The SCDF pod (and the Skipper pod) errors out because it can't find the MySQL JDBC driver. kubectl logs on the pods shows the following error: "java.lang.IllegalStateException: Cannot load driver class: com.mysql.cj.jdbc.Driver"
How do I include the MySQL JDBC driver in the SCDF image that gets deployed (or otherwise resolve this problem)? I read that SCDF already includes the drivers for standard databases (true?). I'm new to Helm/K8s, so apologies if the solution is obvious. Other posts on similar errors all talk about including the driver in pom.xml, but this is not a dependency issue with my (task) app; it's with SCDF itself.
Thanks
-------------------- more detail on the exception stack ---------------
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataSource' defined in class path resource [org/springframework/boot/autoconfigure/jdbc/DataSourceConfiguration$Hikari.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.zaxxer.hikari.HikariDataSource]: Factory method 'dataSource' threw exception; nested exception is java.lang.IllegalStateException: Cannot load driver class: com.mysql.cj.jdbc.Driver
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:657)
...
It looks like I just needed to specify the MariaDB JDBC driver in values.yaml for MySQL as well; that got me past the class-load error.
I would still like to know how to prevent the class-load error when a different driver is specified while deploying SCDF to K8s via the Helm chart.
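For anyone else hitting this, the values.yaml change that got me past the error was roughly the following (a sketch; key names are assumptions based on the Bitnami chart and may differ across chart versions):

# values.yaml fragment (key names assumed; verify against your chart version)
mariadb:
  enabled: false                    # skip the in-cluster MariaDB
externalDatabase:
  host: host.minikube.internal      # hypothetical address of the external DB as seen from minikube
  port: 3306
  driver: org.mariadb.jdbc.Driver   # the MariaDB driver ships with SCDF and also speaks the MySQL protocol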

What can cause Flyway to not auto-detect migrations at startup, but then be able to do so at runtime?

Question:
What can cause Flyway to fail to auto-detect migrations from the default path, and to prevent resolution of migrations from a custom location, during startup only?
Given the following:
io.micronaut.flyway:micronaut-flyway uses Flyway 6.4.4
Flyway, when run at application startup by Micronaut, is unable to auto-detect migrations
Flyway, when run during bean initialization (i.e. in the constructor of the controller bean), is able to auto-detect migrations
Flyway is able to pick up and apply the migrations at startup during integration testing. This gives me confidence that it is configured correctly; I can break it in expected ways by messing with the config / file location.
The migration file is certainly on the classpath at runtime in production, at the expected location, as evidenced by the runtime logs.
Context
I want to set up Flyway migrations for my Kotlin Micronaut Google Cloud Function. As described in the docs, I have my migrations under src/main/resources/db/migration, named like V1__create_xyz_table.sql.
I verified that the migration is on the classpath at runtime by logging it in the function body:
val fileContent = FunctionController::class.java.getResource("/db/migration/V1__create_xyz_table.sql").readText()
println(fileContent) // "create table xyz(id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY)"
This works, and logs the contents of the file to stdout as expected.
My integration tests run fine. Migrations are automatically detected and applied to the MySQL Testcontainers instance. Data is written to and read from the dockerized DB.
However, when I start the application locally or deploy it, the application warns me:
No migrations found. Are your locations set up correctly?
Unsurprisingly, triggering the function results in errors like "Table xyz does not exist".
Besides the actual DB credentials, my test and production setups share the following config:
# application.yml
datasources:
  mysql:
    url: <url>
    username: <user>
    password: <pw>
flyway:
  datasources:
    mysql:
      enabled: true
Other things I have tried:
Using a Java-based migration (same result)
Using the custom locations config (same result; see the sketch after this list)
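The custom-locations attempt looked roughly like this (a sketch; locations is the micronaut-flyway property name as I understand it):

# application.yml fragment (sketch)
flyway:
  datasources:
    mysql:
      enabled: true
      locations: classpath:db/migration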
What "works":
When I autowire the datasource into the function controller and apply the migrations inside the constructor, it works: Successfully validated 1 migration.
init {
    Flyway.configure().dataSource(mysqlDS).load().migrate()
}
This confirms that all the necessary files are present and discoverable by Flyway. Why would this not work during application startup?
I attached a debugger and found that different ClassLoaders are used to discover the resources:
During startup: AppClassLoader
During function execution: FunctionClassLoader
I was having the same issue. While debugging, I realized that the method that triggers the Flyway migration wasn't running. This method lives inside a Micronaut BeanCreatedEventListener that listens for the creation of DataSource-type beans.
What left me scratching my head was that the DataSource bean was created successfully, which I confirmed at runtime by fetching it from the application context. So why wasn't the event listener triggering?
It turned out the bean was being created before the event listener was even initialized. Why? Because I had another custom event listener in my app that injected a Jdbi bean, and the Jdbi bean in turn injected the DataSource bean. My custom event listener was therefore forcing the DataSource bean to be created, making it impossible for the Flyway listener to be initialized before the bean existed.
I suggest setting a breakpoint in that method to check whether it's being triggered; if it's not, the cause of your issue may be similar to mine.
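If it helps, such a listener looks roughly like this (an illustrative sketch, not micronaut-flyway's actual class; imports assume a Micronaut 2.x-era setup):

import io.micronaut.context.event.BeanCreatedEvent
import io.micronaut.context.event.BeanCreatedEventListener
import org.flywaydb.core.Flyway
import javax.inject.Singleton
import javax.sql.DataSource

@Singleton
class FlywayMigrationListener : BeanCreatedEventListener<DataSource> {
    override fun onCreated(event: BeanCreatedEvent<DataSource>): DataSource {
        val dataSource = event.bean
        // Set a breakpoint here: if it never fires, the DataSource bean was
        // created before this listener was initialized.
        Flyway.configure().dataSource(dataSource).load().migrate()
        return dataSource
    }
}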

Cannot set configuration in Elastic Beanstalk

I have 4 Elastic Beanstalk deployments: 3 are Corretto 8 and the other one is Corretto 11.
On the Corretto 8 deployments, I can set new configuration without issue. On the Corretto 11 instance, however, any attempt to set a new configuration fails and causes a rollback.
The Corretto versions might not be the problem, but that's the only difference I can see. All 4 apps are Spring Boot apps that run as web servers (i.e. embedded Tomcat with exposed web ports). I am trying to set the exact same configuration name and value on each, and it only fails on the one instance.
The configuration I'm trying to set is pretty simple:
VALIDATE_RENEWALS = true
Even just trying to set DEBUG = true causes a failure and rollback.
I don't see a lot of information from the console about what's failing. Here is the event log:
2020-03-16 13:55:17 UTC-0600 INFO The environment was reverted to the previous configuration setting.
2020-03-16 13:54:45 UTC-0600 ERROR During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
2020-03-16 13:54:45 UTC-0600 ERROR Failed to deploy configuration.
2020-03-16 13:54:45 UTC-0600 ERROR Unsuccessful command execution on instance id(s) 'i-00553f4ac36afd327'. Aborting the operation.
2020-03-16 13:54:45 UTC-0600 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2020-03-16 13:54:45 UTC-0600 ERROR [Instance: i-00553f4ac36afd327] Command failed on instance. An unexpected error has occurred [ErrorCode: 0000000001].
2020-03-16 13:54:20 UTC-0600 INFO Updating environment XXX's configuration settings.
2020-03-16 13:54:15 UTC-0600 INFO Environment update is starting.
I've also downloaded the full set of logs for the instance and don't see anything obvious. The app's stdout doesn't contain any errors or exceptions; the app just starts normally and then gets terminated. None of the other log files have messages around the times above, so I'm really not sure what else to look at.
Edit
The times don't line up, but I do see this in the eb-engine.log file:
2020/03/16 17:54:38.508634 [INFO] checking whether command is applicable to this instance...
2020/03/16 17:54:38.508658 [INFO] this command is applicable to the instance, thus instance should execute command
2020/03/16 17:54:38.508665 [INFO] check whether this is an enhanced env...
2020/03/16 17:54:38.508794 [INFO] Executing instruction: StageJavaApplication
2020/03/16 17:54:38.508858 [ERROR] GetArchivedFileType with file /opt/elasticbeanstalk/deployment/app_source_bundle failed with error open /opt/elasticbeanstalk/deployment/app_source_bundle: no such file or directory
2020/03/16 17:54:38.508868 [ERROR] An error occurred during execution of command [config-deploy] - [StageJavaApplication]. Stop running the command. Error: staging java app failed with error GetArchivedFileType with file /opt/elasticbeanstalk/deployment/app_source_bundle failed with error open /opt/elasticbeanstalk/deployment/app_source_bundle: no such file or directory
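Per the event log's advice, re-deploying the current application version at least gets all instances back onto the same bundle; a sketch with the AWS CLI (names are placeholders):

# placeholders: substitute your environment name and application version label
aws elasticbeanstalk update-environment \
    --environment-name my-env \
    --version-label my-app-version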

Getting "Assemble script failed" error building application in Red Hat OpenShift

I'm new to Red Hat OpenShift and am trying to deploy a Node application (with AngularJS + MySQL), but I'm running into build issues.
Using the OpenShift console, I created the Node application, and in the advanced options I pointed it to the private repository and linked the configured secret (the SSH key for the private repository).
My build is failing with "Assemble script failed". The logs are pasted below (from the console; private keys and values obfuscated).
Not sure if I'm missing some configuration. I'd appreciate any help on this.
Cloning "ssh://username#bitbucket.org/username/my-app.git" ...
Commit: xxxxxxxxxxxxxxxxx (Fixed readme)
Author: Name <email>
Date: Wed Sep 6 19:50:59 2017 -0700
Pulling image "docker-
registry.default.svc:5000/openshift/nodejs#sha256:0000000000000" ...
---> Installing application source
---> Building your Node application from source
Current git config
url.https://github.com.insteadof=git@github.com:
url.https://.insteadof=ssh://
url.https://github.com.insteadof=ssh://git@github.com
core.repositoryformatversion=0
core.filemode=true
core.bare=false
core.logallrefupdates=true
remote.origin.url=ssh://username@bitbucket.org/username/my-app.git
remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
branch.master.remote=origin
branch.master.merge=refs/heads/master
---> Installing dependencies
---> Using 'npm install -s --only=production'
error: build error: non-zero (13) exit code from docker-registry.default.svc:5000/openshift/nodejs@sha256:0000000000000
Please note that my source code is hosted in a private repository, and per the log above it appears OpenShift is able to access the repository and download the source code.
Thanks to Graham for the pointer. I recreated the application, and this time in the advanced options on the web console I selected 1 GB of memory (up from the default of 500 MB), after which my build worked fine.
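For anyone who prefers to set this outside the web console, the equivalent change on the BuildConfig would look roughly like this (a sketch; the name is hypothetical and the right memory value depends on your cluster):

# BuildConfig fragment (sketch)
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app            # hypothetical name
spec:
  resources:
    limits:
      memory: 1Gi         # raise the build memory limit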