My Mule project has multiple flows, some of which have endpoints that may be offline at startup during testing. A failed endpoint in any one flow causes the entire Mule project to fail to deploy. The console logs the domain status as deployed but the application status as FAILED.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Starting app 'test' +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
. Root Exception was: Connection refused: connect. Type: class java.net.ConnectException
ERROR 2018-01-09 10:31:08,287 [main] org.mule.module.launcher.application.DefaultMuleApplication:
********************************************************************************
Message : Could not connect to broker URL: tcp://localhost:61616.
Reason: java.net.ConnectException: Connection refused: connect
JMS Code : null
*************************************************************
* Application "test" shut down normally on: 1/9/18 10:31 AM *
* Up for: 0 days, 0 hours, 0 mins, 1.449 sec *
*************************************************************
ERROR 2018-01-09 10:31:08,413 [main] org.mule.module.launcher.DefaultArchiveDeployer:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Failed to deploy artifact 'test', see below +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
org.mule.module.launcher.DeploymentStartException: ConnectException: Connection refused: connect
I have tried setting initialState="stopped" on the flows that could have startup connection issues, but it has no effect: the project still fails to deploy and no flows run.
I added a Catch Exception Strategy to the inbound endpoints that can fail at startup, to no avail, and also tried an Until Successful scope in the flow.
In particular, I have some JMS and web service components which may be offline at different times during development and testing. I want to configure the flows so that the overall project continues to deploy even if a single component/flow fails to connect at startup; in other words, to manage a single project with multiple flows where some flows may not be active.
Environment: Anypoint Studio and Mule 3.9.0 EE.
If you would like your deployment to succeed even when your service is not available, you will need to supply a reconnection strategy on the JMS Connector with blocking=false. For example:
<jms:activemq-connector name="Active_MQ" username="a" password="b" brokerURL="tcp://localhost:61616" validateConnections="true" doc:name="Active MQ">
<reconnect-forever blocking="false"/>
</jms:activemq-connector>
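For context, here is a minimal flow sketch (flow and queue names are illustrative, not from the original project) showing the connector above referenced from a JMS inbound endpoint:
<flow name="testFlow">
    <!-- Uses the Active_MQ connector defined above; with the non-blocking
         reconnect-forever strategy the application still deploys even if
         the broker is down at startup -->
    <jms:inbound-endpoint queue="myQueue" connector-ref="Active_MQ" doc:name="JMS"/>
    <logger message="#[payload]" level="INFO" doc:name="Logger"/>
</flow>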
More information on reconnection strategies can be found in the MuleSoft documentation: https://docs.mulesoft.com/mule-user-guide/v/3.9/configuring-reconnection-strategies
Getting an SSL handshake error when connecting from WSO2 to MySQL (MySQL is a SaaS hosted in Azure).
Steps I have taken to connect to MySQL 5.7 from WSO2 API Manager 2.6.0:
1) I added https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem to client-truststore.jks (see the keytool sketch after these steps).
The link from which I got the .crt is https://learn.microsoft.com/en-us/azure/mysql/howto-configure-ssl
2) I have updated the master-datasources.xml file
<url>jdbc:mysql://***.mysql.database.azure.com:3306/regdb?verifyServerCertificate=false&amp;useSSL=true&amp;requireSSL=false</url>
<username>***</username>
<password>***</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
3) Created the schema and the required tables as well
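For reference, the import in step 1 was done roughly as follows (a sketch; the alias is arbitrary and wso2carbon is assumed to be the default truststore password):
keytool -import -alias azure-mysql-ca \
    -file BaltimoreCyberTrustRoot.crt.pem \
    -keystore <APIM_HOME>/repository/resources/security/client-truststore.jks \
    -storepass wso2carbon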
Below is the error log when starting the WSO2 server:
Start Level Event Dispatcher, received EOFException: error
Start Level Event Dispatcher, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
Start Level Event Dispatcher, SEND TLSv1.2 ALERT: fatal, description = handshake_failure
Start Level Event Dispatcher, WRITE: TLSv1.2 Alert, length = 2
Start Level Event Dispatcher, Exception sending alert: java.net.SocketException: Broken pipe (Write failed)
I have a previously working Spring MVC project using Spring Boot, Spring JPA, and Spring Data. My DBA updated the security on the MySQL server to accept only TLSv1.1 and TLSv1.2, and now my Spring MVC website won't connect to it. The website died the minute they enabled TLS 1.2, so their security change is now my problem.
Please note that my dev box is entirely isolated. I cannot copy and paste many lines of code and have to hand type it.
First, I tried to enable SSL via the JDBC string:
jdbc:mysql://host:3306/bdd_name?useUnicode=true&characterEncoding=utf8&useSSL=true&requireSSL=true
Second, I tried adding more information (per this example):
jdbc:mysql://example.com:3306/MYDB?verifyServerCertificate=true&useSSL=true&requireSSL=true&clientCertificateKeyStoreUrl=file:cert/keystore.jks&clientCertificateKeyStorePassword=123456&trustCertificateKeyStoreUrl=file:cert/truststore.jks&trustCertificateKeyStorePassword=123456
Third, I tried adding the Java security system properties:
-Djavax.net.ssl.keyStore=/path/to/keystore/keystore.jks
-Djavax.net.ssl.keyStorePassword=password
-Djavax.net.ssl.trustStore=/path/to/keystore/truststore.jks
-Djavax.net.ssl.trustStorePassword=password
Fourth, I tried adding Spring properties:
server.ssl.enabled=true
server.ssl.enabled-protocols=TLSv1.2
After many attempts to get this resolved, I have a setup with 2 and 4 in place. The JDBC connection property resolves, finds my key files, and unlocks them, so the URL change (2) does work.
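In other words, with 2 and 4 in place the configuration looks roughly like this sketch (assuming a standard Spring Boot application.properties; host, store paths, and passwords are placeholders):
# Attempt 2: SSL parameters on the JDBC URL
spring.datasource.url=jdbc:mysql://example.com:3306/MYDB?verifyServerCertificate=true&useSSL=true&requireSSL=true&clientCertificateKeyStoreUrl=file:cert/keystore.jks&clientCertificateKeyStorePassword=123456&trustCertificateKeyStoreUrl=file:cert/truststore.jks&trustCertificateKeyStorePassword=123456
# Attempt 4: server SSL properties
server.ssl.enabled=true
server.ssl.enabled-protocols=TLSv1.2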
So, after all this, I still get:
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
Caused by: Received fatal alert: handshake_failure
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
I had an OpenShift 2 starter account where my application was running.
OpenShift 2 has been shut down and I received mail telling me to migrate it to 3,
but I don't have a backup of the application.
I am getting the following errors.
When I run rhc save-snapshot myapp I get the following error:
Error in trying to save snapshot. You can try to save manually by running:
ssh 54f03dbd4382ec9101000159@myapp-myapps.rhcloud.com 'snapshot' > myapp.tar.gz
If I try to ssh to the application, the connection is closed.
ssh 54f03dbd4382ec9101000159@myapp-myapps.rhcloud.com
Connection to myapp-myapps.rhcloud.com closed.
If I try to restart the application from the console, I get this error:
could not open session
could not open session
could not open session
Failed to execute: 'control restart' for /var/lib/openshift/54f03dbd4382ec9101000159/mysql
Failed to execute: 'control restart' for /var/lib/openshift/54f03dbd4382ec9101000159/phpmyadmin
Failed to execute: 'control restart' for /var/lib/openshift/54f03dbd4382ec9101000159/php
EDIT: I get the following error in the browser when I try to open my site.
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /.
Reason: Error reading from remote server
Apache/2.2.15 (Red Hat) Server at www.mydomain.com Port 80
Need your suggestions. Thanks.
There is a new post on the OpenShift blog:
Updated October 3, 2017
We understand how important your data is, and
we have made a one-time exception to allow you to access your
OpenShift Online v2 data. You have until October 5, 2017 at 4:00 PM
UTC to perform a backup of your application. If you have not used it
before, you can download the rhc tool here.
So you can perform your backup until 2017/10/05.
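The backup itself is the same command as in the question (or the manual ssh fallback that rhc suggests):
rhc save-snapshot myapp
# or manually:
ssh 54f03dbd4382ec9101000159@myapp-myapps.rhcloud.com 'snapshot' > myapp.tar.gz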
Reading around (I don't remember exactly where), I found that the paid accounts keep working until December 31st, so I upgraded to bronze and was able to restart the service and back it up. I don't know whether that was because of the upgrade or because some issue was fixed.
SnappyData v0.5
My goal is to start a "spark-shell" from my SnappyData install's /bin directory and issue Scala commands against existing tables in my SnappyData store.
I am on the same host as my SnappyData store, locator, and lead (and yes, they are all running).
To do this, I am running this command as per the documentation here:
Connecting to a Cluster with spark-shell
~/snappydata/bin$ spark-shell --master local[*] --conf snappydata.store.locators=10.0.18.66:1527 --conf spark.ui.port=4041
I get this error trying to connect a spark-shell to my store:
[TRACE 2016/08/12 15:21:55.183 UTC GFXD:error:FabricServiceAPI tid=0x1] XJ040 error occurred while starting server :
java.sql.SQLException(XJ040): Failed to start database 'snappydata', see the cause for details.
java.sql.SQLException(XJ040): Failed to start database 'snappydata', see the cause for details.
at com.pivotal.gemfirexd.internal.impl.jdbc.SQLExceptionFactory40.getSQLException(SQLExceptionFactory40.java:124)
at com.pivotal.gemfirexd.internal.impl.jdbc.Util.newEmbedSQLException(Util.java:110)
at com.pivotal.gemfirexd.internal.impl.jdbc.Util.newEmbedSQLException(Util.java:136)
at com.pivotal.gemfirexd.internal.impl.jdbc.Util.generateCsSQLException(Util.java:245)
at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedConnection.bootDatabase(EmbedConnection.java:3380)
at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedConnection.<init>(EmbedConnection.java:450)
at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedConnection30.<init>(EmbedConnection30.java:94)
at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedConnection40.<init>(EmbedConnection40.java:75)
at com.pivotal.gemfirexd.internal.jdbc.Driver40.getNewEmbedConnection(Driver40.java:95)
at com.pivotal.gemfirexd.internal.jdbc.InternalDriver.connect(InternalDriver.java:351)
at com.pivotal.gemfirexd.internal.jdbc.InternalDriver.connect(InternalDriver.java:219)
at com.pivotal.gemfirexd.internal.jdbc.InternalDriver.connect(InternalDriver.java:195)
at com.pivotal.gemfirexd.internal.jdbc.AutoloadedDriver.connect(AutoloadedDriver.java:141)
at com.pivotal.gemfirexd.internal.engine.fabricservice.FabricServiceImpl.startImpl(FabricServiceImpl.java:290)
at com.pivotal.gemfirexd.internal.engine.fabricservice.FabricServerImpl.start(FabricServerImpl.java:60)
at io.snappydata.impl.ServerImpl.start(ServerImpl.scala:32)
Caused by: com.gemstone.gemfire.GemFireConfigException: Unable to contact a Locator service (timeout=5000ms). Operation either timed out or Locator does not exist. Configured list of locators is "[dev-snappydata-1(null):1527]".
at com.gemstone.gemfire.distributed.internal.membership.jgroup.GFJGBasicAdapter.getGemFireConfigException(GFJGBasicAdapter.java:533)
at com.gemstone.org.jgroups.protocols.TCPGOSSIP.sendGetMembersRequest(TCPGOSSIP.java:212)
at com.gemstone.org.jgroups.protocols.PingSender.run(PingSender.java:82)
at java.lang.Thread.run(Thread.java:745)
Hmm! I assume you are running spark-shell from your desktop and connecting to the cluster in AWS?
I'm not sure this is going to work, because the local JVM launched by spark-shell will attempt to connect to the p2p cluster in SnappyData, which is not likely to succeed.
snappy-shell, on the other hand, merely uses the JDBC client to connect (and hence will work).
And you cannot use the locator client port (1527) anyway. See here.
Can you try with snappydata.store.locators=10.0.18.66:10334 (not 1527) as the port? It's unlikely this will work, but it's worth a try.
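For clarity, that is the original command with only the locator port changed:
~/snappydata/bin$ spark-shell --master local[*] --conf snappydata.store.locators=10.0.18.66:10334 --conf spark.ui.port=4041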
Maybe there is a way to open up all ports and access to these nodes on AWS. Not recommended for production, though.
I am curious to see other responses from the engineering team.
Until then, you may have to start the spark-shell from within the network (AWS node).
I have a Java EE application running in GlassFish on EC2, with a MySQL database on Amazon RDS.
I am trying to configure the JDBC connection pool in order to minimize downtime in case of a database failover.
My current configuration isn't working correctly during a Multi-AZ failover, as the standby database instance appears to be available in a couple of minutes (according to the AWS console) while my GlassFish instance remains stuck for a long time (about 15 minutes) before resuming work.
The connection pool is configured like this:
asadmin create-jdbc-connection-pool --restype javax.sql.ConnectionPoolDataSource \
--datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource \
--isconnectvalidatereq=true --validateatmostonceperiod=60 --validationmethod=auto-commit \
--property user=$DBUSER:password=$DBPASS:databaseName=$DBNAME:serverName=$DBHOST:port=$DBPORT \
MyPool
If I use a Single-AZ db.m1.small instance and reboot the database from the console, GlassFish will invalidate the broken connections, throw some exceptions, and then reconnect as soon as the database is available. In this setup I get less than 1 minute of downtime.
If I use a Multi-AZ db.m1.small instance and reboot with failover from the AWS console, I see no exception at all. The server halts completely, with all incoming requests timing out. After 15 minutes I finally get this:
Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 940,715 milliseconds ago. The last packet sent successfully to the server was 935,598 milliseconds ago.
It appears as if each HTTP thread gets blocked on an invalid connection without getting an exception and so there's no chance to perform connection validation.
Downtime in the Multi-AZ case is always between 15-16 minutes, so it looks like a timeout of some sort but I was unable to change it.
Things I have tried without success:
connection leak timeout/reclaim
statement leak timeout/reclaim
statement timeout
using a different validation method
using MysqlDataSource instead of MysqlConnectionPoolDataSource
How can I set a timeout on stuck queries so that connections in the pool are reused, validated and replaced?
Or how can I let GlassFish detect a database failover?
As I commented before, this happens because the sockets that are open and connected to the database don't realize the connection has been lost, so they stay connected until the OS socket timeout is triggered, which I have read is usually around 30 minutes.
To solve the issue, you need to override the socket timeout in your JDBC connection string, or in the JNDI connection configuration/properties, by setting the socketTimeout parameter to a smaller value.
Keep in mind that any connection that stays open longer than the defined value will be killed, even if it is being used (I haven't been able to confirm this; it's what I read).
The other two parameters I mention in my comment are connectTimeout and autoReconnect.
Here's my JDBC Connection String:
jdbc:(...)&connectTimeout=15000&socketTimeout=60000&autoReconnect=true
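Since the pool in the question is created with asadmin rather than a raw JDBC URL, the same parameters can presumably be appended as pool properties; a sketch based on the original create-jdbc-connection-pool command (timeout values illustrative):
asadmin create-jdbc-connection-pool --restype javax.sql.ConnectionPoolDataSource \
    --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource \
    --isconnectvalidatereq=true --validateatmostonceperiod=60 --validationmethod=auto-commit \
    --property user=$DBUSER:password=$DBPASS:databaseName=$DBNAME:serverName=$DBHOST:port=$DBPORT:connectTimeout=15000:socketTimeout=60000:autoReconnect=true \
    MyPool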
I also disabled Java's DNS cache by doing
java.security.Security.setProperty("networkaddress.cache.ttl" , "0");
java.security.Security.setProperty("networkaddress.cache.negative.ttl" , "0");
I do this because Java doesn't honor the DNS TTLs, and when the failover takes place the hostname stays the same but the IP changes.
Since you are using an application server, the parameters to disable the DNS cache must be passed to the JVM when starting GlassFish (with -Dnet), and not set in the application itself.
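One way to pass such options at the server level is asadmin create-jvm-options; a sketch, assuming the sun.net.inetaddr.ttl system properties as the JVM-level equivalent (these property names are my assumption, not from the original answer):
# Sketch: disable JVM-level DNS caching (property names assumed; restart the domain afterwards)
asadmin create-jvm-options -Dsun.net.inetaddr.ttl=0
asadmin create-jvm-options -Dsun.net.inetaddr.negative.ttl=0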