OpenShift 4 readiness probe failed

I'm using the Thorntail MicroProfile framework to add health monitoring to a simple REST service. The application deploys fine on OpenShift, but the health monitoring does not work; the probe fails with this message:
Readiness probe failed: Get http://10.116.0.57:8080/health/live: dial tcp 10.116.0.57:8080: connect: connection refused
However, I can access the health service through the route URL, e.g. http://thorntail-myproject.apps-crc.testing/health/live, and get this result:
{"status":"UP","checks":[{"name":"server-state","status":"UP"}]}
Both the Liveness and Readiness annotations are present on my HealthCheck implementation class (sketched below). I also get a response when I curl the health endpoint from the pod's remote container shell.
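For reference, the implementation looks roughly like this (a simplified sketch; the real class name differs):
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;
import org.eclipse.microprofile.health.Readiness;

@Liveness
@Readiness
@ApplicationScoped
public class ServerStateHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // Reported as the "server-state" check in the /health/live payload above
        return HealthCheckResponse.named("server-state").up().build();
    }
}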
These are the dependencies I'm using in pom.xml:
<dependencies>
  <dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>jaxrs</artifactId>
  </dependency>
  <dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>microprofile-health</artifactId>
  </dependency>
</dependencies>
Any ideas?

The problem could be caused by many things, but here are some checks to try:
Verify that the Service object for your Deployment/DeploymentConfig is selecting the correct pods and targeting the correct ports.
Verify that the Route/Ingress objects point to the correct Service object.
The two things above seem correct, since you can access the route URL, but we don't know how your deployments are set up or how many you have.
Verify that your liveness and readiness probes are hitting the correct port, path (there might be a typo somewhere), and protocol (HTTP vs. HTTPS).
If all of the above is correct, check whether additional NetworkPolicies in your namespace are blocking the probes; a correctly wired probe pair is sketched below.
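A sketch of such a probe pair (it assumes the container really listens on port 8080 over plain HTTP; adjust paths and ports to your deployment):
livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 15
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 15
  periodSeconds: 10
Note that the readiness probe in your error message points at /health/live; with MicroProfile Health 2.x, /health/ready is conventionally the readiness path, so double-check which endpoint each probe targets. A "connection refused" can also simply mean the probe fires before the server socket is open, which a larger initialDelaySeconds would rule out.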

Related

Multiple ingress controllers in Kubernetes

I have a microservice architecture running on a bare-metal Kubernetes cluster. We have two main services: one is to be exposed publicly, while the other should only be available internally. I'm using ingress-nginx to expose the internal service, but now I have to expose the other service as well, so I thought of deploying another ingress controller for it.
When I try to deploy the second ingress controller in a different namespace, I get an error like:
Error: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:ingress-nginx:ingress-nginx" cannot list resource "endpoints" in API group "" at the cluster scope
and my first ingress also stops working properly.
The ingress deployment YAML I'm using is:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/baremetal/deploy.yaml
The second ingress YAML, which I'm using in another namespace, is: https://github.com/wali97/second-ingress-controller.yaml/blob/main/ingress.yaml
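For what it's worth, that static deploy.yaml hardcodes the ingress-nginx namespace and the names of cluster-scoped RBAC objects (ClusterRole, ClusterRoleBinding). Applying a second, lightly edited copy in another namespace repoints those bindings at the new ServiceAccount, which is consistent with the first controller suddenly being forbidden to list endpoints. A second controller instance generally needs its own RBAC object names plus its own ingress class and election ID; a sketch of what must differ (all names here are illustrative):
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal
spec:
  controller: k8s.io/ingress-nginx-internal
---
# container args of the second controller's Deployment
args:
  - /nginx-ingress-controller
  - --election-id=ingress-nginx-leader-internal
  - --controller-class=k8s.io/ingress-nginx-internal
  - --ingress-class=nginx-internal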

GCP deployment fails on "Updating service"

I have an ASP.NET Core application hosted on GCP App Engine. When I try to deploy the application, it fails on the last step:
Updating service [name] (this may take several minutes)... ...failed
ERROR: (gcloud.app.deploy) Error Response: [9] An internal error occurred while processing task /app-engine-flex/flex_await_healthy/flex_await_healthy>blablabla.wm.1
The exception stack trace shows that a service running in the background couldn't find a MySQL table (the table obviously exists).
My app.yaml file:
service: XXX
runtime: custom
env: flex
automatic_scaling:
  max_concurrent_requests: 80
  min_num_instances: 1
  max_num_instances: 1
resources:
  cpu: XXX
  memory_gb: XXX
beta_settings:
  cloud_sql_instances: "XXX:XXXX:XXXX=tcp:3306"
It looks like the application is deployed properly despite the error. This is the only error, and the background service doesn't throw any exceptions at any later point. In fact, it works properly and can connect to the database.
My guess was that GCP might be checking health while the application is not yet connected to the database. So I tried adding liveness_check and readiness_check to app.yaml, with a dedicated /healthcheck endpoint in my application, but it didn't change anything; what I tried is sketched below.
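The health-check configuration I tried looked roughly like this (values are from memory and illustrative):
liveness_check:
  path: "/healthcheck"
  check_interval_sec: 30
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
readiness_check:
  path: "/healthcheck"
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 300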
Any ideas how to fix this, and what might be the cause?
Deploying the app with a new version fixed the issue.

JHipster-generated Spring Boot + MySQL project config for spatial queries

My goal is to add the capability for geospatial queries to my JHipster-generated Spring Boot + MySQL project, but I have failed to properly configure the H2 database used by my tests and by my dev profile for local deployments of the app. Since we have a strict CI/CD pipeline, this means I have not been able to test in prod yet, but I suspect I'd run into the same error there too. The error I get when performing a spatial query in a test or dev environment: org.h2.jdbc.JdbcSQLSyntaxErrorException: Function "WITHIN" not found;.
There are a number of posts and guides addressing this issue, but they have not resolved the problem for me. I have followed the tutorial here, the helpful documentation here, and have tried the solutions/suggestions in post 1, post 2, post 3, post 4, and several others. I also compared my code to this example project. But I am still unable to get past this error.
Relevant config...
pom.xml:
...
<java.version>1.8</java.version>
<spring-boot.version>2.1.6.RELEASE</spring-boot.version>
<spring.version>5.1.8.RELEASE</spring.version>
<hibernate.version>5.3.10.Final</hibernate.version>
<h2.version>1.4.199</h2.version>
<jts.version>1.13</jts.version>
...
<repositories>
  <repository>
    <id>OSGEO GeoTools repo</id>
    <url>http://download.osgeo.org/webdav/geotools</url>
  </repository>
  <repository>
    <id>Hibernate Spatial repo</id>
    <url>http://www.hibernatespatial.org/repository</url>
  </repository>
</repositories>
...
<dependencies>
  <dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
  </dependency>
  <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-spatial</artifactId>
  </dependency>
  <dependency>
    <groupId>com.vividsolutions</groupId>
    <artifactId>jts</artifactId>
    <version>${jts.version}</version>
  </dependency>
</dependencies>
My main application.yml:
spring:
  jpa:
    open-in-view: false
    properties:
      hibernate.jdbc.time_zone: UTC
    hibernate:
      dialect: org.hibernate.spatial.dialect.mysql.MySQL56SpatialDialect
      ddl-auto: none
My application-dev.yml for my dev environment:
spring:
  h2:
    console:
      enabled: false
  jpa:
    database-platform: org.hibernate.spatial.dialect.h2geodb.GeoDBDialect
    database: H2
    show-sql: true
    hibernate:
      dialect: org.hibernate.spatial.dialect.h2geodb.GeoDBDialect
My application-prod.yml for prod:
spring:
  jpa:
    database-platform: org.hibernate.spatial.dialect.mysql.MySQL56SpatialDialect
    database: MYSQL
    show-sql: false
My test/application.yml:
spring:
  jpa:
    database-platform: org.hibernate.spatial.dialect.h2geodb.GeoDBDialect
    database: H2
    open-in-view: false
    show-sql: false
    hibernate:
      dialect: org.hibernate.spatial.dialect.h2geodb.GeoDBDialect
      ddl-auto: none
Relevant code in service layer:
@Override
@Transactional(readOnly = true)
public Page<MyObject> findAllWithinDistanceOfLocation(Float distance, Point location, Pageable pageable) {
    log.debug("Request to get all MyObject within a distance centered on location");
    GeometricShapeFactory shapeFactory = new GeometricShapeFactory();
    shapeFactory.setNumPoints(32); // number of points used to define the circle; default is 100, and the higher the number, the more accurately the circle is drawn
    shapeFactory.setCentre(location.getCoordinate());
    shapeFactory.setSize(distance * 2);
    Geometry areaOfInterest = shapeFactory.createCircle();
    return myObjectRepository.findAllWithinCircle(areaOfInterest, pageable);
}
Relevant code in repository:
#Query("select e from MyObjectTable e where within(e.location, :areaOfInterest) = true")
Page<MyObject> findAllWithinCircle(#Param("areaOfInterest") Geometry areaOfInterest, Pageable pageable);
Relevant code in database config bean:
/**
 * Open the TCP port for the H2 database, so it is available remotely.
 *
 * @return the H2 database TCP server.
 * @throws SQLException if the server failed to start.
 */
@Bean(initMethod = "start", destroyMethod = "stop")
@Profile(JHipsterConstants.SPRING_PROFILE_DEVELOPMENT)
public Object h2TCPServer() throws SQLException {
    String port = getValidPortForH2();
    log.debug("H2 database is available on port {}", port);
    return H2ConfigurationHelper.createServer(port);
}
private String getValidPortForH2() {
    int port = Integer.parseInt(env.getProperty("server.port"));
    if (port < 10000) {
        port = 10000 + port;
    } else {
        if (port < 63536) {
            port = port + 2000;
        } else {
            port = port - 2000;
        }
    }
    return String.valueOf(port);
}
I've tried different values for the properties above, trying to do so in a principled way based on the documentation and other projects, but I can't seem to get this working. I suspect I am missing an H2 initial configuration command that creates an alias for WITHIN, but I still have not been able to grok it and get this working; my best guess is sketched below.
Note: I've tried both including and excluding the pom file's repositories section above, to no effect.
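From what I've gathered, that initialization would look roughly like the following (my own sketch, assuming the geodb artifact that backs GeoDBDialect is on the classpath), although wiring it in has not resolved the error for me:
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

import geodb.GeoDB;

public class GeoDbInitializer {

    // Registers GeoDB's spatial functions on a fresh H2 database
    // so that spatial predicates can resolve.
    public static void init(DataSource dataSource) throws SQLException {
        try (Connection connection = dataSource.getConnection()) {
            GeoDB.InitGeoDB(connection);
        }
    }
}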
I went down this path for spatial PostgreSQL and it was painful: CI did not catch bugs until we decided to give up H2.
I would recommend using the same database in dev and prod, via Docker and Testcontainers (a sketch follows below). JHipster supports this, but it's easy to set up yourself too.
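A minimal sketch of such a test, assuming the org.testcontainers mysql and junit-jupiter artifacts are on the test classpath (class and method names are illustrative):
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.MySQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class MySqlSpatialIT {

    // A throwaway MySQL instance, started in Docker for the duration of this test class.
    @Container
    private static final MySQLContainer<?> MYSQL = new MySQLContainer<>("mysql:5.7");

    @Test
    void mySqlIsReachable() {
        // In a real test, point spring.datasource.url at MYSQL.getJdbcUrl()
        // and run the spatial repository queries against it.
        Assertions.assertTrue(MYSQL.isRunning());
    }
}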
For those who want to know how we resolved this...
The problem: We had a Heroku CI/CD pipeline that did not support test containers, as stated here: https://devcenter.heroku.com/articles/heroku-ci#docker-deploys
To quote the documentation: "Currently, it is not possible to use Heroku CI to test container builds."
Compounding this problem, H2's support for spatial queries was too problematic: it gave different results than a native MySQL database and posed a myriad of dialect-related problems, as outlined in the original post.
The not-ideal but workable solution was a combination of a development-process workaround and some standard testing practices.
First, we created a test-containers profile under which ./mvnw verify would run the geospatial integration tests. The Heroku CI/CD pipeline did not run those tests, but we made running them locally part of our definition of done.
To make this less fragile and error-prone, we also followed the typical unit-testing strategy: mock the repositories that employ geospatial queries and exercise the business logic in unit tests. These ran in the CI/CD pipeline.
The next step will be to migrate the CI/CD pipeline to one that supports containers. But in the meantime, the above approach gave us enough overlapping coverage to feel confident promoting the geospatial-based features to prod. After several months of being stress-tested with feature enhancements and extensions, things seem to have worked well from a product point of view.

How to upload a file to OCI Object storage

I am trying to use the UploadObjectExample.java code to upload a file to OCI Object Storage. I am running into a connection timeout while connecting to the Object Storage URL. The same config file is used by the OCI CLI to successfully upload files to OCI Object Storage.
Here is the Error log:
Exception in thread "main" com.oracle.bmc.model.BmcException: (-1, null, true) Timed out while communicating to: https://objectstorage.us-ashburn-1.oraclecloud.com (outbound opc-request-id: 1EB5AA4A7FD64D58A54F876AD0C9E83B)
at com.oracle.bmc.http.internal.RestClient.convertToBmcException(RestClient.java:572)
at com.oracle.bmc.http.internal.RestClient.put(RestClient.java:380)
at com.oracle.bmc.objectstorage.ObjectStorageClient.putObject(ObjectStorageClient.java:1053)
at com.oracle.bmc.objectstorage.transfer.internal.SimpleRetry$1.apply(SimpleRetry.java:34)
at com.oracle.bmc.objectstorage.transfer.internal.SimpleRetry$1.apply(SimpleRetry.java:26)
at com.oracle.bmc.objectstorage.transfer.UploadManager.singleUpload(UploadManager.java:111)
at com.oracle.bmc.objectstorage.transfer.UploadManager.upload(UploadManager.java:73)
at UploadObjectExample.main(UploadObjectExample.java:74)
Caused by: javax.ws.rs.ProcessingException: java.net.SocketTimeoutException: connect timed out
at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:284)
at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:278)
at org.glassfish.jersey.client.JerseyInvocation.lambda$invoke$0(JerseyInvocation.java:753)
at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
at org.glassfish.jersey.internal.Errors.process(Errors.java:229)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:414)
at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:752)
at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:445)
at org.glassfish.jersey.client.JerseyInvocation$Builder.put(JerseyInvocation.java:334)
at com.oracle.bmc.http.internal.ForwardingInvocationBuilder.put(ForwardingInvocationBuilder.java:141)
at com.oracle.bmc.http.internal.RestClient.put(RestClient.java:377)
Please test curl -v https://objectstorage.us-ashburn-1.oraclecloud.com from the same machine where the Java client times out, just to make sure there are no connectivity issues. If that works fine, you may try to increase the timeout values in ClientConfiguration, as sketched below. You can see more details here: https://github.com/oracle/oci-java-sdk/issues/92
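A sketch of that change, against the older SDK API that your stack trace suggests (builder and constructor names may differ between SDK versions, and the timeout values are illustrative):
import java.io.IOException;

import com.oracle.bmc.ClientConfiguration;
import com.oracle.bmc.ConfigFileReader;
import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
import com.oracle.bmc.objectstorage.ObjectStorageClient;

public class ObjectStorageClientFactory {

    // Builds an Object Storage client from the default ~/.oci/config profile,
    // with more generous timeouts than the SDK defaults.
    public static ObjectStorageClient create() throws IOException {
        ConfigFileAuthenticationDetailsProvider provider =
                new ConfigFileAuthenticationDetailsProvider(ConfigFileReader.parseDefault());

        ClientConfiguration configuration = ClientConfiguration.builder()
                .connectionTimeoutMillis(30_000)
                .readTimeoutMillis(120_000)
                .build();

        return new ObjectStorageClient(provider, configuration);
    }
}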
Before creating a support ticket, you might also try creating a new issue on github/oci-java-sdk.
Without knowing more about the config file (I do not suggest you post it here), your home region, and other details, it is very hard to help.
I would suggest you open a support ticket at https://support.oracle.com, making sure that you select the Cloud tab and the Service as "Oracle Cloud Infrastructure".
Are you using a proxy? If so, you may need to use the OCI Java SDK ApacheConnector.
This was an issue with the proxy. This was resolved by using the ash7 proxy.

Hono adapters cannot connect to EnMasse

I'm currently installing Hono together with EnMasse on top of OpenShift/OKD. Everything goes fine except for the connection between the adapters and EnMasse. When I deploy the AMQP adapter, for example (the same happens with the HTTP and MQTT adapters), I get the following logging from the Hono adapter:
12:25:45.404 [vert.x-eventloop-thread-0] DEBUG o.e.hono.client.impl.HonoClientImpl - starting attempt [#5] to connect to server [messaging-hono-default.enmasse-infra.svc:5672]
12:25:45.404 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - connecting to AMQP 1.0 container [amqp://messaging-hono-default.enmasse-infra.svc:5672]
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - can't connect to AMQP 1.0 container [amqp://messaging-hono-default.enmasse-infra.svc:5672]: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.hono.client.impl.HonoClientImpl - connection attempt failed
io.netty.channel.ConnectTimeoutException: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:125)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - can't connect to AMQP 1.0 container [amqp://messaging-hono-default.enmasse-infra.svc:5672]: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.hono.client.impl.HonoClientImpl - connection attempt failed
io.netty.channel.ConnectTimeoutException: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:125)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
EnMasse logs the following:
2019-01-07 12:36:24.962160 +0000 SERVER (info) [160]: Accepted connection to 0.0.0.0:5672 from 10.128.0.1:44664
2019-01-07 12:36:24.962258 +0000 SERVER (info) [160]: Connection from 10.128.0.1:44664 (to 0.0.0.0:5672) failed: amqp:connection:framing-error No valid protocol header found
Additional info:
Hono version: 0.8.x
Enmasse version: 0.24.1
Can somebody tell me what I'm missing?
Thanks!
PS: if somebody with enough reputation could add a new "enmasse" tag, that would be nice.
I've found the solution to this problem.
First of all: the framing errors are not incoming connections from Hono. I already see this logging when EnMasse is installed without Hono, and I don't know where they come from; if somebody has an idea, please tell me.
As for the real problem: it seems I needed to allow communication between the two projects (enmasse-infra and hono). This is covered in the OpenShift documentation.
TLDR
Used solution: oc adm pod-network make-projects-global enmasse-infra. I used this because the EnMasse infrastructure needs to be reachable from all projects (including hono, but also ditto and our custom backend application).
Should also work (not tested): oc adm pod-network join-projects --to=enmasse-infra hono
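Note that the oc adm pod-network commands only apply when the cluster uses the multitenant SDN plugin. On clusters isolated via network policies instead, the equivalent would be a NetworkPolicy in enmasse-infra along these lines (a sketch; it assumes the hono namespace is labeled name=hono):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-hono-to-enmasse
  namespace: enmasse-infra
spec:
  podSelector: {}    # all pods in enmasse-infra
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: hono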