Deploy Spring Cloud Data Flow 2.6.0 to OpenShift

I'm trying to deploy SCDF 2.6.0 to OpenShift.
I can verify that the DB schema is updated successfully, but it seems Tomcat fails to start with the error below, and I have no idea what is going on.
Caused by: java.lang.IllegalArgumentException: standardService.connector.startFailed
Caused by: org.apache.catalina.LifecycleException: Protocol handler start failed
Caused by: java.net.SocketException: Permission denied
Steps to reproduce
Use MariaDB, then import the *.yaml files in the following sequence (for example, applied with oc apply as sketched after the list):
server-roles.yaml
server-rolebinding.yaml
service-account.yaml
server-config.yaml (make sure to change the DB connection here)
server-svc.yaml
server-deployment.yaml
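A minimal sketch of that import step using the OpenShift CLI, applied in whatever project you created for SCDF (creating the project itself is not shown here):
oc apply -f server-roles.yaml
oc apply -f server-rolebinding.yaml
oc apply -f service-account.yaml
oc apply -f server-config.yaml      # DB connection changed to point at MariaDB
oc apply -f server-svc.yaml
oc apply -f server-deployment.yaml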
I uploaded all the YAML files and the full log file to my repo:
https://github.com/gry77/app-issue-repo/tree/master/Openshift-SCDF-issue/k8s-config

Apparently this error went away after I changed the server port from 80 to something else, so just change server.port in the environment to something other than 80.
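For example, a minimal sketch of that change using oc (the Deployment name scdf-server is an assumption; you can equally edit the env section of server-deployment.yaml and the matching targetPort in server-svc.yaml):
# SERVER_PORT maps to server.port via Spring Boot's relaxed binding
oc set env deployment/scdf-server SERVER_PORT=8080
Any unprivileged port above 1024 works; binding to 80 fails with Permission denied because OpenShift runs the container as a non-root user by default.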

OpenShift will not allow you to run containers as privileged by default, so you'll need to explicitly allow that using a SecurityContextConstraint. There is good documentation on how to get SCDF running on OpenShift here: https://donovanmuller.blog/spring-cloud-dataflow-server-openshift/docs/1.1.0.RELEASE/reference/htmlsingle/#_creating_and_configuring_service_accounts
Basically, you'll need to add the anyuid SCC to the ServiceAccount running the Pods:
oc adm policy add-scc-to-user anyuid system:serviceaccount:scdf:scdf

Related

TIBCO BW failing to debug a REST service successfully. I have created a JDBC resource and tested that the connection was a success, but I get this error:

#BWEclipseAppNode> 12:12:32.545 INFO [main] com.tibco.thor.frwk.Deployer - Started by BusinessStudio.
12:12:34.970 INFO [main] com.tibco.bw.frwk.engine.BWEngine - TIBCO-BW-FRWK-300002: BW Engine [Main] started successfully.
12:12:35.355 INFO [Framework Event Dispatcher: Equinox Container: 80edabb3-8e47-001c-18c5-90bdbc610de0] com.tibco.thor.frwk.Deployer - TIBCO-THOR-FRWK-300001: Started OSGi Framework of AppNode [BWEclipseAppNode] in AppSpace [BWEclipseAppSpace] of Domain [BWEclipseDomain]
12:12:35.483 INFO [Framework Event Dispatcher: Equinox Container: 80edabb3-8e47-001c-18c5-90bdbc610de0] com.tibco.thor.frwk.Application - TIBCO-THOR-FRWK-300018: Deploying BW Application [RESTservice2.application:1.0].
12:12:40.067 INFO [Framework Event Dispatcher: Equinox Container: 80edabb3-8e47-001c-18c5-90bdbc610de0] com.tibco.thor.frwk.Application - TIBCO-THOR-FRWK-300021: All Application dependencies are resolved for Application [RESTservice2.application:1.0]
12:12:41.431 INFO [Thread-28] com.tibco.thor.frwk.Application - TIBCO-THOR-FRWK-300019: BW Application [RESTservice2.application:1.0] is impaired.
12:12:41.435 INFO [Framework Event Dispatcher: Equinox Container: 80edabb3-8e47-001c-18c5-90bdbc610de0] com.tibco.thor.frwk.Application - Started by BusinessStudio, ignoring .enabled settings.
12:12:41.438 ERROR [CM Configuration Updater (Update: pid=bw.resource.jdbc.916120c9-fbcb-45de-a13c-b22e3edf76ec)] com.tibco.bw.sharedresource.runtime.dependency.ReferenceDependency - TIBCO-BW-SR-FRWK-503000: Unable to start SharedResource [restservice2.JDBCConnectionResource] from Module [RESTservice2:1.0.0.qualifier], DeploymentUnit [RESTservice2.application:1.0]. <Reason>: TIBCO-BW-SR-JDBC-500003: The database driver [com.mysql.jdbc.Driver] is not found. Ensure that DataSourceFactory bundle providing this driver is available in the environment.
This is the error I am getting; as a result I can't run OSGi commands to request endpoint URLs for my REST service. Even though my code has no errors before debugging, I get this during debug. Please help.
In order to use the MySQL JDBC driver you need to install it into the BusinessWorks environment on your machine.
This is done by running the command 'bwinstall mysql-driver' from the <TIBCO_HOME>/bw/6.X/bin folder.
As Emmanuel suggested, based on the error you probably missed the required JDBC driver installation.
Once you execute the command, the shared resource will start successfully.
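For clarity, the install step looks roughly like this (a sketch; <TIBCO_HOME> and the 6.X version folder are placeholders for your actual installation):
# Run from the BusinessWorks bin folder of your installation
cd <TIBCO_HOME>/bw/6.X/bin
./bwinstall mysql-driver     # on Windows, run bwinstall.exe mysql-driver
Then restart the debug session so the new driver bundle is picked up.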

How to connect to JBoss EAP 7.3 using VisualVM in OpenShift

I am trying to connect to an application with VisualVM, but VisualVM is unable to connect to the application. Below is the environment:
JBoss EAP 7.3
Java 11
OpenShift
I have tried to configure it in different ways, but all failed:
Config try 1:
Use a few env variables in a script file, so that it executes first (file contents are mentioned below):
echo *** Adding system property for VisualVM ***
batch
/system-property=jboss.modules.system.pkgs:add(value="org.jboss.byteman,com.manageengine,org.jboss.logmanager")
/system-property=java.util.logging.manager:add(value="org.jboss.logmanager.LogManager")
run-batch
I can see that the above commands executed successfully and that the properties are present in the JBoss config (I verified using the JBoss CLI).
JAVA_TOOLS_OPTIONS: -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=3000 -Dcom.sun.management.jmxremote.rmi.port=3001 -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Xbootclasspath/a:/opt/eap/modules/system/layers/base/org/jboss/log4j/logmanager/main/log4j-jboss-logmanager-1.2.0.Final-redhat-00001.jar -Xbootclasspath/a:/opt/eap/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.1.14.Final-redhat-00001.jar
Result:
- java.lang.RuntimeException: WFLYCTL0079: Failed initializing module org.jboss.as.logging
- Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: WFLYLOG0078: The logging subsystem requires the log manager to be org.jboss.logmanager.LogManager. The subsystem has not be initialized and cannot be used. To use JBoss Log Manager you must add the system property "java.util.logging.manager" and set it to "org.jboss.logmanager.LogManager"
Config 2:
JAVA_TOOL_OPTIONS= -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=3000 -Dcom.sun.management.jmxremote.rmi.port=3001 -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.util.logging.manager=org.jboss.logmanager.LogManager -Djboss.modules.system.pkgs=org.jboss.byteman,org.jboss.logmanager -Xbootclasspath/a:/opt/eap/modules/system/layers/base/org/jboss/log4j/logmanager/main/log4j-jboss-logmanager-1.2.0.Final-redhat-00001.jar -Xbootclasspath/a:/opt/eap/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.1.14.Final-redhat-00001.jar
Result:
WARNING: Failed to instantiate LoggerFinder provider; Using default.
java.lang.IllegalStateException: The LogManager was not properly installed (you must set the "java.util.logging.manager" system property to "org.jboss.logmanager.LogManager")
Config 3:
Modify standalone.conf, putting all the required configuration in that file.
Result:
WARNING: Failed to instantiate LoggerFinder provider; Using default.
java.lang.IllegalStateException: The LogManager was not properly installed (you must set the "java.util.logging.manager" system property to "org.jboss.logmanager.LogManager")
Kindly suggest what the correct configuration is.
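For what it's worth, once the JMX agent from Config 2 is actually listening on ports 3000/3001, a local VisualVM usually reaches it through a port-forward; this is only a sketch (the pod name is a placeholder) and does not address the log manager errors above:
# Forward the JMX and RMI ports defined in the configs above to the local machine
oc port-forward <eap-pod-name> 3000:3000 3001:3001
# Then add a JMX connection in VisualVM to localhost:3000; the
# java.rmi.server.hostname=127.0.0.1 setting makes the RMI stub resolve to the
# forwarded loopback address.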

Apache Drill: Failure setting up ZK for client

I am testing Apache Drill with a two server cluster.
Let's say their external IPs are:
1.1.1.1
2.2.2.2
I first set up ZooKeeper to run on both, and when I run the status command I get a positive response:
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: leader
The way I set up my zoo.cfg to get it working was like this:
Server 1:
// other default values omitted
clientPort=2181
server.1=0.0.0.0:2888:3888
server.2=2.2.2.2:2888:3888
Server 2:
// other default values omitted
clientPort=2181
server.1=1.1.1.1:2888:3888
server.2=0.0.0.0:2888:3888
Next I wanted to get Drill running on this cluster, so I modified the drill-override.conf file on the two servers as follows:
Server 1:
drill.exec: {
cluster-id: "test",
zk.connect: "1.1.1.1:2181,2.2.2.2:2181"
}
Server 2:
drill.exec: {
cluster-id: "test",
zk.connect: "2.2.2.2:2181,1.1.1.1:2181"
}
I can start a drillbit on both servers, and when I do status I get this response on both servers:
drillbit is running.
But when I then try to open the console via bin/drill-conf I get this stack trace:
Error: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client. (state=,code=0)
java.sql.SQLException: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:159)
at org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:64)
at org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:69)
at net.hydromatic.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:126)
at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
at sqlline.Commands.connect(Commands.java:1083)
at sqlline.Commands.connect(Commands.java:1015)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:742)
at sqlline.SqlLine.initArgs(SqlLine.java:528)
at sqlline.SqlLine.begin(SqlLine.java:596)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:208)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:151)
... 18 more
Caused by: java.io.IOException: Failure to connect to the zookeeper cluster service within the allotted time of 10000 milliseconds.
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCoordinator.java:123)
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:206)
... 19 more
apache drill 1.7.0
"start your sql engine"
Why would drill fail to connect to the ZK cluster, which is running just fine?
All ports are open between these two boxes.
Prerequisites
Prerequisites for starting Drill in distributed mode:
(Required) Running Oracle JDK version 7
(Required) Running a ZooKeeper quorum
(Recommended) Running a Hadoop cluster
(Recommended) Using DNS
Configuration
Given your server IP addresses:
Server 1 - 1.1.1.1
Server 2 - 2.2.2.2
Put the same configuration in zoo.cfg on both Server 1 and Server 2 (see also the myid note after the snippet):
clientPort=2181
server.1=1.1.1.1:2888:3888
server.2=2.2.2.2:2888:3888
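One detail not shown in zoo.cfg: each ZooKeeper server also needs a myid file in its dataDir that matches its server.N entry. A sketch, assuming a dataDir of /var/lib/zookeeper (use whatever your zoo.cfg actually sets):
echo 1 > /var/lib/zookeeper/myid    # on Server 1, matches server.1
echo 2 > /var/lib/zookeeper/myid    # on Server 2, matches server.2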
Similarly, put the same configuration in drill-override.conf on both servers:
drill.exec: {
cluster-id: "test",
zk.connect: "1.1.1.1:2181,2.2.2.2:2181"
}
Starting Drill
Start the drillbit on all cluster nodes using:
bin/drillbit.sh start
Using Drill
Web UI:
Open the web UI using any node's address, for example:
1.1.1.1:8047
Via Shell:
Run the bin/drill-localhost command and the Drill shell will appear.
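If the shell cannot reach a local drillbit, a hedged alternative is to pass the ZooKeeper connect string explicitly to sqlline, using the cluster-id and zk.connect values configured above:
bin/sqlline -u "jdbc:drill:zk=1.1.1.1:2181,2.2.2.2:2181/drill/test"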
Verify Installation
From the Drill shell or the web UI, run:
SELECT * FROM sys.drillbits;
Drill lists information about the Drillbits that are running
Stopping Drill
Run the command:
bin/drillbit.sh stop

Oozie - Got exception running sqoop: Could not load db driver class: com.mysql.jdbc.Driver

I am trying to perform a Sqoop export on the HDP 2.1 sandbox via Oozie. When I run the Oozie job I get the following Java runtime exception.
>>> Invoking Sqoop command line now >>>
7598 [main] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR
has not been set in the environment. Cannot check for additional
configuration.
7714 [main] INFO org.apache.sqoop.Sqoop - Running Sqoop version:
1.4.4.2.1.1.0-385
7760 [main] WARN org.apache.sqoop.SqoopOptions - Character argument
'\t' has multiple characters; only the first will be used.
7791 [main] WARN org.apache.sqoop.ConnFactory - $SQOOP_CONF_DIR has
not been set in the environment. Cannot check for additional
configuration.
7904 [main] INFO org.apache.sqoop.manager.MySQLManager - Preparing
to use a MySQL streaming resultset.
7905 [main] INFO org.apache.sqoop.tool.CodeGenTool - Beginning code
generation
7946 [main] ERROR org.apache.sqoop.Sqoop - Got exception running
Sqoop: java.lang.RuntimeException: Could not load db driver class:
com.mysql.jdbc.Driver Intercepting System.exit(1)
I have copied the JDBC driver file "mysql-connector-java.jar" to Oozie's shared library folder, which I believe is "/usr/lib/oozie/share/lib/sqoop/". I have restarted my sandbox and tried to perform the export with Oozie again, and I still get the same error.
The export works perfectly fine when I perform it via Sqoop alone, so I presume Oozie needs its own set of drivers.
My question is: which Oozie directory am I supposed to copy my JDBC drivers to?
If you think I'm doing something wrong or you need further information, please let me know.
Thank you for your time.
Normally the Oozie sharelib directory is /user/oozie/share/lib/ on HDFS, where "oozie" is the name of the user used to start the Oozie server. I don't know what that is in the case of the HDP 2.1 sandbox, but you can use the ps command to figure it out.
As for the jars needed by the Sqoop action, I think you should copy the jar to the /user/oozie/share/lib/sqoop/ folder.
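A sketch of that copy step, assuming the sharelib really is at /user/oozie/share/lib on HDFS and you are running as a user allowed to write there:
hdfs dfs -put mysql-connector-java.jar /user/oozie/share/lib/sqoop/
# Depending on the Oozie version you may also need to restart the Oozie server
# (or, on newer versions, run "oozie admin -sharelibupdate") so the new jar is picked up.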

Failed to Resolve Target Definition on OpenShift

I'm trying to get a Tycho build to work on an OpenShift server. Resolving the target definition and building works fine locally, but when deploying I get the following error:
Unable to read repository at http://download.eclipse.org/rt/rap/2.2/content.xml. Permission denied
I'm not sure whether it's a problem with the Tycho configuration (as far as I can remember, I didn't do anything outside the build reactor to make Tycho work) or with OpenShift. Could someone tell me what the problem is or how to narrow it down?
I tried switching from a target platform file to defining the repository in the pom.xml. Then I get a similar error message:
Failed to load p2 repository with ID 'rap' from location http://download.eclipse.org/rt/rap/2.2
I found a Linux bug that might be relevant. It doesn't help me, since I'm not even able to confirm the server is running Debian. I tried to get Jenkins to build on some other platform and tried every abbreviation and combination of "Windows" in the (seemingly undocumented) "platform" field, but none of these values changed the OS.
I finally got Maven to work with parameters (for how, see here). The log shows nothing out of the ordinary, as far as I can see:
[INFO] Computing target platform for MavenProject: org.acme.dummy:org.acme.dummy.feature:0.0.1-SNAPSHOT # /var/lib/openshift/537cb333e0b8cd68ad000187/app-root/runtime/repo/org.acme.dummy.feature/pom.xml
[DEBUG] Added p2 repository rap-repository (http://download.eclipse.org/rt/rap/2.2)
[DEBUG] Using default execution environment 'JavaSE-1.6'
[DEBUG] Registered artifact repository org.eclipse.tycho.repository.registry.facade.RepositoryBlackboardKey(uri=file:/resolution-context-artifacts#/var/lib/openshift/537cb333e0b8cd68ad000187/app-root/runtime/repo/org.acmr.dummy.feature)
May 21, 2014 10:15:59 AM org.apache.http.impl.client.DefaultRequestDirector tryConnect
INFO: I/O exception (java.net.BindException) caught when connecting to the target host: Permission denied
It retries the connection a couple of times, and then there is the exception mentioned above:
!ENTRY org.eclipse.equinox.p2.transport.ecf 2 0 2014-05-21 10:16:00.073
!MESSAGE Connection to http://download.eclipse.org/rt/rap/2.2/p2.index failed on Permission denied. Retry attempt 0 started
!STACK 0
java.net.BindException: Permission denied
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:376)
at java.net.Socket.bind(Socket.java:631)
at org.eclipse.ecf.internal.provider.filetransfer.httpclient4.ECFHttpClientProtocolSocketFactory.connectSocket(ECFHttpClientProtocolSocketFactory.java:82)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:148)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:150)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:121)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:575)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:425)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:754)