Error Setting up MySQL database with Gov Registry 4.6.0 - mysql

I am trying to set up Gov. Registry 4.6.0 with a MySQL 5.6 Community Edition database.
I followed the setup instructions in the 4.6.0 documentation, and when I start the registry with the wso2server.bat -Dsetup option, I get the following error:
C:\Apps\wso2greg-4.6.0\bin>wso2server.bat -Dsetup
JAVA_HOME environment variable is set to C:\Program Files\Java\jre6
CARBON_HOME environment variable is set to C:\Apps\WSO2GR~1.0\bin\..
[2014-03-03 13:52:57,979] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Starting WSO2 Carbon...
[2014-03-03 13:52:57,982] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Operating System : Windows 7 6.1, amd64
[2014-03-03 13:52:57,983] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Home : C:\Program Files\Java\jre6
[2014-03-03 13:52:57,983] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Version : 1.6.0_29
[2014-03-03 13:52:57,983] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java VM : Java HotSpot(TM) 64-Bit Server VM 20.4-b02,Sun Microsystems Inc.
[2014-03-03 13:52:57,984] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Carbon Home : C:\Apps\WSO2GR~1.0\bin\..
[2014-03-03 13:52:57,984] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - Java Temp Dir : C:\Apps\WSO2GR~1.0\bin\..\tmp
[2014-03-03 13:52:57,984] INFO {org.wso2.carbon.core.internal.CarbonCoreActivator} - User : xxxxx, en-US, America/New_York
[2014-03-03 13:52:58,056] WARN {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter} - The default keystore (wso2carbon.jks) is currently being used. To maximize security when deploying to a production environment, configure a new keystore with a unique password in the production server profile.
[2014-03-03 13:52:58,063] INFO {org.wso2.carbon.databridge.agent.thrift.AgentHolder} - Agent created !
[2014-03-03 13:52:58,079] INFO {org.wso2.carbon.databridge.agent.thrift.internal.AgentDS} - Successfully deployed Agent Client
[2014-03-03 13:52:59,096] ERROR {org.wso2.carbon.user.core.internal.Activator} - Cannot start User Manager Core bundle
java.lang.Exception: Error in creating the database
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeDatabase(DefaultRealmService.java:285)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:90)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:114)
at org.wso2.carbon.user.core.internal.Activator.startDeploy(Activator.java:70)
at org.wso2.carbon.user.core.internal.BundleCheckActivator.start(BundleCheckActivator.java:61)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683)
at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381)
at org.eclipse.osgi.framework.internal.core.AbstractBundle.resume(AbstractBundle.java:390)
at org.eclipse.osgi.framework.internal.core.Framework.resumeBundle(Framework.java:1176)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:559)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:544)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.incFWSL(StartLevelManager.java:457)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:243)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:438)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:1)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340)
Caused by: java.lang.Exception: Error occurred while executing : CREATE INDEX REG_PATH_IND_BY_PATH_VALUE USING HASH ON REG_PATH(REG_PATH_VALUE, REG_TENANT_ID)
at org.wso2.carbon.utils.dbcreator.DatabaseCreator.executeSQL(DatabaseCreator.java:169)
at org.wso2.carbon.utils.dbcreator.DatabaseCreator.executeSQLScript(DatabaseCreator.java:325)
at org.wso2.carbon.utils.dbcreator.DatabaseCreator.createRegistryDatabase(DatabaseCreator.java:61)
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeDatabase(DefaultRealmService.java:278)
... 19 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.Util.getInstance(Util.java:386)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1054)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4237)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4169)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2617)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2778)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2828)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2777)
at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:949)
at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:795)
at org.wso2.carbon.utils.dbcreator.DatabaseCreator.executeSQL(DatabaseCreator.java:139)
... 22 more
[2014-03-03 13:53:05,444] INFO {org.apache.catalina.startup.TaglibUriRule} - TLD skipped. URI: http://tiles.apache.org/tags-tiles is already defined
What is wrong?
Thanks

I guess the same issue has been reported in JIRA. The workaround mentioned there is:
This occurs when an encoding like UTF-8 is used, because it takes more than 1 byte to represent a character. When an encoding like latin1 is used, this exception does not occur.
You can try that and check.
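A minimal sketch of that workaround, assuming you create the registry database by hand (the database name regdb and the root account are placeholders): with latin1 every character is 1 byte, so the composite index on REG_PATH_VALUE and REG_TENANT_ID stays under InnoDB's 767-byte key limit, whereas utf8 can take up to 3 bytes per character.
mysql -u root -p -e "CREATE DATABASE regdb CHARACTER SET latin1;"
# Point the registry datasource (repository/conf/datasources/master-datasources.xml
# in a standard Carbon 4.x layout) at this database, then re-run:
# wso2server.bat -Dsetup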

Related

Apache Drill 1.17.0 on Windows 10 - Trouble Getting Drill to Run (Embedded Mode)

Details:
Apache Drill 1.17.0
Windows 10 64 bit
Java JDK1.8.0_241
New installation. Unable to get Apache Drill to load successfully.
Command line: c:\Users\floodb\Software\Drill\apache-drill-1.17.0\bin>drill-embedded
Error Received: Error: Failure in starting embedded Drillbit: UNSUPPORTED_OPERATION ERROR: Failure while attempting to load instance of the class of type org.apache.drill.exec.store.StoragePluginRegistry requested at path drill.exec.storage.registry.
[Error Id: 7c1b33eb-7a27-4e39-af06-5ba22e5ffae6 ] (state=,code=0)
java.sql.SQLException: Failure in starting embedded Drillbit: UNSUPPORTED_OPERATION ERROR: Failure while attempting to load instance of the class of type org.apache.drill.exec.store.StoragePluginRegistry requested at path drill.exec.storage.registry.
There is no HADOOP_HOME environment variable set (checking this was suggested by other posts on Stack Overflow).
Partial Log:
2020-02-19 15:55:42,315 [main] INFO o.a.drill.common.util.GuavaPatcher - Google's Stopwatch patched for old HBase Guava version.
2020-02-19 15:55:42,319 [main] INFO o.a.drill.common.util.GuavaPatcher - Google's Closeables patched for old HBase Guava version.
2020-02-19 15:55:42,333 [main] INFO o.a.drill.common.util.GuavaPatcher - Google's Preconditions were patched to hold new methods.
2020-02-19 15:55:42,693 [main] INFO o.a.drill.common.config.DrillConfig - Configuration and plugin file(s) identified in 32ms.
Base Configuration:
- jar:file:/C:/Users/floodb/Software/Drill/apache-drill-1.17.0/jars/drill-common-1.17.0.jar!/drill-default.conf
(Bunch of log lines deleted)
2020-02-19 15:55:45,134 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 22 classes for org.apache.drill.common.logical.data.LogicalOperator took 4ms
2020-02-19 15:55:45,138 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 12 classes for org.apache.drill.common.logical.StoragePluginConfig took 3ms
2020-02-19 15:55:45,146 [main] INFO o.a.d.c.s.persistence.ScanResult - loading 15 classes for org.apache.drill.common.logical.FormatPluginConfig took 7ms
2020-02-19 15:55:45,179 [main] INFO o.a.drill.common.config.DrillConfig - User Error Occurred: Failure while attempting to load instance of the class of type org.apache.drill.exec.store.StoragePluginRegistry requested at path drill.exec.storage.registry. (null)
org.apache.drill.common.exceptions.UserException: UNSUPPORTED_OPERATION ERROR: Failure while attempting to load instance of the class of type org.apache.drill.exec.store.StoragePluginRegistry requested at path drill.exec.storage.registry.
[Error Id: 7c1b33eb-7a27-4e39-af06-5ba22e5ffae6 ]
at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:637)
at org.apache.drill.common.config.DrillConfig.getInstance(DrillConfig.java:92)
at org.apache.drill.exec.server.DrillbitContext.<init>(DrillbitContext.java:113)
at org.apache.drill.exec.work.WorkManager.start(WorkManager.java:116)
at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:221)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:134)
at org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:67)
at org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:67)
at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
at org.apache.drill.jdbc.Driver.connect(Driver.java:75)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:135)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:192)
at sqlline.Commands.connect(Commands.java:1364)
at sqlline.Commands.connect(Commands.java:1244)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:730)
at sqlline.SqlLine.initArgs(SqlLine.java:410)
at sqlline.SqlLine.begin(SqlLine.java:515)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206)
Caused by: java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.drill.common.config.DrillConfig.getInstance(DrillConfig.java:88)
... 22 common frames omitted
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:645)
at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1230)
at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1435)
at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:493)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1910)
at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:678)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1910)
at org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus(DrillFileSystem.java:563)
at org.apache.drill.exec.util.FileSystemUtil.listNonRecursive(FileSystemUtil.java:224)
at org.apache.drill.exec.util.FileSystemUtil.list(FileSystemUtil.java:209)
at org.apache.drill.exec.util.FileSystemUtil.listFiles(FileSystemUtil.java:104)
at org.apache.drill.exec.util.DrillFileSystemUtil.listFiles(DrillFileSystemUtil.java:86)
at org.apache.drill.exec.store.sys.store.LocalPersistentStore.getRange(LocalPersistentStore.java:121)
at org.apache.drill.exec.store.sys.BasePersistentStore.getAll(BasePersistentStore.java:27)
at org.apache.drill.exec.store.StoragePluginRegistryImpl.initPluginsSystemTable(StoragePluginRegistryImpl.java:277)
at org.apache.drill.exec.store.StoragePluginRegistryImpl.<init>(StoragePluginRegistryImpl.java:90)
... 27 common frames omitted
2020-02-19 15:55:46,199 [main] INFO o.apache.drill.exec.server.Drillbit - Shutdown completed (1018 ms).
The problem was that the 32-bit version of the Java JDK was installed. If you are having this problem, check that the 64-bit version of Java is installed.
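A quick way to check the bitness of the JVM on your PATH (the version shown below is only an example; what matters is whether the VM line contains "64-Bit"):
java -version
REM A 64-bit JVM prints something like:
REM   Java HotSpot(TM) 64-Bit Server VM (build 25.241-b07, mixed mode)
REM A 32-bit JVM omits "64-Bit" from that line.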

How to start drill in distributed mode in Windows operating system

I am using Apache Drill (latest version, 1.9) on Windows 10.
I want to start Drill in distributed mode.
I have configured the ZooKeeper zoo.cfg file:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=F:/zookeepertest/data
clientPort=2181
server.1=192.589.XX.01:2888:3888
server.2=192.565.XX.02:2888:3888
And in the Drill folder, drill-override.conf:
drill.exec: {
cluster-id: "test",
zk.connect: "192.589.XX.01:2181,192.565.XX.02:2181"
}
And my ZooKeeper is running.
Now, when I try to start Drill using this command:
sqlline.bat -u "jdbc:drill:zk=192.589.XX.01:2181"
it throws the following error:
Error: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client. (state=,code=0)
java.sql.SQLException: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:161)
at org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:70)
at org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:69)
at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:143)
at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
at sqlline.Commands.connect(Commands.java:1083)
at sqlline.Commands.connect(Commands.java:1015)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:742)
at sqlline.SqlLine.initArgs(SqlLine.java:528)
at sqlline.SqlLine.begin(SqlLine.java:596)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:245)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:154)
... 18 more
Caused by: java.io.IOException: Failure to connect to the zookeeper cluster service within the allotted time of 10000 milliseconds.
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCoordinator.java:123)
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:243)
... 19 more
Can anyone tell me how to start Drill in distributed mode on Windows?
From the Drill documentation:
To start Drill in distributed mode, use drillbit.sh, not sqlline, i.e.
drillbit.sh start
Since drillbit.sh is a shell script (not a Windows batch job), you'll need a third-party shell scripting tool such as Cygwin, or, since you're using Windows 10, you can enable Bash on Ubuntu (WSL); see the sketch below.
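A hedged sketch of the full sequence from such a shell, reusing the ZooKeeper hosts from the question (the Drill install path is illustrative):
# run on each node, from a Cygwin or WSL shell
cd /path/to/apache-drill-1.9.0/bin
./drillbit.sh start      # start the Drillbit daemon
./drillbit.sh status     # confirm it is running
# then connect a client against the ZooKeeper quorum:
# sqlline.bat -u "jdbc:drill:zk=192.589.XX.01:2181,192.565.XX.02:2181"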

Troubleshoot DCHQ Host

I have been running DCHQ.io (On-Prem) for a few months now with no major issue.
My Container Hosts environment looks like this:
DCHQ-VM-Host
RAM: 14 GB / CPU: 4 / Storage: 100 GB
10.21.38.165
host0
This is where DCHQ resides
Docker-VM-1
RAM: 8 GB / CPU: 2
10.21.36.201
host1
Where LB containers are hosted
Docker-Metal-2
RAM: 96 GB / CPU: 12
10.21.39.71
host2
Where APP containers are hosted
Docker-Metal-4
RAM: 96 GB / CPU: 12
10.21.38.170
host4
Where DB containers are hosted
Today, while attempting to deploy the 3-Tier Java (ApacheHTTP – Tomcat – MySQL) application template for a POC at work, host-4 went offline.
A couple of days ago I converted host-1 from a bare-metal machine to a VM. To do that, I removed the host from DCHQ and added it back using the same name (host-1), but this time as a VM on a different ESX server. I'm not sure whether this has something to do with host-4 throwing the error below. As a result, no template involving host-4 can be deployed, so as a workaround I'm using host-1 and host-2 to deploy.
I have tried restarting host-4, deactivating/activating host-4 from within the DCHQ UI, and restarting the agent on host-4, but to no avail.
My last resort is to remove and reinstall the client on host-4, but I wanted to post here first. I have also emailed DCHQ support about this issue.
The DCHQ log shows the following error:
2016-01-29 15:56:31.026 INFO 1217 --- [pool-3-thread-2] c.d.a.o.impl.ImagePullQueueProcessor : Processing pull req
2016-01-29 15:56:31.028 INFO 1217 --- [pool-3-thread-2] c.d.a.o.impl.TemplateOperationsImpl : Received pull request for image [mysql:latest] registry [null]
2016-01-29 15:56:31.028 INFO 1217 --- [pool-3-thread-2] c.d.a.o.impl.DockerClientBuilderUtil : Using public repo since username [null] or password is empty
2016-01-29 15:56:31.043 ERROR 1217 --- [pool-15-thread-1] c.g.d.core.async.ResultCallbackTemplate : Error during callback
org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:10624 [/127.0.0.1] failed: Connection refused
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:71)
at com.github.dockerjava.jaxrs.connector.ApacheConnector.apply(ApacheConnector.java:443)
at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:246)
at org.glassfish.jersey.client.JerseyInvocation$2.call(JerseyInvocation.java:683)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:424)
at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:679)
at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:435)
at org.glassfish.jersey.client.JerseyInvocation$Builder.post(JerseyInvocation.java:338)
at com.github.dockerjava.jaxrs.async.POSTCallbackNotifier.response(POSTCallbackNotifier.java:29)
at com.github.dockerjava.jaxrs.async.AbstractCallbackNotifier.call(AbstractCallbackNotifier.java:45)
at com.github.dockerjava.jaxrs.async.AbstractCallbackNotifier.call(AbstractCallbackNotifier.java:22)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 25 common frames omitted
2016-01-29 15:56:31.044 ERROR 1217 --- [pool-3-thread-2] c.d.a.o.impl.TemplateOperationsImpl : Error pulling image [mysql] response logs []
2016-01-29 15:56:31.046 INFO 1217 --- [pool-3-thread-2] c.d.a.o.impl.ImagePullQueueProcessor : Finished processing pull req
2016-01-29 15:58:39.687 WARN 1217 --- [pool-3-thread-1] c.d.a.o.impl.SysInfoMonitorService : org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:10624 [/127.0.0.1] failed: Connection refused
2016-01-29 15:58:39.688 ERROR 1217 --- [pool-3-thread-2] c.d.a.o.impl.MachineOperationsImpl : org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:10624 [/127.0.0.1] failed: Connection refused
Thank you in advance,
Rod
Thanks for reporting this issue. Feel free to report this on our issue tracker.
https://github.com/dchqinc/dchq-on-premise-issue-tracker/issues
Please make sure that the information in the application.properties file has not changed. Make sure that the server key matches whatever the DCHQ UI shows for host-4.
vi /opt/dchq/config/application.properties
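To compare the key quickly, a search along these lines may help (the exact property name is not documented here, so this is only an illustrative grep):
grep -i "key" /opt/dchq/config/application.properties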
You can then restart the agent:
service dchq stop
ps -ef | grep dchq
## forcefully kill any DCHQ process that may not have stopped otherwise using kill -9
service dchq start
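Since the log above shows connections to 127.0.0.1:10624 being refused, it may also be worth verifying that something is listening on that port after the restart (a sketch; the port number is taken from the HttpHostConnectException in the log):
# is anything listening on the port the agent tries to reach?
netstat -tlnp | grep 10624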
Lastly, can you please test the connection from the UI? This will help us narrow down the issue.
For any help deploying Docker Compose applications, please refer to our documentation: http://dchq.co/docker-compose.html

com.mysql.jdbc.Driver not found on classpath while starting spark sql and thrift server

I am receiving the following errors when starting the spark-sql shell.
It works when I start the shell using this command:
./spark-sql --jars /usr/local/hive/lib/mysql-connector-java.jar
But when I start the Thrift server in the same way, using the command below, it throws the same error again:
/usr/local/spark/sbin/start-thriftserver.sh --jars /usr/local/hive/lib/mysql-connector-java.jar
Please help me understand how this can be resolved so that I don't have to pass the jar path externally, and why it works for spark-sql but not for the Thrift server. Do I need to set a classpath somewhere that I am missing?
Please let me know if you need anything else.
15/10/18 05:15:33 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/10/18 05:15:33 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:47703
15/10/18 05:15:33 INFO util.Utils: Successfully started service 'HTTP file server' on port 47703.
15/10/18 05:15:33 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/10/18 05:15:38 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/10/18 05:15:38 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/10/18 05:15:38 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/10/18 05:15:38 INFO ui.SparkUI: Started SparkUI at http://192.168.1.12:4040
15/10/18 05:15:38 INFO spark.SparkContext: Added JAR file:/usr/local/hive/lib/mysql-connector-java.jar at http://192.168.1.12:47703/jars/mysql-connector-java.jar with timestamp 1445125538564
15/10/18 05:15:38 INFO client.AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@192.168.1.12:7077/user/Master...
15/10/18 05:15:38 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20151018051538-0018
15/10/18 05:15:38 INFO client.AppClient$ClientActor: Executor added: app-20151018051538-0018/0 on worker-20151018024224-192.168.1.12-50211 (192.168.1.12:50211) with 4 cores
15/10/18 05:15:38 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20151018051538-0018/0 on hostPort 192.168.1.12:50211 with 4 cores, 512.0 MB RAM
15/10/18 05:15:38 INFO client.AppClient$ClientActor: Executor updated: app-20151018051538-0018/0 is now LOADING
15/10/18 05:15:38 INFO client.AppClient$ClientActor: Executor updated: app-20151018051538-0018/0 is now RUNNING
15/10/18 05:15:39 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 43460.
15/10/18 05:15:39 INFO netty.NettyBlockTransferService: Server created on 43460
15/10/18 05:15:39 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/10/18 05:15:39 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.1.12:43460 with 265.4 MB RAM, BlockManagerId(driver, 192.168.1.12, 43460)
15/10/18 05:15:39 INFO storage.BlockManagerMaster: Registered BlockManager
15/10/18 05:15:39 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
15/10/18 05:15:40 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
15/10/18 05:15:40 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
15/10/18 05:15:40 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/10/18 05:15:40 INFO metastore.ObjectStore: ObjectStore, initialize called
15/10/18 05:15:41 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
15/10/18 05:15:41 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/10/18 05:15:41 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/10/18 05:15:41 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/10/18 05:15:42 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.1.12:56227/user/Executor#-1120183734]) with ID 0
15/10/18 05:15:42 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.1.12:34713 with 265.4 MB RAM, BlockManagerId(0, 192.168.1.12, 34713)
15/10/18 05:15:52 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
15/10/18 05:15:52 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
15/10/18 05:15:52 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5. Encountered: "#" (64), after : "".
15/10/18 05:15:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/10/18 05:15:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/10/18 05:16:01 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/10/18 05:16:01 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/10/18 05:16:03 INFO metastore.ObjectStore: Initialized ObjectStore
15/10/18 05:16:04 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.1aa
15/10/18 05:16:05 INFO metastore.HiveMetaStore: Added admin role in metastore
15/10/18 05:16:05 INFO metastore.HiveMetaStore: Added public role in metastore
15/10/18 05:16:05 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
15/10/18 05:16:05 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
15/10/18 05:16:05 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
15/10/18 05:16:05 INFO hive.HiveContext: Initializing HiveMetastoreConnection version 0.13.1 using Spark classes.
15/10/18 05:16:06 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
15/10/18 05:16:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/10/18 05:16:06 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/10/18 05:16:06 INFO metastore.ObjectStore: ObjectStore, initialize called
15/10/18 05:16:07 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
15/10/18 05:16:07 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/10/18 05:16:07 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:105)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.liftedTree1$1(IsolatedClientLoader.scala:170)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.<init>(IsolatedClientLoader.scala:166)
at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:212)
at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:175)
at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:55)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:73)
at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340)
... 21 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
... 26 more
Caused by: javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
NestedThrowables:
java.lang.reflect.InvocationTargetException
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:310)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:339)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:248)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:223)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:497)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:475)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:356)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:171)
... 31 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)
at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:282)
at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:240)
at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:286)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
... 60 more
Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "dbcp-builtin" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:259)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:131)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:85)
... 78 more
Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)
at org.datanucleus.store.rdbms.connectionpool.DBCPBuiltinConnectionPoolFactory.createConnectionPool(DBCPBuiltinConnectionPoolFactory.java:49)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
... 80 more
15/10/18 05:16:07 INFO spark.SparkContext: Invoking stop() from shutdown hook
Copy mysql-connector-java-5.1.38-bin.jar to the Spark jars location (for Spark 2.x versions):
$ cp $HIVE_HOME/lib/mysql-connector-java-5.1.38-bin.jar $SPARK_HOME/jars/
The main problem:
"Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient"
You have already copied the MySQL connector into the Hive lib directory, so put that jar's path on the classpath in the spark-env.sh file:
export SPARK_CLASSPATH="/home/hadoop/work/apache-hive-2.0.0-bin/lib/mysql-connector-java-5.1.38-bin.jar"
Finally, place hive-site.xml in the Spark conf folder; the problem should then be resolved.
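For that last step, a one-line sketch (assuming the usual HIVE_HOME and SPARK_HOME layout used elsewhere in this answer):
# make the Hive metastore configuration visible to Spark's HiveContext
cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/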
Try including the jars in SPARK_CLASSPATH; you can update spark-env.sh as well. Which version of Spark are you using? In Spark 1.3 and lower, --jars has issues with adding JDBC drivers.
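Note that SPARK_CLASSPATH is deprecated in later Spark releases; a sketch of the equivalent settings in conf/spark-defaults.conf, reusing the jar path from the question:
spark.driver.extraClassPath    /usr/local/hive/lib/mysql-connector-java.jar
spark.executor.extraClassPath  /usr/local/hive/lib/mysql-connector-java.jar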
Add SPARK_HOME to your ~/.bash_profile file, like the following:
# set spark directory
export SPARK_HOME=/var/www/python_project/extras/spark/spark-2.4.4-bin-hadoop2.7
Once you have saved this file, run the following command:
source ~/.bash_profile
This saved my day :)
Hope this helps.

Error deploying WAR with mysql driver to Glassfish4 on CloudBees

I'm trying to deploy a WAR on CloudBees Glassfish4 server. I've followed the instructions at the bottom of http://developer.cloudbees.com/bin/view/RUN/Glassfish4 to include the jar in the META-INF/lib directory.
When I deploy with:
bees app:deploy target/app.war -a myDomain/app -t glassfish4-full
I get the error:
ERROR: Server.InternalError - java.lang.IllegalArgumentException: Platform error -
plugin_setup_error: glassfish4-full 1 [main] INFO com.cloudbees.clickstack.glassfish.Setup - Setup clickstack com.cloudbees.clickstack:glassfish-clickstack:4-full-1.0.2 - 2013-12-12T13:06:29.572+0100, current dir /mnt/genapp/apps/1cabb3f9/.
[main] INFO com.cloudbees.clickstack.glassfish.Setup - Setup: Environment{,
appUser='app_1cabb3f9',
appId='1cabb3f9',
appPort=8336,
appDir=/var/genapp/apps/1cabb3f9,
logDir=/var/genapp/apps/1cabb3f9/.genapp/log,
genappDir=/var/genapp/apps/1cabb3f9/.genapp,
controlDir=/var/genapp/apps/1cabb3f9/.genapp/control,
clickstackDir=/mnt/genapp-tmp/genapp-remote-plugin-1389871636905879,
packageDir=/mnt/genapp-tmp/stax-genapp-1389871636.236927/app,
}, com.cloudbees.clickstack.domain.metadata.Metadata@385cbbb1
Exception in thread "main" java.lang.Exception: Exception deploying on 10.159.35.35
at com.cloudbees.clickstack.glassfish.Setup.main(Setup.java:147)
Caused by: java.lang.IllegalArgumentException
at com.sun.nio.zipfs.ZipPath.relativize(ZipPath.java:238)
at com.cloudbees.clickstack.util.Files2$3.visitFile(Files2.java:188)
at com.cloudbees.clickstack.util.Files2$3.visitFile(Files2.java:184)
at java.nio.file.FileTreeWalker.walk(Unknown Source)
at java.nio.file.FileTreeWalker.walk(Unknown Source)
at java.nio.file.FileTreeWalker.walk(Unknown Source)
at java.nio.file.Files.walkFileTree(Unknown Source)
at java.nio.file.Files.walkFileTree(Unknown Source)
at com.cloudbees.clickstack.util.Files2.unzipSubDirectoryIfExists(Files2.java:184)
at com.cloudbees.clickstack.util.ApplicationUtils.extractContainerExtraLibs(ApplicationUtils.java:49)
at com.cloudbees.clickstack.glassfish.Setup.installApplication(Setup.java:259)
at com.cloudbees.clickstack.glassfish.Setup.setup(Setup.java:154)
at com.cloudbees.clickstack.glassfish.Setup.main(Setup.java:139)
I got a reply from CloudBees support.
The documentation at http://developer.cloudbees.com/bin/view/RUN/Glassfish4 was wrong; you don't need to include the MySQL connector in your project.
As I replied to you on our support platform. We fixed the bug on both "glassfish4-full" and "glassfish4" (web profile) ClickStacks.
Sorry for the inconvenience,
Cyrille
Clickstacks release notes:
https://github.com/CloudBees-community/glassfish4-clickstack/releases/tag/v4-web-1.0.1
https://github.com/CloudBees-community/glassfish4-clickstack/releases/tag/v4-full-1.0.3