Data Analytics Server 3.1.0 throwing exceptions

I am using WSO2 API Manager 2.0.0 and WSO2 Data Analytics Server (DAS) 3.1.0.
I have made the following configurations:
Enabled Analytics in api-manager.xml (a sketch of this block follows below)
Pointed it to my DAS server port
Added DAS_AGENT to the log4j properties
The servers started properly
In the DAS management console, I uploaded the APIM_Realtime_Analytics.car file.
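For reference, the analytics block in api-manager.xml looked roughly like this (a minimal sketch from memory of the 2.0.0 config; element names may differ slightly in your copy, and the port depends on your DAS port offset, 7611 by default):
<Analytics>
    <!-- Enable analytics for API Manager -->
    <Enabled>true</Enabled>
    <!-- Thrift endpoint of the DAS server that receives the events -->
    <DASServerURL>{tcp://localhost:7611}</DASServerURL>
    <DASUsername>admin</DASUsername>
    <DASPassword>admin</DASPassword>
</Analytics>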
All this was in accordance with:
https://docs.wso2.com/display/AM200/Running+the+Product#RunningtheProduct-AccessingtheManagementConsole
https://docs.wso2.com/display/AM200/Configuring+APIM+Analytics
docs.wso2.com/display/DAS310/Quick+Start+Guide
But I am getting the following error:
org.wso2.carbon.databridge.core.exception.EventConversionException: Error when converting org.wso2.apimgt.statistics.request:1.1.0 of event bundle with events 1
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:181)
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.toEventList(ThriftEventConverter.java:90)
at org.wso2.carbon.databridge.core.internal.queue.QueueWorker.run(QueueWorker.java:73)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.wso2.carbon.databridge.core.exception.EventConversionException: No StreamDefinition for streamId org.wso2.apimgt.statistics.request:1.1.0 present in cache
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:166)
... 7 more
[2016-10-08 16:05:49,621] ERROR {org.wso2.carbon.databridge.core.internal.queue.QueueWorker} - Dropping wrongly formatted event sent for -1234
org.wso2.carbon.databridge.core.exception.EventConversionException: Error when converting org.wso2.apimgt.statistics.execution.time:1.0.0 of event bundle with events 1
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:181)
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.toEventList(ThriftEventConverter.java:90)
at org.wso2.carbon.databridge.core.internal.queue.QueueWorker.run(QueueWorker.java:73)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.wso2.carbon.databridge.core.exception.EventConversionException: No StreamDefinition for streamId org.wso2.apimgt.statistics.execution.time:1.0.0 present in cache
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:166)
... 7 more
[2016-10-08 16:05:49,625] ERROR {org.wso2.carbon.databridge.core.internal.queue.QueueWorker} - Dropping wrongly formatted event sent for -1234
org.wso2.carbon.databridge.core.exception.EventConversionException: Error when converting org.wso2.apimgt.statistics.response:1.1.0 of event bundle with events 1
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:181)
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.toEventList(ThriftEventConverter.java:90)
at org.wso2.carbon.databridge.core.internal.queue.QueueWorker.run(QueueWorker.java:73)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.wso2.carbon.databridge.core.exception.EventConversionException: No StreamDefinition for streamId org.wso2.apimgt.statistics.response:1.1.0 present in cache
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:166)
Since the server wasn't finding certain stream definitions, I also tried deploying APIM_Realtime_Analytics_REST.car (from a previous version of DAS), but to no avail; I get similar exceptions for that as well.
How do I rectify this?
Thanks in advance!

As mentioned in the document you are referring to, APIM now has its own Analytics Server, which is a customized DAS, so only a few configurations are needed to see API stats. That distribution already has the required CApps installed, so you don't need to install them manually.
However, as I understand it, you are using a vanilla DAS server instead of the APIM Analytics Server. If possible, please try with that instead. If you can't for some reason, take the .car file from that distribution and install it in DAS; that should solve your issue.
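For example, something along these lines (the paths and distribution names are illustrative and depend on where the two distributions are unpacked):
# copy the pre-packaged analytics CApps from the APIM Analytics distribution into DAS
cp wso2analytics-apim-2.0.0/repository/deployment/server/carbonapps/*.car \
   wso2das-3.1.0/repository/deployment/server/carbonapps/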

Related

Unable to export Android Studio Code Inspection report

When I try to export the report from Analyze -> Inspect Code, the report is not saved, and the Event Log shows the following error from the IDE Fatal Error window:
null
java.lang.NullPointerException
at com.intellij.codeInspection.ex.DescriptorComposer.composeDescription(DescriptorComposer.java:195)
at com.intellij.codeInspection.ex.DescriptorComposer.compose(DescriptorComposer.java:64)
at com.intellij.codeInspection.export.InspectionTreeHtmlWriter.lambda$null$0(InspectionTreeHtmlWriter.java:107)
at com.intellij.codeInspection.export.InspectionTreeHtmlWriter.traverseInspectionTree(InspectionTreeHtmlWriter.java:63)
at com.intellij.codeInspection.export.InspectionTreeHtmlWriter.traverseInspectionTree(InspectionTreeHtmlWriter.java:65)
at com.intellij.codeInspection.export.InspectionTreeHtmlWriter.traverseInspectionTree(InspectionTreeHtmlWriter.java:65)
at com.intellij.codeInspection.export.InspectionTreeHtmlWriter.traverseInspectionTree(InspectionTreeHtmlWriter.java:65)
at com.intellij.codeInspection.export.InspectionTreeHtmlWriter.lambda$serializeTreeToHtml$2(InspectionTreeHtmlWriter.java:84)
at com.intellij.codeInspection.export.InspectionTreeHtmlWriter.appendTree(InspectionTreeHtmlWriter.java:185)
at com.intellij.codeInspection.export.InspectionTreeHtmlWriter.serializeTreeToHtml(InspectionTreeHtmlWriter.java:72)
at com.intellij.codeInspection.export.InspectionTreeHtmlWriter.<init>(InspectionTreeHtmlWriter.java:54)
at com.intellij.codeInspection.ui.actions.ExportHTMLAction.lambda$null$0(ExportHTMLAction.java:102)
at com.intellij.openapi.application.impl.ApplicationImpl.runReadAction(ApplicationImpl.java:912)
at com.intellij.codeInspection.ui.actions.ExportHTMLAction.lambda$null$1(ExportHTMLAction.java:96)
at com.intellij.openapi.progress.impl.CoreProgressManager$2.run(CoreProgressManager.java:247)
at com.intellij.openapi.progress.impl.CoreProgressManager$TaskRunnable.run(CoreProgressManager.java:713)
at com.intellij.openapi.progress.impl.CoreProgressManager$5.run(CoreProgressManager.java:397)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcess$1(CoreProgressManager.java:157)
at com.intellij.openapi.progress.impl.CoreProgressManager.registerIndicatorAndRun(CoreProgressManager.java:543)
at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:488)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:94)
at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:144)
at com.intellij.openapi.application.impl.ApplicationImpl.lambda$null$10(ApplicationImpl.java:575)
at com.intellij.openapi.application.impl.ApplicationImpl$1.run(ApplicationImpl.java:315)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Solved by updating the Gradle build file and the Android Studio version.
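For reference, the fix was of this shape in the project-level build.gradle (the version number here is a placeholder, not the exact release used):
buildscript {
    dependencies {
        // placeholder version: point to a newer Android Gradle plugin release
        classpath 'com.android.tools.build:gradle:3.0.1'
    }
}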

Error when trying to load JDBC sink connector

I am trying, unsuccessfully, to stream data from a Kafka topic to a MySQL database. Although the source connector works fine (i.e., streaming data from a MySQL database to a Kafka topic), the sink connector fails to load.
Here is my sink-mysql.properties file:
name=sink-mysql
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=test-mysql-jdbc-foobar
connection.url=jdbc:mysql://127.0.0.1:3306/demo?user=user1&password=user1pass
auto.create=true
When I try to execute
./bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/sink-mysql.properties
the following errors are reported:
[2018-02-01 16:17:43,019] ERROR WorkerSinkTask{id=sink-mysql-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. (org.apache.kafka.connect.runtime.WorkerSinkTask:515)
org.apache.kafka.connect.errors.ConnectException: No fields found using key and value schemas for table: test-mysql-jdbc-foobar
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:127)
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:64)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:71)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:66)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:69)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:288)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:198)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:166)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2018-02-01 16:17:43,020] ERROR WorkerSinkTask{id=sink-mysql-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:172)
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:517)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:288)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:198)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:166)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.connect.errors.ConnectException: No fields found using key and value schemas for table: test-mysql-jdbc-foobar
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:127)
at io.confluent.connect.jdbc.sink.metadata.FieldsMetadata.extract(FieldsMetadata.java:64)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:71)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:66)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:69)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:495)
... 10 more
[2018-02-01 16:17:43,021] ERROR WorkerSinkTask{id=sink-mysql-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:173)
Note that the topic test-mysql-jdbc-foobar contains data streamed from MySQL to Kafka; however, I am not able to stream this data from Kafka back into MySQL. The content of sink-mysql.properties looks identical to the one used in Confluent's official documentation, but it doesn't seem to work. Also, the mysql-connector JAR is placed in the right directory (under share/java/kafka-connect-jdbc/).
EDIT
Here is the content of my worker config file:
bootstrap.servers=localhost:9092
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
# Local storage file for offset data
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=share/java
To be able to use the JDBC Sink, your messages must have a schema. This can be done by using Avro with the Schema Registry, or JSON with embedded schemas. In your worker config you've specified:
key.converter.schemas.enable=false
value.converter.schemas.enable=false
This means that the JSON will not contain the schemas.
Here's an example of the JSON that Kafka Connect will produce (as a source) and expect (as a sink) if you enable schemas: https://gist.github.com/rmoff/2b922fd1f9baf3ba1d66b98e9dd7b364
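In outline, once both schemas.enable settings are switched to true, each message is wrapped in a schema/payload envelope like this (a minimal sketch; the struct name and field names are illustrative):
{
  "schema": {
    "type": "struct",
    "name": "foobar",
    "optional": false,
    "fields": [
      { "type": "string", "field": "c1", "optional": false }
    ]
  },
  "payload": { "c1": "some value" }
}
The sink derives the table columns from the "fields" list, which is exactly what the "No fields found" error above was complaining about.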

How to start Drill in distributed mode on the Windows operating system

I am using Apache Drill 1.9 (the latest version) on Windows 10.
I want to start Drill in distributed mode.
I have configured the ZooKeeper zoo.cfg file:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=F:/zookeepertest/data
clientPort=2181
server.1=192.589.XX.01:2888:3888
server.2=192.565.XX.02:2888:3888
And in the Drill folder, drill-override.conf:
drill.exec: {
cluster-id: "test",
zk.connect: "192.589.XX.01:2181,192.565.XX.02:2181"
}
And my ZooKeeper is running.
Now, when I try to start Drill using this command:
sqlline.bat -u "jdbc:drill:zk=192.589.XX.01:2181"
it throws the following error:
Error: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client. (state=,code=0)
java.sql.SQLException: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:161)
at org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:70)
at org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:69)
at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:143)
at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
at sqlline.Commands.connect(Commands.java:1083)
at sqlline.Commands.connect(Commands.java:1015)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:742)
at sqlline.SqlLine.initArgs(SqlLine.java:528)
at sqlline.SqlLine.begin(SqlLine.java:596)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:245)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:154)
... 18 more
Caused by: java.io.IOException: Failure to connect to the zookeeper cluster service within the allotted time of 10000 milliseconds.
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCoordinator.java:123)
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:243)
... 19 more
Can anyone tell me how to start Drill in distributed mode on Windows?
From the Drill documentation:
To start Drill in distributed mode, use drillbit.sh, not sqlline, i.e.
drillbit.sh start
Since drillbit.sh is a shell script (not a Windows batch job), you'll need a third-party shell scripting tool such as Cygwin or, since you're using Windows 10, you can also enable Bash on Ubuntu (WSL).
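A rough sketch of the sequence (the install path is hypothetical; every node needs the same drill-override.conf):
cd /cygdrive/f/apache-drill-1.9.0/bin   # hypothetical install location, via Cygwin
./drillbit.sh start                     # start a drillbit on each node
# once the drillbits are up, connect from a Windows prompt:
sqlline.bat -u "jdbc:drill:zk=192.589.XX.01:2181,192.565.XX.02:2181"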

Analyze code with SonarLint for IntelliJ

I have installed the latest SonarLint version 2 for IntelliJ and configured it to work with my SonarQube server version 5.2. I also bound SonarLint to my project in SonarQube.
When I try to run an analysis on one file or on the whole project, I get the error below:
Error running SonarLint analysis
java.lang.NullPointerException
at org.sonarlint.intellij.analysis.SonarLintAnalysisConfigurator.analyzeModule(SonarLintAnalysisConfigurator.java:90)
at org.sonarlint.intellij.analysis.SonarLintTask.run(SonarLintTask.java:93)
at com.intellij.openapi.progress.impl.CoreProgressManager$TaskRunnable.run(CoreProgressManager.java:563)
at com.intellij.openapi.progress.impl.CoreProgressManager$2.run(CoreProgressManager.java:152)
at com.intellij.openapi.progress.impl.CoreProgressManager.a(CoreProgressManager.java:452)
at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:402)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:54)
at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:137)
at com.intellij.openapi.progress.impl.ProgressManagerImpl$1.run(ProgressManagerImpl.java:126)
at com.intellij.openapi.application.impl.ApplicationImpl$8.run(ApplicationImpl.java:400)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at org.jetbrains.ide.PooledThreadExecutor$1$1.run(PooledThreadExecutor.java:56)
Any idea what is wrong?
SonarLint 2.0 is broken in IDEA 14.
A bug fix will be released soon.
I've just created a ticket: https://jira.sonarsource.com/browse/SLI-62.

Unable to connect to host MySQL database on application deployed to CloudBees

I followed the instructions here, but when I attempted it I got the following error:
hudson.util.IOException2: remote file operation failed: /scratch/jenkins/workspace/Xinco Demo Publish/Xinco/target/Xinco-2012-08-30_00-20-05.war at hudson.remoting.Channel#1fc6bdea:s-50b0ae50
at hudson.FilePath.act(FilePath.java:783)
at hudson.FilePath.act(FilePath.java:769)
at com.cloudbees.plugins.deployer.DeployPublisher.perform(DeployPublisher.java:108)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:707)
at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:682)
at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:660)
at hudson.model.Build$RunnerImpl.post2(Build.java:162)
at hudson.model.AbstractBuild$AbstractRunner.post(AbstractBuild.java:629)
at hudson.model.Run.run(Run.java:1433)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:238)
Caused by: hudson.remoting.ProxyException: hudson.util.IOException2: Server.InternalError - Invalid WEB-INF/cloudbees-web.xml: resource
at com.cloudbees.plugins.deployer.deployables.Deployable.deployFile(Deployable.java:151)
at com.cloudbees.plugins.deployer.deployables.Deployable$DeployFileCallable.invoke(Deployable.java:342)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2048)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:287)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: hudson.remoting.ProxyException: com.cloudbees.api.BeesClientException: Server.InternalError - Invalid WEB-INF/cloudbees-web.xml: resource
at com.cloudbees.api.BeesClient.readResponse(BeesClient.java:850)
at com.cloudbees.api.BeesClient.applicationDeployArchive(BeesClient.java:435)
at com.cloudbees.plugins.deployer.deployables.Deployable.deployFile(Deployable.java:123)
... 11 more
Build step 'Deploy to CloudBees' marked build as failure
The full output can be seen here.
Caused by: hudson.remoting.ProxyException: com.cloudbees.api.BeesClientException: Server.InternalError - Invalid WEB-INF/cloudbees-web.xml: resource
Your cloudbees-web.xml doesn't follow the correct format.
See http://wiki.cloudbees.com/bin/view/RUN/CloudBeesWebXml - the cloudbees-web.xml needs to be wrapped in an outer <cloudbees-web-app> element.
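A rough skeleton, reconstructed from the old CloudBees docs (the appid, resource name, and param values are placeholders):
<?xml version="1.0"?>
<cloudbees-web-app xmlns="http://www.cloudbees.com/xml/webapp">
  <appid>myaccount/myapp</appid>
  <!-- placeholder resource: a named MySQL datasource -->
  <resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource">
    <param name="username" value="dbuser"/>
    <param name="password" value="dbpass"/>
    <param name="url" value="jdbc:cloudbees://mydb"/>
  </resource>
</cloudbees-web-app>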
You also don't have to use cloudbees-web.xml if you don't want to: if you bind your app to a DB, it will be made available as a named datasource automatically
(see the bees app:bind command; a hypothetical invocation is sketched below).
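Something like:
# hypothetical invocation - the exact flags may differ, check `bees help app:bind`
bees app:bind -a myaccount/myapp -db mydb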
You only have to do this once, and then the app will know about the datasource.
http://developer.cloudbees.com/bin/view/RUN/Resource+Management
and
https://developer.cloudbees.com/bin/view/RUN/DatabaseGuide
(sorry, still working on docs).