I'm testing a search function in the booking system I recently wrote for my first-semester project. My idea for testing it was: let the search method return an ArrayList of the customers it found, fetch the list of all customers, and assert that the list of all customers contains every customer in the search results.
Note: beforehand, I create a specific entry which I know the search will find (hopefully, right?).
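In outline, the test does roughly this (Customer, registry and the method names here are placeholders, not the real ones from my project):
public void testOnEnter() {
    // the specific entry I create beforehand, so the search should find it
    registry.addCustomer(new Customer("Jane", "Doe"));

    ArrayList<Customer> searchResults = registry.searchCustomers("Jane");
    ArrayList<Customer> allCustomers = registry.getAllCustomers();

    // every customer found by the search should also be in the list of all customers
    assertTrue(allCustomers.containsAll(searchResults));
}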
My problem is that when I call -
assertTrue(allCustomers.containsAll(searchResults));
- I get an error:
junit.framework.AssertionFailedError
at TestPackages.newBookingControllerTest.testOnEnter(newBookingControllerTest.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:74)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:211)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Can anyone explain why this happens? I can add some of the source code to the question if you need it. Thanks!
assertEquals takes two parameters, so you may see a compilation error.
You should be using
assertTrue(allCustomers.contains(searchResults)); // meaning allCustomers indeed contains searchResults
OR
assertEquals(allCustomers.contains(searchResults), true); // meaning the boolean returned by the contains API is compared with true
Related
I'm using Spark Notebook to process a huge number of tweets. My final result is stored in a dataframe. When I try to export it as a CSV file, I get an error, even though I used the same method before to export a smaller dataset. Any suggestions, please?
Spark Notebook 0.7.0
Scala 2.10.6
Spark 2.0.1
Hadoop 2.7.2
Scala code for exporting the dataframe
df.coalesce(1)
.write.format("com.databricks.spark.csv")
.option("header", "true")
.save("/home/shoeb/results")
Error Message
org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:149)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:115)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:60)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:58)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:510)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:194)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:98)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:105)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:107)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:109)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:111)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:113)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:115)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:117)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:119)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:121)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:123)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:125)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:127)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:129)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:131)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:133)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:135)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:137)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:139)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:141)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:143)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:145)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:147)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:149)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:151)
at $iwC$$iwC$$iwC.<init>(<console>:153)
at $iwC$$iwC.<init>(<console>:155)
at $iwC.<init>(<console>:157)
at <init>(<console>:159)
at .<init>(<console>:163)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1046)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1327)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:822)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:853)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:801)
at notebook.kernel.Repl$$anonfun$6.apply(Repl.scala:202)
at notebook.kernel.Repl$$anonfun$6.apply(Repl.scala:202)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at scala.Console$.withOut(Console.scala:126)
at notebook.kernel.Repl.evaluate(Repl.scala:201)
at notebook.client.ReplCalculator$$anonfun$10$$anon$1$$anonfun$27.replEvaluate$1(ReplCalculator.scala:402)
at notebook.client.ReplCalculator$$anonfun$10$$anon$1$$anonfun$27.apply(ReplCalculator.scala:415)
at notebook.client.ReplCalculator$$anonfun$10$$anon$1$$anonfun$27.apply(ReplCalculator.scala:396)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 16 in stage 5.0 failed 1 times, most recent failure: Lost task 16.0 in stage 5.0 (TID 98, localhost): java.io.FileNotFoundException: /tmp/blockmgr-bacb743a-120b-4b06-aecc-606755c69cf8/2a/temp_shuffle_3b5b834c-6477-4849-8915-1f7afaff6f14 (Too many open files)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:88)
at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:181)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:150)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1667)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1890)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1903)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1923)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:143)
... 77 more
Caused by: java.io.FileNotFoundException: /tmp/blockmgr-bacb743a-120b-4b06-aecc-606755c69cf8/2a/temp_shuffle_3b5b834c-6477-4849-8915-1f7afaff6f14 (Too many open files)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:88)
at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:181)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:150)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
TL;DR: Don't use coalesce(1).
While it can be useful when the data is small enough, coalesce(1) moves all of the data to a single worker node and, because it avoids a shuffle, this single-partition bottleneck can propagate back upstream towards the source of the computation.
It is therefore inherently unscalable and should be avoided. Almost all tools these days can read multiple files one way or another, and if you really need a single file, merging the output with a dedicated utility is a much better choice.
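As a rough sketch (reusing the path from the question), simply dropping coalesce(1) lets every task write its own part file in parallel:
df.write
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .save("/home/shoeb/results")
If you really do need a single CSV afterwards, merge the part files outside of Spark, for example with hadoop fs -getmerge /home/shoeb/results results.csv.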
How can I avoid WARN messages being displayed in the logs (without setting the log4j level to ERROR) when I launch Confluent?
I have set the plugin.path variable in the properties file to ${CONFLUENT_HOME}/share/java/kafka-connect-jdbc (with a trailing comma).
I also tried putting the kafka-connect-jdbc directory on the classpath, without success.
The following is just a small excerpt from the log file:
[2018-07-10 15:40:30,168] INFO Reflections took 1 ms to scan 1 urls, producing 5 keys and 6 values [using 1 cores] (org.reflections.Reflections)
[2018-07-10 15:40:30,170] WARN could not get type for name org.jmock.Mockery from any class loader (org.reflections.Reflections)
org.reflections.ReflectionsException: could not get type for name org.jmock.Mockery
at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:390)
at org.reflections.Reflections.expandSuperTypes(Reflections.java:381)
at org.reflections.Reflections.<init>(Reflections.java:126)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader$InternalReflections.<init>(DelegatingClassLoader.java:365)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:277)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:216)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:208)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:177)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:154)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:56)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:77)
Caused by: java.lang.ClassNotFoundException: org.jmock.Mockery
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:388)
... 10 more
This does not seem to cause any issues, but it is confusing to read in the logs.
Do you have any suggestions about that?
Thanks in advance,
Diego
You can set the log level for a particular package in the log4j config file:
log4j.logger.org.reflections=ERROR
This helped me
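For reference, a minimal sketch of where that line goes; in the Confluent distribution the Connect worker usually reads etc/kafka/connect-log4j.properties:
log4j.rootLogger=INFO, stdout
# keep everything else at INFO, only silence the Reflections scanner
log4j.logger.org.reflections=ERROR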
I need to populate a collection in MongoDB with values obtained from web services that provide JSON, but I'm having trouble building the job because the URIs of some of the web services depend on values that can only be obtained from other web services.
For example, the URI http://172.31.15.180:80/ws/getAgenciasUF/52 returns a JSON collection in this format:
{ "COD_AGENCIA", "521800300", "NAME", "PORANGATU", "UORG": "902", "INTRA_MUNICIPAL": "0"},
{ "COD_AGENCIA", "521830000", "NAME", "HOLD", "UORG": "904", "INTRA_MUNICIPAL": "0"}
...
(20 other values)
...
Using this web service alone, I was able to insert into a MongoDB collection using the tREST and tExtractJSONFields components.
However, there is another web service whose URI is http://172.31.15.180:80/ws/getCidadesPorAgencia/521800300, where the last segment is one of the COD_AGENCIA values available in the JSON above. That is, if I read the COD_AGENCIA values and feed them to some component that iterates over them and calls the second URI once per code, varying only that segment, I could get all the values needed to feed another MongoDB collection.
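In plain Java terms, the logic I am trying to reproduce in the job is roughly this (get, extractCodAgencias and insertIntoMongo are just placeholder helpers to illustrate the idea, not real components):
// placeholder helpers: get() performs an HTTP GET, extractCodAgencias() parses the JSON,
// insertIntoMongo() writes one document into the target collection
for (String codAgencia : extractCodAgencias(get("http://172.31.15.180:80/ws/getAgenciasUF/52"))) {
    String cidadesJson = get("http://172.31.15.180:80/ws/getCidadesPorAgencia/" + codAgencia);
    insertIntoMongo("cidades", cidadesJson);
}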
Using ESB TOS 6.2.1, I tried to connect a tREST to a tExtractJSONFields, and that to a tRESTRequest, like this:
but I received the following error:
[statistics] connecting to socket on port 3587
[statistics] connected
[WARN ]: org.eclipse.jetty.util.component.AbstractLifeCycle - FAILED SelectChannelConnector#172.31.15.180:80: java.net.BindException: Cannot assign requested address: bind
java.net.BindException: Cannot assign requested address: bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Unknown Source)
at sun.nio.ch.Net.bind(Unknown Source)
at sun.nio.ch.ServerSocketChannelImpl.bind(Unknown Source)
at sun.nio.ch.ServerSocketAdaptor.bind(Unknown Source)
at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.Server.doStart(Server.java:293)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.apache.cxf.transport.http_jetty.JettyHTTPServerEngine.addServant(JettyHTTPServerEngine.java:472)
at org.apache.cxf.transport.http_jetty.JettyHTTPDestination.activate(JettyHTTPDestination.java:175)
at org.apache.cxf.transport.AbstractObservable.setMessageObserver(AbstractObservable.java:53)
at org.apache.cxf.binding.AbstractBindingFactory.addListener(AbstractBindingFactory.java:95)
at org.apache.cxf.jaxrs.JAXRSBindingFactory.addListener(JAXRSBindingFactory.java:88)
at org.apache.cxf.endpoint.ServerImpl.start(ServerImpl.java:123)
at org.apache.cxf.jaxrs.JAXRSServerFactoryBean.create(JAXRSServerFactoryBean.java:206)
at bdogo.teste_0_1.teste$Thread4RestServiceProviderEndpoint.run(teste.java:791)
[WARN ]: org.eclipse.jetty.util.component.AbstractLifeCycle - FAILED org.eclipse.jetty.server.Server#13df084: java.net.BindException: Cannot assign requested address: bind
java.net.BindException: Cannot assign requested address: bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Unknown Source)
at sun.nio.ch.Net.bind(Unknown Source)
at sun.nio.ch.ServerSocketChannelImpl.bind(Unknown Source)
at sun.nio.ch.ServerSocketAdaptor.bind(Unknown Source)
at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.Server.doStart(Server.java:293)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.apache.cxf.transport.http_jetty.JettyHTTPServerEngine.addServant(JettyHTTPServerEngine.java:472)
at org.apache.cxf.transport.http_jetty.JettyHTTPDestination.activate(JettyHTTPDestination.java:175)
at org.apache.cxf.transport.AbstractObservable.setMessageObserver(AbstractObservable.java:53)
at org.apache.cxf.binding.AbstractBindingFactory.addListener(AbstractBindingFactory.java:95)
at org.apache.cxf.jaxrs.JAXRSBindingFactory.addListener(JAXRSBindingFactory.java:88)
at org.apache.cxf.endpoint.ServerImpl.start(ServerImpl.java:123)
at org.apache.cxf.jaxrs.JAXRSServerFactoryBean.create(JAXRSServerFactoryBean.java:206)
at bdogo.teste_0_1.teste$Thread4RestServiceProviderEndpoint.run(teste.java:791)
[ERROR]: org.apache.cxf.transport.http_jetty.JettyHTTPServerEngine - Could not start Jetty server on port 80: Cannot assign requested address: bind
org.apache.cxf.service.factory.ServiceConstructionException
at org.apache.cxf.jaxrs.JAXRSServerFactoryBean.create(JAXRSServerFactoryBean.java:219)
at bdogo.teste_0_1.teste$Thread4RestServiceProviderEndpoint.run(teste.java:791)
Caused by: org.apache.cxf.interceptor.Fault: Could not start Jetty server on port 80: Cannot assign requested address: bind
at org.apache.cxf.transport.http_jetty.JettyHTTPServerEngine.addServant(JettyHTTPServerEngine.java:483)
at org.apache.cxf.transport.http_jetty.JettyHTTPDestination.activate(JettyHTTPDestination.java:175)
at org.apache.cxf.transport.AbstractObservable.setMessageObserver(AbstractObservable.java:53)
at org.apache.cxf.binding.AbstractBindingFactory.addListener(AbstractBindingFactory.java:95)
at org.apache.cxf.jaxrs.JAXRSBindingFactory.addListener(JAXRSBindingFactory.java:88)
at org.apache.cxf.endpoint.ServerImpl.start(ServerImpl.java:123)
at org.apache.cxf.jaxrs.JAXRSServerFactoryBean.create(JAXRSServerFactoryBean.java:206)
... 1 more
Caused by: java.net.BindException: Cannot assign requested address: bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Unknown Source)
at sun.nio.ch.Net.bind(Unknown Source)
at sun.nio.ch.ServerSocketChannelImpl.bind(Unknown Source)
at sun.nio.ch.ServerSocketAdaptor.bind(Unknown Source)
at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.Server.doStart(Server.java:293)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.apache.cxf.transport.http_jetty.JettyHTTPServerEngine.addServant(JettyHTTPServerEngine.java:472)
... 7 more
If I remove tRESTRequest and link tExtractJSONFields directly to a tLog, this error doesn't happen and the output of tExtractJSONFields is listed on the console. The tRESTRequest component is new to me, so I suspect I am doing something wrong with it. Is this really how it is supposed to be used (see figure below)?
Note that there is a warning on tExtractJSONFields (the text says: "This component has not enough Row type outputs"). The following figures show how the components were configured.
Could someone help me configure it using the values received from tExtractJSONFields?
What am I doing wrong? Is there another way to get the desired result?
Your approach is fine; you just seem to be missing a few components.
Job design
Get list of COD_AGENCIA
The components tREST and tExtractJSONFields are well suited for this.
At this point you should have a list of all the values you got with the first REST call.
Now for the connection between the first and the second call.
Get the list of cities (getCidadesPorAgencia) per COD_AGENCIA
Here you could again use a tREST component and maybe a tExtractJSONFields. This is also the spot to use tMongoDBOutput.
To connect both of those requests, use a tFlowToIterate component after the first tExtractJSONFields.
Add a key/value entry for COD_AGENCIA and connect the second tREST with an Iterate link. In the second tREST you will then have access to a global variable named after that key. Use it in the call, e.g.: "http://172.31.15.180:80/ws/getCidadesPorAgencia/" + globalMap.get("var_agencia")
Now you should be able to loop through every agency and get all the cities connected to it.
Reason
What you did was a great idea but used the wrong connector. OnComponentOk waits until the connected component finishes without an error; only then is the next component started. No data is transmitted, and no row iteration happens, which seems to be the key point here.
I have used the Kibana Timelion plugin to run a query and build some time-series data.
This is my query:
.es(index='linux_cpu-*', metric='avg:CPU(%)', split='Hostname:5')
I can't find anything wrong with this query, but I could not understand why it did not generate the desired result. Then, when I cross-checked with the Elasticsearch log, I found the following search parse error:
[2016-06-26 23:32:43,290][DEBUG][action.search ] [Bentley Wittman] [linux_cpu-2015.01.21][4], node[rtIRvgffRha3nFqYhjh8YA], [P], v[2], s[STARTED], a[id=xIKFCsvnRKaGqt7XyJtHNg]: Failed to execute [org.elasticsearch.action.search.SearchRequest#148c67f] lastShard [true]
RemoteTransportException[[Bentley Wittman][172.16.1.238:9300][indices:data/read/search[phase/query]]]; nested: SearchParseException[failed to parse search source [{"query":{"bool":{"must":[{"range":{"#timestamp":{"gte":1420096382918,"lte":1422689282919,"format":"epoch_millis"}}}],"must_not":[],"filter":{}}},"aggs":{"q":{"meta":{"type":"split"},"filters":{"filters":{"*":{"query_string":{"query":"*"}}}},"aggs":{"Hostname":{"meta":{"type":"split"},"terms":{"field":"Hostname","size":5},"aggs":{"time_buckets":{"meta":{"type":"time_buckets"},"date_histogram":{"field":"#timestamp","interval":"1w","time_zone":"Asia/Shanghai","extended_bounds":{"min":1420096382918,"max":1422689282919},"min_doc_count":0},"aggs":{"avg(CPU(%))":{"avg":{"field":"CPU(%)"}}}}}}}}},"size":0}]]; nested: IllegalStateException[Field data loading is forbidden on [Hostname]];
Caused by: SearchParseException[failed to parse search source [{"query":{"bool":{"must":[{"range":{"#timestamp":{"gte":1420096382918,"lte":1422689282919,"format":"epoch_millis"}}}],"must_not":[],"filter":{}}},"aggs":{"q":{"meta":{"type":"split"},"filters":{"filters":{"*":{"query_string":{"query":"*"}}}},"aggs":{"Hostname":{"meta":{"type":"split"},"terms":{"field":"Hostname","size":5},"aggs":{"time_buckets":{"meta":{"type":"time_buckets"},"date_histogram":{"field":"#timestamp","interval":"1w","time_zone":"Asia/Shanghai","extended_bounds":{"min":1420096382918,"max":1422689282919},"min_doc_count":0},"aggs":{"avg(CPU(%))":{"avg":{"field":"CPU(%)"}}}}}}}}},"size":0}]]; nested: IllegalStateException[Field data loading is forbidden on [Hostname]];
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:855)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:654)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:620)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:371)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)
at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.IllegalStateException: Field data loading is forbidden on [Hostname]
at org.elasticsearch.index.fielddata.IndexFieldDataService$1.build(IndexFieldDataService.java:74)
at org.elasticsearch.index.fielddata.IndexFieldDataService.getForField(IndexFieldDataService.java:275)
at org.elasticsearch.search.aggregations.support.ValuesSourceParser.config(ValuesSourceParser.java:209)
at org.elasticsearch.search.aggregations.bucket.terms.TermsParser.parse(TermsParser.java:76)
at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:198)
at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:176)
at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:103)
at org.elasticsearch.search.aggregations.AggregationParseElement.parse(AggregationParseElement.java:60)
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:838)
... 12 more
Does anyone know what that means and how I should solve it?
When I attempt to save a job that runs code coverage tests and is configured to publish an rcov report, I get the error message listed below and the changes I made aren't saved. This problem cropped up with Hudson version 1.362 and still exists in 1.363. If I uncheck the "Publish coverage report" checkbox, the job can be saved.
Status Code: 500
Exception:
Stacktrace:
java.lang.InstantiationError: hudson.plugins.rubyMetrics.rcov.model.MetricTarget
at org.kohsuke.stapler.RequestImpl.bindParametersToList(RequestImpl.java:271)
at hudson.plugins.rubyMetrics.rcov.RcovPublisher$DescriptorImpl.newInstance(RcovPublisher.java:143)
at hudson.plugins.rubyMetrics.rcov.RcovPublisher$DescriptorImpl.newInstance(RcovPublisher.java:104)
at hudson.util.DescribableList.rebuild(DescribableList.java:147)
at hudson.model.Project.submit(Project.java:198)
at hudson.model.FreeStyleProject.submit(FreeStyleProject.java:97)
at hudson.model.Job.doConfigSubmit(Job.java:1050)
at hudson.model.AbstractProject.doConfigSubmit(AbstractProject.java:555)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.kohsuke.stapler.Function$InstanceFunction.invoke(Function.java:235)
at org.kohsuke.stapler.Function.bindAndInvoke(Function.java:116)
at org.kohsuke.stapler.Function.bindAndInvokeAndServeResponse(Function.java:57)
at org.kohsuke.stapler.MetaClass$1.doDispatch(MetaClass.java:75)
at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:30)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:525)
at org.kohsuke.stapler.MetaClass$6.doDispatch(MetaClass.java:181)
at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:30)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:525)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:441)
at org.kohsuke.stapler.Stapler.service(Stapler.java:123)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:45)
at winstone.ServletConfiguration.execute(ServletConfiguration.java:249)
at winstone.RequestDispatcher.forward(RequestDispatcher.java:335)
at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:378)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:94)
at hudson.util.PluginServletFilter.doFilter(PluginServletFilter.java:86)
at winstone.FilterConfiguration.execute(FilterConfiguration.java:195)
at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:368)
at hudson.security.csrf.CrumbFilter.doFilter(CrumbFilter.java:47)
at winstone.FilterConfiguration.execute(FilterConfiguration.java:195)
at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:368)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:84)
at hudson.security.ChainedServletFilter.doFilter(ChainedServletFilter.java:76)
at hudson.security.HudsonFilter.doFilter(HudsonFilter.java:164)
at winstone.FilterConfiguration.execute(FilterConfiguration.java:195)
at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:368)
at winstone.RequestDispatcher.forward(RequestDispatcher.java:333)
at winstone.RequestHandlerThread.processRequest(RequestHandlerThread.java:244)
at winstone.RequestHandlerThread.run(RequestHandlerThread.java:150)
at java.lang.Thread.run(Thread.java:619)
Does anyone have a good solution? Thanks.
See Hudson Bugreport 6808