I'm stumped. I'm trying to implement persistence with Drools Flow, and I'd like to grab the value of a property on a work item / process instance, but every time I try to get the work item or process instance I end up with the stack trace below.
I'm walking through the source, and from what I can tell this happens any time I try to grab a property that is annotated with @Lob in an entity class.
My environment is Hibernate/MySQL/JPA persistence, with BTM (Bitronix) as the transaction manager.
I'm calling getProcessInstance as follows:
ksession = JPAKnowledgeService.loadStatefulKnowledgeSession(ksession.getId(), m_kbase, null, m_environment);
m_pi = ksession.getProcessInstance(m_pi.getId());
What am I doing wrong?
java.lang.NullPointerException
at java.io.ByteArrayInputStream.&lt;init&gt;(ByteArrayInputStream.java:89)
at org.drools.persistence.processinstance.ProcessInstanceInfo.getProcessInstance(ProcessInstanceInfo.java:135)
at org.drools.persistence.processinstance.JPAProcessInstanceManager.getProcessInstance(JPAProcessInstanceManager.java:62)
at org.drools.common.AbstractWorkingMemory.getProcessInstance(AbstractWorkingMemory.java:1793)
at org.drools.impl.StatefulKnowledgeSessionImpl.getProcessInstance(StatefulKnowledgeSessionImpl.java:261)
at org.drools.command.runtime.process.GetProcessInstanceCommand.execute(GetProcessInstanceCommand.java:29)
at org.drools.command.runtime.process.GetProcessInstanceCommand.execute(GetProcessInstanceCommand.java:12)
at org.drools.persistence.session.SingleSessionCommandService.execute(SingleSessionCommandService.java:254)
at org.drools.command.impl.CommandBasedStatefulKnowledgeSession.getProcessInstance(CommandBasedStatefulKnowledgeSession.java:95)
at com.thoughtvine.tracks.workflow.TracksFlow.getParameterFromWorkItem(TracksFlow.java:182)
at com.thoughtvine.tracks.workflow.test.TestTracksFlow.testGetWorkItem(TestTracksFlow.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
I'm not an expert on these things, but LOBs are often treated quite differently. Is it possible your LOB's actual retrieval is being deferred (lazily loaded), so you get a null when trying to read from it (with the ByteArrayInputStream)?
It might be a setting in the JPA config... As I said, this is a theoretical possibility.
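If lazy loading does turn out to be the culprit, a minimal sketch of one thing to try, assuming the entity looks roughly like this (the class and field names here are made up for illustration), is to ask the provider to fetch the LOB eagerly:
import javax.persistence.Basic;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.Lob;

@Entity
public class ProcessData {  // hypothetical entity, for illustration only

    @Id
    private long id;

    // Some providers defer loading @Lob columns; FetchType.EAGER asks
    // the provider to load the bytes together with the rest of the row.
    @Lob
    @Basic(fetch = FetchType.EAGER)
    private byte[] payload;
}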
I'm using Spark Notebook to process a huge number of tweets. My final result is stored in a dataframe. When I try to export it as a CSV file, it throws an error, even though I've used the same method before to export smaller datasets. Any suggestions, please?
Spark Notebook 0.7.0
Scala 2.10.6
Spark 2.0.1
Hadoop 2.7.2
Scala code for exporting the dataframe:
df.coalesce(1)
.write.format("com.databricks.spark.csv")
.option("header", "true")
.save("/home/shoeb/results")
Error Message
org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:149)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:115)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:60)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:58)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:510)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:194)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:98)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:105)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:107)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:109)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:111)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:113)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:115)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:117)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:119)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:121)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:123)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:125)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:127)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:129)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:131)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:133)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:135)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:137)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:139)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:141)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:143)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:145)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:147)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:149)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:151)
at $iwC$$iwC$$iwC.<init>(<console>:153)
at $iwC$$iwC.<init>(<console>:155)
at $iwC.<init>(<console>:157)
at <init>(<console>:159)
at .<init>(<console>:163)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1046)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1327)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:822)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:853)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:801)
at notebook.kernel.Repl$$anonfun$6.apply(Repl.scala:202)
at notebook.kernel.Repl$$anonfun$6.apply(Repl.scala:202)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at scala.Console$.withOut(Console.scala:126)
at notebook.kernel.Repl.evaluate(Repl.scala:201)
at notebook.client.ReplCalculator$$anonfun$10$$anon$1$$anonfun$27.replEvaluate$1(ReplCalculator.scala:402)
at notebook.client.ReplCalculator$$anonfun$10$$anon$1$$anonfun$27.apply(ReplCalculator.scala:415)
at notebook.client.ReplCalculator$$anonfun$10$$anon$1$$anonfun$27.apply(ReplCalculator.scala:396)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 16 in stage 5.0 failed 1 times, most recent failure: Lost task 16.0 in stage 5.0 (TID 98, localhost): java.io.FileNotFoundException: /tmp/blockmgr-bacb743a-120b-4b06-aecc-606755c69cf8/2a/temp_shuffle_3b5b834c-6477-4849-8915-1f7afaff6f14 (Too many open files)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:88)
at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:181)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:150)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1441)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1441)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1667)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1622)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1611)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1890)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1903)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1923)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:143)
... 77 more
Caused by: java.io.FileNotFoundException: /tmp/blockmgr-bacb743a-120b-4b06-aecc-606755c69cf8/2a/temp_shuffle_3b5b834c-6477-4849-8915-1f7afaff6f14 (Too many open files)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:88)
at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:181)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:150)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
TL;DR: Don't use coalesce(1).
While useful in some cases where the data is small enough, coalesce(1) moves all the data to a single worker node and, because it avoids a shuffle, can push that single-partition constraint upstream into the earlier stages as well.
It is therefore inherently not scalable and should be avoided. Almost all tools these days can, one way or another, read multiple files, and if you really need a single file, using a dedicated utility to merge the parts afterwards is a much better choice.
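A rough sketch of that alternative, shown here with the Java API (the equivalent Scala calls are the same; the output and merge paths below are made up):
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class CsvExport {
    public static void exportCsv(Dataset<Row> df) throws IOException {
        // Write the result as multiple part files: without coalesce(1),
        // each task writes its own partition in parallel.
        df.write()
          .format("csv")             // CSV is a built-in source in Spark 2.x
          .option("header", "true")
          .save("/home/shoeb/results");

        // If one file is really required, merge the parts afterwards with a
        // dedicated utility, e.g. Hadoop's copyMerge (available in Hadoop 2.x):
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        FileUtil.copyMerge(fs, new Path("/home/shoeb/results"),
                           fs, new Path("/home/shoeb/results.csv"),
                           false, conf, null);
    }
}
One caveat: with header set to true, every part file carries its own header row, so a plain concatenation repeats the header; writing without headers and prepending a single header line afterwards avoids that.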
How can I avoid WARN messages being displayed in the logs (without setting the log4j level to ERROR) when I launch Confluent?
I have set my plugin.path variable in the properties file to ${CONFLUENT_HOME}/share/java/kafka-connect-jdbc (with a trailing comma).
I also tried putting the kafka-connect-jdbc directory on the classpath, without success.
The following is just a small excerpt from the log file:
[2018-07-10 15:40:30,168] INFO Reflections took 1 ms to scan 1 urls, producing 5 keys and 6 values [using 1 cores] (org.reflections.Reflections)
[2018-07-10 15:40:30,170] WARN could not get type for name org.jmock.Mockery from any class loader (org.reflections.Reflections)
org.reflections.ReflectionsException: could not get type for name org.jmock.Mockery
at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:390)
at org.reflections.Reflections.expandSuperTypes(Reflections.java:381)
at org.reflections.Reflections.<init>(Reflections.java:126)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader$InternalReflections.&lt;init&gt;(DelegatingClassLoader.java:365)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:277)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:216)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:208)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:177)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:154)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:56)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:77)
Caused by: java.lang.ClassNotFoundException: org.jmock.Mockery
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:388)
... 10 more
This does not seem to cause any issues, but it can be confusing to read.
Have you got any suggestions about that?
Thanks in advance,
Diego
You can set the log level for a particular package in the log4j config file:
log4j.logger.org.reflections=ERROR
This helped me.
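If editing the config file isn't convenient, the same effect can be had programmatically through the log4j 1.x API (a sketch; it assumes log4j 1.x is on the classpath, which is what Kafka Connect shipped with at the time):
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// Raise the threshold for the chatty package only; every other logger
// keeps the level it was configured with.
Logger.getLogger("org.reflections").setLevel(Level.ERROR);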
I'm testing a search function in the booking system I recently coded for my first-semester project. My idea for testing it: the search method returns an ArrayList of the customers found by the search; I fetch the list of all customers; and if the list of all customers contains every customer found by the search, I assert that the search works.
Note: beforehand, I create a specific entry which I know the search will find (hopefully, right?).
My problem is that when I call
assertTrue(allCustomers.containsAll(searchResults));
I get this error:
junit.framework.AssertionFailedError
at TestPackages.newBookingControllerTest.testOnEnter(newBookingControllerTest.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:74)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:211)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Can anyone elaborate on why this happens? I can edit some source code in if you need. Thanks!
assertEquals takes two parameters, so you may see a compilation error.
You should be using
assertTrue(allCustomers.contains(searchResults)); // i.e. allCustomers indeed contains searchResults
OR
assertEquals(true, allCustomers.contains(searchResults)); // the boolean returned by contains is compared with true
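Separately, something worth checking (an assumption on my part, since the Customer class isn't shown): both contains and containsAll compare elements using equals(), so if Customer doesn't override equals() and hashCode(), two logically identical customers that were created or loaded separately will never match, and the assertion fails. A minimal sketch of a hypothetical Customer with the overrides in place:
import java.util.Objects;

// Hypothetical Customer class, for illustration only.
public class Customer {
    private final String name;
    private final String phone;

    public Customer(String name, String phone) {
        this.name = name;
        this.phone = phone;
    }

    // Without these overrides, List.contains()/containsAll() fall back to
    // reference equality, so a separately constructed copy of the same
    // customer will not be found in the list.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Customer)) return false;
        Customer other = (Customer) o;
        return Objects.equals(name, other.name) && Objects.equals(phone, other.phone);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, phone);
    }
}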
I want to remove the final keyword from one class. I'm writing JUnit tests for this class, and I call Mockito's spy to mock a method so it returns expected values to cover all cases.
After using:
CtClass ctClazz = ClassPool.getDefault().get(Livre.class.getName());
ctClazz.setModifiers(ctClazz.getModifiers() & ~Modifier.FINAL);
ctClazz.writeFile();
The modifier of this class is still 17 (i.e. PUBLIC | FINAL = 1 | 16, so FINAL is still set), but I think it should be 1 (public).
And I got the error:
Caused by: java.lang.IllegalArgumentException: Cannot subclass final class class mediatheque.document.Livre
at net.sf.cglib.proxy.Enhancer.generateClass(Enhancer.java:446)
at net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at net.sf.cglib.proxy.Enhancer.createHelper(Enhancer.java:377)
at net.sf.cglib.proxy.Enhancer.createClass(Enhancer.java:317)
at org.powermock.api.easymock.internal.signedsupport.SignedSupportingClassProxyFactory.createProxy(SignedSupportingClassProxyFactory.java:147)
at org.easymock.internal.MocksControl.createMock(MocksControl.java:40)
at org.easymock.classextension.internal.MocksClassControl.createMock(MocksClassControl.java:43)
at org.powermock.api.easymock.PowerMock.doMock(PowerMock.java:2076)
at org.powermock.api.easymock.PowerMock.createMock(PowerMock.java:79)
at org.powermock.api.easymock.PowerMock.createPartialMock(PowerMock.java:765)
at test.mediatheque.document.TestDocument.testRestituer_With_Exception3(TestDocument.java:379)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$2.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:217)
... 19 more
This final class extends an abstract class.
Thanks for any help.
You cannot extend (subclass) a final class. If you really need to do so (whether for testing or because you really need to override a method in someone else's library), you'll have to 'fork' the class: copy its contents into a new class (updating the package declaration and class references) and remove the final designation there. Then you can extend it to your heart's content. I've used this approach quite effectively recently with Spring Security.
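A side note on the Javassist attempt above (an observation, not a guaranteed fix): writeFile() only writes the patched .class file to disk; the JVM running the test keeps whatever definition it has already loaded, which would explain the modifier still reading 17. For the change to take effect in-process, the patched class has to be the one that gets loaded, along these lines:
import javassist.ClassPool;
import javassist.CtClass;
import javassist.Modifier;

// Patch and load the class *before* anything else references it:
// once the original has been loaded, toClass() fails with a
// duplicate class definition error.
CtClass ctClazz = ClassPool.getDefault().get("mediatheque.document.Livre");
ctClazz.setModifiers(ctClazz.getModifiers() & ~Modifier.FINAL);
Class<?> patched = ctClazz.toClass();  // defines the patched class in this JVM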
When I attempt to save a job that runs code coverage tests and is configured to publish an rcov report, I get the error message listed below and the changes I made aren't saved. This problem cropped up with Hudson version 1.362 and still exists in 1.363. If I uncheck the "Publish coverage report" checkbox, the job can be saved.
Status Code: 500
Exception:
Stacktrace:
java.lang.InstantiationError: hudson.plugins.rubyMetrics.rcov.model.MetricTarget
at org.kohsuke.stapler.RequestImpl.bindParametersToList(RequestImpl.java:271)
at hudson.plugins.rubyMetrics.rcov.RcovPublisher$DescriptorImpl.newInstance(RcovPublisher.java:143)
at hudson.plugins.rubyMetrics.rcov.RcovPublisher$DescriptorImpl.newInstance(RcovPublisher.java:104)
at hudson.util.DescribableList.rebuild(DescribableList.java:147)
at hudson.model.Project.submit(Project.java:198)
at hudson.model.FreeStyleProject.submit(FreeStyleProject.java:97)
at hudson.model.Job.doConfigSubmit(Job.java:1050)
at hudson.model.AbstractProject.doConfigSubmit(AbstractProject.java:555)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.kohsuke.stapler.Function$InstanceFunction.invoke(Function.java:235)
at org.kohsuke.stapler.Function.bindAndInvoke(Function.java:116)
at org.kohsuke.stapler.Function.bindAndInvokeAndServeResponse(Function.java:57)
at org.kohsuke.stapler.MetaClass$1.doDispatch(MetaClass.java:75)
at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:30)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:525)
at org.kohsuke.stapler.MetaClass$6.doDispatch(MetaClass.java:181)
at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:30)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:525)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:441)
at org.kohsuke.stapler.Stapler.service(Stapler.java:123)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:45)
at winstone.ServletConfiguration.execute(ServletConfiguration.java:249)
at winstone.RequestDispatcher.forward(RequestDispatcher.java:335)
at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:378)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:94)
at hudson.util.PluginServletFilter.doFilter(PluginServletFilter.java:86)
at winstone.FilterConfiguration.execute(FilterConfiguration.java:195)
at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:368)
at hudson.security.csrf.CrumbFilter.doFilter(CrumbFilter.java:47)
at winstone.FilterConfiguration.execute(FilterConfiguration.java:195)
at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:368)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:84)
at hudson.security.ChainedServletFilter.doFilter(ChainedServletFilter.java:76)
at hudson.security.HudsonFilter.doFilter(HudsonFilter.java:164)
at winstone.FilterConfiguration.execute(FilterConfiguration.java:195)
at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:368)
at winstone.RequestDispatcher.forward(RequestDispatcher.java:333)
at winstone.RequestHandlerThread.processRequest(RequestHandlerThread.java:244)
at winstone.RequestHandlerThread.run(RequestHandlerThread.java:150)
at java.lang.Thread.run(Thread.java:619)
Does anyone have a good solution? Thanks.
See Hudson bug report 6808.