Kotlin/Android – KotlinReflectionInternalError in Data Class with a lambda - exception

kotlin.reflect.jvm.internal.KotlinReflectionInternalError:
Introspecting local functions, lambdas, anonymous functions and local
variables is not yet fully supported in Kotlin reflection
This exception comes from toString() of a data class.
The data class contains a lambda.
I can't reproduce it in my environment.
Do I need to override toString() to exclude the lambda? Or are lambdas not allowed in data classes at all?
data class PersistJob(
    private val id: Int,
    private val delay: Long = 10_000L,
    private val maxDelay: Long = 60_000L,
    private val iteration: Int = 0,
    private val block: suspend (Int) -> Boolean
) {
    fun getDelay() = minOf(delay, maxDelay)

    fun withDelayIncreased() = copy(
        delay = minOf(delay * 2, maxDelay),
        iteration = iteration + 1
    )

    suspend fun execute() = block(iteration)
}
Line producing the error:
val job: PersistJob = ...
log.debug("start job id($id): $job") // job.toString()
Stack trace:
at kotlin.reflect.jvm.internal.EmptyContainerForLocal.fail(SourceFile:41)
at kotlin.reflect.jvm.internal.EmptyContainerForLocal.getFunctions(SourceFile:37)
at kotlin.reflect.jvm.internal.KDeclarationContainerImpl.findFunctionDescriptor(SourceFile:145)
at kotlin.reflect.jvm.internal.KFunctionImpl$descriptor$2.invoke(SourceFile:54)
at kotlin.reflect.jvm.internal.KFunctionImpl$descriptor$2.invoke(SourceFile:34)
at kotlin.reflect.jvm.internal.ReflectProperties$LazySoftVal.invoke(SourceFile:93)
at kotlin.reflect.jvm.internal.ReflectProperties$Val.getValue(SourceFile:32)
at kotlin.reflect.jvm.internal.KFunctionImpl.getDescriptor(SourceFile)
at kotlin.reflect.jvm.internal.ReflectionFactoryImpl.renderLambdaToString(SourceFile:59)
at kotlin.jvm.internal.Reflection.renderLambdaToString(SourceFile:80)
at kotlin.jvm.internal.Lambda.toString(SourceFile:22)
at java.lang.String.valueOf(String.java:2683)
at java.lang.StringBuilder.append(StringBuilder.java:129)

It looks like a bug in Kotlin lambdas.
This code is enough to reproduce the exception:
({i: Int -> true}).toString()
I advise you to post an issue on youtrack.jetbrains.com/issues/KT and see what the team says about it.

Related

An exception occurs when spark converts a json string to a HashMap in spark

There is no problem in the local environment, but an exception occurs when running spark-submit.
The approximate code is as follows:
class Test extends Serializable {
  def action() = {
    val sc = SparkContext.getOrCreate(sparkConf)
    val rdd1 = sc.textFile(.. )
    val rdd2 = rdd1.map(logLine => {
      // gson
      val jsonObject = jsonParser.parse(logLine).getAsJsonObject
      // jackson
      val parsedJson = objectMapper.readValue(logLine, classOf[HashMap[String, String]])
      MyDataSet(parsedJson.get("field1"), parsedJson.get("field2"), ...)
    })
  }
}
Exception
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
    ensureSerializable(ClosureCleaner.scala:444)
    ...
Caused by: java.io.NotSerializableException: com.fasterxml.jackson.module.scala.modifiers.ScalaTypeModifier
I have used both the Gson and Jackson libraries.
Isn't this a problem that can be solved just by extending Serializable?
The exception NotSerializableException is pretty self-explanatory: your task is not serializable. Spark is a parallel computing engine. The driver (where your main program runs) ships the transformations you want to apply to the RDD (the code written inside map functions) to the executors, where they are executed. Those transformations therefore need to be serializable. In your case, jsonParser and objectMapper are created on the driver. To use them inside a transformation, Spark tries to serialize them and fails because they are not serializable. That's your error. I don't know which one is not serializable; maybe both.
Let's take an example and see how we can fix it.
// let's create a non-serializable class
class Stuff(val i: Int) {
  def get() = i
}

// we instantiate it in the driver
val stuff = new Stuff(4)

// this fails with "Caused by: java.io.NotSerializableException: Stuff"
val result = sc.parallelize(Seq(1, 2, 3)).map(x => (x, stuff.get)).collect
To fix it, let's create the object inside the transformation:
val result = sc.parallelize(Seq(1, 2, 3))
  .map(x => {
    val new_stuff = new Stuff(4)
    (x, new_stuff.get)
  }).collect
It works, but creating the object for every record can obviously be quite expensive. We can do better with mapPartitions and create the object only once per partition:
val result = sc.parallelize(Seq(1, 2, 3))
  .mapPartitions(part => {
    val new_stuff = new Stuff(4)
    part.map(x => (x, new_stuff.get))
  }).collect
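Applied to the original question, another common pattern is to hold the mapper in a singleton object: the object is initialized lazily on each executor JVM, so the non-serializable ObjectMapper is never shipped from the driver at all. This is a sketch, assuming jackson-databind and jackson-module-scala are on the classpath; the names JsonParsing and parse are illustrative, not from the question:

```scala
import com.fasterxml.jackson.core.`type`.TypeReference
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

// Singleton holding the mapper. It is initialized lazily on each
// executor JVM, so nothing here needs to be serialized by Spark.
object JsonParsing {
  lazy val mapper: ObjectMapper = {
    val m = new ObjectMapper()
    m.registerModule(DefaultScalaModule)
    m
  }

  // TypeReference keeps the generic Map[String, String] type despite erasure
  def parse(line: String): Map[String, String] =
    mapper.readValue(line, new TypeReference[Map[String, String]] {})
}

// Inside the transformation only the object's name is referenced, e.g.:
// val rdd2 = rdd1.map(logLine => JsonParsing.parse(logLine))
```

Per-partition creation (the mapPartitions version above) works too; the singleton just amortizes the construction cost over the lifetime of the executor instead of per partition.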

Create a generic Bundle in Chisel3

In Chisel3, I want to create a generic Bundle (ParamBus) with a parameterized type.
I followed the example on the Chisel3 website:
class ParamBus[T <: Data](gen: T) extends Bundle {
  val dat1 = gen
  val dat2 = gen
  override def cloneType = (new ParamBus(gen)).asInstanceOf[this.type]
}

class TestMod[T <: Data](gen: T) extends Module {
  val io = IO(new Bundle {
    val o_out = Output(gen)
  })
  val reg_d = Reg(new ParamBus(gen))
  io.o_out := 0.U
  //io.o_out := reg_d.dat1 + reg_d.dat2
  dontTouch(reg_d)
}
However, during code generation, I have the following error:
chisel3.AliasedAggregateFieldException: Aggregate ParamBus(Reg in TestMod) contains aliased fields List(UInt<8>)...
at fpga.examples.TestMod.<init>(test.scala:20)
Moreover, if I exchange the two lines to connect io.o_out, another error appears:
/home/escou64/Projects/fpga-io/src/main/scala/examples/test.scala:23:34: type mismatch;
found : T
required: String
io.o_out := reg_d.dat1 + reg_d.dat2
^
Any idea what the issue is?
Thanks for the help!
The issue you're running into is that the argument gen to ParamBus is a single object that is used for both dat1 and dat2. Scala (and thus Chisel) has reference semantics (like Java and Python), and thus dat1 and dat2 are both referring to the exact same object. Chisel needs the fields of Bundles to be different objects, thus the aliasing error you are seeing.
The easiest way to deal with this is to call .cloneType on gen when using it multiple times within a Bundle:
class ParamBus[T <: Data](gen: T) extends Bundle {
  val dat1 = gen.cloneType
  val dat2 = gen.cloneType
  // Also note that you shouldn't need to implement cloneType yourself anymore
}
(Scastie link: https://scastie.scala-lang.org/mJmSdq8xSqayOceSjxHkRQ)
This is definitely a bit of a wart in the Chisel3 API, because we try to hide the need to call .cloneType yourself, but at least as of v3.4.3, this remains the case.
Alternatively, you could wrap the uses of gen in Output. It may seem weird to use a direction here but if all directions are Output, it's essentially the same as having no directions:
class ParamBus[T <: Data](gen: T) extends Bundle {
  val dat1 = Output(gen)
  val dat2 = Output(gen)
}
(Scastie link: https://scastie.scala-lang.org/TWajPNItRX6qOKDGDPnMmw)
A third (and slightly more advanced) technique is to make gen a 0-arity function (i.e. a function that takes no arguments). Instead of gen being an object to use as a type template, it's a function that creates fresh types for you when called. Scala is a functional programming language, so functions can be passed around as values just like objects can:
class ParamBus[T <: Data](gen: () => T) extends Bundle {
  val dat1 = gen()
  val dat2 = gen()
}

// You can call it like so:
// new ParamBus(() => UInt(8.W))
(Scastie link: https://scastie.scala-lang.org/JQ7D8VZsSCWP2i6DWJ4cLA)
I tend to prefer this final version, but I understand it can be more daunting for new users. Eventually I'd like to fix the issue you're seeing with a more direct use of gen, but these are ways to deal with the issue for the time being.

jackson databind Json serialization is writing Some(99): Option[Int] as {"empty":true,"defined":false}

I'm using Jackson in my Scala/Spark program and I've distilled my issue to the simple example below. My problem is that when my case class has the Option[Int] field (age) set to None, I see reasonable serialization output (that is, a struct with empty=true). However, when age is defined, i.e. set to some Int like Some(99), I never see the integer value in the serialization output.
Given:
import com.fasterxml.jackson.databind.ObjectMapper
import java.io.ByteArrayOutputStream
import scala.beans.BeanProperty
case class Dog(@BeanProperty name: String, @BeanProperty age: Option[Integer])
object OtherTest extends App {
  jsonOut(Dog("rex", None))
  jsonOut(Dog("mex", Some(99)))

  private def jsonOut(dog: Dog) = {
    val mapper = new ObjectMapper()
    val stream = new ByteArrayOutputStream()
    mapper.writeValue(stream, dog)
    System.out.println("result:" + stream.toString())
  }
}
My output is as shown below. Any hints/help greatly appreciated!
result:{"name":"rex","age":{"empty":true,"defined":false}}
result:{"name":"mex","age":{"empty":false,"defined":true}}
Update after the Helpful Answer
Here are the dependencies that worked for me:
implementation 'org.scala-lang:scala-library:2.12.2'
implementation "org.apache.spark:spark-sql_2.12:3.1.2"
implementation "org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2"
implementation "org.apache.spark:spark-avro_2.12:3.1.2"
implementation 'com.fasterxml.jackson.module:jackson-module-scala_2.12:2.10.0'
Here is the updated code (with frequent flyer bonus - round trip example):
import com.fasterxml.jackson.module.scala.DefaultScalaModule

private def jsonOut(dog: Dog) = {
  val mapper = new ObjectMapper()
  mapper.registerModule(DefaultScalaModule)
  val stream = new ByteArrayOutputStream()
  mapper.writeValue(stream, dog)
  val serialized = stream.toString()
  System.out.println("result:" + serialized)
  // verify we can read the serialized thing back to the case class:
  val recovered = mapper.readValue(serialized, classOf[Dog])
  System.out.println("here is what we read back:" + recovered)
}
Here is the resultant output (as expected now ;^) ->
> Task :OtherTest.main()
result:{"name":"rex","age":null}
here is what we read back:Dog(rex,None)
result:{"name":"mex","age":99}
here is what we read back:Dog(mex,Some(99))
You need to add the Jackson module for Scala to make it work with standard Scala data types.
Add this module as your dependency: https://github.com/FasterXML/jackson-module-scala
Follow the readme on how to initialize your ObjectMapper with this module.

Why are Scala class methods not first-class citizens?

I've just started Scala and am tinkering in worksheets. For example:
def merp(str: String) : String = s"Merrrrrrrp $str"
val merp2 = (str: String) => s"Merrrrrrrp $str"
val merp3 = (str: String) => merp(str)
val merp4 = merp _
merp("rjkghleghe")
merp4("rjkghleghe")
And the corresponding worksheet results:
merp: merp[](val str: String) => String
merp2: String => String = <function1>
merp3: String => String = <function1>
merp4: String => String = <function1>
res0: String = Merrrrrrrp rjkghleghe
res1: String = Merrrrrrrp rjkghleghe
Saying, for example, val merp5 = merp produces an error, because apparently methods cannot be values the way functions can. But I can still pass methods as arguments. I demonstrate this in the following code snippet, adapted from a similar SO question:
def intCombiner(a: Int, b: Int) : String = s"herrrrrrp $a derrrrrrp $b"
def etaAbstractor[A, B](combineFoo: (A, B) ⇒ String, a: A, b: B) = combineFoo(a, b)
etaAbstractor(intCombiner, 15, 16)
worksheet result:
intCombiner: intCombiner[](val a: Int,val b: Int) => String
etaAbstractor: etaAbstractor[A,B](val combineFoo: (A, B) => String,val a: A,val b: B) => String
res10: String = herrrrrrp 15 derrrrrrp 16
Is methods-not-being-first-class a limitation, perhaps imposed by Scala's JVM interaction, or is it a decision in the language's design?
Why do I need to roll my own eta abstractions, as in merp3?
Is merp4 also an eta abstraction, or is it something sneakily similar?
Why does my etaAbstractor work? Is Scala quietly replacing intCombiner with intCombiner _?
Theoretical, computer sciencey answers are welcome, as are pointers to any relevant points in the language specification. Thanks!
Disclaimer: I'm not a computer scientist, but I will try to guess:
A method is part of an object and doesn't exist outside of it; you can't pass a method around on its own. A closure is another (equivalent?) way of encapsulating state: by converting an object's method into a standalone function (which, by the way, is just another object with an apply() method in Scala) you are creating a closure. This process is known as eta-expansion. §3.3.1, §6.26.5
You don't have to. You can also write val merp3 : (String => String) = merp. §6.26.5
Yes, merp4 is eta-expansion too. §6.7
§6.26.2
The reason it works with etaAbstractor is that the compiler can infer that a function (not a function invocation) is required.
If I had to guess why the underscore is required where a function type cannot be inferred, I'd think that it's to improve error reporting of a common class of errors (getting functions where invocations are intended). But again, that's just a guess.
In the JVM, a method is not an object, whereas a first-class function must be one. So the method must be boxed into an object to convert it to a function.
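A minimal worksheet-style sketch of the conversions discussed above: the underscore triggers the boxing explicitly, and an expected function type (the ascription the question's merp5 was missing) lets the compiler insert the expansion itself. The names here mirror the question's merp/merp4/merp5:

```scala
object EtaDemo {
  def merp(str: String): String = s"Merrrrrrrp $str"

  // explicit eta-expansion: the underscore asks the compiler to box
  // the method into a Function1 object
  val merp4: String => String = merp _

  // an expected function type is enough for the compiler to do the
  // expansion itself, so this compiles where `val merp5 = merp` does not
  val merp5: String => String = merp

  // the results are ordinary FunctionN objects, so apply() can be called
  val viaApply: String = merp4.apply("rjkghleghe")
}
```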

Using kotlin with jmockit

I need some advice on using JMockit with Kotlin.
(CUT) This is my (Java) class under test:
public final class NutritionalConsultant {
    public static boolean isLunchTime() {
        int hour = LocalDateTime.now().getHour();
        return hour >= 12 && hour <= 14;
    }
}
(j.1) This is a working Java test class
@RunWith(JMockit.class)
public class NutritionalConsultantTest {
    @Test
    public void shouldReturnTrueFor12h(@Mocked final LocalDateTime dateTime) {
        new Expectations() {{
            LocalDateTime.now(); result = dateTime;
            dateTime.getHour(); result = 12;
        }};
        boolean isLunchTime = NutritionalConsultant.isLunchTime();
        assertThat(isLunchTime, is(true));
    }
}
(kt.1) However, the corresponding Kotlin class throws an exception:
@RunWith(javaClass<JMockit>())
public class NutritionalConsultantKt1Test {
    @Test
    public fun shouldReturnTrueFor12h(@Mocked dateTime: LocalDateTime) {
        object : Expectations() {{
            LocalDateTime.now(); result = dateTime;
            dateTime.getHour(); result = 12;
        }}
        val isLunchTime = NutritionalConsultant.isLunchTime()
        assertThat(isLunchTime, eq(true));
    }
}
Exception:
java.lang.Exception: Method shouldReturnTrueFor12h should have no parameters
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:41)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:212)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
The same exception is thrown when run with gradle.
(kt.2) Using the @Mocked field syntax with Kotlin, I get a different exception:
@RunWith(javaClass<JMockit>())
public class NutritionalConsultantKt2Test {
    @Mocked
    var dateTime: LocalDateTime by Delegates.notNull()

    @Test
    public fun shouldReturnTrueFor12h() {
        object : Expectations() {{
            LocalDateTime.now(); result = dateTime;
            dateTime.getHour(); result = 12;
        }}
        val isLunchTime = NutritionalConsultant.isLunchTime()
        assertThat(isLunchTime, eq(true));
    }
}
Exception:
java.lang.IllegalArgumentException: Final mock field "dateTime$delegate" must be of a class type
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:74)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:212)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Edit 20150224: maybe this is related to "For a mock field, an instance of the declared type will be automatically created by JMockit and assigned to the field, provided it's not final." (from http://jmockit.org/tutorial/BehaviorBasedTesting.html)
(kt.3) However, declaring the field as a nullable var and using the !! operator leads to a working test... but this is not idiomatic Kotlin code:
@RunWith(javaClass<JMockit>())
public class NutritionalConsultantKt3Test {
    @Mocked
    var dateTime: LocalDateTime? = null

    @Test
    public fun shouldReturnTrueFor12h() {
        object : Expectations() {{
            LocalDateTime.now(); result = dateTime;
            dateTime!!.getHour(); result = 12;
        }}
        val isLunchTime = NutritionalConsultant.isLunchTime()
        assertThat(isLunchTime, eq(true));
    }
}
Have you had more success using Kotlin with JMockit?
I don't think you'll be able to use JMockit from Kotlin (or most other JVM alternative languages, with the possible exception of Groovy), not reliably anyway.
The reasons are that 1) JMockit was not developed with such languages in mind, and isn't tested with them; and 2) these languages, when compiled to bytecode, produce additional or different constructs that may confuse a tool like JMockit; also they usually insert calls to their own internal APIs which may also get in the way.
In practice, alternative languages tend to develop their own testing/mocking/etc. tools that not only work well for that language and its runtime, but also let you take full advantage of the language's strengths.
Personally, even though I can recognize the many benefits such languages bring (and I particularly like Kotlin), I would rather stick with Java (which continues to evolve - see Java 8). The fact is, so far no alternative JVM language has come even close to Java's widespread use, and (IMO) they never will.
We've experimented a little and found that you can define a special function like this:
fun uninitialized<T>() = null as T
and then use it like this:
[Mocked] val dateTime : LocalDateTime = uninitialized()
You can also use it instead of Matchers.any() for the same effect.
We will consider adding it to compiler or standard library.