Kotlin: catch extension - exception

Because Kotlin doesn't support multi-catch blocks like Java does, I want to create an extension function to partially solve the problem.
fun <T: Throwable> (() -> Unit).catch(vararg exceptions: KClass<T>, catchBlock: (Throwable) -> Unit) {
    try {
        this()
    } catch (e: Throwable) {
        if (e::class in exceptions) catchBlock(e) else throw e
    }
}
That can be called like this:
{
    throw NotImplementedException.exception()
}.catch(NotImplementedException::class) {
    // handle it
}
But the problem is that if I pass several arguments of different types, it doesn't work (type inference fails):
{
    throw IndexOutOfBoundsException()
}.catch(NotImplementedException::class, IndexOutOfBoundsException::class) {
}
So how can I change the signature of the extension to catch several exceptions of different types?

Let's look at the types of the two arguments you're trying to pass to your function:
val kclass1: KClass<NotImplementedException> = NotImplementedException::class
val kclass2: KClass<IndexOutOfBoundsException> = IndexOutOfBoundsException::class
While they are both KClass instances, their type parameters are different - NotImplementedException and IndexOutOfBoundsException. This means that no generic T type parameter can be found for the function that would fit both of these types exactly.
Just for demonstration and explanation purposes, you could help type inference by casting both of your types to KClass<Throwable> (or KClass<Exception>, or KClass<RuntimeException>, you get the idea) yourself; that way it can figure out the generic type:
{
    throw IndexOutOfBoundsException()
}.catch(NotImplementedException::class as KClass<Throwable>, IndexOutOfBoundsException::class as KClass<Throwable>) {
    println("Caught something: $it")
}
But the real solution is to use the out keyword to specify use-site variance for the type parameter of the KClass instances:
fun <T : Throwable> (() -> Unit).catch(vararg exceptions: KClass<out T>, catchBlock: (Throwable) -> Unit) {
    try {
        this()
    } catch (e: Throwable) {
        if (e::class in exceptions) catchBlock(e) else throw e
    }
}
This way the compiler will find a type for T that is both a subtype of Throwable, as specified, and a supertype of all the arguments' KClass type parameters. In this case that type is RuntimeException, which you can find out by opening intention actions on the catch call (Alt + Enter on Windows, ⌥↩ on macOS) and choosing Add explicit type arguments. This will produce the following:
{
    throw IndexOutOfBoundsException()
}.catch<RuntimeException>(NotImplementedException::class, IndexOutOfBoundsException::class) {
    println("Caught something: $it")
}
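For completeness, exceptions whose class is not in the vararg list are still rethrown by the else throw e branch. A quick sketch using the fixed extension above (IllegalStateException is just an arbitrary example of an unlisted exception):

{
    throw IllegalStateException("not listed")
}.catch(NotImplementedException::class, IndexOutOfBoundsException::class) {
    // never reached: IllegalStateException is not among the listed classes, so it propagates
}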

Related

How to return concrete type from generic function?

In the example below the Default trait is used just for demonstration purposes.
My questions are:
What is the difference between the declarations of f() and g()?
Why doesn't g() compile, since it's identical to f()?
How can I return a concrete type out of an impl Trait generically typed declaration?
struct Something {
}

impl Default for Something {
    fn default() -> Self {
        Something{}
    }
}

// This compiles.
pub fn f() -> impl Default {
    Something{}
}

// This doesn't.
pub fn g<T: Default>() -> T {
    Something{}
}
What is the difference between the declarations of f() and g()?
f returns some type which implements Default. The caller of f has no say in what type it will return.
g returns some type which implements Default. The caller of g gets to pick the exact type that must be returned.
You can clearly see this difference in how f and g can be called. For example:
fn main() {
    let t = f(); // this is the only way to call f()
    let t = g::<i32>(); // I can call g() like this
    let t = g::<String>(); // or like this
    let t = g::<Vec<Box<u8>>>(); // or like this... and so on!
    // there's potentially infinitely many ways I can call g()
    // and yet there is only 1 way I can call f()
}
Why doesn't g() compile, since it's identical to f()?
They're not identical. The implementation of f compiles because it can only be called in one way and it will always return the exact same type. The implementation of g fails to compile because it can be called in infinitely many ways for all different types, yet it will always return Something, which is broken.
How can I return a concrete type out of an impl Trait generically typed declaration?
If I'm understanding your question correctly, you can't. When you use generics you let the caller decide the types your function must use, so your function's implementation itself must be generic. If you want to construct and return a generic type within a generic function, the usual way to go about that is to put a Default trait bound on the generic type and use that within your implementation:
// now works!
fn g<T: Default>() -> T {
    T::default()
}
If you need to conditionally select the concrete type within the function then the only other solution is to return a trait object:
struct Something;
struct SomethingElse;

trait Trait {}

impl Trait for Something {}
impl Trait for SomethingElse {}

fn g(some_condition: bool) -> Box<dyn Trait> {
    if some_condition {
        Box::new(Something)
    } else {
        Box::new(SomethingElse)
    }
}
How can I return a concrete type out of an "impl trait" generically typed declaration?
By "impl trait" generically typed declaration I presume you mean "impl trait" rewritten to use named generics. However, that's a false premise - impl Trait in return position was introduced precisely because you can't express it using named generics. To see this, consider first impl Trait in argument position, such as this function:
fn foo(iter: impl Iterator<Item = u32>) -> usize {
    iter.count()
}
You can rewrite that function to use named generics as follows:
fn foo<I: Iterator<Item = u32>>(iter: I) -> usize {
    iter.count()
}
Barring minor technical differences, the two are equivalent. However, if impl Trait is in return position, such as here:
fn foo() -> impl Iterator<Item = u32> {
    vec![1, 2, 3].into_iter()
}
...you cannot rewrite it to use generics without losing generality. For example, this won't compile:
fn foo<T: Iterator<Item = u32>>() -> T {
    vec![1, 2, 3].into_iter()
}
...because, as explained by pretzelhammer, the signature promises the caller the ability to choose which type to return (out of those that implement Iterator<Item = u32>), but the implementation only ever returns a concrete type, <Vec<u32> as IntoIterator>::IntoIter.
On the other hand, this does compile:
fn foo() -> <Vec<u32> as IntoIterator>::IntoIter {
    vec![1, 2, 3].into_iter()
}
...but now the generality is lost because foo() must be implemented as a combination of Vec and into_iter() - even adding a map() in between the two would break it.
This also compiles:
fn foo() -> Box<dyn Iterator<Item = u32>> {
    Box::new(vec![1, 2, 3].into_iter())
}
...but at the cost of allocating the iterator on the heap and disabling some optimizations.

Is it possible to enable/disable a custom deserializer depending on the API endpoint being called?

I'm accessing a JSON API which has 2 kinds of endpoints:
the first kind returns a list of objects of the same type (Symptom, ChronicDisease...)
the second kind (a search function) returns a mixed list of objects of different types (the same types that can be returned by the first kind of API)
In the second case, each item of the list has a type field telling which type the object is. This field doesn't exist in the first case.
I would like to use the default deserializer for the first kind of API and a custom deserializer for the second kind of API. Is it possible?
If I only use the default deserializer, API calls of the first kind will work but I'm unable to perform a search.
If I enable the following deserializer, the search will work, but the deserializer is also used for the first kind of API and fails because the type field is missing.
Custom deserializer I'd like to use:
class SearchableItemDeserializer : JsonDeserializer<SearchableItem>() {
    override fun deserialize(p: JsonParser, ctxt: DeserializationContext): SearchableItem {
        // Read the whole item as a tree and dispatch on its "type" discriminator.
        val root: JsonNode = p.readValueAsTree()
        val type: String = root.get("type").asText()
        when (type) {
            "symptom" -> {
                return ObjectMapper().treeToValue(root, Symptom::class.java)
            }
            "symptom_group" -> {
                return ObjectMapper().treeToValue(root, SymptomGroup::class.java)
            }
            "diagnosis" -> {
                return ObjectMapper().treeToValue(root, Diagnose::class.java)
            }
            "chronic_disease" -> {
                return ObjectMapper().treeToValue(root, ChronicDisease::class.java)
            }
        }
        throw Exception("Unable to deserialize type $type")
    }
}
Interface common to Symptom, SymptomGroup, Diagnose and ChronicDisease:
@JsonDeserialize(using = SearchableItemDeserializer::class)
interface SearchableItem
It's possible. You can extend Converter.Factory to create your own custom converter. Probably the simplest and most direct way would be to add a check for a specific Retrofit annotation inside the requestBodyConverter or responseBodyConverter methods.
Something like:
class CustomConverter : Converter.Factory() {

    override fun responseBodyConverter(type: Type,
                                       annotations: Array<Annotation>,
                                       retrofit: Retrofit): Converter<ResponseBody, *>? {
        // Pick a factory based on the method annotations, then delegate to it.
        return responseConverter(*annotations)
            .responseBodyConverter(type, annotations, retrofit)
    }

    private fun responseConverter(vararg methodAnnotations: Annotation): Converter.Factory {
        // converter1/converter2/defaultConverter are the pre-built Converter.Factory
        // instances you want to use for each kind of endpoint.
        return when {
            endpoint1(*methodAnnotations) -> converter1
            endpoint2(*methodAnnotations) -> converter2
            else -> defaultConverter
        }
    }

    override fun requestBodyConverter(type: Type,
                                      parameterAnnotations: Array<Annotation>,
                                      methodAnnotations: Array<Annotation>,
                                      retrofit: Retrofit): Converter<*, RequestBody>? {
        // same approach here
    }

    fun endpoint1(vararg annotations: Annotation): Boolean {
        // condition check here
    }

    fun endpoint2(vararg annotations: Annotation): Boolean {
        // and here (if needed)
    }
}
Just add your endpoint1/endpoint2 implementations (probably just compare the @GET() contents with a certain pattern or something like that, as sketched below) and repeat the same approach for requestBodyConverter.
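For example, endpoint1 could look roughly like this. This is only a sketch: it assumes the mixed-result search endpoints are declared with @GET paths starting with "search", which is an assumption about your API rather than something from the question:

import retrofit2.http.GET

// Hypothetical check: treat any method whose @GET relative URL starts with
// "search" as belonging to the second (mixed-result) kind of endpoint.
fun endpoint1(vararg annotations: Annotation): Boolean {
    return annotations.any { it is GET && it.value.startsWith("search") }
}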
When ready, just add it to retrofit:
return Retrofit.Builder()
    .baseUrl(url)
    .client(client)
    .addConverterFactory(CustomConverter())
    .build()

How does the Liskov Substitution Principle apply to function return types?

The Liskov Substitution Principle states that:
Objects in a program should be replaceable with instances of their sub-types without altering the correctness of that program.
Assuming that:
interface Iterable<T> {
    fun getIterator(): Iterator<T>
}

interface Collection<T> : Iterable<T> {
    val size: Int
}

interface List<T> : Collection<T> {
    fun get(index: Int): T
}

interface MutableList<T> : List<T> {
    fun set(index: Int, item: T): Unit
}
When LSP is applied to input parameters, the most general (least specific) abstraction should be accepted:
DO
fun foo(items: Iterable<Any>) { ... }
DON'T
fun foo(items: List<Any>) { ... }
But, does LSP apply to function return types, and if so, does the reverse apply?
fun bar(): Iterable<Any> { ... }
OR
fun bar(): List<Any> { ... }
Yes and yes. In order to comply with the LSP, argument types in an overriding method must be contravariant, as you point out. The reverse is true for the return type: it must be covariant, i.e. of the same type as, or a more specific type than, the return type of the method being overridden.
Think of the slogan "demand no more, promise no less." Let's say the superclass method returns a Rectangle. This method can be overridden to return a Square, as this "promises more," but not to return a Shape, as this would "promise less."
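As a rough Kotlin sketch of that slogan (the class hierarchy and names here are invented for illustration), an override may narrow the return type but not widen it:

open class Shape
open class Rectangle : Shape()
class Square : Rectangle()

open class Base {
    open fun produce(): Rectangle = Rectangle()
}

class Derived : Base() {
    // OK: Square is a subtype of Rectangle, so this override "promises more".
    override fun produce(): Square = Square()
    // Returning Shape instead would not compile: that would "promise less".
}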

Using enums in Typescript's generics with strictFunctionTypes

I have the following code (TS playground link):
const enum Enum { A, B, C }

interface Args {
    e: Enum.A;
}

interface GenericClass<A> {
    new (args: A): void;
}

class TestClass {
    constructor(args: Args) {}
}

function func<A>(C: GenericClass<A>, args: A) {
    return new C(args);
}

func(TestClass, { e: Enum.A });
The last line throws an error with strictFunctionTypes enabled:
Argument of type 'typeof TestClass' is not assignable to parameter of type 'GenericClass<{ e: Enum; }>'.
  Types of parameters 'args' and 'args' are incompatible.
    Type '{ e: Enum; }' is not assignable to type 'Args'.
      Types of property 'e' are incompatible.
        Type 'Enum' is not assignable to type 'Enum.A'.
That's strange because I accept the exact enum value Enum.A and I pass exactly the same value Enum.A into the function.
I know I can use type casting, { e: <Enum.A>Enum.A }, but that looks strange to me. Is there a way to fix this problem without type casting?
I am not 100% sure why this happens, but I believe that when inferring A the compiler will consider both places where A appears and decide that the widest possible type is { e: Enum }, based on the fact that object literals don't usually infer literal types for their fields. After the inference it will see that, under strict function types, that type is not compatible with the class. Under this theory, if we decrease the priority of the second inference site, we should get the correct type for A. We can do this using an intersection type, A & {} (I am not sure where exactly I read this, but it was in a GitHub issue where a member of the compiler team mentioned that this way of decreasing inference priority is probably going to keep working for the foreseeable future).
Again, that is mostly an educated guess, but the solution works:
const enum Enum { A, B, C }

interface Args {
    e: Enum.A;
}

interface GenericClass<A> {
    new (args: A): void;
}

class TestClass {
    constructor(args: Args) {}
}

function func<A>(C: GenericClass<A>, args: A & {}) {
    return new C(args);
}

func(TestClass, { e: Enum.A });
playground link

Can I avoid redundantly casting a Throwable when using catching(...).either?

I'm using util.control.Exception.catching to convert internal exceptions into an exception type specific to my library:
import util.control.Exception._

abstract class MyException extends Exception
case class ErrorOccurredDuringFoo(e: Exception) extends MyException

def foo: Foo = {
  catching(classOf[Exception]) either { fooInternals } match {
    case Left(e) => throw ErrorOccurredDuringFoo(e)
    case Right(v) => v
  }
}
Unfortunately, this doesn't work. Applying the Catch returned by either doesn't return Either[Exception,Foo], it returns Either[Throwable,Foo]. But I've already told catching I want it to catch only subtypes of Exception, not all Throwables, and internally it's already matched an Exception.
Am I using this correctly? Is there no way I can convince catching to return the exception it catches as an instance of the class of exceptions I asked it to catch? Is my best bet to just add a redundant asInstanceOf[Exception]? I'd rather not if I can avoid it, as the catching instance could logically be created elsewhere, and I'd like to get a compile error if I one day change it to catching[Throwable] without changing ErrorOccurredDuringFoo, not a runtime error when the cast to Exception fails.
Catch isn't parameterised on Throwable, only on the result type. The only way to downcast the Throwable type is with the mkCatcher method:
val c = catching[Foo](
  mkCatcher(
    (t: Throwable) => t.getClass == classOf[MyException],
    (e: MyException) => throw new ErrorOccurredDuringFoo(e)))

c(fooInternals)
But, Catch takes a Catcher[T] – which is really just an alias for a PartialFunction[Throwable, T].
As a case statement is a PartialFunction we can use pattern matching:
val c: Catcher[Foo] = {
  case e: MyException => throw new ErrorOccurredDuringFoo(e)
}

catching(c)(fooInternals)
You could write it like this:
def foo: Foo = {
  catching(classOf[Exception]) either { fooInternals } match {
    case Left(e: Exception) => throw ErrorOccurredDuringFoo(e)
    case Right(v) => v
  }
}
It is interesting that it doesn't complain about missing cases.