I was recently learning about Kotlin inline functions, and I thought the function below, twipsToPixels, seemed like a good use case. But IntelliJ gives me a warning saying there's an "expected performance impact". I would've thought the opposite would be true here, which makes me think I'm missing something deeper. Does anyone have any thoughts?
private inline fun twipsToPixels(value: Int) = (value * SCREEN_RESOLUTION / TWIPS_CONVERSION).roundToInt()

private fun screenBoxInPixels(screenBox: ScreenBox): ScreenBox {
    val left = twipsToPixels(screenBox.position.left)
    val top = twipsToPixels(screenBox.position.top)
    val width = twipsToPixels(screenBox.size.width)
    val height = twipsToPixels(screenBox.size.height)
    return ScreenBox(Position(left, top), Size(width, height))
}
but IntelliJ gives me a warning saying there's an "expected performance impact"
You misread it: the warning says the impact of inlining here is insignificant, that is, probably too small to bother with. The second part of the message is
Inlining works best for functions with parameters of functional types
mainly because it avoids creating function objects for those parameters in the first place.
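For example, here's a minimal sketch of the case inlining is meant for (repeatPlain and repeatInlined are invented names):

// Each call that passes a lambda allocates a Function0 object for it.
fun repeatPlain(times: Int, action: () -> Unit) {
    for (i in 0 until times) action()
}

// Both the body and the lambda are copied into the call site,
// so no function object is ever created.
inline fun repeatInlined(times: Int, action: () -> Unit) {
    for (i in 0 until times) action()
}

fun main() {
    repeatInlined(3) { println("hello") } // compiles to a plain loop around println
}

Your twipsToPixels has no parameters of functional types, so inlining it saves nothing but an ordinary call, which is why the warning deems the impact insignificant.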
Consider this example given here: https://stackoverflow.com/a/53376287/11819720
fun onClick(action: () -> Unit) { ... }
view.onClick({ toast("clicked") })
view.onClick() { toast("clicked") }
view.onClick { toast("clicked") }
and this code:
fun convert(x: Double, converter: (Double) -> Double): Double {
    val result = converter(x)
    println("$x is converted to $result")
    return result
}

fun convertFive(converter: (Int) -> Double): Double {
    val result = converter(5)
    println("5 is converted to $result")
    return result
}

fun main(args: Array<String>) {
    convert(20.0) { it * 1.8 + 32 }
    convertFive { it * 1.8 + 32 }
}
It seems the effort is to save on writing a few parentheses at the risk of confusing a reader.
Would it not be better if it was kept standard? The usual way to use the functions in the second example would be:
convert(20.0, { it * 1.8 + 32 })
convertFive({ it * 1.8 + 32 })
but IntelliJ will complain and suggest moving the lambda out of the parentheses. Why? Is there a practical benefit? It seems like writing 10 + 2 * 5 / 34 instead of 10 + (2 * 5) / 34.
The real benefit of the trailing-braces lambda is that, like all the best language features, it changes the way you think.
The examples you have provided are just as well written with parentheses, and I agree that the IntelliJ suggestion to use the trailing form all the time is unnecessary...
But when I write something like this:
with(someObject) {
    doSomething()
    doSomethingElse()
}
It looks like I'm using a cool new feature of the language even though I'm really just calling a function.
The result is that people start to think of functions as things that can add to the language, because they kinda can, and that leads them to create new ways of doing things for the people who use their code.
The type-safe builder pattern is a great example of this. A lot of the Kotlin language features work together so that, even though it's just calling functions and passing lambdas, it provides a new experience for the developers that use it.
They get a whole new way to instantiate complex objects that is much more natural than the old ways, and nothing needed to be added to the language just for that -- all the little building-block features can be used for many other things.
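A minimal sketch of that idea (the Dialog type and the dialog function are invented for illustration): an ordinary class, an ordinary function, and a lambda with receiver are all it takes for the call site to read like a declarative language feature.

class Dialog {
    var title = ""
    val buttons = mutableListOf<String>()
    fun button(label: String) { buttons += label }
}

// A plain function taking a lambda with receiver...
fun dialog(build: Dialog.() -> Unit): Dialog = Dialog().apply(build)

// ...yet the call site looks like declarative syntax for building objects.
val confirm = dialog {
    title = "Delete file?"
    button("Cancel")
    button("Delete")
}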
There is no practical benefit at all, it is just a convention.
Regarding what you said about "keeping it standard": where exactly did you get the "usual way" from? There are no global programming conventions that I am aware of, only language-specific ones, so this notation is standard by definition.
Conventions are important because they make reading code much less effort for anyone familiar with the syntax. They also reflect the usage of the language. Kotlin's conventions promote a very functional style with heavy use of lambdas and inline functions, so 'lambda out of parentheses' is needed to keep that code clean and explicit.
Also, as @Tenfour04 said in the comments, your examples don't really reflect the intended usage of the syntax. Generally you have multiple lines, and even if you don't, the pattern is supposed to convey something more. Take the measureTimeMillis function, for example:
measureTimeMillis {
    askQuestion()
    comment()
    answerQuestion()
}
By having the lambda outside the parentheses, it is immediately evident what the function does, even to a non-technical reader, which is exactly what conventions are there for.
Closer to your example: say you need to convert an array of numbers and square all the positive ones. Compare which is easier to read:
val result = arrayOf(1.0, 2.0, -3.0).map({ number ->
    convert(number, {
        if (it > 0) it * it else it
    })
})

val result = arrayOf(1.0, 2.0, -3.0).map { number ->
    convert(number) {
        if (it > 0) it * it else it
    }
}
I'm new to Scala and I'm having a problem understanding this: why are there two syntaxes for the same concept, with neither of them more efficient or shorter than the other? (Merely from a typing standpoint; maybe they differ in behavior, which is what I'm asking.)
In Go the analogues have a practical difference: you can't forward-reference a lambda assigned to a variable, but you can reference a named function from anywhere. Scala blends these two, if I understand it correctly: you can forward-reference any variable (please correct me if I'm wrong).
Please note that this question is not a duplicate of What is the difference between “def” and “val” to define a function.
I know that def evaluates the expression after = each time it is referenced/called, and val evaluates it only once. But this is different, because here the expression in the val definition itself evaluates to a function.
It is also not a duplicate of Functions vs methods in Scala.
This question concerns the syntax of Scala, and is not asking about the difference between functions and methods directly. Even though the answers may be similar in content, it's still valuable to have this exact point cleared up on this site.
There are three main differences (that I know of):
1. Internal Representation
Function expressions (aka anonymous functions or lambdas) are represented in the generated bytecode as instances of any of the Function traits. This means that function expressions are also objects. Method definitions, on the other hand, are first class citizens on the JVM and have a special bytecode representation. How this impacts performance is hard to tell without profiling.
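A quick sketch of that difference (names invented for illustration):

val inc = (x: Int) => x + 1        // a value: an instance of Function1[Int, Int]
def incMethod(x: Int): Int = x + 1 // a plain JVM method on the enclosing object

Calling inc(1) is really inc.apply(1), a call on a heap-allocated object, while incMethod(1) compiles to an ordinary method invocation.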
2. Reference Syntax
References to functions and methods have different syntaxes. You can't just say foo when you want to send the reference of a method as an argument to some other part of your code. You'll have to say foo _. With functions you can just say foo and things will work as intended. The syntax foo _ is effectively wrapping the call to foo inside an anonymous function.
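A short sketch of the two spellings (Scala 2, with an invented method twice):

def twice(x: Int): Int = 2 * x

val f = twice _           // explicit eta-expansion: wraps the method in a Function1 object
val g: Int => Int = twice // also works, because the expected type tells the compiler to eta-expand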
3. Generics Support
Methods support type parametrization, functions do not. For example, there's no way to express the following using a function value:
def identity[A](a: A): A = a
The closest would be this, but it loses the type information:
val identity = (a: Any) => a
As an extension to Ionut's first point, it may be worth taking a quick look at http://www.scala-lang.org/api/current/#scala.Function1.
From my understanding, a function value as you described (i.e. val f = (x: Int) => x + 1) is an instance of the Function1 trait. The implication is that a function value consumes more memory than a method definition. Methods are native to the JVM, so they can be resolved at compile time. The obvious cost of a Function is its memory consumption, but with it come added benefits such as composition with other Function objects.
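For example, composition comes straight from the combinators the Function1 trait declares (andThen and compose):

val addOne = (x: Int) => x + 1
val double = (x: Int) => x * 2
val addThenDouble = addOne andThen double // x => (x + 1) * 2
// addThenDouble(3) == 8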
If I understand correctly, the reason defs and lambdas can work together is that the function type (T1) => R is just sugar for Function1[T1, R], whose single abstract method is apply() (https://github.com/scala/scala/blob/v2.11.8/src/library/scala/Function1.scala#L36). (At least I THINK that's what's going on; please correct me if I'm wrong.) This is all just my own speculation, however. There's certain to be some extra compiler magic taking place underneath to allow method and function interoperability.
A lot of statically typed languages, like C++ and C#, have local variable type inference (with the keywords auto and var respectively, I think).
However, I haven't seen many C-derived languages (apart from those mentioned in the comments) implementing compile-time return type inference. I'll describe what I mean by "return type inference" before I ask the question. (I definitely don't mean overloading by return type.)
Consider this code in a hypothetical C#-like language:
private auto SomeMethod(int x)
{
    return 3 * x;
}
It's more than obvious (to humans and to the compiler) that the return type is int, and the compiler can verify it.
The same goes for multiple paths:
private auto SomeOtherMethod(int x)
{
    if (x == 0) return 1;
    else return 3 * x;
}
It's still not ambiguous at all, because there is already an algorithm in said languages to resolve whether two expressions have compatible types:
private auto YetAnotherMethod(int x)
{
    var r = (x == 0) ? 1 : 3 * x;
    return r;
}
Since the algorithm exists and it is already implemented in some form, it's probably not a technical problem in this regard. But still, I haven't seen it anywhere in statically typed languages, which got me thinking about whether there's something bad about it.
My question:
Does return type inference, as a concept, have any disadvantage or subtle pitfall that I'm not seeing? (Apart from readability - I already understand that.)
Is there some corner case where it would introduce problems or ambiguity to a statically typed language? (By "introduce", I'm referring to issues that local variable type inference doesn't already have.)
Yes, there are disadvantages. One you already mentioned: readability. Second: the type has to be calculated, and that takes time (in a Turing-complete type system it may even fail to terminate). But there is also something else: the theory of type systems becomes much more complicated.
Let's write a function that takes a list and returns its head: what's its type? Or a function that takes a function and a parameter, applies the former to the latter, and returns the result. In many languages you can't declare these types. To support this kind of thing, Java introduced generics, and it failed miserably: it is currently one of the most hated features of the language because of its consistency problems.
Another thing: the return type may depend not only on the body of the function but also on the context of the invocation. Look at Haskell (which has the best type system I've ever seen): http://learnyouahaskell.com/types-and-typeclasses
There is a function called read that takes a string, parses it, and returns... whatever you need: an Int, an array.
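Sketched with the standard Prelude read, the same function gets a different result type depending on what the context demands:

n  = read "42"      :: Int   -- parses to an Int
xs = read "[1,2,3]" :: [Int] -- the very same function parses to a list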
So each time a type system is designed, the designer has to choose at which level she wants to stop. Dynamic languages decided not to infer types at all; Scala decided to do some local inference, but not, for example, for overloaded or recursive functions; and C++ (prior to C++14's auto return type deduction) decided not to infer the result.
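Scala's cut-off is easy to see with a standard factorial (used here purely for illustration): the result type of a non-recursive method may be inferred, but a recursive one must be annotated.

def half(n: Int) = n / 2 // return type inferred: Int
def fact(n: Int): Int =  // recursive, so the return type is mandatory
    if (n <= 1) 1 else n * fact(n - 1)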
The question is: when should one use private functions, and when nested functions? (I'm asking about F#, but answers may be relevant to other functional languages.)
A small example:
namespace SomeName

module BinaryReaderExt =
    open System.IO

    let seek (reader : BinaryReader) positions =
        reader.BaseStream.Seek(positions, SeekOrigin.Current) |> ignore

module Mod =
    open System.IO

    let private prFun (reader : BinaryReader) =
        //do something
        BinaryReaderExt.seek reader 10L

    let outerFun (stream : System.IO.Stream) =
        let reader = new System.IO.BinaryReader(stream)
        let seek = BinaryReaderExt.seek reader

        let nestedFun () =
            seek 10L
            //do something

        nestedFun()
        prFun reader
It's a big bonus that a nested function can use data from the enclosing scope, and it doesn't pollute the surrounding module. But it looks clumsy, doesn't it? Especially when the nested functions are large.
On the other hand, private functions can be made public and tested, and they seem more readable.
What's your opinion?
I use private functions in modules quite often -- usually for "helper" functions that are consumed by other functions in the module, but which don't need to be exposed to outside code.
One other use case for private functions is simply to make the code more readable. If a function is nested within another function but gets too long to read (e.g. the nested function's code makes up more than half the length of its containing function), I'll usually move it out to the module level and make it private, so the calling function's code is easier to understand.
It's a big bonus that a nested function can use data from the enclosing scope. Also it does not pollute the surrounding module.
I agree with your points. My advice is to keep functions at the right scope. If a function is used in only one place, it's better off nested. For example, there's no point in moving loop up and making it a private function here:
let length xs =
    let rec loop acc = function
        | [] -> acc
        | _::xs -> loop (acc + 1) xs
    loop 0 xs
But it looks clumsy, doesn't it? Especially when the nested functions are large.
If you need large nested functions, it's likely you're doing it wrong. They should be broken into multiple small nested functions, or the outermost function should be converted to a type.
On the other hand, private functions can be made public and tested, and they seem more readable.
Readability is a subjective matter; I think the organization issue is more important. The point of nested functions is that they're simple and can be tested by testing their outermost functions.
When functions are more broadly applicable, you can put them into a utility module and open that module when needed. Note that there are other techniques for hiding functions besides marking them private. For example, you can use a signature (.fsi) file to declare what interface is exposed.
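A minimal sketch of that last option, assuming a hypothetical signature file named Mod.fsi sitting next to the implementation of the question's module; anything omitted from it (such as prFun) stays hidden:

// Mod.fsi: only what is listed here is visible to other files
namespace SomeName

module Mod =
    val outerFun : System.IO.Stream -> unit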
For a game I'm attempting to develop, I am writing a resource pool class in order to recycle objects without calling the "new" operator. I would like to be able to specify the size of the pool, and I would like it to be strongly typed.
Because of these considerations, I think that a Vector would be my best choice. However, as Vector is a final class, I can't extend it. So, I figured I'd use composition instead of inheritance, in this case.
The problem I'm seeing is this - I want to instantiate the class with two arguments: size and class type, and I'm not sure how to pass a type as an argument.
Here's what I tried:
public final class ObjPool
{
    private var objects:Vector.<*>;

    public function ObjPool(poolsize:uint, type:Class)
    {
        objects = new Vector.<type>(poolsize); // line 15
    }
}
And here's the error I receive from FlashDevelop when I try to build:
\src\ObjPool.as(15): col: 18 Error: Access of undefined property type.
Does anybody know of a way to do this? It looks like the Flash compiler doesn't like to accept variable names within the Vector bracket notation. (I tried changing constructor parameter "type" to String as a test, with no results; I also tried putting a getQualifiedClassName in there, and that didn't work either. Untyping the objects var was fruitless as well.) Additionally, I'm not even sure if type "Class" is the right way to do this - does anybody know?
Thanks!
Edit: For clarification, I am calling my class like this:
var i:ObjPool = new ObjPool(5000, int);
The intention is to specify a size and a type.
Double Edit: For anyone who stumbles upon this question looking for an answer, please research Generics in the Java programming language. As of the time of this writing, they are not implemented in ActionScript 3. Good luck.
I have been trying to do this for a while now, and Dominic Tancredi's post made me realize that even if you can't write:
objects = new Vector.<classType>(poolsize);
you could do something like:
import flash.utils.getDefinitionByName;
import flash.utils.getQualifiedClassName;

public final class ObjPool
{
    private var objects:Vector.<*>;

    public function ObjPool(poolsize:uint, type:Class)
    {
        var className : String = getQualifiedClassName(type);
        var vectorClass : Class = Class(getDefinitionByName("Vector.<" + className + ">"));
        objects = new vectorClass(poolsize);
    }
}
I tried it with both int and a custom class, and it seems to work fine. Of course, you'd have to check whether you actually gain any speed from this, since objects is a Vector.<*> and Flash might be doing implicit type checks that negate the speed-up you get from using a Vector.
Hope this helps
This is an interesting question (+1!), mostly because I've never tried it before. It seems from your example that it is not possible, which I do find odd; it's probably something to do with how the compiler works. I question why you would want to do this, though. The performance benefit of a Vector over an Array is mostly the result of it being typed, but you are explicitly declaring its type as undefined, which means you've lost the performance gain. So why not just use an Array instead? Just food for thought.
EDIT
I can confirm this is not possible; it's an open bug. See here: http://bugs.adobe.com/jira/browse/ASC-3748. Sorry for the bad news!
Tyler.
It is good that you're trying to stay away from new, but everything I have ever read about Vector.<> in ActionScript says it must be strongly typed, so this shouldn't work.
Edit: I am saying it can't be done.
See if this helps: Is it possible to define a generic type Vector in ActionScript 3?
Shot in the dark, but try this:
var classType:Class = getDefinitionByName(getQualifiedClassName(type)) as Class;
...
objects = new Vector.<classType>(poolsize); // line 15
drops the mic
I don't really see the point in using a Vector.<*>; you might as well go with Array. Anyhow, I just came up with this way of dynamically creating Vectors:
public function getCopy (ofVector:Object):Object
{
    // ofVector.constructor is the runtime Vector.<T> class, so this
    // creates a new, empty vector of the same element type.
    var copy:Object = new ofVector.constructor;

    // Do whatever you like with the vector as long as you don't need to know the type
    return copy;
}
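A hypothetical call, to show what you get back; note that new ofVector.constructor yields an empty vector of the same runtime type, not a filled copy:

var source:Vector.<int> = new <int>[1, 2, 3];
var sameType:Object = getCopy(source);
trace(sameType is Vector.<int>); // true: same element type, but no elements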