Q: Does functionality exist to invoke some promise over an array of arguments and "all"-ify it without boilerplate for-each code? - bluebird

I was struggling to describe this succinctly in the title, so I'll paste in my TypeScript code that achieves what I'm talking about:
aggregate<T, A>(args: A[], invokable: (arg: A) => Promise<T>): Promise<T[]> {
    let allPromises = new Array<Promise<T>>();
    for (let arg of args) {
        allPromises.push(invokable(arg));
    }
    return Promise.all(allPromises);
}
This takes a list of arguments of type A and, for each of them, invokes some function (which returns a promise of type T). Each of these promises is collected into a list, which is then all-ified and returned.
My question is, does this function already exist in Bluebird? I'd rather do things properly and use existing, tested functionality! I had problems getting my head around some of the documentation, so I might not have grokked something I should have!

Your problem is perfectly solvable with Array.prototype.map.
Your code can be turned into:
aggregate<T, A>(args: A[], invokable: (arg: A) => Promise<T>): Promise<T[]> {
    return Promise.all(args.map(invokable));
}
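For what it's worth, Bluebird also ships a combinator that does the mapping and the all-ification in one step: Promise.map. A minimal sketch, assuming Bluebird's Promise type is the one in scope here rather than the native one:
// e.g. import * as Promise from "bluebird";
aggregate<T, A>(args: A[], invokable: (arg: A) => Promise<T>): Promise<T[]> {
    // Promise.map runs `invokable` over each element and resolves once
    // every resulting promise has resolved (map + Promise.all in one call).
    return Promise.map(args, invokable);
}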

Related

Function variable and an array of functions in Chapel

In the following code, I'm trying to create a "function pointer" and an array of functions by regarding function names as usual variables:
proc myfunc1() { return 100; }
proc myfunc2() { return 200; }
// a function variable?
var myfunc = myfunc1;
writeln( myfunc() );
myfunc = myfunc2;
writeln( myfunc() );
// an array of functions?
var myfuncs: [1..2] myfunc1.type;
writeln( myfuncs.type: string );
myfuncs[ 1 ] = myfunc1;
myfuncs[ 2 ] = myfunc2;
for fun in myfuncs do
  writeln( fun() );
which seems to work as expected (with Chapel v1.16):
100
200
[domain(1,int(64),false)] chpl__fcf_type_void_int64_t
100
200
So I'm wondering whether the above usage of function variables is legitimate. For creating an array of functions, is it usual to define a concrete function with the desired signature first and then refer to its type (with .type), as in the above example?
Also, is it fine to treat such variables as "usual" variables, e.g., to pass them to other functions as arguments or include them as a field of a class/record? (Please ignore these latter questions if they are too broad.) I would appreciate any advice on potential pitfalls (if any).
This code is using first class function support, which is prototype/draft in the Chapel language design. You can read more about the prototype support in the First-class Functions in Chapel technote.
While many uses of first-class functions work in 1.16 and later versions, you can expect that the language design in this area will be revisited. In particular there isn't currently a reasonable answer to the question of whether or not variables can be captured (and right now attempting to do so probably results in a confusing error). I don't know in which future release this will change, though.
Regarding the myfunc1.type part, the section in the technote I referred to called "Specifying the type of a first-class function" presents an alternative strategy. However I don't see any problem with using myfunc1.type in this case.
Lastly, note that the lambda support in the current compiler actually operates by creating a class with a this method. So you can do the same: create a "function object" (to borrow a C++ term) that has the same effect. A "function object" could be a record or a class. If it's a class, you might use inheritance so that you can create an array of objects that respond to the same method depending on their dynamic type. This strategy might allow you to work around current issues with first-class functions. Even if first-class-function support is completed, the "function object" approach allows you to be more explicit about captured variables; in particular, you might store them as fields in the class and set them in the class initializer. Here is an example creating and using an array of different types of function objects:
class BaseHandler {
  // consider these as "pure virtual" functions
  proc name():string { halt("base name called"); }
  proc this(arg:int) { halt("base greet called"); }
}
class HelloHandler : BaseHandler {
  proc name():string { return "hello"; }
  proc this(arg:int) { writeln("Hello ", arg); }
}
class CiaoHandler : BaseHandler {
  proc name():string { return "ciao"; }
  proc this(arg:int) { writeln("Ciao ", arg); }
}
proc test() {
  // create an array of handlers
  var handlers:[1..0] BaseHandler;
  handlers.push_back(new HelloHandler());
  handlers.push_back(new CiaoHandler());
  for h in handlers {
    h(1); // calls 'this' method in instance
  }
}
test();
Yes, in your example you are using Chapel's initial support for first-class functions. To your second question, you could alternatively use a function type helper for the declaration of the function array:
var myfuncs: [1..2] func(int);
These first-class function objects can be passed as arguments into functions – this is how Futures.async() works – or stored as fields in a record (Try It Online! example). Chapel's first-class function capabilities also include lambda functions.
To be clear, the "initial" aspect of this support comes with the caveat (from the documentation):
This mechanism should be considered a stopgap technology until we have developed and implemented a more robust story, which is why it's being described in this README rather than the language specification.

How to turn a Ceylon Sequential or array into a generic Tuple with the appropriate type?

I've got a generic function that needs to create a Tuple to call a function whose arguments I don't know the types of.
Something like this (except array in this example is created by some external code, so I can't just apply the function directly):
Result apply<Result, Where>(
        Anything[] array,
        Callable<Result, Where> fun)
        given Where satisfies Anything[] => nothing;
Is there a type-safe way to implement this method and get the function to be called with the given arguments?
This cannot be done completely type-safely... but assuming that the array indeed contains elements of the correct types as they should appear in a Tuple of type Where, the following function will do the trick:
Tuple<Anything, Anything, Anything> typedTuple({Anything+} array) {
    if (exists second = array.rest.first) {
        return Tuple(array.first, typedTuple({ second }.chain(array.rest.rest)));
    }
    else {
        return Tuple(array.first, []);
    }
}
And apply gets implemented as:
Result apply<Result, Where>(
        [Anything+] array,
        Callable<Result, Where> fun)
        given Where satisfies Anything[] {
    value tuple = typedTuple(array);
    assert(is Where tuple);
    return fun(*tuple);
}
There's nothing relating the type of array to the parameters of fun, so that signature can't possibly be implemented in a type-safe way. You're not constraining the type of array at all; it could contain anything. How in principle would a type-safe implementation handle the case where fun expects [String, Integer] but array is [Boolean+]?

What is the benefit of nesting functions (in general/in Swift)

I'm just learning some Swift and I've come across the section that talks about nesting functions:
Functions can be nested. Nested functions have access to variables that were declared in the outer function. You can use nested functions to organize the code in a function that is long or complex.
From here
So if the purported benefit is to "organize the code", why not just have the nested function independently, outside of the outer function? That, to me, seems more organized.
The only benefit I can discern is that you "have access to variables that were declared in the outer function", but this seems trivial in comparison to the messiness of having nested functions.
Any thoughts?
So if the purported benefit is to "organize the code", why not just have the nested function independently, outside of the outer function? That, to me, seems more organized.
Oh, I totally disagree. If the only place where the second function is needed is inside the first function, keeping it inside the first function is much more organized.
Real-life examples here: http://www.apeth.com/swiftBook/ch02.html#_function_in_function
Plus, a function in a function has the local environment in scope. Code inside the nested function can "see" local variables declared before the nested function declaration. This can be much more convenient and natural than passing a bunch of parameters.
However, the key thing a local function lets you do that you could not readily do in any other way is form the function in real time (because a function is a closure) and return it from the outer function.
http://www.apeth.com/swiftBook/ch02.html#_function_returning_function
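To illustrate, here is a minimal sketch (the names are made up): a nested function captures a local variable and is returned from the outer function, carrying its environment with it.
func makeCounter() -> () -> Int {
    var count = 0          // local state captured by the nested function
    func increment() -> Int {
        count += 1
        return count
    }
    return increment       // the nested function escapes together with its captured state
}

let counter = makeCounter()
print(counter()) // 1
print(counter()) // 2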
One really nice thing is that Xcode will indent nested functions within their parent function in the function pop-up. The function popup is much easier to navigate with functions related to recalculating the layout indented and all grouped in one place.
IMO, the only difference between closures and nested functions is recursion: you can refer to the function itself in the function body without a trick.
func a() {
    func b() {
        b() // Infinite loop!
    }
    b()
}
A captured reference-type object dies when its capturer dies. In this case, the capturer is the function's lexical scope, which means the captured objects are released when the function finishes executing.
Technically, this makes a reference cycle, which is usually discouraged, but it can be useful if you use it wisely.
For example, combine this with asynchronous operations.
import Dispatch // needed for DispatchQueue

func spawnAsyncOp1(_ completion: @escaping () -> Void) {
    enum Continuation {
        case start
        case waitForSomethingElse1
        case retry
        case end
    }
    let someResource = SomeResource()
    func step(_ c: Continuation) {
        switch c {
        case .start:
            return step(.waitForSomethingElse1)
        case .waitForSomethingElse1:
            DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(10), execute: {
                let fc = (someResource.makeRandomResult() % 100 < 50) ? .end : .retry as Continuation
                print("\(fc)")
                return step(fc)
            })
        case .retry:
            return step(.start)
        case .end:
            return completion()
        }
    }
    return step(.start)
}
This can make resource management in a coroutine-like execution simpler, without an explicit object instance. Resources are simply captured in the function spawnAsyncOp1 and will be released when the function dies.

Improvements to a custom Scala recursion prevention mechanism

I would like to create a smart recursion prevention mechanism. I would like to be able to annotate a piece of code somehow to mark that it should not be executed recursively, and if it is executed recursively, I want to throw a custom error (which can be caught, to allow executing custom code when this happens).
Here is my attempt so far:
import scala.collection.mutable.{Set => MutableSet, HashSet => MutableHashSet}

case class RecursionException(uniqueID: Any) extends Exception("Double recursion on " + uniqueID)

object Locking {
  var locks: MutableSet[Any] = new MutableHashSet[Any]

  def acquireLock(uniqueID: Any): Unit = {
    if (!(locks add uniqueID))
      throw new RecursionException(uniqueID)
  }

  def releaseLock(uniqueID: Any): Unit = {
    locks remove uniqueID
  }

  def lock1(uniqueID: Any, f: () => Unit): Unit = {
    acquireLock(uniqueID)
    try {
      f()
    } finally {
      releaseLock(uniqueID)
    }
  }

  def lock2[T](uniqueID: Any, f: () => T): T = {
    acquireLock(uniqueID)
    try {
      return f()
    } finally {
      releaseLock(uniqueID)
    }
  }
}
and now to lock a code segment I do:
import Locking._
lock1 ("someID", () => {
// Custom code here
})
My questions are:
Is there any obvious way to get rid of the need for hard-coding a unique identifier? I need a unique identifier that will actually be shared between all invocations of the function containing the locked section (so I can't have something like a counter for generating unique values, unless Scala somehow has static function variables). I thought on somehow
Is there any way to prettify the syntax of the anonymous function? Specifically, something that will make my code look like lock1 ("id") { /* code goes here */ } or any other prettier look.
A bit silly to ask at this stage, but I'll ask anyway - am I re-inventing the wheel? (i.e. does something like this already exist?)
Wild final thought: I know that abusing the synchronized keyword (at least in Java) can guarantee that only one execution of the code happens at a time (in the sense that multiple threads cannot enter that part of the code simultaneously). I don't think it prevents the same thread from executing the code twice (although I may be wrong here). Anyway, even if it does prevent it, I still don't want it (even though my program is single-threaded), since I'm pretty sure it would lead to a deadlock rather than reporting an exception.
Edit: Just to make it clearer, this project is for error-debugging purposes and for learning Scala. It has no real usage other than easily finding code errors at runtime (i.e. detecting recursion where it shouldn't happen). See the comments to this post.
Not quite sure what you're aiming at, but a few remarks:
First, you do not need both lock1 and lock2 to distinguish Unit from the other types: Unit is a proper value type, so the generic method will work for it too. Also, you should probably use a call-by-name argument => T rather than a function () => T, and use two argument lists:
def lock[T](uniqueID: Any)(f: => T): T = {
  acquireLock(uniqueID)
  try {
    f
  } finally {
    releaseLock(uniqueID)
  }
}
Then you can call with lock(id){block} and it looks like common instructions such as if or synchronized.
Second, why do you need a uniqueID, and why make Lock a singleton? Instead, make Lock a class, and have as many instances as you would have had IDs.
class Lock {
  def lock[T](f: => T): T = { acquireLock() ... }
}
(You may even name your lock method apply, so you can just do myLock{....} rather than myLock.lock{...})
Multithreading aside, you now just need a Boolean var for acquireLock/releaseLock.
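Putting those pieces together, a minimal single-threaded sketch (the exception type and names here are illustrative, not the ones from the question):
class Lock {
  // true while the guarded block is somewhere on the current call stack
  private var locked = false

  def apply[T](f: => T): T = {
    if (locked) throw new IllegalStateException("recursive entry")
    locked = true
    try f finally { locked = false }
  }
}

val myLock = new Lock
myLock {
  // guarded code; calling myLock { ... } again from inside here throws
}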
Finally, if you need to support multithreading, you have to decide whether several threads can enter the lock (that would not be recursion). If they can, the Boolean should be replaced with a DynamicVariable[Boolean] (or maybe a Java ThreadLocal, as DynamicVariable is an InheritableThreadLocal, which you may or may not want). If they cannot, you just need to synchronize access in acquireLock/releaseLock.
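For the several-threads-may-enter case, a sketch of the DynamicVariable variant (again, the names are illustrative):
import scala.util.DynamicVariable

class ThreadLocalLock {
  // each thread sees its own flag, so other threads may enter freely
  // while same-thread recursion is still detected
  private val locked = new DynamicVariable(false)

  def apply[T](f: => T): T = {
    if (locked.value) throw new IllegalStateException("recursive entry")
    locked.withValue(true)(f)
  }
}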
Is there any obvious way to get rid of the need for hard coding a unique identifier?
Since, from what you said in the comments, this is not production code, I guess you could use the function's hashCode like this:
def lock1 (f:() => Unit) : Unit = {
  acquireLock (f.hashCode)
  try {
    f()
  } finally {
    releaseLock (f.hashCode)
  }
}
Is there any way to prettify the syntax of the anonymous function?
With the before-mentioned change, the syntax should be prettier:
lock1 {
  // code goes here
}
If you're planning on keeping the identifier (if hashCode doesn't cut it for you), you can define your method like this:
def lock1 (uniqueID:Any)(f:() => Unit) : Unit = {
That will let you call the lock1 method with:
lock1("foo") {
  // code goes here
}
Cheers!

How do you return non-copyable types?

I am trying to understand how you return non-primitives (i.e. types that do not implement Copy). If you return something like an i32, then the function creates a new value in memory with a copy of the return value, so it can be used outside the scope of the function. But if you return a type that doesn't implement Copy, it does not do this, and you get ownership errors.
I have tried using Box to create values on the heap so that the caller can take ownership of the return value, but this doesn't seem to work either.
Perhaps I am approaching this in the wrong manner by using the same coding style that I use in C# or other languages, where functions return values, rather than passing in an object reference as a parameter and mutating it, so that you can easily indicate ownership in Rust.
The following code example fails to compile. I believe the issue is only within the iterator closure, but I have included the entire function just in case I am not seeing something.
pub fn get_files(path: &Path) -> Vec<&Path> {
    let contents = fs::walk_dir(path);
    match contents {
        Ok(c) => c.filter_map(|i| {
            match i {
                Ok(d) => {
                    let val = d.path();
                    let p = val.as_path();
                    Some(p)
                },
                Err(_) => None,
            }
        }).collect(),
        Err(e) => panic!("An error occurred getting files from {:?}: {}", path, e),
    }
}
The compiler gives the following error (I have removed all the line numbers and extraneous text):
error: `val` does not live long enough
    let p = val.as_path();
            ^~~
in expansion of closure expansion
expansion site
reference must be valid for the anonymous lifetime #1 defined on the block...
...but borrowed value is only valid for the block suffix following statement
    let val = d.path();
    let p = val.as_path();
    Some(p)
},
You return a value by... well returning it. However, your signature shows that you are trying to return a reference to a value. You can't do that when the object will be dropped at the end of the block because the reference would become invalid.
In your case, I'd probably write something like
#![feature(fs_walk)]

use std::fs;
use std::path::{Path, PathBuf};

fn get_files(path: &Path) -> Vec<PathBuf> {
    let contents = fs::walk_dir(path).unwrap();
    contents.filter_map(|i| {
        i.ok().map(|p| p.path())
    }).collect()
}

fn main() {
    for f in get_files(Path::new("/etc")) {
        println!("{:?}", f);
    }
}
The main thing is that the function returns a Vec<PathBuf>: a collection of a type that owns the paths, rather than just references into someone else's memory.
In your code, you do let p = val.as_path(). Here, val is a PathBuf. Then you call as_path, which is defined as fn as_path(&self) -> &Path. This means that, given a reference to a PathBuf, you can get a reference to a Path that will live as long as the PathBuf does. However, you are trying to keep that reference around longer than val will exist, as val is dropped at the end of the iteration.
How do you return non-copyable types?
By value.
fn make() -> String { "Hello, World!".into() }
There is a disconnect between:
- the language semantics
- the implementation details
Semantically, returning by value is moving the object, not copying it. In Rust, any object is movable and, optionally, may also be Clonable (implement Clone) and Copyable (implement Clone and Copy).
That the implementation of copying or moving uses a memcpy under the hood is a detail that does not affect the semantics, only performance. Furthermore, this being an implementation detail means that it can be optimized away without affecting the semantics, which the optimizer will try very hard to do.
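To make the move-on-return point concrete, here is a small self-contained sketch (the function is hypothetical):
// Vec<i32> does not implement Copy, yet returning it by value is fine:
// the return moves ownership to the caller; the heap data is not duplicated.
fn make_numbers() -> Vec<i32> {
    let v = vec![1, 2, 3]; // owned by `v` inside the function
    v                      // moved out to the caller
}

fn main() {
    let nums = make_numbers(); // `nums` now owns the vector
    println!("{:?}", nums);
}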
As for your particular code, you have a lifetime issue. You cannot return a reference to a value if said reference may outlive the value (for then, what would it reference?).
The simple fix is to return the value itself: Vec<PathBuf>. As mentioned, it will move the paths, not copy them.