How to construct an empty DeviceInformationCollection? - windows-runtime

I'm implementing an interface that returns a DeviceInformationCollection. The implementation can time out (or fail), in which case I would like to return an empty collection. This is to allow clients of the interface to always iterate over the returned collection, regardless of whether it succeeded or not, e.g.
auto&& devices{ co_await MyType::GetDevicesAsync() };
for (auto&& device : devices)
{
    // Do crazy stuff with 'device'
}
However, I cannot figure out how to construct an empty DeviceInformationCollection. The following code 'works', but causes undefined behavior when clients use the code above:
IAsyncOperation<DeviceInformationCollection> MyType::GetDevicesAsync()
{
    // Doing Guru Meditation
    // ...
    co_return { nullptr };
}
My current workaround is to return an IVector<DeviceInformation> instead, and to copy the items of the internal DeviceInformationCollection into that vector on success. That's both tedious and inefficient. I'd much rather return the DeviceInformationCollection as-is, and construct an empty collection on failure.
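For reference, the copy-based workaround amounts to roughly the following sketch. It is illustrative only: the free-function form, the try/catch, and the DeviceInformation::FindAllAsync() call stand in for whatever MyType actually does to enumerate devices.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.Devices.Enumeration.h>

using namespace winrt;
using namespace Windows::Foundation;
using namespace Windows::Foundation::Collections;
using namespace Windows::Devices::Enumeration;

IAsyncOperation<IVector<DeviceInformation>> GetDevicesAsync()
{
    // Always hand back a real (possibly empty) vector.
    auto result{ single_threaded_vector<DeviceInformation>() };
    try
    {
        DeviceInformationCollection devices{ co_await DeviceInformation::FindAllAsync() };
        for (auto&& device : devices)
        {
            result.Append(device);
        }
    }
    catch (hresult_error const&)
    {
        // Timed out or failed: fall through and return the empty vector.
    }
    co_return result;
}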
Is there a way to do this?

Officially, this is not supported: the DeviceInformationCollection class does not provide a way to create an empty instance of itself. Unless you can find some function in the Windows.Devices.Enumeration API that does this for you, you're out of luck.
Unofficially, we can observe that the default interface of the DeviceInformationCollection class is IVectorView<DeviceInformation>, which means that this interface represents the class on the ABI. You can play tricks with this knowledge, but in general that is very dangerous, because APIs that accept a DeviceInformationCollection as input may assume that its implementation is exclusive and rely on some internal layout you may not be aware of. Better to return IVectorView every time, in a polymorphic and safe manner. Something like this:
using namespace winrt;
using namespace Windows::Foundation;
using namespace Windows::Foundation::Collections;
using namespace Windows::Devices::Enumeration;
IAsyncOperation<IVectorView<DeviceInformation>> Async()
{
    DeviceInformationCollection devices = co_await // ... some async call
    if (devices)
    {
        co_return devices;
    }
    // Returns empty IVectorView...
    co_return single_threaded_observable_vector<DeviceInformation>().GetView();
}

int main()
{
    for (auto&& device : Async().get())
    {
        printf("%ls\n", device.Name().c_str());
    }
}

Related

Function variable and an array of functions in Chapel

In the following code, I'm trying to create a "function pointer" and an array of functions by treating function names as ordinary variables:
proc myfunc1() { return 100; }
proc myfunc2() { return 200; }
// a function variable?
var myfunc = myfunc1;
writeln( myfunc() );
myfunc = myfunc2;
writeln( myfunc() );
// an array of functions?
var myfuncs: [1..2] myfunc1.type;
writeln( myfuncs.type: string );
myfuncs[ 1 ] = myfunc1;
myfuncs[ 2 ] = myfunc2;
for fun in myfuncs do
writeln( fun() );
which seems to be working as expected (with Chapel v1.16)
100
200
[domain(1,int(64),false)] chpl__fcf_type_void_int64_t
100
200
So I'm wondering whether the above usage of function variables is legitimate. For creating an array of functions, is it usual to define a concrete function with the desired signature first and then refer to its type (with .type), as in the above example?
Also, is it OK to treat such variables as "usual" variables, e.g., to pass them to other functions as arguments or to include them as fields of a class/record? (Please ignore these latter questions if they are too broad...) I would appreciate any advice about potential pitfalls (if any).
This code is using first class function support, which is prototype/draft in the Chapel language design. You can read more about the prototype support in the First-class Functions in Chapel technote.
While many uses of first-class functions work in 1.16 and later versions, you can expect that the language design in this area will be revisited. In particular there isn't currently a reasonable answer to the question of whether or not variables can be captured (and right now attempting to do so probably results in a confusing error). I don't know in which future release this will change, though.
Regarding the myfunc1.type part, the section in the technote I referred to called "Specifying the type of a first-class function" presents an alternative strategy. However I don't see any problem with using myfunc1.type in this case.
Lastly, note that the lambda support in the current compiler actually operates by creating a class with a this method. So you can do the same: create a "function object" (to borrow a C++ term) that has the same effect. A "function object" could be a record or a class. If it's a class, you might use inheritance to be able to create an array of objects that respond to the same method depending on their dynamic type. This strategy might allow you to work around current issues with first-class functions. Even if first-class-function support is completed, the "function object" approach allows you to be more explicit about captured variables; in particular, you might store them as fields in the class and set them in the class initializer. Here is an example creating and using an array of different types of function objects:
class BaseHandler {
  // consider these as "pure virtual" functions
  proc name():string { halt("base name called"); }
  proc this(arg:int) { halt("base greet called"); }
}

class HelloHandler : BaseHandler {
  proc name():string { return "hello"; }
  proc this(arg:int) { writeln("Hello ", arg); }
}

class CiaoHandler : BaseHandler {
  proc name():string { return "ciao"; }
  proc this(arg:int) { writeln("Ciao ", arg); }
}

proc test() {
  // create an array of handlers
  var handlers:[1..0] BaseHandler;
  handlers.push_back(new HelloHandler());
  handlers.push_back(new CiaoHandler());
  for h in handlers {
    h(1); // calls 'this' method in instance
  }
}
test();
Yes, in your example you are using Chapel's initial support for first-class functions. To your second question, you could alternatively use a function type helper for the declaration of the function array:
var myfuncs: [1..2] func(int);
These first-class function objects can be passed as arguments into functions – this is how Futures.async() works – or stored as fields in a record (Try It Online! example). Chapel's first-class function capabilities also include lambda functions.
To be clear, the "initial" aspect of this support comes with the caveat (from the documentation):
This mechanism should be considered a stopgap technology until we have developed and implemented a more robust story, which is why it's being described in this README rather than the language specification.

Is it possible to convert saga iterator to regular promise?

I'm building an abstraction layer for the keepassxc webextension. It uses redux-saga channels to make Chrome messaging look synchronous, and it's working (un)surprisingly well. However, I want to completely abstract away redux-saga, so that the layer looks like normal functions returning Promises.
tl;dr
KeePassXC-browser will be a browser extension that allows retrieving passwords stored in the KeePassXC app from the browser.
There are two possible communication protocols: HTTP and NativeClient. So I decided to use a TypeScript interface, and depending on the communication protocol there will be two classes that implement this interface.
Interface:
interface Keepass {
  getDatabaseHash(): Promise<string>;
  getCredentials(origin: string, formUrl: string): Promise<KeepassCredentials[]>;
  associate(): Promise<KeepassAssociation>;
  isAssociated(dbHash: string): Promise<boolean>;
}
The first implementation, covering the HTTP communication protocol, uses the fetch API, which is already Promise-based, so the implementation is straightforward and conforms 100% to this interface.
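For a rough idea of what that first implementation can look like, here is a purely hypothetical fetch-based sketch. The endpoint paths, JSON payloads, and the placeholder KeepassCredentials/KeepassAssociation types are invented for illustration; the real KeePassXC HTTP protocol looks different.
type KeepassCredentials = { login: string; password: string };
type KeepassAssociation = { id: string; key: string };

class KeepassHttp implements Keepass {
  constructor(private baseUrl: string) {}

  // Every call is just an HTTP round trip, so it is already a Promise.
  private async post<T>(path: string, body: unknown): Promise<T> {
    const response = await fetch(this.baseUrl + path, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    return response.json() as Promise<T>;
  }

  getDatabaseHash(): Promise<string> {
    return this.post<string>("/database-hash", {});
  }

  getCredentials(origin: string, formUrl: string): Promise<KeepassCredentials[]> {
    return this.post<KeepassCredentials[]>("/credentials", { origin, formUrl });
  }

  associate(): Promise<KeepassAssociation> {
    return this.post<KeepassAssociation>("/associate", {});
  }

  isAssociated(dbHash: string): Promise<boolean> {
    return this.post<boolean>("/is-associated", { dbHash });
  }
}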
The second implementation, covering the NativeClient protocol, uses redux-saga (effects and channels) to make asynchronous messaging look like a synchronous function call. It's a bit complicated, but it works pretty well and covers edge cases that would be hard to handle any other way, because native messaging is a protocol based on standard input and standard output streams, so requests and responses can be interleaved, arrive out of order, etc.
The actual problem I'm failing to solve is that the second implementation does not satisfy the interface, because it consists of generators, not Promises.
Basically, I would like to wrap a saga generator function in a function that returns a Promise. There is the nice co library that does exactly this for normal generators, but it doesn't seem to work with redux-saga.
function* someGenerator() {
  const state = yield select(); // execution freeze here when called from wrapper
  const result = yield call(someEffect);
  return result;
}

function wrapper() {
  return co(someGenerator); // returns Promise
}
Is this possible? If so, what I'm doing wrong?
Redux-saga is based on generator functions for a specific reason: to allow splitting asynchronous actions into separately yielded parts and managing them from one endpoint, the internal saga process manager. A Promise, by contrast, is a thing-in-itself and cannot be partially executed. In other, simplified words: Promises drive the control flow they are embedded in, whereas generators are driven by an outer control flow.
yield select(); // execution freeze here when called from wrapper
Your main misconception is assuming that select actually performs some async operation. No, it just pauses the generator at that point and transfers control to the redux-saga engine, which knows what to do with the yielded value and may or may not start an async process (it does not matter which).
When that process is done, the saga engine resumes the generator and passes the result value back into it.
You can easily see this in the source code of select (https://github.com/redux-saga/redux-saga/blob/master/src/internal/io.js#L139). It just returns a plain object with a certain structure that the saga engine understands; the engine then performs the real action and resumes your generator via generatorName.next(resultValue).
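To make that concrete, here is a toy runner, not the real redux-saga internals; toySelect, toyGenerator, and toyRun are made-up names. It shows that the yielded effect is just a plain descriptor object and that the runner resumes the generator with next(result):
// Roughly what io.js builds: a plain object describing the effect.
function toySelect() {
  return { SELECT: { selector: null, args: [] } };
}

function* toyGenerator() {
  const state = yield toySelect(); // pauses here until the runner resumes us
  return state.counter;
}

// The "engine": inspect the yielded descriptor, compute its result,
// and hand that result back via iterator.next(result).
function toyRun(makeIterator, getState) {
  const it = makeIterator();
  let step = it.next();
  while (!step.done) {
    const result = step.value.SELECT ? getState() : undefined; // only SELECT handled here
    step = it.next(result);
  }
  return step.value; // the generator's return value
}

console.log(toyRun(toyGenerator, () => ({ counter: 42 }))); // prints 42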
UPD. Purely theoretically, you can wrap it in a re-assignable promise, but this is not really a usable approach:
// Your library code
function deferredPromise() {
  let resolver = null;
  const promise = new Promise(resolve => (resolver = resolve));
  return [
    resolver,
    promise
  ];
}
function generateSomeGenerator() {
  let [ selectDoneResolve, selectDonePromise ] = deferredPromise();
  const someGenerator = function* () {
    const state = yield select(); // execution freezes here when called from the wrapper
    const [newSelectDoneResolve, newSelectDonePromise] = deferredPromise();
    selectDoneResolve({
      info: state, nextPromise: newSelectDonePromise
    });
    selectDoneResolve = newSelectDoneResolve;
    selectDonePromise = newSelectDonePromise;
    const result = yield call(someEffect);
    return result;
  }
  return {
    someGenerator,
    selectDonePromise
  };
}
const { someGenerator: someGeneratorImpl, selectDonePromise } = generateSomeGenerator();
export const someGenerator = someGeneratorImpl;

// Wrapper for interface
selectDonePromise.then(watchDone)
function watchDone({ info, nextPromise }) {
  // Do something with your info
  nextPromise.then(watchDone);
}

How to compare NPVariant objects?

I am registering listeners from JS in an NPAPI plugin.
In order not to register the same listener multiple times, I need a way to compare the passed NPVariant object to those already in the list.
This is how I'm registering listeners from JS:
PluginObject.registerListener("event", listener);
and then in the plugin source:
for (l=head; l!=NULL; l=l->next) {
  // somehow compare the listeners
  // l->listener holds NPVariant object
  if (l->listener-> ??? == new_lle->listener-> ???)
  {
    found = 1;
    DBG("listener is a duplicate, not adding.");
    NPN_MemFree(new_lle->listener);
    free(new_lle);
    break;
  }
}
When you're talking about a JavaScript function, the NPVariant is just an NPObject:
typedef struct _NPVariant {
  NPVariantType type;
  union {
    bool boolValue;
    int32_t intValue;
    double_t doubleValue;
    NPString stringValue;
    NPObject *objectValue;
  } value;
} NPVariant;
Compare val.type and val.objectValue. This will usually work, but if it doesn't, there isn't another way, so you're still better off trying it. I guess one other possibility would be to create a JavaScript function to compare them, inject it with NPN_Evaluate, and call it with the two objects.
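For what it's worth, the direct comparison described above could look roughly like this; a sketch assuming the usual npruntime.h definitions, with same_listener and the parameter names made up for illustration:
// Two listener variants are considered the same if both hold an object
// and both point at the same NPObject.
static bool same_listener(const NPVariant *stored, const NPVariant *incoming)
{
    return stored->type == NPVariantType_Object &&
           incoming->type == NPVariantType_Object &&
           NPVARIANT_TO_OBJECT(*stored) == NPVARIANT_TO_OBJECT(*incoming);
}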
I don't think you can rely on objectValue. For instance if you do the following:
foo={};
bar=foo;
x={};
x.f=foo; x.b=bar;
Now, if you call NPN_Enumerate and pass x as the NPObject, you get two identifiers. Calling GetProperty for each of these returns NPVariants, but the value of variant->value.objectValue will be different for each, and different again in subsequent calls to NPN_Enumerate.
taxilian: is there significant overhead in calling NPN_Invoke with the two NPObjects, just to test for equality? This also involves some calls to GetProperty, the creation of identifiers, calling the NPVARIANT macros to test the results, etc. I am wondering just how much logic I should be injecting and evaluating in JavaScript; this code injection seems to come up as a solution again and again. Is it costly?

Improvements to a custom Scala recursion prevention mechanism

I would like to create a smart recursion prevention mechanism. I would like to be able to annotate a piece of code somehow, to mark that it should not be executed recursively, and if it is indeed executed recursively, I want to throw a custom error (which can be caught to allow executing custom code when this happens).
Here is my attempt so far:
import scala.collection.mutable.{Set => MutableSet, HashSet => MutableHashSet}

case class RecursionException(uniqueID:Any) extends Exception("Double recursion on " + uniqueID)

object Locking {
  var locks:MutableSet[Any] = new MutableHashSet[Any]

  def acquireLock (uniqueID:Any) : Unit = {
    if (! (locks add uniqueID))
      throw new RecursionException(uniqueID)
  }

  def releaseLock (uniqueID:Any) : Unit = {
    locks remove uniqueID
  }

  def lock1 (uniqueID:Any, f:() => Unit) : Unit = {
    acquireLock (uniqueID)
    try {
      f()
    } finally {
      releaseLock (uniqueID)
    }
  }

  def lock2[T] (uniqueID:Any, f:() => T) : T = {
    acquireLock (uniqueID)
    try {
      return f()
    } finally {
      releaseLock (uniqueID)
    }
  }
}
and now to lock a code segment I do:
import Locking._
lock1 ("someID", () => {
// Custom code here
})
My questions are:
Is there any obvious way to get rid of the need for hard-coding a unique identifier? I need a unique identifier which will actually be shared between all invocations of the function containing the locked section (so I can't have something like a counter for generating unique values, unless Scala somehow has static function variables). I thought of somehow...
Is there any way to prettify the syntax of the anonymous function? Specifically, something that will make my code look like lock1 ("id") { /* code goes here */ } or any other prettier form.
A bit silly to ask at this stage, but I'll ask anyway: am I re-inventing the wheel? (i.e. does something like this already exist?)
Wild final thought: I know that abusing the synchronized keyword (at least in Java) can guarantee that there will be only one execution of the code (in the sense that no two threads can enter that part of the code at the same time). I don't think it prevents the same thread from executing the code twice (although I may be wrong here). Anyway, even if it does prevent it, I still don't want it (even though my program is single threaded), since I'm pretty sure it would lead to a deadlock rather than reporting an exception.
Edit: Just to make it clearer, this project is for error-debugging purposes and for learning Scala. It has no real use other than easily finding code errors at runtime (for detecting recursion where it shouldn't happen). See the comments to this post.
Not quite sure what you're aiming at, but a few remarks:
First, you do not need separate lock1 and lock2 to distinguish Unit from other types; Unit is a proper value type, so the generic method will work for it too. Also, you should probably use a call-by-name argument => T rather than a function () => T, and use two argument lists:
def lock[T] (uniqueID:Any)(f: => T) : T = {
  acquireLock (uniqueID)
  try {
    f
  } finally {
    releaseLock (uniqueID)
  }
}
Then you can call with lock(id){block} and it looks like common instructions such as if or synchronized.
Second, why do you need a uniqueID, and why make Lock a singleton? Instead, make Lock a class, and have as many instances as you would have had ids.
class Lock {
  def lock[T](f: => T): T = { acquireLock() ... }
}
(You may even name your lock method apply, so you can just do myLock{....} rather than myLock.lock{...})
Multithreading aside, you now just need a Boolean var for acquire/releaseLock.
Finally, if you need to support multithreading, you have to decide whether several threads can enter the lock (that would not be recursion). If they can, the Boolean should be replaced with a DynamicVariable[Boolean] (or maybe a Java ThreadLocal, as DynamicVariable is an InheritableThreadLocal, which you may or may not want). If they cannot, you just need to synchronize access in acquire/releaseLock.
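For illustration, a per-instance lock along those lines could look like the following sketch (single-threaded case; the names are mine, not from the original code):
case class RecursionException() extends Exception("Double recursion")

class Lock {
  private var locked = false

  // Naming the method 'apply' allows the myLock { ... } call syntax.
  def apply[T](f: => T): T = {
    if (locked) throw new RecursionException()
    locked = true
    try f finally { locked = false }
  }

  // For per-thread recursion detection, the Boolean var could instead be
  // kept in a scala.util.DynamicVariable[Boolean].
}

// Usage:
// val myLock = new Lock
// myLock { /* code that must not be re-entered recursively */ }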
Is there any obvious way to get rid of the need for hard-coding a unique identifier?
Since, from what you said in the comments, this is not production code, I guess you could use the function's hashCode like this:
def lock1 (f:() => Unit) : Unit = {
  acquireLock (f.hashCode)
  try {
    f()
  } finally {
    releaseLock (f.hashCode)
  }
}
Is there any way to prettify the syntax of the anonymous function?
With the aforementioned change, the syntax should be prettier:
lock1 {
  // Custom code here
}
If you're planning on keeping the identifier (if hashCode doesn't cut it for you), you can define your method like this:
def lock1 (uniqueID:Any)(f:() => Unit) : Unit = {
That will let you call the lock1 method with:
lock("foo") {
}
Cheers!

gcroot has no value

I have a curious problem with a managed object in unmanaged code. I have this C++/CLI module that bridges C++ and C# code. I have a structure like this:
template <class T>
struct ManagedReference
{
    gcroot<T^> addonHost;
};
Now, at some point I create an instance of this managed reference and set the addonHost. All is well, I am able to use the handle.
However, in some cases (it would require too much contextual description, I'm afraid) the value cannot be evaluated.
In this case, calling a method on addonHost results in an "Entry point not found" exception.
As you can see from the screenshots, it is not two different instances or two different handles; it's the very same one. I don't understand how, in some situations, the "value" is not evaluated, and how I could possibly catch that, because it's not null.
What I should also mention is that I have several gcroot<T> and all of them have this problem, except one that is a gcroot<System::String>.
UPDATE
Here is what the debugger shows during execution. The object is created and available; then at some point the value 'vanishes', and at the next call it's there again. This is very reproducible; it's not random.
handle 0x0E1618EC void*
value 0x106396d8 { m_host=0x10638e04 } <-- object is available here
handle 0x0E1618EC void*
value 0x1020e558 { m_host=0x1020e4f0 } <-- object moved in memory
handle 0x0E1618EC void*
value <-- no value here
handle 0x0E1618EC void*
value 0x1020e558 { m_host=0x1020e4f0 } <-- object 'is back'
Maybe it would help to initialize the gcroot. Try:
template <class T>
struct ManagedReference
{
    gcroot<T^> addonHost;
    ManagedReference() : addonHost(nullptr) {}
};