What's the correct way to observe on the UI thread - windows-phone-8

I'm building observables over the Geolocator and events must be subscribed on the UI thread.
Is ObserveOnDispatcher deprecated?

ObserveOnDispatcher() is not deprecated, but as Paul says it's generally better to provide an explicit scheduler so you can inject a TestScheduler for unit testing.
DispatcherScheduler.Current can be used to obtain the current DispatcherScheduler - not .Instance, which makes sense since there can actually be more than one - although most people shouldn't need to go down that particular road!
ObserveOnDispatcher() and DispatcherScheduler are present in the Windows Phone 8 Rx build. They are in the rx-xaml nuget package which contains xaml platform specific elements - you would have missed this if you just included rx-main.
Specifically, they are located in the System.Reactive.Windows.Threading.dll assembly. ObserveOnDispatcher() is on the System.Reactive.Linq.DispatcherObservable type, and the assembly also has System.Reactive.Concurrency.DispatcherScheduler.
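For illustration, a minimal sketch of that scheduler-injection idea (the view model shape and the positions observable are assumptions, not code from the question):

using System;
using System.Reactive.Concurrency;
using System.Reactive.Linq;

public class LocationViewModel
{
    public string Status { get; private set; }

    // Inject the scheduler: DispatcherScheduler.Current in the app,
    // a TestScheduler (or ImmediateScheduler.Instance) in unit tests.
    public LocationViewModel(IObservable<string> positions, IScheduler uiScheduler)
    {
        positions
            .ObserveOn(uiScheduler)
            .Subscribe(p => Status = p);
    }
}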

I usually write:
.ObserveOn(DispatcherScheduler.Instance)
if I'm not using ReactiveUI. If I am, it's
.ObserveOn(RxApp.MainThreadScheduler)
The difference is that, in a unit test runner, RxApp.MainThreadScheduler is automatically rigged to be CurrentThread, so your unit tests pass; otherwise they would all hang.
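As a rough sketch of why that matters (assuming xUnit and ReactiveUI; the test below is illustrative):

using System.Collections.Generic;
using System.Reactive.Linq;
using ReactiveUI;
using Xunit;

public class SchedulerTests
{
    [Fact]
    public void Observing_on_MainThreadScheduler_completes_synchronously_in_a_test_runner()
    {
        var seen = new List<int>();
        // Under a unit test runner RxApp.MainThreadScheduler is CurrentThread,
        // so the value is delivered before Subscribe returns and nothing hangs.
        Observable.Return(42)
            .ObserveOn(RxApp.MainThreadScheduler)
            .Subscribe(seen.Add);
        Assert.Equal(new[] { 42 }, seen);
    }
}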

Related

How to use an in-process IMFTransform with WinRT MediaPlayer::AddVideoEffect via activatableClassId

WinRT's Windows::Media::Playback::MediaPlayer supports adding video and audio effects (much like IMFMediaEngine); however, I can't find a way to use the existing IMFTransforms that I already use with IMFMediaEngineEx::InsertVideoEffect() in MediaPlayer::AddVideoEffect().
MediaPlayer::AddVideoEffect() only takes a string for the "activatableClassId", whereas IMFMediaEngineEx::InsertVideoEffect() allows me to pass in a pointer to my local IMFTransform directly. I don't want to register a DLL with the system for the class to be activatable; I just want the IMFTransform to be registered locally, in-process, so that it can be discovered by the classId.
I've searched online but there is very little information. All I found was this Microsoft thread, an old article showing a CGreyScale MFT using WRL, and this useful repository which uses an appxmanifest to register the classes (not what I want to do).
These examples seem useful, and I implemented the decoration around my existing MFT; however, they rely on registering the activatableClassId externally, so I still can't tell how to do it in-process. The only thing I could find was RoRegisterActivationFactories(), but there's very little information about it, so I'm not sure.
Does anyone know how to do this?
Thanks,
Since the MediaPlayer API is WinRT, it will expect to use WinRT activated objects for effects. Alternatively, the lower level win32 MF Media Engine allows you to pass in an IMFActivate for any custom activation.
There are two ways to activate the MFT with WinRT:
1. Register the MFT in the registry and reference it by CLSID; you can refer to this document.
2. Use Registration-Free WinRT (which requires an application manifest); you can refer to this blog.
Unfortunately, this means there is an application-manifest requirement if you wish to register the MFT in-process.

Swing and JavaFX concurrency

Is there a way to avoid concurrency issues when using Swing embedded in JavaFX 8 (SwingNode), or vice versa (JFXPanel)?
I have two threads (the EDT and the JavaFX Application Thread) which manage the UI; this can cause unexpected results...
No, it is not officially possible currently. In both frameworks, changes to the structure can only be made on the respective UI thread.
This may change in the future, but I do not know of any concrete plans Oracle may have, and I cannot find an appropriate task in their JIRA.
Edit: I found the specific thread about this on the JavaFX mailing list:
http://mail.openjdk.java.net/pipermail/openjfx-dev/2013-August/009541.html
jira issue: https://javafx-jira.kenai.com/browse/RT-30694
Apparently there is an experimental system property that can be set to enable a "single threaded mode": -Djavafx.embed.singleThread=true

Castle Windsor when is transient with disposable released? Burden

We're using Castle Windsor 2.1.0.6655.
I want to use the transient lifestyle for my resolved objects, but I want to check how this version of Castle deals with transients that have dependencies. Using the Immediate window in Visual Studio, I can see the effects of resolving, disposing, and finally releasing, all the while checking whether the resolved object is released.
e.g.
resolved = container.Resolve(Id);
container.Kernel.ReleasePolicy.HasTrack(resolved)
= true
resolved.Dispose();
container.Kernel.ReleasePolicy.HasTrack(resolved)
= true
container.Release(resolved);
container.Kernel.ReleasePolicy.HasTrack(resolved)
= false
My concern is that these objects are continuing to be tracked between requests, as they are never released, meaning memory usage continues to rise.
I've read that Component Burden is related to this issue, but I haven't been able to find out exactly what this is in Castle 2.0 and greater.
The difficulty in 'releasing' is that the resolved objects are in fact part of services, their usage being to provide ORM functions and mappings. I'm not sure that referencing the container to release is correct in these instances.
I'm wondering whether there is a way for me to see how many objects the container is referencing at a given point, without having to use memory profilers, as we don't have this available.
I thought I could maybe use the following:
container.Kernel.GetHandlers()
with the type I'm looking for, to see if tracked occurrences are increasing?
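(For what it's worth, a small sketch of that idea, where IMyService is a placeholder. Note that GetHandlers returns the registered components for a service, not the instances currently being tracked, so on its own it won't show tracking growth:)

IHandler[] handlers = container.Kernel.GetHandlers(typeof(IMyService));
Console.WriteLine(handlers.Length); // number of registrations, not tracked instances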
Version 2.1 will celebrate its 4th birthday very soon. I strongly recommend you upgrade to version 3.1.
Not only is v2.1 no longer supported while v3.1 is much newer, with many bugfixes, but v3.1 also has some major improvements in the way it does tracking.
In v3.1 you will also be able to enable a performance counter that reports to you, in real time, the number of instances being tracked by the release policy.
As for the particular concern you're referring to, that sounds like an old threading bug that was fixed somewhere along the way. One more reason to upgrade.
Windsor has to be used with the R(egister) R(esolve) R(elease) pattern.
By default (and you should definitely stick with that), all components are tracked/owned by the container... that's the Windsor beauty!
Until you (or the container itself) call Release, the instance will be held in memory, no matter whether you call Dispose directly (as per your sample).
That said, components registered as transient should be resolved at the composition root only, in other words as the first object of the dependency graph, or through a factory (late dependency); a minimal sketch follows below.
Of course, keep in mind that when using a factory within the dependency graph you may need to implement the RRR pattern explicitly.
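A minimal sketch of the RRR pattern at a composition root, assuming a hypothetical IWorker service (the fluent registration API shown is the Windsor 3.x flavour):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

// Register
var container = new WindsorContainer();
container.Register(Component.For<IWorker>().ImplementedBy<Worker>().LifestyleTransient());

// Resolve
var worker = container.Resolve<IWorker>();
try
{
    worker.DoWork();
}
finally
{
    // Release: the container stops tracking and disposes the transient
    container.Release(worker);
}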

Open alternatives to Windows Workflow

Pre-warning: There are some other questions similar to this but don't quite answer the question (these include: Alternatives to Windows Workflow Foundation?, Can anyone recommend a .Net open source alternative to Windows Workflow?)
We are developing a system that is an event-based state machine, and we are currently investigating Windows Workflow. Our system needs to respond with low latency to events coming in from a multitude of sources (XMPP, HTTP, SMS, phone calls, email, etc.), and it must be scalable, resilient, and, most importantly, customisable. For a variety of reasons (and due diligence) I am looking for open workflow engines that support functions similar to Windows Workflow Foundation (and more, if possible), mainly the following (though it doesn't matter too much if some engines don't support some features):
Persistence of long running tasks, and resumption of tasks on external events
High performance, low latency
Ability to develop custom actions
The ability to specify workflows dynamically
Tracking and tracing
I am not constrained to platform or language, and I would love some help and tips from you guys so that I can start to investigate the engines more closely and any experiences you had with the engines.
Paul.
I invite you to examine Stateless further, as suggested in the answer to my SO question can-anyone-recommend-a-net-open-source-alternative-to-windows-workflow. Achieving the goal of a long-running state machine is very simple: you can store the current state of your state machine in a database and re-sync the state machine when needed. Consider the following quote from the Stateless site:
Stateless has been designed with encapsulation within an ORM-ed domain model in mind. Some ORMs place requirements upon where mapped data may be stored. To this end, the StateMachine constructor can accept function arguments that will be used to read and write the state values:
var stateMachine = new StateMachine<State, Trigger>(
    () => myState.Value,
    s => myState.Value = s);
With very little effort you can persist your state, then retrieve that state easily later on.
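For instance (a sketch; the entity below is only an assumption about what myState could look like):

// An ORM-mapped entity whose Value column stores the current state.
class OrderState
{
    public State Value { get; set; }
}

Load the OrderState from the database, construct the StateMachine with the accessors above, and the machine resumes where it left off.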
With respect to updating the workflow dynamically: if you configure a state machine such as
var stateMachine = new StateMachine<string, int>();
and maintain a separate XML file of states and triggers, you can perform the configuration at runtime by looping through the string/int value pairs, as sketched below.
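A sketch of that runtime loop, assuming the XML has already been parsed into (state, trigger, destination) tuples, and assuming "Start" as the initial state (both are illustrative, not from the answer):

using System.Collections.Generic;
using Stateless;

// transitions: IEnumerable<(string State, int Trigger, string Destination)> parsed from the XML file
var machine = new StateMachine<string, int>("Start");
foreach (var (state, trigger, destination) in transitions)
    machine.Configure(state).Permit(trigger, destination); // wire each pair dynamically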
"Java side":
Apache ODE (Orchestration Director Engine) executes business processes written following the WS-BPEL standard. It talks to web services, sending and receiving messages, handling data manipulation and error recovery as described by your process definition. It supports both long and short living process executions to orchestrate all the services that are part of your application.
http://ode.apache.org/
OSWorkflow can be considered a "low level" workflow implementation. Situations like "loops" and "conditions" that might be represented by a graphical icon in other workflow systems must be "coded" in OSWorkflow.
http://www.opensymphony.com/osworkflow/
Shark is an extendable workflow engine framework including a standard implementation completely based on WfMC specifications, using XPDL (without any proprietary extensions!) as its native workflow process definition format and the WfMC "ToolAgents" API for server-side execution of system activities.
http://www.enhydra.org/workflow/shark/index.html
Python side:
http://bika.sourceforge.net/
http://www.vivtek.com/wftk/
I hope this will help you :-)
You might consider implementing your flow as an actual state machine. Tools like the State Machine Compiler and Ragel can help with this. State machines, in many circumstances, are just what you need to implement insanely complex behavior that is testable and rock-solid. I don't claim to be a Windows Workflow expert, but from what I have seen, I question its superiority over coding your own state machine, either by hand or using a tool.
You might want to check out Simple State Machine.
If you feel like you want to have more control over things and want to roll your own it might be helpful to check out the Saga support that projects like NServiceBus and MassTransit use. Sagas look to be very similar to WF workflows but are POCO objects and I believe both projects just use NHibernate for Saga persistence.
I'm going to recommend you take a few hours to look at the book Open-Source ESBs in Action. "Orchestration" and "Choreography" are the key buzzwords to look at when dealing with "enterprise service busses." The systems for .NET are quite expensive (BizTalk is in the price range of a decent car, the price of Tibco is in the price range of a decent house).
Other links:
Open ESB project
Comparison of OpenESB and ServiceMix (both of which are the subject of the "In Action" book above).
Try Drools for Java. I personally have never tried it, but I know several commercial applications are based on Drools.
http://www.jboss.org/drools/
You could also upgrade to .NET 4.0; there are major improvements to Workflow in the new framework. I know that if I were writing a new workflow application, I would jump to 4.0.
Good Luck
JBoss JBPM
Consider Workflow Engine, a lightweight all-in-one component that enables you to add custom executable workflows of any complexity to any .NET or Java software, be it your own creation or a third-party solution, with minimal changes to existing code. It supports custom actions and commands, has timers and supports parallel workflows. And there's a free version.
You can take a look at Imixs-Workflow, which is an event-driven approach to a state machine based on BPMN 2.0. It focuses especially on human-centric, long-running tasks.

Why would you want Dependency Injection without configuration?

After reading the nice answers in this question, I watched the screencasts by Justin Etheredge. It all seems very nice, with a minimum of setup you get DI right from your code.
Now the question that creeps up on me is: why would you want to use a DI framework that doesn't use configuration files? Isn't that the whole point of using a DI infrastructure, so that you can alter the behaviour (the "strategy", so to speak) after building/releasing/whatever the code?
Can anyone give me a good use case that validates using a non-configured DI like Ninject?
I don't think you want a DI-framework without configuration. I think you want a DI-framework with the configuration you need.
I'll take Spring as an example. Back in the "old days" we used to put everything in XML files to make everything configurable.
When switching to a fully annotated regime, you basically define which component roles your application contains. A given service may, for instance, have one implementation for the "regular runtime", while another implementation belongs in the "stubbed" version of the application. Furthermore, when wiring for integration tests you may be using a third implementation.
When you look at the problem this way, you quickly realize that most applications only contain a very limited set of component roles at runtime - these are the things that actually cause different versions of a component to be used. And usually a given implementation of a component is always bound to this role; it is really the reason of existence of that implementation.
So if you let the "configuration" simply specify which component roles you require, you can get away without much more configuration at all.
Of course, there's always going to be exceptions, but then you just handle the exceptions instead.
I'm on the same path as krosenvold here, only with less text: within most applications, you have exactly one implementation per required "service". We simply don't write applications where each object needs 10 or more implementations of each service. So it would make sense to have a simple way to say "this is the default implementation; 99% of all objects using this service will be happy with it".
In tests, you usually use a specific mockup, so no need for any config there either (since you do the wiring manually).
This is what convention-over-configuration is all about. Most of the time, the configuration is simply a dumb repetition of something that the DI framework should know already :)
In my apps, I use the class object as the key to look up implementations and the "key" happens to be the default implementation. If my DI framework can't find an override in the config, it will just try to instantiate the key. With over 1000 "services", I need four overrides. That would be a lot of useless XML to write.
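A sketch of that lookup with hypothetical names; the idea is simply "use the override from config if present, otherwise instantiate the key type itself":

using System;
using System.Collections.Generic;

class SimpleResolver
{
    // Overrides loaded from a config file: key type -> implementation type.
    private readonly Dictionary<Type, Type> overrides;

    public SimpleResolver(Dictionary<Type, Type> overrides)
    {
        this.overrides = overrides;
    }

    public object Resolve(Type key)
    {
        // With ~1000 services and only four overrides, almost every call takes the fallback path.
        var impl = overrides.TryGetValue(key, out var mapped) ? mapped : key;
        return Activator.CreateInstance(impl);
    }
}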
With dependency injection, unit tests become very simple to set up, because you can inject mocks instead of real objects into your object under test. You don't need configuration for that; just create and inject the mocks in the unit test code.
I received this comment on my blog, from Nate Kohari:
Glad you're considering using Ninject! Ninject takes the stance that the configuration of your DI framework is actually part of your application, and shouldn't be publicly configurable. If you want certain bindings to be configurable, you can easily make your Ninject modules read your app.config. Having your bindings in code saves you from the verbosity of XML, and gives you type-safety, refactorability, and intellisense.
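A sketch of the "module reads app.config" idea from the quote (IMailer and the setting name are made up for illustration):

using System.Configuration;
using Ninject.Modules;

public class MailModule : NinjectModule
{
    public override void Load()
    {
        // An app.config appSetting decides which binding to use at startup.
        var useStub = ConfigurationManager.AppSettings["UseStubMailer"] == "true";
        if (useStub)
            Bind<IMailer>().To<StubMailer>();
        else
            Bind<IMailer>().To<SmtpMailer>();
    }
}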
You don't even need to use a DI framework to apply the dependency injection pattern. You can simply use static factory methods for creating your objects if you don't need configurability apart from recompiling code, as in the sketch below.
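A rough sketch of that static-factory variant (the names are illustrative):

public static class ServiceFactory
{
    // Swapping the implementation means editing this one line and recompiling.
    public static IZipService CreateZipService()
    {
        return new ZipService();
    }
}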
So it all depends on how configurable you want your application to be. If you want it to be configurable/pluggable without code recompilation, you'll want something you can configure via text or XML files.
I'll second the use of DI for testing. At the moment I only really consider using DI for testing, as our application doesn't require any configuration-based flexibility; it's also far too large to retrofit at the moment.
DI tends to lead to cleaner, more separated design - and that gives advantages all round.
If you want to change the behavior after a release build, then you will need a DI framework that supports external configurations, yes.
But I can think of other scenarios in which this configuration isn't necessary: for example, controlling the injection of components into your business logic, or using a DI framework to make unit testing easier.
You should read about PRISM in .NET (it describes best practices for building composite applications in .NET). In these best practices, each module "exposes" its implementation types inside a shared container. This way each module has clear responsibilities regarding "who provides the implementation for this interface". I think it will be clear enough once you understand how PRISM works.
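A sketch of that idea with Prism and Unity as they existed around that time (the module and service names are invented):

using Microsoft.Practices.Prism.Modularity;
using Microsoft.Practices.Unity;

public class OrdersModule : IModule
{
    private readonly IUnityContainer container;

    public OrdersModule(IUnityContainer container)
    {
        this.container = container;
    }

    public void Initialize()
    {
        // The module "exposes" its implementation type inside the shared container.
        container.RegisterType<IOrderService, OrderService>();
    }
}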
When you use inversion of control, you are helping to make your class do as little as possible. Let's say you have some Windows service that waits for files and then performs a series of processes on each file. One of the processes is to zip it and then email it.
public class ZipProcessor : IFileProcessor
{
    // The dependencies are supplied from outside rather than created here.
    private readonly IZipService ZipService;
    private readonly IEmailService EmailService;

    public ZipProcessor(IZipService zipService, IEmailService emailService)
    {
        ZipService = zipService;
        EmailService = emailService;
    }

    public void Process(string fileName)
    {
        ZipService.Zip(fileName, Path.ChangeExtension(fileName, ".zip"));
        EmailService.SendEmailTo(................);
    }
}
Why would this class need to actually do the zipping and the emailing when you could have dedicated classes to do this for you? Obviously you wouldn't, but that's only a lead up to my point :-)
In addition to not implementing the Zip and email why should the class know which class implements the service? If you pass interfaces to the constructor of this processor then it never needs to create an instance of a specific class, it is given everything it needs to do the job.
Using a D.I.C. you can configure which classes implement certain interfaces and then just get it to create an instance for you, it will inject the dependencies into the class.
var processor = Container.Resolve<ZipProcessor>();
So now not only have you cleanly separated the class's functionality from shared functionality, but you have also prevented the consumer/provider from having any explicit knowledge of each other. This makes code easier to understand because there are fewer factors to consider at the same time.
Finally, when unit testing you can pass in mocked dependencies. When you test your ZipProcessor your mocked services will merely assert that the class attempted to send an email rather than it really trying to send one.
//Mock the ZIP
var mockZipService = MockRepository.GenerateMock<IZipService>();
mockZipService.Expect(x => x.Zip("Hello.xml", "Hello.zip"));

//Mock the email send
var mockEmailService = MockRepository.GenerateMock<IEmailService>();
mockEmailService.Expect(x => x.SendEmailTo(.................));

//Test the processor
var testSubject = new ZipProcessor(mockZipService, mockEmailService);
testSubject.Process("Hello.xml");

//Assert the services were used in the correct way
mockZipService.VerifyAllExpectations();
mockEmailService.VerifyAllExpectations();
So, in short, you would want to do it to:
01: Prevent consumers from knowing explicitly which provider implements the services they need, which means there is less to understand at once when you read code.
02: Make unit testing easier.
Pete