WriteableBitmap vs. BitmapImage: what to use? - windows-runtime

I am developing a photo gallery application for WinRT and I do not know what to use.
Is it better to use instances of BitmapImage or WriteableBitmap?
What are the pros and cons of each?
Thanks for any explanation.

There are some memory-collection issues with WriteableBitmap; however, you might use it to display pictures with an alpha channel (such as the PNG format). BitmapImage is ideal for JPEG, but I have encountered problems with setting a big stream source via the asynchronous SetSourceAsync, so if you hit any issues with that, use the synchronous version instead. Also worth mentioning: if you want to clear a BitmapImage's memory, set its source to an empty stream (let it be an InMemoryRandomAccessStream, a leak-free solution), as sketched below.
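A minimal sketch of that clearing trick, assuming img is a XAML Image control whose Source is a BitmapImage (the names are illustrative):

using Windows.Storage.Streams;
using Windows.UI.Xaml.Media.Imaging;

// Release the memory held by a BitmapImage by re-pointing it at an empty stream.
var bitmap = img.Source as BitmapImage;
if (bitmap != null)
{
    // An empty InMemoryRandomAccessStream leaves no decoded pixels alive.
    bitmap.SetSource(new InMemoryRandomAccessStream());
    img.Source = null;
}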

Related

Needed Download Speed

First of all, sorry for my English.
I need some assistance with Forge. I need to display the download speed of the BIM model in the Unity.UI, be it the .RVT or whatever is downloaded from BIM360. Is it possible to know where exactly it downloads to, so I can count the downloaded bytes there?
Another question: currently the download time from Autodesk servers is approximately 110 seconds.
We understand that the download is done in mesh packages. Is there a way to speed this download up? Our client needs it to be faster.
The size (in MB) depends on the model, so it is hard to tell you how large an RVT image could be. It also depends on the asset quality and on what you are actually requesting to view, so I am afraid I cannot really answer that question. However, if you are interested in creating a progress bar in your UI, you can get the number of triangles, material definitions, and textures from the model manifest and calculate the download percentage from there, depending on which technology you are using to access the data; a rough sketch follows. There is an example in the AR|VR Toolkit doing this, but again it depends on the technology you are using.
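Here is a rough sketch of that calculation in plain C#; the counters and where they come from are assumptions for illustration, not actual toolkit API:

// Hypothetical counters: the totals come from the model manifest, and the
// "loaded" ones are incremented by whatever download callbacks you have.
float DownloadProgress(int loadedMeshes, int totalMeshes,
                       int loadedMaterials, int totalMaterials,
                       int loadedTextures, int totalTextures)
{
    int total = totalMeshes + totalMaterials + totalTextures;
    if (total == 0) return 0f;    // manifest not parsed yet
    int loaded = loadedMeshes + loadedMaterials + loadedTextures;
    return (float)loaded / total; // 0..1, ready for a Unity.UI progress bar
}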
Assuming you are using the AR|VR Toolkit, mesh requests are done in parallel, so the speed at which meshes download and instantiate depends on your internet bandwidth and on how fast your device runs Unity. In the toolkit you may accelerate the download, but you have to accept losing control of the UI during that period. It is a compromise forced by Unity running single-threaded.
The toolkit can also convert models to glTF, and you could use the gltfast Unity plugin to get better performance, but you would lose the metadata associated with objects. That is another compromise due to the nature of glTF at this time.

Is media pipeline broken in Windows Phone 8.1?

It seems that the media pipeline in Windows Phone 8.1 is broken because of a number of memory management issues.
When you create a background audio app that uses IMediaSource to stream audio in Windows Phone Runtime 8.1, the app's components eventually throw OutOfMemoryException and even StackOverflowException under some circumstances. Looking through the memory dumps, there's a lot of uncollected garbage inside.
The discussion started on the MSDN forums and progressed to this conclusion. I have created a WPDev UserVoice suggestion so that the Windows Phone team can notice this, but I still hope it's me (and the other guys from the MSDN forums) who is wrong and that there's a solution to the issue.
I also have a small CodePlex project that suffers from this, and there's actually an issue report there regarding this exact problem.
I hope that with the help of the community this issue can be worked around or passed directly to the Microsoft development team to investigate and eliminate. Thanks!
Update 1:
There's a kind of workaround for StackOverflowException, but it doesn't help against OutOfMemoryException.
Okay, so it seems that the problem is actually with the lifetime of byte arrays in .NET.
To resolve the memory problem, one should use Windows Runtime's Windows.Storage.Streams.IBuffer. Don't create many new .NET byte arrays in any form: neither by a simple new byte[], nor via the System.Runtime.InteropServices.WindowsRuntime.WindowsRuntimeBuffer class, as it is a managed implementation of the IBuffer interface.
Once allocated, those byte arrays live a long time because they are pinned by OverlappedData structures, and they overflow the memory threshold for the background audio task. IBuffers (the real Windows Runtime ones, like the Windows.Storage.Streams.Buffer class) contain native arrays that are deallocated as soon as the IBuffer's reference count reaches zero; they don't rely on the GC.
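A minimal sketch of the difference, assuming inputStream is an IInputStream you already hold (inside an async method):

using Windows.Storage.Streams;

// Bad: a managed byte[] gets pinned by OverlappedData and lives long.
// var managed = new byte[4096];

// Better: a real WinRT Buffer is backed by a native array that is freed
// as soon as its reference count reaches zero.
var buffer = new Windows.Storage.Streams.Buffer(4096);
IBuffer filled = await inputStream.ReadAsync(buffer, buffer.Capacity, InputStreamOptions.Partial);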
What I've found out is that this problem is not specific to background audio. Actually, I have seen a lot of other questions about similar problems. The solution is to use the Windows Runtime backend where possible, because it is unmanaged and frees resources as soon as they have zero references.
Thanks to @Soonts for pointing me in the right direction!
They had memory issues with the way MSS manages its memory, but they silently fixed that in some update: WP7 Background Audio - Memory Leak or Not?
I’m not sure, but I think the problem is your code. You just shouldn’t call var buffer = new byte[4096]; each time a sample is requested. Doing so may work on the PC, but for the embedded platform, I don’t think it’s a good idea to stress the memory manager that much.
In my MediaStreamSource implementation, I use a single circular buffer that is allocated when the MSS is constructed, and portions of that buffer are reused indefinitely during playback. In my GetSampleAsync, I construct an instance of my Stream-implementing class that doesn't own any memory but instead only holds a reference to a portion of that circular buffer. This way, only a few small objects are allocated/deallocated during playback, so the audio stream data does not load the memory manager. A sketch of the idea follows.
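A minimal sketch of that circular-buffer idea (the names are illustrative, not the poster's actual code):

using System;
using System.IO;

// One long-lived ring buffer is allocated when the MSS is built; each sample
// gets a lightweight Stream that only views a slice of it.
class RingSliceStream : Stream
{
    private readonly byte[] ring;   // the single circular buffer
    private readonly int start;     // where this sample's bytes begin
    private readonly int count;     // how many bytes belong to this sample
    private int position;

    public RingSliceStream(byte[] ring, int start, int count)
    {
        this.ring = ring;
        this.start = start;
        this.count = count;
    }

    public override int Read(byte[] dest, int offset, int n)
    {
        n = Math.Min(n, count - position);
        for (int i = 0; i < n; i++)               // wrap around the ring
            dest[offset + i] = ring[(start + position + i) % ring.Length];
        position += n;
        return n;
    }

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { return count; } }
    public override long Position
    {
        get { return position; }
        set { position = (int)value; }
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}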

Image compression inside HTML5 Web Worker

I have a web application which creates a big image (30 MB) as an ArrayBuffer inside a web worker.
I would like to return this image to the main thread.
Returning the ArrayBuffer via the postMessage method is very slow (5 seconds) and it freezes the UI.
postMessage(data);
So, I have been looking for a library to compress the image, but I can only find libraries which use the HTML5 canvas.
Because the HTML5 canvas is forbidden in a Web Worker, I'm asking if you have a solution to this problem.
My goal is to take more time in the Web Worker and reduce the transfer time.
Some browsers support transferable objects but to my knowledge not all of them.
Transferable objects are basically pointers to structures that you pass from the main thread to the worker and/or back. Using this methodology requires a slightly different invocation of postMessage().
More on this is covered here
How about increasing the pool of concurrent web worker threads?
Here are two great articles which cast some light on how to use it:
http://www.smartjava.org/content/html5-easily-parallelize-jobs-using-web-workers-and-threadpool
http://typedarray.org/concurrency-in-javascript/
Here's how to use the postMessage method:
postMessage(jsonObject, [jsonObject.data.buffer]);
The first argument is the same as usual.
The second must be an array of the ArrayBuffers to transfer.
I'm still looking for an image compression library that doesn't use the HTML5 canvas.

How can you throttle bandwidth usage on the receiving side of a video stream in ActionScript 3.0?

Right now I'm on a project that moves video streams across RTMP using mostly ActionScript 3.0 (a little 2.0 is used on the server side), and we already have functionality in place to throttle bandwidth usage for these video streams at the client level. However, we're only doing that by calling the setQuality() method of the Camera class, which affects every receiver of that video stream. Now, though, we really need a way to effectively set the bandwidth usage for individual receivers, but apparently VideoDisplay, NetStream, and NetConnection are all pretty much devoid of this sort of functionality. Is there not some decent way to do this in AS3? If there is a way, how? Thanks!
EDIT: For clarity let's say that the sender of the video stream has their Camera object's quality set to use 1 meg of bandwidth. How could I make a receiver of that stream only use half a meg of bandwidth to stream that video in without messing with the sender's 1 meg setting?
FMS just passes the data received from the publisher to the set of subscribers; it doesn't change it (at least from the data point of view). What you require, though, is transcoding of the published video stream according to each subscriber's needs.
Plain RTMP doesn't do that at all. I think there is a way to publish multiple streams for the same data using the HTTP streaming feature, but in that case the publisher would really be publishing multiple streams of the media to FMS.

Is it possible to stream live video to Flash Media Server via NetStream byte access?

So, I'm working with a video source that I'm feeding into my Adobe AIR application via some native extension work, with the goal of ultimately getting it to a Flash Media Server. The video is H.264 encoded and muxed into an FLV container, which aligns me with the supported Flash Media Server codecs and NetStream (appendBytes) requirements. I can get the data into AIR just fine.
The mine I stepped on today, however, is that the documentation for NetStream.appendBytes states I must call NetStream.play(null):
Call this method on a NetStream in "Data Generation Mode". To put a NetStream into Data Generation Mode, call NetStream.play(null) on a NetStream created on a NetConnection connected to null. Calling appendBytes() on a NetStream that isn't in Data Generation Mode is an error and raises an exception.
NetStream.play() called with a null parameter yields local FLV playback. I can't publish the stream to FMS in this mode. But my research into Flash seems to indicate NetStream's byte access is my only real hope here when dealing with non-camera or non-web video data.
Q: Can I latch onto the video playback buffer for publishing to an FMS? Can I create a sort of pipeline of NetStreams or NetConnections to achieve this? Or is there an alternate approach here for transmitting H.264/FLV data to FMS? (The source of my video cannot communicate with FMS directly.)
The answer to your question is quite simply no. This is apparently implemented as a security feature, though it is probably less of a security issue and more of a sales issue: Adobe likes to block certain capabilities intentionally in order to create the possibility of, or need for, another product, i.e. more revenue.
I tried looking into this for you, to see if there was some dirty hack where you could attach a camera or something and override the binary data being sent to the stream like you can with audio, but unfortunately, to my knowledge, no such hack is possible. More info here: NetStream.appendBytes
Update
You might be able to do something hackish by using ManyCam, which is a virtual webcam driver (from what I understand). This will provide a valid camera you can select from Flash, and you can also select a video file as the source for ManyCam. See http://manycam.com/user_guide/#HowtoSelectaVideofileasthePictureSourceforManyCam
Update #2
If you're looking for something open source that will do the same thing as manycam, check out the following:
http://code.google.com/p/webcamstudio/wiki/VideoSourceMovie (GPL Licensed)