Multiple windows, Windows Universal

I would like to have a second (and more) window for my Windows Universal app when it runs on a PC. But to my surprise this does not seem easy. In WPF, for example, I could add a new item to my project and select a window. In Universal, there is no "Window" in the New Item list. I can declare a variable of type Windows.UI.Xaml.Window, but I cannot instantiate it (there is no public constructor) or show it. How do I launch another window? Thanks

There is a sample available on Microsoft's UWP GitHub repo which covers creating multiple views for your application. I can provide more information or help if you need it.

Ended up finding something quite simple: a helper that takes a page type, creates the page and a window with it inside, and returns the created page object:
async Task<Page> CreatePageWindowAsync(Type p)
{
    CoreApplicationView newView = CoreApplication.CreateNewView();
    int newViewId = 0;
    Page pg = null;

    // Everything that touches the new view's UI must run on that view's dispatcher.
    await newView.Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        Frame frame = new Frame();
        frame.Navigate(p, null);
        Window.Current.Content = frame;
        Window.Current.Activate();

        newViewId = ApplicationView.GetForCurrentView().Id;
        pg = frame.Content as Page;
    });

    // Show the new view in its own standalone window.
    bool viewShown = await ApplicationViewSwitcher.TryShowAsStandaloneAsync(newViewId);
    return pg;
}
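A minimal sketch of calling it, assuming a hypothetical page type named SecondaryPage:

// Hypothetical caller: open SecondaryPage in a second window from a button click.
private async void OpenWindowButton_Click(object sender, RoutedEventArgs e)
{
    Page page = await CreatePageWindowAsync(typeof(SecondaryPage));
    // 'page' lives on the new view; only touch it from that view's dispatcher.
}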


How to dynamically assign Puppeteer viewport size from current screen resolution?

I'm using Puppeteer to automate some page actions in an already open, fully-visible browser (non-headless). Currently, I manually set the viewport like this:
const page = await browser.newPage();
await page.setViewport({width: W, height: H});
I have to manually set W and H based on both the actual screen resolution and the system-wide scaling factor. This makes the script very brittle and non-portable.
I would like to have the new page always open with the largest possible visible viewport, without having to manually specify what that is. I tried some of the other solutions suggested on SO and elsewhere, such as setting the viewport to null, but I have not yet stumbled upon a working solution for my specific use case. Any help would be appreciated. Thanks!
If you want to set W and H persistently across a launched browser, you need to set defaultViewport: null together with the --window-size=${W},${H} launch argument. This sets the window size and viewport at the browser level rather than the page level (which changes with each new tab).
Like this, all the newly opened tabs will share the same window size and viewport.
const browser = await puppeteer.launch({
    defaultViewport: null,
    args: [`--window-size=${W},${H}`]
});
If you can retrieve the screen resolution from the system, you can set the viewport size from it correctly, though you will probably not be able to get this information directly from JavaScript.
If you can get it from a PowerShell script (see the edit below), you could use the following to execute that script from JavaScript and read its output in your program in order to set your viewport dimensions.
const {spawn} = require("child_process");

async function getSomeDataFromAPowerShellScript() {
    // Spawn a PowerShell terminal as a child process of the main program
    // and run the provided script in it.
    const child = spawn("powershell.exe", ["./PATH/MyPowerShellScript.ps1"]);
    return await new Promise(resolve => {
        child.stdout.on("data", (data) => { // fires when the script writes output
            console.log(data);
            resolve(data.toString());
        });
    });
}
A call to getSomeDataFromAPowerShellScript() will return the first chunk of output from the PowerShell terminal as a string.
If you want to retrieve more than just the first output, you can use this instead:
async function getSomeDataFromAPowerShellScript() {
    // Spawn a PowerShell terminal as a child process of the main program
    // and run the provided script in it.
    const child = spawn("powershell.exe", ["./PATH/MyPowerShellScript.ps1"]);
    let result = [];
    return await new Promise(resolve => {
        child.stdout.on("data", (data) => { // fires when the script writes output
            console.log(data);
            result.push(data.toString());
        });
        child.on("exit", () => { // fires when the child process exits after execution
            resolve(result);
        });
    });
}
Edit:
You could use the PowerShell script from Ben N's answer to How to get the current screen resolution on windows via command line? to get the current resolution of your primary screen:
PowerShell-script.ps1
Add-Type @"
using System;
using System.Runtime.InteropServices;
public class PInvoke {
    [DllImport("user32.dll")] public static extern IntPtr GetDC(IntPtr hwnd);
    [DllImport("gdi32.dll")] public static extern int GetDeviceCaps(IntPtr hdc, int nIndex);
}
"@
$hdc = [PInvoke]::GetDC([IntPtr]::Zero)
[PInvoke]::GetDeviceCaps($hdc, 118) # width
[PInvoke]::GetDeviceCaps($hdc, 117) # height
From the original explanation:
It outputs two lines: first the horizontal resolution, then the vertical resolution.
To run it, save it to a file (e.g. screenres.ps1) and launch it with PowerShell:
powershell -ExecutionPolicy Bypass .\screenres.ps1
Using this answer in combination with theDavidBarton's answer should achieve what you're asking for.
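Putting the two pieces together might look like this (a sketch; the script path and the exact two-line width/height output format are assumptions based on the snippets above):

const puppeteer = require("puppeteer");
const { spawn } = require("child_process");

// Run screenres.ps1 and parse its two output lines into numbers.
function getScreenResolution() {
    return new Promise((resolve, reject) => {
        const child = spawn("powershell.exe", ["-ExecutionPolicy", "Bypass", "./screenres.ps1"]);
        let output = "";
        child.stdout.on("data", (data) => { output += data.toString(); });
        child.on("error", reject);
        child.on("exit", () => {
            const [width, height] = output.trim().split(/\s+/).map(Number);
            resolve({ width, height });
        });
    });
}

(async () => {
    const { width, height } = await getScreenResolution();
    const browser = await puppeteer.launch({
        defaultViewport: null,
        args: [`--window-size=${width},${height}`]
    });
    const page = await browser.newPage(); // inherits the browser-level viewport
})();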

Chrome produces no audio after reaching 50 audio output streams

During my testing, I have found that reaching 50 audio output streams on a single tab (as displayed on the Audio tab of chrome://media-internals) causes the audio output to disappear. Does Chrome have a maximum number of audio output streams allowed per tab? If so, is there some workaround? The Chrome version I am using is 87.0.4280.141.
Whenever we mute/unmute the audio (second function below) or adjust the mic volume (first function below), we create a new audio context. Could too many AudioContext instances have caused the issue?
private setLocalStreamVolume(stream: MediaStream | undefined) {
    const context = new AudioContext()
    const destination = context.createMediaStreamDestination()
    const gainNode = context.createGain()
    if (stream) {
        for (const track of stream.getTracks()) {
            const sourceStream = context.createMediaStreamSource(new MediaStream([track]))
            sourceStream.connect(gainNode)
            gainNode.connect(destination)
            gainNode.gain.value = this._micVolume
        }
    }
    return destination.stream
}
export function mixStreams(streams: Iterable<(MediaStream | undefined)>) {
    const context = new AudioContext()
    const mixedOutput = context.createMediaStreamDestination()
    for (const stream of streams)
        if (stream)
            for (const track of stream.getTracks()) {
                const sourceStream = context.createMediaStreamSource(new MediaStream([track]))
                sourceStream.connect(mixedOutput)
            }
    return mixedOutput.stream.getTracks()[0]
}
Could too many AudioContext instances have caused the issue?
Too many AudioContext instances certainly will. In fact, on some systems you can only use a single AudioContext.
I'm not sure what your specific use case is, but you probably only need one AudioContext. All your MediaStreamSourceNodes can live in the same context.
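For example, a minimal sketch of sharing one lazily created context (the helper name is illustrative):

// Sketch: one AudioContext shared by every helper, so muting/volume
// changes stop piling up new contexts (and new output streams).
let sharedContext: AudioContext | undefined

function getAudioContext(): AudioContext {
    if (!sharedContext) {
        sharedContext = new AudioContext()
    }
    return sharedContext
}

setLocalStreamVolume and mixStreams would then call getAudioContext() instead of new AudioContext(), so the per-tab stream count stops growing.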

Does the MessageDialog class for UWP Apps support three buttons on Mobile?

I'm creating a simple program for reading text files on Windows Phone. I decided to make it a Universal Windows Platform (UWP) app.
In the app, I have a very simple MessageDialog with three options: Yes, No, Cancel. It works perfectly on the desktop and in the simulator. However, when testing on an actual device, the ShowAsync method fails with the message: "Value does not fall within the expected range".
This only happens if there are more than two commands registered in the dialog. Does the MessageDialog class really support up to three commands, as the documentation suggests, or does this only apply to UWP apps running on desktop devices?
At the moment, there is a clear statement in the docs:
The dialog has a command bar that can support up to 3 commands in desktop apps, or 2 commands in mobile apps.
Sad but true: on mobiles, there are two commands only. Need more? Use ContentDialog instead.
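For example, a minimal ContentDialog sketch (the button texts are illustrative; CloseButtonText requires the Creators Update SDK or later, while earlier releases offer only primary and secondary buttons):

var dialog = new ContentDialog
{
    Title = "Save changes?",
    Content = "Your document has unsaved changes.",
    PrimaryButtonText = "Yes",
    SecondaryButtonText = "No",
    CloseButtonText = "Cancel"
};
// Returns Primary, Secondary, or None (Cancel / Back / Esc).
ContentDialogResult result = await dialog.ShowAsync();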
It looks like the documentation is missing information about Mobile (and really the API should do a better job here).
For Mobile, if you hit the Back key you get a null return value, so you can do this (not a recommended coding pattern, but the best I can think of):
async Task Test()
{
    const int YES = 1;
    const int NO = 2;
    const int CANCEL = 3;

    var dialog = new MessageDialog("test");
    dialog.Commands.Add(new UICommand { Label = "yes", Id = YES });
    dialog.Commands.Add(new UICommand { Label = "no", Id = NO });

    // Ugly hack; not really how it's supposed to be used.
    // TODO: Revisit if MessageDialog API is updated in future release
    var deviceFamily = AnalyticsInfo.VersionInfo.DeviceFamily;
    if (deviceFamily.Contains("Desktop"))
    {
        dialog.Commands.Add(new UICommand { Label = "cancel", Id = CANCEL });
    }
    // Maybe Xbox 'B' button works, but I don't know, so best to not do anything.
    else if (!deviceFamily.Contains("Mobile"))
    {
        throw new Exception("Don't know how to show dialog for device "
            + deviceFamily);
    }

    // Will return null if you press Back on Mobile.
    var result = await dialog.ShowAsync();

    // C# 6 syntactic sugar to avoid some null checks.
    var id = (int)(result?.Id ?? CANCEL);
    Debug.WriteLine("You chose {0}", id);
}

Uniquely identify a user on WinRT and WP8 using (f.ex.) LiveID?

I am looking for a way to uniquely identify a user in WinRT and preferably in WP8 as well. In WP7 applications, I could get a hash of the Live ID to do this, but I am not sure of how to approach this in WinRT environment. One of the goals here is to identify the user in Windows 8 environment as a whole. Using LiveID in one form or another would be ok in this case. I found some sources but they also mentioned that this might require some Enterprise Security permissions (or such) that are not welcome in the Windows Marketplace.
Say I want to identify the user based on the live id, I want to do it automatically and across multiple devices (PC, Tablet, maybe WP8). What resources should I be looking for?
You can obtain the ID of each Live user if you are using the Live SDK. Here's the code:
private async Task<string> GetLiveUserId()
{
    string ID = "";
    var auth = new LiveAuthClient();
    var loginResult = await auth.LoginAsync(new string[] { "wl.signin", "wl.basic" });
    if (loginResult.Status == LiveConnectSessionStatus.Connected)
    {
        var liveClient = new LiveConnectClient(loginResult.Session);
        var myData = await liveClient.GetAsync("me");
        ID = myData.Result["id"].ToString();
    }
    return ID;
}
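If you want an anonymized identifier rather than the raw ID (similar to the WP7 Live ID hash the question mentions), you could hash it before storing; a minimal sketch using the WinRT crypto APIs (the helper name is hypothetical):

using Windows.Security.Cryptography;
using Windows.Security.Cryptography.Core;

private async Task<string> GetHashedUserIdAsync()
{
    // Hash the Live ID so the raw identifier is never persisted.
    string id = await GetLiveUserId();
    var sha256 = HashAlgorithmProvider.OpenAlgorithm(HashAlgorithmNames.Sha256);
    var data = CryptographicBuffer.ConvertStringToBinary(id, BinaryStringEncoding.Utf8);
    return CryptographicBuffer.EncodeToHexString(sha256.HashData(data));
}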

How to browse mobile directory in flex?

I have captured 3 videos on my mobile, which are by default stored in the phone gallery (Gallery/videos/). I have to play these 3 videos in one of my Flex mobile applications. How can I get the videos into the Flex project? If I need to browse the mobile directory, kindly help me with some code to do so.
I too am looking for an answer to this question. Right now, based on other Stack Overflow discussions, exhaustive perusal of tutorials and Adobe documentation, and comments on both (often the more useful resource), I'm coming to the conclusion that it's not possible.
You can use CameraRoll.browseForImage() to open the iOS gallery of photos and see all entities of MediaType.IMAGE, but it will not show you MediaType.VIDEO.
You can use CameraUI to launch the system camera by delegation, and that returns a MediaPromise, but as far as I can tell it does not save the captured video anywhere, and I cannot find a way to access the captured video using the MediaPromise (at least using the Loader class).
Here's my code as a hint in that direction. The second code block uses CameraRoll to browseForImage(), but there is no browseForVideo() in the API.
if (CameraUI.isSupported)
{
    camera = new CameraUI();
    camera.addEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
    camera.addEventListener(Event.CANCEL, cameraCanceled);
    camera.addEventListener(ErrorEvent.ERROR, cameraError);
    camera.launch(MediaType.VIDEO);
}
else
{
    statusText.text = "Camera not supported on this device.";
    startTimer();
}

if (CameraRoll.supportsBrowseForImage)
{
    roll = new CameraRoll();
    roll.addEventListener(MediaEvent.SELECT, cameraRollEventComplete);
    roll.addEventListener(Event.CANCEL, cameraCanceled);
    roll.addEventListener(ErrorEvent.ERROR, cameraError);
    roll.browseForImage();
}
else
{
    statusText.text = "Camera roll not supported on this device.";
    startTimer();
}
I've since found that videos captured using the delegated system camera are stored in a temporary storage location that iOS -DOES!- allow access to. (I was pleasantly shocked.)
The captured video is not added to the device's Camera Roll like other videos captured using the iOS system Camera app, so it's not enough to capture video and expect to be able to access it later (if, for instance, CameraRoll.browseForVideo() is ever added to the API).
Therefore, you have to 'get while the getting is good' and move the file from the temporary storage location to some non-volatile location such as ApplicationStorageDirectory or the user's Documents directory (the only options in iOS, I think).
The MediaPromise... I think... is completely useless for accessing the video via any direct progressive loader/streamer method, but still provides the location/url/path/filename of the temporary file so you can perform File operations on it.
Ironic that there are tutorials for getting around the lack of a file location/url/path/filename in the MediaPromise when using CameraRoll.browseForImage()... and that method is to use a loader class to load the image content (which you can then write out to a file), but when taking video, the video content is not accessible, and instead a file location/url/path/filename is provided. Ironic that there are nearly no resources I was able to find to help with this also. grumble
I'm going to include some code chunks without really editing them to strip out extraneous bits, because it's way past when I need to be in bed, but I wanted you to have this. I may come back and clean it up later.
This section is in a Spark SkinnablePopUpContainer, and I use the same click event for several buttons, so the 'case' below is from the switch-case in that event handler function.
In case you are not familiar, close(true, data) is the method that closes the SkinnablePopUpContainer, tells the parent/owner that the container was closed purposefully, and signals that it should look for the data object being shared back (i.e., there are changes to 'commit').
case "cameraVideo":
{
    if (CameraUI.isSupported)
    {
        camera = new CameraUI();
        camera.addEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
        camera.addEventListener(Event.CANCEL, cameraCanceled);
        camera.addEventListener(ErrorEvent.ERROR, cameraError);
        camera.launch(MediaType.VIDEO);
    }
    else
    {
        statusText.text = "Camera not supported on this device.";
        startTimer();
    }
    break;
}
protected function cameraCanceled(event:Event):void
{
    statusText.text = "Camera access canceled by user.";
    startTimer();
}

protected function cameraError(event:ErrorEvent):void
{
    statusText.text = "There was an error while trying to use the camera.";
    startTimer();
}

protected function videoMediaEventComplete(event:MediaEvent):void
{
    statusText.text = "Preparing captured video...";

    camera.removeEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
    camera.removeEventListener(Event.CANCEL, cameraCanceled);
    camera.removeEventListener(ErrorEvent.ERROR, cameraError);

    var media:MediaPromise = event.data;
    data.MediaType = MediaType.VIDEO;
    data.MediaPromise = media;
    data.source = "camera video";
    close(true, data);
}
This section is the ActionScript in the close handler of the parent/owner of the SkinnablePopUpContainer (truncated once the useful code is included):
private function choosePictureLightboxClosed(event:PopUpEvent):void
{
    imageButtonsActive = false;
    if (event.commit)
    {
        this.data = event.data as Object;
        filters = new Array();
        selection = true;
        switch (data.MediaType)
        {
            case MediaType.VIDEO:
            {
                mediaType = "video";
                trace(data.MediaPromise.file.url + " - " + data.MediaPromise.relativePath + " - " + data.MediaPromise.mediaType);

                var sourceFile:File = new File(data.MediaPromise.file.url);
                var destinationFile:File = File.applicationStorageDirectory.resolvePath("User" + parentApplication.userid);
                if (destinationFile.exists && !destinationFile.isDirectory)
                {
                    destinationFile.deleteFile();
                }
                destinationFile.createDirectory();
                destinationFile = destinationFile.resolvePath("Videos");
                if (destinationFile.exists && !destinationFile.isDirectory)
                {
                    destinationFile.deleteFile();
                }
                destinationFile.createDirectory();
                destinationFile = destinationFile.resolvePath(parentApplication.userid + "Video" + new Date().getTime() + ".mov");
                trace(destinationFile.nativePath);

                sourceFile.moveTo(destinationFile, true);
                break;
            }
I sure do hope this helps. This has been a very frustrating (and costly, in terms of our project being government grant funded and having deadlines we utterly failed to meet) experience, and I very much hope that these hard-won solutions might help others avoid the same.