new APIs for windows phone 8.1 - windows-phone-8

I am trying to use these two methods (from Windows Phone 8) in Windows Phone 8.1, but I get an error and it doesn't compile, most probably because they have been removed. I tried searching for the new APIs but couldn't find any. What are the alternatives for these?
Dispatcher.BeginInvoke( () => {}); (MSDN link)
System.Threading.Thread.Sleep(); (MSDN link)

They still exist for Windows Phone 8.1 Silverlight apps, but not for Windows Phone Store apps. The replacements for Windows Store apps are:
Sleep (see Thread.Sleep replacement in .NET for Windows Store):
await System.Threading.Tasks.Task.Delay(TimeSpan.FromSeconds(30));
Dispatcher (see How the Deployment.Current.Dispatcher.BeginInvoke work in windows store app?):
CoreDispatcher dispatcher = CoreWindow.GetForCurrentThread().Dispatcher;
await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => { });

Dispatcher.BeginInvoke( () => {}); is replaced by
await this.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, async () => {});
and System.Threading.Thread.Sleep(); is replaced by
await Task.Delay(TimeSpan.FromSeconds(doubleValue));

Be aware that not only has the API changed (adopting the API from Windows Store apps), but the way the Dispatcher is obtained has also changed from Windows Phone 8.0.
@Johan Faulk's suggestion, although it will work, may return null under a multitude of conditions.
Old code to grab the dispatcher:
var dispatcher = Deployment.Current.Dispatcher;
or
Deployment.Current.Dispatcher.BeginInvoke(() =>
{
    // any code to modify UI or UI-bound elements goes here
});
New in Windows 8.1: Deployment is no longer an available object or namespace.
In order to make sure the main UI thread's dispatcher is obtained, use the following:
var dispatcher = CoreApplication.MainView.CoreWindow.Dispatcher;
or
CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
    CoreDispatcherPriority.Normal,
    () =>
    {
        // UI code goes here
    });
Additionally, although the method SAYS it will be executed async, the await keyword cannot be used inside the method invoked by RunAsync unless that method is marked async. (In the above example the method is anonymous.)
In order to execute an awaitable method inside the anonymous method above, decorate the anonymous method inside RunAsync() with the async keyword.
CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
    CoreDispatcherPriority.Normal,
    async () =>
    {
        // UI code goes here
        var response = await LongRunningMethodAsync();
    });

For Dispatcher, try this. MSDN
private async Task MyMethod()
{
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () => { });
}
For Thread.Sleep() try await Task.Delay(1000). MSDN

Related

constructor issue in nestjs framework

I'm learning NestJS and Puppeteer.
I tried a web crawl and it worked well.
But because the headless browser is launched and closed on every request, the response time is long.
I think it's better to launch the browser just once instead of launching and closing it every time.
But I don't know how to use a constructor in NestJS. It looks different from vanilla JavaScript.
async crawlData() {
  const browser = await launch();
  const page = await browser.newPage();
  await page.goto("https://ko.reactjs.org/");
  await page.screenshot({ path: "./Docs/ko-reactjs-homepage.png" });
  await browser.close();
}
Please understand that I'm not a native English speaker.
It sounds like you need a custom provider for the browser so it is opened once and doesn't need to be opened again. Something along the lines of
{
  provide: 'PUPPETEER_INSTANCE',
  useFactory: async () => await launch()
}
And now, in the constructor of a service that belongs to the same module, you can do @Inject('PUPPETEER_INSTANCE') private readonly puppeteer.
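For reference, here is a minimal sketch of how the provider and the injecting service could fit together in one module. The CrawlerService/CrawlerModule names and the page-per-call structure are assumptions for illustration, not from the original post; it also assumes a puppeteer version that exposes launch as a named export, matching the question's usage.
import { Inject, Injectable, Module, OnModuleDestroy } from '@nestjs/common';
import { Browser, launch } from 'puppeteer';

@Injectable()
export class CrawlerService implements OnModuleDestroy {
  constructor(@Inject('PUPPETEER_INSTANCE') private readonly browser: Browser) {}

  async crawlData(): Promise<void> {
    // Reuse the shared browser; only a page is opened and closed per call.
    const page = await this.browser.newPage();
    await page.goto('https://ko.reactjs.org/');
    await page.screenshot({ path: './Docs/ko-reactjs-homepage.png' });
    await page.close();
  }

  // Close the shared browser once, when the application shuts down.
  async onModuleDestroy(): Promise<void> {
    await this.browser.close();
  }
}

@Module({
  providers: [
    { provide: 'PUPPETEER_INSTANCE', useFactory: async () => await launch() },
    CrawlerService,
  ],
})
export class CrawlerModule {}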

speechSynthesis.getVoices() is empty array in Chromium Fedora

Is the Speech Synthesis API supported by Chromium? Do I need to install voices? If so, how can I do that? I'm using Fedora. Are voices like video, where I need to install an extra package for them to work?
I've tried this code:
var msg = new SpeechSynthesisUtterance('I see dead people!');
msg.voice = speechSynthesis.getVoices().filter(function(voice) {
return voice.name == 'Whisper';
})[0];
speechSynthesis.speak(msg);
from article Web apps that talk - Introduction to the Speech Synthesis API
but the function speechSynthesis.getVoices() returns an empty array.
I've also tried:
window.speechSynthesis.onvoiceschanged = function() {
console.log(window.speechSynthesis.getVoices())
};
the function gets executed but the array is also empty.
On https://fedoraproject.org/wiki/Chromium there is info about using the --enable-speech-dispatcher flag, but when I used it I got a warning that the flag is not supported.
Is Speech Synthesis API supported by Chromium?
Yes, the Web Speech API has basic support in the Chromium browser, though there are several issues with both the Chromium and Firefox implementations of the specification; see Blink>Speech, Internals>SpeechSynthesis, Web Speech.
Do I need to install voices? If so how can I do that? I'm using
Fedora. Is voices like video that I need to install extra package for
it to work?
Yes, voices need to be installed. Chromium does not ship with voices to set as the SpeechSynthesisUtterance voice attribute by default; see How to use Web Speech API at chromium? and How to capture generated audio from window.speechSynthesis.speak() call?.
You can install speech-dispatcher as the system speech synthesis server and espeak as the speech synthesizer.
$ yum install speech-dispatcher espeak
You can also set a configuration file for speech-dispatcher in the user home folder to set specific options for both speech-dispatcher and the output module that you use, for example espeak:
$ spd-conf -u
Launching Chromium with the --enable-speech-dispatcher flag automatically spawns a connection to speech-dispatcher, where you can set the LogLevel between 0 and 5 to review the SSIP communication between Chromium code and speech-dispatcher.
.getVoices() returns results asynchronously and needs to be called twice
see this electron issue at GitHub Speech Synthesis: No Voices #586.
window.speechSynthesis.onvoiceschanged = e => {
  const voices = window.speechSynthesis.getVoices();
  // do speech synthesis stuff
  console.log(voices);
}
window.speechSynthesis.getVoices();
or composed as an asynchronous function which returns a Promise whose value is the array of voices:
(async() => {
  const getVoices = (voiceName = "") => {
    return new Promise(resolve => {
      window.speechSynthesis.onvoiceschanged = e => {
        // optionally filter the returned voices by `voiceName`
        // resolve(
        //   window.speechSynthesis.getVoices()
        //     .filter(({name}) => /^en.+whisper/.test(name))
        // );
        resolve(window.speechSynthesis.getVoices());
      }
      window.speechSynthesis.getVoices();
    })
  }
  const voices = await getVoices();
  console.log(voices);
})();
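Since voiceschanged may never fire when the voices are already loaded, a helper that resolves immediately in that case can be more robust. This is a sketch only; the loadVoices name is made up and the APIs used (getVoices, addEventListener with { once: true }) are standard browser APIs.
// Sketch: resolves with whatever voices the browser exposes, either
// immediately or after the first `voiceschanged` event fires.
function loadVoices(): Promise<SpeechSynthesisVoice[]> {
  return new Promise(resolve => {
    const voices = window.speechSynthesis.getVoices();
    if (voices.length > 0) {
      resolve(voices);
      return;
    }
    window.speechSynthesis.addEventListener(
      'voiceschanged',
      () => resolve(window.speechSynthesis.getVoices()),
      { once: true }
    );
  });
}

loadVoices().then(voices => {
  const msg = new SpeechSynthesisUtterance('I see dead people!');
  // Prefer an English voice if one is installed; otherwise take the first one.
  msg.voice = voices.find(v => v.lang.startsWith('en')) ?? voices[0] ?? null;
  window.speechSynthesis.speak(msg);
});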

getUser does not work in 4.0.4 master webchat client

Today I downloaded the latest release (4.0.4, from yesterday) of the webchat client from GitHub and deployed it on my website.
I have noticed that Smooch.getUser() returns undefined for a new user until that user sends their first message, but this doesn't happen for returning users.
<script>
  Smooch.on('ready', function() {
    console.log('the init has completed!');
  });
  var skPromise = Smooch.init({appId: 'myAppId'});
  skPromise.then(
    function() {
      var u = Smooch.getUser();
      console.log(u._id);
    });
</script>
smooch_local.html:26 Uncaught (in promise) TypeError: Cannot read property '_id' of undefined
at smooch_local.html:26
at anonymous
But if I send any message after the promise has resolved and later try to recover the userId, the variable is defined. It didn't work this way in previous 3.x.x releases of the Web Messenger chat.
This code returns a valid userId:
<script>
  Smooch.on('ready', function() {
    console.log('the init has completed!');
  });
  var skPromise = Smooch.init({appId: 'myAppId'});
  skPromise.then(
    function() {
      Smooch.sendMessage({type: 'text', text: 'x'}).then(
        function() {
          var u = Smooch.getUser();
          console.log(u._id);
        });
    }
  );
</script>
This is the console output:
12:21:20.165 the init has completed!
12:21:22.947 smooch_local.html:28 1102fdee2b7d3c2abb639cbe
Does anyone know if it's a bug or a new feature of the v4.x releases?
Thanks
This is expected behaviour for Web Messenger 4.x - users are no longer automatically created at init time. Instead, user creation is deferred until they send their first message. This was mentioned in the release notes for v4.0.0:
Web Messenger now uses a new optimized initialization sequence. This new sequence alters the timing of key events such as creating a new user or establishing a websocket connection.
Alternatively, you can pre-create a user with a userId before the Web Messenger is initialized, and use the login method to initialize as that user, but this may or may not be appropriate depending on your use case.
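A rough sketch of that alternative, assuming the v4 Smooch.login(userId, jwt) call; the app ID, user ID and JWT here are placeholders, and the user must already have been pre-created with a JWT issued for that userId.
declare const Smooch: any; // provided by the Web Messenger script tag

// Placeholders: 'myAppId', 'pre-created-user-id' and the JWT string are illustrative.
Smooch.init({ appId: 'myAppId' })
  .then(() => Smooch.login('pre-created-user-id', 'jwt-issued-for-that-user'))
  .then(() => {
    const u = Smooch.getUser();
    console.log(u._id); // defined, because the user already exists
  });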

Loading Aurelia breaks Google API

I have created a reproduction of this bug here (an ugly use of Aurelia, but it proves the point): https://jberggren.github.io/GoogleAureliaBugReproduce/
If I load the Google API and try to list my files in Google Drive, my code derived from Google's quickstart works fine. If I use the same code after loading Aurelia, I get a script error from gapi stating
Uncaught Error: arrayForEach was called with a non array value
at Object._.Sa (cb=gapi.loaded_0:382)
at Object._.eb (cb=gapi.loaded_0:402)
at MF (cb=gapi.loaded_0:723)
at Object.HF (cb=gapi.loaded_0:722)
at Object.list (cb=gapi.loaded_0:40)
at listFiles (index.js:86)
...
When debugging, it seems to be some sort of array check (Chrome says 'native code') that fails after Aurelia is loaded. In my search for an answer I found two other people with the same problem but no solution (Aurelia Gitter question, SO question). I don't know whether to report this to the Aurelia team, to Google, or where the actual problem lies.
Help me SO, you are my only hope.
This is not a perfect solution, but it works.
aurelia-binding
https://github.com/aurelia/binding/blob/master/src/array-observation.js
Aurelia overrides Array.prototype.* methods for array observation.
gapi (especially spreadsheets)
The gapi lib checks whether those methods are native code or not.
// example
const r = /\[native code\]/
r.test(Array.prototype.push)
conclusion
So, we have to monkey patch.
gapi.load('client:auth2', async () => {
  await gapi.client.init({
    clientId: CLIENT_ID,
    discoveryDocs: ['https://sheets.googleapis.com/$discovery/rest?version=v4'],
    scope: 'https://www.googleapis.com/auth/spreadsheets',
  });
  // monkey patch
  const originTest = RegExp.prototype.test;
  RegExp.prototype.test = function test(v) {
    if (typeof v === 'function' && v.toString().includes('__array_observer__.addChangeRecord')) {
      return true;
    }
    return originTest.apply(this, arguments);
  };
});
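For completeness, a hedged usage sketch of a Sheets call once the discovery document has loaded and the patch is installed; SPREADSHEET_ID and the range are placeholders. Note that the patch changes RegExp.prototype.test globally, so keeping its scope as small as possible is worth considering.
declare const gapi: any; // provided by the Google API script tag
declare const SPREADSHEET_ID: string; // placeholder

// With the discovery doc loaded and the patch in place, a values.get call
// should no longer trip gapi's native-code check on the patched array methods.
gapi.client.sheets.spreadsheets.values
  .get({ spreadsheetId: SPREADSHEET_ID, range: 'Sheet1!A1:B10' })
  .then((response: any) => {
    console.log(response.result.values);
  });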

Launch camera from HTML 5 app running on Windows 8?

I've seen examples using XAML and writing some code in C# - is it possible just using JavaScript?
Yes. Here is a blog showing how to: http://blogs.msdn.com/b/davrous/archive/2012/09/05/tutorial-series-using-winjs-amp-winrt-to-build-a-fun-html5-camera-application.aspx
You can write the following method to launch the camera:
function capturePhoto() {
  var capture = new Windows.Media.Capture.CameraCaptureUI();
  capture.captureFileAsync(Windows.Media.Capture.CameraCaptureUIMode.photo)
    .then(function (file) {
      if (file) {
        return file.openAsync(Windows.Storage.FileAccessMode.readWrite);
      }
    });
}