PCLStorage CreateFolderAsync hanging - windows-phone-8

I am using the PCL Storage library for my WP8 application. I am trying to use the intro example from their website: https://pclstorage.codeplex.com/
Code:
IFolder rootFolder = FileSystem.Current.LocalStorage;
IFolder folder = await rootFolder.CreateFolderAsync("MySubFolder", CreationCollisionOption.OpenIfExists);
IFile file = await folder.CreateFileAsync("answer.txt", CreationCollisionOption.ReplaceExisting);
await file.WriteAllTextAsync("42");
The CreateFolderAsync call hangs and never completes. I tried it both on the emulator and on a device.
Am I missing something?

Look further up your call stack. You'll almost certainly find a call to Task.Wait or Task<T>.Result, which causes a deadlock that I describe on my blog.
To resolve this, replace all Wait and Result calls with await. I describe this as "async all the way" in my MSDN article on async best practices.


Why do the weather samples in FetchData seem to get cached for the sample Blazor app?

The Blazor app in Visual Studio uses an Http.GetFromJsonAsync call to get the data for the weather forecasts from a JSON file in wwwroot.
When I change the data in the file, I still see the same data in the table.
When I copy the file and change the code to use the new filename, I get the changed results.
Is there some caching happening with wwwroot files? I've tried a hard refresh, which doesn't make a difference, but changing browsers does. I know that Blazor caches the framework files, but is this happening to everything in wwwroot, and how do I change this behaviour?
Thanks in advance.
The FetchData sample page (from a new blazorwasm project) retrieves its data when the component is initialized:
protected override async Task OnInitializedAsync()
{
    forecasts = await Http.GetFromJsonAsync<WeatherForecast[]>("sample-data/weather.json");
}
When you navigate away from this page and come back, the initialization runs again and a new request is made.
But because this is a GET request, the browser can deliver the answer from its cache.
There are some ways to avoid the cache on Blazor GET requests; learn about them here: Bypass HTTP browser cache when using HttpClient in Blazor WebAssembly
Also, you can use the simple trick of adding a random string to the query string:
protected override async Task OnInitializedAsync()
{
    var randomid = Guid.NewGuid().ToString();
    var url_get = $"sample-data/weather.json?{randomid}";
    forecasts = await Http.GetFromJsonAsync<WeatherForecast[]>(url_get);
}
In short, it seems to get cached because a GET request can be cached by the browser, and it is the browser that retrieves the data.

Programmatically start the performance profiling in Chrome

Is there a way to start the performance profiling programmatically in Chrome?
I want to run a performance test of my web app several times to get a better estimate of the FPS, but manually starting the performance profiling in Chrome is tricky because I'd have to manually align the frame models. (I am using this technique to extract the frames.)
CMD + Shift + E reloads the page and immediately starts the profiling, which alleviates the alignment problem, but it only runs for 3 seconds, as explained here. So this doesn't work.
Ideally, I'd like to click a button that starts my test script and also starts the profiling. Is there a way to achieve that?
In case you're still interested, or someone else finds it helpful, there's an easy way to achieve this using Puppeteer's tracing class.
Puppeteer uses the Chrome DevTools Protocol's Tracing domain under the hood, and writes a JSON file to your system that can be loaded in the DevTools Performance panel.
To get a profile trace of your page's loading time you can implement the following:
const puppeteer = require('puppeteer');

(async () => {
  // launch puppeteer browser in headful mode
  const browser = await puppeteer.launch({
    headless: false,
    devtools: true
  });
  // start a page instance in the browser
  const page = await browser.newPage();
  // start the profiling, with a path to the out file and screenshots collected
  await page.tracing.start({
    path: `tests/logs/trace-${new Date().getTime()}.json`,
    screenshots: true
  });
  // go to the page
  await page.goto('http://localhost:8080');
  // wait for as long as you want
  await page.waitFor(4000); // (newer Puppeteer versions use page.waitForTimeout)
  // or you can wait for an element to appear with:
  // await page.waitForSelector('some-css-selector');
  // stop the tracing
  await page.tracing.stop();
  // close the browser
  await browser.close();
})();
Of course, you'll have to install Puppeteer first (npm i puppeteer). If you don't want to use Puppeteer, you can interact with the Chrome DevTools Protocol's API directly (see the link above). I didn't investigate that option very much, since Puppeteer delivers a high-level and easy-to-use API over CDP's API. You can also interact directly with CDP via Puppeteer's CDPSession API.
Hope this helps. Good luck!
You can use the Chrome DevTools Protocol with any driver library from https://github.com/ChromeDevTools/awesome-chrome-devtools#protocol-driver-libraries to programmatically create a profile.
Use this method to start a profile: https://chromedevtools.github.io/devtools-protocol/tot/Profiler#method-start
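For illustration, here is a minimal sketch of that approach using the chrome-remote-interface driver; the choice of driver, the target URL, and the output filename are my own assumptions, not part of the original answer. It expects Chrome to have been started with --remote-debugging-port=9222 and chrome-remote-interface installed from npm.
// Minimal sketch (assumptions: chrome-remote-interface as the driver, Chrome
// started with --remote-debugging-port=9222, app served at localhost:8080).
const CDP = require('chrome-remote-interface');
const fs = require('fs');

(async () => {
  const client = await CDP(); // connect to the default localhost:9222 endpoint
  const { Page, Profiler } = client;
  try {
    await Page.enable();
    await Profiler.enable();
    await Profiler.start();                    // begin CPU profiling
    await Page.navigate({ url: 'http://localhost:8080' });
    await Page.loadEventFired();               // wait until the page has loaded
    const { profile } = await Profiler.stop(); // Profiler.stop returns the recorded profile
    fs.writeFileSync('profile.cpuprofile', JSON.stringify(profile));
  } finally {
    await client.close();
  }
})();
The saved .cpuprofile file should then be loadable in the DevTools JavaScript profiler for inspection.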

FileNotFoundException when starting a background download even though file clearly exists

In my WinRT application I have the following code:
resultingFile = await downloadFolder.CreateFileAsync(filename, CreationCollisionOption.OpenIfExists);
var downloader = new BackgroundDownloader();
var operation = downloader.CreateDownload(new Uri(rendition.Url), resultingFile);
await operation.StartAsync();
After the CreateFileAsync call I can verify that I do have a 0-byte file at the filename path (double-checked by pulling the location out of resultingFile itself).
However, when operation.StartAsync() is called, I get a FileNotFoundException claiming the system could not find the file specified. Unfortunately, that's all it tells me, and there is no inner exception.
I have also verified that rendition.Url gives me a valid url that downloads the content I'm expecting to be downloading.
Am I doing something wrong here?
Apparently this code isn't what is throwing the error; it's some code the BackgroundDownloader uses to coordinate things that can't find its own file.
Uninstalling the application and redeploying it fixed it.
Good waste of 3 hours :(

NodeJS JSON.parse(...) takes forever to finish (under debugger in WebStorm)

EDIT
The problem seems to be related to WebStorm itself: it doesn't seem to handle objects containing a huge number of nested objects, and it won't show the object contents inside the Watches window either. The problem is somewhat strange because I'm able to inspect the string, which loads blazingly fast. It seems like a WebStorm issue.
I have a relatively big JSON file (4.9 MB) that I need to process in NodeJS. The file is stored in the file system and is loaded using the following lines of code:
var path = require('path');
var filename = path.join(__dirname, 'db_asci.json');
var fs = require('fs');
var content = fs.readFileSync(filename);
debugger;
var decycledObj = JSON.parse(content);
debugger;
The problem is that after the first debugger; breakpoint is hit, the second one is not. I've waited more than 20 minutes and nothing happens, with one processor core loaded at 100%. I'm unable to step into the function because it's native.
Here is the ASCII version of the JSON.
Here is the UTF-8 version of the JSON.
What am I doing wrong?
The problem you are running into is not JSON parsing taking too long. Indeed, try this:
var start = Date.now();
var obj = JSON.parse(fs.readFileSync(filename));
console.log('Took', Date.now() - start, 'ms');
You'll probably see that it took less than a second or so.
What you are running into is an issue with the debugger itself – the observer effect. The act of observing a system changes that system.
I assume you're using node-inspector. Whenever you have an extremely large, complex object, it is extremely expensive to load the object into the inspector. While it is doing so, your node process will peg the CPU and the event loop is paused.
My guess is that the JSON is parsed and a huge object (given that we're dealing with 5 MB of JSON) is created. Node then hits the second debugger; statement, and the inspector needs to load the locals. The excruciatingly slow process begins, and the inspector won't show that you've hit a breakpoint until it finishes. So to you it just looks frozen.
Try replacing your JSON file with something small (like {a:1}). It should load quickly.
Do you really need to visually inspect the entire object? There are tools better suited for viewing JSON files.
+1 for Pradeep Mahdevu's solution; here is another way to do the same thing (edited with the async version):
var fs = require('fs');
var options = { encoding: 'utf8' };
// fs.readFile is asynchronous; the parsed object is only available inside the callback
fs.readFile('db_asci.json', options, function (err, data) {
  if (err) throw err;
  var object = JSON.parse(data);
});
You can require .json files, so there is no need to parse:
var content = require('./db_asci.json');
That should do it!

Why can't Web Worker call a function directly?

We can use a web worker in HTML5 like this:
var worker = new Worker('worker.js');
but why can't we pass it a function directly, like this?
var worker = new Worker(function(){
  // do something
});
This is the way web workers are designed. They must have their own external JS file and their own environment initialized by that file. They cannot share an environment with your regular global JS space, to avoid multi-threading conflicts.
One reason that web workers are not allowed direct access to your global variables is that it would require thread synchronization between the two environments which is not something that is available (and it would seriously complicate things). When web workers have their own separate global variables, they cannot mess with the main JS thread except through the messaging queue which is properly synchronized with the main JS thread.
Perhaps someday, more advanced JS programmers will be able to use traditional thread synchronization techniques to share access to common variables, but for now all communication between the two threads must go through the message queue and the web worker cannot have access to the main Javascript thread's environment.
This question has been asked before, but for some reason, the OP decided to delete it.
I'm reposting my answer here, in case anyone needs a method to create a Web Worker from a function.
In this post, three ways were shown to create a Web worker from an arbitrary string. In this answer, I'm using the third method, since it's supported in all environments.
A helper file is needed:
// Worker-helper.js
self.onmessage = function(e) {
self.onmessage = null; // Clean-up
eval(e.data);
};
This helper file is then used from your main script as follows:
// Create a Web Worker from a function, which fully runs in the scope of a new
// Worker
function spawnWorker(func) {
  // Stringify the code. Example: (function(){/*logic*/}).call(self);
  var code = '(' + func + ').call(self);';
  var worker = new Worker('Worker-helper.js');
  // Initialise worker
  worker.postMessage(code);
  return worker;
}
var worker = spawnWorker(function() {
  // This function runs in the context of a separate Worker
  self.onmessage = function(e) {
    // Example: Throw any messages back
    self.postMessage(e.data);
  };
  // etc..
});
worker.onmessage = function() {
  // logic ...
};
worker.postMessage('Example');
Note that the scopes are strictly separated. Variables can only be passed back and forth using worker.postMessage and worker.onmessage. All messages are structured clones.
This answer might be a bit late, but I wrote a library to simplify the usage of web workers and it might suit OP's need. Check it out: https://github.com/derekchiang/simple-worker
It allows you to do something like:
SimpleWorker.run({
  func: intensiveFunction,
  args: [123456],
  success: function(res) {
    // do whatever you want
  },
  error: function(err) {
    // do whatever you want
  }
});
WebWorkers Essentials
WebWorkers are executed in an independent thread, so they have no access to the main thread where you declare them (and vice versa). The resulting scope is isolated and restricted. That's why you can't, for example, reach the DOM from inside the worker.
Communication with WebWorkers
Because communication between threads is necessary, there are mechanisms to accomplish it. The standard communication mechanism is through messages, using the worker.postMessage() function and the worker.onmessage event handler.
More advanced techniques are available, involving SharedArrayBuffers, but it is not my objective to cover them here. If you are interested in them, read here.
Threaded Functions
That's what the standard brings us.
However, ES6 provides us with enough tools to implement an on-demand callable threaded function.
Since you can build a Worker from a Blob, and your function can be converted into one (using URL.createObjectURL), you only need to implement some kind of communication layer in both threads to handle the messages for you and obtain a natural interaction.
Promises, of course, are your friend, considering that everything will happen asynchronously.
Applying this theory, you can easily implement the scenario you describe, as in the sketch below.
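For illustration only, here is a minimal sketch of that idea; the helper name runInWorker and the message format are assumptions made up for this example, not part of any library:
// Minimal sketch: run a function in a Worker built from a Blob, wrapped in a Promise.
// The helper name and message format are assumptions for illustration only.
function runInWorker(fn, arg) {
  // Worker source: run the stringified function on the received data and post back the result
  var source = 'onmessage = function(e) { postMessage((' + fn + ')(e.data)); };';
  var blobURL = URL.createObjectURL(new Blob([source]));
  var worker = new Worker(blobURL);
  return new Promise(function(resolve, reject) {
    worker.onmessage = function(e) { resolve(e.data); worker.terminate(); };
    worker.onerror = function(err) { reject(err); worker.terminate(); };
    worker.postMessage(arg);
  });
}

// Usage: the heavy function runs in a separate thread and the result comes back via a Promise
runInWorker(function(n) {
  var sum = 0;
  for (var i = 0; i < n; i++) sum += i;
  return sum;
}, 10000000).then(function(result) { console.log(result); });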
My personal approach: ParallelFunction
I've recently implemented and published a tiny library which does exactly what you describe, in less than 2 KB (minified).
It's called ParallelFunction, and it's available on GitHub, npm, and a couple of CDNs.
As you can see, it totally matches your request:
// Your function...
let calculatePi = new ParallelFunction(function(n) {
  // n determines the precision, and in consequence
  // the computing time to complete
  var v = 0;
  for (let i = 1; i <= n; i += 4) v += (1 / i) - (1 / (i + 2));
  return 4 * v;
});

// Your async call...
calculatePi(1000000).then(r => console.log(r));

// if you are inside an async function you can use await...
(async function() {
  let result = await calculatePi(1000000);
  console.log(result);
})();

// once you are done with it...
calculatePi.destroy();
After initialization, you can call your function as many times as you need. A Promise will be returned, which will resolve when your function finishes execution.
By the way, many other libraries exist.
Just use my tiny plugin https://github.com/zevero/worker-create
and do
var worker_url = Worker.create(function(e) {
  self.postMessage('Example post from Worker'); // your code here
});
var worker = new Worker(worker_url);
While it's not optimal, and it's been mentioned in the comments, an external file is not needed if your browser supports blob URLs for Web Workers. HTML5Rocks was the inspiration for my code:
function sample(e)
{
  postMessage(sample_dependency());
}

function sample_dependency()
{
  return "BlobURLs rock!";
}

var blob = new Blob(["onmessage = " + sample + "\n" + sample_dependency]);
var blobURL = window.URL.createObjectURL(blob);
var worker = new Worker(blobURL);

worker.onmessage = function(e)
{
  console.log(e.data);
};

worker.postMessage("");
Caveats:
Blob workers will not successfully use relative URLs. The HTML5Rocks link covers this, but it was not part of the original question.
People have reported problems using Blob URLs with Web Workers. I've tried it with IE11 (whatever shipped with FCU), MS Edge 41.16299 (Fall Creator's Update), Firefox 57, and Chrome 62. No clue as to Safari support. The ones I've tested have worked.
Note that "sample" and "sample_dependency" references in the Blob constructor call implicitly call Function.prototype.toString() as sample.toString() and sample_dependency.toString(), which is very different than calling toString(sample) and toString(sample_dependency).
Posted this because it's the first Stack Overflow question that came up when searching for how to use Web Workers without requesting an additional file.
Took a look at Zevero's answer and the code in his repo appears similar. If you prefer a clean wrapper, this is approximately what his code does.
Lastly -- I'm a noob here so any/all corrections are appreciated.
By design, web workers give you multi-threading, while JavaScript itself is single-threaded: multiple scripts cannot run at the same time.
Refer to: http://www.html5rocks.com/en/tutorials/workers/basics/