I'm sure this is a very common issue but for some reason I can't find a solution that works.
I have a very simple setup with Firebase Realtime Database and Angular 1. I have this directive in my html
<div id="usersListWrapper" ng-controller="UsersListController" ng-init="loadUsersList()">
Then inside my loadUsersList() method, I make a call to Firebase Database to fetch the data
$rootScope.users = [];
var usersRef = firebase.database().ref('/users');
usersRef.on('value', function(snapshot) {
    console.log("loaded users list");
    var users = snapshot.val();
    updateUsersTable(users);
});
Then finally, inside updateUsersTable(users), I update my $rootScope.users variable
var updateUsersTable = function(users) {
    $.each(users, function(key, value) {
        var user = {
            username: key,
            ...
        };
        $rootScope.users.push(user);
    });
};
However, even though the $rootScope.users variable updates correctly (I verified using the devtools in Chrome), the HTML doesn't update :/
Apologies in advance if this is a duplicate question. Help would be appreciated.
$rootScope is not an appropriate place for you to be storing your user objects. Generally speaking, modifying $rootScope should be considered something you do when there are no other options.
What you should be doing is creating a userService and injecting it wherever you need access to user data.
The same advice generally applies to ng-init as well. I'm guessing you're just getting started with AngularJS. I'd highly recommend you familiarize yourself with John Papa's AngularJS style guide: https://github.com/johnpapa/angular-styleguide/blob/master/a1/README.md. It's a bit dated in that it doesn't cover components, which are EVERYTHING, but following the style guide makes migration to components super easy.
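For illustration, a minimal sketch of such a service; the module name, service name, and the use of $q are my assumptions, not the asker's actual code:

angular.module('app')
    .factory('userService', function($q) {
        // Wraps the Firebase listener so controllers never touch $rootScope
        function loadUsers() {
            var deferred = $q.defer();
            firebase.database().ref('/users').once('value', function(snapshot) {
                deferred.resolve(snapshot.val());
            });
            return deferred.promise; // resolving a $q promise triggers a digest
        }
        return { loadUsers: loadUsers };
    })
    .controller('UsersListController', function($scope, userService) {
        userService.loadUsers().then(function(users) {
            $scope.users = users; // the view updates because $q is digest-aware
        });
    });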
The solution here seems to work: Firebase callbacks and AngularJS
Kinda hacky, but it works, I guess. Thank you to those who answered.
As I mentioned in a comment above, I would put the scope.$apply inside the event listener, because AngularJS doesn't know when the event fires since it's not managing it. I.e.:
usersRef.on('value', function (snapshot) {
    scope.$apply(function () {
        console.log("loaded users list");
        var users = snapshot.val();
        updateUsersTable(users);
    });
});
Or you could look into $evalAsync. From the AngularJS docs:
$evalAsync([expression], [locals]); Executes the expression on the current scope at a later point in time.
The $evalAsync makes no guarantees as to when the expression will be executed, only that:
- it will execute after the function that scheduled the evaluation (preferably before DOM rendering).
- at least one $digest cycle will be performed after expression execution.
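A hedged sketch of that route, reusing the usersRef and updateUsersTable from the question:

usersRef.on('value', function (snapshot) {
    // $evalAsync schedules the update inside Angular's digest machinery,
    // so no manual $apply call is needed
    scope.$evalAsync(function () {
        updateUsersTable(snapshot.val());
    });
});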
HttpMethod.CallHttpPOSTMethod('POST', null, path).success(function (response) {
    console.log(response);
    $scope.htmlString = $sce.trustAsHtml(response.data[0]);
    $timeout(function () {
        var temp = document.getElementById('form');
        if (temp != null) {
            temp.submit();
        }
    }, 0);
});
I get an HTML string in the RESPONSE of my API call, and I then add the HTML to my view page.
If I write the code outside the $timeout service it won't work, yet it works when written inside $timeout.
What is the difference between the two ways?
How is $timeout useful here?
When you change the scope from code that AngularJS does not manage, two-way binding does not kick in automatically. If the asynchronous code is wrapped in one of the special wrappers ($timeout, $scope.$apply, etc.), the binding will happen. For the current code example, I would try replacing your code with:
HttpMethod.CallHttpPOSTMethod('POST', null, path).success(function (response) {
    console.log(response);
    $scope.htmlString = $sce.trustAsHtml(response.data[0]);
    var temp = document.getElementById('form');
    if (temp != null) {
        temp.submit();
    }
    $scope.$apply();
});
I will try to give you an answer in very simple language; I hope it helps you understand your issue.
Generally, when an HTTP request fires, it is sent to the server and the data comes back from the server; that is the general scenario we have in mind. Due to network latency, however, the response may arrive with some delay.
An AngularJS application has its own lifecycle.
The root scope is created during application bootstrap by the $injector. During template linking, directives create new child scopes.
While a template is linked, watchers are registered on the scope to detect changes.
In your case, when the template is linked and the directive is bound, a new watcher is registered. Due to network latency or some other reason, the response to your $http request arrives late, and by that time the scope has already been processed; because of that, the view does not show the updated response.
Sending an $http request to a server is an asynchronous operation. When you use $timeout, your scope binding waits the number of milliseconds you specify in the $timeout call; after that delay, the watch on your scope variable executes and updates the value, provided the response arrived in time.
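A minimal sketch of why the deferral matters, assuming the form element is rendered from $scope.htmlString (illustrative, not the asker's full code):

$scope.htmlString = $sce.trustAsHtml(response.data[0]);
// The digest has not rendered the new HTML yet, so the element is missing:
console.log(document.getElementById('form')); // likely null here
$timeout(function () {
    // $timeout(fn, 0) runs fn after the digest that renders htmlString,
    // so the element can now be found and submitted
    var temp = document.getElementById('form');
    if (temp != null) {
        temp.submit();
    }
}, 0);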
I am building a webpage for learning. Actually doing the page is the main goal; if it works well, that would only be a bonus, since I will most likely be the only person using it.
That being said, I am using Angular objects that hold a lot of information, like:
Semester - Subcategory - Question - List of answers as objects with "true"/"false" properties for multiple choice, the answer itself, etc.
Since I will be doing all the sorting/filtering with Angular, I wonder if I really need SQL or if an XML file would be superior.
With SQL, saving is my main issue here. PHP seems to butcher arrays into a string with the value "Array". If I use json_encode it saves correctly, but on GET it stops working, since I have to rebuild the whole data structure with " and ' just about everywhere.
With XML, it really looks like Angular just is not built for it. I have found some outdated tutorials that did not even have a working example.
So I guess my question here is:
Do I go for SQL, putting up with multiple tables, splitting my objects into several columns with optional values all over the place, while also rebuilding the whole thing on load?
Or do I use XML, since I would only use the DB to GET the whole thing anyway?
I have tested both approaches and both work, somewhat. Both would need quite a lot of further digging, reading, and trying, and I don't have the spare time to do both routes. Which one is the better choice for this particular use case?
This is of course a personal preference, but I always try to avoid XML. The JSON format is a lot leaner and meaner, and it's way easier to work with in web applications.
In fact, I would suggest starting with some static JSON files until you're finished giving your website some structure. You can write them manually, use a generator tool (like http://www.mockaroo.com/), or build them with some simple JavaScript (JSON.stringify is your friend). You can then use this data quite easily via the $http service:
$http.get('my-data.json')
    .then(function(response) {
        $scope.myData = response.data;
    });
This is actually the approach my teams take when building large enterprise applications. We mock all data resources and replace them with the real thing when we (or the customer) are happy with the progress.
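For instance, a hedged sketch of producing such a mock file from the data shape in the question (the field names are my assumptions):

var mock = [{
    semester: 1,
    subcategory: "Algebra",
    question: "2 + 2 = ?",
    answers: [
        { text: "4", correct: true },
        { text: "5", correct: false }
    ]
}];
// Paste the output into my-data.json
console.log(JSON.stringify(mock, null, 2));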
Using a JSON file should be sufficient. You can store all the needed objects in it and change it easily. With the following code you can load the data within JavaScript:
function loadJSON(path, success, error) {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        if (xhr.readyState === XMLHttpRequest.DONE) {
            if (xhr.status === 200) {
                if (success)
                    success(JSON.parse(xhr.responseText));
            } else {
                if (error)
                    error(xhr);
            }
        }
    };
    xhr.open("GET", path, true);
    xhr.send();
}
Usage:
loadJSON('data.json', // relative path
    function (data) { // success function
        $scope.questions = data;
        $scope.$apply();
    },
    function (xhr) { // error function
        console.error(xhr);
    }
);
In my application I have dynamic field sets on what is otherwise the same form. I can load them from the server as javascript includes and that works OK.
However, it would be much better to be able to load them from a separate API.
$.getJSON() provides a good way to load the JSON, but I have not found the right place to do this. Clearly it needs to be completed before the compile step begins.
I see there is a fieldTransform facility in formly. Could this be used to transform vm.fields from an empty object to whatever comes in from the API?
If so how would I do that?
Thx. Paul
There is an example on the website that does exactly what you're asking about. It uses $timeout to simulate an async operation to load the field configuration, but you could just as easily use Angular's own $http to get the JSON from the server. It hides the form behind an ng-if and only shows the form when the fields return (when ng-if resolves to true, it compiles the template).
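A hedged sketch of that pattern, assuming controllerAs syntax; the controller shape and JSON path are assumptions, not the formly example verbatim:

// Controller: keep the form hidden until the field config arrives.
// Template side (HTML), for reference:
//   <div ng-if="vm.fields">
//       <formly-form model="vm.model" fields="vm.fields"></formly-form>
//   </div>
function MainCtrl($http) {
    var vm = this;
    vm.fields = null; // ng-if stays false until this is set
    $http.get('fields.json').then(function (response) {
        vm.fields = response.data; // ng-if becomes truthy; the form compiles now
    });
}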
Thx @kent.
OK, so we need to replace the getFields() promise with this:
function getFields() {
    return $http.get('fields-demo.json', {headers: {'Cache-Control': 'no-cache'}});
}
This returns the fields on the response's data property, so in vm.loadingData we say:
vm.fields = result[0].data;
Seems to work OK for me.
When testing, I noticed that you have to make sure there is nothing wrong with your JSON, such as using a field type you haven't defined. In that case the resulting error message is not very clear.
Furthermore, you need to deal with the situation where the source of the data is unavailable. I tried this:
function getFields() {
    console.log('getting', fields_url);
    return $http.get(fields_url, {headers: {'Cache-Control': 'no-cache'}})
        .error(function() {
            alert("can't get fields from server");
            //return new Promise({status:'fields server access error'}); //??
        });
}
.. which does at least throw the alert. However, I'm not sure how to replace the promise so as to propagate the error back to the caller; something like the sketch below might work.
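A hedged sketch using $q-style rejection instead of the legacy .error callback (assumes $q is injected; untested):

function getFields() {
    return $http.get(fields_url, {headers: {'Cache-Control': 'no-cache'}})
        .catch(function (response) {
            alert("can't get fields from server");
            // Re-reject so the caller's own .catch still sees the failure
            return $q.reject(response);
        });
}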
Paul
Frequently, GAS users (me too) do not use the ServerHandler.addCallbackElement method, or use it in a way which does not cover all controls.
What is the background for having this method at all? Why did the GAS developers introduce it? Would it not be simpler to pass all input widget values to all server handlers as parameters?
The documentation does not provide answers to these questions.
I see the following possible causes:
Adding widgets as callback elements reduces traffic between browsers and GAS servers when several handlers handle different sets of controls. Here is a question: how much traffic does it actually save? I think a few kilobytes at most, usually hundreds of bytes. Is that worth it, considering modern internet connection speeds, even on mobile?
A form contains table-like edit controls with multiple buttons, and it is convenient to handle row elements with the same name. This issue is easily avoided by using tags; see the following example. If the tags are used for other purposes, it is not a problem to parse the source button id and extract the row number.
Limits of the technology used behind the scenes. If there are such limits, then what are they?
function doGet(e) {
    var app = UiApp.createApplication();
    var vPanel = app.createVerticalPanel();
    var handler = app.createServerHandler("onBtnClick");
    var lstWidgets = [];
    for (var i = 0; i < 10; i++) {
        var hPanel = app.createHorizontalPanel().setTag('id_' + i);
        var text = app.createTextBox().setName("text_" + i);
        text.setText(new Date().valueOf());
        var btn = app.createButton("click me").addClickHandler(handler);
        btn.setTag(i).setId('id_btn' + i);
        var lbl = app.createLabel().setId("lbl_" + i);
        hPanel.add(text);
        hPanel.add(btn);
        hPanel.add(lbl);
        lstWidgets.push(text);
        lstWidgets.push(btn);
        vPanel.add(hPanel);
    }
    // The addCallbackElement calls simulate the situation where all widget
    // values are passed to a single server handler.
    for (var j = 0; j < lstWidgets.length; j++) {
        handler.addCallbackElement(lstWidgets[j]);
    }
    app.add(vPanel);
    return app;
}

function onBtnClick(e) {
    var app = UiApp.getActiveApplication();
    var i = e.parameter[e.parameter.source + '_tag'];
    var lbl = app.getElementById("lbl_" + i);
    lbl.setText("Source ButtonID: " + e.parameter.source + ', Text: ' + e.parameter["text_" + i]);
    return app;
}
Great Question.
"How much traffic it saves?" I don't think we know yet, but I expect it will get more efficient over time. Here is another discussion on performance. Only extensive testing and improvements from Google will really allow us to identify best practices, for now all I can say is that ClientHandlers are clearly going to be better than ServerHandlers whenever possible.
As JavaScript developers I think we are predominantly use to doing stuff client-side, then we think of PHP/ASP as server-side tools. My understanding so far is that our GAS code is actually running both client and server side (at the very least it's calling server side functionality) but it sure seems like there's more going on server-side than we realize, and on the client-side this seems to result in somewhat "compiled" code. I kinda recognize some of this multi-tier deployment from my Java experience.
Since there are a lot of ways of doing the same thing, Google can take advantage of the fact that our code is not directly interpreted (by either side) to do things that would not necessarily make sense if we were writing the code by hand. This is why I think it will become more efficient than other solutions, eventually but probably not yet. For now I'd suggest steering clear of GAS if you are worried about performance. Maybe just for fun try looking at the source of your client-side Web-Apps at runtime (view source). So in order for them to do things most efficiently, I imagine they will benefit by having us define things in a very high-level way. This gives them the most flexibility in how they interpret our code.
To specifically address your second question I personally think of the Handler Function onBtnClick() as running on the Server-Side, whereas the Tags you refer to (and most of the doGet) would be in the browser's engine on the client-side. I can see how the functionality would be much more flexible (efficient and powerful) on the server-side if they have an idea ahead of time as to how much memory they would need to handle specific events/requests. (Clearly if each getElementById() call was running a separate request, that would be like clicking a link to a new mini-webpage each time.)
So now the question is: why can't my handler just automatically receive parameters for only the stuff I use in my handler function? The only reason we are asking this question in the first place is that there is some stuff in the UiApp which seems to be available on both ends. The UiApp is already in the scope of both doGet and onBtnClick, but the variables defined in doGet are not, so these values need to be either:
explicitly saved, like ScriptProperties.setProperty() (a minimal sketch of this option follows the list), or
put into the UiApp somewhere with an Id, or
explicitly given to the handler function using addCallbackElement().
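For instance, a hedged sketch of the first option using the in-era ScriptProperties API (widget and property names are illustrative):

function doGet(e) {
    var app = UiApp.createApplication();
    // Stash a value server-side so a later handler invocation can read it;
    // properties are stored as strings
    ScriptProperties.setProperty('createdAt', String(new Date().valueOf()));
    var handler = app.createServerHandler('onRead');
    app.add(app.createButton('read').addClickHandler(handler));
    return app;
}

function onRead(e) {
    // Read the value back in the handler; no addCallbackElement needed
    var createdAt = ScriptProperties.getProperty('createdAt');
    Logger.log(createdAt);
    return UiApp.getActiveApplication();
}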
Notice how you had to call addCallbackElement(lstWidgets[j]), because the value was not created with an app.create... constructor within the UiApp object. My guess is that GAS is implementing XML-compliant SOAP calls to a web service on the Google end; we may be able to figure this out by really studying the client-side source code. Just to reiterate, we could also use setProperty(), it does not really matter, or even save the values via JDBC and then retrieve them with another connection from within your handler function; somehow the data needs to be passed from the client to the server and vice versa.
From a programming perspective, there is a lot of stuff available in the scope of your client-side doGet function that you probably would never want to pass to the server, and there may be functions in the scope of the server-side onBtnClick() with the same names as functions on the client side that are actually calls to totally different library functions, maybe even on totally different hardware (even though from the developer's perspective they work the same way).
Maybe the Google team has not yet really decided on how the UiApp should work; otherwise they would just force, or at least allow, us to put everything in there. Yet another observation: when we call UiApp.getActiveApplication(), based on its name it does not seem like a constructor, but rather a method that returns a private instance from the UiApp object (a class that was previously instantiated and supposedly initialized somewhere). I may not have 100% answered your question, but I sure did try; any further insight from the community would clearly be appreciated.
Now I may be straying off-topic, but I also imagine the actual product will continue to change as Google does more to improve performance in the long term, and if we still feel like we are writing client-side code as developers, then that is a success for Google. Please correct me if I have stated anything wrong; I have just recently started using these tools and plan to follow up on this question with more specifics as I learn more, but as of right now this is my best interpretation.
If you use a FormPanel, all its sub-elements will be sent to your doPost function, with the button as the source, and your UiApp will be cleaned; a minimal sketch follows.
If you don't want that, use a callback to specify which element and siblings will be sent.
This is how the UiApp is designed.
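A minimal sketch of the FormPanel route (widget names are illustrative):

function doGet(e) {
    var app = UiApp.createApplication();
    var form = app.createFormPanel();
    var panel = app.createVerticalPanel();
    panel.add(app.createTextBox().setName('text_0'));
    panel.add(app.createSubmitButton('Submit'));
    form.add(panel); // a FormPanel takes a single child widget
    app.add(form);
    return app;
}

function doPost(e) {
    // All named sub-elements of the FormPanel arrive in e.parameter,
    // without any addCallbackElement calls
    var app = UiApp.getActiveApplication();
    app.add(app.createLabel('You typed: ' + e.parameter.text_0));
    return app;
}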
We can use the web worker in HTML5 like this:
var worker = new Worker('worker.js');
but why can't we call a function like this?
var worker = new Worker(function() {
    //do something
});
This is the way web workers are designed. They must have their own external JS file and their own environment initialized by that file. They cannot share an environment with your regular global JS space for multi-threading conflict reasons.
One reason that web workers are not allowed direct access to your global variables is that it would require thread synchronization between the two environments which is not something that is available (and it would seriously complicate things). When web workers have their own separate global variables, they cannot mess with the main JS thread except through the messaging queue which is properly synchronized with the main JS thread.
Perhaps someday, more advanced JS programmers will be able to use traditional thread synchronization techniques to share access to common variables, but for now all communication between the two threads must go through the message queue and the web worker cannot have access to the main Javascript thread's environment.
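A minimal sketch of the message-queue model (file names are illustrative):

// main.js
var worker = new Worker('worker.js'); // the worker must live in its own file
worker.onmessage = function (e) {
    console.log('worker replied:', e.data);
};
worker.postMessage([1, 2, 3]);

// worker.js -- its own global scope; no access to the page's variables or DOM
self.onmessage = function (e) {
    var sum = e.data.reduce(function (a, b) { return a + b; }, 0);
    self.postMessage(sum);
};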
This question has been asked before, but for some reason, the OP decided to delete it.
I repost my answer, in case one needs a method to create a Web worker from a function.
In this post, three ways were shown to create a Web worker from an arbitrary string. In this answer, I'm using the third method, since it's supported in all environments.
A helper file is needed:
// Worker-helper.js
self.onmessage = function(e) {
    self.onmessage = null; // Clean-up
    eval(e.data);
};
From your main code, this helper file is then used as follows:
// Create a Web Worker from a function, which fully runs in the scope of a new
// Worker
function spawnWorker(func) {
    // Stringify the code. Example: (function(){/*logic*/}).call(self);
    var code = '(' + func + ').call(self);';
    var worker = new Worker('Worker-helper.js');
    // Initialise worker
    worker.postMessage(code);
    return worker;
}

var worker = spawnWorker(function() {
    // This function runs in the context of a separate Worker
    self.onmessage = function(e) {
        // Example: Throw any messages back
        self.postMessage(e.data);
    };
    // etc..
});

worker.onmessage = function() {
    // logic ...
};
worker.postMessage('Example');
Note that the scopes are strictly separated. Variables can only be passed back and forth using worker.postMessage and worker.onmessage. All messages are structured clones.
This answer might be a bit late, but I wrote a library to simplify the usage of web workers and it might suit OP's need. Check it out: https://github.com/derekchiang/simple-worker
It allows you to do something like:
SimpleWorker.run({
    func: intensiveFunction,
    args: [123456],
    success: function(res) {
        // do whatever you want
    },
    error: function(err) {
        // do whatever you want
    }
});
WebWorkers Essentials
WebWorkers are executed in an independent thread, so they have no access to the main thread where you declare them (and vice versa). The resulting scope is isolated and restricted. That's why you can't, for example, reach the DOM from inside the worker.
Communication with WebWorkers
Because communication between threads is necessary, there are mechanisms to accomplish it. The standard communication mechanism is message passing, using the worker.postMessage() function and the worker.onmessage event handler.
More advanced techniques are available, involving SharedArrayBuffers, but it is not my objective to cover them. If you are interested in them, read here.
Threaded Functions
That's all the standard brings us.
However, ES6 provides us with enough tools to implement an on-demand callable threaded function.
Since you can build a Worker from a Blob, and your function can be converted into one (using URL.createObjectURL), you only need to implement some kind of communication layer in both threads to handle the messages for you and obtain a natural interaction.
Promises, of course, are your friend, considering that everything will happen asynchronously.
Applying this theory, you can easily implement the scenario you describe; a sketch follows.
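For example, a hedged sketch of such a layer, simplified to one call at a time (this is not the library's actual implementation):

function threadify(fn) {
    // Build the worker source around the stringified function
    var src =
        'self.onmessage = function (e) {' +
        '    Promise.resolve((' + fn + ').apply(null, e.data))' +
        '        .then(function (r) { self.postMessage(r); });' +
        '};';
    var url = URL.createObjectURL(new Blob([src], { type: 'text/javascript' }));
    var worker = new Worker(url);
    // Return an async wrapper that resolves with the worker's reply
    return function () {
        var args = Array.prototype.slice.call(arguments);
        return new Promise(function (resolve) {
            worker.onmessage = function (e) { resolve(e.data); };
            worker.postMessage(args);
        });
    };
}

// Usage
var add = threadify(function (a, b) { return a + b; });
add(2, 3).then(function (r) { console.log(r); }); // 5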
My personal approach: ParallelFunction
I've recently implemented and published a tiny library which does exactly what you describe, in less than 2KB (minified).
It's called ParallelFunction, and it's available on GitHub, npm, and a couple of CDNs.
As you can see, it totally matches your request:
// Your function...
let calculatePi = new ParallelFunction(function(n) {
    // n determines the precision, and in consequence
    // the computing time to complete
    var v = 0;
    for (let i = 1; i <= n; i += 4) v += (1 / i) - (1 / (i + 2));
    return 4 * v;
});

// Your async call...
calculatePi(1000000).then(r => console.log(r));

// If you are inside an async function you can use await...
(async function() {
    let result = await calculatePi(1000000);
    console.log(result);
})();

// Once you are done with it...
calculatePi.destroy();
After initialization, you can call your function as many times as you need. A Promise will be returned, which will resolve when your function finishes execution.
By the way, many other libraries exist.
Just use my tiny plugin https://github.com/zevero/worker-create and do:
var worker_url = Worker.create(function(e) {
    self.postMessage('Example post from Worker'); // your code here
});
var worker = new Worker(worker_url);
While it's not optimal, and this has been mentioned in the comments, an external file is not needed if your browser supports blob URLs for Web Workers. HTML5Rocks was the inspiration for my code:
function sample(e)
{
    postMessage(sample_dependency());
}

function sample_dependency()
{
    return "BlobURLs rock!";
}

var blob = new Blob(["onmessage = " + sample + "\n" + sample_dependency]);
var blobURL = window.URL.createObjectURL(blob);
var worker = new Worker(blobURL);
worker.onmessage = function(e)
{
    console.log(e.data);
};
worker.postMessage("");
Caveats:
Blob workers cannot successfully use relative URLs. The HTML5Rocks link covers this, but it was not part of the original question.
People have reported problems using blob URLs with Web Workers. I've tried it with IE11 (whatever shipped with FCU), MS Edge 41.16299 (Fall Creators Update), Firefox 57, and Chrome 62. No clue as to Safari support. The ones I've tested have worked.
Note that the "sample" and "sample_dependency" references in the Blob constructor call implicitly invoke Function.prototype.toString(), as sample.toString() and sample_dependency.toString(), which is very different from calling toString(sample) and toString(sample_dependency).
I posted this because it's the first Stack Overflow question that came up when searching for how to use Web Workers without requesting an additional file.
I took a look at Zevero's answer, and the code in his repo appears similar. If you prefer a clean wrapper, this is approximately what his code does.
Lastly, I'm a noob here, so any and all corrections are appreciated.
By design, web workers run in separate threads, while the rest of your JavaScript is single-threaded: multiple scripts cannot run at the same time within one thread. That is why a worker gets its own script file and its own isolated environment.
Refer to: http://www.html5rocks.com/en/tutorials/workers/basics/