Mysterious ES6 templates side effects whilst debugging [duplicate] - google-chrome

I am currently reading Async Javascript by Trevor Burnham. This has been a great book so far.
He talks about this snippet and console.log being 'async' in the Safari and Chrome console. Unfortunately I can't replicate this. Here is the code:
var obj = {};
console.log(obj);
obj.foo = 'bar';
// my outcome: Object{}; 'bar';
// The book outcome: {foo:bar};
If this were async, I would anticipate the outcome to be the book's outcome: console.log() is put in the event queue until all code has executed, then it is run, and so it would have the bar property.
It appears, though, that it is running synchronously.
Am I running this code wrong? Is console.log actually async?

console.log is not standardized, so the behavior is rather undefined, and it can change easily from release to release of the developer tools. Your book is likely outdated, and my answer might soon be as well.
To our code, it makes no difference whether console.log is async or not; it does not provide any kind of callback, and the values you pass are always referenced and computed at the time you call the function.
We don't really know what happens then (OK, we could, since Firebug, Chrome Devtools and Opera Dragonfly are all open source). The console will need to store the logged values somewhere, and it will display them on the screen. The rendering will happen asynchronously for sure (being throttled to rate-limit updates), as will future interactions with the logged objects in the console (like expanding object properties).
So the console might either clone (serialize) the mutable objects that you did log, or it will store references to them. The first one doesn't work well with deep/large objects. Also, at least the initial rendering in the console will probably show the "current" state of the object, i.e. the one when it got logged - in your example you see Object {}.
However, when you expand the object to inspect its properties further, it is likely that the console will have only stored a reference to your object and its properties, and displaying them now will then show their current (already mutated) state. If you click on the +, you should be able to see the bar property in your example.
So, some values might be referenced long after they have been logged, and the evaluation of these is rather lazy ("when needed"). The most famous example of this discrepancy is handled in the question Is Chrome's JavaScript console lazy about evaluating arrays?
A workaround is to always log serialized snapshots of your objects, e.g. by doing console.log(JSON.stringify(obj)). This will only work for non-circular and rather small objects, though. See also How can I change the default behavior of console.log in Safari?.
The better solution is to use breakpoints for debugging, where the execution completely stops and you can inspect the current values at each point. Use logging only with serializable and immutable data.
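For example, here is a minimal sketch of that JSON.stringify workaround applied to the snippet from the question (the outputs in the comments are what I would expect; the exact rendering may differ per browser):
var obj = {};
// Log a serialized snapshot instead of a live reference, so later
// mutations cannot show up in this console entry:
console.log(JSON.stringify(obj)); // {}
obj.foo = 'bar';
console.log(JSON.stringify(obj)); // {"foo":"bar"}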

This isn't really an answer to the question, but it might be handy to someone who stumbled on this post, and it was too long to put in a comment:
window.console.logSync = (...args) => {
  try {
    args = args.map((arg) => JSON.parse(JSON.stringify(arg)));
    console.log(...args);
  } catch (error) {
    console.log('Error trying to console.logSync()', ...args);
  }
};
This creates a pseudo-synchronous version of console.log, but with the same caveats as mentioned in the accepted answer.
Since it seems like, at the moment, most browsers' console.log implementations are asynchronous in some manner, you may want to use a function like this in certain scenarios.
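For instance, a usage sketch of the logSync helper above (the state variable is just illustrative):
const state = { count: 0 };
console.logSync(state); // logs a snapshot: { count: 0 }
state.count = 1;        // this later mutation does not change the entry logged above
console.logSync(state); // logs { count: 1 }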

When using console.log:
a = {}; a.a=1;console.log(a);a.b=function(){};
// without b
a = {}; a.a=1;a.a1=1;a.a2=1;a.a3=1;a.a4=1;a.a5=1;a.a6=1;a.a7=1;a.a8=1;console.log(a);a.b=function(){};
// with b, maybe
a = {}; a.a=function(){};console.log(a);a.b=function(){};
// with b
In the first situation the object is simple enough, so the console can 'stringify' it and then present it to you; but in the other situations, a is too 'complicated' to 'stringify', so the console shows you the in-memory object instead, and yes, by the time you look at it, b has already been attached to a.

Related

Using Chrome DevTools Protocol Input.dispatchKeyEvent or Input.dispatchMouseEvent to send an event

I'm writing a DSL that will interact with a page via Google Chrome's Remote Debugging API.
The INPUT domain (link here:
https://chromedevtools.github.io/devtools-protocol/1-2/Input/) lists two functions that can be used for sending events: Input.dispatchKeyEvent and Input.dispatchMouseEvent.
I can't seem to figure out how to specify the target element as there is no link between the two functions and DOM.NodeId, or an intermediate API that accepts a DOM.NodeId which then returns an X,Y co-ordinate.
I know that it's possible to use Selenium, but I'm interested in doing it directly using WebSockets.
Any help is appreciated.
Brief Intro
I'm currently working on a NodeJS interaction library to work with Chrome Headless via the Remote Debugging Protocol. The idea is to integrate it into my colleague's testing framework to eventually replace the usage of PhantomJS, which is no longer being supported.
Evaluating JavaScript
I'm just experimenting with things currently, but I have a way of evaluating JavaScript on the page, for example to click on an element via a selector reference. It should in theory work for anything, assuming my implementation isn't flawed.
let evaluateOnPage = function (fn) {
  let args = [...arguments].slice(1).map(a => {
    return JSON.stringify(a);
  });
  let evaluationStr = `
    (function() {
      let fn = ${String(fn)};
      return fn.apply(null, [${args}]);
    })()`;
  return Runtime.evaluate({expression: evaluationStr});
};
The code above will accept a function and any number of arguments. It will turn the arguments into strings, so they are serializable. It then evaluates an IIFE on the page, which calls the function passed in with the arguments.
Example Usage
let selector = '.mySelector';
let result = evaluateOnPage(selector => {
  return document.querySelector(selector).click();
}, selector);
The result of Runtime.evaluate is a promise; when it is fulfilled, you can check the result object's type and subtype to determine success or failure. For example, the subtype may be node or error.
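As a rough sketch of that check, assuming the evaluateOnPage helper above and an already-connected Runtime domain (the property names follow the Runtime.RemoteObject shape in the protocol docs):
evaluateOnPage(sel => document.querySelector(sel), '.mySelector')
  .then(({ result }) => {
    // result is a Runtime.RemoteObject; its subtype tells us what came back
    if (result.subtype === 'error') {
      console.error('Evaluation failed:', result.description);
    } else if (result.subtype === 'node') {
      console.log('Selector resolved to a DOM node:', result.className);
    } else {
      console.log('Evaluation result:', result.value);
    }
  });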
I hope this may be of some use to you.
This protocol is probably not the best choice if you want to click on specific elements rather than on spots on the screen...
It's important to keep in mind that this area of the devtools protocol is intended to emulate raw input. You could figure out the position of the elements using the protocol or by running some JavaScript in the page, but it might be better to use something like target.dispatchEvent() with MouseEvent and inject the JavaScript into the page instead.
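For completeness, here is a hedged sketch of the coordinate route mentioned above: resolve the element's position with injected JavaScript, then synthesize the click with Input.dispatchMouseEvent (the clickBySelector helper and the Runtime/Input client objects are assumptions about your CDP wrapper, not part of the protocol itself):
async function clickBySelector(selector) {
  // Ask the page for the element's center point via injected JavaScript.
  const expression = `JSON.stringify((() => {
    const r = document.querySelector(${JSON.stringify(selector)}).getBoundingClientRect();
    return { x: r.left + r.width / 2, y: r.top + r.height / 2 };
  })())`;
  const { result } = await Runtime.evaluate({ expression });
  const { x, y } = JSON.parse(result.value);
  // Emulate the raw mouse press/release pair at those coordinates.
  await Input.dispatchMouseEvent({ type: 'mousePressed', x, y, button: 'left', clickCount: 1 });
  await Input.dispatchMouseEvent({ type: 'mouseReleased', x, y, button: 'left', clickCount: 1 });
}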

App Widget getting hijacked by Google Sound Search?

So I'm seeing some bizarre behavior in an appwidget that I wrote.
The widget itself is quite simple - it reads a few values from persistent storage and then displays them as text. Nothing fancy in the layout - just a FrameLayout root element with some LinearLayout and TextView children.
The widget has a simple configuration activity associated with it.
The bizarre behavior is that the widget will initially show "Problem Loading Widget" after the user closes the configuration activity, and then after a few seconds it shows a "Google Sound Search" button (and clicking on the button actually does launch Google Sound Search). Then, after a few more seconds, it finally shows the expected display.
I am away from my code right now, so I'll have to wait until tonight to post code snippets. However, in the meantime, can anyone provide some insight into how such a thing could happen? Has anyone else ever experienced this? How could another widget "hijack" mine?
Thanks,
-Ron
There are a couple of issues with your widget and there are answers to all of them (although you didn't post any code so some of my statements are based on assumptions):
"Problem loading widget": this is the default view Android uses before the widget is initialized and the layout updated. Simply add the following line to your widget xml configuration (to show a loading message instead of the problem message):
android:initialLayout="@layout/my_cool_widget_loading_message"
If the widget shows the wrong layout then you probably have an issue in the widget's onReceive method. onReceive is called for all the widgets no matter whether the broadcast is for that specific widget. Android's AppWidgetProvider filters the broadcasts by appwidget Id and dispatches to the other methods (like onUpdate).
See also: https://developer.android.com/reference/android/appwidget/AppWidgetProvider.html#onReceive(android.content.Context, android.content.Intent).
If you override onReceive (which I assume you do), you need to call through to super.onReceive(Context, Intent) to make sure your other methods don't get calls meant for other widgets.
Now for the configuration of the widget. If you follow the Google documentation then it will all work nicely. The only improvement I'd make is what my other answer that you reference suggests (https://stackoverflow.com/a/14991479/534471). This will NOT send out two broadcasts. The setResult()/finish() part only terminates the config Activity and lets Android know whether to actually add the widget or not (depending on whether the result is RESULT_CANCELED or RESULT_OK).
From your own answer I can see why your code wouldn't work. The code is:
Intent intent = new Intent();
intent.setAction(AppWidgetManager.ACTION_APPWIDGET_UPDATE);
intent.putExtra(AppWidgetManager.EXTRA_APPWIDGET_IDS, new int[] {mAppWidgetId});
intent.putExtra(AppWidgetManager.EXTRA_APPWIDGET_ID, mAppWidgetId);
setResult(RESULT_OK, intent);
sendBroadcast(intent);
finish();
First of all, there's no need to add the appWidgetId twice; use the AppWidgetManager.EXTRA_APPWIDGET_IDS version and you're good. Second, you're using the same Intent both as the broadcast and as the result of the Activity. AFAIK it's not documented what happens when you set an action on that Intent, but my experience with Android widgets is that you need to stick exactly to the documentation or you'll end up with strange issues (like the ones you encounter). So please use two different Intents.
Activity result:
Intent resultValue = new Intent();
resultValue.putExtra(AppWidgetManager.EXTRA_APPWIDGET_ID, mAppWidgetId);
setResult(RESULT_OK, resultValue);
finish();
Broadcast:
Intent intent = new Intent(AppWidgetManager.ACTION_APPWIDGET_UPDATE, null, this, MyWidget.class);
intent.putExtra(AppWidgetManager.EXTRA_APPWIDGET_IDS, new int[] {mAppWidgetId});
sendBroadcast(intent);
ok, so I figured it out. Posting here in case anyone else runs into this. I think that the Android Developer docs are a little misleading here.
The problem was that in my configuration Activity, I had this code at the end:
Intent intent = new Intent();
intent.setAction(AppWidgetManager.ACTION_APPWIDGET_UPDATE);
intent.putExtra(AppWidgetManager.EXTRA_APPWIDGET_IDS, new int[] {mAppWidgetId});
intent.putExtra(AppWidgetManager.EXTRA_APPWIDGET_ID, mAppWidgetId);
setResult(RESULT_OK, intent);
sendBroadcast(intent);
finish();
Providing an intent with the EXTRA_APPWIDGET_ID extra is recommended by the documentation provided by Google.
However, that same document says that you have to update the widget's view by creating a RemoteView and calling AppWidgetManager.updateAppWidget() like so:
RemoteViews views = new RemoteViews(context.getPackageName(),
R.layout.example_appwidget);
appWidgetManager.updateAppWidget(mAppWidgetId, views);
I didn't like the idea of placing the presentation logic in both the configuration activity and the widget class, so I instead decided to broadcast an intent at the end of the configuration activity to tell the widget to redraw itself. That's why I have setResult() AND sendBroadcast() at the end of the activity. The documentation further states that the onUpdate() callback will not be called when using a configuration activity, so this seemed necessary. I added the ACTION_APPWIDGET_UPDATE action and the EXTRA_APPWIDGET_IDS extra to the intent so that it would trigger the onUpdate() method. This practice was recommended by this SO answer (albeit without being included in the activity result intent - but I tried separating the two and it had no effect).
Now I'm not certain exactly how the "Google Sound Search" widget got in there, nor do I fully understand the mechanics of how the intents interacted to produce the observed results. However, as soon as I replaced my code above with the code stated in the docs, the widget was updated properly.
Intent resultIntent = new Intent();
resultIntent.putExtra(AppWidgetManager.EXTRA_APPWIDGET_ID, mAppWidgetId);
setResult(RESULT_OK, resultIntent);
finish();
This seems to contradict the documentation's statement that the configuration activity must update the widget's view. Simply providing the configuration activity result as shown above triggers the onUpdate() method in the widget, thus allowing the widget to redraw itself. I confirmed the behavior on an emulator running API 23 and also on a Samsung device running Samsung's Android flavor.

How to load angular-formly vm.fields object from remotely-generated json?

In my application I have dynamic field sets on what is otherwise the same form. I can load them from the server as javascript includes and that works OK.
However, it would be much better to be able to load them from a separate API.
$.getJSON() provides a good way to load the json but I have not found the right place to do this. Clearly it needs to be completed before the compile step begins.
I see there is a fieldTransform facility in formly. Could this be used to transform vm.fields from an empty object to whatever comes in from the API?
If so how would I do that?
Thx. Paul
There is an example on the website that does exactly what you're asking about. It uses $timeout to simulate an async operation to load the field configuration, but you could just as easily use Angular's own $http to get the JSON from the server. It hides the form behind an ng-if and only shows the form when the fields return (when the ng-if resolves to true, it compiles the template).
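A minimal sketch of that pattern, assuming an AngularJS controller using controller-as syntax (the vm.loadingData name and the fields-demo.json URL are just illustrative):
function MainCtrl($http) {
  var vm = this;
  vm.model = {};
  vm.fields = [];
  // The form sits behind ng-if="vm.fields.length" in the template, so it is
  // only compiled once the field configuration has arrived from the server.
  vm.loadingData = $http.get('fields-demo.json').then(function (result) {
    vm.fields = result.data;
  });
}
The formly example wraps several promises in $q.all, which is why the follow-up below reads the fields from result[0].data.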
Thx #kent
OK, so we need to replace the getFields() promise with this
function getFields() {
  return $http.get('fields-demo.json', {headers: {'Cache-Control': 'no-cache'}});
}
This returns data.fields so in vm.loadingData we say
vm.fields = result[0].data;
Seems to work OK for me.
When testing, I noticed that you have to make sure there is nothing wrong with your JSON, such as using a field type you haven't defined. In that case the resulting error message is not very clear.
Furthermore you need to deal with the situation where the source of the data is unavailable. I tried this:
function getFields() {
  console.log('getting', fields_url);
  return $http.get(fields_url, {headers: {'Cache-Control': 'no-cache'}})
    .error(function() {
      alert("can't get fields from server");
      //return new Promise({status:'fields server access error'}); //??
    });
}
... which does at least show the alert. However, I'm not sure how to replace the promise so as to propagate the error back to the caller.
Paul
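One way to propagate the failure is to re-reject from a catch handler instead of using the deprecated .error() helper; here is a hedged sketch (it assumes $q is injected alongside $http, and the shape of the rejection object is arbitrary):
function getFields() {
  return $http.get(fields_url, {headers: {'Cache-Control': 'no-cache'}})
    .catch(function (response) {
      alert("can't get fields from server");
      // Re-reject so the caller's promise chain (e.g. vm.loadingData) also fails:
      return $q.reject({status: 'fields server access error', response: response});
    });
}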

Why does console.log not output in Chrome?

Recently I read a question asking "What does console.log do?" and an answer stating that it outputs to the console in Google's browser. I tried using it and I get no output.
I did try this code:
function put(p) {
  if (window.console && window.console.log) {
    console.log(p); // console is available
  } else {
    alert(p);
  }
}
BUT... I get neither console output nor an alert. Furthermore, .log is also a Math property - what gives with that?
Make sure that in the Developer Tools the Filter in the console section is set to All or Logs...
I had a similar experience where I couldn't see any of my console.log output and it was because the console was set to filter on Errors only... All the log entries were there - they just weren't visible.
Bonus marks: you can also Ctrl + Click to select multiple filters (Errors + Logs for example).
Press F12 and look in Developer Tools: Console. I tried your code just now and it works fine for me -- Chrome version 30.0.
Since you're after console logging, not mathematical logarithms, perhaps you could stop going on about there being a similarly-named function in the Math object. It's not relevant here whatsoever.
You're also coming across just a little shouty. console.log() works fine, and your question didn't indicate that you knew where to look. This is totally your problem, but I'm trying to help you. I can obviously only go on the information you provide, and I can't say your initial question was written very clearly.
It appears, since the snippet of code you posted works absolutely fine here, that your calling code and its containing context (which you haven't posted) are the cause of the problem. You should debug these to find out what's going wrong.
Do you know how to use the Chrome debugger? Are there any error messages showing in Chrome or on the console?
Test it on a simple page if necessary, to rule out other errors in the page or surrounding context breaking it. One common mistake is to declare functions in a jQuery ready handler or similar, and then try to access them globally. Make sure your logging function is actually global (outside any other function(){} or object {} blocks).
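Here is a small sketch of that mistake and its fix (the put name is taken from the question; the jQuery ready handler is just a hypothetical surrounding context):
$(document).ready(function () {
  function put(p) { console.log(p); } // only visible inside this callback
});
// put('hello'); // ReferenceError: put is not defined

// Declared at the top level (or attached to window), it stays reachable:
function put(p) {
  if (window.console && window.console.log) {
    console.log(p);
  } else {
    alert(p);
  }
}
put('hello'); // logs 'hello'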
Lastly, it's good to have a logging function for portability (I have one myself) but put() is not a good name for it. Naming it consoleLog() or log() would be better.
Had the same issue.
Make sure you're using the right path when you try to import things.
Example with my issue:
Wrong path: import normalizedData from 'normalizr';
Right path: import normalizedData from '../schemas/index.js';
I had also faced the same problem. Make sure you apply no filter in the console. It worked for me.

What is the background for having the ServerHandler.addCallbackElement method?

Frequently GAS users (myself included) do not use the ServerHandler.addCallbackElement method, or use it in a way which does not cover all controls.
What is the background for having this method at all? Why did the GAS developers introduce it? Wouldn't it be simpler to pass all input widget values to all server handlers as parameters?
The documentation does not provide answers to these questions.
I see the following possible reasons:
Adding widgets as callback elements reduces traffic between browsers and GAS servers in the case of several handlers which handle different sets of controls. Here is a question: how much traffic does it save? At most a few kilobytes, I think, usually hundreds of bytes. Is it worth it, considering the speed of modern internet connections, even mobile ones?
A form contains table-like edit controls with multiple buttons, and it is convenient to handle row elements with the same name. This issue is easily avoided by using tags; see the following example. If the tags are used for other purposes, it is not a problem to parse the source button id and extract the row number.
Limits of the technology used behind the scenes. If there are such limits, what are they?
function doGet(e) {
  var app = UiApp.createApplication();
  var vPanel = app.createVerticalPanel();
  var handler = app.createServerHandler("onBtnClick");
  var lstWidgets = [];
  for (var i = 0; i < 10; i++) {
    var hPanel = app.createHorizontalPanel().setTag('id_' + i);
    var text = app.createTextBox().setName("text_" + i);
    text.setText(new Date().valueOf());
    var btn = app.createButton("click me").addClickHandler(handler);
    btn.setTag(i).setId('id_btn' + i);
    var lbl = app.createLabel().setId("lbl_" + i);
    hPanel.add(text);
    hPanel.add(btn);
    hPanel.add(lbl);
    lstWidgets.push(text);
    lstWidgets.push(btn);
    vPanel.add(hPanel);
  }
  // The addCallbackElement calls simulate the situation where all widget values are passed to a single server handler.
  for (var j = 0; j < lstWidgets.length; j++) {
    handler.addCallbackElement(lstWidgets[j]);
  }
  app.add(vPanel);
  return app;
}

function onBtnClick(e) {
  var app = UiApp.getActiveApplication();
  var i = e.parameter[e.parameter.source + '_tag'];
  var lbl = app.getElementById("lbl_" + i);
  lbl.setText("Source ButtonID: " + e.parameter.source + ', Text: ' + e.parameter["text_" + i]);
  return app;
}
Great Question.
"How much traffic it saves?" I don't think we know yet, but I expect it will get more efficient over time. Here is another discussion on performance. Only extensive testing and improvements from Google will really allow us to identify best practices, for now all I can say is that ClientHandlers are clearly going to be better than ServerHandlers whenever possible.
As JavaScript developers, I think we are predominantly used to doing stuff client-side, and then we think of PHP/ASP as server-side tools. My understanding so far is that our GAS code is actually running both client- and server-side (at the very least it's calling server-side functionality), but it sure seems like there's more going on server-side than we realize, and on the client-side this seems to result in somewhat "compiled" code. I kinda recognize some of this multi-tier deployment from my Java experience.
Since there are a lot of ways of doing the same thing, Google can take advantage of the fact that our code is not directly interpreted (by either side) to do things that would not necessarily make sense if we were writing the code by hand. This is why I think it will become more efficient than other solutions, eventually but probably not yet. For now I'd suggest steering clear of GAS if you are worried about performance. Maybe just for fun try looking at the source of your client-side Web-Apps at runtime (view source). So in order for them to do things most efficiently, I imagine they will benefit by having us define things in a very high-level way. This gives them the most flexibility in how they interpret our code.
To specifically address your second question I personally think of the Handler Function onBtnClick() as running on the Server-Side, whereas the Tags you refer to (and most of the doGet) would be in the browser's engine on the client-side. I can see how the functionality would be much more flexible (efficient and powerful) on the server-side if they have an idea ahead of time as to how much memory they would need to handle specific events/requests. (Clearly if each getElementById() call was running a separate request, that would be like clicking a link to a new mini-webpage each time.)
So now the question is why can't my handler just automatically create parameters with just the stuff I use in my handler function? The only reason we are asking this question in the first place is because there is some stuff in the UiApp which seems to be available on both ends. The UiApp is already in the scope of both the doGet and onClick but the variables defined in doGet are not, so these values need to be either
explicitly saved like ScriptProperties.setProperty() or
put into the UiApp somewhere with an Id or
explicitly given to the Handler function using addCallbackElement()
Notice how you had to call addCallbackElement for each widget in lstWidgets, because that array was not created with an app.create... constructor within the UiApp object. My guess is that GAS is implementing XML-compliant SOAP calls to a web service on the Google end; we may be able to figure this out by really studying the client-side source code. Just to reiterate, we could also use setProperty() - it does not really matter - or even save the values via JDBC and then retrieve them with another connection from within your handler function, but somehow the data needs to be passed from the client to the server and vice versa.
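To illustrate the setProperty() route mentioned above, here is a hedged sketch (a simplified variant of the question's example; the 'defaultText' key is arbitrary):
function doGet(e) {
  var app = UiApp.createApplication();
  // Persist a value server-side instead of registering a widget as a callback element.
  ScriptProperties.setProperty('defaultText', String(new Date().valueOf()));
  // ... build the panels and buttons as in the example above ...
  return app;
}

function onBtnClick(e) {
  var app = UiApp.getActiveApplication();
  // The stored value is available here without addCallbackElement, but it reflects
  // what doGet saved, not anything the user typed afterwards.
  var saved = ScriptProperties.getProperty('defaultText');
  return app;
}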
From a programming perspective there is a lot of stuff available in the scope of your client-side doGet function that you probably would never want to pass to the server, or there may be functions in the scope of the server-side doClick() with the same name as functions on the client-side but they may actually be calls to totally different library functions maybe even on totally different hardware (even though from the developer's perspective they work the same way).
Maybe the Google team has not yet really decided on how UiApp should work; otherwise they would force, or at least allow, us to put everything in there. Yet another observation: when we call UiApp.getActiveApplication(), based on its name it does not seem like a constructor, but rather a method that returns a private instance from the UiApp object (the object being a class that was previously instantiated and supposedly initialized somewhere). I may not have 100% answered your question, but I sure did try; any further insight from the community would clearly be appreciated.
Now I may be straying off-topic, but I also imagine the actual product will continue to change as they do more to improve performance in the long term, and if we still feel like we are writing client-side code as developers, then that is a success for Google. Please correct me if I have stated anything wrong; I have only recently started using these tools and plan to follow up on this question with more specifics as I learn more, but as of right now that is my best interpretation.
If you use a FormPanel, all of its sub-elements will be sent to your doPost function, with the button as the source, and your UiApp will be cleaned.
If you don't want that, use a callback to specify which element and siblings will be sent.
This is how UiApp is designed.