I've got a strange set of errors as I load up my application
polymer-micro.html:117 Uncaught TypeError: Cannot read property '_makeReady' of undefined
(anonymous function) # polymer-micro.html:117
The undefined object in this first one is Polymer.RenderStatus.
polymer.html:3417 Uncaught TypeError: Polymer.dom is not a function
_findStyleHost # polymer.html:3417
_computeStyleProperties # polymer.html:3461
_applyCustomProperties # polymer.html:3652
fn # polymer.html:3638
Obviously Polymer.dom ought to be a function. Why is it not?
Uncaught TypeError: Cannot read property '_isEventBogus' of undefined
_notifyListener # polymer.html:2012
(anonymous function) # polymer.html:1534
fire # polymer.html:1277
_notifyChange # polymer.html:1372
_notifyEffect # polymer.html:1553
_effectEffects # polymer.html:1405
_propertySetter # polymer.html:1389
setter # polymer.html:1468
queryHandler # iron-media-query.html:116
This is a media query that generates this error every time we move the width of the screen across the media-query boundary. The undefined variable in this case is Polymer.Bind.
If I put a breakpoint at the top of polymer.html (just after the script tag), the errors go away when I then let it continue. This almost implies that normally it is running without letting polymer-mini.html load.
I am running with Chrome, and the test in index.html that decides whether to load the webcomponents-lite.js polyfill means it doesn't load.
I am stuck on how to debug this issue. Any ideas?
It turns out that the positioning of the initialisation script in index.html is the cause of these problems. I had copied the Polymer Shop app which places the script at the bottom of the body. The app created by the polymer-cli tool (with the app-drawer template) places the script in the header, before importing the my-app element.
I suspect the reason the shop app doesn't have problems is that its header is not doing any animation. I am using a blend-background effect in my header, and that might be why it fails.
Anyway, moving the script which includes the Polymer initialization into the header solved the problem.
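For reference, a minimal sketch of the ordering that worked (the webcomponents-lite.js path matches a bower layout, and my-app stands in for your app element; adjust both to your project):
<head>
  <script>
    // Polymer settings must exist before any Polymer element loads
    window.Polymer = { dom: 'shadow', lazyRegister: true };
  </script>
  <script src="/bower_components/webcomponentsjs/webcomponents-lite.js"></script>
  <link rel="import" href="/src/my-app.html">
</head>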
Adding async to the import solved it for me.
<link rel="import" href="/src/your-app.html" async>
Instead of
<link rel="import" href="/src/your-app.html">
I'm experimenting with the 2.0 preview and had a similar set of errors trying to follow Rob Dodson's routing Polycast (#47, 48 ish). This is what worked after many hours of trying other solutions:
<script src="/bower_components/webcomponentsjs/webcomponents-lite.js"></script>
<script>
// Setup Polymer options
window.Polymer = {
  dom: 'shadow',
  lazyRegister: true
};
// Load webcomponentsjs polyfill if browser does not support native Web Components
(function() {
  'use strict';
  var onload = function() {
    // For native Imports, manually fire WebComponentsReady so user code
    // can use the same code path for native and polyfill'd imports.
    if (!window.HTMLImports) {
      document.dispatchEvent(
        new CustomEvent('WebComponentsReady', {bubbles: true})
      );
    }
  };
  var webComponentsSupported = (
    'registerElement' in document
    && 'import' in document.createElement('link')
    && 'content' in document.createElement('template')
  );
  if (!webComponentsSupported) {
    // The dynamic polyfill load was the change: it is commented out in
    // favour of the unconditional script tag above.
    // var script = document.createElement('script');
    // script.async = true;
    // script.src = '/bower_components/webcomponentsjs/webcomponents-lite.min.js';
    // script.onload = onload;
    // document.head.appendChild(script);
  } else {
    onload();
  }
})();
// Load pre-caching Service Worker
if ('serviceWorker' in navigator) {
  window.addEventListener('load', function() {
    navigator.serviceWorker.register('/service-worker.js');
  });
}
</script>
I have created a reproduction of this bug here (ugly use of Aurelia but to prove the point): https://jberggren.github.io/GoogleAureliaBugReproduce/
If I load the Google API and try to list my files in Google Drive, my code derived from Google's quickstart works fine. If I use the same code after loading Aurelia, I get a script error from gapi stating:
Uncaught Error: arrayForEach was called with a non array value
at Object._.Sa (cb=gapi.loaded_0:382)
at Object._.eb (cb=gapi.loaded_0:402)
at MF (cb=gapi.loaded_0:723)
at Object.HF (cb=gapi.loaded_0:722)
at Object.list (cb=gapi.loaded_0:40)
at listFiles (index.js:86)
...
When debugging, it seems to be some sort of array check (Chrome says 'native code') that fails after Aurelia is loaded. In my search for an answer I found two other people with the same problem but no solution (an Aurelia gitter question, an SO question). I don't know whether to report this to the Aurelia team or to Google, or where the actual problem lies.
Help me SO, you are my only hope.
This is not a perfect solution, but it works.
aurelia-binding
https://github.com/aurelia/binding/blob/master/src/array-observation.js
Aurelia overrides the Array.prototype.* methods to implement array observation.
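As a rough sketch of the pattern (simplified; the real code is in the file linked above, and this is not Aurelia's actual source), each patched method wraps the native one and records the change, so its source no longer looks like native code:
// Simplified illustration only, not aurelia-binding's real code
var nativePush = Array.prototype.push;
Array.prototype.push = function () {
  var result = nativePush.apply(this, arguments);
  if (this.__array_observer__ !== undefined) {
    // record the mutation for the binding system
    this.__array_observer__.addChangeRecord({type: 'add', object: this});
  }
  return result;
};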
gapi (especially spreadsheets)
The gapi lib checks whether these methods are native code or not.
// example: roughly the check gapi performs
const r = /\[native code\]/
r.test(Array.prototype.push) // true for a native function, false once overridden
conclusion
So, we have to monkey-patch.
gapi.load('client:auth2', async () => {
  await gapi.client.init({
    clientId: CLIENT_ID,
    discoveryDocs: ['https://sheets.googleapis.com/$discovery/rest?version=v4'],
    scope: 'https://www.googleapis.com/auth/spreadsheets',
  });
  // monkey patch: let functions patched by aurelia-binding pass the
  // native-code regex test
  const originTest = RegExp.prototype.test;
  RegExp.prototype.test = function test(v) {
    if (typeof v === 'function' && v.toString().includes('__array_observer__.addChangeRecord')) {
      return true;
    }
    return originTest.apply(this, arguments);
  };
});
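With the patch in place, a call like the following, inside the same async callback, should get past gapi's native-code check (SPREADSHEET_ID and the range here are placeholders):
// Hypothetical usage; replace SPREADSHEET_ID and the range with your own.
const response = await gapi.client.sheets.spreadsheets.values.get({
  spreadsheetId: SPREADSHEET_ID,
  range: 'Sheet1!A1:B10',
});
console.log(response.result.values);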
I'm writing a unit test of an AngularJS 1.x directive.
If I use "template" it works.
If I use "templateUrl" it does not work (the directive element remains the same original HTML instead of being "compiled").
This is how I create the directive element to test in Jasmine:
function createDirectiveElement() {
  scope = $rootScope.$new();
  var elementHtml = '<my-directive>my directive</my-directive>';
  var element = $compile(elementHtml)(scope);
  scope.$digest();
  if (element[0].tagName == "my-directive".toUpperCase()) throw Error("Directive is not compiled");
  return element;
}
(this does not actually work, see Update for real code)
I'm using this workaround to get the $httpBackend from ngMockE2E (instead of the one in ngMock). In the browser developer "network" tab I don't see any request for the template file. The workaround itself seems to work, because it solved the error "Object # has no method 'passThrough'".
I know that the call to the template is done asynchronously using the $httpBackend (this means $compile exits before the template is really applied).
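A common way to sidestep this in directive tests, for what it's worth, is to pre-populate $templateCache so that $compile can resolve templateUrl synchronously, with no HTTP request at all (the template path below is hypothetical and must match the directive's templateUrl exactly):
beforeEach(inject(function ($templateCache) {
  // no request is made for URLs that are already in the cache
  $templateCache.put('/templates/my-directive.html',
    '<div>my directive template</div>');
}));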
My question is:
Obviously $compile is not doing what I expect. How can I trap this error?
If I use a wrong address in the templateUrl I don't receive any error.
How can I find out what went wrong when I called $compile(directive) or scope.$digest()?
Thanks,
Alex
[Solution]
As suggested by Corvusoft, I inject $exceptionHandler and check for errors after every test.
In the end this is the only code I have added:
afterEach(inject(function ($exceptionHandler) {
  if ($exceptionHandler.errors.length > 0)
    throw $exceptionHandler.errors;
}));
Now I can clearly see the errors that occurred in the Jasmine test results (instead of searching for them in the console), for example:
Error: Unexpected request: GET /api/category/list
No more request expected,Error: Unexpected request: GET /api/category/list
No more request expected thrown
And, most importantly, my tests do not pass if there are errors.
[Update to show real example case]
Actually, the real code to make templateUrl work uses an asynchronous beforeEach (done) and a timeout to wait for the compile/digest to finish.
My directive uses some providers/services, and the template contains other directives which in turn use their own templateUrl and make calls to some APIs in their link() functions.
This is the current (working) test code:
// workaround to have .passThrough() in $httpBackend
beforeEach(angular.mock.http.init); // set $httpBackend to use the ngMockE2E to have the .passThrough()
afterEach(angular.mock.http.reset); // restore the $httpBackend to use ngMock
beforeEach(inject(function (_$compile_, _$rootScope_, _$http_, $httpBackend, $templateCache, $injector) {
  $compile = _$compile_;
  $rootScope = _$rootScope_;
  $http = _$http_;
  $httpBackend.whenGET(/\/Scripts of my app\/Angular\/*/).passThrough();
  $httpBackend.whenGET(/\/api\/*/).passThrough(); // comment out this to see the errors in Jasmine
}));
afterEach(inject(function ($exceptionHandler) {
  if ($exceptionHandler.errors.length > 0)
    throw $exceptionHandler.errors;
}));
beforeEach(function(done) {
  createDirectiveElementAsync(function (_element_) {
    element = _element_;
    scope = element.isolateScope();
    done();
  });
});
function createDirectiveElementAsync(callback) {
  var scope = $rootScope.$new();
  var elementHtml = '<my-directive>directive</my-directive>';
  var element = $compile(elementHtml)(scope);
  scope.$digest();
  // I haven't found an "event" to know when the compile/digest ends
  setTimeout(function () {
    if (element[0].tagName == "my-directive".toUpperCase()) throw Error("Directive is not compiled");
    callback(element);
  }, 0.05*1000); // HACK: change it accordingly to your system/code
}
it("is compiled", function () {
expect(element).toBeDefined();
expect(element.tagName).not.toEqual("my-directive".toUpperCase());
});
I hope this example helps someone else.
$exceptionHandler
Any uncaught exception in AngularJS expressions is delegated to this
service. The default implementation simply delegates to $log.error
which logs it into the browser console.
In unit tests, if angular-mocks.js is loaded, this service is overridden by a mock $exceptionHandler which aids in testing (it collects errors in $exceptionHandler.errors). The docs also show how to replace the default implementation with your own, for example:
angular.
  module('exceptionOverwrite', []).
  factory('$exceptionHandler', ['$log', 'logErrorsToBackend', function($log, logErrorsToBackend) {
    return function myExceptionHandler(exception, cause) {
      logErrorsToBackend(exception, cause);
      $log.warn(exception, cause);
    };
  }]);
I have been using Traceur to develop some projects in ES6. In my HTML page, I include local Traceur sources:
<script src="traceur.js"></script>
<script src="bootstrap.js"></script>
and if I have a module in the HTML afterwards like:
<script type="module" src="foo.js"></script>
Then Traceur loads in that module, compiles it and everything works great.
I now want to programmatically add an ES6 module to the page from within another ES6 module (reasons are somewhat complicated). Here was my first attempt:
var module = document.createElement('script');
module.setAttribute('type', 'module');
module.textContent = `
console.log('Inside the module now!');
`;
document.body.appendChild(module);
Unfortunately this doesn't work; I guess Traceur does not monitor the page for newly added script tags.
How can I get Traceur to compile and execute the script? I guess I need to invoke something on either 'traceur' or '$traceurRuntime' but I haven't found a good online source of documentation for that.
You can load other modules using ES6 import statements, or the TraceurLoader API for dynamic dependencies.
Example from Traceur Documentation
function getLoader() {
  var LoaderHooks = traceur.runtime.LoaderHooks;
  var loaderHooks = new LoaderHooks(new traceur.util.ErrorReporter(), './');
  return new traceur.runtime.TraceurLoader(loaderHooks);
}
getLoader().import('../src/traceur.js',
  function(mod) {
    console.log('DONE');
  },
  function(error) {
    console.error(error);
  }
);
The System.js loader seems to be supported as well:
window.System = new traceur.runtime.BrowserTraceurLoader();
System.import('./Greeter.js');
Dynamic module loading is a (not-yet-standardized) feature of System:
System.import('./repl-module.js').catch(function(ex) {
console.error('Internal Error ', ex.stack || ex);
});
To make this work you need to run npm test, then include BrowserSystem:
<script src="../bin/BrowserSystem.js"></script>
You might also like to look into https://github.com/systemjs/systemjs as it has great support for browser loading.
BTW, the System object may eventually be standardized (perhaps under a different name) by the WHATWG: http://whatwg.github.io/loader/#system-loader-instance
I'm building a mid-sized app with Polymer, and used the Polymer Starter Kit to kick things off, which uses page.js for routing.
I want to implement flash message functionality using the paper-toast element.
In other technologies/frameworks this is implemented by checking whether a property exists when the route is changed; if it does, it shows the relevant flash/toast message.
How, with Polymer and page.js, is it possible to replicate this type of functionality? Page.js doesn't seem to have any events for changed routes.
The only way I can think of is to create a proxy function for the page('/route') function, which I have to call every time I want to go to a new page and which then calls the actual page function. Is there a better way?
I've implemented this as follows for the time being. It seems to be OK, but if anyone can suggest improvements, let me know.
In routing.html
window.addEventListener('WebComponentsReady', function() {
  // Assign page to another global object
  LC.page = page;
  // Define all routes through this new object
  LC.page('/login', function () {
    app.route = 'login';
    app.scrollPageToTop();
  });
  ....
  //implement remaining routes
  // page proxy... to intercept calls
  page = function(path) {
    // dispatch event (a CustomEvent carries the path in its detail)
    document.dispatchEvent(new CustomEvent('LC.pageChange', {detail: {path: path}}));
    // call the real page
    LC.page(path);
  };
});
Then, where you want to listen (in my case in an lc-paper-toast element added to the index.html file of the app), you can listen for when the page changes:
ready: function() {
  document.addEventListener('LC.pageChange', function(e) {
    console.log('page change', e);
  }, false);
}
The only thing to be aware of is that all page changes must be triggered with page('/route'); otherwise they won't go through the proxy.
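An alternative that avoids the proxy entirely, assuming page.js's middleware-style routes: a catch-all route registered before the others runs on every navigation and receives a context plus a next callback:
// Catch-all middleware: register it before the concrete routes.
page('*', function (ctx, next) {
  document.dispatchEvent(new CustomEvent('LC.pageChange', {
    detail: {path: ctx.path} // ctx.path is supplied by page.js
  }));
  next(); // hand off to the route that actually matches
});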
With gulp you often see patterns like this:
gulp.watch('src/*.jade',['templates']);
gulp.task('templates', function() {
  return gulp.src('src/*.jade')
    .pipe(jade({
      pretty: true
    }))
    .pipe(gulp.dest('dist/'))
    .pipe(livereload(server));
});
Does this actually pass the watch'ed files into the templates task? How do these overwrite/extend/filter the src'ed files?
I had the same question some time ago and came to the following conclusion after digging for a bit.
gulp.watch is an eventEmitter that emits a change event, and so you can do this:
var watcher = gulp.watch('src/*.jade', ['templates']);
watcher.on('change', function(f) {
  console.log('Change Event:', f);
});
and you'll see this:
Change Event: { type: 'changed',
path: '/Users/developer/Sites/stackoverflow/src/touch.jade' }
This information could presumably be passed to the templates task, either via its task function or via the behavior of gulp.src.
The task function itself can only receive a callback (https://github.com/gulpjs/gulp/blob/master/docs/API.md#fn) and cannot receive any information about the vinyl files (https://github.com/wearefractal/vinyl-fs) that are used by gulp.
The source starting a task (.watch in this case, or the gulp command line) has no effect on the behavior of gulp.src('src-glob', [options]): 'src-glob' is a string (or array of strings), and options (https://github.com/isaacs/node-glob#options) has nothing about any file changes.
Hence, I don't see any way in which .watch could directly affect the behavior of a task it triggers.
If you want to process only the changed files, you can use gulp-changed (https://www.npmjs.com/package/gulp-changed) together with gulp.watch, or you could use gulp-watch.
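For instance, with gulp-changed the templates task itself filters down to the files that are newer than their output (the dist/ destination and .html extension mirror the pattern above):
var gulp = require('gulp');
var jade = require('gulp-jade');
var changed = require('gulp-changed');

gulp.task('templates', function() {
  return gulp.src('src/*.jade')
    // only pass through .jade files newer than their dist/*.html counterpart
    .pipe(changed('dist/', {extension: '.html'}))
    .pipe(jade({pretty: true}))
    .pipe(gulp.dest('dist/'));
});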
Alternatively, you could do this:
var gulp = require('gulp');
var jade = require('gulp-jade');
var livereload = require('gulp-livereload');

gulp.watch('src/*.jade', function(event) {
  template(event.path);
});

gulp.task('templates', function() {
  return template('src/*.jade');
});

function template(files) {
  return gulp.src(files)
    .pipe(jade({
      pretty: true
    }))
    .pipe(gulp.dest('dist/'));
}
One possible way to pass a parameter or some data from your watcher to a task is through a global variable, or a variable that is in scope for both blocks. Here is an example:
gulp.task('watch', function () {
  //....
  //json comments
  watch('./app/tempGulp/json/**/*.json', function (evt) {
    jsonCommentWatchEvt = evt; // we set the global variable first
    gulp.start('jsonComment'); // then we start the task
  });
});

//global variable
var jsonCommentWatchEvt = null;

//json comments task
gulp.task('jsonComment', function () {
  jsonComment_Task(jsonCommentWatchEvt);
});
The function that does the task's work is shown below, in case it interests anyone. Note that I didn't actually need a separate function; I could have implemented the work directly in the task, since the global variable is available there too. If you don't use a function as I did, though, a good practice is to assign the value of the global variable to a local one at the very top of the task and use that instead of the global directly (see the sketch after the function). This avoids the problem of the global being changed by another watch trigger while the current task is still running.
// Assumed requires for this snippet: gulp, path, vinyl-source-stream
// (source) and the comment-stripping plugin (stripJsonComments).
function jsonComment_Task(evt) {
  console.log('handling : ' + evt.path);
  gulp.src(evt.path, {
    base: './app/tempGulp/json/'
  })
    .pipe(stripJsonComments({whitespace: false})).on('error', console.log)
    .on('data', function (file) { // here we want to manipulate the resulting stream
      var str = file.contents.toString();
      var stream = source(path.basename(file.path));
      // collapse the blank lines left behind by stripped comments
      stream.end(str.replace(/\n\s*\n/g, '\n\n'));
      stream
        .pipe(gulp.dest('./app/json/')).on('error', console.log);
    });
}
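A sketch of the "copy the global first" practice mentioned above, for the case where the work is done inline in the task rather than in a separate function:
gulp.task('jsonComment', function () {
  // snapshot the global before another watch event can overwrite it
  var evt = jsonCommentWatchEvt;
  console.log('handling : ' + evt.path);
  // ... the rest of the processing uses evt, never the global directly
});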
My use case: I had a directory of JSON files in which I use comments, and I'm watching them. When a file is modified, the watch handler is triggered, and I then need to process only the file that was modified. To remove the comments I used the json-comment-strip plugin, plus some extra treatment to remove multiple successive line breaks. First of all, though, I needed to pass the path of the modified file, which we can recover from the event parameter, to the task; I did that through a global variable whose only job is to pass the data.
Note: even though this has no relation to the question, in my example here I needed to process the stream coming out of the plugin, so I used the on('data') event. That handler is asynchronous, so the task will be marked as finished before the work has completely ended (the task block reaches its end, but the asynchronous handler keeps processing a little longer). So the time reported in the console at task end is not the time for the whole processing, just for the task block. Just so you know; for me it doesn't matter.
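If the completion time did matter, one option (a sketch under the same assumptions as the function above) is to return a promise that resolves only when the inner stream finishes writing; gulp accepts a returned promise as the task's lifetime:
// Sketch: same plugins assumed; resolves on the first file, which is fine
// here because the watcher passes exactly one changed file per run.
function jsonComment_Task(evt) {
  return new Promise(function (resolve, reject) {
    gulp.src(evt.path, {base: './app/tempGulp/json/'})
      .pipe(stripJsonComments({whitespace: false})).on('error', reject)
      .on('data', function (file) {
        var str = file.contents.toString();
        var stream = source(path.basename(file.path));
        stream.end(str.replace(/\n\s*\n/g, '\n\n'));
        stream
          .pipe(gulp.dest('./app/json/')).on('error', reject)
          .on('finish', resolve); // writable side flushed: task can end
      });
  });
}
// and in the task: return jsonComment_Task(jsonCommentWatchEvt);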