I'm working on a lit-element project, and I've run into a problem: reset.css cannot be applied to web components wrapped in a shadow root.
I've tried the approach below, and I get the following error:
Refused to apply style from 'http://localhost:8080/style/static/reset.css' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
The code I tried is this:
<script>
var css = new CSSStyleSheet()
css.replace('@import url("./style/static/reset.css")')
document.adoptedStyleSheets = [css]
</script>
This is placed in an HTML file.
How can I avoid this error, and apply the reset.css to the web components?
Does it help to apply the sheet containing the import to the shadow root, as opposed to the document?
const node = document.createElement('div')
const shadow = node.attachShadow({ mode: 'open' })
shadow.adoptedStyleSheets = [sheet]
https://wicg.github.io/construct-stylesheets/#using-constructed-stylesheets
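For illustration, here is a minimal sketch of that idea (my own code, not from the spec; it assumes reset.css is actually reachable at that path and served with a text/css MIME type, since the error in the question usually means the URL is resolving to an HTML page such as a 404):
// Build the constructed stylesheet from the fetched CSS text instead of
// relying on @import inside replace(), then adopt it onto a shadow root.
const sheet = new CSSStyleSheet()
fetch('./style/static/reset.css')
  .then(response => response.text())
  .then(cssText => {
    sheet.replaceSync(cssText)
    const node = document.createElement('div')
    const shadow = node.attachShadow({ mode: 'open' })
    shadow.adoptedStyleSheets = [sheet]
  })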
Edit -- addition for clarity:
The above addresses applying a style sheet (possibly containing @import statements) to the frame your original question refers to, the shadow-root node (element). However, because your code tries to apply the constructed sheet to the document, the question becomes a bit hazy to me.
The error you quote seems indicative of code that attempts to adopt a style sheet constructed in a different document:
If you try to adopt a CSSStyleSheet that's constructed in a different Document, a "NotAllowedError" DOMException will be thrown.
https://github.com/WICG/construct-stylesheets/blob/gh-pages/explainer.md
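As a sketch of the situation that quote describes (hypothetical iframe setup, my own illustration, not from the explainer):
// a sheet constructed against another document (here, an iframe's) cannot
// be adopted by this document; the assignment throws NotAllowedError
const iframe = document.querySelector('iframe')
const foreignSheet = new iframe.contentWindow.CSSStyleSheet()
document.adoptedStyleSheets = [foreignSheet] // NotAllowedError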
Related
I am building a website and am trying to make it so the links aren't always just /blog or /post1. I want to reach my posts with a link like /blog/post, but when I do that the HTML shows up without the styles. My style.css is in the public folder, but it isn't being applied. I am using Express, body-parser, and Node.js.
Here is my code:
app.get('/blog/post1', (req, res) => {
res.render('../posts/post1.pug');
});
When I import the CSS, I get this error in Chrome:
Refused to apply style from 'http://localhost:7000/blog/style.css' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
Imported css:
link(rel='stylesheet', href='style.css')
It works just fine when the route is like /blog-post1, but not like /blog/post1.
For some reason it is looking for the styles in another location, but why?
Is there a way to fix that?
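A sketch of the likely cause (my own illustration; the public folder name comes from the question, everything else is assumed): href='style.css' is a relative URL, so from the page /blog/post1 the browser requests /blog/style.css, and Express answers that path with an HTML page, hence the text/html MIME type in the error. Serving public/ at the site root and using an absolute href avoids this:
const express = require('express');
const path = require('path');
const app = express();

// serve everything in public/ (where style.css lives) at the site root,
// so /style.css resolves no matter which page requested it
app.use(express.static(path.join(__dirname, 'public')));

app.get('/blog/post1', (req, res) => {
  res.render('../posts/post1.pug');
});
In the Pug template the stylesheet would then be referenced with a leading slash: link(rel='stylesheet', href='/style.css').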
I'm trying to scrape a div by class, but everything I have tried has failed so far :(
I'm trying to scrape this element:
<a href="http://www.bellator.com/events/d306b5/bellator-newcastle-pitbull-vs-
scope"><div class="s_buttons_button s_buttons_buttonAlt
s_buttons_buttonSlashBack">More info</div></a>
from the website: http://www.bellator.com/events
I tried accessing the list of elements by doing
Elements elements = document.select("div[class=s_container] > li");
but that didn't return anything.
Then I tried accessing just the parent with
Elements elements = document.select("div[class=s_container]");
and that returned two divs with the class name "s_container", neither of which was the one I needed :<
Then I tried accessing that one's parent with
Elements elements = document.select("div[class=ent_m152_bellator module
ent_m152_bellator_V1_1_0 ent_m152]");
And that didn't return anything.
I also tried
Elements elements = document.select("div[class=ent_m152_bellator]");
because I wasn't sure about the whitespace, but it didn't return anything either.
Then I tried accessing its parent by
Elements elements = document.select("div#t3_lc");
and that worked, but it returned an element containing
<div id="t3_lc">
<div class="triforce-module" id="t3_lc_promo1"></div>
</div>
which is kind of weird, because I can't see that child when I inspect the website in Chrome :S
Does anyone know what's going on? I feel kind of lost...
What you see in your web browser is not what Jsoup sees. Disable JavaScript and refresh the page to get what Jsoup gets, OR press CTRL+U ("Show source", not "Inspect"!) in your browser to see the original HTML document before JavaScript modifications. Your browser's debugger shows the final document after those modifications, so it's not suitable for your needs.
It seems like the whole "UPCOMING EVENTS" section is dynamically loaded by JavaScript.
What's more, this section is loaded asynchronously with AJAX. You can use your browser's debugger (Network tab) to see every request and response.
I found it, but unfortunately all the data you need is returned as JSON, so you're going to need another library to parse it.
That's not the end of the bad news; this case is more complicated. You could make a direct request for the data:
http://www.bellator.com/feeds/ent_m152_bellator/V1_1_0/d10a728c-547e-4a6f-b140-7eecb67cff6b
but the URL seems random, and a few of these URLs (one per upcoming event?) are embedded in JavaScript code inside the HTML.
My approach would be to get the URLs of these feeds with something like:
List<String> feedUrls = new ArrayList<>();

// select all the scripts and collect the feed URLs embedded in them
// (requires java.util.regex.Pattern/Matcher)
Elements scripts = document.select("script");
Pattern feedUrlPattern = Pattern.compile("http://www\\.bellator\\.com/feeds/[^\"'\\s]+");
for (Element script : scripts) {
    // script contents live in data nodes, so use data() rather than text()
    Matcher matcher = feedUrlPattern.matcher(script.data());
    while (matcher.find()) {
        feedUrls.add(matcher.group());
    }
}

for (String feedUrl : feedUrls) {
    // download each feed; ignoreContentType is needed because the response is JSON, not HTML
    String json = Jsoup.connect(feedUrl).ignoreContentType(true).execute().body();
    // here use a JSON parsing library (e.g. Gson or org.json) to pull out the data you need
}
An ALTERNATIVE approach would be to stop using Jsoup because of its limitations and use Selenium WebDriver instead, as it supports dynamic page modifications by JavaScript, so you'd get the HTML of the final result - exactly what you see in the web browser and the Inspector.
If anyone finds this in the future: I managed to solve it with Selenium. I don't know if it's a good/correct solution, but it seems to be working.
System.setProperty("webdriver.chrome.driver", "C:\\Users\\PC\\Desktop\\Chromedriver\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.get("http://www.bellator.com/events");

// grab the fully rendered page source and hand it to Jsoup for parsing
String html = driver.getPageSource();
Document doc = Jsoup.parse(html);

Elements elements = doc.select("ul.s_layouts_lineListAlt > li > a");
for (Element element : elements) {
    System.out.println(element.attr("href"));
}

driver.quit();
Output:
http://www.bellator.com/events/d306b5/bellator-newcastle-pitbull-vs-scope
http://www.bellator.com/events/ylcu8d/bellator-215-mitrione-vs-kharitonov
http://www.bellator.com/events/yk2djw/bellator-216-mvp-vs-daley
http://www.bellator.com/events/e8rdqs/bellator-217-gallagher-vs-graham
http://www.bellator.com/events/281wxq/bellator-218-sanchez-vs-grimshaw
http://www.bellator.com/events/8lcbdi/bellator-219-koreshkov-vs-larkin
http://www.bellator.com/events/9rqguc/bellator-macdonald-vs-fitch
Latest Update (also updated post title)
So I tracked down the issue to the old version of Mootools (which I cannot upgrade or remove due to project restrictions).
Mootools does the following, which is the code that causes the issue:
/*
Class: Abstract
Abstract class, to be used as singleton. Will add .extend to any object
Arguments:
an object
Returns:
the object with an .extend property, equivalent to <$extend>.
*/
var Abstract = function(obj){
obj = obj || {};
obj.extend = $extend;
return obj;
};
//window, document
var Window = new Abstract(window);
var Document = new Abstract(document);
The new definitions of Window and Document are what's breaking Polymer imports. Any suggestions on updating the code above to gracefully extend the Document/Window objects without breaking existing functionality?
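To illustrate why this pattern bites (my own sketch, not from the original post): Abstract returns the object it was handed, so new Abstract(window) is window itself, and the globals end up pointing at the instances rather than the native constructors:
// after the Mootools snippet above has run:
console.log(Document === document); // true - the global now refers to the instance
console.log(Window === window);     // true
console.log(Document.prototype);    // undefined - document has no prototype property

// anything that relies on the native constructors, e.g. prototype patching or
// `x instanceof Document` checks (which web component polyfills commonly use),
// can therefore throw or silently misbehave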
OLD description below, before I discovered the issue lies with Mootools
I've already included the webcomponents.js script.
Then, when I add the <link rel="import"> for polymer.html, the errors below start appearing and my Polymer components don't work.
The components work in isolation using the polymer-cli. Does anyone know what may be causing this issue?
EDIT: So this is what I have in my <head>
<script src="/media/bower_components/webcomponentsjs/webcomponents.js"></script>
<link rel="import" href="/media/bower_components/polymer/polymer.html">
(...sorry I cannot show more, private company code and what-not)
That's literally all I need in my page to raise the error mentioned above.
I'm starting to think there is some other JavaScript library (there are a lot) that might be interfering with Polymer, since I cannot replicate the issue on a brand new site.
I should also note there are no 404s. The polymer.html file does get loaded as expected in the browser; I verified this in the Network panel of the developer console.
I am trying to optimise the loading of Polymer elements in my Polymer-based web app. In particular, I am concentrating my effort on the initial startup screens. Users will have to log on if they don't have a valid JWT token held in a cookie.
index.html loads an application element <pas-app>, which in turn loads a session manager (<pas-session>). Since the normal startup will be when the user is already logged on, the element that handles input of user name and password (<pas-logon>) is hidden behind a <template is="dom-if"> element inside <pas-session>, and I have added the async flag to its HTML import line in that element as well, thus:
<link rel="import" href="pas-logon.html" async>
However, in Chrome (I don't experience this in Firefox, where HTML imports are polyfilled) this async seems to flow over to the embedded <script> element inside the custom element. In particular, I get a type error because the script that registers it as a custom element thinks Polymer is not a function.
I suspect I am using the wrong kind of async flag - is there a way to specify that the HTML import should not block the current element, but should block the scripts inside itself when loaded?
I think I had the same problem today and found this question when searching for a solution. When using importHref with async I get errors like [paper-radio-button::_flattenBehaviorsList]: behavior is null, check for missing or 404 import, and dependencies are not loaded in the right order. When I change to async = false, the error messages are gone.
It seems that this is a known bug in Polymer, or possibly Chrome: https://github.com/Polymer/polymer/issues/2522
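For reference, a rough sketch of the synchronous form with Polymer 1.x's importHref (the element and file names are the hypothetical ones from the question above):
// inside the <pas-session> element definition: load pas-logon.html lazily,
// but with async = false so the inline <script> in the import only runs
// once the import and its dependencies have loaded
this.importHref(
  this.resolveUrl('pas-logon.html'),
  function () {
    // loaded - now it is safe to flip the dom-if that stamps <pas-logon>
  },
  function () {
    // load failure
  },
  false // async flag
);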
Our AJAX framework works such that it sends back a snippet of HTML that might contain a <style> tag. We then take that snippet of HTML and set it as the innerHTML of an element. In IE 6/7 it appears to ignore the <style> tags, and so the returned HTML isn't styled properly.
I'm wondering if other people have run into similar problems, and if so, how they've dealt with them. I know I could use a JavaScript library (we use YUI) to dynamically fetch external style sheets, so I could convert this to an external style sheet. I'm just wondering if there are other ways to fix this.
Of course the best method is a pure DOM solution, i.e. use document.createElement to generate all the elements returned by the server.
However, there is (of course) an IE hack to work around some of the problems presented by innerHTML. Instead of sticking the response straight into the DOM, first create an element, set its innerHTML, then attach it to the DOM.
function responseHandler(response) {
    // build the fragment off-DOM first, then attach it
    var div = document.createElement('div');
    div.innerHTML = response.responseText;
    document.getElementById('ZE_ELEMENT').appendChild(div);
}
Inline styles should be recognized. A stylesheet block will not be, as stylesheets are loaded at the page's initial render time.
You can dynamically add classes to the existing stylesheet, but you'll have to parse your HTML fragment to grab what's there and do it in script.
see: Totally Pwn CSS with Javascript
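For completeness, a rough sketch of the usual pattern for injecting rules from script in a way old IE accepts (the selector below is just a placeholder):
function addStyles(cssText) {
  var style = document.createElement('style');
  style.type = 'text/css';
  if (style.styleSheet) {
    // IE 6-8: write through the element's styleSheet object
    style.styleSheet.cssText = cssText;
  } else {
    // standards-compliant browsers
    style.appendChild(document.createTextNode(cssText));
  }
  document.getElementsByTagName('head')[0].appendChild(style);
}

// e.g. after pulling the rules out of the returned HTML fragment:
addStyles('.ajax-fragment { color: #333; }');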
You can try jQuery's empty().append() instead of using html() or innerHTML for IE 6 to IE 8. It works for me when loading a full HTML document with inline styling. Here is a code sample:
var url = "inline.html";
$.ajax({
url: url,
success: function (html) {
$('#content').empty();
$('#content').append(html);
},
error: function () { return null; }
});
For more details, check the blog post "Solution: Inline CSS issue and Ajax call with IE6 to IE8".