HTML <base> tag in email

We have a content-managed solution (SDL Tridion, to be specific; however, the question is more general), which includes multiple sites with content of different languages. They all share a number of Razor-based templates, which are used to render HTML fragments with specific injected content when pages are published.
CRM is also managed through the CMS and the same templating is used for the creation of email newsletters. These HTML emails contain images, which are published out to whatever site manages the distribution list in question. Because the templating system is generic and the CMS has no concept of the absolute URLs of the final product, these images are all embedded with relative addresses. We have the capacity to apply an absolute URL as metadata to the different websites in the CMS and write .Net extensions to format these URLs into rendered image tags; however, this would add considerable overhead to this piece of work.
We can resolve this by using a <base href="..." /> tag in the <head> section of the email's markup. This seems to work in Outlook, at least; however, I have not been able to find much up-to-date information on which email clients do and do not support this tag.
The question, then: How widely supported among email clients - particularly browser-based ones - is the <base> tag?

Unfortunately, it won't work in most web-based email clients (Hotmail, Gmail), which typically add up to about 30% of recipients.
Why it won't work:
Most web-based clients inject whatever's inside the body tag of your email and strip out everything else, including the head. So, if you send:
<html>
<head><base ...></head>
<body><p class="youremail">Email</p></body>
</html>
The email client does this:
<html>
<head><Email client head></head>
<body>
<email client wrapper>
<email>
<p class="youremail">Email</p>
</email>
<email client wrapper>...
</body>
So your base tag will be stripped. Even if it weren't, since it is not included in the email client's head, it would be ignored by the browser.
Unfortunately, absolute paths on images are the way to go. I have gotten around similar problems in the past by using a 'preflight processor'. You could use that to read the <base> href and apply it to all the images before returning the finished HTML.
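As a minimal sketch of that idea (assuming the HtmlAgilityPack library and a base URL taken from the site metadata you mention; the names here are illustrative), the preflight step could rewrite relative img src values into absolute ones at publish time:
using System;
using HtmlAgilityPack; // assumed dependency; any HTML parser would work

public static class EmailPreflight
{
    // Rewrites relative <img src="..."> values to absolute URLs against baseUrl.
    public static string AbsolutizeImages(string html, Uri baseUrl)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        // SelectNodes returns null when no <img> tags match.
        var images = doc.DocumentNode.SelectNodes("//img[@src]");
        if (images == null)
            return html;

        foreach (var img in images)
        {
            var src = img.GetAttributeValue("src", "");
            // Leave URLs that are already absolute untouched.
            if (!Uri.TryCreate(src, UriKind.Absolute, out _))
                img.SetAttributeValue("src", new Uri(baseUrl, src).ToString());
        }

        return doc.DocumentNode.OuterHtml;
    }
}
Calling something like EmailPreflight.AbsolutizeImages(renderedHtml, new Uri("https://www.example-site.com/")) as the last publishing step lets the shared templates keep their relative paths, with the absolute URL living only in the per-site metadata.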

I couldn't tell whether you're using Razor or not, but if you are, you can do this inside a Razor view:
src="#Request.Url.GetLeftPart(UriPartial.Authority)~/images/screenshot.png"

Related

ASP.NET Core read HTML email and display in webapp

I am working on a custom email webapp for my company. I have browsed the web and can't find an exact answer. Maybe I don't know what to "google" for exactly. I have never done anything email related.
In our database we are saving emails as a string. It contains everything:
<!DOCTYPE>, <head>, <body>, <style>, etc. How would I go about displaying this in my webapp?
I tried just pasting the HTML in a <div> inside one of my components but the styles would not load properly.
How would I go about reading/parsing the raw HTML string so I can display it in my webapp? Is there a NuGet package anyone recommends?
I am using Blazor on .NET 5.
As you may have found, putting an email HTML body into an existing web page will almost certainly result in CSS pollution, since the email HTML may bring its own styles (or pick up styles from the web page).
My solution to this was to load and display the email HTML inside an <iframe> tag, as this isolates the email from the page very effectively. The URL for the iframe is a separate view on the server, which just returns the raw email HTML (remember to add security to this if your app needs it).
There are probably better ways to do this in CSS now without having to resort to an iframe.
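If you do go the iframe route, a rough ASP.NET Core sketch of the server side (IEmailStore and the route are made-up names; swap in however you actually load the saved string) is just an action that returns the stored HTML with a text/html content type:
using Microsoft.AspNetCore.Mvc;

public interface IEmailStore
{
    string GetHtml(int id); // reads the raw email HTML saved in the database
}

[Route("emails")]
public class EmailBodyController : Controller
{
    private readonly IEmailStore _emails;

    public EmailBodyController(IEmailStore emails) => _emails = emails;

    [HttpGet("{id}/body")]
    public IActionResult Body(int id)
    {
        // TODO: authorize here so users can only load their own emails.
        // The stored string already contains <!DOCTYPE>, <head>, <style>, etc.,
        // so it is returned as-is and rendered inside the iframe's own document.
        var html = _emails.GetHtml(id);
        return Content(html, "text/html");
    }
}
The Blazor component then just points at it, e.g. <iframe src="emails/@EmailId/body" style="width:100%;height:600px;border:0"></iframe>, and any <style> block in the email stays inside the frame.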

Is it safe to use iframe to prevent malicious files

I am making a web-based chatting platform where people can chat and also share files. If a hacker injects a malicious file, there is a risk that my website may get hacked. I am thinking about embedding the files shared by users from a different domain name with different hosting, so the markup will look like:
<iframe src="server-url.com?file=filename.ext"></iframe>
And the iframe src URL will respond with:
<html>
<head></head>
<body>
<img src="filename.ext" >
</body>
</html>
Does this technique prevent my website from getting hacked? If not, what is the best way to protect my website from malicious files?
It all depends on the site you are using the iframe for. If the site is secure and has SSL (referring to the third-party site you are iframing), then you should be good.
If you do want to make it more secure, you could use the sandbox attribute. I have a link below from W3Schools that explains more about it. Sandbox will usually block most content, but there are attribute values that allow you to make exceptions.
For example, let's say your iframed chat uses scripts to function; you could do something like this:
<iframe src="server-url.com?file=filename.ext sandbox="allow-scripts"></iframe >
Information about the iframe sandbox attribute from W3Schools

Is it wise to use the noscript tag for tracking visitors who have JavaScript disabled, since noscript is deprecated?

Piwik (open-source web analytics) uses JavaScript to track visitors. However, for visitors who have JavaScript disabled, they suggest using the <noscript> tag:
When a visitor has disabled JavaScript, or when JavaScript cannot be used, you can use an image tracking link to track visitors. Generate the link below and copy-paste the generated HTML in the page. If you're using this as a fallback for JavaScript tracking, you can surround it in <noscript></noscript> tags.
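For reference, the generated fallback Piwik is describing looks roughly like this (the tracker URL and site id here are placeholders):
<noscript>
<img src="https://piwik.example.com/piwik.php?idsite=1&amp;rec=1" style="border:0" alt="" />
</noscript>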
Using a js vs no-js class on the body isn't really an option, since that would not be able to prevent the image from loading. So the noscript tag seems like my only option, but will it work as expected since it is deprecated?
Visitors that have JavaScript disabled are, in practice:
Screen readers for blind people - I don't see a reason they should download images
Some bored admin viewing the page with a text-mode browser - no images downloaded there either
Googlebot or some other bot crawling pages - they do whatever they want with such links, for example ignore them
Scripts saving the page for offline usage - they may or may not save the image, depending on their settings (in most cases not)
The only place where such image tracking could work is HTML mail messages, but those images are normally blocked, or you could end up in a spam filter.

Rendering behavior of remotely linked files and scripts

Let's say in my web page I have added images, files, and scripts which are not available locally with respect to the website's physical path, e.g.:
<script src="http://libraryheaven.com/somescript.js"></script>
<link rel="stylesheet" type="text/css" href="http://www.styles.com/plugs/mystyle.css"/>
<img src="http://www.google.com/logo.png">
When the browser starts rendering the response HTML and resolving these dependencies, will it make separate HTTP requests to fetch the files from their remote locations? Or will it send a request to the website's own web server to provide them, with that server fetching the files and responding to the client? Or is the web server intelligent enough to fetch and send all the dependencies itself? Please explain; I haven't read the theory of rendering, so I don't know how it works.
When you enter a URL in your web browser, you tell the browser to fetch whatever can be found at that particular URL. In most cases that is an HTML file, or some server code which produces HTML on the fly.
When the browser gets the HTML, it tries to interpret it (that's its primary task, after all).
When the interpreting browser "meets" tags with src or href attributes, it makes a separate request per attribute to the URL in the attribute value. These URLs usually point to images, style sheets, and JavaScript files. The browser fetches whatever it finds there and tries to interpret the downloaded resources as well (show the image, apply the style sheet, execute the JavaScript).
So to answer your question:
Yes, the browser will download all the resources by itself from the URLs in the aforementioned attributes.
No, the web server does not take care of any external references in the served/produced HTML.
No, the web server does not try to play smart here; it does not give you more than you ask for.
So basically if you put something like this in HTML
<img src="http://www.google.com/logo.png" />
then you know that any browser interpreting this HTML will try to fetch the image logo.png from Google directly.

Embedding an iframe in a MediaWiki-based wiki

I have been trying, without any luck, to embed an iframe in a wiki page that I'm working on. It is based on MediaWiki, but it is not the actual Wikipedia.
I've also tried googling this topic, but that has been fruitless. I would appreciate any advice on this, please.
Thanks.
There's the easy way and the slightly harder way.
The easy way assumes you don't have a publicly editable wiki (i.e. non-logged in users cannot edit and creating an account is not automatic).
If that's the case, simply set $wgRawHtml to true and you will be able to input any arbitrary HTML into your pages by wrapping it inside the <html> tag.
Here's an example:
This is '''wikitext'''.
<html>
This is <em>HTML</em>.
</html>
Now, if you have a publicly editable wiki you most definitely don't want users to be able to add any and all HTML to your wiki. In that case you can use the Verbatim extension. This will embed the contents of a page in the MediaWiki namespace as-is, preserving any HTML markup.
For example:
<verbatim>Foo</verbatim>
Would embed the contents of MediaWiki:Foo.
Hope that helps.
I suggest you use the iDisplay extension.
The iDisplay extension allows MediaWiki pages to embed external web pages. It also lets you put a blocking page in front of the embed, so the external page is not loaded until the user chooses to load it.
It's implemented with an <iframe>.