JSF: change auto-generated meta elements to allow file downloads over SSL

I have a JSF 1.1 web application where I use SSL for, let's say, all pages. When I try to download a file with Internet Explorer 8, the classic security exception is raised saying that I cannot download the file.
So, using a listener, I added to all responses the headers suggested here: IE cannot download foo.jsf. IE was not able to open this internet site. The requested site is either unavailable or cannot be found
But that didn't solve the problem. Then I realized that the generated HTML pages also contain these elements:
<meta content="no-cache" http-equiv="Pragma" />
<meta content="no-cache" http-equiv="Cache-Control" />
<meta content="no-store" http-equiv="Cache-Control" />
Could this be the problem? How can I change these for, let's say, all or selected pages?
(I'm quite new to JSF.)
Thanks.

Those headers need to be set on the file download responses, not on the JSF responses. A PhaseListener runs on JSF responses only (and is a clumsy approach for this purpose anyway; a Filter would be better).
How and where exactly to set the headers depends on how you're serving the file download, which is not clear from the question.
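For illustration only, here is a minimal sketch of such a Filter, assuming the downloads are served under a URL pattern like /download/* (the pattern, class name, and header values are assumptions, not taken from the question):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

// Hypothetical example: map this Filter in web.xml to the URL pattern that
// serves file downloads. IE over SSL refuses to save a file whose response
// carries Cache-Control: no-cache/no-store or Pragma: no-cache, so this
// overrides those headers with values IE accepts.
public class DownloadHeaderFilter implements Filter {

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse res = (HttpServletResponse) response;
        res.setHeader("Cache-Control", "public"); // anything but no-cache/no-store
        res.setHeader("Pragma", "public");
        chain.doFilter(request, response);
    }

    public void init(FilterConfig config) throws ServletException {}

    public void destroy() {}
}

The key point is that these are real HTTP headers on the download response itself; the meta elements JSF renders into its HTML pages do not affect the download at all.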

Related

CSS not displaying properly in Sharepoint on Edge Browser (SEC7111 Error)

Hopefully I can explain this correctly. I have recently been moved from Windows 7 to a Windows 10 VM, and I'm trying to get a site for my team at work to display properly in Edge. I have a WebPart linking to CSS that is displaying everything as one large list instead of a table with dropdowns. When I open the HTML page on its own in Edge, it displays fine, but with the code in SharePoint it does not work correctly. Any ideas why this could happen?
[Screenshot: what should display]
[Screenshot: what is displaying in SharePoint]
EDIT
After opening developer tools, I found that I am receiving a SEC7111 error code on the CSS file that is being linked. I'm looking elsewhere for solutions too, but any help is greatly appreciated!
FINAL EDIT
With the SEC7111 error I found out that the file:// links I used for the CSS weren't going to work because they weren't considered secure (although I got the same error in IE, it never caused this display issue there, oddly). So I moved my linked CSS file to a secure folder in another SharePoint site I have, linked the CSS from there, and now it's working!
There are a few ways to solve your problem (it's better to share your code in your question to get a better answer). I offer the solutions below:
Solution 1
Please don't use file:// links for a site published on a web server. The HTML is rendered on the client, which cannot access files that are local to the server, and browsers treat file:// links on an HTTP/HTTPS page as insecure. You can read more about the security concerns and other details of the file protocol here: https://en.wikipedia.org/wiki/File_URI_scheme
Instead of the local file protocol, use an absolute or relative path to your CSS over HTTP/HTTPS.
Solution 2
Add the X-UA-Compatible meta tag or HTTP response header to force IE to render with a legacy document mode (5, 7, or 8).
X-UA-Compatible meta tag:
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=8" />
...
</head>
<body>
</body>
</html>
X-UA-Compatible HTTP response header:
X-UA-Compatible: IE=8

Twitter-Card Meta Tag Issue

URL in question: https://www.halleonard.com/viewpressreleasedetail.action?releaseid=10261
When you view the source in a browser and in Developer Tools, you can see all of the meta tags for Open Graph and Twitter. I have checked the Facebook Debugger and, aside from a few canonical issues, I'm fairly happy with the results.
I also plugged the above URL into a third-party Open Graph Debugger:
http://debug.iframely.com/
and all of the tags for Open Graph, Twitter, and the rest come back positive.
Why is it that Twitter's Card Validator is coming back with:
INFO: Page fetched successfully
INFO: 9 metatags were found
ERROR: No card found (Card error)
Any insight on how I get Twitter to display properly?
I think you may either be blocking or redirecting the Twitterbot agent.
I faked a Twitterbot agent with curl:
curl -A 'Twitterbot' 'https://www.halleonard.com/viewpressreleasedetail.action?releaseid=10261' -o ~/Desktop/what-twitterbot-sees.html
And this is what your server returns: there are 10 meta tags (9 if you exclude the <meta charset> one, which matches what the card validator reports), and there are no <meta name="twitter:*"> tags at all.
You can reproduce this in your browser if you can set a custom user agent string, which is possible with Google Chrome's developer tools.
I'm pretty sure there's some sort of redirection rule going on either at your web server level or in your application code.
According to developer.twitter.com, the user agent string I have used is correct:
Twitter uses the User-Agent of Twitterbot (with version, such as Twitterbot/1.0)
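If curl isn't at hand, the same check can be reproduced in a few lines of Java; this sketch simply fetches the page with a Twitterbot user agent and prints the meta tags (the class name and the redirect handling are illustrative choices, not part of the original answer):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class TwitterbotCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://www.halleonard.com/viewpressreleasedetail.action?releaseid=10261");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("User-Agent", "Twitterbot/1.0");
        conn.setInstanceFollowRedirects(false); // a 30x status here would confirm a redirect rule
        System.out.println("HTTP status: " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (line.contains("<meta")) {
                    System.out.println(line.trim()); // print only the meta tags
                }
            }
        }
    }
}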
I checked your source code with the W3C validator:
https://validator.w3.org/
It seems that Google Tag Manager is not installed properly: you have to move the Google Tag Manager (noscript) snippet (lines 10-13 of your source) to the top of the body section and remove it from the head section entirely.
You can find some information here:
https://support.google.com/tagmanager/answer/6103696?hl=en
and also (more specific to your case) in your Tag Manager page.
The error is coming from the twitter:card type you have selected: you forgot to add the twitter:image content. Both of these should be present:
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:image" content="ADD IMAGE URL">
Reference Link

.html Caching in HTML

I have a web application published on IIS. All of my JS files are called from my static HTML file called "Index.html". In that HTML file I include each JS file with a <script> tag, and in order to manage our versions and push updates without users having to clear their history and cache, I've added ?v={version} to the end of each JS file's URL, like this:
<script src="./app.js?v=20161226.1"></script>
After multiple version updates, I've noticed that users still need to refresh the page in order to get the latest Index.html. Looking in the Network section of Chrome's Developer Tools, I noticed that Index.html is loaded from the cache (it shows the "(from cache)" label).
After searching the web for a way to uncache .html files (because there is no ?v={version} for my .html file), I've found that adding:
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="pragma" content="no-cache" />
doesn't solve the issue, and my Index.html file is still loaded from the cache.
I update my web application every two weeks, and I can't expect users to clear their cache and history on every version update just so the latest .html file gets loaded instead of the cached one.
The only thing that helps is refreshing (F5); then Index.html is reloaded (not taken from the cache, so the latest version is shown). But if someone types the URL into the address bar, Index.html is still loaded from the cache.
Have I done anything wrong, and should I add anything else?
Is there anything to do to solve this issue at all?
Thanks!
Putting a query string on the end of a URL is a (good) hack to allow you to set the HTTP cache control headers to cache for a long time for infrequently changed resources and still force the new version to load on those occasions that you do change it.
If you are frequently updating your HTML, then just set the cache control headers to tell the browser to check for updates more frequently. Take advantage of ETags or If-Modified-Since instead of depending on an Expires header set far in the future.
NB: You have to use real HTTP headers. <meta http-equiv> is a bad joke.
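As a sketch of what that revalidation looks like in practice (the question is about IIS, where this is configured in web.config rather than coded, but the handshake is the same on any server; the Java below is purely illustrative and the names are invented):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

// Illustrative sketch of the revalidation handshake. "no-cache" does not
// mean "never store"; it means "ask the server before reusing the stored
// copy", which is exactly what a frequently updated Index.html wants.
public class IndexHtmlServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        long lastModified = indexFileLastModified(); // e.g. the file's mtime on disk
        long truncated = lastModified - (lastModified % 1000); // HTTP dates have 1s resolution

        resp.setHeader("Cache-Control", "no-cache"); // force revalidation on every load
        resp.setDateHeader("Last-Modified", truncated);

        long ifModifiedSince = req.getDateHeader("If-Modified-Since");
        if (ifModifiedSince != -1 && ifModifiedSince >= truncated) {
            // The browser's cached copy is still current: 304, no body re-sent.
            resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
            return;
        }
        // ... otherwise write the current Index.html to the response body ...
    }

    private long indexFileLastModified() {
        return 0L; // placeholder; read the real timestamp in an actual app
    }
}

An unchanged file then costs one cheap 304 round trip instead of a full download, while every update reaches users immediately.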

Prevent Browser File Caching

I need to disable caching for single files in all browsers.
I have a website that generates small video clips. There is a preview stage where the results can be watched.
An mp4 called preview.mp4 is displayed. When the user goes back, edits the video, and wants to preview it again, the old preview.mp4 is displayed even though the file on the server has changed.
How can I prevent the caching of this video file? Or what are the other ways to fix it?
Note: it's a single-page application, so I don't reload any HTML files, only PHP content. Hence the headers I set are not useful in this scenario:
<meta http-equiv="X-UA-Compatible" content="IE=Edge"/>
<meta http-equiv="cache-control" content="no-store" />
Thanks.
It's not the web page itself you should be concerned about, but the mp4 file which is downloaded and cached separately.
Ensure the response headers of the mp4 file prevent browser caching.
Cache-Control: no-cache
Hence the headers I set, are not useful in this scenario:
<meta http-equiv="cache-control" content="no-store" />
The problem here is that those are not headers.
They are HTML elements (which are part of the HTML document, which is the body of the HTTP response) which attempt to be equivalent to HTTP headers … and fail.
You need to set real HTTP headers, and you need to send them with the video file (rather than the HTML document).
The specifics of how you do that will depend on the HTTP server you use.
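For instance, if the clip were served through a Java servlet (the question mentions PHP, so take this purely as a sketch of the idea in another language; the file path and class name are invented), the headers would be set on the video response itself:

import java.io.*;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class PreviewVideoServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        File video = new File("/var/data/previews/preview.mp4"); // assumed location
        resp.setContentType("video/mp4");
        resp.setContentLength((int) video.length());
        // Real HTTP headers on the video response, not <meta> tags in some HTML page:
        resp.setHeader("Cache-Control", "no-store");
        resp.setHeader("Pragma", "no-cache");
        try (InputStream in = new FileInputStream(video);
             OutputStream out = resp.getOutputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}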
Further reading:
Caching in the HTTP specification
Caching Tutorial for Web Authors and Webmasters
Caching Guide in the Apache HTTPD manual
How to Modify the Cache-Control HTTP Header When You Use IIS
Try adding a cache key:
preview.mp4?cachekey=randNum
where randNum can be a timestamp, or you can use a random number generator to produce it.
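For example, if the page that embeds the video is generated server-side, something as simple as a timestamp works (a sketch; the variable name is arbitrary):

// Each render produces a different URL, so the browser treats the video as a new resource.
String previewUrl = "preview.mp4?cachekey=" + System.currentTimeMillis();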

Why use meta tag "Pragma" and "Expires" in head section of html

Why use the meta tags "Pragma" and "Expires" in the head section of an HTML page, like this?
Thanks.
<META HTTP-EQUIV="Pragma" CONTENT="no-cache">
<META HTTP-EQUIV="Expires" CONTENT="-1">
Using these tells the browser not to cache your webpage.
Disabling caching has some valuable advantages.
For example, when you update your files on the server, the browser has no cached copy of your webpage, so it is forced to load the updated content of your website.
One of the disadvantages is the impact on page load time. Since the browser has no cached copy, it will always download all of your assets from the server, which takes time and consumes bandwidth.
Try reading this article.
Both tags are meant to prevent browsers from caching the HTML page, and they usually do that. This means that access to the page may be slower especially if it is frequently visited. Probably most commonly, these tags are inserted by people who do not understand how caches work. See Caching Tutorial for Web Authors and Webmasters.
There are several ways to try to prevent caching. These specific tags have no official definition, and they do not conform to HTML5 CR.