I built a random sentence generator -- when you click an HTML button, a random sentence is generated beneath it. The generation is powered by a simple script and jQuery.
It works fine on my local machine: when I open index.html in a browser, everything goes smoothly.
But once I upload it to GitHub and visit the GitHub Pages URL, the generator stops working. Clicking the button does nothing.
Here's the script (it's all contained within the index.html file):
<script src="http://code.jquery.com/jquery-1.10.1.min.js"></script>
and
<script>
function sentenceLoad() {
    // declare arrays
    var nouns = ['Mulder, Scully, and'];
    var names = ['Assistant Director Skinner', 'the Cigarette Smoking Man', 'Alex Krycek'];
    var actions = ['are running from alien bounty hunters', 'are tracking a shapeshifter', 'are hunting a mutant serial killer'];
    var places = ['in the woods of New Jersey', 'in a government bunker', 'in Olympic National Forest'];

    // pick one random entry from each array
    var randNoun = nouns[Math.floor(Math.random() * nouns.length)];
    var randName = names[Math.floor(Math.random() * names.length)];
    var randAction = actions[Math.floor(Math.random() * actions.length)];
    var randPlace = places[Math.floor(Math.random() * places.length)];

    // place the random entries into the appropriate place in the HTML
    jQuery("h5").html("");
    jQuery("h5").append(randNoun + " ");
    jQuery("h5").append(randName + " ");
    jQuery("h5").append(randAction + " ");
    jQuery("h5").append(randPlace);
}
</script>
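(The button and target element aren't shown in the question; presumably the markup is something along these lines:)

<button onclick="sentenceLoad()">Generate</button>
<h5></h5>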
What would cause this to work locally, but not on GitHub Pages?
If you open up your Developer Tools pane (in Chrome, right-click on the page and choose Inspect), you'll see this error in the console:
Mixed Content: The page at 'https://bobbyfestgenerator.github.io/' was loaded over HTTPS, but requested an insecure script 'http://code.jquery.com/jquery-1.10.1.min.js'. This request has been blocked; the content must be served over HTTPS.
You need to load your script over HTTPS instead of HTTP.
This works locally because you're using the file:// scheme on your local machine (or http:// if you're running a local development server), and in that case the browser has no problem loading an external script over HTTP.
However, GitHub Pages serves your page over HTTPS (a secure connection). For security reasons, the browser won't load a script over plain HTTP when the page itself was loaded over HTTPS.
Just change the code in your <head> tag to load the script over HTTPS:
<script src="https://code.jquery.com/jquery-1.10.1.min.js"></script>
Today I saw a similar problem, but not because of HTTP/HTTPS: when I published to GitHub Pages, all of the CR/LF characters were removed from the source HTML. That's perhaps not a big deal for HTML with closing tags, but when the entire source page ends up on one line, including the JavaScript, any embedded // comment causes all the code after it (up to the closing script tag) to be unexpectedly commented out as well.
Here's an example. Before, as viewed in the editor:

code...
// a comment
some more code...

After the push to GitHub, with the CR/LF characters apparently removed by the GH Pages build, the same code collapses to one line, and all the remaining JavaScript ends up disabled after the comment:

code... // a comment some more code...
The (somewhat unsatisfying) solution is to remove the comment or, better, to use /* and */ comment wrappers instead of // at the beginning of the line.
Edit: the root cause seems to be the compress layout (layout: compress); see https://github.com/jekyll/jekyll/issues/8660
Related
I have an HTML-based manual. It was designed for and works in IE7 and below. It is installed locally on a computer (not web-server based, only local), but I need to make changes that conform to the rules of Chromium-based web browsers. Parts of the book work, but other parts fail. I cannot use any scripts/techniques that would alter standard security settings.
The primary issue is that the manual has links for graphics and tables that launch a "container" HTML page that is supposed to display them. But all that shows up is the blank "container" shell without the content. Console error messages for figures show errors occurring on lines that call document.write, along with issues with "cross-window communication". I am unclear whether the tables are failing for the same reason, since no console errors are displayed for them. I added --allow-file-access-from-files --allow-file-access --allow-cross-origin-auth-prompt to the shortcuts for Chrome/Edge and the figures began displaying correctly, but the tables still fail.
My background is not as a JavaScript coder, and I have slightly-beyond-basic HTML editing skills. The scripts currently in use were written by a JavaScript programmer before he moved on, and I am being instructed to make the transition. I'm sure I'm not explaining something correctly, since this is above my head, but if anyone can help with this, it would be greatly appreciated.
The links in use for figures are:
onClick="figureWindow('fig.png', 'fig title')"
The script in the resources folder:
function figureWindow(imgURL, imgName) {
    var settings = 'resizable=1,width=1010,height=800,left=10,top=35,titlebar=yes';
    var newName = 'Fig' + Math.floor(Math.random() * 1001);
    var openwin = window.open('../../../resources/FigWin.htm?loc=' + imgURL + '?doc=' + imgName, newName, settings);
}
The links in use for tables are:
onClick="javascript:tableWindow('table 1.txt')"
The script in the resources folder:
function tableWindow(tableURL) {
    var settings = 'resizable=1,width=1010,height=600,left=10,top=115,titlebar=yes';
    var myname = 'table_' + Math.floor(Math.random() * 1001);
    var openwin = window.open('../../../resources/TableWin.htm?loc=' + tableURL, myname, settings);
}
(basic breakdown of the manual structure)
Root
  Manual
    Chapter 1 (folder; the displayed HTML TOC)
      Tables (folder inside the Chapter 1 folder)
        Table 1 (txt file with contents)
        Table 2 (txt file with contents)
        Table 3 (txt file with contents)
      Page 1 (HTML page to be displayed and read)
      Page 2 (HTML page to be displayed and read)
      Page 3 (HTML page to be displayed and read)
    Figs (folder for all graphics to be displayed from the Pages)
  Resources
    CSS
    Scripts
    Fig container (file; supposed to receive the graphic filename and text coded on the Page)
    Table container (file; supposed to receive the table filename coded on the Page)
I have the basic shell of a Chrome extension done and have come to the point where I am trying to inject an HTML signature into Gmail, using code hosted on an unindexed page on my site. The reason I want to do this is to be able to include web fonts, something that for the life of me I can't figure out why Gmail hasn't allowed from their font library.
In any regard, as I said, I have a right-click context menu option ready to trigger a script from my js function page, and the extension loads without errors. I need to figure out the best way to inject the HTML into the email without losing any of the formatting that has been done on the page.
I have created the extension manifest, set the permissions on the context menu, and created a function to call back to the js page that will inject the signature.
var contextMenus = {};

contextMenus.createSignature = chrome.contextMenus.create(
    {
        "title": "Inject Signature",
        "contexts": ["editable"]
    },
    function () {
        if (chrome.runtime.lastError) {
            console.error(chrome.runtime.lastError.message);
        }
    }
);

chrome.contextMenus.onClicked.addListener(contextMenuHandler);

function contextMenuHandler(info, tab) {
    if (info.menuItemId === contextMenus.createSignature) {
        chrome.tabs.executeScript({
            file: 'js/signature.js'
        });
    }
}
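For context, a minimal manifest supporting this would look roughly like the following (a hypothetical sketch, not the asker's actual file; Manifest V2, to match the chrome.tabs.executeScript call above):

{
  "manifest_version": 2,
  "name": "Signature Injector",
  "version": "0.1",
  "permissions": ["contextMenus", "activeTab"],
  "background": {
    "scripts": ["background.js"]
  }
}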
The end result is that nothing enters the page, and I get massive cross-site errors because the domain is not the same, obviously. This has obviously been solved, since there are numerous signature extensions out there. I would probably use one of theirs, but a) I want to build this on my own, and b) they all want you to use their templates; none of them that I have seen will let you just use your own code.
So, any ideas?
I'm slowly building a website, and it's just saved on my desktop right now. I'm trying to place Disqus on one of my pages: I pasted the code into the HTML document, but I'm not getting anything on my page. I was able to get my Twitter widget working on a different page just by pasting in the code that was given to me, and Disqus gives the same kind of instruction (paste the universal code into your site), but nothing shows up.
Do I have to do something with a CSS file to get it showing? I was searching through the settings in Disqus, and one of them lets me set the website URL, but my website is not live; it's just a folder on my desktop containing my HTML and CSS files.
I created a test HTML document in the folder containing all my HTML documents, but I only get the sentence contained in the paragraph tag.
<! DOCTYPE html>
<html>
<head>
<title>Test-Disqus</title>
</head>
<body>
<p> Testing Disqus.</p.>
<div id="disqus_thread"></div>
<script type="text/javascript">
/* * * CONFIGURATION VARIABLES * * */
var disqus_shortname = 'myusername'; /***changed for this question*///
/* * * DON'T EDIT BELOW THIS LINE * * */
(function() {
var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
(document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();
</script>
<noscript>Please enable JavaScript to view the comments powered by Disqus.</noscript>
</body>
</html>
There are a couple of issues with your HTML, though they're not the root cause of your problem:
1) It's <!DOCTYPE html>, not <! DOCTYPE html>.
2) Your closing paragraph tag has a "."; it should be </p>, not </p.>.
You said, "I'm slowly building a website and it's just saved on my desktop right now." If you're opening this test file directly from your desktop in a web browser, the Disqus module will not load. Disqus restricts the comment module to loading only on trusted domains set by you. You can check the trusted domains by logging into Disqus -> Admin -> Settings -> Advanced.
You can add additional trusted domains if you need to. However, if your trusted domain is "xyz.com" and you load your test page from your desktop, the trusted domain will not match.
You need to run a web server to get this working; I recommend MAMP for local development. MAMP will most likely start on port 8888 or 8080. This will allow you to access your test file by going to http://localhost:8080/test.html. After that, you can try adding localhost as a trusted domain, or create an entry in your hosts file.
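For the hosts-file route, an entry along these lines (using the hypothetical xyz.com trusted domain from above) points a subdomain of your real domain at your own machine:

127.0.0.1   local.xyz.com

You would then browse to http://local.xyz.com:8080/test.html, so the page loads from a hostname under the domain Disqus already trusts.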
I have to send an email to several hundred users. Each will have a link tailored to the recipient, so I need to generate the emails in script/code. I'm no Notes developer, so I can't do this in Notes; I'm using C# and pulling the list out of a SQL database.
There are some constraints:
The site that the link points to uses Integrated Windows Authentication.
The sender wants the link to be a button, rather than text.
The vast majority of recipients are running Lotus Notes 7.
I've tried creating an HTML mail but have had problems:
If I use a form with a submit button whose action points to the link, Notes tries to use its internal browser, which fails (because the site uses Integrated Windows Authentication).
If I use an a href tag with an img tag inside it, pointing to an image on a web server, Notes refuses to display the image - I just get the red X box - even though the tags are valid when embedded in a web page.
Anyone know how I can do this?
I finally found a method that works: embedding the image in the email itself. I found the solution here. I'll include the critical stuff, just in case.
There are three key components to the email: the plain-text version, the HTML version and the image, all constructed as AlternateViews:
string imagePath = @"C:\Work\images\clickhere.jpg";
AlternateView imageView = new AlternateView(imagePath, MediaTypeNames.Image.Jpeg);
imageView.ContentId = "uniqueId";
imageView.TransferEncoding = TransferEncoding.Base64;
:
//loop to generate the url and send the emails containing
AlternateView plainTextView = AlternateView.CreateAlternateViewFromString(
BuildPlainTextMessage(url), null, "text/plain");
AlternateView htmlView = AlternateView.CreateAlternateViewFromString(
BuildHtmlMessage(url), null, "text/html");
//set up MailAddress objects called to and from
:
MailMessage mail = new MailMessage(from, to);
mail.Subject = "ACTION REQUIRED: Do this by then or else";
mail.AlternateViews.Add(plainTextView);
mail.AlternateViews.Add(htmlView);
mail.AlternateViews.Add(imageView);
//send mail using SmtpClient as normal
:
//endloop
BuildHtmlMessage(string) and BuildPlainTextMessage(string) just return strings containing the messages. BuildHtmlMessage includes this to display the image inside a link to 'url':
sb.AppendLine("<div>");
sb.AppendFormat("<a href=\"{0}\" target=\"_blank\">", url);
sb.Append("<img alt=\"Click here button image\" hspace=0 src=\"cid:uniqueId\" ");
sb.Append("align=baseline border=0 >");
sb.Append("</a>");
sb.AppendLine("</div>");
Hope this is of use to someone else, sometime.
I imagine that no matter what kind of link you use (button or text), you're going to run into the same problem of the Notes client launching its internal browser. That's a preference in the Notes client, and it could be set differently on every machine.
I would first try a standard text link to see if it behaves the same way. If for some reason that does work, you can at least deliver a workaround.
Regarding the image issue: is the image coming from a server that uses Windows Authentication? Make sure the image is open to the public and doesn't require authentication to be seen (test in Firefox; if you don't get a password prompt, you're safe).
I hate to say it, but you might have to ask users to copy a URL from their email into a specific browser. At least you'd know that would work.
I agree with Ken about the internal-browser preference. The image not displaying - showing a red X - might be a preference as well. I don't recall whether it was available in R7, but in R8 there is a preference to not show remote images automatically; it is a security feature. It could also be a proxy issue: if recipients have to log in to their internet proxy server to reach the internet, and your image is on the internet...
There is also this, but it fires off a security warning to the user with my Notes config:
<input type='button'
       onclick="document.location.href='http://server/path';"
       value='Click here'
       id='buttonID' class='button'
       xstyle='background-color:red;color:white;'/>
Does anyone know of an extension for Firefox, or a script or some other mechanism, that can monitor one or more local files? Firefox would auto-refresh or otherwise update its canvas when it detected a change (of timestamp) in the file(s).
For editing CSS, it would be ideal if just the CSS could be reloaded, rather than a full HTML re-render.
Effectively it would enable similar behaviour to Firebug with its dynamic HTML/CSS editing, only through external files.
Live.js
From the website:
How?
Just include Live.js and it will monitor the current page including local CSS and Javascript by sending consecutive HEAD requests to the server. Changes to CSS will be applied dynamically and HTML or Javascript changes will reload the page. Try it!
Where?
Live.js works in Firefox, Chrome, Safari, Opera and IE6+ until proven otherwise. Live.js is independent of the development framework or language you use, whether it be Ruby, Handcraft, Python, Django, .NET, Java, PHP, Drupal, Joomla or what-have-you.
It has the huge benefit of working with IETester, dynamically refreshing each open IE tab.
Try it out by adding the following to your <head>
<script type="text/javascript" src="http://livejs.com/live.js"></script>
Have a look at the FileWatcher extension:
https://addons.mozilla.org/en-US/firefox/addon/filewatcher/
- it's a WebExtension, so it works with the latest Firefox
- it has a native app (to be installed locally) that monitors watched files for changes using native OS calls (no polling!) and notifies the WebExtension to let it reload the web page
- reload is driven by rules: a rule contains the page URL (with regular expression support) and its included/excluded local source files
- open source: https://github.com/coolsoft-ita/filewatcher
DISCLAIMER: I'm the author of the extension ;)
I would recommend Live.js, but it has the following advantages and disadvantages.
Advantages:
1. Easy setup
2. Works seamlessly across browsers (Live.js works in Firefox, Chrome, Safari, Opera and IE6+)
3. Doesn't add an irritating refresh interval to the browser, which matters when you want to debug while designing
4. Only refreshes when you save a change (Ctrl+S)
5. Saves CSS edits directly from Firebug (I haven't used that feature, but their site, http://livejs.com/, says it's supported)
Disadvantages:
1. It will not work over the file protocol, e.g. file:///C:/Users/Admin/Desktop/livejs/live.html
2. You need a server to run it, e.g. http://localhost
3. You have to remove it when deploying to staging/production
4. It isn't served from a CDN: I tried cheating and linking directly to http://livejs.com/live.js, but that doesn't work; you have to download it and serve it locally.
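In other words, the include ends up pointing at a local copy rather than the CDN (hypothetical local path):

<script type="text/javascript" src="js/live.js"></script>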
XRefresh with Firebug.
Firefox has an extension called MozRepl, and Emacs can plug into it with moz-reload-on-save-mode.
When it's set up, saving the file forces a refresh of the browser window.
There are some IDEs that contain this ability (they'll have a pane within them, or some other means, to auto-refresh a page on save).
If you want to do this yourself, a quick hack is to set a meta refresh on the page to a low value - one or two seconds:
<!-- will refresh the page content every second -->
<meta http-equiv="refresh" content="1" />
You could just place a JavaScript interval on your page and have it query a local script that checks the last-modified date of the CSS file, reloading the stylesheet if it changed.
jQuery Example:
var modTime = 0;
setInterval(function() {
    // ask the server whether main.css changed since modTime
    $.post("isModified.php", {"file": "main.css", "time": modTime}, function(rst) {
        if (rst.time != modTime) {
            modTime = rst.time;
            // reload the stylesheet by replacing its <link> tag
            $("head link[rel='stylesheet']:eq(0)").remove();
            $("head").prepend($(document.createElement("link")).attr({
                "rel": "stylesheet",
                "href": "main.css?t=" + modTime // cache-bust with the new timestamp
            }));
        }
    }, "json");
}, 5000);
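The server-side half (the isModified.php queried above) isn't shown; here is a minimal sketch of the same idea in Node.js, purely as an illustration (the file parameter and port are placeholders, and it trusts the client-supplied path, so it's for local development only):

// isModified equivalent: report a file's last-modified time as JSON
var http = require("http");
var fs = require("fs");
var querystring = require("querystring");

http.createServer(function (req, res) {
    var body = "";
    req.on("data", function (chunk) { body += chunk; });
    req.on("end", function () {
        var params = querystring.parse(body); // e.g. file=main.css&time=0
        fs.stat(params.file, function (err, stats) {
            res.setHeader("Content-Type", "application/json");
            res.end(JSON.stringify({ time: err ? 0 : stats.mtimeMs }));
        });
    });
}).listen(8080);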
Browsersync can do this from the server side / outside of the browser.
It can achieve more repeatable results and workflows that don't require so much clicking.
This will serve a page and refresh on change
cd static_content
browser-sync start --server --files .
It also allows a scripting mode.
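The scripting mode boils down to the same options via the Node API; a minimal sketch (assuming the browser-sync npm package is installed, using the same directory as above):

// serve ./static_content and reload the browser on any change beneath it
var bs = require("browser-sync").create();

bs.init({
    server: "./static_content",
    files: ["./static_content/**/*"]
});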
This is certainly hacky, but if you want to work locally without making any external requests (to live.js, for example) or running any local server, I think this might be useful. It's not specific to web development; you can adapt a similar strategy to any other workflow.
You will need two tiny tools (which are present in almost all distribution repos): inotify-tools and xdotool.
First get the ID of your Firefox and your editor window using xdotool.
$ xdotool search --name "Mozilla Firefox"
60817411
60817836
$ xdotool search --name "Pluma" # Pluma is my editor
94371842
Depending on the number of processes running, you will get one or more window IDs. Use xdotool windowactivate <ID> to find out which one you want (the focus changes to the respective window).
Use inotifywait -e close_write to monitor changes to your local file; when you save the file in your editor, focus switches to the browser, it reloads (xdotool key CTRL+R), and focus switches back to your editor. This is so fast you will barely notice it.
Also, inotifywait exits on a change, so you might have to run it in a loop. Here is a minimum working example (in Bash, in your working directory).
while /usr/bin/true
do
inotifywait -e close_write index.html;
xdotool windowactivate 60817411; # Switch to Firefox
xdotool key CTRL+R; # Reload Firefox
xdotool windowactivate 94371842 # Switch back to Pluma
done
You can use inotifywait to watch the entire directory, or just selected files in it.
You can write a script to automate this easily.
This works on Linux (I've tested it on Void Linux).
You can use live.js with a Tampermonkey script to avoid having to include https://livejs.com/live.js in your HTML file.
// ==UserScript==
// @name         Auto reload
// @author       weirane
// @version      0.1
// @match        http://127.0.0.1/*
// @grant        none
// ==/UserScript==
(function() {
'use strict';
if (Number(window.location.port) === 8000) {
const script = document.createElement('script');
script.src = 'https://livejs.com/live.js';
document.body.appendChild(script);
}
})();
With this Tampermonkey script, the live.js script will be automatically inserted into pages whose address matches http://127.0.0.1:8000/*. You can change the port according to your needs.
I think you can solve this with AJAX requests at a set interval: request the CSS file, and if you don't get a "not modified" response, delete your CSS and load it again. For dynamic files, make a request and store the response; then, on every subsequent request to that file, compare the new response to the latest stored one.
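A rough sketch of that idea with a polling loop (the file name and interval are placeholders; it relies on the server sending a Last-Modified header):

// poll the stylesheet's Last-Modified time and swap the <link> when it changes
var lastModified = null;
setInterval(function () {
    fetch("main.css", { method: "HEAD", cache: "no-store" }).then(function (res) {
        var mod = res.headers.get("Last-Modified");
        if (lastModified && mod !== lastModified) {
            var link = document.querySelector("link[rel='stylesheet']");
            link.href = "main.css?t=" + Date.now(); // cache-bust to force a re-download
        }
        lastModified = mod;
    });
}, 2000);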