Is there a simple way (or what could be the simplest way) to include an HTML code fragment, which is stored in a text file, into a page's code?
E.g. the text file fragment.txt contains this:
<b><i>External text</i></b>
And the page code should include this fragment "on the fly". (Without PHP ...?)
The JavaScript approach seems to be the preferred one, but with the examples below you can run into problems with cross-origin requests (localhost to internet and vice versa), or into security problems when including external scripts that are not served via HTTPS.
An inline solution without any external libraries would be:
<!DOCTYPE html>
<html>
<body>
<div id="textcontent"></div>
<script>
var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function() {
  // Only act once the request has completed successfully
  if (xhttp.readyState === 4 && xhttp.status === 200) {
    // innerHTML, so the markup in the fragment is rendered
    document.getElementById('textcontent').innerHTML = xhttp.responseText;
  }
};
xhttp.open("GET", "content.txt", true);
xhttp.send();
</script>
</body>
</html>
Here you need a file content.txt in the same folder as the HTML file. The text file is loaded via AJAX and then put into the div with the id textcontent. Error handling beyond the basic status check is not included in the example above. Details about XMLHttpRequest can be found at http://www.w3schools.com/xml/xml_http.asp.
EDIT:
As VKK mentioned in another answer, you need to put the files on a server to test this; otherwise you get cross-origin errors like XMLHttpRequest cannot load file:///D:/content.txt. Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https, chrome-extension-resource.
You need to use JavaScript to do this (or perhaps an iframe, which I would avoid). I'd recommend using the jQuery framework. It provides a very simple DOM method (load) that allows you to load the contents of another file into an HTML element. This is really intended for AJAX calls, but it would work in your use case as well. The fragment.txt would need to be in the same server directory as the HTML page (if it's in a different directory, just add a path).
The load method is wrapped in the $(document).ready event handler since you can only access/edit the contents element after the DOM (a representation of the page) has been loaded.
Most browsers don't support local AJAX calls (the load method uses AJAX); typically the HTML and txt files would be uploaded to a server, and the HTML file would then be accessed on the client. Firefox does support local AJAX though, so if you want to test it locally, use Firefox.
<!DOCTYPE html>
<html>
<head>
<script src="https://code.jquery.com/jquery-2.2.4.js"></script>
<script>
$(document).ready(function() {
$("#contents").load("fragment.txt");
});
</script>
</head>
<body>
<div id="contents"></div>
</body>
</html>
With JavaScript. I use it myself.
Example:
<!DOCTYPE html>
<html>
<script src="http://www.w3schools.com/lib/w3data.js"></script>
<body>
<div w3-include-html="content.html"></div>
<script>
w3IncludeHTML();
</script>
</body>
</html>
I'm deploying a Google web app to write commutative diagrams with LaTeX/Xy-pic.
In the head of the HTML page I put the following configuration:
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
extensions: ["tex2jax.js"],
jax: ["input/TeX","output/HTML-CSS"],
"HTML-CSS": {
styles: {".MathJax_Preview": {visibility: "hidden"}}
},
tex2jax: {inlineMath: [["$","$"],["\\(","\\)"]]},
TeX: {extensions:
["AMSmath.js","AMSsymbols.js","http://sonoisa.github.io/xyjax_ext/xypic.js"]}
});
</script>
<script type="text/javascript"
src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js">
</script>
The problem is that the file http://sonoisa.github.io/xyjax_ext/xypic.js is not loaded because it comes from an http source. This is the message I read in the console:
MathJax.js:19 Mixed Content: The page at 'https://script.google.com/' was loaded over HTTPS, but requested an insecure script 'http://sonoisa.github.io/xyjax_ext/xypic.js?V=2.7.5'. This request has been blocked; the content must be served over HTTPS.
I tried to use https://sonoisa.github.io/xyjax_ext/xypic.js instead, but that doesn't work at all.
Any suggestions?
One way to prevent the error message described in the question is to copy the code from the referenced JavaScript library into the Google Apps Script project.
The above could be done in several ways, depending on how you prefer to manage your code, but according to the best practices at https://developers.google.com/apps-script/guides/html/best-practices#separate_html_css_and_javascript, HTML, CSS, and JavaScript should be kept in separate files. This implies using a template like the following:
<!DOCTYPE html>
<html>
<head>
<base target="_top">
<?!= include('Stylesheet'); ?>
</head>
<body>
<h1>Welcome</h1>
<p>Please enjoy this helpful script.</p>
<?!= include('JavaScript'); ?>
</body>
</html>
Where JavaScript is the name of the file holding the JavaScript code. The name could actually be almost anything that makes sense to you, and you could even split your JavaScript across several files, such as one for your own code and another for the referenced JavaScript library.
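For reference, the include() helper used by the template above is the one suggested in Google's best-practices guide; on the server side it is simply:
function include(filename) {
  // Returns the raw content of the named HTML file in the project,
  // so the <?!= include('...') ?> scriptlets above can inline it.
  return HtmlService.createHtmlOutputFromFile(filename).getContent();
}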
I am following the introduction to Leaflet from https://maptimeboston.github.io/leaflet-intro/. At the first Rat Map, my code failed to show the rodent objects/locations on the map. I copy/pasted the tutorial code directly and still failed to get objects on my map. All of the necessary files are in the same directory as the HTML file being used (and are appropriately named).
I'm new to HTML, GeoJSON, and have been unsuccessful in finding a method that I could use to troubleshoot. The data files are complete and have all of the values/objects expected. I'm used to Python/R/VBA, so not having an error message is new to me as well.
I am running the HTML file through a Chrome browser. The HTML files are being written in Sublime Text.
//make sure you have the jQuery and rodent GeoJSON files in HTML directory
<html>
<head>
<title>A Leaflet map!</title>
<link rel="stylesheet" href="http://cdn.leafletjs.com/leaflet-0.7.3/leaflet.css"/>
<script src="http://cdn.leafletjs.com/leaflet-0.7.3/leaflet.js"></script>
<script src="jquery-2.1.1.min.js"></script>
<style>
#map{ height: 100% }
</style>
</head>
<body>
<div id="map"></div>
<script>
// initialize the map
var map = L.map('map').setView([42.35, -71.08], 13);
// load a tile layer
L.tileLayer('http://tiles.mapc.org/basemap/{z}/{x}/{y}.png',
{
attribution: 'Tiles by MAPC, Data by MassGIS',
maxZoom: 17,
minZoom: 9
}).addTo(map);
// load GeoJSON from an external file
$.getJSON("F://FinanceServer//HTML//rodents.geojson",function(data){
// add GeoJSON layer to the map once the file is loaded
L.geoJson(data).addTo(map);
});
</script>
</body>
</html>
I was expecting to see something resembling the third map from the aforementioned tutorial site.
The URL to your local file should never work, especially as an absolute path.
Browsers prevent you from accessing the client file system, for well known security reasons.
Even if you open your HTML page directly from the file system (with the file:// protocol), the Chrome browser prevents you from making AJAX requests to other local files. Last time I tried, it worked in other browsers, though.
Even if you use another browser, your URL should be relative, or should specify the protocol (or start with a double slash) to make it absolute.
To avoid most of these limitations, the standard practice in web development is to serve files with a small local server.
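For instance, a minimal sketch of the corrected Leaflet call, assuming rodents.geojson sits next to the HTML file and both are served over HTTP:
// Relative URL: the browser resolves it against the page's own
// origin, so no file-system path (F://...) is needed.
$.getJSON("rodents.geojson", function(data) {
  L.geoJson(data).addTo(map);
});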
I have an object tag like this:
<object type="text/plain" data="http://www.theurl.com/thefile"></object>
The file I am accessing has no file-extension, but I would like to embed it as plain text. However, this code just causes a download of the file to start.
Is there any way to fix this?
You can use the built-in fetch() method and access the file with .then() afterwards.
<script>
fetch("http://www.theurl.com/thefile")
  .then((r) => r.text())
  .then((d) => console.log(d));
</script>
But that may throw a CORS error, so you have to enable cross-origin access on the server hosting the document; otherwise the browser forbids you to fetch it.
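To sketch what "enabling cross-origin access" means on the serving side (assuming a Node.js server here; any stack that sends the same response header works):
// Hypothetical Node.js handler serving "thefile": the
// Access-Control-Allow-Origin header is what allows a page on
// another origin to fetch() this response.
const http = require("http");
const fs = require("fs");

http.createServer((req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Content-Type", "text/plain");
  const stream = fs.createReadStream("./thefile");
  stream.on("error", () => { res.writeHead(404); res.end("Not found"); });
  stream.pipe(res);
}).listen(8080);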
Here is a working example of fetch():
<script>
fetch('https://upload.wikimedia.org/wikipedia/commons/7/77/Delete_key1.jpg')
  .then((r) => r.text())
  .then((d) => console.log(d));
</script>
I need to remotely control a solenoid with an Arduino, from about 2000 feet away. So far, it works: I designed a control circuit that fires based upon a logic-level signal from pin 9.
My problem: the initial Arduino code sent up a web page over ethernet each time the form was submitted, but if the user tried to toggle the state too quickly, the transmission was interrupted and the whole system puked. It was also slow to load.
My attempted solution: I created an HTML document on a local page to do what I need done, and indeed it does: I can control the Solenoid. However once the links which control the commands are submitted, there's no redirect back to the local control page, and after much Google-fu I can't seem to implement it in this way. Is this possible? Is this a good approach?
<HTML>
<HEAD>
<TITLE>Sensor-Cleaning Control</TITLE>
</HEAD>
<BODY>
<H1>Solenoid Remote Actuation</H1>
<hr />
<br />
<a href="http://192.168.0.88/?sol_on">Turn On Solenoid</a>
<a href="http://192.168.0.88/?sol_off">Turn Off Solenoid</a><br />
<button type="button" onclick="location.href='http://192.168.0.88/?sol_on'">On</button>
<button type="button" onclick="location.href='http://192.168.0.88/?sol_off'">Off</button>
<button type="button" onclick="location.href='http://192.168.0.88/?toggle'">Toggle</button>
<br />
<p>(Check pin 9 LED ''L9'' to make sure this code is working)</p>
</BODY>
</HTML>
So if the Arduino sees "sol_on" it turns the solenoid on; "sol_off" off, and you can guess what "toggle" does. I'm pretty comfortable coding, but I know nothing of javascript, CSS, or PHP. I'm not afraid of implementing those, it just needs to be clear for me to do so. Note that there's some redundancy in the code above, I left it so that I could test multiple approaches to the UI.
If I'm understanding you correctly, your best approach would probably be to use Ajax, where your web page uses an asynchronous Javascript call to do the toggling/on/off.
Effectively, you have the web page as shown, but instead of links to the Arduino "pages", clicking each link fires off an asynchronous request to the Arduino page, leaving your current page in the browser while still prodding the URL on the Arduino web server.
If you're not that familiar with Javascript, possibly a sensible approach would be to use jQuery, a Javascript library which insulates you somewhat from differences between browsers, and encapsulates things like Ajax requests quite nicely.
Here are some simple steps:
1) Download the latest production jQuery. I'm using 2.0.3 for this example.
2) Put it in the same directory as your web page, so we can include it easily.
3) Convert your web page to use Ajax with jQuery. (I've also converted it to something a little closer to the current web standard, HTML5):
<!DOCTYPE html>
<html>
<head>
<title>Sensor-Cleaning Control</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<!-- Include jQuery so we can use its simple goodness -->
<script src="jquery-2.0.3.min.js"></script>
<script>
/* This function will be called by the onclick handlers of the buttons */
function solenoid(url) {
// Use jQuery's Ajax functionality to prod the given URL without
// reloading this page or visiting another one:
$.ajax(url);
}
</script>
</head>
<body>
<h1>Solenoid Remote Actuation</h1>
<button type="button" onclick="solenoid('http://192.168.0.88/?sol_on');">On</button>
<button type="button" onclick="solenoid('http://192.168.0.88/?sol_off');">Off</button>
<button type="button" onclick="solenoid('http://192.168.0.88/?toggle');">Toggle</button>
<p>(Check pin 9 LED ''L9'' to make sure this code is working)</p>
</body>
</html>
The main things to note are:
1) The inclusion of the jQuery library, so we can use its ajax() call and fire off http requests in the background with ease no matter which browser we're on.
2) I've replaced your existing onclick events with a call to a new function called solenoid, that takes a URL as a parameter.
3) The solenoid function, defined in the <script> at the top, takes the URL that was passed in and uses jQuery's ajax() call to poke the given URL. This happens in the "background", i.e. without any page (re)load.
From here, you could expand this in all sorts of ways. This code could, for example, read a short response from the Arduino and handle it in the background, too, perhaps indicating the current state of the solenoid.
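For instance, a hypothetical extension of the solenoid function (it assumes the Arduino replies with a short status string, and that the page has an element with id="status" to show it in):
function solenoid(url) {
  // Prod the URL and, when the Arduino answers, show its short
  // status reply in the hypothetical #status element.
  $.ajax(url).done(function (response) {
    $("#status").text("Solenoid state: " + response);
  });
}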
(Given the simplicity of what I'm doing here, I'm sure this could be done in a more "lightweight" way in pure Javascript without jQuery, but that would have involved a chunk more slightly scary code in this example to ensure the Ajax stuff worked in many different browsers -- there's some browser inconsistency in how the underlying object (an XMLHttpRequest) used by Ajax is created. I figured for a Javascript beginner, simpler was probably better...)
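For completeness, here is a sketch of that same fire-and-forget request in plain JavaScript, which works in current browsers without jQuery:
function solenoid(url) {
  // Plain XMLHttpRequest version of the same background request.
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.send();
}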
Well, I don't know your Arduino-based HTTP server, but it should certainly reply to all requests, either with a 200 HTTP status, which means "OK", or with an error status like 400, which means "Bad Request". While your web application is waiting for a response, you can block (or hide) the page (or some elements) so the user is unable to start a click frenzy and mess everything up while they should be waiting for the server's (Arduino's) response.
You can make an Ajax call using jQuery, so you will be able to "do something" after calling the URL with your "Arduino code", both on success and on failure.
Please see the example below:
<html>
<head>
<script src="../scripts/jquery-2.0.3.min.js"></script>
<script>
function callArduinoCode(code) {
jQuery.ajax({
type: "GET",
url: "http://192.168.0.88/?" + code,
beforeSend: function() {
// Hide links, show loading message...
$("#controls").css("display","none");
$("#loading").css("display","block");
},
success: function( data ){
// Hide loading message, show links again...
$("#controls").css("display","block");
$("#loading").css("display","none");
},
error: function (xhr, ajaxOptions, thrownError){
alert("Failed, HTTP Status was " + xhr.status);
// Hide loading message, show links again...
$("#controls").css("display","block");
$("#loading").css("display","none");
}
});
return false;
}
</script>
</head>
<body>
<div id="controls">
<!-- your links here -->
<a onClick="callArduinoCode('sol_on'); return false;">Turn Solenoid ON</a>
<a onClick="callArduinoCode('sol_off'); return false;">Turn Solenoid OFF</a>
</div>
<div id="loading" style="display:none;">
Loading, please wait...
</div>
</body>
</html>
You can get jQuery at jquery.com
Good Luck!
No need to add any library or framework to accomplish what you need; you can even achieve it without JavaScript at all. Simply add an invisible IFRAME in your HTML file with its name attribute set. In the following example we'll use "Arduino" as the IFRAME's name, but you can use any valid element name you'd like.
<IFRAME name="Arduino" style="display:none"></IFRAME>
Next, add target attribute on your link element (the 'A' tags) with value specified as the IFRAME's name, i.e.:
<A href="http://192.168.0.88/?sol_on" target="Arduino">Turn On Solenoid</A>
When you click the link, the request is sent to your Arduino, and the resulting response is directed to the invisible IFRAME without navigating away from the currently viewed page.
For the button element, prefix location.href in the onclick handler with the IFRAME's name:
<BUTTON onclick="Arduino.location.href='//192.168.0.88?sol_on';">On</BUTTON>
I am wrapping a razor view in an iframe. The razor view is a web service on a different domain.
Here is what I am doing:
<!DOCTYPE html>
<html>
<body>
<p align="center">
<img src="http://somewhere.com/images/double2.jpg" />
</p>
<p align="center">
<iframe src="https://secure.somewhereelse.com/MyPortal?CorpID=12334D-4C12-450D-ACB1-7372B9D17C22" width="550" height="600" style="float:middle">
<p>Your browser does not support iframes.</p>
</iframe>
</p>
</body>
</html>
This is the header of the src site:
<!DOCTYPE html>
<html>
<head>
<title>@ViewBag.Title</title>
<link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/themes/cupertino/jquery-ui-1.8.21.custom.css")" rel="stylesheet" type="text/css" />
<script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery-ui-1.8.11.min.js")" type="text/javascript"></script>
</head>
I want the iframe src to use the CSS of the calling site.
Is there a way to pass in the CSS URL or have it inherit the CSS of the calling site?
I'd even settle for the css file location being a parameter being passed in from the originating site.
Anyone have any suggestions?
You cannot enforce your CSS on a site shown in an iframe; the CSS must be included in the source of the page loaded into the iframe. It used to be possible in certain cases using JavaScript, and only when the framed page was on the same domain.
The only other way you may be able to use your own CSS is if the web service allows you to pass in the URL of the CSS, but you would have to consult the documentation of the web service to find that out.
I would pass the CSS URL as an argument to the iframe's src attribute:
<iframe src="http://somedomain.com/?styleUrl=@(ResolveStyleUrl())"></iframe>
Where ResolveStyleUrl might be defined as:
@functions {
    public IHtmlString ResolveStyleUrl()
    {
        string url = Url.Content("~/Content/site.css");
        // Build an absolute URL (scheme + host + path) so it resolves cross-domain
        string host = "http" + (Request.IsSecureConnection ? "s" : "") + "://" + Request.Url.Host + url;
        return Raw(host);
    }
}
This is of course assuming that the domain would accept a style url query string and render the appropriate <link /> on the remote page?
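If the remote page is under your control, a hypothetical client-side way to honor that query string (the styleUrl parameter name is just the one used above) would be:
// Hypothetical script on the framed page: read ?styleUrl=... and
// inject a matching <link> element into the document head.
// (In practice you would whitelist the allowed stylesheet URLs.)
var styleUrl = new URLSearchParams(window.location.search).get("styleUrl");
if (styleUrl) {
  var link = document.createElement("link");
  link.rel = "stylesheet";
  link.href = styleUrl;
  document.head.appendChild(link);
}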
Eroc, I am sorry, but you cannot enforce your CSS on others' sites using an iframe, because most browsers will give an error like the one Chrome gives:
Unsafe JavaScript attempt to access frame with URL http://terenceford.com/catalog/index.php? from frame with URL http://www.example.com/example.php. Domains, protocols and ports must match.
But this does not mean that you cannot extract the HTML from that page (and modify it as you please).
http://php.net/manual/en/book.curl.php can be used for site scraping, together with http://simplehtmldom.sourceforge.net/.
First play with these functions:
curl_init();
curl_setopt();
curl_exec();
curl_close();
and then parse the html.
After trying it yourself, you can look at the example below, which I made for parsing beemp3 content when I wanted to create a rich tool for directly downloading songs. Unfortunately I couldn't finish it because of the captcha, but it may still be useful to you.
directory structure
C:\wamp\www\try
-- simple_html_dom.php
-- try.php
try.php:
<?php
/* integrate results for different websites separately */
require_once('simple_html_dom.php');
$q='eminem';
$mp3sites=array('http://www.beemp3.com/');
$ch=curl_init("{$mp3sites[0]}index.php?q={$q}&st=all");
curl_setopt($ch,CURLOPT_HEADER,0);
curl_setopt($ch,CURLOPT_RETURNTRANSFER,true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
//curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
$result=curl_exec($ch);
curl_close($ch);
$html=str_get_html("{$result}");
$ret = $html->find("a");
echo "<head><style type='text/css'>a:link,a{font-size:16px;font-weight:bold;font-family:helvetica;text-decoration:none;color:#458;}a:hover{color:#67b;text-decoration:underline;}a:visited{color:silver;}</style></head>";
$unik=array(null);
foreach($ret as $link)
{
$find="/(.{1,})(\.php)[?](file=.{1,})&song=(.{1,})/i";
$replace="$4";
if(preg_match("{$find}",$link->href))
{
$unik[]=$link->href;
if(current($unik)===prev($unik)){unset($unik);}
else{
echo "<a href='".$mp3sites[0].$link->href."'>".urldecode(preg_replace($find,$replace,$mp3sites[0].$link->href))."</a><br/>";
}}
}
?>
I know that you do not code in PHP, but I think you are capable of translating the code. Look at this:
php to C# converter
I spent time on this question because I understand what it means to offer a bounty.
Maybe the answer seems unrelated (because I have not used a JavaScript- or HTML-based solution), but because of cross-domain issues this is an important lesson for you. I hope you can find similar libraries in C#. Best of luck.
The only way I know to achieve that is to make the HTTP request on your server side, fetch the result and hand it back to the user.
At a minimum, you'll need either to strip the header completely from the targeted site and inject the content into your page using AJAX, or to inject your own CSS into the page's headers and put it into an IFRAME.
Either way, you have to implement the proxy method, which will take the targeted URL as an argument.
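A minimal sketch of such a proxy (an assumption: Node.js 18+, whose global fetch() is available; the original answer names no server stack):
// The browser requests /proxy?url=..., the server fetches that URL
// and hands the body back from your own domain.
const http = require("http");

http.createServer(async (req, res) => {
  const target = new URL(req.url, "http://localhost:8080").searchParams.get("url");
  if (!target) {
    res.writeHead(400);
    return res.end("Missing url parameter");
  }
  try {
    const upstream = await fetch(target);
    res.writeHead(upstream.status, { "Content-Type": "text/html" });
    res.end(await upstream.text());
  } catch (e) {
    res.writeHead(502);
    res.end("Upstream fetch failed");
  }
}).listen(8080);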
This technique has many downsides:
You have to do the queries on your server, which can cost a lot of bandwidth and CPU
You have to implement the proxy
You cannot transmit the user's domain-specific cookies, though you can manage new cookies by rewriting them
If you make a lot of requests, your server(s) is/are likely to become blacklisted on the targeted website(s)
The benefits sound low compared to the hassles.