iFrame is not displaying website (cross-site request error) - html

I have been on an adventure to get a sportsbook dashboard in my man cave, with the main goal of displaying the lines from my preferred sportsbook. Based on some googling and digging, all the APIs cost money, so I settled on using the Game Center by Pregame, which I use quite often anyway.
They have embed code for the Game Center, and I have a fairly basic HTML page going, but something's not quite right and it's displaying all wonky. I can't tell if the issue is my code or Pregame. Any help or guidance would be appreciated.
Link to the Static Page: https://dashboard.megustasports.com/Untitled-1.html
Pregame Gamecenter: https://pregame.com/game-center
EDIT
Here are the errors from the Chrome console
Indicate whether to send a cookie in a cross-site request by specifying its SameSite attribute. Because a cookie's SameSite attribute was not set or is invalid, it defaults to SameSite=Lax, which prevents the cookie from being sent in a cross-site request. This behavior protects user data from accidentally leaking to third parties and cross-site request forgery.
Resolve this issue by updating the attributes of the cookie: specify SameSite=None and Secure if the cookie should be sent in cross-site requests (this enables third-party use), or specify SameSite=Strict or SameSite=Lax if the cookie should not be sent in cross-site requests.
9 cookies affected (Name / Domain & Path):
.te.dpr - pregame.com/
_ga - .pregame.com/
_gid - .pregame.com/
.te.w - pregame.com/
Telligent.Evolution-UI - pregame.com/
tzoffset - pregame.com/
tzid - pregame.com/
.te.dpr - pregame.com/utility
.te.w - pregame.com/utility
2 requests affected:
pg.authentication.js
error-notfound.aspx?item=%2fassets%2fscripts%2fpg.…entication&user=extranet%5cAnonymous&site=website
Here is the HTML I am currently using:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Here the title</title>
  <style>
    * {
      margin: 0;
    }
    iframe {
      display: block !important;
      width: 100%;
      height: 100vh;
    }
  </style>
</head>
<body>
  <script src="https://pregame.com/assets/scripts/tear/tear.js" data-type="generic" data-url="https://pregame.com/game-center?ts_i=game-center"></script>
</body>
</html>
In theory, the end result should contain only the table from the page linked above and look something like this:

Instead of using an iframe, try making an HTTP request and injecting the response into a div:
// Public CORS proxy that forwards the request and adds the CORS headers the browser needs.
var cors_api_url = 'https://cors-anywhere.herokuapp.com/';

function doCORSRequest(options, printResult) {
  var x = new XMLHttpRequest();
  // Prefix the target URL with the proxy so the response is readable cross-origin.
  x.open(options.method, cors_api_url + options.url);
  x.onload = x.onerror = function() {
    printResult(x.responseText || '');
  };
  if (/^POST/i.test(options.method)) {
    x.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  }
  x.send(options.data);
}

doCORSRequest({
  method: 'GET',
  url: 'https://pregame.com/game-center/',
  data: ''
}, function printResult(result) {
  // The .result div below must exist in the page before this runs.
  document.querySelector('.result').innerHTML = result;
});

<div class="result"></div>
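For what it's worth, the same proxied request can be written with fetch; this is only a sketch of the idea above, and note that the hosted cors-anywhere demo is rate-limited, so you may need to run your own instance of the proxy.
const corsProxy = 'https://cors-anywhere.herokuapp.com/';

// Fetch the page through the proxy and inject the returned markup into the div.
fetch(corsProxy + 'https://pregame.com/game-center/')
  .then((response) => response.text())
  .then((html) => {
    document.querySelector('.result').innerHTML = html;
  })
  .catch((err) => console.error('Request failed:', err));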

Problem
Browsers enforce cross-origin policies to keep malicious websites from impersonating or silently embedding legitimate ones. The Pregame Game Center is configured so that its content can only be loaded from its own domain; for your page to embed it, the Pregame developers would have to explicitly allow your domain in their cross-origin (CORS and frame-embedding) configuration.
Solution
Consider instead programmatically navigating their website and pulling the information you need from it (scraping), then creating your own page to display that information. This is the most roundabout approach, but it is generally accepted when there is no public API you can use to access the information you need. While investigating you may even come across a JSON-formatted endpoint used by their front end to retrieve the information it displays (their private API) that you can call directly.
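As a rough sketch of the scraping route (assuming Node 18+ for built-in fetch and the cheerio package; the CSS selector is a placeholder, and you would need to inspect the real page, which may render its table with JavaScript):
const cheerio = require('cheerio');   // npm install cheerio

async function fetchLines() {
  const response = await fetch('https://pregame.com/game-center');
  const html = await response.text();
  const $ = cheerio.load(html);

  // '.game-center-table tr' is an assumed selector, for illustration only.
  return $('.game-center-table tr')
    .map((i, row) => $(row).text().trim())
    .get();
}

fetchLines().then((rows) => console.log(rows));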
Note
Scraping and republishing information produced by Pregame Game Center may be against their terms and conditions. You may be able to avoid legal implications if you host the website locally, or remotely behind authentication.
References
CORS: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
Web scraping (Python - Beautiful Soup) Tutorial: https://realpython.com/beautiful-soup-web-scraper-python
Django (Python web framework): https://www.djangoproject.com

You cannot use an iframe to display some websites, such as Google.
This is a security measure implemented to protect their website and customer data from attack.
If you want to get some details from the website, you can use their API if they provide one.
That is the best option.
You can search the web for the many APIs available that might fit your requirement.

Related

Why is 'noreferrer' not working on links?

I can't work out how to get rid of the referrer that keeps getting appended to the URL.
My link
<a aria-label="Book" target="_blank" class="book-link" rel="noopener" href="<?=$site->book_link()->html()?>">...</a>
(I have tried both rel="noopener" and rel="noreferrer".)
My meta
<meta name="referrer" content="no-referrer">
It keeps appending this to the URL and breaking links: /&referrerUrl=https%3A%2F%2Fwww.mysite.com%2F
The referrer you are seeing is not affected by that meta tag. The meta tag governs the referrer information the browser exposes (what JavaScript sees in document.referrer and what your web server receives for tracking purposes); it does not control query parameters that get appended to your links.
The referrerUrl in your link is being added either by a script or front-end framework you are using, or by a back-end framework or language such as PHP.
To help solve your problem we would need to know more about your front-end and back-end stack.
Consequently, this question is mis-tagged; it should be tagged with javascript or php.
It's simply "no-referrer", which specifies that no referrer information is to be sent along with requests made from a particular request client to any origin; the Referer header is omitted entirely.
If a document at https://techcaregen.com/page.html sets a policy of "no-referrer", then navigations to https://techcaregen.com/ (or any other URL) would send no Referer header.
This is my personal opinion.
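As an illustrative aside (not part of the original answer): the same policy can also be applied per request from JavaScript, which is a quick way to confirm that the Referer header is what the meta tag controls, rather than the query string you are seeing.
// With <meta name="referrer" content="no-referrer"> in the page, the browser already
// omits the Referer header; referrerPolicy does the same for a single request.
fetch('https://example.com/api', { referrerPolicy: 'no-referrer' })
  .then((response) => console.log('Request sent without a Referer header:', response.status));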

HTML redirect to different URL based on server availability

The default home page for our company workstations is http://intranet, which is our internal SharePoint site set by group policy. Right now, if a user attempts to open IE on a laptop when they are off-site, they are (obviously) greeted by a "Page cannot be displayed" error. This causes confusion from our less sophisticated users and they wind up calling our help desk even though there is nothing wrong with their internet connection.
What I would like to do is set the default home page to a local .html file that will use an HTTP redirect to forward the browser to our public web site if the internal URL is not reachable.
Is this possible?
All too often, something that seems easy to implement can turn out to be quite challenging. In this case, JavaScript prohibits cross-domain calls as a security measure, so an XMLHttpRequest isn't an option.
It seems like your best option would be to implement the solution discussed here: Test url availability with javascript.
I did some quick testing in Chrome & IE and this code worked well in both. (IE did complain about running the script on a local page, but this would be the same regardless of solution.)
<html>
<head></head>
<body>
<script>
// Append a <script> element pointing at the URL: onload fires if the server
// responds, onerror fires if it cannot be reached.
function checkServerStatus(url)
{
    var script = document.body.appendChild(document.createElement("script"));
    script.onload = function()
    {
        alert(url + " is online.");
    };
    script.onerror = function()
    {
        alert(url + " is offline.");
        // Send the browser to the public site when the URL is unreachable.
        window.location.replace("http://google.com");
    };
    script.src = url;
}
// Demo calls; in practice you would check http://intranet and point the
// redirect above at your public website instead of google.com.
checkServerStatus("http://google.com");
checkServerStatus("http://intranet");
</script>
</body>
</html>
Here's another link that discusses this solution: https://petermolnar.eu/test-site-javascript/.
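In modern browsers the same check can be sketched with fetch instead of injecting a script tag; this is only an illustration of the idea, and the fallback URL below is a placeholder for your public website.
// Probe the intranet URL; mode 'no-cors' lets the request complete without CORS
// headers, while a network failure (or timeout) still rejects the promise.
function redirectIfUnreachable(intranetUrl, fallbackUrl, timeoutMs = 3000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);

  fetch(intranetUrl, { mode: 'no-cors', signal: controller.signal })
    .then(() => window.location.replace(intranetUrl))   // intranet is reachable
    .catch(() => window.location.replace(fallbackUrl))  // offline or timed out
    .finally(() => clearTimeout(timer));
}

redirectIfUnreachable('http://intranet', 'https://www.example.com');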
Hope this helps.

How do search engines deal with AngularJS applications?

I see two issues with AngularJS application regarding search engines and SEO:
1) What happens with custom tags? Do search engines ignore the whole content within those tags? i.e. suppose I have
<custom>
<h1>Hey, this title is important</h1>
</custom>
would <h1> be indexed despite being inside custom tags?
2) Is there a way to stop search engines from indexing {{}} bindings literally? i.e.
<h2>{{title}}</h2>
I know I could do something like
<h2 ng-bind="title"></h2>
but what if I want to actually let the crawler "see" the title? Is server-side rendering the only solution?
(2022) Use Server Side Rendering if possible, and generate URLs with Pushstate
Google can and will run JavaScript now so it is very possible to build a site using only JavaScript provided you create a sensible URL structure. However, pagespeed has become a progressively more important ranking factor and typically pages built clientside perform poorly on initial render.
Server-side rendering (SSR) can help by allowing your pages to be pre-generated on the server. Your HTML contains the div that will be used as the page root, but it is not an empty div: it contains the HTML the JavaScript would have generated if it had been allowed to run.
The client downloads the HTML and renders it, giving a very fast initial load; it then executes the JavaScript and replaces the content of the root div with generated content, in a process known as hydration.
Many newer frameworks come with SSR built in, notably NextJS.
(2015) Use PushState and Precomposition
The current (2015) way to do this is using the JavaScript pushState method.
PushState changes the URL in the top browser bar without reloading the page. Say you have a page containing tabs. The tabs hide and show content, and the content is inserted dynamically, either using AJAX or by simply setting display:none and display:block to hide and show the correct tab content.
When the tabs are clicked, use pushState to update the URL in the address bar. When the page is rendered, use the value in the address bar to determine which tab to show. Angular routing will do this for you automatically.
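As a rough framework-free illustration of that idea (the .tab class, data-path attribute, and showTab helper are assumptions; Angular's router does the equivalent for you):
// Update the URL when a tab is clicked, without reloading the page.
document.querySelectorAll('.tab').forEach((tab) => {
  tab.addEventListener('click', (event) => {
    event.preventDefault();
    const path = tab.dataset.path;            // e.g. "/scores" (assumed attribute)
    history.pushState({ path }, '', path);    // changes the address bar only
    showTab(path);                            // assumed helper that swaps tab content
  });
});

// Show the right tab again when the user navigates with back/forward.
window.addEventListener('popstate', (event) => {
  if (event.state) showTab(event.state.path);
});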
Precomposition
There are two ways to hit a PushState Single Page App (SPA)
Via PushState, where the user clicks a PushState link and the content is AJAXed in.
By hitting the URL directly.
The initial hit on the site will involve hitting the URL directly. Subsequent hits will simply AJAX in content as the PushState updates the URL.
Crawlers harvest links from a page and then add them to a queue for later processing. This means that for a crawler every hit on the server is a direct hit; crawlers don't navigate via PushState.
Precomposition bundles the initial payload into the first response from the server, possibly as a JSON object. This allows the Search Engine to render the page without executing the AJAX call.
There is some evidence to suggest that Google might not execute AJAX requests. More on this here:
https://web.archive.org/web/20160318211223/http://www.analog-ni.co/precomposing-a-spa-may-become-the-holy-grail-to-seo
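As a bare-bones illustration of precomposition (the element id and the renderPage/fetchDataThenRender helpers below are assumptions, not part of any framework):
// The server embeds the initial payload as JSON in the first response, e.g.
// <script id="preload" type="application/json">{"games": [...]}</script>
const preloadEl = document.getElementById('preload');
const initialData = preloadEl ? JSON.parse(preloadEl.textContent) : null;

if (initialData) {
  renderPage(initialData);   // assumed render function; no AJAX round trip needed
} else {
  fetchDataThenRender();     // assumed fallback that loads the data via AJAX
}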
Search Engines can read and execute JavaScript
Google has been able to parse JavaScript for some time now; it's part of why they originally developed Chrome, to act as a full-featured headless browser for the Google spider. If a link has a valid href attribute, the new URL can be indexed. There's nothing more to do.
If clicking a link in addition triggers a pushState call, the site can be navigated by the user via PushState.
Search Engine Support for PushState URLs
PushState is currently supported by Google and Bing.
Google
Here's Matt Cutts responding to Paul Irish's question about PushState for SEO:
http://youtu.be/yiAF9VdvRPw
Here is Google announcing full JavaScript support for the spider:
http://googlewebmastercentral.blogspot.de/2014/05/understanding-web-pages-better.html
The upshot is that Google supports PushState and will index PushState URLs.
See also Google webmaster tools' fetch as Googlebot. You will see your JavaScript (including Angular) is executed.
Bing
Here is Bing's announcement of support for pretty PushState URLs dated March 2013:
http://blogs.bing.com/webmaster/2013/03/21/search-engine-optimization-best-practices-for-ajax-urls/
Don't use HashBangs #!
Hashbang URLs were an ugly stopgap requiring the developer to provide a pre-rendered version of the site at a special location. They still work, but you don't need to use them.
Hashbang URLs look like this:
domain.example/#!path/to/resource
This would be paired with a metatag like this:
<meta name="fragment" content="!">
Google will not index them in this form, but will instead pull a static version of the site from the _escaped_fragment_ URL and index that.
Pushstate URLs look like any ordinary URL:
domain.example/path/to/resource
The difference is that Angular handles them for you by intercepting the change to document.location and dealing with it in JavaScript.
If you want to use PushState URLs (and you probably do) take out all the old hash style URLs and metatags and simply enable HTML5 mode in your config block.
Testing your site
Google Webmaster tools now contains a tool which will allow you to fetch a URL as Google, and render JavaScript as Google renders it.
https://www.google.com/webmasters/tools/googlebot-fetch
Generating PushState URLs in Angular
To generate real URLs in Angular, rather than # prefixed ones, set HTML5 mode on your $locationProvider object.
$locationProvider.html5Mode(true);
Server Side
Since you are using real URLs, you will need to ensure the same template (plus some precomposed content) gets shipped by your server for all valid URLs. How you do this will vary depending on your server architecture.
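For example, with a Node/Express server this might look roughly like the sketch below (the public directory and port are assumptions; the point is simply that every app route returns the same index.html plus whatever precomposed content you add).
const express = require('express');
const path = require('path');
const app = express();

// Static assets (scripts, styles, images) are served as-is.
app.use(express.static(path.join(__dirname, 'public')));

// Every other URL gets the same template so deep links resolve in html5Mode.
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

app.listen(3000);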
Sitemap
Your app may use unusual forms of navigation, for example hover or scroll. To ensure Google is able to drive your app, I would probably suggest creating a sitemap, a simple list of all the URLs your app responds to. You can place this at the default location (/sitemap or /sitemap.xml), or tell Google about it using webmaster tools.
It's a good idea to have a sitemap anyway.
Browser support
PushState works in IE10. In older browsers, Angular will automatically fall back to hash-style URLs.
A demo page
The following content is rendered using a pushstate URL with precomposition:
http://html5.gingerhost.com/london
As can be verified at this link, the content is indexed and appears in Google.
Serving 404 and 301 Header status codes
Because the search engine will always hit your server for every request, you can serve header status codes from your server and expect Google to see them.
Update May 2014
Google's crawlers now execute JavaScript; you can use the Google Webmaster Tools to better understand how your sites are rendered by Google.
Original answer
If you want to optimize your app for search engines there is unfortunately no way around serving a pre-rendered version to the crawler. You can read more about Google's recommendations for ajax and javascript-heavy sites here.
If this is an option I'd recommend reading this article about how to do SEO for Angular with server-side rendering.
I’m not sure what the crawler does when it encounters custom tags.
Let's get definitive about AngularJS and SEO
Google, Yahoo, Bing, and other search engines crawl the web in traditional ways using traditional crawlers. They run robots that crawl the HTML of web pages, collecting information along the way. They keep interesting words and look for links to other pages (these links and the number of them come into play with SEO).
So why don't search engines deal with javascript sites?
The answer is that search engine robots work through headless browsers, and those browsers most often do not have a JavaScript rendering engine to render the page's JavaScript. This works for most static pages, since their content is already available in the HTML and doesn't depend on JavaScript to render.
What can be done about it?
Luckily, the crawlers of the larger search engines have started to implement a mechanism that allows us to make our JavaScript sites crawlable, but it requires us to make a change to our site.
If we change our hashPrefix to be #! instead of simply #, then modern search engines will change the request to use _escaped_fragment_ instead of #!. (With HTML5 mode, i.e. where we have links without the hash prefix, we can implement this same feature by looking at the User Agent header in our backend).
That is to say, instead of a request from a normal browser that looks like:
http://www.ng-newsletter.com/#!/signup/page
A search engine will request the page with:
http://www.ng-newsletter.com/?_escaped_fragment_=/signup/page
We can set the hash prefix of our Angular apps using a built-in method from ngRoute:
angular.module('myApp', [])
.config(['$locationProvider', function($locationProvider) {
  $locationProvider.hashPrefix('!');
}]);
And, if we're using html5Mode, we will need to implement this using the meta tag:
<meta name="fragment" content="!">
As a reminder, we can enable html5Mode() via the $locationProvider in the config block:
angular.module('myApp', [])
.config(['$locationProvider', function($locationProvider) {
  $locationProvider.html5Mode(true);
}]);
Handling the search engine
We have a lot of opportunities to determine how we'll deal with actually delivering content to search engines as static HTML. We can host a backend ourselves, we can use a service to host a back-end for us, we can use a proxy to deliver the content, etc. Let's look at a few options:
Self-hosted
We can write a service that handles crawling our own site using a headless browser like PhantomJS or ZombieJS, takes a snapshot of the page with its rendered data, and stores it as HTML. Whenever we see the query string ?_escaped_fragment_ in a request, we can deliver the static HTML snapshot we took of the page instead of the JavaScript-only version. This requires a backend that delivers our pages with some conditional logic in the middle. We can use something like prerender.io's backend as a starting point to run this ourselves. Of course, we still need to handle the proxying and the snippet handling, but it's a good start.
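A minimal sketch of that conditional logic in Express is shown below; the snapshots directory and the file-per-fragment naming are assumptions for illustration only.
const express = require('express');
const path = require('path');
const app = express();

app.use((req, res, next) => {
  const fragment = req.query._escaped_fragment_;
  if (fragment !== undefined) {
    // Crawler request: serve a previously generated HTML snapshot of the page.
    const snapshot = path.join(__dirname, 'snapshots', (fragment || 'index') + '.html');
    return res.sendFile(snapshot, (err) => { if (err) next(); });
  }
  next(); // normal users get the regular JavaScript application
});

app.use(express.static(path.join(__dirname, 'public')));
app.listen(3000);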
With a paid service
The easiest and fastest way to get content to a search engine is to use a service. Brombone, seo.js, seo4ajax, and prerender.io are good examples of services that will host the content rendering described above for you. This is a good option when we don't want to deal with running a server/proxy ourselves. Also, it's usually super quick.
For more information about Angular and SEO, we wrote an extensive tutorial on it at http://www.ng-newsletter.com/posts/serious-angular-seo.html and we detailed it even more in our book ng-book: The Complete Book on AngularJS. Check it out at ng-book.com.
You should really check out the tutorial on building an SEO-friendly AngularJS site on the Year of Moo blog. He walks you through all the steps outlined in Angular's documentation. http://www.yearofmoo.com/2012/11/angularjs-and-seo.html
Using this technique, the search engine sees the expanded HTML instead of the custom tags.
This has drastically changed.
http://searchengineland.com/bing-offers-recommendations-for-seo-friendly-ajax-suggests-html5-pushstate-152946
If you use:
$locationProvider.html5Mode(true);
you are set.
No more rendering pages.
Things have changed quite a bit since this question was asked. There are now options to let Google index your AngularJS site. The easiest option I found was to use the free http://prerender.io service, which generates the crawlable pages for you and serves them to the search engines. It is supported on almost all server-side web platforms. I have recently started using them and the support is excellent too.
I do not have any affiliation with them, this is coming from a happy user.
Angular's own website serves simplified content to search engines: http://docs.angularjs.org/?_escaped_fragment_=/tutorial/step_09
Say your Angular app is consuming a Node.js/Express-driven JSON api, like /api/path/to/resource. Perhaps you could redirect any requests with ?_escaped_fragment_ to /api/path/to/resource.html, and use content negotiation to render an HTML template of the content, rather than return the JSON data.
The only thing is, your Angular routes would need to match 1:1 with your REST API.
EDIT: I'm realizing that this has the potential to really muddy up your REST api and I don't recommend doing it outside of very simple use-cases where it might be a natural fit.
Instead, you can use an entirely different set of routes and controllers for your robot-friendly content. But then you're duplicating all of your AngularJS routes and controllers in Node/Express.
I've settled on generating snapshots with a headless browser, even though I feel that's a little less-than-ideal.
A good practice can be found here:
http://scotch.io/tutorials/javascript/angularjs-seo-with-prerender-io?_escaped_fragment_=tag
As of now Google has changed their AJAX crawling proposal.
Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.
tl;dr: [Google] are no longer recommending the AJAX crawling proposal [Google] made back in 2009.
Google's Crawlable Ajax Spec, as referenced in the other answers here, is basically the answer.
If you're interested in how other search engines and social bots deal with the same issues, I wrote up the state of the art here: http://blog.ajaxsnapshots.com/2013/11/googles-crawlable-ajax-specification.html
I work for https://ajaxsnapshots.com, a company that implements the Crawlable Ajax Spec as a service; the information in that report is based on observations from our logs.
I have found an elegant solution that would cover most of your bases. I wrote about it initially here and answered another similar Stack Overflow question here which references it.
FYI this solution also includes hard coded fallback tags in case JavaScript isn't picked up by the crawler. I haven't explicitly outlined it, but it is worth mentioning that you should be activating HTML5 mode for proper URL support.
Also note: these aren't the complete files, just the important parts of those that are relevant. I can't help with writing the boilerplate for directives, services, etc.
app.example
This is where you provide the custom metadata for each of your routes (title, description, etc.)
$routeProvider
.when('/', {
templateUrl: 'views/homepage.html',
controller: 'HomepageCtrl',
metadata: {
title: 'The Base Page Title',
description: 'The Base Page Description' }
})
.when('/about', {
templateUrl: 'views/about.html',
controller: 'AboutCtrl',
metadata: {
title: 'The About Page Title',
description: 'The About Page Description' }
})
metadata-service.js (service)
Sets the custom metadata options or use defaults as fallbacks.
var self = this;
// Set custom options or use provided fallback (default) options
self.loadMetadata = function(metadata) {
self.title = document.title = metadata.title || 'Fallback Title';
self.description = metadata.description || 'Fallback Description';
self.url = metadata.url || $location.absUrl();
self.image = metadata.image || 'fallbackimage.jpg';
self.ogpType = metadata.ogpType || 'website';
self.twitterCard = metadata.twitterCard || 'summary_large_image';
self.twitterSite = metadata.twitterSite || '@fallback_handle';
};
// Route change handler, sets the route's defined metadata
$rootScope.$on('$routeChangeSuccess', function (event, newRoute) {
self.loadMetadata(newRoute.metadata);
});
metaproperty.js (directive)
Packages the metadata service results for the view.
return {
restrict: 'A',
scope: {
metaproperty: '@'
},
link: function postLink(scope, element, attrs) {
scope.default = element.attr('content');
scope.metadata = metadataService;
// Watch for metadata changes and set content
scope.$watch('metadata', function (newVal, oldVal) {
setContent(newVal);
}, true);
// Set the content attribute with new metadataService value or back to the default
function setContent(metadata) {
var content = metadata[scope.metaproperty] || scope.default;
element.attr('content', content);
}
setContent(scope.metadata);
}
};
index.html
Complete with the hardcoded fallback tags mentioned earlier, for crawlers that can't pick up any JavaScript.
<head>
<title>Fallback Title</title>
<meta name="description" metaproperty="description" content="Fallback Description">
<!-- Open Graph Protocol Tags -->
<meta property="og:url" content="fallbackurl.example" metaproperty="url">
<meta property="og:title" content="Fallback Title" metaproperty="title">
<meta property="og:description" content="Fallback Description" metaproperty="description">
<meta property="og:type" content="website" metaproperty="ogpType">
<meta property="og:image" content="fallbackimage.jpg" metaproperty="image">
<!-- Twitter Card Tags -->
<meta name="twitter:card" content="summary_large_image" metaproperty="twitterCard">
<meta name="twitter:title" content="Fallback Title" metaproperty="title">
<meta name="twitter:description" content="Fallback Description" metaproperty="description">
<meta name="twitter:site" content="#fallback_handle" metaproperty="twitterSite">
<meta name="twitter:image:src" content="fallbackimage.jpg" metaproperty="image">
</head>
This should help dramatically with most search engine use cases. If you want fully dynamic rendering for social network crawlers (which are iffy on JavaScript support), you'll still have to use one of the pre-rendering services mentioned in some of the other answers.
With Angular Universal, you can generate landing pages for the app that look like the complete app and then load your Angular app behind it.
Angular Universal generates pure HTML (no-JavaScript) pages on the server side and serves them to users without delay, so it works for any crawler, bot, or user (including those with slow CPUs and network connections). Links and buttons can then take them to your actual Angular app, which has already loaded behind it. This solution is recommended by the official site. (More info: SEO and Angular Universal.)
Use something like PreRender; it makes static pages of your site so search engines can index it.
Here you can find out for what platforms it is available: https://prerender.io/documentation/install-middleware#asp-net
Crawlers (or bots) are designed to crawl the HTML content of web pages, but AJAX operations that fetch data asynchronously became a problem because it takes some time to render the page and show its dynamic content. AngularJS likewise uses an asynchronous model, which creates problems for Google's crawlers.
Some developers create basic HTML pages with real data and serve them from the server side at crawl time. We can render the same pages with PhantomJS on the server side when the request contains _escaped_fragment_ (because Google looks for #! in our site URLs, takes everything after the #!, and adds it to the _escaped_fragment_ query parameter). For more detail please read this blog.
Crawlers do not need a richly featured, prettily styled GUI; they only want to see the content, so you do not need to give them a snapshot of a page that was built for humans.
My solution: give the crawler what the crawler wants.
Think about what the crawler wants, and give it only that.
TIP: don't mess with the back end. Just add a small server-side front view using the same API.

Razor CSS file location as Variable

I am wrapping a razor view in an iframe. The razor view is a web service on a different domain.
Here is what I am doing:
<!DOCTYPE html>
<html>
<body>
<p align="center">
<img src="http://somewhere.com/images/double2.jpg" />
</p>
<p align="center">
<iframe src="https://secure.somewhereelse.com/MyPortal?CorpID=12334D-4C12-450D-ACB1-7372B9D17C22" width="550" height="600" style="float:middle">
<p>Your browser does not support iframes.</p>
</iframe>
</p>
</body>
</html>
This is the header of the src site:
<!DOCTYPE html>
<html>
<head>
<title>@ViewBag.Title</title>
<link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
<link href="@Url.Content("~/Content/themes/cupertino/jquery-ui-1.8.21.custom.css")" rel="stylesheet" type="text/css" />
<script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery-ui-1.8.11.min.js")" type="text/javascript"></script>
</head>
I want the iframe src to use the CSS of the calling site.
Is there a way to pass in the CSS URL or have it inherit the CSS of the calling site?
I'd even settle for the css file location being a parameter being passed in from the originating site.
Anyone have any suggestions?
You cannot enforce your CSS on another site's page loaded in an iframe. The CSS must be included in the source of the page loaded in the iframe. It used to be possible in certain cases using JavaScript, but only when the page was on the same domain.
The only other way you may be able to use your own css is if the web service allows you to pass in the url of the css. But you would have to consult the documentation of the web service to find that out.
I would pass the CSS url as an argument to the iframe's src attribute:
<iframe src="http://somedomain.com/?styleUrl=#(ResolveStyleUrl())"></iframe>
Where ResolveStyleUrl might be defined as:
@functions {
    public IHtmlString ResolveStyleUrl()
    {
        string url = Url.Content("~/Content/site.css");
        string host = "http" + (Request.IsSecureConnection ? "s" : "") + "://" + Request.Url.Host;
        return Raw(host + url);
    }
}
This is of course assuming that the domain would accept a style url query string and render the appropriate <link /> on the remote page?
Eroc, I am sorry, but you cannot enforce your CSS on another site's page through an iframe, because most browsers will give an error like the one Chrome gives:
Unsafe JavaScript attempt to access frame with URL http://terenceford.com/catalog/index.php? from frame with URL http://www.example.com/example.php. Domains, protocols and ports must match.
But this does not mean that you cannot extract the HTML from that page (which you can then modify as you like).
http://php.net/manual/en/book.curl.php can be used for site scraping together with http://simplehtmldom.sourceforge.net/
First play with these functions:
curl_init();
curl_setopt();
curl_exec();
curl_close();
and then parse the html.
After trying it yourself, you can look at the example below, which I made for parsing beemp3 content when I wanted to build a tool for directly downloading songs. Unfortunately I couldn't finish it because of the captcha, but it should still be useful to you.
directory structure
C:\wamp\www\try
-- simple_html_dom.php
-- try.php
try.php:
<?php
/*integrate results for dif websites seperately*/
require_once('simple_html_dom.php');

$q = 'eminem';
$mp3sites = array('http://www.beemp3.com/');

$ch = curl_init("{$mp3sites[0]}index.php?q={$q}&st=all");
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
//curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
$result = curl_exec($ch);
curl_close($ch);

$html = str_get_html("{$result}");
$ret = $html->find("a");

echo "<head><style type='text/css'>a:link,a{font-size:16px;font-weight:bold;font-family:helvetica;text-decoration:none;color:#458;}a:hover{color:#67b;text-decoration:underline;}a:visited{color:silver;}</style></head>";

$unik = array(null);
foreach ($ret as $link) {
    $find = "/(.{1,})(\.php)[?](file=.{1,})&song=(.{1,})/i";
    $replace = "$4";
    if (preg_match("{$find}", $link->href)) {
        $unik[] = $link->href;
        if (current($unik) === prev($unik)) {
            unset($unik);
        } else {
            echo "<a href='" . $mp3sites[0] . $link->href . "'>" . urldecode(preg_replace($find, $replace, $mp3sites[0] . $link->href)) . "</a><br/>";
        }
    }
}
?>
I know that you do not code in PHP, but I think you are capable of translating the code. Look at this:
PHP to C# converter
I spent time on this question because I understand what it means to offer a bounty.
The answer may seem unrelated (because I have not used a JavaScript- or HTML-based solution), but because of cross-domain issues this is an important lesson for you. I hope you can find similar libraries in C#. Best of luck.
The only way I know to achieve that is to make the HTTP request on your server side, fetch the result, and hand it back to the user.
At a minimum, you'll need either to strip the header from the targeted site completely so you can inject the content into your page using AJAX, or to inject your own CSS into the page headers so you can put it into an iframe.
Either way, you have to implement the proxy method, which takes the targeted URL as an argument (see the sketch after the list below).
This technique has many downsides :
You have to do the queries on you server, which can cost a lot of bandwidth and CPU
You have to implement the proxy
You cannot transmit the user's domain-specific cookies, though you can manage new cookies by rewriting them
If you make a lot of requests, your server(s) are likely to become blacklisted on the targeted website(s)
The benefits sound low compared to the hassles.
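A rough sketch of such a proxy endpoint in Node/Express follows; the /proxy route, the naive pass-through, and the lack of any rewriting or caching are simplifications for illustration, not a hardened implementation.
const express = require('express');
const app = express();

// GET /proxy?url=https://example.com/page : fetch the target server-side and
// hand the body back to the browser, sidestepping the iframe/CORS restriction.
app.get('/proxy', async (req, res) => {
  const target = req.query.url;
  if (!target) return res.status(400).send('Missing url parameter');

  try {
    const upstream = await fetch(target);   // Node 18+ built-in fetch
    const body = await upstream.text();
    // Real code would rewrite links, strip or inject CSS, and handle cookies here.
    res.type('html').send(body);
  } catch (err) {
    res.status(502).send('Upstream request failed');
  }
});

app.listen(3000);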

Cross/Different Domain get src's HTML content

Let's say I have example.html, and inside it I have code like
<iframe src="x.com" id="x"></iframe>
from x.com, I would like to get everything inside
<div class="content">...</div>
into example.html inside
<div class="xCodes">ONTO HERE</div>
So I tried to get the elements inside x.com to show up on example.html, and I heard it's not possible to access them because of cross-domain restrictions.
I was wondering if there was another way to retrieve HTML tags from x.html into example.html
Maybe without using <iframe />??
Sourced from: http://james.padolsey.com/javascript/cross-domain-requests-with-jquery/
$('.xCodes').load('http://x.com/x.html');
OR
$.ajax({
  url: 'http://x.com/x.html',
  type: 'GET',
  success: function(res) {
    var data = $(res.responseText).find('.content').text();
    $('.xCodes').html(data);
  }
});
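Both snippets above rely on the cross-domain plugin/proxy from the linked article. As a plain-browser sketch, assuming x.com sends CORS headers that allow your origin (or you route the request through your own proxy), the same idea looks like:
fetch('http://x.com/x.html')
  .then((response) => response.text())
  .then((html) => {
    // Parse the remote page and copy the .content element into .xCodes.
    const doc = new DOMParser().parseFromString(html, 'text/html');
    const content = doc.querySelector('.content');
    document.querySelector('.xCodes').innerHTML = content ? content.innerHTML : '';
  })
  .catch((err) => console.error('Could not load remote page:', err));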
If I understand correctly you want to rip the content from a DIV on one site and display it on another. There are several issues with this, but I'll focus on the technological aspect and assume you are acting in good faith with pulling the content.
The real issue you're running up against here is that you don't have access to DOM elements of pages that haven't loaded yet. As such you need to tell the browser to load the data for that page so that you can access the elements that should have loaded on the page and then pull the information out. JQuery has a nice little method to help with that called .load() (http://api.jquery.com/load/).
As an important side note, however, you can't do this directly, as all modern browsers forbid cross-site access in such a manner:
From the JQuery .load() page:
Additional Notes:
Due to browser security restrictions, most "Ajax" requests are subject to the same origin policy; the request can not successfully retrieve data from a different domain, subdomain, or protocol.
And check out:
http://en.wikipedia.org/wiki/Same_origin_policy
One more bit of warning: if you don't control the code on the other site, you are potentially exposing yourself to serious security issues, so only do this in situations where you control the other site or for some reason have absolute faith in it. Alternatively, if they are available, use APIs for the sites/services you are trying to get data from.