Using Swiffy HTML5 banners in AdWords - html

I've successfully used Swiffy to convert a Flash banner to HTML5, to be used on AdWords.
However, I keep running into hurdles when uploading them:
The first issue was that the Swiffy file didn't include the meta ad size property - easy enough to fix.
The second issue is that Swiffy files point to an external runtime.js, and AdWords requires all files to be local. runtime.js is HUGE and will easily put me over the 150 KB size limit when zipped.
The third issue (once I'm able to find a way around the runtime.js issue) would be having the ad server's clickTag function like it does for a SWF banner, rather than being set statically in the file.
Has ANYONE successfully served an HTML5 banner on AdWords made via Swiffy?

We did, but the size was the biggest problem. Did you manage to get runtime.js down to anything usable? We were lucky with some banners that were small to begin with. The clickTag works seamlessly once you upload the .zip; GDN takes care of that.

I've built over 100 banners in Swiffy and all have trafficked just fine in DCS, DCM, Sizmek and third-party ad servers. I haven't done any AdWords banners yet, but my DCM rep told me that AdWords can auto-convert SWF ads into Swiffy. I'm not sure how that works and I haven't tried it yet. Also, I've been told by reps at Sizmek, DCM and DCS that runtime.js is NOT counted in the file size when you reference it externally, since most browsers will cache that file. The user caches it after viewing one banner, and after that only the Swiffy ad itself is downloaded.
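For reference, a Swiffy export references the hosted runtime with a single script tag along these lines (the exact version number varies by export; this URL is illustrative, not prescriptive):
<script src="https://www.gstatic.com/swiffy/v7.4/runtime.js"></script>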
For the clicktags, we have been using this successfully:
Code for the FLA file. Make sure you remove the old clickTags and replace them with this for each clickTag; modify the movie clip name as needed:
import flash.external.*;

// ----- CLICK THROUGH ----- \\
mybackgroundclick.onRelease = function():Void {
    ExternalInterface.call("bannerBgClicked");
};
Code for the HTML file. This goes inside the <head> section, at the very end, just before </head>:
<script type="text/javascript">
    var clickTag = "http://www.google.com";
</script>
<script>
    function bannerBgClicked() {
        window.open(window.clickTag);
    }
</script>

For the clickTag, this seems to fit most cases with Swiffy content.
Add this to the <head>:
<script type="text/javascript">
var clickTag = "%c";
</script>
Add this to the <body>, just after the <div id="swiffycontainer"> element at the bottom of the document (before the closing </body> tag):
<a onclick="window.open(window.clickTag, '_blank')" style="cursor:pointer; text-decoration:none;">
<div style="width:160px; height:600px; left:0px; top:0px; position:absolute;"></div>
</a>
Change the height and width properties to match the ad size of the document.
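Putting the pieces together, a minimal Swiffy-exported HTML file with the clickTag wired in might look like the sketch below. The ad.size meta tag, the swiffycontainer id, the stage setup and the overlay follow the snippets above; the local runtime.js path and the omitted swiffyobject blob are assumptions about a typical export:
<!doctype html>
<html>
<head>
    <meta charset="utf-8">
    <meta name="ad.size" content="width=160,height=600">
    <script src="runtime.js"></script>
    <script type="text/javascript">
        // %c is replaced by the ad server with the click-through URL.
        var clickTag = "%c";
    </script>
</head>
<body style="margin:0">
    <div id="swiffycontainer" style="width:160px; height:600px"></div>
    <!-- Transparent overlay that opens the clickTag when clicked -->
    <a onclick="window.open(window.clickTag, '_blank')" style="cursor:pointer; text-decoration:none;">
        <div style="width:160px; height:600px; left:0px; top:0px; position:absolute;"></div>
    </a>
    <script>
        // swiffyobject is the JSON blob produced by the Swiffy exporter (omitted here).
        var stage = new swiffy.Stage(document.getElementById('swiffycontainer'),
            swiffyobject, {});
        stage.start();
    </script>
</body>
</html>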

Related

How to add an HTML5 animation generated with Adobe Animate CC to an Ionic 2 project page?

I have made an animation with Adobe Animate CC as an HTML5 Canvas item. I can publish the animation in several ways:
as HTML: one HTML file, one JS file, and several directories with included files. The pages are all-inclusive (head, body, included libraries), and including one in an Ionic page means unpacking the HTML (right?). I'm trying, but it seems a bit complex.
as a movie (.mov), but that is a bit expensive in terms of file size.
In both cases, I'm not sure how to include the published result in an Ionic page, and I think I'm missing something.
So, how do I include an Adobe Animate CC animation in an Ionic 2 Framework page?
UPDATE: I tried to unpack the HTML. I published the animation as an HTML5 animation with the publish setting "Include JavaScript in HTML". This way, all the necessary JS is put inside the HTML file that the publish procedure outputs. Then I took that JS (the one contained in
<script> some js here </script>
immediately after the import of the createjs library) and put it in a mylibrary.js file at /assets/js/mylibrary.js. This way, I was able to import it in the TS file of an Ionic page like
import mylibrary from '../../assets/js/mylibrary'
and then try to initialize the animation like
ionViewDidLoad() {
    mylibrary.init();
}
In the original file the init() function was called on the body, like
<body onload="init()">
Unfortunately it doesn't work. The error thrown is
cjs.Bitmap is not a constructor
The error itself is not really important, except that it makes clear the createjs library is not imported in the right way: it cannot associate the cjs variable of mylibrary.js, which contains a function like
(function (lib, img, cjs, ss, an) {
    // ... all code here
})(lib = lib||{}, images = images||{}, createjs = createjs||{}, ss = ss||{}, AdobeAn = AdobeAn||{});
with the createjs variable of the library. I mean, if I comment out the cjs.Bitmap line, it starts complaining about the subsequent cjs.Rectangle, which does not exist.
So I tried to include the library, first taking the include from the Animate-generated HTML
<script src="libs/createjs-2015.11.26.min.js"></script>
and putting it in my main index.html app file (copying the libs directory containing the file into assets/js). It did not work. I tried the hosted version of the library
<script src="https://code.createjs.com/createjs-2015.11.26.min.js"></script>
It did not work.
I even tried adding all the other (unnecessary) libraries linked to CreateJS, like EaselJS etc. (https://code.createjs.com/). It did not work.
Then I tried adding the library in the TS file, adding this before the import of mylibrary:
import createjs from '../../assets/js/createjs'
It did not work.
I even tried to npm install createjs, createjs-module and createjs-easeljs and import them:
import easeljs from 'createjs-easeljs';
import createjsModule from 'createjs-module';
import createjs from 'createjs';
Guess what? It did not work.
Every time I said "it did not work", the problem was always the same error: "cjs.Bitmap is not a constructor". At present I cannot find a way to import an animation generated via Adobe Animate CC into Ionic.
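For what it's worth, the failures above are consistent with how the generated IIFE works: it reads global lib, createjs and AdobeAn variables, which ES module import statements do not create. A sketch of wiring that might satisfy it (untested; the declare lines are an assumption) is to load CreateJS with a plain <script> tag so it lands on window, then declare the globals in the page's TS file instead of importing them:
<!-- in src/index.html, before the app bundle -->
<script src="https://code.createjs.com/createjs-2015.11.26.min.js"></script>

// in the page's .ts file: tell TypeScript about the globals instead of importing them
declare var createjs: any; // provided by the <script> tag above
declare var AdobeAn: any;  // provided by the Animate-generated script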
Just for people passing by: not an elegant solution, but at least it is one.
You can save your Adobe Animate animation as a video (.mov file) and optimize its size and format with Adobe Media Encoder or whatever other program you like. I converted it to MP4 in this case.
NB: When you publish the video, if there is an audio track, it will only be included in the video if it begins at the second frame of the animation. That is a little tricky to discover, so it is worth mentioning.
As soon as you have an .mp4 file, add it to your app assets, and you can include it in an Ionic page in the classic HTML way. In my case it was a video to play fullscreen, so here is my code:
<ion-content padding>
    <div id="videocontainer">
        <video fullscreen="fullscreen" autoplay="true" (ended)="onVideoEnded()">
            <source src="assets/video/your_video_name.mp4" type="video/mp4">
        </video>
    </div>
</ion-content>

How does Google Swiffy implement clickTag for HTML5 banner ads?

First, I tried to find an option to prevent Swiffy from compressing/minifying all the data when exporting to HTML5 from Adobe Flash Pro. No dice.
Even if I were able to read the unminified JavaScript that Swiffy exported, I don't think there would be a simple function call for the clickTag.
It's likely defining the clickTag variable somewhere in the huge haystack of {index: #, type: #} objects, and then processing each operation to eventually call the window.open() method (or something similar).
This is how it currently outputs (minified).
Does anyone have any clue how Swiffy implements clickTag?
Or what would be the Javascript equivalent that does the same job?
Swiffy uses the core API of the environment where the ad is served. If it's a DoubleClick Rich Media ad, it uses Enabler.exit("url").
If you want better control over your output file, I'd suggest having a look at Google Web Designer - the closest experience to Flash development in the HTML5/JS era.
Google's instructions are here, for plain HTML5 (non-Swiffy) click tags.
At the bottom of the file is a snippet of code like this:
<script>
    var stage = new swiffy.Stage(document.getElementById('swiffycontainer'),
        swiffyobject, {});
    stage.start();
</script>
Add this right before stage.start(); to mimic the previous Flash behavior. Make sure you use the proper capitalization:
stage.setFlashVars("clickTAG=http://stackoverflow.com");
Note: make sure to URL-encode the value if the URL contains parameters; it doesn't seem to like it otherwise.
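For example, continuing the snippet above, a parameterized destination could be encoded with the standard encodeURIComponent before being handed to setFlashVars (the destination URL here is a made-up example):
<script>
    // Hypothetical click-through URL with query parameters.
    var destination = "http://example.com/landing?utm_source=gdn&utm_medium=banner";
    // Encode it so the '?', '&' and '=' characters survive the flashvars string.
    stage.setFlashVars("clickTAG=" + encodeURIComponent(destination));
    stage.start();
</script>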

iBooks widget methods for starting/stopping media?

I have an HTML5 widget for iBooks that is used to embed video from external sites.
<html>
<head></head>
<body>
<iframe id="videoLoadsHere" src="http://www.embeddedvideo.blah">
</iframe>
</body>
</html>
I have 2 problems:
1) The iframe remains blank without an internet connection. Is there any way to access native device info about connectivity from within a widget (like PhoneGap does, but without having to import a ton of extra code)? See the sketch after this list for one lightweight idea.
2) The video continues playing after closing the widget and needs to be stopped manually (unless you move 2 pages away). I'd like it to stop playing and reset when the widget is dismissed, but there doesn't seem to be any way to intercept that user action. How can I tell when a widget is opened or closed?
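For problem 1, one lightweight check is the standard navigator.onLine property and its online/offline events; whether the iBooks web view reports these reliably is an assumption worth testing:
<script type="text/javascript">
    // Hide the iframe (or show a placeholder) when there is no connection.
    // Assumption: the iBooks web view exposes navigator.onLine and these events.
    function updateConnectivity() {
        var frame = document.getElementById("videoLoadsHere");
        frame.style.display = navigator.onLine ? "block" : "none";
    }
    window.addEventListener("online", updateConnectivity);
    window.addEventListener("offline", updateConnectivity);
    window.addEventListener("load", updateConnectivity);
</script>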
I already tried importing AppleWidget.js in the HTML <head> tag
<script type="text/javascript" src="AppleClasses/AppleWidget.js"> </script>
And then, also within the head tag, handling the methods:
widget.pauseAudioVisual = function() {
    // allegedly called when the widget is dismissed,
    // but this is not the case as far as I can tell
};
and
widget.didEnterWidgetMode = function() {
    // allegedly called not when first opening a widget,
    // but when re-opening it, as a way to resume or reload
};
Neither of those methods ever seems to get called, and there's no way to alert/log anything from within iBooks.
I replicated this guy's experiment exactly, but it didn't work:
https://github.com/miguel/iBooks-Author-widget-events/blob/master/we.wdgt/main.html
I haven't been able to find any other documentation specific to iBooks, or any other "widget" methods. Apple's docs all deal with Dashcode and desktop widgets, and don't seem to apply.
Does anybody have any references for how to communicate between an HTML5 widget and an iBook? Is there an API? Some window.widget object with known methods?
Thanks!

How does the browser load HTML? [duplicate]

I have done some web-based projects, but I haven't thought much about the load and execution sequence of an ordinary web page. Now I need to know the details. It's hard to find answers on Google or SO, so I created this question.
A sample page is like this:
<html>
    <head>
        <script src="jquery.js" type="text/javascript"></script>
        <script src="abc.js" type="text/javascript"></script>
        <link rel="stylesheet" type="text/css" href="abc.css" />
        <style>h2 { font-weight: bold; }</style>
        <script>
            $(document).ready(function(){
                $("#img").attr("src", "kkk.png");
            });
        </script>
    </head>
    <body>
        <img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
        <script src="kkk.js" type="text/javascript"></script>
    </body>
</html>
So here are my questions:
How does this page load?
What is the sequence of the loading?
When is the JS code executed? (inline and external)
When is the CSS executed (applied)?
When does $(document).ready get executed?
Will abc.jpg be downloaded? Or does it just download kkk.png?
I have the following understanding:
The browser loads the HTML (DOM) first.
The browser starts loading external resources from top to bottom, line by line.
If a <script> is encountered, loading blocks and waits until the JS file is loaded and executed, then continues.
Other resources (CSS/images) are loaded in parallel and applied as needed (like CSS).
Or is it like this:
The browser parses the HTML (DOM) and collects the external resources in an array or stack-like structure. After the HTML is loaded, the browser starts loading the external resources from that structure in parallel and executes them, until all resources are loaded. Then the DOM is changed according to the user's behavior, depending on the JS.
Can anyone give a detailed explanation of what happens once you've got the response of an HTML page? Does this vary between browsers? Any references on this question?
Thanks.
EDIT:
I did an experiment in Firefox with Firebug. (The resulting network waterfall image is not reproduced here.)
Edit: It's 2022. If you are interested in detailed coverage on the load and execution of a web page and how the browser works, you should check out https://browser.engineering/ (open sourced at https://github.com/browserengineering/book)
According to your sample,
<html>
    <head>
        <script src="jquery.js" type="text/javascript"></script>
        <script src="abc.js" type="text/javascript"></script>
        <link rel="stylesheet" type="text/css" href="abc.css" />
        <style>h2 { font-weight: bold; }</style>
        <script>
            $(document).ready(function(){
                $("#img").attr("src", "kkk.png");
            });
        </script>
    </head>
    <body>
        <img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
        <script src="kkk.js" type="text/javascript"></script>
    </body>
</html>
roughly the execution flow is about as follows:
The HTML document gets downloaded
The parsing of the HTML document starts
HTML Parsing reaches <script src="jquery.js" ...
jquery.js is downloaded and parsed
HTML parsing reaches <script src="abc.js" ...
abc.js is downloaded, parsed and run
HTML parsing reaches <link href="abc.css" ...
abc.css is downloaded and parsed
HTML parsing reaches <style>...</style>
Internal CSS rules are parsed and defined
HTML parsing reaches <script>...</script>
Internal Javascript is parsed and run
HTML Parsing reaches <img src="abc.jpg" ...
abc.jpg is downloaded and displayed
HTML Parsing reaches <script src="kkk.js" ...
kkk.js is downloaded, parsed and run
Parsing of HTML document ends
Note that the download may be asynchronous and non-blocking due to behaviours of the browser. For example, in Firefox there is this setting which limits the number of simultaneous requests per domain.
Also depending on whether the component has already been cached or not, the component may not be requested again in a near-future request. If the component has been cached, the component will be loaded from the cache instead of the actual URL.
When parsing ends and the DOM is ready, the ready handler fires and $("#img").attr("src", "kkk.png"); is run. So:
Document is ready; the ready handler fires.
JavaScript execution hits $("#img").attr("src", "kkk.png");
kkk.png is downloaded and loaded into #img.
The $(document).ready() event fires when the DOM has been parsed and is ready for manipulation; the window load event fires later, once all page components are loaded. Read more about it: http://docs.jquery.com/Tutorials:Introducing_$(document).ready()
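A small sketch of the difference between the two events (the log messages are illustrative):
<script>
    // Fires as soon as the DOM is parsed; images may still be downloading.
    $(document).ready(function () {
        console.log("DOM ready");
    });
    // Fires once every asset (images, CSS, iframes) has finished loading.
    $(window).on("load", function () {
        console.log("everything loaded");
    });
</script>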
Edit - This portion elaborates more on the parallel-or-not part:
By default, and from my current understanding, the browser runs each page in three parts: the HTML parser, JavaScript/DOM, and CSS.
The HTML parser is responsible for parsing and interpreting the markup language, and thus must be able to make calls to the other two components.
For example, when the parser comes across a line like this (a representative example with an onclick handler and an inline style):
<a href="#" onclick="doSomething()" style="color: red">a hypertext link</a>
The parser will make three calls, two to JavaScript and one to CSS. First, the parser will create this element and register it in the DOM namespace, together with all the attributes related to this element. Second, the parser will call to bind the onclick event to this particular element. Lastly, it will make another call to the CSS thread to apply the CSS style to this particular element.
Execution is top-down and single-threaded. JavaScript may look multi-threaded, but in fact it is single-threaded. This is why, when loading an external JavaScript file, the parsing of the main HTML page is suspended.
However, CSS files can be downloaded simultaneously, because CSS rules are always being applied - meaning to say, elements are always repainted with the freshest CSS rules defined - thus making CSS non-blocking.
An element will only be available in the DOM after it has been parsed. Thus, when working with a specific element, the script is always placed after it, or within the window onload event.
Script like this will cause an error (with jQuery):
<script type="text/javascript">/* <![CDATA[ */
alert($("#mydiv").html());
/* ]]> */</script>
<div id="mydiv">Hello World</div>
Because when the script is parsed, the #mydiv element is not yet defined. Instead, this would work:
<div id="mydiv">Hello World</div>
<script type="text/javascript">/* <![CDATA[ */
alert($("#mydiv").html());
/* ]]> */</script>
OR
<script type="text/javascript">/* <![CDATA[ */
$(window).ready(function(){
alert($("#mydiv").html());
});
/* ]]> */</script>
<div id="mydiv">Hello World</div>
1) HTML is downloaded.
2) HTML is parsed progressively. When a request for an asset is reached, the browser will attempt to download the asset. A default configuration for most HTTP servers and most browsers is to process only two requests in parallel. IE can be reconfigured to download an unlimited number of assets in parallel. Steve Souders has been able to run over 100 requests in parallel on IE. The exception is that script requests block parallel asset requests in IE. This is why it is highly suggested to put all JavaScript in external JavaScript files and to place the request just prior to the closing body tag in the HTML.
3) Once the HTML is parsed, the DOM is rendered. CSS is rendered in parallel to the rendering of the DOM in nearly all user agents. As a result, it is strongly recommended to put all CSS code into external CSS files that are requested as high as possible in the <head></head> section of the document. Otherwise, the page is rendered up to the occurrence of the CSS request position in the DOM, and then rendering starts over from the top.
4) Only after the DOM is completely rendered and requests for all assets in the page are either resolved or timed out does JavaScript execute from the onload event. IE7, and I am not sure about IE8, does not time out assets quickly if an HTTP response is not received for the asset request. This means an asset requested by JavaScript inline in the page - that is, JavaScript written into HTML tags and not contained in a function - can prevent the execution of the onload event for hours. This problem can be triggered if such inline code exists in the page and fails to execute due to a namespace collision that causes a code crash.
Of the above steps, the one that is most CPU-intensive is the parsing of the DOM/CSS. If you want your page to be processed faster, write efficient CSS by eliminating redundant instructions and consolidating CSS instructions into the fewest possible element references. Reducing the number of nodes in your DOM tree will also produce faster rendering.
Keep in mind that each asset you request from your HTML, or even from your CSS/JavaScript assets, is requested with a separate HTTP header. This consumes bandwidth and requires processing per request. If you want to make your page load as fast as possible, reduce the number of HTTP requests and reduce the size of your HTML. You are not doing your user experience any favors by averaging 180k of page weight from HTML alone. Many developers subscribe to the fallacy that a user makes up their mind about the quality of content on the page in 6 nanoseconds, then purges the DNS query from his server and burns his computer if displeased, so instead they provide the most beautiful possible page at 250k of HTML. Keep your HTML short and sweet so that a user can load your pages faster. Nothing improves the user experience like a fast and responsive web page.
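As a sketch of the placement advice in points 2 and 3 (the file names are placeholders):
<html>
    <head>
        <!-- CSS requested as high as possible, so rendering is not restarted -->
        <link rel="stylesheet" type="text/css" href="styles.css" />
    </head>
    <body>
        <!-- ...page content... -->
        <!-- scripts just before the closing body tag, so they do not block parsing -->
        <script src="app.js" type="text/javascript"></script>
    </body>
</html>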
Open your page in Firefox and get the HTTPFox addon. It will tell you all that you need.
Found this on archivist.incutio:
http://archivist.incutio.com/viewlist/css-discuss/76444
When you first request a page, your browser sends a GET request to the server, which returns the HTML to the browser. The browser then starts parsing the page (possibly before all of it has been returned).

When it finds a reference to an external entity such as a CSS file, an image file, a script file, a Flash file, or anything else external to the page (either on the same server/domain or not), it prepares to make a further GET request for that resource.

However, the HTTP standard specifies that the browser should not make more than two concurrent requests to the same domain. So it puts each request to a particular domain in a queue, and as each entity is returned it starts the next one in the queue for that domain.

The time it takes for an entity to be returned depends on its size, the load the server is currently experiencing, and the activity of every single machine between the machine running the browser and the server. The list of these machines can in principle be different for every request, to the extent that one image might travel from the USA to me in the UK over the Atlantic, while another from the same server comes out via the Pacific, Asia and Europe, which takes longer. So you might get a sequence like the following, where a page has (in this order) references to three script files and five image files, all of differing sizes:

GET script1 and script2; queue request for script3 and images1-5.
script2 arrives (it's smaller than script1): GET script3, queue images1-5.
script1 arrives; GET image1, queue images2-5.
image1 arrives, GET image2, queue images3-5.
script3 fails to arrive due to a network problem - GET script3 again (automatic retry).
image2 arrives, script3 still not here; GET image3, queue images4-5.
image3 arrives; GET image4, queue image5, script3 still on the way.
image4 arrives, GET image5;
image5 arrives.
script3 arrives.

In short: any old order, depending on what the server is doing, what the rest of the Internet is doing, and whether or not anything has errors and has to be re-fetched. This may seem like a weird way of doing things, but it would quite literally be impossible for the Internet (not just the WWW) to work with any degree of reliability if it wasn't done this way.

Also, the browser's internal queue might not fetch entities in the order they appear in the page - it's not required to by any standard.

(Oh, and don't forget caching, both in the browser and in caching proxies used by ISPs to ease the load on the network.)
If you're asking this because you want to speed up your web site, check out Yahoo's page on Best Practices for Speeding Up Your Web Site. It has a lot of best practices for speeding up your web site.
AFAIK, the browser (at least Firefox) requests every resource as soon as it parses it. If it encounters an img tag, it will request that image as soon as the tag has been parsed - possibly even before it has received the entire HTML document; that is, it could still be downloading the HTML document when that happens.
For Firefox, there are browser queues that apply, depending on how they are set in about:config. For example, it will not attempt to download more than 8 files at once from the same server; additional requests are queued. I think there are per-domain limits, per-proxy limits, and other settings, which are documented on the Mozilla website and can be set in about:config. I read somewhere that IE has no such limits.
The jQuery ready event is fired as soon as the main HTML document has been downloaded and its DOM parsed. Then the load event is fired once all linked resources (CSS, images, etc.) have been downloaded and parsed as well. This is made clear in the jQuery documentation.
If you want to control the order in which all that is loaded, I believe the most reliable way to do it is through JavaScript.
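For instance, a minimal sketch of enforcing order via dynamically created script tags (the file names reuse the example above; loadScript is a made-up helper):
<script>
    // Load scripts strictly one after another by chaining onload handlers.
    function loadScript(src, onDone) {
        var s = document.createElement("script");
        s.src = src;
        s.onload = onDone; // fires once the file has downloaded and executed
        document.getElementsByTagName("head")[0].appendChild(s);
    }
    loadScript("jquery.js", function () {
        loadScript("abc.js", function () {
            loadScript("kkk.js", function () {
                console.log("all scripts loaded, in order");
            });
        });
    });
</script>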
Dynatrace AJAX Edition shows you the exact sequence of page loading, parsing and execution.
The chosen answer does not seem to apply to modern browsers, at least not to Firefox 52. What I observed is that requests to load resources like CSS and JavaScript are issued before the HTML parser reaches the element. For example:
<html>
    <head>
        <!-- prints the date before parsing, and blocks HTML parsing -->
        <script>
            console.log("start: " + (new Date()).toISOString());
            for (var i = 0; i < 1000000000; i++) {}
        </script>
        <script src="jquery.js" type="text/javascript"></script>
        <script src="abc.js" type="text/javascript"></script>
        <link rel="stylesheet" type="text/css" href="abc.css" />
        <style>h2 { font-weight: bold; }</style>
        <script>
            $(document).ready(function(){
                $("#img").attr("src", "kkk.png");
            });
        </script>
    </head>
    <body>
        <img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
        <script src="kkk.js" type="text/javascript"></script>
    </body>
</html>
What I found is that the start times of the requests to load the CSS and JavaScript resources were not blocked. It looks like Firefox has an HTML scanner that identifies key resources (img resources are not included) before it starts to parse the HTML.

Can you take a "screenshot" of the page using Canvas?

I have a page where we're positioning a bunch of elements using CSS, and changing their "top and left" positions using JS.
I've had reports that these things have been misaligned, but a user has a motive to lie about this in order to "cheat", so I'm not exactly sure whether they're telling the truth. I'm trying to find a way to figure out whether they're lying or not, and to have some "proof" of it.
I know that Canvas has a method to copy image information from an image element, or another canvas element (kind of a BitBlt operation).
Would it be possible to somehow, with Canvas (or with something else, Flash, whatever), take a "picture" of a piece of the page?
Again, I'm not trying to take information from an <img>. I'm trying to copy what the user sees, which is composed of several absolutely positioned HTML elements (and I care most about those positions), and somehow upload that to the server.
I understand this can't be done, but maybe I'm missing something.
Any ideas?
Somebody asked a question earlier that's somewhat similar to this. Scroll to the bottom of YouTube and click the "Report a Bug" link. Google's Feedback Tool (JavaScript-driven) essentially does what you described. Judging by its code, it uses canvas and a JavaScript-based JPEG encoder to build a JPG image and send it off to Google.
It would definitely be a lot of work, but I'm sure you could accomplish something similar.
If a commercial solution is an option, check out SnapEngage. Click on the "help" button in the top-right to see it in action.
Setup is pretty straightforward: you just have to copy and paste a few lines of JavaScript code.
SnapEngage uses a Java applet to take screenshots; here is a blog post about how it works.
Yes, you can - see the following demo.
In the code below I define a table inside the body tag, but when you run it, it displays an image snapshot of the page instead.
<!doctype html>
<html>
<head>
    <meta charset="utf-8"/>
    <title>test2</title>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.js"></script>
    <script type="text/javascript" src="js/html2canvas.js?rev032"></script>
    <script type="text/javascript">
        $(document).ready(function() {
            var target = $('body');
            html2canvas(target, {
                onrendered: function(canvas) {
                    var data = canvas.toDataURL();
                    alert(data);
                    document.body.innerHTML = '<br/><br/><br/><br/><br/><br/><img src="' + data + '" />';
                }
            });
        });
    </script>
</head>
<body>
    <h1>Testing</h1>
    <h4>One column:</h4>
    <table border="1">
        <tr>
            <td>100</td>
        </tr>
    </table>
</body>
</html>
html2canvas official documentation: http://html2canvas.hertzen.com/
To download the html2canvas.js library, you can also use this link if you are unable to get it from the official one:
https://github.com/niklasvh/html2canvas/downloads
[I am not responsible for this link :P :P :P]
You can use the -moz-element() CSS function, which is currently only supported by Firefox:
background: -moz-element(#mysite);
Here #mysite is the element whose content is used as the background.
Open in Firefox: http://jsfiddle.net/qu2kz/3/
(tested on FF 27)
I don't think you can do that. However, you could recursively fetch clientHeight, clientWidth, offsetTop and offsetLeft to determine the positions of all elements on the page and send them back to the server, as in the sketch below.
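A rough sketch of that idea (the selector and the /layout-report endpoint are placeholders; getBoundingClientRect is used here as a convenient way to read rendered positions):
<script>
    // Collect the rendered position and size of every element of interest.
    function collectPositions(selector) {
        var report = [];
        var nodes = document.querySelectorAll(selector);
        for (var i = 0; i < nodes.length; i++) {
            var r = nodes[i].getBoundingClientRect();
            report.push({ id: nodes[i].id, top: r.top, left: r.left,
                          width: r.width, height: r.height });
        }
        return report;
    }
    // Send the layout report back to a hypothetical server endpoint.
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/layout-report");
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.send(JSON.stringify(collectPositions(".positioned")));
</script>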
On Firefox, you can create an AddOn that uses canvas.drawWindow to draw web content to a canvas. https://developer.mozilla.org/en/Drawing_Graphics_with_Canvas#Rendering_Web_Content_Into_A_Canvas
I don't think there's a way to do that on WebKit at the moment.
It can be done, with some limitations. There are some techniques, and they are exactly how many extensions take screenshots. From your description it seems that you aren't looking for a generic client-side solution to deploy, but just something that a user or some users could use and submit, so I guess using an extension would be fine.
Chrome:
I can point you to my open-source Chrome extension, Blipshot, which does exactly that:
https://github.com/folletto/Blipshot
Just to give some background:
You can do this, as far as I know, only from inside an extension, since it uses an internal function.
The function is chrome.tabs.captureVisibleTab, and it requires the tabs permission in the manifest.
The function grabs only the visible part of the active tab, so if you need just that, it's fine. If you need something more, you should have a look at the whole script, since it's quite tricky to get a full-page screenshot until Google fixes Bug #45209.
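For reference, a minimal sketch of the capture call from an extension's background page (assuming the tabs permission is declared in manifest.json):
// background.js
chrome.tabs.captureVisibleTab(null, { format: "png" }, function (dataUrl) {
    // dataUrl is a data: URI of the visible portion of the active tab;
    // it could now be uploaded or shown in a new tab for inspection.
    console.log(dataUrl.substring(0, 50) + "...");
});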
Firefox:
In Firefox, since 1.5, you can build extensions that use a custom extension of canvas, drawWindow, which is more powerful than Chrome's chrome.tabs.captureVisibleTab equivalent. Some information is here:
https://developer.mozilla.org/en/Drawing_Graphics_with_Canvas