Thumbnails from HTML pages created and used automatically in a web application

I am working on a Ruby on Rails app that visualizes product trees. The tree is built of nodes and everything is rendered in HTML/CSS3. Some of the products trigger several hundred SQL queries as the tree builds up (up to 800 queries on the biggest tree).
I'd like to have small thumbnails of each tree to present on an index page. Rendering each tree once again and modifying the CSS to make a tiny representation is one option.
But I think it's probably easier to generate thumbnails, then crop, cache, and show these on the index page.
Any ideas on how to do this? Any links/articles/blog posts that could help me?

Check out websnapr; it looks like they provide 100,000 free snaps a month.

I should check this site more often. :D Anyway, I've done some more research and it looks like you'll need to set up some server-side scripts that will open a browser to the page, take a screenshot, and save the file, store it in a database, etc.

This question has been open for quite a while. I have a proposal which actually fulfills most of the requirements.
Webkit2png can create screenshots and crop parts of the image. You can specify dimensions and crop areas, and it also provides a thumbnail of the pages.
However, it will not support logging in to your application out of the box.
Webkit2png is really easy to use in a shell script, so you can just feed it a list of URLs and it will return all the image files.
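For example, a minimal batch sketch in Ruby (the URLs here are placeholders, and you may need to adjust flags like -T and -o to your webkit2png version):

# Shell out to webkit2png for a list of pages and collect thumbnails.
# Assumes webkit2png is installed and on the PATH.
urls = {
  "tree-1" => "http://localhost:3000/trees/1",  # placeholder URLs
  "tree-2" => "http://localhost:3000/trees/2",
}
urls.each do |name, url|
  # -T asks for a thumbnail-sized image, -o sets the output file name
  system("webkit2png", "-T", "-o", name, url) or warn "failed: #{url}"
end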
More info in this blog post: Batch Screenshots with webkit2png
Webkit2png has an open request to add authentication (so you can use it on logged in pages).

Related

Do I need an HTML file for every single page on my website?

Say I have a product website, like Amazon (this is not the case, but it will help me explain my point), and I have a URL for every single product (such as with Amazon)...
Do I need to copy-paste and modify an HTML file for every single individual product page, or is there a way to use a "model" on which I can base all my other pages without recopying the whole code and modifying a few things in each?
I've just started learning HTML and web development, so bear with me if I'm asking a stupid question.
It just seems odd to me that a million-page website should host a million+ individual, nearly identical, HTML files.
Thank you very much in advance.
P.S. I'm using Amazon's brand name as an example here, and am not affiliated with anything related to it. Thank you for understanding.
No, you do not need an HTML file for every single page on your website. While you could do that, it becomes very hard to manage as your site grows. On most websites you would have the following components:
A front end - consists of HTML code and usually some sort of template engine with placeholders for your data
A backend - consists of your data store (usually a database).
There will also usually be some form of API and/or middleware between your front end and backend.
If you go to https://example.com/myproductid in your browser, your computer will send that request to the web server. The web server will then retrieve your data, load it into the correct template, and serve the page to you.
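As a minimal sketch of that template idea, using Ruby's built-in ERB library (the product fields here are made up for illustration):

require "erb"

# One template serves every product page; only the data changes.
template = ERB.new(<<~HTML)
  <h1><%= name %></h1>
  <p>Price: <%= price %></p>
HTML

# In a real app this hash would come from your database.
product = { name: "Example Widget", price: "$9.99" }
puts template.result_with_hash(product)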
In traditional HTML- and PHP-only websites, you would have to reload the entire page each time you went to a new product. However, you can instead use a technique called Ajax to update only certain parts of a web page rather than reloading the whole thing. That way you can just update the text, images, and links that are specific to the product, and the rest of the page stays the same. (Note: Ajax originally used XML; modern implementations usually use JSON.)
Ultimately, you will want to learn some JavaScript and then start looking into various web frameworks or libraries such as ReactJS.
No, you can have just one page for all products, but you have to make it dynamic.
Yes, you need an HTML document for each of your webpages; for example, the home page and the contact-us page need different HTML documents.

Can I integrate the GrapeJS website builder into my own website?

Does anyone know if I can integrate GrapeJS into my own website so clients could build their own websites using it? If anyone has done this, how easy is it and are there downsides?
This question is pretty open ended, but I'll take a shot at it.
The short answer is yes, you can use Grapesjs to allow clients to make their own sites; however, the details matter.
Grapesjs by default doesn't know anything about your stack, website structure, metadata, etc. You will need to either supply plugins or implement those features yourself. I've worked on a project for a company that used Grapesjs to implement single-page apps, and I'll list just some of the tweaks we had to manage:
Hiding certain layers that only confuse average users.
Hiding pretty much all of the styling, and using traits to allow people to pick from some predefined styles.
Taking the HTML and CSS on store, generating the final HTML page, and saving it in our static serving folder on the server (see the sketch after this list).
Implementing a wrapping "App" component that has traits for the different metadata we want users to control (Open Graph metadata, title, etc.).
and those are just the big things, I'm sure I am forgetting several small ones.
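To give a flavor of the "generate the final HTML page" step, here is a rough sketch of a store endpoint in Ruby with Sinatra. The route and field names are assumptions; point GrapesJS's storage manager at an endpoint like this, and check your version's docs for the exact payload it sends:

require "sinatra"

# Receives the HTML/CSS that the GrapesJS storage manager posts on save,
# wraps it in a full page, and writes it to the static serving folder.
post "/pages/:slug/store" do
  html = params["gjs-html"]  # field names depend on your GrapesJS config
  css  = params["gjs-css"]
  page = "<!DOCTYPE html><html><head><style>#{css}</style></head>" \
         "<body>#{html}</body></html>"
  File.write(File.join("public", "#{params[:slug]}.html"), page)
  "ok"
end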
For your application, you'll also need to implement a custom trait for links/buttons that lets you link from one "page" to another, as well as a way to let the user pick which page to work on.
The long answer is Yes, but Grapesjs is only the starting point.
Yes, you can.
However, it is not straightforward.
If you want to build a Drag Drop Editor like GrapeJS Demo, here is the Source Code - https://github.com/artf/grapesjs-preset-webpage
You can see an implementation at https://codegres.org/dragdrop

How do I code a scrolling window to stay at a certain link per page

On this page, I want to get my scrolling dinosaur name window to specifically keep that dinosaur's name at the top so the person doesn't have to scroll all the way down to the next dinosaur.
I also want to know if there's an easier way to do this window.
My predicament is this....
I have over 30 dinosaurs on here. Each time I add a new one I have to update each and every one of the dinosaur pages to add that one new dinosaur. It's not really time-effective... Is there a better way without having to use frames?
My code is open so you can look at it and modify it at your leisure.
Thanks!
Vince
At this point I would suggest you go for server-side code. Since you have 30 dinosaurs, it would be much easier to create and maintain a simple page using server-side scripts such as PHP or ASP.NET to load the dinosaur from a database.
What are server-side scripts?
Server-side scripts allow you to dynamically generate a page on the fly whenever the user requests it. For example, take YouTube's search page. Rather than generating a separate page for every single possible search term, they simply have a base template and fetch the relevant results based on the search query. The same can be applied to your site: you can have one page for all the dinosaurs, and you would just load the appropriate dinosaur based on the URL.
Once you do that, putting the current dinosaur at the top of the page would be a trivial task. Since it appears that you already have a fair amount of knowledge in HTML, it should be easy for you to pick up and use some PHP. Codecademy has some excellent tutorials.
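The PHP suggestion above is the natural fit, but just to show the shape of the idea, here is a minimal sketch in Ruby with Sinatra (the route and the dinosaur data are invented for illustration):

require "sinatra"

# One route serves every dinosaur page; in a real app this hash
# would be a database table.
DINOSAURS = {
  "camarasaurus" => "A long-necked sauropod from the Late Jurassic.",
  "triceratops"  => "A three-horned ceratopsid from the Late Cretaceous.",
}

get "/dinosaurs/:name" do
  info = DINOSAURS[params[:name]] or halt 404, "No such dinosaur"
  "<h1>#{params[:name].capitalize}</h1><p>#{info}</p>"
end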
Along the same lines as Kevin's answer, but more specifically, I'd recommend you look into a PHP MVC framework such as CakePHP, Laravel, or CodeIgniter.
You've done all the hard work manually building these pages, which is awfully time consuming.
Once you learn one of these frameworks, you'll be able to rebuild this site in a day.
If the items in your list had id attributes on them, you could scroll the list to a position by linking to #whatever. Here's a quick example of a link and the target it scrolls to:
<a href="#camarasaurus">Camarasaurus</a>
<li id="camarasaurus">Camarasaurus</li>
Here's a small example: http://jsbin.com/ExExEvAB/1/edit?html,css,output
As for making it easier to administrate, I'd look into PHP, since it's widely available and there are tons of resources to learn from. What you're basically looking for is <?php include "dinosaur-menu.html" ?>, since you're thinking in terms of frames. You can take it further, but this alone should make it a ton easier to update.
I really started to enjoy Mixture recently. It's great for prototyping and is, in my opinion, perfect for exactly what you're trying to do here.

How can I save a webpage as an image in my rails app?

In my rails app I have a need to save some webpages and display them to the user as images. For example, how would I save www.google.com as an image?
There is a command line utility called CutyCapt that uses the WebKit rendering engine to render HTML pages into various image formats. Maybe this is for you?
http://cutycapt.sourceforge.net/
Prohibitively difficult to do in pure Ruby, so you'd want to use an external service for this. Browsershots does it, for example, and it looks like they have an api, although I haven't used it myself. Maybe someone else can chime in with alternative but similar services.
You'll also want to read up on delayed_job or something similar, to make sure you're accessing those page images as a background task and that it doesn't interfere with your actual application.
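As a rough sketch of how a background job could drive a tool like CutyCapt from the answer above (the job class and paths are assumptions, following delayed_job's custom-job pattern; CutyCapt's --url and --out flags are documented):

# Renders a page to an image with CutyCapt in the background, so the
# request cycle never waits on the slow rendering step.
class PageSnapshotJob < Struct.new(:url, :output_path)
  def perform
    system("CutyCapt", "--url=#{url}", "--out=#{output_path}") or
      raise "CutyCapt failed for #{url}"
  end
end

# Enqueue from a controller or model:
# Delayed::Job.enqueue PageSnapshotJob.new("http://www.google.com",
#                                          "public/snapshots/google.png")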
You can't do it easily (probably can't do it at all).
Each page is just text - HTML data. The view you want to make an image of is a rendered page. A browser renders the page using tons of techniques like HTML parsing, JavaScript parsing, CSS parsing, font rendering, etc. To make a screenshot of the Google page, you would need to do all that rendering somewhere in memory and then take a screenshot of the rendered page.
That task is almost impossible (though nothing is fully impossible).
If you are really eager to devote tons of time to that task, these are the steps:
1) Find some opensource rendering engine. Firefox would do.
2) Find some way to communicate between ruby-on-rails and that engine.
3) Wire it all together and see the results.
However, I see steps 1 and 2 as nearly impossible.
Firefox addon:
https://addons.mozilla.org/en-US/firefox/addon/1146/

Best solution for turning a website into a PDF

The company I work for has a CBT system that we developed. We have to go through and create books out of the content that is in our system, so I developed a program that downloads all of the content out of our system and creates an offline version of the different training modules.
I created a program that creates PDF documents using the offline version of the CBT. It works by using Websites Screenshot to take a screenshot of the different pages and then using iTextSharp to create a PDF document from those images.
It seems to be a memory hog and painfully slow. There are 40 CBT modules that it needs to turn into books. Even though I take every step to clear the memory after each book it creates, after about 2 books it crashes because there is no memory left.
Is there a better way to do this, instead of taking screenshots of the pages, that will still yield the same look of the web page inside the PDF document?
I have searched and demoed and found that ABCPdf from WebSuperGoo is the best product for .NET. It is the most accurate and doesn't require a printer driver. It uses IE as the rendering engine, so it looks almost exactly like what you get in IE.
PrinceXML is commercial software that generates pdf from websites.
I've used PDFSharp in the past and have had good success generating PDFs.
It's open source as well, so in the event of troubles like you've mentioned, you're able to hunt and peck to increase performance.
If you control the source, it is probably not too difficult to generate the PDF directly instead of going through a screenshot.
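For instance, you can build the document straight from the content data. Here is a sketch in Ruby with the Prawn library (the module data is invented; in .NET, iTextSharp can assemble text and images directly in much the same way):

require "prawn"

# Build the book from the modules' text directly, instead of
# screenshotting rendered pages.
modules = [
  { title: "Module 1: Safety Basics", body: "Lesson text goes here..." },
  { title: "Module 2: Equipment",     body: "More lesson text..." },
]

Prawn::Document.generate("book.pdf") do
  modules.each do |mod|
    text mod[:title], size: 18, style: :bold
    move_down 10
    text mod[:body]
    start_new_page unless mod == modules.last
  end
end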
Did you try unloading the DLL?
There are also different ways of getting screenshots:
http://mashable.com/2007/08/24/web-screenshots/