How to include Google Search results on a webpage - html

I'm building a webpage about apples (the fruit), and I want to include Google search results for apple recipes at the bottom of the page. How can I do this?

For more advanced inclusion of Google results, check out this: https://developers.google.com/web-search/docs/#fonje
You can use AJAX-based search queries for your keywords and put the results on your website.
There are examples in the docs.
Update:
As that API is deprecated, you can alternatively scrape the results from Google instead of using the API.
There is an open-source PHP scraper that can be included in your website at http://scraping.compunect.com

You can accomplish this using iframes, but consider some alternatives first. You could create a Google Custom Search Engine that serves results only from recipe websites. And instead of using a frame to display results, you could simply make the custom search engine available, so that a user who is interested in finding a recipe can do so. Something else to consider: if you want to monetize this, Google AdSense would be the way to go.
<iframe src="http://www.google.com/search?q=apple+recipe"></iframe>
If you want to use an iframe, you can use the code above; however, this is not a good design decision.
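If you go the Custom Search Engine route instead, the embed code Google generates looks roughly like this sketch (YOUR_ENGINE_ID is a placeholder for your own engine's cx value); the searchbox/searchresults pair lets you keep the box near the top and render the results block at the bottom of the page, as the question asks:
<script async src="https://cse.google.com/cse.js?cx=YOUR_ENGINE_ID"></script>
<div class="gcse-searchbox"></div>
<div class="gcse-searchresults"></div>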


how to make a search engine using HTML

I just got into HTML and want to make some sort of search engine.
Is there a way to search the web from that page, or redirect the query to Google or another search engine?
You can use Google's Custom Search Engine (CSE) for this. It is really easy to set up. Go to the link given below.
https://www.google.com/cse/
You will get a snippet of code that you can paste into your webpage.
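The snippet you get back is roughly of this form (a sketch; YOUR_ENGINE_ID stands in for the cx value of the engine you create):
<script async src="https://cse.google.com/cse.js?cx=YOUR_ENGINE_ID"></script>
<div class="gcse-search"></div>
Dropping the gcse-search div anywhere on your page renders both the search box and the results.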

Google bot crawling on AngularJS site with HTML5 Mode routes

We have an AngularJS site using HTML5 routes. I just did some test "Fetch as Google" runs. The results are a bit confusing:
On the fetching tab, I see our site as it looks on view source, with all the front end bindings {{ }}, and not all the HTML rendered
On the rendering tab, our site looks perfectly fine, no {{ }} variables, it seems like Google bot fetched and rendered the site fine, which is maybe in line with this, http://googlewebmastercentral.blogspot.ae/2014/05/rendering-pages-with-fetch-as-google.html.
However, we had already prepared for Google not being able to crawl our site, so we have already added the <meta name="fragment" content="!"> tag, so the Googlebot revisits our page with "?_escaped_fragment_=". We followed this, https://developers.google.com/webmasters/ajax-crawling/docs/getting-started (section "3. Handle pages without hash fragments"). In our Nginx config we have something like this:
if ($args ~ "_escaped_fragment_=") {
    # serve the static HTML snapshots
}
and indeed it works fine if we pass the _escaped_fragment_= ourselves. However, the Googlebot never tried to crawl our site with this param, so it never crawled the snapshot. Are we missing something? Should we also add user-agent detection for the Googlebot in our Nginx conf? Something like this?
if ($http_user_agent ~* "googlebot|yahoo|bingbot|baiduspider|yandex|yeti|yodaobot|gigabot|ia_archiver|facebookexternalhit|twitterbot|developers\.google\.com") {
    # serve from snapshots
}
It would be great if we can understand this better, thank you so much in advance!
UPDATE:
I just read this, http://scotch.io/tutorials/javascript/angularjs-seo-with-prerender-io?_escaped_fragment_=tag#caveats. So, it seems that when using the manual tools (Fetch as Google), we should ourselves pass either #! or ?_escaped_fragment_= in the right place. Indeed, if I pass ?_escaped_fragment_= in our case, I do see the HTML snapshot that we have created.
Is that true? Is this how it works indeed?
UPDATE 2
At the bottom of this thread, a Google employee confirms that for Google Webmaster Tools' "Fetch as Google", you need to manually pass the _escaped_fragment_= param yourself, https://productforums.google.com/forum/#!msg/webmasters/fZjdyjq0n98/PZ-nlq_2RjcJ
Cheers,
Iraklis
I will try to answer your questions based on our experiences in the last month of developing a SPA with HTML5 mode.
How do I get Googlebot to use ?_escaped_fragment_= instead of the direct links?
This is actually quite simple but easy to overlook. In fact, there are two different ways to get Googlebot to try the escaped_fragment. The first method is to run your site in non-html5 mode. This means that your URLs will be of the form:
http://my.domain.com/base/#!some/path/on/website
Googlebot recognizes the #! and makes a second call to your server with an altered URL:
http://my.domain.com/base/?_escaped_fragment_=some/path/on/website
Which you can then handle as you wish. The second way to get Googlebot to try _escaped_fragment_ mode is to include the following meta tag on the index page you supply to the bot:
<meta name="fragment" content="!">
This will make googlebot check the other version of the webpage every time it sees the tag. Interestingly you can use both these techniques together or you can do what we ended up doing, which is running in html5 mode with the meta tag. This means that your URLs will be escaped as follows:
http://my.domain.com/base/some/path/on/website?_escaped_fragment_=
Interestingly, the bot will not put anything at the end of the fragment. But depending on what webserver you are running, you can easily map this with a pattern matching the "_escaped_fragment_" text to your alternate bot page. For more information on the escaped fragment go here.
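For example, with Nginx (the webserver the question uses) the mapping can be a couple of lines in the server block; this is only a sketch, and the /snapshots prefix, /var/www root, and .html naming are assumptions you would adapt to wherever your pre-rendered pages actually live:
# a crawler requesting /base/some/path?_escaped_fragment_=
# gets the matching pre-rendered file instead of the SPA shell
if ($args ~ "_escaped_fragment_=") {
    rewrite ^/(.*)$ /snapshots/$1.html last;
}
location /snapshots/ {
    root /var/www;   # snapshot files live under /var/www/snapshots/...
}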
"Fetch as Googlebot" returns two different versions of my page, the source with {{}} and the rendered page looking correct. What does that mean?
Google's bots have actually been able to interpret JavaScript to a limited extent since early 2014. For more information, read the official blog entry on Google Webmasters here. However, as the blog entry makes clear, this comes with a lot of caveats. For instance:
Googlebot does not guarantee that it will execute all JavaScript code.
Googlebot will attempt to find links in the javascript to follow and use them to help find more pages.
Googlebot will render the preview in webmasters tools by executing as much of the javascript as it can (thus the lack of {{}} in the rendered version).
Googlebot will not necessarily use the rendered version in order to build the meta information about your site for its index.
As of 18/12/2014, we are still unsure if Googlebot can actually extract any information from an SPA in rendered mode for its index beyond finding links to follow in the javascript. In our experience, Googlebot will include {{}} in its index listing so that when you try to use {{}} to fill meta information (description, keywords, title, etc...) your site looks like this in Google Search results:
{{meta.siteTitle}}
http://my.domain.com/base/some/path/on/website
{{meta.description}}
rather than what you expect which might look like this:
Domain
http://my.domain.com/base/some/path/on/website
This is a random page on my domain. An excellent example page to be sure!
Googlebot uses _escaped_fragment_ for search indexing, but we cannot be sure about other services
Google recommends serving an HTML snapshot of an AJAX website by using the hashbang (#!) and the _escaped_fragment_ param.
But, as is often the case with new Google features, not all Google services support it from the beginning.
For now, from experience, we are sure that the Googlebot used for indexing webpages uses the HTML snapshot and _escaped_fragment_. You can check your server access logs to confirm Google did this on your application.
For other services like PageSpeed Insights, the Webmaster Tools parser, the rich snippet testing tool, etc. (for now and from experience, nothing official from Google): the hashbang (#!) is not supported. You have to use _escaped_fragment_.
Should you use User Agent detection to serve HTML snapshot?
No. Just don't, for several reasons:
You just do not know which services/bots on the web might want to parse your content, and you cannot be exhaustive (for instance, think of all the social networks on the web that use a bot to create a snippet of your content: you cannot handle them one by one).
This can be considered cloaking: serving a different version depending on the type of user at the same URL, which is basically wrong for SEO.
Google looks for #! in our site URLs, takes everything after the #!, and adds it to the _escaped_fragment_ query parameter. Some developers create basic HTML pages with the real data and serve those pages from the server side at crawl time. So why not render the same pages with PhantomJS on the server side whenever the request has _escaped_fragment_?
For more detail please read this blog.
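One way to wire that up with Nginx (a sketch only; the PhantomJS-based prerender service on 127.0.0.1:3000 and the /prerender prefix are assumptions, not something Google or PhantomJS prescribes) is to hand the _escaped_fragment_ requests to that local service:
# inside the server block
if ($args ~ "_escaped_fragment_=") {
    rewrite ^ /prerender$uri last;
}
location /prerender/ {
    # the trailing slash on proxy_pass strips the /prerender prefix,
    # so /prerender/base/some/path is fetched as /base/some/path
    proxy_pass http://127.0.0.1:3000/;
}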
Maybe a bit outdated, but for completeness:
According to the statement from May 23, 2014, Googlebot is now able to "see your content more like modern Web browsers".
According to their statement from October 14, 2015, Google deprecated the AJAX crawling scheme.
So using the HTML5 History API (html5mode in Angular) should be no problem for Google.

Get only images

Is there any way to get only images (or only audio or video) using the Box API?
I tried to use the Search API and an If-Match condition but didn't find any way to do this.
Please suggest...
Do your audio/video files have extensions? The Search API should work in this case. You can search for multiple extensions at one time if you put a space between them:
GET /2.0/search?query=jpg%20mp4%20png
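Spelled out as a full request against the v2 API it would look roughly like this (a sketch; YOUR_ACCESS_TOKEN is a placeholder for a valid OAuth 2.0 token):
curl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
     "https://api.box.com/2.0/search?query=jpg%20mp4%20png"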

GOOGLE: how to prevent subpages from appearing in results

I have a fairly new website which allows people to create their own profiles and such. The issue is that when someone links to their profile from their website/blog, their profile shows up in google searches for my website - and to date the one person who has done this has a NSFW profile. Which means, when you search for my site on Google one of the top results is a NSFW page.
How do I prevent google from listing subpages in the results? Would robots.txt solve this? And if a page is already listed, will adding an entry in robots.txt disallowing access to profile pages in general end up removing it from the results?
robots.txt will solve it to some extent, but if there are direct external links, I have found that Google still indexes those pages.
Go to http://webmaster.google.com, claim your website, and then use their URL removal tool.
Yes, see http://www.robotstxt.org/. Just list things like "Disallow: /profile/" etc. and Google will stop indexing them and, after a time, remove them.
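For example, assuming the profile pages live under /profile/ (adjust the path to your URL structure), the robots.txt entries would be:
User-agent: *
Disallow: /profile/
Keep in mind that robots.txt stops crawling but does not always remove URLs that are already indexed, which is why the URL removal tool mentioned above is the quicker route for pages already listed.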

How do you capture a screenshot of an arbitrary URL on the server side? (with Chromium)

Google Chrome, for example, captures thumbnails of the sites you visit. Is there a library that can take such a capture of an arbitrary URL on the server side?
Like this site: link text
Are you asking how to screen capture a website that someone submits, like the thumbnails in the site you linked to?
The simplest way would be to use a service like one of these, and then write a script to grab the image.