I am trying to get all, or at least many, bookmarks for a given tag out of Delicious. I remember that it was possible to use pagination in earlier versions of the Delicious feed mechanics.
But I was astonished to see that pagination no longer seems to work.
Is there any way for me to retrieve many bookmarks for a specific tag, or is there no chance?
Thanks for the help
Philipp
I am wondering how one would go about using a permalink from another website to extract data about that particular page, especially when looking for specific information. Kind of how YouTube has websites that essentially use the link to a video to download and convert it to MP3 format. It's for a college project in HTML5, but after researching the subject for about a week I didn't come up with a lot of information on how to go about it using HTML. Any help will be appreciated. Just the basic structure is necessary; I'm not an expert or anything, I just want to actually learn, so I need some pointing in the right direction.
Thanks in advance :)
Oh, and to be more direct: I mean in such a way as to list certain products at the price they are listed for on the site they are listed on, but from within my own site (all in HTML...). Figured I should be more direct.
Those websites that extract audio from YouTube are probably doing it using Python or something similar on the server side.
This might help you.
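In other words, this kind of thing happens on a server, not in plain HTML: the browser won't let your page read another site's pages cross-origin. Here is a minimal Node.js sketch of the general idea; the URL and the price regex are made-up placeholders, and a real version would use a proper HTML parser (e.g. cheerio) and respect the target site's terms.

// Minimal sketch: fetch a product page server-side and pull out a price.
// The URL and the regex below are placeholders, not a real site's markup.
// Requires Node 18+ for the built-in fetch.
async function getPrice(permalink) {
  const res = await fetch(permalink);
  const html = await res.text();

  // Hypothetical: assume the price sits in an element like
  // <span class="price">£12.99</span> on the target page.
  const match = html.match(/<span class="price">([^<]+)<\/span>/);
  return match ? match[1].trim() : null;
}

getPrice('https://example.com/products/some-item')
  .then(price => console.log('Listed price:', price))
  .catch(err => console.error(err));

Your own page can then call a small server endpoint that runs this and returns the result, which is how the YouTube-to-MP3 sites work too.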
I'm sorry, I do not know how to word that title better. I have tried searching Google, but my terminology isn't helping my results.
Let me explain the context. When you're on a news website or blog, on their homepage like www.homepage.co.uk/, and you click an article, it goes somewhere like www.homepage.co.uk/2017/article/. How do they make the 2017 appear? Because if you remove the /article/ from the URL, it takes you to an archive of all the links from that year. I don't understand; is there a process to this?
When I click a link in my website it goes to: www.website.co.uk/link
I want to be able to have that 2017/link/ in the URL so visitors can find the archive for that year, just like on those websites.
How do I do this?
I am sorry if I am not explaining this very well.
I understand changing my filenames to "2017/article.html" might work, but I do not believe that is the correct way of doing it?
Thanks a lot for your time and suggestions!
You're asking about a couple of things. One is the taxonomy of the site. Taxonomy, if you don't know, is the "shape" of your site, or how it is organized. News sites, for instance, are usually organized by date and perhaps topic (Health and Leisure, Politics, Entertainment, etc.).

The other aspect of your question is what you might call RESTful "hacking" of URLs. One of the tenets of REST is that URLs (URIs, to be accurate) are supposed to be hackable. A news site might have /2017/10/10 to display all articles for Oct 10; remove the last "10" and you get all the articles for October so far.

If you are not using a site platform that does this for you, you will have to maintain that taxonomy yourself and manually write all the links. Systems such as Drupal and Joomla, among others, will translate your taxonomy into automatically maintained links. When editing a page on one of these platforms, you typically refer only to the system's internal name of the page (which could be a shortened version of the article's title in the above example), and the underlying engine takes care of constructing the URL for you (in case the page moves, or its tags/taxonomy change).
This is a big topic, and I encourage you to do some further reading:
http://searchcontentmanagement.techtarget.com/feature/Building-a-website-taxonomy-in-eight-steps
https://www.drupal.org/docs/7/organizing-content-with-taxonomies/organizing-content-with-taxonomies
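If you are not on a CMS and want to roll this yourself, the /2017/article/ pattern is usually just URL routing rather than actual folders of files on disk. A minimal sketch with Node.js and Express (the articles data and markup here are made-up placeholders):

// Minimal Express sketch of hackable, date-based URLs.
// The articles array and the HTML snippets are placeholders.
const express = require('express');
const app = express();

const articles = [
  { year: '2017', slug: 'article', title: 'Example article' },
];

// /2017/article -> one article
app.get('/:year/:slug', (req, res) => {
  const a = articles.find(
    x => x.year === req.params.year && x.slug === req.params.slug
  );
  if (!a) return res.status(404).send('Not found');
  res.send(`<h1>${a.title}</h1>`);
});

// /2017 -> archive of everything from that year
app.get('/:year', (req, res) => {
  const list = articles.filter(x => x.year === req.params.year);
  res.send(
    list.map(x => `<a href="/${x.year}/${x.slug}">${x.title}</a>`).join('<br>')
  );
});

app.listen(3000);

The same idea applies with Apache/nginx rewrite rules or any other framework's router: the URL is mapped to content, not to a physical file path, which is why renaming your files to "2017/article.html" isn't really how those sites do it.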
I am currently working on HTML templates for an e-commerce shop (Magento).
We want to send out daily emails with recently uploaded products.
Right now we would have to dive into the HTML each time and change img tags etc. for each product (about 10 per mail).
So here is my question: is it possible to pull in new products automatically each day? I've read about the Magento RSS feed in combination with Mailchimp merge tags to display the latest products. Is this the best way to do this, or are there other ways to automate it?
I talked to some friends of mine, who said that RSS is supposedly "dead" and that this would be, if it works at all, more of a quick hack than a long-term sustainable solution.
So I would be thankful if someone has an insight here.
This is a viable solution. Beyond that, you'd probably have to look into the APIs of each service. Ultimately the API would give you the most 'automatic' result, but it would take some coding to set it up for your specific needs.
Give the RSS a go.
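If you go with RSS, the daily step is basically: fetch the feed, take the newest items, and render them into the mail's HTML block. A rough Node.js sketch of that idea (the feed URL and the markup are assumptions, and a real version should use a proper RSS parser instead of this naive regex):

// Rough sketch: turn the newest items of a product RSS feed into an HTML
// block for a daily mail. Feed URL and markup are placeholders.
async function buildDigest(feedUrl, count = 10) {
  const xml = await (await fetch(feedUrl)).text();

  const items = [...xml.matchAll(/<item>([\s\S]*?)<\/item>/g)]
    .slice(0, count)
    .map(([, item]) => ({
      title: (item.match(/<title>([\s\S]*?)<\/title>/) || [])[1] || '',
      link: (item.match(/<link>([\s\S]*?)<\/link>/) || [])[1] || '',
    }));

  return items
    .map(p => `<p><a href="${p.link}">${p.title}</a></p>`)
    .join('\n');
}

// Hypothetical "new products" feed URL for your shop:
buildDigest('https://shop.example.com/rss/catalog/new')
  .then(html => console.log(html));

Mailchimp's RSS-to-email campaigns can do roughly the same thing for you with merge tags, so you only need custom code if you want more control over the layout.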
For those of you who i18n, do you i18n the alt attribute on your img tags? Is it really worth it?
Well, I guess if you wanted your site to be truly i18n'd, then yes.
It might be an extra headache, but some vision-impaired people out there are thanking you :)
Also, are you using i18n JavaScript strings too?
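If you do go that route, one string table can serve both the alt attributes and your JavaScript strings, so the extra cost is mostly just more keys to translate. A tiny sketch (the dictionary, keys, and element are made up):

// Tiny sketch: one string table serving both alt attributes and JS messages.
// The translations object, keys, and #logo element are made-up examples.
const translations = {
  en: { 'logo.alt': 'Company logo', 'cart.empty': 'Your cart is empty' },
  de: { 'logo.alt': 'Firmenlogo', 'cart.empty': 'Ihr Warenkorb ist leer' },
};

const lang = document.documentElement.lang || 'en';
const t = key => (translations[lang] || translations.en)[key] || key;

// i18n'd alt attribute
document.querySelector('#logo').alt = t('logo.alt');

// i18n'd JavaScript string
alert(t('cart.empty'));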
If you decide to translate your alt tags and find yourself needing to manage the strings and/or get the actual translation done, we have a pretty cool tool (well, I like to think so :) called String - http://mygengo.com/string
String is great not just for managing translations, where you can invite others to projects to help translate, but you can also order translations right in the service. We've integrated our API into String to showcase it, including status updates for numerous (100s to 1000s of) jobs, translated by real people!
If you're interested in the API itself, we held a bounty contest not long ago with some fun winners for a number of platforms (Wordpress, Django, etc.): http://mygengo.com/services/api/lab/winners/
Just thought I'd share.
I'm looking for ways to prevent indexing of parts of a page. Specifically comments, since they add a lot of user-written text that skews what an entry ranks for; a Google search that should hit the article itself often returns lots of irrelevant pages instead.
Here are the options I'm considering so far:
1) Load comments using JavaScript to prevent search engines from seeing them.
2) Use user agent sniffing to simply not output comments for crawlers.
3) Use search engine-specific markup to hide parts of the page. This solution seems quirky at best, though. Allegedly, this can be done to prevent Yahoo! from indexing specific content:
<div class="robots-nocontent">
This content will not be indexed!
</div>
Which is a very ugly way to do it. I read about a Google solution that looks better, but I believe it only works with the Google Search Appliance (can someone confirm this?):
<!--googleoff: all-->
This content will not be indexed!
<!--googleon: all-->
Does anyone have other methods to recommend? Which of the three above would be the best way to go? Personally, I'm leaning towards #2, since while it might not work for all search engines, it's easy to target the biggest ones. And it has no side effect on users, unless they're deliberately trying to impersonate a web crawler.
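For reference, here's roughly what I have in mind for #2, as a Node.js/Express sketch (the user-agent list is incomplete and the comment loader is a stand-in):

// Rough Express sketch of option 2: don't output comments for known crawlers.
// The user-agent pattern is incomplete and purely illustrative.
const express = require('express');
const app = express();

const BOT_PATTERN = /googlebot|bingbot|slurp|duckduckbot/i;

// Stand-in for whatever actually loads comments from the database.
const loadComments = id => [`First comment on ${id}`, `Second comment on ${id}`];

app.get('/article/:id', (req, res) => {
  const isBot = BOT_PATTERN.test(req.get('User-Agent') || '');
  const comments = isBot ? [] : loadComments(req.params.id);

  res.send(`
    <h1>Article ${req.params.id}</h1>
    <div id="comments">${comments.map(c => `<p>${c}</p>`).join('')}</div>
  `);
});

app.listen(3000);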
I would go with your JavaScript option. It has two advantages:
1) bots don't see it
2) it would speed up your page load time (load the comments asynchronously and unobtrusively, e.g. via jQuery) ... page load times have a much underrated positive effect on your search rankings
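For example, something like this with plain fetch instead of jQuery (the /comments endpoint and the JSON shape are assumptions):

// Sketch of option 1: pull comments in after page load, so crawlers that
// don't execute JavaScript never see them. The /comments endpoint, the
// data-article-id attribute, and the { author, text } shape are assumptions.
document.addEventListener('DOMContentLoaded', async () => {
  const container = document.getElementById('comments');
  const articleId = container.dataset.articleId;

  const res = await fetch(`/comments?article=${articleId}`);
  const comments = await res.json();

  container.innerHTML = comments
    .map(c => `<p><strong>${c.author}</strong>: ${c.text}</p>`)
    .join('');
});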
JavaScript is an option, but engines are getting better at reading JavaScript. To be honest, I think you're reading too much into it. Engines love unique content; the more content you have on each page the better, and if the users are providing it... it's the holy grail.
Just because your commenter made a reference to Star Wars on your toaster review doesn't mean you're not going to rank for the toaster model; it just means you might also rank for "Star Wars toaster".
Another idea: you could show comments only to people who are logged in. CollegeHumor does the same, I believe; they show the number of comments a post has, but you have to log in to see them.
googleoff and googleon are for the Google Search Appliance, which is a search engine they sell to companies that need to search through their own internal documents. It's not effective for the live Google site.
I think number 1 is the best solution, actually. Search engines don't like it when you serve them different material from what you serve your users, so number 2 could get you kicked out of the search listings altogether.
This is the first I have heard that search engines provide a method for informing them that part of a page is irrelevant.
Google has a feature that lets webmasters declare which parts of their site a search engine should crawl to find pages:
http://www.google.com/webmasters/
http://www.sitemaps.org/protocol.php
You might be able to relatively de-emphasize some things on the page by specifying the most relevant keywords using META tag(s) in the HEAD section of your HTML pages. I think that is more in line with the engineering philosophy used to architect search engines in the first place.
Look at Google's Search Engine Optimization tips. They spell out clearly what they will and will not let you do to influence how they index your site.