Is Microsoft Exchange NDR compliant with RFC 3461 and RFC 3834?

I am trying to parse NDRs (Non-Delivery Reports) from a plethora of providers, including but not limited to Microsoft Exchange, Gmail, Yahoo! and Microsoft Live.
However, I am not sure whether Microsoft Exchange (all currently supported versions) conforms to the relevant RFCs that the other providers listed above conform to.
Any help would be appreciated.

A very quick Google search for "Exchange RFC 3461" returns this Knowledge Base page listing all the supported RFCs. Both RFCs are in there.
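For what it's worth, a standards-compliant NDR (which, per that list, includes Exchange's) arrives as a multipart/report message containing a machine-readable message/delivery-status part (RFC 3464), so you usually don't need provider-specific parsing at all. Here's a minimal sketch of pulling out the status fields with Python's standard library; "bounce.eml" is a placeholder for whatever raw message your provider hands you:

    import email
    from email import policy

    with open("bounce.eml", "rb") as f:
        msg = email.message_from_binary_file(f, policy=policy.default)

    # Standards-compliant NDRs are multipart/report messages containing
    # a machine-readable message/delivery-status part.
    for part in msg.walk():
        if part.get_content_type() == "message/delivery-status":
            # The payload is a series of header-style blocks: one
            # per-message block, then one block per failed recipient.
            for block in part.get_payload():
                recipient = block.get("Final-Recipient")
                action = block.get("Action")    # e.g. "failed"
                status = block.get("Status")    # e.g. "5.1.1"
                if recipient:
                    print(recipient, action, status)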

Related

Why implement Base64 encoding instead of other encodings in MIME over the SMTP protocol?

The title says everything.
I would like to ask if there's any place on the internet where I can look up the other "candidates" to the MIME protocol.
Thanks in advance.
MIME isn't a protocol; it's really just a format specification.
That said, there are no alternatives for use with SMTP. There are no open-specification alternatives either. There are proprietary alternatives, but they aren't what the general internet uses: for example, Exchange can (or used to?) use a SOAP-based protocol, GroupWise had its own custom protocol, and I'm sure so did Lotus Notes... but they all also support MIME, SMTP, POP3 and IMAP.
There's also no website that I know of that lists them.
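As to the "why Base64" part of your title: SMTP was originally a 7-bit ASCII channel, so MIME needs a way to carry arbitrary bytes using only safe printable characters, and Base64 does exactly that, at roughly a 33% size cost. A quick sketch in Python:

    import base64

    # Bytes that a 7-bit mail channel can't carry raw.
    binary = bytes([0x00, 0xFF, 0x89, 0x50, 0x4E, 0x47])

    encoded = base64.b64encode(binary)
    print(encoded)                # b'AP+JUE5H' -- plain printable ASCII
    assert base64.b64decode(encoded) == binary   # round-trips losslessly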

Is Exchange 2010 compliant with RFC 3848

I have looked up
Exchange 2010 Support for RFC Standards
but do not see RFC 3848 amongst the supported standards. I do see RFC 4954 listed, which 'recommends use of RFC 3848 transmission types', though I cannot tell if this means any mail server compliant with 4954 must also be compliant with 3848.
Specifically, I am trying to find out whether Exchange is capable of adding the 'ESMTPA' or 'ESMTPSA' keywords to Received headers to indicate SMTP authentication. It would seem peculiar if Exchange did not support this standard...
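For context, here is the sort of thing I am looking for: RFC 3848 has the receiving server record a "with ESMTPA" (or ESMTPSA, over TLS) clause in the Received header it adds for an authenticated submission. A rough sketch of the check, against a made-up header rather than real Exchange output:

    import re

    # Hypothetical Received header, not actual Exchange output.
    received = ("from client.example.com (client.example.com [192.0.2.1]) "
                "by mx.example.com with ESMTPSA id abc123; "
                "Mon, 1 Jan 2024 00:00:00 +0000")

    # Matches both ESMTPA and ESMTPSA transmission-type keywords.
    match = re.search(r"\bwith\s+(ESMTPS?A)\b", received)
    if match:
        print("Authenticated submission:", match.group(1))   # ESMTPSA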
Thank you.

What exactly is a MIME type [closed]

I've researched, but all I can find is that the manifest file should have the correct MIME type, and that's text/cache-manifest. I have no idea what a MIME type is.
As stated on Wikipedia:
An Internet media type is a standard identifier used on the Internet
to indicate the type of data that a file contains. Common uses include
the following:
email clients use them to identify attachment files,
web browsers use them to determine how to display or output files that are not in HTML format,
search engines use them to classify data files on the web.
A media type is composed of a type, a subtype, and zero or more
optional parameters. As an example, an HTML file might be designated
text/html; charset=UTF-8. In this example text is the type, html is
the subtype, and charset=UTF-8 is an optional parameter indicating the
character encoding.
IANA manages the official registry of media types.
The identifiers were originally defined in RFC 2046, and were called
MIME types because they referred to the non-ASCII parts of email
messages that were composed using the MIME (Multipurpose Internet Mail
Extensions) specification. They are also sometimes referred to as
Content-types.
Their use has expanded from email sent through SMTP, to other
protocols such as HTTP, RTP and SIP.
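To make the type/subtype/parameter structure concrete, here's a minimal sketch that takes apart the quoted example using Python's standard library (attribute names per email.headerregistry):

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["Content-Type"] = "text/html; charset=UTF-8"

    print(msg.get_content_type())              # text/html
    print(msg.get_content_maintype())          # text  (the type)
    print(msg.get_content_subtype())           # html  (the subtype)
    print(dict(msg["Content-Type"].params))    # {'charset': 'UTF-8'}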
MIME types have this name because of their original purpose. According to Wikipedia:
Multipurpose Internet Mail Extensions (MIME) is an Internet standard
that extends the format of email to support: Text in character sets
other than ASCII, Non-text attachments, Message bodies with multiple
parts, Header information in non-ASCII character sets.
Although MIME was designed mainly for the SMTP protocol, its use today has
grown beyond describing the content of email and now often includes
describing content type in general, including for the web (see Internet
media type) and as a storage for rich content in some commercial
products (e.g., IBM Lotus Domino and IBM Lotus Quickr).
Virtually all human-written Internet email and a fairly large
proportion of automated email is transmitted via SMTP in MIME format.
Internet email is so closely associated with the SMTP and MIME
standards that it is sometimes called SMTP/MIME email.[1] The content
types defined by MIME standards are also of importance outside of
email, such as in communication protocols like HTTP for the World Wide
Web. HTTP requires that data be transmitted in the context of
email-like messages, although the data most often is not actually
email.

Browser support for ETags

I'm working on getting my site to support the ETag/If-None-Match browser cache, but I'm not sure which browsers do or don't support it. Can anyone point me to a list? I can't imagine it's universal, but I haven't found anything that supports that claim.
cheers,
Mike
If-None-Match was specified in HTTP/1.1 (June 1999):
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol
According to Wikipedia:
By March 1996, pre-standard HTTP/1.1 was supported in Arena, Netscape 2.0, Netscape Navigator Gold 2.01, Mosaic 2.7, Lynx 2.5, and in Internet Explorer 3.0. End user adoption of the new browsers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet were HTTP 1.1 compliant. That same web hosting company reported that by June 1996, 65% of all browsers accessing their servers were HTTP/1.1 compliant. The HTTP/1.1 standard as defined in RFC 2068 was officially released in January 1997. Improvements and updates to the HTTP/1.1 standard were released under RFC 2616 in June 1999.
In my experience, all browsers in popular use (IE 5.5+, Safari, Chrome, Opera, and Firefox) support the ETag/If-None-Match headers.
However, there are some other headers which will stop these browsers from respecting the etag... so if it's not working for you, I'd carefully examine the other headers being sent back to the client when they request a resource.
Do you have any particular reason for asking the question? Maybe if you had a specific instance you were having an issue with, we could look at the other headers?
But these headers have been around for a long time, and they are a key caching mechanism used widely around the net.
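If you'd rather verify it yourself than trust a browser list, the round trip is easy to script. A minimal sketch with the Python requests library, against a placeholder URL; the second request should come back 304 Not Modified if the server honours If-None-Match:

    import requests

    url = "https://example.com/resource"   # placeholder

    first = requests.get(url)
    etag = first.headers.get("ETag")

    if etag:
        # Replay the request with the validator we were given.
        second = requests.get(url, headers={"If-None-Match": etag})
        print(second.status_code)   # 304 if the cached copy is still valid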

How can you access the info on a website via a program?

Suppose I want to write a program to read movie info from IMDb, or music info from Last.fm, or weather info from weather.com, etc. Just reading the webpage and parsing it is quite tedious. Often websites have an XML feed (such as Last.fm) set up exactly for this.
Is there a particular link/standard that websites follow for this feed? Like robots.txt, is there a similar standard for information feeds, or does each website have its own standard?
This is the kind of problem RSS and Atom feeds were designed for, so look for a link to an RSS feed if there is one. They're both designed to be simple to parse, too. They're normally found on sites that have regularly updated content, like news or blogs. If you're lucky, they'll provide many different RSS feeds for different aspects of the site (the way Stack Overflow does for questions, for instance).
Otherwise, the site may have an API you can use to get the data (like Facebook, Twitter, Google services etc). Failing that, you'll have to resort to screen-scraping and the possible copyright and legal implications that are involved with that.
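To give you an idea of how little code feed reading takes, here's a sketch using the feedparser library (pip install feedparser); it handles both RSS and Atom, and the URL is just an example:

    import feedparser

    # Parse a feed straight from its URL (Stack Overflow's recent
    # questions feed is used here purely as an example).
    feed = feedparser.parse("https://stackoverflow.com/feeds")

    for entry in feed.entries[:5]:
        print(entry.title, "->", entry.link)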
Websites provide different ways to access this data, such as web services, feeds, and endpoints to query their data.
There are also programs used to collect data from pages without using standard techniques. These programs are called bots, and they use various techniques to get data from websites. (Note: be careful; the data may be copyright protected.)
The most common such standards are RSS and the related Atom. Both are formats for XML syndication of web content. Most software libraries include components for parsing these formats, as they are widespread.
Yes: the RSS standard, which is built on the XML standard.
Sounds to me like you're referring to RSS or Atom feeds. These are specified for a given page in the source; for instance, open the HTML source for this very page and go to line 22.
Both Atom and RSS are standards. They are both XML based, and there are many parsers for each.
You mentioned screen scraping as the "tedious" option; it is also normally against the terms of service for the website. Doing this may get you blocked. Feed reading is by definition allowed.
There are a number of standards websites use for this, depending on what they are doing, and what they want to do.
RSS is a format for sending out formatted chunks of data in machine-parsable form. It stands for "Really Simple Syndication" and is usually used for news feeds, blogs, and other things where there is new content on a periodic or sporadic basis. There are dozens of RSS readers which allow one to subscribe to multiple RSS sources and periodically check them for new data. It is intended to be lightweight.
AJAX is a technique for sending requests from web pages to the web server and getting results back in machine-parsable form. It is designed to work with JavaScript on the web client. The requests and replies are ordinary HTTP, typically carrying XML or JSON, and it tends to be up to the developers to know what commands are available via AJAX.
SOAP is another approach, but its uses tend to be more program-to-program, rather than from web client to server. SOAP allows for auto-discovery of what commands are available by use of a machine-readable file in WSDL format, which essentially specifies in XML the method signatures and types used by a particular SOAP interface.
Not all sites use RSS, AJAX, or SOAP. Last.fm, one of the examples you listed, does not seem to support RSS and uses its own web-based API for getting information from the site. In those cases, you have to find out what their API is (Last.fm appears to be well documented, however).
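As a sketch of that API route, this is roughly what a call to Last.fm's documented REST endpoint looks like in Python; "YOUR_API_KEY" is a placeholder you get by registering, and the exact response fields are worth double-checking against their docs:

    import requests

    resp = requests.get(
        "https://ws.audioscrobbler.com/2.0/",
        params={
            "method": "artist.getinfo",
            "artist": "Radiohead",
            "api_key": "YOUR_API_KEY",   # placeholder credential
            "format": "json",
        },
    )
    data = resp.json()
    # Field names follow the documented artist.getinfo response shape.
    print(data["artist"]["name"])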
Choosing the method of obtaining data depends on the application. If it's a public/commercial application, screen scraping won't be an option. (E.g., if you want to use IMDb information commercially, you will need to sign a contract paying them $15,000 or more, according to their website's usage policy.)
I think your problem isn't not knowing the standard procedure for obtaining website information, but rather not realising that your inability to obtain data is due to websites not wanting to provide it.
If a website wants you to be able to use its information, then there will almost certainly be a well-documented API with various standard protocols for queries.
A list of APIs can be found here.
Data formats listed at that particular site are: CSV, GeoRSS, HTML, JSON, KML, OPML, OpenSearch, PHP, RDF, RSS, Text, XML, XSPF, and YAML.