As recently as 2002 the IETF was recommending in RFC 3406 that we should use x- prefixes for URN namespaces we didn't want to register, e.g. urn:x-acme:foobar. Now that the IETF has deprecated the x- prefix in RFC 6648, how are we supposed to construct URNs for namespaces we don't intend to register?
As an aside, I note that RFC 6648 specifically mentions URNs: "In almost all application protocols that make use of protocol parameters (including ... URNs ...), the name space is not limited or constrained in any way, so there is no need to assign a block of names for private use or experimental purposes." I find this an odd thing to say, as RFC 3406 claims, "The space of URN namespaces is managed. I.e., not all syntactically correct URN namespaces (per the URN syntax definition) are valid URN namespaces."
So what is best to use for custom but unregistered URN namespaces? Can I just drop the x- and use, for my example company Acme, a URN such as urn:acme:foobar?
RFC 6648 says:
Does not override existing specifications that legislate the use of "X-" for particular application protocols […]; this is a matter for the designers of those protocols.
So it’s still fine to use experimental NIDs as defined by RFC 3406.
And what RFC 6648 recommends for new protocols (and, I assume, updates of existing protocols) is essentially what is currently the case with URNs anyway (minus the experimental X- prefix):
there is a "potentially unlimited value-space" for NIDs
there are "clear registration procedures" defined (I have no idea what they understand under "simple")
So should experimental X- NIDs ever get deprecated in an updated RFC, I wouldn't expect there to be any alternative to registering NIDs.
If you don't want to register an NID (not even an Informal NID), you might want to use a different URI scheme. The tag scheme (RFC 4151) comes to mind (tag:example.com,2013:foobar).
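For illustration, here is a minimal sketch in JavaScript of assembling a tag URI per RFC 4151 (the domain and date are placeholders; the authority must be a domain or email address you controlled on the given date):

// A tag URI is "tag:" + authority + "," + date + ":" + specific (RFC 4151).
// 'example.com' and '2013' are placeholders; use your own domain and date.
function tagUri(authority, date, specific) {
  return 'tag:' + authority + ',' + date + ':' + specific;
}
tagUri('example.com', '2013', 'foobar'); // => "tag:example.com,2013:foobar"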
Does JSON Patch violate REST rules? Will my API no longer be RESTful if I use it? Or maybe not?
{ "op": "replace", "path": "/biscuits/0/name", "value": "Chocolate Digestive" }
Does JSON Patch violate REST rules?
No.
JSON Patch (RFC 6902) is a standardized media type designed to act as a patch document (in the RFC 5789 sense). It is a perfectly normal way to describe edits to a JSON document.
Every protocol, every media type definition, every URI scheme, and every link relationship type constitutes prior knowledge that the client must know (or learn) in order to make use of that knowledge. REST doesn’t eliminate the need for a clue. What REST does is concentrate that need for prior knowledge into readily standardizable forms. -- Fielding, 2008
And that's exactly what has been done here.
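As a concrete sketch (the endpoint URL is hypothetical), the patch from the question would travel in the body of an HTTP PATCH request under the media type registered for JSON Patch:

// Send a JSON Patch document (an array of operations, per RFC 6902)
// with its registered media type. The URL is a placeholder.
fetch('https://api.example.com/recipes/1', {
  method: 'PATCH',
  headers: { 'Content-Type': 'application/json-patch+json' },
  body: JSON.stringify([
    { op: 'replace', path: '/biscuits/0/name', value: 'Chocolate Digestive' }
  ])
});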
We are implementing a SCIM Resource Provider for Users, Groups, and a couple of custom resources.
The SCIM Core Schema (RFC 7643) defines the User resource such that only userName and the core attributes (id, schemas) are required. It also defines optional attributes like name, profileUrl, etc.
Some optional attributes do not make sense in our context (e.g. ims), or are not supported, or would be very expensive to support.
On the other hand, other optional attributes like name should be "required" and should be returned "always".
What is the recommended way to express this, so that clients know which attributes must be provided?
As far as I understand the RFC, we should serve an adjusted, tweaked version of the core User schema on the /Schemas endpoint. Is that the correct way?
Would it make our Provider non-SCIM-compliant?
A discussion was started on the SCIM mailing list. Here is the answer from Phil Hunt, one of the RFC's authors:
This happens a lot particularly when adapting SCIM protocol on top of applications (e.g. payroll, HCM, CRM, etc). Each app has data they care about that is a sub-set of what is seen in IDM systems. The point of 7643 is really to define standard attribute names, types, syntax, and handling that developers can count on.
IMO, you do not have to implement the schema exactly as published in 7643. It is quite common practice to omit attributes (e.g. such as an app that doesn’t care about ims). Note that renaming standard attributes or changing their formats will produce interop concerns.
Use the extension mechanism to define your own app specific attributes (see section 3.3 of 7643 and 4.3 for the EnterpriseUser example).
You are free to omit unused attributes from your schema. You document what your server actually supports in the /Schemas endpoint.
The full discussion can be found at https://www.ietf.org/mail-archive/web/scim/current/msg02851.html
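For illustration, here is a heavily abridged sketch of what such a tweaked /Schemas entry could look like: unsupported attributes (such as ims) are simply omitted, and name is marked required (real entries carry more metadata per attribute, e.g. mutability and returned):

{
  "id": "urn:ietf:params:scim:schemas:core:2.0:User",
  "name": "User",
  "attributes": [
    { "name": "userName", "type": "string", "multiValued": false, "required": true },
    { "name": "name", "type": "complex", "multiValued": false, "required": true }
  ]
}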
I just want to know: is the YANG modeling language specific to the NETCONF protocol, or can it be used to model data in formats like XML and JSON?
Thank you
YANG was originally intended to model data exchanged between peers in a NETCONF session, but this is no longer the only case. There are now other protocols that (will) make use of it, such as RESTCONF and CoMI.
YANG is a data modeling language originally designed to model configuration and state data manipulated by the Network Configuration Protocol (NETCONF), NETCONF Remote Procedure Calls, and NETCONF notifications [RFC6241]. Since the publication of YANG version 1 [RFC6020], YANG has been used or proposed to be used for other protocols (e.g., RESTCONF [RESTCONF] and the Constrained Application Protocol (CoAP) Management Interface (CoMI) [CoMI]). Further, encodings other than XML have been proposed (e.g., JSON [RFC7951]). -- RFC 7950, Section 1
In fact, the recent YANG 1.1 release has made a move toward decoupling the model from its encoding. In the future we will probably see separate XML and JSON encoding documents (plus perhaps others) and a single document dealing only with the language specifics.
You could use YANG to model data for other, more general purposes if you ignore statements like rpc, action, notification, config, etc., which are only relevant in specific contexts. Of course, you would have to define the context in which you wish to use the model and what it means to you. Some modelers make use of the extension statement to define such requirements, then implement a specialized YANG compiler that recognizes the extensions and acts accordingly - this allows you to use the language for things not originally intended by its authors.
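As a minimal sketch (the module name, namespace, and leaves are all hypothetical), here is a YANG module used purely as a general-purpose data model, with none of the protocol-specific statements:

module acme-profile {
  namespace "urn:example:acme-profile";
  prefix ap;

  container profile {
    leaf username {
      type string;
    }
    leaf-list alias {
      type string;
    }
  }
}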
There are some definitions in the specification that could make it harder to make YANG a general purpose modeling language (like what XML Schema is for XML). Concepts like configuration datastore, configuration data, state data, client, server, etc. are pretty darn specific. You can always turn a blind eye to those and just make it work for you, however. I believe that is how it is already done in the OpenDaylight project.
We are designing a fairly complex REST API, in which most of the I/O consists of JSON-encoded objects with a specific structure. One challenge we have found is documenting the API in such a way that makes it easy for clients to post correct input and process output. Because both the input and output require fairly complex JSON objects, client developers often introduce bugs related to the structure of the I/O objects.
With all of the JSON web APIs these days, I would have hoped for a general solution, but I am having a hard time finding one. I looked into JSON Schema, a JSON validation schema language, but both the IETF draft and the implementations seem fairly immature (even though they have been around for a while, which is not a good sign).
A slightly different approach is offered by Protocol Buffers and Apache Avro, where the schema is not used for validation but is actually required for encoding/decoding the message. Of these two, Avro seems to have rather limited documentation and implementations. ProtoBuf seems better, but I am not sure whether it is really suitable for use in the browser to call a JSON API.
Now I am starting to doubt if I am looking at this from the right angle. Are there other methods available to make my API a bit more strong-typed'ish? Or is a formal description of a JSON REST/RPC API something that defeats the purpose of using JSON?
Edit: 6 months after this topic we found mongoose, which is very close to what we were looking for.
Below is a reply I received by email from Douglas Crockford.
I am not a believer in schemas as an alternative to input validation. There are properties that cannot be verified from the syntax. I think that was one of the ways that XML went wrong.
If your formats are too complex, then I would look at simplifying them.
Such systems exist and I'm the author of one of them. It is called Piqi-RPC and it does IDL-based validation of the input and output parameters for RPC-style APIs over HTTP.
It supports JSON, XML and Google Protocol Buffers as data representation formats for input and output of HTTP POST requests. Clients can choose to use any of the three formats and specify their choice using the standard Accept and Content-Type HTTP headers.
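For example (a sketch only; the endpoint and payload are hypothetical, not Piqi-RPC's actual URL layout), a browser client selecting JSON for both directions would do so purely through those standard headers:

// The client picks JSON for the request body and for the response
// via the standard Content-Type and Accept headers.
fetch('https://example.com/rpc/add-user', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Accept': 'application/json'
  },
  body: JSON.stringify({ name: 'alice' })
});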
So, yes, in theory you are looking in the right direction. However, at the moment, Piqi-RPC supports writing servers only in Erlang, so it wouldn't be very useful for you if you use a different stack. I heard that Apache Thrift also supports JSON over HTTP transport, but I haven't checked. Another similar system I know of (also for Erlang) is called UBF. I have heard of libraries for Java that can parse and validate JSON based on a Protocol Buffers specification (e.g. http://code.google.com/p/protostuff/).
The idea itself is far from being new, but there aren't many systems that approach it in practice. It is a challenging problem.
Historically, IDLs were used for interface definition and binary data serialization and not so much for validating dynamic data interchange formats (e.g. XML and JSON) which emerged later. Sun-RPC IDL and CORBA IDL fall in the first category. WSDL would be one of few examples covering both areas, but it is a terrible piece of technology and it would be a bad choice for most modern systems. In addition, there are many schema languages (also known as DDLs -- data definition languages), most of which are highly specialized and work with only one representation format, e.g. XML or JSON schemas. Few of those have stable implementations.
The Piqi project and Piqi-RPC, which is based on it, are built around several fairly simple realizations:
A DDL doesn't have to be explicitly tied to any particular data representation format or built around it. Instead, such a language can be fairly universal and cover a wide range of practical use cases (e.g. cross-language data serialization and data validation) and data formats (e.g. JSON, XML, Protocol Buffers).
IDL for RPC-style communication can be implemented as a thin, mostly syntactic layer on top of the universal DDL.
Such IDL and interface specifications can be transport agnostic.
Speaking of REST-style APIs over HTTP as compared to RPC-style APIs over HTTP:
With RPC-style APIs, a service developer or an automated system has to validate three things: the function name (according to some service naming scheme), the input, and, if you choose so, the output.
With REST-style APIs, people get themselves into trouble for no good reason. Now there is a lot more to validate: arbitrarily complex URL syntax, including dynamic parameters encoded in URL segments (for all HTTP methods) and in the URL query string (for HTTP GET only); HTTP method correspondence (whether a call should be GET, POST, PUT, DELETE, etc.); the HTTP body when some parameters go there (sometimes done manually twice, for parameters represented in both JSON and XML); custom HTTP headers; and, separately, the service documentation. Imagine an IDL supporting all that!
XML is better for RESTful services in many ways. It has native linking (<link href=, for all those HATEOAS fans), native language support (lang="en") and a great ecosystem.
It is also better for future-proofing and future API refactorings. Converting this:
<profile>
<username>alganet</username>
</profile>
To support more usernames:
<profile>
<username>alganet</username>
<username>alexandre</username>
</profile>
Is much simpler to do without breaking existing clients when you use XML. JSON is hard on that.
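To illustrate with the same data, the usual JSON counterpart of that change turns a string into an array:

{ "profile": { "username": "alganet" } }

would become

{ "profile": { "username": ["alganet", "alexandre"] } }

and every client written to expect a plain string breaks, while XML clients that select the first <username> element keep working.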
If you really need JSON, JSON-Schema is the way to go. It's immature, but I don't know of anything better for that case. Maybe your consumers could choose between XML and JSON, so they can pick between a small payload (JSON) or RESTful candies (XML) using content negotiation.
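For what it's worth, here is a minimal sketch of a JSON Schema (modern draft syntax; keyword details varied across the early IETF drafts) constraining the profile document above:

{
  "type": "object",
  "properties": {
    "profile": {
      "type": "object",
      "properties": {
        "username": { "type": "string" }
      },
      "required": ["username"]
    }
  }
}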
I'd say the answer to your last question is yes. If you need a way to constrain and document the JSON "schema", why didn't you go with XML in the first place? It is not that much harder to parse, and being able to enforce a schema for it is a great advantage.
What's the definition of a Shim?
Simple Explanation via Cartoon
Summary
A shim is some code that takes care of what's asked (by 'interception'), without anyone being any wiser about it.
Example of a Shim
An example of a shim is rbenv (a Ruby tool). Calls to Ruby commands are "shimmed": when you run bundle install, rbenv intercepts that command and reroutes it according to the specific version of Ruby you are running. If that doesn't make sense, just think of the fairy godmother intercepting messages and delivering apposite outcomes.
That's it!
Important Clarifications on this example
Note: like most analogies, this one is not perfect: usually Ralph (the character in the cartoon) will get EXACTLY what he asked for - but the mechanics of HOW it was obtained are something Ralph doesn't care about. If Ralph asks for dog food, a good shim will deliver dog food.
I wanted to avoid semantic arguments and complexity - e.g. the Gang of Four adapter, facade, and proxy design patterns - which are not that great when you're trying to explain a concept. Introducing code? Pedagogically risky. A Wikipedia-like explanation? Boring, too complex, and time-consuming. So I deliberately simplified it to a cartoon you can understand in a "fun" way in 30 seconds, that is memorable, so you can move on. This approach is not for everyone: if you want a precise definition, consider the Wikipedia entry on shims.
The term "shim" as defined in Wikipedia would technically be classified, based on its definition, as a "Structural" design pattern. The many types of “Structural” design patterns are quite clearly described in the (some would say defacto) object oriented software design patterns reference "Design Patterns, Elements of Reusable Object-Oriented Software" better known as the "Gang of Four".
The "Gang of Four" text outlines at least 3 well established patterns known as, "Proxy", "Adapter" and "Facade" which all provide “shim” type functionality. In most fields it’s often times the use and or miss use of different acronyms for the same root concept that causes people confusion. Using the word “shim” to describe the more specific “Structural” design patterns "Proxy", "Adapter" and "Facade" certainly is a clear example of this type of situation. A "shim" is simply a more general term for the more specific types of "Structural" patterns "Proxy", "Adapter", "Facade" and possibly others.
According to Microsoft's article "Demystifying Shims":
It's a metaphor based on the English language word shim, which is an engineering term used to describe a piece of wood or metal that is inserted between two objects to make them fit together better. In computer programming, a shim is a small library which transparently intercepts an API, changes the parameters passed, handles the operation itself, or redirects the operation elsewhere. Shims can also be used for running programs on different software platforms than they were developed for.
So a shim is a generic term for any library of code that acts as a middleman and partially or completely changes the behavior or operation of a program. Like a true middleman, it can affect the data passed to that program, or affect the data returned from that program.
The Windows API is an example:
The application is generally unaware that the request is going to a shim DLL instead of to Windows itself, and Windows is unaware that the request is coming from a source other than the application (because the shim DLL is just another DLL inside the application's process).
So the two programs that make the "bread" of the "shim sandwich" should not be able to differentiate between talking to their counterpart program and talking to the shim.
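As a minimal sketch of the same idea in JavaScript (purely illustrative; Windows shims operate at the DLL level, not like this), a shim can wrap a function so that neither side knows a middleman is involved:

// Keep a reference to the real function, then replace it with a wrapper.
// Callers still call window.fetch and are none the wiser.
const realFetch = window.fetch;
window.fetch = function (url, options) {
  // The shim may change the parameters, handle the call itself,
  // or redirect it elsewhere before (or instead of) delegating.
  console.log('intercepted request to', url);
  return realFetch.call(this, url, options);
};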
What are some pros and cons of using shims?
Again, from the article:
You can fix applications without access to the source code, or without changing them at all. You incur a minimal amount of additional management overhead... and you can fix a reasonable number of applications this way. The downside is support, as most vendors don't support shimmed applications. You can't fix every application using shims. Most people typically consider shims for applications where the vendor is out of business, the software isn't strategic enough to necessitate support, or they just want to buy some time.
As for origins of the word, quoth Apple's Dictionary widget:
noun: a washer or thin strip of material used to align parts, make them fit, or reduce wear.
verb (shimmed, shimming) [trans.]: wedge (something) or fill up (a space) with a shim.
ORIGIN: early 18th cent., of unknown origin.
This seems to fit quite well with how web designers use the term.
Shims are used in the .NET 4.5 Microsoft Fakes framework to isolate your application from other assemblies for unit testing. Shims divert calls to specific methods to code that you write as part of your test.
As we can see in many of the responses here, a shim is a sort of adapter that provides functionality at the API level which was not necessarily part of that API. Since this thread has a lot of good and complete responses, I'm not expanding the definition further.
However, I think I can add a good example: the JavaScript ES5 shim (https://github.com/es-shims/es5-shim).
JavaScript has evolved a lot during the last few years, and among many other changes to the language specification, a lot of new methods have been added to its core objects.
For example, in the ES2015 specification (aka ES6), the method find was added to the Array prototype. So let's say you are running your code on a JavaScript engine that predates this specification (e.g. Node 0.12) and doesn't offer that method yet. By loading such a shim (the sibling es6-shim project covers the ES2015 additions), these new methods are added to the Array prototype, allowing you to make use of them even though you are not running on a newer JavaScript specification.
You might ask: why would someone do that instead of upgrading the environment to a newer version (let's say Node 8)?
There are a lot of real-world scenarios where this approach makes sense. One good example:
Let's say you have a legacy system running in an old environment, and you need such new methods to implement or fix some functionality. Upgrading your environment is still a work in progress, because there are compatibility issues that require a lot of code changes and tests (it is a critical component).
In this example, you could try to craft your own version of the functionality, but that would make your code harder to read and more complex, could introduce new bugs, and would require tons of additional tests just to cover functionality that you know will be available in the next release.
Instead, you can use the shim and make use of these new methods, taking advantage of the fact that the fix/functionality will remain compatible after the upgrade, because you are already using methods known to be available in the next specification. And there is a bonus: since these methods are native in the next language specification, there is a good chance they will run faster than any implementation you could have written yourself.
Another real scenario where this approach is welcome is at the browser level. Let's say you need to support old browsers but want to take advantage of newer features. JavaScript allows you to add or modify methods on its core objects (like adding methods to the Array prototype), and those shim libraries are smart enough to add such methods only if the current implementation is lacking them.
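As a minimal sketch of that guard-and-fill pattern (simplified; the canonical polyfill in the spec handles more edge cases, such as sparse arrays):

// Only define Array.prototype.find if the engine lacks it.
if (!Array.prototype.find) {
  Array.prototype.find = function (predicate, thisArg) {
    if (typeof predicate !== 'function') {
      throw new TypeError('predicate must be a function');
    }
    for (var i = 0; i < this.length; i++) {
      if (predicate.call(thisArg, this[i], i, this)) {
        return this[i];
      }
    }
    return undefined;
  };
}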
PS:
1) You will see the term "polyfill" used in connection with these JavaScript shims. A polyfill is a more specialized kind of shim, used to provide forward compatibility across browser specification levels. By the way, my example above is exactly such a case.
2) Shims are not limited to this example (adding functionality that will be available in a future release). There are other use cases that would be considered shims as well.
3) If you are curious about how this specific polyfill is implemented, you can open the JavaScript Array.prototype.find spec and scroll to the end of the page, where you will find a canonical implementation of this method.
A shim can also be another level of security check, done for all services to protect upstream systems: the shim server validates every incoming request, checking the user credentials in the headers against the user credentials passed in the request (SOAP/RESTful).