XBRL: Connecting a fact to presentation via context

As best I can tell, XBRL allows SEC filers to use the same concept for multiple facts so long as the context is different. I'm having a difficult time understanding how to include/exclude a fact for a given roleURI (a statement, for instance). I believe this ability is somehow related to context, but there doesn't seem to be an obvious link between the concepts asked for in the presentation document and the appropriate concept in the instance. To ask this a different way:
1) The company has several roleURIs (networks), perhaps one of which is "http://www.bigcompany.com/role/StatementOfIncome"
2) In the *_pre.xml document section related to this network, the company has asked that a "Revenues" concept be displayed.
3) The instance document has multiple "revenue" items, each with different contexts, and some with segments related to the company's sub-entities.
How do I determine that a revenue item with a certain context belongs on the StatementOfIncome roleURI, and another one should be excluded?
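For concreteness, here's a made-up, simplified picture of the kind of thing I'm seeing (the names are invented; this is just the shape of the data, not real filing content):

    # Two facts use the same "Revenues" concept and differ only by context.
    facts = [
        {"concept": "us-gaap:Revenues", "contextRef": "FY2011",         "value": 9_000_000},
        {"concept": "us-gaap:Revenues", "contextRef": "FY2011_Widgets", "value": 4_000_000},
    ]

    contexts = {
        # the consolidated company: no segment
        "FY2011": {"segment": None},
        # a sub-entity: a dimension/member pair inside the segment
        "FY2011_Widgets": {
            "segment": {"us-gaap:StatementBusinessSegmentsAxis": "bigco:WidgetsMember"}
        },
    }

The presentation linkbase for the StatementOfIncome role just points at the Revenues concept; nothing in it tells me which of those two contexts it meant.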
Thanks for any hints or resources...

As you note, the presentation linkbase is context-agnostic. From the exhibits themselves, there is no explicit way to know which numbers were actually printed on the Income Statement and which facts using the same element/concept are from other schedules.
The SEC uses programmatic means to determine what will be rendered where in its Pre-Viewer and Viewer; they explain some of that in their FAQs and Interpretations (see, for example, question B.3 at http://www.sec.gov/spotlight/xbrl/staff-interps.shtml). They look at key words and phrases.
The SEC makes the source code to their rendering engine available at http://www.sec.gov/spotlight/xbrl/viewers.shtml - you may be able to leverage it or find other hints of how they determine what to print and when.
XBRL was not designed to recreate the original presentation; later developments including Inline XBRL and the new Table Linkbase do more to preserve presentation placement.

I accepted the given answer as correct, but after several more days of banging my head on this issue, I still can't quite see it. I keep thinking that facts somehow HAVE to "belong" to a network via their context. As best I can tell, facts whose contexts have segments belong on networks whose hypercube dimensions match (somehow) the context segment "explicitMember" values. Facts whose contexts have no segments belong on networks that either have no hypercube dimensions or relate to the company as a whole. But I can't make heads or tails of this. When I've gone through and looked at how XBRL gets made using software, facts and contexts are added to documents as networks are added, so somehow there's a connection there. The SEC's software is helpful, but I haven't quite wrapped my head around what they did either.
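For what it's worth, here's the rough direction I've been poking at, in Python. The file names, the "Axis" naming convention, and the matching rule are my own assumptions about typical SEC filings; nothing here is guaranteed by the spec.

    # Exploratory sketch: guess which facts could "belong" to a network by comparing
    # each context's dimensions with the Axis concepts that the definition linkbase
    # declares for that role.  Nothing in XBRL 2.1 promises this mapping works.
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    NS = {
        "xbrli":  "http://www.xbrl.org/2003/instance",
        "xbrldi": "http://xbrl.org/2006/xbrldi",
        "link":   "http://www.xbrl.org/2003/linkbase",
        "xlink":  "http://www.w3.org/1999/xlink",
    }

    def context_dimensions(instance_file):
        """Map context id -> set of dimension QNames used in its segment."""
        root = ET.parse(instance_file).getroot()
        return {
            ctx.get("id"): {m.get("dimension")
                            for m in ctx.findall(".//xbrldi:explicitMember", NS)}
            for ctx in root.findall("xbrli:context", NS)
        }

    def network_dimensions(def_linkbase_file):
        """Map roleURI -> concepts ending in 'Axis' (a US-GAAP naming convention,
        not a rule) that appear as locators in that definitionLink."""
        root = ET.parse(def_linkbase_file).getroot()
        dims = defaultdict(set)
        for dlink in root.findall("link:definitionLink", NS):
            role = dlink.get(f"{{{NS['xlink']}}}role")
            for loc in dlink.findall("link:loc", NS):
                fragment = loc.get(f"{{{NS['xlink']}}}href").split("#")[-1]
                if fragment.endswith("Axis"):
                    dims[role].add(fragment)
        return dims

    def fact_fits_network(context_dims, role_dims):
        """Crude rule: every dimension in the fact's context must be declared by the
        network.  'us-gaap:FooAxis' is normalised to 'us-gaap_FooAxis' to compare
        with the href fragments."""
        return {d.replace(":", "_") for d in context_dims} <= role_dims

With that, I loop over the facts (any element carrying a contextRef), look up the dimensions of each fact's context, and test each candidate network. The part that still defeats me is the un-dimensioned contexts: an empty dimension set "fits" every network under this rule, which is exactly the ambiguity I started with.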
I wish there were better resources on learning to parse XBRL. Most of what I've seen relates to creating it...
If ever I figure this out, I'll revisit this question with better details!

Related

Which concepts to report in XBRL instance?

I have a taxonomy entry point that refers to three linkbases, one of which is a presentation linkbase. When opening the entry point XSD, my XBRL tool discovers many more concepts than are present in the presentation linkbase, most of which are irrelevant for the report in question.
Is there a programmable way of deciding which concepts are to be reported, for instance by only reporting concepts which are present in a discovered presentation linkbase? Or does a human being always need to read some taxonomy-specific documentation and then select the concepts?
To give the example behind my question: I was referring to the entry point www.nltaxonomie.nl/10.0/report/bd/entrypoints/bd-rpt-ob-aangifte-2016.xsd. (The complete taxonomy is available at www.sbr-nl.nl/fileadmin/SBR/documenten/NT_2016/SBR_NT_10.0.zip.)
For instance, the XBRL editor of my choice displays concept BusinessProfitTitle coming from www.nltaxonomie.nl/10.0/report/bd/abstracts/bd-abstracts.xsd. BusinessProfitTitle is not included in the presentation linkbase www.nltaxonomie.nl/10.0/report/bd/linkroles/bd-aangifte-omzetbelasting-pre.xml, which is referred to by the entry point and which only contains concepts related to value-added tax. The entry point refers to two more definition linkbases, which seem to contain more concepts than are relevant. So I was wondering how to derive the concepts that must be reported for the entry point above, when you don’t speak Dutch and would like to derive the concepts programmatically.
There is nothing in the XBRL 2.1 specification that directly addresses your question.
There is something called the formula linkbase that can produce validation errors from the content of the instance document and the DTS. Within the formula linkbase specification set there are "existence assertions", and some regulators use "value assertions" to detect whether a concept has been reported and whether the reported value is valid.
This does depend somewhat on the filing system, but typically the concepts covered by a presentation tree are the ones relevant to the entry point in question, and you can safely ignore all other concepts.
The schemas defining concepts are often shared by the entry points for multiple report types, so it's not uncommon to find concepts that are not covered by the presentation for a particular entry point.
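If you want to automate that rule of thumb, here is a minimal sketch (Python, standard library only). It assumes the linkbase uses the standard link/xlink namespaces; the file name is the one from the question.

    # List the concepts a presentation linkbase actually uses, by collecting the
    # fragment part of every locator's xlink:href.
    import xml.etree.ElementTree as ET

    NS = {
        "link":  "http://www.xbrl.org/2003/linkbase",
        "xlink": "http://www.w3.org/1999/xlink",
    }

    def presented_concepts(pre_linkbase_file):
        root = ET.parse(pre_linkbase_file).getroot()
        concepts = set()
        for loc in root.iter(f"{{{NS['link']}}}loc"):
            # href looks like "<schema>.xsd#<concept id>"; the part after '#'
            # identifies the concept, the part before it says which schema defines it
            href = loc.get(f"{{{NS['xlink']}}}href")
            concepts.add(href.split("#")[-1])
        return concepts

    reportable = presented_concepts("bd-aangifte-omzetbelasting-pre.xml")

Anything your tool discovers in the DTS but not in that set is, by the rule of thumb above, not expected in this entry point's report. A real implementation would also need to respect prohibited or overridden arcs, which this sketch ignores.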

Handling paper documentation [closed]

After every new program is written, a lot of paper documentation remains.
Apart from the usual scribbled notes from the programmers, there is usually a nice heap of papers containing physical model explanations, calculations and so on (equations, tables, graphs, little pictures describing variables ...).
We usually do numerically intensive calculations in console applications, which are not released to the public (they remain in house; only results go out). Before each project is finished, all those papers have to be packed up somehow with the application, so that one day, when someone reuses parts of it, they have some idea what is what in there.
So far, we've been using the 'dirty' solution of just scanning all of it, and packing it up on the disc with the application.
So I was wondering ... for all the science guys here in a similar situation ... how do you handle project documentation which is needed, but not released to the public?
(the documentation that does get released goes to the DTP lads, and they make it nice and shiny - not our problem anymore :)
I use one of three options:
Keep everything in my lab notebooks, which I archive myself, for low-level stuff
Scan the paper documents and add them to source control as PDFs. It's ugly, but if someone needs it, it's there
Transcribe the equations, results, etc. in a clean format (usually LaTeX) for future reference, and again, add it to source control. The official paper copy gets signed (I work in a highly regulated domain) and filed in a binder.
In the projects I've worked on we have done a lot of physics calculations in our programs and consequently we have a lot of whiteboard sessions with equations we are working on.
We keep a wiki for each major project, and after each whiteboard session we photograph the whiteboard with a digital camera and upload/organize the photo within the wiki. We also scan in paper documents from developers' notebooks if they are important and include them in the wiki as well.
Then, we back up the wiki on disc for storage. So our solution is pretty similar to yours, other than we use the project wiki for organization.
If it's important, it seems to me you should treat the internal documentation with the same care with which you treat the public docs.
I create UI paper prototypes when designing the UI of an application, which produces lots of A3-sized papers (in one project we had many desks covered with papers). When the design is ready or it needs to be mailed to somebody, I take pictures of it with a digital camera, so that I can produce a series of pictures showing how to perform some tasks on the UI, which serves as documentation of how the application is meant to work. This serves also as a backup, in case somebody steals/cleans away the original papers.
Here are some thoughts... not so practical, though :)
We can make it part of our check-in notes. This may help the developers who will maintain the application.
Update the requirements document / low-level design document with these items.

Examples of design pattern misuse [closed]

Design patterns are great in that they distill a potentially complex technique into something idiomatic. Often just the fact that it has a name helps communication and understanding.
The downside is that it makes it easier to try to use them as a silver bullet, applying them to every situation without thinking about the motivation behind them, and taking a second to consider whether a given pattern is really appropriate for the situation.
Unlike this question, I'm not looking for design patterns that are often misused, but I'd love to see some examples of really solid design patterns put to bad use. I'm looking for cases where someone "missed the point" and either applied the wrong pattern, or even implemented it badly.
The point of this is that I'd like to be able to highlight that design patterns aren't an excuse to disable critical analysis. Also, I want to emphasise the need to understand not just what the patterns are, but why they are often a good approach.
I have an application that I maintain that uses a provider pattern for just about everything - with little need. Multiple levels of inheritance as well. As an example, there's a data provider interface that is implemented by an abstract BaseDataProvider, which is in turn extended by a SqlDataProvider. In each of the hierarchies, there is only one concrete implementation of each type. Apparently the developer got ahold of a Microsoft document on implementing Enterprise Architecture and, because lots of people use this application, decided it needed all the flexibility to support multiple databases, multiple enterprise directories, and multiple calendaring systems, even though we only use MS SQL Server, Active Directory, and Exchange.
To top it all off, configuration items like credentials, URLs, and paths are hard-coded all over the place AND override the data that is passed in via parameters to the more abstract classes. Changing this application is a lot like pulling on a thread in a sweater: the more you pull, the more things unravel, and you end up making changes all over the code to do something that should have been simple.
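To give a feel for the shape of it, here's a rough sketch in Python with made-up names (the real code is C#, and considerably longer):

    # Roughly the shape described above: three layers of abstraction, exactly one
    # concrete implementation, and the "flexible" parts undermined by hard-coded
    # configuration anyway.
    from abc import ABC, abstractmethod

    def run_query(connection_string, sql):
        """Stand-in for real data access; irrelevant to the point being made."""
        return []

    class IDataProvider(ABC):                  # interface layer
        @abstractmethod
        def get_users(self): ...

    class BaseDataProvider(IDataProvider):     # abstract base layer, adds nothing concrete
        def __init__(self, connection_string):
            self.connection_string = connection_string

    class SqlDataProvider(BaseDataProvider):   # the one and only concrete implementation
        def get_users(self):
            # the connection string passed to the base class is ignored in favour of
            # a hard-coded one -- this is where the sweater starts to unravel
            conn = "Server=PRODSQL01;Database=Users;Trusted_Connection=yes"
            return run_query(conn, "SELECT * FROM Users")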
I'm slowly rewriting it -- partly because the code is awful and partly because it's really three applications rolled up into one, one of which isn't really even needed.
Well, to share a bit of experience: in C#, I had a nicely cool design that used lots of patterns. I really used a lot of them, so to make the story short I won't name them all. But when I actually tested with real data, the 10^6 objects didn't "run smoothly" with my beautiful design. Profiling showed that all those levels of indirection from the nicely polymorphic classes, proxies, etc. were just too much. I guess I could have rewritten it with better-chosen patterns to make it more efficient, but I had no time, so I practically hacked it procedurally, and so far, well, it works way better. Sigh, sad story.
I have seen an ASP.NET app where the (then junior, now quite capable) developer had managed to effectively make his code-behinds singletons, thinking each page instance was unique. It worked brilliantly on his local machine, right up to the point where the testers were fighting for control of the login screen.
Purely a misunderstanding of the scope of "unique" and a mind eager to use these design pattern thingies.

How to measure "usability" in a specification-requirements document? [closed]

Starting to look at my final-year project now, so I'm writing the specification/requirements document. It just so happens that this project requires a high degree of "usability" - I'm not sure that's the right word in English, but what I mean is that it should be really easy to use from a user's point of view. In all the projects I've worked on so far, usability hasn't really been a big factor, so I could just write some filler to get around it. I have always asked our teachers how they would specify usability requirements, but no one has yet given me an answer I felt was good enough.
Our teachers have always preached that any requirement given on a project should be "test-able", but how do you test how easily accessible your user interface is?
Say I had a real-time application running. Here it wouldn't be too hard to say "an entry should be deleted in less than 100 ms after the initial call". But it's a lot harder to say "the user interface should be 86% intuitive".
I guess this is a tough nut to crack, but surely I can't be the first person in the world to have thought about this, let alone having problems with it.
... how do you test how easily accessible your user interface is?
With usability tests.
Basically, you grab a bunch of your friends (because you won't have any money to encourage strangers to participate), give them the documentation a new user would have, and ask them to perform the system's key use cases.
Ideally, you want your test users to have at least some of the qualities of your target users, so if your system is aimed at a technical audience then your classmates will work; however, if your system is aimed at the general public then you're going to want to get your friends in Arts, Human Kinetics, etc. to participate.
So how do you turn that into requirements? You identify your key use cases, stipulate how usable each should be (walk-up usable, a few minutes with the documentation, real actual training...), and then verify that your test subjects can complete the use cases without too much frustration, with the right amount of training, in a reasonable time.
Try to define usability in terms of "test-able" requirements.
You already gave yourself the answer, because usability can be described in terms of requirements like "an entry should be deleted in less than 100 ms after the initial call".
What makes a user interface 86% intuitive? This can’t be answered without some form of measurement. You need to ask what features can make a user interface intuitive. Talk to the customer and the potential future users. Gather features (or better, dig for them!) that would make working with the piece of software easier.
Maybe you get a list of features like:
- The department’s organization must be shown in a hierarchical tree view.
- In this tree view, drag & drop must be possible.
- The font size must be configurable and saved for each user.
- On the top of the screen there must be a list of important links. Each user may configure and save his own personal list.
- ...
Make requirements out of these features. They are “testable” and thus “measurable”. If in the acceptance tests it turns out that 17 out of 20 features are working, you have 85% success.
EDIT: This works in a project environment, where you need to deliver measurements (like in many commercial projects). If you have a "softer" form of project environment where not everybody is poking on figures, then sticking too much on this formalism may be counterproductive since flexibility and agility may suffer from this.
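If it helps to make the "measurable" part concrete, here is a trivial way to score such an acceptance run (the feature names are hypothetical, and the list obviously comes from your own agreed requirements):

    # Each agreed feature either passes its acceptance test or it doesn't;
    # the figure you report is simply the pass rate.
    acceptance_results = {
        "departments shown in a hierarchical tree view": True,
        "drag & drop within the tree view":              True,
        "font size configurable and saved per user":     False,
        "personal list of important links":              True,
    }

    passed = sum(acceptance_results.values())
    total = len(acceptance_results)
    print(f"{passed}/{total} features accepted -> {100 * passed / total:.0f}%")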
I would advise you against quantifying usability requirements. The problem is not so much that you can't define metrics. You could say, for instance, that
- it should take a person no longer than x seconds to find y on the site, or
- the conversion rate of the store has to be higher than z%,
- and so on.
The problem is rather that you have to spend time and resources on finding acceptable target values for your metrics that you can actually reach. What is an acceptable time to find a piece of content?

When evaluating a design, how do you evaluate complexity? [closed]

We all know to keep it simple, right?
I've seen complexity being measured as the number of interactions between systems, and I guess that's a very good place to start. Aside from gut feel though, what other (preferably more objective) methods can be used to determine the level of complexity of a particular design or piece of software?
What are YOUR favorite rules or heuristics?
Here are mine:
1) How hard is it to explain to someone who understands the problem but hasn't thought about the solution? If I explain the problem to someone in the hall (who probably already understands the problem if they're in the hall) and can explain the solution, then it's not too complicated. If it takes over an hour, chances are good the solution's overengineered.
2) How deep into nested objects do you have to go? If I have an object which requires a property held by an object held by another object, then chances are good that what I'm trying to do is too far removed from the object itself (there's a small sketch of this after the list). Those situations become problematic when trying to make objects thread-safe, because there'd be many objects at varying depths from your current position to lock.
3) Are you trying to solve problems that have already been solved before? Not every problem is new (and some would argue that none really are). Is there an existing pattern or group of patterns you can use? If you can't, why not? It's all good to make your own new solutions, and I'm all for it, but sometimes people have already answered the problem. I'm not going to rewrite STL (though I tried, at one point), because the solution already exists and it's a good one.
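To illustrate point 2 with a contrived Python example (hypothetical classes, not from any real codebase):

    # The caller reaches three objects deep to get what it wants, so any locking or
    # threading discipline has to span Order, Customer and Address at once.
    class Address:
        def __init__(self, city):
            self.city = city

    class Customer:
        def __init__(self, address):
            self.address = address

    class Order:
        def __init__(self, customer):
            self.customer = customer

        def shipping_city(self):
            # the chain lives in one place instead of in every caller
            return self.customer.address.city

    order = Order(Customer(Address("Lisbon")))

    city_deep    = order.customer.address.city   # caller digs through three objects
    city_shallow = order.shipping_city()         # caller talks only to its neighbour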
Complexity can be estimated by looking at coupling and at how cohesive your objects are. If something has too much coupling or is not cohesive enough, then the design will start to become more complex.
When I attended the Complex Systems Modeling workshop at the New England Complex Systems Institute (http://necsi.org/), one of the measures that they used was the number of system states.
For example, if you have two interacting nodes, A and B, and each of these can be 0 or 1, the possible states are:
A B
0 0
1 0
0 1
1 1
Thus a system of only 1 interaction between binary components can actually result in 4 different states. The point being that the complexity of the system does not necessarily increase linearly as the number of interactions increases.
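A quick way to see how fast that grows (plain Python; nothing here is specific to NECSI's tooling):

    # The joint states of n binary components: 2**n, so the state space, not the
    # raw interaction count, is what explodes.
    from itertools import product

    for n in (1, 2, 3, 10):
        states = list(product((0, 1), repeat=n))
        print(f"{n} binary components -> {len(states)} states")   # 2, 4, 8, 1024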
Good measures can also be the number of files, the number of places where configuration is stored, and the order of compilation in some languages.
Examples:
- properties files, database configuration, XML files to hold related information
- tens of thousands of classes with interfaces and database mappings
- an extremely long and complicated build file (build.xml, Makefile, others...)
If your app is built, you can measure it in terms of time (how long a particular task would take to execute) or computations (how much code is executed each time the task is run).
If you just have designs, then you can look at how many components of your design are needed to run a given task, or to run an average task. For example, if you use MVC as your design pattern, then you have at least 3 components touched for the majority of tasks, but depending on your implementation of the design, you may end up with dozens of components (a cache in addition to the 3 layers, for example).
Finally something LOC can actually help measure? :)
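As a concrete (if simplistic) illustration of the "time and computations" measures above, sketched in Python with a placeholder task:

    # Two cheap, objective proxies: wall-clock time per task, and how many function
    # calls the task touches.  run_task is a stand-in for whatever you measure.
    import cProfile
    import timeit

    def run_task():
        return sum(i * i for i in range(10_000))

    elapsed = timeit.timeit(run_task, number=100) / 100
    print(f"average time per run: {elapsed * 1e3:.2f} ms")

    cProfile.run("run_task()")   # call counts per function = "how much code ran"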
I think complexity is best seen as the number of things that need to interact.
A complex design would have n tiers whereas a simple design would have only two.
Complexity is needed to work around issues that simplicity cannot overcome, so it is not always going to be a problem.
There is a problem in defining complexity in general as complexity usually has a task associated with it.
Something may be complex to understand, but simple to look at (very terse code for example)
The number of interactions getting this web page from the server to your computer is very complex, but the abstraction of the http protocol is very simple.
So having a task in mind (e.g. maintenance) before selecting a measure may make it more useful. (For example, adding a config file and logging to an app increases its objective complexity - yeah, only a little bit - but simplifies maintenance.)
There are formal metrics. Read up on Cyclomatic Complexity, for example.
Edit.
Also, look at Function Points. They give you a non-gut-feel quantitative measurement of system complexity.
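For example, here is a crude cyclomatic-complexity counter for Python functions (decision points plus one). Off-the-shelf tools do this properly for most languages; this is just to show the measurement is mechanical, not gut feel.

    # Count branch points (if/elif, loops, boolean operators, except handlers,
    # ternaries, comprehensions) and add one.
    import ast
    import inspect

    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                    ast.BoolOp, ast.IfExp, ast.comprehension)

    def cyclomatic_complexity(func):
        tree = ast.parse(inspect.getsource(func))
        return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

    def example(xs):
        total = 0
        for x in xs:                  # +1
            if x > 0 and x % 2 == 0:  # +1 (if) +1 (and)
                total += x
        return total

    print(cyclomatic_complexity(example))   # 4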