Extending XBRL Presentation Networks

I understand XBRL presentation networks very well, and I also understand the mechanisms of prohibiting and overriding relationships, but the way to extend a presentation network with a new, custom concept eludes me.
A presentation network defines a hierarchy of locators, where each locator points to a concept. This makes it possible for the same concept to appear multiple times inside the same network (for example, the Equity concept in the Statement of Changes in Equity in the IFRS taxonomy).
The question is: How can an extension taxonomy attach a new arc to a particular locator in the base taxonomy? When an extension taxonomy defines a new arc, that new arc points to two new locators in the same extension taxonomy and will therefore not integrate with the network that is defined by the base taxonomy.

The locators are "proxies" to concepts, and a syntactic detail. From a logical perspective, a presentation network is a DAG of report elements (="concepts" in specese). When prohibiting and overriding, what matters in the resolution of the DTS is the concepts, not the locators.
In specese wording, two relationships are equivalent if their (non-exempt) attributes match and the XML fragments on the from and to sides are identical. In a presentation network, those from and to fragments are the concepts, not the locators.
An extension taxonomy will thus have its own locators, but they point to the same concepts whenever it wants to reuse them.
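For concreteness, here is a minimal sketch of what the extension's presentation linkbase could look like. All file names, prefixes, and concept names are illustrative, not taken from any real taxonomy; the one structural requirement is that the extended link use the same role URI as the base network, since networks are keyed by extended link role and arc role:

```xml
<link:presentationLink xlink:type="extended"
    xlink:role="http://example.com/roles/ChangesInEquity">
  <!-- must be the same extended link role URI as the base network (assumed name here) -->

  <!-- locator defined in the extension, but resolving to a BASE taxonomy concept -->
  <link:loc xlink:type="locator" xlink:label="parent"
      xlink:href="base-taxonomy.xsd#base_Equity"/>

  <!-- locator resolving to the NEW concept declared in the extension schema -->
  <link:loc xlink:type="locator" xlink:label="child"
      xlink:href="my-extension.xsd#ext_MyNewConcept"/>

  <!-- the arc is written against the locators, but logically it connects the concepts -->
  <link:presentationArc xlink:type="arc"
      xlink:arcrole="http://www.xbrl.org/2003/arcrole/parent-child"
      xlink:from="parent" xlink:to="child" order="10"/>
</link:presentationLink>
```

Because relationship resolution works on the concepts the locators point to, this arc lands in the same logical network as the base taxonomy's arcs around base_Equity.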

Related

What does backbone mean in a neural network?

I am getting confused with the meaning of "backbone" in neural networks, especially in the DeepLabv3+ paper. I did some research and found out that "backbone" could mean the feature extraction part of a network.
DeepLabv3+ uses Xception and ResNet-101 as its backbones. However, I am not familiar with the entire structure of DeepLabv3+: which part does the backbone refer to, and which parts remain the same when it is swapped?
A generalized description or definition of backbone would also be appreciated.
In my understanding, the "backbone" refers to the feature extracting network which is used within the DeepLab architecture. This feature extractor is used to encode the network's input into a certain feature representation. The DeepLab framework "wraps" functionalities around this feature extractor. By doing so, the feature extractor can be exchanged and a model can be chosen to fit the task at hand in terms of accuracy, efficiency, etc.
In case of DeepLab, the term backbone might refer to models like the ResNet, Xception, MobileNet, etc.
TL;DR Backbone is not a universal technical term in deep learning.
(Disclaimer: yes, there may be a specific kind of method, layer, tool etc. that is called "backbone", but there is no "backbone of a neural network" in general.)
If authors use the word "backbone" while describing a neural network architecture, they usually mean one of two things:
feature extraction (a part of the network that "sees" the input), though this interpretation is not universal in the field: for instance, in my opinion, computer vision researchers would use the term to mean feature extraction, whereas natural language processing researchers would not;
in informal language, that the part in question is crucial to the overall method.
Backbone is a term used in DeepLab models/papers to refer to the feature extractor network. These feature extractors compute features from the input image, which are then upsampled by a simple decoder module of the DeepLab models to generate segmentation masks. The authors of the DeepLab models have shown performance with different feature extractors (backbones) such as MobileNet, ResNet, and Xception.
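To make the exchangeable-backbone point concrete, here is a minimal sketch using torchvision's DeepLabV3 factories (the function names and the weights= keyword are assumptions about a recent torchvision; older versions use pretrained= instead):

```python
import torch
from torchvision.models.segmentation import (
    deeplabv3_resnet50,            # DeepLabV3 head on a ResNet-50 backbone
    deeplabv3_mobilenet_v3_large,  # same head on a MobileNetV3 backbone
)

# Two variants of the same segmentation architecture, differing only in backbone.
variants = {
    "resnet50": deeplabv3_resnet50(weights=None, num_classes=21),
    "mobilenet_v3": deeplabv3_mobilenet_v3_large(weights=None, num_classes=21),
}

x = torch.randn(1, 3, 224, 224)   # dummy input image
for name, model in variants.items():
    model.eval()
    with torch.no_grad():
        out = model(x)["out"]      # per-pixel class scores from the DeepLab head
    print(name, tuple(out.shape))  # same interface and output shape either way
```

The decoder/head code is identical in both cases; only the feature extractor it wraps changes, which is exactly what "backbone" refers to here.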
CNNs are used for extracting features. Several CNNs are available, for instance AlexNet, VGGNet, and ResNet; these serve as backbones. These networks are mainly used for object classification tasks and have been evaluated on widely used benchmarks and datasets such as ImageNet. In image classification or image recognition, the classifier classifies a single object in the image, outputs a single category per image, and gives the probability of it matching a class. In object detection, by contrast, the model must be able to recognize several objects in a single image and provide the coordinates that identify the location of each object. This shows that detecting objects can be more difficult than classifying images.
source and more info: https://link.springer.com/chapter/10.1007/978-3-030-51935-3_30

Which concepts to report in XBRL instance?

I have a taxonomy entry point that refers to three linkbases, one of which is a presentation linkbase. When opening the entry point XSD, my XBRL tool discovers many more concepts than are present in the presentation linkbase, most of which are irrelevant for the report in question.
Is there a programmable way of deciding which concepts are to be reported, for instance by only reporting concepts which are present in a discovered presentation linkbase? Or does a human being always need to read some taxonomy-specific documentation and then select the concepts?
To give the example behind my question: I was referring to the entry point www.nltaxonomie.nl/10.0/report/bd/entrypoints/bd-rpt-ob-aangifte-2016.xsd. (The complete taxonomy is available at www.sbr-nl.nl/fileadmin/SBR/documenten/NT_2016/SBR_NT_10.0.zip.)
For instance, the XBRL editor of my choice displays the concept BusinessProfitTitle coming from www.nltaxonomie.nl/10.0/report/bd/abstracts/bd-abstracts.xsd. BusinessProfitTitle is not included in the presentation linkbase www.nltaxonomie.nl/10.0/report/bd/linkroles/bd-aangifte-omzetbelasting-pre.xml, which is referred to by the entry point and which only contains concepts related to value-added tax. The entry point refers to two more definition linkbases, which seem to contain more concepts than are relevant. So I was wondering how to determine programmatically which concepts must be reported for the entry point above, when you don't speak Dutch.
There is nothing in the XBRL 2.1 specification that directly addresses your question.
There is something called the formula linkbase that can produce validation errors based on the content of the instance document and the DTS. Within the formula linkbase specification set there are "existence assertions", and some regulators are using "value assertions" to detect whether a concept has been reported and whether the reported value is valid.
This does depend somewhat on the filing system in question, but typically, the concepts covered by a presentation tree are the ones relevant to the entry point in question, and you can safely ignore all other concepts.
The schemas defining concepts are often shared by the entry points for multiple report types, so it's not uncommon to find concepts that are not covered by the presentation for a particular entry point.
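If you want to automate the "concepts covered by the presentation tree" heuristic above, a rough sketch in Python (standard library only) could look like this; it assumes a locally downloaded copy of the presentation linkbase from the question, and a production solution would instead walk the whole DTS with a proper XBRL processor such as Arelle:

```python
import xml.etree.ElementTree as ET

LINK = "{http://www.xbrl.org/2003/linkbase}"
XLINK = "{http://www.w3.org/1999/xlink}"

def presented_concepts(pre_linkbase_path):
    """Collect the concept IDs referenced by locators in a presentation linkbase."""
    concepts = set()
    for loc in ET.parse(pre_linkbase_path).iter(f"{LINK}loc"):
        href = loc.get(f"{XLINK}href", "")
        if "#" in href:
            # the fragment part of the locator href identifies the concept
            concepts.add(href.split("#", 1)[1])
    return concepts

# assumed local file name, downloaded from the taxonomy zip in the question
print(sorted(presented_concepts("bd-aangifte-omzetbelasting-pre.xml"))[:10])
```

Any concept the locators do not reach would then be treated as irrelevant for this entry point, per the answer above.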

Game Inventory - Design Pattern

I'm still studying OOP designs, so what's the best way to implement an inventory for a simple Flash game? It seems that more than one design pattern could deliver some kind of inventory, but without good knowledge of the subject I would lose flexibility by trying to adapt one blindly.
For the money used to buy what is available in the inventory, I thought of a Singleton. If enough cash is earned while playing the game, then one can buy new skills.
Maybe the Decorator pattern could list many thumbnails as buttons, where clicking one applies new features and skills to the character.
I'd like to read standard advice on solving this problem, because I feel I'm on the wrong track. Thanks.
Stay away from singleton if possible
Singleton has its uses; however, I believe it's overused in a lot of cases.
The biggest problem with a singleton is that you're using global state, which is generally regarded as a bad thing: as the complexity of your software grows, it can cause unintended side effects.
Object composition might be a better way
For games you might want to take a look at using Object Composition rather than traditional OOD Modelling.
A software component is a software element that conforms to a component model and can be independently deployed and composed without modification according to a composition standard.
A component model defines specific interaction and composition standards. A component model implementation is the dedicated set of executable software elements required to support the execution of components that conform to the model.
A software component infrastructure is a set of interacting software components designed to ensure that a software system or subsystem constructed using those components and interfaces will satisfy clearly defined performance specifications.
Component based game engine design
http://www.as3dp.com/2009/02/21/design-pattern-principles-for-actionscript-30-favor-object-composition-over-class-inheritance/
Reading over the material in the first link should give you some excellent ideas on how to model your inventory system and have it extendable in a nice way.
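To illustrate composition over a singleton, here is a minimal sketch (in Python rather than ActionScript, just to show the shape of the idea; all names are made up). The inventory is an ordinary component attached to the player entity, so no global state is required:

```python
class Inventory:
    """A component holding the player's cash and purchased skills."""
    def __init__(self, gold=0):
        self.gold = gold
        self.items = []

    def buy(self, item, price):
        if self.gold < price:       # not enough cash earned yet
            return False
        self.gold -= price
        self.items.append(item)     # e.g. a new skill for the character
        return True

class Entity:
    """A bare-bones entity that is just a bag of named components."""
    def __init__(self):
        self.components = {}

    def add(self, name, component):
        self.components[name] = component
        return self

player = Entity().add("inventory", Inventory(gold=100))
print(player.components["inventory"].buy("double-jump", 75))  # True
print(player.components["inventory"].gold)                    # 25
```

Because the inventory is passed around like any other component, a second player, a shop, or a save-game system can each hold its own instance, which is exactly the flexibility a singleton would take away.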

Presentation patterns to use with Ext

Which presentation patterns do you think Ext favors or have you successfully used to achieve high testability and also maintainability?
Since Ext component instances usually come tightly coupled with state and some sort of presentation logic (e.g. format validation for text fields), Passive View is not a natural fit. Supervising Presenter seems like it can work (and I've painlessly used it in one occasion). How about the suitability of Presentation Model? Any others?
While this question is specifically for Ext, it can apply to similar frameworks like SmartClient and even RIA technologies like Flex. So, if you have any first-hand pattern experiences with any other web UI technologies, your input would still be appreciated.
When thinking of presentation patterns, this is a great quote:
Separating user interface code from everything else is a key principle in well-engineered software. But it's not always easy to follow and it leads to more abstraction in an application that is hard to understand. Quite a lot of design patterns try to target this scenario: MVC, MVP, Supervising Controller, Passive View, Presentation Model, Model-View-ViewModel, etc. The reason for this variety of patterns is that this problem domain is too big to be solved by one generic solution. However, each UI framework has its own unique characteristics and so they work better with some patterns than with others.
As far as Ext is concerned, in my opinion the closest pattern would be Model-View-ViewModel; however, this pattern is inherently difficult to code for while maintaining the separation of its key tenets (state, view, model).
That said, as per the quote above, each pattern tries to solve, compartmentalise, or simplify a problem that is often too complex for the individual application at hand, and often fails when you take it to its absolute. As such, aim for a "best fit" rather than an absolute when matching a pattern to your application.
And remember:
The reason for this variety of patterns is that this problem domain is too big to be solved by one generic solution.
I hope this helps!
Two years have passed since this question was asked, and now Ext JS 4 has a built-in implementation of the MVC pattern. However, instead of an MVP (which I prefer), it favors a straight controller, because the views attach themselves to the models through stores.
Here's the docs on the controller:
http://docs.sencha.com/ext-js/4-1/#!/api/Ext.app.Controller
Nonetheless it can be made to act more like a supervising controller. One nice aspect of Ext JS is the global application object's ability to act as an event bus for controller-to-controller communication. See this post on how to do that:
http://www.sencha.com/forum/showthread.php?176495-How-to-listen-for-custom-events-fired-in-application
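The event-bus idea itself is framework-independent; here is a minimal Python sketch of it (Ext JS specifics omitted, all names illustrative), where the application object lets controllers communicate without holding references to each other:

```python
class EventBus:
    """Minimal publish/subscribe hub, playing the role of the global application object."""
    def __init__(self):
        self._listeners = {}

    def on(self, event, handler):
        self._listeners.setdefault(event, []).append(handler)

    def fire(self, event, *args):
        for handler in self._listeners.get(event, []):
            handler(*args)

app = EventBus()

# controller-to-controller communication without direct coupling:
app.on("userSaved", lambda user: print("grid controller refreshes for", user))
app.fire("userSaved", "alice")  # fired by, say, a form controller
```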
Of course the definitive explanation of all these patterns can be found here:
http://martinfowler.com/eaaDev/uiArchs.html

Handling amorphous subsystems in formal software design

People like Alexander Stepanov and Sean Parent vote for a formal and abstract approach on software design.
The idea is to break complex systems down into a directed acyclic graph and hide cyclic behaviour in nodes representing that behaviour.
Parent gave presentations at BoostCon and Google (the slides from BoostCon introduce the approach on p. 24; there is also a video of the Google talk).
While I like the approach and think it's a necessary development, I have a problem imagining how to handle subsystems with amorphous behaviour.
Imagine, for example, a common pattern for state machines: using an interface which all states support and having different behaviour in the concrete implementations of the states (see the sketch at the end of this question).
How would one solve that?
Note that I am just looking for an abstract approach.
I can think of hiding that behaviour behind a node and defining different sub-DAGs for the states, but that complicates the design considerably if you want to influence the behaviour of the main DAG from a sub-DAG.
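For concreteness, the state pattern I mean looks roughly like this as a minimal Python sketch (all names illustrative); the Idle/Running cycle is exactly the kind of cyclic behaviour that would have to be hidden inside a DAG node:

```python
class State:
    """Interface which all states support."""
    def handle(self, machine):
        raise NotImplementedError

class Idle(State):
    def handle(self, machine):
        machine.state = Running()  # the concrete state decides its successor
        return "starting"

class Running(State):
    def handle(self, machine):
        machine.state = Idle()     # ...and here the cycle closes
        return "stopping"

class Machine:
    def __init__(self):
        self.state = Idle()

    def step(self):
        return self.state.handle(self)

m = Machine()
print(m.step(), m.step())  # starting stopping
```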
Your question is not clear. Define amorphous subsystems.
You are "just looking for an abstract approach" but then you seem to want details about an implementation in a conventional programming language ("common pattern for state-machines"). So, what are you asking for? How to implement nested finite state-machines?
Some more detail will help the conversation.
For a real abstract approach, look at something like Stream X-Machines:
... The X-machine model is structurally the same as the finite state machine, except that the symbols used to label the machine's transitions denote relations of type X→X. ...
The Stream X-Machine differs from Eilenberg's model in that the fundamental data type is X = Out* × Mem × In*, where In* is an input sequence, Out* is an output sequence, and Mem is the (rest of the) memory.
The advantage of this model is that it allows a system to be driven, one step at a time, through its states and transitions, while observing the outputs at each step. These are witness values that guarantee that particular functions were executed on each step. As a result, complex software systems may be decomposed into a hierarchy of Stream X-Machines, designed in a top-down way and tested in a bottom-up way. This divide-and-conquer approach to design and testing is backed by Florentin Ipate's proof of correct integration, which proves how testing the layered machines independently is equivalent to testing the composed system. ...
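To make the X→X idea tangible, here is a toy Stream X-Machine in Python (entirely illustrative): every transition is a function on the whole triple X = (Out*, Mem, In*), consuming one input, updating the memory, and emitting a witness value:

```python
def add_step(x):
    """One transition of type X -> X: consume an input, update memory, emit a witness."""
    outs, mem, ins = x
    head, rest = ins[0], ins[1:]
    return (outs + [f"total={mem + head}"], mem + head, rest)

def run(step, inputs):
    x = ([], 0, list(inputs))  # Out* empty, Mem = 0, In* = the input sequence
    while x[2]:                # drive the machine one step at a time...
        x = step(x)            # ...observing the outputs at each step
    return x[0]

print(run(add_step, [1, 2, 3]))  # ['total=1', 'total=3', 'total=6']
```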
But I don't see how the presentation is related to this. He seems to describe a quite mainstream approach to programming, nothing similar to X-Machines. Anyway, the presentation is quite confusing and I have no time to watch the video right now.
First impression of the talk, reading the slides only
The author touches haphazardly on numerous fields/problems/solutions, apparently without recognizing it: from Peopleware (for example Psychology of programming), to Software Engineering (for example software product lines), to various programming techniques.
How the various parts are linked and what exactly he is advocating is not clear at all (I'm accustomed to reading just the slides, and they usually form a coherent sequence):
Dataflow programming?
Constraint solving for user interfaces? For practical implementations, see Garnet for Common Lisp and Amulet/OpenAmulet for C++.
What advantages does this "new" concept-based generic programming give us over well-known approaches (for example, tools based on Hoare-logic pre/postconditions and invariants, or better, Hoare's Communicating Sequential Processes (CSP), or Hehner's Practical Theory of Programming, or a programming language with a sophisticated type system like ATS, Qi or Epigram, and so on)? It seems to me that introducing "concepts" (which, as-is, are specific to C++) is no simpler than using the alternatives. Is it just about jargon and "politics"? (Finally formal methods... but disguised.)
Why organize program modules as a DAG and not as a tree, as David Parnas advocated decades ago in Designing software for ease of extension and contraction? (here a directly accessible .pdf and here slides from a lecture). The work on X-Machines is probably an answer to this question (going even beyond DAGs), but, again, the author seems to describe a quite conventional program-development regime in which Parnas' approach is the only sensible one.
If/when I see the video, I will update this answer.