In XBRL, is the same presentation network valid for all the contexts in a single SEC filing?

Are presentation networks/hierarchies independent of the contexts of the facts that populate them?
For example, within an instance document, I see many facts that are duplicated for concepts, but with different contexts. Obviously this is because they represent different periods for a particular table, e.g. the income statement for the previous year or previous quarter.
However, is the presentation hierarchy valid for all contexts, i.e. does it stay the same across different contexts in the same SEC filing?

The simple answer is yes, presentation hierarchies are independent of contexts. There is no mechanism for tying particular hierarchies to particular contexts.
There are two details worth noting:
The presentation hierarchy can indicate where a concept is used as an opening balance or a closing balance (using the "preferred label" mechanism to select either a periodStartLabel or a periodEndLabel). Where this happens, tools that use the presentation hierarchy to display report information will select facts for that concept from different contexts (the instants at the start and end of the period).
The SEC XBRL Renderer applies some filtering to which facts are shown in each section. For example, if the same concept appears in the presentation for a primary financial statement and for a note, the renderer may filter out facts that are intended for the note so that they are not shown on the primary financial statement. For instance, if you look at this 10-Q, under "Financial Statements->Condensed Consolidated Statement of Operations" you'll see "Net Sales" as the first line. If you look under the 5th table under "Notes Details" you'll see a breakdown of "Net Sales" by product line. Despite using the same concept, most of the facts on this note are not shown on the first statement. Note that this filtering is a feature of this particular rendering engine, and is not part of the XBRL standard.

Related

Choosing MySQL over MongoDB

From the MongoDB docs:
When would MySQL be a better fit?
A concrete example would be the booking engine behind a travel reservation system, which also typically involves complex transactions. While the core booking engine might run on MySQL, those parts of the app that engage with users – serving up content, integrating with social networks, managing sessions – would be better placed in MongoDB.
Two things I don't understand in this (not even a little bit) concrete example:
What kind of queries are complex enough to be better suited for MySQL? (A concrete example of such a query would be of great help.)
Where is the line that separates the "core booking engine" from the "parts of the app that engage with users"?
My concern is not theoretical, as we use both MySQL and MongoDB in our app, and a better understanding of the above would really help us in designing our DB models for future features.
MySQL is ACID-compliant (assuming you're using InnoDB or a similar engine); MongoDB is not. Read the MongoDB docs about atomicity here:
MongoDB Atomicity
Think about going to the grocery store checkout, and that the POS system is using MySQL. What steps might take place in a single transaction?
Item scanned, price retrieved
Inventory updated, quantity on hand is subtracted by 1
Department metrics updated (add dollar amount, quantity, item type, etc.)
Is the item on sale? Show how much money the customer saved on the receipt
Customer used a coupon, make sure we notify the vendor so we get reimbursed
Send receipt total to accounting, update month / year / week stats
Now it's time to pay. OOPS! Customer left wallet at home, and says he'll come back later. We've made all these changes to many database tables, now what do we do? If we were using MySQL and had all these updates in a single transaction, we could just rollback that one transaction and no harm is done. All changes will be reverted automatically, and in the correct order.
Doing that in a non-transactional database means writing code to backtrack through all those changes, in the correct order.
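To make that concrete, here is a minimal sketch of what the checkout could look like as a single MySQL transaction. All table and column names are made up for illustration:

-- Hypothetical POS tables; every name below is illustrative only.
START TRANSACTION;

UPDATE inventory
   SET qty_on_hand = qty_on_hand - 1
 WHERE sku = '012345';

UPDATE department_metrics
   SET dollars_sold = dollars_sold + 2.99,
       items_sold   = items_sold + 1
 WHERE department_id = 7;

INSERT INTO coupon_redemptions (coupon_id, sku, redeemed_at)
VALUES (42, '012345', NOW());

-- Customer left the wallet at home: undo every change above in one step.
ROLLBACK;
-- Had payment succeeded, the final statement would have been COMMIT instead.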
MongoDB is good for document storage and retrieval. It wouldn't be my first choice for building up small pieces of a document a little at a time, where you want to store bits and pieces of information in separate places.
How do we use MongoDB in our grocery store example? We could use it as part of an inventory system.
Our MySQL inventory could have a schema of things we absolutely MUST have --- SKU, price, department. However, we don't necessarily want to clutter it up with columns we rarely need, such as 'Easter_2016_Promotion'. In MongoDB, since we don't have a schema that's set in stone, this isn't a problem.
Something like
db.inventory.update(
{ _id: 1 },
{ $set: { "Easter_2016": "y" } }
)
Could add the "Easter_2016" field to a single inventory item without affecting any of the others. In MySQL, you affect every row in a table by adding a single column --- not so in MongoDB. Additionally, when querying Mongo, you can search all records (documents) for a field that MAY or MAY not exist. In MySQL, the field either exists or it doesn't.
MongoDB is built for schemas that are fluid, dynamic, and (potentially) somewhat unknown. Its speed relies partly on the fact that there aren't monolithic transactions that it may have to undo, and partly on the fact that there isn't a schema to constantly validate against when inserting.
Need to analyze 100,000 receipt JSON files from our POS system? Just run mongoimport and start querying for what you want.
Need to add some special data for just a few inventory items, or flag a handful of customers as 'special handling'? MongoDB works for this as well.
Need to import and query tax returns from 20 different states (think: different field names, different number of fields, with a few overlaps)? Mongo wins here, hands down.
For anything that has several known, concrete steps that MUST work, and work in the proper sequence (think: an ATM), however, MySQL is a better fit.
A query with multiple joins is a good example. The main idea behind this point is that in a relational DB, m:n relations are symmetrical, whilst in a document-oriented DB they are not. Since v3.2, MongoDB has $lookup, which addresses this issue to some degree.
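Since you asked for a concrete example, here is the kind of multi-join query a booking engine might run. The schema (bookings, booking_passengers, passengers, booking_segments, flights) is invented purely for illustration:

-- Itinerary for one booking: each passenger paired with each flight segment.
SELECT b.booking_ref,
       p.full_name,
       f.flight_no,
       f.departs_at
FROM bookings b
JOIN booking_passengers bp ON bp.booking_id = b.id
JOIN passengers p          ON p.id = bp.passenger_id
JOIN booking_segments bs   ON bs.booking_id = b.id
JOIN flights f             ON f.id = bs.flight_id
WHERE b.booking_ref = 'ABC123'
ORDER BY f.departs_at;

Each of those joins walks a symmetric relation; in a document model you would either embed one side in the other or reach for $lookup.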
The line between the core booking engine and the user-facing part is drawn by the CAP theorem. The core part must be consistent, while the client-facing part can be implemented with eventual consistency. A recommended workaround for the lack of atomic transactions in MongoDB should shed some light on this statement. Alternatively, your core booking part can use event sourcing to keep state consistent without transactions.

Software for normalization in SQL

I know that decomposing relations into Boyce-Codd Normal Form (BCNF) is done by an algorithm.
If it's done by an algorithm, I wonder whether there is software to do the decomposing for me? I know how to do it, but I often make silly mistakes, and I want to be completely sure that I get it right.
Normalization absolutely is used in the real world... and hopefully you know that 3NF is only the third one of... what is it now, 8? But 3NF should be an easy target.
However... I would venture to say that there could not be such a tool.
Normalization, technically, is an attribute of each table. Within a given database, different tables may have different levels of normalization.
Each table represents facts... facts about instances of a certain type of thing (person, account, order, shipment, item, location) including, sometimes, foreign keys which lead you to other kinds of facts about that thing.
Normalization has to do with how precisely the tables represent the facts and how well the tables' design can prevent misrepresentation or inconsistent representation of the facts.
Thus, an understanding of the actual facts is required... which is outside the scope of automated tools.
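To see why, consider a small hypothetical case. Whether the decomposition below is correct depends entirely on a real-world rule (each instructor teaches exactly one course) that no tool can discover from the schema alone:

-- Not in BCNF if instructor -> course holds, because instructor is not a key here.
CREATE TABLE enrolment (
    student    VARCHAR(40),
    course     VARCHAR(40),
    instructor VARCHAR(40),
    PRIMARY KEY (student, course)
);

-- A BCNF decomposition that is only valid if that business rule really holds:
CREATE TABLE instructor_course (
    instructor VARCHAR(40) PRIMARY KEY,
    course     VARCHAR(40) NOT NULL
);

CREATE TABLE student_instructor (
    student    VARCHAR(40),
    instructor VARCHAR(40),
    PRIMARY KEY (student, instructor),
    FOREIGN KEY (instructor) REFERENCES instructor_course (instructor)
);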

Stop rete activations

I have a rule that retracts thousands of facts when a certain condition is met. This rule sits in a module that contains two other rules that use "not" statements. My questions are:
Does the rete network get recalculated every time the first rule retracts a fact?
Is it because of the "not" statements in the other two rules, or would that happen anyway?
Is there a way to stop recomputing activations until the first rule has no more facts to retract?
Thanks!
Precise answers aren't possible without knowing the patterns in the rules that use the type of the retracted facts.
Clearly, if Fact is that type and rules #2 and #3 contain just
not Fact(...constraints...)
nothing tremendous should happen until the last of those Fact facts (that meets the constraints, if any) is removed from working memory: then an additional node may have to be created, depending on what else is in that not CE; this may continue, depending on what comes after the not CE, and may result in terminal nodes, i.e., activations.
If a pattern like
Fact(...constraints...)
is in any of these rules, retracting a Fact (that meets these constraints, if any) causes immediate action on any pending activations and the removal of nodes in the network, provided the fact had been incorporated before.
There is not much you can do to avoid this activity in the Rete network.
That said, the necessity of having to retract thousands of facts is rather scary. How many remain? It might be cheaper to pick out the select few and start over in an entirely new Rete. Or use a design pattern that does not expose all of those thousands at once to the Engine. Or something else.
We've written a lazy algorithm that avoids producing partial matches and activations until the rule is potentially ready to fire. Because it is lazy, you can use salience to delay when a rule is evaluated.
http://blog.athico.com/2013/11/rip-rete-time-to-get-phreaky.html

Steps to design a well organized and normalized Relational Database

I just started making a database for my website, so I am re-reading Database Systems - Design, Implementation and Management (9th Edition), but I notice there is no single step-by-step process described in the book for creating a well-organized and normalized database. The book seems to be a little all over the place, and although the normalization process is all in one place, the steps leading up to it are not.
I thought it would be very useful to have all the steps in one list, but I cannot find anything like that online or anywhere else. I realize an answer explaining all of the steps would be quite extensive, but anything I can get on this subject will be greatly appreciated, including the order of instructions before normalization and links with suggestions.
Although I am semi-familiar with the process, I took a long break (about a year) from designing databases, so I would like everything described in detail.
I am especially interested in:
What's a good approach to begin modeling a database (or how to list business rules so it's not confusing)?
I would like to use ER or EER (the extended entity-relationship model), and I would like to know how to model subtypes and supertypes correctly using EER (disjoint and overlapping), as well as how to write down the business rules for them so you know something is a subtype (if there is any common way of doing that).
(I am already familiar with the normalization process, but an answer can include tips about it as well.)
Still need help with:
Writing down business rules (including business rules for subtypes and supertypes in EER)
How to use subtypes and supertypes in EER correctly (how to model them)
Any other suggestions will be appreciated.
I would recommend these videos (about 9 of them) on E/R modeling:
http://www.youtube.com/watch?v=q1GaaGHHAqM
EDIT:
"how extensive must the diagrams for this model be ? must they include all the entities and attributes?? "
Yes. Actually, you have ER modeling and extended ER modeling. The idea is to create the extended ER (EER) model, because there you not only specify the entities, you also specify the PKs, FKs, and cardinalities. Take a look at this link (see the graphics and the difference between the two models).
There are two ways of modeling: one reflects the real scenario and the other the real structure of the DB. That is: when you create an E-ER model you create the relationships and cardinalities for ALL entities, but when you go on to create the DB it is not necessary to create a relation table for a 1:N cardinality (the table on the N side just gets a FK to the table on the 1 side, so you don't need a relation table in the DB), and when you have a 1:1 cardinality you know that one of your entities can absorb the other entity. Look at this graphic: only the N:M relationships get their own relation tables (when you see a table with 2 or more FKs, that's a relation table).
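Here is a rough DDL sketch of those two cases (all names invented for illustration): the 1:N case needs no relation table, while the N:M case gets a relation table with two FKs.

-- 1:N -- the "N" side simply carries a FK; no separate relation table.
CREATE TABLE department (
    dept_id INT PRIMARY KEY,
    name    VARCHAR(60)
);

CREATE TABLE employee (
    emp_id  INT PRIMARY KEY,
    name    VARCHAR(60),
    dept_id INT,
    FOREIGN KEY (dept_id) REFERENCES department (dept_id)
);

-- N:M -- this is the relation table you can spot by its two FKs.
CREATE TABLE project (
    proj_id INT PRIMARY KEY,
    title   VARCHAR(60)
);

CREATE TABLE employee_project (
    emp_id  INT,
    proj_id INT,
    PRIMARY KEY (emp_id, proj_id),
    FOREIGN KEY (emp_id)  REFERENCES employee (emp_id),
    FOREIGN KEY (proj_id) REFERENCES project (proj_id)
);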
But remember, those are just "rules", and you can break them if your design needs to, for performance, security, etc.
About tools: there are a lot of them, but I recommend MySQL Workbench, because you can use it to connect to your DBs (if you are on MySQL) and create E/R designs with attributes, and it will auto-create the N:M relation tables.
EDIT 2:
Here are some links that explain this a little better; it would take a lot of lines to explain it all myself. Please review these links and let me know if you have questions:
Type and subtype:
http://www.siue.edu/~dbock/cmis450/4-eermodel.htm
Business rules (integrity constraints):
http://www.deeptraining.com/litwin/dbdesign/FundamentalsOfRelationalDatabaseDesign.aspx (please take a look especially at this one; I think it will help you with all of this)
http://www.google.com/url?sa=t&rct=j&q=database%20design%20integrity%20constraints&source=web&cd=1&ved=0CFYQFjAA&url=http%3A%2F%2Fcs-people.bu.edu%2Frkothuri%2Flect12-constraints.ppt&ei=2aLDT-X4Koyi8gTKhZWnCw&usg=AFQjCNEvXGr7MurxM-YCT0-rU0htqt6yuA&cad=rja
I have reread the book and some articles online and have created a short list of steps for designing a decent database (of course, you need to understand the basics of database design first). The steps are described in greater detail below:
(A lot of the steps are described in the book Database Systems - Design, Implementation and Management (9th Edition), and that is what the page numbers refer to, but I will try to describe as much as I can here and will edit this answer in the following days to make it more complete.)
Create a detailed narrative of the organization’s description of operations.
Identify the business rules based from the description of operations.
Identify the main entities and relationships from the business rules.
Translate entities/relationships to EER model
Check naming conventions
Map EER model to logical model (pg 400)*
Normalize logical model (pg 179)
Improve DB design (pg 187)
Validate Logical Model Integrity Constraints (pg 402) (like length etc.)
Validate the Logical Model against User Requirements
Translate tables to MySQL code (in Workbench, translate the EER model to an SQL file using the export function, then run it in MySQL)
*You can possibly skip this step if you are using Workbench and work off the EER model that you design there.
1. Describe the workings of the company in great detail. If you are creating a personal project, describe it in detail; if you are working with a company, ask for documents describing their operations and interview the employees for information (interviews might generate inconsistent information; make sure to check with supervisors which information is more important for the design).
2. Look at the gathered information and start generating business rules from it, making sure to fill in any gaps in your knowledge. Confirm the rules with supervisors in the company before moving on.
3. Identify the main entities and relationships from the business rules. Keep in mind that during the design process, the database designer does not depend simply on interviews to help define entities, attributes, and relationships. A surprising amount of information can be gathered by examining the business forms and reports that an organization uses in its daily operations. (pg 123)
4. If the database is complex, you can break down the ERD design into the following substeps:
i) Create External Models (pg 46)
ii) Combine External Models to form Conceptual Model (pg 48)
Follow these steps iteratively for the design (or for each substep):
I. Develop the initial ERD.
II. Identify the attributes and primary keys that adequately describe the entities.
III. Revise and review the ERD.
IV. Repeat these steps until the output is satisfactory.
You may also use entity clustering to further simplify your design process.
Describing the database through the ERD:
Use solid lines to connect weak entities (weak entities are those that cannot exist without the parent entity and contain the parent's PK in their own PK); a small DDL sketch of a weak entity follows below.
Use dashed lines to connect strong entities (strong entities are those that can exist independently of any other entity).
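As referenced above, here is a minimal DDL sketch of a weak entity (names are made up): ORDER_LINE cannot exist without its parent order and carries the parent's PK inside its own PK.

CREATE TABLE orders (
    order_id   INT PRIMARY KEY,
    ordered_on DATE
);

CREATE TABLE order_line (
    order_id INT,
    line_no  INT,
    sku      VARCHAR(20),
    qty      INT,
    PRIMARY KEY (order_id, line_no),            -- parent PK is part of this PK
    FOREIGN KEY (order_id) REFERENCES orders (order_id)
);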
5. Check if your names follow your naming conventions. I used to have suggestions for naming conventions here but people didn't really like them. I suggest following your own standards or looking up some naming conventions online. Please post a comment if you found some naming conventions that are very useful.
6.
Logical design generally involves translating the ER model into a set of relations (tables), columns, and constraint definitions.
Translate the EER model to the logical model using these steps:
Map strong entities (entities that don't need other entities to exist)
Map supertype/subtype relationships (a DDL sketch follows this list)
Map weak entities
Map binary relationships
Map higher degree relationships
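As promised above, here is a sketch of one common way to map a supertype with two subtypes (disjoint or overlapping); entity and column names are purely illustrative.

-- Supertype holds the shared attributes plus a discriminator;
-- each subtype table reuses the supertype's PK.
CREATE TABLE person (
    person_id   INT PRIMARY KEY,
    name        VARCHAR(60),
    person_type CHAR(1)                 -- e.g. 'E' = employee, 'C' = customer
);

CREATE TABLE employee (
    person_id INT PRIMARY KEY,
    salary    DECIMAL(10,2),
    FOREIGN KEY (person_id) REFERENCES person (person_id)
);

CREATE TABLE customer (
    person_id    INT PRIMARY KEY,
    credit_limit DECIMAL(10,2),
    FOREIGN KEY (person_id) REFERENCES person (person_id)
);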
7. Normalize the logical model. You may also denormalize the logical model in order to gain some desired characteristics (such as improved performance).
8. Refine attribute atomicity. It is generally good practice to pay attention to the atomicity requirement. An atomic attribute is one that cannot be further subdivided. Such an attribute is said to display atomicity. By improving the degree of atomicity, you also gain querying flexibility.
Refine primary keys as required for data granularity. Granularity refers to the level of detail represented by the values stored in a table's row. Data stored at their lowest level of granularity are said to be atomic data, as explained earlier. For example, imagine an ASSIGN_HOURS attribute that represents the hours worked by a given employee on a given project. However, are those values recorded at their lowest level of granularity? In other words, does ASSIGN_HOURS represent the hourly total, daily total, weekly total, monthly total, or yearly total? Clearly, ASSIGN_HOURS requires a more careful definition. In this case, the relevant question would be: for what time frame (hour, day, week, month, and so on) do you want to record the ASSIGN_HOURS data?
For example, assume that the combination of EMP_NUM and PROJ_NUM is an acceptable (composite) primary key in the ASSIGNMENT table. That primary key is useful in representing only the total number of hours an employee worked on a project since its start. Using a surrogate primary key such as ASSIGN_NUM provides lower granularity and yields greater flexibility. For example, assume that the EMP_NUM and PROJ_NUM combination is used as the primary key, and then an employee makes two "hours worked" entries in the ASSIGNMENT table. That action violates the entity integrity requirement. Even if you add ASSIGN_DATE as part of a composite PK, an entity integrity violation is still generated if an employee makes two or more entries for the same project on the same day. (The employee might have worked on the project a few hours in the morning and then worked on it again later in the day.) The same data entry yields no problems when ASSIGN_NUM is used as the primary key.
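A compact sketch of the two ASSIGNMENT designs discussed above, in MySQL-style DDL (the column types are assumptions):

-- Composite PK: at most one row per employee, project, and date.
CREATE TABLE assignment_composite (
    emp_num      INT,
    proj_num     INT,
    assign_date  DATE,
    assign_hours DECIMAL(5,2),
    PRIMARY KEY (emp_num, proj_num, assign_date)
);

-- Surrogate PK: any number of "hours worked" entries per employee, project, and day.
CREATE TABLE assignment (
    assign_num   INT AUTO_INCREMENT PRIMARY KEY,
    emp_num      INT NOT NULL,
    proj_num     INT NOT NULL,
    assign_date  DATE NOT NULL,
    assign_hours DECIMAL(5,2) NOT NULL
);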
Try to answer questions such as: "Who will be allowed to use the tables, and what portion(s) of the table(s) will be available to which users?", etc.
Please feel free to leave suggestions or links to better descriptions in the comments below; I will add them to my answer.
One aspect of your question touched on representing subclass-superclass relationships in SQL tables. Martin Fowler discusses three ways to design this, of which my favorite is Class Table Inheritance. The tricky part is arranging for the Id field to propagate from superclasses to subclasses. Once you get that done, the joins you will typically want to do are slick, easy, and fast.
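A rough sketch of Class Table Inheritance under that scheme (the vehicle/truck names are just an example): the subclass row reuses the superclass Id, so the join is a cheap 1:1 lookup.

CREATE TABLE vehicle (
    vehicle_id INT PRIMARY KEY,
    maker      VARCHAR(60)
);

CREATE TABLE truck (
    vehicle_id   INT PRIMARY KEY,       -- same Id as the superclass row
    payload_tons DECIMAL(6,2),
    FOREIGN KEY (vehicle_id) REFERENCES vehicle (vehicle_id)
);

-- Typical query: superclass attributes plus the subclass-specific ones.
SELECT v.vehicle_id, v.maker, t.payload_tons
FROM vehicle v
JOIN truck t ON t.vehicle_id = v.vehicle_id;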
There are six main steps in designing any database:
1. Requirements Analysis
2. Conceptual Design
3. Logical Design
4. Schema Refinement
5. Physical Design
6. Application & Security Design.

Database design for recursive children

This design problem is turning out to be a bit more "interesting" than I'd expected....
For context, I'll be implementing whatever solution I derive in Access 2007 (not much choice--customer requirement. I might be able to talk them into a different back end, but the front end has to be Access (and therefore VBA & Access SQL)). The two major activities that I anticipate around these tables are batch importing new structures from flat files and reporting on the structures (with full recursion of the entire structure). Virtually no deletes or updates (aside from entire trees getting marked as inactive when a new version is created).
I'm dealing with two main tables, and wondering if I really have a handle on how to relate them: Products and Parts (there are some others, but they're quite straightforward by comparison).
Products are made up of Parts. A Part can be used in more than one Product, and most Products employ more than one Part. I think that a normal many-to-many resolution table can satisfy this requirement (mostly--I'll revisit this in a minute). I'll call this Product-Part.
The "fun" part is that many Parts are also made up of Parts. Once again, a given Part may be used in more than one parent Part (even within a single Product). Not only that, I think that I have to treat the number of recursion levels as effectively arbitrary.
I can capture the relations with a m-to-m resolution from Parts back to Parts, relating each non-root Part to its immediate parent part, but I have the sneaking suspicion that I may be setting myself up for grief if I stop there. I'll call this Part-Part. Several questions occur to me:
Am I borrowing trouble by wondering about this? In other words, should I just implement the two resolution tables as outlined above, and stop worrying?
Should I also create Part-Part rows for all the ancestors of each non-root Part, with an extra column in the table to store the number of generations?
Should Product-Part contain rows for every Part in the Product, or just the root Parts? If it's all Parts, would a generation indicator be useful?
I have (just today, from the Related Questions), taken a look at the Nested Set design approach. It looks like it could simplify some of the requirements (particularly on the reporting side), but thinking about generating the tree during the import of hundreds (occasionally thousands) of Parts in a Product import is giving me nightmares before I even get to sleep. Am I better off biting that bullet and going forward this way?
In addition to the specific questions above, I'd appreciate any other commentary on the structural design, as well as hints on how to process this, either inbound or outbound (though I'm afraid I can't entertain suggestions of changing the language/DBMS environment).
Bills of materials and exploded parts lists are always so much fun. I would implement Parts as your main table, with a Boolean field to say a part is "sellable". This removes the first-level recursion difference and the redundancy of Parts that are themselves Products. Then, implement Products as a view of Parts that are sellable.
You're on the right track with the PartPart cross-ref table. Implement a constraint on that table that says the parent Part and the child Part cannot be the same Part ID, to save yourself some headaches with infinite recursion.
Generational differences between BOMs can be maintained by creating a new Part at the level of the actual change, and in any higher levels in which the change must be accommodated (if you want to say that this new Part, as part of its parent hierarchy, results in a new Product). Then update the reference tree of any Part levels that weren't revised in this generational change (to maintain Parts and Products that should not change generationally if a child does). To avoid orphans (unreferenced Parts records that are unreachable from the top level), Parts can reference their predecessor directly, creating a linked list of ancestors.
This is a very complex web, to be sure; persisting tree-like structures of similarly-represented objects usually are. But, if you're smart about implementing constraints to enforce referential integrity and avoid infinite recursion, I think it'll be manageable.
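A minimal sketch of that structure in generic SQL (Access DDL differs in places, and all names are illustrative): Parts is the single main table, Part-Part is the self-referencing cross-ref with the anti-recursion constraint, and Products is just a view of sellable Parts.

CREATE TABLE part (
    part_id  INT PRIMARY KEY,
    name     VARCHAR(80),
    sellable BOOLEAN NOT NULL            -- true when the Part is also a Product
);

CREATE TABLE part_part (
    parent_part_id INT,
    child_part_id  INT,
    qty            INT NOT NULL,
    PRIMARY KEY (parent_part_id, child_part_id),
    FOREIGN KEY (parent_part_id) REFERENCES part (part_id),
    FOREIGN KEY (child_part_id)  REFERENCES part (part_id),
    CHECK (parent_part_id <> child_part_id)   -- a Part cannot contain itself
);

CREATE VIEW product AS
SELECT part_id, name
FROM part
WHERE sellable = TRUE;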
I would have one part table for atomic parts, then a superpart table with a superpartID and its related subparts. Then you can have a product/superpart table.
If a part is also a superpart, then you just have one row for the superpartID with the same partID.
Maybe 'component' is a better term than superpart. Components could be reused in larger components, for example.
You can find sample Bill of Materials database schemas at
http://www.databaseanswers.org/data_models/
The website offers Access applications for some of the models. Check with the author of the website.