Why is the complexity of pop_heap O(2 * log(N))?

I saw in several places that for priority_queue, the complexity of pop_heap is O(2 * log(N)). Is that true? If so, where does that 2 come from? After removing the first element it just needs to reconstruct the heap, which would take O(log(N)).

Why, according to the standard, may pop_heap use 2 * log(N) comparisons, whereas push_heap uses only log(N)?
To start with, remember that a heap has the structure of a binary tree: each node has at most two children (and obviously at most one parent).
How pop_heap works:
Take the top element. (O(1))
Take the last element and place it on the top. (O(1))
Update the heap using top -> bottom logic: start from the top element and compare it with its two children; swap the current element with the appropriate child and proceed to the next level, or stop if the current element is already in the correct place (the heap condition holds).
How push_heap works:
Place the element as the last element (a leaf of the tree). (O(1))
Update the heap using bottom -> top logic: start with the element you just added and compare it with its parent; swap the current element with its parent if necessary and proceed, or stop if the heap condition already holds.
So, the main difference between the two operations above is the heap-update (reconstruction) logic: pop_heap uses top-to-bottom logic, and push_heap uses bottom-to-top logic. Both are O(log(N)), since the whole structure is a binary tree. But pop_heap requires more comparisons, because at each level we must compare the current element with both of its children (two comparisons per level, hence roughly 2 * log(N) in total), while during push_heap we compare the current element only with its one and only parent (one comparison per level, hence roughly log(N)).
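To make the comparison counts concrete, here is a minimal sketch of the two update strategies on a max-heap stored in a plain array, with a counter incremented at every comparison. It illustrates the logic described above and is not the actual STL implementation:
function popHeap(heap) {
  let compares = 0;
  const top = heap[0];
  heap[0] = heap[heap.length - 1]; // move the last element to the top
  heap.pop();
  let i = 0;
  for (;;) {
    const l = 2 * i + 1, r = 2 * i + 2;
    let largest = i;
    if (l < heap.length) { compares++; if (heap[l] > heap[largest]) largest = l; } // first comparison on this level
    if (r < heap.length) { compares++; if (heap[r] > heap[largest]) largest = r; } // second comparison on this level
    if (largest === i) break; // heap condition holds
    [heap[i], heap[largest]] = [heap[largest], heap[i]];
    i = largest;
  }
  return { top, compares }; // ~2 comparisons per level => ~2 * log(N)
}
function pushHeap(heap, value) {
  let compares = 0;
  heap.push(value); // place as the last leaf
  let i = heap.length - 1;
  while (i > 0) {
    const parent = (i - 1) >> 1;
    compares++; // single comparison on this level
    if (heap[parent] >= heap[i]) break; // heap condition holds
    [heap[i], heap[parent]] = [heap[parent], heap[i]];
    i = parent;
  }
  return compares; // ~1 comparison per level => ~log(N)
}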

Related

GTK+ Overlay does not display children in the correct order

https://github.com/nintyfan/V2BlankBrowser
See the file main, in the v1UI_integration branch, specifically the init_v1_ui and real_new_tab code. I know there is a large amount of code, so I do not paste it here.
The problem is that when trying to add more than two children to a GtkOverlay, one of them always ends up at index 0 and I cannot reorder it with gtk_overlay_reorder_overlay. I tried a different approach using three overlays (one root and two children) and it worked, but the problem was that I could not interact with the items, even when setting gtk_overlay_set_overlay_pass_through on the children. What am I doing wrong?

Normalization of a database when a row is an element of a tree

I have a MySQL table that represents an element which is a node of a tree. The element has an ID and a parent_id field, which refers to another element in the same table (it can be null for top-level elements). I wish to be able to sort these elements, both locally (among elements which have the same parent) and globally (I want to flatten the tree and present it as a list). I have given each element a local_sorting_key, which can sort the elements within a parent node, but I want the ability to recursively sort the whole tree.
What I've done is add a new field, global_sorting_key, which is a concatenation of the parent's global_sorting_key (if the element has a parent) and the local_sorting_key. This allows me to recursively set a sorting key for each element, and to easily fetch the whole table and use SQL to sort it. However, this solution is not normalized, as changes to the local_sorting_key do not produce changes in the global_sorting_key, leading to internal inconsistencies.
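To make the scheme concrete, here is a minimal sketch of the concatenation described above, computed in JavaScript over rows loaded from the table (the separator and sample rows are made up for illustration):
const rows = [
  { id: 1, parent_id: null, local_sorting_key: "a" },
  { id: 2, parent_id: 1, local_sorting_key: "b" },
  { id: 3, parent_id: null, local_sorting_key: "b" },
];
const byId = new Map(rows.map(r => [r.id, r]));
function globalSortingKey(row) {
  // Recursively prepend each ancestor's key; top-level elements have no parent.
  if (row.parent_id == null) return row.local_sorting_key;
  return globalSortingKey(byId.get(row.parent_id)) + "." + row.local_sorting_key;
}
rows.sort((a, b) => globalSortingKey(a).localeCompare(globalSortingKey(b)));
// -> ids 1, 2, 3: each parent is followed by its children, i.e. the flattened tree.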
How do I solve my problem in a simple way, while also maintaining 3NF?

How can I access an element by using its DOM hierarchy (parent element)?

I want to access an element using the DOM hierarchy (node structure), through its parent nodes. I am trying to find the DOM hierarchy through Firebug; I want something like <parent_node1>.<child_node1>.<child_node2> to access an element (not document.getElementById or getElementsByName).
I want to automate a scenario like this: I have column headers and corresponding values, and I want to test whether the values present under each column header are correct...
I am thinking of using the DOM as a method of automating this case... But how can I find the DOM hierarchy?
What I see through Inspect Element in Firebug is something like a list of events and elements, and it does not look like a hierarchical node structure... Can somebody help in this regard, please?
As discussed, you probably mean the DOM Element properties like element.childNodes, element.firstChild or similar.
Have a look at the DOM Element property reference over at JavaScriptKit; you'll get a good overview there of how to access the hierarchy.
var currentTD = document.getElementsByTagName("td")[0];
var currentTable = document.getElementsByTagName("table")[0];
currentTD.parentNode // contains the TR element the TD resides in.
currentTable.childNodes // contains THEAD TBODY and TFOOT if present.
DOM tables even have additional properties, like a rows collection and a cells collection.
A word of caution: beware that these collections are live collections, so iterating over them while accessing collection.length in each iteration can be really slow, because the DOM has to be queried each time to get the length.
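For example, caching the length once avoids re-querying the live collection on every pass:
const rows = currentTable.rows; // live HTMLCollection
for (let i = 0, n = rows.length; i < n; i++) { // read .length once, not on every iteration
  // work with rows[i] here
}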
document.getElementById and document.getElementsByTagName are using the DOM. They take an object within the DOM (specifically the document object, though getElementsByTagName can also be called on elements) and return a single element or a collection of zero or more elements, respectively. That's a DOM operation. From there you can do other DOM operations on the results, like getting children, parents or siblings, changing values, etc.
All DOM operations come down to:
Take a starting point. This is often document itself, though the first thing we do is so often a call to document.getElementById or document.getElementsByTagName that we could really consider the result of that call the starting point.
Find the element or elements we are interested in, relative to the starting point, whether through startingPoint.getElementById* or startingPoint.getElementsByTagName, perhaps combined with some test (e.g. only working on those with a particular class name, or only on those that have children of particular types, etc.).
Read and/or change certain values, add new child nodes and/or delete nodes.
In a case like yours, the starting point will be one or more tables found by document.getElementById(someID), document.getElementById(someID).getElementsByTagName('table')[0], or similar. From that table, myTable.getElementsByTagName('th') will get you the column headings. Depending on the structure and what you are doing with it, you could just select the corresponding elements from myTable.getElementsByTagName('td'), or go through each row and then work on curRow.getElementsByTagName('td').
You could also just use firstChild, childNodes, etc., though it's normally more convenient to have the elements you don't care about filtered out by tag name.
*Since there can only be one element with a given ID in a document, getElementById would return the same result wherever it was called from; in the standard DOM it is defined only on document, so we just call it there. If you want to act only when a found element is a descendant of your current element, you can check that by walking up its parentNode chain.
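Putting the pieces together, a rough sketch of the scenario in the question might look like this (the table ID "results" is a made-up example):
const table = document.getElementById("results"); // hypothetical ID
const headers = table.getElementsByTagName("th");
const rows = table.getElementsByTagName("tr");
for (let r = 1; r < rows.length; r++) { // skip the header row
  const cells = rows[r].getElementsByTagName("td");
  for (let c = 0; c < headers.length && c < cells.length; c++) {
    // compare each cell's value against its column header here
    console.log(headers[c].textContent, "=>", cells[c].textContent);
  }
}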

What happens when I say arrayCollection.addItemAt(object, 0)?

Let's say I have an ArrayCollection filled with some elements. If I say
myArrayCollection.addItemAt(object, 0);
what exactly happens here? Do all the elements get shifted rightwards, or does the element at position 0 get replaced with the new one?
To summarize: the reason there are two different methods, addItemAt() and setItemAt(), is that one of them adds a new item (shifting the existing ones rightwards, not replacing any of them), and the other sets/overwrites the item at an existing index.
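ArrayCollection is Flex/ActionScript, but a plain JavaScript array shows the same two behaviours as a rough analogue:
const items = ["a", "b", "c"];
items.splice(0, 0, "x"); // like addItemAt(x, 0): inserts and shifts right -> ["x", "a", "b", "c"]
items[0] = "y";          // like setItemAt(y, 0): overwrites index 0      -> ["y", "a", "b", "c"]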
For more info, check out the ArrayCollection documentation.

Why do browsers match CSS selectors from right to left?

CSS Selectors are matched by browser engines from right to left. So they first find the children and then check their parents to see if they match the rest of the parts of the rule.
Why is this?
Is it just because the spec says?
Does it affect the eventual layout if it was evaluated from left to right?
To me the simplest way to do it would be to use the selectors with the least number of elements. So IDs first (as they should only return one element). Then maybe classes, or an element that has the fewest number of nodes; e.g. there may only be one span on the page, so go directly to that node with any rule that references a span.
Here are some links backing up my claims
http://code.google.com/speed/page-speed/docs/rendering.html
https://developer.mozilla.org/en/Writing_Efficient_CSS
It sounds like it is done this way to avoid having to look at all the children of a parent (which could be many), rather than all the parents of a child (of which there is exactly one). Even if the DOM is deep, RTL matching would only look at one node per level, rather than several. Is it easier/faster to evaluate CSS selectors LTR or RTL?
Keep in mind that when a browser is doing selector matching it has one element (the one it's trying to determine style for) and all your rules and their selectors and it needs to find which rules match the element. This is different from the usual jQuery thing, say, where you only have one selector and you need to find all the elements that match that selector.
If you only had one selector and only one element to compare against that selector, then left-to-right makes more sense in some cases. But that's decidedly not the browser's situation. The browser is trying to render Gmail or whatever and has the one <span> it's trying to style and the 10,000+ rules Gmail puts in its stylesheet (I'm not making that number up).
In particular, in the situation the browser is in, most of the selectors it's considering don't match the element in question. So the problem becomes one of deciding that a selector doesn't match as fast as possible; if that requires a bit of extra work in the cases that do match, you still win, due to all the work you save in the cases that don't match.
If you start by just matching the rightmost part of the selector against your element, then chances are it won't match and you're done. If it does match, you have to do more work, but only proportional to your tree depth, which is not that big in most cases.
On the other hand, if you start by matching the leftmost part of the selector... what do you match it against? You have to start walking the DOM, looking for nodes that might match it. Just discovering that there's nothing matching that leftmost part might take a while.
So browsers match from the right; it gives an obvious starting point and lets you get rid of most of the candidate selectors very quickly. You can see some data at http://groups.google.com/group/mozilla.dev.tech.layout/browse_thread/thread/b185e455a0b3562a/7db34de545c17665 (though the notation is confusing), but the upshot is that for Gmail in particular two years ago, for 70% of the (rule, element) pairs you could decide that the rule does not match after just examining the tag/class/id parts of the rightmost selector for the rule. The corresponding number for Mozilla's pageload performance test suite was 72%. So it's really worth trying to get rid of those 2/3 of all rules as fast as you can and then only worry about matching the remaining 1/3.
Note also that there are other optimizations browsers already do to avoid even trying to match rules that definitely won't match. For example, if the rightmost selector has an id and that id doesn't match the element's id, then there will be no attempt to match that selector against that element at all in Gecko: the set of "selectors with IDs" that are attempted comes from a hashtable lookup on the element's ID. So this is 70% of the rules which have a pretty good chance of matching that still don't match after considering just the tag/class/id of the rightmost selector.
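As an illustration of the idea, here is a toy sketch (not any engine's real code) of right-to-left matching for a descendant selector such as #menu ul li a, represented as an array of simple steps:
function matchesCompound(el, step) {
  // step is a made-up shape like { tag: "a" }, { id: "menu" } or { cls: "item" }
  if (step.tag && el.tagName.toLowerCase() !== step.tag) return false;
  if (step.id && el.id !== step.id) return false;
  if (step.cls && !el.classList.contains(step.cls)) return false;
  return true;
}
function matchesSelector(el, steps) {
  // Try the rightmost compound first; most elements fail here immediately.
  if (!matchesCompound(el, steps[steps.length - 1])) return false;
  // Then walk up the ancestors looking for the remaining parts, right to left.
  let i = steps.length - 2;
  for (let node = el.parentElement; node && i >= 0; node = node.parentElement) {
    if (matchesCompound(node, steps[i])) i--;
  }
  return i < 0;
}
// "#menu ul li a" against the first link on the page:
const firstLink = document.getElementsByTagName("a")[0];
if (firstLink) {
  console.log(matchesSelector(firstLink, [{ id: "menu" }, { tag: "ul" }, { tag: "li" }, { tag: "a" }]));
}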
Right-to-left parsing, also called bottom-up parsing, is actually efficient for the browser.
Consider the following:
#menu ul li a { color: #00f; }
The browser first checks for a, then li, then ul, and then #menu.
This is because, as the browser is scanning the page, it just needs to look at the current element/node and all the previous nodes/elements it has already scanned.
The thing to note is that the browser starts processing the moment it gets a complete tag/node, and needn't wait for the whole page, except when it finds a script, in which case it temporarily pauses, completes execution of the script, and then goes on.
If it did it the other way round, it would be inefficient, because the browser would find the element it was scanning on the first check, but would then be forced to continue looking through the document for all the additional selectors. For this the browser would need the entire HTML, and so might need to scan the whole page before it starts CSS painting.
This is contrary to how most libraries parse the DOM. There, the DOM is already constructed, and there is no need to scan the entire page; the library just finds the first element and then goes on matching others inside it.
It allows for cascading from the more specific to the less specific. It also allows a short circuit in application: if the more specific rule applies in all aspects that the parent rule applies to, all parent rules are ignored. If there are other bits in the parent, they are applied.
If you went the other way around, you would format according to the parent and then overwrite every time the child had something different. In the long run, this is a lot more work than ignoring items in rules that are already taken care of.