kcachegrind: How to draw the full call graph?

I like kcachegrind's call graph. But I have been unable to make it graph the full call graph.
I would imagine I just had to set:
Graph > Caller Depth > Unlimited
Graph > Callee Depth > Unlimited
Graph > Min. Node Cost > No Minimum
Graph > Min. Call Cost > No Minimum
1 and 2 are easy. 3 is in the menu, but grayed out. 4 is not in the menu.
Is there a way I can make it show the full call graph graphically?

The answer for that lies in the source code (line 2794):
a = addNodeLimitAction(m, tr("No Minimum"), 0.0);
// Unlimited node cost easily produces huge graphs such that 'dot'
// would need a long time to layout. For responsiveness, we only allow
// for unlimited node cost if a caller and callee depth limit is set.
a->setEnabled((_maxCallerDepth>=0) && (_maxCalleeDepth>=0));
So you have to set the caller and callee depth options to values other than 'Unlimited' before 'No Minimum' becomes available.


Function that will not return 0

I am writing a formula to use as a decay multiplier on a given value.
The problem is the following: I have a processing window of, let's say, 10 days, and this window is recomputed anew every day. I need to decay a certain parameter by a factor reflecting the number of days that an id is present. Currently what I do is (previousWinSize - (start of the current window - start of the previous window)) / previousWinSize.
In this case, if my previous window size is 10 and the difference in the days of processing is two, (10-2)/10 gives me 0.8 to multiply my variable by, decaying 0.2 of it.
However, if I have a 3-day window and again 2 days of difference, (3-2)/3 gives a value close to 0, which cuts more than I would like.
I am looking for a formula that would scale better when the numbers are small and would not produce a huge decay factor.
Thank you in advance.
I recommend making use of a sigmoid function, e.g. f(x) = 1 / (1 + exp(-a * (x - b))).
You can take the output of your function, i.e. a number between 0 and 1 based on the difference in days of processing, and feed it into the sigmoid. If you set up the a (slope) and b (inflection point) parameters properly you can, for example, ensure that the lowest decay multiplier you get is ~0.5 when your original equation returns a number close to 0.
I've graphed the example I stated above here:
https://www.desmos.com/calculator/nqemuexjhg
(This is based on: https://www.desmos.com/calculator/rna4aqta0c)
I think you do have two edge cases with this method, though. When your equation returns 0, the sigmoid isn't going to give you exactly 0.5 (which you may not even want to begin with); you'll end up getting something that's close to 0.5. In this scenario, what you may start to see is your values drifting if you keep applying the sigmoid. The same is true when your equation returns 1: after putting it through the sigmoid you won't get 1, you'll get something close to 1.
What I think I'd do in such a scenario is have some sort of check before the sigmoid gets applied
e.g.
if(x == 0)
y = 0;
else if(x == 1)
y = 1;
else
y = sigmoid(x);
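As a minimal sketch of the whole idea in R (the slope a and inflection point b below are illustrative values, not the ones from the Desmos graphs):
# Sigmoid with slope a and inflection point b; these defaults are only examples
sigmoid <- function(x, a = 4, b = 0) 1 / (1 + exp(-a * (x - b)))
decay_multiplier <- function(prev_win, day_diff) {
  x <- (prev_win - day_diff) / prev_win  # the original formula, a value between 0 and 1
  if (x == 0) return(0)                  # edge case: keep an exact 0
  if (x == 1) return(1)                  # edge case: keep an exact 1
  sigmoid(x)
}
decay_multiplier(10, 2)  # ~0.96 instead of 0.8
decay_multiplier(3, 2)   # ~0.79 instead of ~0.33, so small windows are cut much less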
Sources / Possible further reading:
https://en.wikipedia.org/wiki/Sigmoid_function

How to use the subset parameter of igraph's maximal.cliques function?

I have a large graph and I would like to find the maximal clique involving a pair of vertices. I thought that the subset argument to igraph's maximal.cliques function would do this, but either I'm using it wrong or it does something completely different. I've spent a fair amount of time searching the web without luck.
Here's a minimal example showing the problem:
> library(igraph)
> packageVersion('igraph')
[1] ‘1.0.1’
> g = graph.empty(n=10, directed=FALSE)
> g = add.edges(g, c(1, 2))
> str(g)
IGRAPH U--- 10 1 --
+ edge:
[1] 1--2
> # This correctly returns a clique.
> maximal.cliques(g, min=2)
[[1]]
+ 2/10 vertices:
[1] 2 1
> # These don't return anything!
> maximal.cliques(g, min=2, subset=1)
list()
> maximal.cliques(g, min=2, subset=c(1, 2))
list()
The subset argument is not for calculating the maximal cliques on a subset of the graph; it simply restricts the set of vertices that are used as starting points in the course of the Bron-Kerbosch algorithm when finding maximal cliques. The Bron-Kerbosch algorithm itself still searches the entire graph and is allowed to add or remove vertices from the current set that it considers as it pleases.
The only role of the subset argument is that it allows you to parallelize the maximal clique computation on large graphs by partitioning the vertex set of the graph into a number of subsets and then running maximal.cliques on multiple CPUs or CPU cores with different subsets. It is not guaranteed that a maximal clique will be found by a call whose starting subset includes any or all of its vertices; for instance, on my machine, the maximal clique 1--2 is not found with subset=c(1, 2) (as you observed), yet it is found if I use a starting subset consisting of vertex 9 only:
> maximal.cliques(g, subset=c(9))
[[1]]
+ 2/10 vertices:
[1] 2 1
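As a rough sketch of that parallelization pattern (the chunk count and core count are arbitrary, and a 10-vertex toy graph obviously doesn't need it):
library(parallel)
ids <- seq_len(vcount(g))
chunks <- split(ids, cut(ids, 4, labels = FALSE))  # partition the vertex ids into 4 subsets
res <- mclapply(chunks, function(vs) max_cliques(g, min = 2, subset = vs), mc.cores = 4)
cliques <- do.call(c, res)  # combine the per-subset results into one list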
If you want to search for maximal cliques in a subgraph of the original graph, use induced_subgraph first, followed by max_cliques.
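For example, to search only within the subgraph induced by vertices 1 and 2 of the graph g above:
g_sub <- induced_subgraph(g, vids = c(1, 2))  # keep only vertices 1 and 2 and the edges between them
max_cliques(g_sub, min = 2)                   # finds the clique 1--2 inside that subgraph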

Finding the smallest distance in a set of points from the origin

I am to find the smallest distance between a given set of points and the origin. I have a matrix with 2 columns and 10 rows. Each row represents the coordinates of one point, and I would like to calculate the distance between each point and the origin and find the smallest of these distances. I would also like to determine which point gives this smallest distance.
In Octave, I calculate this distance by using norm and for each point in my set, I have a distance associated with them and the smallest distance is obviously the one I'm looking for. However, the code I wrote below isn't working the way it should.
function [dist,koor] = bonus4(S)
S= [-6.8667, -44.7967;
-38.0136, -35.5284;
14.4552, -27.1413;
8.4996, 31.7294;
-17.2183, 28.4815;
-37.5100, 14.1941;
-4.2664, -24.4428;
-18.6655, 26.9427;
-15.8828, 18.0170;
17.8440, -22.9164];
for i=1:size(S)
L=norm(S(i, :))
dist=norm(S(9, :));
koor=S(9, :) ;
end
i = 9 is the correct answer, but I need Octave to find that number for me. How do I tell Octave that this is the number I want? Specifically:
dist=norm(S(9, :));
koor=S(9, :);
I cannot use any packages. I found the geometry package online, but I am supposed to solve the task without additional packages.
I'll work off of your original code. Firstly, you want to compute the norm of all of the points and store them as individual elements in an array. Your current code isn't doing that; it overwrites the variable L, which holds only a single value, at each iteration of the loop.
You'll want to make L an array and store the norms at each iteration of the loop. Once you do this, you'll want to find the location as well as the minimum distance itself. That can be done with one call to min where the first output gives you the minimum distance and the second output gives you the location of the minimum. You can use the second output to slice into your S array to retrieve the actual point.
Last but not least, you need to define S first before calling this function. You are defining S inside the function and that will probably give you unintended results if you want to change the input into this function at each invocation. Therefore, define S first, then call the function:
S= [-6.8667, -44.7967;
-38.0136, -35.5284;
14.4552, -27.1413;
8.4996, 31.7294;
-17.2183, 28.4815;
-37.5100, 14.1941;
-4.2664, -24.4428;
-18.6655, 26.9427;
-15.8828, 18.0170;
17.8440, -22.9164];
function [dist,koor] = bonus4(S)
%// New - Create an array to store the distances
L = zeros(size(S,1), 1);
%// Change to iterate over number of rows
for i=1:size(S,1)
L(i)=norm(S(i, :)); %// Change
end
[dist,ind] = min(L); %// Find the minimum distance
koor = S(ind,:); %// Get the actual point
end
Or, make sure you save the above function in a file called bonus4.m, then do this in the Octave command prompt:
octave:1> S= [-6.8667, -44.7967;
> -38.0136, -35.5284;
> 14.4552, -27.1413;
> 8.4996, 31.7294;
> -17.2183, 28.4815;
> -37.5100, 14.1941;
> -4.2664, -24.4428;
> -18.6655, 26.9427;
> -15.8828, 18.0170;
> 17.8440, -22.9164];
octave:2> [dist,koor] = bonus4(S);
Though this code works, I'd argue that it's slow because you're using a for loop. A faster way would be to do this completely vectorized. Because using norm for matrices is different than with vectors, you'll have to compute the distance yourself. Because you are measuring the distance from the origin, you can simply square each of the columns individually then add the columns of each row.
Therefore, you can just do this:
S= [-6.8667, -44.7967;
-38.0136, -35.5284;
14.4552, -27.1413;
8.4996, 31.7294;
-17.2183, 28.4815;
-37.5100, 14.1941;
-4.2664, -24.4428;
-18.6655, 26.9427;
-15.8828, 18.0170;
17.8440, -22.9164];
function [dist,koor] = bonus4(S)
%// New - Computes the norm of each point
L = sqrt(sum(S.^2, 2));
[dist,ind] = min(L); %// Find the minimum distance
koor = S(ind,:); %// Get the actual point
end
The function sum can be used to sum over a dimension independently. As such, by doing S.^2, you are squaring each term in the points matrix, then by using sum with the second parameter as 2, you are summing over all of the columns for each row. Taking the square root of this result computes the distance of each point to the origin, exactly the way the for loop functions. However, this (at least to me) is more readable and I daresay faster for larger sizes of points.

Temperature Scale in SA

First, this is not a question about temperature iteration counts or automatically optimized scheduling. It's about how the data magnitude relates to the scaling of the exponentiation.
I'm using the classic formula:
if(delta < 0 || exp(-delta/tK) > random()) { // new state }
The input to the exp function is negative because delta/tK is positive, so the exp result is always less than 1. The random function also returns a value in the 0 to 1 range.
My test data is in the range 1 to 20, and the delta values are below 20. I pick a start temperature equal to the initial computed temperature of the system and linearly ramp down to 1.
In order to get SA to work, I have to scale tK. The working version uses:
exp(-delta/(tK * .001)) > random()
So how does the magnitude of tK relate to the magnitude of delta? I found the scaling factor by trial and error, and I don't understand why it's needed. To my understanding, as long as delta > tK and the step size and number of iterations are reasonable, it should work. In my test case, if I leave out the extra scale the temperature of the system does not decrease.
The various online sources I've looked at say nothing about working with real data. Sometimes they include the Boltzmann constant as a scale, but since I'm not simulating a physical particle system that doesn't help. Examples (typically with pseudocode) use values like 100 or 1000000.
So what am I missing? Is scaling another value that I must set by trial and error? It's bugging me because I don't just want to get this test case running, I want to understand the algorithm, and magic constants mean I don't know what's going on.
Classical SA has 2 parameters: startingTemperate and cooldownSchedule (= what you call scaling).
Configuring 2+ parameters is annoying, so in OptaPlanner's implementation, I automatically calculate the cooldownSchedule based on the timeGradient (which is a double going from 0.0 to 1.0 during the solver time). This works well. As a guideline for the startingTemperature, I use the maximum score diff of a single move. For more information, see the docs.
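To make the relationship between the magnitudes of delta and tK concrete, here is a small R sketch of the classic acceptance probability, using made-up numbers on the scale described in the question (delta around 10):
# Probability that an uphill (worsening) move of size delta is accepted at temperature tK
p_accept <- function(delta, tK) exp(-delta / tK)
p_accept(10, 10)          # ~0.37: with tK on the same scale as delta, bad moves are accepted often
p_accept(10, 10 * 0.001)  # ~0: with the extra 0.001 scale, bad moves are essentially never accepted
p_accept(10, 1)           # ~4.5e-05: at the end of a linear ramp down to tK = 1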

Using google maps API to find average speed at a location

I am trying to get the current traffic conditions at a particular location. The GTrafficOverlay object mentioned here only provides an overlay on an existing map.
Does anyone know how I can get this data from Google using their API?
It is only theoretical, but there is perhaps a way to extract this data using the Distance Matrix API.
Method
1)
Make a topological road network, with nodes and edges, something like this:
Each edge will have four attributes: [EDGE_NUMBER; EDGE_SPEED; EDGE_TIME; EDGE_LENGTH]
You can use OpenStreetMap data to create this network.
At the beginning each edge will have the same road speed, for example 50 km/h.
You need to use only the drivable links and delete the other edges. Also take into account that some roads are one-way.
2)
Randomly choose two nodes that are not closer than 5 or 10 km.
Use the Dijkstra shortest-path algorithm to calculate the shortest path between these two nodes (the cost = EDGE_TIME). Use your topological network to do that. The output will look like:
NODE = [NODE_23,NODE_44] PATH = [EDGE_3,EDGE_130,EDGE_49,EDGE_39]
Calculate the time needed to drive between the two nodes with the Distance Matrix API.
Preallocate a matrix A of size N x number_of_edges filled with zeros.
Preallocate a vector B of size N x 1 filled with zeros.
In the first row of matrix A, fill each column (corresponding to each edge) with the length of the edge if that edge is in the path:
[col_1,col_2,col_3,...,col_39,...,col_49,...,col_130]
[0, 0, len_3,...,len_39,...,len_49,...,len_130] %row 1
In the first row of B, put the time calculated with the Distance Matrix API.
Then select two new nodes that were not used in the first path and repeat the operation until there are no nodes left (so you will fill row 2, row 3, ...).
Now you can solve the linear equation system Ax = B, where each entry of x is the reciprocal of an edge speed (speed = 1/x); a small numeric sketch is given after the method steps below.
Assign the newly calculated speeds to the edges.
3)
Repeat step 2) until your calculated speeds start to converge.
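As a small numeric sketch of the linear system from step 2), in R with made-up edge lengths and travel times:
# Hypothetical example: 3 sampled shortest paths over a network with 3 edges
# A[i, j] = length of edge j (in metres) if edge j lies on path i, otherwise 0
A <- rbind(c(1200,  800,    0),
           c(   0,  800,  600),
           c(1200,    0,  600))
B <- c(150, 100, 130)   # travel time of each path in seconds, from the Distance Matrix API
x <- solve(A, B)        # x[j] is 1/speed of edge j, in seconds per metre
speed_kmh <- 3.6 / x    # roughly 48, 48 and 54 km/h for the three edges
# With more sampled paths than edges, A is not square and a least-squares solve
# such as qr.solve(A, B) can be used instead.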
Comment
I'm not sure that the calculated speeds will converge; it will be interesting to test the method. I will try to do that if I get some time.
The Distance Matrix API doesn't provide a travel time more precise than 1 minute, which is why the distance between each pair of nodes needs to be at least 5 or 10 km or more.
Also, this method fails to respect Google's terms of service.
Google does not make a public API available for this data.
Yahoo has a feed (example) with traffic conditions -- construction, accidents, and such. A write-up on how to access it is here.
If you want actual road speeds, you will probably need to work with a commercial provider.