Please order the functions below by growth rate:
n ^ 1.5
n ^ 0.5 + log n
n log ^ 2 n
n log ( n ^ 2 )
n log log n
n ^ 2 + log n
n log n
n
PS: Ordering by growth rate means looking at which function will eventually be higher in value than the others as n gets larger and larger.
PS2: I have ordered most of the functions:
n , n log log n, n log n, n log^2 n, n log ( n ^ 2 ), n ^ 1.5
I just do not know how to order:
n ^ 2 + log n,
n ^ 0.5 + log n,
these two functions.
Can anyone help me?
Thank you
You can figure this out fairly easily by graphing the functions and seeing which ones get larger (find a graphing calculator, check out Maxima, or try graphing the functions on Wolfram Alpha). Or, of course, you can just pick some large value of n and compare the various functions, but graphs can give a bit better picture.
The key to the answer you seek is that when you sum two functions, their combined "growth rate" is going to be exactly that of the one with the higher growth rate of the two. So, you now know the growth rates of these two functions, since you appear (from knowing the correct ordering of all the others) to know the proper ordering of the growth rates that are in play here.
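For instance, n log n + n grows at the same rate as n log n, because the n term becomes negligible for large n; in asymptotic terms, n log n + n = Θ(n log n).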
Plugging in a large number is not the correct way to approach this!
Since you have the order of growth, you can use the following rules: http://faculty.ksu.edu.sa/Alsalih/CSC311_10_11_01/3.3_GrowthofFunctionsAndAsymptoticNotations.pdf
In all of those cases, you're dealing with pairs of functions that themselves have different growth rates.
With that in mind, only the larger one really matters, since it will be most dominant even with a sum. So in each of those function sums, which is the bigger one and how does it compare to the other ones on your larger list?
If you need to prove it mathematically, you could try something like this.
If you have two functions, e.g.:
f1(n) = n log n
f2(n) = n
You can simply find the limit of f3(n) = f1(n)/f2(n) when n tends to infinity.
If the result is zero, then f2(n) has a greater growth rate than f1(n).
On the other hand, if the result is infinity then f1(n) has a greater growth rate than f2(n).
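For the example above, f3(n) = (n log n) / n = log n, which tends to infinity as n grows, so n log n has the greater growth rate. The same test works for the two sums you are stuck on.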
n ^ 0.5 (or n ^ (1/2)) is the square root of n, so it grows more slowly than n ^ 2.
Let's say n = 4 (taking log to be base 10); then we get:
n ^ 2 + log n = 16.6020599913
n ^ 1.5 = 8
n = 4
n log ( n ^ 2 ) = 4.81
n ^ 0.5 + log n = 2.60205999133
n log n = 2.4
n log ^ 2 n = 4 * (log 4) ^ 2 ≈ 1.45
n log log n = -0.8
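As noted above, plugging in a single small value can be misleading (with base-10 logs, n log log n even comes out negative at n = 4). If you still want to eyeball the trend, here is a quick Python sketch (an illustration only, assuming base-10 logs to match the numbers above) evaluated at a much larger n:

import math

n = 10**6                 # a much larger sample point than 4
log = math.log10          # base-10 log, to match the hand calculations above
print("n ^ 2 + log n   =", n**2 + log(n))
print("n ^ 1.5         =", n**1.5)
print("n log ( n ^ 2 ) =", n * log(n**2))
print("n log ^ 2 n     =", n * log(n)**2)
print("n log n         =", n * log(n))
print("n log log n     =", n * log(log(n)))
print("n ^ 0.5 + log n =", n**0.5 + log(n))
print("n               =", n)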
I am dealing with a table of decimal values that represent binary numbers. My goal is to count the number of times Bit(0), Bit(1),... Bit(n) are high.
For example, if a table entry is 5, this converts to '101', which can be done using the BIN() function.
What I would like to do is increment variables 'bit0Count' and 'bit2Count'.
I have looked into the BIT_COUNT() function however this would only return 2 for the above example.
Any insight would be greatly appreciated.
SELECT SUM(n & (1<<2) > 0) AS bit2Count FROM ...
The & operator is a bitwise AND.
1<<2 is the number 1, which has only one bit set, left-shifted by two places, so it is binary 100. Bitwise ANDing that against your column n gives either binary 100 or binary 000.
Testing that with > 0 returns either 1 or 0, since in MySQL, boolean results are literally the integers 1 for true and 0 for false (note this is not standard in other implementations of SQL).
Then you can SUM() these 1's and 0's to get a count of the occurrences where the bit was set.
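If it helps to see what that expression computes row by row, here is a quick Python sketch of the same bitwise test (the sample values are made up):

values = [5, 4, 7, 2]                                 # 101, 100, 111, 010 in binary
bit2_count = sum((n & (1 << 2)) > 0 for n in values)  # same test as the SQL expression above
print(bit2_count)                                     # 3: bit 2 is set in 5, 4 and 7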
To tell if bit N is set, use 1 << N to create a mask for that bit and then use bitwise AND to test it. So (column & (1 << N)) != 0 will be 1 if bit N is set, 0 if it's not set.
To total these across rows, use the SUM() aggregation function.
If you need to do this frequently, you could define a stored function:
CREATE FUNCTION bit_set(val INT UNSIGNED, which TINYINT) RETURNS TINYINT DETERMINISTIC
RETURN (val & (1 << which)) != 0;
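You could then total a given bit across rows in the same way as above, e.g. SELECT SUM(bit_set(n, 2)) FROM your_table (your_table being a placeholder for your table name).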
I understand that in the worst case the number of guesses needed for a binary search is lg(n) + 1, where n is the number of elements you're searching. I understand this completely, but it obviously only gives you a nice number if n is a power of 2. If n is not a power of 2, I'm told you simply go up to the next power of 2. So, for example, 5 would go up to 8, and lg(8) + 1 = 4. But if you were dealing with 5 elements, wouldn't the worst case be 3 guesses? What am I missing?
Thanks!
The actual formula is floor( log(n) + 1 ) (using a base-2 log). Thus, floor( log(5) + 1 ) = floor( 2.32 + 1 ) = floor( 3.32 ) = 3.
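If you want to convince yourself, here is a small Python sketch (an illustration, not a proof) that counts the worst-case guesses by repeatedly halving the search range:

def worst_case_guesses(n):
    # Count how many times a range of n elements can be halved before it is empty.
    guesses = 0
    while n > 0:
        n //= 2        # discard the half that cannot contain the target
        guesses += 1   # one comparison ("guess") per halving
    return guesses

print(worst_case_guesses(5))   # 3
print(worst_case_guesses(8))   # 4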
Here is a short piece of code I got from "Introduction to parallel computation" on Udacity. The indexing in this code confuses me.
__global__ void use_shared_memory_GPU(float *array)
{
    int i, index = threadIdx.x;
    float average, sum = 0.0f;
    __shared__ float sh_arr[128];

    sh_arr[index] = array[index];
    __syncthreads();

    // Now it begins to confuse me
    for (i = 0; i < index; i++) { sum += sh_arr[i]; }  // what is the index here?
    average = sum / (index + 1.0f);                    // what is the index here?
                                                       // why add 1.0f?
    if (array[index] > average) { array[index] = average; }
}
The index is created as the ID for each thread, which I can understand. But when the average is calculated, index is used as if it were the number of threads. The first use of index is as a parallel thread ID into the array, while the second use looks like ordinary C. I repeated this procedure in my own program, but I could not reproduce the result.
What is the trick behind index? When I print it in cuda-gdb, it just shows 0. Is there a detailed explanation for this?
One more point: when calculating the average, why does it add 1.0f?
This code is computing prefix sums. A prefix sum for an array of values looks like this:
array:        1   2    4     3    5   7
prefix-sums:  1   3    7     10   15  22
averages:     1   1.5  2.33  2.5  3   3.67
index:        0   1    2     3    4   5
Each prefix sum is the sum of elements in the value array up to that position.
The code is also computing the "average" which is the prefix sum divided by the number of elements used to compute the sum.
In the code you have shown, each thread is computing a different element of the prefix-sum array (and a separate average).
Therefore, to compute the average in each thread, we take the prefix sum and divide by the index, but we must add 1 to the index, since index + 1 is the number of elements that went into that thread's prefix sum (and average). That is why the code adds 1.0f.
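To make the indexing concrete, here is a sequential Python sketch (just an illustration, not CUDA) of the prefix-sum/average idea in the table above, one loop iteration per value of index. (Note that the posted kernel's loop runs i < index, so its sum leaves out the element at index itself, but the idea is the same.)

array = [1, 2, 4, 3, 5, 7]                 # same values as the table above
for index in range(len(array)):            # in the kernel, each thread handles one index
    prefix_sum = sum(array[:index + 1])    # sum of elements up to and including this position
    average = prefix_sum / (index + 1)     # index + 1 elements contributed to this sum
    print(index, prefix_sum, round(average, 2))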
I want to evaluate the acceptance rate of a proposal, where a proposal can receive two types of votes: namely positive and negative.
So the simplest function that comes to mind is as follows:
p / (p + n + \epsilon)
However I would like to come up with a more sophisticated function which would satisfy the following two properties.
The ratio of positive votes to the total amount of votes should always take precedence. So where p1 = 5, n1 = 0, p2 = 99, n2 = 1 the function should calculate a higher acceptance rate for the first one.
When the ratios are equal, the function should return a higher acceptance rate for the one with the higher number of total votes. So in the following case where p1 = 1000, n1 = 0, p2 = 10, n2 = 0 again the first one should have a higher acceptance rate.
Another idea concerning the function could be the following:
w * [p / (p + n)] + (1 - w) * [(p + n) / maxV]
where maxV is the maximum number of votes that any proposal received and w is a real number in [0..1].
This function satisfies the second condition, but the guarantee does not extend to the first one. Finding a value of w that satisfies both could be cumbersome, so I'm looking for a better solution.
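For concreteness, here is a small Python sketch of that weighted formula (the values of w and maxV below are just illustrative assumptions):

def acceptance(p, n, max_v, w=0.8):
    # w * (ratio of positive votes) + (1 - w) * (share of the largest total vote count)
    ratio = p / (p + n) if p + n > 0 else 0.0
    volume = (p + n) / max_v
    return w * ratio + (1 - w) * volume

# The second property holds: equal ratios, more total votes wins.
print(acceptance(1000, 0, 1000), acceptance(10, 0, 1000))  # 1.0 vs 0.802
# The first property can fail with this w: the 99/1 proposal outscores the 5/0 one.
print(acceptance(5, 0, 1000), acceptance(99, 1, 1000))     # 0.801 vs 0.812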
So I have a column with different numbers and wish to categorize them by range within 30 minute intervals. So 5 would be 0-30, 697 would be 690-720, and 169 would be 150-180. I was first thinking of doing a case statement, but it doesn't look like Access 2003 supports it. Is there perhaps some sort of algorithm that could determine the range? Preferably, this would be done within the query.
Thank you.
Take the integer portion of (number / 30) using the Int function and multiply it by 30 to get your lower bound, then add 30 to that number to get your upper bound.
Examples
Int(5 / 30) * 30 = 0 * 30 = 0
Int(697 / 30) * 30 = 23 * 30 = 690
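In plain arithmetic terms the same idea looks like this (a quick Python sketch, assuming nonnegative values):

for number in (5, 697, 169):
    lower = (number // 30) * 30    # integer-divide by 30, then scale back up
    upper = lower + 30
    print(number, lower, upper)    # 5 -> 0..30, 697 -> 690..720, 169 -> 150..180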
Use \ (integer division) and * (multiplication).
5 \ 30 * 30 = 0
697 \ 30 * 30 = 690
169 \ 30 * 30 = 150
...
Let x be your column with the values you want to catalogue; then in pseudo-SQL you have:
select ((x/30)*30) as minrange,
(((x/30)+1)*30) as maxrange
from yourtable
(you should take the integer part of the division).
Hope this helps.
This is fairly straightforward. You can just use the following:
(number \ 30) * 30
This will give you the lower index of your range. It does have one problem, which is that 30, 720, 180, etc. will be returned as themselves. This means your ranges either need to be 0-29, 690-719, etc., or your caller needs to take this into account.
This assumes you are using VBA, where the '\' operator returns only the quotient of the division. See the documentation on VB operators for more.