Now I calculate the value of F from an equation. From the F that I determined, I need to find the corresponding diameter from a set of data.
The method is like this: if my calculated F is smaller than a value of F from the data, then I choose the corresponding diameter.
For example, suppose the F I calculated is 11, and the F values in the data are 8, 10, 12, 14, 16, with corresponding diameters 1, 2, 3, 4, 5.
11 is bigger than 8, the first value in the data, so we move to the next F in the data. Again, 11 is bigger than 10, so we move on.
But 11 is less than 12, so the iteration stops; we need not look further. We take the diameter corresponding to 12, which is 3.
You get the idea.
As for the set of data, here's the code. The while Fa==0 loop is the condition that I apply in order to perform this search:
while Fa==0
  load data.dat;
  diameter = data(:,1);
  F = data(:,2);
end
I'm stuck at that point.
Please help me.
Here is how I understand your problem: You have a dataset from which you get a list of values F. Now you also calculate a single value Fc, and you want to find the element Fe in the list, which satisfies the two conditions
closest to Fc
Fc < Fe
One way to achieve this is by the following
F = [1 2 3 4.5 5 6 7 8];
Fc = 4;
F = sort(F);   % sort returns the sorted array; it must be assigned back
for i = 1:length(F)
  if (Fc < F(i))
    Fe = F(i);
    break
  end
end
This gives 4.5. From this value of Fe you can find the desired diameter.
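The same lookup reads naturally in Python as well. This is a minimal sketch, assuming the F values and diameters are paired lists as in the example above (the function name is my own):

```python
def pick_diameter(Fc, F_values, diameters):
    """Return the diameter paired with the smallest F value exceeding Fc."""
    # Sort the pairs by F so the first match is the closest value above Fc.
    for F, d in sorted(zip(F_values, diameters)):
        if Fc < F:
            return d
    raise ValueError("Fc exceeds every F value in the data")

# Example from the question: Fc = 11 falls below 12, whose diameter is 3.
print(pick_diameter(11, [8, 10, 12, 14, 16], [1, 2, 3, 4, 5]))  # → 3
```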
This is my first time trying to complete work in Octave. I have attempted to use "for loops" to get the mean of each item and then subtract it to centre the results in the 25 samples of the 5 items. I get the right figures; however, I also get an out-of-bounds error (shown below). Can anyone help me, please?
error: TrialPartB: A(I,J): row index out of bounds; value 6 out of bound 5
You have populated your G_all structure with only 5 data members, but then, when you calculate the mean, you loop i=1:25. There are only 5 members, so when it gets to member 6, it fails with the 'row index out of bounds' error.
You need to limit the for loop to be just the size of the data, perhaps using rows(G_all) instead of 25 as the limit of the loop.
As rolfl already explained, you are trying to access rows 1..25, but G_all only has 5 rows.
Apart from that problem, you shouldn't calculate the mean in a for loop; use the function "mean" instead.
a=[4 1 6];
mean(a)
ans = 3.6667
If you want to remove the mean from a vector, just use "detrend":
detrend(a, 0)
ans =
0.33333 -2.66667 2.33333
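For comparison, the same mean-removal can be sketched with NumPy (my own illustration; "detrend(a, 0)" in Octave and subtracting the mean give the same result):

```python
import numpy as np

a = np.array([4.0, 1.0, 6.0])
centered = a - a.mean()          # subtract the mean, like detrend(a, 0)
print(centered)                  # → [ 0.33333333 -2.66666667  2.33333333]

# For a 2-D dataset, centre each column by its own mean; no loop is needed,
# broadcasting subtracts the row of column means from every row:
G = np.arange(12.0).reshape(4, 3)
G_centered = G - G.mean(axis=0)
```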
I need to use a for-loop in a function in order to find spring constants of all possible combinations of springs in series and parallel. I have 5 springs with data therefore I found the spring constant (K) of each in a new matrix by using polyfit to find the slope (using F=Kx).
I have created a function that does so, however it returns data not in a matrix, but as individual outputs. So instead of KP (Parallel)= [1 2 3 4 5] it says KP=1, KP=2, KP=3, etc. Because of this, only the final output is stored in my workspace. Here is the code I have for the function. Keep in mind that the reason I need to use the +2 in the for loop for b is because my original matrix K with all spring constants is ten columns, with every odd number being a 0. Ex: K=[1 0 2 0 3 0 4 0 5] --- This is because my original dataset to find K (slope) was ten columns wide.
function [KP,KS] = function_name1(K)
  L = length(K);
  c = 1;
  for a = 1:2:L
    for b = a+2:2:L
      KP = K(a) + K(b)
      KS = 1/((1/K(a)) + (1/K(b)))
    end
  end
  c = c + 1;
and then a program calling that function
[KP,KS]=function_name1(K);
What I tried: - Suppressing and unsuppressing lines of code (unsuccessful)
Any help would be greatly appreciated.
hmmm...
Your code seems workable, but you aren't dealing with things in the most practical manner.
I'd start by redimensioning K so that it makes sense, that is, so it's 5 entries wide instead of your current 10 -- you'll see why in a minute.
Then I'd size KP and KS to what you want. (I'm going to do a 5x5, as that gives all the pairs; right now it looks like you are doing some triangular thing. I wouldn't worry too much about space unless you were to do this for, say, 50,000 spring constants.)
So my code would look like this
function [KP,KS] = function_name1(K)
  L = length(K);
  KP = zeros(L);
  KS = zeros(L);   % capital L; zeros(l) with a lowercase l is an error
  for a = 1:L
    for b = 1:L
      KP(a,b) = K(a) + K(b);
      KS(a,b) = 1/((1/K(a)) + (1/K(b)));
    end
  end
end
Then, when you want the parallel combination of springs 1 and 4, KP(1,4) or KP(4,1) will do the trick.
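The same pairwise tables can be sketched in Python; this is my own illustration of the approach, with made-up spring constants:

```python
import numpy as np

K = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # hypothetical spring constants
n = len(K)

KP = np.zeros((n, n))  # parallel: constants add
KS = np.zeros((n, n))  # series: reciprocals add
for a in range(n):
    for b in range(n):
        KP[a, b] = K[a] + K[b]
        KS[a, b] = 1.0 / (1.0 / K[a] + 1.0 / K[b])

# Parallel combination of springs 1 and 4 (0-based: indices 0 and 3):
print(KP[0, 3])  # → 5.0
```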
I did a search but didn't really get any proper hits. Maybe I used incorrect terms?
What I want to ask about is an algorithm for simulating a biased die roll rather than a standard, supposedly random roll.
It wouldn't be a problem if you can't give me exact answers (maybe the explanation is lengthy?), but I would appreciate pointers to material I can read about it.
What I have in mind is, for example, to shift the bias towards the 5-6 area so that the rolls would have a higher chance of coming up 5 or 6; that's the sort of problem I'm trying to solve.
[Update]
Upon further thought and by inspecting some of the answers, I've realized that what I want to achieve is really the Roulette Wheel Selection operator that's used in genetic algorithms since having a larger sector means increasing the odds the ball will land there. Am I correct with this line of thought?
In general, if your probabilities are {p1,p2, ...,p6}, construct the following helper list:
{a1, a2, ... a5} = { p1, p1+p2, p1+p2+p3, p1+p2+p3+p4, p1+p2+p3+p4+p5}
Now get a random number X in [0,1]
If
X <= a1 choose 1 as outcome
a1 < X <= a2 choose 2 as outcome
a2 < X <= a3 choose 3 as outcome
a3 < X <= a4 choose 4 as outcome
a4 < X <= a5 choose 5 as outcome
a5 < X choose 6 as outcome
Or, more efficient pseudocode
if X > a5 then N=6
elseif X > a4 then N=5
elseif X > a3 then N=4
elseif X > a2 then N=3
elseif X > a1 then N=2
else N=1
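A minimal Python sketch of this cumulative-threshold method (the probabilities are my own example values, biased towards 5 and 6):

```python
import random
from itertools import accumulate

def biased_roll(probs, rng=random.random):
    """Roll a die whose face i+1 has probability probs[i]."""
    thresholds = list(accumulate(probs))       # a1, a2, ..., a6
    x = rng()                                  # X in [0, 1)
    for face, a in enumerate(thresholds, start=1):
        if x <= a:
            return face
    return len(probs)                          # guard against float rounding

# Bias toward 5 and 6:
probs = [0.1, 0.1, 0.1, 0.1, 0.3, 0.3]
rolls = [biased_roll(probs) for _ in range(10000)]
```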
Edit
This is equivalent to the roulette wheel selection you mention in your question update: each outcome gets a sector whose size is proportional to its probability.
Let's say the die is biased towards 3.
Instead of picking a random entry from the 6-entry array [1, 2, 3, 4, 5, 6], pick a random entry from the 8-entry array [1, 2, 3, 4, 5, 6, 3, 3].
Make a two-dimensional array of possible values and their weights. Sum up all the weights, then randomly choose a value in the range from 0 to that sum.
Now iterate through the array while keeping a running total of the weights seen so far. Once the running total exceeds your random number, return the die value at that position.
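A sketch of this running-total method in Python (the weights are my own example, with face 3 three times as likely as the others):

```python
import random

def weighted_roll(faces_and_weights, rng=random.uniform):
    """Pick a face by walking a running total of the weights."""
    total = sum(w for _, w in faces_and_weights)
    x = rng(0, total)              # random point in [0, total]
    acc = 0
    for face, weight in faces_and_weights:
        acc += weight              # accumulator of weights seen so far
        if x <= acc:
            return face
    return faces_and_weights[-1][0]

# Example: face 3 carries weight 3, every other face weight 1.
die = [(1, 1), (2, 1), (3, 3), (4, 1), (5, 1), (6, 1)]
```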
Hope this helps
Hmm. Say you want a 1/2 chance of getting a six and a 1/10 chance of getting any other face. To simulate this, you could generate a random integer n in [1, 2, ..., 10]; the outcome maps to six if n is in [6, 7, 8, 9, 10] and to n otherwise.
One way that's usually fairly easy is to start with a random number in an expanded range, and break that range up into unequal pieces.
For example, with a perfectly fair (six-sided) die, each number should come up 1/6th of the time. Let's assume you decide on round percentages: every face comes up 16 percent of the time, except that 2 comes up 20 percent of the time (5 x 16 + 20 = 100).
You could do that by generating numbers from 1 to 100. If the number is from 1 to 16, it comes out as a 1. If it's from 17 to 36, it comes out as a 2. If it's from 37 to 52, it comes out as a 3 (and the rest follow in blocks of 16 apiece).
I have a MySQL table with a field which is an unsigned tinyint (max value: 255).
A typical change in the requirements: we would need to create a new field for a bunch of records in that table, but that would be very expensive for the application (lots of changes, a lot of work).
So we are thinking to combine the new value with the old value.
Basically in an unsigned tinyint (max value: 255), we need to store:
an integer that can be 1, 2, 3 or 4
an integer that can span from 1 to 30 (limits included)
The requirement is to get and set the 'combined' value with an algorithm as easy as possible.
How would you do that?
If possible I would like not to use any binary representation.
Thanks,
Dan
You could use multiples of 32 to represent 1-4 and add the 1-30 on top.
[1,1] would be 33
[1,2] would be 34
[1,30] would be 62
[2,1] would be 65
[2,30] would be 94
[4,1] would be 129
[4,30] would be 158
This would work and be unambiguous, but in general I really think you should not resort to a hack like this. Add the column and change your code. What will you do with the next requirements change? In the end, your software will be a collection of hacks and it can't be maintained anymore.
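If you do go with the multiples-of-32 encoding despite the caveat, a minimal sketch (function names are my own):

```python
def pack(a, b):
    """Encode a in 1..4 and b in 1..30 into one byte as 32*a + b."""
    assert 1 <= a <= 4 and 1 <= b <= 30
    return 32 * a + b

def unpack(v):
    """Recover (a, b) from the packed value."""
    return v // 32, v % 32

# Matches the worked examples: [1,1] -> 33, [2,30] -> 94, [4,30] -> 158,
# and the largest value, 158, fits comfortably in an unsigned tinyint.
```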
Let's call the two values x and y.
To store the numbers, perform these steps:
Multiply x by 100.
Add y to the result of step 1.
Store the result of step 2 in the column.
Thus, if x were 3 and y were 15, you would store 315. You can decode that easily: the last two digits give y, and integer division by 100 gives x.
But because you have to fit the numbers within 255, you must choose an appropriate multiplier smaller than 100: it has to exceed 30 (so y never bleeds into x) and satisfy 4*m + 30 <= 255, so anything from 31 to 56 works.
I have an array of 10 rows by 20 columns. Each columns corresponds to a data set that cannot be fitted with any sort of continuous mathematical function (it's a series of numbers derived experimentally). I would like to calculate the integral of each column between row 4 and row 8, then store the obtained result in a new array (20 rows x 1 column).
I have tried using different scipy.integrate functions (e.g. quad, trapz, ...).
The problem is that, from what I understand, scipy.integrate must be applied to functions, and I am not sure how to convert each column of my initial array into a function. As an alternative, I thought of calculating the average of each column between row 4 and row 8, then multiplying this number by 4 (i.e. 8-4=4, the x-interval) and storing this in my final 20x1 array. The problem is... ehm... that I don't know how to calculate the average over a given range. The questions I am asking are:
Which method is more efficient/straightforward?
Can integrals be calculated over a data set like the one that I have described?
How do I calculate the average over a range of rows?
Since you know only the data points, the best choice is to use trapz (the trapezoidal approximation to the integral, based on the data points you know).
You most likely don't want to convert your data sets to functions, and with trapz you don't need to.
So if I understand correctly, you want to do something like this:
import numpy as np

# x-coordinates for data points
x = np.array([0, 0.4, 1.6, 1.9, 2, 4, 5, 9, 10])

# some random data: 3 data sets sharing the same x-coordinates
y = np.zeros([len(x), 3])
y[:, 0] = 123
y[:, 1] = 1 + x
y[:, 2] = np.cos(x / 5.)
print(y)

# compute approximations for integral(dataset, x=0..10) for datasets i=0,1,2
yi = np.trapz(y, x[:, np.newaxis], axis=0)
# what happens here: x must broadcast against the shape of y;
# newaxis adds a "virtual" axis to x, in effect saying that the
# x-coordinates are the same for each data set

# approximations of the integrals based on the datasets
# (here we also know the exact values, so print them too)
print(yi[0], 123 * 10)
print(yi[1], 10 + 10 * 10 / 2.)
print(yi[2], np.sin(10. / 5.) * 5.)
To get the sum of the entries 4 to 8 (including both ends) in each column, use
import numpy
a = numpy.arange(200).reshape(10, 20)
a[4:9].sum(axis=0)
(The first line is just to create an example array of the desired shape.)
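Tying this back to the question (the integral of each column between rows 4 and 8, and the average over that range), a sketch assuming unit spacing between rows:

```python
import numpy as np

a = np.arange(200.0).reshape(10, 20)   # example array: 10 rows x 20 columns

# np.trapz was renamed to np.trapezoid in NumPy 2.0; use whichever exists
trapezoid = getattr(np, "trapezoid", None)
if trapezoid is None:
    trapezoid = np.trapz

# Trapezoidal integral of each column over rows 4..8 (inclusive),
# giving one value per column -> shape (20,)
integrals = trapezoid(a[4:9], axis=0)

# Average of each column over the same rows:
averages = a[4:9].mean(axis=0)

# The "average times interval width" shortcut from the question
# (exactly equal to the trapezoidal result for this linear example data):
approx = averages * 4
```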