I know many languages have the ability to round to a certain number of decimal places, such as in Python:
>>> print round(123.123, 1)
123.1
>>> print round(123.123, -1)
120.0
But how do we round to an arbitrary resolution that is not a power of ten? For example, suppose I want to round a number to the nearest half or third, so that:
123.123 rounded to nearest half is 123.0.
456.456 rounded to nearest half is 456.5.
789.789 rounded to nearest half is 790.0.
123.123 rounded to nearest third is 123.0.
456.456 rounded to nearest third is 456.333333333.
789.789 rounded to nearest third is 789.666666667.
You can round to an arbitrary resolution by scaling the number: multiply it by one over the resolution (or, more simply, just divide it by the resolution). Then round the result to the nearest integer and scale it back.
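For example, rounding 456.456 to the nearest half: 456.456 / 0.5 = 912.912, which rounds to 913, and 913 * 0.5 = 456.5.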
In Python (which is also a very good pseudo-code language), that would be:
def roundPartial(value, resolution):
    return round(value / resolution) * resolution

print "Rounding to halves"
print roundPartial(123.123, 0.5)
print roundPartial(456.456, 0.5)
print roundPartial(789.789, 0.5)

print "Rounding to thirds"
print roundPartial(123.123, 1.0/3)
print roundPartial(456.456, 1.0/3)
print roundPartial(789.789, 1.0/3)

print "Rounding to tens"
print roundPartial(123.123, 10)
print roundPartial(456.456, 10)
print roundPartial(789.789, 10)

print "Rounding to hundreds"
print roundPartial(123.123, 100)
print roundPartial(456.456, 100)
print roundPartial(789.789, 100)
In the code above, it's the roundPartial function that provides the functionality, and it should be easy to translate into any procedural language that has a round function (see the Octave sketch after the sample output below).
The rest of it, basically a test harness, outputs:
Rounding to halves
123.0
456.5
790.0
Rounding to thirds
123.0
456.333333333
789.666666667
Rounding to tens
120.0
460.0
790.0
Rounding to hundreds
100.0
500.0
800.0
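For example, a minimal sketch of such a translation into Octave (the function name round_partial here is my own; Octave's round rounds halves away from zero, just like Python 2's round):

% round value to the nearest multiple of resolution
function result = round_partial (value, resolution)
  result = round (value / resolution) * resolution;
endfunction

round_partial (456.456, 0.5)   % ans = 456.50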
Related
I need to square a binary number, but I need each individual digit's contribution to the square.
As a simple example, the function f(x) = a*x would be represented as f(x) = a*[2^k(x_k) + ... + 2^1(x_1) + 2^0(x_0)], where x_k is the k-th digit of the binary number x. So if x = 101, the function would then be f(x) = a*[2^2(1) + 2^1(0) + 2^0(1)].
Is there a way to represent f(x) = a*x^2 (or even higher-order polynomials) in such a way without carrying out a polynomial expansion of the terms?
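For reference, carrying out that expansion for the quadratic (the very step the question wants to avoid) gives, in the same notation,

f(x) = a*x^2 = a*[sum_k 2^k(x_k)]^2 = a * sum_i sum_j 2^(i+j) (x_i)(x_j)

where, since each digit is 0 or 1, (x_i)^2 = (x_i).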
I am trying to do a (quadratic) regression using Sage, and the largest point I have is of this order of magnitude: (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) (the x's indicate the number of digits).
When I run the code:
var('a,b,c')
model(x) = a*x^2+b*x+c
find_fit(data,model)
(yes, data is already defined)
The fit I get is [a == 1.0, b == 1.0, c == 1.0], which is not even close to correct. Is this because my numbers are too large, or is there some other reason?
The following script should generate a 440Hz sine wave, display the first part of it as a solid line x-t-plot and play the whole wave on the audio output device:
% Octave script to generate and play a 440 Hz sine wave
% frequency
f = 440; % Hz
% duration
duration = 3; % s
% sampling rate
fs = 44100; % Hz
% time points to be sampled over the whole duration
t = ( 0:duration*fs-1 ) / fs;
% compose the wave
wave = 0.2 * sin( 2*pi * f * t );
% duration to display
td = 0.02; % s
% display x-t-diagram
fig = plot( t(t<td), wave(t<td), '-' );
% play sound
sound( wave, fs );
On one computer it does exactly this. However, on another with the same Linux OS, the line always appears dashed and is hardly visible.
How can I tell the plot command to use a solid line on both computers on the X terminal?
I do not use any .octaverc and the system-wide octaverc below /usr/share/octave is unmodified on both computers.
When I plot to a file using saveas, the plot is saved with solid lines even if it appears with dashes on the terminal.
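(For reference, the export was done with something along the lines of saveas(fig, 'wave.png'); the exact filename here is my assumption.)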
Adding the line
graphics_toolkit( 'gnuplot' );
before the plot command does indeed produce a solid line; however, the plot then shows up only after the sound has finished, which is not what I want.
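For reference, that workaround reads like this in context (a sketch of just the plotting part; the rest of the script is unchanged):

% workaround: select the gnuplot toolkit before plotting
graphics_toolkit( 'gnuplot' );
% display x-t-diagram
fig = plot( t(t<td), wave(t<td), '-' );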
I'm attempting my first solo project after taking an introductory machine learning course: using linear approximation to predict the outcome of addition or subtraction of two numbers.
I have 3 features: first number, subtraction/addition (0 or 1), and second number.
So my input looks something like this:
3 0 1
4 1 2
3 0 3
With corresponding output like this:
2
6
0
I have (I think) successfully implemented the linear regression algorithm, as the squared error does gradually decrease, but with 100 training values ranging from 0 to 50, the squared error flattens out at around 685.6 after about 400 iterations.
Graph: Squared Error vs Iterations
To fix this, I have tried using a larger dataset for training, getting rid of regularization, and normalizing the input values.
I know that one way to fix high bias is to add complexity to the model, but I want to maximize performance at this level first. Is it possible to go any further here?
My linear approximation code in Octave:
% Iterate
for i = 1 : iter
    % hypothesis
    h = X * Theta;
    % copy of Theta with its first column zeroed, so the bias is not regularized
    regTheta = Theta;
    regTheta(:, 1) = 0;
    % cost calculation
    J(i, 2) = (1 / (2 * m)) * (sum((h - y) .^ 2) + lambda * sum(sum(regTheta .^ 2, 1), 2));
    % gradient step
    Theta = Theta - (alpha / m) * ((h - y)' * X)' + lambda * sum(sum(regTheta, 1), 2);
end
Note: I'm using 0 for lambda, so as to ignore regularization.
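For context, a minimal driver this loop assumes might look like the following; every value here is an assumption of mine, and X (with a bias column) and y are taken as already loaded:

% hypothetical setup for the loop above (all values assumed)
alpha = 0.01;                     % learning rate
lambda = 0;                       % regularization disabled, as noted
iter = 1000;                      % number of gradient steps
m = size(X, 1);                   % number of training examples
Theta = zeros(size(X, 2), 1);     % initial parameters
J = zeros(iter, 2);               % cost history; column 2 is used above
J(:, 1) = (1:iter)';              % column 1: iteration index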
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);

for iter = 1:num_iters
    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    %               theta.
    %
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCost) and gradient here.
    %
    hypothesis = x*theta;
    theta_0 = theta(1) - alpha(1/m)*sum((hypothesis-y)*x);
    theta_1 = theta(2) - alpha(1/m)*sum((hypothesis-y)*x);
    theta(1) = theta_0;
    theta(2) = theta_1;
    % ============================================================
    % Save the cost J in every iteration
    J_history(iter) = computeCost(X, y, theta);
end

end
I keep getting this error:
error: gradientDescent: subscript indices must be either positive integers less than 2^31 or logicals
pointing at this line, right between the first theta and the =:
theta_0 = theta(1) - alpha(1/m)*sum((hypothesis-y)*x);
I'm very new to Octave, so please go easy on me, and thank you in advance.
This is from Week 2 of the Coursera Machine Learning course.
I'm 99% sure your error is on the line pointed out by topsig, where you have alpha(1/m).
It would help if you gave example input values for your function and the output you hoped to see, but I'm assuming from your comment
% taking num_iters gradient steps with learning rate alpha
that alpha is a constant, not a function. As such, you have written alpha(1/m) without any operator in between, and Octave sees this as indexing alpha with the value 1/m.
That is, if you had an array
x = [3 4 5]
x*(2) = [6 8 10] %% two times each element in the array
x(2) = 4 %% second element in the array
What you did doesn't make sense here, since m = length(y) outputs a scalar, so:
x = [3 4 5]; m = 3;
x*(1/m) = x*(1/3) = [1 1.3333 1.6667] %% each element divided by 3
x(1/m) = ___error___ %% there is no element at index 1/3
Note that for certain errors Octave indicates the location of the error at the assignment operator (the equals sign near the start of the line). If it points there, you usually have to look elsewhere in the line for the actual error; here, it was complaining about a non-integer subscript (1/m).
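As a sketch, the inner update with an explicit multiplication would look something like this (note this also uses the capital X from the function signature and element-wise products per feature column, which the original appears to need as well; that part goes beyond the reported error):

hypothesis = X * theta;
theta_0 = theta(1) - alpha * (1/m) * sum((hypothesis - y) .* X(:, 1));
theta_1 = theta(2) - alpha * (1/m) * sum((hypothesis - y) .* X(:, 2));
theta(1) = theta_0;
theta(2) = theta_1;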