I was trying to run the program, but the window says
error: 'a' undefined near line 2, column 10
error: called from
false_position at line 2 column 6
Here's my code:
function y = false_position(f, a, b, error)
  if ~(f(a) < 0)
    disp("f(a) must be less than 0")
  elseif ~(f(b) > 0)
    disp("f(b) must be greater than zero")
  else
    c = 100000;
    while abs(f(c)) > error
      % Formula for the x-intercept
      c = -f(b) * (b - a) / (f(b) - f(a)) + b;
      if f(c) < 0
        a = c;
      else
        b = c;
      endif
      disp(f(c))
    endwhile
    x = ["The root is approximately located at ", num2str(c)];
    disp(x)
    y = c;
  endif
endfunction
Every time I run the code, it fails like that, and I am not really a pro with Octave. I was hoping someone could help me with this error.
Any answers will do.
I tried to run this function in MATLAB:
function llik = log_likelihood(p)
  global d;
  N = length(d);
  tau = fzero(@(t) (t - (t^2 * p + 1 - p) / (2 * (t * p + 1 - p))), [0,1]);
  loglik = 0;
  for i = 1 : N
    loglik = loglik + log(isnan(d(i)) * (1 - p * (1 - tau) + ~isnan(d(i)) * p * (1 - tau)));
  end
  llik = loglik / N;
end
Here, p is a scalar. MATLAB gives me an error saying
Error using fzero>localFirstFcnEval
FZERO cannot continue because user-supplied function_handle ==>
@(t)(t-(t^2*p+1-p)/(2*(t*p+1-p))) failed with the error below.
Unrecognized function or variable 'p'.
I am confused since p should be the argument of the function. How can it be unrecognized? Thank you for your help!
Everything seems okay with my MATLAB if I assign the value of d inside the function. Where do you define the variable d? If it's a global variable, it must be defined in the calling workspace as well:
global d;
The grad attribute becomes None if I expand the expression. Not sure why? Can somebody give me a clue?
If I expand it, w1.grad.zero_() throws the error "AttributeError: 'NoneType' object has no attribute 'zero_'".
Thanks,
Ganesh
import torch

x = torch.randint(size=(1, 2), high=10)
w = torch.Tensor([16, -14])
b = 36
y = w * x + b
epoch = 20
learning_rate = 0.01
w1 = torch.rand(size=(1, 2), requires_grad=True)
b1 = torch.ones(size=[1], requires_grad=True)
for i in range(epoch):
    y1 = w1 * x + b1
    loss = torch.sum((y1 - y) ** 2)
    loss.backward()
    with torch.no_grad():
        # w1 = w1 - learning_rate * w1.grad  # Not working: w1.grad becomes "None", not sure how ;(
        # b1 = b1 - learning_rate * b1.grad
        w1 -= learning_rate * w1.grad  # Working code.
        b1 -= learning_rate * b1.grad
        w1.grad.zero_()
        b1.grad.zero_()
print("B ", b1)
print("W ", w1)
The thing is that in your working code you are modifying an existing variable, which has a grad attribute, while in the non-working case you are creating a new variable.
As a new w1/b1 variable is created, it has no gradient attribute, because you didn't call backward() on it, but on the "original" variable.
First, let's check whether that's really the case:
print(id(w1)) # Some id returned here
w1 = w1 - learning_rate * w1.grad
# In case below w1 address doesn't change
# w1 -= learning_rate * w1.grad
print(id(w1)) # Another id here
Now, you could copy it in-place and not break it, but there is no point in doing so, and your working case is much clearer; still, for posterity's sake:
w1.copy_(w1 - learning_rate * w1.grad)
The code you provided is updating the parameters w and b using gradient descent. In the first line, w.grad is the gradient of the loss function with respect to the parameter w and lr is the learning rate, a scalar value that determines the step size in the direction of the gradient.
The second line, b = b - b.grad * lr is updating the parameter b in the same way, by subtracting the gradient of the loss with respect to b multiplied by the learning rate.
However, the second line is incorrect: it should be b -= b.grad * lr instead of b = b - b.grad * lr.
Using b = b - b.grad * lr would bind the name b to a brand-new tensor; the original b would not be updated, and since the new tensor has no gradient history of its own, its b.grad would be None.
On the other hand, using b -= b.grad * lr updates the value of b in place, so the original b, together with its .grad attribute, is preserved, and b.grad will not be None.
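To make the difference concrete, here is a minimal standalone sketch (my addition, not from the original post) showing that re-assignment inside torch.no_grad() produces a fresh tensor whose .grad is None:

import torch

b = torch.ones(1, requires_grad=True)
loss = (b * 2).sum()
loss.backward()
print(b.grad)  # tensor([2.]) -- populated by backward()

with torch.no_grad():
    b = b - 0.01 * b.grad  # re-assignment: b now names a brand-new tensor

print(b.requires_grad)  # False -- the new tensor was created under no_grad()
print(b.grad)           # None -- backward() was never called on this tensor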
I am trying to store the coefficients from a simulated regression in variables b1 and b2 in the code below, but I'm not quite sure how to go about this. I've tried using return scalar b1 = _b[x1] and return scalar b2 = _b[x2] after declaring the program rclass, but that didn't work. Then I tried scalar b1 = e(x1) and scalar b2 = e(x2) after declaring it eclass, and that also wasn't successful.
The goal is to use these stored coefficients to estimate some value (say rhat) and test the standard error of rhat.
Here's my code:
program montecarlo2, eclass
    clear
    version 11
    drop _all
    set obs 20
    gen x1 = rchi2(4) - 4
    gen x2 = (runiform(1,2) + 3.5)^2
    gen u = 0.3*rnormal(0,25) + 0.7*rnormal(0,5)
    gen y = 1.3*x1 + 0.7*x2 + 0.5*u
    * OLS Model
    regress y x1 x2
    scalar b1 = e(x1)
    scalar b2 = e(x2)
end
I want to do something like
rhat = b1 + b2, and then test the standard error of rhat.
Let's hack a bit at your program:
Version 1
program montecarlo2
    clear
    version 11
    set obs 20
    gen x1 = rchi2(4) - 4
    gen x2 = (runiform(1,2) + 3.5)^2
    gen u = 0.3*rnormal(0,25) + 0.7*rnormal(0,5)
    gen y = 1.3*x1 + 0.7*x2 + 0.5*u
    * OLS Model
    regress y x1 x2
end
I cut drop _all as unnecessary given the clear. I also cut the eclass. One reason for doing that is that regress will leave e-class results in its wake anyway. Also, you can, if you wish, add
scalar b1 = _b[x1]
scalar b2 = _b[x2]
scalar r = b1 + b2
either within the program after the regress or immediately after the program runs.
Version 2
program montecarlo2, eclass
    clear
    version 11
    set obs 20
    gen x1 = rchi2(4) - 4
    gen x2 = (runiform(1,2) + 3.5)^2
    gen u = 0.3*rnormal(0,25) + 0.7*rnormal(0,5)
    gen y = 1.3*x1 + 0.7*x2 + 0.5*u
    * OLS Model
    regress y x1 x2
    * stuff to add
end
Again, I cut drop _all as unnecessary given the clear. Now the declaration eclass is double-edged: it gives the programmer scope for their program to save e-class results, but you have to say what they will be. That's the "stuff to add" indicated by the comment above.
Warning: I've tested none of this. I am not addressing the wider context. @Dimitriy V. Masterov's suggestion of lincom is likely to be a really good idea for whatever your problem is.
0 = A
1 = B
...
25 = Z
26 = AA
27 = AB
...
701 = ZZ
702 = AAA
I cannot think of any solution that does not involve loop brute force :-(
I expect a function/program that accepts a decimal number and returns a string as a result.
Haskell, 78 57 50 43 chars
o=map(['A'..'Z']:)$[]:o
e=(!!)$o>>=sequence
Other entries aren't counting the driver, which adds another 40 chars:
main=interact$unlines.map(e.read).lines
A new approach, using a lazy, infinite list, and the power of Monads! And besides, using sequence makes me :), using infinite lists makes me :o
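If it helps to see the idea outside Haskell, here is a rough Python equivalent of the same lazy enumeration (my addition, just a sketch): generate all 1-letter names, then all 2-letter names, and so on, and index into that stream.

from itertools import count, islice, product

def excel_names():
    letters = [chr(c) for c in range(ord('A'), ord('Z') + 1)]
    for width in count(1):  # 1-letter names, then 2-letter names, ...
        for combo in product(letters, repeat=width):
            yield ''.join(combo)

print(list(islice(excel_names(), 5)))  # ['A', 'B', 'C', 'D', 'E']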
If you look carefully, the Excel representation is like a base-26 number, but not exactly the same as base 26.
In the Excel conversion Z + 1 = AA, while in base 26 Z + 1 = BA.
The algorithm is almost the same as a decimal-to-base-26 conversion, with just one change:
in base 26 we would make the recursive call on the quotient, but here we pass it quotient - 1:
function decimalToExcel(num)
    // Base case of the recursion.
    if num < 26
        print 'A' + num
    else
        quotient = num / 26;
        remainder = num % 26;
        // Recursive calls.
        decimalToExcel(quotient - 1);
        decimalToExcel(remainder);
    end-if
end-function
Java Implementation
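For reference, here is the same quotient-minus-one recursion as runnable Python (my addition; the function above is pseudocode):

def decimal_to_excel(num):
    # Base case: a single letter, A = 0 ... Z = 25.
    if num < 26:
        return chr(ord('A') + num)
    # Recurse on quotient - 1 (the one change vs. plain base 26),
    # then append the letter for the remainder.
    return decimal_to_excel(num // 26 - 1) + chr(ord('A') + num % 26)

for n in (0, 25, 26, 701, 702):
    print(n, '=', decimal_to_excel(n))  # A, Z, AA, ZZ, AAA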
Python, 44 chars
Oh c'mon, we can do better than lengths of 100+ :
X=lambda n:~n and X(n/26-1)+chr(65+n%26)or''
Testing:
>>> for i in 0, 1, 25, 26, 27, 700, 701, 702:
... print i,'=',X(i)
...
0 = A
1 = B
25 = Z
26 = AA
27 = AB
700 = ZY
701 = ZZ
702 = AAA
Since I am not sure what base you're converting from and what base you want (your title suggests one and your question the opposite), I'll cover both.
Algorithm for converting ZZ to 701
First recognize that we have a number encoded in base 26, where the "digits" are A..Z. Set a counter a to zero and start reading the number at the rightmost (least significant) digit. Progressing from right to left, read each "digit" and convert it to a decimal number. Multiply this by 26^a and add it to the result. Increment a and process the next digit.
Algorithm for converting 701 to ZZ
We simply factor the number into powers of 26, much like we do when converting to binary. Simply take num % 26, convert it to an A..Z "digit" and append it to the converted number (assuming it's a string), then integer-divide num by 26. Repeat until num is zero. After this, reverse the converted string so that the most significant digit comes first.
Edit: As you point out, once two-digit numbers are reached we actually have base 27 for all non-least-significant digits. Simply apply the same algorithms here, incrementing any "constants" by one. Should work, but I haven't tried it myself.
Re-edit: For the ZZ->701 case, don't increment the base exponent. Do, however, keep in mind that A is no longer 0 (but 1) and so forth.
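Here is a short Python sketch of the corrected ZZ->701 direction (my addition, following the Re-edit above: A..Z count as 1..26, and the result is shifted by one to match the 0-based numbering in the question):

def excel_to_decimal(s):
    result = 0
    # Left-to-right accumulation is equivalent to the right-to-left
    # powers-of-26 sum described above.
    for ch in s:
        result = result * 26 + (ord(ch) - ord('A') + 1)
    return result - 1  # shift to the question's 0-based numbering

print(excel_to_decimal('A'))    # 0
print(excel_to_decimal('ZZ'))   # 701
print(excel_to_decimal('AAA'))  # 702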
Explanation of why this is not a base 26 conversion
Let's start by looking at a real base-26 positional system. (Rather, look at base 4, since it has fewer digits.) The following is true (assuming A = 0):
A = AA = A * 4^1 + A * 4^0 = 0 * 4^1 + 0 * 4^0 = 0
B = AB = A * 4^1 + B * 4^0 = 0 * 4^1 + 1 * 4^0 = 1
C = AC = A * 4^1 + C * 4^0 = 0 * 4^1 + 2 * 4^0 = 2
D = AD = A * 4^1 + D * 4^0 = 0 * 4^1 + 3 * 4^0 = 3
BA = B * 4^1 + A * 4^0 = 1 * 4^1 + 0 * 4^0 = 4
And so forth... notice that AA is 0 rather than 4 as it would be in Excel notation. Hence, Excel notation is not base 26.
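You can check this with a couple of lines of Python (my addition): interpreting the letters as plain base-26 digits with A = 0 collapses A, AA, AAA, ... into the same value, unlike the Excel numbering.

def plain_base26(s):  # positional base 26, A = 0 ... Z = 25
    n = 0
    for ch in s:
        n = n * 26 + (ord(ch) - ord('A'))
    return n

print(plain_base26('A'), plain_base26('AA'))  # 0 0 -- the leading A acts as a leading zero
# In Excel numbering, A = 0 but AA = 26, so the two systems disagree.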
In Excel VBA ... the obvious choice :)
Sub a()
    For Each O In Range("A1:AA1")
        k = O.Address()
        Debug.Print Mid(k, 2, Len(k) - 3); "="; O.Column - 1
    Next
End Sub
Or for getting the column number in the first row of the WorkSheet (which makes more sense, since we are in Excel ...)
Sub a()
    For Each O In Range("A1:AA1")
        O.Value = O.Column - 1
    Next
End Sub
Or better yet: 56 chars
Sub a()
    Set O = Range("A1:AA1")
    O.Formula = "=Column()"
End Sub
Scala: 63 chars
def c(n:Int):String=(if(n<26)""else c(n/26-1))+(65+n%26).toChar
Prolog, 109 123 bytes
Convert from decimal number to Excel string:
c(D,E):- d(D,X),atom_codes(E,X).
d(D,[E]):-D<26,E is D+65,!.
d(D,[O|M]):-N is D//27,d(N,M),O is 65+D rem 26.
That code does not work for c(27, N), which yields N='BB'
This one works fine:
c(D,E):-c(D,26,[],X),atom_codes(E,X).
c(D,B,T,M):-(D<B->M-S=[O|T]-B;(S=26,N is D//S,c(N,27,[O|T],M))),O is 91-S+D rem B,!.
Tests:
?- c(0, N).
N = 'A'.
?- c(27, N).
N = 'AB'.
?- c(701, N).
N = 'ZZ'.
?- c(702, N).
N = 'AAA'
Converts from Excel string to decimal number (87 bytes):
x(E,D):-x(E,0,D).
x([C],X,N):-N is X+C-65,!.
x([C|T],X,N):-Y is (X+C-64)*26,x(T,Y,N).
F# : 166 137
let rec c x = if x < 26 then [(char) ((int 'A') + x)] else List.append (c (x/26-1)) (c (x%26))
let s x = new string (c x |> List.toArray)
PHP: At least 59 and 33 characters.
<?for($a=NUM+1;$a>=1;$a=$a/26)$c=chr(--$a%26+65).$c;echo$c;
Or the shortest version:
<?for($a=A;$i++<NUM;++$a);echo$a;
Using the following formula, you can figure out the last character in the string:
char transform(int num)
{
    return (char)(num + 65); // Map 0..25 onto ASCII 'A'..'Z' ('A' is 65).
}

char lastChar(int num)
{
    return transform(num % 26);
}
Using this, we can make a recursive function (I don't think it's brute force).
string getExcelHeader(int decimal)
{
    if (decimal >= 26)
        return getExcelHeader(decimal / 26 - 1) + transform(decimal % 26);
    else
        return transform(decimal);
}
Or.. something like that. I'm really tired, maybe I should stop answering questions and go to bed :P