Derivative in a function

I'd like to write a Mathematica function that takes an expression as argument, takes the derivative of that expression, and then does something to the expression. So (as a toy example) I'd like to write
F[f_] = D[f, x] * 2
so that
F[x^2] = 4x
Instead, I get
F[x^2] = 0
Can someone point me to the relevant docs? I spent some time poking around the Mathematica reference, but didn't find anything helpful.

You've used assignment = when you mean to use delayed assignment :=. When you evaluate F[f_]=D[f,x]*2 using (non-delayed) assignment, Mathematica looks at D[f,x] and sees that f (an unassigned symbol) does not depend on x; hence, its derivative is 0. Thus, F[f_]=0 for any arguments to F, which is what it returns later.
If you want F to be evaluated only after you have specified what f_ should be, you need to use delayed assignment by replacing = with :=.
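For example, with delayed assignment the right-hand side is only evaluated once an actual expression has been substituted for f (a minimal check; the Sin[x] call is just an extra illustration):
F[f_] := D[f, x]*2   (* delayed: the RHS is evaluated each time F is called *)
F[x^2]               (* now gives 4 x *)
F[Sin[x]]            (* 2 Cos[x] *)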

Specify a general scalar function within a vector operation in Mathematica?

I am trying to take the derivative of a function that includes a scalar function of a vector and the vector itself. A simpler example of this is:
D[ A[b[t]]*b[t]/(b[t].b[t]), t]
where b[t] is a 3-vector and A[b[t]] is a scalar function. I get nonsense back out, since Mathematica isn't treating A[b[t]] as a scalar.
I've tried using $Assumptions = {(b[t]) \[Element] Vectors[3, Reals], (M[b[t]] | t) \[Element] Reals} and this doesn't seem to help.
Any tips?
Edit to add more detail:
In that example case, I should get:
A/(b[t].b[t]) * (2(b[t].b'[t])b[t]/(b[t].b[t])^2 - b'[t])
- D[A,b]*1/(b[t].b[t])^(3/2) * (b[t].b'[t])b[t]
Where ' denotes a derivative with respect to t.
Mathematica gives everything correctly except the last term. The last term is instead:
-((b[t] b'[t] M'[b[t]])/b[t].b[t])

(Easy) Matlab: finding zero spots (fzero)

I'm all new to Matlab and I'm supposed to use this function to find all 3 zero spots.
f.m (my file where the function can be found)
function fval = f(x)
% FVAL = F(X), compute the value of a test function in x
fval = exp(-x) - exp(-2*x) + 0.05*x - 0.25;
So obviously I write "type f" to check my function, but then I try something like fzero('f', 0) and get ans = 0.4347. I assume that's one of my three zero spots, but how do I find the other two?
From the fzero documentation:
x = fzero(fun,x0) tries to find a zero of fun near x0, if x0 is a scalar. fun is a function handle. The value x returned by fzero is near a point where fun changes sign, or NaN if the search fails. In this case, the search terminates when the search interval is expanded until an Inf, NaN, or complex value is found.
So it can't find all zeros by itself, only one! Which one depends on the x0 you pass in.
Here's an example of how to find some more zeros, if you know the interval. It just repeatedly calls fzero for different points in the interval (and can still miss a zero if your discretization is too coarse), so a more clever technique will obviously be faster:
http://www.mathworks.nl/support/solutions/en/data/1-19BT9/index.html?product=ML&solution=1-19BT9
As you can see in the documentation and the example above, the proper way to call fzero is with a function handle (@fun), so in your case:
zero1 = fzero(@f, 0);
From this info you can also see that the actual roots are at 0.434738, 1.47755 and 4.84368. So if you call fzero with 0.4, 1.5 and 4.8 you probably get those values out of it (convergence of fzero depends on which algorithm it uses and what function you feed it).
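Here is a minimal sketch of that grid-and-refine idea (the interval [0, 6] and the 100-point grid are illustrative choices, not taken from the linked page): sample f on a coarse grid and call fzero on every subinterval where the sign changes.
xs = linspace(0, 6, 100);      % coarse grid over the interval of interest
fs = arrayfun(@f, xs);         % sample the function on the grid
roots_found = [];
for k = 1:numel(xs)-1
    if sign(fs(k)) ~= sign(fs(k+1))               % sign change => a root in [xs(k), xs(k+1)]
        roots_found(end+1) = fzero(@f, [xs(k), xs(k+1)]);
    end
end
roots_found                    % should come out near 0.4347, 1.4776 and 4.8437
As noted above, this can still miss roots if the grid is too coarse.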
Just to complement Gunther Struyf's answer: there's a nice function on the file exchange by Stephen Morris called FindRealRoots. This function finds an approximation to all roots of any function on any interval.
It works by approximating the function with a Chebyshev polynomial and then computing the roots of that polynomial. This obviously only works well for continuous, smooth and otherwise well-behaved functions, but the function you give seems to have those qualities.
You would use this something like so:
%# find approximate roots
R = FindRealRoots(@f, -1, 10, 100);

%# refine all roots thus found
for ii = 1:numel(R)
    R(ii) = fzero(@f, R(ii));
end

Matlab function which is a function of an integral

I need to write my own function which has the form f(x,y) = Integral(g(x,y,z), z from 0 to inf). So the code I used was:
function y=f(x,y)
g=@(z)exp(-z.^2)./(z.^x).*(z.^2+y.^2).^(x/2); % as a function of x, y and z
y=quadgk(g,0,inf)
and if I call it for a single value like f(x0,y0) it works, but if I try to calculate something like f([1:10],y0), the error message complains about the element-wise multiplication (.*) and mismatched dimensions. In principle I can use for loops, but then my code slows down and takes forever. Is there any help I can get from you guys, or references?
I'm trying to avoid the for loop, since in Matlab it's much faster to use matrix computations than loops. I wonder if there is any trick that lets me take advantage of that here.
Thanks for any help in advance,
Lynn
Perhaps you can try transposing the input, passing a column vector instead of a row vector: f([1:10]',y0). Otherwise something in your function might be wrong; for example, to get x^y to work with vector input you have to prefix the operator with a dot, x.^y, and the same goes for multiplication and division.
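For the dot-operator point, a one-line illustration (the values are purely illustrative):
x = 1:3;
x.^2      % element-wise power: [1 4 9]
% x^2     % error: ^ is matrix power and needs a square matrix or a scalar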
If a loop is no problem for you, you could do something like:
function y2=f(x,y)
y2=zeros(size(x));
for n=1:numel(x)
    g=@(z)exp(-z.^2)./(z.^x(n)).*(z.^2+y.^2).^(x(n)/2); % as a function of x, y and z
    y2(n)=quadgk(g,0,inf)
end
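If you prefer not to write the loop yourself, arrayfun does the same thing element by element (just a sketch; y0 = 2 is an illustrative value and f is the looped version above, or any version that accepts a scalar x). It is no faster than the explicit loop, but it keeps the call site compact:
y0 = 2;                                   % illustrative value for the second argument
vals = arrayfun(@(xk) f(xk, y0), 1:10);   % calls f once per element of 1:10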
The problem here is that quadgk itself passes a vector of z-values to g. Then you have in g something like z.^x, which is the element-wise power of two vectors and is only defined if z and x have the same dimensions. But this is not what you want.
I assume that you want to evaluate the function for all arguments in x, with the output vector having the same dimensions as x. But this does not seem to be possible, since even this simple example
g=@(x)[x;x.^2]
quad(g,0,1)
does not work:
Error using quad (line 79)
The integrand function must return an output vector of the same length as the
input vector.
A similar error shows up when using quadgk. The documentation also says that this routine works only for scalar functions, which is not surprising: an adaptive quadrature rule will in general use different evaluation points for each component of the integrand.
You would have to use quadv instead, which can integrate vector-valued functions. But that gives wrong results here, since your function is integrated over the interval [0, ∞) and quadv does not handle infinite integration limits.

Mechanism of a function in Scheme

Here is a strange function in Scheme:
(define f
  (call/cc
    (lambda (x) x)))
(((f 'f) f) 1)
When f is called at the command line, the result displayed is f.
What is the explanation of this mechanism?
Thanks!
You've just stumbled upon 'continuations', possibly the hardest thing in Scheme to understand.
call/cc is an abbreviation for call-with-current-continuation. What the procedure does is take a single-argument function as its own argument and call it with the current 'continuation'.
So what's a continuation? That's infamously hard to explain, and you should probably google it to get a better explanation than mine. But a continuation is simply a function of one argument whose body represents a certain 'continuation' of a value.
For example, when we have (+ 2 (* 2 exp)) with exp being some expression, and we evaluate that expression, there is a 'continuation' waiting for its result: a place where evaluation continues. If exp evaluates to 3, for instance, that value is inserted to give (* 2 3), and evaluation goes on from there with the next 'continuation', the place where evaluation continues after that, which is (+ 2 ...).
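A small REPL sketch of exactly that example (illustrative only; it uses set! to stash the captured continuation so it can be invoked again later):
(define k #f)                   ; will hold the captured continuation
(+ 2 (* 2 (call/cc
            (lambda (c)
              (set! k c)        ; k now stands for "(+ 2 (* 2 <hole>))"
              3))))             ; => 8, evaluation simply continues with 3
(k 5)                           ; => 12, re-enters the old expression with 5 in the hole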
In almost all programming-language contexts, the place where computation continues with a value is right where that value was produced. The return statement in many languages is a key counterexample: there, the continuation sits at a totally different place than the return statement itself.
In Scheme you have direct control over your continuations; you can 'capture' them as is done here. What f does is nothing more than evaluate to the current continuation: when (lambda (x) x) is called with the current continuation, it simply returns it, so the whole call/cc expression, and therefore f, evaluates to that continuation. As I said, continuations are themselves functions whose body can be seen as the continuation they capture; the designers of Scheme famously showed that continuations can be treated as plain lambda abstractions.
So in the code, f first evaluates to the continuation it was captured in. Then this continuation, used as a function, is applied to 'f (a symbol). This means that the symbol is sent back to that continuation, where it is evaluated again as a symbol to reveal the function it is bound to, which is again called with a symbol as its argument, and that is what is finally displayed.
Kind of mind-boggling. If you've seen the film 'Primer', maybe this explains it:
http://thisdomainisirrelevant.net/1047

What is this symbolic code transformation called?

I often come across this kind of code transformation (or even mathematical transformation). (Python example, but it applies to any language.)
I've got a function:
def f(x):
    return x
I use it in another one:
def g(x):
    return f(x)*f(x)

print g(2)
leads to 4
But I want to remove the functional dependency, and I change the function g into
def g(f):
    return f*f

print g(f(2))
leads to 4 too
What do you call this kind of transformation, locally turning a function into a scalar?
I'm not sure there is a specific term for it.
In general, in functional programming there usually isn't a distinction made between passing scalar arguments and passing functions as arguments.
In the first example I could still call g(f(2)) and it should calculate f(f(2))*f(f(2)), which (since f(x) is the identity transformation) will also result in 4 as the answer.
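To spell that out with the definitions from the question (just an illustrative check):
def f(x):
    return x              # identity

def g(x):
    return f(x) * f(x)

print(g(2))               # 4
print(g(f(2)))            # also 4: this computes f(f(2)) * f(f(2)), and f is the identity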