How can multiple surfaces be plotted on the same axes, with each surface using a different colormap?
Using colormap("...") changes it for the entire figure, not just a single surface.
Thanks
Do you mean on the same axes?
I haven't found a function that does this directly, but it is possible to pass the desired colors to the surf function.
The way I found:
Convert the data to a 0-1 scale and then map it through the desired colormap.
Example with hot and jet colormaps:
tx = ty = linspace (-8, 8, 41)';
[xx, yy] = meshgrid (tx, ty);
r = sqrt (xx .^ 2 + yy .^ 2) + eps;
tz = sin (r) ./ r ;
function normalized = normalize_01(data)
  # Rescale the data to the range [0, 1]
  data_min = min(min(data));
  data_max = max(max(data));
  normalized = (data - data_min) / (data_max - data_min);
endfunction
function rgb = data2rgb(data, color_bits, cmap)
  # Map the data to true-color (RGB) values through the given colormap
  grays = normalize_01(data);
  indexes = gray2ind(grays, color_bits);
  rgb = ind2rgb(indexes, cmap);
endfunction
color_bits = 128;
cmap_1 = hot(color_bits);
rgb_1 = data2rgb(tz, color_bits, cmap_1);
surf(tx, ty, tz, rgb_1)
hold on
cmap_2 = jet(color_bits);
rgb_2 = data2rgb(tz+3, color_bits, cmap_2);
surf(tx, ty, tz+3, rgb_2)
But if you also need a colorbar, this approach might not be enough, unless you find a way to manually add two colorbars in the same way the colormaps were handled here.
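As a side note (this is Python/matplotlib rather than Octave, offered only as an illustration of the same idea): matplotlib lets each surface carry its own colormap and its own colorbar directly, which is roughly the effect being emulated above.
import numpy as np
import matplotlib.pyplot as plt
t = np.linspace(-8, 8, 41)
xx, yy = np.meshgrid(t, t)
r = np.sqrt(xx**2 + yy**2) + np.finfo(float).eps
zz = np.sin(r) / r
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
s1 = ax.plot_surface(xx, yy, zz, cmap='hot')      # first surface with its own colormap
s2 = ax.plot_surface(xx, yy, zz + 3, cmap='jet')  # second surface with a different colormap
fig.colorbar(s1, ax=ax, shrink=0.6)               # one colorbar per surface
fig.colorbar(s2, ax=ax, shrink=0.6)
plt.show()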
All, I am trying to take the Laplacian of the following function:
g(x,y) = (1/2)*c*x^2 + (1/2)*d*y^2
The Laplacian is c + d (since d²g/dx² = c and d²g/dy² = d), which is a constant. Using the FFT I should get the same result (in my FFT example I am padding the function to avoid edge effects).
Here is my code:
import numpy as np
import math
# Define a 2D function
n = 30 # number of points
Lx = 30 # extension in x
Ly = 30 # extension in y
dx = n/Lx # Step in x
dy = n/Ly # Step in y
c=4
d=4
x=np.arange(-Lx/2,Lx/2)
y=np.arange(-Ly/2,Ly/2)
g = np.zeros((Lx,Ly))
lapg = np.zeros((Lx,Ly))
for j in range(Ly):
for i in range(Lx):
g[i,j] = (1/2)*c*x[i]**2 + (1/2)*d*y[j]**2
lapg[i,j] = c + d
kxpad = 2*np.pi*np.fft.fftfreq(2*Lx,d=dx)
#kxpad = (2*np.pi/(2*Lx))*np.arange(-2*Lx/2,2*Lx/2)
#kxpad = np.fft.fftshift(kxpad)
#kypad = (2*np.pi/(2*Ly))*np.arange(-2*Ly/2,2*Ly/2)
#kypad = np.fft.fftshift(kypad)
kypad = 2*np.pi*np.fft.fftfreq(2*Ly,d=dy)
kpad = np.zeros((2*Lx,2*Ly))
for j in range(2*Ly):
for i in range(2*Lx):
kpad[i,j] = math.sqrt(kxpad[i]**2+kypad[j]**2)
kpad = np.fft.fftshift(kpad)
gpad = np.zeros((2*Lx,2*Ly))
gpad[:Lx,:Ly] = g # Filling main part of g in gpad
gpad[:Lx,Ly:] = g[:,-1::-1] # Fill the right half of gpad with g mirrored left-right
gpad[Lx:,:Ly] = g[-1::-1,:] # Fill the bottom half of gpad with g mirrored top-bottom
gpad[Lx:,Ly:] = g[-1::-1, -1::-1] # Fill the remaining quadrant with g mirrored in both directions
rdFFT2D = np.zeros((Lx,Ly))
gpadhat = np.fft.fft2(gpad)
dgpadhat = -(kpad**2)*gpadhat #taking the derivative iwFFT(f)
rdpadFFT2D = np.real(np.fft.ifft2(dgpadhat))
rdFFT2D = rdpadFFT2D[:Lx,:Ly]
The first image is the plot of the original function g(x,y), the second is the analytical Laplacian of g, and the third is the Sugarloaf in Rio de Janeiro (lol); actually it is the Laplacian computed with the FFT. What am I doing wrong here?
Edit: commenting on the ripple effect.
Cris, do you mean the ripple effect due to set_zlim in the image below? Just to remind you that the result should be 8.
Edit 2: using non-symmetric x and y values produces the two images below.
The padding will not change the boundary condition: You are padding by replicating the function, mirrored, four times. The function is symmetric, so the mirroring doesn't change it. Thus, your padding simply repeats the function four times. The convolution through the DFT (which you're attempting to implement) uses a periodic boundary condition, and thus already sees the input function as periodic. Replicating the function will not improve the convolution results at the edges.
To improve the result at the edges, you would need to implement a different boundary condition. The most effective one (since the input is analytical anyway) is to simply extend your domain and then crop it after applying the convolution. This is a boundary extension where the image is padded with actual data from outside the original domain. It is an ideal boundary extension, suitable for an ideal case where we don't have to deal with real-world data.
This implements the Laplacian through the DFT with greatly simplified code, where we ignore any boundary extension, as well as the sample spacing (basically setting dx=1 and dy=1):
import numpy as np
import matplotlib.pyplot as pp
n = 30 # number of points
c = 4
d = 4
x = np.arange(-n//2,n//2)
y = np.arange(-n//2,n//2)
g = (1/2)*c*x[None,:]**2 + (1/2)*d*y[:,None]**2
kx = 2 * np.pi * np.fft.fftfreq(n)
ky = 2 * np.pi * np.fft.fftfreq(n)
lapg = np.real(np.fft.ifft2(np.fft.fft2(g) * (-kx[None, :]**2 - ky[:, None]**2)))
fig = pp.figure()
ax = fig.add_subplot(121, projection='3d')
ax.plot_surface(x[None,:], y[:,None], g)
ax = fig.add_subplot(122, projection='3d')
ax.plot_surface(x[None,:], y[:,None], lapg)
pp.show()
Edit: Boundary extension would work as follows:
import numpy as np
import matplotlib.pyplot as pp
n_true = 30 # number of pixels we want to compute
n_boundary = 15 # number of pixels to extend the image in all directions
c = 4
d = 4
# First compute g and lapg including the boundary extension
n = n_true + n_boundary * 2
x = np.arange(-n//2,n//2)
y = np.arange(-n//2,n//2)
g = (1/2)*c*x[None,:]**2 + (1/2)*d*y[:,None]**2
kx = 2 * np.pi * np.fft.fftfreq(n)
ky = 2 * np.pi * np.fft.fftfreq(n)
lapg = np.real(np.fft.ifft2(np.fft.fft2(g) * (-kx[None, :]**2 - ky[:, None]**2)))
# Now crop the two images to our desired size
x = x[n_boundary:-n_boundary]
y = y[n_boundary:-n_boundary]
g = g[n_boundary:-n_boundary, n_boundary:-n_boundary]
lapg = lapg[n_boundary:-n_boundary, n_boundary:-n_boundary]
# Display
fig = pp.figure()
ax = fig.add_subplot(121, projection='3d')
ax.plot_surface(x[None,:], y[:,None], g)
ax.set_zlim(0, 800)
ax = fig.add_subplot(122, projection='3d')
ax.plot_surface(x[None,:], y[:,None], lapg)
ax.set_zlim(0, 800)
pp.show()
Note that I'm scaling the z-axes of the two plots in the same way to not enhance the effects of the boundary too much. Fourier-domain filtering like this is typically much more sensitive to edge effects than spatial-domain (or temporal-domain) filtering because the filter has an infinitely-long impulse response. If you leave out the set_zlim command, you'll see a ripple effect in the otherwise flat lapg image. The ripples are very small, but no matter how small, they'll look huge on a completely flat function because they'll stretch from the bottom to the top of the plot. The equal set_zlim in the two plots just puts this noise in proportion.
I am developing finite element software that minimizes the energy of a mechanical structure. Using Octave and its optim package, I run into a strange issue: the lm_feasible algorithm doesn't calculate at all when I use more than 300 degrees of freedom (DoF). Another algorithm (sqp) performs the calculation but doesn't work well once I make the structure more complex and move away from my test case.
Is there a limit in the number of DoF with lm_feasible algorithm?
If so, how many DoF are maximally possible?
To give an overview and general idea of how the code works:
[x,y] = geometryGenerator()
U = zeros(length(x)*2,1);
U(1:2:end-1) = x;
U(2:2:end) = y;
% Non-geometric arguments are not optimised and are fixed during the calculation
fct = @(U) complexFunctionOfEnergyIWrap(U(1:2:end-1), U(2:2:end), variousMaterialPropertiesAndOtherArgs)
para = optimset("f_equc_idx",contEq,"lb",lb,"ub",ub,"objf_grad",dEne,"objf_hessian",d2Ene,"MaxIter",1000);
[U,eneFinale,cvg,outp] = nonlin_min(fct,U,para)
Full example:
clear
pkg load optim
function [x,y] = geometryGenerator(r,elts = 100)
teta = linspace(0,pi,elts);
x = r * cos(teta);
y = r * sin(teta);
endfunction
function ene = complexFunctionOfEnergyIWrap (x,y,E,P, X,Y)
ene = 0;
for i = 1:length(x)-1
ene += E*(x(i)/X(i))^4+ E*(y(i)/Y(i))^4- P *(x(i)^2+(x(i+1)^2)-x(i)*x(i+1))*abs(y(i)-y(i+1));
endfor
endfunction
[x,y] = geometryGenerator(5,100)
%Small offset from the axis to avoid division by zero
x +=1e-6;
y +=1e-6;
%Saving initial geometry
X = x;
Y = y;
%Vectorisation of the function
%% Initial vector
U = zeros(length(x)*2,1);
U(1:2:end-1) = linspace(min(x),max(x),length(x));
U(2:2:end) = linspace(min(y),max(y),length(y));
%%Constraints
Aeq = zeros(3,length(U));
%%% Blocked bottom
Aeq(1,1) = 1;
Aeq(2,2) = 1;
%%% Sliding top
Aeq(3,end-1) = 1;
%%%Initial condition
beq = zeros(3,1);
beq(1) = U(1);
beq(2) = U(2);
beq(3) = U(end-1);
contEq = @(U) Aeq * U - beq;
%Parameter
Mat = 0.2e9;
pressure = 50;
%% Vectorised function. Non-geometric arguments are not optimised and are fixed during the calculation
fct = @(U) complexFunctionOfEnergyIWrap(U(1:2:end-1), U(2:2:end), Mat, pressure, X, Y)
para = optimset("Algorithm","lm_feasible","f_equc_idx",contEq,"MaxIter",1000);
[U,eneFinale,cvg,outp] = nonlin_min(fct,U,para)
xFinal = U(1:2:end-1);
yFinal = U(2:2:end);
plot(x,y,';Initial geo;',xFinal,yFinal,'--x;Final geo;')
The Finite Element Method is typically formulated via the optimality criteria of a minimization problem, which is equivalent to the Virtual Work Principle (see books like Hughes or Bathe). The Virtual Work Principle yields a set of linear (or nonlinear) equations which can be solved more efficiently (with fsolve).
If for some reason you must solve the problem as an optimization problem, then, if you are considering linear elasticity, your strain energy is quadratic, so you could use Octave's qp function.
Using sparse matrices could also be helpful.
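If it helps to see the equivalence concretely, here is a minimal sketch (in Python with NumPy/SciPy rather than Octave, with an arbitrary sparse SPD matrix and load vector standing in for the real stiffness matrix and loads) showing that minimizing a quadratic energy 1/2*u'*K*u - f'*u gives the same result as solving the linear system K*u = f:
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.optimize import minimize
n = 300                                           # number of degrees of freedom
K = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n), format='csc')  # stand-in sparse SPD "stiffness" matrix
f = np.ones(n)                                    # stand-in load vector
# Route 1: solve the linear system given by the Virtual Work Principle
u_solve = spla.spsolve(K, f)
# Route 2: minimize the quadratic strain energy E(u) = 1/2 u'Ku - f'u
energy = lambda u: 0.5 * u @ (K @ u) - f @ u
grad = lambda u: K @ u - f
u_min = minimize(energy, np.zeros(n), jac=grad, method='L-BFGS-B').x
print(np.abs(u_solve - u_min).max())              # tiny: both routes agree to the optimizer's tolerance
The sparse direct solve is far cheaper than the general-purpose optimizer, which is the point being made above.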
I have used this code but it is showing me an error. Help me solve this.
from minisom import MiniSom
som = MiniSom(x=10, y=10, input_len=15, sigma=1.0, learning_rate=0.5)
som.random_weights_init(x)
som.train_random(data=x,num_iteration=100)
from pylab import bone, pcolor, colorbar, plot, show
bone()
pcolor(som.distance_map().T)
colorbar()
markers = ['o', 's']
colors = ['r', 'g']
for i, x1 in enumerate(x):
w = som.winner(x)
plot(w[0] + 0.5,
w[1] + 0.5,
markers[y[i]],
markeredgecolor = colors[y[i]],
markerfacecolor = 'None',
markersize = 10,
markeredgewidth = 2)
show()
The line w = som.winner(x) should be replaced with w = som.winner(x1)
The MiniSom.winner() method computes the coordinates of the winning neuron for a single sample, i.e. one row of your dataset; the corresponding variable in your code is x1.
You are iterating x1 over the rows of x, but you are still passing the whole dataset variable x to the som.winner() method.
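For clarity, the corrected loop (reusing som, x, y, markers, colors and the pylab plot/show imports exactly as in the question) would be:
for i, x1 in enumerate(x):
    w = som.winner(x1)   # winning neuron for this single sample, not the whole dataset
    plot(w[0] + 0.5,
         w[1] + 0.5,
         markers[y[i]],
         markeredgecolor = colors[y[i]],
         markerfacecolor = 'None',
         markersize = 10,
         markeredgewidth = 2)
show()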
I am trying to use the fmincon function in MATLAB. I am getting a warning about an algorithm change when evaluating my function (warning shown at the end of the post). I wanted to use fminsearch, but I have 2 constraints I need to satisfy. It doesn't make sense for MATLAB to change algorithms to evaluate my function because my constraints are very simple. I have provided the constraints and the relevant piece of code:
Constraints:
theta(0) + theta(1) < 1
theta(0), theta(1), theta(2), theta(3) > 0
% Solve MLE using fmincon
ret_1000 = returns(1:1000);
A = [1 1 0 0];
b = [.99999];
lb = [0; 0; 0; 0];
ub = [1; 1; 1; 1];
Aeq = [];
beq = [];
noncoln = [];
init_guess = [.2;.5; long_term_sigma; initial_sigma];
%option = optimset('FunValCheck', 1000);
options = optimset('fmincon');
options = optimset(options, 'MaxFunEvals', 10000);
[x, maxim] = fmincon(@(theta)Log_likeli(theta, ret_1000), init_guess, A, b, Aeq, beq, lb, ub, noncoln, options);
Warning:
Warning: The default trust-region-reflective algorithm does not solve problems with the constraints you
have specified. FMINCON will use the active-set algorithm instead. For information on applicable
algorithms, see Choosing the Algorithm in the documentation.
> In fmincon at 486
In GARCH_loglikeli at 30
Local minimum possible. Constraints satisfied.
fmincon stopped because the predicted change in the objective function
is less than the selected value of the function tolerance and constraints
are satisfied to within the selected value of the constraint tolerance.
<stopping criteria details>
No active inequalities.
All MATLAB variables are double by default. You can force a double using double(variableName), and you can get the type of a variable using class(variableName). I would use class on all your variables to make sure they are what you expect. I don't have fmincon, but I tried a variant of your code with fminsearch and it worked like a charm:
op = optimset('fminsearch');
op = optimset(op,'MaxFunEvals',1000,'MaxIter',1000);
a = sqrt(2);
banana = @(x)100*(x(2)-x(1)^2)^2+(a-x(1))^2;
[x,fval] = fminsearch(banana, [-1.2, 1],op)
Looking at the MATLAB documentation, I think your input variables are not correct:
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
I think you need:
% Let's be ultra specific to solve this syntax issue
fun = @(theta) Log_likeli(theta, ret_1000);
x0 = init_guess;
% A is defined as A
% b is defined as b
Aeq = [];
beq = [];
% lb is defined as lb
% ub is not defined, not sure if that's going to be an issue
% with the solver having lower, but not upper bounds probably isn't
% but thought it was worth a mention
ub = [];
nonlcon = [];
% options is defined as options
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
I'm using AS3 to program some collision detection for a flash game and am having trouble figuring out how to bounce a ball off of a line. I keep track of a vector that represents the ball's 2D velocity and I'm trying to reflect it over the vector that is perpendicular to the line that the ball's colliding with (aka the normal). My problem is that I don't know how to figure out the new vector (that's reflected over the normal). I figured that you can use Math.atan2 to find the difference between the normal and the ball's vector but I'm not sure how to expand that to solve my problem.
Vector algebra - You want the "bounce" vector:
vec1 is the ball's motion vector and vec2 is the surface/line vector:
// 1. Find the dot product of vec1 and vec2
// Note: dx and dy are vx and vy divided over the length of the vector (magnitude)
var dpA:Number = vec1.vx * vec2.dx + vec1.vy * vec2.dy;
// 2. Project vec1 over vec2
var prA_vx:Number = dpA * vec2.dx;
var prA_vy:Number = dpA * vec2.dy;
// 3. Find the dot product of vec1 and vec2's normal
// (left or right normal depending on line's direction, let's say left)
var dpB:Number = vec1.vx * vec2.leftNormal.dx + vec1.vy * vec2.leftNormal.dy;
// 4. Project vec1 over vec2's left normal
var prB_vx:Number = dpB * vec2.leftNormal.dx;
var prB_vy:Number = dpB * vec2.leftNormal.dy;
// 5. Add the first projection prA to the reverse of the second -prB
var new_vx:Number = prA_vx - prB_vx;
var new_vy:Number = prA_vy - prB_vy;
Assign those velocities to your ball's motion vector and let it bounce.
PS:
vec.leftNormal --> vx = vec.vy; vy = -vec.vx;
vec.rightNormal --> vx = -vec.vy; vy = vec.vx;
The mirror reflection of any vector v from a line/(hyper-)surface with normal n in any dimension can be computed using projection tensors. The parallel projection of v on n is: v|| = (v . n) n = v . nn. Here nn is the outer (or tensor) product of the normal with itself. In Cartesian coordinates it is a matrix with elements: nn[i,j] = n[i]*n[j]. The perpendicular projection is just the difference between the original vector and its parallel projection: v - v||. When the vector is reflected, its parallel projection is reversed while the perpendicular projection is retained. So the reflected vector is:
v' = -v|| + (v - v||) = v - 2 v|| = v . (I - 2 nn) = v . R( n ), where
R( n ) = I - 2 nn
(I is the identity tensor, which in Cartesian coordinates is simply the identity matrix: ones on the diagonal and zeros elsewhere)
R is called the reflection tensor. In Cartesian coordinates it is a real symmetric matrix with components R[i,j] = delta[i,j] - 2*n[i]*n[j], where delta[i,j] = 1 if i == j and 0 otherwise. It is also symmetric with respect to n:
R( -n ) = I - 2(-n)(-n) = I - 2 nn = R( n )
Hence it doesn't matter if one uses the outward facing or the inward facing normal n - the result would be the same.
In two dimensions and Cartesian coordinates, R (the matrix representation of R) becomes:
    [ R00  R01 ]   [ 1.0-2.0*n.x*n.x      -2.0*n.x*n.y   ]
R = [          ] = [                                     ]
    [ R10  R11 ]   [   -2.0*n.x*n.y    1.0-2.0*n.y*n.y   ]
The components of the reflected vector are then computed as a row-vector-matrix product:
v1.x = v.x*R00 + v.y*R10
v1.y = v.x*R01 + v.y*R11
or after expansion:
k = 2.0*(v.x*n.x + v.y*n.y)
v1.x = v.x - k*n.x
v1.y = v.y - k*n.y
In three dimensions:
k = 2.0*(v.x*n.x + v.y*n.y + v.z*n.z)
v1.x = v.x - k*n.x
v1.y = v.y - k*n.y
v1.z = v.z - k*n.z
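For anyone who wants to check the expanded formula numerically, here is a small sketch in Python/NumPy (the question is about AS3, but the algebra is the same in any language):
import numpy as np
def reflect(v, n):
    # v' = v - 2*(v . n)*n, with n normalised to unit length first
    n = n / np.linalg.norm(n)
    return v - 2.0 * np.dot(v, n) * n
v = np.array([3.0, -2.0])   # incoming velocity
n = np.array([0.0, 1.0])    # unit normal of a horizontal wall
print(reflect(v, n))        # [3. 2.]: tangential part kept, normal part flipped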
Finding the exact point where the ball will hit the line/wall is more involved - see here.
Calculate two components of the vector.
One component will be the projection of your vector onto the reflecting surface; the other component will be the projection onto the surface's normal (which you say you already have). Use dot products to get the projections. Then recombine the two components, with the normal component reversed, and you'll have your answer.
You can even calculate the second component A2 as the original vector minus the first component, so: A2 = A - A1. Then the vector you want is A1 plus the reflected A2 (which is simply -A2, since it's perpendicular to your surface), or:
Ar = A1-A2
or
Ar = 2A1 - A which is the same as Ar = -(2A2 - A)
If [Ax,Ay] is your ball's velocity and [Wx,Wy] is a unit vector representing the wall:
A1x = (Ax*Wx+Ay*Wy)*Wx;
A1y = (Ax*Wx+Ay*Wy)*Wy;
Arx = 2*A1x - Ax;
Ary = 2*A1y - Ay;
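A quick numerical check of this wall-vector form, in plain Python rather than AS3 (illustrative values only):
Ax, Ay = 3.0, -2.0           # ball velocity
Wx, Wy = 1.0, 0.0            # unit vector along a horizontal wall
A1x = (Ax*Wx + Ay*Wy) * Wx   # projection of the velocity onto the wall
A1y = (Ax*Wx + Ay*Wy) * Wy
Arx = 2*A1x - Ax             # Ar = 2*A1 - A
Ary = 2*A1y - Ay
print(Arx, Ary)              # 3.0 2.0, the same bounce as the normal-based formula above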