I am trying to solve the scalar Poisson problem on a mesh. I think I created the Laplace matrix and the mass matrix correctly, and then I solve Lx = -Mb directly with a linear solver, but the result is wrong. I noticed the textbook says b should first have some mean value b-bar subtracted from it, and I am really confused about why.
That is, I solve Ax = b with A = LaplaceMatrix and b = -MassMatrix * rho:

Vector<double> rho = scalarDensityOfVertex; // e.g. [0,0,0,1,0,0,0,-1]
SparseMatrix<double> LaplaceMatrix; // assembled beforehand
SparseMatrix<double> MassMatrix;    // assembled beforehand
Vector<double> sol(rho.size());
Vector<double> bVector = -(MassMatrix * rho);
Solver<double> solver(LaplaceMatrix);
sol = solver.solve(bVector);
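If I understand the textbook correctly, it wants something like the following. This is a minimal sketch using Eigen directly (the function and variable names are illustrative, not from my actual code): because constant functions are in the kernel of L, the system is only solvable when the right-hand side integrates to zero, which subtracting the area-weighted mean bBar enforces.

#include <Eigen/Sparse>

using SpMat = Eigen::SparseMatrix<double>;
using Vec   = Eigen::VectorXd;

// Solve L x = -M (b - bBar), where bBar is the area-weighted mean of b.
// L: weak (cotan) Laplace matrix, M: diagonal lumped mass matrix.
Vec solvePoisson(const SpMat& L, const SpMat& M, const Vec& b) {
    double totalArea = M.diagonal().sum();       // total mesh area
    Vec Mb = M * b;
    double bBar = Mb.sum() / totalArea;          // area-weighted mean of b
    Vec shifted = (b.array() - bBar).matrix();   // b - bBar
    Vec rhs = -(M * shifted);                    // zero-mean right-hand side
    // L is singular (constants are in its kernel); in practice it is often
    // shifted by a tiny multiple of the identity before factorizing.
    Eigen::SimplicialLDLT<SpMat> solver(L);
    return solver.solve(rhs);                    // solution up to a constant
}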
I am new to using EViews. I want to fit my data to an exponential curve, y = a*e^(bx). I am running the regression using these two equations:

MICE_1 = COEF(1)*(EXP(COEF(2)*OBSERVATION))
LOG(MICE_1) = COEF(3) + COEF(4)*OBSERVATION

Since both equations describe the same model, coef(2) should equal coef(4) and coef(1) should equal exp(coef(3)).
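Written out, that equivalence just comes from taking the log of the level equation:

LOG(MICE_1) = LOG(COEF(1)*EXP(COEF(2)*OBSERVATION)) = LOG(COEF(1)) + COEF(2)*OBSERVATION

so COEF(3) plays the role of LOG(COEF(1)) and COEF(4) the role of COEF(2).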
However, I am getting different coefficients from the two regressions. Can someone please explain why this is happening?
I have a kind of Euclidean loss function, which is:

\sum_{i,j} \left( c_i \max\{0,\, y_{ji}-k_{ji}\} + p_i \max\{0,\, k_{ji}-y_{ji}\} \right)

where y_{ji} are the outputs of Caffe and k_{ji} are the true output values; i is the index of the items and j is the index of the samples.
The issue is about getting the values of parameters c_i and p_i.
When all the c_i share a single value (and similarly for the p_i), I can simply pass the two values as parameters of the loss layer (I added two new parameters in caffe.proto). However, the problem is that I now have around 300 items, so it is not reasonable to pass them all as loss-layer parameters.
I instead tried to feed their values into the loss layer, i.e. I tried to add another bottom blob to the loss layer, but it gave an error. I am stuck here! Please guide me on how I can solve this issue.
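For reference, this is a rough sketch of the kind of layer I am trying to write (the class name, type string, and weight layout are made up; only the idea matters). My guess is the error comes from the base LossLayer insisting on exactly two bottom blobs, so the check has to be relaxed:

#include <vector>
#include "caffe/layers/loss_layer.hpp"

// Hypothetical loss layer that reads the per-item weights c_i and p_i
// from a third bottom blob instead of from caffe.proto parameters.
template <typename Dtype>
class WeightedHingeLossLayer : public caffe::LossLayer<Dtype> {
 public:
  explicit WeightedHingeLossLayer(const caffe::LayerParameter& param)
      : caffe::LossLayer<Dtype>(param) {}
  // The stock LossLayer enforces exactly two bottoms; returning 3 here
  // admits a third bottom carrying the weights.
  virtual inline int ExactNumBottomBlobs() const { return 3; }
  virtual inline const char* type() const { return "WeightedHingeLoss"; }

 protected:
  virtual void Forward_cpu(const std::vector<caffe::Blob<Dtype>*>& bottom,
                           const std::vector<caffe::Blob<Dtype>*>& top) {
    // bottom[0]: predictions y, bottom[1]: targets k, bottom[2]: weights.
    // Compute sum_{i,j} c_i*max(0, y_ji-k_ji) + p_i*max(0, k_ji-y_ji).
  }
  virtual void Backward_cpu(const std::vector<caffe::Blob<Dtype>*>& top,
                            const std::vector<bool>& propagate_down,
                            const std::vector<caffe::Blob<Dtype>*>& bottom) {
    // Gradient w.r.t. y_ji is c_i, -p_i, or 0 depending on the sign of
    // (y_ji - k_ji); nothing propagates into the weight bottom.
  }
};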
Thanks in advance,
Afshin
I am attempting to fit a circle to some data. This requires numerically solving a set of three non-linear simultaneous equations (see the Full Least Squares Method of this document).
To me it seems that the NEWTON function provided by IDL is suited to solving this problem. NEWTON requires the name of a function that computes the values of the equation system for particular values of the independent variables:
FUNCTION newtfunction, X
  ; Evaluate each equation of the system at X
  RETURN, [Some function of X, Some other function of X]
END
While this works fine, it requires that all parameters of the equation system (in this case the set of data points) are hard-coded in newtfunction. That is fine if there is only one data set to solve for; however, I have many thousands of data sets, and defining a new function for each by hand is not an option.
Is there a way around this? Is it possible to define functions programmatically in IDL, or even just pass in the data set in some other manner?
I am not an expert on this matter, but if I were to solve this problem I would do the following. Instead of solving a system of 3 non-linear equations to find the three unknowns (i.e. xc, yc and r), I would use an optimization routine to converge to a solution starting from an initial guess. For this, steepest descent, conjugate gradient, or any other multivariate optimization method can be used.
I just quickly derived the least-squares objective for your problem as (please check before use):

F = \sum_{i=1}^{N} \left( (x_i - x_c)^2 + (y_i - y_c)^2 - r^2 \right)^2

Calculating the gradient of this function is fairly easy, since it is just a summation, so writing steepest-descent code to compute xc, yc and r would be trivial.
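To illustrate, here is a minimal steepest-descent sketch in C++ (not IDL; the fixed step size and iteration count are illustrative, and a line search would be more robust):

#include <cmath>
#include <vector>

struct Circle { double xc, yc, r; };

// Minimize F(xc, yc, r) = sum_i ((xi-xc)^2 + (yi-yc)^2 - r^2)^2
// by plain steepest descent.
Circle fitCircle(const std::vector<double>& x, const std::vector<double>& y) {
    // Initial guess: the centroid, and the mean distance to it.
    const std::size_t n = x.size();
    double xc = 0, yc = 0;
    for (std::size_t i = 0; i < n; ++i) { xc += x[i]; yc += y[i]; }
    xc /= n; yc /= n;
    double r = 0;
    for (std::size_t i = 0; i < n; ++i) r += std::hypot(x[i] - xc, y[i] - yc);
    r /= n;

    const double step = 1e-3;                   // illustrative learning rate
    for (int it = 0; it < 10000; ++it) {
        double gxc = 0, gyc = 0, gr = 0;
        for (std::size_t i = 0; i < n; ++i) {
            double dx = x[i] - xc, dy = y[i] - yc;
            double e  = dx * dx + dy * dy - r * r;  // one point's residual
            gxc += 2 * e * (-2 * dx);               // dF/dxc
            gyc += 2 * e * (-2 * dy);               // dF/dyc
            gr  += 2 * e * (-2 * r);                // dF/dr
        }
        xc -= step * gxc; yc -= step * gyc; r -= step * gr;
    }
    return {xc, yc, r};
}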
I hope it helps.
It's usual to use a COMMON block in these types of functions to pass in other parameters, cached values, etc. that are not part of the calling signature of the numeric routine.
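A minimal sketch of that pattern in IDL (the block and variable names are illustrative): the data are stashed in a COMMON block before calling NEWTON, and newtfunction picks them up from the same block:

FUNCTION newtfunction, X
  COMMON circledata, xdata, ydata    ; shared with the caller
  ; ... evaluate the three equations using xdata and ydata ...
  RETURN, [eq1, eq2, eq3]
END

PRO fit_one_dataset, xs, ys, guess
  COMMON circledata, xdata, ydata
  xdata = xs                          ; stash the current data set
  ydata = ys
  result = NEWTON(guess, 'newtfunction')
  PRINT, result
END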
I would like to make a prediction using Least Squares Support Vector Machines for regression, as proposed by Suykens et al. I am using LS-SVMlab, a MATLAB toolbox which you can find here. Let's say I have an independent variable X and a dependent variable Y, both of which are simulated. I am following the instructions in the tutorial.
>>X = linspace(-1,1,50)';
>>Y = (15*(X.^2-1).^2.*X.^4).*exp(-X)+normrnd(0,0.1,length(X),1);
>>type = 'function estimation';
>>[gam,sig2] = tunelssvm({X,Y,type,[],[],'RBF_kernel'},'simplex',...
'leaveoneoutlssvm',{'mse'});
>>[alpha,b] = trainlssvm({X,Y,type,gam,sig2,'RBF_kernel'});
>>plotlssvm({X,Y,type,gam,sig2,'RBF_kernel'},{alpha,b});
The code above finds the best parameters using the simplex method and leave-one-out cross-validation, then trains the model and gives me the alphas (support values for all the data points in the training set) and the coefficient b. However, it does not give me the predicted values of Y; it only draws the plot. In some articles, I saw plots like the one below.
As I said before, the LS-SVM toolbox does not give me the predicted values of Y; it only draws the plot, with no values left in the workspace. How can I get these values and draw a graph of the predicted values together with the actual values?
There is one solution I can think of: using the X values from the training set, I could re-run the model and get the predictions of Y with the simlssvm command, but that does not seem reasonable to me. Is there any other solution you can offer? Thanks in advance.
I am afraid you have answered your own question. The only way to obtain the prediction for the training points in LS-SVMLab is by simulating the training points after training your model.
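For example, something along these lines (a sketch following the toolbox tutorial, reusing the arguments from the training call above):

Yp = simlssvm({X,Y,type,gam,sig2,'RBF_kernel'},{alpha,b},X);
plot(X, Y, '.', X, Yp, '-');   % actual values vs. predictions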
[yp,alpha,b,gam,sig2,model] = lssvm(x,y,'f')
When you use this function, yp is the predicted value.
I am trying to emulate a subset of OpenGL with my own software rasterizer.
I'm taking a wild guess that the process looks like this:
Multiply the 3D point by the modelview matrix -> multiply that result by the projection matrix
Is this correct?
Also, what size is the projection matrix, and how does it work?
The point is multiplied by the modelview matrix and then by the projection matrix. The result is normalized (divided by its w component) and then mapped by the viewport transform to get the screen coordinates. All the matrices are 4x4. You can view this link for further details:
http://www.songho.ca/opengl/gl_transform.html#example2
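As a rough illustration, here is a minimal C++ sketch of that chain (hand-rolled types with row-major storage; the names are made up for the example):

struct Vec4 { double x, y, z, w; };
struct Mat4 { double m[4][4]; };   // row-major in this sketch

Vec4 mul(const Mat4& A, const Vec4& v) {
    const double in[4] = {v.x, v.y, v.z, v.w};
    double out[4] = {0, 0, 0, 0};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += A.m[r][c] * in[c];
    return {out[0], out[1], out[2], out[3]};
}

// Object coordinates -> window coordinates for a single vertex.
void project(const Mat4& modelview, const Mat4& projection,
             double vpX, double vpY, double vpW, double vpH,
             const Vec4& obj, double& winX, double& winY, double& winZ) {
    Vec4 eye  = mul(modelview,  obj);   // object space -> eye space
    Vec4 clip = mul(projection, eye);   // eye space -> clip space
    double ndcX = clip.x / clip.w;      // perspective divide: clip space ->
    double ndcY = clip.y / clip.w;      // normalized device coordinates,
    double ndcZ = clip.z / clip.w;      // each in [-1, 1]
    winX = vpX + (ndcX + 1.0) * 0.5 * vpW;  // viewport transform
    winY = vpY + (ndcY + 1.0) * 0.5 * vpH;
    winZ = (ndcZ + 1.0) * 0.5;              // depth value in [0, 1]
}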
(Shameless self-promotion, sorry.) I wrote a tutorial on the subject:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
There is a slight caveat that I don't explain there, though. At the end of the tutorial, you're in Normalized Device Coordinates, i.e. -1 to +1. A simple linear mapping transforms this to [0, screensize].
You might also benefit from looking at the gluProject() code. This takes an x, y, z point in object coordinates as well as pointers to modelView, projection, and viewport matrices and tells you what the x, y, (z) coordinates are in screenspace (the z is a value between 0 and 1 that can be used in the depth buffer). All three matrix multiplications are shown there in the code, along with the divisions necessary for perspective.