Regression query

I am new to using EViews. I want to fit my data to an exponential curve: y = a*e^(bx). I am running the regression using these two equations:
MICE_1= COEF(1)*(EXP(COEF(2)*OBSERVATION))
LOG(MICE_1) = COEF(3) + COEF(4)*OBSERVATION
Since both equations represent the same model, coef(2) should equal coef(4) and coef(1) should equal exp(coef(3)).
However, I am getting different coefficients from the two equations. Can someone please explain why this is happening?
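Most likely the difference comes from what each specification minimizes: the nonlinear form minimizes squared errors in y itself, while the log-linear form minimizes squared errors in log(y), so with noisy data the estimates need not coincide. A minimal sketch of the same contrast in Python (NumPy/SciPy rather than EViews, with made-up data):
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 50)
y = 2.0 * np.exp(0.7 * x) + rng.normal(0, 1.0, x.size)   # additive noise on y

# Fit 1: nonlinear least squares on y = a*exp(b*x) (errors measured in y)
(a_nl, b_nl), _ = curve_fit(lambda t, a, b: a * np.exp(b * t), x, y, p0=(1.0, 0.5))

# Fit 2: OLS on log(y) = c + d*x (errors measured in log(y))
mask = y > 0                        # log requires positive values
d_ols, c_ols = np.polyfit(x[mask], np.log(y[mask]), 1)

print(a_nl, b_nl)                   # nonlinear-fit estimates of a and b
print(np.exp(c_ols), d_ols)         # a and b implied by the log-linear fit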

Related

Is image rescaling between 0-255 needed for transfer learning

I am working on a classification task using transfer learning. I am using ResNet50 and weights from ImageNet.
My_model = (ResNet50( include_top=False, weights='imagenet', input_tensor=None,
input_shape=(img_height, img_width, 3),pooling=None))
I didn't rescale my input images from the 0-255 range, but my result is quite good (accuracy: 93.25%). So my question is: do I need to rescale the images from 0-255? Do you think my result is wrong without that rescaling?
Thank you.
No, basically your result is not wrong. To give a clue on that: we standardize the pixel values to a range between 0 and 1 just to avoid producing large values during the computation in the forward propagation (z = w*x + b) and then in the backward propagation.
Why do we do that?
To elaborate: the optimizer's updates depend on the gradients from the backward pass, and large input values produce large activations and gradients, so starting from such values the training typically needs many more epochs to reach the global minimum.
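For reference, a minimal sketch of the two common options in Keras (the 224x224 size and the random batch below are placeholders, not taken from the question):
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

img_height, img_width = 224, 224    # placeholder size
model = ResNet50(include_top=False, weights='imagenet',
                 input_shape=(img_height, img_width, 3), pooling=None)

# Stand-in batch of raw 0-255 RGB images; replace with your own data
images = np.random.randint(0, 256, size=(4, img_height, img_width, 3)).astype(np.float32)

# Option 1: the preprocessing that matches the ImageNet weights
# (BGR channel ordering plus per-channel mean subtraction)
x_pre = preprocess_input(images.copy())

# Option 2: simple rescaling of the raw pixels to the 0-1 range
x_01 = images / 255.0

features = model.predict(x_pre)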

PyTorch keypoint detection: constraining output nodes to a range, and negative loss

I am a beginner in deep learning.
I am using this dataset and I want my network to detect keypoints of a hand.
How can I make my output layer's nodes fall in the range [-1, 1] (the range of the normalized 2D points)?
Another problem is that when I train for more than 1 epoch, the loss takes negative values.
The criterion is torch.nn.MultiLabelSoftMarginLoss() and the optimizer is torch.optim.SGD().
Here you can find my repo.
net = nnModel.Net()
net = net.to(device)
criterion = nn.MultiLabelSoftMarginLoss()
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=decay_rate)
You can use the Tanh activation function, since the image of the function lies in [-1, 1].
The problem of predicting key-points in an image is more of a regression problem than a classification problem (especially if you're making your model outputs + targets fall within a continuous interval). Therefore, I suggest you use the L2 Loss.
In fact, it could be a good exercise for you to use cross-validation to determine which of the loss functions appropriate for regression problems gives the lowest expected generalization error. There are several such functions available in PyTorch.
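A minimal sketch of that combination, i.e. a Tanh output head trained with PyTorch's MSELoss (an L2 loss); the feature size and the keypoint count below are made-up assumptions, not taken from the linked repo:
import torch
import torch.nn as nn

class KeypointHead(nn.Module):
    # Hypothetical regression head; in_features and num_keypoints are assumptions
    def __init__(self, in_features=512, num_keypoints=21):
        super().__init__()
        self.fc = nn.Linear(in_features, num_keypoints * 2)
        self.act = nn.Tanh()               # squashes every coordinate into (-1, 1)
    def forward(self, features):
        return self.act(self.fc(features))

head = KeypointHead()
criterion = nn.MSELoss()                   # L2 loss: always >= 0, so it cannot go negative
optimizer = torch.optim.SGD(head.parameters(), lr=1e-3)

features = torch.randn(4, 512)             # stand-in for backbone features
targets = torch.rand(4, 42) * 2 - 1        # normalized keypoints in [-1, 1]
loss = criterion(head(features), targets)
loss.backward()
optimizer.step()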
One way I can think of is to use torch.nn.Sigmoid, which produces outputs in the [0, 1] range, and scale the outputs to [-1, 1] using the 2*x - 1 transformation.
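A short sketch of that transformation (the tensor here is just a placeholder):
import torch

logits = torch.randn(4, 42)            # raw outputs of the final linear layer
out = 2 * torch.sigmoid(logits) - 1    # sigmoid gives (0, 1); rescaled to (-1, 1)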

A special loss function in caffe

I have a kind of Euclidean-like loss function, which is:
\sum_{i,j} \left( c_i \max\{0,\, y_{ji} - k_{ji}\} + p_i \max\{0,\, k_{ji} - y_{ji}\} \right)
where y_{ji} are the outputs of Caffe, k_{ji} are the true output values, i is the index of the items, and j is the index of the samples.
The issue is how to get the values of the parameters c_i and p_i into the loss layer.
When c_i = c_q for all i \neq q (and similarly for p_i), I can simply pass their values as parameters of the loss layer (I added two new parameters in caffe.proto). However, the problem is that I now have around 300 items, so it is not practical to pass them all as loss-layer parameters.
I tried to feed their values into the loss layer another way, i.e. I tried to add an extra bottom blob to the loss layer, but it gave an error.
I am stuck here!
Please guide me on how I can solve this issue.
Thanks in advance,
Afshin
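For reference, a minimal NumPy sketch of the loss written above and its gradient with respect to y; this is only the math, not a Caffe layer, and the per-item weights c and p are assumed to be given as arrays:
import numpy as np

def asymmetric_loss(y, k, c, p):
    """y, k: (num_samples, num_items) predictions and targets;
    c, p: (num_items,) per-item penalty weights."""
    over  = np.maximum(0.0, y - k)    # over-prediction, penalized by c_i
    under = np.maximum(0.0, k - y)    # under-prediction, penalized by p_i
    return np.sum(c * over + p * under)

def asymmetric_loss_grad(y, k, c, p):
    # d/dy max(0, y-k) = 1 where y > k; d/dy max(0, k-y) = -1 where y < k
    return c * (y > k) - p * (y < k)

# Tiny example: 2 samples (index j), 3 items (index i)
y = np.array([[1.0, 2.0, 0.5], [0.0, 1.0, 3.0]])
k = np.array([[0.5, 2.5, 0.5], [1.0, 1.0, 2.0]])
c = np.array([1.0, 2.0, 0.5])
p = np.array([3.0, 1.0, 1.0])
print(asymmetric_loss(y, k, c, p), asymmetric_loss_grad(y, k, c, p))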

Best technique to approximate a 32-bit function using machine learning?

I was wondering which machine learning technique is best for approximating, from a set of observations, a function that takes a 32-bit number and returns another 32-bit number.
Thanks!
Multilayer perceptron neural networks would be worth taking a look at, though you'll need to scale the inputs to floating-point numbers between 0 and 1 and then map the outputs back to the original range.
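A small sketch of that input/output scaling, assuming the observations are unsigned 32-bit integers (the sample values below are made up):
import numpy as np

SCALE = float(2**32 - 1)                   # largest unsigned 32-bit value

# Hypothetical observed input/output pairs as unsigned 32-bit integers
x_raw = np.array([12, 4096, 2**31, 2**32 - 1], dtype=np.uint32)
y_raw = np.array([7, 900, 2**30, 2**32 - 2], dtype=np.uint32)

x = x_raw.astype(np.float64) / SCALE       # inputs mapped into [0, 1] for the network
y = y_raw.astype(np.float64) / SCALE       # targets mapped into [0, 1]

# ... train the MLP on (x, y), then map its predictions back:
y_pred = y.copy()                          # stand-in for the network's output
y_pred_raw = np.rint(y_pred * SCALE).astype(np.uint32)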
There are several possible solutions to your problem:
1.) Fitting a linear hypothesis with the least-squares method
In that case, you are approximating the hypothesis y = ax + b with the least-squares method. This one is really easy to implement, but sometimes a linear model is not good enough to fit your data. Still, I would give this one a try first.
The good thing is that there is a closed-form solution, so you can calculate the parameters a and b directly from your data.
See Least Squares
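A minimal sketch of the closed-form fit in Python with NumPy (the synthetic data below stands in for your scaled observations):
import numpy as np

# Synthetic observations standing in for the scaled 32-bit data
x = np.linspace(0.0, 1.0, 200)
y = 0.8 * x + 0.05 + np.random.normal(0, 0.01, x.size)

A = np.column_stack([x, np.ones_like(x)])        # design matrix for y = a*x + b
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)   # closed-form least-squares solution
y_hat = a * x + b
print(a, b)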
2.) Fitting a non-linear model
Once you see that a linear model does not describe your function very well, you can try fitting higher-order polynomial models to your data.
Your hypothesis then might look like
y = ax² + bx + c
y = ax³ + bx² + cx + d
etc.
You can also fit these with the least-squares method, or with iterative optimization techniques (gradient descent, simulated annealing, ...). See also this thread: Fitting polynomials to data
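A quick sketch of such a polynomial fit with NumPy, again on made-up data:
import numpy as np

# Made-up data with a nonlinear relationship
x = np.linspace(0.0, 1.0, 200)
y = 4 * (x - 0.5) ** 3 + 0.1 * x + np.random.normal(0, 0.01, x.size)

coeffs = np.polyfit(x, y, deg=3)      # least-squares fit of a cubic
y_hat = np.polyval(coeffs, x)
print(np.mean((y - y_hat) ** 2))      # mean squared residual; compare against the linear fit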
Or, as in the other answer, try fitting a neural network - the good thing is that it learns the hypothesis automatically, but it is not so easy to explain what the relation between input and output is. In the end, though, a (single-hidden-layer) neural network is also just a linear combination of nonlinear functions (like sigmoid or tanh).

How to get the predicted values in training data set for Least Squares Support Vector Regression

I would like to make a prediction using Least Squares Support Vector Machines for Regression, proposed by Suykens et al. I am using LS-SVMlab; you can find the MATLAB toolbox here. Suppose I have an independent variable X and a dependent variable Y, both of which are simulated. I am following the instructions in the tutorial.
>>X = linspace(-1,1,50)';
>>Y = (15*(X.^2-1).^2.*X.^4).*exp(-X)+normrnd(0,0.1,length(X),1);
>>type = 'function estimation';
>>[gam,sig2] = tunelssvm({X,Y,type,[],[],'RBF_kernel'},'simplex',...
    'leaveoneoutlssvm',{'mse'});
>>[alpha,b] = trainlssvm({X,Y,type,gam,sig2,'RBF_kernel'});
>>plotlssvm({X,Y,type,gam,sig2,'RBF_kernel'},{alpha,b});
The code above finds the best parameters using the simplex method and leave-one-out cross-validation, trains the model, and gives me the alphas (support vector values for all the data points in the training set) and the coefficient b. However, it does not give me the predictions of the variable Y; it only draws the plot. In some articles, I saw plots like the one below,
As I said before, the LS-SVM toolbox does not give me the predicted values of Y; it only draws the plot and leaves no values in the workspace. How can I get these values and draw a graph of the predicted values together with the actual values?
There is one solution I can think of: using the X values in the training set, I re-run the model and get the predictions of Y with the simlssvm command, but that does not seem reasonable to me. Is there any other solution you can offer? Thanks in advance.
I am afraid you have answered your own question. The only way to obtain the prediction for the training points in LS-SVMLab is by simulating the training points after training your model.
[yp,alpha,b,gam,sig2,model] = lssvm(x,y,'f')
When you use this function, yp is the predicted value.