Standard deviation of the mean absolute error (MAE) for a Gaussian plot - regression

I am trying to find a formula to calculate the standard deviation of the MAE. I am working on a regression problem.
Let's say I have y and predicted y as below: y = [1, 2, 3, 4, 5] and predicted y = [1.2, 2.5, 3.9, 4.8, 6.2]. My MAE is 0.72, based on the formula below.
I want to plot a Gaussian distribution for the MAE, for which I need the mean (the MAE) and its standard deviation, similar to the image below. However, I am not sure how to calculate the standard deviation of the MAE. Kindly comment if more information is needed. Thank you.
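One reasonable reading of "the standard deviation of the MAE" is the sample standard deviation of the per-sample absolute errors, which is what you would use to put a Gaussian around the MAE. A minimal Python sketch under that assumption (NumPy/SciPy/matplotlib; the variable names are just illustrative):
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

y      = np.array([1, 2, 3, 4, 5])
y_pred = np.array([1.2, 2.5, 3.9, 4.8, 6.2])

abs_err = np.abs(y - y_pred)        # per-sample absolute errors
mae = abs_err.mean()                # 0.72, as computed above
std = abs_err.std(ddof=1)           # sample standard deviation of those errors

# Gaussian centred at the MAE with that standard deviation
grid = np.linspace(mae - 4 * std, mae + 4 * std, 200)
plt.plot(grid, norm.pdf(grid, loc=mae, scale=std))
plt.axvline(mae, linestyle='--')
plt.xlabel('absolute error')
plt.show()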


Stochastic Master Equation in Cartesian Form

I am trying to convert equation 2 in this paper, which is a quantum stochastic master equation, to the form of equation 5, which is in Cartesian coordinates.
This screenshot is taken from the following peer-reviewed journal article: https://doi.org/10.1016/j.jfranklin.2019.05.021
What I did so far is take the derivative of equation 4 and then set it equal to equation 2.
Since there are commutators in equation 2, when I substitute rho_t into equation 2 I always get really confused.
If anyone has any insight, please share. Thank you for your help.

Convolutional neural network concept

Please go to the link http://scs.ryerson.ca/~aharley/vis/conv/flat.html
and draw a number in the box provided to see it pass through the various layers.
If you scroll through the different squares of the layers, you can see how each square is related to squares of the previous layers.
My doubt is this: according to cs231n lecture 7, http://cs231n.github.io/convolutional-networks/, a filter has the same depth as its input layer, and the number of filters equals the depth of the succeeding layer. But if you go through convolution layer 2, you can see that a particular square of a particular map is obtained from only some of the squares of the preceding layer. I am trying to understand the concept here. Please help.
The following dimensions are according to (N, C, H, W).
pool1('6', 14, 14)
|
| kernel(16, '6', 5, 5)
v
conv2(16, 10, 10)
|
| kernel(2, 2), stride(2)
v
pool2(16, 5, 5)
Pool1 outputs 6 feature maps, which are the input of Conv2. Accordingly, Conv2 has 16 kernels (which will generate 16 feature maps), and each of them has the same depth (number of channels) as Pool1's output, which is 6 (marked with single quotes).
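A minimal PyTorch sketch of the shapes in the diagram above (the random tensor and layer names are only illustrative); note that each of the 16 filters spans all 6 input channels:
import torch
import torch.nn as nn

pool1_out = torch.randn(1, 6, 14, 14)    # (N, C, H, W): 6 feature maps from pool1
conv2 = nn.Conv2d(in_channels=6,         # filter depth = depth of the input (6)
                  out_channels=16,       # number of filters = depth of conv2 (16)
                  kernel_size=5)
pool2 = nn.MaxPool2d(kernel_size=2, stride=2)

x = conv2(pool1_out)                     # -> (1, 16, 10, 10)
x = pool2(x)                             # -> (1, 16, 5, 5)
print(x.shape)
If the visualization you linked shows a conv2 map drawing from only some of the pool1 maps, that matches the original LeNet-5 design, which used a hand-picked sparse connection table; the cs231n convention (every filter spans the full input depth), as in the sketch above, is what modern layers do.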

How to ask Stata (or any other statistical package) to estimate the R^2 of a user-defined linear model?

Essentially, my task is to take a scatter plot that, under ideal conditions, would have a regression line with a particular slope, say ".5", and come up with some metric of how far off it is from this slope.
My original plan was to compute the scatter plot's actual regression line and compare that model's coefficient to my "ideal" slope. However, I've realized that this method is very sensitive to outliers, in the sense that one outlier can totally flip the sign of the coefficient.
Therefore, my thought was to ask Stata to compute the R^2 of a model with a slope of .5 -- but I don't know how to do this. Is it possible, in Stata or another package?
In R, with your slope of .5, you could calculate rsquared as
rsquared <- (.5 * (sd(x) / sd(y)))^2
Here's a simple example where I fit a small model and then apply this calculation (so the R-squared from both approaches can be compared):
x <- c(3, 4, 5, 7, 10)
y <- c(5, 8, 9, 11, 18)
yfit <- lm(y ~ x)                        # fit the simple linear model
slope <- yfit$coefficients[2]
slope
rsquaredfit <- summary(yfit)$r.squared   # R-squared reported by the fit
rsquaredfit
# From the formula, given the slope from the fit
rsquared <- (slope * (sd(x) / sd(y)))^2
rsquared

How exactly does the Convolution2D layer work in Keras?

I want to write my own convolution layer, the same as Convolution2D.
How does it work in Keras?
For example, if the layer is Convolution2D(64, 3, 3, activation='relu', input_shape=(3, 226, 226)),
which equation gives the output data?
Your input image shape is (226, 226, 3) [tf] / (3, 226, 226) [th], the number of filters is 64, and the kernel size is 3x3. Keras's default border mode is 'valid' (no padding), and the default stride is 1.
So the output is 224x224x64 (with 'same' padding it would stay 226x226x64).
output_width = output_height = (width - filter + 2*padding)/stride + 1
In your code, width = 226, filter = 3, padding = 0 ('valid'), and stride = 1.
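A tiny sketch of that formula in Python (padding = 0 corresponds to Keras's 'valid' mode, padding = 1 to 'same' for a 3x3 kernel):
def conv_output_size(width, kernel, padding=0, stride=1):
    # spatial output size of a convolution: (W - F + 2P) / S + 1
    return (width - kernel + 2 * padding) // stride + 1

print(conv_output_size(226, 3, padding=0))   # 224 ('valid', the Keras default)
print(conv_output_size(226, 3, padding=1))   # 226 ('same')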
If you have any trouble understanding the basic concepts, I think you can read this cs231n post for more information.
For how to understand the process of convolution, click here.
Actually, Keras does not perform conv2d as a direct convolution.
In order to speed up the process, the convolution operation is converted into a matrix (row-by-column) multiplication.
More info here, in chapter 6.
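A rough NumPy sketch of that im2col idea, assuming stride 1 and no padding (this is not Keras's actual implementation, just the shape of the trick):
import numpy as np

def conv2d_im2col(image, kernels):
    # image: (C, H, W); kernels: (K, C, kh, kw); stride 1, no padding
    C, H, W = image.shape
    K, _, kh, kw = kernels.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    # im2col: one column per output position, holding the C*kh*kw patch under it
    cols = np.empty((C * kh * kw, out_h * out_w))
    for i in range(out_h):
        for j in range(out_w):
            cols[:, i * out_w + j] = image[:, i:i + kh, j:j + kw].ravel()
    # the convolution is now a single matrix (row-by-column) multiplication
    out = kernels.reshape(K, -1) @ cols        # (K, out_h * out_w)
    return out.reshape(K, out_h, out_w)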

How can I reproduce a scribbly pattern like this in code?

I made this graph in Wolfram Alpha by accident:
Can you write code to produce a larger version of this pattern?
Can you make similar looking patterns?
Readable code in any language is good, but something that can be run in a browser would be best (e.g. JavaScript / Canvas). If you write code in another language, please include a screenshot.
Notes:
The input formula for the above image is: arg(sin(x+iy)) = sin^(-1)((sqrt(2) cos(x) sinh(y))/sqrt(cosh(2 y)-cos(2 x))) (link)
You don't have to use the above formula. Anything which produces a similar result would be cool, but "reverse engineering" Wolfram Alpha would be best.
The two sides of the equation are equal (I think), so WA should probably have just returned 'true' instead of the graph.
The pattern is probably the result of rounding errors.
I don't know if the pattern was generated by iterating over every pixel or if it's vector based (points and lines). My guess is vector.
I don't know what causes this type of pattern (rounding errors is my best guess).
The IEEE floating-point standard does not specify how sin, cos, etc. should behave, so trig functions vary between platforms and architectures.
No Brownian motion plots, please.
Finally, here's another example which might help in your mission: (link)
As you asked for similar-looking patterns in any language, here is the Mathematica code (which is easy, since Wolfram Alpha is based on Mathematica):
Edit
It is indeed a roundoff effect:
If we set:
and make a plot
Plot3D[f[x, y], {x, 7, 9}, {y, -8, -9}, WorkingPrecision -> MachinePrecision]
The result is:
But if we extend the precision of the plot to 30 digits:
Plot3D[f[x, y], {x, 7, 9}, {y, -8, -9}, WorkingPrecision -> 30]
We get
and the roughness (which caused your scribbly pattern) is gone.
BTW, your f[x,y] is a very nice function:
So, if I managed to copy your formulas without errors (which should be considered a miracle), both sides of your equation are equal only on certain periodic ranges of x, probably of the form [2 n Pi, (2 n + 1) Pi].
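For anyone who wants to reproduce something similar outside Mathematica, here is a hedged Python/NumPy/matplotlib sketch of the same idea: over a region where the two sides agree analytically, plot the sign of their machine-precision difference, so all that remains is rounding noise (the exact texture will depend on your platform's math library, per the note about trig functions above):
import numpy as np
import matplotlib.pyplot as plt

# Same region as the Plot3D call above: x in [7, 9], y in [-9, -8]
x = np.linspace(7, 9, 800)
y = np.linspace(-9, -8, 800)
X, Y = np.meshgrid(x, y)

lhs = np.angle(np.sin(X + 1j * Y))                      # arg(sin(x + i y))
rhs = np.arcsin(np.sqrt(2) * np.cos(X) * np.sinh(Y)
                / np.sqrt(np.cosh(2 * Y) - np.cos(2 * X)))

# Where the sides are equal in exact arithmetic, lhs - rhs is pure rounding
# noise; plotting its sign gives a speckled, scribbly texture.
plt.imshow(np.sign(lhs - rhs), extent=[7, 9, -9, -8], cmap='gray', origin='lower')
plt.title('sign(lhs - rhs) at machine precision')
plt.show()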