ANOVA table in R: F-value does not "match the math"

I was playing around with a simple linear model when I noticed that, in the ANOVA table, the ratio MSreg/MSres does not exactly correspond to the F-value. The two values are very similar, but not the same.
Here is my script:
#quick view of the dataset
> head(my_data)
Diameter Height
1 0.325 0.080
2 0.320 0.100
3 0.280 0.110
4 0.125 0.040
5 0.400 0.135
6 0.335 0.100
#setting up the lm()
> ls1 <- lm(Diameter~Height, data=my_data)
> anova(ls1)
Analysis of Variance Table
Response: Diameter
Df Sum Sq Mean Sq F value Pr(>F)
Height 1 0.82415 0.82415 602.63 < 2.2e-16 ***
Residuals 98 0.13402 0.00137
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Here 0.82415/0.00137 = 601.5693, which is not the F value in the table. Is there a particular reason for that?
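As far as I can tell, nothing is off in the math: the printed table rounds the Sum Sq and Mean Sq columns to a few digits, while the F value is computed from the unrounded values (0.13402/98 ≈ 0.0013675, and 0.82415/0.0013675 ≈ 602.6). A minimal check in R, reusing the ls1 object from above:
a <- anova(ls1)
a$"Mean Sq"[1] / a$"Mean Sq"[2]  # matches the printed F value, 602.63
a$"Sum Sq"[2] / a$Df[2]          # residual mean square at full precision,
                                 # ~0.0013675 rather than the rounded 0.00137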

statsmodels OLS gives parameters despite perfect multicollinearity

Assume the following df:
ib c d1 d2
0 1.14 1 1 0
1 1.0 1 1 0
2 0.71 1 1 0
3 0.6 1 1 0
4 0.66 1 1 0
5 1.0 1 1 0
6 1.26 1 1 0
7 1.29 1 1 0
8 1.52 1 1 0
9 1.31 1 1 0
10 0.89 1 0 1
d1 and d2 are perfectly collinear. Now I estimate the following regression model:
import statsmodels.api as sm
reg = sm.OLS(df['ib'], df[['c', 'd1', 'd2']]).fit().summary()
reg
This gives me the following output:
<class 'statsmodels.iolib.summary.Summary'>
"""
OLS Regression Results
==============================================================================
Dep. Variable: ib R-squared: 0.087
Model: OLS Adj. R-squared: -0.028
Method: Least Squares F-statistic: 0.7590
Date: Thu, 17 Nov 2022 Prob (F-statistic): 0.409
Time: 12:19:34 Log-Likelihood: -1.5470
No. Observations: 10 AIC: 7.094
Df Residuals: 8 BIC: 7.699
Df Model: 1
Covariance Type: nonrobust
===============================================================================
coef std err t P>|t| [0.025 0.975]
-------------------------------------------------------------------------------
c 0.7767 0.111 7.000 0.000 0.521 1.033
d1 0.2433 0.127 1.923 0.091 -0.048 0.535
d2 0.5333 0.213 2.499 0.037 0.041 1.026
==============================================================================
Omnibus: 0.257 Durbin-Watson: 0.760
Prob(Omnibus): 0.879 Jarque-Bera (JB): 0.404
Skew: 0.043 Prob(JB): 0.817
Kurtosis: 2.019 Cond. No. 8.91e+15
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The smallest eigenvalue is 2.34e-31. This might indicate that there are
strong multicollinearity problems or that the design matrix is singular.
"""
However, including c, d1 and d2 together is the well-known dummy variable trap, which, from my understanding, should make it impossible to estimate the model. Why is this not the case here?
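A quick way to see the rank deficiency behind the trap (an illustration in R rather than Python, with vectors mirroring the question's columns):
const <- rep(1, 11)
d1 <- c(rep(1, 10), 0)
d2 <- c(rep(0, 10), 1)
X <- cbind(const, d1, d2)
qr(X)$rank              # 2, not 3: the design matrix is singular
all(d1 + d2 == const)   # TRUE, the exact linear dependence
As far as I know, statsmodels' OLS.fit() defaults to solving the least-squares problem via a Moore-Penrose pseudoinverse (method='pinv'), which returns one of the infinitely many solutions instead of raising an error; the huge Cond. No. and the smallest-eigenvalue note in the summary are the warning signs. R's lm(), by contrast, would drop one of the aliased columns and report its coefficient as NA.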

Which post-hoc test after Welch ANOVA?

I'm doing the statistical evaluation for my master's thesis. The Levene test was significant, so I ran a Welch ANOVA, which was also significant. I then tried the Games-Howell post-hoc test, but it didn't work.
Can anybody tell me the exact functions I have to run in R to do the Games-Howell post-hoc test and to get some kind of compact letter display, showing which treatments are not significantly different from each other? I also wanted to ask whether I did the Welch ANOVA the right way (you can find the R output below).
Here is the output of what I have done so far for the statistical evaluation:
'data.frame': 30 obs. of 3 variables:
$ Dauer: Factor w/ 6 levels "0","2","4","6",..: 1 2 3 4 5 6 1 2 3 4 ...
$ WH : Factor w/ 5 levels "r1","r2","r3",..: 1 1 1 1 1 1 2 2 2 2 ...
$ TSO2 : num 107 86 98 97 88 95 93 96 96 99 ...
> leveneTest(TSO2~Dauer, data=TSO2R)
Levene's Test for Homogeneity of Variance (center = median)
Df F value Pr(>F)
group 5 3.3491 0.01956 *
24
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> oneway.test(TSO2 ~ Dauer, data = TSO2R, var.equal = FALSE) ### Welch ANOVA
One-way analysis of means (not assuming equal variances)
data: TSO2 and Dauer
F = 5.7466, num df = 5.000, denom df = 10.685, p-value = 0.00807
Thank you very much!
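A hedged sketch of one way to run Games-Howell and get a compact letter display in R, assuming the rstatix and rcompanion packages (neither appears in the output above, so treat the package choice as my assumption):
library(rstatix)     # games_howell_test()
library(rcompanion)  # cldList() for a compact letter display
gh <- games_howell_test(TSO2R, TSO2 ~ Dauer)  # pairwise Games-Howell comparisons
gh$comparison <- paste(gh$group1, gh$group2, sep = " - ")
cldList(p.adj ~ comparison, data = gh)        # treatments sharing a letter are
                                              # not significantly different
And yes, oneway.test(..., var.equal = FALSE) is the standard way to run a Welch ANOVA in base R, so that part looks right.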

Where is the "Corr" column for my lmer summary under "Random effects?"

I'm using lmer() (from the lme4/lmerTest packages) to build a model and want to check for correlations between random effects.
First I build a tibble:
library(tibble)    # for tibble()
library(lmerTest)  # loads lme4; matches the 'lmerModLmerTest' output below
id <- rep(1:6, each = 4)
group <- rep(c("A", "B"), each = 12)
type <- rep(c("pencil", "pencil", "pen", "pen"), times = 6)
color <- rep(c("blue", "red"), times = 12)
dv <- c(-24.3854453, 17.0358639, -15.5174479, 8.6462489, -7.0561166, 3.3524410, 21.6199364, -6.1020999, 13.2464223, 20.3740206, 22.8571793, -6.6159629, 18.7898553, -8.2504319, 17.9571641, 2.9555213, -19.5516738, -0.5845135, 9.6041710, -4.1301420, 4.1740094, -24.2496521, 7.4432948, -0.8290391)
sample_data <- tibble(id, group, type, color, dv)  # tibble() keeps dv numeric; the
# original as_tibble(cbind(...)) was missing a ")" and coerces all columns to character
Here is my sample_data:
id group type color dv
1 A pencil blue 0.05925979
1 A pencil red 4.60326151
1 A pen blue -20.72000620
1 A pen red -15.27612843
2 A pencil blue -0.68719576
2 A pencil red 16.34200026
2 A pen blue 18.23954687
2 A pen red 21.02837383
3 A pencil blue -22.28695974
3 A pencil red -18.36587259
3 A pen blue -15.13952913
3 A pen red 19.95919637
4 B pencil blue -19.52410412
4 B pencil red -3.25912890
4 B pen blue -12.11669400
4 B pen red 15.93333896
5 B pencil blue -17.93575204
5 B pencil red -8.58879605
5 B pen blue 8.89757943
5 B pen red -13.42995221
6 B pencil blue 12.03769124
6 B pencil red -10.28876053
6 B pen blue 7.69523239
6 B pen red -2.94621122
Now I run my model and summarize it:
test.model <- lmer(dv ~ 1 + group * type * color + (1 * type * color | id), data = sample_data, REML = FALSE)
summary(test.model)
Here's my output:
Linear mixed model fit by maximum likelihood . t-tests use Satterthwaite's method ['lmerModLmerTest']
Formula: dv ~ 1 + group * type * color + (1 * type * color | id)
Data: sample_data
AIC BIC logLik deviance df.resid
204.7 216.5 -92.4 184.7 14
Scaled residuals:
Min 1Q Median 3Q Max
-2.16529 -0.45429 0.09296 0.62406 1.62720
Random effects:
Groups Name Variance Std.Dev.
id (Intercept) 4.975 2.23
Residual 124.228 11.15
Number of obs: 24, groups: id, 6
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) -0.6679 6.5626 23.8937 -0.102 0.9198
groupA -0.6894 9.2809 23.8937 -0.074 0.9414
typepencil -10.3603 9.1005 18.0000 -1.138 0.2699
colorblue 12.3361 9.1005 18.0000 1.356 0.1920
groupA:typepencil 25.3050 12.8700 18.0000 1.966 0.0649 .
groupA:colorblue -1.3256 12.8700 18.0000 -0.103 0.9191
typepencil:colorblue -0.1705 12.8700 18.0000 -0.013 0.9896
groupA:typepencil:colorblue -30.4925 18.2010 18.0000 -1.675 0.1112
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) groupA typpnc colrbl grpA:t grpA:c typpn:
groupA -0.707
typepencil -0.693 0.490
colorblue -0.693 0.490 0.500
grpA:typpnc 0.490 -0.693 -0.707 -0.354
gropA:clrbl 0.490 -0.693 -0.354 -0.707 0.500
typpncl:clr 0.490 -0.347 -0.707 -0.707 0.500 0.500
grpA:typpn: -0.347 0.490 0.500 0.500 -0.707 -0.707 -0.707
I want to check the correlations for random effects, but I don't see the usual "Corr" column (it usually appears next to "Std.Dev." in the output under "Random effects"). Where is it?
I think that the problem stems from the random effects part of your model. You currently have:
(1 * type * color | id)
However, the standard formula is:
(1 + type * color | id)
When I run this, I get an error about the number of observations being less than the number of random effects (the interaction makes the random effects structure too complex for your sample dataset). Using a less complex random effects structure, (1 + type + color | id), I am able to get the Corr column that you are looking for:
Linear mixed model fit by maximum likelihood . t-tests use Satterthwaite's method ['lmerModLmerTest']
Formula: dv ~ 1 + group * type * color + (1 + type + color | id)
Data: sample_data
AIC BIC logLik deviance df.resid
203.8 221.5 -86.9 173.8 9
Scaled residuals:
Min 1Q Median 3Q Max
-1.5320 -0.7217 0.1363 0.7089 1.3920
Random effects:
Groups Name Variance Std.Dev. Corr
id (Intercept) 130.22 11.411
typepencil 15.49 3.936 0.42
colorred 219.98 14.832 -1.00 -0.37
Residual 41.79 6.464
Number of obs: 24, groups: id, 6
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 9.653 7.572 6.888 1.275 0.24368
groupB 2.015 10.708 6.888 0.188 0.85617
typepencil -15.718 5.747 14.358 -2.735 0.01582 *
colorred -11.010 10.059 7.985 -1.095 0.30562
groupB:typepencil 5.187 8.127 14.358 0.638 0.53333
groupB:colorred -1.326 14.226 7.985 -0.093 0.92805
typepencil:colorred 30.663 7.465 11.996 4.108 0.00145 **
groupB:typepencil:colorred -30.492 10.556 11.996 -2.889 0.01362 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr) groupB typpnc colrrd grpB:t grpB:c typpn:
groupB -0.707
typepencil -0.174 0.123
colorred -0.922 0.652 0.117
grpB:typpnc 0.123 -0.174 -0.707 -0.083
gropB:clrrd 0.652 -0.922 -0.083 -0.707 0.117
typpncl:clr 0.246 -0.174 -0.649 -0.371 0.459 0.262
grpB:typpn: -0.174 0.246 0.459 0.262 -0.649 -0.371 -0.707
convergence code: 0
Model failed to converge with max|grad| = 0.00237651 (tol = 0.002, component 1)
I still get a warning about the model failing to converge. This is likely again due to the random effects structure being too complex for your sample dataset: lmer(dv ~ 1 + group * type * color + (1 | id), data = sample_data, REML = FALSE) gives no such warning.
Hope that helps!
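As an aside, you can also pull the random-effect correlations out programmatically instead of reading them off the summary; a minimal sketch, assuming the refitted model from above:
m <- lmer(dv ~ 1 + group * type * color + (1 + type + color | id),
          data = sample_data, REML = FALSE)
VarCorr(m)                 # prints the Variance/Std.Dev./Corr block on its own
as.data.frame(VarCorr(m))  # rows with both var1 and var2 filled in hold the
                           # pairwise correlations (in the sdcor column)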

How to calculate the Hamming weight for a vector?

I am trying to calculate the Hamming weight of a vector in Matlab.
function Hamming_weight (vet_dec)
Ham_Weight = sum(dec2bin(vet_dec) == '1')
endfunction
The vector is:
Hamming_weight ([208 15 217 252 128 35 50 252 209 120 97 140 235 220 32 251])
However, this gives the following result, which is not what I want:
Ham_Weight =
10 10 9 9 9 5 5 7
I would be very grateful if you could help me please.
You are summing over the wrong dimension!
sum(dec2bin(vet_dec) == '1',2).'
ans =
3 4 5 6 1 3 3 6 4 4 3 3 6 5 1 7
dec2bin(vet_dec) creates a matrix like this:
11010000
00001111
11011001
11111100
10000000
00100011
00110010
11111100
11010001
01111000
01100001
10001100
11101011
11011100
00100000
11111011
As you can see, you're interested in the sum of each row, not each column. Use the second input argument of sum, i.e. sum(x, 2), which specifies the dimension you want to sum along.
Note that this approach is horribly slow, as you can see from this question.
EDIT
For this to be a valid, and meaningful MATLAB function, you must change your function definition a bit.
function ham_weight = hamming_weight(vector) % Return the variable ham_weight
ham_weight = sum(dec2bin(vector) == '1', 2).'; % Don't transpose if
% you want a column vector
end % endfunction is not a MATLAB command.

Backwards stepwise regression approach in Stata 13

. stepwise, pr(.05) : logit y1 (x1-x7)
begin with full model
p < 0.0500 for all terms in model
Logistic regression Number of obs = 28900
LR chi2(66) = 1182.91
Prob > chi2 = 0.0000
Log likelihood = -28120.170 Pseudo R2 = 0.0213
------------------------------------------------------------------------------
churn | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x1 | .0019635 .0007981 2.46 0.014 .0003992 .0035278
x2 | -.0002809 .0000496 -5.66 0.000 -.0003782 -.0001836
x3 | -.0031225 .0008888 -3.51 0.000 -.0048645 -.0013806
x4 | -.0011958 .0059387 -0.20 0.840 -.0128354 .0104439
x5 | .0007603 .0002804 2.71 0.007 .0002106 .0013099
x6 | .0070912 .0020636 3.44 0.001 .0030467 .0111357
x7 | -.0004919 .0000535 -9.19 0.660 -.0005968 -.0003871
_cons | .1497005 .0952738 1.57 0.116 -.0370327 .3364336
------------------------------------------------------------------------------
Note: 0 failures and 1 success completely determined.
As you can see in the logistic regression output above, x4 and x7 both have p-values that are >0.05; however, Stata tells me that p < 0.0500 for all terms in the model, thereby rendering my stepwise approach useless.
Can anyone please advise what I may be doing wrong?
You insisted with your syntax that all the variables be kept together, so Stata has nowhere to go from where it started in this case. Hence there can be nothing stepwise with your syntax: it's either all in or all out.
See the help: a varlist in parentheses indicates that this group of variables is to be included or excluded together. All the predictors are so bound by what you typed.
After reading the help, all you may need to do is to omit the parentheses.
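Concretely, that means running the command from the question with the parentheses removed:
. stepwise, pr(.05) : logit y1 x1-x7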
(Lack of a Stata tag for a month cut down mightily on the Stata users reading this.)