Different Sum Sq and MSS using lme4::lmer and lmerTest::lmer - lme4

I get sums of squares and mean sums of squares about 10x higher when I use anova() on lmerTest::lmer objects compared to lme4::lmer objects. See the R log file below. Note the warning that when I attach the lmerTest package, the stats::sigma function overrides the lme4::sigma function, and I suspect that this leads to the discrepancy. In addition, the anova report now says that it is a Type III anova rather than the expected Type I. Is this a bug in the lmerTest package, or is there something about the use of the Kenward-Roger approximation for degrees of freedom that changes the calculation of Sum Sq and MSS and the specification of the anova type that I don't understand?
I would append the test file, but it is confidential clinical trial information. If necessary I can see if I can cobble up a test case.
Thanks in advance for any advice you all can provide about this.
> library(lme4)
Loading required package: Matrix
Attaching package: ‘lme4’
The following object is masked from ‘package:stats’:
sigma
> test100 <- lmer(log(value) ~ prepost * lowhi + (1|CID/LotNo/rep),
REML = F, data = GSIRlong, subset = !is.na(value))
> library(lmerTest)
Attaching package: ‘lmerTest’
The following object is masked from ‘package:lme4’:
lmer
The following object is masked from ‘package:stats’:
step
Warning message:
replacing previous import ‘lme4::sigma’ by ‘stats::sigma’ when loading
‘lmerTest’
> test200 <- lmer(log(value) ~ prepost * lowhi + (1|CID/LotNo/rep),
REML = F, data = GSIRlong, subset = !is.na(value))
> anova(test100)
Analysis of Variance Table
Df Sum Sq Mean Sq F value
prepost 1 3.956 3.956 18.4825
lowhi 1 130.647 130.647 610.3836
prepost:lowhi 1 0.038 0.038 0.1758
> anova(test200, ddf = 'Ken')
Analysis of Variance Table of type III with Kenward-Roger
approximation for degrees of freedom
Sum Sq Mean Sq NumDF DenDF F.value Pr(>F)
prepost 37.15 37.15 1 308.04 18.68 2.094e-05 ***
lowhi 1207.97 1207.97 1 376.43 607.33 < 2.2e-16 ***
prepost:lowhi 0.35 0.35 1 376.43 0.17 0.676
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Update: Thanks, Ben. I did a little code archeology on lmerTest to see if I could dope out an explanation for the above anomalies. First, it turns out that lmerTest::lmer just submits the model to lme4::lmer and then relabels the result as a "merModLmerTest" object. The only effect of this is to invoke versions of summary() and anova() from the lmerTest package rather than the usual defaults from base and stats. (These lmerTest functions are compiled, and I have not yet gone farther to look at the C++ code.) lmerTest::summary just adds three columns to the base::summary result, giving df, t value, and Pr. Note that lmerTest::anova, by default, computes a Type III anova rather than a Type I as in stats::anova. (Explanation of my second question above.) Not a great choice if one's model includes interactions. One can request a Type I, II, or III anova using the type = 1/2/3 option.
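For example (a minimal sketch reusing the test200 object from the log above), a sequential Type I table comparable to the lme4/stats default can be requested with:
anova(test200, type = 1)              # Type I decomposition with lmerTest's default Satterthwaite ddf
anova(test200, ddf = 'Ken', type = 1) # Type I decomposition with Kenward-Roger ddf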
However, there are other surprises using the lmerTest versions of summary and anova, as shown in the R console file below. I used lmerTest's included sleepstudy data so that this code should be replicable.
a. Note that "sleepstudy" has 180 records (with 3 variables)
b. The summaries of fm1 and fm1a are identical except for the added Fixed effects columns. But note that in the lmerTest::summary the ddfs for the intercept and Days are 1371 and 1281 respectively; odd given that there are only 180 records in "sleepstudy."
c. Just as in my original model above, the lme4 and lmerTest versions of anova give very different values of Sum Sq and Mean Sq (30031 and 446.65 respectively).
d. Interestingly, the lmerTest versions of anova using the Satterthwaite and Kenward-Roger estimates of the DenDF are wildly different (5794080 and 28 respectively). The K-R value seems more reasonable.
Given the above issues, I am reluctant at this point to depend on lmerTest to give reliable p-values. Based on your (Doug Bates's) r-help posting (https://stat.ethz.ch/pipermail/r-help/2006-May/094765.html), I am now using (and recommending) the method from the posting by Dan Mirman (http://mindingthebrain.blogspot.ch/2014/02/three-ways-to-get-parameter-specific-p.html) in the final bit of code below to estimate a naive t-test p-value (assuming essentially infinite degrees of freedom) and a Kenward-Roger estimate of df (using the R package 'pbkrtest' -- the same package used by lmerTest). I couldn't find R code to compute the Satterthwaite estimates. The naive t-test p-value is clearly anti-conservative, but the KR estimate is thought to be pretty good. If the two give similar estimates of p, then I think that one can feel comfortable with a p-value in the range of [naive t-test, KR estimate].
> library(lme4); library(lmerTest); library(pbkrtest)
> dim(sleepstudy)
[1] 180 3
>
> fm1 <- lme4::lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
> fm1a <- lmerTest::lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
>
> summary(fm1)
Linear mixed model fit by REML ['lmerMod']
Formula: Reaction ~ Days + (Days | Subject)
Data: sleepstudy
REML criterion at convergence: 1743.6
Scaled residuals:
Min 1Q Median 3Q Max
-3.9536 -0.4634 0.0231 0.4634 5.1793
Random effects:
Groups Name Variance Std.Dev. Corr
Subject (Intercept) 612.09 24.740
Days 35.07 5.922 0.07
Residual 654.94 25.592
Number of obs: 180, groups: Subject, 18
Fixed effects:
Estimate Std. Error t value
(Intercept) 251.405 6.825 36.84
Days 10.467 1.546 6.77
Correlation of Fixed Effects:
(Intr)
Days -0.138
> summary(fm1a)
Linear mixed model fit by REML t-tests use Satterthwaite approximations to
degrees of freedom [lmerMod]
Formula: Reaction ~ Days + (Days | Subject)
Data: sleepstudy
REML criterion at convergence: 1743.6
Scaled residuals:
Min 1Q Median 3Q Max
-3.9536 -0.4634 0.0231 0.4634 5.1793
Random effects:
Groups Name Variance Std.Dev. Corr
Subject (Intercept) 612.09 24.740
Days 35.07 5.922 0.07
Residual 654.94 25.592
Number of obs: 180, groups: Subject, 18
Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 251.405 6.825 1371.100 302.06 <2e-16 ***
Days 10.467 1.546 1281.700 55.52 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr)
Days -0.138
Warning message:
In deviance.merMod(object, ...) :
deviance() is deprecated for REML fits; use REMLcrit for the REML
criterion or deviance(.,REML=FALSE) for deviance calculated at the REML fit
>
> anova(fm1)
Analysis of Variance Table
Df Sum Sq Mean Sq F value
Days 1 30031 30031 45.853
> anova(fm1a, ddf = 'Sat', type = 1)
Analysis of Variance Table of type I with Satterthwaite
approximation for degrees of freedom
Sum Sq Mean Sq NumDF DenDF F.value Pr(>F)
Days 446.65 446.65 1 5794080 45.853 1.275e-11 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Warning message:
In deviance.merMod(object, ...) :
deviance() is deprecated for REML fits; use REMLcrit for the REML criterion or deviance(.,REML=FALSE) for deviance calculated at the REML fit
> anova(fm1a, ddf = 'Ken', type = 1)
Analysis of Variance Table of type I with Kenward-Roger
approximation for degrees of freedom
Sum Sq Mean Sq NumDF DenDF F.value Pr(>F)
Days 446.65 446.65 1 27.997 45.853 2.359e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Warning message:
In deviance.merMod(object, ...) :
deviance() is deprecated for REML fits; use REMLcrit for the REML criterion or deviance(.,REML=FALSE) for deviance calculated at the REML fit
>
> # t.test
> coefs <- data.frame(coef(summary(fm1)))
> coefs$p.z <- 2 * (1 - pnorm(abs(coefs$t.value)))
> coefs
Estimate Std..Error t.value p.z
(Intercept) 251.40510 6.824556 36.838311 0.000000e+00
Days 10.46729 1.545789 6.771485 1.274669e-11
>
> # Kenward-Roger
> df.KR <- get_ddf_Lb(fm1, fixef(fm1))
> df.KR
[1] 25.89366
> coefs$p.KR <- 2 * (1 - pt(abs(coefs$t.value), df.KR))
> coefs
Estimate Std..Error t.value p.z p.KR
(Intercept) 251.40510 6.824556 36.838311 0.000000e+00 0.0000e+00
Days 10.46729 1.545789 6.771485 1.274669e-11 3.5447e-07

Unexpected output of emmeans averaged across variables

I transformed a variable (e.g. leaf_area) using a simple square transformation and then fitted the following model containing an interaction:
fit <- lmer(leaf_area^2 ~genotype*soil_type + date_measurement + light + (1|repetition) + (1|y_position) + (1|x_position), data = dataset)
To obtain the emmeans averaged across genotypes and soil type for each measurement date, I further use the following command:
fit.emm <- emmeans(fit, ~ genotype*soil_type + date_measurement, type = "response")
The emmeans are, nevertheless, averaged for the variable date_measurement.
As represented in the following example, the emmeans are averages of genotypes x, y and z in the soil MT at the measurement date 27.4, but the measurements actually occurred on 21, 23, 28, 30 and 35 das.
genotype soil_type date_measurement emmean SE df lower.CL upper.CL
x MT 27.4 0.190 0.0174 126.0 0.155 0.224
y MT 27.4 0.220 0.0147 74.1 0.191 0.250
z MT 27.4 0.210 0.0157 108.6 0.179 0.241
When I fit the model without interaction between genotype and soil type and run the emmeans, the results are still averaged for the measurement dates.
fit <- lmer(leaf_area^2 ~genotype + soil_type + date_measurement + light + (1|repetition) + (1|y_position) + (1|x_position), data = dataset)
fit.emm <- emmeans(fit, ~ genotype + soil_type + date_measurement, type = "response")
My question is: how can I obtain the emmeans averaged across genotype and soil but separated for each date of measurement?
Class of variables:
date_measurement, light, x_position, y_position: numeric
genotype and soil_type: factor
Thank you in advance.
When you have a numerical predictor in the model, the default is to obtain predictions at the average value of that covariate. If you want the covariates treated like factors, you have to say so:
fit.emm <- emmeans(fit, ~ genotype*soil_type + date_measurement,
cov.reduce = FALSE)
In addition, emmeans cannot auto-detect your square transformation. You can fix it up by doing
fit.emm <- update(fit.emm, tran = make.tran("power", 2),
type = "response")
Then I think you will want to subsequently obtain marginal means by averaging over date_measurement at least -- i.e.,
fit.emm2 <- emmeans(fit.emm, ~ genotype*soil_type)
It will retain the transformation and type = "response" setting.

Contrast emmeans: post-hoc t-test as the average differences of the differences between baseline and treatment periods

I am using the lme4 package in R to undertake linear mixed effect models (LMM). Essentially all participants received two interventions (an intervention treatment and a placebo (control)) and were separated by a washout period. However, the order or sequence they received the interventions differed.
An interaction term of intervention and visit was included in the LMM with eight levels including all combinations of intervention (2 levels: control and intervention) and visit (4 levels: visit 1=baseline 1, visit 2, visit 3=post-randomization baseline 2, visit 4).
My question is: how do I determine the intervention effect with a post-hoc t-test, as the average of the differences of the differences between interventions, i.e. between visits 1 and 2 and between visits 3 and 4? I also want to determine the effects of the intervention and control compared to baseline.
Please see code below:
model1<- lmer(X ~ treatment_type:visit_code + (1|SID) + (1|SID:period), na.action= na.omit, data = data.x)
emm <- emmeans(model1 , ~treatment_type:visit_code)
My results of model 1 is:
emm
treatment_type visit_code emmean SE df lower.CL upper.CL
Control T0 -0.2915 0.167 26.0 -0.635 0.0520
Intervention T0 -0.1424 0.167 26.0 -0.486 0.2011
Control T1 -0.2335 0.167 26.0 -0.577 0.1100
Intervention T1 0.0884 0.167 26.0 -0.255 0.4319
Control T2 0.0441 0.167 26.0 -0.299 0.3876
Intervention T2 -0.2708 0.168 26.8 -0.616 0.0748
Control T3 0.1272 0.167 26.0 -0.216 0.4708
Intervention T3 0.0530 0.168 26.8 -0.293 0.3987
Degrees-of-freedom method: kenward-roger
Confidence level used: 0.95
I first created the contrast vectors:
#name vectors
Control.B1<- c(1,0,0,0,0,0,0,0) #control baseline 1 (visit 1)
Intervention.B1<- c(0,1,0,0,0,0,0,0) #intervention baseline 1 (visit 1)
Control.A2<- c(0,0,1,0,0,0,0,0) #post control 1 (visit 2)
Intervention.A2<- c(0,0,0,1,0,0,0,0) #post intervention 1 (visit 2)
Control.B3<- c(0,0,0,0,1,0,0,0) #control baseline 2 (visit 3)
Intervention.B3<- c(0,0,0,0,0,1,0,0) #intervention baseline 2 (visit 3)
Control.A4<- c(0,0,0,0,0,0,1,0) #post control 2 (visit 4)
Intervention.A4<- c(0,0,0,0,0,0,0,1) #post intervention 2 (visit 4)
Contbaseline = (Control.B1 + Control.B3)/2 # average of control baseline visits
Intbaseline = (Intervention.B1 + Intervention.B3)/2 # average of intervention baseline visits
ControlAfter= (Control.A2 + Control.A4)/2 # average of after control visits
IntervAfter= (Intervention.A2 + Intervention.A4)/2 # average of after intervention visits
Control.vs.Baseline = (ControlAfter-Contbaseline)
Intervention.vs.Baseline = (IntervAfter-Intbaseline)
Control.vs.Intervention = ((Control.vs.Baseline)-(Intervention.vs.Baseline))
the output of these are as follows:
> Control.vs.Baseline
[1] -0.5 0.0 0.5 0.0 -0.5 0.0 0.5 0.0
> Intervention.vs.Baseline
[1] 0.0 -0.5 0.0 0.5 0.0 -0.5 0.0 0.5
> Control.vs.Intervention
[1] -0.5 0.5 0.5 -0.5 -0.5 0.5 0.5 -0.5
Is this the correct way to obtain the average of the differences of the differences between the baseline and treatment periods?
Many thanks in advance!
A two-period crossover is the same as a repeated 2x2 Latin square. My suggestion for future such experiments is to structure the data accordingly, using variables for sequence (rows), period (columns), and treatment (assigned in the pattern (A,B) in the first sequence and (B,A) in the second). The subjects are randomized to which sequence they are in.
So with your data, you would need to add a variable sequence that has the level AB for those subjects who receive the treatment sequence A, A, B, B, and level BA for those who receive B, B, A, A (though I guess the 1st and 3rd are really baseline for everybody).
Since there are 4 visits, it helps keep things sorted if you recode that as two factors trial and period, as follows:
visit trial period
1 base 1
2 test 1
3 base 2
4 test 2
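One way to create these two factors in R (a sketch, assuming visit_code is coded T0-T3 as in the emmeans output above, with T0/T2 the two baseline visits and T1/T3 the test visits):
data.x$trial  <- factor(ifelse(data.x$visit_code %in% c("T0", "T2"), "base", "test"))
data.x$period <- factor(ifelse(data.x$visit_code %in% c("T0", "T1"), 1, 2))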
Then fit the model with formula
model2 <- lmer(X ~ (sequence + period + treatment_type) * trial +
(1|SID:sequence), ...etc...)
The parenthesized part is the standard model for a Latin square. Then the analysis can be done without custom contrasts as follows:
RG <- ref_grid(model2) # same really as emmeans() for all 4 factors
CHG <- contrast(RG, "consec", simple = "trial")
CHG <- update(CHG, by = NULL, infer = c(TRUE, FALSE))
CHG contains the differences from baseline (the trial differences) for each combination of the other three factors. The update() step removes the by variables saved from contrast(). Now, we can get the marginal means and comparisons for each factor:
emmeans(CHG, consec ~ treatment_type)
emmeans(CHG, consec ~ period)
emmeans(CHG, consec ~ sequence)
These will be the same results you got the other way via custom contrasts. The one that was a difference of differences before is now handled by sequence. This works because in a 2x2 Latin square, the main effect of each factor is confounded with the two-way interaction of the other two factors.
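For reference, the custom contrast vectors from the question can also be applied directly to the original emm object with contrast(); a sketch, where the vectors must line up with the row order of the emm summary shown in the question:
contrast(emm, method = list(
  "Control vs. baseline"      = Control.vs.Baseline,
  "Intervention vs. baseline" = Intervention.vs.Baseline,
  "Difference of differences" = Control.vs.Intervention))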

Mathematical equation of binomial probit gam (mgcv) with tensor product interactions?

I have the following binomial (probit) gam using mgcv, which includes y (0 or 1), two continuous predictors (xa, xb) plus the ‘ti’ interactions of a third covariate (xc) with these two predictors.
mygam <- gamV(y ~ s(xa, k=10, bs="cr") + s(xb, k=10, bs="cr") +
ti(xc, xa, bs = c("cr", "cr"), k = c(5, 5)) +
ti(xc, xb, bs = c("cr", "cr"), k = c(5, 5)),
data = df, method = "ML", family = binomial(link = "probit"))
Using default k=10 for main effects and k=c(5,5) for interactions, the intercept and 50 coefficients are the following:
terms <- c("Intercept", "s(xa).1", "s(xa).2", "s(xa).3", "s(xa).4", "s(xa).5", "s(xa).6", "s(xa).7", "s(xa).8", "s(xa).9", "s(xb).1", "s(xb).2", "s(xb).3", "s(xb).4", "s(xb).5", "s(xb).6", "s(xb).7", "s(xb).8", "s(xb).9", "ti(xc,xa).1", "ti(xc,xa).2", "ti(xc,xa).3", "ti(xc,xa).4", "ti(xc,xa).5", "ti(xc,xa).6", "ti(xc,xa).7", "ti(xc,xa).8", "ti(xc,xa).9", "ti(xc,xa).10", "ti(xc,xa).11", "ti(xc,xa).12", "ti(xc,xa).13", "ti(xc,xa).14", "ti(xc,xa).15", "ti(xc,xa).16", "ti(xc,xb).1", "ti(xc,xb).2", "ti(xc,xb).3", "ti(xc,xb).4", "ti(xc,xb).5", "ti(xc,xb).6", "ti(xc,xb).7", "ti(xc,xb).8", "ti(xc,xb).9", "ti(xc,xb).10", "ti(xc,xb).11", "ti(xc,xb).12", "ti(xc,xb).13", "ti(xc,xb).14", "ti(xc,xb).15", "ti(xc,xb).16")
coefs <- c(-0.0702421404106311, 0.0768316292916553, 0.210036768213672, 0.409025596435604, 0.516554288252813, 0.314600352165584, -0.271938137725695, -1.1169186662112, -1.44829172827383, -2.39608336269616, 0.445091855160863, 0.119747299507175, -0.73508332280573, -1.3851857008194, -1.84125850675114, -1.77797283303084, -1.45118023146655, -1.56696555281429, -2.55103708393941, 0.0505422263407052, -0.110361707609838, -0.168897589312596, -0.0602318423244818, 0.095385784704545, -0.20818521830706, -0.318650042681766, -0.113613570916751, 0.123559386280642, -0.269467853796075, -0.412476320830133, -0.147039497705579, 0.189416535823022, -0.412990646359733, -0.632158143648671, -0.225344249076957, 0.0237165469278517, 0.0434926950921869, 0.080572361088243, 0.397397459143317, 0.0453636001566695, 0.0831126054198634, 0.153350111096294, 0.75009880522662, 0.0583689328419794, 0.107001374561518, 0.197852239031467, 0.970623037721609, 0.0894562434842868, 0.163989821269297, 0.303175057387294, 1.48718228468607)
df_coefs <- data.frame(terms, coefs)
I would like the mathematical equation of this model, which would allow me to determine the probability of y given known covariates. As an example from my dataset (n > 70000), the predicted probability 'prob' (type = "response") obtained with xa = 7.116, xb = 2.6, and xc = 19 was prob = 0.76444141, which is the value the mathematical equation should reproduce.
Is this possible?
Thanks for your help and time.
Below is the summary(mygam) output:
Parametric coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.07024 0.00709 -9.907 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Approximate significance of smooth terms:
edf Ref.df Chi.sq p-value
s(xa) 8.007 8.548 5602.328 < 2e-16 ***
s(xb) 8.448 8.908 16282.793 < 2e-16 ***
ti(xc,xa) 1.004 1.007 10.278 0.00138 **
ti(xc,xb) 1.021 1.042 7.718 0.00627 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
R-sq.(adj) = 0.52 Deviance explained = 45.6%
-ML = 29379 Scale est. = 1 n = 77870
If you set type="terms" in the predict function, you get the contributions of the individual components to the linear predictor. However, these are not on the scale of outcome probability, but on that of the linear predictor.
Because of the non-linear transformation of the linear predictor -- in your case with the probit link -- attributing the predicted probability to the individual components requires attribution methods that come with additional assumptions.
An example of such an attribution method is Shapley values.
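If you want the full prediction written out explicitly, one way (a sketch; newdat is a placeholder data frame with columns xa, xb and xc) is to evaluate the basis functions with type = "lpmatrix": the linear predictor is the intercept plus the sum of coefficient-times-basis-function terms, and the predicted probability is the probit inverse link (pnorm) applied to it.
Xp   <- predict(mygam, newdata = newdat, type = "lpmatrix")  # evaluated basis functions
eta  <- drop(Xp %*% coef(mygam))   # linear predictor: intercept + s(xa) + s(xb) + ti(xc,xa) + ti(xc,xb)
prob <- pnorm(eta)                 # probit inverse link: P(y = 1)
# This should agree with the built-in response-scale prediction:
all.equal(prob, as.numeric(predict(mygam, newdata = newdat, type = "response")))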

Calculate cutoff and sensitivity for specific values of specificity?

After calculating several regression models, I want to calculate sensitivity values and the cut-off for pre-specified values of specificity (i.e., 0.99, 0.90, 0.85, and so on) to find the best model. I have created code to calculate sensitivity and specificity for given values of the cut-off (from 0.1 to 0.9), but now I want to use specific values of specificity (i.e., calculate the corresponding cut-off value and sensitivity values), and here I'm stuck.
Suppose I have the following regression model (using the example dataset 'mtcars'):
data(mtcars)
model <- glm(formula= vs ~ wt + disp, data=mtcars, family=binomial)
Here is the code I've used for the calculation of sens and spec for given values of the cut-off:
library(caret)   # for confusionMatrix()
predvalues <- model$fitted.values
getMisclass <- function(cutoff, p, labels){
  # classify each predicted probability at the given cutoff
  d <- cut(p, breaks = c(-Inf, cutoff, Inf), labels = c("0", "1"))
  print(confusionMatrix(d, factor(labels), positive = "1"))
  cat("cutoff", cutoff, ":\n")
  t <- table(d, labels)
  print(round(sum(t[c(1, 4)]) / sum(t), 2))
}
cutoffs <- seq(.1, .9, by = .1)
sapply(cutoffs, getMisclass, p = predvalues, labels = mtcars$vs)
Can someone help me how to rewrite this code for the calculation of sensitivity and cut-off scores given a range of specificity-values? Is it possible?
The values for the cutoff should be
cutoffs <- c(0.99, 0.90, 0.85, 0.80, 0.75)
Thanks a lot!
This is closely related to how ROC curves are calculated: if those are calculated with fine granularity, you essentially get a sensitivity and specificity for "every" threshold value. So, what you could do is simply calculate the sensitivities, specificities and corresponding thresholds as if you wanted to obtain a ROC curve...
library(pROC)
myRoc <- roc(predictor = predvalues, response = mtcars$vs)
plot(myRoc)
myRoc$specificities
print(with(myRoc, data.frame(specificities, sensitivities, thresholds)))
# specificities sensitivities thresholds
# 1 0.00000000 1.00000000 -Inf
# 2 0.05555556 1.00000000 0.002462809
# 3 0.11111111 1.00000000 0.003577104
# 4 0.16666667 1.00000000 0.004656164
# 5 0.22222222 1.00000000 0.005191974
# 6 0.27777778 1.00000000 0.006171197
# [...]
...and then look up the corresponding sensitivities and thresholds for whichever specificities you are interested in, e.g. as:
cutoffs <- c(0.99, 0.90, 0.85, 0.80, 0.75)
myData <- with(myRoc, data.frame(specificities, sensitivities, thresholds))
library(plyr)
print(laply(cutoffs, function(cutoff) myData$sensitivities[which.min(abs(myData$specificities-cutoff))]))
# [1] 0.7857143 0.8571429 0.8571429 0.9285714 0.9285714
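Depending on your pROC version, coords() may be able to do this lookup for you directly, given the target specificities (a possible shortcut; worth checking against the table above):
coords(myRoc, x = cutoffs, input = "specificity",
       ret = c("specificity", "sensitivity", "threshold"))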

Custom function to create an index of results

I’m trying to create a function which creates an index (starting at 100) and then adjusts this index according to the results of investments. So, in a nutshell, if the first investment gives a profit of 5%, then the index will stand at 105; if the second result is -7%, then the index stands at 97.65. In this question, when I use the word "index", I'm not referring to the index function of the zoo package.
Besides creating this index, my goal is also to create a function which can be applied to various subsets of my complete data set (i.e. with the use of sapply and its friends).
Here’s the function which I have so far (data at end of this question):
CalculateIndex <- function(x){
totalAccount <- accountValueStart
if(x$TradeResult.Currency == head(x$TradeResult.Currency., n = 1)){
indexedValues <- 100 + ( 100 *((((x$Size.Units. * x$EntryPrice) / totalAccount) * x$TradeResult.Percent.) / 100))
# Update the accountvalue
totalAccount <- totalAccount + x$TradeResult.Currency.
}
else{ # the value is not the first
indexedValues <- c(indexedValues,
indexedValues[-1] + (indexedValues[-1] *(((x$Size.Units. * x$EntryPrice) / totalAccount) * x$TradeResult.Percent.) / 100)
)
# Update the accountvalue
totalAccount <- totalAccount + x$TradeResult.Currency.
}
return(indexedValues)
}
In words the function does (read: is intended to do) the following:
If the value is the first, use 100 as a starting point for the index. If the value is not the first, use the previously calculated index value as the starting point for calculating the new index value. Besides this, the function also takes the weight of the individual result (compared with the totalAccount value) into account.
The problem:
Using this CalculateIndex function on the theData data frame gives the following incorrect output:
> CalculateIndex(theData)
[1] 99.97901 99.94180 99.65632 101.88689 100.89309 98.92878 102.02911 100.49159 98.52955 102.02243 98.43655 100.76502 99.34869 100.76401 101.18014 99.75136 97.90130
[18] 100.39935 99.81311 101.34961
Warning message:
In if (x$TradeResult.Currency == head(x$TradeResult.Currency., n = 1)) { :
the condition has length > 1 and only the first element will be used
Edit:
Wow, I already got a vote down, though I thought my question was already too long. Sorry, I thought/think the problem lay inside my loop, so I didn't want to bore you with the details, which I thought would only give fewer answers. Sorry, a misjudgement on my part.
The problem is, with the above output from CalculateIndex, that the results are wildly different from Excel. Even though this could be the result of rounding errors (as Joris mentions below), I doubt it. In comparison with the Excel results, the R results differ quite a bit:
R output          Excel-calculated values
99,9790085700 99,97900857
99,9418035700 99,92081189
99,6563228600 99,57713687
101,8868850000 101,4639947
100,8930864300 102,3570786
98,9287771400 101,2858564
102,0291071400 103,3149664
100,4915864300 103,806556
98,5295542900 102,3361186
102,0224285700 104,3585552
98,4365550000 102,795089
100,7650171400 103,5601228
99,3486857100 102,9087897
100,7640057100 103,6728077
101,1801400000 104,8529634
99,7513600000 104,6043164
97,9013000000 102,5055298
100,3993485700 102,9048999
99,8131085700 102,7179995
101,3496071400 104,0676555
I think it would be fair to say that the difference in output isn't the result of R versus Excel problems, but more an error in my function. So, let's focus on the function.
The manual calculation of the function
The function uses different variables:
Size.Units.: the number of units which are bought at the EntryPrice,
EntryPrice: the price at which the stocks are bought,
TradeResult.Percent.: the percentage gain or loss resulting from the investment,
TradeResult.Currency.: the currency value ($) of the gain or loss resulting from the investment.
These variables are used in the following section of the function:
100 + ( 100 *((((x$Size.Units. * x$EntryPrice) / totalAccount) * x$TradeResult.Percent.) / 100))
and
indexedValues[-1] + (indexedValues[-1] *(((x$Size.Units. * x$EntryPrice) / totalAccount) * x$TradeResult.Percent.) / 100)
Both of the formulas are essentially the same, with the difference that the first starts at 100, and the second uses the previous value to calculate the new indexed value.
The formula can be broken down in different steps:
First, x$Size.Units. * x$EntryPrice determines the total position that was taken, in the sense that buying 100 shares at a price of 48.98 gives a position of $4898.
The resulting total position is then divided by the total account size (i.e. totalAccount). This is needed to correct for the impact of one position relative to the complete portfolio. For example, if our 100 shares bought at 48.98 drop 10 percent, the calculated index (i.e. the CalculateIndex function) doesn't have to drop 10%, because of course not all the money in totalAccount is invested in one stock. So, by dividing the total position by the totalAccount we get a ratio which tells us how much money is invested. For example, the position with a size of 4898 dollars (on a total account of 14000) results in a total account loss of 3.49% if the stock drops 10% (i.e. 4898 / 14000 = 0.349857; 0.349857 * 10% = 3.49857%).
This ratio (of invested amount versus total amount) is then multiplied in the formula with x$TradeResult.Percent., to get the percentage impact on the total account (see the calculation example in the previous paragraph).
As a last step, the percentage loss on the total account is applied to the index value (which starts at 100). In this case, the first investment in 100 shares bought at 48.98 dollars lets the index drop from its starting point at 100 to 99.97901, reflecting the losing trade's impact on the total account.
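For concreteness, here is the first row of theData (100 units at 48.98, a -0.06% trade, account start of 14000) worked out in R, reproducing the 99.97901 above:
position <- 100 * 48.98      # 4898: money in the position
weight   <- position / 14000 # 0.349857: fraction of the account invested
impact   <- weight * -0.06   # -0.0209914: percentage impact on the whole account
index1   <- 100 + impact     # 99.97901: the first index value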
End of Edit
Stripping the function clean and then adding a part of the formula at a time, to uncover the error, I came to the following step where the error seems to reside:
CalculateIndex <- function(x){
totalAccount <- accountValueStart
if(x$TradeResult.Currency == head(x$TradeResult.Currency., n = 1)){
indexedValues <- totalAccount
# Update the accountvalue
totalAccount <- totalAccount + x$TradeResult.Currency.
}
else{ # the value is not the first
indexedValues <- c(indexedValues, totalAccount)
# Update the accountvalue
totalAccount <- totalAccount + x$TradeResult.Currency.
}
return(indexedValues)
}
> CalculateIndex(theData)
[1] 14000
Warning message:
In if (x$TradeResult.Currency == head(x$TradeResult.Currency., n = 1)) { :
the condition has length > 1 and only the first element will be used
So, it seems that if I just use the totalAccount variable, the function doesn’t get updated correctly. This seems to suggest there is some error with the basics of the if else statement, because it only outputs the first value.
If I remove the else statement from the function, I do get values for each of the rows in theData. However, these are then wrongly calculated. So, it seems to me that there is some error in how this function updates the totalAccount variable. I don’t see where I made an error, so any suggestion would be highly appreciated. What am I doing wrong?
The Data
Here’s what my data looks like:
> theData
Size.Units. EntryPrice TradeResult.Percent. TradeResult.Currency.
1 100 48.98 -0.06 -3
11 100 32.59 -0.25 -8
12 100 32.51 -1.48 -48
2 100 49.01 5.39 264
13 100 32.99 3.79 125
14 100 34.24 -4.38 -150
3 100 51.65 5.50 284
4 100 48.81 1.41 69
15 100 35.74 -5.76 -206
5 100 49.50 5.72 283
6 100 46.67 -4.69 -219
16 100 33.68 3.18 107
7 100 44.48 -2.05 -91
17 100 32.61 3.28 107
8 100 45.39 3.64 165
9 100 47.04 -0.74 -35
10 100 47.39 -6.20 -294
18 100 33.68 1.66 56
19 100 33.12 -0.79 -26
20 100 32.86 5.75 189
theData <- structure(list(X = c(1L, 11L, 12L, 2L, 13L, 14L, 3L, 4L, 15L,
5L, 6L, 16L, 7L, 17L, 8L, 9L, 10L, 18L, 19L, 20L), Size.Units. = c(100L,
100L, 100L, 100L, 100L, 100L, 100L, 100L, 100L, 100L, 100L, 100L,
100L, 100L, 100L, 100L, 100L, 100L, 100L, 100L), EntryPrice = c(48.98,
32.59, 32.51, 49.01, 32.99, 34.24, 51.65, 48.81, 35.74, 49.5,
46.67, 33.68, 44.48, 32.61, 45.39, 47.04, 47.39, 33.68, 33.12,
32.86), TradeResult.Percent. = c(-0.06, -0.25, -1.48, 5.39, 3.79,
-4.38, 5.5, 1.41, -5.76, 5.72, -4.69, 3.18, -2.05, 3.28, 3.64,
-0.74, -6.2, 1.66, -0.79, 5.75), TradeResult.Currency. = c(-3L,
-8L, -48L, 264L, 125L, -150L, 284L, 69L, -206L, 283L, -219L,
107L, -91L, 107L, 165L, -35L, -294L, 56L, -26L, 189L)), .Names = c("X",
"Size.Units.", "EntryPrice", "TradeResult.Percent.", "TradeResult.Currency."
), class = "data.frame", row.names = c(NA, -20L))
# Set the account start value
> accountValueStart <- 14000
Your code looks very strange, and it seems you have a lot of misconceptions about R that come from another programming language. Gavin and Gillespie already pointed out why you get the warning. Let me add some tips for far more optimal coding:
[-1] does NOT mean: drop the last one. It means "keep everything but the first value", which also explains why you get erroneous results (see the short demo after this list).
calculate common things in the beginning, to unclutter your code.
head(x$TradeResult.Currency., n = 1) is the same as x$TradeResult.Currency.[1].
Keep an eye on your vectors. Most of the mistakes in your code come from forgetting you're working with vectors.
If you need a value to be the first in a vector, put that OUTSIDE of any loop you'd use, never add an if-clause in the function.
predefine your vectors/matrices as much as possible; that goes a lot faster and gives fewer memory headaches when working with big data.
vectorization, vectorization, vectorization. Did I mention vectorization?
Learn the use of debug(), debugonce() and browser() to check what your function is doing. Many of your problems could have been solved by checking the objects when manipulated within the function.
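A quick demonstration of the two indexing points above:
v <- c(10, 20, 30)
v[-1]          # 20 30  -- keeps everything but the FIRST element
head(v, n = 1) # 10     -- identical to v[1]
v[1]           # 10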
This said and taken into account, your function becomes:
CalculateIndex <- function(x, accountValueStart){
  # predefine your vector
  indexedValues <- vector("numeric", nrow(x))
  # get your totalAccount calculated FAST. This is a VECTOR!!!
  totalAccount <- cumsum(c(accountValueStart, x$TradeResult.Currency.))
  # adjust length:
  totalAccount <- totalAccount[-(nrow(x)+1)]
  # only once this calculation. This is a VECTOR!!!!
  totRatio <- 1 + (((x$Size.Units. * x$EntryPrice)/totalAccount) *
                     x$TradeResult.Percent.)/100
  # and now the calculations
  indexedValues[1] <- 100 * totRatio[1]
  for(i in 2:nrow(x)){
    indexedValues[i] <- indexedValues[i-1]*totRatio[i]
  }
  return(indexedValues)
}
and returns
> CalculateIndex(theData,14000)
[1] 99.97901 99.92081 99.57714 101.46399 102.35708 101.28586 103.31497
103.80656 102.33612 104.35856 102.79509 103.56012
[13] 102.90879 103.67281 104.85296 104.60432 102.50553 102.90490 102.71800
104.06766
So now you do:
invisible(replicate(10,print("I will never forget about vectorization any more!")))
The warning message is coming from this line:
if(x$TradeResult.Currency == head(x$TradeResult.Currency., n = 1)){
It is easy to see why; x$TradeResult.Currency is a vector and thus the comparison with head(x$TradeResult.Currency., n = 1) yields a vector of logicals. (By the way, why not x$TradeResult.Currency[1] instead of the head() call?). if() requires a single logical not a vector of logicals, and that is what the warning is about. ifelse() is useful if you want to do one of two things depending upon a condition that gives a vector of logicals.
In effect, what you are doing is only entering the if() part of the statement and it gets executed once only, because the first element of x$TradeResult.Currency == head(x$TradeResult.Currency., n = 1) is TRUE and R ignores the others.
> if(c(TRUE, FALSE)) {
+ print("Hi")
+ } else {
+ print("Bye")
+ }
[1] "Hi"
Warning message:
In if (c(TRUE, FALSE)) { :
the condition has length > 1 and only the first element will be used
> ifelse(c(TRUE, FALSE), print("Hi"), print("Bye"))
[1] "Hi"
[1] "Bye"
[1] "Hi" "Bye"
As to solving your real problem:
CalculateIndex2 <- function(x, value, start = 100) {
  rowSeq <- seq_len(NROW(x))
  totalAc <- cumsum(c(value, x$TradeResult.Currency.))[rowSeq]
  idx <- numeric(length = nrow(x))
  interm <- (((x$Size.Units. * x$EntryPrice) / totalAc) *
               x$TradeResult.Percent.) / 100
  for(i in rowSeq) {
    idx[i] <- start + (start * interm[i])
    start <- idx[i]
  }
  idx
}
which when used on theData gives:
> CalculateIndex2(theData, 14000)
[1] 99.97901 99.92081 99.57714 101.46399 102.35708 101.28586 103.31497
[8] 103.80656 102.33612 104.35856 102.79509 103.56012 102.90879 103.67281
[15] 104.85296 104.60432 102.50553 102.90490 102.71800 104.06766
What you want is a recursive function (IIRC); the current index is some function of the previous index. These are hard to solve in a vectorised way in R, hence the loop.
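As an aside, this particular recursion happens to be purely multiplicative (each index value is the previous one times 1 + interm[i]), so a cumprod() sketch using the same quantities as CalculateIndex2 gives the same numbers:
totalAc <- cumsum(c(14000, theData$TradeResult.Currency.))[seq_len(nrow(theData))]
interm  <- (((theData$Size.Units. * theData$EntryPrice) / totalAc) *
              theData$TradeResult.Percent.) / 100
idx <- 100 * cumprod(1 + interm)   # matches CalculateIndex2(theData, 14000)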
I'm still slightly confused as to what exactly you want to do, but hopefully the following will be helpful.
Your R script gives the same answers as your Excel function for the first value. You see a difference because R doesn't print out all digits.
> tmp = CalculateIndex(theData)
Warning message:
In if (x$TradeResult.Currency == head(x$TradeResult.Currency., n = 1)) { :
the condition has length > 1 and only the first element will be used
> print(tmp, digits=10)
[1] 99.97900857 99.94180357 99.65632286 101.88688500 100.89308643
<snip>
The reason for the warning message is because x$TradeResult.Currency is a vector that is being compared to a single number.
That warning message is also where your bug lives. In your if statement, you never execute the else part, since only the value of x$TradeResult.Currency is being used. As the warning message states, only the first element of x$TradeResult.Currency is being used.