Showing p-values in scientific notation with gtsummary::tbl_regression?

I would appreciate help displaying the p-values from gtsummary::tbl_regression() in scientific notation. I am working with big data, and it would be very useful to show 3 decimal places for p-values > 0.001 and scientific notation for p-values < 0.001.
A representative example:
# load packages
library(gtsummary)
library(dplyr)  # needed for the %>% pipe below
theme_gtsummary_compact()

# linear model
mod1 <- lm(mpg ~ cyl + gear, data = mtcars %>% dplyr::mutate(cyl = factor(cyl)))

# regression table
tbl_regression(mod1)
I wonder whether it is possible to get the 2.06e-4, 5.76e-8, and 0.422 p-value notations for cyl 6, cyl 8, and gear, respectively. These are the p-values reported by:
summary(mod1)
Thanks very much!

You can define your own function for the p-value format and pass it to the pvalue_fun argument (if_else() below comes from dplyr):
tbl_regression(
  mod1,
  pvalue_fun = function(x) {
    if_else(
      is.na(x),
      NA_character_,
      if_else(
        x < 0.001,
        format(x, digits = 3, scientific = TRUE),
        format(round(x, 3), scientific = FALSE)
      )
    )
  }
)
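To sanity-check the formatter outside of a table, the same threshold logic can be written as a named helper and called on raw p-values. A minimal sketch (the name fmt_p is purely illustrative; if_else() comes from dplyr):

library(dplyr)

# same threshold logic as above, as a reusable helper
fmt_p <- function(x) {
  if_else(
    is.na(x),
    NA_character_,
    if_else(x < 0.001,
            format(x, digits = 3, scientific = TRUE),
            format(round(x, 3), scientific = FALSE))
  )
}

fmt_p(c(0.000206, 5.76e-08, 0.422, NA))
#> [1] "2.06e-04" "5.76e-08" "0.422"    NA

Passing it as pvalue_fun = fmt_p then yields the same table as above.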

Related

Mixed regression model controlling for Genetic Relatedness

I am trying to perform a regression analysis while controlling for genetic relatedness among individuals via a genetic relatedness matrix. See the code below for an example including identical twins (relatedness 1) and ordinary siblings (relatedness 0.5); N = 10.
I know I should use a mixed model. However, I am not able to include the genetic relatedness matrix in the model. I have seen that two packages are often used for this ("kinship2" and "pedigreemm").
Here is the code, but I am unable to fit the model.
require("pedigreemm")
require("lme4")
library(kinship2)
library(car)
FAMID <- c(1, 1, 2, 2, 3, 3, 4, 4, 5, 5)
UNIQUEID <- 1:10
Pheno <- c(2, 4, 5, 5, 8, 10, 15, 20, 0, 0)
PRS <- c(0.1, 0.5, 1, 1, 2, 2, 3, 3, -0.5, -0.4)
data <- data.frame(FAMID, UNIQUEID, Pheno, PRS)
RELMAT <- matrix(c(1,    1,    0.04, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03,
                   1,    1,    0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03,
                   0.04, 0.03, 1,    1,    0.03, 0.05, 0.03, 0.03, 0.03, 0.03,
                   0.03, 0.03, 1,    1,    0.03, 0.03, 0.03, 0.03, 0.03, 0.03,
                   0.03, 0.03, 0.03, 0.03, 1,    0.5,  0.03, 0.03, 0.03, 0.03,
                   0.03, 0.03, 0.05, 0.03, 0.5,  1,    0.03, 0.03, 0.03, 0.03,
                   0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 1,    0.5,  0.03, 0.03,
                   0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.5,  1,    0.03, 0.03,
                   0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 1,    0.5,
                   0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.5,  1),
                 nrow = 10, ncol = 10, byrow = TRUE)
This is the model that I want to fit:
m1 = lmer(Pheno ~ PRS + (1 | RELMAT), data = data)
Thank you so much in advance.
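As a side note (this is not from the original thread): lmer() cannot take a covariance matrix as a grouping factor, which is why (1 | RELMAT) fails. Besides pedigreemm, one package commonly used for arbitrary relatedness matrices is lme4qtl, whose relmatLmer() extends lmer() with a relmat argument. A hedged sketch under that assumption; the matrix needs row/column names matching the ID column:

# assumes lme4qtl is installed, e.g. via remotes::install_github("variani/lme4qtl")
library(lme4qtl)
rownames(RELMAT) <- colnames(RELMAT) <- data$UNIQUEID  # keys must match the IDs
m1 <- relmatLmer(Pheno ~ PRS + (1 | UNIQUEID),
                 data = data,
                 relmat = list(UNIQUEID = RELMAT))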

tbl_uvregression for lme4 objects

I am trying to get a univariate regression table using tbl_uvregression from gtsummary. I am running these regression models with lme4, and I am not sure where and how to specify the random effect. Here's an example using the trial data from the gtsummary package.
library(lme4)
#> Loading required package: Matrix
library(gtsummary)
library(survival)
data(trial)
trial %>%
  tbl_uvregression(
    method = glmer,
    y = response,
    method.args = list(family = binomial),
    exponentiate = TRUE,
    pvalue_fun = function(x) style_pvalue(x, digits = 2),
    formula = "{y} ~ {x} + {1|grade}"
  )
#> Error: Problem with `mutate()` input `formula_chr`.
#> x object 'grade' not found
#> i Input `formula_chr` is `glue(formula)`.
Created on 2020-09-28 by the reprex package (v0.3.0)
Please help
For the random effect in the formula, do not use curly braces; write the term with parentheses, i.e. (1|grade). The formula string is passed through glue(), which only interpolates {y} and {x}; {1|grade} is treated as an expression to evaluate, which is why the error reports object 'grade' not found.
library(lme4)
#> Loading required package: Matrix
library(gtsummary)
library(survival)
data(trial)
trial %>%
  tbl_uvregression(
    method = glmer,
    y = response,
    method.args = list(family = binomial),
    exponentiate = TRUE,
    pvalue_fun = function(x) style_pvalue(x, digits = 2),
    formula = "{y} ~ {x} + (1|grade)"
  )
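To see the mechanics: tbl_uvregression() passes the formula string through glue::glue() (as the error trace shows), and glue evaluates whatever sits inside {}. A quick standalone illustration:

library(glue)
y <- "response"
x <- "age"
glue("{y} ~ {x} + (1|grade)")  # {y} and {x} are interpolated; (1|grade) is left alone
#> response ~ age + (1|grade)
# glue("{y} ~ {x} + {1|grade}") would instead try to evaluate the expression
# 1|grade and fail with "object 'grade' not found", as in the error above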

R - MLR - Classifier Calibration - Benchmark Results

I've run a benchmark experiment with nested cross validation (tuning + performance measurement) for a classification problem and would like to create calibration charts.
If I pass a benchmark result object to generateCalibrationData, what does plotCalibration do? Is it averaging? If so how?
Does it make sense to have an aggregate = FALSE option to understand variability across folds as per generateThreshVsPerfData for ROC curves?
In response to @Zach's request for a reproducible example, I (the OP) have edited my original post as follows:
Edit: Reproducible Example
# Practice Data
library("mlr")
library("ROCR")
library(mlbench)
data(BreastCancer)
dim(BreastCancer)
levels(BreastCancer$Class)
head(BreastCancer)
BreastCancer <- BreastCancer[, -c(1, 6, 7)]
BreastCancer$Cl.thickness <- as.factor(unclass(BreastCancer$Cl.thickness))
BreastCancer$Cell.size <- as.factor(unclass(BreastCancer$Cell.size))
BreastCancer$Cell.shape <- as.factor(unclass(BreastCancer$Cell.shape))
BreastCancer$Marg.adhesion <- as.factor(unclass(BreastCancer$Marg.adhesion))
head(BreastCancer)

# Define Nested Cross-Validation Strategy
cv.inner <- makeResampleDesc("CV", iters = 2, stratify = TRUE)
cv.outer <- makeResampleDesc("CV", iters = 6, stratify = TRUE)

# Define Performance Measures
perf.measures <- list(auc, mmce)

# Create Task
bc.task <- makeClassifTask(id = "bc",
                           data = BreastCancer,
                           target = "Class",
                           positive = "malignant")

# Create Tuned KSVM Learner
ksvm <- makeLearner("classif.ksvm", predict.type = "prob")
ksvm.ps <- makeParamSet(makeDiscreteParam("C", values = 2^(-2:2)),
                        makeDiscreteParam("sigma", values = 2^(-2:2)))
ksvm.ctrl <- makeTuneControlGrid()
ksvm.lrn <- makeTuneWrapper(ksvm,
                            resampling = cv.inner,
                            measures = perf.measures,
                            par.set = ksvm.ps,
                            control = ksvm.ctrl,
                            show.info = FALSE)

# Create Tuned Random Forest Learner
rf <- makeLearner("classif.randomForest",
                  predict.type = "prob",
                  fix.factors.prediction = TRUE)
rf.ps <- makeParamSet(makeDiscreteParam("mtry", values = c(2, 3, 5)))
rf.ctrl <- makeTuneControlGrid()
rf.lrn <- makeTuneWrapper(rf,
                          resampling = cv.inner,
                          measures = perf.measures,
                          par.set = rf.ps,
                          control = rf.ctrl,
                          show.info = FALSE)

# Run Cross-Validation Experiments
bc.lrns <- list(ksvm.lrn, rf.lrn)
bc.bmr <- benchmark(learners = bc.lrns,
                    tasks = bc.task,
                    resampling = cv.outer,
                    measures = perf.measures,
                    show.info = FALSE)

# Calibration Charts
bc.cal <- generateCalibrationData(bc.bmr)
plotCalibration(bc.cal)
Produces the following aggregated calibration plot:
[aggregated calibration plot]
Attempting to un-aggregate leads to:
> bc.cal <- generateCalibrationData(bc.bmr, aggregate = FALSE)
Error in generateCalibrationData(bc.bmr, aggregate = FALSE) :
  unused argument (aggregate = FALSE)
> sessionInfo()
R version 3.2.3 (2015-12-10)
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] mlbench_2.1-1 ROCR_1.0-7 gplots_3.0.1 mlr_2.9
[5] stringi_1.1.1 ParamHelpers_1.10 ggplot2_2.1.0 BBmisc_1.10
loaded via a namespace (and not attached):
[1] digest_0.6.9 htmltools_0.3.5 R6_2.2.0 splines_3.2.3
[5] scales_0.4.0 assertthat_0.1 grid_3.2.3 stringr_1.0.0
[9] bitops_1.0-6 checkmate_1.8.2 gdata_2.17.0 survival_2.38-3
[13] munsell_0.4.3 tibble_1.2 randomForest_4.6-12 httpuv_1.3.3
[17] parallelMap_1.3 mime_0.5 DBI_0.5-1 labeling_0.3
[21] chron_2.3-47 shiny_1.0.0 KernSmooth_2.23-15 plyr_1.8.4
[25] data.table_1.9.6 magrittr_1.5 reshape2_1.4.1 kernlab_0.9-25
[29] ggvis_0.4.3 caTools_1.17.1 gtable_0.2.0 colorspace_1.2-6
[33] tools_3.2.3 parallel_3.2.3 dplyr_0.5.0 xtable_1.8-2
[37] gtools_3.5.0 backports_1.0.4 Rcpp_0.12.4
No, plotCalibration() doesn't do any averaging, though it can plot a smoother.
If you call generateCalibrationData() on a benchmark result object, it treats each iteration of your resampled predictions as exchangeable and computes the calibration across all resampled predictions for each bin.
Yes, it probably would make sense to have an option to generate an unaggregated calibration data object and to be able to plot it. You are welcome to open an issue on GitHub to that effect, but to be honest this is going to be low on my priority list.
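In the meantime, the per-fold view can be computed by hand from the benchmark predictions. A minimal sketch, assuming the mlr ~2.x API, where getBMRPredictions(..., as.df = TRUE) returns one combined data frame with learner.id, iter, truth, and prob.malignant columns (bc.bmr comes from the question's code):

# extract all resampled predictions from the benchmark result
preds <- getBMRPredictions(bc.bmr, as.df = TRUE)
# bin the predicted probability of the positive class into 10 equal-width bins
preds$bin <- cut(preds$prob.malignant, breaks = seq(0, 1, by = 0.1),
                 include.lowest = TRUE)
preds$obs <- as.numeric(preds$truth == "malignant")
# observed event rate per learner, per outer fold, per bin;
# dropping iter from the right-hand side gives the pooled view instead
cal <- aggregate(obs ~ learner.id + iter + bin, data = preds, FUN = mean)
head(cal)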

MLR - Regression Benchmark Results - Visualisation

What are the options for visualising the results of a benchmark experiment of regression learners? For instance, generateCalibrationData doesn't accept a benchmark result object derived from a set of regr. learners. I would like something similar to the calibration plots available for classification.
In response to @LarsKotthoff's comment, I (the OP) have edited my original post to provide greater detail about the functionality I am seeking.
Edit:
I'm looking for actual-vs-predicted calibration-type plots, such as simple scatterplots or something like the plots that exist under Classifier Calibration. If I'm not mistaken, the following would make sense for regression problems (and seems to be what is done for Classifier Calibration); a sketch implementing these steps appears after the reproducible example below:
decide on a number of buckets to discretize the predictions on the x-axis, say 10 equal-length bins (obviously you could reuse the breaks and groups interface to generateCalibrationData that currently exists)
for each of those 10 bins, calculate the mean "predicted", plot it (say via a dot) on the x-axis (possibly with some measure of variability), and join the dots across the 10 bins
for each of those 10 bins, calculate the mean "actual", plot it on the y-axis (possibly with some measure of variability), and join the dots
provide some representation of volume in each bucket (as you've done for Classifier Calibration via "rag/rug" plots)
The basic premise behind my question is: what kind of visualisation can help interpret an rsq, mae, etc. performance measure? Many different configurations of actual vs. predicted can lead to the same rsq, mae, etc.
Once some plot exists, switching aggregation on/off would allow individual resampling results to be examined.
I would hope that the combination:
cal <- generateCalibrationData(bmr)
plotCalibration(cal)
would be available for regression tasks; at present it doesn't seem to be (reproducible example below):
# Practice Data
library("mlr")
library(mlbench)
data(BostonHousing)
dim(BostonHousing)
head(BostonHousing)

# Define Nested Cross-Validation Strategy
cv.inner <- makeResampleDesc("CV", iters = 2)
cv.outer <- makeResampleDesc("CV", iters = 6)

# Define Performance Measures
perf.measures <- list(rsq, mae)

# Create Task
bh.task <- makeRegrTask(id = "bh",
                        data = BostonHousing,
                        target = "medv")

# Create Tuned KSVM Learner
ksvm <- makeLearner("regr.ksvm")
ksvm.ps <- makeParamSet(makeDiscreteParam("C", values = 2^(-2:2)),
                        makeDiscreteParam("sigma", values = 2^(-2:2)))
ksvm.ctrl <- makeTuneControlGrid()
ksvm.lrn <- makeTuneWrapper(ksvm,
                            resampling = cv.inner,
                            measures = perf.measures,
                            par.set = ksvm.ps,
                            control = ksvm.ctrl,
                            show.info = FALSE)

# Create Tuned Random Forest Learner
rf <- makeLearner("regr.randomForest",
                  fix.factors.prediction = TRUE)
rf.ps <- makeParamSet(makeDiscreteParam("mtry", values = c(2, 3, 5)))
rf.ctrl <- makeTuneControlGrid()
rf.lrn <- makeTuneWrapper(rf,
                          resampling = cv.inner,
                          measures = perf.measures,
                          par.set = rf.ps,
                          control = rf.ctrl,
                          show.info = FALSE)

# Run Cross-Validation Experiments
bh.lrns <- list(ksvm.lrn, rf.lrn)
bh.bmr <- benchmark(learners = bh.lrns,
                    tasks = bh.task,
                    resampling = cv.outer,
                    measures = perf.measures,
                    show.info = FALSE)

# Calibration Charts
bh.cal <- generateCalibrationData(bh.bmr)
plotCalibration(bh.cal)
which yields:
> bh.cal <- generateCalibrationData(bh.bmr)
Error in checkPrediction(x, task.type = "classif", predict.type = "prob") :
  Prediction must be one of 'classif', but is: 'regr'
> sessionInfo()
R version 3.2.3 (2015-12-10)
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] mlbench_2.1-1 ROCR_1.0-7 gplots_3.0.1 mlr_2.9
[5] stringi_1.1.1 ParamHelpers_1.10 ggplot2_2.1.0 BBmisc_1.10
loaded via a namespace (and not attached):
[1] digest_0.6.9 htmltools_0.3.5 R6_2.2.0 splines_3.2.3
[5] scales_0.4.0 assertthat_0.1 grid_3.2.3 stringr_1.0.0
[9] bitops_1.0-6 checkmate_1.8.2 gdata_2.17.0 survival_2.38-3
[13] munsell_0.4.3 tibble_1.2 randomForest_4.6-12 httpuv_1.3.3
[17] parallelMap_1.3 mime_0.5 DBI_0.5-1 labeling_0.3
[21] chron_2.3-47 shiny_1.0.0 KernSmooth_2.23-15 plyr_1.8.4
[25] data.table_1.9.6 magrittr_1.5 reshape2_1.4.1 kernlab_0.9-25
[29] ggvis_0.4.3 caTools_1.17.1 gtable_0.2.0 colorspace_1.2-6
[33] tools_3.2.3 parallel_3.2.3 dplyr_0.5.0 xtable_1.8-2
[37] gtools_3.5.0 backports_1.0.4 Rcpp_0.12.4
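For what it's worth, the binned actual-vs-predicted plot described in the steps above can be built by hand from the benchmark predictions until native support exists. A minimal sketch, assuming the mlr ~2.x API, where getBMRPredictions(..., as.df = TRUE) returns one combined data frame with learner.id, id, truth (actual), and response (predicted) columns (bh.bmr comes from the reproducible example):

library(ggplot2)

# extract all resampled predictions from the benchmark result
preds <- getBMRPredictions(bh.bmr, as.df = TRUE)
# step 1: discretize the predictions into 10 equal-width bins
preds$bin <- cut(preds$response, breaks = 10)
# steps 2-3: mean predicted (x) and mean actual (y) per learner and bin
cal <- aggregate(cbind(pred = response, actual = truth) ~ learner.id + bin,
                 data = preds, FUN = mean)
# step 4: bin volume, shown here via point size rather than a rug
cal$n <- aggregate(id ~ learner.id + bin, data = preds, FUN = length)$id
ggplot(cal, aes(x = pred, y = actual, colour = learner.id)) +
  geom_line() +
  geom_point(aes(size = n)) +
  geom_abline(slope = 1, intercept = 0, linetype = "dashed") +
  labs(x = "Mean predicted", y = "Mean actual", size = "Volume")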

R: calling rq() within a function and defining the linear predictor

I am trying to call rq() from the quantreg package within a function. Below is a simplified explanation of my problem.
If I follow the recommendations found at
http://developer.r-project.org/model-fitting-functions.txt, I have a design matrix after the line
x <- model.matrix(mt, mf, contrasts)
with the first column full of 1's to create an intercept.
Now, when I call rq(), I am obliged to use something like
fit <- rq (y ~ x [,2], tau = 0.5, ...)
My problem happens if there is more than 1 explanatory variable. I don't know how to find an automatic way to write:
x [,2] + x [,3] + x [,4] + ...
Here is the complete simplified code:
ao_qr <- function (formula, data, method = "br", ...) {
  cl <- match.call ()
  ## keep only the arguments which should go into the model frame
  mf <- match.call (expand.dots = FALSE)
  m <- match (c ("formula", "data"), names (mf), 0)
  mf <- mf[c (1, m)]
  mf$drop.unused.levels <- TRUE
  mf[[1]] <- as.name ("model.frame")
  mf <- eval.parent (mf)
  if (method == "model.frame") return (mf)
  ## allow model.frame to update the terms object before saving it
  mt <- attr (mf, "terms")
  y <- model.response (mf, "numeric")
  x <- model.matrix (mt, mf, contrasts)
  ## proceed with the quantile regression
  fit <- rq (y ~ x[, 2], tau = 0.5, ...)
  print (summary (fit, se = "boot", R = 100))
}
I call the function with:
ao_qr(pain ~ treatment + extra, data = data.subset)
And here is how to get the data:
require (lqmm)
data(labor)
data <- labor
data.subset <- subset (data, time == 90)
data.subset$extra <- rnorm (65)
In this case, with this code, the linear predictor only includes "treatment". If I want "extra" as well, I have to manually add x[,3] to the linear predictor of rq() in the code. This is not automatic and will not work on other datasets with an unknown number of variables.
Does anyone know how to tackle this?
Any help would be greatly appreciated!
I found a simple solution: replace x[,2] with
x[, 2:ncol(x)]
so that every column of the design matrix except the intercept enters the linear predictor.
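In context, the model-fitting line inside ao_qr() becomes the following (a sketch; the ... simply forwards the remaining rq() arguments, as in the function above):

## proceed with the quantile regression, using all non-intercept columns
fit <- rq (y ~ x[, 2:ncol(x)], tau = 0.5, ...)

Equivalently, since column 1 of the model matrix is the intercept, negative indexing drops it: fit <- rq (y ~ x[, -1], tau = 0.5, ...).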