I've run a benchmark experiment with nested cross validation (tuning + performance measurement) for a classification problem and would like to create calibration charts.
If I pass a benchmark result object to generateCalibrationData, what does plotCalibration do? Is it averaging? If so, how?
Does it make sense to have an aggregate = FALSE option, to understand variability across folds as generateThreshVsPerfData allows for ROC curves?
In response to @Zach's request for a reproducible example, I (the OP) have edited my original post as follows:
Edit: Reproducible Example
# Practice Data
library("mlr")
library("ROCR")
library(mlbench)
data(BreastCancer)
dim(BreastCancer)
levels(BreastCancer$Class)
head(BreastCancer)
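# Drop the Id column and two other predictors, then convert the
# ordered-factor predictors to plain (unordered) factors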
BreastCancer <- BreastCancer[, -c(1, 6, 7)]
BreastCancer$Cl.thickness <- as.factor(unclass(BreastCancer$Cl.thickness))
BreastCancer$Cell.size <- as.factor(unclass(BreastCancer$Cell.size))
BreastCancer$Cell.shape <- as.factor(unclass(BreastCancer$Cell.shape))
BreastCancer$Marg.adhesion <- as.factor(unclass(BreastCancer$Marg.adhesion))
head(BreastCancer)
# Define Nested Cross-Validation Strategy
cv.inner <- makeResampleDesc("CV", iters = 2, stratify = TRUE)
cv.outer <- makeResampleDesc("CV", iters = 6, stratify = TRUE)
# Define Performance Measures
perf.measures <- list(auc, mmce)
# Create Task
bc.task <- makeClassifTask(id = "bc",
data = BreastCancer,
target = "Class",
positive = "malignant")
# Create Tuned KSVM Learner
ksvm <- makeLearner("classif.ksvm",
predict.type = "prob")
ksvm.ps <- makeParamSet(makeDiscreteParam("C", values = 2^(-2:2)),
makeDiscreteParam("sigma", values = 2^(-2:2)))
ksvm.ctrl <- makeTuneControlGrid()
ksvm.lrn <- makeTuneWrapper(ksvm,
resampling = cv.inner,
measures = perf.measures,
par.set = ksvm.ps,
control = ksvm.ctrl,
show.info = FALSE)
# Create Tuned Random Forest Learner
rf <- makeLearner("classif.randomForest",
predict.type = "prob",
fix.factors.prediction = TRUE)
rf.ps <- makeParamSet(makeDiscreteParam("mtry", values = c(2, 3, 5)))
rf.ctrl <- makeTuneControlGrid()
rf.lrn <- makeTuneWrapper(rf,
resampling = cv.inner,
measures = perf.measures,
par.set = rf.ps,
control = rf.ctrl,
show.info = FALSE)
# Run Cross-Validation Experiments
bc.lrns <- list(ksvm.lrn, rf.lrn)
bc.bmr <- benchmark(learners = bc.lrns,
tasks = bc.task,
resampling = cv.outer,
measures = perf.measures,
show.info = FALSE)
# Calibration Charts
bc.cal <- generateCalibrationData(bc.bmr)
plotCalibration(bc.cal)
Produces the following:
[Aggregated calibration plot]
Attempting to un-aggregate leads to:
> bc.cal <- generateCalibrationData(bc.bmr, aggregate = FALSE)
Error in generateCalibrationData(bc.bmr, aggregate = FALSE) :
unused argument (aggregate = FALSE)
> sessionInfo()
R version 3.2.3 (2015-12-10)
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] mlbench_2.1-1 ROCR_1.0-7 gplots_3.0.1 mlr_2.9
[5] stringi_1.1.1 ParamHelpers_1.10 ggplot2_2.1.0 BBmisc_1.10
loaded via a namespace (and not attached):
[1] digest_0.6.9 htmltools_0.3.5 R6_2.2.0 splines_3.2.3
[5] scales_0.4.0 assertthat_0.1 grid_3.2.3 stringr_1.0.0
[9] bitops_1.0-6 checkmate_1.8.2 gdata_2.17.0 survival_2.38-3
[13] munsell_0.4.3 tibble_1.2 randomForest_4.6-12 httpuv_1.3.3
[17] parallelMap_1.3 mime_0.5 DBI_0.5-1 labeling_0.3
[21] chron_2.3-47 shiny_1.0.0 KernSmooth_2.23-15 plyr_1.8.4
[25] data.table_1.9.6 magrittr_1.5 reshape2_1.4.1 kernlab_0.9-25
[29] ggvis_0.4.3 caTools_1.17.1 gtable_0.2.0 colorspace_1.2-6
[33] tools_3.2.3 parallel_3.2.3 dplyr_0.5.0 xtable_1.8-2
[37] gtools_3.5.0 backports_1.0.4 Rcpp_0.12.4
No, plotCalibration doesn't do any averaging, though it can plot a smooth.
If you call generateCalibrationData on a benchmark result object, it treats each iteration of your resampled predictions as exchangeable and computes the calibration across all resampled predictions within each bin.
Yes, it probably would make sense to have an option to generate an unaggregated calibration data object and to plot it. You are welcome to open an issue on GitHub to that effect, but, to be honest, this is going to be low on my priority list.
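In the meantime, you can look at per-fold calibration yourself. A minimal sketch, assuming the object names from your example (the tuned ksvm's learner id should be "classif.ksvm.tuned", and the bin edges here are arbitrary):
# Pull the resampled predictions for one learner out of the benchmark result
preds <- getBMRPredictions(bc.bmr)[["bc"]][["classif.ksvm.tuned"]]
df <- as.data.frame(preds)
# Bin the predicted probabilities, then compute the observed event rate
# per bin and per outer-CV iteration
df$bin <- cut(df$prob.malignant, breaks = seq(0, 1, 0.1), include.lowest = TRUE)
per.fold <- aggregate(cbind(obs.rate = truth == "malignant") ~ bin + iter,
                      data = df, FUN = mean)
head(per.fold)
Plotting obs.rate against the bin midpoints, grouped by iter, gives you an unaggregated view of the calibration.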
Related
I am trying to get a univariate regression table using tbl_uvregression from gtsummary. I am running these regression models with lme4, and I am not sure where and how to specify the random effect. Here's an example using the trial data.
library(lme4)
#> Loading required package: Matrix
library(gtsummary)
library(survival)
data(trial)
trial %>%
tbl_uvregression(
method = glmer,
y = response,
method.args = list(family = binomial),
exponentiate = TRUE,
pvalue_fun = function(x) style_pvalue(x, digits = 2),
formula = "{y} ~ {x}+ {1|grade}"
)
#> Error: Problem with `mutate()` input `formula_chr`.
#> x object 'grade' not found
#> i Input `formula_chr` is `glue(formula)`.
Created on 2020-09-28 by the reprex package (v0.3.0)
Please help
For the random effect in the model, do not specify it with {}; use () instead. The braces are interpolation syntax for glue, which gtsummary uses to build the formula string, so "{1|grade}" makes glue try to evaluate 1|grade and fail with "object 'grade' not found", whereas "(1|grade)" is passed through to lme4 literally.
library(lme4)
#> Loading required package: Matrix
library(gtsummary)
library(survival)
data(trial)
trial %>%
tbl_uvregression(
method = glmer,
y = response,
method.args = list(family = binomial),
exponentiate = TRUE,
pvalue_fun = function(x) style_pvalue(x, digits = 2),
formula = "{y} ~ {x}+ (1|grade)"
)
I have a question about the mlr package.
After tuning random forest hyperparameters with cross-validation, getLearnerModel(rforest) will not use CV but will instead use the entire data set as a whole. Is that correct?
# Train task
trainTask <- makeClassifTask(data = trainsample, target = "DIED30", positive = "1")
# Random forest tuning
rf <- makeLearner("classif.randomForest", predict.type = "prob", par.vals = list(ntree = 1000, mtry = 3))
rf$par.vals <- c(rf$par.vals, list(importance = TRUE))  # append, so ntree/mtry set above are kept
rf_param <- makeParamSet(
  makeDiscreteParam("ntree", values = c(500, 750, 1000, 2000)),
  makeIntegerParam("mtry", lower = 1, upper = 15),
  makeDiscreteParam("nodesize", values = 1:20)
)
rancontrol <- makeTuneControlGrid()
set_cv <- makeResampleDesc("CV", iters = 10L)
rf_tune <- tuneParams(learner = rf, resampling = set_cv, task = trainTask,
                      par.set = rf_param, control = rancontrol, measures = auc)
rf_tune$x
rf.tree <- setHyperPars(rf, par.vals = rf_tune$x)
# Train best model
rforest <- train(rf.tree, trainTask)
getLearnerModel(rforest)
# Predict
pforest <- predict(rforest, trainTask)
rforest is eventually trained using the RF model on the entire data set, rather than with cross-validation.
Is there any way to perform the final training with CV as well in mlr?
I'm planning to validate the result on an external dataset. Should I somehow train the model with 10-fold CV prior to running it on the external dataset (I don't know how), or just use the hyperparameters found in the 10-fold CV search?
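To make the two options concrete, here's a rough sketch of what I mean (reusing the objects above; externaldata is just a placeholder for my external dataset):
# Option 1: nested CV -- estimate how well the whole tuning+training
# procedure generalises by re-running the tuning inside each outer fold
rf.wrapped <- makeTuneWrapper(rf, resampling = set_cv, par.set = rf_param,
                              control = rancontrol, measures = auc)
nested <- resample(rf.wrapped, trainTask,
                   resampling = makeResampleDesc("CV", iters = 10L),
                   measures = auc)

# Option 2: fix the hyperparameters found by the 10-fold CV search,
# train once on all of the training data, then score the external data
rf.final <- train(setHyperPars(rf, par.vals = rf_tune$x), trainTask)
# pext <- predict(rf.final, newdata = externaldata)  # externaldata: placeholder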
Thanks in advance for your time.
What are the options for visualising the results of a benchmark experiment of regression learners? For instance, generateCalibrationData doesn't accept a benchmark result object derived from a set of regr. learners. I would like something similar to the calibration plots available for classification.
In response to @LarsKotthoff's comment, I (the OP) have edited my original post to provide greater detail about the functionality I am seeking.
Edit:
I'm looking for actual-vs-predicted calibration-type plots, such as simple scatterplots or something like the plots that exist under Classifier Calibration. If I'm not mistaken, the following would make sense for regression problems (and seems to be what is done for Classifier Calibration):
decide on a number of buckets to discretize the predictions on the x-axis, say 10 equal-length bins (obviously you could continue with the breaks and groups interface to generateCalibrationData that currently exists)
for each of those 10 bins, calculate the mean predicted value, plot it (say via a dot) on the x-axis (possibly with some measure of variability), and join the dots across the 10 bins
for each of those 10 bins, calculate the mean actual value, plot it on the y-axis (possibly with some measure of variability), and join the dots
provide some representation of volume in each bucket (as you've done for Classifier Calibration via rag/rug plots)
The basic premise behind my question is: what kind of visualisation can be provided to help interpret a given rsq, mae, etc. performance measure? Many different configurations of actual vs predicted can lead to the same rsq, mae, etc.
Once some plot exists, switching aggregation on/off would allow individual resampling results to be examined; a mock-up of the kind of plot I mean follows below.
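To make the idea concrete, here's a mock-up on simulated data (plain ggplot2, since nothing like this exists in mlr yet; pred and actual stand in for pooled resampled predictions and their true values):
library(ggplot2)
set.seed(1)
pred   <- rnorm(500, mean = 22, sd = 5)   # stand-in for pooled resampled predictions
actual <- pred + rnorm(500, sd = 3)       # stand-in for the matching true values
bins   <- cut(pred, breaks = 10)          # 10 equal-length bins on the predictions
calib  <- data.frame(
  mean.pred   = tapply(pred, bins, mean),
  mean.actual = tapply(actual, bins, mean),
  n           = as.vector(table(bins))
)
ggplot(calib, aes(mean.pred, mean.actual)) +
  geom_abline(linetype = "dashed") +   # the perfect-calibration diagonal
  geom_line() +
  geom_point(aes(size = n)) +          # dot size plays the role of a rag/rug
  labs(x = "Mean predicted", y = "Mean actual", size = "Obs. per bin")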
I would hope that the combination:
cal <- generateCalibrationData(bmr)
plotCalibration(cal)
would be available for regression tasks; at present it doesn't seem to be (reproducible example below):
# Practice Data
library("mlr")
library(mlbench)
data(BostonHousing)
dim(BostonHousing)
head(BostonHousing)
# Define Nested Cross-Validation Strategy
cv.inner <- makeResampleDesc("CV", iters = 2)
cv.outer <- makeResampleDesc("CV", iters = 6)
# Define Performance Measures
perf.measures <- list(rsq, mae)
# Create Task
bh.task <- makeRegrTask(id = "bh",
data = BostonHousing,
target = "medv")
# Create Tuned KSVM Learner
ksvm <- makeLearner("regr.ksvm")
ksvm.ps <- makeParamSet(makeDiscreteParam("C", values = 2^(-2:2)),
makeDiscreteParam("sigma", values = 2^(-2:2)))
ksvm.ctrl <- makeTuneControlGrid()
ksvm.lrn <- makeTuneWrapper(ksvm,
resampling = cv.inner,
measures = perf.measures,
par.set = ksvm.ps,
control = ksvm.ctrl,
show.info = FALSE)
# Create Tuned Random Forest Learner
rf <- makeLearner("regr.randomForest",
fix.factors.prediction = TRUE)
rf.ps <- makeParamSet(makeDiscreteParam("mtry", values = c(2, 3, 5)))
rf.ctrl <- makeTuneControlGrid()
rf.lrn <- makeTuneWrapper(rf,
resampling = cv.inner,
measures = perf.measures,
par.set = rf.ps,
control = rf.ctrl,
show.info = FALSE)
# Run Cross-Validation Experiments
bh.lrns <- list(ksvm.lrn, rf.lrn)
bh.bmr <- benchmark(learners = bh.lrns,
tasks = bh.task,
resampling = cv.outer,
measures = perf.measures,
show.info = FALSE)
# Calibration Charts
bh.cal <- generateCalibrationData(bh.bmr)
plotCalibration(bh.cal)
which yields:
> bh.cal <- generateCalibrationData(bh.bmr)
Error in checkPrediction(x, task.type = "classif", predict.type = "prob") :
Prediction must be one of 'classif', but is: 'regr'
> sessionInfo()
R version 3.2.3 (2015-12-10)
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] mlbench_2.1-1 ROCR_1.0-7 gplots_3.0.1 mlr_2.9
[5] stringi_1.1.1 ParamHelpers_1.10 ggplot2_2.1.0 BBmisc_1.10
loaded via a namespace (and not attached):
[1] digest_0.6.9 htmltools_0.3.5 R6_2.2.0 splines_3.2.3
[5] scales_0.4.0 assertthat_0.1 grid_3.2.3 stringr_1.0.0
[9] bitops_1.0-6 checkmate_1.8.2 gdata_2.17.0 survival_2.38-3
[13] munsell_0.4.3 tibble_1.2 randomForest_4.6-12 httpuv_1.3.3
[17] parallelMap_1.3 mime_0.5 DBI_0.5-1 labeling_0.3
[21] chron_2.3-47 shiny_1.0.0 KernSmooth_2.23-15 plyr_1.8.4
[25] data.table_1.9.6 magrittr_1.5 reshape2_1.4.1 kernlab_0.9-25
[29] ggvis_0.4.3 caTools_1.17.1 gtable_0.2.0 colorspace_1.2-6
[33] tools_3.2.3 parallel_3.2.3 dplyr_0.5.0 xtable_1.8-2
[37] gtools_3.5.0 backports_1.0.4 Rcpp_0.12.4
Say I have two htmlwidgets
# Load energy projection data
library(networkD3)
URL <- paste0(
"https://cdn.rawgit.com/christophergandrud/networkD3/",
"master/JSONdata/energy.json")
Energy <- jsonlite::fromJSON(URL)
# Plot
sankeyNetwork(Links = Energy$links, Nodes = Energy$nodes, Source = "source",
Target = "target", Value = "value", NodeID = "name",
units = "TWh", fontSize = 12, nodeWidth = 30)
and
library(leaflet)
data(quakes)
# Show first 20 rows from the `quakes` dataset
leaflet(data = quakes[1:20,]) %>% addTiles() %>%
addMarkers(~long, ~lat, popup = ~as.character(mag))
And I want to put them side by side in an HTML page. How can I do this? Could I use an iframe? Something else?
There are lots of ways to answer this. Sizing and positioning often vary based on who authored the htmlwidget, so you might need to experiment a little. If you don't plan to use a CSS framework with grid helpers, the easiest way is to wrap each htmlwidget in tags$div() and use CSS. You might also be interested in the very nice new flexbox-based dashboard package from RStudio: http://github.com/rstudio/flexdashboard.
# Load energy projection data
library(networkD3)
URL <- paste0(
"https://cdn.rawgit.com/christophergandrud/networkD3/",
"master/JSONdata/energy.json")
Energy <- jsonlite::fromJSON(URL)
# Plot
sn <- sankeyNetwork(Links = Energy$links, Nodes = Energy$nodes, Source = "source",
Target = "target", Value = "value", NodeID = "name",
units = "TWh", fontSize = 12, nodeWidth = 30,
width = "100%")
library(leaflet)
data(quakes)
# Show first 20 rows from the `quakes` dataset
leaf <- leaflet(data = quakes[1:20,]) %>% addTiles() %>%
addMarkers(~long, ~lat, popup = ~as.character(mag))
library(htmltools)
browsable(
tagList(list(
tags$div(
style = 'width:50%;display:block;float:left;',
sn
),
tags$div(
style = 'width:50%;display:block;float:left;',
leaf
)
))
)
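For completeness, the flexdashboard route mentioned above would look roughly like this as an R Markdown file (a sketch; it assumes sn and leaf are built in the setup chunk exactly as in the code above):
---
title: "Widgets side by side"
output: flexdashboard::flex_dashboard
---

```{r setup, include=FALSE}
library(networkD3)
library(leaflet)
# create `sn` and `leaf` here, exactly as in the code above
```

Column
-------------------------------------

### Sankey

```{r}
sn
```

Column
-------------------------------------

### Quakes

```{r}
leaf
```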
I want to convert a JSON file into a dataframe in R with the following code:
library(rjson)
link <- 'https://www.dropbox.com/s/ckfn1fpkcix1ccu/bevingenbag.json'
document <- fromJSON(file = link, method = 'C')
bev <- do.call("cbind", document)
I'm getting this:
type features
1 FeatureCollection list(type = "Feature", geometry = list(type = "Point", coordinates = c(6.54800000288927, 52.9920000044505)), properties = list(gid = "1496600", yymmdd = "19861226", lat = "52.992", lon = "6.548", mag = "2.8", depth = "1.0", knmilocatie = "Assen", baglocatie = "Assen", tijd = "74751"))
which is the first row of a matrix. All the other rows have the same structure. I'm interested in the properties = list(gid = "1496600", yymmdd = "19861226", lat = "52.992", lon = "6.548", mag = "2.8", depth = "1.0", knmilocatie = "Assen", baglocatie = "Assen", tijd = "74751") part, which should be converted into a dataframe with the columns gid, yymmdd, lat, lon, mag, depth, knmilocatie, baglocatie, tijd.
I searched for and tried several solutions but none of them worked. I used the rjson package for this. I also tried the RJSONIO and jsonlite packages, but was unable to extract the desired information.
Does anyone have an idea how to solve this problem?
Here's a way to obtain the data frame:
library(rjson)
document <- fromJSON(file = "bevingenbag.json", method = 'C')
dat <- do.call(rbind, lapply(document$features,
function(x) data.frame(x$properties)))
Edit: How to replace empty values with NA:
dat$baglocatie[dat$baglocatie == ""] <- NA
The result:
head(dat)
gid yymmdd lat lon mag depth knmilocatie baglocatie tijd
1 1496600 19861226 52.992 6.548 2.8 1.0 Assen Assen 74751
2 1496601 19871214 52.928 6.552 2.5 1.5 Hooghalen Hooghalen 204951
3 1496602 19891201 52.529 4.971 2.7 1.2 Purmerend Kwadijk 200914
4 1496603 19910215 52.771 6.914 2.2 3.0 Emmen Emmen 21116
5 1496604 19910425 52.952 6.575 2.6 3.0 Geelbroek Ekehaar 102631
6 1496605 19910808 52.965 6.573 2.7 3.0 Eleveld Assen 40114
This is just another, quite similar, approach.
@SvenHohenstein's approach creates a data frame at each step, an expensive process. It's much faster to create vectors and re-type the whole result at the end. Also, Sven's approach makes each column a factor, which might or might not be what you want. The approach below runs about 200 times faster. This can be important if you intend to do this repeatedly. Finally, you will need to convert the columns lon, lat, mag, and depth to numeric.
library(microbenchmark)
library(rjson)
document <- fromJSON(file = "bevingenbag.json", method = 'C')
json2df.1 <- function(json){ # #SvenHohenstein approach
df <- do.call(rbind, lapply(json$features,
function(x) data.frame(x$properties, stringsAsFactors=F)))
return(df)
}
json2df.2 <- function(json){
  # bind the properties lists row-wise, then re-type everything once at the end
  df <- do.call(rbind, lapply(json[["features"]], function(x) c(x$properties)))
  df <- data.frame(apply(df, 2, as.character), stringsAsFactors = F)
  return(df)
}
microbenchmark(x<-json2df.1(document), y<-json2df.2(document), times=10)
# Unit: milliseconds
# expr min lq median uq max neval
# x <- json2df.1(document) 2304.34378 2654.95927 2822.73224 2977.75666 3227.30996 10
# y <- json2df.2(document) 13.44385 15.27091 16.78201 18.53474 19.70797 10
identical(x,y)
# [1] TRUE
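Since the question also mentions jsonlite: with jsonlite's default simplification the nested properties become a data frame directly, so a sketch along these lines should work too (assuming the file has the same GeoJSON-like structure shown above):
library(jsonlite)
doc <- jsonlite::fromJSON("bevingenbag.json")
# with simplification on (the default), features becomes a data frame and
# its `properties` column is itself a data frame with the desired columns
dat <- doc$features$properties
# re-type the numeric columns, as noted above
dat[c("lat", "lon", "mag", "depth")] <- lapply(dat[c("lat", "lon", "mag", "depth")], as.numeric)
head(dat)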