Title: Testing Workbench for Precision-Recall Curves
Description: A testing workbench to evaluate tools that calculate precision-recall curves. Saito and Rehmsmeier (2015) <doi:10.1371/journal.pone.0118432>.
Authors: Takaya Saito [aut, cre], Marc Rehmsmeier [aut]
Maintainer: Takaya Saito <[email protected]>
License: GPL-3
Version: 1.1.8
Built: 2024-11-01 04:24:58 UTC
Source: https://github.com/evalclass/prcbench
The autoplot function for evalcurve objects validates Precision-Recall curves and creates a plot of the evaluation results.
## S3 method for class 'evalcurve'
autoplot(object, base_plot = TRUE, ret_grob = FALSE, ncol = NULL, nrow = NULL,
  use_category = FALSE, multiplot_lib = "patchwork", ...)
object | An S3 object that contains evaluation results of Precision-Recall curves. |
base_plot | A Boolean value to specify whether the base points are plotted. |
ret_grob | A Boolean value to specify whether the function returns a grob object. |
ncol | An integer used for the column size of multiple panes. |
nrow | An integer used for the row size of multiple panes. |
use_category | A Boolean value to specify whether the categorical summary is used instead of the total summary. |
multiplot_lib | A string to decide which library is used to combine multiple plots. Either "patchwork" or "grid". |
... | Not used by this function. |
A data frame with validation results.
library(ggplot2)

## Plot evaluation results on test datasets c1, c2, and c3
testset <- create_testset("curve", c("c1", "c2", "c3"))
toolset <- create_toolset(set_names = "crv5")
eres1 <- run_evalcurve(testset, toolset)
autoplot(eres1)
A list that contains scores, labels, and pre-calculated recall and precision values as x and y.
data(C1DATA)
A list with the following items.
input scores
input labels
pre-calculated recall values for curve evaluation
pre-calculated precision values for curve evaluation
x position for displaying the test result in a plot
y position for displaying the test result in a plot
A list that contains scores, labels, and pre-calculated recall and precision values as x and y.
data(C2DATA)
See C1DATA.
A list that contains scores, labels, and pre-calculated recall and precision values as x and y.
data(C3DATA)
See C1DATA.
A list that contains scores, labels, and pre-calculated recall and precision values as x and y.
data(C4DATA)
See C1DATA.
The create_example_func function creates an example for the create_usrtool function.
create_example_func()
A function as an example for create_usrtool.
create_usrtool requires the same format.
create_testset for testset.
## Create a function
func <- create_example_func()
func
The create_testset function creates test datasets either for benchmarking or curve evaluation.
create_testset(test_type, set_names = NULL)
test_type | A single string to specify the type of dataset generated by this function: either "bench" for benchmarking or "curve" for curve evaluation. |
set_names | A character vector to specify the names of test datasets. |
A list of R6 test dataset objects.
run_benchmark and run_evalcurve require the list of the datasets generated by this function.
TestDataB for benchmarking test data. TestDataC, C1DATA, C2DATA, C3DATA, and C4DATA for curve evaluation test data.
create_usrdata for creating a user-defined test set.
## Create a balanced data set with 50 positives and 50 negatives
tset1 <- create_testset("bench", "b100")
tset1

## Create an imbalanced data set with 25 positives and 75 negatives
tset2 <- create_testset("bench", "i100")
tset2

## Create the C1 dataset
tset3 <- create_testset("curve", "c1")
tset3

## Create the C1 and C2 datasets
tset4 <- create_testset("curve", c("c1", "c2"))
tset4
The create_toolset function takes names of predefined tools and generates a list of wrapper functions for Precision-Recall curve calculations.
create_toolset(tool_names = NULL, set_names = NULL, calc_auc = TRUE, store_res = TRUE)
tool_names | A character vector to specify the names of performance evaluation tools. The names of the following five tools can currently be used: "ROCR", "AUCCalculator", "PerfMeas", "PRROC", and "precrec". |
set_names | A character vector to specify a predefined tool set name. Six predefined sets are currently available; "crv5", "auc5", and "def5" are used in the examples. |
calc_auc | A Boolean value to specify whether the AUC score should be calculated. |
store_res | A Boolean value to specify whether the calculated curve is retrieved and stored. |
A list of R6 tool objects.
run_benchmark and run_evalcurve require the list of the tools generated by this function.
ToolROCR, ToolAUCCalculator, ToolPerfMeas, ToolPRROC, and Toolprecrec as R6 tool classes.
## Create ROCR and precrec
toolset1 <- create_toolset(c("ROCR", "precrec"))
toolset1

## Create auc5 tools
toolset2 <- create_toolset(set_names = "auc5")
toolset2
The create_usrdata function creates various types of test datasets.
create_usrdata(test_type, scores = NULL, labels = NULL, tsname = NULL,
  base_x = NULL, base_y = NULL, text_x = NULL, text_y = NULL,
  text_x2 = text_x, text_y2 = text_y)
test_type | A single string to specify the type of dataset generated by this function: either "bench" for benchmarking or "curve" for curve evaluation. |
scores | A numeric vector to set scores. |
labels | A numeric vector to set labels. |
tsname | A single string to specify the name of the dataset. |
base_x | A numeric vector to set pre-calculated recall values for curve evaluation. |
base_y | A numeric vector to set pre-calculated precision values for curve evaluation. |
text_x | A single numeric value to set the x position for displaying the test result in a plot. |
text_y | A single numeric value to set the y position for displaying the test result in a plot. |
text_x2 | A single numeric value to set the x position for displaying the test result (grouped into categories) in a plot. |
text_y2 | A single numeric value to set the y position for displaying the test result (grouped into categories) in a plot. |
A list of R6 test dataset objects.
create_testset for creating a predefined test set. TestDataB for benchmarking test data. TestDataC for curve evaluation test data.
## Create a test dataset for benchmarking
testset2 <- create_usrdata("bench", scores = c(0.1, 0.2), labels = c(1, 0), tsname = "m1")
testset2

## Create a test dataset for curve evaluation
testset <- create_usrdata("curve", scores = c(0.1, 0.2), labels = c(1, 0),
  base_x = c(0, 1.0), base_y = c(0, 0.5))
testset
The create_usrtool function takes the name of a user-defined tool and a corresponding function, and generates a list of wrapper functions for Precision-Recall curve calculations.
create_usrtool(tool_name, func, calc_auc = TRUE, store_res = TRUE, x = NA, y = NA)
tool_name | A single string to specify the name of a user-defined tool. |
func | A function to calculate a Precision-Recall curve and the AUC. It should take an element of the test dataset generated by create_testset. |
calc_auc | A Boolean value to specify whether the AUC score should be calculated. |
store_res | A Boolean value to specify whether the calculated curve is retrieved and stored. |
x | Set pre-calculated recall values. |
y | Set pre-calculated precision values. |
A list of R6 tool objects.
create_toolset to create a predefined tool set. create_testset for testset. create_example_func to create an example function.
## Create a new tool interface called "xyz"
efunc <- create_example_func()
toolset1 <- create_usrtool("xyz", efunc)
toolset1

## Example function with a correct argument
testset <- create_usrdata("bench", scores = c(0.1, 0.2), labels = c(1, 0))
retf <- efunc(testset[[1]])
retf
The prcbench package provides four categories of important functions: tool interface, test data interface, benchmarking, and curve evaluation.
The create_toolset function creates a common interface for five different tools that calculate Precision-Recall curves. These tools are ROCR, AUCCalculator, PerfMeas, PRROC, and precrec.
The create_usrtool function helps users create the same interface as the predefined ones for their own tools.
The create_testset function creates two different types of test data sets. The first type is for benchmarking, and the second type is for curve evaluation.
The create_usrdata function helps users make their own test data sets.
The run_benchmark function takes a tool set and a test data set and runs microbenchmark on them.
The run_evalcurve function takes a tool set and a test data set and evaluates the accuracy of Precision-Recall curves for them. A typical workflow is sketched below.
The run_benchmark function runs microbenchmark for specified tools and test datasets.
run_benchmark(testset, toolset, times = 5, unit = "ms", use_sys_time = FALSE)
testset | A character vector to specify a test set generated by create_testset. |
toolset | A character vector to specify a tool set generated by create_toolset. |
times | The number of iterations used in microbenchmark. |
unit | A single string to specify the time unit used in the microbenchmark results. |
use_sys_time | A Boolean value to specify whether system.time is used instead of microbenchmark. |
A data frame of microbenchmark results with additional columns.
create_testset to generate a test dataset. create_toolset to generate a tool set.
microbenchmark for benchmarking details.
## Not run:
## Benchmarking for b10 and i10 test sets and the def5 tool set
testset <- create_testset("bench", c("b10", "i10"))
toolset <- create_toolset(set_names = "def5")
res1 <- run_benchmark(testset, toolset)
res1
## End(Not run)
The run_evalcurve function runs several tests to evaluate the accuracy of Precision-Recall curves.
run_evalcurve(testset, toolset, auto_combo = TRUE)
testset | A character vector to specify a test set generated by create_testset. |
toolset | A character vector to specify a tool set generated by create_toolset. |
auto_combo | A Boolean value to specify whether a combination of test and tool sets is automatically created. |
A data frame with validation results.
create_testset to generate a test dataset. create_toolset to generate a tool set.
## Evaluate curves for c1, c2, c3 test sets and crv5 tool set
testset <- create_testset("curve", c("c1", "c2", "c3"))
toolset <- create_toolset(set_names = "crv5")
res1 <- run_evalcurve(testset, toolset)
res1
R6 class of test data set for performance evaluation tools.
An R6 class object.
TestDataB is a class that contains scores and labels for performance evaluation tools. It provides the necessary methods for benchmarking.
new()
Default class initialization method.
TestDataB$new(scores = NULL, labels = NULL, tsname = NA)
scores
A vector of scores.
labels
A vector of labels.
tsname
A dataset name.
get_tsname()
Get the dataset name.
TestDataB$get_tsname()
get_scores()
Get a vector of scores.
TestDataB$get_scores()
get_labels()
Get a vector of labels.
TestDataB$get_labels()
get_fg()
Get a vector of positive scores.
TestDataB$get_fg()
get_bg()
Get a vector of negative scores.
TestDataB$get_bg()
get_fname()
Get a file name that contains scores and labels.
TestDataB$get_fname()
del_file()
Delete the file with scores and labels.
TestDataB$del_file()
print()
Pretty print of the test dataset.
TestDataB$print(...)
...
Not used.
clone()
The objects of this class are cloneable with this method.
TestDataB$clone(deep = FALSE)
deep
Whether to make a deep clone.
create_testset for creating a list of test datasets.
TestDataC is derived from this class for curve evaluation.
## Initialize with scores, labels, and a dataset name
testset <- TestDataB$new(c(0.1, 0.2, 0.3), c(0, 1, 1), "m1")
testset
R6 class of test dataset for Precision-Recall curve evaluation.
An R6 class object.
TestDataC is a class that contains scores and labels for performance evaluation tools. It provides the necessary methods for curve evaluation.
prcbench::TestDataB -> TestDataC
set_basepoints_x()
Set pre-calculated recall values for curve evaluation.
TestDataC$set_basepoints_x(x)
x
A recall value.
set_basepoints_y()
Set pre-calculated precision values for curve evaluation.
TestDataC$set_basepoints_y(y)
y
A precision value.
get_basepoints_x()
Get pre-calculated recall values for curve evaluation.
TestDataC$get_basepoints_x()
get_basepoints_y()
Get pre-calculated precision values for curve evaluation.
TestDataC$get_basepoints_y()
set_textpos_x()
Set the position x
for displaying the test result in a plot.
TestDataC$set_textpos_x(x)
x
Position x of the test result.
set_textpos_y()
Set the y
position for displaying the test result in a plot.
TestDataC$set_textpos_y(y)
y
Position y of the test result.
set_textpos_x2()
Set the x
position for displaying the test result in a plot.
TestDataC$set_textpos_x2(x)
x
Position x of the test result.
set_textpos_y2()
Set the y
position for displaying the test result in a plot.
TestDataC$set_textpos_y2(y)
y
Position y of the test result.
get_textpos_x()
Get the position x
for displaying the test result in a plot.
TestDataC$get_textpos_x()
get_textpos_y()
Get the position y
for displaying the test result in a plot.
TestDataC$get_textpos_y()
get_textpos_x2()
Get the x
position for displaying the test result in a plot.
TestDataC$get_textpos_x2()
get_textpos_y2()
Get the y
position for displaying the test result in a plot.
TestDataC$get_textpos_y2()
clone()
The objects of this class are cloneable with this method.
TestDataC$clone(deep = FALSE)
deep
Whether to make a deep clone.
create_testset for creating a list of test datasets.
It is derived from TestDataB.
## Initialize with scores, labels, and a dataset name
testset <- TestDataC$new(c(0.1, 0.2), c(1, 0), "c4")
testset

## Set base points
testset$set_basepoints_x(c(0.13, 0.2))
testset$set_basepoints_y(c(0.5, 0.6))
testset
R6 class of the AUCCalculator tool.
An R6 class object.
ToolAUCCalculator is a wrapper class for the AUCCalculator tool, which is a Java library that provides calculations of ROC and Precision-Recall curves.
prcbench::ToolIFBase -> ToolAUCCalculator
prcbench::ToolIFBase$call()
prcbench::ToolIFBase$get_auc()
prcbench::ToolIFBase$get_result()
prcbench::ToolIFBase$get_setname()
prcbench::ToolIFBase$get_toolname()
prcbench::ToolIFBase$get_x()
prcbench::ToolIFBase$get_y()
prcbench::ToolIFBase$print()
prcbench::ToolIFBase$set_setname()
prcbench::ToolIFBase$set_toolname()
new()
Default class initialization method.
ToolAUCCalculator$new(...)
...
Set value for jarpath.
set_jarpath()
It sets an AUCCalculator jar file.
ToolAUCCalculator$set_jarpath(jarpath = NULL)
jarpath
File path of the AUCCalculator jar file, e.g. "/path1/path2/auc2.jar".
set_curvetype()
It sets the type of curve.
ToolAUCCalculator$set_curvetype(curvetype = "SPR")
curvetype
"SPR", "PR", or "ROC"
set_auctype()
It sets the type of calculation method.
ToolAUCCalculator$set_auctype(auctype)
auctype
"java" or "r"
clone()
The objects of this class are cloneable with this method.
ToolAUCCalculator$clone(deep = FALSE)
deep
Whether to make a deep clone.
This class is derived from ToolIFBase.
create_toolset for creating a list of tools.
## Initialization
toolauccalc <- ToolAUCCalculator$new()

## Show object info
toolauccalc

## create_toolset should be used for benchmarking and curve evaluation
toolauccalc2 <- create_toolset("AUCCalculator")
Base class of performance evaluation tools.
An R6 class object.
ToolIFBase is an abstract class that provides a uniform interface for performance evaluation tools.
new()
Default class initialization method.
ToolIFBase$new(...)
...
Set values for setname, calc_auc, store_res, x, and y.
call()
It calls the tool to calculate precision-recall curves.
ToolIFBase$call(testset, calc_auc, store_res)
testset
An R6 object generated by the create_testset function.
calc_auc
A Boolean value to specify whether the AUC score should be calculated.
store_res
A Boolean value to specify whether the calculated curve is retrieved and stored.
get_toolname()
Get the name of the tool.
ToolIFBase$get_toolname()
set_toolname()
Set the name of the tool.
ToolIFBase$set_toolname(toolname)
toolname
Name of the tool.
get_setname()
Get the name of the tool set.
ToolIFBase$get_setname()
set_setname()
Set the name of the tool set.
ToolIFBase$set_setname(setname)
setname
Name of the tool set.
get_result()
Get a list with curve values and the AUC score.
ToolIFBase$get_result()
get_x()
Get calculated recall values.
ToolIFBase$get_x()
get_y()
Get calculated precision values.
ToolIFBase$get_y()
get_auc()
Get the AUC score.
ToolIFBase$get_auc()
print()
Pretty print of the tool interface.
ToolIFBase$print(...)
...
Not used.
clone()
The objects of this class are cloneable with this method.
ToolIFBase$clone(deep = FALSE)
deep
Whether to make a deep clone.
ToolROCR, ToolAUCCalculator, ToolPerfMeas, ToolPRROC, and Toolprecrec are derived from this class.
create_toolset for creating a list of tools.
R6 class of the PerfMeas tool.
An R6 class object.
ToolPerfMeas is a wrapper class for the PerfMeas tool, which is an R library that provides several performance measures.
prcbench::ToolIFBase -> ToolPerfMeas
prcbench::ToolIFBase$call()
prcbench::ToolIFBase$get_auc()
prcbench::ToolIFBase$get_result()
prcbench::ToolIFBase$get_setname()
prcbench::ToolIFBase$get_toolname()
prcbench::ToolIFBase$get_x()
prcbench::ToolIFBase$get_y()
prcbench::ToolIFBase$initialize()
prcbench::ToolIFBase$print()
prcbench::ToolIFBase$set_setname()
prcbench::ToolIFBase$set_toolname()
clone()
The objects of this class are cloneable with this method.
ToolPerfMeas$clone(deep = FALSE)
deep
Whether to make a deep clone.
This class is derived from ToolIFBase.
create_toolset for creating a list of tools.
## Initialization
toolperf <- ToolPerfMeas$new()

## Show object info
toolperf

## create_toolset should be used for benchmarking and curve evaluation
toolperf2 <- create_toolset("PerfMeas")
R6 class of the precrec tool.
An R6 class object.
Toolprecrec is a wrapper class for the precrec tool, which is an R library that provides calculations of ROC and Precision-Recall curves.
prcbench::ToolIFBase -> Toolprecrec
prcbench::ToolIFBase$call()
prcbench::ToolIFBase$get_auc()
prcbench::ToolIFBase$get_result()
prcbench::ToolIFBase$get_setname()
prcbench::ToolIFBase$get_toolname()
prcbench::ToolIFBase$get_x()
prcbench::ToolIFBase$get_y()
prcbench::ToolIFBase$print()
prcbench::ToolIFBase$set_setname()
prcbench::ToolIFBase$set_toolname()
new()
Default class initialization method.
Toolprecrec$new(...)
...
Set value for x_bins.
set_x_bins()
Set the number of supporting points as the number of bins.
Toolprecrec$set_x_bins(x_bins)
x_bins
Set value for x_bins.
clone()
The objects of this class are cloneable with this method.
Toolprecrec$clone(deep = FALSE)
deep
Whether to make a deep clone.
This class is derived from ToolIFBase.
create_toolset for creating a list of tools.
## Initialization
toolprecrec <- Toolprecrec$new()

## Show object info
toolprecrec

## create_toolset should be used for benchmarking and curve evaluation
toolprecrec2 <- create_toolset("precrec")
R6 class of the PRROC tool.
An R6 class object.
ToolPRROC is a wrapper class for the PRROC tool, which is an R library that provides calculations of ROC and Precision-Recall curves.
prcbench::ToolIFBase -> ToolPRROC
prcbench::ToolIFBase$call()
prcbench::ToolIFBase$get_auc()
prcbench::ToolIFBase$get_result()
prcbench::ToolIFBase$get_setname()
prcbench::ToolIFBase$get_toolname()
prcbench::ToolIFBase$get_x()
prcbench::ToolIFBase$get_y()
prcbench::ToolIFBase$print()
prcbench::ToolIFBase$set_setname()
prcbench::ToolIFBase$set_toolname()
new()
Default class initialization method.
ToolPRROC$new(...)
...
Set values for curve, minStepSize, and aucType.
set_curve()
A Boolean value to specify whether the Precision-Recall curve is calculated.
ToolPRROC$set_curve(val)
val
TRUE: calculate, FALSE: not calculate.
set_minStepSize()
A numeric value to specify the minimum step size between two intermediate points.
ToolPRROC$set_minStepSize(val)
val
Step size between two points.
set_aucType()
Set the AUC calculation method.
ToolPRROC$set_aucType(val)
val
1: integral, 2: Davis Goadrich
clone()
The objects of this class are cloneable with this method.
ToolPRROC$clone(deep = FALSE)
deep
Whether to make a deep clone.
This class is derived from ToolIFBase.
create_toolset for creating a list of tools.
## Initialization
toolprroc <- ToolPRROC$new()

## Show object info
toolprroc

## create_toolset should be used for benchmarking and curve evaluation
toolprroc2 <- create_toolset("PRROC")
R6 class of the ROCR tool.
An R6 class object.
ToolROCR is a wrapper class for the ROCR tool, which is an R library that provides calculations of various performance evaluation measures.
prcbench::ToolIFBase -> ToolROCR
prcbench::ToolIFBase$call()
prcbench::ToolIFBase$get_auc()
prcbench::ToolIFBase$get_result()
prcbench::ToolIFBase$get_setname()
prcbench::ToolIFBase$get_toolname()
prcbench::ToolIFBase$get_x()
prcbench::ToolIFBase$get_y()
prcbench::ToolIFBase$initialize()
prcbench::ToolIFBase$print()
prcbench::ToolIFBase$set_setname()
prcbench::ToolIFBase$set_toolname()
clone()
The objects of this class are cloneable with this method.
ToolROCR$clone(deep = FALSE)
deep
Whether to make a deep clone.
This class is derived from ToolIFBase.
create_toolset for creating a list of tools.
## Initialization
toolrocr <- ToolROCR$new()

## Show object info
toolrocr

## create_toolset should be used for benchmarking and curve evaluation
toolrocr2 <- create_toolset("ROCR")