Compute confusion matrices from multi-model annotations

Usage

compute_confusion_matrices(
  annotations,
  gold = NULL,
  pairwise = TRUE,
  label_levels = NULL,
  sample_col = "sample_id",
  model_col = "model_id",
  label_col = "label",
  truth_col = "truth"
)

Arguments

annotations

Output from `explore()` or a compatible data frame with at least `sample_id`, `model_id`, and `label` columns.

gold

Optional vector of gold labels. Overrides the `truth` column when supplied.

pairwise

When `TRUE` (the default), pairwise cross-model confusion tables are returned even when no gold labels are available.

label_levels

Optional factor levels to enforce a consistent ordering in the resulting tables.

sample_col, model_col, label_col, truth_col

Column names to use when `annotations` is a custom data frame.
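
For a data frame that does not use the default column names, the mapping might look like the sketch below (the `df` object, its column names, and its values are all illustrative, not produced by the package):

# "doc", "annotator", and "category" are example column names, not package defaults
df <- data.frame(
  doc       = c("s1", "s1", "s2", "s2"),
  annotator = c("model_a", "model_b", "model_a", "model_b"),
  category  = c("pos", "pos", "neg", "pos")
)

compute_confusion_matrices(
  df,
  sample_col = "doc",
  model_col  = "annotator",
  label_col  = "category"
)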

Value

A list with two elements: `vs_gold`, a named list of confusion matrices (one per model, comparing that model's labels against the gold labels), and `pairwise`, a list of pairwise cross-model confusion tables.
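
Examples

A minimal sketch of typical usage, assuming `anno` is annotation output from `explore()` and `gold_labels` is a vector of reference labels supplied by the caller (both objects and the label set are illustrative):

# Compare each model against the gold labels and against the other models
res <- compute_confusion_matrices(
  anno,
  gold = gold_labels,
  label_levels = c("neg", "neu", "pos")  # illustrative label ordering
)

# Confusion matrix for a single model versus the gold labels
# ("model_a" is a placeholder model_id)
res$vs_gold[["model_a"]]

# Pairwise cross-model confusion tables (available even without gold labels)
res$pairwise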