
Recall

Definition

The proportion of actual positive cases that the model correctly identifies, measuring how completely a model captures all positive instances.

Recall (also known as sensitivity or true positive rate) answers the question: "Of all the actual positive cases, how many did the model find?" It is calculated as true positives / (true positives + false negatives). High recall means the model rarely misses positive cases. Recall is critical in applications where false negatives are costly: medical screening (missing a disease diagnosis), security threat detection (failing to detect malware), and search engines (missing relevant results). A model can trivially achieve 100% recall by predicting everything as positive, but this would have terrible precision. The precision-recall trade-off is managed by adjusting classification thresholds based on the relative costs of false positives versus false negatives for the specific application.
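The formula and the trade-off above can be sketched in a few lines of plain Python (the label lists and helper names here are illustrative, not from any particular library):

```python
def recall(y_true, y_pred):
    """Recall = true positives / (true positives + false negatives)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

def precision(y_true, y_pred):
    """Precision = true positives / (true positives + false positives)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp) if (tp + fp) else 0.0

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 4 actual positives
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # misses one positive, one false alarm
print(recall(y_true, y_pred))       # 3 / (3 + 1) = 0.75
print(precision(y_true, y_pred))    # 3 / (3 + 1) = 0.75

# Predicting everything positive gives perfect recall but poor precision.
all_positive = [1] * len(y_true)
print(recall(y_true, all_positive))     # 4 / (4 + 0) = 1.0
print(precision(y_true, all_positive))  # 4 / (4 + 4) = 0.5
```

The last two lines illustrate the caveat in the text: the always-positive classifier never produces a false negative, so its recall is perfect, but half of its predictions are false positives.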
