
Precision

Definition

The proportion of positive predictions that are actually correct, measuring how reliable a model's positive predictions are.

Precision answers the question: "Of all the items the model predicted as positive, how many actually were?" It is calculated as TP / (TP + FP), where TP is the count of true positives and FP the count of false positives. High precision means the model rarely makes false positive errors: when it says something is positive, it's usually right.

Precision is critical in applications where false positives are costly: spam filtering (falsely marking important emails as spam), content moderation (incorrectly removing benign content), and medical testing (unnecessary treatment).

There is typically a trade-off between precision and recall. Increasing one tends to decrease the other, governed by the classification threshold, and the F1 score balances both metrics. In information retrieval, precision measures what fraction of returned results are relevant.
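The formula above can be sketched in a few lines of Python. This is a minimal illustration, not a production metric implementation; the function name and example labels are invented for this sketch:

```python
def precision(y_true, y_pred):
    """Precision = TP / (TP + FP) for binary labels (1 = positive)."""
    # True positives: predicted positive and actually positive
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)
    # False positives: predicted positive but actually negative
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)
    # If the model made no positive predictions, precision is undefined;
    # return 0.0 by convention here
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

# Three positive predictions, two of which are correct → precision 2/3
print(precision([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))
```

In practice, libraries such as scikit-learn provide this as `sklearn.metrics.precision_score`, which also handles multi-class averaging.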
