@@ -29,7 +29,7 @@
)
from tests.transformers_tests.models.modeling_common import floats_numpy, random_attention_mask

-DTYPE_AND_THRESHOLDS = {"fp32": 5e-4, "fp16": 5e-3, "bf16": 5e-3}
+DTYPE_AND_THRESHOLDS = {"fp32": 5e-4, "fp16": 5e-3, "bf16": 5e-2}
Contributor

Severity: medium

Increasing the bf16 threshold by a factor of 10 to 5e-2 is a significant change that could mask future precision regressions. To maintain test quality, it's important to keep thresholds as tight as possible.

If this large threshold is necessary due to limitations with bfloat16 on CPU, please add an inline comment to explain the reason. This provides valuable context for future developers and justifies the large value.

Suggested change
-DTYPE_AND_THRESHOLDS = {"fp32": 5e-4, "fp16": 5e-3, "bf16": 5e-2}
+DTYPE_AND_THRESHOLDS = {"fp32": 5e-4, "fp16": 5e-3, "bf16": 5e-2}  # Increased for bf16 on CPU due to precision issues.
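
For context, a minimal sketch of how per-dtype tolerances like these are typically consumed in numerical-parity tests; the helper names below are hypothetical and not taken from this test suite:

```python
# Hypothetical sketch: compare a model's outputs against a reference implementation
# using the per-dtype tolerances defined above.
import numpy as np

DTYPE_AND_THRESHOLDS = {"fp32": 5e-4, "fp16": 5e-3, "bf16": 5e-2}  # bf16 loosest: only 8 mantissa bits


def max_relative_diff(expected: np.ndarray, actual: np.ndarray, eps: float = 1e-8) -> float:
    """Largest element-wise relative difference between reference and test outputs."""
    return float(np.max(np.abs(actual - expected) / (np.abs(expected) + eps)))


def assert_outputs_close(expected: np.ndarray, actual: np.ndarray, dtype: str) -> None:
    threshold = DTYPE_AND_THRESHOLDS[dtype]
    diff = max_relative_diff(expected, actual)
    assert diff < threshold, f"{dtype}: max relative diff {diff:.2e} exceeds threshold {threshold:.0e}"
```

An inline comment on the bf16 entry, as suggested above, records why its tolerance is an order of magnitude looser than fp16.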

MODES = [1]

