
Conversation

@wenyi-guo
Collaborator

support more tpu tests

@gemini-code-assist
Contributor

Summary of Changes

Hello @wenyi-guo, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the robustness of Keras's test suite when running on Tensor Processing Units (TPUs). It achieves this by re-integrating a large number of tests that were previously skipped on TPUs and by introducing a mechanism to define and apply specific numerical tolerances for assertions when executing on TPU hardware. This ensures that Keras functionalities are thoroughly validated against the unique precision characteristics of TPUs, leading to more reliable and performant models on these accelerators.

Highlights

  • Expanded TPU Test Coverage: A significant number of tests previously excluded from TPU runs have been re-enabled by drastically reducing the excluded_tpu_tests.txt file, allowing for more comprehensive validation on TPU environments.
  • TPU-Specific Assertion Tolerances: Introduced tpu_atol, tpu_rtol, and tpu_decimal parameters to assertAllClose and assertAlmostEqual calls across numerous test files. These parameters allow for adjusted precision checks tailored to the numerical characteristics of TPU computations.
  • Dynamic TPU Detection: The TestCase class now includes logic to detect if tests are running on a TPU (for both JAX and TensorFlow backends), enabling the dynamic application of TPU-specific assertion tolerances when appropriate.
  • Conditional Test Behavior: Implemented conditional logic in specific tests (e.g., test_argmax_negative_zero, test_argmin_negative_zero, test_spectrogram_error) and explicitly skipped test_eig on JAX+TPU, addressing known differences or limitations in TPU computations.
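The tolerance-override mechanism described in these highlights can be sketched roughly as follows. Note that `resolve_tolerances`, `assert_all_close`, and the `on_tpu` flag are illustrative names for this sketch, not the PR's actual implementation; the real change lives in Keras's `TestCase` class.

```python
import numpy as np

def resolve_tolerances(atol, rtol, tpu_atol=None, tpu_rtol=None, on_tpu=False):
    """Return (atol, rtol), preferring the TPU-specific values when on TPU."""
    if on_tpu:
        if tpu_atol is not None:
            atol = tpu_atol
        if tpu_rtol is not None:
            rtol = tpu_rtol
    return atol, rtol

def assert_all_close(x, y, atol=1e-6, rtol=1e-6,
                     tpu_atol=None, tpu_rtol=None, on_tpu=False):
    """assertAllClose-style check with optional TPU tolerance overrides."""
    atol, rtol = resolve_tolerances(atol, rtol, tpu_atol, tpu_rtol, on_tpu)
    np.testing.assert_allclose(np.asarray(x), np.asarray(y),
                               atol=atol, rtol=rtol)

# This comparison fails at the default 1e-6 tolerance, but passes once the
# looser TPU-specific atol is applied:
assert_all_close([1.0, 2.0], [1.0005, 2.0], tpu_atol=1e-2, on_tpu=True)
```

The key design point is that non-TPU runs are unaffected: the `tpu_*` parameters are ignored unless the detection logic reports a TPU, so existing strict tolerances still apply everywhere else.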
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request enables a large number of TPU tests by introducing TPU-specific precision tolerances. The core changes in test_case.py to support tpu_atol and tpu_rtol are well-implemented. However, I've identified a critical issue in the TensorFlow TPU detection logic that could break tests in non-TPU environments. Additionally, there's a bug in a numpy test and several instances of extremely loose tolerances that could mask real issues on TPUs. Addressing these points will improve the robustness and reliability of the test suite.
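The reviewer's concern above is that a naive TensorFlow TPU check can itself raise in environments without a TPU (or without TensorFlow). A defensive version of such a check might look like the sketch below; the helper name and the broad exception guard are illustrative, though `tf.config.list_logical_devices` is TensorFlow's public API for enumerating logical devices.

```python
def tf_uses_tpu():
    """Best-effort check for a TPU under the TensorFlow backend.

    Returns False rather than raising when TensorFlow is missing or no
    TPU runtime is configured, so non-TPU environments are unaffected.
    """
    try:
        import tensorflow as tf
        # list_logical_devices("TPU") returns [] when no TPU is available.
        return bool(tf.config.list_logical_devices("TPU"))
    except Exception:
        return False
```

Wrapping the probe this way means the tolerance-selection logic simply falls through to the default (strict) tolerances whenever detection fails, which addresses the "could break tests in non-TPU environments" failure mode.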

@codecov-commenter

codecov-commenter commented Dec 2, 2025

Codecov Report

❌ Patch coverage is 63.15789% with 14 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.61%. Comparing base (74fba84) to head (63f5d72).
⚠️ Report is 2 commits behind head on master.

Files with missing lines | Patch % | Lines
keras/src/testing/test_case.py | 61.11% | 10 Missing and 4 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21887      +/-   ##
==========================================
+ Coverage   82.57%   82.61%   +0.03%     
==========================================
  Files         577      578       +1     
  Lines       59650    59819     +169     
  Branches     9356     9391      +35     
==========================================
+ Hits        49254    49417     +163     
+ Misses       7984     7979       -5     
- Partials     2412     2423      +11     
Flag | Coverage Δ
keras | 82.42% <57.89%> (+0.03%) ⬆️
keras-jax | 62.25% <39.47%> (-0.61%) ⬇️
keras-numpy | 57.44% <31.57%> (-0.08%) ⬇️
keras-openvino | 34.32% <15.78%> (-0.01%) ⬇️
keras-tensorflow | 64.41% <42.10%> (+0.01%) ⬆️
keras-torch | 63.57% <31.57%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@wenyi-guo marked this pull request as draft December 2, 2025 22:02
@wenyi-guo marked this pull request as ready for review December 2, 2025 22:41
Collaborator

@hertschuh left a comment


Thanks for all the changes!

Comment on lines 420 to 421

if testing.jax_uses_tpu():
    self.skipTest("Skipping test with JAX + TPU as it's not supported")
Collaborator


Oh weird, what's the error?

Collaborator Author


NotImplementedError: MLIR translation rule for primitive 'eig' not found for platform tpu
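The skip above works because JAX has no TPU lowering for the `eig` primitive, so the test must be gated on the platform. A helper like `testing.jax_uses_tpu()` could be sketched as a check on the device platform; the implementation below is illustrative (in practice the `devices` argument would be `jax.devices()`, whose `Device` objects expose a `platform` attribute such as "cpu", "gpu", or "tpu"):

```python
from types import SimpleNamespace

def uses_tpu(devices):
    """True if any device in the list reports the "tpu" platform."""
    return any(getattr(d, "platform", None) == "tpu" for d in devices)

# Illustrative stand-ins for JAX Device objects:
cpu_only = [SimpleNamespace(platform="cpu")]
with_tpu = [SimpleNamespace(platform="cpu"), SimpleNamespace(platform="tpu")]
print(uses_tpu(cpu_only), uses_tpu(with_tpu))  # False True
```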

@hertschuh previously approved these changes Dec 3, 2025
Collaborator

@hertschuh left a comment


Approving to run the tests.

@google-ml-butler bot added the kokoro:force-run and ready to pull labels Dec 3, 2025
@google-ml-butler bot removed the ready to pull label Dec 3, 2025
@wenyi-guo added the ready to pull label Dec 3, 2025
@hertschuh dismissed their stale review December 3, 2025 19:28

Removing review to re-trigger TPU tests

Collaborator

@hertschuh left a comment


Approving to re-trigger TPU tests


Labels

awaiting review · ready to pull · size:L

5 participants