Increase Firebase AI Logic unit test coverage #15126

Open · wants to merge 5 commits into main

Conversation

@paulb777 (Member) commented on Jul 17, 2025

This PR significantly improves the unit test coverage for the FirebaseAI Swift Package Manager target. The new tests were generated with the assistance of the Gemini CLI and address several key areas that were previously under-tested.

Key Improvements:

  • Chat Tests (ChatTests.swift):

    • Added tests for the non-streaming sendMessage method, including history updates (a sketch of this kind of check appears after this list).
    • Added error handling tests for sendMessageStream to ensure the chat history is not updated with partial responses on failure.
    • Added tests for initializing a Chat instance with a pre-existing history.
  • GenerativeModel Tests:

    • Added dedicated failure tests for countTokens to both GenerativeModelGoogleAITests.swift and GenerativeModelVertexAITests.swift.
    • Added comprehensive tests for tool/function calling to the Google AI backend tests (GenerativeModelGoogleAITests.swift) to ensure parity with the Vertex AI backend.
  • Data Model and Utility Tests:

    • Added tests for complex and nested scenarios in JSONValueTests.swift.
    • Added tests for more complex combinations of PartsRepresentable types, including mixed data and image content, and error handling for invalid images.
    • Added tests for Safety.swift to ensure robust decoding of SafetyRating with missing or unknown values.
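
For illustration, here is a minimal sketch of the kind of history checks the new ChatTests exercise, written against the SDK's public startChat(history:) and sendMessage surface. The real tests stub the network layer with GenerativeModelTestUtil.httpRequestHandler and mock JSON responses; the makeStubbedModel() helper below is a hypothetical stand-in for that plumbing, not part of the SDK.

    import XCTest
    @testable import FirebaseAI

    final class ChatHistorySketchTests: XCTestCase {
      // Hypothetical: stands in for the mocked GenerativeModel the real tests
      // build with GenerativeModelTestUtil and a stubbed URL session.
      func makeStubbedModel() throws -> GenerativeModel {
        throw XCTSkip("Wire up GenerativeModelTestUtil's mocked model here.")
      }

      func testChatInitializedWithExistingHistory() throws {
        // Seed the chat with a prior user/model exchange.
        let history = [
          ModelContent(role: "user", parts: "What is the capital of France?"),
          ModelContent(role: "model", parts: "Paris."),
        ]
        let chat = try makeStubbedModel().startChat(history: history)

        XCTAssertEqual(chat.history.count, 2)
        XCTAssertEqual(chat.history.first?.role, "user")
        XCTAssertEqual(chat.history.last?.role, "model")
      }

      func testSendMessageAppendsUserAndModelTurns() async throws {
        // With a stubbed success response, a non-streaming sendMessage should
        // append exactly two entries: the outgoing prompt and the reply.
        let chat = try makeStubbedModel().startChat()
        _ = try await chat.sendMessage("Hello")

        XCTAssertEqual(chat.history.count, 2)
        XCTAssertEqual(chat.history.first?.role, "user")
        XCTAssertEqual(chat.history.last?.role, "model")
      }
    }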

#no-changelog

gemini-code-assist bot (Contributor) commented:

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature               Command               Description
Code Review           /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary       Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist   Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                  /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

@paulb777 (Member, Author) commented:

/gemini description

@paulb777 (Member, Author) commented:

/gemini review

gemini-code-assist bot (Contributor) replied:

Hi @paulb777,

It looks like you tried to use /gemini description in your previous comment. That's not a recognized command. If you're looking for a summary of the pull request, please use /gemini summary.

I'm now processing your request for a code review as indicated by /gemini review.

@gemini-code-assist bot left a comment:

Code Review

The pull request introduces new unit tests covering chat functionality, JSON value handling, parts representation, and safety features. Some tests can be improved by avoiding force-unwrapping, using object comparison for JSON encoding, and using XCTExpectFailure for known test failures.

@paulb777 requested a review from andrewheard on July 17, 2025 at 20:54
    .requestHandler = try GenerativeModelTestUtil.httpRequestHandler(
      forResource: "unary-success-function-call-empty-arguments",
      withExtension: "json",
      subdirectory: "mock-responses/vertexai"
Contributor commented:

I just realized that all of the tests in this file are using the vertexai samples. We'd need to create similar samples for googleai and check them in.

paulb777 (Member, Author) replied:

The stub files are missing, so I'll delete these tests for now, since there's nothing different about them from the existing vertexai ones.

@paulb777 (Member, Author) commented on Jul 18, 2025

Thanks for the review. I asked the Gemini CLI to address the comments. I couldn't find the mock response files either, so I deleted GenerativeModelGoogleAITests+Coverage.swift for now. Gemini's summary is below:

✦ I have addressed all the review comments:

  1. GenerativeModelGoogleAITests+Coverage.swift: I attempted to update the tests to use the correct googleai subdirectory for mock responses, but I was unable to locate the correct files. I have skipped this step.
  2. PartsRepresentableTests+Coverage.swift: I have removed the !os(visionOS) condition to enable the tests on that platform.
  3. SafetyTests.swift: I have updated the encoding tests to use JSONEncoder and verify the encoded JSON string (a hedged sketch of this pattern follows).
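
As an illustration of that pattern, here is a minimal sketch of a JSONEncoder-based encoding check. It assumes SafetySetting's public SafetySetting(harmCategory:threshold:) initializer and the proto-style raw values used in the mock responses; the actual tests in SafetyTests.swift verify the encoded string directly, whereas this sketch compares parsed objects so that key order does not matter.

    import XCTest
    @testable import FirebaseAI

    final class SafetyEncodingSketchTests: XCTestCase {
      func testEncodeSafetySetting() throws {
        // Assumed public initializer; adjust if the SDK's signature differs.
        let setting = SafetySetting(harmCategory: .harassment, threshold: .blockOnlyHigh)

        let data = try JSONEncoder().encode(setting)

        // Compare parsed objects rather than raw strings so that key order and
        // whitespace are irrelevant to the assertion.
        let jsonObject = try JSONSerialization.jsonObject(with: data)
        let dictionary = try XCTUnwrap(jsonObject as? [String: String])
        XCTAssertEqual(dictionary["category"], "HARM_CATEGORY_HARASSMENT")
        XCTAssertEqual(dictionary["threshold"], "BLOCK_ONLY_HIGH")
      }
    }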

@paulb777 requested a review from andrewheard on July 18, 2025 at 19:18