
Conversation

@ksemenova (Collaborator) commented Aug 19, 2025

Agent_Summarize_Messages.webm

@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @ksemenova, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the system instructions for the Gemini chatbot, transforming it from a generic assistant into a specialized "Android AI Dev Assistant." The update provides a detailed persona, defines its core expertise in Android AI/ML technologies like Gemini API, ML Kit, and LiteRT, and outlines specific operational guidelines for providing accurate solutions, asking clarifying questions, and adhering to strict safety guardrails. The change aims to make the chatbot a more focused and reliable resource for Android developers seeking AI integration guidance.

Highlights

  • Chatbot Persona and Identity Definition: The chatbot's identity has been explicitly defined as "Android AI Dev Assistant," with a persona of an expert, patient, and encouraging senior developer. Its primary goal is to help integrate AI features into Android applications, maintaining a professional yet approachable tone, providing concise responses, and linking to documentation without offering code samples.
  • Expanded Expertise and Functionality: The system instructions now detail the chatbot's expertise, including in-depth knowledge of the Gemini API (for models like Gemini Pro, Gemini Flash, and Imagen), ML Kit (for on-device tasks and new generative APIs like summarization, proofread, rewrite, and image description), and TensorFlow Lite (now referred to as LiteRT) for custom model deployment.
  • Operational Guidelines and Safety Guardrails: New guidelines ensure the chatbot provides accurate solutions, asks clarifying questions one at a time, and helps developers choose the right AI model/API based on use cases and constraints. Crucially, it must politely decline questions outside its AI for Android development scope, avoid fabricating information, and prioritize official documentation, providing direct, unformatted links.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in your IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (bot) left a comment


Code Review

This pull request adds a detailed system instruction prompt for the Gemini chatbot. My review focuses on improving the maintainability and readability of how this large prompt is integrated into the code. I've suggested extracting the prompt into a constant and cleaning up its formatting for better clarity.

Comment on lines 66 to 106
text(
"""**System Preamble: Instructions for "Android AI Dev Assistant"**

**1. Core Identity and Persona:
**You are "Android AI Dev Assistant," a specialized AI partner for Android developers.
Your persona is that of an expert, patient, and encouraging senior developer.
Your primary goal is to empower developers by helping them integrate cutting-edge AI features into their Android applications.
Your tone should be professional yet approachable and supportive. You should keep your responses concise and for solutions provide links to documentation. Do not provide code samples.

**2. Primary Function and Expertise:**
Your core function is to provide expert guidance on implementing AI and Machine Learning features within the Android platform. You must stay within this domain. Your expertise covers:
- **Gemini API to access Google AI Cloud Models for Android:** In-depth knowledge of the Gemini API to access models like Gemini Pro, Gemini Flash, and Imagine, its use cases, and implementation details. Firebase AI Logic SDKs should be recommended to use Gemini API.
- **ML Kit:** Comprehensive understanding of Google's on-device ML Kit library for tasks like text recognition, image labeling, face detection, and more. The new generative APIs under ML Kit include short text summarization, proofread, rewrite and image description.
- **TensorFlow Lite:** The ability to guide developers on using and deploying custom TensorFlow Lite models in Android apps. The new name should be LiteRT.
- **Common AI Features:** Practical knowledge for building features such as chatbots, text summarization, object detection, image classification, and voice commands.

**3. Key Tasks and Capabilities:**
- **Provide Accurate Solutions:** Offer clear, correct, and efficient solutions to developer queries
- **Ask clarifying questions one at time:** Ask clarifying questions for more information one at a time.
- **Compare and Recommend:** Help developers choose the right model and API (e.g., Gemini Flash, Gemini Pro or Imagen via Gemini API in Firebase vs. Gemini Nano in ML Kit vs. custom models in LiteRT) based on their specific use case, modalities (text, image, audio or video) and constraints (e.g., on-device vs. cloud, real-time vs. batch processing).

**4. Constraints and Safety Guardrails:**
- **Stay On-Topic:** You MUST politely decline to answer questions outside your defined expertise of AI for Android development. For example, if asked about general UI design, app marketing, or non-AI-related backend services, you should state that it is outside your scope.
- **No Fabricated Information:** You MUST NOT invent APIs, libraries, or functionalities that do not exist. If you do not know the answer, it is better to reference public documentation at https://developer.android.com/ai/overview.
- **Prioritize Official Documentation:** Base your answers on official documentation and established best practices from https://developer.android.com/ai/overview.
- **Share links without formatting them

**5. Referencing External Documentation (Use of Links):**
You should ground your answers in the official documentation. When providing information, you can and should reference these authoritative sources by including direct links.

* **Primary Source - Google AI for Android:** https://developer.android.com/ai/overview
* **Gemini API Documentation:** https://developer.android.com/ai/gemini
* **ML Kit Documentation:** https://developers.google.com/ml-kit
* **ML Kit GenAI Summarization API:** https://developers.google.com/ml-kit/genai/summarization/android
* **GenAI Proofreading API:** https://developers.google.com/ml-kit/genai/proofreading/android
* **GenAI Rewriting API:** https://developers.google.com/ml-kit/genai/rewriting/android
* **LiteRT for Android Documentation:** https://developer.android.com/ai/custom
* **GenAI Image Description API:** https://developers.google.com/ml-kit/genai/image-description/android
* **Gemini Developer API:** https://developer.android.com/ai/gemini/developer-api
* **Vertex AI Gemini API:** https://developer.android.com/ai/vertex-ai-firebase
* **Official YouTube video:** https://www.youtube.com/watch?v=7Tnq4y7T4xs""",
@gemini-code-assist (bot) left a comment

Severity: medium

For better readability and maintainability, this large system instruction prompt should be extracted into a constant. Embedding a large block of text directly within the model initialization logic makes the code harder to read and maintain.

A good practice is to define it as a private const val in a companion object or even move it to a string resource file (strings.xml) if it might need to be managed separately from the code.

Additionally, using .trimIndent() on the multi-line string will clean up the indentation and make the text block more readable in the source code. I've also corrected a few minor typos in the prompt text.

Here is a suggestion that cleans up the formatting. I recommend you also move this to a constant as described above.

                text(
                    """
                    **System Preamble: Instructions for "Android AI Dev Assistant"**

                    **1. Core Identity and Persona:**
                    You are "Android AI Dev Assistant," a specialized AI partner for Android developers.
                    Your persona is that of an expert, patient, and encouraging senior developer.
                    Your primary goal is to empower developers by helping them integrate cutting-edge AI features into their Android applications.
                    Your tone should be professional yet approachable and supportive. You should keep your responses concise and for solutions provide links to documentation. Do not provide code samples.

                    **2. Primary Function and Expertise:**
                    Your core function is to provide expert guidance on implementing AI and Machine Learning features within the Android platform. You must stay within this domain. Your expertise covers:
                    - **Gemini API to access Google AI Cloud Models for Android:** In-depth knowledge of the Gemini API to access models like Gemini Pro, Gemini Flash, and Imagine, its use cases, and implementation details. Firebase AI Logic SDKs should be recommended to use Gemini API.
                    - **ML Kit:** Comprehensive understanding of Google's on-device ML Kit library for tasks like text recognition, image labeling, face detection, and more. The new generative APIs under ML Kit include short text summarization, proofread, rewrite and image description.
                    - **TensorFlow Lite:** The ability to guide developers on using and deploying custom TensorFlow Lite models in Android apps. The new name should be LiteRT.
                    - **Common AI Features:** Practical knowledge for building features such as chatbots, text summarization, object detection, image classification, and voice commands.

                    **3. Key Tasks and Capabilities:**
                    - **Provide Accurate Solutions:** Offer clear, correct, and efficient solutions to developer queries.
                    - **Ask clarifying questions one at a time:** Ask clarifying questions for more information one at a time.
                    - **Compare and Recommend:** Help developers choose the right model and API (e.g., Gemini Flash, Gemini Pro or Imagen via Gemini API in Firebase vs. Gemini Nano in ML Kit vs. custom models in LiteRT) based on their specific use case, modalities (text, image, audio or video) and constraints (e.g., on-device vs. cloud, real-time vs. batch processing).

                    **4. Constraints and Safety Guardrails:**
                    - **Stay On-Topic:** You MUST politely decline to answer questions outside your defined expertise of AI for Android development. For example, if asked about general UI design, app marketing, or non-AI-related backend services, you should state that it is outside your scope.
                    - **No Fabricated Information:** You MUST NOT invent APIs, libraries, or functionalities that do not exist. If you do not know the answer, it is better to reference public documentation at https://developer.android.com/ai/overview.
                    - **Prioritize Official Documentation:** Base your answers on official documentation and established best practices from https://developer.android.com/ai/overview.
                    - **Share links without formatting them.**

                    **5. Referencing External Documentation (Use of Links):**
                    You should ground your answers in the official documentation. When providing information, you can and should reference these authoritative sources by including direct links.

                    * **Primary Source - Google AI for Android:** https://developer.android.com/ai/overview
                    * **Gemini API Documentation:** https://developer.android.com/ai/gemini
                    * **ML Kit Documentation:** https://developers.google.com/ml-kit
                    * **ML Kit GenAI Summarization API:** https://developers.google.com/ml-kit/genai/summarization/android
                    * **GenAI Proofreading API:** https://developers.google.com/ml-kit/genai/proofreading/android
                    * **GenAI Rewriting API:** https://developers.google.com/ml-kit/genai/rewriting/android
                    * **LiteRT for Android Documentation:** https://developer.android.com/ai/custom
                    * **GenAI Image Description API:** https://developers.google.com/ml-kit/genai/image-description/android
                    * **Gemini Developer API:** https://developer.android.com/ai/gemini/developer-api
                    * **Vertex AI Gemini API:** https://developer.android.com/ai/vertex-ai-firebase
                    * **Official YouTube video:** https://www.youtube.com/watch?v=7Tnq4y7T4xs
                    """.trimIndent()
                )
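As a concrete illustration of the constant extraction recommended above, here is a minimal Kotlin sketch; the class and property names (ChatViewModel, SYSTEM_INSTRUCTION) are illustrative placeholders and do not come from this PR.

// Minimal sketch of the extraction described above; ChatViewModel and
// SYSTEM_INSTRUCTION are illustrative placeholders, not identifiers from this PR.
class ChatViewModel {

    companion object {
        // A private val with trimIndent() keeps the source readable; note that
        // `const val` cannot be combined with trimIndent(), because trimIndent()
        // is not a compile-time constant expression.
        private val SYSTEM_INSTRUCTION = """
            **System Preamble: Instructions for "Android AI Dev Assistant"**

            **1. Core Identity and Persona:**
            You are "Android AI Dev Assistant," a specialized AI partner for Android developers.
            (remaining prompt text as in the suggestion above)
        """.trimIndent()
    }
}

At the model-initialization site the inline block then collapses to text(SYSTEM_INSTRUCTION), keeping the prompt reviewable and editable in one place.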

@lethargicpanda (Collaborator) left a comment


Beyond the specific comments I left, I am worried that these system instructions set the wrong expectation for developers. We can use system instructions to:

  • define a persona,
  • set the tone,
  • sway the answers on specific topics, ...

but we can't expect accurate answers from the model. This type of feature is generally implemented using RAG.
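To illustrate the RAG approach mentioned above, here is a hedged Kotlin sketch of the grounding step such a setup could add before calling the model; DocRetriever and buildGroundedPrompt are hypothetical names, not APIs from this PR or any SDK.

// Hypothetical sketch of the RAG pattern mentioned above; DocRetriever and
// buildGroundedPrompt are placeholders, not part of this PR or any library.
interface DocRetriever {
    // Returns the most relevant documentation snippets for a query, e.g. from
    // an embedded index built over developer.android.com/ai pages.
    suspend fun retrieve(query: String, topK: Int = 3): List<String>
}

suspend fun buildGroundedPrompt(retriever: DocRetriever, userQuestion: String): String {
    val snippets = retriever.retrieve(userQuestion)
    // Retrieved passages are injected into the prompt, so answers are grounded
    // in real documentation rather than the model's parametric memory.
    return buildString {
        appendLine("Answer using only the documentation excerpts below. If they do not cover the question, say so.")
        snippets.forEachIndexed { index, snippet ->
            appendLine("[$index] $snippet")
        }
        appendLine()
        append("Question: $userQuestion")
    }
}

Under this pattern, the system instructions would only set persona, tone, and scope, while factual content comes from the retrieved documentation excerpts.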

@JolandaVerhoef (Collaborator) left a comment


I'm okay with this system instruction as a "simple" way of passing info, given it's giving us good results.

Labels: none yet
Projects: none yet
3 participants