Add local NPU/GPU text generation support with complete Phi Silica integration #183
This PR adds local NPU/GPU text generation using Windows AI Foundry's Phi Silica model, letting the app generate text on local hardware when available instead of requiring cloud API calls.
Overview
The implementation adds a complete TurboModule infrastructure for local AI while maintaining full backward compatibility with the existing OpenAI cloud-based system. Users can now choose between local and cloud AI through a new settings toggle.
Key Changes
- LocalAI TurboModule Infrastructure
  - `NativeLocalAI.ts`: Defines capability detection and text generation APIs
  - `LocalAI.h`: Native module with complete Phi Silica integration
- Settings Integration
- AI Flow Integration
  - `AiQuery.tsx`: updated to conditionally use local AI when enabled and available (a sketch of this flow follows this list)
- Error Handling & UX
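The conditional flow in `AiQuery.tsx` mentioned above might look roughly like the following sketch. The names used here (a `useLocalAI` toggle value, `isAvailable`, `generateText`, and a `cloudFallback` callback standing in for the existing OpenAI call) are illustrative assumptions rather than the PR's exact identifiers; what matters is the order of checks: toggle on, module registered, capability confirmed, otherwise fall back to the cloud path.

```ts
// Hypothetical sketch of the local-vs-cloud decision in AiQuery.tsx.
// Identifier names here are assumptions, not the PR's exact API.
import LocalAI from './NativeLocalAI';

export async function runAiQuery(
  prompt: string,
  useLocalAI: boolean, // value of the new settings toggle
  cloudFallback: (prompt: string) => Promise<string>, // existing OpenAI path
): Promise<string> {
  // Prefer local NPU/GPU generation when the toggle is on and the
  // native module reports that Phi Silica is available on this device.
  if (useLocalAI && LocalAI != null) {
    try {
      if (await LocalAI.isAvailable()) {
        return await LocalAI.generateText(prompt);
      }
    } catch (e) {
      // If local generation fails for any reason, fall through to the
      // cloud path so the app keeps working on unsupported devices.
      console.warn('Local AI unavailable, falling back to cloud:', e);
    }
  }
  return cloudFallback(prompt);
}
```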
Technical Implementation
The LocalAI module follows the same TurboModule pattern as the existing Speech module.
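To make that pattern concrete, here is a minimal sketch of what the `NativeLocalAI.ts` spec could look like. The method names (`isAvailable`, `generateText`) and the registered module name (`LocalAI`) are assumptions inferred from the description above, not necessarily the exact signatures in this PR; on the native side they would be backed by the `LocalAI.h` implementation described next.

```ts
// Hypothetical sketch of a TurboModule spec for NativeLocalAI.ts.
// Method names and the 'LocalAI' registration name are assumptions.
import type {TurboModule} from 'react-native';
import {TurboModuleRegistry} from 'react-native';

export interface Spec extends TurboModule {
  // Capability detection: resolves to true when the Phi Silica model
  // can run on this device's NPU/GPU.
  isAvailable(): Promise<boolean>;

  // Text generation: resolves with the locally generated response.
  generateText(prompt: string): Promise<string>;
}

// Returns null on platforms where the native module is not registered,
// so callers can fall back to the cloud path.
export default TurboModuleRegistry.get<Spec>('LocalAI');
```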
The C++ implementation provides complete Phi Silica integration with:
- `Microsoft.Windows.AI.Text.LanguageModel` API usage
- model initialization via the `GetDefault()` method
- text generation via `GenerateResponseAsync()`
Hardware Requirements
Validation
This implementation provides production-ready support for Copilot+ PC users to leverage their local NPU hardware for AI text generation while ensuring the app continues to work seamlessly on all devices.
Fixes #182.