
Conversation


@renefloor renefloor commented Jun 26, 2025

🎯 Goal

Add a notification in the dogfooding app when the user is talking while muted.

FLU-107

🛠 Implementation details

Added a public SpeakingWhileMutedRecognition and an internal AudioRecognitionWebRTC implementation.

The SpeakingWhileMutedRecognition starts and stops the audio recognition when the user gets muted or unmuted.
The AudioRecognitionWebRTC creates two peer connections to listen to the audio volume, heavily inspired by the React Native implementation.
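For context, the loopback technique looks roughly like this (a minimal sketch using flutter_webrtc; the helper name, polling interval, and 'audioLevel' stats key are assumptions, not the actual AudioRecognitionWebRTC implementation):

import 'dart:async';

import 'package:flutter_webrtc/flutter_webrtc.dart';

/// Sketch: loop the microphone through two local peer connections and
/// poll the receiving side's stats for an audio level.
Future<void> startLoopbackLevelMonitor(
  MediaStream mic,
  void Function(double level) onLevel, // hypothetical callback
) async {
  final sender = await createPeerConnection({});
  final receiver = await createPeerConnection({});

  // Exchange ICE candidates locally so media can flow between the two.
  sender.onIceCandidate = (candidate) => receiver.addCandidate(candidate);
  receiver.onIceCandidate = (candidate) => sender.addCandidate(candidate);

  for (final track in mic.getAudioTracks()) {
    await sender.addTrack(track, mic);
  }

  // Local offer/answer handshake between the two connections.
  final offer = await sender.createOffer();
  await sender.setLocalDescription(offer);
  await receiver.setRemoteDescription(offer);
  final answer = await receiver.createAnswer();
  await receiver.setLocalDescription(answer);
  await sender.setRemoteDescription(answer);

  // Poll inbound stats; the 'audioLevel' key is an assumption and may
  // differ per platform and WebRTC version. A real implementation would
  // keep this timer and cancel it on stop/dispose.
  Timer.periodic(const Duration(milliseconds: 200), (_) async {
    final reports = await receiver.getStats();
    for (final report in reports) {
      final level = report.values['audioLevel'];
      if (level != null) onLevel((level as num).toDouble());
    }
  });
}

Because the connection is local, no media ever leaves the device; the second connection exists only to obtain receiver-side audio statistics.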

The SpeakingWhileMutedRecognition can be used by creating the object and listening to the state stream:

  late StreamSubscription<SpeakingWhileMutedState> _speechSubscription;
  late SpeakingWhileMutedRecognition _speakingWhileMutedRecognition;

  @override
  void initState() {
    super.initState();
    _speakingWhileMutedRecognition =
        SpeakingWhileMutedRecognition(call: widget.call);

    // Show a SnackBar whenever speech is detected while the user is muted.
    _speechSubscription =
        _speakingWhileMutedRecognition.stream.listen((state) {
      final context = this.context;
      if (state.isSpeakingWhileMuted && context.mounted) {
        final colorTheme = StreamVideoTheme.of(context).colorTheme;

        ScaffoldMessenger.maybeOf(context)?.showSnackBar(
          SnackBar(
            content: const Text('You are speaking while muted'),
            behavior: SnackBarBehavior.floating,
            backgroundColor: colorTheme.accentPrimary,
          ),
        );
      }
    });
  }

  @override
  void dispose() {
    // Cancel the subscription and release the recognition resources.
    _speechSubscription.cancel();
    _speakingWhileMutedRecognition.dispose();
    super.dispose();
  }

🎨 UI Changes

[Screenshot: floating SnackBar with the text "You are speaking while muted"]

🧪 Testing

You can test by starting or joining a call, muting yourself, and starting to talk.

☑️ Contributor Checklist

General

  • Assigned a person / code owner group (required)
  • Thread with the PR link started in a respective Slack channel (#flutter-team) (required)
  • PR is linked to the GitHub issue it resolves

☑️ Reviewer Checklist

  • Sample runs & works
  • UI Changes correct (before & after images)
  • Bugs validated (bugfixes)
  • New feature tested and works
  • All code we touched has new or updated Documentation

Summary by CodeRabbit

  • New Features
    • Users receive notifications when speaking while muted during calls.
    • Added adaptive audio recognition using WebRTC for speech detection.
  • Bug Fixes
    • None.
  • Tests
    • Added unit tests for speech-while-muted detection scenarios.
    • Introduced mock classes for audio recognition and call interfaces.
  • Documentation
    • Updated changelog to include speaking-while-muted detection feature.

@renefloor renefloor requested a review from a team as a code owner June 26, 2025 13:57

coderabbitai bot commented Jun 26, 2025

Walkthrough

This update introduces a new system to detect when a user is speaking while muted during a call. It adds audio recognition capabilities using WebRTC, exposes new audio processing APIs, implements state management for "speaking while muted" detection, and provides unit tests and supporting mocks for these features.

Changes

  • Call Screen Integration — dogfooding/lib/screens/call_screen.dart
    Integrates "speaking while muted" detection: manages recognition lifecycle, shows SnackBar when triggered.
  • Changelog Update — packages/stream_video/CHANGELOG.md
    Documents the addition of SpeakingWhileMutedRecognition in the changelog.
  • Audio Recognition Interface — packages/stream_video/lib/src/audio_processing/audio_recognition.dart
    Adds AudioRecognition interface, SoundState, and callback typedef for sound activity detection.
  • WebRTC Audio Recognition Implementation — packages/stream_video/lib/src/audio_processing/audio_recognition_webrtc.dart
    Implements AudioRecognitionWebRTC for speech detection via WebRTC; adds configuration class.
  • Speaking While Muted Recognition Logic — packages/stream_video/lib/src/audio_processing/speaking_while_muted_recognition.dart
    Adds SpeakingWhileMutedRecognition and SpeakingWhileMutedState for managing detection state.
  • Public API Exports — packages/stream_video/lib/stream_video.dart
    Exports new audio processing modules for public API access.
  • Unit Tests for Speaking While Muted — packages/stream_video/test/src/audio_processing/speaking_while_muted_recognition_test.dart
    Adds unit tests for SpeakingWhileMutedRecognition using mocks and simulated call state transitions.
  • Test Refactor for MockCall — packages/stream_video/test/src/core/client_state_test.dart
    Switches to importing MockCall from test helpers instead of local definition.
  • Test Helpers Addition — packages/stream_video/test/test_helpers.dart
    Adds MockCall and MockAudioRecognition classes for use in tests.
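For orientation, the public surface in audio_recognition.dart presumably looks something like this (the start/stop/dispose signatures appear verbatim in the review comments below; the SoundState fields are assumptions):

/// Sketch of the audio recognition surface; member names beyond those
/// quoted elsewhere in this PR are assumptions.
class SoundState {
  const SoundState({required this.isSpeaking, required this.audioLevel});

  final bool isSpeaking;
  final double audioLevel; // assumed field
}

typedef SoundStateChangedCallback = void Function(SoundState state);

abstract interface class AudioRecognition {
  Future<void> start({
    required SoundStateChangedCallback onSoundStateChanged,
  });

  Future<void> stop();

  Future<void> dispose();
}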

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CallScreen
    participant SWMR as SpeakingWhileMutedRecognition
    participant Call
    participant AR as AudioRecognition (WebRTC)
    participant SnackBar

    User->>CallScreen: Joins call
    CallScreen->>SWMR: Initialize with current Call
    loop During call
        SWMR->>Call: Listen for mute state changes
        alt User is muted
            SWMR->>AR: Start audio detection
            AR-->>SWMR: isSpeaking = true
            SWMR-->>CallScreen: isSpeakingWhileMuted = true
            CallScreen->>SnackBar: Show "You are speaking while muted"
        else User is unmuted or leaves
            SWMR->>AR: Stop detection
            SWMR-->>CallScreen: isSpeakingWhileMuted = false
        end
    end
    CallScreen->>SWMR: Dispose on call end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~15 minutes

Assessment against linked issues

Objective                                          Addressed   Explanation
Add muted while speaking notification (FLU-107)    ✅           —

Assessment against linked issues: Out-of-scope changes

No out-of-scope changes detected.

Suggested reviewers

  • Brazol

Poem

🐇
I heard a whisper, soft and slight,
While muted in a call one night.
Now with code both sharp and keen,
We catch the words that go unseen.
SnackBars hop up, quick to say:
"You’re speaking while muted—hippity hooray!"
🥕



📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4ba9730 and 8f020b4.

📒 Files selected for processing (1)
  • packages/stream_video/CHANGELOG.md (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/stream_video/CHANGELOG.md
⏰ Context from checks skipped due to timeout of 90000ms (8)
  • GitHub Check: stream_video
  • GitHub Check: stream_video_screen_sharing
  • GitHub Check: analyze
  • GitHub Check: stream_video_noise_cancellation
  • GitHub Check: stream_video_push_notification
  • GitHub Check: build
  • GitHub Check: stream_video_flutter
  • GitHub Check: analyze_legacy_version


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (3)
packages/stream_video/lib/src/audio_processing/audio_recognition.dart (1)

3-11: Add documentation to interface methods.

The interface methods lack documentation. Please add doc comments to clarify the expected behavior and lifecycle of each method.

 abstract interface class AudioRecognition {
+  /// Starts audio recognition and invokes [onSoundStateChanged] when sound state changes.
+  /// Throws if recognition is already started.
   Future<void> start({
     required SoundStateChangedCallback onSoundStateChanged,
   });
 
+  /// Stops audio recognition.
+  /// Safe to call multiple times.
   Future<void> stop();
 
+  /// Disposes of all resources.
+  /// The instance should not be used after calling this method.
   Future<void> dispose();
 }
packages/stream_video/lib/src/audio_processing/speaking_while_muted_recognition.dart (1)

55-61: Consider deferring subscription initialization.

Calling _init() in the constructor starts listening to call state immediately. This could lead to race conditions if the call is not fully initialized. Consider deferring initialization or making it explicit.

Consider either:

  1. Making initialization explicit by exposing an init() method
  2. Deferring the subscription until the first start() call
  3. Adding a check for call readiness before subscribing
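A sketch of the first option, assuming call.state exposes a value stream of CallState (member names here are illustrative, not the actual implementation):

import 'dart:async';

import 'package:stream_video/stream_video.dart';

class SpeakingWhileMutedRecognition {
  SpeakingWhileMutedRecognition({required this.call});

  final Call call;
  StreamSubscription<CallState>? _callStateSubscription;

  /// Explicit, idempotent initialization: the caller decides when the
  /// call is ready instead of the constructor subscribing immediately.
  void init() {
    _callStateSubscription ??=
        call.state.valueStream.listen(_onCallStateChanged);
  }

  void _onCallStateChanged(CallState state) {
    // Start or stop audio recognition based on the local mute state here.
  }

  Future<void> dispose() async {
    await _callStateSubscription?.cancel();
    _callStateSubscription = null;
  }
}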
packages/stream_video/lib/src/audio_processing/audio_recognition_webrtc.dart (1)

13-76: Consider if dual peer connections are necessary.

The implementation uses two peer connections to detect audio. This seems overly complex for simple audio level detection. Have you considered using the Web Audio API or directly monitoring the MediaStreamTrack's audio level?

The dual peer connection approach might be inspired by the React Native implementation mentioned in the PR description, but it adds complexity and resource overhead. Consider exploring simpler alternatives:

  1. Use the Web Audio API's AnalyserNode for direct audio level monitoring
  2. Monitor the MediaStreamTrack's statistics directly if available
  3. Document why this approach was chosen if there are specific technical reasons
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 71c2b7a and 028bf9b.

📒 Files selected for processing (10)
  • dogfooding/lib/screens/call_screen.dart (1 hunks)
  • packages/stream_video/CHANGELOG.md (1 hunks)
  • packages/stream_video/lib/src/audio_processing/audio_recognition.dart (1 hunks)
  • packages/stream_video/lib/src/audio_processing/audio_recognition_webrtc.dart (1 hunks)
  • packages/stream_video/lib/src/audio_processing/speaking_while_muted_recognition.dart (1 hunks)
  • packages/stream_video/lib/src/call_state.dart (1 hunks)
  • packages/stream_video/lib/stream_video.dart (1 hunks)
  • packages/stream_video/test/src/audio_processing/speaking_while_muted_recognition_test.dart (1 hunks)
  • packages/stream_video/test/src/core/client_state_test.dart (1 hunks)
  • packages/stream_video/test/test_helpers.dart (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (7)
  • GitHub Check: analyze
  • GitHub Check: stream_video
  • GitHub Check: stream_video_push_notification
  • GitHub Check: stream_video_noise_cancellation
  • GitHub Check: stream_video_flutter
  • GitHub Check: stream_video_screen_sharing
  • GitHub Check: build
🔇 Additional comments (10)
packages/stream_video/lib/src/call_state.dart (1)

221-221: LGTM - Minor formatting improvement

The blank line addition improves code readability in the copyWith method without affecting functionality.

packages/stream_video/test/test_helpers.dart (2)

5-5: Good refactoring - Centralizing mock implementations

Moving MockCall to the shared test helpers improves reusability across test files and follows good testing practices.


12-12: Well-structured mock for audio recognition testing

The MockAudioRecognition mock properly supports testing of the new SpeakingWhileMutedRecognition feature and follows the established mocktail patterns in this file.
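Given the mocktail pattern mentioned, these helpers are presumably simple one-liners along the lines of:

import 'package:mocktail/mocktail.dart';
import 'package:stream_video/stream_video.dart';

class MockCall extends Mock implements Call {}

class MockAudioRecognition extends Mock implements AudioRecognition {}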

packages/stream_video/test/src/core/client_state_test.dart (1)

7-7: LGTM - Consistent with mock centralization

The import correctly uses the centralized MockCall from test helpers, eliminating code duplication and improving maintainability.

packages/stream_video/CHANGELOG.md (1)

5-5: Well-documented feature addition

The changelog entry clearly describes the new SpeakingWhileMutedRecognition feature and is properly placed in the "Added" section with appropriate formatting.

packages/stream_video/lib/stream_video.dart (1)

5-6: Appropriate public API expansion

The new exports correctly expose the audio recognition interfaces and SpeakingWhileMutedRecognition class to the public API, enabling users to implement the speaking-while-muted detection feature as described in the PR objectives.

packages/stream_video/lib/src/audio_processing/speaking_while_muted_recognition.dart (1)

9-53: Excellent documentation!

The documentation clearly explains the behavior, state transitions, and provides a practical example. This greatly improves the developer experience.

dogfooding/lib/screens/call_screen.dart (1)

57-73: Well-implemented feature integration!

The speaking-while-muted recognition is properly integrated with:

  • Correct lifecycle management (init in initState, dispose in dispose)
  • Safe context usage with mounted check
  • Clear user feedback via SnackBar
packages/stream_video/test/src/audio_processing/speaking_while_muted_recognition_test.dart (1)

15-178: Comprehensive test coverage!

The test suite thoroughly covers:

  • State transitions (muting/unmuting)
  • Edge cases (recognition already active)
  • Disconnection scenarios
  • Proper mock setup and verification

Well-structured and follows testing best practices.
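To give a feel for the setup, one such case might be sketched as follows (assuming the MockAudioRecognition helper described above and the stream API from the PR description; the constructor wiring is elided because it is not shown in this PR):

import 'package:mocktail/mocktail.dart';
import 'package:test/test.dart';

// MockAudioRecognition comes from test/test_helpers.dart in this PR;
// the relative import path is an assumption based on the file layout.
import '../../test_helpers.dart';

void main() {
  test('stops recognition when the user unmutes', () async {
    final audioRecognition = MockAudioRecognition();
    when(() => audioRecognition.stop()).thenAnswer((_) async {});
    when(() => audioRecognition.dispose()).thenAnswer((_) async {});

    // 1. Build a MockCall whose state stream reports a muted microphone.
    // 2. Construct SpeakingWhileMutedRecognition with the mocks.
    // 3. Emit an unmuted state and verify detection is stopped:
    //    verify(() => audioRecognition.stop()).called(1);
  });
}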

packages/stream_video/lib/src/audio_processing/audio_recognition_webrtc.dart (1)

78-148: Well-implemented audio detection algorithm!

The audio level detection logic is well-designed with:

  • Smoothing via moving average
  • Adaptive baseline noise level
  • Proper debouncing for speech detection
  • Clear separation of speech onset and offset detection
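As a rough illustration of what such logic typically looks like (a minimal sketch; the window size, thresholds, and debounce counts are assumptions, not the values used in this PR):

/// Sketch of a moving-average + adaptive-baseline speech detector.
class SpeechDetector {
  final List<double> _window = [];
  static const int _windowSize = 10; // assumed smoothing window
  double _baseline = 0; // adaptive noise floor
  int _framesAboveBaseline = 0;
  bool _speaking = false;

  /// Feed one audio-level sample in [0, 1]; returns true while speech
  /// is considered active.
  bool onLevel(double level) {
    // Smooth the raw level with a moving average.
    _window.add(level);
    if (_window.length > _windowSize) _window.removeAt(0);
    final avg = _window.reduce((a, b) => a + b) / _window.length;

    // Slowly adapt the baseline toward the observed level so steady
    // background noise does not count as speech.
    _baseline = _baseline * 0.95 + avg * 0.05;

    if (avg > _baseline + 0.05) {
      // Debounce onset: require a few consecutive loud frames.
      _framesAboveBaseline++;
      if (_framesAboveBaseline >= 3) _speaking = true;
    } else {
      // Offset: reset as soon as the level falls back to the baseline.
      _framesAboveBaseline = 0;
      _speaking = false;
    }
    return _speaking;
  }
}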


codecov bot commented Jun 27, 2025

Codecov Report

❌ Patch coverage is 23.63636% with 84 lines in your changes missing coverage. Please review.
✅ Project coverage is 4.83%. Comparing base (804b916) to head (8f020b4).
⚠️ Report is 1 commit behind head on main.

Files with missing lines                               Patch %   Lines
...src/audio_processing/audio_recognition_webrtc.dart    1.44%   68 Missing ⚠️
...o_processing/speaking_while_muted_recognition.dart   62.50%   15 Missing ⚠️
...eo/lib/src/audio_processing/audio_recognition.dart    0.00%    1 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff            @@
##            main    #999      +/-   ##
========================================
+ Coverage   4.75%   4.83%   +0.07%     
========================================
  Files        574     577       +3     
  Lines      38692   38802     +110     
========================================
+ Hits        1841    1875      +34     
- Misses     36851   36927      +76     

☔ View full report in Codecov by Sentry.


@Brazol Brazol left a comment


LGTM! Added minor comments

Co-authored-by: Maciej Brażewicz <[email protected]>
@renefloor renefloor force-pushed the feature/speech-recognition-while-muted branch from d7f8c68 to f65a932 Compare July 2, 2025 13:33
renefloor added 3 commits July 2, 2025 15:40
…ition-while-muted

# Conflicts:
#	packages/stream_video/lib/stream_video.dart
#	packages/stream_video/test/test_helpers.dart
…ition-while-muted

# Conflicts:
#	packages/stream_video/CHANGELOG.md
#	packages/stream_video/test/test_helpers.dart
@renefloor renefloor enabled auto-merge (squash) August 5, 2025 13:50
@renefloor renefloor merged commit d94c6fa into main Aug 6, 2025
13 checks passed
@renefloor renefloor deleted the feature/speech-recognition-while-muted branch August 6, 2025 10:22