
Conversation

@destillat89

No description provided.

@destillat89 (Author)

The TTS engine doesn't request audio focus automatically on Android, so tts.speak() starts speaking in parallel with other audio apps. I suggest requesting audio focus beforehand, to pause the other audio app (when ducking is turned off).
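A minimal sketch of what that could look like on Android. This is not the module's actual code: `context`, `tts`, and the focus-change listener are placeholders, and the pre-API-26 `requestAudioFocus` overload is used for brevity.

```java
// Hypothetical sketch: request transient audio focus before speaking,
// so other audio apps pause (or duck, if MAY_DUCK is used instead).
AudioManager audioManager =
        (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);

int result = audioManager.requestAudioFocus(
        focusChange -> { /* react to focus loss, e.g. stop speaking */ },
        AudioManager.STREAM_MUSIC,
        AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);

if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    tts.speak(text, TextToSpeech.QUEUE_ADD, params, utteranceId);
}
```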


PVoLan commented Oct 1, 2020

  1. See also the other ducking usages inside the setUtteranceProgress() method, around line 60. If you request focus, you have to abandon it too.

  2. We need an option to NOT use audio focus at all while using TTS, if we want to. It can be managed somewhere outside.

  3. If you fix 1 and abandon focus after every spoken utterance, it may be unwanted behavior when the next utterance is coming soon.

P.S. In fact, I dislike the idea of audio-focus management existing in this module at all. The audio-focus concept is more complicated than just "request/abandon", and proper audio-focus management may differ depending on the application's purpose.

@destillat89 (Author)

Thanks @PVoLan!

1. See also the other ducking usages inside the setUtteranceProgress() method, around line 60. If you request focus, you have to abandon it too.
3. If you fix 1 and abandon focus after every spoken utterance, it may be unwanted behavior when the next utterance is coming soon.

Yes, this is unwanted behaviour (in our case). At least in our app, it's OK to take audio focus forever, like any other audio app does. If you need to abandon focus after every spoken utterance, use ducking instead. If the user wants their music back from the other audio app, they can go back to that app and resume playback.
Does it create any problem if audio focus is requested but not abandoned in the end? I imagine that all audio apps work this way.

  2. We need an option to NOT use audio focus at all while using TTS, if we want to. It can be managed somewhere outside.

Initially I thought that request/abandon of audio focus could also be extracted into a separate method of the tts package, but it's probably not needed in most cases.

I agree re the option to NOT use audio focus; it would be a nice further refinement. But in this pull request I've just made the logic consistent across the Android and iOS platforms, because iOS takes audio focus and Android doesn't.

Overall, I would say it would be nice to have both options: a separate method for managing audio focus manually, and a parameter for using audio focus in the speak method.


PVoLan commented Oct 2, 2020

Does it create any problem if audio focus is requested but not abandoned in the end?

Yes. Like any other system resource, it has to be released when your app no longer needs it. Sometimes the OS can take focus away from you by force, but if that doesn't happen, you'd better release it willingly.

I imagine that all audio apps work this way.

No. The good practice is to request focus when you want to play audio and abandon it after you've finished playing. This applies both to long-running music and to short sounds.
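The practice described above could be sketched with an UtteranceProgressListener that pairs each request with an abandon. This is an illustration, not the module's code: `audioManager`, `focusListener`, and `tts` are assumed fields.

```java
// Illustrative sketch: take focus when speech starts, release it when the
// utterance finishes or fails, so other apps get their audio back promptly.
tts.setOnUtteranceProgressListener(new UtteranceProgressListener() {
    @Override
    public void onStart(String utteranceId) {
        audioManager.requestAudioFocus(focusListener,
                AudioManager.STREAM_MUSIC,
                AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK);
    }

    @Override
    public void onDone(String utteranceId) {
        audioManager.abandonAudioFocus(focusListener); // release when finished
    }

    @Override
    public void onError(String utteranceId) {
        audioManager.abandonAudioFocus(focusListener); // release on error too
    }
});
```

Note that, as point 3 above warns, abandoning after every utterance can cause churn when utterances follow each other closely.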


PVoLan commented Oct 2, 2020

Initially I thought that request/abandon of audio focus could also be extracted into a separate method of the tts package, but it's probably not needed in most cases.

In my app, audio focus is captured manually outside of the react-native-tts module, and it worked fine before. If react-native-tts also captures audio focus, this can lead to a conflict between two audio-focus owners, and to bugs.

It is OK for react-native-tts to capture audio focus in some cases, but it should not be the default option.
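One hypothetical shape for such an opt-in (not an existing react-native-tts option; the parameter name and fields are invented for illustration):

```java
// Hypothetical sketch: audio focus is only touched when the caller
// explicitly opts in, so apps managing focus themselves are unaffected.
void speak(String text, boolean manageAudioFocus) {
    if (manageAudioFocus) {
        audioManager.requestAudioFocus(focusListener,
                AudioManager.STREAM_MUSIC,
                AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);
    }
    tts.speak(text, TextToSpeech.QUEUE_ADD, null, utteranceId);
    // ...and abandon later (e.g. in onDone) only if manageAudioFocus was set
}
```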
