Conversation

@IgorAherne

This PR adds efficient batch processing and multi-sequence generation.

Changes:

  • Batch Generation: Process a list of different texts at once for a significant speed-up.

    model.generate(["Hello world.", "This is a test."])
  • Multiple Variations: Use num_return_sequences to efficiently create diverse samples from the same text.

    model.generate("Hello world.", num_return_sequences=3)

The core inference pipeline has been updated to support these features in parallel, and example_tts.py is updated to demonstrate usage.
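A hypothetical sketch of the generate() semantics this PR describes. `TTSModel` and its internals are stand-ins, not the project's real classes; the real pipeline runs batched inference in parallel rather than looping, and returns audio rather than placeholder sample lists.

```python
# Hypothetical sketch of the new generate() semantics described in this PR.
# "TTSModel" is a stand-in, not the project's real class.
from typing import List, Union

class TTSModel:
    def generate(
        self,
        text: Union[str, List[str]],
        num_return_sequences: int = 1,
    ) -> List[List[float]]:
        """Return one output per input text per requested sequence.

        A single string or a list of strings is accepted; outputs are
        placeholder waveforms (lists of samples) in this sketch.
        """
        texts = [text] if isinstance(text, str) else text
        outputs = []
        for t in texts:
            for _ in range(num_return_sequences):
                # The real pipeline would batch these through the model in
                # one pass; here we emit a dummy waveform per sequence.
                outputs.append([0.0] * len(t))
        return outputs

model = TTSModel()
batch = model.generate(["Hello world.", "This is a test."])
variations = model.generate("Hello world.", num_return_sequences=3)
print(len(batch), len(variations))  # 2 3
```

The key contract is that batch inputs and `num_return_sequences` compose on the output count: a list of N texts with K return sequences yields N × K outputs.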

IgorAherne and others added 30 commits July 20, 2025 14:21
* Add multilingual support

* Update model card and README

* Update examples and README

* Remove EOS suppression

* Keep EOS suppression for longer text

* Update README.md

* Update README.md

* modify tokenizers, set alignment analyzer default to none

* minor fixes, strict forcing EOS

* Update model repo
* remove src

* bump version 0.1.4

* fix pip
* multilingual v2 vocab and russian stresser update

* multilingual tokenizer fix