Description
Problem
When using TrustClaw via Telegram bot, responses are sent as complete messages rather than streaming incrementally. This creates a poor user experience for longer responses, as users must wait for the entire response to be generated before seeing anything.
Proposed Solution
Implement pseudo-streaming for Telegram bot responses using the editMessageText API. Since Telegram doesn't support true streaming, we can simulate it by:
- Sending an initial "thinking..." message
- Splitting the response into chunks (by sentence or character count)
- Progressively editing the message to append each new chunk
- Creating a typing effect that feels responsive
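The loop described above can be sketched roughly as follows. This is a minimal illustration, not the packaged implementation: `send_message` and `edit_message` are stand-ins for Telegram's sendMessage and editMessageText calls (injected as callables so the logic is testable without hitting the API), and the fixed-size chunking is the simplest of the two proposed modes.

```python
import time

def stream_response(send_message, edit_message, full_text, chunk_size=80, delay=0.2):
    """Pseudo-stream full_text by progressively editing a single message.

    send_message(text) -> message_id and edit_message(message_id, text) are
    stand-ins for the Telegram Bot API's sendMessage / editMessageText calls.
    """
    # Step 1: post a placeholder so the user sees immediate feedback.
    message_id = send_message("thinking...")
    shown = ""
    # Steps 2-3: accumulate fixed-size chunks and edit the message in place.
    for i in range(0, len(full_text), chunk_size):
        shown += full_text[i:i + chunk_size]
        edit_message(message_id, shown)
        time.sleep(delay)  # pace edits to stay under Telegram's ~3 edits/sec
    return message_id
```

With real handlers, `send_message`/`edit_message` would wrap the corresponding bot-library calls for a given chat_id.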
Implementation
I've created a complete implementation ready for integration:
Download: https://backend.composio.dev/api/v3/sl/3719VB69l8
The package includes:
- telegram_streaming_complete.py - Full Python implementation
- INTEGRATION_GUIDE.md - Step-by-step integration guide
Key features:
- Configurable chunking (by sentence or character count)
- Rate-limit handling (Telegram API limits: ~3 edits/second)
- Fallback to regular sending if streaming fails
- Minimal code changes required
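For the sentence-based chunking mode, a naive splitter along these lines would work. This is a sketch under my own assumptions (the function name and `max_chunk` parameter are hypothetical, not taken from the packaged code); it merges short sentences so each edit carries a reasonably sized increment.

```python
import re

def split_sentences(text, max_chunk=200):
    """Split text on sentence boundaries, merging adjacent sentences so
    each chunk stays at or under max_chunk characters."""
    # Naive boundary: ., !, or ? followed by whitespace. Adequate for a demo;
    # abbreviations like "e.g." would need a smarter tokenizer.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_chunk:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip() if current else s
    if current:
        chunks.append(current)
    return chunks
```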
Benefits
✅ Better UX - Users see responses appear incrementally
✅ Perceived performance - Feels faster even though total generation time is unchanged
✅ User feedback - "thinking..." indicator shows the bot is working
✅ Production-ready - Handles rate limits and errors gracefully
Integration Example
# Before:
bot.send_message(chat_id, response_text)
# After:
streamer = TelegramStreamingHelper(bot_token)
streamer.send_streaming_response(
    chat_id=chat_id,
    full_text=response_text,
    method="sentence",  # split by sentences
    delay=0.2,          # 200ms between updates
)
Context
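The fallback behavior mentioned under key features could be wired up as a thin wrapper like the one below. This is an illustrative sketch only: `send_with_fallback`, `streamer_send`, and `plain_send` are hypothetical names standing in for the streaming path and the ordinary `bot.send_message` path.

```python
def send_with_fallback(streamer_send, plain_send, chat_id, text):
    """Try the streaming path; on any failure (e.g. a Telegram 429
    rate-limit or a mid-stream edit error), fall back to sending the
    full response as one regular message."""
    try:
        streamer_send(chat_id, text)
    except Exception:
        # The user still gets the complete answer, just without streaming.
        plain_send(chat_id, text)
```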
This was requested by a TrustClaw user who wants streaming responses in Telegram conversations. The implementation is tested and ready to use.
Let me know if you need any clarifications or modifications!