Description
At least Gemini and OpenAI support a client-to-server flow where the client/browser connects directly to the platform via WebSocket.
This approach scales much better than streaming through PHP, and it also provides lower latency, which is especially important for voice chat.


In this flow, the server fetches an ephemeral access token (which by default can only be used once) using the static API key and provides it to the client; the client then uses that token to establish the WebSocket connection.
When fetching the ephemeral access token, you can also provide many other settings, including which model to use, tools, the system prompt, etc. In this flow you would need to expose an endpoint for tool execution and have the frontend call that endpoint whenever it receives a tool-call message over the WebSocket connection.
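To make the tool-call leg concrete, here is a minimal sketch of what such a backend endpoint handler could look like. It is framework-agnostic Python rather than PHP, and the tool names, registry, and message shape are all illustrative assumptions, not part of any Gemini/OpenAI or symfony/ai API:

```python
import json

# Hypothetical tool registry: names and implementations are purely
# illustrative assumptions for this sketch.
TOOLS = {
    "add": lambda args: {"sum": args["a"] + args["b"]},
    "echo": lambda args: {"text": args.get("text", "")},
}

def handle_tool_call(body: str) -> str:
    """Body of a hypothetical POST /tool-call endpoint: the frontend
    forwards the tool-call message it received over the WebSocket and
    gets back a result to send into the live session."""
    call = json.loads(body)
    tool = TOOLS.get(call["name"])
    if tool is None:
        return json.dumps({"error": f"unknown tool: {call['name']}"})
    return json.dumps({"name": call["name"], "response": tool(call.get("args", {}))})
```

The frontend would call this endpoint on each tool-call message and feed the returned result back over its WebSocket connection.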
An example request to get an ephemeral access token from the Gemini Live API (there are no official docs for this):
```http
POST https://generativelanguage.googleapis.com/v1alpha/auth_tokens?key=xx

{
  "uses": 1,
  "expireTime": "2025-08-02T00:00:00Z",
  "bidiGenerateContentSetup": {
    "model": "models/gemini-2.0-flash-live-001",
    "generationConfig": {
      "temperature": 0.7,
      "responseModalities": [
        "TEXT"
      ]
    }
  }
}
```
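As a sketch of the server side, this is roughly what building that request could look like. Python is used here just for illustration; the v1alpha path and payload shape mirror the example above but are undocumented and may change:

```python
import json
import urllib.request

def build_token_request(api_key: str) -> urllib.request.Request:
    """Build the undocumented auth_tokens request shown above.
    The static API key stays on the server; only the resulting
    ephemeral token is handed to the browser."""
    url = f"https://generativelanguage.googleapis.com/v1alpha/auth_tokens?key={api_key}"
    payload = {
        "uses": 1,
        "expireTime": "2025-08-02T00:00:00Z",
        "bidiGenerateContentSetup": {
            "model": "models/gemini-2.0-flash-live-001",
            "generationConfig": {
                "temperature": 0.7,
                "responseModalities": ["TEXT"],
            },
        },
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually fetch the token (requires a valid key):
# with urllib.request.urlopen(build_token_request("YOUR_KEY")) as resp:
#     token = json.loads(resp.read())  # exact response shape is undocumented
```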
For OpenAI, see: https://platform.openai.com/docs/api-reference/realtime-sessions
Do you see a place for this within symfony/ai?