This repository was archived by the owner on Aug 6, 2025. It is now read-only.
Rate Limit / Free tier is unusable with this action #36
Open
Labels
kind/bug Something isn't working
Description
What happened?
With a brand-new account and zero prior usage, I hit the rate limit. I assume this is something like a burst (per-minute) limit, which would make sense, since locally the CLI works fine and I can easily get 30 minutes to an hour of back-and-forth. (A quick input-token sanity check is sketched after the log below.)
Attempt 4 failed with status 429. Retrying with backoff...
ApiError:
{
  "error": {
    "code": 429,
    "message": "You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits.",
    "status": "RESOURCE_EXHAUSTED",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.QuotaFailure",
        "violations": [
          {
            "quotaMetric": "generativelanguage.googleapis.com/generate_content_free_tier_input_token_count",
            "quotaId": "GenerateContentInputTokensPerModelPerMinute-FreeTier",
            "quotaDimensions": {
              "location": "global",
              "model": "gemini-2.5-pro"
            },
            "quotaValue": "250000"
          }
        ]
      },
      {
        "@type": "type.googleapis.com/google.rpc.Help",
        "links": [
          {
            "description": "Google developers documentation.",
            "url": "https://ai.google.dev/gemini-api/docs/rate-limits."
          }
        ]
      }
    ]
  }
}
    at throwErrorIfNotOK (file:///opt/hostedtoolcache/node/22.16.0/x64/lib/node_modules/@google/gemini-cli/node_modules/@google/genai/dist/node/index.mjs:13187:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async file:///opt/hostedtoolcache/node/22.16.0/x64/lib/node_modules/@google/gemini-cli/node_modules/@google/genai/dist/node/index.mjs:12978:13
    at async Models.generateContentStream (file:///opt/hostedtoolcache/node/22.16.0/x64/lib/node_modules/@google/gemini-cli/node_modules/@google/genai/dist/node/index.mjs:14308:24)
    at async retryWithBackoff (file:///opt/hostedtoolcache/node/22.16.0/x64/lib/node_modules/@google/gemini-cli/node_modules/@google/gemini-cli-core/dist/src/utils/retry.js:62:20)
    at async GeminiChat.sendMessageStream (file:///opt/hostedtoolcache/node/22.16.0/x64/lib/node_modules/@google/gemini-cli/node_modules/@google/gemini-cli-core/dist/src/core/geminiChat.js:299:36)
    at async Turn.run (file:///opt/hostedtoolcache/node/22.16.0/x64/lib/node_modules/@google/gemini-cli/node_modules/@google/gemini-cli-core/dist/src/core/turn.js:40:36)
    at async GeminiClient.sendMessageStream (file:///opt/hostedtoolcache/node/22.16.0/x64/lib/node_modules/@google/gemini-cli/dist/node_modules/@google/gemini-cli-core/dist/src/core/client.js:311:26)
    at async runNonInteractive (file:///opt/hostedtoolcache/node/22.16.0/x64/lib/node_modules/@google/gemini-cli/dist/src/nonInteractiveCli.js:32:30)
    at async main (file:///opt/hostedtoolcache/node/22.16.0/x64/lib/node_modules/@google/gemini-cli/dist/src/gemini.js:229:5) {
  status: 429
}
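The quotaValue of 250000 in the error above is the free-tier per-minute input-token cap for gemini-2.5-pro, so a single request that ships a large repo context can exhaust it on its own. As a rough sanity check, something like the following sketch could report how many input tokens a prompt would consume (this assumes the @google/genai SDK and a GEMINI_API_KEY environment variable, and the prompt contents are a placeholder; it is not part of this action):

import { GoogleGenAI } from "@google/genai";

// Minimal sketch: count the input tokens for the prompt this action would send.
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const { totalTokens } = await ai.models.countTokens({
  model: "gemini-2.5-pro",
  contents: "the full prompt the action would send, including any repo context", // hypothetical
});

// 250000 is the free-tier per-minute input-token quota reported in the error above.
console.log(`input tokens: ${totalTokens} (free-tier cap: 250000/min)`);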
What did you expect to happen?
I expected my prompt to run without hitting the burst (per-minute) limit, or for the action to fall back to a cheaper model with more generous free-tier limits, or at least to let me specify the model beforehand. The sketch below shows roughly what I mean by the fallback.
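Something along these lines is what I have in mind for the fallback behaviour. This is only a rough sketch written against the @google/genai SDK rather than the action itself; the model list and helper name are hypothetical, and the 429 check relies on the `status: 429` field visible on the ApiError in the log above:

import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Hypothetical fallback order: preferred model first, then a cheaper one with higher free-tier limits.
const MODELS = ["gemini-2.5-pro", "gemini-2.5-flash"];

async function generateWithFallback(prompt: string): Promise<string> {
  let lastError: unknown;
  for (const model of MODELS) {
    try {
      const response = await ai.models.generateContent({ model, contents: prompt });
      return response.text ?? "";
    } catch (err: any) {
      lastError = err;
      // The ApiError in the log above exposes `status: 429` on quota exhaustion.
      if (err?.status === 429) continue; // try the next, cheaper model
      throw err;
    }
  }
  throw lastError;
}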
Anything else we need to know?
No response