
Show token usage #1077

@sp1r1t

Description


When using LLMs through an API I always have to keep an eye on the cost incurred. To get a better feel for it, I would like to see the number of tokens used by each request.

At the moment I can set a token maximum, which is very nice for capping the cost of a request, but it does not give me any actual usage information.

To my knowledge, at least for OpenAI, the API returns the token counts with each response, so it should be possible to display them.

This could be shown in the gptel-mode status bar or appended to the end of the LLM response, e.g. In: 10, Out: 100, Total: 110.
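As a minimal sketch of what gptel would have to do: an OpenAI-style chat completion response carries a usage object with prompt_tokens, completion_tokens, and total_tokens fields, which can be formatted into the string suggested above. The response payload here is a trimmed-down illustration, not a real API reply.

```python
import json

# Illustrative (truncated) OpenAI-style chat completion response.
# The "usage" block is the part the API returns alongside the message.
raw_response = """
{
  "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
  "usage": {"prompt_tokens": 10, "completion_tokens": 100, "total_tokens": 110}
}
"""

response = json.loads(raw_response)
usage = response["usage"]

# Format the counts in the "In/Out/Total" style proposed in the issue.
summary = (f"In: {usage['prompt_tokens']}, "
           f"Out: {usage['completion_tokens']}, "
           f"Total: {usage['total_tokens']}")
print(summary)
```

The same fields would just need to be read out of the parsed response in Elisp and echoed in the status line or inserted after the reply.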

It could be toggled in gptel-menu with t for tokens, or via a variable such as gptel-show-token-usage.

I've not encountered this feature in any GPT client so far, but I would love to see it.

Metadata

Assignees: none

Labels: enhancement (New feature or request)
