
google.generativeai.protos.CountMessageTokensRequest

Counts the number of tokens in the prompt sent to a model.

Models may tokenize text differently, so each model may return a different `token_count`.

Attributes

| Attribute | Type | Description |
|---|---|---|
| `model` | `str` | Required. The model's resource name, which serves as an ID for the model to use. It should match a model name returned by the `ListModels` method. Format: `models/{model}` |
| `prompt` | `google.ai.generativelanguage.MessagePrompt` | Required. The prompt whose token count is to be returned. |
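As a minimal sketch, the request can be constructed directly from the proto classes exposed under `google.generativeai.protos`, which mirror the `google.ai.generativelanguage` types. The model name and message text below are placeholders, not values from this page.

```python
from google.generativeai import protos

# Build a CountMessageTokensRequest by hand.
# "models/chat-bison-001" is a placeholder model resource name; use any
# model name returned by the ListModels method.
request = protos.CountMessageTokensRequest(
    model="models/chat-bison-001",
    prompt=protos.MessagePrompt(
        messages=[
            protos.Message(content="How many tokens is this message?"),
        ]
    ),
)

# Inspect the populated fields before sending it to the service's
# CountMessageTokens RPC.
print(request)
```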