google.generativeai.protos.CountTokensResponse
A response from `CountTokens`. It returns the model's `token_count` for the `prompt`.
Attributes

| Attribute | Type | Description |
| --- | --- | --- |
| `total_tokens` | `int` | The number of tokens that the `Model` tokenizes the `prompt` into. Always non-negative. |
| `cached_content_token_count` | `int` | The number of tokens in the cached part of the prompt (the cached content). |
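A minimal sketch of how these two fields are typically consumed. To keep the example runnable without the `google-generativeai` package, the response is mirrored here with a plain dataclass stand-in; in the real SDK the object would come from a `count_tokens` call, and the field values below are illustrative:

```python
from dataclasses import dataclass

@dataclass
class CountTokensResponse:
    # Stand-in mirroring google.generativeai.protos.CountTokensResponse.
    total_tokens: int                     # tokens the Model tokenizes the prompt into
    cached_content_token_count: int = 0   # tokens covered by cached content

# In the real SDK this object would come from something like:
#   model = genai.GenerativeModel("gemini-1.5-flash")
#   response = model.count_tokens("Hello world")
response = CountTokensResponse(total_tokens=8, cached_content_token_count=3)

# total_tokens counts the whole prompt, including any cached prefix,
# so the tokens that must actually be processed fresh are the difference.
assert response.total_tokens >= 0  # documented as always non-negative
uncached = response.total_tokens - response.cached_content_token_count
print(uncached)
```

Since `cached_content_token_count` defaults to `0` when no cached content is used, the subtraction above is safe for responses without a cached prefix.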