
google.generativeai.protos.GenerateContentResponse.UsageMetadata

Metadata on the generation request’s token usage.

| Field | Type | Description |
|---|---|---|
| `prompt_token_count` | `int` | Number of tokens in the prompt. When `cached_content` is set, this is still the total effective prompt size, so it includes the tokens in the cached content. |
| `cached_content_token_count` | `int` | Number of tokens in the cached part of the prompt (the cached content). |
| `candidates_token_count` | `int` | Total number of tokens across all generated response candidates. |
| `total_token_count` | `int` | Total token count for the generation request (prompt + response candidates). |
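
A minimal sketch of reading these fields from a response's `usage_metadata`; the model name, prompt, and API key placeholder below are illustrative assumptions, not part of this reference.

```python
import google.generativeai as genai

# Assumes you supply your own API key; the value below is a placeholder.
genai.configure(api_key="YOUR_API_KEY")

# Example model name; substitute whichever model you are using.
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Explain token counting in one sentence.")

# UsageMetadata is exposed on the response as `usage_metadata`.
usage = response.usage_metadata
print("prompt tokens:        ", usage.prompt_token_count)
print("cached prompt tokens: ", usage.cached_content_token_count)
print("candidate tokens:     ", usage.candidates_token_count)
print("total tokens:         ", usage.total_token_count)
```

When no cached content is attached to the request, `cached_content_token_count` is simply 0 and `prompt_token_count` covers the full prompt.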