generative-ai-python

google.generativeai.protos.CountTokensRequest

Counts the number of tokens in the prompt sent to a model.

Models may tokenize text differently, so each model may return a different ``token_count``.

`model` `str` Required. The model's resource name; this serves as an ID for the model to use. The name should match a model name returned by the ``ListModels`` method. Format: ``models/{model}``

`contents` `MutableSequence[google.ai.generativelanguage.Content]` Optional. The input given to the model as a prompt. This field is ignored when ``generate_content_request`` is set.

`generate_content_request` `google.ai.generativelanguage.GenerateContentRequest` Optional. The overall input given to the ``Model``. This includes the prompt as well as other model-steering information such as `system instructions <https://ai.google.dev/gemini-api/docs/system-instructions>`__ and/or function declarations for `function calling <https://ai.google.dev/gemini-api/docs/function-calling>`__. ``contents`` and ``generate_content_request`` are mutually exclusive: you can send either ``model`` plus ``contents``, or a ``generate_content_request``, but never both.
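To make the field constraints concrete, here is a minimal sketch of the validity rule described above: ``model`` is required in ``models/{model}`` format, and ``contents`` and ``generate_content_request`` must not both be set. The dataclass below is purely illustrative (it is not the real protobuf message, and the ``validate`` helper is an assumption for demonstration); the actual request class lives in ``google.ai.generativelanguage``.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative stand-in for CountTokensRequest -- NOT the real protobuf class.
# It only models the documented constraints on the request's fields.
@dataclass
class CountTokensRequestSketch:
    model: str                                        # Required. Format: "models/{model}"
    contents: list = field(default_factory=list)      # Optional prompt Content items
    generate_content_request: Optional[dict] = None   # Optional full request

    def validate(self) -> None:
        # The model name must follow the "models/{model}" resource format.
        if not self.model.startswith("models/"):
            raise ValueError("model must use the format models/{model}")
        # contents and generate_content_request are mutually exclusive.
        if self.contents and self.generate_content_request is not None:
            raise ValueError(
                "send either contents or generate_content_request, never both"
            )

# A request with model + contents is valid:
ok = CountTokensRequestSketch(model="models/gemini-1.5-flash", contents=["Hello"])
ok.validate()
```

In practice you would not build this request by hand: the SDK's ``GenerativeModel.count_tokens`` method constructs and sends the real proto for you and returns the resulting token count.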