google.generativeai.protos.Model
Information about a Generative Language Model.
Attributes
`name` : `str`
    Required. The resource name of the ``Model``. Refer to
    `Model variants <https://ai.google.dev/gemini-api/docs/models/gemini#model-variations>`__
    for all allowed values.

    Format: ``models/{model}`` with a ``{model}`` naming convention of:

    - "{base_model_id}-{version}"

    Examples:

    - ``models/gemini-1.5-flash-001``
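For illustration, a minimal sketch of looking up a ``Model`` by its full resource name with the Python SDK; the API key value is a placeholder::

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

    # get_model takes the full resource name in "models/{model}" format.
    info = genai.get_model("models/gemini-1.5-flash-001")
    print(info.name)  # -> "models/gemini-1.5-flash-001"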
`base_model_id` : `str`
    Required. The name of the base model; pass this to the generation
    request.

    Examples:

    - ``gemini-1.5-flash``
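In contrast with ``name`` above, a generation request takes the base model id. A minimal sketch using the SDK's ``GenerativeModel`` wrapper::

    import google.generativeai as genai

    # Pass the base model id, not the full "models/..." resource name.
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content("Say hello.")
    print(response.text)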
`version` : `str`
    Required. The version number of the model. This represents the major
    version (``1.0`` or ``1.5``).
`display_name` : `str`
    The human-readable name of the model, e.g. "Gemini 1.5 Flash". The name
    can be up to 128 characters long and can consist of any UTF-8 characters.
`description` : `str`
    A short description of the model.
`input_token_limit` : `int`
    Maximum number of input tokens allowed for this model.
`output_token_limit` : `int`
    Maximum number of output tokens available for this model.
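A sketch tying both limits together: check a prompt against ``input_token_limit`` before sending, and cap the reply via ``max_output_tokens``. The prompt text and the 1024 cap are arbitrary examples::

    import google.generativeai as genai

    info = genai.get_model("models/gemini-1.5-flash")
    model = genai.GenerativeModel(info.name)

    prompt = "Summarize the history of the telescope."  # arbitrary example
    n_tokens = model.count_tokens(prompt).total_tokens
    if n_tokens > info.input_token_limit:
        raise ValueError(f"{n_tokens} tokens exceeds {info.input_token_limit}")

    # max_output_tokens may not exceed the model's output_token_limit.
    config = genai.GenerationConfig(
        max_output_tokens=min(1024, info.output_token_limit),
    )
    response = model.generate_content(prompt, generation_config=config)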
`supported_generation_methods` : `MutableSequence[str]`
    The model's supported generation methods. The corresponding API method
    names are defined as camel case strings, such as ``generateMessage`` and
    ``generateContent``.
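For example, a short sketch that lists only the models supporting ``generateContent``::

    import google.generativeai as genai

    # Keep only models that can serve generateContent requests.
    for m in genai.list_models():
        if "generateContent" in m.supported_generation_methods:
            print(m.name)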
`temperature` : `float`
    Controls the randomness of the output. Values can range over
    ``[0.0, max_temperature]``, inclusive. A higher value will produce
    responses that are more varied, while a value closer to ``0.0`` will
    typically result in less surprising responses from the model. This value
    specifies the default the backend uses when calling the model.
`max_temperature` : `float`
    The maximum temperature this model can use.
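A sketch of overriding the default temperature while staying inside the advertised range; the requested value of 1.8 is an arbitrary example, and the fallback of 1.0 is an assumption for models that don't report ``max_temperature``::

    import google.generativeai as genai

    info = genai.get_model("models/gemini-1.5-flash")

    # Clamp the requested temperature into [0.0, max_temperature].
    requested = 1.8  # arbitrary example value
    ceiling = info.max_temperature or 1.0  # assumed fallback if unreported
    temperature = min(max(requested, 0.0), ceiling)

    model = genai.GenerativeModel(info.name)
    response = model.generate_content(
        "Write a limerick about sampling.",
        generation_config=genai.GenerationConfig(temperature=temperature),
    )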
`top_p` : `float`
    For `Nucleus sampling <https://ai.google.dev/gemini-api/docs/prompting-strategies#top-p>`__.
    Nucleus sampling considers the smallest set of tokens whose probability
    sum is at least ``top_p``. This value specifies the default the backend
    uses when calling the model.
`top_k` : `int`
    For Top-k sampling. Top-k sampling considers the set of ``top_k`` most
    probable tokens. This value specifies the default the backend uses when
    calling the model. If empty, it indicates the model doesn't use top-k
    sampling, and ``top_k`` isn't allowed as a generation parameter.
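A closing sketch combining the two sampling defaults: it always sets ``top_p`` but only passes ``top_k`` when the model record reports one, per the note above that ``top_k`` isn't otherwise allowed. The 0.95 value is an arbitrary example::

    import google.generativeai as genai

    info = genai.get_model("models/gemini-1.5-flash")

    kwargs = {"top_p": 0.95}  # arbitrary example value
    if info.top_k:  # only send top_k if the model uses top-k sampling
        kwargs["top_k"] = info.top_k

    model = genai.GenerativeModel(info.name)
    response = model.generate_content(
        "Name three prime numbers.",
        generation_config=genai.GenerationConfig(**kwargs),
    )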