google.generativeai.protos.GenerateTextRequest
Request to generate a text completion response from the model.
Attributes

`model` (`str`)
Required. The name of the ``Model`` or ``TunedModel`` to use for generating the completion. Examples: ``models/text-bison-001``, ``tunedModels/sentence-translator-u3b7m``.
`prompt` (`google.ai.generativelanguage.TextPrompt`)
Required. The free-form input text given to the model as a prompt. Given a prompt, the model will generate a ``TextCompletion`` response that it predicts as the completion of the input text.
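As a sketch of how the two required fields fit together, the following builds the request body as a plain dictionary mirroring the proto field names. A real client would construct a ``GenerateTextRequest`` message; the dict here only illustrates the field layout.

```python
# Illustrative request body mirroring the GenerateTextRequest fields.
# A real client would build a GenerateTextRequest proto message instead;
# this plain dict only shows the shape of the two required fields.
request = {
    "model": "models/text-bison-001",                    # Model or TunedModel name
    "prompt": {"text": "Write a haiku about the sea."},  # TextPrompt
}
```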
`temperature` (`float`)
Optional. Controls the randomness of the output. Note: the default value varies by model; see the ``Model.temperature`` attribute of the ``Model`` returned by the ``getModel`` function. Values can range from 0.0 to 1.0, inclusive. A value closer to 1.0 will produce responses that are more varied and creative, while a value closer to 0.0 will typically result in more straightforward responses from the model.
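The request only carries the temperature value; what follows is the standard softmax-temperature formulation that samplers typically use, shown as a sketch rather than the service's actual implementation.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Standard temperature scaling: divide logits by the temperature
    # before normalizing. Lower temperature sharpens the distribution
    # (more deterministic); higher temperature flattens it (more varied).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, 0.2)  # near-deterministic
flat = softmax_with_temperature(logits, 1.0)   # more varied
```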
`candidate_count` (`int`)
Optional. The number of generated responses to return. This value must be between 1 and 8, inclusive. If unset, it defaults to 1.
`max_output_tokens` (`int`)
Optional. The maximum number of tokens to include in a candidate. If unset, this defaults to the ``output_token_limit`` specified in the ``Model`` specification.
`top_p` (`float`)
Optional. The maximum cumulative probability of tokens to consider when sampling. The model uses combined top-k and nucleus sampling. Tokens are sorted by their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on their cumulative probability. Note: the default value varies by model; see the ``Model.top_p`` attribute of the ``Model`` returned by the ``getModel`` function.
`top_k` (`int`)
Optional. The maximum number of tokens to consider when sampling. The model uses combined top-k and nucleus sampling. Top-k sampling considers the set of the ``top_k`` most probable tokens. Defaults to 40. Note: the default value varies by model; see the ``Model.top_k`` attribute of the ``Model`` returned by the ``getModel`` function.
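The interaction between ``top_k`` and ``top_p`` described above can be sketched as a two-stage filter. This is an illustrative version of combined top-k/nucleus filtering, not the service's actual sampler.

```python
def filter_candidates(probs, top_k, top_p):
    """Illustrative combined top-k / nucleus filtering.

    probs maps token -> probability. First keep only the top_k most
    probable tokens, then keep the smallest prefix of them whose
    cumulative probability reaches top_p.
    """
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    ranked = ranked[:top_k]                 # top-k cap on token count
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:             # nucleus cap on cumulative mass
            break
    return kept

probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
```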
`safety_settings` (`MutableSequence[google.ai.generativelanguage.SafetySetting]`)
Optional. A list of unique ``SafetySetting`` instances for blocking unsafe content, enforced on the ``GenerateTextRequest.prompt`` and ``GenerateTextResponse.candidates``. There should not be more than one setting for each ``SafetyCategory`` type. The API will block any prompts and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each ``SafetyCategory``; if the list contains no ``SafetySetting`` for a given ``SafetyCategory``, the API will use the default safety setting for that category. The harm categories HARM_CATEGORY_DEROGATORY, HARM_CATEGORY_TOXICITY, HARM_CATEGORY_VIOLENCE, HARM_CATEGORY_SEXUAL, HARM_CATEGORY_MEDICAL, and HARM_CATEGORY_DANGEROUS are supported by the text service.
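The override behavior above, where a user-supplied setting replaces the default for its category and every other category keeps its default, can be sketched as a keyed merge. The category names are from the docs; the default threshold values and the merge logic are assumptions for illustration only.

```python
# Assumed default thresholds, for illustration only; the real defaults
# are defined by the service, not by this sketch.
DEFAULTS = {
    "HARM_CATEGORY_TOXICITY": "BLOCK_MEDIUM_AND_ABOVE",
    "HARM_CATEGORY_VIOLENCE": "BLOCK_MEDIUM_AND_ABOVE",
}

def effective_settings(user_settings):
    # Each category may appear at most once in user_settings; any
    # category without a user setting falls back to its default.
    merged = dict(DEFAULTS)
    seen = set()
    for setting in user_settings:
        category = setting["category"]
        if category in seen:
            raise ValueError(f"duplicate setting for {category}")
        seen.add(category)
        merged[category] = setting["threshold"]
    return merged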
|
`stop_sequences`
|
`MutableSequence[str]`
The set of character sequences (up to 5) that
will stop output generation. If specified, the
API will stop at the first appearance of a stop
sequence. The stop sequence will not be included
as part of the response.
|