google.ai.generativelanguage_v1beta.services.generative_service.transports.rest.GenerativeServiceRestTransport(google.ai.generativelanguage_v1beta.services.generative_service.transports.base.GenerativeServiceTransport)
    GenerativeServiceRestTransport
google.generativeai.generative_models.GenerativeModel(builtins.object)
    GenerativeModel
class GenerativeModel(google.generativeai.generative_models.GenerativeModel)
GenerativeModel(*, model: 'str | NotGiven' = NOT_GIVEN, deployment_id: 'str | NotGiven' = NOT_GIVEN, model_name: 'str | NotGiven' = NOT_GIVEN, config_id: 'str | NotGiven' = NOT_GIVEN, config_name: 'str | NotGiven' = NOT_GIVEN, proxy_client: 'Optional[BaseProxyClient]' = None, **kwargs) -> 'None'
Drop-in replacement for `google.generativeai.GenerativeModel`
that uses the current deployment for Google Vertex models.
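For example, the wrapper can be constructed from a model name or a deployment id. A minimal sketch; the import path and the `model_name` value are assumptions that must match your environment:
>>> from gen_ai_hub.proxy.native.google.clients import GenerativeModel  # assumed import path
>>> model = GenerativeModel(model_name='gemini-1.0-pro')  # assumes a deployment for this model exists
>>> response = model.generate_content('Hello')
>>> response.text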
Method resolution order:
    GenerativeModel
    google.generativeai.generative_models.GenerativeModel
    builtins.object
Methods defined here:
__init__(self, *, model: 'str | NotGiven' = NOT_GIVEN, deployment_id: 'str | NotGiven' = NOT_GIVEN, model_name: 'str | NotGiven' = NOT_GIVEN, config_id: 'str | NotGiven' = NOT_GIVEN, config_name: 'str | NotGiven' = NOT_GIVEN, proxy_client: 'Optional[BaseProxyClient]' = None, **kwargs) -> 'None'
    Initialize self. See help(type(self)) for accurate signature.
generate_content(self, contents: 'content_types.ContentsType', *, generation_config: 'generation_types.GenerationConfigType | None' = None, safety_settings: 'safety_types.SafetySettingOptions | None' = None, stream: 'bool' = False, tools: 'content_types.FunctionLibraryType | None' = None, tool_config: 'content_types.ToolConfigType | None' = None, request_options: 'dict[str, Any] | None' = None) -> 'generation_types.GenerateContentResponse'
A multipurpose function to generate responses from the model.
This `GenerativeModel.generate_content` method can handle multimodal input and multi-turn
conversations.
>>> model = genai.GenerativeModel('models/gemini-pro')
>>> response = model.generate_content('Tell me a story about a magic backpack')
>>> response.text
### Streaming
This method supports streaming with `stream=True`. The result has the same type as in the non-streaming case,
but you can iterate over the response chunks as they become available:
>>> response = model.generate_content('Tell me a story about a magic backpack', stream=True)
>>> for chunk in response:
... print(chunk.text)
### Multi-turn
This method supports multi-turn chats but is **stateless**: the entire conversation history needs to be sent with each
request. This takes some manual management but gives you complete control:
>>> messages = [{'role':'user', 'parts': ['hello']}]
>>> response = model.generate_content(messages) # "Hello, how can I help"
>>> messages.append(response.candidates[0].content)
>>> messages.append({'role':'user', 'parts': ['How does quantum physics work?']})
>>> response = model.generate_content(messages)
For a simpler multi-turn interface see `GenerativeModel.start_chat`.
### Input type flexibility
While the underlying API strictly expects a `list[glm.Content]`, this method
will convert the user input into the correct type. The hierarchy of types that can be
converted is below. Any of these objects can be passed as an equivalent `dict`.
* `Iterable[glm.Content]`
* `glm.Content`
* `Iterable[glm.Part]`
* `glm.Part`
* `str`, `Image`, or `glm.Blob`
In an `Iterable[glm.Content]` each `content` is a separate message.
But note that an `Iterable[glm.Part]` is taken as the parts of a single message.
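For example, the same single-turn prompt can be sent in several equivalent forms (a sketch reusing the `model` from above):
>>> model.generate_content('Hello')                                 # str
>>> model.generate_content({'role': 'user', 'parts': ['Hello']})    # dict form of glm.Content
>>> model.generate_content([{'role': 'user', 'parts': ['Hello']}])  # Iterable[glm.Content]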
Arguments:
    contents: The contents serving as the model's prompt.
    generation_config: Overrides for the model's generation config.
    safety_settings: Overrides for the model's safety settings.
    stream: If True, yield response chunks as they are generated.
    tools: `glm.Tools` that the model may call (more documentation forthcoming).
    tool_config: Configuration controlling how the provided tools can be invoked.
    request_options: Options for the request.
Methods inherited from google.generativeai.generative_models.GenerativeModel:
__repr__ = __str__(self)
__str__(self)
    Return str(self).
count_tokens(self, contents: 'content_types.ContentsType' = None, *, generation_config: 'generation_types.GenerationConfigType | None' = None, safety_settings: 'safety_types.SafetySettingOptions | None' = None, tools: 'content_types.FunctionLibraryType | None' = None, tool_config: 'content_types.ToolConfigType | None' = None, request_options: 'dict[str, Any] | None' = None) -> 'glm.CountTokensResponse'
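    For example (a sketch; the token count shown is illustrative):
    >>> model.count_tokens('Tell me a story about a magic backpack')
    total_tokens: 10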
async count_tokens_async(self, contents: 'content_types.ContentsType' = None, *, generation_config: 'generation_types.GenerationConfigType | None' = None, safety_settings: 'safety_types.SafetySettingOptions | None' = None, tools: 'content_types.FunctionLibraryType | None' = None, tool_config: 'content_types.ToolConfigType | None' = None, request_options: 'dict[str, Any] | None' = None) -> 'glm.CountTokensResponse'
async generate_content_async(self, contents: 'content_types.ContentsType', *, generation_config: 'generation_types.GenerationConfigType | None' = None, safety_settings: 'safety_types.SafetySettingOptions | None' = None, stream: 'bool' = False, tools: 'content_types.FunctionLibraryType | None' = None, tool_config: 'content_types.ToolConfigType | None' = None, request_options: 'dict[str, Any] | None' = None) -> 'generation_types.AsyncGenerateContentResponse'
    The async version of `GenerativeModel.generate_content`.
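    A minimal sketch of driving the async variant with `asyncio` (assuming no event loop is already running):
    >>> import asyncio
    >>> async def main():
    ...     response = await model.generate_content_async('Tell me a story about a magic backpack')
    ...     print(response.text)
    >>> asyncio.run(main())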
start_chat(self, *, history: 'Iterable[content_types.StrictContentType] | None' = None, enable_automatic_function_calling: 'bool' = False) -> 'ChatSession'
    Returns a `genai.ChatSession` attached to this model.
>>> model = genai.GenerativeModel()
>>> chat = model.start_chat(history=[...])
>>> response = chat.send_message("Hello?")
Arguments:
    history: An iterable of `glm.Content` objects, or equivalents, to initialize the session.
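For instance, a session can be seeded with dict equivalents of `glm.Content` (a sketch; the seeded replies are illustrative):
>>> chat = model.start_chat(history=[
...     {'role': 'user', 'parts': ['Hello']},
...     {'role': 'model', 'parts': ['Hi! How can I help you today?']},
... ])
>>> response = chat.send_message('Summarize our chat so far.')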
Readonly properties inherited from google.generativeai.generative_models.GenerativeModel:
model_name
Data descriptors inherited from google.generativeai.generative_models.GenerativeModel:
__dict__
    dictionary for instance variables (if defined)
__weakref__
    list of weak references to the object (if defined)
class GenerativeServiceRestTransport(google.ai.generativelanguage_v1beta.services.generative_service.transports.rest.GenerativeServiceRestTransport)
GenerativeServiceRestTransport(*, host: str = 'generativelanguage.googleapis.com', credentials: Optional[google.auth.credentials.Credentials] = None, credentials_file: Optional[str] = None, scopes: Optional[Sequence[str]] = None, client_cert_source_for_mtls: Optional[Callable[[], Tuple[bytes, bytes]]] = None, quota_project_id: Optional[str] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, always_use_jwt_access: Optional[bool] = False, url_scheme: str = 'https', interceptor: Optional[google.ai.generativelanguage_v1beta.services.generative_service.transports.rest.GenerativeServiceRestInterceptor] = None, api_audience: Optional[str] = None) -> None
REST transport class that overrides the Google model URI.
Method resolution order:
    GenerativeServiceRestTransport
    google.ai.generativelanguage_v1beta.services.generative_service.transports.rest.GenerativeServiceRestTransport
    google.ai.generativelanguage_v1beta.services.generative_service.transports.base.GenerativeServiceTransport
    abc.ABC
    builtins.object
Data and other attributes defined here:
__abstractmethods__ = frozenset()
Methods inherited from google.ai.generativelanguage_v1beta.services.generative_service.transports.rest.GenerativeServiceRestTransport:
__init__(self, *, host: str = 'generativelanguage.googleapis.com', credentials: Optional[google.auth.credentials.Credentials] = None, credentials_file: Optional[str] = None, scopes: Optional[Sequence[str]] = None, client_cert_source_for_mtls: Optional[Callable[[], Tuple[bytes, bytes]]] = None, quota_project_id: Optional[str] = None, client_info: google.api_core.gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, always_use_jwt_access: Optional[bool] = False, url_scheme: str = 'https', interceptor: Optional[google.ai.generativelanguage_v1beta.services.generative_service.transports.rest.GenerativeServiceRestInterceptor] = None, api_audience: Optional[str] = None) -> None
    Instantiate the transport.
Args:
host (Optional[str]):
The hostname to connect to (default: 'generativelanguage.googleapis.com').
credentials (Optional[google.auth.credentials.Credentials]): The
authorization credentials to attach to requests. These
credentials identify the application to the service; if none
are specified, the client will attempt to ascertain the
credentials from the environment.
credentials_file (Optional[str]): A file with credentials that can
be loaded with :func:`google.auth.load_credentials_from_file`.
This argument is ignored if ``channel`` is provided.
scopes (Optional[Sequence[str]]): A list of scopes. This argument is
ignored if ``channel`` is provided.
client_cert_source_for_mtls (Callable[[], Tuple[bytes, bytes]]): Client
certificate to configure mutual TLS HTTP channel. It is ignored
if ``channel`` is provided.
quota_project_id (Optional[str]): An optional project to use for billing
and quota.
client_info (google.api_core.gapic_v1.client_info.ClientInfo):
The client info used to send a user-agent string along with
API requests. If ``None``, then default info will be used.
Generally, you only need to set this if you are developing
your own client library.
always_use_jwt_access (Optional[bool]): Whether self-signed JWT should
be used for service account credentials.
url_scheme: the protocol scheme for the API endpoint. Normally
"https", but for testing or local servers,
"http" can be specified.
close(self)
    Closes resources associated with the transport.
.. warning::
    Only call this method if the transport is NOT shared
    with other clients - this may cause errors in other clients!
Readonly properties inherited from google.ai.generativelanguage_v1beta.services.generative_service.transports.rest.GenerativeServiceRestTransport:
batch_embed_contents
count_tokens
embed_content
generate_answer
generate_content
kind
stream_generate_content
Readonly properties inherited from google.ai.generativelanguage_v1beta.services.generative_service.transports.base.GenerativeServiceTransport:
host
Data descriptors inherited from google.ai.generativelanguage_v1beta.services.generative_service.transports.base.GenerativeServiceTransport:
__dict__
    dictionary for instance variables (if defined)
__weakref__
    list of weak references to the object (if defined)
Data and other attributes inherited from google.ai.generativelanguage_v1beta.services.generative_service.transports.base.GenerativeServiceTransport:
AUTH_SCOPES = ()
DEFAULT_HOST = 'generativelanguage.googleapis.com'
__annotations__ = {'DEFAULT_HOST': <class 'str'>}