builtins.object
    OrchestrationService
gen_ai_hub.orchestration.models.base.JSONSerializable(abc.ABC)
    OrchestrationRequest

class OrchestrationRequest(gen_ai_hub.orchestration.models.base.JSONSerializable)

    OrchestrationRequest(config: gen_ai_hub.orchestration.models.config.OrchestrationConfig, template_values: List[gen_ai_hub.orchestration.models.template.TemplateValue], history: List[gen_ai_hub.orchestration.models.message.Message]) -> None

    Represents a request for the orchestration process, including configuration,
    template values, and message history.
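
    Example (a minimal usage sketch: the import path of OrchestrationRequest, the
    OrchestrationConfig/Template/LLM constructors, and the model name are assumptions
    based on the orchestration model modules, not taken from this section):

        from gen_ai_hub.orchestration.models.config import OrchestrationConfig
        from gen_ai_hub.orchestration.models.llm import LLM
        from gen_ai_hub.orchestration.models.message import SystemMessage, UserMessage
        from gen_ai_hub.orchestration.models.template import Template, TemplateValue
        from gen_ai_hub.orchestration.service import OrchestrationRequest

        # Assumed configuration; constructor details come from the config, template,
        # and llm model modules rather than from this section.
        config = OrchestrationConfig(
            template=Template(messages=[
                SystemMessage("You are a concise assistant."),
                UserMessage("Answer briefly: {{?question}}"),
            ]),
            llm=LLM(name="gpt-4o"),
        )

        request = OrchestrationRequest(
            config=config,
            template_values=[TemplateValue(name="question", value="What is orchestration?")],
            history=[],
        )

        # As a JSONSerializable, the request converts into the JSON payload that is
        # sent to the orchestration API.
        payload = request.to_dict()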

    Method resolution order:
        OrchestrationRequest
        gen_ai_hub.orchestration.models.base.JSONSerializable
        abc.ABC
        builtins.object

    Methods defined here:

    __eq__(self, other)
        Return self==value.

    __init__(self, config: gen_ai_hub.orchestration.models.config.OrchestrationConfig, template_values: List[gen_ai_hub.orchestration.models.template.TemplateValue], history: List[gen_ai_hub.orchestration.models.message.Message]) -> None
        Initialize self. See help(type(self)) for accurate signature.

    __repr__(self)
        Return repr(self).

    to_dict(self)
        Convert the object to a JSON-serializable dictionary.

    Data and other attributes defined here:

        __abstractmethods__ = frozenset()
        __annotations__ = {'config': <class 'gen_ai_hub.orchestration.models.config.OrchestrationConfig'>, 'history': typing.List[gen_ai_hub.orchestration.models.message.Message], 'template_values': typing.List[gen_ai_hub.orchestration.models.template.TemplateValue]}
        __dataclass_fields__ = {'config': Field(name='config',type=<class 'gen_ai_hub.orch...appingproxy({}),kw_only=False,_field_type=_FIELD), 'history': Field(name='history',type=typing.List[gen_ai_hub...appingproxy({}),kw_only=False,_field_type=_FIELD), 'template_values': Field(name='template_values',type=typing.List[ge...appingproxy({}),kw_only=False,_field_type=_FIELD)}
        __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,order=False,unsafe_hash=False,frozen=False)
        __hash__ = None
        __match_args__ = ('config', 'template_values', 'history')

    Data descriptors inherited from gen_ai_hub.orchestration.models.base.JSONSerializable:

        __dict__
            dictionary for instance variables (if defined)

        __weakref__
            list of weak references to the object (if defined)

class OrchestrationService(builtins.object)

    OrchestrationService(api_url: Optional[str] = None, config: Optional[gen_ai_hub.orchestration.models.config.OrchestrationConfig] = None, proxy_client: Optional[gen_ai_hub.proxy.gen_ai_hub_proxy.client.GenAIHubProxyClient] = None, deployment_id: Optional[str] = None, config_name: Optional[str] = None, config_id: Optional[str] = None, timeout: Union[int, float, openai.Timeout, NoneType] = None)

    A service for executing orchestration requests, producing LLM-generated content
    through a pipeline of configured modules.

    This service supports both synchronous and asynchronous request execution. For
    streaming responses, care is taken not to close the underlying HTTP stream
    prematurely.

    Args:
        api_url: The base URL for the orchestration API.
        config: The default orchestration configuration.
        proxy_client: A GenAIHubProxyClient instance.
        deployment_id: Optional deployment ID.
        config_name: Optional configuration name.
        config_id: Optional configuration ID.
        timeout: Optional timeout for HTTP requests.
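
    Example (a minimal usage sketch: the OrchestrationConfig/Template/LLM constructors,
    the model name, and the response attribute access are assumptions based on the
    orchestration model modules; credentials are resolved through the proxy client as
    configured in your environment):

        from gen_ai_hub.orchestration.models.config import OrchestrationConfig
        from gen_ai_hub.orchestration.models.llm import LLM
        from gen_ai_hub.orchestration.models.message import UserMessage
        from gen_ai_hub.orchestration.models.template import Template, TemplateValue
        from gen_ai_hub.orchestration.service import OrchestrationService

        # Assumed default configuration; constructor details come from the model modules.
        config = OrchestrationConfig(
            template=Template(messages=[UserMessage("Summarize in one sentence: {{?text}}")]),
            llm=LLM(name="gpt-4o"),
        )

        service = OrchestrationService(config=config)

        response = service.run(
            template_values=[TemplateValue(name="text", value="Orchestration chains configured modules around an LLM call.")]
        )
        # Attribute path assumed from the response model.
        print(response.orchestration_result.choices[0].message.content)

        service.close_http_connection()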

    Methods defined here:

    __init__(self, api_url: Optional[str] = None, config: Optional[gen_ai_hub.orchestration.models.config.OrchestrationConfig] = None, proxy_client: Optional[gen_ai_hub.proxy.gen_ai_hub_proxy.client.GenAIHubProxyClient] = None, deployment_id: Optional[str] = None, config_name: Optional[str] = None, config_id: Optional[str] = None, timeout: Union[int, float, openai.Timeout, NoneType] = None)
        Initialize self. See help(type(self)) for accurate signature.

    async aclose_http_connection(self)
        Closes the httpx asynchronous client.

    async arun(self, config: Optional[gen_ai_hub.orchestration.models.config.OrchestrationConfig] = None, template_values: Optional[List[gen_ai_hub.orchestration.models.template.TemplateValue]] = None, history: Optional[List[gen_ai_hub.orchestration.models.message.Message]] = None) -> gen_ai_hub.orchestration.models.response.OrchestrationResponse
        Executes an orchestration request asynchronously (non-streaming).

        Args:
            config: Optional orchestration configuration.
            template_values: Optional list of template values.
            history: Optional message history.

        Returns:
            An OrchestrationResponse object.
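
        Example (a minimal sketch; assumes a `service` constructed with a default
        configuration as in the class-level example above, plus the TemplateValue
        import; the response attribute access is an assumption):

            import asyncio

            async def main():
                response = await service.arun(
                    template_values=[TemplateValue(name="text", value="Async input text.")]
                )
                print(response.orchestration_result.choices[0].message.content)
                await service.aclose_http_connection()

            asyncio.run(main())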

    async astream(self, config: Optional[gen_ai_hub.orchestration.models.config.OrchestrationConfig] = None, template_values: Optional[List[gen_ai_hub.orchestration.models.template.TemplateValue]] = None, history: Optional[List[gen_ai_hub.orchestration.models.message.Message]] = None, stream_options: Optional[dict] = None) -> gen_ai_hub.orchestration.sse_client.AsyncSSEClient
        Executes an orchestration request asynchronously in streaming mode.

        The returned AsyncSSEClient instance yields OrchestrationResponseStreaming objects.

        Args:
            config: Optional orchestration configuration.
            template_values: Optional list of template values.
            history: Optional message history.
            stream_options: Optional dictionary of additional streaming options.

        Returns:
            An AsyncSSEClient instance for iterating over the streaming response.
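
        Example (a minimal sketch; assumes a `service` constructed with a default
        configuration as in the class-level example above, plus the TemplateValue import):

            import asyncio

            async def main():
                stream = await service.astream(
                    template_values=[TemplateValue(name="text", value="Streamed async input.")]
                )
                # Each chunk is an OrchestrationResponseStreaming object.
                async for chunk in stream:
                    print(chunk)
                await service.aclose_http_connection()

            asyncio.run(main())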

    close_http_connection(self)
        Closes the httpx synchronous client.

    run(self, config: Optional[gen_ai_hub.orchestration.models.config.OrchestrationConfig] = None, template_values: Optional[List[gen_ai_hub.orchestration.models.template.TemplateValue]] = None, history: Optional[List[gen_ai_hub.orchestration.models.message.Message]] = None) -> gen_ai_hub.orchestration.models.response.OrchestrationResponse
        Executes an orchestration request synchronously (non-streaming).

        Args:
            config: Optional orchestration configuration; if not provided, the default configuration is used.
            template_values: Optional list of template values.
            history: Optional message history.

        Returns:
            An OrchestrationResponse object.
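
        Example (a minimal sketch; assumes a `service` and `config` constructed as in
        the class-level example above, plus the TemplateValue import; the message
        classes used for the history are assumptions based on the message model module):

            from gen_ai_hub.orchestration.models.message import AssistantMessage, UserMessage

            # A per-call config overrides the service's default configuration.
            response = service.run(
                config=config,
                template_values=[TemplateValue(name="text", value="Follow-up input.")],
                history=[
                    UserMessage("Earlier question."),
                    AssistantMessage("Earlier answer."),
                ],
            )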

    stream(self, config: Optional[gen_ai_hub.orchestration.models.config.OrchestrationConfig] = None, template_values: Optional[List[gen_ai_hub.orchestration.models.template.TemplateValue]] = None, history: Optional[List[gen_ai_hub.orchestration.models.message.Message]] = None, stream_options: Optional[dict] = None) -> gen_ai_hub.orchestration.sse_client.SSEClient
        Executes an orchestration request synchronously in streaming mode.

        The returned SSEClient instance yields OrchestrationResponseStreaming objects.

        Args:
            config: Optional orchestration configuration.
            template_values: Optional list of template values.
            history: Optional message history.
            stream_options: Optional dictionary of additional streaming options.

        Returns:
            An SSEClient instance that yields OrchestrationResponseStreaming objects.
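
        Example (a minimal sketch; assumes a `service` constructed with a default
        configuration as in the class-level example above, plus the TemplateValue import):

            stream = service.stream(
                template_values=[TemplateValue(name="text", value="Streamed input.")]
            )
            # Each chunk is an OrchestrationResponseStreaming object.
            for chunk in stream:
                print(chunk)

            service.close_http_connection()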

    Data descriptors defined here:

        __dict__
            dictionary for instance variables (if defined)

        __weakref__
            list of weak references to the object (if defined)