Dataset columns (type and value range):
  type: string, 5 distinct values
  name: string, lengths 1 to 55
  qualified_name: string, lengths 5 to 143
  docstring: string, lengths 0 to 3.59k
  filepath: string, 180 distinct values
  is_public: bool, 2 classes
  is_private: bool, 2 classes
  line_start: float64, range 0 to 1.54k
  line_end: float64, range 0 to 1.56k
  annotation: string, 8 distinct values
  returns: string, 236 distinct values
  parameters: list, lengths 0 to 74
  parent_class: string, 298 distinct values
  value: string, 112 distinct values
  bases: list, lengths 0 to 3
  api_element_summary: string, lengths 199 to 23k
method __init__ (public)
  Qualified name: fenic._inference.common_openai.openai_embeddings_core.OpenAIEmbeddingsCore.__init__
  File: site-packages/fenic/_inference/common_openai/openai_embeddings_core.py, lines 34-55
  Parameters: ["self", "model", "model_provider", "token_counter", "client"]
  Returns: none
  Parent class: OpenAIEmbeddingsCore
  Docstring: Initialize the OpenAI embeddings client core. Args: model: The model to use; model_provider: The provider of the model; token_counter: Counter for estimating token usage; client: The OpenAI client.

method reset_metrics (public)
  Qualified name: fenic._inference.common_openai.openai_embeddings_core.OpenAIEmbeddingsCore.reset_metrics
  File: site-packages/fenic/_inference/common_openai/openai_embeddings_core.py, lines 57-59
  Parameters: ["self"]
  Returns: None
  Parent class: OpenAIEmbeddingsCore
  Docstring: Reset the metrics.

method get_metrics (public)
  Qualified name: fenic._inference.common_openai.openai_embeddings_core.OpenAIEmbeddingsCore.get_metrics
  File: site-packages/fenic/_inference/common_openai/openai_embeddings_core.py, lines 61-63
  Parameters: ["self"]
  Returns: RMMetrics
  Parent class: OpenAIEmbeddingsCore
  Docstring: Get the metrics.

method make_single_request (public)
  Qualified name: fenic._inference.common_openai.openai_embeddings_core.OpenAIEmbeddingsCore.make_single_request
  File: site-packages/fenic/_inference/common_openai/openai_embeddings_core.py, lines 65-121
  Parameters: ["self", "request"]
  Returns: Union[None, List[float], TransientException, FatalException]
  Parent class: OpenAIEmbeddingsCore
  Docstring: Make a single request to the OpenAI API. Args: request: The text to embed. Returns: The embedding vector or an exception.

method get_request_key (public)
  Qualified name: fenic._inference.common_openai.openai_embeddings_core.OpenAIEmbeddingsCore.get_request_key
  File: site-packages/fenic/_inference/common_openai/openai_embeddings_core.py, lines 123-132
  Parameters: ["self", "request"]
  Returns: str
  Parent class: OpenAIEmbeddingsCore
  Docstring: Generate a unique key for request deduplication. Args: request: The request to generate a key for. Returns: A unique key for the request.

method estimate_tokens_for_request (public)
  Qualified name: fenic._inference.common_openai.openai_embeddings_core.OpenAIEmbeddingsCore.estimate_tokens_for_request
  File: site-packages/fenic/_inference/common_openai/openai_embeddings_core.py, lines 134-143
  Parameters: ["self", "request"]
  Returns: TokenEstimate
  Parent class: OpenAIEmbeddingsCore
  Docstring: Estimate the number of tokens for a request. Args: request: The request to estimate tokens for. Returns: TokenEstimate with input token count.
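The `get_request_key` record above documents per-request deduplication keyed on the request content. A minimal, self-contained sketch of that idea, hashing the request text into a stable key; this is an illustration of the pattern, not fenic's actual implementation:

```python
import hashlib

def get_request_key(request_text: str) -> str:
    # Derive a stable, collision-resistant key from the request payload
    # so identical embedding requests can be deduplicated or cached.
    return hashlib.sha256(request_text.encode("utf-8")).hexdigest()
```

Identical inputs map to the same key, so a cache keyed on this value collapses duplicate requests before they reach the API.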
module openai_profile_manager (public)
  Qualified name: fenic._inference.common_openai.openai_profile_manager
  File: site-packages/fenic/_inference/common_openai/openai_profile_manager.py
  Docstring: none

class OpenAICompletionProfileConfiguration (public)
  Qualified name: fenic._inference.common_openai.openai_profile_manager.OpenAICompletionProfileConfiguration
  File: site-packages/fenic/_inference/common_openai/openai_profile_manager.py, lines 9-12
  Bases: ["BaseProfileConfiguration"]
  Docstring: none

method __init__ (public)
  Qualified name: fenic._inference.common_openai.openai_profile_manager.OpenAICompletionProfileConfiguration.__init__
  File: site-packages/fenic/_inference/common_openai/openai_profile_manager.py, lines 0-0
  Parameters: ["self", "additional_parameters", "expected_additional_reasoning_tokens"]
  Returns: None
  Parent class: OpenAICompletionProfileConfiguration
  Docstring: none

class OpenAICompletionsProfileManager (public)
  Qualified name: fenic._inference.common_openai.openai_profile_manager.OpenAICompletionsProfileManager
  File: site-packages/fenic/_inference/common_openai/openai_profile_manager.py, lines 15-74
  Bases: ["ProfileManager[ResolvedOpenAIModelProfile, OpenAICompletionProfileConfiguration]"]
  Docstring: Manages OpenAI-specific profile configurations.

method __init__ (public)
  Qualified name: fenic._inference.common_openai.openai_profile_manager.OpenAICompletionsProfileManager.__init__
  File: site-packages/fenic/_inference/common_openai/openai_profile_manager.py, lines 19-26
  Parameters: ["self", "model_parameters", "profile_configurations", "default_profile_name"]
  Returns: none
  Parent class: OpenAICompletionsProfileManager
  Docstring: none

method _process_profile (private)
  Qualified name: fenic._inference.common_openai.openai_profile_manager.OpenAICompletionsProfileManager._process_profile
  File: site-packages/fenic/_inference/common_openai/openai_profile_manager.py, lines 28-55
  Parameters: ["self", "profile"]
  Returns: OpenAICompletionProfileConfiguration
  Parent class: OpenAICompletionsProfileManager
  Docstring: Process OpenAI profile configuration.

method get_default_profile (public)
  Qualified name: fenic._inference.common_openai.openai_profile_manager.OpenAICompletionsProfileManager.get_default_profile
  File: site-packages/fenic/_inference/common_openai/openai_profile_manager.py, lines 57-74
  Parameters: ["self"]
  Returns: OpenAICompletionProfileConfiguration
  Parent class: OpenAICompletionsProfileManager
  Docstring: Get default OpenAI configuration.
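The records above describe a recurring shape: a manager holds named profile configurations, converts each to a provider-specific form, and falls back to a default profile. A generic sketch of that pattern; all names here are illustrative, not fenic's actual API:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ProfileConfiguration:
    # Fields mirror the documented OpenAICompletionProfileConfiguration
    # constructor arguments; the class itself is a stand-in.
    additional_parameters: dict = field(default_factory=dict)
    expected_additional_reasoning_tokens: int = 0

class ProfileManager:
    def __init__(self, profile_configurations: dict[str, ProfileConfiguration],
                 default_profile_name: str | None = None):
        self._profiles = profile_configurations
        self._default_name = default_profile_name

    def get_profile(self, name: str | None = None) -> ProfileConfiguration:
        # Resolve by name, then by the configured default, then fall back
        # to an empty configuration.
        if name is not None and name in self._profiles:
            return self._profiles[name]
        if self._default_name is not None and self._default_name in self._profiles:
            return self._profiles[self._default_name]
        return self.get_default_profile()

    def get_default_profile(self) -> ProfileConfiguration:
        return ProfileConfiguration()
```

A caller that never names a profile still gets a usable configuration, which matches the role `get_default_profile` plays in the records above.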
module anthropic (public)
  Qualified name: fenic._inference.anthropic
  File: site-packages/fenic/_inference/anthropic/__init__.py
  Docstring: none

module anthropic_profile_manager (public)
  Qualified name: fenic._inference.anthropic.anthropic_profile_manager
  File: site-packages/fenic/_inference/anthropic/anthropic_profile_manager.py
  Docstring: none

class AnthropicProfileConfiguration (public)
  Qualified name: fenic._inference.anthropic.anthropic_profile_manager.AnthropicProfileConfiguration
  File: site-packages/fenic/_inference/anthropic/anthropic_profile_manager.py, lines 11-23
  Bases: ["BaseProfileConfiguration"]
  Docstring: Configuration for Anthropic model profiles. Attributes: thinking_enabled: Whether thinking/reasoning is enabled for this profile; thinking_token_budget: Token budget allocated for thinking/reasoning; thinking_config: Anthropic-specific thinking configuration.

method __init__ (public)
  Qualified name: fenic._inference.anthropic.anthropic_profile_manager.AnthropicProfileConfiguration.__init__
  File: site-packages/fenic/_inference/anthropic/anthropic_profile_manager.py, lines 0-0
  Parameters: ["self", "thinking_enabled", "thinking_token_budget", "thinking_config"]
  Returns: None
  Parent class: AnthropicProfileConfiguration
  Docstring: none

class AnthropicCompletionsProfileManager (public)
  Qualified name: fenic._inference.anthropic.anthropic_profile_manager.AnthropicCompletionsProfileManager
  File: site-packages/fenic/_inference/anthropic/anthropic_profile_manager.py, lines 26-79
  Bases: ["ProfileManager[ResolvedAnthropicModelProfile, AnthropicProfileConfiguration]"]
  Docstring: Manages Anthropic-specific profile configurations. This class handles the conversion of Fenic profile configurations to Anthropic-specific configurations, including thinking/reasoning settings.

method __init__ (public)
  Qualified name: fenic._inference.anthropic.anthropic_profile_manager.AnthropicCompletionsProfileManager.__init__
  File: site-packages/fenic/_inference/anthropic/anthropic_profile_manager.py, lines 33-47
  Parameters: ["self", "model_parameters", "profile_configurations", "default_profile_name"]
  Returns: none
  Parent class: AnthropicCompletionsProfileManager
  Docstring: Initialize the Anthropic profile configuration manager. Args: model_parameters: Parameters for the completion model; profile_configurations: Dictionary of profile configurations; default_profile_name: Name of the default profile to use.

method _process_profile (private)
  Qualified name: fenic._inference.anthropic.anthropic_profile_manager.AnthropicCompletionsProfileManager._process_profile
  File: site-packages/fenic/_inference/anthropic/anthropic_profile_manager.py, lines 49-71
  Parameters: ["self", "profile"]
  Returns: AnthropicProfileConfiguration
  Parent class: AnthropicCompletionsProfileManager
  Docstring: Process Anthropic profile configuration. Converts a Fenic profile configuration to an Anthropic-specific configuration, handling thinking/reasoning settings based on model capabilities. Args: profile: The Fenic profile configuration to process. Returns: Anthropic-specific profile configuration.

method get_default_profile (public)
  Qualified name: fenic._inference.anthropic.anthropic_profile_manager.AnthropicCompletionsProfileManager.get_default_profile
  File: site-packages/fenic/_inference/anthropic/anthropic_profile_manager.py, lines 73-79
  Parameters: ["self"]
  Returns: AnthropicProfileConfiguration
  Parent class: AnthropicCompletionsProfileManager
  Docstring: Get default Anthropic configuration. Returns: Default configuration with thinking disabled.
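The `AnthropicProfileConfiguration` attributes documented above (thinking_enabled, thinking_token_budget, thinking_config) suggest how a profile with a reasoning budget becomes a provider-specific config. A hedged sketch, assuming a dataclass of that shape and the `{"type": "enabled", "budget_tokens": N}` dict used by Anthropic's extended-thinking API; the conversion logic itself is an illustration, not fenic's actual `_process_profile`:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnthropicProfileConfiguration:
    # Illustrative stand-in matching the documented attribute names.
    thinking_enabled: bool = False
    thinking_token_budget: int = 0
    thinking_config: Optional[dict] = None

def process_profile(thinking_token_budget: Optional[int]) -> AnthropicProfileConfiguration:
    # A positive budget turns extended thinking on and builds the
    # provider-specific config; otherwise thinking stays disabled,
    # matching the documented default profile.
    if thinking_token_budget and thinking_token_budget > 0:
        return AnthropicProfileConfiguration(
            thinking_enabled=True,
            thinking_token_budget=thinking_token_budget,
            thinking_config={"type": "enabled", "budget_tokens": thinking_token_budget},
        )
    return AnthropicProfileConfiguration()
```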
module anthropic_batch_chat_completions_client (public)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py
  Docstring: none

attribute TEXT_DELTA (public)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.TEXT_DELTA
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, line 51
  Value: 'text_delta'

attribute MESSAGE_STOP (public)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.MESSAGE_STOP
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, line 53
  Value: 'message_stop'

attribute INPUT_JSON_DELTA (public)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.INPUT_JSON_DELTA
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, line 55
  Value: 'input_json_delta'

attribute CONTENT_BLOCK_DELTA (public)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.CONTENT_BLOCK_DELTA
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, line 57
  Value: 'content_block_delta'

attribute EPHEMERAL_CACHE_CONTROL (public)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.EPHEMERAL_CACHE_CONTROL
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, line 59
  Value: CacheControlEphemeralParam(type='ephemeral')
class AnthropicBatchCompletionsClient (public)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, lines 62-403
  Bases: ["ModelClient[FenicCompletionsRequest, FenicCompletionsResponse]"]
  Docstring: Anthropic batch chat completions client. This client handles communication with Anthropic's Claude models for batch chat completions. It supports streaming responses, structured output, thinking/reasoning capabilities, and token counting with Anthropic-specific adjustments.

method __init__ (public)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.__init__
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, lines 74-116
  Parameters: ["self", "rate_limit_strategy", "model", "queue_size", "max_backoffs", "profiles", "default_profile_name"]
  Returns: none
  Parent class: AnthropicBatchCompletionsClient
  Docstring: Initialize the Anthropic batch completions client. Args: rate_limit_strategy: Strategy for rate limiting requests; queue_size: Maximum size of the request queue; model: Anthropic model name to use; max_backoffs: Maximum number of retry backoffs; profiles: Dictionary of profile configurations; default_profile_name: Name of the default profile to use.

method make_single_request (public)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.make_single_request
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, lines 118-198
  Parameters: ["self", "request"]
  Returns: Union[None, FenicCompletionsResponse, TransientException, FatalException]
  Parent class: AnthropicBatchCompletionsClient
  Docstring: Make a single completion request to Anthropic. Handles both text and structured output requests, with support for thinking/reasoning when enabled. Processes streaming responses and extracts usage metrics. Args: request: The completion request to process. Returns: Completion response, transient exception, or fatal exception.
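Both `make_single_request` records return a Union that distinguishes a transient exception from a fatal one, a convention that lets the caller retry only the failures worth retrying. A sketch of one plausible classification rule, based on HTTP status codes; the exception names come from the records, but the status-code mapping is my assumption, not fenic's:

```python
class TransientException(Exception):
    """Retryable failure, e.g. rate limiting or a server error."""

class FatalException(Exception):
    """Permanent failure, e.g. a malformed request or bad credentials."""

def classify_http_error(status_code: int) -> Exception:
    # Rate limits (429) and 5xx responses are worth retrying with backoff;
    # other client errors will fail the same way every time.
    if status_code == 429 or 500 <= status_code < 600:
        return TransientException(f"retryable HTTP {status_code}")
    return FatalException(f"permanent HTTP {status_code}")
```

Returning (rather than raising) the exception matches the documented Union return types: the batch machinery inspects the value and decides whether to re-queue the request.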
method _handle_text_streaming_response (private)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient._handle_text_streaming_response
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, lines 200-220
  Parameters: ["self", "payload"]
  Returns: tuple[str, Optional[anthropic.types.Usage]]
  Parent class: AnthropicBatchCompletionsClient
  Docstring: Handle streaming text response from Anthropic. Processes streaming chunks to extract text content and usage data. Args: payload: The request payload sent to Anthropic. Returns: Tuple of (content, usage_data).

method _handle_structured_output_streaming_response (private)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient._handle_structured_output_streaming_response
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, lines 222-242
  Parameters: ["self", "payload"]
  Returns: tuple[str, Optional[anthropic.types.Usage]]
  Parent class: AnthropicBatchCompletionsClient
  Docstring: Handle streaming structured output response from Anthropic. Processes streaming chunks to extract JSON content from tool use and usage data. Args: payload: The request payload sent to Anthropic. Returns: Tuple of (tool_use_content, usage_data).

method estimate_response_format_tokens (public)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.estimate_response_format_tokens
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, lines 246-270
  Parameters: ["self", "response_format"]
  Returns: int
  Parent class: AnthropicBatchCompletionsClient
  Docstring: Estimate token count for a response format schema. Uses Anthropic's API to count tokens in a tool parameter that represents the response format schema. Results are cached for performance. Args: response_format: Pydantic model class defining the response format. Returns: Estimated token count for the response format.

method _estimate_structured_output_overhead (private)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient._estimate_structured_output_overhead
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, lines 272-281
  Parameters: ["self", "response_format"]
  Returns: int
  Parent class: AnthropicBatchCompletionsClient
  Docstring: Use Anthropic's API-based token counting for structured output. Args: response_format: Pydantic model class defining the response format. Returns: Estimated token overhead for structured output.

method _get_max_output_tokens (private)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient._get_max_output_tokens
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, lines 283-292
  Parameters: ["self", "request"]
  Returns: int
  Parent class: AnthropicBatchCompletionsClient
  Docstring: Get maximum output tokens including thinking budget. Args: request: The completion request. Returns: Maximum output tokens (completion + thinking budget).
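The `_get_max_output_tokens` docstring spells out the arithmetic: the output ceiling is completion tokens plus thinking budget. A self-contained sketch of that computation; the function signature is illustrative, not fenic's:

```python
def max_output_tokens(max_completion_tokens: int,
                      thinking_enabled: bool,
                      thinking_token_budget: int) -> int:
    # When extended thinking is on, the output ceiling must cover both
    # the visible completion and the hidden reasoning tokens; when it is
    # off, only the completion counts.
    return max_completion_tokens + (thinking_token_budget if thinking_enabled else 0)
```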
method count_tokens (public)
  Qualified name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.count_tokens
  File: site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py, lines 297-309
  Parameters: ["self", "messages"]
  Returns: int
  Parent class: AnthropicBatchCompletionsClient
  Docstring: Count tokens with Anthropic encoding adjustment. Applies a tokenizer adjustment ratio to account for differences between Anthropic's and OpenAI's tokenization. Args: messages: Messages to count tokens for. Returns: Adjusted token count.
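The `count_tokens` record describes scaling a count from one tokenizer to approximate another's. A sketch of that adjustment; the default ratio of 1.2 is a placeholder for illustration, not fenic's actual constant:

```python
import math

def adjusted_token_count(base_token_count: int, adjustment_ratio: float = 1.2) -> int:
    # Scale a count produced by one tokenizer to approximate another's,
    # rounding up so the estimate errs on the conservative side.
    return math.ceil(base_token_count * adjustment_ratio)
```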
method
get_request_key
fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.get_request_key
Generate a unique key for the request. Args: request: The completion request Returns: Unique request key for caching
site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py
true
false
311
320
null
str
[ "self", "request" ]
AnthropicBatchCompletionsClient
null
null
Type: method Member Name: get_request_key Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.get_request_key Docstring: Generate a unique key for the request. Args: request: The completion request Returns: Unique request key for caching Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: str Parent Class: AnthropicBatchCompletionsClient
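A request key for caching, as described above, is typically a stable hash of the request payload. A sketch under that assumption (the field names `messages` and `max_tokens` are illustrative, not the client's actual request shape):

```python
import hashlib
import json


def get_request_key(messages: list[dict[str, str]], max_tokens: int) -> str:
    """Derive a deterministic cache key from the request payload."""
    # sort_keys + compact separators make the serialization canonical,
    # so identical requests always hash to the same key.
    payload = json.dumps(
        {"messages": messages, "max_tokens": max_tokens},
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

Canonical serialization matters: without `sort_keys`, two dicts with the same content could produce different keys and defeat deduplication.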
method
estimate_tokens_for_request
fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.estimate_tokens_for_request
Estimate the number of tokens for a request. Args: request: The request to estimate tokens for Returns: TokenEstimate: The estimated token usage
site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py
true
false
322
342
null
null
[ "self", "request" ]
AnthropicBatchCompletionsClient
null
null
Type: method Member Name: estimate_tokens_for_request Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.estimate_tokens_for_request Docstring: Estimate the number of tokens for a request. Args: request: The request to estimate tokens for Returns: TokenEstimate: The estimated token usage Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: none Parent Class: AnthropicBatchCompletionsClient
method
get_metrics
fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.get_metrics
Get current metrics. Returns: Current language model metrics
site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py
true
false
344
350
null
LMMetrics
[ "self" ]
AnthropicBatchCompletionsClient
null
null
Type: method Member Name: get_metrics Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.get_metrics Docstring: Get current metrics. Returns: Current language model metrics Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: LMMetrics Parent Class: AnthropicBatchCompletionsClient
method
reset_metrics
fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.reset_metrics
Reset metrics to initial state.
site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py
true
false
352
354
null
null
[ "self" ]
AnthropicBatchCompletionsClient
null
null
Type: method Member Name: reset_metrics Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.reset_metrics Docstring: Reset metrics to initial state. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: none Parent Class: AnthropicBatchCompletionsClient
method
create_response_format_tool
fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.create_response_format_tool
Create a tool parameter for structured output. Converts a JSON schema to an Anthropic tool parameter for structured output formatting. Args: response_format: Resolved JSON schema defining the response format Returns: Anthropic tool parameter
site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py
true
false
356
374
null
ToolParam
[ "self", "response_format" ]
AnthropicBatchCompletionsClient
null
null
Type: method Member Name: create_response_format_tool Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.create_response_format_tool Docstring: Create a tool parameter for structured output. Converts a JSON schema to an Anthropic tool parameter for structured output formatting. Args: response_format: Resolved JSON schema defining the response format Returns: Anthropic tool parameter Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "response_format"] Returns: ToolParam Parent Class: AnthropicBatchCompletionsClient
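The record above converts a JSON schema into a tool definition for structured output. A sketch of that conversion using a plain dict as a stand-in for the SDK's `ToolParam` (the tool name and description here are assumptions):

```python
def create_response_format_tool(response_format: dict) -> dict:
    """Wrap a resolved JSON schema as an Anthropic-style tool definition.

    Anthropic structured output works by declaring a tool whose input
    schema is the desired response shape; the model then "calls" the tool
    with a payload matching that schema.
    """
    return {
        "name": "structured_output",  # assumed tool name
        "description": "Return the answer in the required structured format.",
        "input_schema": response_format,
    }
```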
method
convert_messages
fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.convert_messages
Convert Fenic messages to Anthropic format. Converts Fenic LMRequestMessages to Anthropic's TextBlockParam and MessageParam format, including system prompt and conversation history. Args: messages: Fenic message format Returns: Tuple of (system_prompt, message_params)
site-packages/fenic/_inference/anthropic/anthropic_batch_chat_completions_client.py
true
false
376
403
null
tuple[TextBlockParam, list[MessageParam]]
[ "self", "messages" ]
AnthropicBatchCompletionsClient
null
null
Type: method Member Name: convert_messages Qualified Name: fenic._inference.anthropic.anthropic_batch_chat_completions_client.AnthropicBatchCompletionsClient.convert_messages Docstring: Convert Fenic messages to Anthropic format. Converts Fenic LMRequestMessages to Anthropic's TextBlockParam and MessageParam format, including system prompt and conversation history. Args: messages: Fenic message format Returns: Tuple of (system_prompt, message_params) Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "messages"] Returns: tuple[TextBlockParam, list[MessageParam]] Parent Class: AnthropicBatchCompletionsClient
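`convert_messages` returns a `(system_prompt, message_params)` pair, reflecting Anthropic's API shape where the system prompt travels separately from the conversation turns. A minimal sketch with assumed field names (plain dicts stand in for `TextBlockParam` and `MessageParam`):

```python
def convert_messages(system: str, history: list[dict[str, str]]):
    """Split a message list into Anthropic's (system, messages) shape."""
    # The system prompt is a text block passed outside the message list.
    system_block = {"type": "text", "text": system}
    # Each turn keeps its role; content becomes a list of text blocks.
    message_params = [
        {"role": m["role"], "content": [{"type": "text", "text": m["content"]}]}
        for m in history
    ]
    return system_block, message_params
```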
module
anthropic_provider
fenic._inference.anthropic.anthropic_provider
Anthropic model provider implementation.
site-packages/fenic/_inference/anthropic/anthropic_provider.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: anthropic_provider Qualified Name: fenic._inference.anthropic.anthropic_provider Docstring: Anthropic model provider implementation. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
logger
fenic._inference.anthropic.anthropic_provider.logger
null
site-packages/fenic/_inference/anthropic/anthropic_provider.py
true
false
9
9
null
null
null
null
logging.getLogger(__name__)
null
Type: attribute Member Name: logger Qualified Name: fenic._inference.anthropic.anthropic_provider.logger Docstring: none Value: logging.getLogger(__name__) Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
AnthropicModelProvider
fenic._inference.anthropic.anthropic_provider.AnthropicModelProvider
Anthropic implementation of ModelProvider.
site-packages/fenic/_inference/anthropic/anthropic_provider.py
true
false
12
31
null
null
null
null
null
[ "ModelProviderClass" ]
Type: class Member Name: AnthropicModelProvider Qualified Name: fenic._inference.anthropic.anthropic_provider.AnthropicModelProvider Docstring: Anthropic implementation of ModelProvider. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
create_client
fenic._inference.anthropic.anthropic_provider.AnthropicModelProvider.create_client
Create an Anthropic sync client instance.
site-packages/fenic/_inference/anthropic/anthropic_provider.py
true
false
19
21
null
null
[ "self" ]
AnthropicModelProvider
null
null
Type: method Member Name: create_client Qualified Name: fenic._inference.anthropic.anthropic_provider.AnthropicModelProvider.create_client Docstring: Create an Anthropic sync client instance. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: none Parent Class: AnthropicModelProvider
method
create_aio_client
fenic._inference.anthropic.anthropic_provider.AnthropicModelProvider.create_aio_client
Create an Anthropic async client instance.
site-packages/fenic/_inference/anthropic/anthropic_provider.py
true
false
23
25
null
null
[ "self" ]
AnthropicModelProvider
null
null
Type: method Member Name: create_aio_client Qualified Name: fenic._inference.anthropic.anthropic_provider.AnthropicModelProvider.create_aio_client Docstring: Create an Anthropic async client instance. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: none Parent Class: AnthropicModelProvider
method
validate_api_key
fenic._inference.anthropic.anthropic_provider.AnthropicModelProvider.validate_api_key
Validate Anthropic API key by making a minimal completion request.
site-packages/fenic/_inference/anthropic/anthropic_provider.py
true
false
27
31
null
None
[ "self" ]
AnthropicModelProvider
null
null
Type: method Member Name: validate_api_key Qualified Name: fenic._inference.anthropic.anthropic_provider.AnthropicModelProvider.validate_api_key Docstring: Validate Anthropic API key by making a minimal completion request. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: None Parent Class: AnthropicModelProvider
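Validating a key "by making a minimal completion request" means sending the cheapest possible call and treating an auth failure as a validation failure. A sketch with the SDK call injected as a callable so the pattern is testable (`send_completion` is a hypothetical stand-in, not the real client method):

```python
def validate_api_key(send_completion) -> None:
    """Probe the API with a 1-token request; surface auth failures clearly."""
    try:
        # Smallest meaningful request: one token of output, one tiny message.
        send_completion(max_tokens=1, messages=[{"role": "user", "content": "hi"}])
    except Exception as exc:  # the real SDK raises a specific AuthenticationError
        raise RuntimeError("Anthropic API key validation failed") from exc
```

Failing fast here is the design point: a bad key is caught at setup time rather than mid-way through a large batch.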
module
openai
fenic._inference.openai
null
site-packages/fenic/_inference/openai/__init__.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: openai Qualified Name: fenic._inference.openai Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
module
openai_batch_embeddings_client
fenic._inference.openai.openai_batch_embeddings_client
Client for making batch requests to OpenAI's embeddings API.
site-packages/fenic/_inference/openai/openai_batch_embeddings_client.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: openai_batch_embeddings_client Qualified Name: fenic._inference.openai.openai_batch_embeddings_client Docstring: Client for making batch requests to OpenAI's embeddings API. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
logger
fenic._inference.openai.openai_batch_embeddings_client.logger
null
site-packages/fenic/_inference/openai/openai_batch_embeddings_client.py
true
false
25
25
null
null
null
null
logging.getLogger(__name__)
null
Type: attribute Member Name: logger Qualified Name: fenic._inference.openai.openai_batch_embeddings_client.logger Docstring: none Value: logging.getLogger(__name__) Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
OpenAIBatchEmbeddingsClient
fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient
Client for making batch requests to OpenAI's embeddings API.
site-packages/fenic/_inference/openai/openai_batch_embeddings_client.py
true
false
28
117
null
null
null
null
null
[ "ModelClient[FenicEmbeddingsRequest, list[float]]" ]
Type: class Member Name: OpenAIBatchEmbeddingsClient Qualified Name: fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient Docstring: Client for making batch requests to OpenAI's embeddings API. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.__init__
Initialize the OpenAI batch embeddings client. Args: rate_limit_strategy: Strategy for handling rate limits model: The model to use queue_size: Size of the request queue max_backoffs: Maximum number of backoff attempts
site-packages/fenic/_inference/openai/openai_batch_embeddings_client.py
true
false
31
61
null
null
[ "self", "rate_limit_strategy", "model", "queue_size", "max_backoffs" ]
OpenAIBatchEmbeddingsClient
null
null
Type: method Member Name: __init__ Qualified Name: fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.__init__ Docstring: Initialize the OpenAI batch embeddings client. Args: rate_limit_strategy: Strategy for handling rate limits model: The model to use queue_size: Size of the request queue max_backoffs: Maximum number of backoff attempts Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "rate_limit_strategy", "model", "queue_size", "max_backoffs"] Returns: none Parent Class: OpenAIBatchEmbeddingsClient
method
make_single_request
fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.make_single_request
Make a single request to the OpenAI API. Args: request: The request to make Returns: The response from the API or an exception
site-packages/fenic/_inference/openai/openai_batch_embeddings_client.py
true
false
63
74
null
Union[None, list[float], TransientException, FatalException]
[ "self", "request" ]
OpenAIBatchEmbeddingsClient
null
null
Type: method Member Name: make_single_request Qualified Name: fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.make_single_request Docstring: Make a single request to the OpenAI API. Args: request: The request to make Returns: The response from the API or an exception Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: Union[None, list[float], TransientException, FatalException] Parent Class: OpenAIBatchEmbeddingsClient
method
get_request_key
fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.get_request_key
Generate a unique key for request deduplication. Args: request: The request to generate a key for Returns: A unique key for the request
site-packages/fenic/_inference/openai/openai_batch_embeddings_client.py
true
false
76
85
null
str
[ "self", "request" ]
OpenAIBatchEmbeddingsClient
null
null
Type: method Member Name: get_request_key Qualified Name: fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.get_request_key Docstring: Generate a unique key for request deduplication. Args: request: The request to generate a key for Returns: A unique key for the request Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: str Parent Class: OpenAIBatchEmbeddingsClient
method
estimate_tokens_for_request
fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.estimate_tokens_for_request
Estimate the number of tokens for a request. Overrides the behavior in the base class, as embedding models do not generate any output tokens. Args: request: The request to estimate tokens for Returns: TokenEstimate with input token count
site-packages/fenic/_inference/openai/openai_batch_embeddings_client.py
true
false
87
97
null
TokenEstimate
[ "self", "request" ]
OpenAIBatchEmbeddingsClient
null
null
Type: method Member Name: estimate_tokens_for_request Qualified Name: fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.estimate_tokens_for_request Docstring: Estimate the number of tokens for a request. Overrides the behavior in the base class, as embedding models do not generate any output tokens. Args: request: The request to estimate tokens for Returns: TokenEstimate with input token count Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: TokenEstimate Parent Class: OpenAIBatchEmbeddingsClient
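The override described above exists because embedding requests consume input tokens only. A sketch of that estimate, with an assumed characters-per-token heuristic and a simplified `TokenEstimate` shape:

```python
from dataclasses import dataclass


@dataclass
class TokenEstimate:
    input_tokens: int
    output_tokens: int


def estimate_tokens_for_request(text: str) -> TokenEstimate:
    """Embedding requests consume input tokens only; output is always zero."""
    # ~4 characters per token is a rough heuristic; the real client uses
    # a proper token counter.
    return TokenEstimate(input_tokens=max(1, len(text) // 4), output_tokens=0)
```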
method
reset_metrics
fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.reset_metrics
Reset all metrics to their initial values.
site-packages/fenic/_inference/openai/openai_batch_embeddings_client.py
true
false
99
101
null
null
[ "self" ]
OpenAIBatchEmbeddingsClient
null
null
Type: method Member Name: reset_metrics Qualified Name: fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.reset_metrics Docstring: Reset all metrics to their initial values. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: none Parent Class: OpenAIBatchEmbeddingsClient
method
get_metrics
fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.get_metrics
Get the current metrics. Returns: The current metrics
site-packages/fenic/_inference/openai/openai_batch_embeddings_client.py
true
false
103
109
null
RMMetrics
[ "self" ]
OpenAIBatchEmbeddingsClient
null
null
Type: method Member Name: get_metrics Qualified Name: fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.get_metrics Docstring: Get the current metrics. Returns: The current metrics Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: RMMetrics Parent Class: OpenAIBatchEmbeddingsClient
method
_get_max_output_tokens
fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient._get_max_output_tokens
null
site-packages/fenic/_inference/openai/openai_batch_embeddings_client.py
false
true
111
112
null
int
[ "self", "request" ]
OpenAIBatchEmbeddingsClient
null
null
Type: method Member Name: _get_max_output_tokens Qualified Name: fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient._get_max_output_tokens Docstring: none Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self", "request"] Returns: int Parent Class: OpenAIBatchEmbeddingsClient
method
validate_api_key
fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.validate_api_key
Validate the OpenAI API key by making a minimal API call.
site-packages/fenic/_inference/openai/openai_batch_embeddings_client.py
true
false
114
117
null
null
[ "self" ]
OpenAIBatchEmbeddingsClient
null
null
Type: method Member Name: validate_api_key Qualified Name: fenic._inference.openai.openai_batch_embeddings_client.OpenAIBatchEmbeddingsClient.validate_api_key Docstring: Validate the OpenAI API key by making a minimal API call. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: none Parent Class: OpenAIBatchEmbeddingsClient
module
openai_batch_chat_completions_client
fenic._inference.openai.openai_batch_chat_completions_client
Client for making batch requests to OpenAI's chat completions API.
site-packages/fenic/_inference/openai/openai_batch_chat_completions_client.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: openai_batch_chat_completions_client Qualified Name: fenic._inference.openai.openai_batch_chat_completions_client Docstring: Client for making batch requests to OpenAI's chat completions API. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
class
OpenAIBatchChatCompletionsClient
fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient
Client for making batch requests to OpenAI's chat completions API.
site-packages/fenic/_inference/openai/openai_batch_chat_completions_client.py
true
false
27
127
null
null
null
null
null
[ "ModelClient[FenicCompletionsRequest, FenicCompletionsResponse]" ]
Type: class Member Name: OpenAIBatchChatCompletionsClient Qualified Name: fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient Docstring: Client for making batch requests to OpenAI's chat completions API. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient.__init__
Initialize the OpenAI batch chat completions client. Args: rate_limit_strategy: Strategy for handling rate limits model: The model to use queue_size: Size of the request queue max_backoffs: Maximum number of backoff attempts profiles: Dictionary of profile configurations default_profile_name: Default profile to use when none specified
site-packages/fenic/_inference/openai/openai_batch_chat_completions_client.py
true
false
30
69
null
null
[ "self", "rate_limit_strategy", "model", "queue_size", "max_backoffs", "profiles", "default_profile_name" ]
OpenAIBatchChatCompletionsClient
null
null
Type: method Member Name: __init__ Qualified Name: fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient.__init__ Docstring: Initialize the OpenAI batch chat completions client. Args: rate_limit_strategy: Strategy for handling rate limits model: The model to use queue_size: Size of the request queue max_backoffs: Maximum number of backoff attempts profiles: Dictionary of profile configurations default_profile_name: Default profile to use when none specified Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "rate_limit_strategy", "model", "queue_size", "max_backoffs", "profiles", "default_profile_name"] Returns: none Parent Class: OpenAIBatchChatCompletionsClient
method
make_single_request
fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient.make_single_request
Make a single request to the OpenAI API. Args: request: The request to make Returns: The response from the API or an exception
site-packages/fenic/_inference/openai/openai_batch_chat_completions_client.py
true
false
71
82
null
Union[None, FenicCompletionsResponse, TransientException, FatalException]
[ "self", "request" ]
OpenAIBatchChatCompletionsClient
null
null
Type: method Member Name: make_single_request Qualified Name: fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient.make_single_request Docstring: Make a single request to the OpenAI API. Args: request: The request to make Returns: The response from the API or an exception Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: Union[None, FenicCompletionsResponse, TransientException, FatalException] Parent Class: OpenAIBatchChatCompletionsClient
method
get_request_key
fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient.get_request_key
Generate a unique key for request deduplication. Args: request: The request to generate a key for Returns: A unique key for the request
site-packages/fenic/_inference/openai/openai_batch_chat_completions_client.py
true
false
84
93
null
str
[ "self", "request" ]
OpenAIBatchChatCompletionsClient
null
null
Type: method Member Name: get_request_key Qualified Name: fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient.get_request_key Docstring: Generate a unique key for request deduplication. Args: request: The request to generate a key for Returns: A unique key for the request Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: str Parent Class: OpenAIBatchChatCompletionsClient
method
estimate_tokens_for_request
fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient.estimate_tokens_for_request
Estimate the number of tokens for a request. Args: request: The request to estimate tokens for Returns: TokenEstimate: The estimated token usage
site-packages/fenic/_inference/openai/openai_batch_chat_completions_client.py
true
false
95
107
null
TokenEstimate
[ "self", "request" ]
OpenAIBatchChatCompletionsClient
null
null
Type: method Member Name: estimate_tokens_for_request Qualified Name: fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient.estimate_tokens_for_request Docstring: Estimate the number of tokens for a request. Args: request: The request to estimate tokens for Returns: TokenEstimate: The estimated token usage Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "request"] Returns: TokenEstimate Parent Class: OpenAIBatchChatCompletionsClient
method
reset_metrics
fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient.reset_metrics
Reset all metrics to their initial values.
site-packages/fenic/_inference/openai/openai_batch_chat_completions_client.py
true
false
109
111
null
null
[ "self" ]
OpenAIBatchChatCompletionsClient
null
null
Type: method Member Name: reset_metrics Qualified Name: fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient.reset_metrics Docstring: Reset all metrics to their initial values. Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: none Parent Class: OpenAIBatchChatCompletionsClient
method
get_metrics
fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient.get_metrics
Get the current metrics. Returns: The current metrics
site-packages/fenic/_inference/openai/openai_batch_chat_completions_client.py
true
false
113
119
null
LMMetrics
[ "self" ]
OpenAIBatchChatCompletionsClient
null
null
Type: method Member Name: get_metrics Qualified Name: fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient.get_metrics Docstring: Get the current metrics. Returns: The current metrics Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self"] Returns: LMMetrics Parent Class: OpenAIBatchChatCompletionsClient
method
_get_max_output_tokens
fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient._get_max_output_tokens
Conservative estimate: max_completion_tokens + reasoning effort-based thinking tokens.
site-packages/fenic/_inference/openai/openai_batch_chat_completions_client.py
false
true
121
127
null
int
[ "self", "request" ]
OpenAIBatchChatCompletionsClient
null
null
Type: method Member Name: _get_max_output_tokens Qualified Name: fenic._inference.openai.openai_batch_chat_completions_client.OpenAIBatchChatCompletionsClient._get_max_output_tokens Docstring: Conservative estimate: max_completion_tokens + reasoning effort-based thinking tokens. Value: none Annotation: none is Public? : false is Private? : true Parameters: ["self", "request"] Returns: int Parent Class: OpenAIBatchChatCompletionsClient
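The `_get_max_output_tokens` docstring describes a conservative ceiling: completion tokens plus an effort-dependent thinking budget. A sketch of that arithmetic; the budget values per effort level are assumptions, not the client's real numbers:

```python
from typing import Optional

# Assumed effort-to-budget mapping; the real values live in the client.
REASONING_BUDGET = {"low": 1024, "medium": 4096, "high": 16384}


def get_max_output_tokens(
    max_completion_tokens: int, reasoning_effort: Optional[str]
) -> int:
    """Conservative ceiling: completion tokens plus a thinking budget by effort."""
    # No reasoning effort set -> no extra thinking budget.
    return max_completion_tokens + REASONING_BUDGET.get(reasoning_effort or "", 0)
```

Overestimating here is deliberate: rate limiting against a ceiling avoids mid-flight quota exhaustion at the cost of slightly lower throughput.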
module
_gen
fenic._gen
Generated code for fenic.
site-packages/fenic/_gen/__init__.py
false
true
null
null
null
null
null
null
null
null
Type: module Member Name: _gen Qualified Name: fenic._gen Docstring: Generated code for fenic. Value: none Annotation: none is Public? : false is Private? : true Parameters: none Returns: none Parent Class: none
module
protos
fenic._gen.protos
Generated protobuf files for fenic.
site-packages/fenic/_gen/protos/__init__.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: protos Qualified Name: fenic._gen.protos Docstring: Generated protobuf files for fenic. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
module
logical_plan
fenic._gen.protos.logical_plan
Logical plan protobuf files.
site-packages/fenic/_gen/protos/logical_plan/__init__.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: logical_plan Qualified Name: fenic._gen.protos.logical_plan Docstring: Logical plan protobuf files. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
module
v1
fenic._gen.protos.logical_plan.v1
Logical plan v1 protobuf files.
site-packages/fenic/_gen/protos/logical_plan/v1/__init__.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: v1 Qualified Name: fenic._gen.protos.logical_plan.v1 Docstring: Logical plan v1 protobuf files. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
module
enums_pb2_grpc
fenic._gen.protos.logical_plan.v1.enums_pb2_grpc
Client and server classes corresponding to protobuf-defined services.
site-packages/fenic/_gen/protos/logical_plan/v1/enums_pb2_grpc.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: enums_pb2_grpc Qualified Name: fenic._gen.protos.logical_plan.v1.enums_pb2_grpc Docstring: Client and server classes corresponding to protobuf-defined services. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
module
tools_pb2
fenic._gen.protos.logical_plan.v1.tools_pb2
Generated protocol buffer code.
site-packages/fenic/_gen/protos/logical_plan/v1/tools_pb2.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: tools_pb2 Qualified Name: fenic._gen.protos.logical_plan.v1.tools_pb2 Docstring: Generated protocol buffer code. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
_sym_db
fenic._gen.protos.logical_plan.v1.tools_pb2._sym_db
null
site-packages/fenic/_gen/protos/logical_plan/v1/tools_pb2.py
false
true
22
22
null
null
null
null
_symbol_database.Default()
null
Type: attribute Member Name: _sym_db Qualified Name: fenic._gen.protos.logical_plan.v1.tools_pb2._sym_db Docstring: none Value: _symbol_database.Default() Annotation: none is Public? : false is Private? : true Parameters: none Returns: none Parent Class: none
attribute
DESCRIPTOR
fenic._gen.protos.logical_plan.v1.tools_pb2.DESCRIPTOR
null
site-packages/fenic/_gen/protos/logical_plan/v1/tools_pb2.py
true
false
30
30
_descriptor.FileDescriptor
null
null
null
_descriptor_pool.Default().AddSerializedFile(b'\n\x1blogical_plan/v1/tools.proto\x12\x0flogical_plan.v1\x1a\x1flogical_plan/v1/datatypes.proto\x1a#logical_plan/v1/complex_types.proto\x1a\x1blogical_plan/v1/plans.proto"\xd9\x02\n\rToolParameter\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12 \n\x0bdescription\x18\x02 \x01(\tR\x0bdescription\x126\n\tdata_type\x18\x03 \x01(\x0b2\x19.logical_plan.v1.DataTypeR\x08dataType\x12\x1a\n\x08required\x18\x04 \x01(\x08R\x08required\x12\x1f\n\x0bhas_default\x18\x05 \x01(\x08R\nhasDefault\x12F\n\rdefault_value\x18\x06 \x01(\x0b2\x1c.logical_plan.v1.ScalarValueH\x00R\x0cdefaultValue\x88\x01\x01\x12C\n\x0eallowed_values\x18\x07 \x03(\x0b2\x1c.logical_plan.v1.ScalarValueR\rallowedValuesB\x10\n\x0e_default_value"\xee\x01\n\x0eToolDefinition\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12 \n\x0bdescription\x18\x02 \x01(\tR\x0bdescription\x126\n\x06params\x18\x03 \x03(\x0b2\x1e.logical_plan.v1.ToolParameterR\x06params\x12K\n\x12parameterized_view\x18\x04 \x01(\x0b2\x1c.logical_plan.v1.LogicalPlanR\x11parameterizedView\x12!\n\x0cresult_limit\x18\x05 \x01(\x05R\x0bresultLimitBz\n\x13com.logical_plan.v1B\nToolsProtoP\x01\xa2\x02\x03LXX\xaa\x02\x0eLogicalPlan.V1\xca\x02\x0eLogicalPlan\\V1\xe2\x02\x1aLogicalPlan\\V1\\GPBMetadata\xea\x02\x0fLogicalPlan::V1b\x06proto3')
null
Type: attribute Member Name: DESCRIPTOR Qualified Name: fenic._gen.protos.logical_plan.v1.tools_pb2.DESCRIPTOR Docstring: none Value: _descriptor_pool.Default().AddSerializedFile(b'\n\x1blogical_plan/v1/tools.proto\x12\x0flogical_plan.v1\x1a\x1flogical_plan/v1/datatypes.proto\x1a#logical_plan/v1/complex_types.proto\x1a\x1blogical_plan/v1/plans.proto"\xd9\x02\n\rToolParameter\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12 \n\x0bdescription\x18\x02 \x01(\tR\x0bdescription\x126\n\tdata_type\x18\x03 \x01(\x0b2\x19.logical_plan.v1.DataTypeR\x08dataType\x12\x1a\n\x08required\x18\x04 \x01(\x08R\x08required\x12\x1f\n\x0bhas_default\x18\x05 \x01(\x08R\nhasDefault\x12F\n\rdefault_value\x18\x06 \x01(\x0b2\x1c.logical_plan.v1.ScalarValueH\x00R\x0cdefaultValue\x88\x01\x01\x12C\n\x0eallowed_values\x18\x07 \x03(\x0b2\x1c.logical_plan.v1.ScalarValueR\rallowedValuesB\x10\n\x0e_default_value"\xee\x01\n\x0eToolDefinition\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12 \n\x0bdescription\x18\x02 \x01(\tR\x0bdescription\x126\n\x06params\x18\x03 \x03(\x0b2\x1e.logical_plan.v1.ToolParameterR\x06params\x12K\n\x12parameterized_view\x18\x04 \x01(\x0b2\x1c.logical_plan.v1.LogicalPlanR\x11parameterizedView\x12!\n\x0cresult_limit\x18\x05 \x01(\x05R\x0bresultLimitBz\n\x13com.logical_plan.v1B\nToolsProtoP\x01\xa2\x02\x03LXX\xaa\x02\x0eLogicalPlan.V1\xca\x02\x0eLogicalPlan\\V1\xe2\x02\x1aLogicalPlan\\V1\\GPBMetadata\xea\x02\x0fLogicalPlan::V1b\x06proto3') Annotation: _descriptor.FileDescriptor is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
_globals
fenic._gen.protos.logical_plan.v1.tools_pb2._globals
null
site-packages/fenic/_gen/protos/logical_plan/v1/tools_pb2.py
false
true
32
32
null
null
null
null
globals()
null
Type: attribute Member Name: _globals Qualified Name: fenic._gen.protos.logical_plan.v1.tools_pb2._globals Docstring: none Value: globals() Annotation: none is Public? : false is Private? : true Parameters: none Returns: none Parent Class: none
class
ToolParameter
fenic._gen.protos.logical_plan.v1.tools_pb2.ToolParameter
null
site-packages/fenic/_gen/protos/logical_plan/v1/tools_pb2.py
true
false
11
27
null
null
null
null
null
[ "_message.Message" ]
Type: class Member Name: ToolParameter Qualified Name: fenic._gen.protos.logical_plan.v1.tools_pb2.ToolParameter Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._gen.protos.logical_plan.v1.tools_pb2.ToolParameter.__init__
null
site-packages/fenic/_gen/protos/logical_plan/v1/tools_pb2.py
true
false
27
27
null
None
[ "self", "name", "description", "data_type", "required", "has_default", "default_value", "allowed_values" ]
ToolParameter
null
null
Type: method Member Name: __init__ Qualified Name: fenic._gen.protos.logical_plan.v1.tools_pb2.ToolParameter.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "name", "description", "data_type", "required", "has_default", "default_value", "allowed_values"] Returns: None Parent Class: ToolParameter
class
ToolDefinition
fenic._gen.protos.logical_plan.v1.tools_pb2.ToolDefinition
null
site-packages/fenic/_gen/protos/logical_plan/v1/tools_pb2.py
true
false
29
41
null
null
null
null
null
[ "_message.Message" ]
Type: class Member Name: ToolDefinition Qualified Name: fenic._gen.protos.logical_plan.v1.tools_pb2.ToolDefinition Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._gen.protos.logical_plan.v1.tools_pb2.ToolDefinition.__init__
null
site-packages/fenic/_gen/protos/logical_plan/v1/tools_pb2.py
true
false
41
41
null
None
[ "self", "name", "description", "params", "parameterized_view", "result_limit" ]
ToolDefinition
null
null
Type: method Member Name: __init__ Qualified Name: fenic._gen.protos.logical_plan.v1.tools_pb2.ToolDefinition.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "name", "description", "params", "parameterized_view", "result_limit"] Returns: None Parent Class: ToolDefinition
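The `ToolParameter` and `ToolDefinition` records above describe generated protobuf message classes whose `__init__` keyword arguments match the fields in the serialized descriptor. As a hedged, stdlib-only sketch (the real classes in `fenic._gen.protos.logical_plan.v1.tools_pb2` subclass `google.protobuf.message.Message` and may not be importable here), the field shapes can be mirrored with dataclasses; all values below are illustrative, not taken from fenic:

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

# Hypothetical stand-ins mirroring the recorded __init__ parameters of the
# generated ToolParameter and ToolDefinition messages. Types marked "Any"
# are message types (DataType, ScalarValue, LogicalPlan) in the real proto.

@dataclass
class ToolParameter:
    name: str
    description: str = ""
    data_type: Optional[Any] = None          # logical_plan.v1.DataType
    required: bool = False
    has_default: bool = False
    default_value: Optional[Any] = None      # optional logical_plan.v1.ScalarValue
    allowed_values: List[Any] = field(default_factory=list)

@dataclass
class ToolDefinition:
    name: str
    description: str = ""
    params: List[ToolParameter] = field(default_factory=list)
    parameterized_view: Optional[Any] = None  # logical_plan.v1.LogicalPlan
    result_limit: int = 0

# Illustrative construction, analogous to keyword construction of the
# real generated messages.
tool = ToolDefinition(
    name="search_docs",
    description="Search indexed documents.",
    params=[ToolParameter(name="query", description="Search string", required=True)],
    result_limit=10,
)
print(tool.params[0].name)
```

With the actual generated classes, construction uses the same keyword names; unset optional message fields default to empty/unset per protobuf semantics rather than `None`.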
module
tools_pb2_grpc
fenic._gen.protos.logical_plan.v1.tools_pb2_grpc
Client and server classes corresponding to protobuf-defined services.
site-packages/fenic/_gen/protos/logical_plan/v1/tools_pb2_grpc.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: tools_pb2_grpc Qualified Name: fenic._gen.protos.logical_plan.v1.tools_pb2_grpc Docstring: Client and server classes corresponding to protobuf-defined services. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
module
complex_types_pb2
fenic._gen.protos.logical_plan.v1.complex_types_pb2
Generated protocol buffer code.
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
null
null
null
null
null
null
null
null
Type: module Member Name: complex_types_pb2 Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2 Docstring: Generated protocol buffer code. Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
_sym_db
fenic._gen.protos.logical_plan.v1.complex_types_pb2._sym_db
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
false
true
22
22
null
null
null
null
_symbol_database.Default()
null
Type: attribute Member Name: _sym_db Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2._sym_db Docstring: none Value: _symbol_database.Default() Annotation: none is Public? : false is Private? : true Parameters: none Returns: none Parent Class: none
attribute
DESCRIPTOR
fenic._gen.protos.logical_plan.v1.complex_types_pb2.DESCRIPTOR
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
28
28
_descriptor.FileDescriptor
null
null
null
_descriptor_pool.Default().AddSerializedFile(b'\n#logical_plan/v1/complex_types.proto\x12\x0flogical_plan.v1\x1a\x1flogical_plan/v1/datatypes.proto"\xcd\x02\n\x0bScalarValue\x12#\n\x0cstring_value\x18\x01 \x01(\tH\x00R\x0bstringValue\x12\x1d\n\tint_value\x18\x02 \x01(\x05H\x00R\x08intValue\x12#\n\x0cdouble_value\x18\x03 \x01(\x01H\x00R\x0bdoubleValue\x12\x1f\n\nbool_value\x18\x04 \x01(\x08H\x00R\tboolValue\x12!\n\x0bbytes_value\x18\x05 \x01(\x0cH\x00R\nbytesValue\x12?\n\x0barray_value\x18\x06 \x01(\x0b2\x1c.logical_plan.v1.ScalarArrayH\x00R\narrayValue\x12B\n\x0cstruct_value\x18\x07 \x01(\x0b2\x1d.logical_plan.v1.ScalarStructH\x00R\x0bstructValueB\x0c\n\nvalue_type"G\n\x0bScalarArray\x128\n\x08elements\x18\x01 \x03(\x0b2\x1c.logical_plan.v1.ScalarValueR\x08elements"J\n\x0cScalarStruct\x12:\n\x06fields\x18\x01 \x03(\x0b2".logical_plan.v1.ScalarStructFieldR\x06fields"[\n\x11ScalarStructField\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x122\n\x05value\x18\x02 \x01(\x0b2\x1c.logical_plan.v1.ScalarValueR\x05value"f\n\x17ResolvedClassDefinition\x12\x14\n\x05label\x18\x01 \x01(\tR\x05label\x12%\n\x0bdescription\x18\x02 \x01(\tH\x00R\x0bdescription\x88\x01\x01B\x0e\n\x0c_description"S\n\x12ResolvedModelAlias\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12\x1d\n\x07profile\x18\x02 \x01(\tH\x00R\x07profile\x88\x01\x01B\n\n\x08_profile"\xdd\x01\n\x16ResolvedResponseFormat\x12\x16\n\x06schema\x18\x01 \x01(\tR\x06schema\x12?\n\x0bstruct_type\x18\x02 \x01(\x0b2\x19.logical_plan.v1.DataTypeH\x00R\nstructType\x88\x01\x01\x12=\n\x18prompt_schema_definition\x18\x03 \x01(\tH\x01R\x16promptSchemaDefinition\x88\x01\x01B\x0e\n\x0c_struct_typeB\x1b\n\x19_prompt_schema_definition"L\n\nNumpyArray\x12\x12\n\x04data\x18\x01 \x01(\x0cR\x04data\x12\x14\n\x05shape\x18\x02 \x03(\x05R\x05shape\x12\x14\n\x05dtype\x18\x03 \x01(\tR\x05dtype"*\n\tKeyPoints\x12\x1d\n\nmax_points\x18\x01 \x01(\x05R\tmaxPoints"(\n\tParagraph\x12\x1b\n\tmax_words\x18\x01 \x01(\x05R\x08maxWords"\x9d\x01\n\x13SummarizationFormat\x12;\n\nkey_points\x18\x01 \x01(\x0b2\x1a.logical_plan.v1.KeyPointsH\x00R\tkeyPoints\x12:\n\tparagraph\x18\x02 \x01(\x0b2\x1a.logical_plan.v1.ParagraphH\x00R\tparagraphB\r\n\x0bformat_type"\xba\x01\n\nMapExample\x12<\n\x05input\x18\x01 \x03(\x0b2&.logical_plan.v1.MapExample.InputEntryR\x05input\x12\x16\n\x06output\x18\x02 \x01(\tR\x06output\x1aV\n\nInputEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x122\n\x05value\x18\x02 \x01(\x0b2\x1c.logical_plan.v1.ScalarValueR\x05value:\x028\x01"O\n\x14MapExampleCollection\x127\n\x08examples\x18\x01 \x03(\x0b2\x1b.logical_plan.v1.MapExampleR\x08examples"?\n\x0fClassifyExample\x12\x14\n\x05input\x18\x01 \x01(\tR\x05input\x12\x16\n\x06output\x18\x02 \x01(\tR\x06output"Y\n\x19ClassifyExampleCollection\x12<\n\x08examples\x18\x01 \x03(\x0b2 .logical_plan.v1.ClassifyExampleR\x08examples"\xc6\x01\n\x10PredicateExample\x12B\n\x05input\x18\x01 \x03(\x0b2,.logical_plan.v1.PredicateExample.InputEntryR\x05input\x12\x16\n\x06output\x18\x02 \x01(\x08R\x06output\x1aV\n\nInputEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x122\n\x05value\x18\x02 \x01(\x0b2\x1c.logical_plan.v1.ScalarValueR\x05value:\x028\x01"[\n\x1aPredicateExampleCollection\x12=\n\x08examples\x18\x01 \x03(\x0b2!.logical_plan.v1.PredicateExampleR\x08examples"\x8b\x01\n\x0bJoinExample\x120\n\x04left\x18\x01 \x01(\x0b2\x1c.logical_plan.v1.ScalarValueR\x04left\x122\n\x05right\x18\x02 \x01(\x0b2\x1c.logical_plan.v1.ScalarValueR\x05right\x12\x16\n\x06output\x18\x03 \x01(\x08R\x06output"Q\n\x15JoinExampleCollection\x128\n\x08examples\x18\x01 \x03(\x0b2\x1c.logical_plan.v1.JoinExampleR\x08examplesB\x81\x01\n\x13com.logical_plan.v1B\x11ComplexTypesProtoP\x01\xa2\x02\x03LXX\xaa\x02\x0eLogicalPlan.V1\xca\x02\x0eLogicalPlan\\V1\xe2\x02\x1aLogicalPlan\\V1\\GPBMetadata\xea\x02\x0fLogicalPlan::V1b\x06proto3')
null
Type: attribute Member Name: DESCRIPTOR Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.DESCRIPTOR Docstring: none Value: _descriptor_pool.Default().AddSerializedFile(b'\n#logical_plan/v1/complex_types.proto\x12\x0flogical_plan.v1\x1a\x1flogical_plan/v1/datatypes.proto"\xcd\x02\n\x0bScalarValue\x12#\n\x0cstring_value\x18\x01 \x01(\tH\x00R\x0bstringValue\x12\x1d\n\tint_value\x18\x02 \x01(\x05H\x00R\x08intValue\x12#\n\x0cdouble_value\x18\x03 \x01(\x01H\x00R\x0bdoubleValue\x12\x1f\n\nbool_value\x18\x04 \x01(\x08H\x00R\tboolValue\x12!\n\x0bbytes_value\x18\x05 \x01(\x0cH\x00R\nbytesValue\x12?\n\x0barray_value\x18\x06 \x01(\x0b2\x1c.logical_plan.v1.ScalarArrayH\x00R\narrayValue\x12B\n\x0cstruct_value\x18\x07 \x01(\x0b2\x1d.logical_plan.v1.ScalarStructH\x00R\x0bstructValueB\x0c\n\nvalue_type"G\n\x0bScalarArray\x128\n\x08elements\x18\x01 \x03(\x0b2\x1c.logical_plan.v1.ScalarValueR\x08elements"J\n\x0cScalarStruct\x12:\n\x06fields\x18\x01 \x03(\x0b2".logical_plan.v1.ScalarStructFieldR\x06fields"[\n\x11ScalarStructField\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x122\n\x05value\x18\x02 \x01(\x0b2\x1c.logical_plan.v1.ScalarValueR\x05value"f\n\x17ResolvedClassDefinition\x12\x14\n\x05label\x18\x01 \x01(\tR\x05label\x12%\n\x0bdescription\x18\x02 \x01(\tH\x00R\x0bdescription\x88\x01\x01B\x0e\n\x0c_description"S\n\x12ResolvedModelAlias\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12\x1d\n\x07profile\x18\x02 \x01(\tH\x00R\x07profile\x88\x01\x01B\n\n\x08_profile"\xdd\x01\n\x16ResolvedResponseFormat\x12\x16\n\x06schema\x18\x01 \x01(\tR\x06schema\x12?\n\x0bstruct_type\x18\x02 \x01(\x0b2\x19.logical_plan.v1.DataTypeH\x00R\nstructType\x88\x01\x01\x12=\n\x18prompt_schema_definition\x18\x03 \x01(\tH\x01R\x16promptSchemaDefinition\x88\x01\x01B\x0e\n\x0c_struct_typeB\x1b\n\x19_prompt_schema_definition"L\n\nNumpyArray\x12\x12\n\x04data\x18\x01 \x01(\x0cR\x04data\x12\x14\n\x05shape\x18\x02 \x03(\x05R\x05shape\x12\x14\n\x05dtype\x18\x03 \x01(\tR\x05dtype"*\n\tKeyPoints\x12\x1d\n\nmax_points\x18\x01 \x01(\x05R\tmaxPoints"(\n\tParagraph\x12\x1b\n\tmax_words\x18\x01 \x01(\x05R\x08maxWords"\x9d\x01\n\x13SummarizationFormat\x12;\n\nkey_points\x18\x01 \x01(\x0b2\x1a.logical_plan.v1.KeyPointsH\x00R\tkeyPoints\x12:\n\tparagraph\x18\x02 \x01(\x0b2\x1a.logical_plan.v1.ParagraphH\x00R\tparagraphB\r\n\x0bformat_type"\xba\x01\n\nMapExample\x12<\n\x05input\x18\x01 \x03(\x0b2&.logical_plan.v1.MapExample.InputEntryR\x05input\x12\x16\n\x06output\x18\x02 \x01(\tR\x06output\x1aV\n\nInputEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x122\n\x05value\x18\x02 \x01(\x0b2\x1c.logical_plan.v1.ScalarValueR\x05value:\x028\x01"O\n\x14MapExampleCollection\x127\n\x08examples\x18\x01 \x03(\x0b2\x1b.logical_plan.v1.MapExampleR\x08examples"?\n\x0fClassifyExample\x12\x14\n\x05input\x18\x01 \x01(\tR\x05input\x12\x16\n\x06output\x18\x02 \x01(\tR\x06output"Y\n\x19ClassifyExampleCollection\x12<\n\x08examples\x18\x01 \x03(\x0b2 .logical_plan.v1.ClassifyExampleR\x08examples"\xc6\x01\n\x10PredicateExample\x12B\n\x05input\x18\x01 \x03(\x0b2,.logical_plan.v1.PredicateExample.InputEntryR\x05input\x12\x16\n\x06output\x18\x02 \x01(\x08R\x06output\x1aV\n\nInputEntry\x12\x10\n\x03key\x18\x01 \x01(\tR\x03key\x122\n\x05value\x18\x02 \x01(\x0b2\x1c.logical_plan.v1.ScalarValueR\x05value:\x028\x01"[\n\x1aPredicateExampleCollection\x12=\n\x08examples\x18\x01 \x03(\x0b2!.logical_plan.v1.PredicateExampleR\x08examples"\x8b\x01\n\x0bJoinExample\x120\n\x04left\x18\x01 \x01(\x0b2\x1c.logical_plan.v1.ScalarValueR\x04left\x122\n\x05right\x18\x02 \x01(\x0b2\x1c.logical_plan.v1.ScalarValueR\x05right\x12\x16\n\x06output\x18\x03 \x01(\x08R\x06output"Q\n\x15JoinExampleCollection\x128\n\x08examples\x18\x01 \x03(\x0b2\x1c.logical_plan.v1.JoinExampleR\x08examplesB\x81\x01\n\x13com.logical_plan.v1B\x11ComplexTypesProtoP\x01\xa2\x02\x03LXX\xaa\x02\x0eLogicalPlan.V1\xca\x02\x0eLogicalPlan\\V1\xe2\x02\x1aLogicalPlan\\V1\\GPBMetadata\xea\x02\x0fLogicalPlan::V1b\x06proto3') Annotation: _descriptor.FileDescriptor is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
attribute
_globals
fenic._gen.protos.logical_plan.v1.complex_types_pb2._globals
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
false
true
30
30
null
null
null
null
globals()
null
Type: attribute Member Name: _globals Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2._globals Docstring: none Value: globals() Annotation: none is Public? : false is Private? : true Parameters: none Returns: none Parent Class: none
class
ScalarValue
fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarValue
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
9
25
null
null
null
null
null
[ "_message.Message" ]
Type: class Member Name: ScalarValue Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarValue Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarValue.__init__
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
25
25
null
None
[ "self", "string_value", "int_value", "double_value", "bool_value", "bytes_value", "array_value", "struct_value" ]
ScalarValue
null
null
Type: method Member Name: __init__ Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarValue.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "string_value", "int_value", "double_value", "bool_value", "bytes_value", "array_value", "struct_value"] Returns: None Parent Class: ScalarValue
class
ScalarArray
fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarArray
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
27
31
null
null
null
null
null
[ "_message.Message" ]
Type: class Member Name: ScalarArray Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarArray Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarArray.__init__
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
31
31
null
None
[ "self", "elements" ]
ScalarArray
null
null
Type: method Member Name: __init__ Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarArray.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "elements"] Returns: None Parent Class: ScalarArray
class
ScalarStruct
fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarStruct
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
33
37
null
null
null
null
null
[ "_message.Message" ]
Type: class Member Name: ScalarStruct Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarStruct Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarStruct.__init__
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
37
37
null
None
[ "self", "fields" ]
ScalarStruct
null
null
Type: method Member Name: __init__ Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarStruct.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "fields"] Returns: None Parent Class: ScalarStruct
class
ScalarStructField
fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarStructField
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
39
45
null
null
null
null
null
[ "_message.Message" ]
Type: class Member Name: ScalarStructField Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarStructField Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarStructField.__init__
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
45
45
null
None
[ "self", "name", "value" ]
ScalarStructField
null
null
Type: method Member Name: __init__ Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.ScalarStructField.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "name", "value"] Returns: None Parent Class: ScalarStructField
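The serialized descriptor above defines `ScalarValue` with a `value_type` oneof: exactly one of `string_value`, `int_value`, `double_value`, `bool_value`, `bytes_value`, `array_value`, or `struct_value` is populated at a time. A minimal stdlib-only sketch of that oneof behavior (the real generated class exposes it via protobuf's `WhichOneof("value_type")`; this model is an assumption-labeled illustration, not the generated API):

```python
# Illustrative model of the ScalarValue "value_type" oneof from the
# descriptor above. Not the generated class, which lives in
# fenic._gen.protos.logical_plan.v1.complex_types_pb2.

_ONEOF_FIELDS = (
    "string_value", "int_value", "double_value",
    "bool_value", "bytes_value", "array_value", "struct_value",
)

class ScalarValue:
    def __init__(self, **kwargs):
        unknown = set(kwargs) - set(_ONEOF_FIELDS)
        if unknown:
            raise TypeError(f"unknown fields: {sorted(unknown)}")
        if len(kwargs) > 1:
            # Mirrors oneof semantics: setting one field clears the others,
            # so at most one may be populated.
            raise ValueError("only one value_type field may be set")
        for name in _ONEOF_FIELDS:
            setattr(self, name, kwargs.get(name))

    def which_oneof(self):
        # Analogue of Message.WhichOneof("value_type"): returns the name
        # of the populated field, or None when unset.
        for name in _ONEOF_FIELDS:
            if getattr(self, name) is not None:
                return name
        return None

v = ScalarValue(int_value=42)
print(v.which_oneof())
```

In the real generated message, assigning `v.int_value = 42` after `v.string_value = "x"` would clear `string_value` rather than raise; the sketch simplifies this to construction-time enforcement.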
class
ResolvedClassDefinition
fenic._gen.protos.logical_plan.v1.complex_types_pb2.ResolvedClassDefinition
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
47
53
null
null
null
null
null
[ "_message.Message" ]
Type: class Member Name: ResolvedClassDefinition Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.ResolvedClassDefinition Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._gen.protos.logical_plan.v1.complex_types_pb2.ResolvedClassDefinition.__init__
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
53
53
null
None
[ "self", "label", "description" ]
ResolvedClassDefinition
null
null
Type: method Member Name: __init__ Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.ResolvedClassDefinition.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "label", "description"] Returns: None Parent Class: ResolvedClassDefinition
class
ResolvedModelAlias
fenic._gen.protos.logical_plan.v1.complex_types_pb2.ResolvedModelAlias
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
55
61
null
null
null
null
null
[ "_message.Message" ]
Type: class Member Name: ResolvedModelAlias Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.ResolvedModelAlias Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none
method
__init__
fenic._gen.protos.logical_plan.v1.complex_types_pb2.ResolvedModelAlias.__init__
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
61
61
null
None
[ "self", "name", "profile" ]
ResolvedModelAlias
null
null
Type: method Member Name: __init__ Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.ResolvedModelAlias.__init__ Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: ["self", "name", "profile"] Returns: None Parent Class: ResolvedModelAlias
class
ResolvedResponseFormat
fenic._gen.protos.logical_plan.v1.complex_types_pb2.ResolvedResponseFormat
null
site-packages/fenic/_gen/protos/logical_plan/v1/complex_types_pb2.py
true
false
63
71
null
null
null
null
null
[ "_message.Message" ]
Type: class Member Name: ResolvedResponseFormat Qualified Name: fenic._gen.protos.logical_plan.v1.complex_types_pb2.ResolvedResponseFormat Docstring: none Value: none Annotation: none is Public? : true is Private? : false Parameters: none Returns: none Parent Class: none