API element records extracted from the fenic package (site-packages/fenic). Each record lists the element's type (module, class, method, or attribute), member name, qualified name, source file and line range, visibility, parameters, return annotation, parent class, base classes, assigned value, and docstring; fields with no value are omitted.
Type: method
Member Name: check_and_consume_rate_limit
Qualified Name: fenic._inference.rate_limit_strategy.UnifiedTokenRateLimitStrategy.check_and_consume_rate_limit
File: site-packages/fenic/_inference/rate_limit_strategy.py (lines 126-151)
Visibility: public
Parameters: ["self", "token_estimate"]
Returns: bool
Parent Class: UnifiedTokenRateLimitStrategy
Docstring: Checks and consumes rate limits for both requests and total tokens.
    This implementation uses a single token bucket for both input and output tokens,
    enforcing the total token limit across all token types.
    Args:
        token_estimate: A TokenEstimate object containing the estimated input, output,
            and total tokens for the request.
    Returns:
        bool: True if there was enough capacity and it was consumed, False otherwise.

Type: method
Member Name: context_tokens_per_minute
Qualified Name: fenic._inference.rate_limit_strategy.UnifiedTokenRateLimitStrategy.context_tokens_per_minute
File: site-packages/fenic/_inference/rate_limit_strategy.py (lines 153-159)
Visibility: public
Parameters: ["self"]
Returns: int
Parent Class: UnifiedTokenRateLimitStrategy
Docstring: Returns the total token rate limit per minute.
    Returns:
        int: The total number of tokens allowed per minute (tpm).

Type: method
Member Name: __str__
Qualified Name: fenic._inference.rate_limit_strategy.UnifiedTokenRateLimitStrategy.__str__
File: site-packages/fenic/_inference/rate_limit_strategy.py (lines 161-167)
Visibility: public
Parameters: ["self"]
Parent Class: UnifiedTokenRateLimitStrategy
Docstring: Returns a string representation of the rate limit strategy.
    Returns:
        str: A string showing the RPM and TPM limits.
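The UnifiedTokenRateLimitStrategy entries above describe a single token bucket shared by input and output tokens, plus a request-per-minute limit. As an illustration only, a minimal version of that scheme might look like the sketch below; the class, method names, and refund behavior here are assumptions, not fenic's actual implementation.

```python
import time

class TokenBucket:
    """Continuously refilling bucket: `rate` tokens/sec, capped at `capacity`."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def try_consume(self, amount: float) -> bool:
        # Refill based on elapsed time, then consume if there is room.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= amount:
            self.tokens -= amount
            return True
        return False

class UnifiedStrategySketch:
    """One bucket for requests (rpm) and one shared bucket for all tokens (tpm)."""

    def __init__(self, rpm: int, tpm: int):
        self.request_bucket = TokenBucket(rpm, rpm / 60.0)
        self.token_bucket = TokenBucket(tpm, tpm / 60.0)

    def check_and_consume_rate_limit(self, total_tokens: int) -> bool:
        if not self.request_bucket.try_consume(1):
            return False
        if not self.token_bucket.try_consume(total_tokens):
            # Refund the request slot so a failed token check does not burn it.
            self.request_bucket.tokens = min(self.request_bucket.capacity,
                                             self.request_bucket.tokens + 1)
            return False
        return True

    def context_tokens_per_minute(self) -> int:
        return int(self.token_bucket.capacity)
```

A call that would overdraw the shared bucket returns False without consuming anything, which matches the documented "True if there was enough capacity and it was consumed" contract.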
Type: class
Member Name: SeparatedTokenRateLimitStrategy
Qualified Name: fenic._inference.rate_limit_strategy.SeparatedTokenRateLimitStrategy
File: site-packages/fenic/_inference/rate_limit_strategy.py (lines 170-240)
Visibility: public
Bases: ["RateLimitStrategy"]
Docstring: Rate limiting strategy that uses separate token buckets for input and output tokens.
    This strategy enforces both a request rate limit (RPM) and separate token rate limits
    for input (input_tpm) and output (output_tpm) tokens.
    Attributes:
        input_tpm: Input tokens per minute limit. Must be greater than 0.
        output_tpm: Output tokens per minute limit. Must be greater than 0.
        input_tokens_bucket: Token bucket for tracking and limiting input token usage.
        output_tokens_bucket: Token bucket for tracking and limiting output token usage.

Type: method
Member Name: __init__
Qualified Name: fenic._inference.rate_limit_strategy.SeparatedTokenRateLimitStrategy.__init__
File: site-packages/fenic/_inference/rate_limit_strategy.py (lines 182-187)
Visibility: public
Parameters: ["self", "rpm", "input_tpm", "output_tpm"]
Parent Class: SeparatedTokenRateLimitStrategy

Type: method
Member Name: backoff
Qualified Name: fenic._inference.rate_limit_strategy.SeparatedTokenRateLimitStrategy.backoff
File: site-packages/fenic/_inference/rate_limit_strategy.py (lines 189-194)
Visibility: public
Parameters: ["self", "curr_time"]
Returns: int
Parent Class: SeparatedTokenRateLimitStrategy
Docstring: Backoff the request/token rate limit bucket.

Type: method
Member Name: check_and_consume_rate_limit
Qualified Name: fenic._inference.rate_limit_strategy.SeparatedTokenRateLimitStrategy.check_and_consume_rate_limit
File: site-packages/fenic/_inference/rate_limit_strategy.py (lines 196-224)
Visibility: public
Parameters: ["self", "token_estimate"]
Returns: bool
Parent Class: SeparatedTokenRateLimitStrategy
Docstring: Checks and consumes rate limits for requests, input tokens, and output tokens.
    This implementation uses separate token buckets for input and output tokens,
    enforcing separate limits for each token type.
    Args:
        token_estimate: A TokenEstimate object containing the estimated input, output,
            and total tokens for the request.
    Returns:
        bool: True if there was enough capacity and it was consumed, False otherwise.

Type: method
Member Name: context_tokens_per_minute
Qualified Name: fenic._inference.rate_limit_strategy.SeparatedTokenRateLimitStrategy.context_tokens_per_minute
File: site-packages/fenic/_inference/rate_limit_strategy.py (lines 226-232)
Visibility: public
Parameters: ["self"]
Returns: int
Parent Class: SeparatedTokenRateLimitStrategy
Docstring: Returns the total token rate limit per minute.
    Returns:
        int: The sum of input and output tokens allowed per minute.

Type: method
Member Name: __str__
Qualified Name: fenic._inference.rate_limit_strategy.SeparatedTokenRateLimitStrategy.__str__
File: site-packages/fenic/_inference/rate_limit_strategy.py (lines 234-240)
Visibility: public
Parameters: ["self"]
Parent Class: SeparatedTokenRateLimitStrategy
Docstring: Returns a string representation of the rate limit strategy.
    Returns:
        str: A string showing the RPM, input TPM, and output TPM limits.
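The SeparatedTokenRateLimitStrategy entries above describe three independent limits: requests per minute, input tokens per minute, and output tokens per minute, with `context_tokens_per_minute` documented as their token sum. A hedged sketch of that check-all-then-consume pattern follows; names and bucket mechanics are illustrative assumptions, not fenic's code.

```python
import time

class TokenBucket:
    """Continuously refilling bucket: `rate` tokens/sec, capped at `capacity`."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def peek(self) -> float:
        # Refill, then report available tokens without consuming.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        return self.tokens

    def consume(self, amount: float) -> None:
        self.tokens -= amount

class SeparatedStrategySketch:
    """Separate buckets for requests, input tokens, and output tokens."""

    def __init__(self, rpm: int, input_tpm: int, output_tpm: int):
        self.input_tpm = input_tpm
        self.output_tpm = output_tpm
        self.request_bucket = TokenBucket(rpm, rpm / 60.0)
        self.input_tokens_bucket = TokenBucket(input_tpm, input_tpm / 60.0)
        self.output_tokens_bucket = TokenBucket(output_tpm, output_tpm / 60.0)

    def check_and_consume_rate_limit(self, input_tokens: int, output_tokens: int) -> bool:
        # Check every bucket before consuming any, so a shortfall in one
        # bucket does not drain capacity from the others.
        demands = [(self.request_bucket, 1),
                   (self.input_tokens_bucket, input_tokens),
                   (self.output_tokens_bucket, output_tokens)]
        if any(bucket.peek() < amount for bucket, amount in demands):
            return False
        for bucket, amount in demands:
            bucket.consume(amount)
        return True

    def context_tokens_per_minute(self) -> int:
        # Documented as the sum of the input and output limits.
        return self.input_tpm + self.output_tpm
```

The all-or-nothing check is the point of the design: a request that is short on output budget leaves the input bucket untouched for other requests.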
Type: module
Member Name: cohere
Qualified Name: fenic._inference.cohere
File: site-packages/fenic/_inference/cohere/__init__.py
Visibility: public

Type: module
Member Name: cohere_provider
Qualified Name: fenic._inference.cohere.cohere_provider
File: site-packages/fenic/_inference/cohere/cohere_provider.py
Visibility: public
Docstring: Cohere model provider implementation.

Type: attribute
Member Name: logger
Qualified Name: fenic._inference.cohere.cohere_provider.logger
File: site-packages/fenic/_inference/cohere/cohere_provider.py (line 10)
Visibility: public
Value: logging.getLogger(__name__)

Type: class
Member Name: CohereModelProvider
Qualified Name: fenic._inference.cohere.cohere_provider.CohereModelProvider
File: site-packages/fenic/_inference/cohere/cohere_provider.py (lines 13-38)
Visibility: public
Bases: ["ModelProviderClass"]
Docstring: Cohere implementation of ModelProvider.

Type: method
Member Name: get_api_key
Qualified Name: fenic._inference.cohere.cohere_provider.CohereModelProvider.get_api_key
File: site-packages/fenic/_inference/cohere/cohere_provider.py (lines 20-24)
Visibility: public
Parameters: ["self"]
Returns: str
Parent Class: CohereModelProvider
Docstring: Get the Cohere API key.

Type: method
Member Name: create_client
Qualified Name: fenic._inference.cohere.cohere_provider.CohereModelProvider.create_client
File: site-packages/fenic/_inference/cohere/cohere_provider.py (lines 26-28)
Visibility: public
Parameters: ["self"]
Parent Class: CohereModelProvider
Docstring: Create a Cohere client instance.

Type: method
Member Name: create_aio_client
Qualified Name: fenic._inference.cohere.cohere_provider.CohereModelProvider.create_aio_client
File: site-packages/fenic/_inference/cohere/cohere_provider.py (lines 30-32)
Visibility: public
Parameters: ["self"]
Parent Class: CohereModelProvider
Docstring: Create a Cohere async client instance.

Type: method
Member Name: validate_api_key
Qualified Name: fenic._inference.cohere.cohere_provider.CohereModelProvider.validate_api_key
File: site-packages/fenic/_inference/cohere/cohere_provider.py (lines 34-38)
Visibility: public
Parameters: ["self"]
Returns: None
Parent Class: CohereModelProvider
Docstring: Validate Cohere API key by making a minimal API call.
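The CohereModelProvider records above cover key retrieval and validation. A minimal sketch of that shape is below; the environment variable name and the error type are assumptions (the record does not say where fenic reads the key from), and `validate_api_key` here only stands in for the "minimal API call" the docstring describes.

```python
import os

class CohereModelProviderSketch:
    """Illustrative provider shell; not fenic's actual implementation."""

    ENV_VAR = "COHERE_API_KEY"  # assumed variable name

    def get_api_key(self) -> str:
        key = os.environ.get(self.ENV_VAR, "")
        if not key:
            raise RuntimeError(f"{self.ENV_VAR} is not set")
        return key

    def validate_api_key(self) -> None:
        # A real implementation would issue a cheap authenticated API call
        # here and surface authentication failures; this sketch only checks
        # that a key is present at all.
        self.get_api_key()
```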
Type: module
Member Name: cohere_profile_manager
Qualified Name: fenic._inference.cohere.cohere_profile_manager
File: site-packages/fenic/_inference/cohere/cohere_profile_manager.py
Visibility: public

Type: class
Member Name: CohereEmbeddingsProfileConfiguration
Qualified Name: fenic._inference.cohere.cohere_profile_manager.CohereEmbeddingsProfileConfiguration
File: site-packages/fenic/_inference/cohere/cohere_profile_manager.py (lines 12-24)
Visibility: public
Bases: ["BaseProfileConfiguration"]
Docstring: Configuration for Cohere embeddings model profiles.
    Attributes:
        output_dimensionality: The desired output dimensionality for embeddings
        input_type: The type of input text (search_query, search_document, classification, clustering)
    Note:
        Cohere supports other embedding types, but we only support float embeddings.

Type: method
Member Name: __init__
Qualified Name: fenic._inference.cohere.cohere_profile_manager.CohereEmbeddingsProfileConfiguration.__init__
File: site-packages/fenic/_inference/cohere/cohere_profile_manager.py (lines 0-0)
Visibility: public
Parameters: ["self", "output_dimensionality", "input_type"]
Returns: None
Parent Class: CohereEmbeddingsProfileConfiguration

Type: class
Member Name: CohereEmbeddingsProfileManager
Qualified Name: fenic._inference.cohere.cohere_profile_manager.CohereEmbeddingsProfileManager
File: site-packages/fenic/_inference/cohere/cohere_profile_manager.py (lines 26-61)
Visibility: public
Bases: ["ProfileManager[ResolvedCohereModelProfile, CohereEmbeddingsProfileConfiguration]"]
Docstring: Manages Cohere-specific profile configurations for embeddings.

Type: method
Member Name: __init__
Qualified Name: fenic._inference.cohere.cohere_profile_manager.CohereEmbeddingsProfileManager.__init__
File: site-packages/fenic/_inference/cohere/cohere_profile_manager.py (lines 29-36)
Visibility: public
Parameters: ["self", "model_parameters", "profile_configurations", "default_profile_name"]
Parent Class: CohereEmbeddingsProfileManager

Type: method
Member Name: _process_profile
Qualified Name: fenic._inference.cohere.cohere_profile_manager.CohereEmbeddingsProfileManager._process_profile
File: site-packages/fenic/_inference/cohere/cohere_profile_manager.py (lines 38-57)
Visibility: private
Parameters: ["self", "profile"]
Returns: CohereEmbeddingsProfileConfiguration
Parent Class: CohereEmbeddingsProfileManager
Docstring: Process Cohere profile configuration.
    Args:
        name: Name of the profile
        profile: The profile configuration to process
    Returns:
        Processed Cohere-specific profile configuration
    Raises:
        ConfigurationError: If dimensionality is invalid

Type: method
Member Name: get_default_profile
Qualified Name: fenic._inference.cohere.cohere_profile_manager.CohereEmbeddingsProfileManager.get_default_profile
File: site-packages/fenic/_inference/cohere/cohere_profile_manager.py (lines 59-61)
Visibility: public
Parameters: ["self"]
Returns: CohereEmbeddingsProfileConfiguration
Parent Class: CohereEmbeddingsProfileManager
Docstring: Get default Cohere configuration.
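The profile-manager records above describe a small pattern: named profile configurations, a required default profile, and a lookup that falls back to the default. A hedged sketch of that resolution logic, using plain dicts in place of fenic's configuration types (the shapes and error type are assumptions):

```python
class EmbeddingsProfileManagerSketch:
    """Resolves named profiles with a default fallback; illustrative only."""

    def __init__(self, profile_configurations: dict, default_profile_name: str):
        if default_profile_name not in profile_configurations:
            raise ValueError(f"unknown default profile: {default_profile_name}")
        self._profiles = profile_configurations
        self._default = default_profile_name

    def get_profile(self, name=None) -> dict:
        # No name means the caller wants the configured default.
        if name is None:
            return self._profiles[self._default]
        try:
            return self._profiles[name]
        except KeyError:
            raise ValueError(f"unknown profile: {name}") from None

    def get_default_profile(self) -> dict:
        return self._profiles[self._default]
```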
Type: module
Member Name: cohere_batch_embeddings_client
Qualified Name: fenic._inference.cohere.cohere_batch_embeddings_client
File: site-packages/fenic/_inference/cohere/cohere_batch_embeddings_client.py
Visibility: public

Type: attribute
Member Name: logger
Qualified Name: fenic._inference.cohere.cohere_batch_embeddings_client.logger
File: site-packages/fenic/_inference/cohere/cohere_batch_embeddings_client.py (line 26)
Visibility: public
Value: logging.getLogger(__name__)

Type: class
Member Name: CohereBatchEmbeddingsClient
Qualified Name: fenic._inference.cohere.cohere_batch_embeddings_client.CohereBatchEmbeddingsClient
File: site-packages/fenic/_inference/cohere/cohere_batch_embeddings_client.py (lines 29-192)
Visibility: public
Bases: ["ModelClient[FenicEmbeddingsRequest, List[float]]"]
Docstring: Client for making batch requests to Cohere's embeddings API.

Type: method
Member Name: __init__
Qualified Name: fenic._inference.cohere.cohere_batch_embeddings_client.CohereBatchEmbeddingsClient.__init__
File: site-packages/fenic/_inference/cohere/cohere_batch_embeddings_client.py (lines 32-73)
Visibility: public
Parameters: ["self", "rate_limit_strategy", "model", "queue_size", "max_backoffs", "profile_configurations", "default_profile_name"]
Parent Class: CohereBatchEmbeddingsClient
Docstring: Initialize the Cohere batch embeddings client.
    Args:
        rate_limit_strategy: Strategy for handling rate limits
        model: The model to use
        queue_size: Size of the request queue
        max_backoffs: Maximum number of backoff attempts
        preset_configurations: Dictionary of preset configurations
        default_preset_name: Default preset to use when none specified

Type: method
Member Name: make_single_request
Qualified Name: fenic._inference.cohere.cohere_batch_embeddings_client.CohereBatchEmbeddingsClient.make_single_request
File: site-packages/fenic/_inference/cohere/cohere_batch_embeddings_client.py (lines 75-138)
Visibility: public
Parameters: ["self", "request"]
Returns: Union[None, List[float], TransientException, FatalException]
Parent Class: CohereBatchEmbeddingsClient
Docstring: Make a single request to the Cohere embeddings API.
    Args:
        request: The embedding request to process
    Returns:
        List of embedding floats, or an exception wrapper

Type: method
Member Name: get_request_key
Qualified Name: fenic._inference.cohere.cohere_batch_embeddings_client.CohereBatchEmbeddingsClient.get_request_key
File: site-packages/fenic/_inference/cohere/cohere_batch_embeddings_client.py (lines 140-158)
Visibility: public
Parameters: ["self", "request"]
Returns: str
Parent Class: CohereBatchEmbeddingsClient
Docstring: Generate a unique key for request deduplication.
    Args:
        request: The request to generate a key for
    Returns:
        A unique key for the request
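`get_request_key` above documents a deduplication key: identical requests should map to the same string so a batch client can serve duplicates from one API call. One common way to build such a key is to hash the request's semantic fields; the field names below are illustrative assumptions, not the actual FenicEmbeddingsRequest shape.

```python
import hashlib
import json

def request_key_sketch(model: str, text: str, profile: str = "") -> str:
    """Deterministic dedup key: requests with the same fields hash alike."""
    # Canonical JSON (sorted keys) makes the serialization stable across calls.
    payload = json.dumps(
        {"model": model, "text": text, "profile": profile},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

Because the serialization is canonical, two logically identical requests produce byte-identical payloads and therefore the same 64-character hex digest.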
Type: method
Member Name: estimate_tokens_for_request
Qualified Name: fenic._inference.cohere.cohere_batch_embeddings_client.CohereBatchEmbeddingsClient.estimate_tokens_for_request
File: site-packages/fenic/_inference/cohere/cohere_batch_embeddings_client.py (lines 160-172)
Visibility: public
Parameters: ["self", "request"]
Returns: TokenEstimate
Parent Class: CohereBatchEmbeddingsClient
Docstring: Estimate the number of tokens for a request.
    Args:
        request: The request to estimate tokens for
    Returns:
        TokenEstimate: The estimated token usage
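The estimate above feeds the rate-limit strategies' `check_and_consume_rate_limit`, which expects input, output, and total token counts. A minimal stand-in for that flow is sketched below; the TokenEstimate fields and the characters-per-token heuristic are assumptions, not fenic's actual tokenizer.

```python
from dataclasses import dataclass

@dataclass
class TokenEstimateSketch:
    """Minimal stand-in for a TokenEstimate; real fields may differ."""
    input_tokens: int
    output_tokens: int

    @property
    def total_tokens(self) -> int:
        return self.input_tokens + self.output_tokens

def estimate_tokens_for_request_sketch(text: str) -> TokenEstimateSketch:
    # Crude heuristic: roughly 4 characters per token. Embedding requests
    # return vectors, not generated text, so the output estimate is zero.
    return TokenEstimateSketch(input_tokens=max(1, len(text) // 4),
                               output_tokens=0)
```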
Type: method
Member Name: _get_max_output_tokens
Qualified Name: fenic._inference.cohere.cohere_batch_embeddings_client.CohereBatchEmbeddingsClient._get_max_output_tokens
File: site-packages/fenic/_inference/cohere/cohere_batch_embeddings_client.py (lines 174-180)
Visibility: private
Parameters: ["self", "request"]
Returns: int
Parent Class: CohereBatchEmbeddingsClient
Docstring: Get maximum output tokens (always 0 for embeddings).
    Returns:
        0 since embeddings don't produce text tokens

Type: method
Member Name: reset_metrics
Qualified Name: fenic._inference.cohere.cohere_batch_embeddings_client.CohereBatchEmbeddingsClient.reset_metrics
File: site-packages/fenic/_inference/cohere/cohere_batch_embeddings_client.py (lines 182-184)
Visibility: public
Parameters: ["self"]
Parent Class: CohereBatchEmbeddingsClient
Docstring: Reset all metrics to their initial values.

Type: method
Member Name: get_metrics
Qualified Name: fenic._inference.cohere.cohere_batch_embeddings_client.CohereBatchEmbeddingsClient.get_metrics
File: site-packages/fenic/_inference/cohere/cohere_batch_embeddings_client.py (lines 186-192)
Visibility: public
Parameters: ["self"]
Returns: RMMetrics
Parent Class: CohereBatchEmbeddingsClient
Docstring: Get the current metrics.
    Returns:
        The current metrics
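The `get_metrics` / `reset_metrics` pair above suggests a small accumulator that can be snapshotted and cleared between runs. A sketch of that pattern follows; the metric field names are assumptions (the records only name the RMMetrics type, not its contents).

```python
from dataclasses import dataclass

@dataclass
class RMMetricsSketch:
    """Illustrative metrics holder; the real RMMetrics fields may differ."""
    num_requests: int = 0
    total_input_tokens: int = 0

class MetricsTrackerSketch:
    def __init__(self) -> None:
        self._metrics = RMMetricsSketch()

    def record_request(self, input_tokens: int) -> None:
        self._metrics.num_requests += 1
        self._metrics.total_input_tokens += input_tokens

    def get_metrics(self) -> RMMetricsSketch:
        return self._metrics

    def reset_metrics(self) -> None:
        # Replace rather than mutate, so previously returned snapshots
        # keep the values they had when they were handed out.
        self._metrics = RMMetricsSketch()
```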
Type: module
Member Name: google
Qualified Name: fenic._inference.google
File: site-packages/fenic/_inference/google/__init__.py
Visibility: public

Type: module
Member Name: google_profile_manager
Qualified Name: fenic._inference.google.google_profile_manager
File: site-packages/fenic/_inference/google/google_profile_manager.py
Visibility: public

Type: class
Member Name: GoogleCompletionsProfileConfig
Qualified Name: fenic._inference.google.google_profile_manager.GoogleCompletionsProfileConfig
File: site-packages/fenic/_inference/google/google_profile_manager.py (lines 18-29)
Visibility: public
Bases: ["BaseProfileConfiguration"]
Docstring: Configuration for Google Gemini model profiles.
    Attributes:
        thinking_enabled: Whether thinking/reasoning is enabled for this profile
        thinking_token_budget: Token budget allocated for thinking/reasoning
        additional_generation_config: Additional Google-specific generation configuration

Type: method
Member Name: __init__
Qualified Name: fenic._inference.google.google_profile_manager.GoogleCompletionsProfileConfig.__init__
File: site-packages/fenic/_inference/google/google_profile_manager.py (lines 0-0)
Visibility: public
Parameters: ["self", "thinking_enabled", "thinking_token_budget", "additional_generation_config"]
Returns: None
Parent Class: GoogleCompletionsProfileConfig
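The GoogleCompletionsProfileConfig record documents three attributes: a thinking toggle, a thinking token budget, and a dict of extra generation settings. A dataclass sketch with that shape is below; the defaults and the `to_generation_config` merge helper are assumptions for illustration, not fenic's or Google's actual API fields.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class GoogleCompletionsProfileConfigSketch:
    """Shape implied by the documented attributes; defaults are assumptions."""
    thinking_enabled: bool = False
    thinking_token_budget: int = 0
    additional_generation_config: Dict[str, Any] = field(default_factory=dict)

    def to_generation_config(self) -> Dict[str, Any]:
        # Fold the thinking settings into the provider-specific overrides;
        # the key name used here is illustrative.
        config = dict(self.additional_generation_config)
        if self.thinking_enabled:
            config["thinking_token_budget"] = self.thinking_token_budget
        return config
```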
Type: class
Member Name: GoogleEmbeddingsProfileConfig
Qualified Name: fenic._inference.google.google_profile_manager.GoogleEmbeddingsProfileConfig
File: site-packages/fenic/_inference/google/google_profile_manager.py (lines 31-34)
Visibility: public
Bases: ["BaseProfileConfiguration"]
Docstring: Configuration for Google Gemini embeddings model profiles.

Type: method
Member Name: __init__
Qualified Name: fenic._inference.google.google_profile_manager.GoogleEmbeddingsProfileConfig.__init__
File: site-packages/fenic/_inference/google/google_profile_manager.py (lines 0-0)
Visibility: public
Parameters: ["self", "additional_embedding_config"]
Returns: None
Parent Class: GoogleEmbeddingsProfileConfig

Type: class
Member Name: GoogleEmbeddingsProfileManager
Qualified Name: fenic._inference.google.google_profile_manager.GoogleEmbeddingsProfileManager
File: site-packages/fenic/_inference/google/google_profile_manager.py (lines 36-61)
Visibility: public
Bases: ["ProfileManager[ResolvedGoogleModelProfile, GoogleEmbeddingsProfileConfig]"]

Type: method
Member Name: __init__
Qualified Name: fenic._inference.google.google_profile_manager.GoogleEmbeddingsProfileManager.__init__
File: site-packages/fenic/_inference/google/google_profile_manager.py (lines 38-45)
Visibility: public
Parameters: ["self", "model_parameters", "profiles", "default_profile_name"]
Parent Class: GoogleEmbeddingsProfileManager

Type: method
Member Name: _process_profile
Qualified Name: fenic._inference.google.google_profile_manager.GoogleEmbeddingsProfileManager._process_profile
File: site-packages/fenic/_inference/google/google_profile_manager.py (lines 48-58)
Visibility: private
Parameters: ["self", "profile"]
Returns: GoogleEmbeddingsProfileConfig
Parent Class: GoogleEmbeddingsProfileManager
| null | null |
Type: method
Member Name: _process_profile
Qualified Name: fenic._inference.google.google_profile_manager.GoogleEmbeddingsProfileManager._process_profile
Docstring: none
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["self", "profile"]
Returns: GoogleEmbeddingsProfileConfig
Parent Class: GoogleEmbeddingsProfileManager
|
method
|
get_default_profile
|
fenic._inference.google.google_profile_manager.GoogleEmbeddingsProfileManager.get_default_profile
| null |
site-packages/fenic/_inference/google/google_profile_manager.py
| true | false | 60 | 61 | null |
GoogleEmbeddingsProfileConfig
|
[
"self"
] |
GoogleEmbeddingsProfileManager
| null | null |
Type: method
Member Name: get_default_profile
Qualified Name: fenic._inference.google.google_profile_manager.GoogleEmbeddingsProfileManager.get_default_profile
Docstring: none
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: GoogleEmbeddingsProfileConfig
Parent Class: GoogleEmbeddingsProfileManager
|
class
|
GoogleCompletionsProfileManager
|
fenic._inference.google.google_profile_manager.GoogleCompletionsProfileManager
|
Manages Google-specific profile configurations.
This class handles the conversion of Fenic profile configurations to
Google Gemini-specific configurations, including thinking/reasoning settings.
|
site-packages/fenic/_inference/google/google_profile_manager.py
| true | false | 65 | 151 | null | null | null | null | null |
[
"ProfileManager[ResolvedGoogleModelProfile, GoogleCompletionsProfileConfig]"
] |
Type: class
Member Name: GoogleCompletionsProfileManager
Qualified Name: fenic._inference.google.google_profile_manager.GoogleCompletionsProfileManager
Docstring: Manages Google-specific profile configurations.
This class handles the conversion of Fenic profile configurations to
Google Gemini-specific configurations, including thinking/reasoning settings.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
method
|
__init__
|
fenic._inference.google.google_profile_manager.GoogleCompletionsProfileManager.__init__
|
Initialize the Google profile configuration manager.
Args:
model_parameters: Parameters for the completion model
profile_configurations: Dictionary of profile configurations
default_profile_name: Name of the default profile to use
|
site-packages/fenic/_inference/google/google_profile_manager.py
| true | false | 72 | 86 | null | null |
[
"self",
"model_parameters",
"profile_configurations",
"default_profile_name"
] |
GoogleCompletionsProfileManager
| null | null |
Type: method
Member Name: __init__
Qualified Name: fenic._inference.google.google_profile_manager.GoogleCompletionsProfileManager.__init__
Docstring: Initialize the Google profile configuration manager.
Args:
model_parameters: Parameters for the completion model
profile_configurations: Dictionary of profile configurations
default_profile_name: Name of the default profile to use
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "model_parameters", "profile_configurations", "default_profile_name"]
Returns: none
Parent Class: GoogleCompletionsProfileManager
|
method
|
_process_profile
|
fenic._inference.google.google_profile_manager.GoogleCompletionsProfileManager._process_profile
|
Process Google profile configuration.
Converts a Fenic profile configuration to a Google-specific configuration,
handling thinking/reasoning settings based on model capabilities.
Args:
profile: The Fenic profile configuration to process
Returns:
Google-specific profile configuration
|
site-packages/fenic/_inference/google/google_profile_manager.py
| false | true | 88 | 132 | null |
GoogleCompletionsProfileConfig
|
[
"self",
"profile"
] |
GoogleCompletionsProfileManager
| null | null |
Type: method
Member Name: _process_profile
Qualified Name: fenic._inference.google.google_profile_manager.GoogleCompletionsProfileManager._process_profile
Docstring: Process Google profile configuration.
Converts a Fenic profile configuration to a Google-specific configuration,
handling thinking/reasoning settings based on model capabilities.
Args:
profile: The Fenic profile configuration to process
Returns:
Google-specific profile configuration
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["self", "profile"]
Returns: GoogleCompletionsProfileConfig
Parent Class: GoogleCompletionsProfileManager
|
method
|
get_default_profile
|
fenic._inference.google.google_profile_manager.GoogleCompletionsProfileManager.get_default_profile
|
Get default Google configuration.
Returns:
Default configuration with thinking disabled
|
site-packages/fenic/_inference/google/google_profile_manager.py
| true | false | 134 | 151 | null |
GoogleCompletionsProfileConfig
|
[
"self"
] |
GoogleCompletionsProfileManager
| null | null |
Type: method
Member Name: get_default_profile
Qualified Name: fenic._inference.google.google_profile_manager.GoogleCompletionsProfileManager.get_default_profile
Docstring: Get default Google configuration.
Returns:
Default configuration with thinking disabled
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: GoogleCompletionsProfileConfig
Parent Class: GoogleCompletionsProfileManager
|
module
|
gemini_native_chat_completions_client
|
fenic._inference.google.gemini_native_chat_completions_client
| null |
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: gemini_native_chat_completions_client
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client
Docstring: none
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
logger
|
fenic._inference.google.gemini_native_chat_completions_client.logger
| null |
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| true | false | 47 | 47 | null | null | null | null |
logging.getLogger(__name__)
| null |
Type: attribute
Member Name: logger
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.logger
Docstring: none
Value: logging.getLogger(__name__)
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
class
|
GeminiNativeChatCompletionsClient
|
fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient
|
Native (google-genai) Google Gemini chat-completions client.
This client handles communication with Google's Gemini models using the native
google-genai library. It supports both standard and Vertex AI environments,
thinking/reasoning capabilities, structured output, and comprehensive token
tracking.
|
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| true | false | 50 | 434 | null | null | null | null | null |
[
"ModelClient[FenicCompletionsRequest, FenicCompletionsResponse]"
] |
Type: class
Member Name: GeminiNativeChatCompletionsClient
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient
Docstring: Native (google-genai) Google Gemini chat-completions client.
This client handles communication with Google's Gemini models using the native
google-genai library. It supports both standard and Vertex AI environments,
thinking/reasoning capabilities, structured output, and comprehensive token
tracking.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
method
|
__init__
|
fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.__init__
|
Initialize the Gemini native chat completions client.
Args:
rate_limit_strategy: Strategy for rate limiting requests
model_provider: Google model provider (Developer or Vertex AI)
model: Gemini model name to use
queue_size: Maximum size of the request queue
max_backoffs: Maximum number of retry backoffs
profiles: Dictionary of profile configurations
default_profile_name: Name of the default profile to use
|
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| true | false | 62 | 107 | null | null |
[
"self",
"rate_limit_strategy",
"model_provider",
"model",
"queue_size",
"max_backoffs",
"profiles",
"default_profile_name"
] |
GeminiNativeChatCompletionsClient
| null | null |
Type: method
Member Name: __init__
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.__init__
Docstring: Initialize the Gemini native chat completions client.
Args:
rate_limit_strategy: Strategy for rate limiting requests
model_provider: Google model provider (Developer or Vertex AI)
model: Gemini model name to use
queue_size: Maximum size of the request queue
max_backoffs: Maximum number of retry backoffs
profiles: Dictionary of profile configurations
default_profile_name: Name of the default profile to use
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "rate_limit_strategy", "model_provider", "model", "queue_size", "max_backoffs", "profiles", "default_profile_name"]
Returns: none
Parent Class: GeminiNativeChatCompletionsClient
|
method
|
reset_metrics
|
fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.reset_metrics
|
Reset metrics to initial state.
|
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| true | false | 109 | 111 | null | null |
[
"self"
] |
GeminiNativeChatCompletionsClient
| null | null |
Type: method
Member Name: reset_metrics
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.reset_metrics
Docstring: Reset metrics to initial state.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: none
Parent Class: GeminiNativeChatCompletionsClient
|
method
|
get_metrics
|
fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.get_metrics
|
Get current metrics.
Returns:
Current language model metrics
|
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| true | false | 113 | 119 | null |
LMMetrics
|
[
"self"
] |
GeminiNativeChatCompletionsClient
| null | null |
Type: method
Member Name: get_metrics
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.get_metrics
Docstring: Get current metrics.
Returns:
Current language model metrics
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: LMMetrics
Parent Class: GeminiNativeChatCompletionsClient
|
method
|
_convert_messages
|
fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient._convert_messages
|
Convert Fenic LMRequestMessages → list of google-genai `Content` objects.
Converts Fenic message format to Google's Content format, including
few-shot examples and the final user prompt.
Args:
messages: Fenic message format
Returns:
List of Google Content objects
|
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| false | true | 121 | 155 | null |
list[genai.types.ContentUnion]
|
[
"self",
"messages"
] |
GeminiNativeChatCompletionsClient
| null | null |
Type: method
Member Name: _convert_messages
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient._convert_messages
Docstring: Convert Fenic LMRequestMessages → list of google-genai `Content` objects.
Converts Fenic message format to Google's Content format, including
few-shot examples and the final user prompt.
Args:
messages: Fenic message format
Returns:
List of Google Content objects
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["self", "messages"]
Returns: list[genai.types.ContentUnion]
Parent Class: GeminiNativeChatCompletionsClient
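Per its docstring, `_convert_messages` turns fenic messages into google-genai `Content` objects, "including few-shot examples and the final user prompt." A library-free sketch of that shape, using plain dicts that mirror the `{"role", "parts"}` structure of genai `Content` (roles `user`/`model` follow the Gemini API; the system instruction is omitted here because Gemini takes it via the generation config rather than the contents list):

```python
def convert_messages(examples: list[tuple[str, str]], user_prompt: str) -> list[dict]:
    # Each few-shot example becomes a user turn followed by a model turn,
    # then the final user prompt closes the conversation.
    contents: list[dict] = []
    for prompt, answer in examples:
        contents.append({"role": "user", "parts": [{"text": prompt}]})
        contents.append({"role": "model", "parts": [{"text": answer}]})
    contents.append({"role": "user", "parts": [{"text": user_prompt}]})
    return contents

msgs = convert_messages([("2+2?", "4")], "3+3?")
```

In the real client these would be `genai.types.Content` instances, not dicts; the turn ordering is the part the docstring pins down.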
|
method
|
count_tokens
|
fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.count_tokens
|
Count tokens in messages.
Re-exposes the parent implementation for type checking.
Args:
messages: Messages to count tokens for
Returns:
Token count
|
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| true | false | 157 | 169 | null |
int
|
[
"self",
"messages"
] |
GeminiNativeChatCompletionsClient
| null | null |
Type: method
Member Name: count_tokens
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.count_tokens
Docstring: Count tokens in messages.
Re-exposes the parent implementation for type checking.
Args:
messages: Messages to count tokens for
Returns:
Token count
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "messages"]
Returns: int
Parent Class: GeminiNativeChatCompletionsClient
|
method
|
_estimate_structured_output_overhead
|
fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient._estimate_structured_output_overhead
|
Use Google-specific response schema token estimation.
Args:
response_format: Pydantic model class defining the response format
Returns:
Estimated token overhead for structured output
|
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| false | true | 171 | 180 | null |
int
|
[
"self",
"response_format"
] |
GeminiNativeChatCompletionsClient
| null | null |
Type: method
Member Name: _estimate_structured_output_overhead
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient._estimate_structured_output_overhead
Docstring: Use Google-specific response schema token estimation.
Args:
response_format: Pydantic model class defining the response format
Returns:
Estimated token overhead for structured output
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["self", "response_format"]
Returns: int
Parent Class: GeminiNativeChatCompletionsClient
|
method
|
_get_max_output_tokens
|
fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient._get_max_output_tokens
|
Get maximum output tokens including thinking budget.
Conservative estimate that includes both completion tokens and
thinking token budget with a safety margin.
Args:
request: The completion request
Returns:
Maximum output tokens (completion + thinking budget with safety margin)
|
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| false | true | 182 | 199 | null |
int
|
[
"self",
"request"
] |
GeminiNativeChatCompletionsClient
| null | null |
Type: method
Member Name: _get_max_output_tokens
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient._get_max_output_tokens
Docstring: Get maximum output tokens including thinking budget.
Conservative estimate that includes both completion tokens and
thinking token budget with a safety margin.
Args:
request: The completion request
Returns:
Maximum output tokens (completion + thinking budget with safety margin)
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["self", "request"]
Returns: int
Parent Class: GeminiNativeChatCompletionsClient
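The `_get_max_output_tokens` docstring describes a conservative estimate: completion tokens plus the thinking budget, inflated by a safety margin. A sketch of that arithmetic (the 1.5 factor is an illustrative assumption, not fenic's actual constant):

```python
def max_output_tokens(completion_tokens: int, thinking_budget: int,
                      safety_factor: float = 1.5) -> int:
    # Reserve the full completion budget plus the thinking budget padded
    # by a safety factor, so rate limiting never under-counts a request
    # whose reasoning runs long.
    return completion_tokens + int(thinking_budget * safety_factor)

limit = max_output_tokens(1024, 2048)  # → 4096
```

Over-reserving is the cheap direction of the trade-off: an estimate that is too low would let a request through that then blows the provider's token-per-minute limit mid-flight.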
|
method
|
_estimate_response_schema_tokens
|
fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient._estimate_response_schema_tokens
|
Estimate token count for a response format schema.
Uses Google's tokenizer to count tokens in a JSON schema representation
of the response format. Results are cached for performance.
Args:
response_format: Pydantic model class defining the response format
Returns:
Estimated token count for the response format
|
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| false | true | 201 | 215 | null |
int
|
[
"self",
"response_format"
] |
GeminiNativeChatCompletionsClient
| null | null |
Type: method
Member Name: _estimate_response_schema_tokens
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient._estimate_response_schema_tokens
Docstring: Estimate token count for a response format schema.
Uses Google's tokenizer to count tokens in a JSON schema representation
of the response format. Results are cached for performance.
Args:
response_format: Pydantic model class defining the response format
Returns:
Estimated token count for the response format
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["self", "response_format"]
Returns: int
Parent Class: GeminiNativeChatCompletionsClient
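`_estimate_response_schema_tokens` is documented as tokenizing a JSON-schema rendering of the response format and caching the result. A sketch of the caching pattern, keyed on the canonical JSON string so `functools.lru_cache` gets a hashable argument (the 4-characters-per-token ratio is a crude stand-in; the real client uses Google's tokenizer):

```python
import json
from functools import lru_cache

@lru_cache(maxsize=None)
def estimate_schema_tokens(schema_json: str) -> int:
    # Crude stand-in tokenizer: roughly 4 characters per token.
    return max(1, len(schema_json) // 4)

schema = json.dumps(
    {"type": "object", "properties": {"name": {"type": "string"}}},
    sort_keys=True,  # canonical form, so equal schemas share a cache entry
)
tokens = estimate_schema_tokens(schema)
```

Caching matters here because the same Pydantic response format is typically reused across every row of a batch, so the schema only needs tokenizing once.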
|
method
|
get_request_key
|
fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.get_request_key
|
Generate a unique key for the request.
Args:
request: The completion request
Returns:
Unique request key for caching
|
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| true | false | 217 | 226 | null |
str
|
[
"self",
"request"
] |
GeminiNativeChatCompletionsClient
| null | null |
Type: method
Member Name: get_request_key
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.get_request_key
Docstring: Generate a unique key for the request.
Args:
request: The completion request
Returns:
Unique request key for caching
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "request"]
Returns: str
Parent Class: GeminiNativeChatCompletionsClient
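`get_request_key` returns "a unique request key for caching." One common way to build such a key, sketched here as an assumption rather than fenic's actual implementation, is a digest over the canonical JSON form of the request, so identical requests deduplicate to the same key:

```python
import hashlib
import json

def request_key(messages: list[dict], model: str) -> str:
    # sort_keys gives a canonical serialization, so logically equal
    # requests always hash to the same key.
    payload = json.dumps({"model": model, "messages": messages}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

k1 = request_key([{"role": "user", "content": "hi"}], "gemini-2.0-flash")
k2 = request_key([{"role": "user", "content": "hi"}], "gemini-2.0-flash")
```

Any field that changes the response (model, profile, response format) must participate in the key, or the cache will serve stale answers across configurations.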
|
method
|
estimate_tokens_for_request
|
fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.estimate_tokens_for_request
|
Estimate the number of tokens for a request.
Args:
request: The request to estimate tokens for
Returns:
TokenEstimate: The estimated token usage
|
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| true | false | 228 | 245 | null | null |
[
"self",
"request"
] |
GeminiNativeChatCompletionsClient
| null | null |
Type: method
Member Name: estimate_tokens_for_request
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.estimate_tokens_for_request
Docstring: Estimate the number of tokens for a request.
Args:
request: The request to estimate tokens for
Returns:
TokenEstimate: The estimated token usage
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "request"]
Returns: none
Parent Class: GeminiNativeChatCompletionsClient
|
method
|
make_single_request
|
fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.make_single_request
|
Make a single completion request to Google Gemini.
Handles both text and structured output requests, with support for
thinking/reasoning when enabled. Processes responses and extracts
comprehensive usage metrics including thinking tokens.
Args:
request: The completion request to process
Returns:
Completion response, transient exception, or fatal exception
|
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| true | false | 247 | 391 | null |
Union[None, FenicCompletionsResponse, TransientException, FatalException]
|
[
"self",
"request"
] |
GeminiNativeChatCompletionsClient
| null | null |
Type: method
Member Name: make_single_request
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient.make_single_request
Docstring: Make a single completion request to Google Gemini.
Handles both text and structured output requests, with support for
thinking/reasoning when enabled. Processes responses and extracts
comprehensive usage metrics including thinking tokens.
Args:
request: The completion request to process
Returns:
Completion response, transient exception, or fatal exception
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "request"]
Returns: Union[None, FenicCompletionsResponse, TransientException, FatalException]
Parent Class: GeminiNativeChatCompletionsClient
|
method
|
_prepare_schema
|
fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient._prepare_schema
|
Google Gemini does not support additionalProperties in JSON schemas, even if it is set to False.
This function copies the original schema and recursively removes all additionalProperties from its objects.
If additionalProperties is not removed, the genai service will reject the schema and return a 400 error.
Args:
response_format: The response format to prepare
Returns:
The prepared schema
|
site-packages/fenic/_inference/google/gemini_native_chat_completions_client.py
| false | true | 393 | 434 | null |
dict[str, Any]
|
[
"self",
"response_format"
] |
GeminiNativeChatCompletionsClient
| null | null |
Type: method
Member Name: _prepare_schema
Qualified Name: fenic._inference.google.gemini_native_chat_completions_client.GeminiNativeChatCompletionsClient._prepare_schema
Docstring: Google Gemini does not support additionalProperties in JSON schemas, even if it is set to False.
This function copies the original schema and recursively removes all additionalProperties from its objects.
If additionalProperties is not removed, the genai service will reject the schema and return a 400 error.
Args:
response_format: The response format to prepare
Returns:
The prepared schema
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["self", "response_format"]
Returns: dict[str, Any]
Parent Class: GeminiNativeChatCompletionsClient
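The `_prepare_schema` docstring is explicit about both the problem and the fix: Gemini rejects schemas containing `additionalProperties` (even `false`) with a 400 error, so the client copies the schema and strips the key recursively. A runnable sketch of that transformation:

```python
import copy
from typing import Any

def prepare_schema(schema: dict[str, Any]) -> dict[str, Any]:
    """Deep-copy a JSON schema and remove every `additionalProperties`
    key, at any nesting depth, leaving the original schema untouched."""
    def strip(node: Any) -> None:
        if isinstance(node, dict):
            node.pop("additionalProperties", None)
            for value in node.values():
                strip(value)
        elif isinstance(node, list):
            for item in node:
                strip(item)

    cleaned = copy.deepcopy(schema)
    strip(cleaned)
    return cleaned

raw = {
    "type": "object",
    "additionalProperties": False,
    "properties": {"item": {"type": "object", "additionalProperties": False}},
}
cleaned = prepare_schema(raw)
```

The deep copy is the important design choice: Pydantic's generated schema may be cached and shared, so mutating it in place would silently corrupt other callers.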
|
module
|
gemini_batch_embeddings_client
|
fenic._inference.google.gemini_batch_embeddings_client
| null |
site-packages/fenic/_inference/google/gemini_batch_embeddings_client.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: gemini_batch_embeddings_client
Qualified Name: fenic._inference.google.gemini_batch_embeddings_client
Docstring: none
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
class
|
GoogleBatchEmbeddingsClient
|
fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient
| null |
site-packages/fenic/_inference/google/gemini_batch_embeddings_client.py
| true | false | 30 | 131 | null | null | null | null | null |
[
"ModelClient[FenicEmbeddingsRequest, List[float]]"
] |
Type: class
Member Name: GoogleBatchEmbeddingsClient
Qualified Name: fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient
Docstring: none
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
method
|
__init__
|
fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient.__init__
| null |
site-packages/fenic/_inference/google/gemini_batch_embeddings_client.py
| true | false | 31 | 60 | null | null |
[
"self",
"rate_limit_strategy",
"model_provider",
"model",
"queue_size",
"max_backoffs",
"profiles",
"default_profile_name"
] |
GoogleBatchEmbeddingsClient
| null | null |
Type: method
Member Name: __init__
Qualified Name: fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient.__init__
Docstring: none
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "rate_limit_strategy", "model_provider", "model", "queue_size", "max_backoffs", "profiles", "default_profile_name"]
Returns: none
Parent Class: GoogleBatchEmbeddingsClient
|
method
|
make_single_request
|
fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient.make_single_request
| null |
site-packages/fenic/_inference/google/gemini_batch_embeddings_client.py
| true | false | 62 | 98 | null |
Union[None, List[float], TransientException, FatalException]
|
[
"self",
"request"
] |
GoogleBatchEmbeddingsClient
| null | null |
Type: method
Member Name: make_single_request
Qualified Name: fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient.make_single_request
Docstring: none
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "request"]
Returns: Union[None, List[float], TransientException, FatalException]
Parent Class: GoogleBatchEmbeddingsClient
|
method
|
get_request_key
|
fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient.get_request_key
|
Generate a unique key for request deduplication.
Args:
request: The request to generate a key for
Returns:
A unique key for the request
|
site-packages/fenic/_inference/google/gemini_batch_embeddings_client.py
| true | false | 100 | 117 | null |
str
|
[
"self",
"request"
] |
GoogleBatchEmbeddingsClient
| null | null |
Type: method
Member Name: get_request_key
Qualified Name: fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient.get_request_key
Docstring: Generate a unique key for request deduplication.
Args:
request: The request to generate a key for
Returns:
A unique key for the request
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "request"]
Returns: str
Parent Class: GoogleBatchEmbeddingsClient
|
method
|
estimate_tokens_for_request
|
fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient.estimate_tokens_for_request
| null |
site-packages/fenic/_inference/google/gemini_batch_embeddings_client.py
| true | false | 119 | 122 | null |
TokenEstimate
|
[
"self",
"request"
] |
GoogleBatchEmbeddingsClient
| null | null |
Type: method
Member Name: estimate_tokens_for_request
Qualified Name: fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient.estimate_tokens_for_request
Docstring: none
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "request"]
Returns: TokenEstimate
Parent Class: GoogleBatchEmbeddingsClient
|
method
|
_get_max_output_tokens
|
fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient._get_max_output_tokens
| null |
site-packages/fenic/_inference/google/gemini_batch_embeddings_client.py
| false | true | 124 | 125 | null |
int
|
[
"self",
"request"
] |
GoogleBatchEmbeddingsClient
| null | null |
Type: method
Member Name: _get_max_output_tokens
Qualified Name: fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient._get_max_output_tokens
Docstring: none
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["self", "request"]
Returns: int
Parent Class: GoogleBatchEmbeddingsClient
|
method
|
reset_metrics
|
fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient.reset_metrics
| null |
site-packages/fenic/_inference/google/gemini_batch_embeddings_client.py
| true | false | 127 | 128 | null | null |
[
"self"
] |
GoogleBatchEmbeddingsClient
| null | null |
Type: method
Member Name: reset_metrics
Qualified Name: fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient.reset_metrics
Docstring: none
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: none
Parent Class: GoogleBatchEmbeddingsClient
|
method
|
get_metrics
|
fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient.get_metrics
| null |
site-packages/fenic/_inference/google/gemini_batch_embeddings_client.py
| true | false | 130 | 131 | null |
RMMetrics
|
[
"self"
] |
GoogleBatchEmbeddingsClient
| null | null |
Type: method
Member Name: get_metrics
Qualified Name: fenic._inference.google.gemini_batch_embeddings_client.GoogleBatchEmbeddingsClient.get_metrics
Docstring: none
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: RMMetrics
Parent Class: GoogleBatchEmbeddingsClient
|
module
|
google_provider
|
fenic._inference.google.google_provider
|
Google model provider implementation.
|
site-packages/fenic/_inference/google/google_provider.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: google_provider
Qualified Name: fenic._inference.google.google_provider
Docstring: Google model provider implementation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
logger
|
fenic._inference.google.google_provider.logger
| null |
site-packages/fenic/_inference/google/google_provider.py
| true | false | 11 | 11 | null | null | null | null |
logging.getLogger(__name__)
| null |
Type: attribute
Member Name: logger
Qualified Name: fenic._inference.google.google_provider.logger
Docstring: none
Value: logging.getLogger(__name__)
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
Type: class
Member Name: GoogleModelProvider
Qualified Name: fenic._inference.google.google_provider.GoogleModelProvider
Docstring: Google implementation of ModelProvider.
File: site-packages/fenic/_inference/google/google_provider.py
Lines: 14-30
Bases: ["ModelProviderClass"]
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none

Type: method
Member Name: create_client
Qualified Name: fenic._inference.google.google_provider.GoogleModelProvider.create_client
Docstring: none
File: site-packages/fenic/_inference/google/google_provider.py
Lines: 17-19
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: none
Parent Class: GoogleModelProvider

Type: method
Member Name: validate_api_key
Qualified Name: fenic._inference.google.google_provider.GoogleModelProvider.validate_api_key
Docstring: Validate Google API key by listing models.
File: site-packages/fenic/_inference/google/google_provider.py
Lines: 21-26
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: None
Parent Class: GoogleModelProvider

Type: method
Member Name: create_aio_client
Qualified Name: fenic._inference.google.google_provider.GoogleModelProvider.create_aio_client
Docstring: Create a Google async client instance.
File: site-packages/fenic/_inference/google/google_provider.py
Lines: 28-30
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: none
Parent Class: GoogleModelProvider
Type: class
Member Name: GoogleDeveloperModelProvider
Qualified Name: fenic._inference.google.google_provider.GoogleDeveloperModelProvider
Docstring: Google Developer implementation of ModelProvider.
File: site-packages/fenic/_inference/google/google_provider.py
Lines: 33-45
Bases: ["GoogleModelProvider"]
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none

Type: method
Member Name: create_client
Qualified Name: fenic._inference.google.google_provider.GoogleDeveloperModelProvider.create_client
Docstring: Create a Google Developer client instance.
File: site-packages/fenic/_inference/google/google_provider.py
Lines: 40-45
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: none
Parent Class: GoogleDeveloperModelProvider
Type: class
Member Name: GoogleVertexModelProvider
Qualified Name: fenic._inference.google.google_provider.GoogleVertexModelProvider
Docstring: Google Vertex implementation of ModelProvider.
File: site-packages/fenic/_inference/google/google_provider.py
Lines: 48-60
Bases: ["GoogleModelProvider"]
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none

Type: method
Member Name: create_client
Qualified Name: fenic._inference.google.google_provider.GoogleVertexModelProvider.create_client
Docstring: Create a Google Vertex client instance.
    Passing `vertexai=True` automatically routes traffic through Vertex-AI if the environment is configured for it.
File: site-packages/fenic/_inference/google/google_provider.py
Lines: 55-60
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: none
Parent Class: GoogleVertexModelProvider
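The three provider classes above form a small hierarchy: `GoogleModelProvider` supplies `validate_api_key` (a cheap credential check via model listing), and the Developer/Vertex subclasses override `create_client` to build differently configured SDK clients. The overall shape can be sketched with stand-in types; every name below is hypothetical and the fake client is stdlib-only, not the fenic or google-genai API:

```python
from abc import ABC, abstractmethod


class ModelProvider(ABC):
    """Stand-in for the provider base class (hypothetical shape)."""

    @abstractmethod
    def create_client(self):
        """Return a synchronous SDK client."""

    def validate_api_key(self) -> None:
        """Validate credentials cheaply: listing models fails fast on a bad key."""
        self.create_client().models.list()


class FakeDeveloperProvider(ModelProvider):
    """Mimics a Developer-style subclass: builds a plain API-key client."""

    def create_client(self):
        class _Models:
            def list(self):
                return ["model-a"]  # a real client would call the provider API here

        class _Client:
            models = _Models()

        return _Client()


provider = FakeDeveloperProvider()
provider.validate_api_key()  # does not raise for the fake client
```

A Vertex-style subclass would differ only in `create_client` (per the docstring, the real implementation passes `vertexai=True` so the SDK routes traffic through Vertex AI when the environment is configured for it).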
Type: module
Member Name: common_openai
Qualified Name: fenic._inference.common_openai
Docstring: none
File: site-packages/fenic/_inference/common_openai/__init__.py
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none

Type: module
Member Name: openai_chat_completions_core
Qualified Name: fenic._inference.common_openai.openai_chat_completions_core
Docstring: Core functionality for OpenAI chat completions clients.
File: site-packages/fenic/_inference/common_openai/openai_chat_completions_core.py
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none

Type: attribute
Member Name: logger
Qualified Name: fenic._inference.common_openai.openai_chat_completions_core.logger
Docstring: none
File: site-packages/fenic/_inference/common_openai/openai_chat_completions_core.py
Lines: 36-36
Value: logging.getLogger(__name__)
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
Type: class
Member Name: OpenAIChatCompletionsCore
Qualified Name: fenic._inference.common_openai.openai_chat_completions_core.OpenAIChatCompletionsCore
Docstring: Core functionality for OpenAI chat completions clients.
File: site-packages/fenic/_inference/common_openai/openai_chat_completions_core.py
Lines: 39-206
Bases: []
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none

Type: method
Member Name: __init__
Qualified Name: fenic._inference.common_openai.openai_chat_completions_core.OpenAIChatCompletionsCore.__init__
Docstring: Initialize the OpenAI chat completions client core.
    Args:
        model: The model to use
        model_provider: The provider of the model
        token_counter: Counter for estimating token usage
        client: The OpenAI client
        additional_params: Additional parameters to pass to the API, e.g. {"reasoning_effort": "none"} for thinking models.
File: site-packages/fenic/_inference/common_openai/openai_chat_completions_core.py
Lines: 42-64
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "model", "model_provider", "token_counter", "client"]
Returns: none
Parent Class: OpenAIChatCompletionsCore
Type: method
Member Name: reset_metrics
Qualified Name: fenic._inference.common_openai.openai_chat_completions_core.OpenAIChatCompletionsCore.reset_metrics
Docstring: Reset the metrics.
File: site-packages/fenic/_inference/common_openai/openai_chat_completions_core.py
Lines: 66-68
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: None
Parent Class: OpenAIChatCompletionsCore

Type: method
Member Name: get_metrics
Qualified Name: fenic._inference.common_openai.openai_chat_completions_core.OpenAIChatCompletionsCore.get_metrics
Docstring: Get the metrics.
File: site-packages/fenic/_inference/common_openai/openai_chat_completions_core.py
Lines: 70-72
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: LMMetrics
Parent Class: OpenAIChatCompletionsCore
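The `get_metrics`/`reset_metrics` pair suggests the client core accumulates usage counters across requests and lets the caller read and zero them between workloads. A minimal sketch of that pattern, with a hypothetical `LMMetrics` dataclass (fenic's actual fields may differ):

```python
from dataclasses import dataclass


@dataclass
class LMMetrics:
    # Hypothetical counters; the real LMMetrics may track more (cost, latency, ...).
    num_requests: int = 0
    input_tokens: int = 0
    output_tokens: int = 0


class CompletionsCoreSketch:
    """Sketch of the accumulate / get_metrics / reset_metrics pattern."""

    def __init__(self):
        self._metrics = LMMetrics()

    def record(self, input_tokens: int, output_tokens: int) -> None:
        # Called once per successful request to fold usage into the totals.
        self._metrics.num_requests += 1
        self._metrics.input_tokens += input_tokens
        self._metrics.output_tokens += output_tokens

    def get_metrics(self) -> LMMetrics:
        return self._metrics

    def reset_metrics(self) -> None:
        # Fresh zeroed object, so previously returned snapshots stay intact.
        self._metrics = LMMetrics()


core = CompletionsCoreSketch()
core.record(120, 30)
core.record(80, 20)
totals = core.get_metrics()   # num_requests=2, input_tokens=200, output_tokens=50
core.reset_metrics()
```

Replacing the object on reset (rather than mutating it in place) is a common choice because any metrics snapshot already handed to a caller is not retroactively zeroed.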
Type: method
Member Name: make_single_request
Qualified Name: fenic._inference.common_openai.openai_chat_completions_core.OpenAIChatCompletionsCore.make_single_request
Docstring: Make a single request to the OpenAI API.
    Args:
        request: The messages to send
        profile_configuration: The optional profile configuration for the request (for passing reasoning_effort and verbosity)
    Returns:
        The response text or an exception
File: site-packages/fenic/_inference/common_openai/openai_chat_completions_core.py
Lines: 74-195
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "request", "profile_configuration"]
Returns: Union[None, FenicCompletionsResponse, TransientException, FatalException]
Parent Class: OpenAIChatCompletionsCore
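Note the return type: rather than raising, `make_single_request` returns either a response or an exception wrapped as `TransientException`/`FatalException`, so the calling layer can decide whether to retry or abort. A stdlib-only sketch of that classification pattern, with stand-in wrapper types (the retryable-error set below is an assumption, not fenic's):

```python
from dataclasses import dataclass
from typing import Callable, Union


@dataclass
class TransientException:   # stand-in: retryable failure (timeout, rate limit, ...)
    cause: Exception


@dataclass
class FatalException:       # stand-in: non-retryable failure (bad auth, bad input, ...)
    cause: Exception


@dataclass
class CompletionsResponse:  # stand-in for FenicCompletionsResponse
    text: str


RETRYABLE = (TimeoutError, ConnectionError)  # assumed retryable classes


def make_single_request(call: Callable[[], str]) -> Union[CompletionsResponse, TransientException, FatalException]:
    """Run one API call; return a classified result instead of raising."""
    try:
        return CompletionsResponse(text=call())
    except RETRYABLE as e:
        return TransientException(cause=e)
    except Exception as e:
        return FatalException(cause=e)


def flaky():
    raise TimeoutError("transient network hiccup")


def broken():
    raise ValueError("bad request shape")


print(type(make_single_request(lambda: "ok")).__name__)  # CompletionsResponse
print(type(make_single_request(flaky)).__name__)         # TransientException
print(type(make_single_request(broken)).__name__)        # FatalException
```

Returning classified results keeps retry policy out of the request function: a scheduler can re-queue `TransientException` results with backoff and surface `FatalException` results immediately.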
Type: method
Member Name: get_request_key
Qualified Name: fenic._inference.common_openai.openai_chat_completions_core.OpenAIChatCompletionsCore.get_request_key
Docstring: Generate a unique key for request deduplication.
    Args:
        request: The request to generate a key for
    Returns:
        A unique key for the request
File: site-packages/fenic/_inference/common_openai/openai_chat_completions_core.py
Lines: 197-206
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "request"]
Returns: str
Parent Class: OpenAIChatCompletionsCore
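A common way to implement a deduplication key like `get_request_key` is to hash a canonical serialization of the request, so identical requests always map to the same string. The sketch below is one such scheme (hash choice, key length, and the `messages`/`params` shape are assumptions, not fenic's implementation):

```python
import hashlib
import json


def get_request_key(messages, **params) -> str:
    """Stable dedup key: hash a canonical JSON form of the request."""
    canonical = json.dumps(
        {"messages": messages, "params": params},
        sort_keys=True,            # key order must not change the hash
        separators=(",", ":"),     # no whitespace variation
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]


k1 = get_request_key([{"role": "user", "content": "hi"}], temperature=0.0)
k2 = get_request_key([{"role": "user", "content": "hi"}], temperature=0.0)
k3 = get_request_key([{"role": "user", "content": "bye"}], temperature=0.0)
print(k1 == k2, k1 == k3)  # → True False
```

The canonicalization step matters: without `sort_keys` and fixed separators, two logically identical requests could serialize differently and defeat deduplication.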
Type: module
Member Name: openai_provider
Qualified Name: fenic._inference.common_openai.openai_provider
Docstring: OpenAI model provider implementation.
File: site-packages/fenic/_inference/common_openai/openai_provider.py
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none

Type: attribute
Member Name: logger
Qualified Name: fenic._inference.common_openai.openai_provider.logger
Docstring: none
File: site-packages/fenic/_inference/common_openai/openai_provider.py
Lines: 9-9
Value: logging.getLogger(__name__)
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none

Type: class
Member Name: OpenAIModelProvider
Qualified Name: fenic._inference.common_openai.openai_provider.OpenAIModelProvider
Docstring: OpenAI implementation of ModelProvider.
File: site-packages/fenic/_inference/common_openai/openai_provider.py
Lines: 12-31
Bases: ["ModelProviderClass"]
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
Type: method
Member Name: create_client
Qualified Name: fenic._inference.common_openai.openai_provider.OpenAIModelProvider.create_client
Docstring: Create an OpenAI client instance.
File: site-packages/fenic/_inference/common_openai/openai_provider.py
Lines: 19-21
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: none
Parent Class: OpenAIModelProvider

Type: method
Member Name: create_aio_client
Qualified Name: fenic._inference.common_openai.openai_provider.OpenAIModelProvider.create_aio_client
Docstring: Create an OpenAI async client instance.
File: site-packages/fenic/_inference/common_openai/openai_provider.py
Lines: 23-25
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: none
Parent Class: OpenAIModelProvider
Type: method
Member Name: validate_api_key
Qualified Name: fenic._inference.common_openai.openai_provider.OpenAIModelProvider.validate_api_key
Docstring: Validate OpenAI API key by listing models.
File: site-packages/fenic/_inference/common_openai/openai_provider.py
Lines: 27-31
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: None
Parent Class: OpenAIModelProvider
Type: module
Member Name: openai_embeddings_core
Qualified Name: fenic._inference.common_openai.openai_embeddings_core
Docstring: Core functionality for OpenAI embeddings clients.
File: site-packages/fenic/_inference/common_openai/openai_embeddings_core.py
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none

Type: attribute
Member Name: logger
Qualified Name: fenic._inference.common_openai.openai_embeddings_core.logger
Docstring: none
File: site-packages/fenic/_inference/common_openai/openai_embeddings_core.py
Lines: 29-29
Value: logging.getLogger(__name__)
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none

Type: class
Member Name: OpenAIEmbeddingsCore
Qualified Name: fenic._inference.common_openai.openai_embeddings_core.OpenAIEmbeddingsCore
Docstring: Core functionality for OpenAI embeddings clients.
File: site-packages/fenic/_inference/common_openai/openai_embeddings_core.py
Lines: 31-143
Bases: []
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none