type | name | qualified_name | docstring | filepath | is_public | is_private | line_start | line_end | annotation | returns | parameters | parent_class | value | bases
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
function
|
extract
|
fenic.api.functions.semantic.extract
|
Extracts structured information from unstructured text using a provided Pydantic model schema.
This function applies an instruction-driven extraction process to text columns, returning
structured data based on the fields and descriptions provided. Useful for pulling out key entities,
facts, or labels from documents.
The schema must be a valid Pydantic model type with supported field types. These include:
- Primitive types: `str`, `int`, `float`, `bool`
- Optional fields: `Optional[T]` where `T` is a supported type
- Lists: `List[T]` where `T` is a supported type
- Literals: `Literal[...]` (for enum-like constraints)
- Nested Pydantic models (recursive schemas are supported, but must be JSON-serializable and acyclic)
Unsupported types (e.g., unions, custom classes, runtime circular references, or complex generics) will raise errors at runtime.
Args:
column: Column containing text to extract from.
response_format: A Pydantic model type that defines the output structure with descriptions for each field.
model_alias: Optional alias for the language model to use for the extraction. If None, will use the language model configured as the default.
temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0).
max_output_tokens: Optional parameter to constrain the model to generate at most this many tokens. If None, fenic will calculate the expected max
tokens based on the model's context length and other operator-specific parameters.
Returns:
Column: A new column with structured values (a struct) based on the provided schema.
Example: Extracting knowledge graph triples and named entities from text
```python
from typing import List

from pydantic import BaseModel, Field

class Triple(BaseModel):
    subject: str = Field(description="The subject of the triple")
    predicate: str = Field(description="The predicate or relation")
    object: str = Field(description="The object of the triple")

class KGResult(BaseModel):
    triples: List[Triple] = Field(description="List of extracted knowledge graph triples")
    entities: List[str] = Field(description="Flat list of all detected named entities")

df.select(semantic.extract("blurb", KGResult))
```
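Example: Schema using optional and literal fields (an illustrative sketch; the `Ticket` model and `message` column are hypothetical)
```python
from typing import Literal, Optional

from pydantic import BaseModel, Field

class Ticket(BaseModel):
    customer: str = Field(description="Name of the customer, if mentioned")
    priority: Literal["low", "medium", "high"] = Field(description="Urgency of the request")
    order_id: Optional[str] = Field(default=None, description="Order ID, if present in the text")

df.select(semantic.extract("message", Ticket))
```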
|
site-packages/fenic/api/functions/semantic.py
| true | false | 139 | 203 | null |
Column
|
[
"column",
"response_format",
"max_output_tokens",
"temperature",
"model_alias"
] | null | null | null |
function
|
predicate
|
fenic.api.functions.semantic.predicate
|
Applies a boolean predicate to one or more columns, typically used for filtering.
Args:
predicate: A Jinja2 template containing a yes/no question or boolean claim.
Should reference column values using {{ column_name }} syntax. The model will
evaluate this condition for each row and return True or False.
strict: If True, when any of the provided columns has a None value for a row,
the entire row's output will be None (template is not rendered).
If False, None values are handled using Jinja2's null rendering behavior.
Default is True.
examples: Optional few-shot examples showing how to evaluate the predicate.
Helps ensure consistent True/False decisions.
model_alias: Optional language model alias. If None, uses the default model.
temperature: Language model temperature (default: 0.0).
**columns: Named column arguments that correspond to template variables.
Keys must match the variable names used in the template.
Returns:
Column: A boolean column expression.
Example: Filtering product descriptions
```python
from textwrap import dedent

wireless_products = df.filter(
    fc.semantic.predicate(
        dedent('''\
            Product: {{ description }}
            Is this product wireless or battery-powered?'''),
        description=fc.col("product_description")
    )
)
```
Example: Filtering support tickets
```python
df = df.with_column(
"is_urgent",
fc.semantic.predicate(
dedent('''\
Subject: {{ subject }}
Body: {{ body }}
This ticket indicates an urgent issue.'''),
subject=fc.col("ticket_subject"),
body=fc.col("ticket_body")
)
)
```
Example: Filtering with examples
```python
examples = PredicateExampleCollection()
examples.create_example(PredicateExample(
input={"ticket": "I was charged twice for my subscription and need help."},
output=True
))
examples.create_example(PredicateExample(
input={"ticket": "How do I reset my password?"},
output=False
))
fc.semantic.predicate(
dedent('''\
Ticket: {{ ticket }}
This ticket is about billing.'''),
ticket=fc.col("ticket_text"),
examples=examples
)
```
|
site-packages/fenic/api/functions/semantic.py
| true | false | 206 | 307 | null |
Column
|
[
"predicate",
"strict",
"examples",
"model_alias",
"temperature",
"columns"
] | null | null | null |
function
|
reduce
|
fenic.api.functions.semantic.reduce
|
Aggregate function: reduces a set of strings in a column to a single string using a natural language instruction.
Args:
prompt: A string containing the semantic.reduce prompt.
The instruction can optionally include Jinja2 template variables (e.g., {{variable}}) that
reference columns from the group_context parameter. These will be replaced with
actual values from the first row of each group during execution.
column: The column containing documents/strings to reduce.
group_context: Optional dictionary mapping variable names to columns. These columns
provide context for each group and can be referenced in the instruction template.
order_by: Optional list of columns to sort the grouped documents by before reduction. Documents are
processed in ascending order by default if no sort function is provided. Use a sort function
(e.g., `col("date").desc()` or `fc.desc("date")`) for descending order. The order_by columns help
preserve the temporal or logical sequence of the documents (e.g., chunks in a document, or speaker turns in a meeting transcript)
for more coherent summaries.
model_alias: Optional alias for the language model to use. If None, uses the default model.
temperature: Temperature parameter for the language model (default: 0.0).
max_output_tokens: Maximum tokens the model can generate (default: 512).
Returns:
Column: A column expression representing the semantic reduction operation.
Example: Simple reduction
```python
# Simple reduction
df.group_by("category").agg(
semantic.reduce("Summarize the documents", col("document_text"))
)
```
Example: With group context
```python
df.group_by("department", "region").agg(
semantic.reduce(
"Summarize these {{department}} reports from {{region}}",
col("document_text"),
group_context={
"department": col("department"),
"region": col("region")
}
)
)
```
Example: With sorting
```python
df.group_by("category").agg(
semantic.reduce(
"Summarize the documents",
col("document_text"),
order_by=col("date")
)
)
```
|
site-packages/fenic/api/functions/semantic.py
| true | false | 310 | 401 | null |
Column
|
[
"prompt",
"column",
"group_context",
"order_by",
"model_alias",
"temperature",
"max_output_tokens"
] | null | null | null |
function
|
classify
|
fenic.api.functions.semantic.classify
|
Classifies a string column into one of the provided classes.
This is useful for tagging incoming documents with predefined categories.
Args:
column: Column or column name containing text to classify.
classes: List of class labels or ClassDefinition objects defining the available classes. Use ClassDefinition objects to provide descriptions for the classes.
examples: Optional collection of example classifications to guide the model.
Examples should be created using ClassifyExampleCollection.create_example(),
with instruction variables mapped to their expected classifications.
model_alias: Optional alias for the language model to use for the classification. If None, will use the language model configured as the default.
temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0).
Returns:
Column: Expression containing the classification results.
Raises:
ValueError: If column is invalid or classes is empty or has duplicate labels.
Example: Categorizing incoming support requests
```python
# Categorize incoming support requests
semantic.classify("message", ["Account Access", "Billing Issue", "Technical Problem"])
```
Example: Categorizing incoming support requests using ClassDefinition objects
```python
# Categorize incoming support requests
semantic.classify("message", [
ClassDefinition(label="Account Access", description="General questions, feature requests, or non-technical assistance"),
ClassDefinition(label="Billing Issue", description="Questions about charges, payments, subscriptions, or account billing"),
ClassDefinition(label="Technical Problem", description="Problems with product functionality, bugs, or technical difficulties")
])
```
Example: Categorizing incoming support requests with ClassDefinition objects and examples
```python
examples = ClassifyExampleCollection()
class_definitions = [
ClassDefinition(label="Account Access", description="General questions, feature requests, or non-technical assistance"),
ClassDefinition(label="Billing Issue", description="Questions about charges, payments, subscriptions, or account billing"),
ClassDefinition(label="Technical Problem", description="Problems with product functionality, bugs, or technical difficulties")
]
examples.create_example(ClassifyExample(
input="I can't reset my password or access my account.",
output="Account Access"))
examples.create_example(ClassifyExample(
input="You charged me twice for the same month.",
output="Billing Issue"))
semantic.classify("message", class_definitions, examples)
```
|
site-packages/fenic/api/functions/semantic.py
| true | false | 404 | 493 | null |
Column
|
[
"column",
"classes",
"examples",
"model_alias",
"temperature"
] | null | null | null |
function
|
analyze_sentiment
|
fenic.api.functions.semantic.analyze_sentiment
|
Analyzes the sentiment of a string column. Returns one of 'positive', 'negative', or 'neutral'.
Args:
column: Column or column name containing text for sentiment analysis.
model_alias: Optional alias for the language model to use for the sentiment analysis. If None, will use the language model configured as the default.
temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0).
Returns:
Column: Expression containing sentiment results ('positive', 'negative', or 'neutral').
Raises:
ValueError: If column is invalid or cannot be resolved.
Example: Analyzing the sentiment of a user comment
```python
semantic.analyze_sentiment(col('user_comment'))
```
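Example: Filtering rows by sentiment (illustrative; assumes a `user_comment` column and that the result column can be compared against a string literal)
```python
negative_df = df.filter(semantic.analyze_sentiment(col('user_comment')) == 'negative')
```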
|
site-packages/fenic/api/functions/semantic.py
| true | false | 496 | 527 | null |
Column
|
[
"column",
"model_alias",
"temperature"
] | null | null | null |
function
|
embed
|
fenic.api.functions.semantic.embed
|
Generate embeddings for the specified string column.
Args:
column: Column or column name containing the values to generate embeddings for.
model_alias: Optional alias for the embedding model to use.
If None, will use the embedding model configured as the default.
Returns:
A Column expression that represents the embeddings for each value in the input column.
Raises:
TypeError: If the input column is not a string column.
Example: Generate embeddings for a text column
```python
df.select(semantic.embed(col("text_column")).alias("text_embeddings"))
```
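Example: Scoring rows against a query vector (an illustrative sketch combining `semantic.embed` with `embedding.compute_similarity`; the `description` column is hypothetical, and a real query vector must match the embedding model's dimensionality)
```python
df = df.with_column("emb", semantic.embed(col("description")))
df.select(embedding.compute_similarity(col("emb"), [0.1, 0.2, 0.3], metric="cosine").alias("sim"))
```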
|
site-packages/fenic/api/functions/semantic.py
| true | false | 530 | 557 | null |
Column
|
[
"column",
"model_alias"
] | null | null | null |
function
|
summarize
|
fenic.api.functions.semantic.summarize
|
Summarizes strings from a column.
Args:
column: Column or column name containing text for summarization
format: Format of the summary to generate. Can be either KeyPoints or Paragraph. If None, will default to Paragraph with a maximum of 120 words.
temperature: Optional temperature parameter for the language model. If None, will use the default temperature (0.0).
model_alias: Optional alias for the language model to use for the summarization. If None, will use the language model configured as the default.
Returns:
Column: Expression containing the summarized string
Raises:
ValueError: If column is invalid or cannot be resolved.
Example: Summarizing a user comment
>>> semantic.summarize(col('user_comment'))
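Example: Requesting a key-points summary (a sketch, not verified against the API; assumes `KeyPoints` is importable from fenic and accepts a `max_points` argument)
```python
semantic.summarize(col('user_comment'), format=KeyPoints(max_points=3))
```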
|
site-packages/fenic/api/functions/semantic.py
| true | false | 560 | 589 | null |
Column
|
[
"column",
"format",
"temperature",
"model_alias"
] | null | null | null |
module
|
embedding
|
fenic.api.functions.embedding
|
Embedding functions.
|
site-packages/fenic/api/functions/embedding.py
| true | false | null | null | null | null | null | null | null | null |
function
|
normalize
|
fenic.api.functions.embedding.normalize
|
Normalize embedding vectors to unit length.
Args:
column: Column containing embedding vectors.
Returns:
Column: A column of normalized embedding vectors with the same embedding type.
Notes:
- Normalizes each embedding vector to have unit length (L2 norm = 1)
- Preserves the original embedding model in the type
- Null values are preserved as null
- Zero vectors become NaN after normalization
Example: Normalize embeddings for dot product similarity
```python
# Normalize embeddings for dot product similarity comparisons
df.select(
embedding.normalize(col("embeddings")).alias("unit_embeddings")
)
```
Example: Compare normalized embeddings using dot product
```python
# Compare normalized embeddings using dot product (equivalent to cosine similarity)
normalized_df = df.select(embedding.normalize(col("embeddings")).alias("norm_emb"))
query = [0.6, 0.8] # Already normalized
normalized_df.select(
embedding.compute_similarity(col("norm_emb"), query, metric="dot").alias("dot_product_sim")
)
```
|
site-packages/fenic/api/functions/embedding.py
| true | false | 17 | 51 | null |
Column
|
[
"column"
] | null | null | null |
function
|
compute_similarity
|
fenic.api.functions.embedding.compute_similarity
|
Compute similarity between embedding vectors using specified metric.
Args:
column: Column containing embedding vectors.
other: Either:
- Another column containing embedding vectors for pairwise similarity
- A query vector (list of floats or numpy array) for similarity with each embedding
metric: The similarity metric to use. Options:
- `cosine`: Cosine similarity (range: -1 to 1, higher is more similar)
- `dot`: Dot product similarity (raw inner product)
- `l2`: L2 (Euclidean) distance (lower is more similar)
Returns:
Column: A column of float values representing similarity scores.
Raises:
ValidationError: If query vector contains NaN values or has invalid dimensions.
Notes:
- Cosine similarity normalizes vectors internally, so pre-normalization is not required
- Dot product does not normalize, useful when vectors are already normalized
- L2 distance measures the straight-line distance between vectors
- When using two columns, dimensions must match between embeddings
Example: Compute dot product with a query vector
```python
# Compute dot product with a query vector
query = [0.1, 0.2, 0.3]
df.select(
embedding.compute_similarity(col("embeddings"), query).alias("similarity")
)
```
Example: Compute cosine similarity with a query vector
```python
query = [0.6, 0.8] # Already normalized
df.select(
embedding.compute_similarity(col("embeddings"), query, metric="cosine").alias("cosine_sim")
)
```
Example: Compute pairwise L2 distance between columns
```python
# Compute L2 distance between two columns of embeddings
df.select(
embedding.compute_similarity(col("embeddings1"), col("embeddings2"), metric="l2").alias("distance")
)
```
Example: Using numpy array as query vector
```python
# Use numpy array as query vector
import numpy as np
query = np.array([0.1, 0.2, 0.3])
df.select(embedding.compute_similarity("embeddings", query))
```
|
site-packages/fenic/api/functions/embedding.py
| true | false | 54 | 142 | null |
Column
|
[
"column",
"other",
"metric"
] | null | null | null |
module
|
core
|
fenic.api.functions.core
|
Core functions for Fenic DataFrames.
|
site-packages/fenic/api/functions/core.py
| true | false | null | null | null | null | null | null | null | null |
function
|
col
|
fenic.api.functions.core.col
|
Creates a Column expression referencing a column in the DataFrame.
Args:
col_name: Name of the column to reference
Returns:
A Column expression for the specified column
Raises:
TypeError: If col_name is not a string
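Example: Filtering and selecting with column references (illustrative; assumes a DataFrame `df` with `name` and `age` columns)
```python
df.filter(fc.col("age") >= 21).select(fc.col("name"))
```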
|
site-packages/fenic/api/functions/core.py
| true | false | 17 | 30 | null |
Column
|
[
"col_name"
] | null | null | null |
function
|
null
|
fenic.api.functions.core.null
|
Creates a Column expression representing a null value of the specified data type.
Regardless of the data type, the column will contain a null (None) value.
This function is useful for creating columns with null values of a particular type.
Args:
data_type: The data type of the null value
Returns:
A Column expression representing the null value
Raises:
ValidationError: If the data type is not a valid data type
Example: Creating a column with a null value of a primitive type
```python
# The newly created `b` column will have a value of `None` for all rows
df.select(fc.col("a"), fc.null(fc.IntegerType).alias("b"))
```
Example: Creating a column with a null value of an array/struct type
```python
# The newly created `b` and `c` columns will have a value of `None` for all rows
df.select(
fc.col("a"),
fc.null(fc.ArrayType(fc.IntegerType)).alias("b"),
fc.null(fc.StructType([fc.StructField("b", fc.IntegerType)])).alias("c"),
)
```
|
site-packages/fenic/api/functions/core.py
| true | false | 32 | 64 | null |
Column
|
[
"data_type"
] | null | null | null |
function
|
empty
|
fenic.api.functions.core.empty
|
Creates a Column expression representing an empty value of the given type.
- If the data type is `ArrayType(...)`, the empty value will be an empty array.
- If the data type is `StructType(...)`, the empty value will be an instance of the struct type with all fields set to `None`.
- For all other data types, the empty value is None (equivalent to calling `null(data_type)`).
This function is useful for creating columns with empty values of a particular type.
Args:
data_type: The data type of the empty value
Returns:
A Column expression representing the empty value
Raises:
ValidationError: If the data type is not a valid data type
Example: Creating a column with an empty array type
```python
# The newly created `b` column will have a value of `[]` for all rows
df.select(fc.col("a"), fc.empty(fc.ArrayType(fc.IntegerType)).alias("b"))
```
Example: Creating a column with an empty struct type
```python
# The newly created `b` column will have a value of `{b: None}` for all rows
df.select(fc.col("a"), fc.empty(fc.StructType([fc.StructField("b", fc.IntegerType)])).alias("b"))
```
Example: Creating a column with an empty primitive type
```python
# The newly created `b` column will have a value of `None` for all rows
df.select(fc.col("a"), fc.empty(fc.IntegerType).alias("b"))
```
|
site-packages/fenic/api/functions/core.py
| true | false | 66 | 106 | null |
Column
|
[
"data_type"
] | null | null | null |
function
|
lit
|
fenic.api.functions.core.lit
|
Creates a Column expression representing a literal value.
Args:
value: The literal value to create a column for
Returns:
A Column expression representing the literal value
Raises:
ValidationError: If the type of the value cannot be inferred
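Example: Adding a constant column (illustrative; assumes a DataFrame `df` with a column `a`)
```python
# Every row gets the same literal value in the new `source` column
df.select(fc.col("a"), fc.lit("manual").alias("source"))
```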
|
site-packages/fenic/api/functions/core.py
| true | false | 108 | 131 | null |
Column
|
[
"value"
] | null | null | null |
function
|
tool_param
|
fenic.api.functions.core.tool_param
|
Creates an unresolved literal placeholder column with a declared data type.
A placeholder argument for a DataFrame, representing a literal value to be provided at execution time.
If no value is supplied, it defaults to null. Enables parameterized views and macros over fenic DataFrames.
Notes:
Supports only Primitive/Object/ArrayLike Types (StringType, IntegerType, FloatType, DoubleType, BooleanType, StructType, ArrayType)
Args:
parameter_name: The name of the parameter to reference.
data_type: The expected data type for the parameter value.
Returns:
A Column wrapping an UnresolvedLiteralExpr for the given parameter.
Example: A simple tool with one parameter
```python
# Assume we are reading data with a `name` column.
df = session.read.csv("data.csv")
parameterized_df = df.filter(fc.col("name").contains(fc.tool_param('query', fc.StringType)))
...
session.catalog.create_tool(
    tool_name="my_tool",
    tool_description="A tool that searches the name field",
    tool_query=parameterized_df,
    result_limit=100,
    tool_params=[ToolParam(name="query", description="The name should contain the following value")]
)
```
Example: A tool with multiple filters
```python
# Assume we are reading data with an `age` column.
df = session.read.csv("users.csv")
# Create multiple filters that evaluate to true if a param is not passed.
optional_min = fc.coalesce(fc.col("age") >= fc.tool_param("min_age", fc.IntegerType), fc.lit(True))
optional_max = fc.coalesce(fc.col("age") <= fc.tool_param("max_age", fc.IntegerType), fc.lit(True))
core_filter = df.filter(optional_min & optional_max)
session.catalog.create_tool(
    "users_filter",
    "Filter users by age",
    core_filter,
    tool_params=[
        ToolParam(name="min_age", description="Minimum age", has_default=True, default_value=None),
        ToolParam(name="max_age", description="Maximum age", has_default=True, default_value=None),
    ]
)
```
|
site-packages/fenic/api/functions/core.py
| true | false | 135 | 187 | null |
Column
|
[
"parameter_name",
"data_type"
] | null | null | null |
module
|
markdown
|
fenic.api.functions.markdown
|
Markdown functions.
|
site-packages/fenic/api/functions/markdown.py
| true | false | null | null | null | null | null | null | null | null |
function
|
to_json
|
fenic.api.functions.markdown.to_json
|
Converts a column of Markdown-formatted strings into a hierarchical JSON representation.
Args:
column (ColumnOrName): Input column containing Markdown strings.
Returns:
Column: A column of JSON-formatted strings representing the structured document tree.
Notes:
- This function parses Markdown into a structured JSON format optimized for document chunking,
semantic analysis, and `jq` queries.
- The output conforms to a custom schema that organizes content into nested sections based
on heading levels. This makes it more expressive than flat ASTs like `mdast`.
- The full JSON schema is available at: docs.fenic.ai/topics/markdown-json
Supported Markdown Features:
- Headings with nested hierarchy (e.g., h2 → h3 → h4)
- Paragraphs with inline formatting (bold, italics, links, code, etc.)
- Lists (ordered, unordered, task lists)
- Tables with header alignment and inline content
- Code blocks with language info
- Blockquotes, horizontal rules, and inline/flow HTML
Example: Convert markdown to JSON
```python
df.select(markdown.to_json(col("markdown_text")))
```
Example: Extract all level-2 headings with jq
```python
# Combine with jq to extract all level-2 headings
df.select(json.jq(markdown.to_json(col("md")), '.. | select(.type == "heading" and .level == 2)'))
```
|
site-packages/fenic/api/functions/markdown.py
| true | false | 16 | 54 | null |
Column
|
[
"column"
] | null | null | null |
function
|
get_code_blocks
|
fenic.api.functions.markdown.get_code_blocks
|
Extracts all code blocks from a column of Markdown-formatted strings.
Args:
column (ColumnOrName): Input column containing Markdown strings.
language_filter (Optional[str]): Optional language filter to extract only code blocks with a specific language. By default, all code blocks are extracted.
Returns:
Column: A column of code blocks. The output column type is:
ArrayType(StructType([
StructField("language", StringType),
StructField("code", StringType),
]))
Notes:
- Code blocks are parsed from fenced Markdown blocks (e.g., triple backticks ```).
- Language identifiers are optional and may be null if not provided in the original Markdown.
- Indented code blocks without fences are not currently supported.
- This function is useful for extracting embedded logic, configuration, or examples
from documentation or notebooks.
Example: Extract all code blocks
```python
df.select(markdown.get_code_blocks(col("markdown_text")))
```
Example: Explode code blocks into individual rows
```python
# Explode the list of code blocks into individual rows
df = df.with_column("blocks", markdown.get_code_blocks(col("md"))).explode("blocks")
df = df.select(col("blocks")["language"], col("blocks")["code"])
```
|
site-packages/fenic/api/functions/markdown.py
| true | false | 56 | 92 | null |
Column
|
[
"column",
"language_filter"
] | null | null | null |
function
|
generate_toc
|
fenic.api.functions.markdown.generate_toc
|
Generates a table of contents from markdown headings.
Args:
column (ColumnOrName): Input column containing Markdown strings.
max_level (Optional[int]): Maximum heading level to include in the TOC (1-6).
Defaults to 6 (all levels).
Returns:
Column: A column of Markdown-formatted table of contents strings.
Notes:
- The TOC is generated using markdown heading syntax (# ## ### etc.)
- Each heading in the source document becomes a line in the TOC
- The heading level is preserved in the output
- This creates a valid markdown document that can be rendered or processed further
Example: Generate a complete TOC
```python
df.select(markdown.generate_toc(col("documentation")))
```
Example: Generate a simplified TOC with only top 2 levels
```python
df.select(markdown.generate_toc(col("documentation"), max_level=2))
```
Example: Add TOC as a new column
```python
df = df.with_column("toc", markdown.generate_toc(col("content"), max_level=3))
```
|
site-packages/fenic/api/functions/markdown.py
| true | false | 95 | 132 | null |
Column
|
[
"column",
"max_level"
] | null | null | null |
function
|
extract_header_chunks
|
fenic.api.functions.markdown.extract_header_chunks
|
Splits markdown documents into logical chunks based on heading hierarchy.
Args:
column (ColumnOrName): Input column containing Markdown strings.
header_level (int): Heading level to split on (1-6). Creates a new chunk at every
heading of this level, including all nested content and subsections.
Returns:
Column: A column of arrays containing chunk objects with the following structure:
```python
ArrayType(StructType([
StructField("heading", StringType), # Heading text (clean, no markdown)
StructField("level", IntegerType), # Heading level (1-6)
StructField("content", StringType), # All content under this heading (clean text)
StructField("parent_heading", StringType), # Parent heading text (or null)
StructField("full_path", StringType), # Full breadcrumb path
]))
```
Notes:
- **Context-preserving**: Each chunk contains all content and subsections under the heading
- **Hierarchical awareness**: Includes parent heading context for better LLM understanding
- **Clean text output**: Strips markdown formatting for direct LLM consumption
Chunking Behavior:
With `header_level=2`, this markdown:
```markdown
# Introduction
Overview text
## Getting Started
Setup instructions
### Prerequisites
Python 3.8+ required
## API Reference
Function documentation
```
Produces 2 chunks:
1. `Getting Started` chunk (includes `Prerequisites` subsection)
2. `API Reference` chunk
Example: Split articles into top-level sections
```python
df.select(markdown.extract_header_chunks(col("articles"), header_level=1))
```
Example: Split documentation into feature sections
```python
df.select(markdown.extract_header_chunks(col("docs"), header_level=2))
```
Example: Create fine-grained chunks for detailed analysis
```python
df.select(markdown.extract_header_chunks(col("content"), header_level=3))
```
Example: Explode chunks into individual rows for processing
```python
chunks_df = df.select(
markdown.extract_header_chunks(col("markdown"), header_level=2).alias("chunks")
).explode("chunks")
chunks_df.select(
col("chunks").heading,
col("chunks").content,
col("chunks").full_path
)
```
|
site-packages/fenic/api/functions/markdown.py
| true | false | 135 | 212 | null |
Column
|
[
"column",
"header_level"
] | null | null | null |
|
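A minimal end-to-end sketch for `extract_header_chunks` (not from the source): chunk documents by H2 headings, explode to one row per chunk, and measure each chunk's token budget with `text.count_tokens`. The DataFrame `docs_df`, its `markdown` column, and the `col` import path are assumptions.
```python
from fenic.api.functions import markdown, text
from fenic import col  # assumed re-export; adjust to your fenic version

# One row per H2 section, keeping nested subsections inside each chunk.
chunks_df = docs_df.select(
    markdown.extract_header_chunks(col("markdown"), header_level=2).alias("chunks")
).explode("chunks")

# Breadcrumb path plus token count per chunk, e.g. to enforce an LLM budget.
chunks_df.select(
    col("chunks").full_path.alias("section"),
    text.count_tokens(col("chunks").content).alias("tokens"),
)
```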
module
|
text
|
fenic.api.functions.text
|
Text manipulation functions for Fenic DataFrames.
|
site-packages/fenic/api/functions/text.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: text
Qualified Name: fenic.api.functions.text
Docstring: Text manipulation functions for Fenic DataFrames.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
function
|
extract
|
fenic.api.functions.text.extract
|
Extracts structured data from text using template-based pattern matching.
Matches each string in the input column against a template pattern with named
placeholders. Each placeholder can specify a format rule to handle different
data types within the text.
Args:
column: Input text column to extract from
template: Template string with placeholders as ``${field_name}`` or ``${field_name:format}``
Available formats: none, csv, json, quoted
Returns:
Column: Struct column with fields corresponding to template placeholders.
All fields are strings except JSON fields which preserve their parsed type.
Template Syntax:
- ``${field_name}`` - Extract field as plain text
- ``${field_name:csv}`` - Parse as CSV field (handles quoted values)
- ``${field_name:json}`` - Parse as JSON and preserve type
- ``${field_name:quoted}`` - Extract quoted string (removes outer quotes)
- ``$`` - Literal dollar sign
Raises:
ValidationError: If template syntax is invalid
Example: Basic extraction
```python
text.extract(col("log"), "${date} ${level} ${message}")
# Input: "2024-01-15 ERROR Connection failed"
# Output: {date: "2024-01-15", level: "ERROR", message: "Connection failed"}
```
Example: Mixed format extraction
```python
text.extract(col("data"), 'Name: ${name:csv}, Price: ${price}, Tags: ${tags:json}')
# Input: 'Name: "Smith, John", Price: 99.99, Tags: ["a", "b"]'
# Output: {name: "Smith, John", price: "99.99", tags: ["a", "b"]}
```
Example: Quoted field handling
```python
text.extract(col("record"), 'Title: ${title:quoted}, Author: ${author}')
# Input: 'Title: "To Kill a Mockingbird", Author: Harper Lee'
# Output: {title: "To Kill a Mockingbird", author: "Harper Lee"}
```
Note:
If a string doesn't match the template pattern, all extracted fields will be null.
|
site-packages/fenic/api/functions/text.py
| true | false | 46 | 99 | null |
Column
|
[
"column",
"template"
] | null | null | null |
Type: function
Member Name: extract
Qualified Name: fenic.api.functions.text.extract
Docstring: Extracts structured data from text using template-based pattern matching.
Matches each string in the input column against a template pattern with named
placeholders. Each placeholder can specify a format rule to handle different
data types within the text.
Args:
column: Input text column to extract from
template: Template string with placeholders as ``${field_name}`` or ``${field_name:format}``
Available formats: none, csv, json, quoted
Returns:
Column: Struct column with fields corresponding to template placeholders.
All fields are strings except JSON fields which preserve their parsed type.
Template Syntax:
- ``${field_name}`` - Extract field as plain text
- ``${field_name:csv}`` - Parse as CSV field (handles quoted values)
- ``${field_name:json}`` - Parse as JSON and preserve type
- ``${field_name:quoted}`` - Extract quoted string (removes outer quotes)
- ``$`` - Literal dollar sign
Raises:
ValidationError: If template syntax is invalid
Example: Basic extraction
```python
text.extract(col("log"), "${date} ${level} ${message}")
# Input: "2024-01-15 ERROR Connection failed"
# Output: {date: "2024-01-15", level: "ERROR", message: "Connection failed"}
```
Example: Mixed format extraction
```python
text.extract(col("data"), 'Name: ${name:csv}, Price: ${price}, Tags: ${tags:json}')
# Input: 'Name: "Smith, John", Price: 99.99, Tags: ["a", "b"]'
# Output: {name: "Smith, John", price: "99.99", tags: ["a", "b"]}
```
Example: Quoted field handling
```python
text.extract(col("record"), 'Title: ${title:quoted}, Author: ${author}')
# Input: 'Title: "To Kill a Mockingbird", Author: Harper Lee'
# Output: {title: "To Kill a Mockingbird", author: "Harper Lee"}
```
Note:
If a string doesn't match the template pattern, all extracted fields will be null.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "template"]
Returns: Column
Parent Class: none
|
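A hedged usage sketch for `text.extract`: parse structured log lines into a struct column, then project individual fields by attribute, as the docstring examples do. `logs_df`, its `log` column, and the `col` import path are assumptions.
```python
from fenic.api.functions import text
from fenic import col  # assumed re-export

# "2024-01-15 ERROR Connection failed" -> {date, level, message}
parsed = logs_df.select(
    text.extract(col("log"), "${date} ${level} ${message}").alias("entry")
)
# Struct fields are accessed by attribute; non-matching rows yield nulls.
parsed.select(col("entry").date, col("entry").level, col("entry").message)
```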
function
|
recursive_character_chunk
|
fenic.api.functions.text.recursive_character_chunk
|
Chunks a string column into chunks of a specified size (in characters) with an optional overlap.
The chunking is performed recursively, attempting to preserve the underlying structure of the text
by splitting on natural boundaries (paragraph breaks, sentence breaks, etc.) to maintain context.
By default, the boundary characters are ['\n\n', '\n', '.', ';', ':', ' ', '-', ''], but the list can be customized.
Args:
column: The input string column or column name to chunk
chunk_size: The size of each chunk in characters
chunk_overlap_percentage: The overlap between each chunk as a percentage of the chunk size
chunking_character_set_custom_characters (Optional): List of alternative characters to split on. Note that the characters should be ordered from coarsest to finest desired granularity -- earlier characters in the list should result in fewer overall splits than later characters.
Returns:
Column: A column containing the chunks as an array of strings
Example: Default character chunking
```python
# Create chunks of at most 100 characters with 20% overlap
df.select(
text.recursive_character_chunk(col("text"), 100, 20).alias("chunks")
)
```
Example: Custom character chunking
```python
# Create chunks with custom split characters
df.select(
text.recursive_character_chunk(
col("text"),
100,
20,
['\n\n', '\n', '.', ' ', '']
).alias("chunks")
)
```
|
site-packages/fenic/api/functions/text.py
| true | false | 101 | 160 | null |
Column
|
[
"column",
"chunk_size",
"chunk_overlap_percentage",
"chunking_character_set_custom_characters"
] | null | null | null |
Type: function
Member Name: recursive_character_chunk
Qualified Name: fenic.api.functions.text.recursive_character_chunk
Docstring: Chunks a string column into chunks of a specified size (in characters) with an optional overlap.
The chunking is performed recursively, attempting to preserve the underlying structure of the text
by splitting on natural boundaries (paragraph breaks, sentence breaks, etc.) to maintain context.
By default, the boundary characters are ['\n\n', '\n', '.', ';', ':', ' ', '-', ''], but the list can be customized.
Args:
column: The input string column or column name to chunk
chunk_size: The size of each chunk in characters
chunk_overlap_percentage: The overlap between each chunk as a percentage of the chunk size
chunking_character_set_custom_characters (Optional): List of alternative characters to split on. Note that the characters should be ordered from coarsest to finest desired granularity -- earlier characters in the list should result in fewer overall splits than later characters.
Returns:
Column: A column containing the chunks as an array of strings
Example: Default character chunking
```python
# Create chunks of at most 100 characters with 20% overlap
df.select(
text.recursive_character_chunk(col("text"), 100, 20).alias("chunks")
)
```
Example: Custom character chunking
```python
# Create chunks with custom split characters
df.select(
text.recursive_character_chunk(
col("text"),
100,
20,
['\n\n', '\n', '.', ' ', '']
).alias("chunks")
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "chunk_size", "chunk_overlap_percentage", "chunking_character_set_custom_characters"]
Returns: Column
Parent Class: none
|
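A short sketch pairing `recursive_character_chunk` with `explode` to get one row per chunk, a common prelude to embedding or LLM calls. `articles_df` and its `body` column are assumptions.
```python
from fenic.api.functions import text
from fenic import col  # assumed re-export

# ~500-character chunks with 10% overlap, split on natural boundaries first.
articles_df.select(
    text.recursive_character_chunk(col("body"), 500, 10).alias("chunks")
).explode("chunks")
```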
function
|
recursive_word_chunk
|
fenic.api.functions.text.recursive_word_chunk
|
Chunks a string column into chunks of a specified size (in words) with an optional overlap.
The chunking is performed recursively, attempting to preserve the underlying structure of the text
by splitting on natural boundaries (paragraph breaks, sentence breaks, etc.) to maintain context.
By default, the boundary characters are ['\n\n', '\n', '.', ';', ':', ' ', '-', ''], but the list can be customized.
Args:
column: The input string column or column name to chunk
chunk_size: The size of each chunk in words
chunk_overlap_percentage: The overlap between each chunk as a percentage of the chunk size
chunking_character_set_custom_characters (Optional): List of alternative characters to split on. Note that the characters should be ordered from coarsest to finest desired granularity -- earlier characters in the list should result in fewer overall splits than later characters.
Returns:
Column: A column containing the chunks as an array of strings
Example: Default word chunking
```python
# Create chunks of at most 100 words with 20% overlap
df.select(
text.recursive_word_chunk(col("text"), 100, 20).alias("chunks")
)
```
Example: Custom word chunking
```python
# Create chunks with custom split characters
df.select(
text.recursive_word_chunk(
col("text"),
100,
20,
['\n\n', '\n', '.', ' ', '']
).alias("chunks")
)
```
|
site-packages/fenic/api/functions/text.py
| true | false | 163 | 222 | null |
Column
|
[
"column",
"chunk_size",
"chunk_overlap_percentage",
"chunking_character_set_custom_characters"
] | null | null | null |
Type: function
Member Name: recursive_word_chunk
Qualified Name: fenic.api.functions.text.recursive_word_chunk
Docstring: Chunks a string column into chunks of a specified size (in words) with an optional overlap.
The chunking is performed recursively, attempting to preserve the underlying structure of the text
by splitting on natural boundaries (paragraph breaks, sentence breaks, etc.) to maintain context.
By default, the boundary characters are ['\n\n', '\n', '.', ';', ':', ' ', '-', ''], but the list can be customized.
Args:
column: The input string column or column name to chunk
chunk_size: The size of each chunk in words
chunk_overlap_percentage: The overlap between each chunk as a percentage of the chunk size
chunking_character_set_custom_characters (Optional): List of alternative characters to split on. Note that the characters should be ordered from coarsest to finest desired granularity -- earlier characters in the list should result in fewer overall splits than later characters.
Returns:
Column: A column containing the chunks as an array of strings
Example: Default word chunking
```python
# Create chunks of at most 100 words with 20% overlap
df.select(
text.recursive_word_chunk(col("text"), 100, 20).alias("chunks")
)
```
Example: Custom word chunking
```python
# Create chunks with custom split characters
df.select(
text.recursive_word_chunk(
col("text"),
100,
20,
['\n\n', '\n', '.', ' ', '']
).alias("chunks")
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "chunk_size", "chunk_overlap_percentage", "chunking_character_set_custom_characters"]
Returns: Column
Parent Class: none
|
function
|
recursive_token_chunk
|
fenic.api.functions.text.recursive_token_chunk
|
Chunks a string column into chunks of a specified size (in tokens) with an optional overlap.
The chunking is performed recursively, attempting to preserve the underlying structure of the text
by splitting on natural boundaries (paragraph breaks, sentence breaks, etc.) to maintain context.
By default, the boundary characters are ['\n\n', '\n', '.', ';', ':', ' ', '-', ''], but the list can be customized.
Args:
column: The input string column or column name to chunk
chunk_size: The size of each chunk in tokens
chunk_overlap_percentage: The overlap between each chunk as a percentage of the chunk size
chunking_character_set_custom_characters (Optional): List of alternative characters to split on. Note that the characters should be ordered from coarsest to finest desired granularity -- earlier characters in the list should result in fewer overall splits than later characters.
Returns:
Column: A column containing the chunks as an array of strings
Example: Default token chunking
```python
# Create chunks of at most 100 tokens with 20% overlap
df.select(
text.recursive_token_chunk(col("text"), 100, 20).alias("chunks")
)
```
Example: Custom token chunking
```python
# Create chunks with custom split characters
df.select(
text.recursive_token_chunk(
col("text"),
100,
20,
['\n\n', '\n', '.', ' ', '']
).alias("chunks")
)
```
|
site-packages/fenic/api/functions/text.py
| true | false | 225 | 284 | null |
Column
|
[
"column",
"chunk_size",
"chunk_overlap_percentage",
"chunking_character_set_custom_characters"
] | null | null | null |
Type: function
Member Name: recursive_token_chunk
Qualified Name: fenic.api.functions.text.recursive_token_chunk
Docstring: Chunks a string column into chunks of a specified size (in tokens) with an optional overlap.
The chunking is performed recursively, attempting to preserve the underlying structure of the text
by splitting on natural boundaries (paragraph breaks, sentence breaks, etc.) to maintain context.
By default, the boundary characters are ['\n\n', '\n', '.', ';', ':', ' ', '-', ''], but the list can be customized.
Args:
column: The input string column or column name to chunk
chunk_size: The size of each chunk in tokens
chunk_overlap_percentage: The overlap between each chunk as a percentage of the chunk size
chunking_character_set_custom_characters (Optional): List of alternative characters to split on. Note that the characters should be ordered from coarsest to finest desired granularity -- earlier characters in the list should result in fewer overall splits than later characters.
Returns:
Column: A column containing the chunks as an array of strings
Example: Default token chunking
```python
# Create chunks of at most 100 tokens with 20% overlap
df.select(
text.recursive_token_chunk(col("text"), 100, 20).alias("chunks")
)
```
Example: Custom token chunking
```python
# Create chunks with custom split characters
df.select(
text.recursive_token_chunk(
col("text"),
100,
20,
['\n\n', '\n', '.', ' ', '']
).alias("chunks")
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "chunk_size", "chunk_overlap_percentage", "chunking_character_set_custom_characters"]
Returns: Column
Parent Class: none
|
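A sketch for `recursive_token_chunk`: token-based chunking pairs naturally with `count_tokens`, so you can sanity-check that exploded chunks stay near the budget (assuming both use the same tokenizer, which the docstrings do not guarantee). `df` and its `text` column are assumptions.
```python
from fenic.api.functions import text
from fenic import col  # assumed re-export

(
    df.select(text.recursive_token_chunk(col("text"), 512, 15).alias("chunks"))
      .explode("chunks")
      .select(text.count_tokens(col("chunks")).alias("chunk_tokens"))  # <= ~512
)
```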
function
|
character_chunk
|
fenic.api.functions.text.character_chunk
|
Chunks a string column into chunks of a specified size (in characters) with an optional overlap.
The chunking is done by applying a simple sliding window across the text to create chunks of equal size.
This approach does not attempt to preserve the underlying structure of the text.
Args:
column: The input string column or column name to chunk
chunk_size: The size of each chunk in characters
chunk_overlap_percentage: The overlap between chunks as a percentage of the chunk size (Default: 0)
Returns:
Column: A column containing the chunks as an array of strings
Example: Create character chunks
```python
# Create chunks of 100 characters with 20% overlap
df.select(text.character_chunk(col("text"), 100, 20))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 287 | 319 | null |
Column
|
[
"column",
"chunk_size",
"chunk_overlap_percentage"
] | null | null | null |
Type: function
Member Name: character_chunk
Qualified Name: fenic.api.functions.text.character_chunk
Docstring: Chunks a string column into chunks of a specified size (in characters) with an optional overlap.
The chunking is done by applying a simple sliding window across the text to create chunks of equal size.
This approach does not attempt to preserve the underlying structure of the text.
Args:
column: The input string column or column name to chunk
chunk_size: The size of each chunk in characters
chunk_overlap_percentage: The overlap between chunks as a percentage of the chunk size (Default: 0)
Returns:
Column: A column containing the chunks as an array of strings
Example: Create character chunks
```python
# Create chunks of 100 characters with 20% overlap
df.select(text.character_chunk(col("text"), 100, 20))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "chunk_size", "chunk_overlap_percentage"]
Returns: Column
Parent Class: none
|
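The sliding-window arithmetic behind `character_chunk` is worth making concrete: with `chunk_size=100` and 20% overlap, each window starts 80 characters after the previous one. The pure-Python model below is illustrative only, not fenic's implementation.
```python
def window_starts(text_len: int, chunk_size: int, overlap_pct: int) -> list[int]:
    # Each new window advances by chunk_size minus the overlapped portion.
    stride = chunk_size - (chunk_size * overlap_pct) // 100  # 100 - 20 -> 80
    return list(range(0, text_len, stride))

print(window_starts(260, 100, 20))  # [0, 80, 160, 240]; the last chunk is short
```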
function
|
word_chunk
|
fenic.api.functions.text.word_chunk
|
Chunks a string column into chunks of a specified size (in words) with an optional overlap.
The chunking is done by applying a simple sliding window across the text to create chunks of equal size.
This approach does not attempt to preserve the underlying structure of the text.
Args:
column: The input string column or column name to chunk
chunk_size: The size of each chunk in words
chunk_overlap_percentage: The overlap between chunks as a percentage of the chunk size (Default: 0)
Returns:
Column: A column containing the chunks as an array of strings
Example: Create word chunks
```python
# Create chunks of 100 words with 20% overlap
df.select(text.word_chunk(col("text"), 100, 20))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 322 | 354 | null |
Column
|
[
"column",
"chunk_size",
"chunk_overlap_percentage"
] | null | null | null |
Type: function
Member Name: word_chunk
Qualified Name: fenic.api.functions.text.word_chunk
Docstring: Chunks a string column into chunks of a specified size (in words) with an optional overlap.
The chunking is done by applying a simple sliding window across the text to create chunks of equal size.
This approach does not attempt to preserve the underlying structure of the text.
Args:
column: The input string column or column name to chunk
chunk_size: The size of each chunk in words
chunk_overlap_percentage: The overlap between chunks as a percentage of the chunk size (Default: 0)
Returns:
Column: A column containing the chunks as an array of strings
Example: Create word chunks
```python
# Create chunks of 100 words with 20% overlap
df.select(text.word_chunk(col("text"), 100, 20))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "chunk_size", "chunk_overlap_percentage"]
Returns: Column
Parent Class: none
|
function
|
token_chunk
|
fenic.api.functions.text.token_chunk
|
Chunks a string column into chunks of a specified size (in tokens) with an optional overlap.
The chunking is done by applying a simple sliding window across the text to create chunks of equal size.
This approach does not attempt to preserve the underlying structure of the text.
Args:
column: The input string column or column name to chunk
chunk_size: The size of each chunk in tokens
chunk_overlap_percentage: The overlap between chunks as a percentage of the chunk size (Default: 0)
Returns:
Column: A column containing the chunks as an array of strings
Example: Create token chunks
```python
# Create chunks of 100 tokens with 20% overlap
df.select(text.token_chunk(col("text"), 100, 20))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 357 | 389 | null |
Column
|
[
"column",
"chunk_size",
"chunk_overlap_percentage"
] | null | null | null |
Type: function
Member Name: token_chunk
Qualified Name: fenic.api.functions.text.token_chunk
Docstring: Chunks a string column into chunks of a specified size (in tokens) with an optional overlap.
The chunking is done by applying a simple sliding window across the text to create chunks of equal size.
This approach does not attempt to preserve the underlying structure of the text.
Args:
column: The input string column or column name to chunk
chunk_size: The size of each chunk in tokens
chunk_overlap_percentage: The overlap between chunks as a percentage of the chunk size (Default: 0)
Returns:
Column: A column containing the chunks as an array of strings
Example: Create token chunks
```python
# Create chunks of 100 tokens with 20% overlap
df.select(text.token_chunk(col("text"), 100, 20))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "chunk_size", "chunk_overlap_percentage"]
Returns: Column
Parent Class: none
|
function
|
count_tokens
|
fenic.api.functions.text.count_tokens
|
Returns the number of tokens in a string using OpenAI's cl100k_base encoding (tiktoken).
Args:
column: The input string column.
Returns:
Column: A column with the token counts for each input string.
Example: Count tokens in text
```python
# Count tokens in a text column
df.select(text.count_tokens(col("text")))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 392 | 412 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: count_tokens
Qualified Name: fenic.api.functions.text.count_tokens
Docstring: Returns the number of tokens in a string using OpenAI's cl100k_base encoding (tiktoken).
Args:
column: The input string column.
Returns:
Column: A column with the token counts for each input string.
Example: Count tokens in text
```python
# Count tokens in a text column
df.select(text.count_tokens(col("text")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
concat
|
fenic.api.functions.text.concat
|
Concatenates multiple columns or strings into a single string.
Args:
*cols: Columns or strings to concatenate
Returns:
Column: A column containing the concatenated strings
Example: Concatenate columns
```python
# Concatenate two columns with a space in between
df.select(text.concat(col("col1"), lit(" "), col("col2")))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 415 | 444 | null |
Column
|
[
"cols"
] | null | null | null |
Type: function
Member Name: concat
Qualified Name: fenic.api.functions.text.concat
Docstring: Concatenates multiple columns or strings into a single string.
Args:
*cols: Columns or strings to concatenate
Returns:
Column: A column containing the concatenated strings
Example: Concatenate columns
```python
# Concatenate two columns with a space in between
df.select(text.concat(col("col1"), lit(" "), col("col2")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["cols"]
Returns: Column
Parent Class: none
|
function
|
parse_transcript
|
fenic.api.functions.text.parse_transcript
|
Parses a transcript from text to a structured format with unified schema.
Converts transcript text in various formats (srt, webvtt, generic) to a standardized structure
with fields: index, speaker, start_time, end_time, duration, content, format.
All timestamps are returned as floating-point seconds from the start.
Args:
column: The input string column or column name containing transcript text
format: The format of the transcript ("srt", "webvtt", or "generic")
Returns:
Column: A column containing an array of structured transcript entries with unified schema:
- index: Optional[int] - Entry index (1-based)
- speaker: Optional[str] - Speaker name (for generic format)
- start_time: float - Start time in seconds
- end_time: Optional[float] - End time in seconds
- duration: Optional[float] - Duration in seconds
- content: str - Transcript content/text
- format: str - Original format ("srt", "webvtt", or "generic")
Example: Parse SRT format transcript
```python
df.select(text.parse_transcript(col("transcript"), "srt"))
```
Example: Parse generic conversation transcript
```python
df.select(text.parse_transcript(col("transcript"), "generic"))
```
Example: Parse WebVTT format transcript
```python
df.select(text.parse_transcript(col("transcript"), "webvtt"))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 448 | 481 | null |
Column
|
[
"column",
"format"
] | null | null | null |
Type: function
Member Name: parse_transcript
Qualified Name: fenic.api.functions.text.parse_transcript
Docstring: Parses a transcript from text to a structured format with unified schema.
Converts transcript text in various formats (srt, webvtt, generic) to a standardized structure
with fields: index, speaker, start_time, end_time, duration, content, format.
All timestamps are returned as floating-point seconds from the start.
Args:
column: The input string column or column name containing transcript text
format: The format of the transcript ("srt", "webvtt", or "generic")
Returns:
Column: A column containing an array of structured transcript entries with unified schema:
- index: Optional[int] - Entry index (1-based)
- speaker: Optional[str] - Speaker name (for generic format)
- start_time: float - Start time in seconds
- end_time: Optional[float] - End time in seconds
- duration: Optional[float] - Duration in seconds
- content: str - Transcript content/text
- format: str - Original format ("srt", "webvtt", or "generic")
Example: Parse SRT format transcript
```python
df.select(text.parse_transcript(col("transcript"), "srt"))
```
Example: Parse generic conversation transcript
```python
df.select(text.parse_transcript(col("transcript"), "generic"))
```
Example: Parse WebVTT format transcript
```python
df.select(text.parse_transcript(col("transcript"), "webvtt"))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "format"]
Returns: Column
Parent Class: none
|
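A hedged sketch for `parse_transcript`: parse SRT captions, explode to one row per cue, and project the unified-schema fields. `media_df`, its `transcript` column, and the `col` import path are assumptions.
```python
from fenic.api.functions import text
from fenic import col  # assumed re-export

entries = media_df.select(
    text.parse_transcript(col("transcript"), "srt").alias("entries")
).explode("entries")

# All timestamps are floating-point seconds from the start.
entries.select(
    col("entries").start_time,
    col("entries").end_time,
    col("entries").content,
)
```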
function
|
concat_ws
|
fenic.api.functions.text.concat_ws
|
Concatenates multiple columns or strings into a single string with a separator.
Args:
separator: The separator to use
*cols: Columns or strings to concatenate
Returns:
Column: A column containing the concatenated strings
Example: Concatenate with comma separator
```python
# Concatenate columns with comma separator
df.select(text.concat_ws(",", col("col1"), col("col2")))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 484 | 516 | null |
Column
|
[
"separator",
"cols"
] | null | null | null |
Type: function
Member Name: concat_ws
Qualified Name: fenic.api.functions.text.concat_ws
Docstring: Concatenates multiple columns or strings into a single string with a separator.
Args:
separator: The separator to use
*cols: Columns or strings to concatenate
Returns:
Column: A column containing the concatenated strings
Example: Concatenate with comma separator
```python
# Concatenate columns with comma separator
df.select(text.concat_ws(",", col("col1"), col("col2")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["separator", "cols"]
Returns: Column
Parent Class: none
|
function
|
array_join
|
fenic.api.functions.text.array_join
|
Joins an array of strings into a single string with a delimiter.
Args:
column: The column to join
delimiter: The delimiter to use
Returns:
Column: A column containing the joined strings
Example: Join array with comma
```python
# Join array elements with comma
df.select(text.array_join(col("array_column"), ","))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 519 | 537 | null |
Column
|
[
"column",
"delimiter"
] | null | null | null |
Type: function
Member Name: array_join
Qualified Name: fenic.api.functions.text.array_join
Docstring: Joins an array of strings into a single string with a delimiter.
Args:
column: The column to join
delimiter: The delimiter to use
Returns:
Column: A column containing the joined strings
Example: Join array with comma
```python
# Join array elements with comma
df.select(text.array_join(col("array_column"), ","))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "delimiter"]
Returns: Column
Parent Class: none
|
function
|
replace
|
fenic.api.functions.text.replace
|
Replace all occurrences of a pattern with a new string, treating the pattern as a literal string.
This method creates a new string column with all occurrences of the specified pattern
replaced with a new string. The pattern is treated as a literal string, not a regular expression.
If either search or replace is a column expression, the operation is performed dynamically
using the values from those columns.
Args:
src: The input string column or column name to perform replacements on
search: The pattern to search for (can be a string or column expression)
replace: The string to replace with (can be a string or column expression)
Returns:
Column: A column containing the strings with replacements applied
Example: Replace with literal string
```python
# Replace all occurrences of "foo" in the "name" column with "bar"
df.select(text.replace(col("name"), "foo", "bar"))
```
Example: Replace using column values
```python
# Replace all occurrences of the value in the "search" column with the value in the "replace" column, for each row in the "text" column
df.select(text.replace(col("text"), col("search"), col("replace")))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 540 | 583 | null |
Column
|
[
"src",
"search",
"replace"
] | null | null | null |
Type: function
Member Name: replace
Qualified Name: fenic.api.functions.text.replace
Docstring: Replace all occurrences of a pattern with a new string, treating the pattern as a literal string.
This method creates a new string column with all occurrences of the specified pattern
replaced with a new string. The pattern is treated as a literal string, not a regular expression.
If either search or replace is a column expression, the operation is performed dynamically
using the values from those columns.
Args:
src: The input string column or column name to perform replacements on
search: The pattern to search for (can be a string or column expression)
replace: The string to replace with (can be a string or column expression)
Returns:
Column: A column containing the strings with replacements applied
Example: Replace with literal string
```python
# Replace all occurrences of "foo" in the "name" column with "bar"
df.select(text.replace(col("name"), "foo", "bar"))
```
Example: Replace using column values
```python
# Replace all occurrences of the value in the "search" column with the value in the "replace" column, for each row in the "text" column
df.select(text.replace(col("text"), col("search"), col("replace")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["src", "search", "replace"]
Returns: Column
Parent Class: none
|
function
|
regexp_replace
|
fenic.api.functions.text.regexp_replace
|
Replace all occurrences of a pattern with a new string, treating the pattern as a regular expression.
This method creates a new string column with all occurrences of the specified pattern
replaced with a new string. The pattern is treated as a regular expression.
If either pattern or replacement is a column expression, the operation is performed dynamically
using the values from those columns.
Args:
src: The input string column or column name to perform replacements on
pattern: The regular expression pattern to search for (can be a string or column expression)
replacement: The string to replace with (can be a string or column expression)
Returns:
Column: A column containing the strings with replacements applied
Example: Replace digits with dashes
```python
# Replace all digits with dashes
df.select(text.regexp_replace(col("text"), r"\d+", "--"))
```
Example: Dynamic replacement using column values
```python
# Replace using patterns from columns
df.select(text.regexp_replace(col("text"), col("pattern"), col("replacement")))
```
Example: Complex pattern replacement
```python
# Replace email addresses with [REDACTED]
df.select(text.regexp_replace(col("text"), r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}", "[REDACTED]"))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 586 | 640 | null |
Column
|
[
"src",
"pattern",
"replacement"
] | null | null | null |
Type: function
Member Name: regexp_replace
Qualified Name: fenic.api.functions.text.regexp_replace
Docstring: Replace all occurrences of a pattern with a new string, treating the pattern as a regular expression.
This method creates a new string column with all occurrences of the specified pattern
replaced with a new string. The pattern is treated as a regular expression.
If either pattern or replacement is a column expression, the operation is performed dynamically
using the values from those columns.
Args:
src: The input string column or column name to perform replacements on
pattern: The regular expression pattern to search for (can be a string or column expression)
replacement: The string to replace with (can be a string or column expression)
Returns:
Column: A column containing the strings with replacements applied
Example: Replace digits with dashes
```python
# Replace all digits with dashes
df.select(text.regexp_replace(col("text"), r"\d+", "--"))
```
Example: Dynamic replacement using column values
```python
# Replace using patterns from columns
df.select(text.regexp_replace(col("text"), col("pattern"), col("replacement")))
```
Example: Complex pattern replacement
```python
# Replace email addresses with [REDACTED]
df.select(text.regexp_replace(col("text"), r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}", "[REDACTED]"))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["src", "pattern", "replacement"]
Returns: Column
Parent Class: none
|
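A sketch contrasting `replace` and `regexp_replace`: the former treats the search pattern literally, so a dot matches only a dot, while the latter interprets the pattern as a regex and needs escaping. The `version` column is an assumption.
```python
from fenic.api.functions import text
from fenic import col  # assumed re-export

df.select(
    text.replace(col("version"), "1.0", "2.0").alias("literal"),         # "." is literal
    text.regexp_replace(col("version"), r"1\.0", "2.0").alias("regex"),  # "." must be escaped
)
```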
function
|
split
|
fenic.api.functions.text.split
|
Split a string column into an array using a regular expression pattern.
This method creates an array column by splitting each value in the input string column
at matches of the specified regular expression pattern.
Args:
src: The input string column or column name to split
pattern: The regular expression pattern to split on
limit: Maximum number of splits to perform (Default: -1 for unlimited).
If > 0, returns at most limit+1 elements, with remainder in last element.
Returns:
Column: A column containing arrays of substrings
Example: Split on whitespace
```python
# Split on whitespace
df.select(text.split(col("text"), r"\s+"))
```
Example: Split with limit
```python
# Split on whitespace, max 2 splits
df.select(text.split(col("text"), r"\s+", limit=2))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 643 | 673 | null |
Column
|
[
"src",
"pattern",
"limit"
] | null | null | null |
Type: function
Member Name: split
Qualified Name: fenic.api.functions.text.split
Docstring: Split a string column into an array using a regular expression pattern.
This method creates an array column by splitting each value in the input string column
at matches of the specified regular expression pattern.
Args:
src: The input string column or column name to split
pattern: The regular expression pattern to split on
limit: Maximum number of splits to perform (Default: -1 for unlimited).
If > 0, returns at most limit+1 elements, with remainder in last element.
Returns:
Column: A column containing arrays of substrings
Example: Split on whitespace
```python
# Split on whitespace
df.select(text.split(col("text"), r"\s+"))
```
Example: Split with limit
```python
# Split on whitespace, max 2 splits
df.select(text.split(col("text"), r"\s+", limit=2))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["src", "pattern", "limit"]
Returns: Column
Parent Class: none
|
function
|
split_part
|
fenic.api.functions.text.split_part
|
Split a string and return a specific part using 1-based indexing.
Splits each string by a delimiter and returns the specified part.
If the delimiter is a column expression, the split operation is performed dynamically
using the delimiter values from that column.
Behavior:
- If any input is null, returns null
- If part_number is out of range of the split parts, returns an empty string
- If part_number is 0, throws an error
- If part_number is negative, counts from the end of the split parts
- If the delimiter is an empty string, the string is not split
Args:
src: The input string column or column name to split
delimiter: The delimiter to split on (can be a string or column expression)
part_number: Which part to return (1-based integer index or column expression)
Returns:
Column: A column containing the specified part from each split string
Example: Get second part of comma-separated values
```python
# Get second part of comma-separated values
df.select(text.split_part(col("text"), ",", 2))
```
Example: Get last part using negative index
```python
# Get last part using negative index
df.select(text.split_part(col("text"), ",", -1))
```
Example: Use dynamic delimiter from column
```python
# Use dynamic delimiter from column
df.select(text.split_part(col("text"), col("delimiter"), 1))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 676 | 737 | null |
Column
|
[
"src",
"delimiter",
"part_number"
] | null | null | null |
Type: function
Member Name: split_part
Qualified Name: fenic.api.functions.text.split_part
Docstring: Split a string and return a specific part using 1-based indexing.
Splits each string by a delimiter and returns the specified part.
If the delimiter is a column expression, the split operation is performed dynamically
using the delimiter values from that column.
Behavior:
- If any input is null, returns null
- If part_number is out of range of the split parts, returns an empty string
- If part_number is 0, throws an error
- If part_number is negative, counts from the end of the split parts
- If the delimiter is an empty string, the string is not split
Args:
src: The input string column or column name to split
delimiter: The delimiter to split on (can be a string or column expression)
part_number: Which part to return (1-based integer index or column expression)
Returns:
Column: A column containing the specified part from each split string
Example: Get second part of comma-separated values
```python
# Get second part of comma-separated values
df.select(text.split_part(col("text"), ",", 2))
```
Example: Get last part using negative index
```python
# Get last part using negative index
df.select(text.split_part(col("text"), ",", -1))
```
Example: Use dynamic delimiter from column
```python
# Use dynamic delimiter from column
df.select(text.split_part(col("text"), col("delimiter"), 1))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["src", "delimiter", "part_number"]
Returns: Column
Parent Class: none
|
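A sketch of `split_part`'s indexing rules on a value like `"a,b,c"`: part 2 returns `"b"`, part -1 counts from the end and returns `"c"`, and an out-of-range part returns an empty string. The `csv_row` column is an assumption.
```python
from fenic.api.functions import text
from fenic import col  # assumed re-export

df.select(
    text.split_part(col("csv_row"), ",", 2).alias("second"),        # "b"
    text.split_part(col("csv_row"), ",", -1).alias("last"),         # "c"
    text.split_part(col("csv_row"), ",", 5).alias("out_of_range"),  # ""
)
```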
function
|
upper
|
fenic.api.functions.text.upper
|
Convert all characters in a string column to uppercase.
Args:
column: The input string column to convert to uppercase
Returns:
Column: A column containing the uppercase strings
Example: Convert text to uppercase
```python
# Convert all text in the name column to uppercase
df.select(text.upper(col("name")))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 740 | 758 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: upper
Qualified Name: fenic.api.functions.text.upper
Docstring: Convert all characters in a string column to uppercase.
Args:
column: The input string column to convert to uppercase
Returns:
Column: A column containing the uppercase strings
Example: Convert text to uppercase
```python
# Convert all text in the name column to uppercase
df.select(text.upper(col("name")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
lower
|
fenic.api.functions.text.lower
|
Convert all characters in a string column to lowercase.
Args:
column: The input string column to convert to lowercase
Returns:
Column: A column containing the lowercase strings
Example: Convert text to lowercase
```python
# Convert all text in the name column to lowercase
df.select(text.lower(col("name")))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 761 | 779 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: lower
Qualified Name: fenic.api.functions.text.lower
Docstring: Convert all characters in a string column to lowercase.
Args:
column: The input string column to convert to lowercase
Returns:
Column: A column containing the lowercase strings
Example: Convert text to lowercase
```python
# Convert all text in the name column to lowercase
df.select(text.lower(col("name")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
title_case
|
fenic.api.functions.text.title_case
|
Convert the first character of each word in a string column to uppercase.
Args:
column: The input string column to convert to title case
Returns:
Column: A column containing the title case strings
Example: Convert text to title case
```python
# Convert text in the name column to title case
df.select(text.title_case(col("name")))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 782 | 800 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: title_case
Qualified Name: fenic.api.functions.text.title_case
Docstring: Convert the first character of each word in a string column to uppercase.
Args:
column: The input string column to convert to title case
Returns:
Column: A column containing the title case strings
Example: Convert text to title case
```python
# Convert text in the name column to title case
df.select(text.title_case(col("name")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
trim
|
fenic.api.functions.text.trim
|
Remove whitespace from both sides of strings in a column.
This function removes all whitespace characters (spaces, tabs, newlines) from
both the beginning and end of each string in the column.
Args:
column: The input string column or column name to trim
Returns:
Column: A column containing the trimmed strings
Example: Remove whitespace from both sides
```python
# Remove whitespace from both sides of text
df.select(text.trim(col("text")))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 803 | 824 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: trim
Qualified Name: fenic.api.functions.text.trim
Docstring: Remove whitespace from both sides of strings in a column.
This function removes all whitespace characters (spaces, tabs, newlines) from
both the beginning and end of each string in the column.
Args:
column: The input string column or column name to trim
Returns:
Column: A column containing the trimmed strings
Example: Remove whitespace from both sides
```python
# Remove whitespace from both sides of text
df.select(text.trim(col("text")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
btrim
|
fenic.api.functions.text.btrim
|
Remove specified characters from both sides of strings in a column.
This function removes all occurrences of the specified characters from
both the beginning and end of each string in the column.
If trim is a column expression, the characters to remove are determined dynamically
from the values in that column.
Args:
col: The input string column or column name to trim
trim: The characters to remove from both sides (Default: whitespace)
Can be a string or column expression.
Returns:
Column: A column containing the trimmed strings
Example: Remove brackets from both sides
```python
# Remove brackets from both sides of text
df.select(text.btrim(col("text"), "[]"))
```
Example: Remove characters specified in a column
```python
# Remove characters specified in a column
df.select(text.btrim(col("text"), col("chars")))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 827 | 864 | null |
Column
|
[
"col",
"trim"
] | null | null | null |
Type: function
Member Name: btrim
Qualified Name: fenic.api.functions.text.btrim
Docstring: Remove specified characters from both sides of strings in a column.
This function removes all occurrences of the specified characters from
both the beginning and end of each string in the column.
If trim is a column expression, the characters to remove are determined dynamically
from the values in that column.
Args:
col: The input string column or column name to trim
trim: The characters to remove from both sides (Default: whitespace)
Can be a string or column expression.
Returns:
Column: A column containing the trimmed strings
Example: Remove brackets from both sides
```python
# Remove brackets from both sides of text
df.select(text.btrim(col("text"), "[]"))
```
Example: Remove characters specified in a column
```python
# Remove characters specified in a column
df.select(text.btrim(col("text"), col("chars")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["col", "trim"]
Returns: Column
Parent Class: none
|
function
|
ltrim
|
fenic.api.functions.text.ltrim
|
Remove whitespace from the start of strings in a column.
This function removes all whitespace characters (spaces, tabs, newlines) from
the beginning of each string in the column.
Args:
col: The input string column or column name to trim
Returns:
Column: A column containing the left-trimmed strings
Example: Remove leading whitespace
```python
# Remove whitespace from the start of text
df.select(text.ltrim(col("text")))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 867 | 888 | null |
Column
|
[
"col"
] | null | null | null |
Type: function
Member Name: ltrim
Qualified Name: fenic.api.functions.text.ltrim
Docstring: Remove whitespace from the start of strings in a column.
This function removes all whitespace characters (spaces, tabs, newlines) from
the beginning of each string in the column.
Args:
col: The input string column or column name to trim
Returns:
Column: A column containing the left-trimmed strings
Example: Remove leading whitespace
```python
# Remove whitespace from the start of text
df.select(text.ltrim(col("text")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["col"]
Returns: Column
Parent Class: none
|
function
|
rtrim
|
fenic.api.functions.text.rtrim
|
Remove whitespace from the end of strings in a column.
This function removes all whitespace characters (spaces, tabs, newlines) from
the end of each string in the column.
Args:
col: The input string column or column name to trim
Returns:
Column: A column containing the right-trimmed strings
Example: Remove trailing whitespace
```python
# Remove whitespace from the end of text
df.select(text.rtrim(col("text")))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 891 | 912 | null |
Column
|
[
"col"
] | null | null | null |
Type: function
Member Name: rtrim
Qualified Name: fenic.api.functions.text.rtrim
Docstring: Remove whitespace from the end of strings in a column.
This function removes all whitespace characters (spaces, tabs, newlines) from
the end of each string in the column.
Args:
col: The input string column or column name to trim
Returns:
Column: A column containing the right-trimmed strings
Example: Remove trailing whitespace
```python
# Remove whitespace from the end of text
df.select(text.rtrim(col("text")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["col"]
Returns: Column
Parent Class: none
|
function
|
length
|
fenic.api.functions.text.length
|
Calculate the character length of each string in the column.
Args:
column: The input string column to calculate lengths for
Returns:
Column: A column containing the length of each string in characters
Example: Get string lengths
```python
# Get the length of each string in the name column
df.select(text.length(col("name")))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 915 | 933 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: length
Qualified Name: fenic.api.functions.text.length
Docstring: Calculate the character length of each string in the column.
Args:
column: The input string column to calculate lengths for
Returns:
Column: A column containing the length of each string in characters
Example: Get string lengths
```python
# Get the length of each string in the name column
df.select(text.length(col("name")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
byte_length
|
fenic.api.functions.text.byte_length
|
Calculate the byte length of each string in the column.
Args:
column: The input string column to calculate byte lengths for
Returns:
Column: A column containing the byte length of each string
Example: Get byte lengths
```python
# Get the byte length of each string in the name column
df.select(text.byte_length(col("name")))
```
|
site-packages/fenic/api/functions/text.py
| true | false | 936 | 954 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: byte_length
Qualified Name: fenic.api.functions.text.byte_length
Docstring: Calculate the byte length of each string in the column.
Args:
column: The input string column to calculate byte lengths for
Returns:
Column: A column containing the byte length of each string
Example: Get byte lengths
```python
# Get the byte length of each string in the name column
df.select(text.byte_length(col("name")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
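A sketch showing why `length` and `byte_length` are distinct: they diverge on non-ASCII text. Assuming UTF-8 (the docstring does not name the encoding), `"héllo"` is 5 characters but 6 bytes. The `name` column is an assumption.
```python
from fenic.api.functions import text
from fenic import col  # assumed re-export

df.select(
    text.length(col("name")).alias("chars"),       # "héllo" -> 5
    text.byte_length(col("name")).alias("bytes"),  # "héllo" -> 6, assuming UTF-8
)
```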
function
|
jinja
|
fenic.api.functions.text.jinja
|
Render a Jinja template using values from the specified columns.
This function evaluates a Jinja2 template string for each row, using the provided
columns as template variables. Only a subset of Jinja2 features is supported.
Args:
jinja_template: A Jinja2 template string to render for each row.
Variables are referenced using double braces: {{ variable_name }}
strict: If True, when any of the provided columns has a None value for a row,
the entire row's output will be None (template is not rendered).
If False, None values are handled using Jinja2's null rendering behavior.
Default is True.
**columns: Keyword arguments mapping variable names to columns.
Each keyword becomes a variable in the template context.
Returns:
Column: A string column containing the rendered template for each row
Supported Features:
- Variable substitution: {{ variable }}
- Struct/object field access: {{ user.name }}
- Array indexing with literals: {{ items[0] }}, {{ data["key"] }}
- For loops: {% for item in items %}...{% endfor %}
- If/elif/else conditionals: {% if condition %}...{% endif %}
- Loop variables: {{ loop.index }}, {{ loop.first }}, etc.
- Constants: {{ "literal string" }}, {{ 42 }}
Not Supported (use column expressions instead):
- **Filters**: {{ name|upper }} → Use upper_name=fc.upper(col("name"))
- **Function calls**: {{ len(items) }} → Use item_count=fc.array_size(col("items"))
- **Operators**: {% if price > 100 %} → Use is_expensive=(col("price") > 100)
- **Arithmetic**: {{ price * quantity }} → Use total=col("price") * col("quantity")
- **Dynamic indexing**: {{ items[i] }} → Use item=(fc.col("items").get_item(col("index")))
- **Variable assignment**: {% set x = 5 %} → Pre-compute as column expression
- **Macros, includes, extends**: Not supported
Example: LLM prompt formatting with conditional context and examples
```python
# Format prompts with user query, conditional context, and examples
prompt_template = '''
Answer the user's question.
{% if context %}
Context: {{ context }}
{% endif %}
{% if examples %}
Few-shot examples:
{% for ex in examples %}
Q: {{ ex.question }}
A: {{ ex.answer }}
{% endfor %}
{% endif %}
Question: {{ query }}
Please provide a {{ style }} response.'''
# Generate prompts with varying context based on query type
result = df.select(
text.jinja(
prompt_template,
# Direct columns
query=col("user_question"),
context=col("retrieved_context"), # Can be null for some rows
# Column expression for conditional logic
style=fc.when(col("query_type") == "technical", "detailed and technical")
.when(col("query_type") == "casual", "conversational")
.otherwise("clear and concise"),
# Array of examples (struct array)
examples=col("few_shot_examples") # Array of {question, answer} structs
).alias("llm_prompt")
)
```
Notes:
- Template syntax is validated at query planning time
- Complex operations can use column expressions
- Arrays can only be iterated with {% for %} or accessed with literal indices
- Structs can only use literal field names
- Null values are rendered according to Jinja2's null rendering behavior
|
site-packages/fenic/api/functions/text.py
| true | false | 957 | 1,058 | null |
Column
|
[
"jinja_template",
"strict",
"columns"
] | null | null | null |
Type: function
Member Name: jinja
Qualified Name: fenic.api.functions.text.jinja
Docstring: Render a Jinja template using values from the specified columns.
This function evaluates a Jinja2 template string for each row, using the provided
columns as template variables. Only a subset of Jinja2 features is supported.
Args:
jinja_template: A Jinja2 template string to render for each row.
Variables are referenced using double braces: {{ variable_name }}
strict: If True, when any of the provided columns has a None value for a row,
the entire row's output will be None (template is not rendered).
If False, None values are handled using Jinja2's null rendering behavior.
Default is True.
**columns: Keyword arguments mapping variable names to columns.
Each keyword becomes a variable in the template context.
Returns:
Column: A string column containing the rendered template for each row
Supported Features:
- Variable substitution: {{ variable }}
- Struct/object field access: {{ user.name }}
- Array indexing with literals: {{ items[0] }}, {{ data["key"] }}
- For loops: {% for item in items %}...{% endfor %}
- If/elif/else conditionals: {% if condition %}...{% endif %}
- Loop variables: {{ loop.index }}, {{ loop.first }}, etc.
- Constants: {{ "literal string" }}, {{ 42 }}
Not Supported (use column expressions instead):
- **Filters**: {{ name|upper }} → Use upper_name=fc.upper(col("name"))
- **Function calls**: {{ len(items) }} → Use item_count=fc.array_size(col("items"))
- **Operators**: {% if price > 100 %} → Use is_expensive=(col("price") > 100)
- **Arithmetic**: {{ price * quantity }} → Use total=col("price") * col("quantity")
- **Dynamic indexing**: {{ items[i] }} → Use item=(fc.col("items").get_item(col("index")))
- **Variable assignment**: {% set x = 5 %} → Pre-compute as column expression
- **Macros, includes, extends**: Not supported
Example: LLM prompt formatting with conditional context and examples
```python
# Format prompts with user query, conditional context, and examples
prompt_template = '''
Answer the user's question.
{% if context %}
Context: {{ context }}
{% endif %}
{% if examples %}
Few-shot examples:
{% for ex in examples %}
Q: {{ ex.question }}
A: {{ ex.answer }}
{% endfor %}
{% endif %}
Question: {{ query }}
Please provide a {{ style }} response.'''
# Generate prompts with varying context based on query type
result = df.select(
text.jinja(
prompt_template,
# Direct columns
query=col("user_question"),
context=col("retrieved_context"), # Can be null for some rows
# Column expression for conditional logic
style=fc.when(col("query_type") == "technical", "detailed and technical")
.when(col("query_type") == "casual", "conversational")
.otherwise("clear and concise"),
# Array of examples (struct array)
examples=col("few_shot_examples") # Array of {question, answer} structs
).alias("llm_prompt")
)
```
Notes:
- Template syntax is validated at query planning time
- Complex operations can use column expressions
- Arrays can only be iterated with {% for %} or accessed with literal indices
- Structs can only use literal field names
- Null values are rendered according to Jinja2's null rendering behavior
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["jinja_template", "strict", "columns"]
Returns: Column
Parent Class: none
|
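A small sketch of `jinja`'s `strict` flag: with the default `strict=True`, a null in any bound column nulls the whole rendered row; `strict=False` instead falls back to Jinja2's null rendering, which pairs well with `{% if %}` guards. Column names are assumptions.
```python
from fenic.api.functions import text
from fenic import col  # assumed re-export

df.select(
    text.jinja(
        "Hello {{ name }}{% if city %} from {{ city }}{% endif %}!",
        strict=False,        # render the row even when city is null
        name=col("name"),
        city=col("city"),
    ).alias("greeting")
)
```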
function
|
compute_fuzzy_ratio
|
fenic.api.functions.text.compute_fuzzy_ratio
|
Compute the similarity between two strings using a fuzzy string matching algorithm.
This function computes a fuzzy similarity score between two string columns (or a string column
and a literal string) for each row. It supports multiple well-known string similarity metrics,
including Levenshtein, Damerau-Levenshtein, Jaro, Jaro-Winkler, and Hamming.
The returned score is a similarity percentage between 0 and 100, where:
- 100 indicates the strings are identical
- 0 indicates maximum dissimilarity (as defined by the method)
Based on https://rapidfuzz.github.io/RapidFuzz/Usage/fuzz.html#rapidfuzz.fuzz.ratio
Args:
column: A string column or column name. This is the left-hand side of the comparison.
other: A second string column or literal string. This is the right-hand side of the comparison.
method: A string indicating which similarity method to use. Must be one of:
- `"indel"`: Indel distance — counts only insertions and deletions (no substitutions); based on the Longest Common Subsequence.
- `"levenshtein"`: Levenshtein distance (edit distance)
- `"damerau_levenshtein"`: Damerau-Levenshtein distance (includes transpositions)
- `"jaro"`: Jaro similarity, accounts for transpositions and proximity
- `"jaro_winkler"`: Jaro-Winkler similarity, gives higher scores for common prefixes
- `"hamming"`: Hamming distance. Counts differing positions between two equal-length strings, padding shorter string if needed.
Returns:
Column: A double column with similarity scores in the range [0, 100].
Example: Compare two columns
```python
result = df.select(
compute_fuzzy_ratio(col("a"), col("b"), method="levenshtein").alias("sim")
)
```
Example: Compare a column to a literal string
```python
result = df.select(
compute_fuzzy_ratio(col("a"), "world", method="jaro").alias("sim_to_world")
)
```
|
site-packages/fenic/api/functions/text.py
| true | false | 1,060 | 1,107 | null |
Column
|
[
"column",
"other",
"method"
] | null | null | null |
Type: function
Member Name: compute_fuzzy_ratio
Qualified Name: fenic.api.functions.text.compute_fuzzy_ratio
Docstring: Compute the similarity between two strings using a fuzzy string matching algorithm.
This function computes a fuzzy similarity score between two string columns (or a string column
and a literal string) for each row. It supports multiple well-known string similarity metrics,
including Levenshtein, Damerau-Levenshtein, Jaro, Jaro-Winkler, and Hamming.
The returned score is a similarity percentage between 0 and 100, where:
- 100 indicates the strings are identical
- 0 indicates maximum dissimilarity (as defined by the method)
Based on https://rapidfuzz.github.io/RapidFuzz/Usage/fuzz.html#rapidfuzz.fuzz.ratio
Args:
column: A string column or column name. This is the left-hand side of the comparison.
other: A second string column or literal string. This is the right-hand side of the comparison.
method: A string indicating which similarity method to use. Must be one of:
- `"indel"`: Indel distance — counts only insertions and deletions (no substitutions); based on the Longest Common Subsequence.
- `"levenshtein"`: Levenshtein distance (edit distance)
- `"damerau_levenshtein"`: Damerau-Levenshtein distance (includes transpositions)
- `"jaro"`: Jaro similarity, accounts for transpositions and proximity
- `"jaro_winkler"`: Jaro-Winkler similarity, gives higher scores for common prefixes
- `"hamming"`: Hamming distance. Counts differing positions between two equal-length strings, padding shorter string if needed.
Returns:
Column: A double column with similarity scores in the range [0, 100].
Example: Compare two columns
```python
result = df.select(
compute_fuzzy_ratio(col("a"), col("b"), method="levenshtein").alias("sim")
)
```
Example: Compare a column to a literal string
```python
result = df.select(
compute_fuzzy_ratio(col("a"), "world", method="jaro").alias("sim_to_world")
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "other", "method"]
Returns: Column
Parent Class: none
|
function
|
compute_fuzzy_token_sort_ratio
|
fenic.api.functions.text.compute_fuzzy_token_sort_ratio
|
Compute fuzzy similarity after sorting tokens in each string.
Tokenizes strings by whitespace, sorts tokens alphabetically, concatenates
them back into a string, then applies the specified similarity metric.
Useful for comparing strings where word order doesn't matter.
Based on https://rapidfuzz.github.io/RapidFuzz/Usage/fuzz.html#rapidfuzz.fuzz.token_sort_ratio
Args:
column: First string column to compare
other: Second string column or literal string to compare against
method: Similarity algorithm to use after token sorting
Returns:
Double column with similarity scores between 0 and 100
Example:
```python
result = df.select(
    compute_fuzzy_token_sort_ratio(col("city"), "city new york", method="levenshtein").alias("sim")
)
# "new york city" → ["new", "york", "city"] → sorted → ["city", "new", "york"] → "city new york"
# "city new york" → ["city", "new", "york"] → sorted → ["city", "new", "york"] → "city new york"
# levenshtein similarity("city new york", "city new york") = 100
```
|
site-packages/fenic/api/functions/text.py
| true | false | 1,109 | 1,140 | null |
Column
|
[
"column",
"other",
"method"
] | null | null | null |
Type: function
Member Name: compute_fuzzy_token_sort_ratio
Qualified Name: fenic.api.functions.text.compute_fuzzy_token_sort_ratio
Docstring: Compute fuzzy similarity after sorting tokens in each string.
Tokenizes strings by whitespace, sorts tokens alphabetically, concatenates
them back into a string, then applies the specified similarity metric.
Useful for comparing strings where word order doesn't matter.
Based on https://rapidfuzz.github.io/RapidFuzz/Usage/fuzz.html#rapidfuzz.fuzz.token_sort_ratio
Args:
column: First string column to compare
other: Second string column or literal string to compare against
method: Similarity algorithm to use after token sorting
Returns:
Double column with similarity scores between 0 and 100
Example:
```python
result = df.select(
    compute_fuzzy_token_sort_ratio(col("city"), "city new york", method="levenshtein").alias("sim")
)
# "new york city" → ["new", "york", "city"] → sorted → ["city", "new", "york"] → "city new york"
# "city new york" → ["city", "new", "york"] → sorted → ["city", "new", "york"] → "city new york"
# levenshtein similarity("city new york", "city new york") = 100
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "other", "method"]
Returns: Column
Parent Class: none
|
function
|
compute_fuzzy_token_set_ratio
|
fenic.api.functions.text.compute_fuzzy_token_set_ratio
|
Compute fuzzy similarity using token set comparison.
Tokenizes strings by whitespace, creates sets of unique tokens, then
compares three combinations: diff1 vs diff2, intersection vs left set,
and intersection vs right set. Returns the maximum similarity score.
Useful for comparing strings where both word order and duplicates
don't matter.
Based on https://rapidfuzz.github.io/RapidFuzz/Usage/fuzz.html#rapidfuzz.fuzz.token_set_ratio
Args:
column: First string column to compare
other: Second string column or literal string to compare against
method: Similarity algorithm to use for comparison
Returns:
Double column with similarity scores between 0 and 100
Example:
```python
result = df.select(
    compute_fuzzy_token_set_ratio(col("city"), "city of new york", method="indel").alias("sim")
)
# "new york city new" → unique tokens: {"city", "new", "york"}
# "city of new york" → unique tokens: {"city", "new", "of", "york"}
# intersection: {"city", "new", "york"}
# diff1: {} (empty)
# diff2: {"of"}
# Compares: diff1 vs diff2, intersection vs set1, intersection vs set2
# Returns max similarity score = 100
```
|
site-packages/fenic/api/functions/text.py
| true | false | 1,142 | 1,179 | null |
Column
|
[
"column",
"other",
"method"
] | null | null | null |
Type: function
Member Name: compute_fuzzy_token_set_ratio
Qualified Name: fenic.api.functions.text.compute_fuzzy_token_set_ratio
Docstring: Compute fuzzy similarity using token set comparison.
Tokenizes strings by whitespace, creates sets of unique tokens, then
compares three combinations: diff1 vs diff2, intersection vs left set,
and intersection vs right set. Returns the maximum similarity score.
Useful for comparing strings where both word order and duplicates
don't matter.
Based on https://rapidfuzz.github.io/RapidFuzz/Usage/fuzz.html#rapidfuzz.fuzz.token_set_ratio
Args:
column: First string column to compare
other: Second string column or literal string to compare against
method: Similarity algorithm to use for comparison
Returns:
Double column with similarity scores between 0 and 100
Example:
```python
result = df.select(
    compute_fuzzy_token_set_ratio(col("city"), "city of new york", method="indel").alias("sim")
)
# "new york city new" → unique tokens: {"city", "new", "york"}
# "city of new york" → unique tokens: {"city", "new", "of", "york"}
# intersection: {"city", "new", "york"}
# diff1: {} (empty)
# diff2: {"of"}
# Compares: diff1 vs diff2, intersection vs set1, intersection vs set2
# Returns max similarity score = 100
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "other", "method"]
Returns: Column
Parent Class: none
|
module
|
builtin
|
fenic.api.functions.builtin
|
Built-in functions for Fenic DataFrames.
|
site-packages/fenic/api/functions/builtin.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: builtin
Qualified Name: fenic.api.functions.builtin
Docstring: Built-in functions for Fenic DataFrames.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
function
|
sum
|
fenic.api.functions.builtin.sum
|
Aggregate function: returns the sum of all values in the specified column.
Args:
column: Column or column name to compute the sum of
Returns:
A Column expression representing the sum aggregation
Raises:
TypeError: If column is not a Column or string
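Example: Global sum (illustrative sketch; assumes a DataFrame `df` with a numeric "amount" column and the fenic functions imported as `fc`)
```python
# "amount" is a hypothetical column name
df.select(fc.sum("amount").alias("total_amount"))
```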
|
site-packages/fenic/api/functions/builtin.py
| true | false | 38 | 53 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: sum
Qualified Name: fenic.api.functions.builtin.sum
Docstring: Aggregate function: returns the sum of all values in the specified column.
Args:
column: Column or column name to compute the sum of
Returns:
A Column expression representing the sum aggregation
Raises:
TypeError: If column is not a Column or string
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
avg
|
fenic.api.functions.builtin.avg
|
Aggregate function: returns the average (mean) of all values in the specified column. Applies to numeric and embedding types.
Args:
column: Column or column name to compute the average of
Returns:
A Column expression representing the average aggregation
Raises:
TypeError: If column is not a Column or string
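Example: Global average (illustrative sketch; assumes a numeric "rating" column and `fc` as the fenic functions alias)
```python
# "rating" is a hypothetical column name
df.select(fc.avg("rating").alias("avg_rating"))
```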
|
site-packages/fenic/api/functions/builtin.py
| true | false | 56 | 71 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: avg
Qualified Name: fenic.api.functions.builtin.avg
Docstring: Aggregate function: returns the average (mean) of all values in the specified column. Applies to numeric and embedding types.
Args:
column: Column or column name to compute the average of
Returns:
A Column expression representing the average aggregation
Raises:
TypeError: If column is not a Column or string
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
mean
|
fenic.api.functions.builtin.mean
|
Aggregate function: returns the mean (average) of all values in the specified column.
Alias for avg().
Args:
column: Column or column name to compute the mean of
Returns:
A Column expression representing the mean aggregation
Raises:
TypeError: If column is not a Column or string
|
site-packages/fenic/api/functions/builtin.py
| true | false | 74 | 91 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: mean
Qualified Name: fenic.api.functions.builtin.mean
Docstring: Aggregate function: returns the mean (average) of all values in the specified column.
Alias for avg().
Args:
column: Column or column name to compute the mean of
Returns:
A Column expression representing the mean aggregation
Raises:
TypeError: If column is not a Column or string
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
min
|
fenic.api.functions.builtin.min
|
Aggregate function: returns the minimum value in the specified column.
Args:
column: Column or column name to compute the minimum of
Returns:
A Column expression representing the minimum aggregation
Raises:
TypeError: If column is not a Column or string
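Example: Minimum value (illustrative sketch; assumes a numeric "price" column)
```python
# "price" is a hypothetical column name
df.select(fc.min("price").alias("min_price"))
```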
|
site-packages/fenic/api/functions/builtin.py
| true | false | 94 | 109 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: min
Qualified Name: fenic.api.functions.builtin.min
Docstring: Aggregate function: returns the minimum value in the specified column.
Args:
column: Column or column name to compute the minimum of
Returns:
A Column expression representing the minimum aggregation
Raises:
TypeError: If column is not a Column or string
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
max
|
fenic.api.functions.builtin.max
|
Aggregate function: returns the maximum value in the specified column.
Args:
column: Column or column name to compute the maximum of
Returns:
A Column expression representing the maximum aggregation
Raises:
TypeError: If column is not a Column or string
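Example: Maximum value (illustrative sketch; assumes a numeric "price" column)
```python
# "price" is a hypothetical column name
df.select(fc.max("price").alias("max_price"))
```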
|
site-packages/fenic/api/functions/builtin.py
| true | false | 112 | 127 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: max
Qualified Name: fenic.api.functions.builtin.max
Docstring: Aggregate function: returns the maximum value in the specified column.
Args:
column: Column or column name to compute the maximum of
Returns:
A Column expression representing the maximum aggregation
Raises:
TypeError: If column is not a Column or string
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
count
|
fenic.api.functions.builtin.count
|
Aggregate function: returns the count of non-null values in the specified column.
Args:
column: Column or column name to count values in
Returns:
A Column expression representing the count aggregation
Raises:
TypeError: If column is not a Column or string
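Example: Count non-null values (illustrative sketch; assumes a "user_id" column that may contain nulls)
```python
# Nulls in "user_id" are excluded from the count
df.select(fc.count("user_id").alias("num_users"))
```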
|
site-packages/fenic/api/functions/builtin.py
| true | false | 130 | 147 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: count
Qualified Name: fenic.api.functions.builtin.count
Docstring: Aggregate function: returns the count of non-null values in the specified column.
Args:
column: Column or column name to count values in
Returns:
A Column expression representing the count aggregation
Raises:
TypeError: If column is not a Column or string
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
collect_list
|
fenic.api.functions.builtin.collect_list
|
Aggregate function: collects all values from the specified column into a list.
Args:
column: Column or column name to collect values from
Returns:
A Column expression representing the list aggregation
Raises:
TypeError: If column is not a Column or string
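Example: Collect values into a list (illustrative sketch; assumes a "tag" column)
```python
# Gathers every "tag" value into a single array
df.select(fc.collect_list("tag").alias("all_tags"))
```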
|
site-packages/fenic/api/functions/builtin.py
| true | false | 150 | 165 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: collect_list
Qualified Name: fenic.api.functions.builtin.collect_list
Docstring: Aggregate function: collects all values from the specified column into a list.
Args:
column: Column or column name to collect values from
Returns:
A Column expression representing the list aggregation
Raises:
TypeError: If column is not a Column or string
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
array_agg
|
fenic.api.functions.builtin.array_agg
|
Alias for collect_list().
|
site-packages/fenic/api/functions/builtin.py
| true | false | 167 | 170 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: array_agg
Qualified Name: fenic.api.functions.builtin.array_agg
Docstring: Alias for collect_list().
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
first
|
fenic.api.functions.builtin.first
|
Aggregate function: returns the first non-null value in the specified column.
Typically used in aggregations to select the first observed value per group.
Args:
column: Column or column name.
Returns:
Column expression for the first value.
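Example: First value per group (illustrative sketch; assumes a Spark-style `group_by`/`agg` API and hypothetical "customer_id" and "status" columns)
```python
# Picks the first observed status for each customer
df.group_by("customer_id").agg(fc.first("status").alias("first_status"))
```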
|
site-packages/fenic/api/functions/builtin.py
| true | false | 172 | 186 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: first
Qualified Name: fenic.api.functions.builtin.first
Docstring: Aggregate function: returns the first non-null value in the specified column.
Typically used in aggregations to select the first observed value per group.
Args:
column: Column or column name.
Returns:
Column expression for the first value.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
stddev
|
fenic.api.functions.builtin.stddev
|
Aggregate function: returns the sample standard deviation of the specified column.
Args:
column: Column or column name.
Returns:
Column expression for sample standard deviation.
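Example: Sample standard deviation (illustrative sketch; assumes a numeric "latency_ms" column)
```python
# "latency_ms" is a hypothetical column name
df.select(fc.stddev("latency_ms").alias("latency_stddev"))
```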
|
site-packages/fenic/api/functions/builtin.py
| true | false | 188 | 200 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: stddev
Qualified Name: fenic.api.functions.builtin.stddev
Docstring: Aggregate function: returns the sample standard deviation of the specified column.
Args:
column: Column or column name.
Returns:
Column expression for sample standard deviation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
struct
|
fenic.api.functions.builtin.struct
|
Creates a new struct column from multiple input columns.
Args:
*args: Columns or column names to combine into a struct. Can be:
- Individual arguments
- Lists of columns/column names
- Tuples of columns/column names
Returns:
A Column expression representing a struct containing the input columns
Raises:
TypeError: If any argument is not a Column, string, or collection of
Columns/strings
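Example: Combine columns into a struct (illustrative sketch; assumes "name" and "age" columns)
```python
# The resulting struct has fields "name" and "age"
df.select(fc.struct("name", "age").alias("person"))
```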
|
site-packages/fenic/api/functions/builtin.py
| true | false | 202 | 231 | null |
Column
|
[
"args"
] | null | null | null |
Type: function
Member Name: struct
Qualified Name: fenic.api.functions.builtin.struct
Docstring: Creates a new struct column from multiple input columns.
Args:
*args: Columns or column names to combine into a struct. Can be:
- Individual arguments
- Lists of columns/column names
- Tuples of columns/column names
Returns:
A Column expression representing a struct containing the input columns
Raises:
TypeError: If any argument is not a Column, string, or collection of
Columns/strings
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["args"]
Returns: Column
Parent Class: none
|
function
|
array
|
fenic.api.functions.builtin.array
|
Creates a new array column from multiple input columns.
Args:
*args: Columns or column names to combine into an array. Can be:
- Individual arguments
- Lists of columns/column names
- Tuples of columns/column names
Returns:
A Column expression representing an array containing values from the input columns
Raises:
TypeError: If any argument is not a Column, string, or collection of
Columns/strings
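Example: Combine columns into an array (illustrative sketch; assumes same-typed "q1", "q2", and "q3" columns)
```python
# Each row gets a three-element array of its quarterly values
df.select(fc.array("q1", "q2", "q3").alias("quarterly_values"))
```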
|
site-packages/fenic/api/functions/builtin.py
| true | false | 234 | 263 | null |
Column
|
[
"args"
] | null | null | null |
Type: function
Member Name: array
Qualified Name: fenic.api.functions.builtin.array
Docstring: Creates a new array column from multiple input columns.
Args:
*args: Columns or column names to combine into an array. Can be:
- Individual arguments
- Lists of columns/column names
- Tuples of columns/column names
Returns:
A Column expression representing an array containing values from the input columns
Raises:
TypeError: If any argument is not a Column, string, or collection of
Columns/strings
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["args"]
Returns: Column
Parent Class: none
|
function
|
udf
|
fenic.api.functions.builtin.udf
|
A decorator or function for creating user-defined functions (UDFs) that can be applied to DataFrame rows.
Warning:
UDFs cannot be serialized and are not supported in cloud execution.
User-defined functions contain arbitrary Python code that cannot be transmitted
to remote workers. For cloud compatibility, use built-in fenic functions instead.
When applied, UDFs will:
- Access `StructType` columns as Python dictionaries (`dict[str, Any]`).
- Access `ArrayType` columns as Python lists (`list[Any]`).
- Access primitive types (e.g., `int`, `float`, `str`) as their respective Python types.
Args:
f: Python function to convert to UDF
return_type: Expected return type of the UDF. Required parameter.
Example: UDF with primitive types
```python
# UDF with primitive types
@udf(return_type=IntegerType)
def add_one(x: int):
return x + 1
# Or
add_one = udf(lambda x: x + 1, return_type=IntegerType)
```
Example: UDF with nested types
```python
# UDF with nested types
@udf(return_type=StructType([StructField("value1", IntegerType), StructField("value2", IntegerType)]))
def example_udf(x: dict[str, int], y: list[int]):
return {
"value1": x["value1"] + x["value2"] + y[0],
"value2": x["value1"] + x["value2"] + y[1],
}
```
|
site-packages/fenic/api/functions/builtin.py
| true | false | 266 | 321 | null | null |
[
"f",
"return_type"
] | null | null | null |
Type: function
Member Name: udf
Qualified Name: fenic.api.functions.builtin.udf
Docstring: A decorator or function for creating user-defined functions (UDFs) that can be applied to DataFrame rows.
Warning:
UDFs cannot be serialized and are not supported in cloud execution.
User-defined functions contain arbitrary Python code that cannot be transmitted
to remote workers. For cloud compatibility, use built-in fenic functions instead.
When applied, UDFs will:
- Access `StructType` columns as Python dictionaries (`dict[str, Any]`).
- Access `ArrayType` columns as Python lists (`list[Any]`).
- Access primitive types (e.g., `int`, `float`, `str`) as their respective Python types.
Args:
f: Python function to convert to UDF
return_type: Expected return type of the UDF. Required parameter.
Example: UDF with primitive types
```python
# UDF with primitive types
@udf(return_type=IntegerType)
def add_one(x: int):
return x + 1
# Or
add_one = udf(lambda x: x + 1, return_type=IntegerType)
```
Example: UDF with nested types
```python
# UDF with nested types
@udf(return_type=StructType([StructField("value1", IntegerType), StructField("value2", IntegerType)]))
def example_udf(x: dict[str, int], y: list[int]):
return {
"value1": x["value1"] + x["value2"] + y[0],
"value2": x["value1"] + x["value2"] + y[1],
}
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["f", "return_type"]
Returns: none
Parent Class: none
|
function
|
async_udf
|
fenic.api.functions.builtin.async_udf
|
A decorator for creating async user-defined functions (UDFs) with configurable concurrency and retries.
Async UDFs allow IO-bound operations (API calls, database queries, MCP tool calls)
to be executed concurrently while maintaining DataFrame semantics.
Args:
f: Async function to convert to UDF
return_type: Expected return type of the UDF. Required parameter.
max_concurrency: Maximum number of concurrent executions (default: 10)
timeout_seconds: Per-item timeout in seconds (default: 30)
num_retries: Number of retries for failed items (default: 0)
Example: Basic async UDF
```python
@async_udf(return_type=IntegerType)
async def slow_add(x: int, y: int) -> int:
await asyncio.sleep(1)
return x + y
df = df.select(slow_add(fc.col("x"), fc.col("y")).alias("slow_sum"))
# Or
async def slow_add_fn(x: int, y: int) -> int:
await asyncio.sleep(1)
return x + y
slow_add = async_udf(
slow_add_fn,
return_type=IntegerType
)
```
Example: API call with custom concurrency and retries
```python
@async_udf(
return_type=StructType([
StructField("status", IntegerType),
StructField("data", StringType)
]),
max_concurrency=20,
timeout_seconds=5,
num_retries=2
)
async def fetch_data(id: str) -> dict:
async with aiohttp.ClientSession() as session:
async with session.get(f"https://api.example.com/{id}") as resp:
return {
"status": resp.status,
"data": await resp.text()
}
```
Note:
- Individual failures return None instead of raising exceptions
- Async UDFs should not perform blocking or CPU-intensive work, since doing so
stalls other concurrent invocations of the function.
|
site-packages/fenic/api/functions/builtin.py
| true | false | 323 | 419 | null | null |
[
"f",
"return_type",
"max_concurrency",
"timeout_seconds",
"num_retries"
] | null | null | null |
Type: function
Member Name: async_udf
Qualified Name: fenic.api.functions.builtin.async_udf
Docstring: A decorator for creating async user-defined functions (UDFs) with configurable concurrency and retries.
Async UDFs allow IO-bound operations (API calls, database queries, MCP tool calls)
to be executed concurrently while maintaining DataFrame semantics.
Args:
f: Async function to convert to UDF
return_type: Expected return type of the UDF. Required parameter.
max_concurrency: Maximum number of concurrent executions (default: 10)
timeout_seconds: Per-item timeout in seconds (default: 30)
num_retries: Number of retries for failed items (default: 0)
Example: Basic async UDF
```python
@async_udf(return_type=IntegerType)
async def slow_add(x: int, y: int) -> int:
await asyncio.sleep(1)
return x + y
df = df.select(slow_add(fc.col("x"), fc.col("y")).alias("slow_sum"))
# Or
async def slow_add_fn(x: int, y: int) -> int:
await asyncio.sleep(1)
return x + y
slow_add = async_udf(
slow_add_fn,
return_type=IntegerType
)
```
Example: API call with custom concurrency and retries
```python
@async_udf(
return_type=StructType([
StructField("status", IntegerType),
StructField("data", StringType)
]),
max_concurrency=20,
timeout_seconds=5,
num_retries=2
)
async def fetch_data(id: str) -> dict:
async with aiohttp.ClientSession() as session:
async with session.get(f"https://api.example.com/{id}") as resp:
return {
"status": resp.status,
"data": await resp.text()
}
```
Note:
- Individual failures return None instead of raising exceptions
- Async UDFs should not perform blocking or CPU-intensive work, since doing so
stalls other concurrent invocations of the function.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["f", "return_type", "max_concurrency", "timeout_seconds", "num_retries"]
Returns: none
Parent Class: none
|
function
|
asc
|
fenic.api.functions.builtin.asc
|
Mark this column for ascending sort order with nulls first.
Args:
column: The column to apply the ascending ordering to.
Returns:
A sort expression with ascending order and nulls first.
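Example: Ascending sort (illustrative sketch; assumes a Spark-style `sort` method and an "age" column)
```python
# Null ages appear before non-null ages
df.sort(fc.asc("age"))
```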
|
site-packages/fenic/api/functions/builtin.py
| true | false | 422 | 432 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: asc
Qualified Name: fenic.api.functions.builtin.asc
Docstring: Mark this column for ascending sort order with nulls first.
Args:
column: The column to apply the ascending ordering to.
Returns:
A sort expression with ascending order and nulls first.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
asc_nulls_first
|
fenic.api.functions.builtin.asc_nulls_first
|
Alias for asc().
Args:
column: The column to apply the ascending ordering to.
Returns:
A sort expression with ascending order and nulls first.
|
site-packages/fenic/api/functions/builtin.py
| true | false | 435 | 445 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: asc_nulls_first
Qualified Name: fenic.api.functions.builtin.asc_nulls_first
Docstring: Alias for asc().
Args:
column: The column to apply the ascending ordering to.
Returns:
A sort expression with ascending order and nulls first.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
asc_nulls_last
|
fenic.api.functions.builtin.asc_nulls_last
|
Mark this column for ascending sort order with nulls last.
Args:
column: The column to apply the ascending ordering to.
Returns:
A Column expression representing the column and the ascending sort order with nulls last.
|
site-packages/fenic/api/functions/builtin.py
| true | false | 448 | 458 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: asc_nulls_last
Qualified Name: fenic.api.functions.builtin.asc_nulls_last
Docstring: Mark this column for ascending sort order with nulls last.
Args:
column: The column to apply the ascending ordering to.
Returns:
A Column expression representing the column and the ascending sort order with nulls last.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
desc
|
fenic.api.functions.builtin.desc
|
Mark this column for descending sort order with nulls first.
Args:
column: The column to apply the descending ordering to.
Returns:
A sort expression with descending order and nulls first.
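Example: Descending sort (illustrative sketch; assumes a Spark-style `sort` method and a "score" column)
```python
# Null scores appear first; use desc_nulls_last to place them last
df.sort(fc.desc("score"))
```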
|
site-packages/fenic/api/functions/builtin.py
| true | false | 461 | 471 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: desc
Qualified Name: fenic.api.functions.builtin.desc
Docstring: Mark this column for descending sort order with nulls first.
Args:
column: The column to apply the descending ordering to.
Returns:
A sort expression with descending order and nulls first.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
desc_nulls_first
|
fenic.api.functions.builtin.desc_nulls_first
|
Alias for desc().
Args:
column: The column to apply the descending ordering to.
Returns:
A sort expression with descending order and nulls first.
|
site-packages/fenic/api/functions/builtin.py
| true | false | 474 | 484 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: desc_nulls_first
Qualified Name: fenic.api.functions.builtin.desc_nulls_first
Docstring: Alias for desc().
Args:
column: The column to apply the descending ordering to.
Returns:
A sort expression with descending order and nulls first.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
desc_nulls_last
|
fenic.api.functions.builtin.desc_nulls_last
|
Mark this column for descending sort order with nulls last.
Args:
column: The column to apply the descending ordering to.
Returns:
A sort expression with descending order and nulls last.
|
site-packages/fenic/api/functions/builtin.py
| true | false | 487 | 497 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: desc_nulls_last
Qualified Name: fenic.api.functions.builtin.desc_nulls_last
Docstring: Mark this column for descending sort order with nulls last.
Args:
column: The column to apply the descending ordering to.
Returns:
A sort expression with descending order and nulls last.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
array_size
|
fenic.api.functions.builtin.array_size
|
Returns the number of elements in an array column.
This function computes the length of arrays stored in the specified column.
Returns None when the input array is None.
Args:
column: Column or column name containing arrays whose length to compute.
Returns:
A Column expression representing the array length.
Raises:
TypeError: If the column does not contain array data.
Example: Get array sizes
```python
# Get the size of arrays in 'tags' column
df.select(array_size("tags"))
# Use with column reference
df.select(array_size(col("tags")))
```
|
site-packages/fenic/api/functions/builtin.py
| true | false | 500 | 527 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: array_size
Qualified Name: fenic.api.functions.builtin.array_size
Docstring: Returns the number of elements in an array column.
This function computes the length of arrays stored in the specified column.
Returns None when the input array is None.
Args:
column: Column or column name containing arrays whose length to compute.
Returns:
A Column expression representing the array length.
Raises:
TypeError: If the column does not contain array data.
Example: Get array sizes
```python
# Get the size of arrays in 'tags' column
df.select(array_size("tags"))
# Use with column reference
df.select(array_size(col("tags")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
array_contains
|
fenic.api.functions.builtin.array_contains
|
Checks if array column contains a specific value.
This function returns True if the array in the specified column contains the given value,
and False otherwise. Returns False if the array is None.
Args:
column: Column or column name containing the arrays to check.
value: Value to search for in the arrays. Can be:
- A literal value (string, number, boolean)
- A Column expression
Returns:
A boolean Column expression (True if value is found, False otherwise).
Raises:
TypeError: If value type is incompatible with the array element type.
TypeError: If the column does not contain array data.
Example: Check for values in arrays
```python
# Check if 'python' exists in arrays in the 'tags' column
df.select(array_contains("tags", "python"))
# Check using a value from another column
df.select(array_contains("tags", col("search_term")))
```
|
site-packages/fenic/api/functions/builtin.py
| true | false | 530 | 571 | null |
Column
|
[
"column",
"value"
] | null | null | null |
Type: function
Member Name: array_contains
Qualified Name: fenic.api.functions.builtin.array_contains
Docstring: Checks if array column contains a specific value.
This function returns True if the array in the specified column contains the given value,
and False otherwise. Returns False if the array is None.
Args:
column: Column or column name containing the arrays to check.
value: Value to search for in the arrays. Can be:
- A literal value (string, number, boolean)
- A Column expression
Returns:
A boolean Column expression (True if value is found, False otherwise).
Raises:
TypeError: If value type is incompatible with the array element type.
TypeError: If the column does not contain array data.
Example: Check for values in arrays
```python
# Check if 'python' exists in arrays in the 'tags' column
df.select(array_contains("tags", "python"))
# Check using a value from another column
df.select(array_contains("tags", col("search_term")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "value"]
Returns: Column
Parent Class: none
|
function
|
when
|
fenic.api.functions.builtin.when
|
Evaluates a condition and returns a value if true.
This function is used to create conditional expressions. If Column.otherwise() is not invoked,
None is returned for unmatched conditions.
Args:
condition: A boolean Column expression to evaluate.
value: A Column expression to return if the condition is true.
Returns:
A Column expression that evaluates the condition and returns the specified value when true,
and None otherwise.
Raises:
TypeError: If the condition is not a boolean Column expression.
Example: Basic conditional expression
```python
# Basic usage
df.select(when(col("age") > 18, lit("adult")))
# With otherwise
df.select(when(col("age") > 18, lit("adult")).otherwise(lit("minor")))
```
|
site-packages/fenic/api/functions/builtin.py
| true | false | 574 | 604 | null |
Column
|
[
"condition",
"value"
] | null | null | null |
Type: function
Member Name: when
Qualified Name: fenic.api.functions.builtin.when
Docstring: Evaluates a condition and returns a value if true.
This function is used to create conditional expressions. If Column.otherwise() is not invoked,
None is returned for unmatched conditions.
Args:
condition: A boolean Column expression to evaluate.
value: A Column expression to return if the condition is true.
Returns:
A Column expression that evaluates the condition and returns the specified value when true,
and None otherwise.
Raises:
TypeError: If the condition is not a boolean Column expression.
Example: Basic conditional expression
```python
# Basic usage
df.select(when(col("age") > 18, lit("adult")))
# With otherwise
df.select(when(col("age") > 18, lit("adult")).otherwise(lit("minor")))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["condition", "value"]
Returns: Column
Parent Class: none
|
function
|
coalesce
|
fenic.api.functions.builtin.coalesce
|
Returns the first non-null value from the given columns for each row.
This function mimics the behavior of SQL's COALESCE function. It evaluates the input columns
in order and returns the first non-null value encountered. If all values are null, returns null.
Args:
*cols: Column expressions or column names to evaluate. Each argument should be a single
column expression or column name string.
Returns:
A Column expression containing the first non-null value from the input columns.
Raises:
ValidationError: If no columns are provided.
Example: coalesce usage
```python
df.select(coalesce("col1", "col2", "col3"))
```
|
site-packages/fenic/api/functions/builtin.py
| true | false | 607 | 635 | null |
Column
|
[
"cols"
] | null | null | null |
Type: function
Member Name: coalesce
Qualified Name: fenic.api.functions.builtin.coalesce
Docstring: Returns the first non-null value from the given columns for each row.
This function mimics the behavior of SQL's COALESCE function. It evaluates the input columns
in order and returns the first non-null value encountered. If all values are null, returns null.
Args:
*cols: Column expressions or column names to evaluate. Each argument should be a single
column expression or column name string.
Returns:
A Column expression containing the first non-null value from the input columns.
Raises:
ValidationError: If no columns are provided.
Example: coalesce usage
```python
df.select(coalesce("col1", "col2", "col3"))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["cols"]
Returns: Column
Parent Class: none
|
function
|
greatest
|
fenic.api.functions.builtin.greatest
|
Returns the greatest value from the given columns for each row.
This function mimics the behavior of SQL's GREATEST function. It evaluates the input columns
in order and returns the greatest value encountered. If all values are null, returns null.
All arguments must be of the same primitive type (e.g., StringType, BooleanType, FloatType, IntegerType, etc).
Args:
*cols: Column expressions or column names to evaluate. Each argument should be a single
column expression or column name string.
Returns:
A Column expression containing the greatest value from the input columns.
Raises:
ValidationError: If fewer than two columns are provided.
Example: greatest usage
```python
df.select(fc.greatest("col1", "col2", "col3"))
```
|
site-packages/fenic/api/functions/builtin.py
| true | false | 637 | 667 | null |
Column
|
[
"cols"
] | null | null | null |
Type: function
Member Name: greatest
Qualified Name: fenic.api.functions.builtin.greatest
Docstring: Returns the greatest value from the given columns for each row.
This function mimics the behavior of SQL's GREATEST function. It evaluates the input columns
in order and returns the greatest value encountered. If all values are null, returns null.
All arguments must be of the same primitive type (e.g., StringType, BooleanType, FloatType, IntegerType, etc).
Args:
*cols: Column expressions or column names to evaluate. Each argument should be a single
column expression or column name string.
Returns:
A Column expression containing the greatest value from the input columns.
Raises:
ValidationError: If fewer than two columns are provided.
Example: greatest usage
```python
df.select(fc.greatest("col1", "col2", "col3"))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["cols"]
Returns: Column
Parent Class: none
|
function
|
least
|
fenic.api.functions.builtin.least
|
Returns the least value from the given columns for each row.
This function mimics the behavior of SQL's LEAST function. It evaluates the input columns
in order and returns the least value encountered. If all values are null, returns null.
All arguments must be of the same primitive type (e.g., StringType, BooleanType, FloatType, IntegerType, etc).
Args:
*cols: Column expressions or column names to evaluate. Each argument should be a single
column expression or column name string.
Returns:
A Column expression containing the least value from the input columns.
Raises:
ValidationError: If fewer than two columns are provided.
Example: least usage
```python
df.select(fc.least("col1", "col2", "col3"))
```
|
site-packages/fenic/api/functions/builtin.py
| true | false | 670 | 700 | null |
Column
|
[
"cols"
] | null | null | null |
Type: function
Member Name: least
Qualified Name: fenic.api.functions.builtin.least
Docstring: Returns the least value from the given columns for each row.
This function mimics the behavior of SQL's LEAST function. It evaluates the input columns
in order and returns the least value encountered. If all values are null, returns null.
All arguments must be of the same primitive type (e.g., StringType, BooleanType, FloatType, IntegerType, etc).
Args:
*cols: Column expressions or column names to evaluate. Each argument should be a single
column expression or column name string.
Returns:
A Column expression containing the least value from the input columns.
Raises:
ValidationError: If fewer than two columns are provided.
Example: least usage
```python
df.select(fc.least("col1", "col2", "col3"))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["cols"]
Returns: Column
Parent Class: none
|
module
|
json
|
fenic.api.functions.json
|
JSON functions.
|
site-packages/fenic/api/functions/json.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: json
Qualified Name: fenic.api.functions.json
Docstring: JSON functions.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
function
|
jq
|
fenic.api.functions.json.jq
|
Applies a JQ query to a column containing JSON-formatted strings.
Args:
column (ColumnOrName): Input column of type `JsonType`.
query (str): A [JQ](https://jqlang.org/) expression used to extract or transform values.
Returns:
Column: A column containing the result of applying the JQ query to each row's JSON input.
Notes:
- The input column *must* be of type `JsonType`. Use `cast(JsonType)` if needed to ensure correct typing.
- This function supports extracting nested fields, transforming arrays/objects, and other standard JQ operations.
Example: Extract nested field
```python
# Extract the "user.name" field from a JSON column
df.select(json.jq(col("json_col"), ".user.name"))
```
Example: Cast to JsonType before querying
```python
df.select(json.jq(col("raw_json").cast(JsonType), ".event.type"))
```
Example: Work with arrays
```python
# Work with arrays using JQ functions
df.select(json.jq(col("json_array"), "map(.id)"))
```
|
site-packages/fenic/api/functions/json.py
| true | false | 12 | 46 | null |
Column
|
[
"column",
"query"
] | null | null | null |
Type: function
Member Name: jq
Qualified Name: fenic.api.functions.json.jq
Docstring: Applies a JQ query to a column containing JSON-formatted strings.
Args:
column (ColumnOrName): Input column of type `JsonType`.
query (str): A [JQ](https://jqlang.org/) expression used to extract or transform values.
Returns:
Column: A column containing the result of applying the JQ query to each row's JSON input.
Notes:
- The input column *must* be of type `JsonType`. Use `cast(JsonType)` if needed to ensure correct typing.
- This function supports extracting nested fields, transforming arrays/objects, and other standard JQ operations.
Example: Extract nested field
```python
# Extract the "user.name" field from a JSON column
df.select(json.jq(col("json_col"), ".user.name"))
```
Example: Cast to JsonType before querying
```python
df.select(json.jq(col("raw_json").cast(JsonType), ".event.type"))
```
Example: Work with arrays
```python
# Work with arrays using JQ functions
df.select(json.jq(col("json_array"), "map(.id)"))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "query"]
Returns: Column
Parent Class: none
|
function
|
get_type
|
fenic.api.functions.json.get_type
|
Get the JSON type of each value.
Args:
column (ColumnOrName): Input column of type `JsonType`.
Returns:
Column: A column of strings indicating the JSON type
("string", "number", "boolean", "array", "object", "null").
Example: Get JSON types
```python
df.select(json.get_type(col("json_data")))
```
Example: Filter by type
```python
# Filter by type
df.filter(json.get_type(col("data")) == "array")
```
|
site-packages/fenic/api/functions/json.py
| true | false | 49 | 73 | null |
Column
|
[
"column"
] | null | null | null |
Type: function
Member Name: get_type
Qualified Name: fenic.api.functions.json.get_type
Docstring: Get the JSON type of each value.
Args:
column (ColumnOrName): Input column of type `JsonType`.
Returns:
Column: A column of strings indicating the JSON type
("string", "number", "boolean", "array", "object", "null").
Example: Get JSON types
```python
df.select(json.get_type(col("json_data")))
```
Example: Filter by type
```python
# Filter by type
df.filter(json.get_type(col("data")) == "array")
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column"]
Returns: Column
Parent Class: none
|
function
|
contains
|
fenic.api.functions.json.contains
|
Check if a JSON value contains the specified value using recursive deep search.
Args:
column (ColumnOrName): Input column of type `JsonType`.
value (str): Valid JSON string to search for.
Returns:
Column: A column of booleans indicating whether the JSON contains the value.
Matching Rules:
- **Objects**: Uses partial matching - `{"role": "admin"}` matches `{"role": "admin", "level": 5}`
- **Arrays**: Uses exact matching - `[1, 2]` only matches exactly `[1, 2]`, not `[1, 2, 3]`
- **Primitives**: Uses exact matching - `42` matches `42` but not `"42"`
- **Search is recursive**: Searches at all nesting levels throughout the JSON structure
- **Type-aware**: Distinguishes between `42` (number) and `"42"` (string)
Example: Find objects with partial structure match
```python
# Find objects with partial structure match (at any nesting level)
df.select(json.contains(col("json_data"), '{"name": "Alice"}'))
# Matches: {"name": "Alice", "age": 30} and {"user": {"name": "Alice"}}
```
Example: Find exact array match
```python
# Find exact array match (at any nesting level)
df.select(json.contains(col("json_data"), '["read", "write"]'))
# Matches: {"permissions": ["read", "write"]} but not ["read", "write", "admin"]
```
Example: Find exact primitive values
```python
# Find exact primitive values (at any nesting level)
df.select(json.contains(col("json_data"), '"admin"'))
# Matches: {"role": "admin"} and ["admin", "user"] but not {"role": "administrator"}
```
Example: Type distinction matters
```python
# Type distinction matters
df.select(json.contains(col("json_data"), '42')) # number 42
df.select(json.contains(col("json_data"), '"42"')) # string "42"
```
Raises:
ValidationError: If `value` is not valid JSON.
|
site-packages/fenic/api/functions/json.py
| true | false | 76 | 127 | null |
Column
|
[
"column",
"value"
] | null | null | null |
Type: function
Member Name: contains
Qualified Name: fenic.api.functions.json.contains
Docstring: Check if a JSON value contains the specified value using recursive deep search.
Args:
column (ColumnOrName): Input column of type `JsonType`.
value (str): Valid JSON string to search for.
Returns:
Column: A column of booleans indicating whether the JSON contains the value.
Matching Rules:
- **Objects**: Uses partial matching - `{"role": "admin"}` matches `{"role": "admin", "level": 5}`
- **Arrays**: Uses exact matching - `[1, 2]` only matches exactly `[1, 2]`, not `[1, 2, 3]`
- **Primitives**: Uses exact matching - `42` matches `42` but not `"42"`
- **Search is recursive**: Searches at all nesting levels throughout the JSON structure
- **Type-aware**: Distinguishes between `42` (number) and `"42"` (string)
Example: Find objects with partial structure match
```python
# Find objects with partial structure match (at any nesting level)
df.select(json.contains(col("json_data"), '{"name": "Alice"}'))
# Matches: {"name": "Alice", "age": 30} and {"user": {"name": "Alice"}}
```
Example: Find exact array match
```python
# Find exact array match (at any nesting level)
df.select(json.contains(col("json_data"), '["read", "write"]'))
# Matches: {"permissions": ["read", "write"]} but not ["read", "write", "admin"]
```
Example: Find exact primitive values
```python
# Find exact primitive values (at any nesting level)
df.select(json.contains(col("json_data"), '"admin"'))
# Matches: {"role": "admin"} and ["admin", "user"] but not {"role": "administrator"}
```
Example: Type distinction matters
```python
# Type distinction matters
df.select(json.contains(col("json_data"), '42')) # number 42
df.select(json.contains(col("json_data"), '"42"')) # string "42"
```
Raises:
ValidationError: If `value` is not valid JSON.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["column", "value"]
Returns: Column
Parent Class: none
|
module
|
session
|
fenic.api.session
|
Session module for managing query execution context and state.
|
site-packages/fenic/api/session/__init__.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: session
Qualified Name: fenic.api.session
Docstring: Session module for managing query execution context and state.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
__all__
|
fenic.api.session.__all__
| null |
site-packages/fenic/api/session/__init__.py
| false | false | 20 | 35 | null | null | null | null |
['Session', 'SessionConfig', 'SemanticConfig', 'OpenAILanguageModel', 'OpenAIEmbeddingModel', 'AnthropicLanguageModel', 'GoogleDeveloperEmbeddingModel', 'GoogleDeveloperLanguageModel', 'GoogleVertexEmbeddingModel', 'GoogleVertexLanguageModel', 'ModelConfig', 'CloudConfig', 'CloudExecutorSize', 'CohereEmbeddingModel']
| null |
Type: attribute
Member Name: __all__
Qualified Name: fenic.api.session.__all__
Docstring: none
Value: ['Session', 'SessionConfig', 'SemanticConfig', 'OpenAILanguageModel', 'OpenAIEmbeddingModel', 'AnthropicLanguageModel', 'GoogleDeveloperEmbeddingModel', 'GoogleDeveloperLanguageModel', 'GoogleVertexEmbeddingModel', 'GoogleVertexLanguageModel', 'ModelConfig', 'CloudConfig', 'CloudExecutorSize', 'CohereEmbeddingModel']
Annotation: none
is Public? : false
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
module
|
config
|
fenic.api.session.config
|
Session configuration classes for Fenic.
|
site-packages/fenic/api/session/config.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: config
Qualified Name: fenic.api.session.config
Docstring: Session configuration classes for Fenic.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
profiles_desc
|
fenic.api.session.config.profiles_desc
| null |
site-packages/fenic/api/session/config.py
| true | false | 45 | 48 | null | null | null | null |
'\n Allow the same model configuration to be used with different profiles, currently used to set thinking budget/reasoning effort\n for reasoning models. To use a profile of a given model alias in a semantic operator, reference the model as `ModelAlias(name="<model_alias>", profile="<profile_name>")`.\n '
| null |
Type: attribute
Member Name: profiles_desc
Qualified Name: fenic.api.session.config.profiles_desc
Docstring: none
Value: '\n Allow the same model configuration to be used with different profiles, currently used to set thinking budget/reasoning effort\n for reasoning models. To use a profile of a given model alias in a semantic operator, reference the model as `ModelAlias(name="<model_alias>", profile="<profile_name>")`.\n '
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
default_profiles_desc
|
fenic.api.session.config.default_profiles_desc
| null |
site-packages/fenic/api/session/config.py
| true | false | 50 | 52 | null | null | null | null |
'\n If profiles are configured, which should be used by default?\n '
| null |
Type: attribute
Member Name: default_profiles_desc
Qualified Name: fenic.api.session.config.default_profiles_desc
Docstring: none
Value: '\n If profiles are configured, which should be used by default?\n '
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
GoogleEmbeddingTaskType
|
fenic.api.session.config.GoogleEmbeddingTaskType
| null |
site-packages/fenic/api/session/config.py
| true | false | 54 | 63 | null | null | null | null |
Literal['SEMANTIC_SIMILARITY', 'CLASSIFICATION', 'CLUSTERING', 'RETRIEVAL_DOCUMENT', 'RETRIEVAL_QUERY', 'CODE_RETRIEVAL_QUERY', 'QUESTION_ANSWERING', 'FACT_VERIFICATION']
| null |
Type: attribute
Member Name: GoogleEmbeddingTaskType
Qualified Name: fenic.api.session.config.GoogleEmbeddingTaskType
Docstring: none
Value: Literal['SEMANTIC_SIMILARITY', 'CLASSIFICATION', 'CLUSTERING', 'RETRIEVAL_DOCUMENT', 'RETRIEVAL_QUERY', 'CODE_RETRIEVAL_QUERY', 'QUESTION_ANSWERING', 'FACT_VERIFICATION']
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
class
|
GoogleDeveloperEmbeddingModel
|
fenic.api.session.config.GoogleDeveloperEmbeddingModel
|
Configuration for Google Developer embedding models.
This class defines the configuration settings for Google embedding models available in Google Developer AI Studio,
including model selection and rate limiting parameters. These models are accessible using a GOOGLE_API_KEY environment variable.
Attributes:
model_name: The name of the Google Developer embedding model to use.
rpm: Requests per minute limit; must be greater than 0.
tpm: Tokens per minute limit; must be greater than 0.
profiles: Optional mapping of profile names to profile configurations.
default_profile: The name of the default profile to use if profiles are configured.
Example:
Configuring a Google Developer embedding model with rate limits:
```python
config = GoogleDeveloperEmbeddingModel(
model_name="gemini-embedding-001",
rpm=100,
tpm=1000
)
```
Configuring a Google Developer embedding model with profiles:
```python
config = GoogleDeveloperEmbeddingModel(
model_name="gemini-embedding-001",
rpm=100,
tpm=1000,
profiles={
"default": GoogleDeveloperEmbeddingModelConfig.Profile(),
"high_dim": GoogleDeveloperEmbeddingModelConfig.Profile(output_dimensionality=3072)
},
default_profile="default"
)
```
|
site-packages/fenic/api/session/config.py
| true | false | 65 | 138 | null | null | null | null | null |
[
"BaseModel"
] |
Type: class
Member Name: GoogleDeveloperEmbeddingModel
Qualified Name: fenic.api.session.config.GoogleDeveloperEmbeddingModel
Docstring: Configuration for Google Developer embedding models.
This class defines the configuration settings for Google embedding models available in Google Developer AI Studio,
including model selection and rate limiting parameters. These models are accessible using a GOOGLE_API_KEY environment variable.
Attributes:
model_name: The name of the Google Developer embedding model to use.
rpm: Requests per minute limit; must be greater than 0.
tpm: Tokens per minute limit; must be greater than 0.
profiles: Optional mapping of profile names to profile configurations.
default_profile: The name of the default profile to use if profiles are configured.
Example:
Configuring a Google Developer embedding model with rate limits:
```python
config = GoogleDeveloperEmbeddingModel(
model_name="gemini-embedding-001",
rpm=100,
tpm=1000
)
```
Configuring a Google Developer embedding model with profiles:
```python
config = GoogleDeveloperEmbeddingModel(
model_name="gemini-embedding-001",
rpm=100,
tpm=1000,
profiles={
"default": GoogleDeveloperEmbeddingModelConfig.Profile(),
"high_dim": GoogleDeveloperEmbeddingModelConfig.Profile(output_dimensionality=3072)
},
default_profile="default"
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
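A hedged sketch wiring this model into a session's semantic configuration; the `embedding_models` / `default_embedding_model` field names are assumed by analogy with the `language_models` / `default_language_model` fields documented in the language-model entries below:
```python
from fenic.api.session.config import GoogleDeveloperEmbeddingModel, SemanticConfig

# Assumed SemanticConfig fields: embedding_models / default_embedding_model.
semantic_config = SemanticConfig(
    embedding_models={
        "gemini_embed": GoogleDeveloperEmbeddingModel(
            model_name="gemini-embedding-001",
            rpm=100,
            tpm=1000,
        )
    },
    default_embedding_model="gemini_embed",
)
```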
class
|
GoogleDeveloperLanguageModel
|
fenic.api.session.config.GoogleDeveloperLanguageModel
|
Configuration for Gemini models accessible through Google Developer AI Studio.
This class defines the configuration settings for Google Gemini models available in Google Developer AI Studio,
including model selection and rate limiting parameters. These models are accessible using a GOOGLE_API_KEY environment variable.
Attributes:
model_name: The name of the Google Developer model to use.
rpm: Requests per minute limit; must be greater than 0.
tpm: Tokens per minute limit; must be greater than 0.
profiles: Optional mapping of profile names to profile configurations.
default_profile: The name of the default profile to use if profiles are configured.
Example:
Configuring a Google Developer model with rate limits:
```python
config = GoogleDeveloperLanguageModel(
model_name="gemini-2.0-flash",
rpm=100,
tpm=1000
)
```
Configuring a reasoning Google Developer model with profiles:
```python
config = GoogleDeveloperLanguageModel(
model_name="gemini-2.5-flash",
rpm=100,
tpm=1000,
profiles={
"thinking_disabled": GoogleDeveloperLanguageModel.Profile(),
"fast": GoogleDeveloperLanguageModel.Profile(thinking_token_budget=1024),
"thorough": GoogleDeveloperLanguageModel.Profile(thinking_token_budget=8192)
},
default_profile="fast"
)
```
|
site-packages/fenic/api/session/config.py
| true | false | 142 | 228 | null | null | null | null | null |
[
"BaseModel"
] |
Type: class
Member Name: GoogleDeveloperLanguageModel
Qualified Name: fenic.api.session.config.GoogleDeveloperLanguageModel
Docstring: Configuration for Gemini models accessible through Google Developer AI Studio.
This class defines the configuration settings for Google Gemini models available in Google Developer AI Studio,
including model selection and rate limiting parameters. These models are accessible using a GOOGLE_API_KEY environment variable.
Attributes:
model_name: The name of the Google Developer model to use.
rpm: Requests per minute limit; must be greater than 0.
tpm: Tokens per minute limit; must be greater than 0.
profiles: Optional mapping of profile names to profile configurations.
default_profile: The name of the default profile to use if profiles are configured.
Example:
Configuring a Google Developer model with rate limits:
```python
config = GoogleDeveloperLanguageModel(
model_name="gemini-2.0-flash",
rpm=100,
tpm=1000
)
```
Configuring a reasoning Google Developer model with profiles:
```python
config = GoogleDeveloperLanguageModel(
model_name="gemini-2.5-flash",
rpm=100,
tpm=1000,
profiles={
"thinking_disabled": GoogleDeveloperLanguageModel.Profile(),
"fast": GoogleDeveloperLanguageModel.Profile(thinking_token_budget=1024),
"thorough": GoogleDeveloperLanguageModel.Profile(thinking_token_budget=8192)
},
default_profile="fast"
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
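The per-call profile selection demonstrated for OpenAI and Anthropic models later in this file applies to Gemini models too; a short sketch, assuming a model configured as in the example above is registered in a SemanticConfig under the alias "gemini" and that `ModelAlias` is exported alongside these config classes:
```python
# Default profile: resolves to default_profile ("fast" in the example above).
semantic.map(
    instruction="Summarize the {document}.",
    model_alias="gemini",
)
# Explicit override: route this call to the "thorough" profile instead.
semantic.map(
    instruction="Summarize the {document}.",
    model_alias=ModelAlias(name="gemini", profile="thorough"),
)
```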
class
|
GoogleVertexEmbeddingModel
|
fenic.api.session.config.GoogleVertexEmbeddingModel
|
Configuration for Google Vertex AI embedding models.
This class defines the configuration settings for Google embedding models available in Google Vertex AI,
including model selection and rate limiting parameters. These models are accessible using Google Cloud credentials.
Attributes:
model_name: The name of the Google Vertex embedding model to use.
rpm: Requests per minute limit; must be greater than 0.
tpm: Tokens per minute limit; must be greater than 0.
profiles: Optional mapping of profile names to profile configurations.
default_profile: The name of the default profile to use if profiles are configured.
Example:
Configuring a Google Vertex embedding model with rate limits:
```python
embedding_model = GoogleVertexEmbeddingModel(
model_name="gemini-embedding-001",
rpm=100,
tpm=1000
)
```
Configuring a Google Vertex embedding model with profiles:
```python
embedding_model = GoogleVertexEmbeddingModel(
model_name="gemini-embedding-001",
rpm=100,
tpm=1000,
profiles={
"default": GoogleVertexEmbeddingModel.Profile(),
"high_dim": GoogleVertexEmbeddingModel.Profile(output_dimensionality=3072)
},
default_profile="default"
)
```
|
site-packages/fenic/api/session/config.py
| true | false | 230 | 304 | null | null | null | null | null |
[
"BaseModel"
] |
Type: class
Member Name: GoogleVertexEmbeddingModel
Qualified Name: fenic.api.session.config.GoogleVertexEmbeddingModel
Docstring: Configuration for Google Vertex AI embedding models.
This class defines the configuration settings for Google embedding models available in Google Vertex AI,
including model selection and rate limiting parameters. These models are accessible using Google Cloud credentials.
Attributes:
model_name: The name of the Google Vertex embedding model to use.
rpm: Requests per minute limit; must be greater than 0.
tpm: Tokens per minute limit; must be greater than 0.
profiles: Optional mapping of profile names to profile configurations.
default_profile: The name of the default profile to use if profiles are configured.
Example:
Configuring a Google Vertex embedding model with rate limits:
```python
embedding_model = GoogleVertexEmbeddingModel(
model_name="gemini-embedding-001",
rpm=100,
tpm=1000
)
```
Configuring a Google Vertex embedding model with profiles:
```python
embedding_model = GoogleVertexEmbeddingModel(
model_name="gemini-embedding-001",
rpm=100,
tpm=1000,
profiles={
"default": GoogleVertexEmbeddingModel.Profile(),
"high_dim": GoogleVertexEmbeddingModel.Profile(output_dimensionality=3072)
},
default_profile="default"
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
class
|
GoogleVertexLanguageModel
|
fenic.api.session.config.GoogleVertexLanguageModel
|
Configuration for Google Vertex AI models.
This class defines the configuration settings for Google Gemini models available in Google Vertex AI,
including model selection and rate limiting parameters. These models are accessible using Google Cloud credentials.
Attributes:
model_name: The name of the Google Vertex model to use.
rpm: Requests per minute limit; must be greater than 0.
tpm: Tokens per minute limit; must be greater than 0.
profiles: Optional mapping of profile names to profile configurations.
default_profile: The name of the default profile to use if profiles are configured.
Example:
Configuring a Google Vertex model with rate limits:
```python
config = GoogleVertexLanguageModel(
model_name="gemini-2.0-flash",
rpm=100,
tpm=1000
)
```
Configuring a reasoning Google Vertex model with profiles:
```python
config = GoogleVertexLanguageModel(
model_name="gemini-2.5-flash",
rpm=100,
tpm=1000,
profiles={
"thinking_disabled": GoogleVertexLanguageModel.Profile(),
"fast": GoogleVertexLanguageModel.Profile(thinking_token_budget=1024),
"thorough": GoogleVertexLanguageModel.Profile(thinking_token_budget=8192)
},
default_profile="fast"
)
```
|
site-packages/fenic/api/session/config.py
| true | false | 306 | 392 | null | null | null | null | null |
[
"BaseModel"
] |
Type: class
Member Name: GoogleVertexLanguageModel
Qualified Name: fenic.api.session.config.GoogleVertexLanguageModel
Docstring: Configuration for Google Vertex AI models.
This class defines the configuration settings for Google Gemini models available in Google Vertex AI,
including model selection and rate limiting parameters. These models are accessible using Google Cloud credentials.
Attributes:
model_name: The name of the Google Vertex model to use.
rpm: Requests per minute limit; must be greater than 0.
tpm: Tokens per minute limit; must be greater than 0.
profiles: Optional mapping of profile names to profile configurations.
default_profile: The name of the default profile to use if profiles are configured.
Example:
Configuring a Google Vertex model with rate limits:
```python
config = GoogleVertexLanguageModel(
model_name="gemini-2.0-flash",
rpm=100,
tpm=1000
)
```
Configuring a reasoning Google Vertex model with profiles:
```python
config = GoogleVertexLanguageModel(
model_name="gemini-2.5-flash",
rpm=100,
tpm=1000,
profiles={
"thinking_disabled": GoogleVertexLanguageModel.Profile(),
"fast": GoogleVertexLanguageModel.Profile(thinking_token_budget=1024),
"thorough": GoogleVertexLanguageModel.Profile(thinking_token_budget=8192)
},
default_profile="fast"
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
class
|
OpenAILanguageModel
|
fenic.api.session.config.OpenAILanguageModel
|
Configuration for OpenAI language models.
This class defines the configuration settings for OpenAI language models,
including model selection and rate limiting parameters.
Attributes:
model_name: The name of the OpenAI model to use.
rpm: Requests per minute limit; must be greater than 0.
tpm: Tokens per minute limit; must be greater than 0.
profiles: Optional mapping of profile names to profile configurations.
default_profile: The name of the default profile to use if profiles are configured.
Note:
When using an o-series or GPT-5 reasoning model without specifying a reasoning effort in
a Profile, the `reasoning_effort` will default to `low` (for o-series models) or `minimal`
(for GPT-5 models).
Example:
Configuring an OpenAI language model with rate limits:
```python
config = OpenAILanguageModel(
model_name="gpt-4.1-nano",
rpm=100,
tpm=100
)
```
Configuring an OpenAI model with profiles:
```python
config = OpenAILanguageModel(
model_name="o4-mini",
rpm=100,
tpm=100,
profiles={
"fast": OpenAILanguageModel.Profile(reasoning_effort="low"),
"thorough": OpenAILanguageModel.Profile(reasoning_effort="high")
},
default_profile="fast"
)
```
Using a profile in a semantic operation:
```python
config = SemanticConfig(
language_models={
"o4": OpenAILanguageModel(
model_name="o4-mini",
rpm=1_000,
tpm=1_000_000,
profiles={
"fast": OpenAILanguageModel.Profile(reasoning_effort="low"),
"thorough": OpenAILanguageModel.Profile(reasoning_effort="high")
},
default_profile="fast"
)
},
default_language_model="o4"
)
# Will use the default "fast" profile for the "o4" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias="o4")
# Will use the "thorough" profile for the "o4" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias=ModelAlias(name="o4", profile="thorough"))
```
|
site-packages/fenic/api/session/config.py
| true | false | 394 | 502 | null | null | null | null | null |
[
"BaseModel"
] |
Type: class
Member Name: OpenAILanguageModel
Qualified Name: fenic.api.session.config.OpenAILanguageModel
Docstring: Configuration for OpenAI language models.
This class defines the configuration settings for OpenAI language models,
including model selection and rate limiting parameters.
Attributes:
model_name: The name of the OpenAI model to use.
rpm: Requests per minute limit; must be greater than 0.
tpm: Tokens per minute limit; must be greater than 0.
profiles: Optional mapping of profile names to profile configurations.
default_profile: The name of the default profile to use if profiles are configured.
Note:
When using an o-series or GPT-5 reasoning model without specifying a reasoning effort in
a Profile, the `reasoning_effort` will default to `low` (for o-series models) or `minimal`
(for GPT-5 models).
Example:
Configuring an OpenAI language model with rate limits:
```python
config = OpenAILanguageModel(
model_name="gpt-4.1-nano",
rpm=100,
tpm=100
)
```
Configuring an OpenAI model with profiles:
```python
config = OpenAILanguageModel(
model_name="o4-mini",
rpm=100,
tpm=100,
profiles={
"fast": OpenAILanguageModel.Profile(reasoning_effort="low"),
"thorough": OpenAILanguageModel.Profile(reasoning_effort="high")
},
default_profile="fast"
)
```
Using a profile in a semantic operation:
```python
config = SemanticConfig(
language_models={
"o4": OpenAILanguageModel(
model_name="o4-mini",
rpm=1_000,
tpm=1_000_000,
profiles={
"fast": OpenAILanguageModel.Profile(reasoning_effort="low"),
"thorough": OpenAILanguageModel.Profile(reasoning_effort="high")
},
default_profile="fast"
)
},
default_language_model="o4"
)
# Will use the default "fast" profile for the "o4" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias="o4")
# Will use the "thorough" profile for the "o4" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias=ModelAlias(name="o4", profile="thorough"))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
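Per the note above, a reasoning model configured without profiles still receives a default effort; a minimal sketch relying on that documented behavior:
```python
# No profiles configured: per the note above, an o-series model defaults to
# reasoning_effort="low" (a GPT-5 model would default to "minimal").
config = OpenAILanguageModel(
    model_name="o4-mini",
    rpm=100,
    tpm=100,
)
```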
class
|
OpenAIEmbeddingModel
|
fenic.api.session.config.OpenAIEmbeddingModel
|
Configuration for OpenAI embedding models.
This class defines the configuration settings for OpenAI embedding models,
including model selection and rate limiting parameters.
Attributes:
model_name: The name of the OpenAI embedding model to use.
rpm: Requests per minute limit; must be greater than 0.
tpm: Tokens per minute limit; must be greater than 0.
Example:
Configuring an OpenAI embedding model with rate limits:
```python
config = OpenAIEmbeddingModel(
model_name="text-embedding-3-small",
rpm=100,
tpm=100
)
```
|
site-packages/fenic/api/session/config.py
| true | false | 505 | 529 | null | null | null | null | null |
[
"BaseModel"
] |
Type: class
Member Name: OpenAIEmbeddingModel
Qualified Name: fenic.api.session.config.OpenAIEmbeddingModel
Docstring: Configuration for OpenAI embedding models.
This class defines the configuration settings for OpenAI embedding models,
including model selection and rate limiting parameters.
Attributes:
model_name: The name of the OpenAI embedding model to use.
rpm: Requests per minute limit; must be greater than 0.
tpm: Tokens per minute limit; must be greater than 0.
Example:
Configuring an OpenAI embedding model with rate limits:
```python
config = OpenAIEmbeddingModel(
model_name="text-embedding-3-small",
rpm=100,
tpm=100
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
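A sketch pairing this embedding model with a language model in one SemanticConfig; as above, the `embedding_models` / `default_embedding_model` field names are assumed by analogy with the documented `language_models` fields:
```python
semantic_config = SemanticConfig(
    language_models={
        "nano": OpenAILanguageModel(model_name="gpt-4.1-nano", rpm=100, tpm=100),
    },
    default_language_model="nano",
    embedding_models={
        "small": OpenAIEmbeddingModel(
            model_name="text-embedding-3-small",
            rpm=100,
            tpm=100,
        ),
    },
    default_embedding_model="small",
)
```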
class
|
AnthropicLanguageModel
|
fenic.api.session.config.AnthropicLanguageModel
|
Configuration for Anthropic language models.
This class defines the configuration settings for Anthropic language models,
including model selection and separate rate limiting parameters for input and output tokens.
Attributes:
model_name: The name of the Anthropic model to use.
rpm: Requests per minute limit; must be greater than 0.
input_tpm: Input tokens per minute limit; must be greater than 0.
output_tpm: Output tokens per minute limit; must be greater than 0.
profiles: Optional mapping of profile names to profile configurations.
default_profile: The name of the default profile to use if profiles are configured.
Example:
Configuring an Anthropic model with separate input/output rate limits:
```python
config = AnthropicLanguageModel(
model_name="claude-3-5-haiku-latest",
rpm=100,
input_tpm=100,
output_tpm=100
)
```
Configuring an Anthropic model with profiles:
```python
config = SessionConfig(
semantic=SemanticConfig(
language_models={
"claude": AnthropicLanguageModel(
model_name="claude-opus-4-0",
rpm=100,
input_tpm=100,
output_tpm=100,
profiles={
"thinking_disabled": AnthropicLanguageModel.Profile(),
"fast": AnthropicLanguageModel.Profile(thinking_token_budget=1024),
"thorough": AnthropicLanguageModel.Profile(thinking_token_budget=4096)
},
default_profile="fast"
)
},
default_language_model="claude"
)
)
# Using the default "fast" profile for the "claude" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias="claude")
# Using the "thorough" profile for the "claude" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias=ModelAlias(name="claude", profile="thorough"))
```
|
site-packages/fenic/api/session/config.py
| true | false | 532 | 629 | null | null | null | null | null |
[
"BaseModel"
] |
Type: class
Member Name: AnthropicLanguageModel
Qualified Name: fenic.api.session.config.AnthropicLanguageModel
Docstring: Configuration for Anthropic language models.
This class defines the configuration settings for Anthropic language models,
including model selection and separate rate limiting parameters for input and output tokens.
Attributes:
model_name: The name of the Anthropic model to use.
rpm: Requests per minute limit; must be greater than 0.
input_tpm: Input tokens per minute limit; must be greater than 0.
output_tpm: Output tokens per minute limit; must be greater than 0.
profiles: Optional mapping of profile names to profile configurations.
default_profile: The name of the default profile to use if profiles are configured.
Example:
Configuring an Anthropic model with separate input/output rate limits:
```python
config = AnthropicLanguageModel(
model_name="claude-3-5-haiku-latest",
rpm=100,
input_tpm=100,
output_tpm=100
)
```
Configuring an Anthropic model with profiles:
```python
config = SessionConfig(
semantic=SemanticConfig(
language_models={
"claude": AnthropicLanguageModel(
model_name="claude-opus-4-0",
rpm=100,
input_tpm=100,
output_tpm=100,
profiles={
"thinking_disabled": AnthropicLanguageModel.Profile(),
"fast": AnthropicLanguageModel.Profile(thinking_token_budget=1024),
"thorough": AnthropicLanguageModel.Profile(thinking_token_budget=4096)
},
default_profile="fast"
)
},
default_language_model="claude"
)
)
# Using the default "fast" profile for the "claude" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias="claude")
# Using the "thorough" profile for the "claude" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias=ModelAlias(name="claude", profile="thorough"))
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
CohereEmbeddingTaskType
|
fenic.api.session.config.CohereEmbeddingTaskType
| null |
site-packages/fenic/api/session/config.py
| true | false | 631 | 636 | null | null | null | null |
Literal['search_document', 'search_query', 'classification', 'clustering']
| null |
Type: attribute
Member Name: CohereEmbeddingTaskType
Qualified Name: fenic.api.session.config.CohereEmbeddingTaskType
Docstring: none
Value: Literal['search_document', 'search_query', 'classification', 'clustering']
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
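These task types support asymmetric retrieval, where corpus documents and user queries are embedded differently; a sketch using the `embedding_task_type` profile field demonstrated in the CohereEmbeddingModel entry directly below:
```python
cohere_config = CohereEmbeddingModel(
    model_name="embed-v4.0",
    rpm=100,
    tpm=50_000,
    profiles={
        # Embed corpus documents with one task type, user queries with another.
        "index": CohereEmbeddingModel.Profile(embedding_task_type="search_document"),
        "query": CohereEmbeddingModel.Profile(embedding_task_type="search_query"),
    },
    default_profile="index",
)
```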
class
|
CohereEmbeddingModel
|
fenic.api.session.config.CohereEmbeddingModel
|
Configuration for Cohere embedding models.
This class defines the configuration settings for Cohere embedding models,
including model selection and rate limiting parameters.
Attributes:
model_name: The name of the Cohere model to use.
rpm: Requests per minute limit for the model.
tpm: Tokens per minute limit for the model.
profiles: Optional dictionary of profile configurations.
default_profile: Default profile name to use if none specified.
Example:
Configuring a Cohere embedding model with profiles:
```python
cohere_config = CohereEmbeddingModel(
model_name="embed-v4.0",
rpm=100,
tpm=50_000,
profiles={
"high_dim": CohereEmbeddingModel.Profile(
embedding_dimensionality=1536,
embedding_task_type="search_document"
),
"classification": CohereEmbeddingModel.Profile(
embedding_dimensionality=1024,
embedding_task_type="classification"
)
},
default_profile="high_dim"
)
```
|
site-packages/fenic/api/session/config.py
| true | false | 638 | 707 | null | null | null | null | null |
[
"BaseModel"
] |
Type: class
Member Name: CohereEmbeddingModel
Qualified Name: fenic.api.session.config.CohereEmbeddingModel
Docstring: Configuration for Cohere embedding models.
This class defines the configuration settings for Cohere embedding models,
including model selection and rate limiting parameters.
Attributes:
model_name: The name of the Cohere model to use.
rpm: Requests per minute limit for the model.
tpm: Tokens per minute limit for the model.
profiles: Optional dictionary of profile configurations.
default_profile: Default profile name to use if none specified.
Example:
Configuring a Cohere embedding model with profiles:
```python
cohere_config = CohereEmbeddingModel(
model_name="embed-v4.0",
rpm=100,
tpm=50_000,
profiles={
"high_dim": CohereEmbeddingModel.Profile(
embedding_dimensionality=1536,
embedding_task_type="search_document"
),
"classification": CohereEmbeddingModel.Profile(
embedding_dimensionality=1024,
embedding_task_type="classification"
)
},
default_profile="high_dim"
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
EmbeddingModel
|
fenic.api.session.config.EmbeddingModel
| null |
site-packages/fenic/api/session/config.py
| true | false | 709 | 709 | null | null | null | null |
Union[OpenAIEmbeddingModel, GoogleVertexEmbeddingModel, GoogleDeveloperEmbeddingModel, CohereEmbeddingModel]
| null |
Type: attribute
Member Name: EmbeddingModel
Qualified Name: fenic.api.session.config.EmbeddingModel
Docstring: none
Value: Union[OpenAIEmbeddingModel, GoogleVertexEmbeddingModel, GoogleDeveloperEmbeddingModel, CohereEmbeddingModel]
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
LanguageModel
|
fenic.api.session.config.LanguageModel
| null |
site-packages/fenic/api/session/config.py
| true | false | 710 | 710 | null | null | null | null |
Union[OpenAILanguageModel, AnthropicLanguageModel, GoogleDeveloperLanguageModel, GoogleVertexLanguageModel]
| null |
Type: attribute
Member Name: LanguageModel
Qualified Name: fenic.api.session.config.LanguageModel
Docstring: none
Value: Union[OpenAILanguageModel, AnthropicLanguageModel, GoogleDeveloperLanguageModel, GoogleVertexLanguageModel]
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
ModelConfig
|
fenic.api.session.config.ModelConfig
| null |
site-packages/fenic/api/session/config.py
| true | false | 711 | 711 | null | null | null | null |
Union[EmbeddingModel, LanguageModel]
| null |
Type: attribute
Member Name: ModelConfig
Qualified Name: fenic.api.session.config.ModelConfig
Docstring: none
Value: Union[EmbeddingModel, LanguageModel]
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
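These three aliases are plain `typing.Union`s, so they can annotate helper code that accepts any provider's configuration; a minimal sketch (the `describe` helper is illustrative, not part of fenic):
```python
from fenic.api.session.config import (
    AnthropicLanguageModel,
    CohereEmbeddingModel,
    ModelConfig,
)

def describe(model: ModelConfig) -> str:
    # Every config class documented above exposes a model_name field.
    return f"{type(model).__name__}: {model.model_name}"

configs: list[ModelConfig] = [
    AnthropicLanguageModel(
        model_name="claude-3-5-haiku-latest", rpm=100, input_tpm=100, output_tpm=100
    ),
    CohereEmbeddingModel(model_name="embed-v4.0", rpm=100, tpm=50_000),
]
for cfg in configs:
    print(describe(cfg))
```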