type | name | qualified_name | docstring | filepath | is_public | is_private | line_start | line_end | annotation | returns | parameters | parent_class | value | bases | api_element_summary
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
method | __rand__ | fenic.api.column.Column.__rand__ | Reverse logical AND operation. | site-packages/fenic/api/column.py | true | false | 807 | 809 | null | Column | ["self", "other"] | Column | null | null |
Type: method
Member Name: __rand__
Qualified Name: fenic.api.column.Column.__rand__
Docstring: Reverse logical AND operation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other"]
Returns: Column
Parent Class: Column
|
method | __or__ | fenic.api.column.Column.__or__ | Logical OR operation. | site-packages/fenic/api/column.py | true | false | 811 | 813 | null | Column | ["self", "other"] | Column | null | null |
Type: method
Member Name: __or__
Qualified Name: fenic.api.column.Column.__or__
Docstring: Logical OR operation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other"]
Returns: Column
Parent Class: Column
|
method | __ror__ | fenic.api.column.Column.__ror__ | Reverse logical OR operation. | site-packages/fenic/api/column.py | true | false | 815 | 817 | null | Column | ["self", "other"] | Column | null | null |
Type: method
Member Name: __ror__
Qualified Name: fenic.api.column.Column.__ror__
Docstring: Reverse logical OR operation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other"]
Returns: Column
Parent Class: Column
|
method | __add__ | fenic.api.column.Column.__add__ | Addition operation. | site-packages/fenic/api/column.py | true | false | 819 | 821 | null | Column | ["self", "other"] | Column | null | null |
Type: method
Member Name: __add__
Qualified Name: fenic.api.column.Column.__add__
Docstring: Addition operation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other"]
Returns: Column
Parent Class: Column
|
method | __radd__ | fenic.api.column.Column.__radd__ | Reverse addition operation. | site-packages/fenic/api/column.py | true | false | 823 | 827 | null | Column | ["self", "other"] | Column | null | null |
Type: method
Member Name: __radd__
Qualified Name: fenic.api.column.Column.__radd__
Docstring: Reverse addition operation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other"]
Returns: Column
Parent Class: Column
|
method | __sub__ | fenic.api.column.Column.__sub__ | Subtraction operation. | site-packages/fenic/api/column.py | true | false | 829 | 831 | null | Column | ["self", "other"] | Column | null | null |
Type: method
Member Name: __sub__
Qualified Name: fenic.api.column.Column.__sub__
Docstring: Subtraction operation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other"]
Returns: Column
Parent Class: Column
|
method | __rsub__ | fenic.api.column.Column.__rsub__ | Reverse subtraction operation. | site-packages/fenic/api/column.py | true | false | 833 | 837 | null | Column | ["self", "other"] | Column | null | null |
Type: method
Member Name: __rsub__
Qualified Name: fenic.api.column.Column.__rsub__
Docstring: Reverse subtraction operation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other"]
Returns: Column
Parent Class: Column
|
method | __mul__ | fenic.api.column.Column.__mul__ | Multiplication operation. | site-packages/fenic/api/column.py | true | false | 839 | 841 | null | Column | ["self", "other"] | Column | null | null |
Type: method
Member Name: __mul__
Qualified Name: fenic.api.column.Column.__mul__
Docstring: Multiplication operation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other"]
Returns: Column
Parent Class: Column
|
method | __rmul__ | fenic.api.column.Column.__rmul__ | Reverse multiplication operation. | site-packages/fenic/api/column.py | true | false | 843 | 845 | null | Column | ["self", "other"] | Column | null | null |
Type: method
Member Name: __rmul__
Qualified Name: fenic.api.column.Column.__rmul__
Docstring: Reverse multiplication operation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other"]
Returns: Column
Parent Class: Column
|
method | __truediv__ | fenic.api.column.Column.__truediv__ | Division operation. | site-packages/fenic/api/column.py | true | false | 847 | 849 | null | Column | ["self", "other"] | Column | null | null |
Type: method
Member Name: __truediv__
Qualified Name: fenic.api.column.Column.__truediv__
Docstring: Division operation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other"]
Returns: Column
Parent Class: Column
|
method | __rtruediv__ | fenic.api.column.Column.__rtruediv__ | Reverse division operation. | site-packages/fenic/api/column.py | true | false | 851 | 855 | null | Column | ["self", "other"] | Column | null | null |
Type: method
Member Name: __rtruediv__
Qualified Name: fenic.api.column.Column.__rtruediv__
Docstring: Reverse division operation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other"]
Returns: Column
Parent Class: Column
|
method | __bool__ | fenic.api.column.Column.__bool__ | Prevent boolean conversion of Column objects. | site-packages/fenic/api/column.py | true | false | 857 | 861 | null | null | ["self"] | Column | null | null |
Type: method
Member Name: __bool__
Qualified Name: fenic.api.column.Column.__bool__
Docstring: Prevent boolean conversion of Column objects.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: none
Parent Class: Column
|
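The operator methods above are what make Column expressions composable with plain Python syntax. A minimal sketch of how they combine, assuming a `session` exists as in the DataFrame examples later in this reference (the `col` import path is an assumption, not documented in this section); note that `__bool__` raising is why `&` and `|` must be used instead of Python's `and`/`or`:
```python
# Sketch only: the `col` import path is assumed.
from fenic import col

df = session.create_dataframe({"age": [25, 30], "bonus": [5, 10]})

# __add__ / __radd__: literals combine with Columns on either side.
df.select(col("age") + 1, 100 - col("bonus")).show()

# __and__ / __rand__ build a lazy boolean Column, so use `&`, never `and`.
df.filter((col("age") > 25) & (col("bonus") >= 10)).show()

# __bool__ prevents truth-testing a lazy expression:
# `if col("age") > 25: ...` raises instead of silently evaluating.
```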
attribute | ColumnOrName | fenic.api.column.ColumnOrName | null | site-packages/fenic/api/column.py | true | false | 863 | 863 | null | null | null | null | Union[Column, str] | null |
Type: attribute
Member Name: ColumnOrName
Qualified Name: fenic.api.column.ColumnOrName
Docstring: none
Value: Union[Column, str]
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
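`ColumnOrName` (`Union[Column, str]`) is the alias that lets API parameters accept either a `Column` expression or a bare column name. A hypothetical helper illustrating the normalization such an alias implies; `_to_column` is not part of the API:
```python
# Illustrative sketch only; `_to_column` and the `col` constructor are assumptions.
from typing import Union
from fenic.api.column import Column  # qualified name per this reference

def _to_column(c: Union[Column, str]) -> Column:
    # Bare strings are treated as column names; Column expressions pass through.
    return col(c) if isinstance(c, str) else c
```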
module | dataframe | fenic.api.dataframe | DataFrame API for Fenic - provides DataFrame and grouped data operations. | site-packages/fenic/api/dataframe/__init__.py | true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: dataframe
Qualified Name: fenic.api.dataframe
Docstring: DataFrame API for Fenic - provides DataFrame and grouped data operations.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute | __all__ | fenic.api.dataframe.__all__ | null | site-packages/fenic/api/dataframe/__init__.py | false | false | 9 | 9 | null | null | null | null | ['DataFrame', 'GroupedData', 'SemanticExtensions'] | null |
Type: attribute
Member Name: __all__
Qualified Name: fenic.api.dataframe.__all__
Docstring: none
Value: ['DataFrame', 'GroupedData', 'SemanticExtensions']
Annotation: none
is Public? : false
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
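Per the `__all__` value above, the package re-exports exactly these names:
```python
# The public surface of fenic.api.dataframe, as listed in __all__.
from fenic.api.dataframe import DataFrame, GroupedData, SemanticExtensions
```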
module | _base_grouped_data | fenic.api.dataframe._base_grouped_data | null | site-packages/fenic/api/dataframe/_base_grouped_data.py | false | true | null | null | null | null | null | null | null | null |
Type: module
Member Name: _base_grouped_data
Qualified Name: fenic.api.dataframe._base_grouped_data
Docstring: none
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: none
Returns: none
Parent Class: none
|
class | BaseGroupedData | fenic.api.dataframe._base_grouped_data.BaseGroupedData | Base class for aggregation methods shared between GroupedData and SemanticallyGroupedData. | site-packages/fenic/api/dataframe/_base_grouped_data.py | true | false | 14 | 83 | null | null | null | null | null | [] |
Type: class
Member Name: BaseGroupedData
Qualified Name: fenic.api.dataframe._base_grouped_data.BaseGroupedData
Docstring: Base class for aggregation methods shared between GroupedData and SemanticallyGroupedData.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
method | __init__ | fenic.api.dataframe._base_grouped_data.BaseGroupedData.__init__ | null | site-packages/fenic/api/dataframe/_base_grouped_data.py | true | false | 17 | 18 | null | null | ["self", "df"] | BaseGroupedData | null | null |
Type: method
Member Name: __init__
Qualified Name: fenic.api.dataframe._base_grouped_data.BaseGroupedData.__init__
Docstring: none
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "df"]
Returns: none
Parent Class: BaseGroupedData
|
method | _process_agg_dict | fenic.api.dataframe._base_grouped_data.BaseGroupedData._process_agg_dict | Process dictionary-style aggregation specifications. | site-packages/fenic/api/dataframe/_base_grouped_data.py | false | true | 20 | 40 | null | List[Column] | ["self", "agg_dict"] | BaseGroupedData | null | null |
Type: method
Member Name: _process_agg_dict
Qualified Name: fenic.api.dataframe._base_grouped_data.BaseGroupedData._process_agg_dict
Docstring: Process dictionary-style aggregation specifications.
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["self", "agg_dict"]
Returns: List[Column]
Parent Class: BaseGroupedData
|
method | _process_agg_exprs | fenic.api.dataframe._base_grouped_data.BaseGroupedData._process_agg_exprs | Process Column-style aggregation expressions. | site-packages/fenic/api/dataframe/_base_grouped_data.py | false | true | 42 | 59 | null | List[AliasExpr] | ["self", "cols"] | BaseGroupedData | null | null |
Type: method
Member Name: _process_agg_exprs
Qualified Name: fenic.api.dataframe._base_grouped_data.BaseGroupedData._process_agg_exprs
Docstring: Process Column-style aggregation expressions.
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["self", "cols"]
Returns: List[AliasExpr]
Parent Class: BaseGroupedData
|
method | _validate_agg_exprs | fenic.api.dataframe._base_grouped_data.BaseGroupedData._validate_agg_exprs | Validate aggregation expressions. | site-packages/fenic/api/dataframe/_base_grouped_data.py | false | true | 61 | 83 | null | None | ["self", "exprs"] | BaseGroupedData | null | null |
Type: method
Member Name: _validate_agg_exprs
Qualified Name: fenic.api.dataframe._base_grouped_data.BaseGroupedData._validate_agg_exprs
Docstring: Validate aggregation expressions.
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["self", "exprs"]
Returns: None
Parent Class: BaseGroupedData
|
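`_process_agg_dict` and `_process_agg_exprs` handle the two aggregation spellings accepted by grouped data. A hedged sketch of both, assuming a PySpark-style `agg()` method and an `avg` function, neither of whose signatures appears in this section:
```python
# Assumed API: `agg` and `avg` are illustrative, inferred from the method names above.
grouped = df.group_by("department")

# Dictionary-style specification, routed through _process_agg_dict:
grouped.agg({"salary": "avg"})

# Column-style expressions, routed through _process_agg_exprs:
grouped.agg(avg(col("salary")))
```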
module | dataframe | fenic.api.dataframe.dataframe | DataFrame class providing PySpark-inspired API for data manipulation. | site-packages/fenic/api/dataframe/dataframe.py | true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: dataframe
Qualified Name: fenic.api.dataframe.dataframe
Docstring: DataFrame class providing PySpark-inspired API for data manipulation.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute | logger | fenic.api.dataframe.dataframe.logger | null | site-packages/fenic/api/dataframe/dataframe.py | true | false | 54 | 54 | null | null | null | null | logging.getLogger(__name__) | null |
Type: attribute
Member Name: logger
Qualified Name: fenic.api.dataframe.dataframe.logger
Docstring: none
Value: logging.getLogger(__name__)
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
class | DataFrame | fenic.api.dataframe.dataframe.DataFrame | A data collection organized into named columns. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 57 | 1560 | null | null | null | null | null | [] |
Type: class
Member Name: DataFrame
Qualified Name: fenic.api.dataframe.dataframe.DataFrame
Docstring: A data collection organized into named columns.
The DataFrame class represents a lazily evaluated computation on data. Operations on
DataFrame build up a logical query plan that is only executed when an action like
show(), to_polars(), to_pandas(), to_arrow(), to_pydict(), to_pylist(), or count() is called.
The DataFrame supports method chaining for building complex transformations.
Example: Create and transform a DataFrame
```python
# Create a DataFrame from a dictionary
df = session.create_dataframe({"id": [1, 2, 3], "value": ["a", "b", "c"]})
# Chain transformations
result = df.filter(col("id") > 1).select("id", "value")
# Show results
result.show()
# Output:
# +---+-----+
# | id|value|
# +---+-----+
# |  2|    b|
# |  3|    c|
# +---+-----+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
method | __new__ | fenic.api.dataframe.dataframe.DataFrame.__new__ | Prevent direct DataFrame construction. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 89 | 98 | null | null | ["cls"] | DataFrame | null | null |
Type: method
Member Name: __new__
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.__new__
Docstring: Prevent direct DataFrame construction.
DataFrames must be created through Session.create_dataframe().
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["cls"]
Returns: none
Parent Class: DataFrame
|
method | _from_logical_plan | fenic.api.dataframe.dataframe.DataFrame._from_logical_plan | Factory method to create DataFrame instances. | site-packages/fenic/api/dataframe/dataframe.py | false | true | 100 | 123 | null | DataFrame | ["cls", "logical_plan", "session_state"] | DataFrame | null | null |
Type: method
Member Name: _from_logical_plan
Qualified Name: fenic.api.dataframe.dataframe.DataFrame._from_logical_plan
Docstring: Factory method to create DataFrame instances.
This method is intended for internal use by the Session class and other
DataFrame methods that need to create new DataFrame instances.
Args:
logical_plan: The logical plan for this DataFrame
session_state: The session state for this DataFrame
Returns:
A new DataFrame instance
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["cls", "logical_plan", "session_state"]
Returns: DataFrame
Parent Class: DataFrame
|
method | __getitem__ | fenic.api.dataframe.dataframe.DataFrame.__getitem__ | Enable DataFrame[column_name] syntax for column access. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 168 | 196 | null | Column | ["self", "col_name"] | DataFrame | null | null |
Type: method
Member Name: __getitem__
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.__getitem__
Docstring: Enable DataFrame[column_name] syntax for column access.
Args:
col_name: Name of the column to access
Returns:
Column: Column object for the specified column
Raises:
TypeError: If col_name is not a string
Examples:
>>> df["age"] # Returns Column object for "age"
>>> df.filter(df["age"] > 25) # Use in expressions
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "col_name"]
Returns: Column
Parent Class: DataFrame
|
method | __getattr__ | fenic.api.dataframe.dataframe.DataFrame.__getattr__ | Enable DataFrame.column_name syntax for column access. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 198 | 225 | null | Column | ["self", "col_name"] | DataFrame | null | null |
Type: method
Member Name: __getattr__
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.__getattr__
Docstring: Enable DataFrame.column_name syntax for column access.
Args:
col_name: Name of the column to access
Returns:
Column: Column object for the specified column
Raises:
TypeError: If col_name is not a string
Examples:
>>> df.age # Returns Column object for "age"
>>> df.filter(col("age") > 25) # Use in expressions
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "col_name"]
Returns: Column
Parent Class: DataFrame
|
method | explain | fenic.api.dataframe.dataframe.DataFrame.explain | Display the logical plan of the DataFrame. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 227 | 229 | null | None | ["self"] | DataFrame | null | null |
Type: method
Member Name: explain
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.explain
Docstring: Display the logical plan of the DataFrame.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: None
Parent Class: DataFrame
|
method | show | fenic.api.dataframe.dataframe.DataFrame.show | Display the DataFrame content in a tabular form. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 231 | 245 | null | None | ["self", "n", "explain_analyze"] | DataFrame | null | null |
Type: method
Member Name: show
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.show
Docstring: Display the DataFrame content in a tabular form.
This is an action that triggers computation of the DataFrame.
The output is printed to stdout in a formatted table.
Args:
n: Number of rows to display
explain_analyze: Whether to print the explain analyze plan
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "n", "explain_analyze"]
Returns: None
Parent Class: DataFrame
|
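A short sketch of the two inspection actions above, using only the documented parameters:
```python
# Print the logical plan without executing the query.
df.explain()

# Execute the plan, print the first 5 rows, and include the explain-analyze plan.
df.show(n=5, explain_analyze=True)
```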
method | collect | fenic.api.dataframe.dataframe.DataFrame.collect | Execute the DataFrame computation and return the result as a QueryResult. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 247 | 275 | null | QueryResult | ["self", "data_type"] | DataFrame | null | null |
Type: method
Member Name: collect
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.collect
Docstring: Execute the DataFrame computation and return the result as a QueryResult.
This is an action that triggers computation of the DataFrame query plan.
All transformations and operations are executed, and the results are
materialized into a QueryResult, which contains both the result data and the query metrics.
Args:
data_type: The type of data to return
Returns:
QueryResult: A QueryResult with materialized data and query metrics
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "data_type"]
Returns: QueryResult
Parent Class: DataFrame
|
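A hedged sketch of `collect()`: the docstring states that `QueryResult` carries both the materialized data and the query metrics, but the attribute names below (`data`, `metrics`) are assumptions, not documented in this section:
```python
# Attribute names are hypothetical; only the QueryResult shape is documented.
result = df.collect()    # triggers execution of the logical plan
rows = result.data       # hypothetical: the materialized result data
print(result.metrics)    # hypothetical: execution metrics for the query
```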
method | to_polars | fenic.api.dataframe.dataframe.DataFrame.to_polars | Execute the DataFrame computation and return the result as a Polars DataFrame. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 277 | 287 | null | pl.DataFrame | ["self"] | DataFrame | null | null |
Type: method
Member Name: to_polars
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.to_polars
Docstring: Execute the DataFrame computation and return the result as a Polars DataFrame.
This is an action that triggers computation of the DataFrame query plan.
All transformations and operations are executed, and the results are
materialized into a Polars DataFrame.
Returns:
pl.DataFrame: A Polars DataFrame with materialized results
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: pl.DataFrame
Parent Class: DataFrame
|
method | to_pandas | fenic.api.dataframe.dataframe.DataFrame.to_pandas | Execute the DataFrame computation and return a Pandas DataFrame. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 289 | 299 | null | pd.DataFrame | ["self"] | DataFrame | null | null |
Type: method
Member Name: to_pandas
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.to_pandas
Docstring: Execute the DataFrame computation and return a Pandas DataFrame.
This is an action that triggers computation of the DataFrame query plan.
All transformations and operations are executed, and the results are
materialized into a Pandas DataFrame.
Returns:
pd.DataFrame: A Pandas DataFrame containing the computed results.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: pd.DataFrame
Parent Class: DataFrame
|
method | to_arrow | fenic.api.dataframe.dataframe.DataFrame.to_arrow | Execute the DataFrame computation and return an Apache Arrow Table. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 301 | 312 | null | pa.Table | ["self"] | DataFrame | null | null |
Type: method
Member Name: to_arrow
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.to_arrow
Docstring: Execute the DataFrame computation and return an Apache Arrow Table.
This is an action that triggers computation of the DataFrame query plan.
All transformations and operations are executed, and the results are
materialized into an Apache Arrow Table with columnar memory layout
optimized for analytics and zero-copy data exchange.
Returns:
pa.Table: An Apache Arrow Table containing the computed results
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: pa.Table
Parent Class: DataFrame
|
method | to_pydict | fenic.api.dataframe.dataframe.DataFrame.to_pydict | Execute the DataFrame computation and return a dictionary of column arrays. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 314 | 326 | null | Dict[str, List[Any]] | ["self"] | DataFrame | null | null |
Type: method
Member Name: to_pydict
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.to_pydict
Docstring: Execute the DataFrame computation and return a dictionary of column arrays.
This is an action that triggers computation of the DataFrame query plan.
All transformations and operations are executed, and the results are
materialized into a Python dictionary where each column becomes a list of values.
Returns:
Dict[str, List[Any]]: A dictionary containing the computed results with:
- Keys: Column names as strings
- Values: Lists containing all values for each column
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: Dict[str, List[Any]]
Parent Class: DataFrame
|
method | to_pylist | fenic.api.dataframe.dataframe.DataFrame.to_pylist | Execute the DataFrame computation and return a list of row dictionaries. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 328 | 343 | null | List[Dict[str, Any]] | ["self"] | DataFrame | null | null |
Type: method
Member Name: to_pylist
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.to_pylist
Docstring: Execute the DataFrame computation and return a list of row dictionaries.
This is an action that triggers computation of the DataFrame query plan.
All transformations and operations are executed, and the results are
materialized into a Python list where each element is a dictionary
representing one row with column names as keys.
Returns:
List[Dict[str, Any]]: A list containing the computed results with:
- Each element: A dictionary representing one row
- Dictionary keys: Column names as strings
- Dictionary values: Cell values in Python native types
- List length equals number of rows in the result
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: List[Dict[str, Any]]
Parent Class: DataFrame
|
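The five conversion actions above differ only in the output container; each triggers the same plan execution. A minimal side-by-side using only methods documented in this section:
```python
df = session.create_dataframe({"id": [1, 2], "value": ["a", "b"]})

pl_df = df.to_polars()   # polars DataFrame
pd_df = df.to_pandas()   # pandas DataFrame
table = df.to_arrow()    # pyarrow Table
cols = df.to_pydict()    # {"id": [1, 2], "value": ["a", "b"]}
rows = df.to_pylist()    # [{"id": 1, "value": "a"}, {"id": 2, "value": "b"}]
```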
method | count | fenic.api.dataframe.dataframe.DataFrame.count | Count the number of rows in the DataFrame. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 345 | 354 | null | int | ["self"] | DataFrame | null | null |
Type: method
Member Name: count
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.count
Docstring: Count the number of rows in the DataFrame.
This is an action that triggers computation of the DataFrame.
The output is an integer representing the number of rows.
Returns:
int: The number of rows in the DataFrame
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: int
Parent Class: DataFrame
|
method | lineage | fenic.api.dataframe.dataframe.DataFrame.lineage | Create a Lineage object to trace data through transformations. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 356 | 381 | null | Lineage | ["self"] | DataFrame | null | null |
Type: method
Member Name: lineage
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.lineage
Docstring: Create a Lineage object to trace data through transformations.
The Lineage interface allows you to trace how specific rows are transformed
through your DataFrame operations, both forwards and backwards through the
computation graph.
Returns:
Lineage: Interface for querying data lineage
Example:
```python
# Create lineage query
lineage = df.lineage()
# Trace specific rows backwards through transformations
source_rows = lineage.backward(["result_uuid1", "result_uuid2"])
# Or trace forwards to see outputs
result_rows = lineage.forward(["source_uuid1"])
```
See Also:
LineageQuery: Full documentation of lineage querying capabilities
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: Lineage
Parent Class: DataFrame
|
method | persist | fenic.api.dataframe.dataframe.DataFrame.persist | Mark this DataFrame to be persisted after first computation. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 383 | 411 | null | DataFrame | ["self"] | DataFrame | null | null |
Type: method
Member Name: persist
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.persist
Docstring: Mark this DataFrame to be persisted after first computation.
The persisted DataFrame will be cached after its first computation,
avoiding recomputation in subsequent operations. This is useful for DataFrames
that are reused multiple times in your workflow.
Returns:
DataFrame: Same DataFrame, but marked for persistence
Example:
```python
# Cache intermediate results for reuse
filtered_df = (df
.filter(col("age") > 25)
.persist() # Cache these results
)
# Both operations will use cached results
result1 = filtered_df.group_by("department").count()
result2 = filtered_df.select("name", "salary")
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: DataFrame
Parent Class: DataFrame
|
method | cache | fenic.api.dataframe.dataframe.DataFrame.cache | Alias for persist(). Mark DataFrame for caching after first computation. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 413 | 422 | null | DataFrame | ["self"] | DataFrame | null | null |
Type: method
Member Name: cache
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.cache
Docstring: Alias for persist(). Mark DataFrame for caching after first computation.
Returns:
DataFrame: Same DataFrame, but marked for caching
See Also:
persist(): Full documentation of caching behavior
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self"]
Returns: DataFrame
Parent Class: DataFrame
|
method | select | fenic.api.dataframe.dataframe.DataFrame.select | Projects a set of Column expressions or column names. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 424 | 492 | null | DataFrame | ["self", "cols"] | DataFrame | null | null |
Type: method
Member Name: select
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.select
Docstring: Projects a set of Column expressions or column names.
Args:
*cols: Column expressions to select. Can be:
- String column names (e.g., "id", "name")
- Column objects (e.g., col("id"), col("age") + 1)
Returns:
DataFrame: A new DataFrame with selected columns
Example: Select by column names
```python
# Create a DataFrame
df = session.create_dataframe({"name": ["Alice", "Bob"], "age": [25, 30]})
# Select by column names
df.select(col("name"), col("age")).show()
# Output:
# +-----+---+
# | name|age|
# +-----+---+
# |Alice| 25|
# |  Bob| 30|
# +-----+---+
```
Example: Select with expressions
```python
# Select with expressions
df.select(col("name"), col("age") + 1).show()
# Output:
# +-----+-------+
# | name|age + 1|
# +-----+-------+
# |Alice|     26|
# |  Bob|     31|
# +-----+-------+
```
Example: Mix strings and expressions
```python
# Mix strings and expressions
df.select("name", col("age") * 2).show()
# Output:
# +-----+-------+
# | name|age * 2|
# +-----+-------+
# |Alice|     50|
# |  Bob|     60|
# +-----+-------+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "cols"]
Returns: DataFrame
Parent Class: DataFrame
|
method | where | fenic.api.dataframe.dataframe.DataFrame.where | Filters rows using the given condition (alias for filter()). | site-packages/fenic/api/dataframe/dataframe.py | true | false | 494 | 506 | null | DataFrame | ["self", "condition"] | DataFrame | null | null |
Type: method
Member Name: where
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.where
Docstring: Filters rows using the given condition (alias for filter()).
Args:
condition: A Column expression that evaluates to a boolean
Returns:
DataFrame: Filtered DataFrame
See Also:
filter(): Full documentation of filtering behavior
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "condition"]
Returns: DataFrame
Parent Class: DataFrame
|
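Since `where()` is an alias for `filter()`, the two spellings are interchangeable:
```python
# Equivalent: both produce the same filtered DataFrame.
df.where(col("age") > 25).show()
df.filter(col("age") > 25).show()
```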
method | filter | fenic.api.dataframe.dataframe.DataFrame.filter | Filters rows using the given condition. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 508 | 562 | null | DataFrame | ["self", "condition"] | DataFrame | null | null |
Type: method
Member Name: filter
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.filter
Docstring: Filters rows using the given condition.
Args:
condition: A Column expression that evaluates to a boolean
Returns:
DataFrame: Filtered DataFrame
Example: Filter with numeric comparison
```python
# Create a DataFrame
df = session.create_dataframe({"age": [25, 30, 35], "name": ["Alice", "Bob", "Charlie"]})
# Filter with numeric comparison
df.filter(col("age") > 25).show()
# Output:
# +---+-------+
# |age|   name|
# +---+-------+
# | 30|    Bob|
# | 35|Charlie|
# +---+-------+
```
Example: Filter with semantic predicate
```python
# Filter with semantic predicate
df.filter((col("age") > 25) & semantic.predicate("This {feedback} mentions problems with the user interface or navigation")).show()
# Output:
# +---+-------+
# |age|   name|
# +---+-------+
# | 30|    Bob|
# | 35|Charlie|
# +---+-------+
```
Example: Filter with multiple conditions
```python
# Filter with multiple conditions
df.filter((col("age") > 25) & (col("age") <= 35)).show()
# Output:
# +---+-------+
# |age|   name|
# +---+-------+
# | 30|    Bob|
# | 35|Charlie|
# +---+-------+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "condition"]
Returns: DataFrame
Parent Class: DataFrame
|
method | with_column | fenic.api.dataframe.dataframe.DataFrame.with_column | Add a new column or replace an existing column. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 564 | 649 | null | DataFrame | ["self", "col_name", "col"] | DataFrame | null | null |
Type: method
Member Name: with_column
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.with_column
Docstring: Add a new column or replace an existing column.
Args:
col_name: Name of the new column
col: Column expression or value to assign to the column. If not a Column,
it will be treated as a literal value.
Returns:
DataFrame: New DataFrame with added/replaced column
Example: Add literal column
```python
# Create a DataFrame
df = session.create_dataframe({"name": ["Alice", "Bob"], "age": [25, 30]})
# Add literal column
df.with_column("constant", lit(1)).show()
# Output:
# +-----+---+--------+
# | name|age|constant|
# +-----+---+--------+
# |Alice| 25|       1|
# |  Bob| 30|       1|
# +-----+---+--------+
```
Example: Add computed column
```python
# Add computed column
df.with_column("double_age", col("age") * 2).show()
# Output:
# +-----+---+----------+
# | name|age|double_age|
# +-----+---+----------+
# |Alice| 25|        50|
# |  Bob| 30|        60|
# +-----+---+----------+
```
Example: Replace existing column
```python
# Replace existing column
df.with_column("age", col("age") + 1).show()
# Output:
# +-----+---+
# | name|age|
# +-----+---+
# |Alice| 26|
# |  Bob| 31|
# +-----+---+
```
Example: Add column with complex expression
```python
# Add column with complex expression
df.with_column(
"age_category",
when(col("age") < 30, "young")
.when(col("age") < 50, "middle")
.otherwise("senior")
).show()
# Output:
# +-----+---+------------+
# | name|age|age_category|
# +-----+---+------------+
# |Alice| 25|       young|
# |  Bob| 30|      middle|
# +-----+---+------------+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "col_name", "col"]
Returns: DataFrame
Parent Class: DataFrame
|
method | with_column_renamed | fenic.api.dataframe.dataframe.DataFrame.with_column_renamed | Rename a column. No-op if the column does not exist. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 651 | 715 | null | DataFrame | ["self", "col_name", "new_col_name"] | DataFrame | null | null |
Type: method
Member Name: with_column_renamed
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.with_column_renamed
Docstring: Rename a column. No-op if the column does not exist.
Args:
col_name: Name of the column to rename.
new_col_name: New name for the column.
Returns:
DataFrame: New DataFrame with the column renamed.
Example: Rename a column
```python
# Create sample DataFrame
df = session.create_dataframe({
"age": [25, 30, 35],
"name": ["Alice", "Bob", "Charlie"]
})
# Rename a column
df.with_column_renamed("age", "age_in_years").show()
# Output:
# +------------+-------+
# |age_in_years|   name|
# +------------+-------+
# |          25|  Alice|
# |          30|    Bob|
# |          35|Charlie|
# +------------+-------+
```
Example: Rename multiple columns
```python
# Rename multiple columns
(df
    .with_column_renamed("age", "age_in_years")
    .with_column_renamed("name", "full_name")
).show()
# Output:
# +------------+----------+
# |age_in_years|full_name |
# +------------+----------+
# |          25|     Alice|
# |          30|       Bob|
# |          35|   Charlie|
# +------------+----------+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "col_name", "new_col_name"]
Returns: DataFrame
Parent Class: DataFrame
|
method | drop | fenic.api.dataframe.dataframe.DataFrame.drop | Remove one or more columns from this DataFrame. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 717 | 797 | null | DataFrame | ["self", "col_names"] | DataFrame | null | null |
Type: method
Member Name: drop
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.drop
Docstring: Remove one or more columns from this DataFrame.
Args:
*col_names: Names of columns to drop.
Returns:
DataFrame: New DataFrame without specified columns.
Raises:
ValueError: If any specified column doesn't exist in the DataFrame.
ValueError: If dropping the columns would result in an empty DataFrame.
Example: Drop single column
```python
# Create sample DataFrame
df = session.create_dataframe({
"id": [1, 2, 3],
"name": ["Alice", "Bob", "Charlie"],
"age": [25, 30, 35]
})
# Drop single column
df.drop("age").show()
# Output:
# +---+-------+
# | id|   name|
# +---+-------+
# |  1|  Alice|
# |  2|    Bob|
# |  3|Charlie|
# +---+-------+
```
Example: Drop multiple columns
```python
# Drop multiple columns
df.drop(col("id"), "age").show()
# Output:
# +-------+
# |   name|
# +-------+
# |  Alice|
# |    Bob|
# |Charlie|
# +-------+
```
Example: Error when dropping non-existent column
```python
# This will raise a ValueError
df.drop("non_existent_column")
# ValueError: Column 'non_existent_column' not found in DataFrame
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "col_names"]
Returns: DataFrame
Parent Class: DataFrame
|
method | union | fenic.api.dataframe.dataframe.DataFrame.union | Return a new DataFrame containing the union of rows in this and another DataFrame. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 799 | 879 | null | DataFrame | ["self", "other"] | DataFrame | null | null |
Type: method
Member Name: union
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.union
Docstring: Return a new DataFrame containing the union of rows in this and another DataFrame.
This is equivalent to UNION ALL in SQL. To remove duplicates, use drop_duplicates() after union().
Args:
other: Another DataFrame with the same schema.
Returns:
DataFrame: A new DataFrame containing rows from both DataFrames.
Raises:
ValueError: If the DataFrames have different schemas.
TypeError: If other is not a DataFrame.
Example: Union two DataFrames
```python
# Create two DataFrames
df1 = session.create_dataframe({
"id": [1, 2],
"value": ["a", "b"]
})
df2 = session.create_dataframe({
"id": [3, 4],
"value": ["c", "d"]
})
# Union the DataFrames
df1.union(df2).show()
# Output:
# +---+-----+
# | id|value|
# +---+-----+
# |  1|    a|
# |  2|    b|
# |  3|    c|
# |  4|    d|
# +---+-----+
```
Example: Union with duplicates
```python
# Create DataFrames with overlapping data
df1 = session.create_dataframe({
"id": [1, 2],
"value": ["a", "b"]
})
df2 = session.create_dataframe({
"id": [2, 3],
"value": ["b", "c"]
})
# Union with duplicates
df1.union(df2).show()
# Output:
# +---+-----+
# | id|value|
# +---+-----+
# |  1|    a|
# |  2|    b|
# |  2|    b|
# |  3|    c|
# +---+-----+
# Remove duplicates after union
df1.union(df2).drop_duplicates().show()
# Output:
# +---+-----+
# | id|value|
# +---+-----+
# |  1|    a|
# |  2|    b|
# |  3|    c|
# +---+-----+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other"]
Returns: DataFrame
Parent Class: DataFrame
|
method | limit | fenic.api.dataframe.dataframe.DataFrame.limit | Limits the number of rows to the specified number. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 881 | 928 | null | DataFrame | ["self", "n"] | DataFrame | null | null |
Type: method
Member Name: limit
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.limit
Docstring: Limits the number of rows to the specified number.
Args:
n: Maximum number of rows to return.
Returns:
DataFrame: DataFrame with at most n rows.
Raises:
TypeError: If n is not an integer.
Example: Limit rows
```python
# Create sample DataFrame
df = session.create_dataframe({
"id": [1, 2, 3, 4, 5],
"name": ["Alice", "Bob", "Charlie", "Dave", "Eve"]
})
# Get first 3 rows
df.limit(3).show()
# Output:
# +---+-------+
# | id|   name|
# +---+-------+
# |  1|  Alice|
# |  2|    Bob|
# |  3|Charlie|
# +---+-------+
```
Example: Limit with other operations
```python
# Limit after filtering
df.filter(col("id") > 2).limit(2).show()
# Output:
# +---+-------+
# | id|   name|
# +---+-------+
# |  3|Charlie|
# |  4|   Dave|
# +---+-------+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "n"]
Returns: DataFrame
Parent Class: DataFrame
|
method | join | fenic.api.dataframe.dataframe.DataFrame.join | Joins this DataFrame with another DataFrame. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 949 | 1066 | null | DataFrame | ["self", "other", "on", "left_on", "right_on", "how"] | DataFrame | null | null |
Type: method
Member Name: join
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.join
Docstring: Joins this DataFrame with another DataFrame.
The DataFrames must have no duplicate column names between them. This API only supports equi-joins.
For non-equi-joins, use session.sql().
Args:
other: DataFrame to join with.
on: Join condition(s). Can be:
- A column name (str)
- A list of column names (List[str])
- A Column expression (e.g., col('a'))
- A list of Column expressions
- `None` for cross joins
left_on: Column(s) from the left DataFrame to join on. Can be:
- A column name (str)
- A Column expression (e.g., col('a'), col('a') + 1)
- A list of column names or expressions
right_on: Column(s) from the right DataFrame to join on. Can be:
- A column name (str)
- A Column expression (e.g., col('b'), upper(col('b')))
- A list of column names or expressions
how: Type of join to perform.
Returns:
Joined DataFrame.
Raises:
ValidationError: If cross join is used with an ON clause.
ValidationError: If join condition is invalid.
ValidationError: If both 'on' and 'left_on'/'right_on' parameters are provided.
ValidationError: If only one of 'left_on' or 'right_on' is provided.
ValidationError: If 'left_on' and 'right_on' have different lengths
Example: Inner join on column name
```python
# Create sample DataFrames
df1 = session.create_dataframe({
"id": [1, 2, 3],
"name": ["Alice", "Bob", "Charlie"]
})
df2 = session.create_dataframe({
"id": [1, 2, 4],
"age": [25, 30, 35]
})
# Join on single column
df1.join(df2, on=col("id")).show()
# Output:
# +---+-----+---+
# | id| name|age|
# +---+-----+---+
# |  1|Alice| 25|
# |  2|  Bob| 30|
# +---+-----+---+
```
Example: Join with expression
```python
# Join with Column expressions
df1.join(
df2,
left_on=col("id"),
right_on=col("id"),
).show()
# Output:
# +---+-----+---+
# | id| name|age|
# +---+-----+---+
# |  1|Alice| 25|
# |  2|  Bob| 30|
# +---+-----+---+
```
Example: Cross join
```python
# Cross join (cartesian product)
df1.join(df2, how="cross").show()
# Output:
# +---+-------+---+---+
# | id|   name| id|age|
# +---+-------+---+---+
# |  1|  Alice|  1| 25|
# |  1|  Alice|  2| 30|
# |  1|  Alice|  4| 35|
# |  2|    Bob|  1| 25|
# |  2|    Bob|  2| 30|
# |  2|    Bob|  4| 35|
# |  3|Charlie|  1| 25|
# |  3|Charlie|  2| 30|
# |  3|Charlie|  4| 35|
# +---+-------+---+---+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other", "on", "left_on", "right_on", "how"]
Returns: DataFrame
Parent Class: DataFrame
|
method | explode | fenic.api.dataframe.dataframe.DataFrame.explode | Create a new row for each element in an array column. | site-packages/fenic/api/dataframe/dataframe.py | true | false | 1068 | 1125 | null | DataFrame | ["self", "column"] | DataFrame | null | null |
Type: method
Member Name: explode
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.explode
Docstring: Create a new row for each element in an array column.
This operation is useful for flattening nested data structures. For each row in the
input DataFrame that contains an array/list in the specified column, this method will:
1. Create N new rows, where N is the length of the array
2. Each new row will be identical to the original row, except the array column will
contain just a single element from the original array
3. Rows with NULL values or empty arrays in the specified column are filtered out
Args:
column: Name of array column to explode (as string) or Column expression.
Returns:
DataFrame: New DataFrame with the array column exploded into multiple rows.
Raises:
TypeError: If column argument is not a string or Column.
Example: Explode array column
```python
# Create sample DataFrame
df = session.create_dataframe({
"id": [1, 2, 3, 4],
"tags": [["red", "blue"], ["green"], [], None],
"name": ["Alice", "Bob", "Carol", "Dave"]
})
# Explode the tags column
df.explode("tags").show()
# Output:
# +---+-----+-----+
# | id| tags| name|
# +---+-----+-----+
# | 1| red|Alice|
# | 1| blue|Alice|
# | 2|green| Bob|
# +---+-----+-----+
```
Example: Using column expression
```python
# Explode using column expression
df.explode(col("tags")).show()
# Output:
# +---+-----+-----+
# | id| tags| name|
# +---+-----+-----+
# | 1| red|Alice|
# | 1| blue|Alice|
# | 2|green| Bob|
# +---+-----+-----+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "column"]
Returns: DataFrame
Parent Class: DataFrame
|
method
|
group_by
|
fenic.api.dataframe.dataframe.DataFrame.group_by
|
Groups the DataFrame using the specified columns.
Args:
*cols: Columns to group by. Can be column names as strings or Column expressions.
Returns:
GroupedData: Object for performing aggregations on the grouped data.
Example: Group by single column
```python
# Create sample DataFrame
df = session.create_dataframe({
"department": ["IT", "HR", "IT", "HR", "IT"],
"salary": [80000, 70000, 90000, 75000, 85000]
})
# Group by single column
df.group_by(col("department")).agg(count("*")).show()
# Output:
# +----------+-----+
# |department|count|
# +----------+-----+
# | IT| 3|
# | HR| 2|
# +----------+-----+
```
Example: Group by multiple columns
```python
# Group by multiple columns (assumes df also includes a "location" column)
df.group_by(col("department"), col("location")).agg({"salary": "avg"}).show()
# Output:
# +----------+--------+-----------+
# |department|location|avg(salary)|
# +----------+--------+-----------+
# | IT| NYC| 85000.0|
# | HR| NYC| 72500.0|
# +----------+--------+-----------+
```
Example: Group by expression
```python
# Group by expression
df.group_by(lower(col("department")).alias("department")).agg(count("*")).show()
# Output:
# +----------+-----+
# |department|count|
# +----------+-----+
# |        it|    3|
# |        hr|    2|
# +----------+-----+
```
|
site-packages/fenic/api/dataframe/dataframe.py
| true | false | 1,127 | 1,181 | null |
GroupedData
|
[
"self",
"cols"
] |
DataFrame
| null | null |
Type: method
Member Name: group_by
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.group_by
Docstring: Groups the DataFrame using the specified columns.
Args:
*cols: Columns to group by. Can be column names as strings or Column expressions.
Returns:
GroupedData: Object for performing aggregations on the grouped data.
Example: Group by single column
```python
# Create sample DataFrame
df = session.create_dataframe({
"department": ["IT", "HR", "IT", "HR", "IT"],
"salary": [80000, 70000, 90000, 75000, 85000]
})
# Group by single column
df.group_by(col("department")).agg(count("*")).show()
# Output:
# +----------+-----+
# |department|count|
# +----------+-----+
# | IT| 3|
# | HR| 2|
# +----------+-----+
```
Example: Group by multiple columns
```python
# Group by multiple columns (assumes df also includes a "location" column)
df.group_by(col("department"), col("location")).agg({"salary": "avg"}).show()
# Output:
# +----------+--------+-----------+
# |department|location|avg(salary)|
# +----------+--------+-----------+
# | IT| NYC| 85000.0|
# | HR| NYC| 72500.0|
# +----------+--------+-----------+
```
Example: Group by expression
```python
# Group by expression
df.group_by(lower(col("department")).alias("department")).agg(count("*")).show()
# Output:
# +----------+-----+
# |department|count|
# +----------+-----+
# |        it|    3|
# |        hr|    2|
# +----------+-----+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "cols"]
Returns: GroupedData
Parent Class: DataFrame
|
method
|
agg
|
fenic.api.dataframe.dataframe.DataFrame.agg
|
Aggregate on the entire DataFrame without groups.
This is equivalent to group_by() without any grouping columns.
Args:
*exprs: Aggregation expressions or dictionary of aggregations.
Returns:
DataFrame: Aggregation results.
Example: Multiple aggregations
```python
# Create sample DataFrame
df = session.create_dataframe({
"salary": [80000, 70000, 90000, 75000, 85000],
"age": [25, 30, 35, 28, 32]
})
# Multiple aggregations
df.agg(
count().alias("total_rows"),
avg(col("salary")).alias("avg_salary")
).show()
# Output:
# +----------+----------+
# |total_rows|avg_salary|
# +----------+----------+
# |         5|   80000.0|
# +----------+----------+
```
Example: Dictionary style
```python
# Dictionary style
df.agg({"salary": "avg", "age": "max"}).show()
# Output:
# +-----------+--------+
# |avg(salary)|max(age)|
# +-----------+--------+
# | 80000.0| 35|
# +-----------+--------+
```
|
site-packages/fenic/api/dataframe/dataframe.py
| true | false | 1,183 | 1,227 | null |
DataFrame
|
[
"self",
"exprs"
] |
DataFrame
| null | null |
Type: method
Member Name: agg
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.agg
Docstring: Aggregate on the entire DataFrame without groups.
This is equivalent to group_by() without any grouping columns.
Args:
*exprs: Aggregation expressions or dictionary of aggregations.
Returns:
DataFrame: Aggregation results.
Example: Multiple aggregations
```python
# Create sample DataFrame
df = session.create_dataframe({
"salary": [80000, 70000, 90000, 75000, 85000],
"age": [25, 30, 35, 28, 32]
})
# Multiple aggregations
df.agg(
count().alias("total_rows"),
avg(col("salary")).alias("avg_salary")
).show()
# Output:
# +----------+----------+
# |total_rows|avg_salary|
# +----------+----------+
# |         5|   80000.0|
# +----------+----------+
```
Example: Dictionary style
```python
# Dictionary style
df.agg({"salary": "avg", "age": "max"}).show()
# Output:
# +-----------+--------+
# |avg(salary)|max(age)|
# +-----------+--------+
# | 80000.0| 35|
# +-----------+--------+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "exprs"]
Returns: DataFrame
Parent Class: DataFrame
|
method
|
drop_duplicates
|
fenic.api.dataframe.dataframe.DataFrame.drop_duplicates
|
Return a DataFrame with duplicate rows removed.
Args:
subset: Column names to consider when identifying duplicates. If not provided, all columns are considered.
Returns:
DataFrame: A new DataFrame with duplicate rows removed.
Raises:
ValueError: If a specified column is not present in the current DataFrame schema.
Example: Remove duplicates considering specific columns
```python
# Create sample DataFrame
df = session.create_dataframe({
"c1": [1, 2, 3, 1],
"c2": ["a", "a", "a", "a"],
"c3": ["b", "b", "b", "b"]
})
# Remove duplicates considering all columns
df.drop_duplicates(["c1", "c2", "c3"]).show()
# Output:
# +---+---+---+
# | c1| c2| c3|
# +---+---+---+
# | 1| a| b|
# | 2| a| b|
# | 3| a| b|
# +---+---+---+
# Remove duplicates considering only c1
df.drop_duplicates(["c1"]).show()
# Output:
# +---+---+---+
# | c1| c2| c3|
# +---+---+---+
# | 1| a| b|
# | 2| a| b|
# | 3| a| b|
# +---+---+---+
```
|
site-packages/fenic/api/dataframe/dataframe.py
| true | false | 1,229 | 1,286 | null |
DataFrame
|
[
"self",
"subset"
] |
DataFrame
| null | null |
Type: method
Member Name: drop_duplicates
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.drop_duplicates
Docstring: Return a DataFrame with duplicate rows removed.
Args:
subset: Column names to consider when identifying duplicates. If not provided, all columns are considered.
Returns:
DataFrame: A new DataFrame with duplicate rows removed.
Raises:
ValueError: If a specified column is not present in the current DataFrame schema.
Example: Remove duplicates considering specific columns
```python
# Create sample DataFrame
df = session.create_dataframe({
"c1": [1, 2, 3, 1],
"c2": ["a", "a", "a", "a"],
"c3": ["b", "b", "b", "b"]
})
# Remove duplicates considering all columns
df.drop_duplicates(["c1", "c2", "c3"]).show()
# Output:
# +---+---+---+
# | c1| c2| c3|
# +---+---+---+
# | 1| a| b|
# | 2| a| b|
# | 3| a| b|
# +---+---+---+
# Remove duplicates considering only c1
df.drop_duplicates(["c1"]).show()
# Output:
# +---+---+---+
# | c1| c2| c3|
# +---+---+---+
# | 1| a| b|
# | 2| a| b|
# | 3| a| b|
# +---+---+---+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "subset"]
Returns: DataFrame
Parent Class: DataFrame
|
method
|
sort
|
fenic.api.dataframe.dataframe.DataFrame.sort
|
Sort the DataFrame by the specified columns.
Args:
cols: Columns to sort by. This can be:
- A single column name (str)
- A Column expression (e.g., `col("name")`)
- A list of column names or Column expressions
- Column expressions may include sorting directives such as `asc("col")`, `desc("col")`,
`asc_nulls_last("col")`, etc.
- If no columns are provided, the operation is a no-op.
ascending: A boolean or list of booleans indicating sort order.
- If `True`, sorts in ascending order; if `False`, descending.
- If a list is provided, its length must match the number of columns.
- Cannot be used if any of the columns use `asc()`/`desc()` expressions.
- If not specified and no sort expressions are used, columns will be sorted in ascending order by default.
Returns:
DataFrame: A new DataFrame sorted by the specified columns.
Raises:
ValueError:
- If `ascending` is provided and its length does not match `cols`
- If both `ascending` and column expressions like `asc()`/`desc()` are used
TypeError:
- If `cols` is not a column name, Column, or list of column names/Columns
- If `ascending` is not a boolean or list of booleans
Example: Sort in ascending order
```python
# Create sample DataFrame
df = session.create_dataframe([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
# Sort by age in ascending order
df.sort(asc(col("age"))).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 2|Alice|
# | 5| Bob|
# +---+-----+
```
Example: Sort in descending order
```python
# Sort by age in descending order
df.sort(col("age").desc()).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2|Alice|
# +---+-----+
```
Example: Sort with boolean ascending parameter
```python
# Sort by age in descending order using boolean
df.sort(col("age"), ascending=False).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2|Alice|
# +---+-----+
```
Example: Multiple columns with different sort orders
```python
# Create sample DataFrame
df = session.create_dataframe([(2, "Alice"), (2, "Bob"), (5, "Bob")], schema=["age", "name"])
# Sort by age descending, then name ascending
df.sort(desc(col("age")), col("name")).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2|Alice|
# | 2| Bob|
# +---+-----+
```
Example: Multiple columns with a list of sort orders
```python
# Sort both columns in descending order
df.sort([col("age"), col("name")], ascending=[False, False]).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2| Bob|
# | 2|Alice|
# +---+-----+
```
|
site-packages/fenic/api/dataframe/dataframe.py
| true | false | 1,288 | 1,450 | null |
DataFrame
|
[
"self",
"cols",
"ascending"
] |
DataFrame
| null | null |
Type: method
Member Name: sort
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.sort
Docstring: Sort the DataFrame by the specified columns.
Args:
cols: Columns to sort by. This can be:
- A single column name (str)
- A Column expression (e.g., `col("name")`)
- A list of column names or Column expressions
- Column expressions may include sorting directives such as `asc("col")`, `desc("col")`,
`asc_nulls_last("col")`, etc.
- If no columns are provided, the operation is a no-op.
ascending: A boolean or list of booleans indicating sort order.
- If `True`, sorts in ascending order; if `False`, descending.
- If a list is provided, its length must match the number of columns.
- Cannot be used if any of the columns use `asc()`/`desc()` expressions.
- If not specified and no sort expressions are used, columns will be sorted in ascending order by default.
Returns:
DataFrame: A new DataFrame sorted by the specified columns.
Raises:
ValueError:
- If `ascending` is provided and its length does not match `cols`
- If both `ascending` and column expressions like `asc()`/`desc()` are used
TypeError:
- If `cols` is not a column name, Column, or list of column names/Columns
- If `ascending` is not a boolean or list of booleans
Example: Sort in ascending order
```python
# Create sample DataFrame
df = session.create_dataframe([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
# Sort by age in ascending order
df.sort(asc(col("age"))).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 2|Alice|
# | 5| Bob|
# +---+-----+
```
Example: Sort in descending order
```python
# Sort by age in descending order
df.sort(col("age").desc()).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2|Alice|
# +---+-----+
```
Example: Sort with boolean ascending parameter
```python
# Sort by age in descending order using boolean
df.sort(col("age"), ascending=False).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2|Alice|
# +---+-----+
```
Example: Multiple columns with different sort orders
```python
# Create sample DataFrame
df = session.create_dataframe([(2, "Alice"), (2, "Bob"), (5, "Bob")], schema=["age", "name"])
# Sort by age descending, then name ascending
df.sort(desc(col("age")), col("name")).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2|Alice|
# | 2| Bob|
# +---+-----+
```
Example: Multiple columns with a list of sort orders
```python
# Sort both columns in descending order
df.sort([col("age"), col("name")], ascending=[False, False]).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2| Bob|
# | 2|Alice|
# +---+-----+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "cols", "ascending"]
Returns: DataFrame
Parent Class: DataFrame
|
method
|
order_by
|
fenic.api.dataframe.dataframe.DataFrame.order_by
|
Sort the DataFrame by the specified columns. Alias for sort().
Returns:
DataFrame: sorted DataFrame.
See Also:
sort(): Full documentation of sorting behavior and parameters.
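Example: Order by a column in descending order
A minimal sketch; `order_by` takes the same arguments as `sort`, and `df` is assumed to have an "age" column as in the sort() examples.
```python
# Equivalent to df.sort(col("age"), ascending=False)
df.order_by(col("age"), ascending=False).show()
```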
|
site-packages/fenic/api/dataframe/dataframe.py
| true | false | 1,452 | 1,465 | null |
DataFrame
|
[
"self",
"cols",
"ascending"
] |
DataFrame
| null | null |
Type: method
Member Name: order_by
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.order_by
Docstring: Sort the DataFrame by the specified columns. Alias for sort().
Returns:
DataFrame: sorted DataFrame.
See Also:
sort(): Full documentation of sorting behavior and parameters.
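Example: Order by a column in descending order
A minimal sketch; `order_by` takes the same arguments as `sort`, and `df` is assumed to have an "age" column as in the sort() examples.
```python
# Equivalent to df.sort(col("age"), ascending=False)
df.order_by(col("age"), ascending=False).show()
```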
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "cols", "ascending"]
Returns: DataFrame
Parent Class: DataFrame
|
method
|
unnest
|
fenic.api.dataframe.dataframe.DataFrame.unnest
|
Unnest the specified struct columns into separate columns.
This operation flattens nested struct data by expanding each field of a struct
into its own top-level column.
For each specified column containing a struct:
1. Each field in the struct becomes a separate column.
2. New columns are named after the corresponding struct fields.
3. The new columns are inserted into the DataFrame in place of the original struct column.
4. The overall column order is preserved.
Args:
*col_names: One or more struct columns to unnest. Each can be a string (column name)
or a Column expression.
Returns:
DataFrame: A new DataFrame with the specified struct columns expanded.
Raises:
TypeError: If any argument is not a string or Column.
ValueError: If a specified column does not contain struct data.
Example: Unnest struct column
```python
# Create sample DataFrame
df = session.create_dataframe({
"id": [1, 2],
"tags": [{"red": 1, "blue": 2}, {"red": 3}],
"name": ["Alice", "Bob"]
})
# Unnest the tags column
df.unnest(col("tags")).show()
# Output:
# +---+---+----+-----+
# | id| red|blue| name|
# +---+---+----+-----+
# | 1| 1| 2|Alice|
# | 2| 3|null| Bob|
# +---+---+----+-----+
```
Example: Unnest multiple struct columns
```python
# Create sample DataFrame with multiple struct columns
df = session.create_dataframe({
"id": [1, 2],
"tags": [{"red": 1, "blue": 2}, {"red": 3}],
"info": [{"age": 25, "city": "NY"}, {"age": 30, "city": "LA"}],
"name": ["Alice", "Bob"]
})
# Unnest multiple struct columns
df.unnest(col("tags"), col("info")).show()
# Output:
# +---+---+----+---+----+-----+
# | id| red|blue|age|city| name|
# +---+---+----+---+----+-----+
# | 1| 1| 2| 25| NY|Alice|
# | 2| 3|null| 30| LA| Bob|
# +---+---+----+---+----+-----+
```
|
site-packages/fenic/api/dataframe/dataframe.py
| true | false | 1,467 | 1,541 | null |
DataFrame
|
[
"self",
"col_names"
] |
DataFrame
| null | null |
Type: method
Member Name: unnest
Qualified Name: fenic.api.dataframe.dataframe.DataFrame.unnest
Docstring: Unnest the specified struct columns into separate columns.
This operation flattens nested struct data by expanding each field of a struct
into its own top-level column.
For each specified column containing a struct:
1. Each field in the struct becomes a separate column.
2. New columns are named after the corresponding struct fields.
3. The new columns are inserted into the DataFrame in place of the original struct column.
4. The overall column order is preserved.
Args:
*col_names: One or more struct columns to unnest. Each can be a string (column name)
or a Column expression.
Returns:
DataFrame: A new DataFrame with the specified struct columns expanded.
Raises:
TypeError: If any argument is not a string or Column.
ValueError: If a specified column does not contain struct data.
Example: Unnest struct column
```python
# Create sample DataFrame
df = session.create_dataframe({
"id": [1, 2],
"tags": [{"red": 1, "blue": 2}, {"red": 3}],
"name": ["Alice", "Bob"]
})
# Unnest the tags column
df.unnest(col("tags")).show()
# Output:
# +---+---+----+-----+
# | id| red|blue| name|
# +---+---+----+-----+
# | 1| 1| 2|Alice|
# | 2| 3|null| Bob|
# +---+---+----+-----+
```
Example: Unnest multiple struct columns
```python
# Create sample DataFrame with multiple struct columns
df = session.create_dataframe({
"id": [1, 2],
"tags": [{"red": 1, "blue": 2}, {"red": 3}],
"info": [{"age": 25, "city": "NY"}, {"age": 30, "city": "LA"}],
"name": ["Alice", "Bob"]
})
# Unnest multiple struct columns
df.unnest(col("tags"), col("info")).show()
# Output:
# +---+---+----+---+----+-----+
# | id| red|blue|age|city| name|
# +---+---+----+---+----+-----+
# | 1| 1| 2| 25| NY|Alice|
# | 2| 3|null| 30| LA| Bob|
# +---+---+----+---+----+-----+
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "col_names"]
Returns: DataFrame
Parent Class: DataFrame
|
method
|
_ensure_same_session
|
fenic.api.dataframe.dataframe.DataFrame._ensure_same_session
|
Ensure that all session states passed belong to the same session.
This check prevents accidental combinations of DataFrames created in different
sessions, which can lead to inconsistent behavior due to differing configurations,
catalogs, or function registries.
|
site-packages/fenic/api/dataframe/dataframe.py
| false | true | 1,543 | 1,560 | null | null |
[
"cls",
"session_state",
"other_session_states"
] |
DataFrame
| null | null |
Type: method
Member Name: _ensure_same_session
Qualified Name: fenic.api.dataframe.dataframe.DataFrame._ensure_same_session
Docstring: Ensure that all session states passed belong to the same session.
This check prevents accidental combinations of DataFrames created in different
sessions, which can lead to inconsistent behavior due to differing configurations,
catalogs, or function registries.
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["cls", "session_state", "other_session_states"]
Returns: none
Parent Class: DataFrame
|
module
|
grouped_data
|
fenic.api.dataframe.grouped_data
|
GroupedData class for aggregations on grouped DataFrames.
|
site-packages/fenic/api/dataframe/grouped_data.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: grouped_data
Qualified Name: fenic.api.dataframe.grouped_data
Docstring: GroupedData class for aggregations on grouped DataFrames.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
class
|
GroupedData
|
fenic.api.dataframe.grouped_data.GroupedData
|
Methods for aggregations on a grouped DataFrame.
|
site-packages/fenic/api/dataframe/grouped_data.py
| true | false | 19 | 94 | null | null | null | null | null |
[
"BaseGroupedData"
] |
Type: class
Member Name: GroupedData
Qualified Name: fenic.api.dataframe.grouped_data.GroupedData
Docstring: Methods for aggregations on a grouped DataFrame.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
method
|
__init__
|
fenic.api.dataframe.grouped_data.GroupedData.__init__
|
Initialize grouped data.
Args:
df: The DataFrame to group.
by: Optional list of columns to group by.
|
site-packages/fenic/api/dataframe/grouped_data.py
| true | false | 22 | 43 | null | null |
[
"self",
"df",
"by"
] |
GroupedData
| null | null |
Type: method
Member Name: __init__
Qualified Name: fenic.api.dataframe.grouped_data.GroupedData.__init__
Docstring: Initialize grouped data.
Args:
df: The DataFrame to group.
by: Optional list of columns to group by.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "df", "by"]
Returns: none
Parent Class: GroupedData
|
method
|
agg
|
fenic.api.dataframe.grouped_data.GroupedData.agg
|
Compute aggregations on grouped data and return the result as a DataFrame.
This method applies aggregate functions to the grouped data.
Args:
*exprs: Aggregation expressions. Can be:
- Column expressions with aggregate functions (e.g., `count("*")`, `sum("amount")`)
- A dictionary mapping column names to aggregate function names (e.g., `{"amount": "sum", "age": "avg"}`)
Returns:
DataFrame: A new DataFrame with one row per group and columns for group keys and aggregated values
Raises:
ValueError: If arguments are not Column expressions or a dictionary
ValueError: If dictionary values are not valid aggregate function names
Example: Count employees by department
```python
# Group by department and count employees
df.group_by("department").agg(count("*").alias("employee_count"))
```
Example: Multiple aggregations
```python
# Multiple aggregations
df.group_by("department").agg(
count("*").alias("employee_count"),
avg("salary").alias("avg_salary"),
max("age").alias("max_age")
)
```
Example: Dictionary style aggregations
```python
# Dictionary style for simple aggregations
df.group_by("department", "location").agg({"salary": "avg", "age": "max"})
```
|
site-packages/fenic/api/dataframe/grouped_data.py
| true | false | 45 | 94 | null |
DataFrame
|
[
"self",
"exprs"
] |
GroupedData
| null | null |
Type: method
Member Name: agg
Qualified Name: fenic.api.dataframe.grouped_data.GroupedData.agg
Docstring: Compute aggregations on grouped data and return the result as a DataFrame.
This method applies aggregate functions to the grouped data.
Args:
*exprs: Aggregation expressions. Can be:
- Column expressions with aggregate functions (e.g., `count("*")`, `sum("amount")`)
- A dictionary mapping column names to aggregate function names (e.g., `{"amount": "sum", "age": "avg"}`)
Returns:
DataFrame: A new DataFrame with one row per group and columns for group keys and aggregated values
Raises:
ValueError: If arguments are not Column expressions or a dictionary
ValueError: If dictionary values are not valid aggregate function names
Example: Count employees by department
```python
# Group by department and count employees
df.group_by("department").agg(count("*").alias("employee_count"))
```
Example: Multiple aggregations
```python
# Multiple aggregations
df.group_by("department").agg(
count("*").alias("employee_count"),
avg("salary").alias("avg_salary"),
max("age").alias("max_age")
)
```
Example: Dictionary style aggregations
```python
# Dictionary style for simple aggregations
df.group_by("department", "location").agg({"salary": "avg", "age": "max"})
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "exprs"]
Returns: DataFrame
Parent Class: GroupedData
|
module
|
_join_utils
|
fenic.api.dataframe._join_utils
|
Utility functions for DataFrame join operations.
|
site-packages/fenic/api/dataframe/_join_utils.py
| false | true | null | null | null | null | null | null | null | null |
Type: module
Member Name: _join_utils
Qualified Name: fenic.api.dataframe._join_utils
Docstring: Utility functions for DataFrame join operations.
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: none
Returns: none
Parent Class: none
|
function
|
validate_join_parameters
|
fenic.api.dataframe._join_utils.validate_join_parameters
|
Validate join parameter combinations.
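Example: Rejecting an invalid parameter combination (illustrative sketch; this helper is invoked internally by DataFrame.join, and the argument values below are hypothetical)
```python
# Supplying both `on` and `left_on`/`right_on` raises a ValidationError.
validate_join_parameters(df, on="id", left_on="id", right_on="id", how="inner")
# A cross join combined with an ON clause also raises a ValidationError.
validate_join_parameters(df, on="id", left_on=None, right_on=None, how="cross")
```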
|
site-packages/fenic/api/dataframe/_join_utils.py
| true | false | 10 | 51 | null |
None
|
[
"self",
"on",
"left_on",
"right_on",
"how"
] | null | null | null |
Type: function
Member Name: validate_join_parameters
Qualified Name: fenic.api.dataframe._join_utils.validate_join_parameters
Docstring: Validate join parameter combinations.
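Example: Rejecting an invalid parameter combination (illustrative sketch; this helper is invoked internally by DataFrame.join, and the argument values below are hypothetical)
```python
# Supplying both `on` and `left_on`/`right_on` raises a ValidationError.
validate_join_parameters(df, on="id", left_on="id", right_on="id", how="inner")
# A cross join combined with an ON clause also raises a ValidationError.
validate_join_parameters(df, on="id", left_on=None, right_on=None, how="cross")
```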
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "on", "left_on", "right_on", "how"]
Returns: None
Parent Class: none
|
function
|
build_join_conditions
|
fenic.api.dataframe._join_utils.build_join_conditions
|
Build left and right join condition lists.
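Example: Normalizing join inputs (illustrative sketch; argument values are hypothetical)
```python
# Returns parallel lists of left/right join conditions.
left_conditions, right_conditions = build_join_conditions(
    on=None, left_on=[col("a")], right_on=[col("b")]
)
```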
|
site-packages/fenic/api/dataframe/_join_utils.py
| true | false | 53 | 82 | null |
Tuple[List, List]
|
[
"on",
"left_on",
"right_on"
] | null | null | null |
Type: function
Member Name: build_join_conditions
Qualified Name: fenic.api.dataframe._join_utils.build_join_conditions
Docstring: Build left and right join condition lists.
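Example: Normalizing join inputs (illustrative sketch; argument values are hypothetical)
```python
# Returns parallel lists of left/right join conditions.
left_conditions, right_conditions = build_join_conditions(
    on=None, left_on=[col("a")], right_on=[col("b")]
)
```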
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["on", "left_on", "right_on"]
Returns: Tuple[List, List]
Parent Class: none
|
function
|
_has_join_conditions
|
fenic.api.dataframe._join_utils._has_join_conditions
|
Check if any join conditions are specified.
|
site-packages/fenic/api/dataframe/_join_utils.py
| false | true | 84 | 94 | null |
bool
|
[
"on",
"left_on",
"right_on"
] | null | null | null |
Type: function
Member Name: _has_join_conditions
Qualified Name: fenic.api.dataframe._join_utils._has_join_conditions
Docstring: Check if any join conditions are specified.
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["on", "left_on", "right_on"]
Returns: bool
Parent Class: none
|
function
|
_validate_join_condition_lengths
|
fenic.api.dataframe._join_utils._validate_join_condition_lengths
|
Validate that left_on and right_on have matching lengths.
|
site-packages/fenic/api/dataframe/_join_utils.py
| false | true | 96 | 108 | null |
None
|
[
"left_on",
"right_on"
] | null | null | null |
Type: function
Member Name: _validate_join_condition_lengths
Qualified Name: fenic.api.dataframe._join_utils._validate_join_condition_lengths
Docstring: Validate that left_on and right_on have matching lengths.
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["left_on", "right_on"]
Returns: None
Parent Class: none
|
module
|
semantic_extensions
|
fenic.api.dataframe.semantic_extensions
|
Semantic extensions for DataFrames providing clustering and semantic join operations.
|
site-packages/fenic/api/dataframe/semantic_extensions.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: semantic_extensions
Qualified Name: fenic.api.dataframe.semantic_extensions
Docstring: Semantic extensions for DataFrames providing clustering and semantic join operations.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
class
|
SemanticExtensions
|
fenic.api.dataframe.semantic_extensions.SemanticExtensions
|
A namespace for semantic dataframe operators.
|
site-packages/fenic/api/dataframe/semantic_extensions.py
| true | false | 26 | 368 | null | null | null | null | null |
[] |
Type: class
Member Name: SemanticExtensions
Qualified Name: fenic.api.dataframe.semantic_extensions.SemanticExtensions
Docstring: A namespace for semantic dataframe operators.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
method
|
__init__
|
fenic.api.dataframe.semantic_extensions.SemanticExtensions.__init__
|
Initialize semantic extensions.
Args:
df: The DataFrame to extend with semantic operations.
|
site-packages/fenic/api/dataframe/semantic_extensions.py
| true | false | 29 | 35 | null | null |
[
"self",
"df"
] |
SemanticExtensions
| null | null |
Type: method
Member Name: __init__
Qualified Name: fenic.api.dataframe.semantic_extensions.SemanticExtensions.__init__
Docstring: Initialize semantic extensions.
Args:
df: The DataFrame to extend with semantic operations.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "df"]
Returns: none
Parent Class: SemanticExtensions
|
method
|
with_cluster_labels
|
fenic.api.dataframe.semantic_extensions.SemanticExtensions.with_cluster_labels
|
Cluster rows using K-means and add cluster metadata columns.
This method clusters rows based on the given embedding column or expression using K-means.
It adds a new column with cluster assignments, and optionally includes the centroid embedding
for each assigned cluster.
Args:
by: Column or expression producing embeddings to cluster (e.g., `embed(col("text"))`).
num_clusters: Number of clusters to compute (must be > 0).
max_iter: Maximum iterations for a single run of the k-means algorithm. The algorithm stops when it either converges or reaches this limit.
num_init: Number of independent runs of k-means with different centroid seeds. The best result is selected.
label_column: Name of the output column for cluster IDs. Default is "cluster_label".
centroid_column: If provided, adds a column with this name containing the centroid embedding
for each row's assigned cluster.
Returns:
A DataFrame with all original columns plus:
- `<label_column>`: integer cluster assignment (0 to num_clusters - 1)
- `<centroid_column>`: cluster centroid embedding, if specified
Example: Basic clustering
```python
# Cluster customer feedback and add cluster metadata
clustered_df = df.semantic.with_cluster_labels("feedback_embeddings", num_clusters=5)
# Then use regular operations to analyze clusters
clustered_df.group_by("cluster_label").agg(count("*"), avg("rating"))
```
Example: Filter outliers using centroids
```python
# Cluster and filter out rows far from their centroid
clustered_df = df.semantic.with_cluster_labels("embeddings", num_clusters=3, num_init=10, centroid_column="cluster_centroid")
clean_df = clustered_df.filter(
embedding.compute_similarity("embeddings", "cluster_centroid", metric="cosine") > 0.7
)
```
|
site-packages/fenic/api/dataframe/semantic_extensions.py
| true | false | 37 | 130 | null |
DataFrame
|
[
"self",
"by",
"num_clusters",
"max_iter",
"num_init",
"label_column",
"centroid_column"
] |
SemanticExtensions
| null | null |
Type: method
Member Name: with_cluster_labels
Qualified Name: fenic.api.dataframe.semantic_extensions.SemanticExtensions.with_cluster_labels
Docstring: Cluster rows using K-means and add cluster metadata columns.
This method clusters rows based on the given embedding column or expression using K-means.
It adds a new column with cluster assignments, and optionally includes the centroid embedding
for each assigned cluster.
Args:
by: Column or expression producing embeddings to cluster (e.g., `embed(col("text"))`).
num_clusters: Number of clusters to compute (must be > 0).
max_iter: Maximum iterations for a single run of the k-means algorithm. The algorithm stops when it either converges or reaches this limit.
num_init: Number of independent runs of k-means with different centroid seeds. The best result is selected.
label_column: Name of the output column for cluster IDs. Default is "cluster_label".
centroid_column: If provided, adds a column with this name containing the centroid embedding
for each row's assigned cluster.
Returns:
A DataFrame with all original columns plus:
- `<label_column>`: integer cluster assignment (0 to num_clusters - 1)
- `<centroid_column>`: cluster centroid embedding, if specified
Example: Basic clustering
```python
# Cluster customer feedback and add cluster metadata
clustered_df = df.semantic.with_cluster_labels("feedback_embeddings", num_clusters=5)
# Then use regular operations to analyze clusters
clustered_df.group_by("cluster_label").agg(count("*"), avg("rating"))
```
Example: Filter outliers using centroids
```python
# Cluster and filter out rows far from their centroid
clustered_df = df.semantic.with_cluster_labels("embeddings", num_clusters=3, num_init=10, centroid_column="cluster_centroid")
clean_df = clustered_df.filter(
embedding.compute_similarity("embeddings", "cluster_centroid", metric="cosine") > 0.7
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "by", "num_clusters", "max_iter", "num_init", "label_column", "centroid_column"]
Returns: DataFrame
Parent Class: SemanticExtensions
|
method
|
join
|
fenic.api.dataframe.semantic_extensions.SemanticExtensions.join
|
Performs a semantic join between two DataFrames using a natural language predicate.
This method evaluates a boolean predicate for each potential row pair between the two DataFrames,
including only those pairs where the predicate evaluates to True.
The join process:
1. For each row in the left DataFrame, evaluates the predicate in the jinja template against each row in the right DataFrame
2. Includes row pairs where the predicate returns True
3. Excludes row pairs where the predicate returns False
4. Returns a new DataFrame containing all columns from both DataFrames for the matched pairs
The jinja template must use exactly two column placeholders:
- One from the left DataFrame: `{{ left_on }}`
- One from the right DataFrame: `{{ right_on }}`
Args:
other: The DataFrame to join with.
predicate: A Jinja2 template containing the natural language predicate.
Must include placeholders for exactly one column from each DataFrame.
The template is evaluated as a boolean - True includes the pair, False excludes it.
left_on: The column from the left DataFrame (self) to use in the join predicate.
right_on: The column from the right DataFrame (other) to use in the join predicate.
strict: If True, when either the left_on or right_on column has a None value for a row pair,
that pair is automatically excluded from the join (predicate is not evaluated).
If False, None values are rendered according to Jinja2's null rendering behavior.
Default is True.
examples: Optional JoinExampleCollection containing labeled examples to guide the join.
Each example should have:
- left: Sample value from the left column
- right: Sample value from the right column
- output: Boolean indicating whether this pair should be joined (True) or not (False)
model_alias: Optional alias for the language model to use. If None, uses the default model.
Returns:
DataFrame: A new DataFrame containing matched row pairs with all columns from both DataFrames.
Example: Basic semantic join
```python
# Match job listings with candidate resumes based on title/skills
# Only includes pairs where the predicate evaluates to True
df_jobs.semantic.join(df_resumes,
predicate=dedent(''' Job Description: {{left_on}}
Candidate Background: {{right_on}}
The candidate is qualified for the job.'''),
left_on=col("job_description"),
right_on=col("work_experience"),
examples=examples
)
```
Example: Semantic join with examples
```python
# Improve join quality with examples
examples = JoinExampleCollection()
examples.create_example(JoinExample(
left="5 years experience building backend services in Python using asyncio, FastAPI, and PostgreSQL",
right="Senior Software Engineer - Backend",
output=True)) # This pair WILL be included in similar cases
examples.create_example(JoinExample(
left="5 years experience with growth strategy, private equity due diligence, and M&A",
right="Product Manager - Hardware",
output=False)) # This pair will NOT be included in similar cases
df_jobs.semantic.join(
other=df_resumes,
predicate=dedent(''' Job Description: {{left_on}}
Candidate Background: {{right_on}}
The candidate is qualified for the job.'''),
left_on=col("job_description"),
right_on=col("work_experience"),
examples=examples
)
```
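Example: Rendering None values with strict=False
A minimal sketch of the `strict` flag; DataFrames and columns follow the examples above.
```python
# With strict=False, pairs with a None join value are still evaluated;
# Jinja2 renders the missing value instead of the pair being dropped.
df_jobs.semantic.join(
    df_resumes,
    predicate=dedent(''' Job Description: {{left_on}}
    Candidate Background: {{right_on}}
    The candidate is qualified for the job.'''),
    left_on=col("job_description"),
    right_on=col("work_experience"),
    strict=False
)
```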
|
site-packages/fenic/api/dataframe/semantic_extensions.py
| true | false | 132 | 251 | null |
DataFrame
|
[
"self",
"other",
"predicate",
"left_on",
"right_on",
"strict",
"examples",
"model_alias"
] |
SemanticExtensions
| null | null |
Type: method
Member Name: join
Qualified Name: fenic.api.dataframe.semantic_extensions.SemanticExtensions.join
Docstring: Performs a semantic join between two DataFrames using a natural language predicate.
This method evaluates a boolean predicate for each potential row pair between the two DataFrames,
including only those pairs where the predicate evaluates to True.
The join process:
1. For each row in the left DataFrame, evaluates the predicate in the jinja template against each row in the right DataFrame
2. Includes row pairs where the predicate returns True
3. Excludes row pairs where the predicate returns False
4. Returns a new DataFrame containing all columns from both DataFrames for the matched pairs
The jinja template must use exactly two column placeholders:
- One from the left DataFrame: `{{ left_on }}`
- One from the right DataFrame: `{{ right_on }}`
Args:
other: The DataFrame to join with.
predicate: A Jinja2 template containing the natural language predicate.
Must include placeholders for exactly one column from each DataFrame.
The template is evaluated as a boolean - True includes the pair, False excludes it.
left_on: The column from the left DataFrame (self) to use in the join predicate.
right_on: The column from the right DataFrame (other) to use in the join predicate.
strict: If True, when either the left_on or right_on column has a None value for a row pair,
that pair is automatically excluded from the join (predicate is not evaluated).
If False, None values are rendered according to Jinja2's null rendering behavior.
Default is True.
examples: Optional JoinExampleCollection containing labeled examples to guide the join.
Each example should have:
- left: Sample value from the left column
- right: Sample value from the right column
- output: Boolean indicating whether this pair should be joined (True) or not (False)
model_alias: Optional alias for the language model to use. If None, uses the default model.
Returns:
DataFrame: A new DataFrame containing matched row pairs with all columns from both DataFrames.
Example: Basic semantic join
```python
# Match job listings with candidate resumes based on title/skills
# Only includes pairs where the predicate evaluates to True
df_jobs.semantic.join(df_resumes,
predicate=dedent(''' Job Description: {{left_on}}
Candidate Background: {{right_on}}
The candidate is qualified for the job.'''),
left_on=col("job_description"),
right_on=col("work_experience"),
examples=examples
)
```
Example: Semantic join with examples
```python
# Improve join quality with examples
examples = JoinExampleCollection()
examples.create_example(JoinExample(
left="5 years experience building backend services in Python using asyncio, FastAPI, and PostgreSQL",
right="Senior Software Engineer - Backend",
output=True)) # This pair WILL be included in similar cases
examples.create_example(JoinExample(
left="5 years experience with growth strategy, private equity due diligence, and M&A",
right="Product Manager - Hardware",
output=False)) # This pair will NOT be included in similar cases
df_jobs.semantic.join(
other=df_resumes,
predicate=dedent(''' Job Description: {{left_on}}
Candidate Background: {{right_on}}
The candidate is qualified for the job.'''),
left_on=col("job_description"),
right_on=col("work_experience"),
examples=examples
)
```
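Example: Rendering None values with strict=False
A minimal sketch of the `strict` flag; DataFrames and columns follow the examples above.
```python
# With strict=False, pairs with a None join value are still evaluated;
# Jinja2 renders the missing value instead of the pair being dropped.
df_jobs.semantic.join(
    df_resumes,
    predicate=dedent(''' Job Description: {{left_on}}
    Candidate Background: {{right_on}}
    The candidate is qualified for the job.'''),
    left_on=col("job_description"),
    right_on=col("work_experience"),
    strict=False
)
```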
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other", "predicate", "left_on", "right_on", "strict", "examples", "model_alias"]
Returns: DataFrame
Parent Class: SemanticExtensions
|
method
|
sim_join
|
fenic.api.dataframe.semantic_extensions.SemanticExtensions.sim_join
|
Performs a semantic similarity join between two DataFrames using embedding expressions.
For each row in the left DataFrame, returns the top `k` most semantically similar rows
from the right DataFrame based on the specified similarity metric.
Args:
other: The right-hand DataFrame to join with.
left_on: Expression or column representing embeddings in the left DataFrame.
right_on: Expression or column representing embeddings in the right DataFrame.
k: Number of most similar matches to return per row.
similarity_metric: Similarity metric to use: "l2", "cosine", or "dot".
similarity_score_column: If set, adds a column with this name containing similarity scores.
If None, the scores are omitted.
Returns:
A DataFrame containing one row for each of the top-k matches per row in the left DataFrame.
The result includes all columns from both DataFrames, optionally augmented with a similarity score column
if `similarity_score_column` is provided.
Raises:
ValidationError: If `k` is not positive or if the columns are invalid.
ValidationError: If `similarity_metric` is not one of "l2", "cosine", "dot"
Example: Match queries to FAQ entries
```python
# Match customer queries to FAQ entries
df_queries.semantic.sim_join(
df_faqs,
left_on=embeddings(col("query_text")),
right_on=embeddings(col("faq_question")),
k=1
)
```
Example: Link headlines to articles
```python
# Link news headlines to full articles
df_headlines.semantic.sim_join(
df_articles,
left_on=embeddings(col("headline")),
right_on=embeddings(col("content")),
k=3,
similarity_score_column="similarity_score"
)
```
Example: Find similar job postings
```python
# Find similar job postings across two sources
df_linkedin.semantic.sim_join(
df_indeed,
left_on=embeddings(col("job_title")),
right_on=embeddings(col("job_description")),
k=2
)
```
|
site-packages/fenic/api/dataframe/semantic_extensions.py
| true | false | 253 | 365 | null |
DataFrame
|
[
"self",
"other",
"left_on",
"right_on",
"k",
"similarity_metric",
"similarity_score_column"
] |
SemanticExtensions
| null | null |
Type: method
Member Name: sim_join
Qualified Name: fenic.api.dataframe.semantic_extensions.SemanticExtensions.sim_join
Docstring: Performs a semantic similarity join between two DataFrames using embedding expressions.
For each row in the left DataFrame, returns the top `k` most semantically similar rows
from the right DataFrame based on the specified similarity metric.
Args:
other: The right-hand DataFrame to join with.
left_on: Expression or column representing embeddings in the left DataFrame.
right_on: Expression or column representing embeddings in the right DataFrame.
k: Number of most similar matches to return per row.
similarity_metric: Similarity metric to use: "l2", "cosine", or "dot".
similarity_score_column: If set, adds a column with this name containing similarity scores.
If None, the scores are omitted.
Returns:
A DataFrame containing one row for each of the top-k matches per row in the left DataFrame.
The result includes all columns from both DataFrames, optionally augmented with a similarity score column
if `similarity_score_column` is provided.
Raises:
ValidationError: If `k` is not positive or if the columns are invalid.
ValidationError: If `similarity_metric` is not one of "l2", "cosine", "dot"
Example: Match queries to FAQ entries
```python
# Match customer queries to FAQ entries
df_queries.semantic.sim_join(
df_faqs,
left_on=embeddings(col("query_text")),
right_on=embeddings(col("faq_question")),
k=1
)
```
Example: Link headlines to articles
```python
# Link news headlines to full articles
df_headlines.semantic.sim_join(
df_articles,
left_on=embeddings(col("headline")),
right_on=embeddings(col("content")),
k=3,
similarity_score_column="similarity_score"
)
```
Example: Find similar job postings
```python
# Find similar job postings across two sources
df_linkedin.semantic.sim_join(
df_indeed,
left_on=embeddings(col("job_title")),
right_on=embeddings(col("job_description")),
k=2
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "other", "left_on", "right_on", "k", "similarity_metric", "similarity_score_column"]
Returns: DataFrame
Parent Class: SemanticExtensions
|
module
|
io
|
fenic.api.io
|
IO module for reading and writing DataFrames to external storage.
|
site-packages/fenic/api/io/__init__.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: io
Qualified Name: fenic.api.io
Docstring: IO module for reading and writing DataFrames to external storage.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
__all__
|
fenic.api.io.__all__
| null |
site-packages/fenic/api/io/__init__.py
| false | false | 6 | 6 | null | null | null | null |
['DataFrameReader', 'DataFrameWriter']
| null |
Type: attribute
Member Name: __all__
Qualified Name: fenic.api.io.__all__
Docstring: none
Value: ['DataFrameReader', 'DataFrameWriter']
Annotation: none
is Public? : false
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
module
|
reader
|
fenic.api.io.reader
|
Reader interface for loading DataFrames from external storage systems.
|
site-packages/fenic/api/io/reader.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: reader
Qualified Name: fenic.api.io.reader
Docstring: Reader interface for loading DataFrames from external storage systems.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
class
|
DataFrameReader
|
fenic.api.io.reader.DataFrameReader
|
Interface used to load a DataFrame from external storage systems.
Similar to PySpark's DataFrameReader.
Supported External Storage Schemes:
- Amazon S3 (s3://)
- Format: s3://{bucket_name}/{path_to_file}
- Notes:
- Uses boto3 to acquire AWS credentials.
- Examples:
- s3://my-bucket/data.csv
- s3://my-bucket/data/*.parquet
- Hugging Face Datasets (hf://)
- Format: hf://{repo_type}/{repo_id}/{path_to_file}
- Notes:
- Supports glob patterns (*, **)
- Supports dataset revisions and branch aliases (e.g., @refs/convert/parquet, @~parquet)
- HF_TOKEN environment variable is required to read private datasets.
- Examples:
- hf://datasets/datasets-examples/doc-formats-csv-1/data.csv
- hf://datasets/cais/mmlu/astronomy/*.parquet
- hf://datasets/datasets-examples/doc-formats-csv-1@~parquet/**/*.parquet
- Local Files (file:// or implicit)
- Format: file://{absolute_or_relative_path}
- Notes:
- Paths without a scheme (e.g., ./data.csv or /tmp/data.parquet) are treated as local files
- Examples:
- file:///home/user/data.csv
- ./data/*.parquet
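Example: Reading from each supported scheme
A minimal sketch; the bucket, dataset, and local paths below are illustrative.
```python
# Amazon S3 (credentials resolved via boto3)
df_s3 = session.read.parquet("s3://my-bucket/data/*.parquet")
# Hugging Face dataset (set HF_TOKEN for private datasets)
df_hf = session.read.csv("hf://datasets/datasets-examples/doc-formats-csv-1/data.csv")
# Local files (scheme optional)
df_local = session.read.csv("./data/*.csv")
```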
|
site-packages/fenic/api/io/reader.py
| true | false | 21 | 335 | null | null | null | null | null |
[] |
Type: class
Member Name: DataFrameReader
Qualified Name: fenic.api.io.reader.DataFrameReader
Docstring: Interface used to load a DataFrame from external storage systems.
Similar to PySpark's DataFrameReader.
Supported External Storage Schemes:
- Amazon S3 (s3://)
- Format: s3://{bucket_name}/{path_to_file}
- Notes:
- Uses boto3 to acquire AWS credentials.
- Examples:
- s3://my-bucket/data.csv
- s3://my-bucket/data/*.parquet
- Hugging Face Datasets (hf://)
- Format: hf://{repo_type}/{repo_id}/{path_to_file}
- Notes:
- Supports glob patterns (*, **)
- Supports dataset revisions and branch aliases (e.g., @refs/convert/parquet, @~parquet)
- HF_TOKEN environment variable is required to read private datasets.
- Examples:
- hf://datasets/datasets-examples/doc-formats-csv-1/data.csv
- hf://datasets/cais/mmlu/astronomy/*.parquet
- hf://datasets/datasets-examples/doc-formats-csv-1@~parquet/**/*.parquet
- Local Files (file:// or implicit)
- Format: file://{absolute_or_relative_path}
- Notes:
- Paths without a scheme (e.g., ./data.csv or /tmp/data.parquet) are treated as local files
- Examples:
- file:///home/user/data.csv
- ./data/*.parquet
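Example: Reading from each supported scheme
A minimal sketch; the bucket, dataset, and local paths below are illustrative.
```python
# Amazon S3 (credentials resolved via boto3)
df_s3 = session.read.parquet("s3://my-bucket/data/*.parquet")
# Hugging Face dataset (set HF_TOKEN for private datasets)
df_hf = session.read.csv("hf://datasets/datasets-examples/doc-formats-csv-1/data.csv")
# Local files (scheme optional)
df_local = session.read.csv("./data/*.csv")
```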
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
method
|
__init__
|
fenic.api.io.reader.DataFrameReader.__init__
|
Creates a DataFrameReader.
Args:
session_state: The session state to use for reading
|
site-packages/fenic/api/io/reader.py
| true | false | 60 | 67 | null | null |
[
"self",
"session_state"
] |
DataFrameReader
| null | null |
Type: method
Member Name: __init__
Qualified Name: fenic.api.io.reader.DataFrameReader.__init__
Docstring: Creates a DataFrameReader.
Args:
session_state: The session state to use for reading
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "session_state"]
Returns: none
Parent Class: DataFrameReader
|
method
|
csv
|
fenic.api.io.reader.DataFrameReader.csv
|
Load a DataFrame from one or more CSV files.
Args:
paths: A single file path, a glob pattern (e.g., "data/*.csv"), or a list of paths.
schema: (optional) A complete schema definition of column names and their types. Only primitive types are supported.
- For example:
- Schema([ColumnField(name="id", data_type=IntegerType), ColumnField(name="name", data_type=StringType)])
- If provided, all files must match this schema exactly—all column names must be present, and values must be
convertible to the specified types. Partial schemas are not allowed.
merge_schemas: Whether to merge schemas across all files.
- If True: Column names are unified across files. Missing columns are filled with nulls. Column types are
inferred and widened as needed.
- If False (default): Only accepts columns from the first file. Column types from the first file are
inferred and applied across all files. If subsequent files do not have the same column names and order as the first file, an error is raised.
- The "first file" is defined as:
- The first file in lexicographic order (for glob patterns), or
- The first file in the provided list (for lists of paths).
Notes:
- The first row in each file is assumed to be a header row.
- Delimiters (e.g., comma, tab) are automatically inferred.
- You may specify either `schema` or `merge_schemas=True`, but not both.
- Any date/datetime columns are cast to strings during ingestion.
Raises:
ValidationError: If both `schema` and `merge_schemas=True` are provided.
ValidationError: If any path does not end with `.csv`.
PlanError: If schemas cannot be merged or if there's a schema mismatch when merge_schemas=False.
Example: Read a single CSV file
```python
df = session.read.csv("file.csv")
```
Example: Read multiple CSV files with schema merging
```python
df = session.read.csv("data/*.csv", merge_schemas=True)
```
Example: Read CSV files with explicit schema
```python
df = session.read.csv(
["a.csv", "b.csv"],
schema=Schema([
ColumnField(name="id", data_type=IntegerType),
ColumnField(name="value", data_type=FloatType)
])
)
```
|
site-packages/fenic/api/io/reader.py
| true | false | 69 | 151 | null |
DataFrame
|
[
"self",
"paths",
"schema",
"merge_schemas"
] |
DataFrameReader
| null | null |
Type: method
Member Name: csv
Qualified Name: fenic.api.io.reader.DataFrameReader.csv
Docstring: Load a DataFrame from one or more CSV files.
Args:
paths: A single file path, a glob pattern (e.g., "data/*.csv"), or a list of paths.
schema: (optional) A complete schema definition of column names and their types. Only primitive types are supported.
- For example:
- Schema([ColumnField(name="id", data_type=IntegerType), ColumnField(name="name", data_type=StringType)])
- If provided, all files must match this schema exactly—all column names must be present, and values must be
convertible to the specified types. Partial schemas are not allowed.
merge_schemas: Whether to merge schemas across all files.
- If True: Column names are unified across files. Missing columns are filled with nulls. Column types are
inferred and widened as needed.
- If False (default): Only accepts columns from the first file. Column types from the first file are
inferred and applied across all files. If subsequent files do not have the same column names and order as the first file, an error is raised.
- The "first file" is defined as:
- The first file in lexicographic order (for glob patterns), or
- The first file in the provided list (for lists of paths).
Notes:
- The first row in each file is assumed to be a header row.
- Delimiters (e.g., comma, tab) are automatically inferred.
- You may specify either `schema` or `merge_schemas=True`, but not both.
- Any date/datetime columns are cast to strings during ingestion.
Raises:
ValidationError: If both `schema` and `merge_schemas=True` are provided.
ValidationError: If any path does not end with `.csv`.
PlanError: If schemas cannot be merged or if there's a schema mismatch when merge_schemas=False.
Example: Read a single CSV file
```python
df = session.read.csv("file.csv")
```
Example: Read multiple CSV files with schema merging
```python
df = session.read.csv("data/*.csv", merge_schemas=True)
```
Example: Read CSV files with explicit schema
```python
df = session.read.csv(
["a.csv", "b.csv"],
schema=Schema([
ColumnField(name="id", data_type=IntegerType),
ColumnField(name="value", data_type=FloatType)
])
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "paths", "schema", "merge_schemas"]
Returns: DataFrame
Parent Class: DataFrameReader
|
method
|
parquet
|
fenic.api.io.reader.DataFrameReader.parquet
|
Load a DataFrame from one or more Parquet files.
Args:
paths: A single file path, a glob pattern (e.g., "data/*.parquet"), or a list of paths.
merge_schemas: If True, infers and merges schemas across all files.
Missing columns are filled with nulls, and differing types are widened to a common supertype.
Behavior:
- If `merge_schemas=False` (default), all files must match the schema of the first file exactly.
Subsequent files must contain all columns from the first file with compatible data types.
If any column is missing or has incompatible types, an error is raised.
- If `merge_schemas=True`, column names are unified across all files, and data types are automatically
widened to accommodate all values.
- The "first file" is defined as:
- The first file in lexicographic order (for glob patterns), or
- The first file in the provided list (for lists of paths).
Notes:
- Date and datetime columns are cast to strings during ingestion.
Raises:
ValidationError: If any file does not have a `.parquet` extension.
PlanError: If schemas cannot be merged or if there's a schema mismatch when merge_schemas=False.
Example: Read a single Parquet file
```python
df = session.read.parquet("file.parquet")
```
Example: Read multiple Parquet files
```python
df = session.read.parquet("data/*.parquet")
```
Example: Read Parquet files with schema merging
```python
df = session.read.parquet(["a.parquet", "b.parquet"], merge_schemas=True)
```
|
site-packages/fenic/api/io/reader.py
| true | false | 153 | 202 | null |
DataFrame
|
[
"self",
"paths",
"merge_schemas"
] |
DataFrameReader
| null | null |
Type: method
Member Name: parquet
Qualified Name: fenic.api.io.reader.DataFrameReader.parquet
Docstring: Load a DataFrame from one or more Parquet files.
Args:
paths: A single file path, a glob pattern (e.g., "data/*.parquet"), or a list of paths.
merge_schemas: If True, infers and merges schemas across all files.
Missing columns are filled with nulls, and differing types are widened to a common supertype.
Behavior:
- If `merge_schemas=False` (default), all files must match the schema of the first file exactly.
Subsequent files must contain all columns from the first file with compatible data types.
If any column is missing or has incompatible types, an error is raised.
- If `merge_schemas=True`, column names are unified across all files, and data types are automatically
widened to accommodate all values.
- The "first file" is defined as:
- The first file in lexicographic order (for glob patterns), or
- The first file in the provided list (for lists of paths).
Notes:
- Date and datetime columns are cast to strings during ingestion.
Raises:
ValidationError: If any file does not have a `.parquet` extension.
PlanError: If schemas cannot be merged or if there's a schema mismatch when merge_schemas=False.
Example: Read a single Parquet file
```python
df = session.read.parquet("file.parquet")
```
Example: Read multiple Parquet files
```python
df = session.read.parquet("data/*.parquet")
```
Example: Read Parquet files with schema merging
```python
df = session.read.parquet(["a.parquet", "b.parquet"], merge_schemas=True)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "paths", "merge_schemas"]
Returns: DataFrame
Parent Class: DataFrameReader
|
method
|
_read_file
|
fenic.api.io.reader.DataFrameReader._read_file
|
Internal helper method to read files of a specific format.
Args:
paths: Path(s) to the file(s). Can be a single path or a list of paths.
file_format: Format of the file (e.g., "csv", "parquet").
file_extension: Expected file extension (e.g., ".csv", ".parquet").
**options: Additional options to pass to the file reader.
Returns:
DataFrame loaded from the specified file(s).
Raises:
ValidationError: If any path doesn't end with the expected file extension.
ValidationError: If paths is not a string, Path, or list of strings/Paths.
|
site-packages/fenic/api/io/reader.py
| false | true | 204 | 264 | null |
DataFrame
|
[
"self",
"paths",
"file_format",
"file_extension",
"options"
] |
DataFrameReader
| null | null |
Type: method
Member Name: _read_file
Qualified Name: fenic.api.io.reader.DataFrameReader._read_file
Docstring: Internal helper method to read files of a specific format.
Args:
paths: Path(s) to the file(s). Can be a single path or a list of paths.
file_format: Format of the file (e.g., "csv", "parquet").
file_extension: Expected file extension (e.g., ".csv", ".parquet").
**options: Additional options to pass to the file reader.
Returns:
DataFrame loaded from the specified file(s).
Raises:
ValidationError: If any path doesn't end with the expected file extension.
ValidationError: If paths is not a string, Path, or list of strings/Paths.
Value: none
Annotation: none
is Public? : false
is Private? : true
Parameters: ["self", "paths", "file_format", "file_extension", "options"]
Returns: DataFrame
Parent Class: DataFrameReader
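For orientation only, a hedged sketch of the call shape this helper supports, inferred from the documented signature above; the actual internal usage by the public readers may differ:
```python
# Hypothetical delegation sketch (not the library's actual code): shows how a
# public reader could route through _read_file using the documented parameters.
def csv(self, paths, schema=None, merge_schemas=False):
    # Extension validation and schema checks are handled inside _read_file
    # and the real csv() method; only the delegation shape is shown here.
    return self._read_file(
        paths,
        file_format="csv",
        file_extension=".csv",
        schema=schema,
        merge_schemas=merge_schemas,
    )
```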
|
method
|
docs
|
fenic.api.io.reader.DataFrameReader.docs
|
Load a DataFrame from document files (Markdown or JSON) at the given paths.
Args:
paths: Glob pattern (or list of glob patterns) selecting the files to load.
data_type: Data type that will be used to cast the content of the files.
One of MarkdownType or JsonType.
exclude: A regex pattern used to exclude files.
If not provided, no files are excluded.
recursive: Whether to recursively load files from the folder.
Returns:
DataFrame: A dataframe with all the documents found in the paths.
Each document is a row in the dataframe.
Raises:
ValidationError: If any file does not have a `.md` or `.json` extension, as required by the data_type.
UnsupportedFileTypeError: If the data_type is not supported.
Notes:
- Each row in the dataframe corresponds to a file in the list of paths.
- The dataframe has the following columns:
- file_path: The path to the file.
- error: The error message if the file failed to be loaded.
- content: The content of the file cast to the data_type.
- Recursive loading is supported in conjunction with the '**' glob pattern,
e.g. `data/**/*.md` loads all Markdown files in the `data` folder and all subfolders
when recursive is set to True.
Without `recursive=True`, `**` behaves like a single `*` pattern.
Example: Read all the markdown files in a folder and all its subfolders.
```python
df = session.read.docs("data/docs/**/*.md", data_type=MarkdownType, recursive=True)
```
Example: Read a folder of markdown files excluding some files.
```python
df = session.read.docs("data/docs/*.md", data_type=MarkdownType, exclude=r"\.bak.md$")
```
|
site-packages/fenic/api/io/reader.py
| true | false | 266 | 335 | null |
DataFrame
|
[
"self",
"paths",
"data_type",
"exclude",
"recursive"
] |
DataFrameReader
| null | null |
Type: method
Member Name: docs
Qualified Name: fenic.api.io.reader.DataFrameReader.docs
Docstring: Load a DataFrame from document files (Markdown or JSON) at the given paths.
Args:
paths: Glob pattern (or list of glob patterns) selecting the files to load.
data_type: Data type that will be used to cast the content of the files.
One of MarkdownType or JsonType.
exclude: A regex pattern used to exclude files.
If not provided, no files are excluded.
recursive: Whether to recursively load files from the folder.
Returns:
DataFrame: A dataframe with all the documents found in the paths.
Each document is a row in the dataframe.
Raises:
ValidationError: If any file does not have a `.md` or `.json` extension, as required by the data_type.
UnsupportedFileTypeError: If the data_type is not supported.
Notes:
- Each row in the dataframe corresponds to a file in the list of paths.
- The dataframe has the following columns:
- file_path: The path to the file.
- error: The error message if the file failed to be loaded.
- content: The content of the file cast to the data_type.
- Recursive loading is supported in conjunction with the '**' glob pattern,
e.g. `data/**/*.md` loads all Markdown files in the `data` folder and all subfolders
when recursive is set to True.
Without `recursive=True`, `**` behaves like a single `*` pattern.
Example: Read all the markdown files in a folder and all its subfolders.
```python
df = session.read.docs("data/docs/**/*.md", data_type=MarkdownType, recursive=True)
```
Example: Read a folder of markdown files excluding some files.
```python
df = session.read.docs("data/docs/*.md", data_type=MarkdownType, exclude=r"\.bak.md$")
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "paths", "data_type", "exclude", "recursive"]
Returns: DataFrame
Parent Class: DataFrameReader
|
module
|
writer
|
fenic.api.io.writer
|
Writer interface for saving DataFrames to external storage systems.
|
site-packages/fenic/api/io/writer.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: writer
Qualified Name: fenic.api.io.writer
Docstring: Writer interface for saving DataFrames to external storage systems.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
logger
|
fenic.api.io.writer.logger
| null |
site-packages/fenic/api/io/writer.py
| true | false | 18 | 18 | null | null | null | null |
logging.getLogger(__name__)
| null |
Type: attribute
Member Name: logger
Qualified Name: fenic.api.io.writer.logger
Docstring: none
Value: logging.getLogger(__name__)
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
class
|
DataFrameWriter
|
fenic.api.io.writer.DataFrameWriter
|
Interface used to write a DataFrame to external storage systems.
Similar to PySpark's DataFrameWriter.
Supported External Storage Schemes:
- Amazon S3 (s3://)
- Format: s3://{bucket_name}/{path_to_file}
- Notes:
- Uses boto3 to acquire AWS credentials.
- Examples:
- s3://my-bucket/data.csv
- s3://my-bucket/data/*.parquet
- Local Files (file:// or implicit)
- Format: file://{absolute_or_relative_path}
- Notes:
- Paths without a scheme (e.g., ./data.csv or /tmp/data.parquet) are treated as local files
- Examples:
- file:///home/user/data.csv
- ./data/*.parquet
|
site-packages/fenic/api/io/writer.py
| true | false | 21 | 223 | null | null | null | null | null |
[] |
Type: class
Member Name: DataFrameWriter
Qualified Name: fenic.api.io.writer.DataFrameWriter
Docstring: Interface used to write a DataFrame to external storage systems.
Similar to PySpark's DataFrameWriter.
Supported External Storage Schemes:
- Amazon S3 (s3://)
- Format: s3://{bucket_name}/{path_to_file}
- Notes:
- Uses boto3 to acquire AWS credentials.
- Examples:
- s3://my-bucket/data.csv
- s3://my-bucket/data/*.parquet
- Local Files (file:// or implicit)
- Format: file://{absolute_or_relative_path}
- Notes:
- Paths without a scheme (e.g., ./data.csv or /tmp/data.parquet) are treated as local files
- Examples:
- file:///home/user/data.csv
- ./data/*.parquet
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
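A minimal usage sketch of the documented path schemes, assuming an existing DataFrame `df` (bucket and file names are placeholders):
```python
# Local file via a relative path (no scheme; treated as a local file)
df.write.parquet("./output/data.parquet")

# Local file via an explicit file:// scheme
df.write.csv("file:///tmp/data.csv")

# Amazon S3; AWS credentials are acquired through boto3
df.write.parquet("s3://my-bucket/data.parquet")
```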
|
method
|
__init__
|
fenic.api.io.writer.DataFrameWriter.__init__
|
Initialize a DataFrameWriter.
Args:
dataframe: The DataFrame to write.
|
site-packages/fenic/api/io/writer.py
| true | false | 47 | 53 | null | null |
[
"self",
"dataframe"
] |
DataFrameWriter
| null | null |
Type: method
Member Name: __init__
Qualified Name: fenic.api.io.writer.DataFrameWriter.__init__
Docstring: Initialize a DataFrameWriter.
Args:
dataframe: The DataFrame to write.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "dataframe"]
Returns: none
Parent Class: DataFrameWriter
|
method
|
save_as_table
|
fenic.api.io.writer.DataFrameWriter.save_as_table
|
Saves the content of the DataFrame as the specified table.
Args:
table_name: Name of the table to save to
mode: Write mode. Default is "error".
- error: Raises an error if table exists
- append: Appends data to table if it exists
- overwrite: Overwrites existing table
- ignore: Silently ignores operation if table exists
Returns:
QueryMetrics: The query metrics
Example: Save with error mode (default)
```python
df.write.save_as_table("my_table") # Raises error if table exists
```
Example: Save with append mode
```python
df.write.save_as_table("my_table", mode="append") # Adds to existing table
```
Example: Save with overwrite mode
```python
df.write.save_as_table("my_table", mode="overwrite") # Replaces existing table
```
|
site-packages/fenic/api/io/writer.py
| true | false | 55 | 99 | null |
QueryMetrics
|
[
"self",
"table_name",
"mode"
] |
DataFrameWriter
| null | null |
Type: method
Member Name: save_as_table
Qualified Name: fenic.api.io.writer.DataFrameWriter.save_as_table
Docstring: Saves the content of the DataFrame as the specified table.
Args:
table_name: Name of the table to save to
mode: Write mode. Default is "error".
- error: Raises an error if table exists
- append: Appends data to table if it exists
- overwrite: Overwrites existing table
- ignore: Silently ignores operation if table exists
Returns:
QueryMetrics: The query metrics
Example: Save with error mode (default)
```python
df.write.save_as_table("my_table") # Raises error if table exists
```
Example: Save with append mode
```python
df.write.save_as_table("my_table", mode="append") # Adds to existing table
```
Example: Save with overwrite mode
```python
df.write.save_as_table("my_table", mode="overwrite") # Replaces existing table
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "table_name", "mode"]
Returns: QueryMetrics
Parent Class: DataFrameWriter
|
method
|
save_as_view
|
fenic.api.io.writer.DataFrameWriter.save_as_view
|
Saves the content of the DataFrame as a view.
Args:
view_name: Name of the view to save to
description: Optional human-readable view description to store in the catalog.
Returns:
None.
|
site-packages/fenic/api/io/writer.py
| true | false | 101 | 117 | null |
None
|
[
"self",
"view_name",
"description"
] |
DataFrameWriter
| null | null |
Type: method
Member Name: save_as_view
Qualified Name: fenic.api.io.writer.DataFrameWriter.save_as_view
Docstring: Saves the content of the DataFrame as a view.
Args:
view_name: Name of the view to save to
description: Optional human-readable view description to store in the catalog.
Returns:
None.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "view_name", "description"]
Returns: None
Parent Class: DataFrameWriter
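The docstring carries no usage example; a minimal sketch, assuming an existing DataFrame `df` (view name and description are placeholders):
```python
# Register the DataFrame under a named view in the catalog
df.write.save_as_view(
    "active_users_view",
    description="Users active in the last 30 days",
)
```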
|
method
|
csv
|
fenic.api.io.writer.DataFrameWriter.csv
|
Saves the content of the DataFrame as a single CSV file with comma as the delimiter and headers in the first row.
Args:
file_path: Path to save the CSV file to
mode: Write mode. Default is "overwrite".
- error: Raises an error if file exists
- overwrite: Overwrites the file if it exists
- ignore: Silently ignores operation if file exists
Returns:
QueryMetrics: The query metrics
Example: Save with overwrite mode (default)
```python
df.write.csv("output.csv") # Overwrites if exists
```
Example: Save with error mode
```python
df.write.csv("output.csv", mode="error") # Raises error if exists
```
Example: Save with ignore mode
```python
df.write.csv("output.csv", mode="ignore") # Skips if exists
```
|
site-packages/fenic/api/io/writer.py
| true | false | 119 | 170 | null |
QueryMetrics
|
[
"self",
"file_path",
"mode"
] |
DataFrameWriter
| null | null |
Type: method
Member Name: csv
Qualified Name: fenic.api.io.writer.DataFrameWriter.csv
Docstring: Saves the content of the DataFrame as a single CSV file with comma as the delimiter and headers in the first row.
Args:
file_path: Path to save the CSV file to
mode: Write mode. Default is "overwrite".
- error: Raises an error if file exists
- overwrite: Overwrites the file if it exists
- ignore: Silently ignores operation if file exists
Returns:
QueryMetrics: The query metrics
Example: Save with overwrite mode (default)
```python
df.write.csv("output.csv") # Overwrites if exists
```
Example: Save with error mode
```python
df.write.csv("output.csv", mode="error") # Raises error if exists
```
Example: Save with ignore mode
```python
df.write.csv("output.csv", mode="ignore") # Skips if exists
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "file_path", "mode"]
Returns: QueryMetrics
Parent Class: DataFrameWriter
|
method
|
parquet
|
fenic.api.io.writer.DataFrameWriter.parquet
|
Saves the content of the DataFrame as a single Parquet file.
Args:
file_path: Path to save the Parquet file to
mode: Write mode. Default is "overwrite".
- error: Raises an error if file exists
- overwrite: Overwrites the file if it exists
- ignore: Silently ignores operation if file exists
Returns:
QueryMetrics: The query metrics
Example: Save with overwrite mode (default)
```python
df.write.parquet("output.parquet") # Overwrites if exists
```
Example: Save with error mode
```python
df.write.parquet("output.parquet", mode="error") # Raises error if exists
```
Example: Save with ignore mode
```python
df.write.parquet("output.parquet", mode="ignore") # Skips if exists
```
|
site-packages/fenic/api/io/writer.py
| true | false | 172 | 223 | null |
QueryMetrics
|
[
"self",
"file_path",
"mode"
] |
DataFrameWriter
| null | null |
Type: method
Member Name: parquet
Qualified Name: fenic.api.io.writer.DataFrameWriter.parquet
Docstring: Saves the content of the DataFrame as a single Parquet file.
Args:
file_path: Path to save the Parquet file to
mode: Write mode. Default is "overwrite".
- error: Raises an error if file exists
- overwrite: Overwrites the file if it exists
- ignore: Silently ignores operation if file exists
Returns:
QueryMetrics: The query metrics
Example: Save with overwrite mode (default)
```python
df.write.parquet("output.parquet") # Overwrites if exists
```
Example: Save with error mode
```python
df.write.parquet("output.parquet", mode="error") # Raises error if exists
```
Example: Save with ignore mode
```python
df.write.parquet("output.parquet", mode="ignore") # Skips if exists
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["self", "file_path", "mode"]
Returns: QueryMetrics
Parent Class: DataFrameWriter
|
module
|
mcp
|
fenic.api.mcp
|
MCP Tool Creation/Server Management API.
|
site-packages/fenic/api/mcp/__init__.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: mcp
Qualified Name: fenic.api.mcp
Docstring: MCP Tool Creation/Server Management API.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
__all__
|
fenic.api.mcp.__all__
| null |
site-packages/fenic/api/mcp/__init__.py
| false | false | 10 | 15 | null | null | null | null |
['create_mcp_server', 'run_mcp_server_sync', 'run_mcp_server_async', 'run_mcp_server_asgi']
| null |
Type: attribute
Member Name: __all__
Qualified Name: fenic.api.mcp.__all__
Docstring: none
Value: ['create_mcp_server', 'run_mcp_server_sync', 'run_mcp_server_async', 'run_mcp_server_asgi']
Annotation: none
is Public? : false
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
module
|
server
|
fenic.api.mcp.server
|
Create MCP servers using Fenic DataFrames.
This module exposes helpers to:
- Build a Fenic-backed MCP server from datasets and tools
- Run the server synchronously or asynchronously
|
site-packages/fenic/api/mcp/server.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: server
Qualified Name: fenic.api.mcp.server
Docstring: Create MCP servers using Fenic DataFrames.
This module exposes helpers to:
- Build a Fenic-backed MCP server from datasets and tools
- Run the server synchronously or asynchronously
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
function
|
create_mcp_server
|
fenic.api.mcp.server.create_mcp_server
|
Create an MCP server from datasets and tools.
Args:
session: Fenic session used to execute tools.
server_name: Name of the MCP server.
tools: Tools to register (optional).
concurrency_limit: Maximum number of concurrent tool executions.
|
site-packages/fenic/api/mcp/server.py
| true | false | 15 | 34 | null |
FenicMCPServer
|
[
"session",
"server_name",
"tools",
"concurrency_limit"
] | null | null | null |
Type: function
Member Name: create_mcp_server
Qualified Name: fenic.api.mcp.server.create_mcp_server
Docstring: Create an MCP server from datasets and tools.
Args:
session: Fenic session used to execute tools.
server_name: Name of the MCP server.
tools: Tools to register (optional).
concurrency_limit: Maximum number of concurrent tool executions.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["session", "server_name", "tools", "concurrency_limit"]
Returns: FenicMCPServer
Parent Class: none
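A minimal sketch, assuming an existing fenic session `session` and a previously constructed list of tools `tools` (tool creation is not shown; `concurrency_limit=8` is an arbitrary choice):
```python
from fenic.api.mcp import create_mcp_server

# Build a Fenic-backed MCP server; concurrency_limit caps parallel tool runs
server = create_mcp_server(
    session,
    server_name="fenic-analytics",
    tools=tools,
    concurrency_limit=8,
)
```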
|
function
|
run_mcp_server_asgi
|
fenic.api.mcp.server.run_mcp_server_asgi
|
Run an MCP server as a Starlette ASGI app.
Returns a Starlette ASGI app that can be integrated into any ASGI server.
This is useful for running the MCP server in a production environment, or for embedding it in a larger application.
Args:
server: MCP server to run.
stateless_http: If True, use stateless HTTP.
port: Port to listen on.
host: Host to listen on.
path: Path to listen on.
kwargs: Additional transport-specific arguments to pass to FastMCP.
Notes:
Some additional possible keyword arguments:
- `middleware`: A list of Starlette `ASGIMiddleware` middleware to apply to the app.
|
site-packages/fenic/api/mcp/server.py
| true | false | 36 | 62 | null | null |
[
"server",
"stateless_http",
"port",
"host",
"path",
"kwargs"
] | null | null | null |
Type: function
Member Name: run_mcp_server_asgi
Qualified Name: fenic.api.mcp.server.run_mcp_server_asgi
Docstring: Run an MCP server as a Starlette ASGI app.
Returns a Starlette ASGI app that can be integrated into any ASGI server.
This is useful for running the MCP server in a production environment, or for embedding it in a larger application.
Args:
server: MCP server to run.
stateless_http: If True, use stateless HTTP.
port: Port to listen on.
host: Host to listen on.
path: Path to listen on.
kwargs: Additional transport-specific arguments to pass to FastMCP.
Notes:
Some additional possible keyword arguments:
- `middleware`: A list of Starlette `ASGIMiddleware` middleware to apply to the app.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["server", "stateless_http", "port", "host", "path", "kwargs"]
Returns: none
Parent Class: none
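A minimal sketch, assuming `server` was built with `create_mcp_server` and that uvicorn is installed as the ASGI server (uvicorn is an assumption, not part of fenic):
```python
import uvicorn

from fenic.api.mcp import run_mcp_server_asgi

# Build a Starlette ASGI app wrapping the MCP server
app = run_mcp_server_asgi(
    server,
    stateless_http=True,
    port=8000,
    host="127.0.0.1",
    path="/mcp",
)

# Hand the app to any ASGI server
uvicorn.run(app, host="127.0.0.1", port=8000)
```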
|
function
|
run_mcp_server_sync
|
fenic.api.mcp.server.run_mcp_server_sync
|
Run an MCP server synchronously.
Use this when calling from synchronous code. This creates a new event loop and runs the server in it.
Args:
server: MCP server to run.
transport: Transport protocol (http, stdio).
stateless_http: If True, use stateless HTTP.
port: Port to listen on.
host: Host to listen on.
path: Path to listen on.
kwargs: Additional transport-specific arguments to pass to FastMCP.
|
site-packages/fenic/api/mcp/server.py
| true | false | 64 | 87 | null | null |
[
"server",
"transport",
"stateless_http",
"port",
"host",
"path",
"kwargs"
] | null | null | null |
Type: function
Member Name: run_mcp_server_sync
Qualified Name: fenic.api.mcp.server.run_mcp_server_sync
Docstring: Run an MCP server synchronously.
Use this when calling from synchronous code. This creates a new event loop and runs the server in it.
Args:
server: MCP server to run.
transport: Transport protocol (http, stdio).
stateless_http: If True, use stateless HTTP.
port: Port to listen on.
host: Host to listen on.
path: Path to listen on.
kwargs: Additional transport-specific arguments to pass to FastMCP.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["server", "transport", "stateless_http", "port", "host", "path", "kwargs"]
Returns: none
Parent Class: none
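A minimal sketch for a synchronous entry point, assuming `server` was created as above; this call blocks while the server runs:
```python
from fenic.api.mcp import run_mcp_server_sync

# Creates its own event loop and blocks the current thread
run_mcp_server_sync(
    server,
    transport="http",
    stateless_http=True,
    port=8000,
    host="127.0.0.1",
    path="/mcp",
)
```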
|
function
|
run_mcp_server_async
|
fenic.api.mcp.server.run_mcp_server_async
|
Run an MCP server asynchronously.
Use this when calling from asynchronous code. This does not create a new event loop.
Args:
server: MCP server to run.
transport: Transport protocol (http, stdio).
stateless_http: If True, use stateless HTTP.
port: Port to listen on.
host: Host to listen on.
path: Path to listen on.
kwargs: Additional transport-specific arguments to pass to FastMCP.
|
site-packages/fenic/api/mcp/server.py
| true | false | 90 | 113 | null | null |
[
"server",
"transport",
"stateless_http",
"port",
"host",
"path",
"kwargs"
] | null | null | null |
Type: function
Member Name: run_mcp_server_async
Qualified Name: fenic.api.mcp.server.run_mcp_server_async
Docstring: Run an MCP server asynchronously.
Use this when calling from asynchronous code. This does not create a new event loop.
Args:
server: MCP server to run.
transport: Transport protocol (http, stdio).
stateless_http: If True, use stateless HTTP.
port: Port to listen on.
host: Host to listen on.
path: Path to listen on.
kwargs: Additional transport-specific arguments to pass to FastMCP.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["server", "transport", "stateless_http", "port", "host", "path", "kwargs"]
Returns: none
Parent Class: none
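A minimal sketch for use from asynchronous code, assuming `server` exists and that the function returns an awaitable (hedged; the docstring states only that no new event loop is created):
```python
import asyncio

from fenic.api.mcp import run_mcp_server_async

async def main():
    # Runs on the caller's event loop instead of creating a new one
    await run_mcp_server_async(
        server,
        transport="http",
        stateless_http=True,
        port=8000,
        host="127.0.0.1",
        path="/mcp",
    )

asyncio.run(main())
```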
|
module
|
functions
|
fenic.api.functions
|
Functions for working with DataFrame columns.
|
site-packages/fenic/api/functions/__init__.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: functions
Qualified Name: fenic.api.functions
Docstring: Functions for working with DataFrame columns.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
attribute
|
__all__
|
fenic.api.functions.__all__
| null |
site-packages/fenic/api/functions/__init__.py
| false | false | 34 | 77 | null | null | null | null |
['semantic', 'text', 'embedding', 'array', 'array_agg', 'async_udf', 'avg', 'collect_list', 'coalesce', 'count', 'json', 'markdown', 'max', 'mean', 'min', 'struct', 'sum', 'udf', 'col', 'lit', 'array_size', 'array_contains', 'asc', 'asc_nulls_first', 'asc_nulls_last', 'desc', 'desc_nulls_first', 'desc_nulls_last', 'extract', 'token_chunk', 'concat', 'concat_ws', 'array_join', 'replace', 'when', 'first', 'stddev', 'greatest', 'least', 'empty', 'null', 'tool_param']
| null |
Type: attribute
Member Name: __all__
Qualified Name: fenic.api.functions.__all__
Docstring: none
Value: ['semantic', 'text', 'embedding', 'array', 'array_agg', 'async_udf', 'avg', 'collect_list', 'coalesce', 'count', 'json', 'markdown', 'max', 'mean', 'min', 'struct', 'sum', 'udf', 'col', 'lit', 'array_size', 'array_contains', 'asc', 'asc_nulls_first', 'asc_nulls_last', 'desc', 'desc_nulls_first', 'desc_nulls_last', 'extract', 'token_chunk', 'concat', 'concat_ws', 'array_join', 'replace', 'when', 'first', 'stddev', 'greatest', 'least', 'empty', 'null', 'tool_param']
Annotation: none
is Public? : false
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
module
|
semantic
|
fenic.api.functions.semantic
|
Semantic functions for Fenic DataFrames - LLM-based operations.
|
site-packages/fenic/api/functions/semantic.py
| true | false | null | null | null | null | null | null | null | null |
Type: module
Member Name: semantic
Qualified Name: fenic.api.functions.semantic
Docstring: Semantic functions for Fenic DataFrames - LLM-based operations.
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: none
Returns: none
Parent Class: none
|
function
|
map
|
fenic.api.functions.semantic.map
|
Applies a generation prompt to one or more columns, enabling rich summarization and generation tasks.
Args:
prompt: A Jinja2 template for the generation prompt. References column
values using {{ column_name }} syntax. Each placeholder is replaced with the
corresponding value from the current row during execution.
strict: If True, when any of the provided columns has a None value for a row,
the entire row's output will be None (template is not rendered).
If False, None values are handled using Jinja2's null rendering behavior.
Default is True.
examples: Optional few-shot examples to guide the model's output format and style.
response_format: Optional Pydantic model to enforce structured output. Must include descriptions for each field.
model_alias: Optional language model alias. If None, uses the default model.
temperature: Language model temperature (default: 0.0).
max_output_tokens: Maximum tokens to generate (default: 512).
**columns: Named column arguments that correspond to template variables.
Keys must match the variable names used in the template.
Returns:
Column: A column expression representing the semantic mapping operation.
Example: Mapping without examples
```python
fc.semantic.map(
"Write a compelling one-line description for {{ name }}: {{ details }}",
name=fc.col("name"),
details=fc.col("details")
)
```
Example: Mapping with few-shot examples
```python
examples = MapExampleCollection()
examples.create_example(MapExample(
input={"name": "GlowMate", "details": "A rechargeable bedside lamp with adjustable color temperatures, touch controls, and a sleek minimalist design."},
output="The modern touch-controlled lamp for better sleep and style."
))
examples.create_example(MapExample(
input={"name": "AquaPure", "details": "A compact water filter that attaches to your faucet, removes over 99% of contaminants, and improves taste instantly."},
output="Clean, great-tasting water straight from your tap."
))
fc.semantic.map(
"Write a compelling one-line description for {{ name }}: {{ details }}",
name=fc.col("name"),
details=fc.col("details"),
examples=examples
)
```
|
site-packages/fenic/api/functions/semantic.py
| true | false | 38 | 136 | null |
Column
|
[
"prompt",
"strict",
"examples",
"response_format",
"model_alias",
"temperature",
"max_output_tokens",
"columns"
] | null | null | null |
Type: function
Member Name: map
Qualified Name: fenic.api.functions.semantic.map
Docstring: Applies a generation prompt to one or more columns, enabling rich summarization and generation tasks.
Args:
prompt: A Jinja2 template for the generation prompt. References column
values using {{ column_name }} syntax. Each placeholder is replaced with the
corresponding value from the current row during execution.
strict: If True, when any of the provided columns has a None value for a row,
the entire row's output will be None (template is not rendered).
If False, None values are handled using Jinja2's null rendering behavior.
Default is True.
examples: Optional few-shot examples to guide the model's output format and style.
response_format: Optional Pydantic model to enforce structured output. Must include descriptions for each field.
model_alias: Optional language model alias. If None, uses the default model.
temperature: Language model temperature (default: 0.0).
max_output_tokens: Maximum tokens to generate (default: 512).
**columns: Named column arguments that correspond to template variables.
Keys must match the variable names used in the template.
Returns:
Column: A column expression representing the semantic mapping operation.
Example: Mapping without examples
```python
fc.semantic.map(
"Write a compelling one-line description for {{ name }}: {{ details }}",
name=fc.col("name"),
details=fc.col("details")
)
```
Example: Mapping with few-shot examples
```python
examples = MapExampleCollection()
examples.create_example(MapExample(
input={"name": "GlowMate", "details": "A rechargeable bedside lamp with adjustable color temperatures, touch controls, and a sleek minimalist design."},
output="The modern touch-controlled lamp for better sleep and style."
))
examples.create_example(MapExample(
input={"name": "AquaPure", "details": "A compact water filter that attaches to your faucet, removes over 99% of contaminants, and improves taste instantly."},
output="Clean, great-tasting water straight from your tap."
))
fc.semantic.map(
"Write a compelling one-line description for {{ name }}: {{ details }}",
name=fc.col("name"),
details=fc.col("details"),
examples=examples
)
```
Value: none
Annotation: none
is Public? : true
is Private? : false
Parameters: ["prompt", "strict", "examples", "response_format", "model_alias", "temperature", "max_output_tokens", "columns"]
Returns: Column
Parent Class: none
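The docstring documents `response_format` but shows no example of it; a hedged sketch of structured output via a Pydantic model (model and field names are illustrative, and `import fenic as fc` is assumed as in the examples above):
```python
from pydantic import BaseModel, Field

import fenic as fc

class ProductPitch(BaseModel):
    tagline: str = Field(description="One-line marketing tagline")
    audience: str = Field(description="Primary target audience for the product")

# Each row's model output is constrained to the ProductPitch schema
fc.semantic.map(
    "Write a pitch for {{ name }}: {{ details }}",
    name=fc.col("name"),
    details=fc.col("details"),
    response_format=ProductPitch,
)
```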
|