---
license: apache-2.0
task_categories:
- text-generation
- question-answering
language:
- en
tags:
- code
- api-documentation
- dataframe
- semantic-ai
- fenic
pretty_name: Fenic 0.4.0 API Documentation
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: api
    path: "api_df.parquet"
  - split: hierarchy
    path: "hierarchy_df.parquet"
  - split: summary
    path: "fenic_summary.parquet"
---

# Fenic 0.4.0 API Documentation Dataset

## Dataset Description

This dataset contains comprehensive API documentation for [Fenic 0.4.0](https://github.com/typedef-ai/fenic), a PySpark-inspired DataFrame framework designed for building production AI and agentic applications. The dataset provides structured information about all public and private API elements, including modules, classes, functions, methods, and attributes.

### Dataset Summary

[Fenic](https://github.com/typedef-ai/fenic) is a DataFrame framework that combines traditional data processing capabilities with semantic/AI operations. It provides:

- A familiar DataFrame API similar to PySpark
- Semantic functions powered by LLMs (map, extract, classify, etc.)
- Integration with multiple AI model providers (Anthropic, OpenAI, Google, Cohere)
- Advanced features like semantic joins and clustering

The dataset captures the complete API surface of Fenic 0.4.0, making it valuable for:

- Code generation and understanding
- API documentation analysis
- Framework comparison studies
- Training models on DataFrame/data-processing APIs
## Dataset Structure

The dataset consists of three Parquet files:

### 1. `api_df.parquet` (2,522 rows × 16 columns)

Main API documentation with detailed information about each API element.

**Columns:**

- `type`: Element type (module, class, function, method, attribute)
- `name`: Element name
- `qualified_name`: Fully qualified name (e.g., `fenic.api.dataframe.DataFrame`)
- `docstring`: Documentation string
- `filepath`: Source file path
- `is_public`: Whether the element is public
- `is_private`: Whether the element is private
- `line_start`: Starting line number in source
- `line_end`: Ending line number in source
- `annotation`: Type annotation
- `returns`: Return type annotation
- `parameters`: Function/method parameters
- `parent_class`: Parent class for methods
- `value`: Value for attributes
- `bases`: Base classes for class definitions
- `api_element_summary`: Formatted summary of the element
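The `type` and `is_public` columns make simple structural queries easy even without a DataFrame library; a tiny stand-alone sketch (the records below are invented for illustration, not rows from the dataset):

```python
# Toy records mimicking a few api_df rows (values are illustrative only).
records = [
    {"type": "class", "name": "DataFrame",
     "qualified_name": "fenic.api.dataframe.DataFrame", "is_public": True},
    {"type": "method", "name": "select",
     "qualified_name": "fenic.api.dataframe.DataFrame.select", "is_public": True},
    {"type": "method", "name": "_build_plan",
     "qualified_name": "fenic.api.dataframe.DataFrame._build_plan", "is_public": False},
]

# Keep only public methods -- the same shape of query used in the
# full usage examples further down this card.
public_methods = [r for r in records if r["type"] == "method" and r["is_public"]]
print([r["name"] for r in public_methods])  # ['select']
```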
### 2. `hierarchy_df.parquet` (2,522 rows × 18 columns)

The same rows as `api_df.parquet`, with two additional hierarchy columns.

**Additional Columns:**

- `path_parts`: List showing the hierarchical path
- `depth`: Depth in the API hierarchy
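The card does not spell out how these two columns are derived, but splitting `qualified_name` on dots reproduces the likely shape; a minimal sketch (the `hierarchy_info` helper is hypothetical, not part of the dataset tooling):

```python
def hierarchy_info(qualified_name: str) -> dict:
    # Split the dotted name into its hierarchical parts;
    # depth counts how far the element sits below the package root.
    parts = qualified_name.split(".")
    return {"path_parts": parts, "depth": len(parts) - 1}

info = hierarchy_info("fenic.api.dataframe.DataFrame")
print(info)  # {'path_parts': ['fenic', 'api', 'dataframe', 'DataFrame'], 'depth': 3}
```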
### 3. `fenic_summary.parquet` (1 row × 1 column)

High-level project summary.

**Columns:**

- `project_summary`: Comprehensive description of the Fenic framework
## Key API Components

### Core DataFrame Operations

- Standard operations: `select`, `filter`, `join`, `group_by`, `agg`, `sort`
- Data conversion: `to_pandas()`, `to_polars()`, `to_arrow()`, `to_pydict()`, `to_pylist()`
- Lazy evaluation with logical query plans

### Semantic Functions (`fenic.api.functions.semantic`)

- `map`: Apply generation prompts to columns
- `extract`: Extract structured data using Pydantic models
- `classify`: Text classification
- `predicate`: Boolean filtering with natural language
- `reduce`: Aggregate strings using natural language instructions
- `analyze_sentiment`: Sentiment analysis
- `summarize`: Text summarization
- `embed`: Generate embeddings

### Advanced Features

- Semantic joins and clustering
- Model client integrations (Anthropic, OpenAI, Google, Cohere)
- Query optimization and execution planning
- MCP (Model Context Protocol) tool generation
## Usage

### Loading with Fenic (Recommended)

[Fenic](https://github.com/typedef-ai/fenic) natively supports loading datasets directly from Hugging Face using the `hf://` scheme:

```python
import fenic as fc

# Create a Fenic session
session = fc.Session.get_or_create(
    fc.SessionConfig(app_name="fenic_api_analysis")
)

# Load the API documentation split
api_df = session.read.parquet("hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/api_df.parquet")

# Or load all splits at once
df = session.read.parquet("hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/*.parquet")

# Explore the dataset
api_df.show(5)
print(f"Total API elements: {api_df.count()}")
print(f"Schema: {api_df.schema}")

# Example: Find all public DataFrame methods
dataframe_methods = api_df.filter(
    fc.col("qualified_name").contains("fenic.api.dataframe.DataFrame.") &
    (fc.col("type") == "method") &
    (fc.col("is_public") == True)
).select("name", "docstring", "parameters", "returns")

dataframe_methods.show(10)

# Example: Find all semantic functions
semantic_functions = api_df.filter(
    fc.col("qualified_name").contains("fenic.api.functions.semantic.") &
    (fc.col("type") == "function")
).select("name", "qualified_name", "docstring")

semantic_functions.show()

# Get statistics about the codebase
stats = api_df.group_by("type").agg(
    fc.count("*").alias("count")
).order_by(fc.col("count").desc())

print("\nAPI Element Statistics:")
stats.show()

# Search for specific functionality
embedding_apis = api_df.filter(
    fc.col("name").contains("embed") |
    fc.col("docstring").contains("embedding")
).select("type", "qualified_name", "docstring")

print(f"\nFound {embedding_apis.count()} embedding-related APIs")
embedding_apis.show(5)
```
### Loading with Pandas

```python
import pandas as pd
from datasets import load_dataset

# Option 1: Using the Hugging Face datasets library
dataset = load_dataset("YOUR_USERNAME/fenic-api-0.4.0")

# Access the splits and convert them to pandas
api_df = dataset['api'].to_pandas()
hierarchy_df = dataset['hierarchy'].to_pandas()
summary_df = dataset['summary'].to_pandas()

# Option 2: Direct parquet loading
api_df = pd.read_parquet('hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/api_df.parquet')
hierarchy_df = pd.read_parquet('hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/hierarchy_df.parquet')
summary_df = pd.read_parquet('hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/fenic_summary.parquet')

# Example: Find all public DataFrame methods
dataframe_methods = api_df[
    (api_df['qualified_name'].str.startswith('fenic.api.dataframe.DataFrame.')) &
    (api_df['type'] == 'method') &
    (api_df['is_public'] == True)
]

print(f"Found {len(dataframe_methods)} DataFrame methods")
print(dataframe_methods[['name', 'docstring']].head(10))

# Example: Analyze module structure
modules = api_df[api_df['type'] == 'module']
print(f"\nTotal modules: {len(modules)}")
print("Top-level modules:")
# str.count takes a regex, so escape the dot with a raw string
print(modules[modules['qualified_name'].str.count(r'\.') == 1]['name'].unique())

# Example: Find all semantic functions
semantic_functions = api_df[
    (api_df['qualified_name'].str.startswith('fenic.api.functions.semantic.')) &
    (api_df['type'] == 'function')
]

print("\nSemantic functions available:")
for _, func in semantic_functions.iterrows():
    doc_first_line = func['docstring'].split('\n')[0] if pd.notna(func['docstring']) else "No description"
    print(f"  • {func['name']}: {doc_first_line}")
```
### Authentication for Private Datasets

If you're using a private dataset, set your Hugging Face token:

```bash
export HF_TOKEN=your_token_here
```

Or in Python:

```python
import os
os.environ['HF_TOKEN'] = 'your_token_here'
```
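In scripts it can help to fail fast when the token is missing rather than hit an authentication error mid-download; a small stdlib sketch (the `require_hf_token` helper is illustrative, not part of any library):

```python
import os

def require_hf_token() -> str:
    # Fail early with a clear message instead of a 401 deep inside a download.
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError("Set HF_TOKEN before loading a private dataset.")
    return token

os.environ["HF_TOKEN"] = "your_token_here"  # placeholder, as above
print(require_hf_token())
```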
## Dataset Creation

This dataset was automatically extracted from the Fenic 0.4.0 codebase using API documentation parsing tools. It captures the complete public and private API surface, including:

- All modules and submodules
- Classes with their methods and attributes
- Functions with their signatures
- Complete docstrings and type annotations
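The exact parsing tooling is not specified, but Python's standard `inspect` module can recover most of these fields; a minimal sketch of the idea (the `describe` helper and `sample` function are invented for illustration):

```python
import inspect

def describe(obj, qualified_name: str) -> dict:
    # Collect the same kinds of fields api_df stores: docstring,
    # signature (for callables), and a public/private flag.
    name = qualified_name.rsplit(".", 1)[-1]
    info = {
        "qualified_name": qualified_name,
        "name": name,
        "docstring": inspect.getdoc(obj),
        "is_public": not name.startswith("_"),
    }
    if callable(obj):
        info["parameters"] = str(inspect.signature(obj))
    return info

def sample(x: int, y: int = 0) -> int:
    """Add two integers."""
    return x + y

print(describe(sample, "example.sample"))
```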
## Considerations for Using the Data

### Use Cases

- Training code generation models on DataFrame APIs
- Building API documentation search/retrieval systems
- Analyzing API design patterns in data processing frameworks
- Creating intelligent code completion for Fenic

### Limitations

- This dataset is a snapshot of Fenic 0.4.0 and may not reflect newer versions
- Some internal/private APIs may change between versions
- Generated protobuf files are included but may be less useful for learning
## Additional Information
|
|
|
|
### Project Links
|
|
- **Fenic Framework**: [https://github.com/typedef-ai/fenic](https://github.com/typedef-ai/fenic)
|
|
- **Documentation**: See the official repository for the latest documentation
|
|
- **Issues**: Report issues with the dataset or framework on the GitHub repository
|
|
|
|
### Licensing
|
|
This dataset is released under the Apache 2.0 license, consistent with the Fenic framework's licensing.
|
|
|
|
### Citation
|
|
If you use this dataset, please cite:
|
|
```
|
|
@dataset{fenic_api_2025,
|
|
title={Fenic 0.4.0 API Documentation Dataset},
|
|
year={2025},
|
|
publisher={Hugging Face},
|
|
license={Apache-2.0}
|
|
}
|
|
```
|
|
|
|
### Maintenance
|
|
This dataset is a static snapshot of Fenic 0.4.0. For the latest API documentation and updates, refer to the official [Fenic repository](https://github.com/typedef-ai/fenic).
|
|
|