---

license: apache-2.0
task_categories:
- text-generation
- question-answering
language:
- en
tags:
- code
- api-documentation
- dataframe
- semantic-ai
- fenic
pretty_name: Fenic 0.4.0 API Documentation
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: api
    path: "api_df.parquet"
  - split: hierarchy
    path: "hierarchy_df.parquet"
  - split: summary
    path: "fenic_summary.parquet"
---


# Fenic 0.4.0 API Documentation Dataset

## Dataset Description

This dataset contains comprehensive API documentation for [Fenic 0.4.0](https://github.com/typedef-ai/fenic), a PySpark-inspired DataFrame framework designed for building production AI and agentic applications. The dataset provides structured information about all public and private API elements, including modules, classes, functions, methods, and attributes.

### Dataset Summary

[Fenic](https://github.com/typedef-ai/fenic) is a DataFrame framework that combines traditional data processing capabilities with semantic/AI operations. It provides:
- A familiar DataFrame API similar to PySpark
- Semantic functions powered by LLMs (map, extract, classify, etc.)
- Integration with multiple AI model providers (Anthropic, OpenAI, Google, Cohere)
- Advanced features like semantic joins and clustering

The dataset captures the complete API surface of Fenic 0.4.0, making it valuable for:
- Code generation and understanding
- API documentation analysis
- Framework comparison studies
- Training models on DataFrame/data processing APIs

## Dataset Structure

The dataset consists of three Parquet files:

### 1. `api_df.parquet` (2,522 rows × 16 columns)

Main API documentation with detailed information about each API element.



**Columns:**

- `type`: Element type (module, class, function, method, attribute)
- `name`: Element name
- `qualified_name`: Fully qualified name (e.g., `fenic.api.dataframe.DataFrame`)
- `docstring`: Documentation string
- `filepath`: Source file path
- `is_public`: Whether the element is public
- `is_private`: Whether the element is private
- `line_start`: Starting line number in source
- `line_end`: Ending line number in source
- `annotation`: Type annotation
- `returns`: Return type annotation
- `parameters`: Function/method parameters
- `parent_class`: Parent class for methods
- `value`: Value for attributes
- `bases`: Base classes for class definitions
- `api_element_summary`: Formatted summary of the element

### 2. `hierarchy_df.parquet` (2,522 rows × 18 columns)

Same columns as `api_df`, plus additional hierarchy information.

**Additional Columns:**
- `path_parts`: List showing the hierarchical path
- `depth`: Depth in the API hierarchy
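The `path_parts` and `depth` columns make tree-style analysis straightforward. A minimal sketch in pandas (using a synthetic two-row frame in place of the real parquet, so only the column names are carried over from the dataset; the depth values shown are illustrative):

```python
import pandas as pd

# Illustrative rows shaped like hierarchy_df; in practice load the real file:
# hierarchy_df = pd.read_parquet("hierarchy_df.parquet")
hierarchy_df = pd.DataFrame({
    "qualified_name": ["fenic.api", "fenic.api.dataframe.DataFrame"],
    "type": ["module", "class"],
    "path_parts": [["fenic", "api"], ["fenic", "api", "dataframe", "DataFrame"]],
    "depth": [2, 4],
})

# Count elements at each level of the API tree
per_depth = hierarchy_df.groupby("depth")["qualified_name"].count()
print(per_depth)

# Derive each element's parent from its path_parts
hierarchy_df["parent"] = hierarchy_df["path_parts"].apply(
    lambda parts: ".".join(parts[:-1]) if len(parts) > 1 else None
)
print(hierarchy_df[["qualified_name", "parent"]])
```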

### 3. `fenic_summary.parquet` (1 row × 1 column)

High-level project summary.



**Columns:**

- `project_summary`: Comprehensive description of the Fenic framework

## Key API Components

### Core DataFrame Operations
- Standard operations: `select`, `filter`, `join`, `group_by`, `agg`, `sort`
- Data conversion: `to_pandas()`, `to_polars()`, `to_arrow()`, `to_pydict()`, `to_pylist()`
- Lazy evaluation with logical query plans

### Semantic Functions (`fenic.api.functions.semantic`)
- `map`: Apply generation prompts to columns
- `extract`: Extract structured data using Pydantic models
- `classify`: Text classification
- `predicate`: Boolean filtering with natural language
- `reduce`: Aggregate strings using natural language instructions
- `analyze_sentiment`: Sentiment analysis
- `summarize`: Text summarization
- `embed`: Generate embeddings

### Advanced Features
- Semantic joins and clustering
- Model client integrations (Anthropic, OpenAI, Google, Cohere)
- Query optimization and execution planning
- MCP (Model Context Protocol) tool generation

## Usage

### Loading with Fenic (Recommended)

[Fenic](https://github.com/typedef-ai/fenic) natively supports loading datasets directly from Hugging Face using the `hf://` scheme:

```python
import fenic as fc

# Create a Fenic session
session = fc.Session.get_or_create(
    fc.SessionConfig(app_name="fenic_api_analysis")
)

# Load the API documentation split
api_df = session.read.parquet("hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/api_df.parquet")

# Or load all splits at once
df = session.read.parquet("hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/*.parquet")

# Explore the dataset
api_df.show(5)
print(f"Total API elements: {api_df.count()}")
print(f"Schema: {api_df.schema}")

# Example: Find all public DataFrame methods
dataframe_methods = api_df.filter(
    fc.col("qualified_name").contains("fenic.api.dataframe.DataFrame.") &
    (fc.col("type") == "method") &
    (fc.col("is_public") == True)
).select("name", "docstring", "parameters", "returns")

dataframe_methods.show(10)

# Example: Find all semantic functions
semantic_functions = api_df.filter(
    fc.col("qualified_name").contains("fenic.api.functions.semantic.") &
    (fc.col("type") == "function")
).select("name", "qualified_name", "docstring")

semantic_functions.show()

# Get statistics about the codebase
stats = api_df.group_by("type").agg(
    fc.count("*").alias("count")
).order_by(fc.col("count").desc())

print("\nAPI Element Statistics:")
stats.show()

# Search for specific functionality
embedding_apis = api_df.filter(
    fc.col("name").contains("embed") |
    fc.col("docstring").contains("embedding")
).select("type", "qualified_name", "docstring")

print(f"\nFound {embedding_apis.count()} embedding-related APIs")
embedding_apis.show(5)
```

### Loading with Pandas

```python
import pandas as pd
from datasets import load_dataset

# Option 1: Using the Hugging Face datasets library
dataset = load_dataset("YOUR_USERNAME/fenic-api-0.4.0")

# Access different splits and convert to pandas
api_df = dataset['api'].to_pandas()
hierarchy_df = dataset['hierarchy'].to_pandas()
summary_df = dataset['summary'].to_pandas()

# Option 2: Direct parquet loading
api_df = pd.read_parquet('hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/api_df.parquet')
hierarchy_df = pd.read_parquet('hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/hierarchy_df.parquet')
summary_df = pd.read_parquet('hf://datasets/YOUR_USERNAME/fenic-api-0.4.0/fenic_summary.parquet')

# Example: Find all public DataFrame methods
dataframe_methods = api_df[
    (api_df['qualified_name'].str.startswith('fenic.api.dataframe.DataFrame.')) &
    (api_df['type'] == 'method') &
    (api_df['is_public'] == True)
]

print(f"Found {len(dataframe_methods)} DataFrame methods")
print(dataframe_methods[['name', 'docstring']].head(10))

# Example: Analyze module structure
modules = api_df[api_df['type'] == 'module']
print(f"\nTotal modules: {len(modules)}")
print("Top-level modules:")
# str.count takes a regex, so escape the dot with a raw string
print(modules[modules['qualified_name'].str.count(r'\.') == 1]['name'].unique())

# Example: Find all semantic functions
semantic_functions = api_df[
    (api_df['qualified_name'].str.startswith('fenic.api.functions.semantic.')) &
    (api_df['type'] == 'function')
]

print("\nSemantic functions available:")
for _, func in semantic_functions.iterrows():
    doc_first_line = func['docstring'].split('\n')[0] if pd.notna(func['docstring']) else "No description"
    print(f"  • {func['name']}: {doc_first_line}")
```

### Authentication for Private Datasets

If you're using a private dataset, set your Hugging Face token:

```bash
export HF_TOKEN=your_token_here
```

Or in Python:
```python
import os

os.environ['HF_TOKEN'] = 'your_token_here'
```

## Dataset Creation

This dataset was automatically extracted from the Fenic 0.4.0 codebase using API documentation parsing tools. It captures the complete public and private API surface, including:
- All modules and submodules
- Classes with their methods and attributes
- Functions with their signatures
- Complete docstrings and type annotations
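The extraction tooling itself is not part of the dataset. As a rough illustration of this kind of harvesting, here is a much-simplified sketch using the standard-library `inspect` module, run against the stdlib `json` package as a stand-in target (this is not the actual pipeline, and the row shape only approximates the dataset's columns):

```python
import inspect
import json  # stand-in target; the real dataset walks the fenic package

rows = []
for name, obj in inspect.getmembers(json):
    if inspect.isfunction(obj):
        rows.append({
            "type": "function",
            "name": name,
            "qualified_name": f"json.{name}",
            "docstring": inspect.getdoc(obj),
            "parameters": str(inspect.signature(obj)),
            "is_public": not name.startswith("_"),
        })

print(f"Extracted {len(rows)} functions from json")
for row in rows:
    print(row["qualified_name"], row["parameters"])
```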

## Considerations for Using the Data

### Use Cases
- Training code generation models on DataFrame APIs
- Building API documentation search/retrieval systems
- Analyzing API design patterns in data processing frameworks
- Creating intelligent code completion for Fenic
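As a starting point for the search/retrieval use case, a naive substring search over the `api_element_summary` column can be sketched as follows (the two-row frame is synthetic and its summary text is invented for illustration; in practice load `api_df.parquet`):

```python
import pandas as pd

# Synthetic stand-in rows; in practice: api_df = pd.read_parquet("api_df.parquet")
api_df = pd.DataFrame({
    "qualified_name": ["fenic.api.functions.semantic.embed",
                       "fenic.api.dataframe.DataFrame.select"],
    "api_element_summary": ["Generate embeddings for a text column",
                            "Project a subset of columns"],
})

def search(df: pd.DataFrame, query: str) -> pd.DataFrame:
    """Case-insensitive substring match over the formatted summaries."""
    mask = df["api_element_summary"].str.contains(query, case=False, regex=False)
    return df[mask]

hits = search(api_df, "embedding")
print(hits["qualified_name"].tolist())
```

A real retrieval system would replace the substring match with embedding-based similarity, but the column access pattern stays the same.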

### Limitations
- This dataset represents a snapshot of Fenic 0.4.0 and may not reflect newer versions
- Some internal/private APIs may change between versions
- Generated protobuf files are included but may be less useful for learning

## Additional Information

### Project Links
- **Fenic Framework**: [https://github.com/typedef-ai/fenic](https://github.com/typedef-ai/fenic)
- **Documentation**: See the official repository for the latest documentation
- **Issues**: Report issues with the dataset or framework on the GitHub repository

### Licensing
This dataset is released under the Apache 2.0 license, consistent with the Fenic framework's licensing.

### Citation
If you use this dataset, please cite:
```bibtex
@dataset{fenic_api_2025,
  title={Fenic 0.4.0 API Documentation Dataset},
  year={2025},
  publisher={Hugging Face},
  license={Apache-2.0}
}
```

### Maintenance
This dataset is a static snapshot of Fenic 0.4.0. For the latest API documentation and updates, refer to the official [Fenic repository](https://github.com/typedef-ai/fenic).