---
license: mit
language: en
pipeline_tag: text-generation
tags:
- video-understanding
- narrative-generation
- generative-ai
- multi-agent
- stateful-ai
- prompt-engineering
- found-protocol
- creator-economy
- data-sovereignty
- web3
base_model:
- google/gemini-pro-vision
- google/gemini-pro
datasets:
- FOUND-LABS/found_consciousness_log
---

<div align="center">
  <img src="https://res.cloudinary.com/dykojggih/image/upload/v1753377308/IMG_4287_imd6zd.png" width="100px" alt="FOUND LABS Logo">
  <h1>The FOUND Protocol</h1>
  <p><b>The Open-Source Engine for the Consciousness Economy</b></p>
  
  <div>
    <a href="https://huggingface.co/FOUND-LABS"><img src="https://img.shields.io/badge/Organization-FOUND%20LABS-purple" alt="Organization"></a>
    <a href="https://huggingface.co/FOUND-LABS/found_consciousness_log"><img src="https://img.shields.io/badge/Dataset-Consciousness%20Log-blue" alt="Dataset"></a>
    <a href="https://foundprotocol.xyz"><img src="https://img.shields.io/badge/Platform-Join%20Waitlist-brightgreen" alt="Join Waitlist"></a>
  </div>
</div>

---

## Abstract

Current video understanding models excel at semantic labeling but fail to capture the pragmatic and thematic progression of visual narratives. We introduce **FOUND (Forensic Observer and Unified Narrative Deducer)**, a novel, stateful architecture that extracts coherent emotional and thematic arcs from sequences of disparate video inputs. This protocol serves as the foundational engine for the **[FOUND Platform](https://foundprotocol.xyz)**, a decentralized creator economy where individuals can own, control, and monetize their authentic human experiences as valuable AI training data.

---

## From Open-Source Research to a New Economy

The FOUND Protocol is more than an academic exercise; it is the core technology powering a new paradigm for the creator economy.

-   **The Problem:** AI companies harvest your data to train their models, reaping all the rewards. You, the creator of the data, get nothing.
-   **Our Solution:** The FOUND Protocol transforms your raw visual moments into structured, high-value data assets. Our upcoming **FOUND Platform** will allow you to contribute this data, maintain ownership via your own wallet, and earn from its usage by AI companies.

**This open-source model is the proof. The FOUND Platform is the promise.**

---

## Model Architecture

The FOUND Protocol is a composite **inference pipeline** designed to simulate a stateful consciousness. Two specialized agents interact in a continuous feedback loop, coordinated through a shared narrative state:

-   **The Perceptor (`/dev/eye`):** A forensic analysis model (FOUND-1) responsible for transpiling raw visual data into a structured, symbolic JSON output.
-   **The Interpreter (`/dev/mind`):** A contextual state model (FOUND-2) that operates on the structured output of the Perceptor and the historical system log to resolve "errors" into emotional or thematic concepts.
-   **The Narrative State Manager:** A stateful object that maintains the "long-term memory" of the system, allowing its interpretations to evolve.
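
The feedback loop above can be sketched in a few lines of Python. This is a minimal illustration, not the repository's implementation: `perceptor` and `interpreter` are hypothetical stubs standing in for the FOUND-1 and FOUND-2 model calls, and `NarrativeState` stands in for the Narrative State Manager.

```python
from dataclasses import dataclass, field

@dataclass
class NarrativeState:
    """Long-term memory: accumulates interpretations across clips."""
    history: list = field(default_factory=list)

def perceptor(frame_description: str) -> dict:
    """Stub for the Perceptor (/dev/eye): raw visuals -> structured output."""
    return {"scene": frame_description, "anomalies": ["low_light"]}

def interpreter(perception: dict, state: NarrativeState) -> str:
    """Stub for the Interpreter (/dev/mind): resolves perceptual 'errors'
    into a thematic concept, conditioned on the accumulated history."""
    theme = f"clip {len(state.history) + 1}: uncertainty in '{perception['scene']}'"
    state.history.append(theme)
    return theme

state = NarrativeState()
for clip in ["empty street", "crowded station"]:
    print(interpreter(perceptor(clip), state))
```

The key design point is that `state` is threaded through every call, so each interpretation can condition on everything that came before it.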

---

## How to Use This Pipeline

### 1. Setup

Clone this repository and install the required dependencies into a Python virtual environment.
```bash
git clone https://huggingface.co/FOUND-LABS/found_protocol
cd found_protocol
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

### 2. Configuration
Set your Google Gemini API key as an environment variable (e.g., in a `.env` file):
```bash
GEMINI_API_KEY="your-api-key-goes-here"
```
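
Inside the pipeline, the key can be read from the environment with a fail-fast check. This is a sketch, assuming the `.env` file has already been loaded into the process environment (e.g., by `python-dotenv`); `load_api_key` is a hypothetical helper, not a function from this repository.

```python
import os

def load_api_key() -> str:
    """Read the Gemini key from the environment; fail fast if it is missing."""
    key = os.environ.get("GEMINI_API_KEY", "").strip()
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set")
    return key
```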

### 3. Usage via CLI
Analyze all videos in a directory sequentially:
```bash
python main.py path/to/your/video_directory/
```
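
Under the hood, sequential processing implies a deterministic ordering of the input files. A minimal sketch of how such a directory scan might look (the extension set and `list_videos` helper are assumptions for illustration, not the repository's actual code):

```python
from pathlib import Path

# Assumed set of supported container formats; the real pipeline may differ.
VIDEO_EXTS = {".mp4", ".mov", ".mkv", ".webm"}

def list_videos(directory: str) -> list[Path]:
    """Return video files in the lexicographic order they would be processed."""
    root = Path(directory)
    return sorted(p for p in root.iterdir()
                  if p.is_file() and p.suffix.lower() in VIDEO_EXTS)
```

Sorting the paths makes runs reproducible: the same directory always yields the same narrative sequence.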

## Future Development: The Path to the Platform
This open-source protocol is the first step in our public roadmap. The data it generates is the key to our future.
- **Dataset Growth:** We are using this protocol to build the `found_consciousness_log`, the world's first open dataset for thematic video understanding.
- **Model Sovereignty:** This dataset will be used to fine-tune our own open-source models (`found-perceptor-v1` and `found-interpreter-v1`), removing the dependency on external APIs and creating a fully community-owned intelligence layer.
- **Platform Launch:** These sovereign models will become the core engine of the FOUND Platform, allowing for decentralized, low-cost data processing at scale.

➡️ Follow our journey and join the waitlist at [foundprotocol.xyz](https://foundprotocol.xyz)

## Citing this Work
If you use the FOUND Protocol in your research, please use the following BibTeX entry.
```bibtex
@misc{found_protocol_2025,
  author       = {FOUND LABS Community},
  title        = {FOUND Protocol: A Symbiotic Dual-Agent Architecture for the Consciousness Economy},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/FOUND-LABS/found_protocol}}
}
```