# Self-Organizing Map (SOM) Model for Document Clustering

A trained Self-Organizing Map model for clustering and visualizing high-dimensional document embeddings. This model was trained on technical documentation and can be used for document similarity analysis, topic discovery, and semantic clustering.

## 📊 Model Details

- **Model Type**: Self-Organizing Map (SOM)
- **Training Data**: 11,412 records
- **Embedding Dimension**: 3,072 (OpenAI `text-embedding-3-large`)
- **Number of Clusters**: 625
- **Grid Size**: 25x25
- **Learning Rate**: 0.1
- **Sigma**: 1.0

## 🎯 Use Cases

- **Document Clustering**: Group similar documents based on semantic similarity
- **Topic Discovery**: Identify common themes and topics in large document collections
- **Semantic Search**: Find related documents through vector similarity
- **Data Visualization**: Interactive visualization of document relationships
- **Knowledge Organization**: Structure and organize large knowledge bases

## 📁 Model Files

- `som_model.pkl`: Trained SOM model weights and parameters
- `cluster_assignments.json`: Document-to-cluster assignments for all 11,412 records
- `cluster_analysis.json`: Detailed analysis of each cluster including keywords and topics
- `interactive_som_map.html`: Interactive visualization of the SOM grid with cluster information

## 🚀 Quick Start

### Installation

```bash
pip install numpy scikit-learn minisom matplotlib plotly pandas
```

### Loading and Using the Model

```python
import pickle
import json
import numpy as np

# Load the trained SOM model (unpickling requires minisom to be installed)
with open('som_model.pkl', 'rb') as f:
    som_model = pickle.load(f)

# Load cluster assignments
with open('cluster_assignments.json', 'r') as f:
    cluster_assignments = json.load(f)

# Load cluster analysis
with open('cluster_analysis.json', 'r') as f:
    cluster_analysis = json.load(f)

# Example: get the cluster assignment for a new document embedding
def get_cluster_for_embedding(embedding, som_model):
    """Return the "row,col" key of the SOM cell a new embedding maps to."""
    # Find the best matching unit (BMU) on the 25x25 grid
    bmu = som_model.winner(np.asarray(embedding))
    return f"{bmu[0]},{bmu[1]}"

# Example: find documents that share a cluster with a new embedding
def find_similar_documents(embedding, som_model, cluster_assignments, top_k=5):
    """Find similar documents based on shared cluster membership."""
    cluster = get_cluster_for_embedding(embedding, som_model)

    # Collect all documents assigned to the same SOM cell
    cluster_docs = [doc for doc, doc_cluster in cluster_assignments.items()
                    if doc_cluster == cluster]

    return cluster_docs[:top_k]
```
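
To assign a brand-new document, you first need an embedding in the same vector space. A minimal sketch, assuming the official OpenAI Python client and the `text-embedding-3-large` model (3,072 dimensions, matching this SOM's input size); the query text is only an illustration:

```python
# Continues the snippet above; assumes the openai package is installed
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
text = "How do I configure an Anypoint connector in Studio?"  # illustrative query
response = client.embeddings.create(model="text-embedding-3-large", input=text)
embedding = np.asarray(response.data[0].embedding)

print(get_cluster_for_embedding(embedding, som_model))
print(find_similar_documents(embedding, som_model, cluster_assignments, top_k=5))
```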

### Interactive Visualization

Open `interactive_som_map.html` in a web browser to explore the SOM grid interactively. The visualization shows:

- Cluster sizes and distributions
- Top keywords for each cluster
- Topic analysis
- Document counts per cluster
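
If you prefer to build your own view, a static document-density map can be reconstructed from `cluster_assignments.json` alone. A minimal sketch, assuming cluster keys follow the `"row,col"` format used above:

```python
# Sketch: plot document counts per SOM cell as a 25x25 heat map,
# using only cluster_assignments.json and matplotlib.
import json
from collections import Counter

import matplotlib.pyplot as plt
import numpy as np

with open('cluster_assignments.json', 'r') as f:
    assignments = json.load(f)

counts = Counter(assignments.values())  # "row,col" key -> number of documents
grid = np.zeros((25, 25))
for key, n in counts.items():
    row, col = (int(v) for v in key.split(','))
    grid[row, col] = n

plt.imshow(grid, cmap='viridis')
plt.colorbar(label='documents per cell')
plt.title('SOM document density (25x25 grid)')
plt.show()
```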

## 📈 Model Performance

Based on the cluster analysis:

- **Total Documents**: 11,412
- **Total Clusters**: 625 (25x25 grid)
- **Silhouette Score**: -0.0078
- **Calinski-Harabasz Score**: 13.69
- **Davies-Bouldin Score**: 2.33
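
These internal validity scores can be recomputed with scikit-learn; a silhouette score near zero mainly reflects heavily overlapping cells, which is expected when a 625-cell grid partitions high-dimensional embeddings this finely. The sketch below assumes the original embedding matrix is available (`embeddings.npy` is a placeholder path) and row-aligned with the entries of `cluster_assignments.json`:

```python
# Sketch: recompute clustering scores with scikit-learn.
import json

import numpy as np
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

with open('cluster_assignments.json', 'r') as f:
    assignments = json.load(f)

# Turn "row,col" cluster keys into integer labels, one per document
keys = list(assignments.values())
key_to_label = {k: i for i, k in enumerate(sorted(set(keys)))}
labels = np.array([key_to_label[k] for k in keys])

embeddings = np.load('embeddings.npy')  # placeholder for the (11412, 3072) matrix

# sample_size keeps the silhouette computation tractable at this scale
print('silhouette:', silhouette_score(embeddings, labels,
                                      sample_size=2000, random_state=42))
print('calinski-harabasz:', calinski_harabasz_score(embeddings, labels))
print('davies-bouldin:', davies_bouldin_score(embeddings, labels))
```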

## 🔍 Cluster Analysis

The model identifies meaningful clusters with distinct topics. For example, one of the largest clusters (659 documents) focuses on:

- **Keywords**: connector, anypoint, mule, studio, connectors
- **Topics**: Configuration, API integration, MuleSoft platform usage
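
A quick way to surface the largest clusters yourself is to count assignments directly; this sketch relies only on the `"row,col"` keys described above:

```python
# Sketch: rank SOM clusters by document count
import json
from collections import Counter

with open('cluster_assignments.json', 'r') as f:
    assignments = json.load(f)

for cluster_key, size in Counter(assignments.values()).most_common(10):
    print(f"cluster {cluster_key}: {size} documents")
```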

## 🛠️ Advanced Usage

### Custom Clustering

```python
# Train a new SOM with different parameters
from minisom import MiniSom

def train_custom_som(embeddings, grid_size=(20, 20), sigma=1.0, learning_rate=0.1):
    """Train a fresh SOM on an (n_documents, n_dimensions) embedding matrix."""
    som = MiniSom(grid_size[0], grid_size[1], embeddings.shape[1],
                  sigma=sigma, learning_rate=learning_rate, random_seed=42)
    # 100 random iterations keeps the example quick; increase for real training runs
    som.train_random(embeddings, 100)
    return som
```
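
A possible invocation, with `embeddings.npy` standing in for your own embedding matrix:

```python
import numpy as np

embeddings = np.load('embeddings.npy')   # placeholder: (n_documents, 3072) matrix
custom_som = train_custom_som(embeddings, grid_size=(20, 20))
print(custom_som.winner(embeddings[0]))  # grid cell of the first document
```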

### Cluster Analysis

```python
def analyze_cluster(cluster_key, cluster_analysis):
    """Get detailed information about a specific cluster"""
    for cluster in cluster_analysis['top_clusters']:
        if cluster['cluster_key'] == cluster_key:
            return {
                'size': cluster['size'],
                'keywords': cluster['keywords'],
                'topics': cluster['topics']
            }
    return None
```
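
For example (the key `"12,7"` is a placeholder; real keys come from `cluster_assignments.json`, and only clusters listed under `top_clusters` are returned):

```python
info = analyze_cluster("12,7", cluster_analysis)  # "12,7" is a placeholder key
if info:
    print(info['size'], info['keywords'], info['topics'])
```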

## 📚 Dependencies

- `numpy`: Numerical computations
- `scikit-learn`: Machine learning utilities
- `minisom`: Self-Organizing Map implementation
- `matplotlib`: Static plotting
- `plotly`: Interactive visualizations
- `pandas`: Data manipulation

## 🤝 Contributing

This model is part of a larger document processing and clustering pipeline. For questions or contributions, please refer to the main project repository.

## 📄 License

This model is provided for research and educational purposes. Please ensure compliance with the original data source licenses when using this model.

## 🔗 Related Resources

- [Self-Organizing Maps Tutorial](https://en.wikipedia.org/wiki/Self-organizing_map)
- [MiniSom Documentation](https://github.com/JustGlowing/minisom)
- [OpenAI Embeddings](https://platform.openai.com/docs/guides/embeddings)

---

**Note**: This model was trained on technical documentation and may be most effective for similar types of content. For best results, ensure your input documents are in the same domain or consider fine-tuning the model on your specific data.