---
license: apache-2.0
language:
- ko
base_model:
- google/embeddinggemma-300m
---

## **Model Card: ColBERT-ko-embeddinggemma-300m**

### **Model Description**

This is a **ColBERT-style** late-interaction retrieval model based on Google's `embeddinggemma-300m`. It has been fine-tuned on the MS Marco Korean dataset, making it specialized for semantic search and information retrieval tasks in the **Korean language**.

The model produces token-level embeddings for both queries and documents. This enables highly accurate and efficient retrieval through the ColBERT MaxSim scoring mechanism, which computes the relevance between a query and a document at a fine-grained, token-by-token level.

-----

### **Performance & Evaluation**

The model improved steadily and consistently throughout training. In-batch Recall@1 started at a strong ~75-80%; the model was validated every 50 steps, with checkpoints saved based on validation performance. Validation loss decreased steadily while Recall@1 increased, indicating successful generalization without signs of overfitting.

#### **Semantic Inference Example (in Korean)**

The real strength of the fine-tuned model is its ability to understand semantic context beyond simple keyword matching. In the following example, the query asks "Which electric car company did Elon Musk found?"; the fine-tuned model correctly infers the answer, while the original base model does not.

```
$ python inference.py
Using device: cuda
Loading fine-tuned model...
Fine-tuned model loaded.
Loading original (pre-trained) model for comparison...
Original model loaded.

==================================================
Query: 일론 머스크가 설립한 전기차 회사는 어디야?
==================================================

--- 1. ✅ Fine-tuned Model Results ---
Rank 1 (Score: 9.00): 테슬라는 모델 S, 3, X, Y를 생산하며 오토파일럿 기능으로 유명합니다.
Rank 2 (Score: 7.92): 스페이스X는 재사용 가능한 로켓을 개발하여 우주 탐사 비용을 크게 낮췄습니다.
Rank 3 (Score: 7.72): 아마존 웹 서비스(AWS)는 클라우드 컴퓨팅 시장의 선두주자입니다.
Rank 4 (Score: 7.23): 수도권 전철은 서울과 주변 도시를 연결하는 중요한 교통수단입니다.
Rank 5 (Score: 5.77): 대한민국의 수도는 서울입니다. 서울은 경제와 문화의 중심지입니다.
Rank 6 (Score: 5.43): 일본의 수도는 도쿄입니다. 벚꽃이 아름다운 도시죠.
Rank 7 (Score: 5.40): 프랑스의 수도는 파리이며, 에펠탑으로 유명합니다.

--- 2. ❌ Original Model Results ---
Rank 1 (Score: 9.13): 수도권 전철은 서울과 주변 도시를 연결하는 중요한 교통수단입니다.
Rank 2 (Score: 8.79): 테슬라는 모델 S, 3, X, Y를 생산하며 오토파일럿 기능으로 유명합니다.
Rank 3 (Score: 8.77): 일본의 수도는 도쿄입니다. 벚꽃이 아름다운 도시죠.
Rank 4 (Score: 8.71): 대한민국의 수도는 서울입니다. 서울은 경제와 문화의 중심지입니다.
Rank 5 (Score: 8.53): 아마존 웹 서비스(AWS)는 클라우드 컴퓨팅 시장의 선두주자입니다.
Rank 6 (Score: 8.48): 스페이스X는 재사용 가능한 로켓을 개발하여 우주 탐사 비용을 크게 낮췄습니다.
Rank 7 (Score: 8.24): 프랑스의 수도는 파리이며, 에펠탑으로 유명합니다.
```

**Analysis**: The fine-tuned model correctly ranks the 'Tesla' document first by understanding the semantic relationship between the query and the document, even though they share no direct keyword overlap. In contrast, the original model is easily confused by distractors and fails to rank the correct answer first, demonstrating the impact of the ColBERT fine-tuning process.

-----

### **Intended Uses**

The primary use case is high-performance semantic search over Korean text. The model is designed to be used as a dual encoder in a retrieval pipeline:

1. **Offline Indexing**: Encode your document corpus into token-level embeddings. Each document is represented as a matrix of vectors (`Ld x D`).
2. **Online Search**: Encode an incoming query into its token-level embeddings (`Lq x D`), then use the efficient **MaxSim** scoring to rank documents from your index (see the sketch below).
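As a minimal sketch of the MaxSim step described above (not code shipped with this repository), the function below scores one query/document pair from pre-computed token-embedding matrices using plain PyTorch. It assumes the embeddings are already L2-normalized so that dot products equal cosine similarities; the name `maxsim_score` and the random tensors are illustrative only.

```python
import torch
import torch.nn.functional as F

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """ColBERT MaxSim: for each query token, take its best-matching document
    token, then sum over query tokens.

    query_emb: (Lq, D) L2-normalized query token embeddings
    doc_emb:   (Ld, D) L2-normalized document token embeddings
    """
    # (Lq, Ld) token-to-token cosine similarities
    sim = query_emb @ doc_emb.T
    # Max over document tokens, sum over query tokens -> scalar relevance score
    return sim.max(dim=1).values.sum()

# Random tensors standing in for real encoder output (illustration only).
Lq, Ld, D = 8, 32, 256
q = F.normalize(torch.randn(Lq, D), dim=-1)
d = F.normalize(torch.randn(Ld, D), dim=-1)
print(maxsim_score(q, d))
```

In a real pipeline the document matrices come from the offline index and the query matrix from the online encoder; sorting documents by this score produces the ranking shown in the example above.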
-----

### **Training Procedure**

The model was trained on an 8-GPU setup with the Hugging Face Accelerate library, using in-batch and cross-device negatives (a sketch of this objective follows the hyperparameter list).

* **Base Model**: `google/embeddinggemma-300m`
* **Dataset**: MS Marco Korean Translated Dataset
* **Key Hyperparameters**:
  * Precision: `bf16`
  * Query Max Length: `128`
  * Document Max Length: `1024`
  * Learning Rate: `5e-6` (base) & `1e-4` (projection head)
  * Effective Batch Size: `512` (32 per device × 8 devices × 2 gradient-accumulation steps)
  * Epochs: `1`
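The training script is not included in this card, but an in-batch negatives objective over MaxSim scores commonly takes the following form. This is a sketch under stated assumptions, not necessarily the exact recipe used for this checkpoint: padded `(B, L, D)` tensors, L2-normalized embeddings, padding masks omitted, and a plain cross-entropy over the score matrix. Cross-device negatives would additionally gather document embeddings from the other GPUs (e.g., via a differentiable all-gather) before the score matrix is built.

```python
import torch
import torch.nn.functional as F

def in_batch_colbert_loss(query_embs: torch.Tensor, doc_embs: torch.Tensor) -> torch.Tensor:
    """Contrastive loss over MaxSim scores (padding masks omitted for brevity).

    query_embs: (B, Lq, D) L2-normalized query token embeddings
    doc_embs:   (B, Ld, D) L2-normalized document token embeddings
    The i-th document is the positive for the i-th query; every other
    document in the (possibly cross-device) batch acts as a negative.
    """
    # All-pairs token similarities: (B_query, B_doc, Lq, Ld)
    sim = torch.einsum("qld,pmd->qplm", query_embs, doc_embs)
    # MaxSim: best doc token per query token, summed over query tokens -> (B, B)
    scores = sim.max(dim=-1).values.sum(dim=-1)
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```

Treating every other document in the batch as a negative is what makes a large effective batch size (512 here) valuable: each query is contrasted against many documents at no additional encoding cost.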