|
|
|
**🧠 NERClassifier-BERT-WikiAnn**
|
|
|
A BERT-based Named Entity Recognition (NER) model fine-tuned on the English split of the WikiAnn dataset. It classifies tokens into entity types such as Person (PER), Location (LOC), and Organization (ORG), making it suitable for applications such as document tagging, resume parsing, and chatbots.
|
|
|
--- |
|
|
|
✨ **Model Highlights**
|
|
|
- 📌 Based on `bert-base-cased`

- 📚 Fine-tuned on the WikiAnn (en) NER dataset

- ⚡ Supports prediction of 3 core entity types: PER, LOC, ORG

- 💾 Lightweight and compatible with both CPU and GPU inference environments
|
|
|
--- |
|
|
|
🧠 Intended Uses
|
|
|
- ✅ Resume and document parsing

- ✅ News article analysis

- ✅ Question answering pipelines

- ✅ Chatbots and virtual assistants

- ✅ Information retrieval and tagging


---
|
🚫 Limitations
|
|
|
- ❌ Trained on English-only, Wiki-based text

- ❌ Performance may degrade on informal or non-English texts

- ❌ Not designed for nested or overlapping entities

- ❌ Accuracy may drop on very long sequences (>128 tokens); see the chunking sketch below
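
One possible workaround for the last limitation is to run inference over overlapping windows. The sketch below is an illustrative example, not part of the released code: it splits a long document into overlapping 128-token windows with the tokenizer and runs the NER pipeline on each window (entities in the overlap regions may be reported twice and would need de-duplication).

```python
from transformers import BertTokenizerFast, BertForTokenClassification, pipeline

model_name = "AventIQ-AI/NER-AI-wikiann-model"
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = BertForTokenClassification.from_pretrained(model_name)

# Label names in the order given by the Label Mapping table below
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
model.config.id2label = {i: label for i, label in enumerate(label_list)}
model.config.label2id = {label: i for i, label in enumerate(label_list)}

ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

long_text = "..."  # placeholder: any document longer than 128 tokens

# Split the text into overlapping 128-token windows (the stride keeps context
# across window boundaries).
windows = tokenizer(
    long_text,
    max_length=128,
    truncation=True,
    stride=32,
    return_overflowing_tokens=True,
)

entities = []
for input_ids in windows["input_ids"]:
    chunk_text = tokenizer.decode(input_ids, skip_special_tokens=True)
    entities.extend(ner_pipeline(chunk_text))

print(entities)
```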
|
|
|
--- |
|
|
|
🏋️‍♂️ Training Details
|
|
|
| Field          | Value                          |
| -------------- | ------------------------------ |
| **Base Model** | `bert-base-cased`              |
| **Dataset**    | WikiAnn (English)              |
| **Framework**  | PyTorch with 🤗 Transformers   |
| **Epochs**     | 3                              |
| **Batch Size** | 16                             |
| **Max Length** | 128 tokens                     |
| **Optimizer**  | AdamW                          |
| **Loss**       | CrossEntropyLoss (token-level) |
| **Device**     | Trained on CUDA-enabled GPU    |
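
The exact training script is not included in this repository; the snippet below is a hedged reconstruction of a fine-tuning setup matching the table above (WikiAnn English, 3 epochs, batch size 16, max length 128) using the Hugging Face `Trainer`, whose defaults cover the AdamW optimizer and token-level cross-entropy loss. The `output_dir` and helper names are illustrative.

```python
from datasets import load_dataset
from transformers import (
    BertForTokenClassification,
    BertTokenizerFast,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("wikiann", "en")
label_list = dataset["train"].features["ner_tags"].feature.names

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained("bert-base-cased", num_labels=len(label_list))

def tokenize_and_align_labels(batch):
    """Tokenize pre-split words and align word-level NER tags with sub-word tokens."""
    tokenized = tokenizer(
        batch["tokens"], truncation=True, max_length=128, is_split_into_words=True
    )
    all_labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous_word = None
        label_ids = []
        for word_id in word_ids:
            if word_id is None:
                label_ids.append(-100)            # special tokens are ignored by the loss
            elif word_id != previous_word:
                label_ids.append(tags[word_id])   # label only the first sub-word of each word
            else:
                label_ids.append(-100)
            previous_word = word_id
        all_labels.append(label_ids)
    tokenized["labels"] = all_labels
    return tokenized

tokenized_dataset = dataset.map(tokenize_and_align_labels, batched=True)

training_args = TrainingArguments(
    output_dir="ner-bert-wikiann",       # illustrative output path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```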
|
|
|
--- |
|
|
|
📊 Evaluation Metrics
|
|
|
| Metric    | Score |
| --------- | ----- |
| Accuracy  | 0.92  |
| F1-Score  | 0.92  |
| Precision | 0.92  |
| Recall    | 0.92  |
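
The evaluation script is not part of this repository; as a rough illustration, entity-level NER metrics of this kind are commonly computed with the `seqeval` package from aligned lists of gold and predicted label sequences. The toy `y_true`/`y_pred` below are placeholders, not the real validation data.

```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Placeholder label sequences; in practice these would come from running the model
# over the WikiAnn validation split and mapping predicted IDs back through id2label.
y_true = [["B-PER", "I-PER", "O", "O", "B-ORG", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O", "B-ORG", "O", "B-LOC"]]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
```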
|
|
|
|
|
--- |
|
|
|
🏷️ Label Mapping
|
|
|
| Label ID | Entity Type |
| -------- | ----------- |
| 0        | O           |
| 1        | B-PER       |
| 2        | I-PER       |
| 3        | B-ORG       |
| 4        | I-ORG       |
| 5        | B-LOC       |
| 6        | I-LOC       |
|
|
|
--- |
|
|
|
|
🚀 Usage
|
```python
from transformers import BertTokenizerFast, BertForTokenClassification, pipeline

# Load the fine-tuned model and tokenizer
model_name = "AventIQ-AI/NER-AI-wikiann-model"
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = BertForTokenClassification.from_pretrained(model_name)
model.eval()

# Label mapping (order taken from the Label Mapping table above)
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
model.config.id2label = {i: label for i, label in enumerate(label_list)}
model.config.label2id = {label: i for i, label in enumerate(label_list)}

# Build the NER pipeline
ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

# Inference on a test sentence
test_sentence = "Bill Gates is the CEO of Microsoft and lives in the United States."
ner_results = ner_pipeline(test_sentence)

print(f"Input: {test_sentence}")
print("\n🔍 Inference Results:")
for entity in ner_results:
    print(f"Entity: {entity['word']}\tType: {entity['entity_group']}\tConfidence: {entity['score']:.3f}")
```
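
The pipeline above runs on CPU by default. Since the model also supports GPU inference (see Model Highlights), the same pipeline can be placed on a CUDA device via the `device` argument; the snippet below continues from the code above.

```python
# device=0 selects the first CUDA GPU; use device=-1 to stay on CPU.
ner_pipeline_gpu = pipeline(
    "ner",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
    device=0,
)

print(ner_pipeline_gpu(test_sentence))
```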
|
--- |
|
|
|
🧩 Quantization

Post-training static quantization was applied using PyTorch to reduce model size and accelerate inference on edge devices.
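
The quantization script itself is not included here. As a rough, hedged illustration, the snippet below applies PyTorch's built-in post-training dynamic quantization to the model's linear layers (a simpler variant than the calibration-based static quantization described above, which would additionally require prepare/convert steps and representative calibration data); the output path is illustrative.

```python
import torch
from transformers import BertForTokenClassification

# Load the full-precision fine-tuned model.
model = BertForTokenClassification.from_pretrained("AventIQ-AI/NER-AI-wikiann-model")
model.eval()

# Quantize the Linear layers to int8 weights (post-training dynamic quantization).
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Persist the quantized weights (illustrative path matching the repository layout).
torch.save(quantized_model.state_dict(), "model/quantized_model.pt")
```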
|
|
|
---
|
|
|
📁 Repository Structure
|
```
.
├── model/               # Quantized model files
├── tokenizer_config/    # Tokenizer and vocab files
├── model.safetensors    # Fine-tuned model in safetensors format
└── README.md            # Model card
```
|
--- |
|
🤝 Contributing
|
|
|
Open to improvements and feedback! Feel free to submit a pull request or open an issue if you find any bugs or want to enhance the model. |
|
|
|
|
|
|
|
|