**🧠 NERClassifier-BERT-WikiAnn**
A BERT-based Named Entity Recognition (NER) model fine-tuned on the WikiAnn English dataset. It classifies tokens into entity types like Person (PER), Location (LOC), and Organization (ORG). This model is suitable for applications like document tagging, resume parsing, and chatbots.
---
✨ **Model Highlights**
- 📌 Based on `bert-base-cased`
- 📌 Fine-tuned on the WikiAnn (en) NER dataset
- ⚡ Supports prediction of 3 core entity types: PER, LOC, ORG
- 💾 Lightweight and compatible with both CPU and GPU inference environments
---
🔧 Intended Uses
- ✅ Resume and document parsing
- ✅ News article analysis
- ✅ Question answering pipelines
- ✅ Chatbots and virtual assistants
- ✅ Information retrieval and tagging
---
🚫 Limitations
- ❌ Trained on English-only Wiki-based text
- ❌ Performance may degrade on informal or non-English texts
- ❌ Not designed for nested or overlapping entities
- ❌ Accuracy may drop on very long sequences (>128 tokens); see the chunking sketch just below for one way to split long inputs
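Because accuracy can drop past the 128-token training length, one simple mitigation is to split long documents into smaller pieces before tagging. The helper below is only an illustrative sketch: it assumes the `ner_pipeline` object constructed later in the Usage section, and the whitespace split and 100-word chunk size are arbitrary choices, not part of this repository.

```python
# Illustrative helper (not part of this repo): run NER over a long document
# by splitting it into short chunks that stay well under the 128-token limit.
def ner_on_long_text(text, ner_pipeline, max_words=100):
    words = text.split()
    chunks = [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]
    results = []
    for chunk in chunks:
        results.extend(ner_pipeline(chunk))  # each chunk is tagged independently
    return results
```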
---
🏋️‍♂️ Training Details
| Field | Value |
| -------------- | ------------------------------ |
| **Base Model** | `bert-base-cased` |
| **Dataset** | WikiAnn (English) |
| **Framework**  | PyTorch with 🤗 Transformers   |
| **Epochs** | 3 |
| **Batch Size** | 16 |
| **Max Length** | 128 tokens |
| **Optimizer** | AdamW |
| **Loss** | CrossEntropyLoss (token-level) |
| **Device** | Trained on CUDA-enabled GPU |
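For reference, the table above corresponds roughly to the fine-tuning setup sketched below using the 🤗 Transformers `Trainer`. This is a minimal sketch, not the original training script: the preprocessing is abbreviated, and names such as `output_dir` are illustrative assumptions.

```python
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          TrainingArguments, Trainer, DataCollatorForTokenClassification)
from datasets import load_dataset

# Load the WikiAnn English split and the base checkpoint listed above.
dataset = load_dataset("wikiann", "en")
label_list = dataset["train"].features["ner_tags"].feature.names
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(label_list)
)

def tokenize_and_align(examples):
    # Tokenize pre-split words (max 128 tokens) and align word-level NER tags to
    # subwords; special tokens and continuation subwords get -100 (ignored by the loss).
    tokenized = tokenizer(examples["tokens"], truncation=True, max_length=128,
                          is_split_into_words=True)
    labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous, ids = None, []
        for word_id in word_ids:
            if word_id is None or word_id == previous:
                ids.append(-100)
            else:
                ids.append(tags[word_id])
            previous = word_id
        labels.append(ids)
    tokenized["labels"] = labels
    return tokenized

tokenized_dataset = dataset.map(tokenize_and_align, batched=True)

# Hyperparameters mirror the table: 3 epochs, batch size 16, AdamW (Trainer's default).
args = TrainingArguments(output_dir="ner-bert-wikiann", num_train_epochs=3,
                         per_device_train_batch_size=16, per_device_eval_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized_dataset["train"],
                  eval_dataset=tokenized_dataset["validation"],
                  data_collator=DataCollatorForTokenClassification(tokenizer))
trainer.train()
```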
---
📊 Evaluation Metrics
| Metric | Score |
| ----------------------------------------------- | ----- |
| Accuracy | 0.92 |
| F1-Score | 0.92 |
| Precision | 0.92 |
| Recall | 0.92 |
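The card does not state which scorer produced these numbers. For illustration, the sketch below shows how comparable entity-level scores could be computed with the `seqeval` library, the usual choice for NER evaluation; the gold and predicted tag sequences here are made up.

```python
from seqeval.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy gold and predicted tag sequences (one list of BIO tags per sentence).
y_true = [["B-PER", "I-PER", "O", "B-ORG"], ["B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-ORG"], ["B-LOC", "O"]]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1-Score :", f1_score(y_true, y_pred))
```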
---
🏷️ Label Mapping
| Label ID | Entity Type |
| -------- | ----------- |
| 0 | O |
| 1 | B-PER |
| 2 | I-PER |
| 3 | B-ORG |
| 4 | I-ORG |
| 5 | B-LOC |
| 6 | I-LOC |
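The labels follow the standard BIO scheme: `B-*` marks the first token of an entity and `I-*` marks its continuation. The toy function below (illustrative only, not code from this repository) shows how consecutive BIO tags collapse into entity spans, which is essentially what `aggregation_strategy="simple"` does inside the pipeline used in the Usage section.

```python
# Illustrative only: collapse word-level BIO tags into (entity type, text) spans.
def bio_to_spans(words, tags):
    spans, current = [], None
    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = [tag[2:], [word]]          # start a new entity span
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(word)              # continue the current span
        else:
            if current:
                spans.append(current)
            current = None                       # "O" or inconsistent tag ends the span
    if current:
        spans.append(current)
    return [(label, " ".join(ws)) for label, ws in spans]

print(bio_to_spans(["Bill", "Gates", "founded", "Microsoft"],
                   ["B-PER", "I-PER", "O", "B-ORG"]))
# [('PER', 'Bill Gates'), ('ORG', 'Microsoft')]
```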
---
🚀 Usage
```python
from transformers import BertTokenizerFast, BertForTokenClassification, pipeline

model_name = "AventIQ-AI/NER-AI-wikiann-model"
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = BertForTokenClassification.from_pretrained(model_name)
model.eval()

# Label mapping (matches the table above)
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
model.config.id2label = {i: label for i, label in enumerate(label_list)}
model.config.label2id = {label: i for i, label in enumerate(label_list)}

# Inference
ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
test_sentence = "Bill Gates is the CEO of Microsoft and lives in the United States."
ner_results = ner_pipeline(test_sentence)

print("\n🔍 Inference Results:")
for entity in ner_results:
    print(f"Entity: {entity['word']}\tType: {entity['entity_group']}\tConfidence: {entity['score']:.3f}")
```
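Since the model runs on both CPU and GPU, the pipeline can also be pinned to a CUDA device when one is available. A small sketch, reusing the `model` and `tokenizer` objects from above:

```python
import torch
from transformers import pipeline

# Place the pipeline on GPU 0 when CUDA is available, otherwise fall back to CPU.
device = 0 if torch.cuda.is_available() else -1
ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer,
                        aggregation_strategy="simple", device=device)
```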
---
🧩 Quantization
- Post-training static quantization applied using PyTorch to reduce model size and accelerate inference on edge devices; a sketch of this step is shown below.
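The exact quantization recipe is not included in this card. For illustration only, the snippet below applies PyTorch post-training dynamic quantization to the fine-tuned model's linear layers, a common way to shrink BERT-sized checkpoints for CPU/edge inference; it is a sketch, not the script used to produce the files in `model/`.

```python
import torch
from transformers import BertForTokenClassification

# Load the fine-tuned model and quantize its Linear layers to int8 weights.
model = BertForTokenClassification.from_pretrained("AventIQ-AI/NER-AI-wikiann-model")
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Save the quantized weights (illustrative file name).
torch.save(quantized_model.state_dict(), "quantized_ner_model.pt")
```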
---
📁 Repository Structure
```
.
├── model/               # Quantized model files
├── tokenizer_config/    # Tokenizer and vocab files
├── model.safetensors    # Fine-tuned model in safetensors format
└── README.md            # Model card
```
---
🤝 Contributing
Open to improvements and feedback! Feel free to submit a pull request or open an issue if you find any bugs or want to enhance the model.