---
library_name: transformers
tags:
- nlp
- text
- multiclass
- classification
license: mit
base_model:
- distilbert/distilbert-base-uncased
---

# Model Card for `debojit01/course-review-sentiment`

## Model Details

**Model Name**: Course Review Sentiment Classifier  
**Model Type**: Text Classification (Multiclass – Positive, Neutral, Negative)  
**Language**: English  
**License**: MIT  
**Finetuned From**: `distilbert-base-uncased`  
**Developed By**: Debojit Choudhury

## Model Description

This model is a DistilBERT model fine-tuned for sentiment classification of course reviews. It predicts whether a review is **positive**, **neutral**, or **negative**, and was trained on a labeled dataset of 100k Coursera reviews.

## Uses

### Direct Use
This model can be used to:
- Automatically classify course reviews based on sentiment.
- Analyze customer feedback for online education platforms.

### Out-of-Scope Use
- Not suitable for non-English text.
- Not suitable for other domains beyond course review sentiment.

## How to Get Started with the Model

```python
from transformers import pipeline

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
classifier = pipeline("text-classification", model="debojit01/course-review-sentiment")

# Returns the predicted sentiment label and its confidence score
classifier("The course was extremely helpful and well-structured!")
```
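
The pipeline also accepts a list of reviews for batch scoring. The snippet below is an illustrative sketch, continuing from the example above; the review strings are made up, `top_k=None` asks the pipeline to return scores for all three classes, and `truncation=True` keeps long reviews within the model's 512-token limit.

```python
reviews = [
    "Great pacing and clear explanations.",
    "The content was okay but the quizzes felt unrelated.",
    "Audio quality was poor and the material was outdated.",
]

# Each result is a list of {label, score} dicts, one entry per class
results = classifier(reviews, top_k=None, truncation=True)
for review, scores in zip(reviews, results):
    print(review, "->", scores)
```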

## Training Details

### Training Data

The model was fine-tuned on Kaggle's 100k Coursera Reviews dataset.

- **Number of Classes**: 3
- **Training Framework**: Hugging Face Transformers
- **Max Seq Length**: 512
- **Epochs**: 3
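
As a rough guide, the following is a minimal sketch of how a model like this could be fine-tuned with the settings above using the Hugging Face `Trainer`. The CSV file name, the `review`/`label` column names, and the batch size are assumptions for illustration, not the exact training script.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

# Hypothetical CSV with a "review" text column and an integer "label" column
# (e.g. 0 = negative, 1 = neutral, 2 = positive); 20% held out for evaluation
dataset = load_dataset("csv", data_files="coursera_reviews.csv")["train"].train_test_split(test_size=0.2)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3  # 3 classes, as listed above
)

def tokenize(batch):
    # Max sequence length of 512, as listed above
    return tokenizer(batch["review"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="course-review-sentiment",
    num_train_epochs=3,              # 3 epochs, as listed above
    per_device_train_batch_size=16,  # assumed batch size; not stated in the card
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorWithPadding(tokenizer),  # pad dynamically per batch
)
trainer.train()
trainer.evaluate()
```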

## Evaluation

- **Test Split**: 20% of the full dataset
- **Metrics**: Accuracy, Macro Precision, Macro Recall, Macro F1
- **Macro F1**: 0.7648
- **Accuracy**: 0.7641
- **Macro Precision**: 0.7667
- **Macro Recall**: 0.7641
- **Eval samples/second**: 72.966
- **Eval steps/second**: 1.159
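
Metrics of this kind can be reproduced with scikit-learn. The snippet below is an illustrative sketch, assuming `y_true` and `y_pred` are lists of integer class ids from the held-out split; the placeholder values are not real predictions.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder labels and predictions; in practice these come from running
# the model on the 20% held-out split.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro"  # macro-average over the three classes
)
print(f"Accuracy: {accuracy:.4f}  Macro P/R/F1: {precision:.4f}/{recall:.4f}/{f1:.4f}")
```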

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Tesla T4 GPU (Google Colab)
- **Hours used:** Less than 2
- **Compute Region:** US (Colab)

## Citation

If you use this model, please cite:

Debojit Choudhury, Course Review Sentiment Classifier (2025), Hugging Face. 
https://huggingface.co/debojit01/course-review-sentiment