---
title: AI Polymer Classification
emoji: 🔬
colorFrom: indigo
colorTo: yellow
sdk: streamlit
app_file: app.py
pinned: false
license: apache-2.0
---
## AI-Driven Polymer Aging Prediction and Classification (v0.1)
This web application classifies the degradation state of polymers using Raman spectroscopy and deep learning.
It was developed as part of the AIRE 2025 internship project at the Imageomics Institute and demonstrates a prototype pipeline for evaluating multiple convolutional neural networks (CNNs) on spectral data.
---
## Current Scope
- **Modality**: Raman spectroscopy (`.txt`)
- **Model**: Figure2CNN (baseline)
- **Task**: Binary classification (Stable vs. Weathered polymers)
- **Architecture**: PyTorch + Streamlit (an illustrative model sketch follows this list)
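For orientation, here is a minimal sketch of what a PyTorch 1D CNN for binary classification of a 500-point spectrum could look like. It is an illustrative stand-in only; the layer sizes and the `TinySpectrumCNN` name are assumptions, not the actual Figure2CNN definition from the repository.

```python
import torch
import torch.nn as nn

class TinySpectrumCNN(nn.Module):
    """Illustrative 1D CNN for a 500-point spectrum (not the real Figure2CNN)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),   # (B, 1, 500) -> (B, 16, 500)
            nn.ReLU(),
            nn.MaxPool1d(2),                               # -> (B, 16, 250)
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                       # -> (B, 32, 1)
        )
        self.classifier = nn.Linear(32, num_classes)       # logits: Stable vs. Weathered

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One dummy spectrum, already resampled to 500 points
logits = TinySpectrumCNN()(torch.randn(1, 1, 500))
```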
---
## Roadmap
- [x] Inference from Raman `.txt` files
- [x] Model selection (Figure2CNN, ResNet1D)
- [ ] Add more trained CNNs for comparison
- [ ] FTIR support (modular integration planned)
- [ ] Image-based inference (future modality)
---
## How to Use
1. Upload a Raman spectrum `.txt` file (or select a sample)
2. Choose a model from the sidebar
3. Run analysis
4. View prediction, logits, and technical information
Supported input:
- Plaintext `.txt` files with 1–2 columns
- Space- or comma-separated
- Comment lines (`#`) are ignored
- Automatically resampled to 500 points (see the parsing sketch below)
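A rough sketch of how such a file might be read and resampled is shown below. The 500-point target matches the note above; the `load_spectrum` name and the use of NumPy linear interpolation are assumptions for illustration rather than the app's exact preprocessing code.

```python
import numpy as np

def load_spectrum(path: str, target_len: int = 500) -> np.ndarray:
    """Read a 1- or 2-column spectrum .txt file and resample it to target_len points."""
    rows = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):   # comment lines are ignored
                continue
            rows.append([float(v) for v in line.replace(",", " ").split()])
    arr = np.asarray(rows)
    # Two columns -> (wavenumber, intensity); one column -> intensity only
    intensity = arr[:, 1] if arr.shape[1] >= 2 else arr.ravel()
    # Linear interpolation onto a fixed-length grid
    old_x = np.linspace(0.0, 1.0, len(intensity))
    new_x = np.linspace(0.0, 1.0, target_len)
    return np.interp(new_x, old_x, intensity)
```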
---
## Contributors
- Dr. Sanmukh Kuppannagari (Mentor)
- Dr. Metin Karailyan (Mentor)
- Jaser Hasan (Author/Developer)
## Model Credit
Baseline model inspired by:
Neo, E.R.K., Low, J.S.C., Goodship, V., Debattista, K. (2023).
*Deep learning for chemometric analysis of plastic spectral data from infrared and Raman databases.*
_Resources, Conservation & Recycling_, **188**, 106718.
[https://doi.org/10.1016/j.resconrec.2022.106718](https://doi.org/10.1016/j.resconrec.2022.106718)
---
## Links
- **Live App**: [Hugging Face Space](https://huggingface.co/spaces/dev-jas/polymer-aging-ml)
- **GitHub Repo**: [ml-polymer-recycling](https://github.com/KLab-AI3/ml-polymer-recycling)
## Strategic Expansion Objectives (Roadmap)
**The roadmap defines three major expansion paths designed to broaden the system's capabilities and impact:**
1. **Model Expansion: Multi-Model Dashboard**
> The dashboard will evolve into a hub for multiple model architectures rather than being tied to a single baseline. Planned work includes:
- **Retraining & Fine-Tuning**: Incorporating publicly available vision models and retraining them with the polymer dataset.
- **Model Registry**: Automatically detecting available `.pth` weights and exposing them in the dashboard for easy selection (see the sketch after this list).
- **Side-by-Side Reporting**: Running comparative experiments and reporting each model's accuracy and diagnostics in a standardized format.
- **Reproducible Integration**: Maintaining modular scripts and pipelines so each model's results can be replicated without conflict.
This ensures flexibility for future research and transparency in performance comparisons.
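As a sketch of the registry idea above: available `.pth` checkpoints could be discovered on disk and offered in the Streamlit sidebar. The `models/` directory and the mapping from file name to architecture are assumptions for illustration; the real layout and loading logic live in the repository.

```python
from pathlib import Path

import streamlit as st
import torch

WEIGHTS_DIR = Path("models")  # assumed location of trained .pth checkpoints

def discover_weights() -> dict:
    """Map a display name to each .pth file found on disk."""
    return {p.stem: p for p in sorted(WEIGHTS_DIR.glob("*.pth"))}

registry = discover_weights()
choice = st.sidebar.selectbox("Model", list(registry) or ["(no weights found)"])
if choice in registry:
    state_dict = torch.load(registry[choice], map_location="cpu")
    # ...instantiate the matching architecture (e.g. Figure2CNN or ResNet1D)
    # and call model.load_state_dict(state_dict) before inference
```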
2. **Image Input Modality**
> The system will support classification on images as an additional modality, extending beyond spectra. Key features will include:
- **Upload Support**: Users can upload single images or batches directly through the dashboard.
- **Multi-Model Execution**: Selected models from the registry can be applied to all uploaded images simultaneously.
- **Batch Results**: Output will be returned in a structured, accessible way, showing both individual predictions and aggregate statistics (see the sketch after this list).
- **Enhanced Feedback**: Outputs will include predicted class, model confidence, and potentially annotated image previews.
This expands the system toward a multi-modal framework, supporting broader research workflows.
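A sketch of how batch image upload and multi-model execution might be wired together in Streamlit is shown below, assuming torchvision-style preprocessing and models that return class logits; the `demo-cnn` entry is a stand-in, since no image models exist in the current codebase.

```python
import streamlit as st
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

# Stand-in registry entry; real image-capable models would be loaded from checkpoints.
registry = {"demo-cnn": nn.Sequential(nn.Flatten(), nn.LazyLinear(2))}

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

uploads = st.file_uploader("Upload images", type=["png", "jpg", "jpeg"],
                           accept_multiple_files=True)
chosen = st.multiselect("Models to run", list(registry), default=list(registry))

rows = []
for file in uploads or []:
    x = preprocess(Image.open(file).convert("RGB")).unsqueeze(0)  # (1, 3, 224, 224)
    for name in chosen:
        with torch.no_grad():
            probs = torch.softmax(registry[name](x), dim=1)[0]
        conf, cls = probs.max(dim=0)
        rows.append({"image": file.name, "model": name,
                     "class": int(cls), "confidence": round(float(conf), 3)})

if rows:
    st.dataframe(rows)  # per-image predictions; aggregate stats could be layered on top
```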
3. **FTIR Dataset Integration**
> Although previously deferred, FTIR support will be added back in a modular, distinct fashion. Planned steps are:
- **Dedicated Preprocessing**: Tailored scripts to handle FTIR-specific signal characteristics (multi-layer handling, baseline correction, normalization); a preprocessing sketch follows this list.
- **Architecture Compatibility**: Ensuring existing and retrained models can process FTIR data without mixing it with Raman workflows.
- **UI Integration**: Introducing FTIR as a separate option in the modality selector, keeping Raman, Image, and FTIR workflows clearly delineated.
- **Phased Development**: Implementation details to be refined during meetings to ensure scientific rigor.
This guarantees FTIR becomes a supported modality without undermining the validated Raman foundation.
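As a rough illustration of the dedicated preprocessing step mentioned above, the helper below applies a simple polynomial baseline correction, min-max normalization, and resampling to one FTIR spectrum. The function name, baseline method, and 500-point target are assumptions for this sketch, not the agreed pipeline.

```python
import numpy as np

def preprocess_ftir(wavenumbers: np.ndarray, absorbance: np.ndarray,
                    baseline_degree: int = 3, target_len: int = 500) -> np.ndarray:
    """Baseline-correct, normalize, and resample a single FTIR spectrum (illustrative)."""
    # Crude baseline estimate: subtract a low-order polynomial fit from the signal
    coeffs = np.polyfit(wavenumbers, absorbance, deg=baseline_degree)
    corrected = absorbance - np.polyval(coeffs, wavenumbers)
    # Min-max normalization to [0, 1]
    corrected = (corrected - corrected.min()) / (np.ptp(corrected) + 1e-12)
    # Resample onto a fixed-length grid (sorted, since FTIR axes are often descending)
    order = np.argsort(wavenumbers)
    grid = np.linspace(wavenumbers.min(), wavenumbers.max(), target_len)
    return np.interp(grid, wavenumbers[order], corrected[order])
```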