AI & ML interests

None defined yet.

Recent Activity

ImranzamanML 
posted an update about 12 hours ago
Finally, OpenAI is sharing open-source models again, the first since GPT-2 in 2019.
gpt-oss-120b
gpt-oss-20b

openai/gpt-oss-120b
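If you want to try them right away, here is a hedged sketch using the standard transformers text-generation pipeline (it assumes a recent transformers release with gpt-oss support, accelerate installed, and enough GPU memory; it is not an official snippet):

```python
# Minimal sketch, not an official example: load the smaller gpt-oss model and chat with it.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",   # the 120b variant needs far more memory
    torch_dtype="auto",
    device_map="auto",            # requires accelerate
)

messages = [{"role": "user", "content": "Summarize what an open-weight model is."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1])  # last message is the assistant reply
```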

#AI #GPT #LLM #Openai
Tonic 
posted an update 4 days ago
ImranzamanML 
posted an update 4 days ago
How Transformer model layers work!

I focused on showing the core steps side by side: tokenization, embedding, and the transformer layers, highlighting the self-attention and feed-forward parts in each without getting lost in too much technical depth.

It shows how these layers work together to understand context and generate meaningful output!

If you are curious about the architecture behind AI language models or want a clean way to explain it, hit me up, I’d love to share!
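For anyone who prefers code over diagrams, here is a minimal, illustrative PyTorch sketch of the same flow (tokenization stand-in → embedding → one transformer block with self-attention and feed-forward); it is a simplification, not a production model:

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One encoder-style block: self-attention followed by a feed-forward network."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # Self-attention: each token gathers context from every other token.
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        # Feed-forward: position-wise transformation of each token representation.
        return self.norm2(x + self.ff(x))

vocab_size, d_model = 32000, 512
embedding = nn.Embedding(vocab_size, d_model)
token_ids = torch.randint(0, vocab_size, (1, 10))  # stand-in for tokenized text
hidden = TransformerBlock(d_model)(embedding(token_ids))
print(hidden.shape)  # torch.Size([1, 10, 512])
```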



#AI #MachineLearning #NLP #Transformers #DeepLearning #DataScience #LLM #AIAgents
ImranzamanML 
posted an update 9 days ago
Hugging Face just made life easier with the new hf CLI!
huggingface-cli → hf

Along with the rename, new features have been added, like hf jobs: we can now run any script or Docker image on dedicated Hugging Face infrastructure with a single command. It's a good addition for running experiments and jobs on the fly.

To get started, just run:
pip install -U huggingface_hub
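
Once installed, a job can be launched and monitored in a few commands. A rough example (flags and image names are illustrative and may change; run hf jobs run --help for the current options):

hf auth login
hf jobs run python:3.12 python -c "print('Hello from HF Jobs!')"
hf jobs ps
hf jobs logs <job-id>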

List of hf CLI Commands

Main Commands
hf auth: Manage authentication (login, logout, etc.).
hf cache: Manage the local cache directory.
hf download: Download files from the Hub.
hf jobs: Run and manage Jobs on the Hub.
hf repo: Manage repos on the Hub.
hf upload: Upload a file or a folder to the Hub.
hf version: Print information about the hf version.
hf env: Print information about the environment.

Authentication Subcommands (hf auth)
login: Log in using a Hugging Face token.
logout: Log out of your account.
whoami: See which account you are logged in as.
switch: Switch between different stored access tokens/profiles.
list: List all stored access tokens.

Jobs Subcommands (hf jobs)
run: Run a Job on Hugging Face infrastructure.
inspect: Display detailed information on one or more Jobs.
logs: Fetch the logs of a Job.
ps: List running Jobs.
cancel: Cancel a Job.

#HuggingFace #MachineLearning #AI #DeepLearning #MLTools #MLOps #OpenSource #Python #DataScience #DevTools #LLM #hfCLI #GenerativeAI
Tonic 
posted an update 16 days ago
👋 Hey there folks,

just submitted my plugin idea to the G-Assist Plugin Hackathon by @nvidia. Check it out, it's a great way to use a local SLM on a Windows machine to easily and locally get things done! https://github.com/NVIDIA/G-Assist
Tonic 
posted an update 18 days ago
🙋🏻‍♂️ Hey there folks ,

Yesterday , Nvidia released a reasoning model that beats o3 on science, math and coding !

Today you can try it out here : Tonic/Nvidia-OpenReasoning

hope you like it !
Tonic 
posted an update 25 days ago
🙋🏻‍♂️ Normalize adding compute & runtime traces to your model cards
Tonic 
posted an update about 1 month ago
Who's going to the Raise Summit in Paris tomorrow?

If you're around , I would love to meet you :-)
Tonic 
posted an update 2 months ago
🙋🏻‍♂️ hey there folks ,

So at every bio/med/chem meeting I go to, I always get the same questions: "why are you sharing a gdrive link with me for this?" and "Do you have any plans to publish your model weights and datasets on huggingface?" and finally I got a good answer today which explains everything:

basically there is some kind of government censorship on this (USA, but I'm sure others too): researchers are told they are not allowed to share because it is considered a "data leak", which is illegal!

this is terrible ! but the good news is that we can do something about it !

So there is this "call for opinions and comments" from the NIH (USA), where we can make our opinions on this topic known: https://osp.od.nih.gov/comment-form-responsibly-developing-and-sharing-generative-artificial-intelligence-tools-using-nih-controlled-access-data/

Kindly consider dropping your opinion and thoughts about this censorship of science, and share this post, link, or thoughts widely.

Together maybe we can start to share data and model weights appropriately and openly in a good way 🙏🏻🚀

cc. @cyrilzakka

Tonic 
posted an update 2 months ago
🙋🏻‍♂️ Hey there folks ,

Yesterday the world's first "Learn to Vibe Code" application was released .

As vibe coding is becoming the mainstream paradigm, the first educational app is now here to support it.

You can try it out already :

https://vibe.takara.ai

and of course it's entirely open source, so i already made my issue and feature branch :-) 🚀
ImranzamanML 
posted an update 3 months ago
Run an LLM locally using Docker, right inside your codebase (no GUI needed!)

In this project, I did not use a supporting GUI like Open WebUI or LM Studio. The purpose is to use standalone LLM models with Ollama directly, to give you an idea of how you can use them in your own project/code instead of going through a third-party app. Everything is containerized with Docker, so setup is clean and repeatable. It's just a fun side project so my connections can learn more about running models locally in their own projects.

Tech stack used:

🐋 Docker

🦙 LLaMA via Ollama

💻 HTML/CSS/JS

🐍 Python + FastAPI

🌐 NGINX



It's still early and a fun side project, but if you are into local model deployment, or just want to see how it works, check it out at the link below!

https://github.com/Imran-ml/llama-chatbot-dockerized
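
For a rough idea of the pattern (a minimal sketch with illustrative names, not the repo's actual code), a FastAPI route can simply forward the prompt to Ollama's local HTTP API and return the completion:

```python
# Minimal sketch: FastAPI endpoint that forwards a prompt to a local Ollama server.
# Assumes Ollama is running on localhost:11434 and a Llama model has already been pulled.
import requests
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint

class ChatRequest(BaseModel):
    prompt: str

@app.post("/chat")
def chat(req: ChatRequest):
    # stream=False returns the whole completion in a single JSON payload
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "llama3", "prompt": req.prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return {"answer": resp.json()["response"]}
```

Run it with uvicorn and put NGINX in front, and you have the same no-GUI setup in miniature.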

#LLM #Docker #OpenSource #Chatbot #LLaMA #fastapi
ImranzamanML 
posted an update 3 months ago
🚀 New paper out: "Improving Arabic Multi-Label Emotion Classification using Stacked Embeddings and Hybrid Loss Function"
Improving Arabic Multi-Label Emotion Classification using Stacked Embeddings and Hybrid Loss Function (2410.03979)

In this work, we tackle some major challenges in Arabic multi-label emotion classification, especially the issues of class imbalance and label correlation that often hurt model performance, particularly for minority emotions.

Our approach:

Stacked contextual embeddings from fine-tuned ArabicBERT, MarBERT, and AraBERT models.

A meta-learning strategy that builds richer representations.

A hybrid loss function combining class weighting, label correlation matrices, and contrastive learning to better handle class imbalances.

🧠 Model pipeline: stacked embeddings → meta-learner → Bi-LSTM → fully connected network → multi-label classification.
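
To give a flavour of the loss design, here is an illustrative PyTorch sketch only, not the paper's exact formulation (the contrastive term is omitted, and the class weights and label-correlation matrix would come from the training data):

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, targets, class_weights, label_corr, corr_weight=0.1):
    # Class-weighted binary cross-entropy handles the imbalance across emotions.
    bce = F.binary_cross_entropy_with_logits(
        logits, targets, pos_weight=class_weights, reduction="mean"
    )
    # Nudge co-predicted labels toward the empirical label-correlation structure.
    probs = torch.sigmoid(logits)
    pred_corr = (probs.T @ probs) / probs.shape[0]
    corr_penalty = F.mse_loss(pred_corr, label_corr)
    return bce + corr_weight * corr_penalty

# Example shapes: a batch of 4 samples with 8 emotion labels
logits = torch.randn(4, 8)
targets = torch.randint(0, 2, (4, 8)).float()
print(hybrid_loss(logits, targets, torch.ones(8), torch.eye(8)).item())
```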

🔍 Extensive experiments show significant improvements across Precision, Recall, F1-Score, Jaccard Accuracy, and Hamming Loss.
🌟 The hybrid loss function in particular helped close the gap between majority and minority classes!

We also performed ablation studies to break down each component’s contribution and the results consistently validated our design choices.

This framework isn't just for Arabic; it offers a generalizable path for improving multi-label emotion classification in other low-resource languages and domains.

Big thanks to my co-authors: Muhammad Azeem Aslam, Wang Jun, Nisar Ahmed, Li Yanan, Hu Hongfei, Wang Shiyu, and Xin Liu!

Would love to hear your thoughts on this work! 👇
ImranzamanML 
posted an update 4 months ago
ImranzamanML 
posted an update 4 months ago

Llama 4 is here and it's making serious waves!

After diving into the latest benchmark results, it’s clear that Meta’s new Llama 4 lineup (Maverick, Scout, and Behemoth) is no joke.

Here are a few standout highlights🔍:

Llama 4 Maverick hits the sweet spot between cost and performance
- Outperforms GPT-4o in image tasks like ChartQA (90.0 vs 85.7) and DocVQA (94.4 vs 92.8)
- Beats others in MathVista and MMLU Pro too and at a fraction of the cost ($0.19–$0.49 vs $4.38 🤯)

Llama 4 Scout is lean, cost-efficient, and surprisingly capable
- Strong performance across image and language tasks (e.g. ChartQA: 88.8, DocVQA: 94.4)
- More affordable than most competitors and still beats out larger models like Gemini 2.0 Flash-Lite

Llama 4 Behemoth is the heavy hitter.
- Tops the charts in LiveCodeBench (49.4), MATH-500 (95.0), and MMLU Pro (82.2)
- Even edges out Claude 3 Sonnet and Gemini 2 Pro in multiple areas

Meta didn’t just show up, they delivered across multimodal, coding, reasoning, and multilingual benchmarks.

And honestly? Seeing this level of performance, especially at lower inference costs, is a big deal for anyone building on LLMs.

Curious to see how these models do in real-world apps next.

#AI #Meta #Llama4 #LLMs #Benchmarking #MachineLearning #OpenSourceAI #GenerativeAI
Tonic 
posted an update 5 months ago
🙋🏻‍♂️Hey there folks,

Did you know that you can use ModernBERT to detect model hallucinations ?

Check out the Demo : Tonic/hallucination-test

See here for Medical Context Demo : MultiTransformer/tonic-discharge-guard

check out the model from KRLabs : KRLabsOrg/lettucedect-large-modernbert-en-v1

and the library they kindly open sourced for it : https://github.com/KRLabsOrg/LettuceDetect

👆🏻if you like this topic please contribute code upstream 🚀

Tonic 
posted an update 5 months ago
Powered by KRLabsOrg/lettucedect-large-modernbert-en-v1 from KRLabsOrg.

Detect hallucinations in answers based on context and questions using ModernBERT with 8192-token context support!

### Model Details
- **Model Name**: [lettucedect-large-modernbert-en-v1](KRLabsOrg/lettucedect-large-modernbert-en-v1)
- **Organization**: [KRLabsOrg](KRLabsOrg)
- **Github**: [https://github.com/KRLabsOrg/LettuceDetect](https://github.com/KRLabsOrg/LettuceDetect)
- **Architecture**: ModernBERT (Large) with extended context support up to 8192 tokens
- **Task**: Token Classification / Hallucination Detection
- **Training Dataset**: [RagTruth](wandb/RAGTruth-processed)
- **Language**: English
- **Capabilities**: Detects hallucinated spans in answers, provides confidence scores, and calculates average confidence across detected spans.

LettuceDetect excels at processing long documents to determine if an answer aligns with the provided context, making it a powerful tool for ensuring factual accuracy.
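
Usage is roughly as follows. This is a hedged sketch loosely adapted from the LettuceDetect README; the exact import path and argument names are assumptions on my part, so check the GitHub repo for the real API:

```python
# Rough sketch of span-level hallucination detection with LettuceDetect.
# pip install lettucedetect  (the API below is an assumption; see the repo's README)
from lettucedetect.models.inference import HallucinationDetector

detector = HallucinationDetector(
    method="transformer",
    model_path="KRLabsOrg/lettucedect-large-modernbert-en-v1",
)

contexts = ["France is a country in Europe. The capital of France is Paris."]
question = "What is the capital of France?"
answer = "The capital of France is Paris, a city of 10 million people."

# Returns spans of the answer flagged as unsupported by the context, with confidence scores.
spans = detector.predict(
    contexts=contexts, question=question, answer=answer, output_format="spans"
)
print(spans)
```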
ImranzamanML 
posted an update 6 months ago
Hugging Face just launched the AI Agents Course – a free journey from beginner to expert in AI agents!

- Learn AI Agent fundamentals, use cases and frameworks
- Use top libraries like LangChain & LlamaIndex
- Compete in challenges & earn a certificate
- Hands-on projects & real-world applications

https://huggingface.co/learn/agents-course/unit0/introduction

You can join a live Q&A on Feb 12 at 5 PM CET to learn more about the course here:

https://www.youtube.com/live/PopqUt3MGyQ
Tonic 
posted an update 6 months ago
🙋🏻‍♂️hey there folks ,

Goedel's Theorem Prover is now being demo'ed on huggingface : Tonic/Math

give it a try !