AI & ML interests

Fine-tuning and training models for thinking, agentic, and research purposes

Recent Activity

DedeProGames updated a model about 3 hours ago: OrionLLM/GRM2-3b
Reality123b opened a new discussion about 9 hours ago: OrionLLM/README: Quantum Computing
DedeProGames updated a collection 1 day ago: GRM2

DedeProGames posted an update 1 day ago
Introducing GRM2, a powerful 3B-parameter model designed for long-horizon reasoning and strong performance on complex tasks.

Even with only 3B parameters, it outperforms Qwen3-32B on several benchmarks.

It can also generate large, complex programs of over 1,000 lines, use tools comparably to much larger models, and is well suited to agentic tasks.

GRM2 is licensed under Apache 2.0, making it an excellent fine-tuning base for other tasks.

OrionLLM/GRM2-3b
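Since GRM2-3b is published as a standard causal LM on the Hub, it should work with the usual transformers generation flow. Here is a minimal sketch; the prompt and generation settings are illustrative assumptions, so check the model card for the recommended chat template and parameters:

```python
def generate_reply(prompt, model_id="OrionLLM/GRM2-3b", max_new_tokens=512):
    """Load the model from the Hub and generate one chat-style reply.

    Sketch only: generation settings are assumptions, not values
    taken from the GRM2 model card.
    """
    # Heavy imports are kept inside the function so the sketch can be
    # imported without downloading model weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Build a single-turn chat prompt using the model's own template.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

At 3B parameters the model should fit comfortably on a single consumer GPU, which is consistent with the agentic/local-inference positioning above.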
DedeProGames updated a Space 2 days ago
DedeProGames posted an update 3 days ago
Can small models program?

Even when they are reasoning models, small AIs cannot produce extensive, high-quality code; at least, that is what is commonly thought.

We present OrionLLM/NanoCoder-0.6b, a model with just 600 million parameters, based on Qwen3-0.6B and trained on the nvidia/OpenCodeReasoning dataset.

While it is not strong at complex code, we observed a significant improvement in code generation (especially in Python), demonstrating that, when trained correctly, small models can in fact program.
DedeProGames posted an update 4 days ago
Introducing the GRM family: small models fine-tuned from the Qwen2.5 family for long chain-of-thought (CoT), general reasoning, and agentic tasks.

GRM is available in 7B and 1.5B parameter sizes, making these models well suited to complex tasks and local inference agents.
OrionLLM/GRM-7b
OrionLLM/GRM-1.5b