Jonatan Borkowski (PRO)
j14i
6 followers · 23 following
jborkowski
AI & ML interests: None yet
Recent Activity
liked a Space about 14 hours ago: hysts/daily-papers
reacted to sergiopaniego's post with 🚀 about 15 hours ago
Google DeepMind releases FunctionGemma, a 240M model specialized in 🔧 tool calling, built for fine-tuning. TRL has day-0 support.

To celebrate, we’re sharing 2 new resources:
> Colab guide to fine-tune it for 🌐 browser control with BrowserGym OpenEnv
> Standalone training script

> Colab notebook: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/grpo_functiongemma_browsergym_openenv.ipynb
> Training script: https://github.com/huggingface/trl/blob/main/examples/scripts/openenv/browsergym_llm.py (command to run it inside the script)
> More notebooks in TRL: https://huggingface.co/docs/trl/example_overview#notebooks
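The post above concerns a model trained for tool calling. As a rough illustration of what that means at the application level (the exact call format FunctionGemma emits is not shown in the post, so the JSON shape, tool name, and stub function below are assumptions), a host program registers tools, the model emits a structured call, and the host parses and executes it:

```python
import json

# Hypothetical tool registry in the common JSON-schema style used for tool
# calling; FunctionGemma's actual expected schema may differ.
TOOLS = {
    "get_weather": {
        "description": "Return the weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
}

def get_weather(city: str) -> str:
    # Stub standing in for a real API call.
    return f"Sunny in {city}"

def dispatch(model_output: str) -> str:
    """Parse a JSON tool call (as a model might emit) and execute it."""
    call = json.loads(model_output)
    name, args = call["name"], call["arguments"]
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return globals()[name](**args)

# A tool-calling model would emit something like this string:
result = dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}')
print(result)  # Sunny in Paris
```

Fine-tuning (e.g. with the TRL scripts linked above) is what teaches the model to emit such calls reliably; the dispatch side stays this simple.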
reacted to Kseniase's post with ❤️ about 15 hours ago
From Prompt Engineering to Context Engineering: Main Design Patterns

Earlier on, we relied on clever prompt wording, but now structured, complete context matters more than magic phrasing alone. The next year is going to be a year of context engineering, which expands beyond prompt engineering. The two complement each other: prompt engineering shapes how we ask, while context engineering shapes what the model knows, sees, and can do. To keep things clear, here are the main techniques and design patterns in both areas, with some useful resources for further exploration:

▪️ 9 Prompt Engineering Techniques (configuring input text)

1. Zero-shot prompting – giving a single instruction without examples. Relies entirely on pretrained knowledge.
2. Few-shot prompting – adding input–output examples to encourage the model to show the desired behavior. ⟶ https://arxiv.org/abs/2005.14165
3. Role prompting – assigning a persona or role (e.g. "You are a senior researcher," "Say it as a specialist in healthcare") to shape style and reasoning. ⟶ https://arxiv.org/abs/2403.02756
4. Instruction-based prompting – giving explicit constraints or guidance, like "think step by step," "use bullet points," or "answer in 10 words."
5. Chain-of-Thought (CoT) – encouraging intermediate reasoning traces to improve multi-step reasoning. It can be explicit ("let’s think step by step") or implicit (demonstrated via examples). ⟶ https://arxiv.org/abs/2201.11903
6. Tree-of-Thought (ToT) – the model explores multiple reasoning paths in parallel, like branches of a tree, instead of following a single chain of thought. ⟶ https://arxiv.org/abs/2305.10601
7. Reasoning–action prompting (ReAct-style) – prompting the model to interleave reasoning steps with explicit actions and observations. It defines action slots and lets the model generate a sequence of "Thought → Action → Observation" steps. ⟶ https://arxiv.org/abs/2210.03629

Read further ⬇️

Also subscribe to Turing Post: https://www.turingpost.com/subscribe
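Two of the techniques listed above are easy to show concretely as prompt construction. This is a minimal sketch; the arithmetic task, example pairs, and wording are invented for illustration, and only the patterns (few-shot prepending and an explicit CoT trigger) come from the post:

```python
def few_shot_prompt(examples, query):
    """Few-shot prompting: prepend input-output pairs before the real query."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\nInput: {query}\nOutput:"

def cot_prompt(question):
    """Explicit Chain-of-Thought: append a step-by-step instruction."""
    return f"{question}\nLet's think step by step."

# Few-shot: the model infers the task (here, addition) from the examples.
prompt = few_shot_prompt([("2 + 2", "4"), ("3 + 5", "8")], "7 + 6")
print(prompt)

# Explicit CoT: the trailing instruction encourages intermediate reasoning.
print(cot_prompt("If a train leaves at 3pm and arrives at 5pm, how long is the trip?"))
```

Either string would then be sent to a model as-is; zero-shot prompting is the degenerate case of `few_shot_prompt` with an empty example list (plus an instruction).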
Organizations

j14i's Spaces (1)

🕵 Template Final Assignment (Runtime error)