Hugging Face
Models
Datasets
Spaces
Community
Docs
Enterprise
Pricing
Log In
Sign Up
1
90
231
pascalmusabyimana
pascal-maker
Follow
6b4b86ec-928a-4b7e-9c1e-8d5f009e3272's profile picture
yaostephanekouassi1's profile picture
DaveEds's profile picture
19 followers
·
102 following
https://pascal-maker.github.io/developedbypascalmusabyimana/
PascalMusabyim1
pascal-maker
pascal-musabyimana-573b66178
AI & ML interests
computer vision, nlp , machine learning and deeplearning
Recent Activity
reacted
to
MikeDoes
's
post
with 👀
about 5 hours ago
What if an AI agent could be tricked into stealing your data, just by reading a tool's description? A new paper reports it's possible. The "Attractive Metadata Attack" paper details this stealthy new threat. To measure the real-world impact of their attack, the researchers needed a source of sensitive data for the agent to leak. We're proud that the AI4Privacy corpus was used to create the synthetic user profiles containing standardized PII for their experiments. This is a perfect win-win. Our open-source data helped researchers Kanghua Mo, 龙昱丞, Zhihao Li from Guangzhou University and The Hong Kong Polytechnic University to not just demonstrate a new attack, but also quantify its potential for harm. This data-driven evidence is what pushes the community to build better, execution-level defenses for AI agents. 🔗 Check out their paper to see how easily an agent's trust in tool metadata could be exploited: https://arxiv.org/pdf/2508.02110 #OpenSource #DataPrivacy #LLM #Anonymization #AIsecurity #HuggingFace #Ai4Privacy #Worldslargestopensourceprivacymaskingdataset
reacted
to
MikeDoes
's
post
with 👀
about 5 hours ago
What if an AI agent could be tricked into stealing your data, just by reading a tool's description? A new paper reports it's possible. The "Attractive Metadata Attack" paper details this stealthy new threat. To measure the real-world impact of their attack, the researchers needed a source of sensitive data for the agent to leak. We're proud that the AI4Privacy corpus was used to create the synthetic user profiles containing standardized PII for their experiments. This is a perfect win-win. Our open-source data helped researchers Kanghua Mo, 龙昱丞, Zhihao Li from Guangzhou University and The Hong Kong Polytechnic University to not just demonstrate a new attack, but also quantify its potential for harm. This data-driven evidence is what pushes the community to build better, execution-level defenses for AI agents. 🔗 Check out their paper to see how easily an agent's trust in tool metadata could be exploited: https://arxiv.org/pdf/2508.02110 #OpenSource #DataPrivacy #LLM #Anonymization #AIsecurity #HuggingFace #Ai4Privacy #Worldslargestopensourceprivacymaskingdataset
liked
a model
4 days ago
mlx-community/Olmo-3.1-32B-Instruct-bf16
View all activity
Organizations
pascal-maker
's datasets
2
Sort: Recently updated
pascal-maker/my-single-image-dataset
Viewer
•
Updated
May 27
•
1
•
20
pascal-maker/classification-ie-optimization
Viewer
•
Updated
Feb 18
•
18