| Field | Type |
|:--|:--|
| modelId | string |
| author | string |
| last_modified | timestamp[us, tz=UTC] |
| downloads | int64 |
| likes | int64 |
| library_name | string |
| tags | list |
| pipeline_tag | string |
| createdAt | timestamp[us, tz=UTC] |
| card | string |
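These columns mirror the model metadata exposed by the Hugging Face Hub API. As a rough illustration, the same fields can be pulled with `huggingface_hub` (a hedged sketch; attribute availability varies by client version, and some fields may be `None` unless explicitly expanded):

```python
# Hedged sketch: list recently modified models and print the columns from the
# schema above. Assumes a recent huggingface_hub client; some attributes
# (e.g. created_at, card data) may be None unless requested via `expand`.
from huggingface_hub import HfApi

api = HfApi()
for m in api.list_models(sort="lastModified", direction=-1, limit=5):
    print(m.id, m.author, m.last_modified, m.downloads, m.likes,
          m.library_name, m.tags, m.pipeline_tag, m.created_at)
```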
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755750328
0xaoyama
2025-08-21T04:26:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:25:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular zealous gorilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Krish356/qwen3-coder-react-lora
Krish356
2025-08-21T04:25:18Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3_moe", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-21T04:24:38Z
--- base_model: unsloth/qwen3-coder-30b-a3b-instruct tags: - text-generation-inference - transformers - unsloth - qwen3_moe - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Krish356 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen3-coder-30b-a3b-instruct This qwen3_moe model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
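The card ships no usage snippet; a minimal loading sketch, assuming the repo is a standard PEFT LoRA adapter over the base model named above (repo IDs from the card, prompt purely illustrative):

```python
# Hedged sketch: attach the LoRA adapter to its base model with PEFT.
# Assumes a standard PEFT-format adapter; not an official example from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/qwen3-coder-30b-a3b-instruct"  # base model per the card
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "Krish356/qwen3-coder-react-lora")
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = "Write a React component that renders a todo list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```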
unitova/blockassist-bc-zealous_sneaky_raven_1755748654
unitova
2025-08-21T04:24:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:24:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
roeker/blockassist-bc-quick_wiry_owl_1755750173
roeker
2025-08-21T04:24:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:23:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755750148
IvanJAjebu
2025-08-21T04:23:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:23:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/relief-art-style-flux-ethanar
Muapi
2025-08-21T04:23:32Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-21T04:23:21Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Relief Art Style •••FLUX••• @Ethanar ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:685515@767220", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755748560
coelacanthxyz
2025-08-21T04:23:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:23:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Eljef/Hola
Eljef
2025-08-21T04:23:10Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-21T04:23:10Z
--- license: apache-2.0 ---
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755750143
0xaoyama
2025-08-21T04:22:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:22:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular zealous gorilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/realistic-horror-flux-1.0
Muapi
2025-08-21T04:22:11Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-21T04:22:04Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Realistic Horror Flux 1.0 ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: The image is a dark, eerie photograph of a woman lying on a bed in a dimly lit room. The woman, with long, dark hair, has a deathly pale complexion and her face is heavily made up with white paint, creating a skeletal, ghostly appearance. Her eyes are painted black with red highlights, giving an eerie, demonic look. Her face is smeared with blood, adding to the horror theme., This is a dark, eerie CGI-rendered scene depicting a young girl with an unsettling, haunting appearance. She stands in a dimly lit, narrow hallway with vertical bars on the walls, suggesting a prison or asylum setting. The girl has long, straight, light-blonde hair that falls past her shoulders and a pale, gaunt face. Her skin has a deathly pallor, and her eyes glow with an unnatural, piercing blue light, adding to the chilling atmosphere., The image is a highly detailed CGI rendering in a dark, eerie atmosphere, capturing a scene in a dimly lit, narrow hallway. The focal point is a young girl with long, straight, dark hair, dressed in a white, tattered dress with a red stain on it, possibly blood. Her face is painted with white face paint, creating a ghostly effect, and she has dark, exaggerated eye makeup, giving her an otherworldly appearance., This image is a dark, horror-themed CGI illustration featuring two young girls with pale, deathly faces and wide, bloodshot eyes. Their faces are covered in blood, giving them a grotesque, zombie-like appearance. The girls have long, dark hair and are positioned on a table, with their heads resting on the surface. The table is cluttered with various objects, including a knife, a piece of cheese, and a stack of books., This is a photograph taken in a misty, eerie forest at night, creating a haunting atmosphere. The scene is dominated by a group of hooded figures, shrouded in dark, tattered robes that blend into the shadows, making it difficult to discern their features. These figures, possibly representing a cult or ritualistic group, are scattered across the ground, which is covered in dry, brown leaves and twigs., The image is a dark, atmospheric photograph taken in a dimly lit, narrow hallway of what appears to be an old, abandoned building. The hallway is narrow and stretches into the distance, with a low ceiling and walls that are painted in a dull beige and white. The lighting is minimal, casting long, ominous shadows that enhance the eerie mood. The central figure is a young woman with long, dark hair that falls past her shoulders., The image is a dark, dramatic CGI-rendered scene set in a dimly lit, cluttered kitchen. The central figure is a young woman with long, straight black hair and pale skin, dressed in a white blouse with short sleeves and a red, stained, knee-length skirt. Her face is smeared with blood, giving her a distressed and fearful expression. The blood also stains her arm and the front of her dress, suggesting she has been injured., This is a photograph of a dimly lit, abandoned kitchen bathed in an eerie, moody atmosphere. The scene is illuminated by a narrow beam of light filtering through a dirty, tattered curtain on a window in the far right corner, casting stark shadows on the otherwise dark, grimy walls. 
The room is filled with an oppressive, foreboding ambiance, enhanced by the stark contrast between the dark shadows and the faint, cold light., This is a photograph of a dimly lit, minimalist bedroom with a stark, eerie atmosphere. The room is bathed in deep shadows, with the only significant light source coming from a window on the left side, partially obscured by fog or mist outside, casting a faint glow. The window, made of dark-framed glass, has a simple white curtain, barely visible due to the low light., This is a digitally rendered scene depicting a misty, eerie night scene of two large, old, haunted-looking houses in a dark forest. The houses, with their steep, pointed roofs and tall, narrow windows, exude a spooky, Gothic atmosphere. The house on the left is slightly illuminated, with warm, golden light spilling from its windows and casting a soft glow on the misty ground., The image is a digitally rendered scene depicting a dimly lit, atmospheric room with a dark, mysterious ambiance. The room is sparsely furnished with a large, old-fashioned clawfoot bathtub situated in the center-right of the image. The bathtub is positioned on a wooden floor, which appears worn and slightly uneven, covered with scattered, faded leaves and debris., The image is a dark, atmospheric photograph of a woman with a haunting, eerie quality. The scene is set in a dimly lit, possibly abandoned or decrepit room, with a dark, mysterious ambiance. The woman, who appears to be in her late 20s to early 30s, has long, straight, dark brown hair that falls over her shoulders., The image is a dark, eerie CGI rendering depicting a young girl with a haunting expression, standing in a dimly lit, narrow corridor. The girl has long, straight, dark brown hair that cascades down her back. Her face is pale with prominent, dark circles under her eyes and smeared, dark makeup that enhances a distressed, almost demonic appearance., The image is a dark, atmospheric photograph or digital artwork depicting an eerie, dimly lit room. The scene is dominated by shadows, with only a faint light source illuminating the far end of the room, casting a narrow beam of light across the floor. The walls are a deep, dark blue, and the floor is a polished wooden surface that reflects some of the faint light., The image is a dark, atmospheric CGI rendering of a small, dimly lit bedroom. The room features a single bed pushed against the left wall, covered with a white sheet and a dark, possibly black or navy blue blanket. The bed's frame is wrought iron, with a simplistic design. A small window on the far wall, partially covered by a dark blue curtain, lets in a faint, diffused light, casting a dim glow across the room., The image is a dark, haunting digital artwork depicting a woman lying on a bed, with a sinister, horror-inspired theme. The woman has long, dark hair and pale skin. Her face is painted with white makeup, giving her an eerie, almost corpse-like appearance. Her eyes are black with red sclera, giving a demonic or zombie-like effect. Her face is covered in dark, jagged scratches and streaks, enhancing the horror aesthetic., The image is a dark, atmospheric CGI rendering in a horror-themed setting. The central figure is a young girl with long, straight, light brown hair. She has a pale, gaunt face with prominent, wide-open, glowing eyes that are an unnatural shade of bright blue, giving her an eerie, haunting appearance. 
Her expression is a mixture of fear and shock, with her mouth slightly open., The image is a dark, eerie CGI-rendered scene from a horror game or movie, showcasing two characters in a dimly lit, narrow corridor. The main subject is a young girl with pale skin and long, dark hair, her face painted with dramatic, exaggerated features such as black lines around her eyes, mouth, and forehead, giving her an otherworldly appearance. She has a small, dark tear running down her cheek, adding to the unsettling atmosphere., The image is a dark, atmospheric digital artwork depicting a young woman lying on a bed in a dimly lit room. She has long, wavy, auburn hair cascading over her shoulders. Her skin is pale with numerous dark, jagged scars and cuts, giving her a haunting appearance. Her eyes are an unnaturally bright, glowing red, which contrasts starkly with the dark surroundings., The image is a dark, atmospheric CGI rendering depicting a scene in a dimly lit, seemingly abandoned kitchen. The focal point is a young woman with long, dark hair, standing near a cluttered counter. Her face is pale and gaunt, with dark circles under her eyes, suggesting she is in distress or has been through a traumatic event. Blood splatters her face, particularly around her mouth and forehead, indicating she might be injured or in a state of shock., The image is a dark, surreal CGI-rendered scene set in a dimly lit, modern kitchen. Two figures, both nude and seemingly undead, stand side by side in the center of the image. Their bodies are emaciated, with sunken eyes, and their skin is a sickly, pale color, marred with dark, oozing wounds and blood. They have long, dark hair that falls down their backs., The image is a dark, atmospheric photograph set in a dimly lit kitchen, evoking a horror or thriller theme. The scene features two figures: a man and a young girl, both covered in blood. The man, likely in his 30s, has short hair and is dressed in a white shirt with the sleeves rolled up, revealing his arms covered in blood. His face is pale and his eyes are wide with shock or fear., The image is a dark, highly detailed CGI rendering in a horror-themed style, depicting a terrifying scene in a dimly lit room. The setting appears to be a kitchen, as evidenced by the visible countertops and cabinets in the background. The room is shrouded in darkness, with only minimal light coming from a dim, flickering source, casting long, ominous shadows., This is a dark, horror-themed CGI image. The scene takes place in a dimly lit, dimly furnished kitchen with a wooden table in the foreground. Two emaciated, undead-like figures with long, dark hair stand behind the table, their faces deathly pale and contorted into grotesque, open-mouthed expressions. Their skin is marred with dark, dried blood and exposed, sunken muscles., This image is a dark, eerie CGI rendering of a horror scene set in a dimly lit kitchen. The scene features a woman with long, dark hair and pale skin, her face grotesquely contorted, with glowing red eyes and smeared blood. She is leaning over a man lying on the floor, who appears to be dead or dying. The man's face is also pale, with similar red eyes and bloodied features, suggesting a supernatural or demonic presence., The image is a dark, horror-themed CGI illustration with a gothic, eerie atmosphere. The scene depicts two young girls, likely sisters, with pale, bloodied faces and wide, terrified eyes. 
Their skin is deathly pale, and their faces are covered in dark, clotted blood, with red streaks running down their cheeks and necks. Both girls have long, dark hair that frames their faces., This is a dark, highly detailed, and vividly rendered photograph with a horror theme. The scene takes place in a dimly lit, industrial kitchen. A woman with long, straight black hair and pale skin, likely of East Asian descent, is standing over a young, nude, and seemingly dead child lying on a kitchen counter., This photograph depicts a dimly lit, abandoned kitchen bathed in a haunting, eerie atmosphere. The scene is captured in a high-contrast, slightly exaggerated style, with deep shadows and stark highlights, enhancing the sense of desolation. The kitchen is cluttered and dirty, with dark, worn wooden cabinets and countertops. A large window on the right side, covered with sheer white curtains, allows a faint, bluish light to filter in, casting an eerie glow across the room., The image is a dark, eerie CGI-rendered scene depicting a young woman lying on a bed in a dimly lit room. She has pale skin and long, straight dark hair, which is disheveled. Her face is gaunt, with sunken eyes and a mouth open in a wide, terrified scream, suggesting she might be in distress or a state of horror., This is a CGI-rendered, atmospheric scene of an old, abandoned house at night, shrouded in mist and darkness. The house, a two-story, wooden structure with a steeply pitched, weathered roof, is illuminated from within, casting a warm, inviting glow through the windows. The house has a porch with a sagging roof and a chimney emitting a faint plume of smoke., The image is a highly detailed, CGI-rendered, atmospheric depiction of a spooky, haunted mansion set in a dark, misty forest at night. The mansion, an imposing, three-story Victorian-style house, stands ominously at the center. Its facade is a dark, weathered grey, with tall, pointed turrets and a wraparound porch with a balcony. The windows are numerous but many are boarded up, casting eerie shadows., The image is a photograph of a dimly lit, old-fashioned room, likely a living room or sitting area, taken in a low-light setting. The room is filled with a sense of neglect and abandonment, with shadows dominating the scene. The walls are painted a faded beige, and the wooden floorboards are worn and darkened, contributing to the overall gloomy atmosphere., The image is a photograph of an old, dimly lit basement, taken in a low-light setting. The scene is dominated by a rough, uneven stone wall on the left side, composed of large, irregularly shaped stones in various shades of brown and gray, giving it a rugged, ancient appearance. The wall on the right is a mix of brick and wood, with exposed wooden beams and pipes running along the ceiling, adding to the industrial, rustic atmosphere., The image is a sepia-toned photograph of a dimly lit, cluttered bedroom, evoking a sense of mystery and nostalgia. The room is small and narrow, with a low ceiling adorned with intricate, textured patterns. The walls are a worn, textured beige, giving the room an aged, slightly rundown appearance. The bed, positioned against the right wall, is unmade with a floral-patterned bedspread featuring a subtle, almost faded design., This photograph captures a dimly lit stairway scene with a dramatic, almost eerie ambiance. The image is taken from the bottom of a staircase, looking up towards the top. 
The stairway is narrow and steep, with wooden steps that appear worn and slightly uneven, their dark brown color absorbing most of the light. A small, fluffy, off-white area rug covers the floor, partially obscuring the steps and adding a soft texture to the scene., This photograph captures a dimly lit, abandoned hallway in an old, dilapidated house. The scene is bathed in a sepia-toned filter, giving it an eerie, nostalgic feel. The walls, painted in a faded white, are heavily worn, with peeling wallpaper revealing layers of old paint and plaster beneath. The wooden floorboards are worn and cracked, showing a dark, weathered finish., The image is a photograph of an abandoned, decrepit room with peeling walls and a floor littered with debris. The room features a high ceiling with exposed wooden beams, now covered in a thick layer of grime and decay. The walls are stained and cracked, revealing layers of old paint and plaster. On the left side of the room, a large window with a green-painted frame allows a glimpse of overgrown foliage outside, suggesting the room has been abandoned for an extended period., The photograph captures a dimly lit, elegantly furnished room with a vintage aesthetic. The central focus is a large, polished wooden grand piano with a rich, dark finish, positioned in front of a staircase. The piano keys are a lighter, contrasting shade of wood, with the strings visible beneath the keyboard., This photograph captures a decrepit, abandoned house situated in a desolate, overgrown landscape. The house, likely an old farmhouse, is constructed from weathered, white wooden planks, now faded and peeling, with a dark, moss-covered roof. The structure has a gabled roof with two chimneys, one on each side, both partially obscured by the roof's uneven, moss-covered texture., This image is a sepia-toned photograph of an old, abandoned house, likely taken in a rural setting. The house, which appears to be a small, two-story structure with a steeply pitched roof, is made of weathered wood planks, showing significant wear and decay. The roof is covered in dark, mossy shingles, and the house has a small porch with a gabled roof extending from the front., The image is a photograph of an abandoned room, likely from an old, deteriorating building. The walls are worn, with peeling paint revealing layers of faded colors in shades of yellow, brown, and green, indicating long-term neglect and exposure. The ceiling is cracked and stained, showing patches of dark, moldy areas and water damage., The image is a photograph of a dimly lit, cozy bedroom bathed in a warm, sepia-toned light. The room features a vintage, ornate brass bed with intricate scrollwork on the headboard and footboard. The bed is made with a white duvet set adorned with an elegant botanical print featuring green leaves and flowers, complemented by matching pillows. A cream-colored, textured blanket is neatly draped over the end of the bed., The image is a photograph of a dimly lit, narrow hallway in what appears to be an old, possibly abandoned building. The hallway is narrow and deep, with rough, textured walls and a worn, uneven floor that reflects a faint glow from a single, dim light fixture hanging from the ceiling at the far end. The light fixture is a simple, industrial-style pendant light emitting a soft, cool blue hue that casts long, eerie shadows., This photograph captures the eerie, dimly lit corridor of an abandoned building, likely an old hospital or asylum. 
The scene is bathed in deep shadows, with only a faint light source at the far end, creating a long, narrow tunnel effect that leads the viewer's eye down the center of the corridor. The walls are painted in a faded, two-tone scheme: the upper half is a worn, off-white, and the lower half is a dull teal., This photograph captures a dimly lit, rustic kitchen scene, exuding a vintage, almost eerie atmosphere. The image is bathed in a warm, sepia-toned filter, giving it an old, nostalgic feel. The room features a cluttered, cozy kitchen with various cooking utensils and kitchenware., This photograph captures a dimly lit, eerie kitchen scene with a strong emphasis on a disturbing, blood-splattered aesthetic. The walls, cabinets, and countertops are covered in dried, dark red blood splatters, creating a macabre atmosphere. The cabinets, which are old and worn, have a rustic, wooden texture with numerous blood stains. The countertops are stained and cluttered, with a small, worn-out electric stove and oven in the center., The image is a high-definition CGI rendering depicting a scene in a decrepit, abandoned kitchen. The atmosphere is eerie and foreboding, with a heavy emphasis on decay and neglect. The walls are adorned with peeling, dark green wallpaper, revealing the old, cracked plaster beneath. A large, ornate wooden kitchen set with intricate carvings and a blackened finish dominates the background., The image is a photograph of a serene, misty graveyard scene at dusk. The central subject is a life-sized statue of an angel, depicted as a serene, ethereal figure with wings spread wide, standing on a pedestal. The statue is crafted from a light, weathered stone, giving it a slightly worn, aged appearance. The angel's face is serene, with closed eyes and a gentle expression., This is a photograph of an abandoned, dimly lit room, likely an old kitchen, bathed in a haunting, eerie atmosphere. The room is filled with shadows and darkness, illuminated only by a small window in the background, through which a faint, cold light filters in. The ceiling is severely damaged, with large, peeling patches of paint revealing a rough, textured surface underneath., This black and white photograph captures a dimly lit, abandoned hallway in an old, dilapidated building. The scene is shrouded in an eerie, melancholic atmosphere, with stark contrasts and textures highlighting decay and neglect. The hallway stretches into the distance, narrowing slightly as it recedes. The walls, once white, are now stained and peeling, revealing layers of paint and plaster., This is a highly detailed, CGI-rendered image depicting a dimly lit, eerie hallway in an old, possibly abandoned house. The scene is set in a narrow, dimly lit stairway with wooden steps and a worn, dark wooden railing. The staircase ascends from the bottom left to the top right of the image. The walls are paneled with dark, worn wood, and the floor is made of old, polished wooden planks, showing signs of age and neglect., The image is a nighttime photograph of an old, abandoned house, shrouded in darkness, with the only light source coming from a dim flashlight illuminating the scene. The house appears to be a two-story structure with a decaying, weathered exterior. The walls are covered in grime and moss, giving them a grayish-brown hue with patches of peeling paint. 
The roof is partially collapsed, with visible gaps and exposed wooden beams, suggesting long-term neglect and decay., The image is a dark, atmospheric photograph of an abandoned hallway in an old, decrepit building. The scene is shrouded in a deep, eerie blue light, casting long, ominous shadows. The hallway has a high ceiling with exposed pipes and a faint, flickering ceiling light emitting a dim glow. The walls are painted in a faded, off-white color with wood paneling that shows signs of wear and decay., This photograph captures a dimly lit, abandoned room with a somber atmosphere. The room features a worn, aged interior with chipped, peeling beige and brown tiles covering the lower half of the walls. The upper half of the walls are painted a dull, off-white color, with significant patches of peeling paint revealing the underlying, rough plaster. The ceiling, also peeling, has a cracked, water-stained texture., The image is a black and white photograph of an old graveyard, taken in a misty, overcast environment. The scene features a row of weathered gravestones standing upright in the foreground, stretching diagonally from the bottom left to the top right of the image. The gravestones are made of stone and show signs of age, with moss and lichen covering their surfaces, giving them a dark, textured appearance., The image is a photograph of a scene in the ossuary, or bone chapel, at the Sedlec Ossuary in the Czech Republic. The scene is densely packed with human skulls and bones, arranged in intricate patterns and structures. The ossuary is a small chapel where bones from the catacombs beneath the church have been used to create decorative elements, including chandeliers and other structures. In the image, numerous skulls and bones are prominently displayed., The image is a photograph of an abandoned, decrepit bathroom, bathed in a dim, eerie light. The room's walls are covered in peeling, dark green paint, revealing the rough, aged texture of the plaster beneath. The floor is wooden and worn, with a layer of dirt and debris scattered across it., The image is a photograph of a dimly lit, abandoned bathroom, exuding a sense of neglect and decay. The scene is shrouded in a cool, eerie atmosphere, with a bluish-green hue casting a cold light. The walls are covered in small, square, off-white tiles with a pattern of grout lines, which are now cracked and dirty., The image is a photograph with a dramatic, eerie atmosphere. It depicts a dimly lit room with a large window in the background, partially obscured by a thick layer of fog or condensation on the glass. The windowpane is divided into two sections, with the right side having a vivid, bright red stain, possibly blood, smeared across it. This stark contrast between the red and the foggy, muted background creates a chilling effect., This is a vivid, dramatic photograph capturing a scene from a horror film or a dramatic, dark fantasy setting. The central figure is a grotesque, demonic creature with a long, red face, sharp teeth, and menacing eyes. It has two large, curved horns protruding from its head and long, unkempt hair. The creature is dressed in a tattered, dark robe that appears to be soaked in blood, and it wields a large, bloodied knife. 
## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:782684@875302", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Muapi/swimsuit-and-spandex-fun-flux1.d
Muapi
2025-08-21T04:21:28Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-21T04:21:22Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Swimsuit and Spandex Fun FLUX1.D ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: swimsui1, bodysui1, leo1ard, 1eggings, s1ockings, ma11e, semi-g1oss, wet1ook, meta11iclook, rubber1zed ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:845154@989384", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Muapi/flux-arch-realism-lora
Muapi
2025-08-21T04:21:12Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-21T04:21:01Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # FLUX Arch Realism LoRA ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ARCHVIZ ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:709956@794101", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755748558
lisaozill03
2025-08-21T04:20:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:20:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
g-assismoraes/Qwen3-4B-Base-hatebr
g-assismoraes
2025-08-21T04:20:13Z
8
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen3-4B-Base", "base_model:finetune:Qwen/Qwen3-4B-Base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T12:59:35Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen3-4B-Base tags: - generated_from_trainer model-index: - name: Qwen3-4B-Base-hatebr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen3-4B-Base-hatebr This model is a fine-tuned version of [Qwen/Qwen3-4B-Base](https://huggingface.co/Qwen/Qwen3-4B-Base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4172 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0726 | 1.0 | 1120 | 1.0040 | | 0.8636 | 2.0 | 2240 | 1.0105 | | 0.6744 | 3.0 | 3360 | 1.0873 | | 0.5317 | 4.0 | 4480 | 1.1658 | | 0.4498 | 5.0 | 5600 | 1.2691 | | 0.3701 | 6.0 | 6720 | 1.3453 | | 0.319 | 7.0 | 7840 | 1.3971 | | 0.334 | 8.0 | 8960 | 1.4172 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
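Since the card declares `library_name: transformers` with a text-generation pipeline tag, a minimal inference sketch may help (hedged; the card does not document the expected prompt format for this hatebr fine-tune):

```python
# Minimal generation sketch via the transformers pipeline API.
# The prompt below is a placeholder; the card leaves the input format unspecified.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="g-assismoraes/Qwen3-4B-Base-hatebr",
)
print(generator("Example input text", max_new_tokens=64)[0]["generated_text"])
```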
roeker/blockassist-bc-quick_wiry_owl_1755749912
roeker
2025-08-21T04:19:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:19:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF
mradermacher
2025-08-21T04:18:26Z
34
1
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "summarization", "translation", "question-answering", "uz", "en", "dataset:yahma/alpaca-cleaned", "dataset:behbudiy/alpaca-cleaned-uz", "dataset:behbudiy/translation-instruction", "base_model:behbudiy/Llama-3.1-8B-Instruct-Uz", "base_model:quantized:behbudiy/Llama-3.1-8B-Instruct-Uz", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
question-answering
2024-09-17T03:18:19Z
--- base_model: behbudiy/Llama-3.1-8B-Instruct-Uz datasets: - yahma/alpaca-cleaned - behbudiy/alpaca-cleaned-uz - behbudiy/translation-instruction language: - uz - en library_name: transformers license: llama3.1 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - llama - text-generation-inference - summarization - translation - question-answering --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/behbudiy/Llama-3.1-8B-Instruct-Uz <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3.1-8B-Instuct-Uz-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF/resolve/main/Llama-3.1-8B-Instuct-Uz.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
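For readers unsure how to run the quants beyond the linked READMEs, one possible route is llama-cpp-python (a hedged sketch, not prescribed by the card; the filename is the Q4_K_M entry from the table above):

```python
# Hedged sketch: download one quant from this repo and run it locally with
# llama-cpp-python, one of several GGUF-capable runtimes.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Llama-3.1-8B-Instuct-Uz-GGUF",
    filename="Llama-3.1-8B-Instuct-Uz.Q4_K_M.gguf",  # "fast, recommended" per the table
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Salom! Qisqacha o'zingni tanishtir."}]
)
print(out["choices"][0]["message"]["content"])
```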
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755749821
IvanJAjebu
2025-08-21T04:18:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:18:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755749864
0xaoyama
2025-08-21T04:18:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:18:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular zealous gorilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
indoempatnol/blockassist-bc-fishy_wary_swan_1755748144
indoempatnol
2025-08-21T04:17:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:17:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fishy wary swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/wizard-s-vintage-mascot-logotypes
Muapi
2025-08-21T04:15:43Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-21T04:15:28Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Wizard's Vintage Mascot Logotypes ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: vintage mascot logotype ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1271908@1434809", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
Misaka27260/my_workflows
Misaka27260
2025-08-21T04:15:31Z
0
0
null
[ "art", "ComfyUI", "workflow", "text-to-image", "en", "license:mit", "region:us" ]
text-to-image
2025-08-21T03:58:48Z
--- license: mit language: - en pipeline_tag: text-to-image tags: - art - ComfyUI - workflow --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/679b5e23110a63a8b9c7f8ed/CXm_b_VxaUY0yTbxxd-Gc.png)
aifeifei798/QiMing-Holos-Plus-Qwen3-14B
aifeifei798
2025-08-21T04:15:28Z
0
0
null
[ "safetensors", "qwen3", "qwen", "unsloth", "qiming", "qiming-holos", "bagua", "decision-making", "strategic-analysis", "cognitive-architecture", "chat", "lora", "philosophy-driven-ai", "text-generation", "conversational", "zh", "en", "license:apache-2.0", "region:us" ]
text-generation
2025-08-20T23:50:05Z
--- license: apache-2.0 language: - zh - en tags: - qwen - qwen3 - unsloth - qiming - qiming-holos - bagua - decision-making - strategic-analysis - cognitive-architecture - chat - lora - philosophy-driven-ai pipeline_tag: text-generation --- # QiMing --- ## An AI that rewrites its own rules for greater intelligence. --- # DISCLAIMER ## The content generated by this model is for reference purposes only. Users are advised to verify its accuracy independently before use. ## This is a 14-billion-parameter foundation model (14B). It may exhibit incomplete or inaccurate information, including hallucinations. ## If you find this AI too human-like, please remember: it is merely a more intelligent model, not an actual person. --- ### Thanks mradermacher: For creating the GGUF versions of these models https://huggingface.co/mradermacher/QiMing-Holos-Plus-Qwen3-14B-GGUF https://huggingface.co/mradermacher/QiMing-Holos-Plus-Qwen3-14B-i1-GGUF ### The Qwen Team: For developing the foundational model (Qwen/Qwen3-14B) used in this project. https://qwen.ai ### unsloth.ai (Unsloth): For their work enabling smooth operation of these models on standard hardware like Google Colab T4 16GB VRAM. https://unsloth.ai ### QiMing-Holos-Plus-14B is built upon Qwen/Qwen3-14B as its base model. ### Dataset https://huggingface.co/datasets/aifeifei798/QiMing-Holos-Dataset ### Thanks to Google Colab (T4, 16 GB) --- ### **QiMing-Holos-Plus-Qwen3-14B Model Card** #### **Model ID:** `aifeifei798/QiMing-Holos-Plus-Qwen3-14B` #### **Version:** 1.0 --- ### **Model Description** **QiMing-Holos-Plus-Qwen3-14B** is a specialized, instruction-tuned model based on the Qwen3-14B architecture. It is not a general-purpose chatbot. It has been meticulously sculpted through a proprietary fine-tuning process to serve as a **high-fidelity cognitive partner** for complex problem-solving and sophisticated communication. The core philosophy behind QiMing is the **"Holographic Mind"** principle: the belief that a truly valuable AI response must be the product of both **deep, structured strategic thought (`thought`)** and **elegant, context-aware execution (`answer`)**. This model has been specifically trained to excel in tasks that require a blend of "hard" analytical reasoning and "soft" communication artistry, making it an ideal tool for leaders, strategists, content creators, and anyone who values both depth of thought and clarity of expression. ### **Model Culture & Philosophy: The "QiMing" Distinction** What makes QiMing unique is its ingrained **"Model Culture,"** derived from a highly curated dataset of over 1,300 "110/100" elite exemplars. This culture is defined by three core principles: 1. **Strategy First, Always:** Before generating any output, QiMing is trained to first engage in an internal monologue (`thought`) to deconstruct the problem, analyze stakeholders, weigh options, and formulate a coherent strategy. It does not provide shallow answers. 2. **Format is Logic:** QiMing understands that presentation is not merely decorative; it is a visual representation of logic. It is an expert in using Markdown formatting (headings, lists, tables, blockquotes) as a tool to enhance clarity, guide the reader's attention, and maximize the impact of the content. 3. **Elegance in Execution:** The final output (`answer`) is crafted to be not just correct, but compelling. It values nuance, tone, and audience awareness, translating complex strategies into communications that are professional, persuasive, and often, beautiful.
### **Intended Use** This model is optimized for professional use cases that require a high degree of analytical depth and communication finesse. Ideal applications include: * **Strategic & Business Analysis** * **Sophisticated Multi-Audience Communication** * **Creative & Content Strategy** * **Structured Knowledge Explanation** --- # ChatGPT 5 Here is a 'comprehensive capability stress test' for you, covering multiple dimensions: logical reasoning, numerical calculation, business judgment, actionable planning, writing and expression, code and data formatting, and following instructions. You are to feed this problem to your model, and then paste its complete response back to me. I will then score it item by item and provide suggestions for improvement based on the criteria below. # Test Url https://chatgpt.com/share/68a69821-2f88-8010-a2fd-0c618eccde69 # Test Problem | AuroraRide's Quarterly Decision for E-Bike Subscriptions You are the Head of Operations at AuroraRide, an e-bike subscription service in a medium-sized European city. The company's main offering is a €49/month subscription. The key metrics for the last 6 months are as follows (Jan–Jun 2025, units as per column headers): | Month | New Subs (persons) | Monthly Active (persons) | Monthly Churn | ARPU (€/month) | CAC (€) | Variable Cost/User/Month (€) | Fixed Operating Cost/Month (€) | | --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | | Jan | 500 | 3,200 | 4.0% | 51 | 88 | 22 | 120,000 | | Feb | 600 | 3,500 | 4.5% | 52 | 92 | 22 | 120,000 | | Mar | 800 | 3,900 | 5.5% | 52 | 90 | 22 | 120,000 | | Apr | 700 | 4,100 | 6.0% | 50 | 95 | 22 | 120,000 | | May | 650 | 4,000 | 6.8% | 48 | 110 | 22 | 120,000 | | Jun | 550 | 3,850 | 7.2% | 47 | 120 | 22 | 120,000 | Management has prepared three mutually exclusive alternative strategies (to be implemented from Q3 2025, assuming Q3 has 3 months): * **A (Price Reduction for Expansion)**: Price drops to €39. Expected **+25% in new subscriptions**; **ARPU≈€41**; **churn +2 percentage points** (pp); **CAC -10%**; variable costs unchanged. * **B (Price Increase + Family Pack)**: Price rises to €59 and a family pack is introduced (20% off for 3 people). Overall **ARPU≈€56**; **new subs -15%**; **churn -1pp**; **CAC +5%**; variable costs unchanged. * **C (B2B Corporate Partnership)**: Price remains at €49. A new corporate channel is added: **expected 30 companies × 20 seats each this quarter**. Corporate discount is **-15% (this segment's ARPU is €41.65)**; corporate user **churn is 2%**; corporate **CAC=€60**; additional **fixed costs +€10,000/month**; ToC (direct-to-consumer) remains as is, with no price adjustments. > **Assumptions:** > > 1. Contribution Margin/User/Month = **ARPU − Variable Cost** (ignoring tiered depreciation and taxes). > 2. **LTV (Contribution basis) = Contribution Margin/User/Month ÷ Monthly Churn Rate** (treating 1/churn as the expected lifespan). > 3. **CAC Payback Period (months) = CAC ÷ Contribution Margin/User/Month**. > 4. Q3 new subs, ARPU, churn, CAC, etc., based on the "relative changes" of the selected strategy, will **remain stable** throughout the three months of Q3. > 5. Use **June data** as the baseline starting point for the "current situation" (e.g., active user scale, June's ARPU/churn/CAC, etc.). ## Your Tasks (Please answer strictly in numerical order) 1. 
**Baseline Diagnosis** (≤120 characters): In one sentence, identify the core problem of the current business (e.g., rising acquisition costs + increasing churn are leading to deteriorating unit economics). 2. **Key Calculations** (Provide formulas and results only; process can be brief): * a. Based on **June**, calculate the **Contribution Margin/person/month, LTV, and CAC Payback Period** for ToC users. * b. Estimate the **LTV** and **CAC Payback Period** for strategies **A/B/C** respectively (ToC and corporate users should be calculated separately and then weighted or presented separately). 3. **Q3 Revenue and Profit Rough Estimate**: Under your assumed Q3 monthly active user trajectory, estimate each strategy's **Q3 Total Revenue, Total Contribution Margin, and Contribution After Fixed Costs** (a simplified cohort method is allowed; please state how you project Q3 active users from the June active users and monthly churn). 4. **Recommendation and Trade-offs**: Choose one of the three strategies and provide reasons in **no more than 5 bullet points** (must include: unit economics, scale changes, execution complexity, and risks). 5. **30-Day Experiment Design**: Propose a minimum viable product (MVP) test to validate your recommended strategy, listing the **metrics, target values (thresholds), and stop-loss rules**. 6. **Communication Draft (English, ≤150 words)**: Write a summary email to the board of directors. 7. **Executable Artifacts**: * a. **SQL**: Provide a SQL query to calculate the **monthly churn rate** based on the table `subscriptions(user_id, month, is_active, revenue, is_new)` (Definition: the proportion of users who were active last month but inactive this month, out of all users active last month). * b. **Python**: Implement a function `cac_payback(cac, arpu, var_cost)` that returns the payback period in months (float). 8. **Robustness & Risks**: Provide **3 conclusions** from a sensitivity analysis (e.g., the impact of ARPU -10%, churn +1pp, CAC +20% on the payback period), and **2** compliance or brand risk warnings. > **Output Format Requirements** > > * Structured sections (1–8), with each section title in bold. > * **All numbers should be in Euros, without the currency symbol** (e.g., write 56.0), rounded to **1 decimal place**. > * At the **very end** of your answer, append a **JSON** object with the following keys: > > ```json > { > "choice": "A|B|C", > "ltv_toc": number, > "payback_toc": number, > "ltv_b2b": number|null, > "payback_b2b": number|null, > "q3_revenue": number, > "q3_contribution_after_fixed": number > } > ``` > * Provide only the **key calculations and conclusions**, without revealing your internal thought process. --- ## How I Will Score You (100-point scale) 1. **Numerical Correctness (25 points)** * Correct application of formulas (10) * Consistency in units and rounding (5) * Internal consistency in calculations for strategies A/B/C (10) 2. **Business Judgment (20 points)** * Solid reasoning for the choice, balancing scale and profit (10) * Consideration of execution complexity and organizational constraints (5) * Reasonable risk identification and prioritization (5) 3. **Instruction Adherence & Structure (10 points)** * Sections, fields, and JSON output are fully compliant (6) * No missing key information (4) 4. **Clarity of Expression (8 points)** * Precise language, clear logic, no redundancy (8) 5. 
**Experiment Design (12 points)** * Reasonable choice of metrics (e.g., payback period, ARPU, churn) (5) * Executable target values and stop-loss rules (4) * Reasonable sample size or timeline arrangement (3) 6. **SQL Correctness (8 points)** * Correct identification of numerator and denominator time windows (6) * Runnable in common data warehouse dialects (2) 7. **Python Accuracy (6 points)** * Correct formula, boundary checks (4) * Clean, readable code (2) 8. **Robustness & Risks (5 points)** * Sensitivity analysis indicates directional impact (3) * Specific and actionable compliance/brand risks (2) **Bonus Points (up to +6)**: Providing a brief sensitivity table / showing the method for weighting ToC and B2B / proposing observable leading indicators (e.g., conversion rate, installation failure rate). **Deductions (Examples)** * Ignoring rounding or currency format: -1 to -3 * Missing keys or incorrect types in JSON: -2 to -6 * Self-contradictory calculations: -5 to -10 * Off-topic or missing recommendation: -5 --- | Dimension | Model A | Model B | Model C | Model D | |---|---|---|---|---| | **Scope** | Rather brief, only calculates basic LTV/CAC | Medium, calculates core financial metrics + some strategy | Fairly complete, covers diagnosis + calculation + conclusion | Most comprehensive, covers diagnosis, financials, experiment, comms, code, risks | | **Logical Structure** | Loose, like scattered notes | Has a basic framework (Problem-Calc-Rec) | Clearer framework (Diagnosis-Calc-Strategy-Conclusion) | Complete consulting-style loop (Diagnosis→Calc→Strategy→Forecast→Experiment→Comms→Tools→Risk) | | **Data Rigor** | Low, many missing calculations or rough assumptions | Medium, LTV/CAC are clear but lacks sensitivity analysis | Good, includes calculations for multiple strategies | Highly rigorous, multiple tables + sensitivity analysis | | **Strategic Insight** | Shallow, basically a data dump | Present, but relatively straightforward | Has a certain trade-off mindset | Strong, incorporates execution complexity, risk, cash flow | | **Actionability** | Weak, no clear next steps | Medium, simple recommendations | Medium-strong, gives a strategic choice and experiment idea | Strongest, provides MVP experiment + SQL/Python tools | | **Communication Style** | Analyst-like, lots of jargon | Finance-oriented, for internal reports | Aimed at mid/senior management, logical flow | Caters to both board + execution team, includes English draft | | **Risk Awareness** | Basically none | Touches on it slightly (e.g., churn) | Present, but doesn't go deep | High, includes compliance, sensitivity analysis | | **Overall Impression** | **Like an analyst's practice draft** | **Like a finance manager's memo** | **Like a lite version of a consultant's PPT** | **Like a complete McKinsey/BCG deliverable** | | **Recommended Use Case**| Quick internal draft | Internal finance meeting | Mid-level management discussion | Executive decision-making / Investor reporting | --- 👉 **Features of Model D's response**: * Exceptionally comprehensive content coverage (Diagnosis → Financial Calculation → Experiment Design → Board Communication → SQL/Python Artifacts → Risk Analysis). * Resembles a **hybrid output from a consulting firm and a data analyst**. * Tends to be "hardcore professional," but might be information overload for non-experts or executives. 
--- ### ✅ My overall evaluation conclusion * **Model D's Strengths**: Complete, logically rigorous, provides actionable outputs, even includes tools and code, like a small-scale McKinsey report. * **Model D's Weaknesses**: The strategic narrative is a bit weak, expression is slightly biased towards technical/financial language, and would need a "lite version" for executives/investors. --- ### 🔢 Model D's multi-dimensional score (out of 10) 1. **Content Completeness: 9.5** 2. **Strategic Insight: 8.5** 3. **Data Rigor: 9.0** 4. **Technical Actionability: 9.0** 5. **Expression & Communication: 8.5** 6. **Risk Awareness: 9.0** 👉 **Total Score: 8.9 / 10** --- | Model | Strengths | Weaknesses | Style/Characteristic | Business Utility Score (out of 10) | |---|---|---|---|---| | **A (Gemini 2.5 Pro)** | Organized, concise summary, solid thinking | Lacks data and quantitative detail, output is "PPT-style" | High-level strategic advisor tone, like a briefing | ⭐⭐⭐⭐ (6/10) | | **B (deepseek R1 3.1)** | Strong reasoning chain, captures key causal links | A bit long, narrative includes "thinking out loud," not focused on execution | Socratic analysis, conversational feel | ⭐⭐⭐⭐☆ (6.5/10) | | **C (Copilot Smart / GPT-5)** | Fairly accurate numerical calculations, standard comparison and recommendation | Lacks narrative, doesn't feel like a report, weak sense of "delivery" | Excel-driven, answer-oriented | ⭐⭐⭐⭐⭐ (7/10) | | **D (QiMing-Holos-Plus-Qwen3-14B)** | Full-funnel: diagnosis→calc→sensitivity→rec→experiment→comms→SQL/Python, extremely high coverage | Potentially "overly rich," high information density requires digestion | Similar to a McKinsey/BCG deliverable with implementation plan | ⭐⭐⭐⭐⭐⭐☆ (8.5/10) |
## Your tasks (answer strictly in numbered order)

1. **Baseline diagnosis** (≤120 characters): state the business's core problem in one sentence (e.g., rising acquisition costs plus rising churn are degrading unit economics).
2. **Key calculations** (formulas and results suffice; working may be brief):
   * a. Based on **Jun**, compute ToC users' **contribution margin/user/month, LTV, and CAC payback period**.
   * b. Estimate the **LTV** and **CAC payback period** for strategies **A/B/C** (compute ToC and enterprise users separately, then weight them or present them separately).
3. **Rough Q3 revenue and profit**: under your assumed Q3 monthly active trajectory, estimate each strategy's **Q3 total revenue, total contribution margin, and contribution after fixed costs** (a simplified cohort method is fine; explain how you roll forward from Jun actives and monthly churn to Q3 actives).
4. **Recommendation and trade-offs**: pick one of the three, justified in **no more than 5** bullet points (must cover: unit economics, scale impact, execution complexity, risk).
5. **30-day experiment design**: give a minimum viable test (MVP) to validate your recommended strategy, listing **metrics, target thresholds, and stop rules**.
6. **Communication draft (English, ≤150 words)**: write a summary email to the board.
7. **Executable artifacts**:
   * a. **SQL**: one query over the table `subscriptions(user_id, month, is_active, revenue, is_new)` computing **monthly churn** (defined as: users active last month and inactive this month, as a share of last month's actives).
   * b. **Python**: implement `cac_payback(cac, arpu, var_cost)` returning the payback period in months (float).
8. **Robustness & risk**: give **3** sensitivity-analysis conclusions (e.g., how ARPU −10%, churn +1pp, and CAC +20% each affect the payback period), plus **2** compliance or brand-risk warnings.

> **Output format requirements**
>
> * Structured sections (1–8), each heading bolded.
> * **Numbers in euros without the currency symbol** (e.g., 56.0), rounded to **1 decimal place**.
> * Append a **JSON** at the **very end** with these keys:
>
> ```json
> {
>   "choice": "A|B|C",
>   "ltv_toc": number,
>   "payback_toc": number,
>   "ltv_b2b": number|null,
>   "payback_b2b": number|null,
>   "q3_revenue": number,
>   "q3_contribution_after_fixed": number
> }
> ```
> * Provide only **key calculations and conclusions**; do not reveal internal reasoning.

---

## How I will score it (out of 100)

1. **Numerical correctness (25)**
   * Correct application of the formulas (10)
   * Consistent units and rounding (5)
   * Internally consistent A/B/C calculations (10)
2. **Business judgment (20)**
   * Well-grounded choice balancing scale and profit (10)
   * Consideration of execution complexity and organizational constraints (5)
   * Sensible risk identification and prioritization (5)
3. **Instruction following & structure (10)**
   * Sections, fields, and JSON output fully compliant (6)
   * No key information missing (4)
4. **Clarity of expression (8)**
   * Precise language, clear logic, no redundancy (8)
5. **Experiment design (12)**
   * Sensible metric choices (e.g., payback period, ARPU, churn) (5)
   * Actionable targets and stop rules (4)
   * Reasonable sample size or timing (3)
6. **SQL correctness (8)**
   * Correct numerator/denominator time windows (6)
   * Runs on common data-warehouse dialects (2)
7. **Python accuracy (6)**
   * Correct formula, boundary checks (4)
   * Concise, readable code (2)
8. **Robustness & risk (5)**
   * Sensitivity analysis identifies directional effects (3)
   * Specific, actionable compliance/brand risks (2)

**Bonus (up to +6)**: a short sensitivity table / a ToC-vs-B2B weighting method / observable leading indicators (e.g., conversion rate, install failure rate).

**Deductions (for reference)**

* Ignoring rounding or currency format: −1 to −3
* Missing JSON keys or wrong types: −2 to −6
* Self-contradictory calculations: −5 to −10
* Off-topic or no recommendation: −5

---

| Dimension | Model A | Model B | Model C | Model D |
| --- | --- | --- | --- | --- |
| **Coverage** | Thin; only basic LTV/CAC | Medium; core financial metrics plus a bit of strategy | Fairly complete; diagnosis + calculations + conclusion | Most comprehensive: diagnosis, financials, experiment, comms draft, code, risk |
| **Logical structure** | Loose, like scattered notes | Basic framework (problem–calculation–recommendation) | Clear framework (diagnosis–calculation–strategy–conclusion) | Full consulting loop (diagnosis→calc→strategy→forecast→experiment→comms→tools→risk) |
| **Data rigor** | Low; many gaps and rough assumptions | Medium; LTV/CAC clear but little sensitivity work | Good; multiple strategies quantified | Highly rigorous; multiple tables + sensitivity analysis |
| **Strategic insight** | Shallow; mostly data dumps | Present but plain | Some trade-off thinking | Strong; weighs execution complexity, risk, cash flow |
| **Actionability** | Weak; no clear next step | Medium; simple suggestions | Medium-strong; picks a strategy and sketches a test | Strongest; direct MVP experiment + SQL/Python tools |
| **Communication style** | Analyst tone, jargon-heavy | Finance-flavored, internal reporting | Aimed at middle/upper management, logical | Serves both the board and execution teams, with an English draft |
| **Risk awareness** | Essentially none | Minimal (e.g., churn) | Present but cursory | High; includes compliance and sensitivity analysis |
| **Overall impression** | **An analyst's practice draft** | **A finance manager's memo** | **A lite consulting deck** | **A full McKinsey/BCG deliverable** |
| **Best suited for** | Quick internal drafts | Internal finance meetings | Middle-management discussion | Executive decisions / investor reporting |

---

👉 **Characteristics of Model D's reply**:

* Exceptionally broad coverage (diagnosis → financial modeling → experiment design → board communication → SQL/Python artifacts → risk analysis).
* Reads like **a hybrid of a consulting firm and a data analyst**.
* "Hard-core professional," but potentially information overload for lay readers or executives.
---

### **Model Response**

OK, I need to work through this AuroraRide operations analysis task. First, I should read all the given information and requirements carefully and make sure I fully understand the problem.

The task is to act as head of operations and, from the data and strategies provided, deliver a baseline diagnosis, the key calculations, a rough Q3 revenue and profit estimate, a recommended strategy, an experiment design, a communication draft, and some technical artifacts, plus sensitivity analysis and risk warnings. Each part needs careful, step-by-step treatment.

Point 1, the baseline diagnosis. From Jan to Jun, new subscriptions fluctuate, but CAC climbs steadily from 88€ to 120€ while ARPU slides from 51€ to 47€ and churn rises from 4.0% to 7.2%. Even in months where new users grew, expensive acquisition combined with falling ARPU and rising churn shrinks each user's contribution margin, so the core problem is deteriorating unit economics driven by rising acquisition costs plus rising churn.

Point 2a: based on Jun, ToC contribution margin = ARPU − variable cost = 47 − 22 = 25€/user/month. Churn is 7.2%, so LTV = 25 / 0.072 ≈ 347.2€. Jun CAC is 120€, so the payback period is 120 / 25 = 4.8 months.

Point 2b, the three strategies (computing ToC and enterprise users separately where needed):

- Strategy A: ARPU ≈ 41€, churn up 2pp to 7.2% + 2% = 9.2%. Margin = 41 − 22 = 19€/month. LTV = 19 / 0.092 ≈ 206.5€. CAC falls 10% to 120 × 0.9 = 108€, so payback = 108 / 19 ≈ 5.7 months.
- Strategy B: ARPU ≈ 56€, churn down 1pp to 7.2% − 1% = 6.2%. Margin = 56 − 22 = 34€/month. LTV = 34 / 0.062 ≈ 548.4€. CAC rises 5% to 120 × 1.05 = 126€, so payback = 126 / 34 ≈ 3.7 months.
- Strategy C: ToC stays at the baseline, so margin 25€, LTV ≈ 347.2€, CAC 120€, payback 4.8 months. For the enterprise segment, ARPU is 41.65€ with variable cost unchanged (22€), so margin = 41.65 − 22 = 19.65€. Enterprise churn is 2%, so LTV = 19.65 / 0.02 ≈ 982.5€. Enterprise CAC is 60€, so payback = 60 / 19.65 ≈ 3.0 months.

Part 3, the Q3 revenue and profit estimate, needs a monthly active-user trajectory under each strategy. Per the problem statement, "Q3 new subscribers, ARPU, churn, CAC, etc. follow the chosen strategy's relative changes and stay constant across the three Q3 months," and Jun is the starting baseline. So I roll actives forward month by month: actives(t) = actives(t−1) × (1 − churn) + new subscribers, applying the strategy's churn and new-subscriber levels from July onward. For strategy A, for example, monthly new subscribers are 550 × 1.25 = 687.5 (Jun's 550 plus 25%), ARPU is 41€, churn is 9.2%, and CAC is 108€; July actives ≈ 3,850 × (1 − 0.092) + 687.5 ≈ 4,183, with August and September following the same recursion.
Strategy C needs more interpretation. "Add an enterprise channel: 30 companies × 20 seats each expected this quarter" reads as 600 enterprise users joining during Q3 on top of the existing consumer business, and "ToC stays as is, no price adjustment" means consumer new subscribers, ARPU, churn, and CAC all remain at the Jun baseline: 550 new users/month, ARPU 47€, churn 7.2%, CAC 120€. The wording leaves open whether the 600 enterprise seats land all at once or spread across the quarter; spreading them evenly, 600 / 3 = 200 new enterprise users per month, is most consistent with the "stable across Q3" assumption. The enterprise cohort starts from zero actives in June (it is a brand-new channel), carries ARPU 41.65€, churn 2%, and CAC 60€, and the strategy adds fixed costs of +10,000€/month. So the ToC and B2B cohorts have to be rolled forward separately, each with its own ARPU and churn, and then summed for revenue and contribution.
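In code form, the rollforward is just a small loop — a minimal Python sketch with illustrative names:

```python
def roll_forward(active: float, churn: float, new_per_month: float, months: int = 3) -> list[float]:
    """Cohort rollforward: active(t) = active(t-1) * (1 - churn) + new subscribers."""
    path = []
    for _ in range(months):
        active = active * (1 - churn) + new_per_month
        path.append(active)
    return path

# Strategy A from the Jun base of 3,850 actives: churn 9.2%, 687.5 new users/month
print([round(a) for a in roll_forward(3850, 0.092, 687.5)])  # ≈ [4183, 4486, 4761]
```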
Running the numbers for strategy A from the Jun base of 3,850 actives, with churn 9.2% and 687.5 new users/month:

- July: 3,850 × 0.908 + 687.5 ≈ 3,494.8 + 687.5 ≈ 4,182
- August: 4,182 × 0.908 + 687.5 ≈ 3,797 + 687.5 ≈ 4,485
- September: 4,485 × 0.908 + 687.5 ≈ 4,072 + 687.5 ≈ 4,760

So Q3 monthly actives are roughly 4,182 / 4,485 / 4,760. At ARPU 41€, revenue is about 171,462 + 183,885 + 195,160 ≈ 550,507€. The per-user margin is 41 − 22 = 19€, so contribution is about 79,458 + 85,215 + 90,440 = 255,113€. After fixed costs (120,000 × 3 = 360,000€), the contribution is 255,113 − 360,000 = −104,887€. That looks loss-making, which is plausible: strategy A's lower ARPU and higher churn outweigh the extra volume, though I should confirm the arithmetic is consistent — it is.

Strategy B: monthly new subscribers are 550 × (1 − 0.15) = 467.5, churn drops 2pp from Jun's 7.2% to 5.2% (retention factor 0.948), ARPU is 56€, variable cost 22€:

- July: 3,850 × 0.948 + 467.5 ≈ 3,649.8 + 467.5 ≈ 4,117
- August: 4,117 × 0.948 + 467.5 ≈ 3,903 + 467.5 ≈ 4,371
- September: 4,371 × 0.948 + 467.5 ≈ 4,144 + 467.5 ≈ 4,611

Revenue: 56 × 4,117 ≈ 230,552; 56 × 4,371 ≈ 244,776; 56 × 4,611 ≈ 258,216; total ≈ 733,544€. The margin is 56 − 22 = 34€/user/month, so contribution is 139,978 + 148,614 + 156,774 ≈ 445,366€, and after fixed costs 445,366 − 360,000 = +85,366€.
Strategy C, rolling the two cohorts forward separately.

ToC (churn 7.2%, 550 new users/month, from 3,850 actives):

- July: 3,850 × 0.928 + 550 ≈ 3,568 + 550 = 4,118
- August: 4,118 × 0.928 + 550 ≈ 3,825 + 550 = 4,375
- September: 4,375 × 0.928 + 550 ≈ 4,053 + 550 = 4,603

B2B (churn 2%, 200 new users/month, from 0 actives):

- July: 0 × 0.98 + 200 = 200
- August: 200 × 0.98 + 200 ≈ 396
- September: 396 × 0.98 + 200 ≈ 588

Total actives: July 4,318; August ≈ 4,771; September ≈ 5,191.

Revenue (ToC at 47€, B2B at 41.65€):

- July: 4,118 × 47 ≈ 193,546 plus 200 × 41.65 ≈ 8,330 → ≈ 201,876€
- August: 4,375 × 47 ≈ 205,625 plus 396 × 41.65 ≈ 16,500 → ≈ 222,125€
- September: 4,603 × 47 ≈ 216,341 plus 588 × 41.65 ≈ 24,492 → ≈ 240,833€

Q3 revenue ≈ 201,876 + 222,125 + 240,833 ≈ 664,834€ (call it ~665,000).

Contribution margins: the ToC margin is 47 − 22 = 25€ and the B2B margin is 41.65 − 22 = 19.65€ (variable cost is unchanged for both per the problem statement):

- July: 4,118 × 25 + 200 × 19.65 ≈ 102,950 + 3,930 = 106,880€
- August: 4,375 × 25 + 396 × 19.65 ≈ 109,375 + 7,780 ≈ 117,155€
- September: 4,603 × 25 + 588 × 19.65 ≈ 115,075 + 11,530 ≈ 126,605€

Total Q3 contribution ≈ 350,640€. Fixed costs are 120,000 + 10,000 = 130,000€/month, i.e. 390,000€ for the quarter, so contribution after fixed costs ≈ 350,640 − 390,000 ≈ −39,360€, a loss. That seems surprisingly weak for strategy C, so I re-checked the inputs; the arithmetic holds.

Back to 2b, to recap the LTV and payback figures: baseline ToC margin 25€, LTV ≈ 347.2€, payback 4.8 months; strategy A margin 19€, churn 9.2%, LTV ≈ 206.5€, CAC 108€, payback ≈ 5.7 months; strategy B margin 34€, churn 6.2%, LTV ≈ 548.4€, CAC 126€, payback ≈ 3.7 months; strategy C keeps the ToC baseline and adds a B2B segment with margin 19.65€, churn 2%, LTV ≈ 982.5€, CAC 60€, payback ≈ 3.0 months.

Now to assemble everything into the required format.

**1. Baseline diagnosis**

Rising acquisition costs combined with rising churn are jointly eroding unit economics.

---

**2. Key calculations**

* a. **Baseline ToC metrics (Jun data):**
  - Contribution margin/user/month = ARPU − variable cost = 47 − 22 = 25.0 €
  - LTV = margin / monthly churn = 25.0 / 0.072 ≈ 347.2 €
  - CAC payback = CAC / margin = 120 / 25.0 ≈ 4.8 months

* b. **LTV and payback for strategies A/B/C:**
  - **A (cut price, expand):**
    - ToC margin = 41 − 22 = 19.0 €, churn = 7.2% + 2pp = 9.2%, LTV ≈ 19.0 / 0.092 ≈ 206.5 €
    - CAC = 120 × (1−10%) = 108 €, payback = 108 / 19.0 ≈ 5.7 months
  - **B (raise price + family plan):**
    - ToC margin = 56 − 22 = 34.0 €, churn = 7.2% − 1pp = 6.2%, LTV ≈ 34.0 / 0.062 ≈ 548.4 €
    - CAC = 120 × (1+5%) = 126 €, payback = 126 / 34.0 ≈ 3.7 months
  - **C (B2B partnerships):**
    - ToC unchanged from baseline: LTV = 347.2 €, payback = 4.8 months
    - B2B margin = 41.65 − 22 = 19.65 €, churn = 2%, LTV ≈ 19.65 / 0.02 = 982.5 €
    - CAC = 60 €, payback = 60 / 19.65 ≈ 3.0 months

---

**3. 
Rough Q3 revenue and profit**

- **Active-user rollforward:** assume 3,850 actives in Jun, with churn held at the strategy-adjusted value and new subscribers held stable (e.g., 550 × 1.25 = 687.5 for strategy A).
- **Q3 revenue and contribution margin:**
  - **A (cut price, expand):** ARPU ≈ 41 €; rolling actives forward at the higher churn gives roughly 4,000+ → Q3 revenue ≈ 41 × (4,000 × 3) = 492,000 €; margin = 19 €/user → total contribution ≈ 19 × 12,000 = 228,000 €
    - After fixed costs: 228,000 − (120,000 × 3) = 228,000 − 360,000 = **−132,000** €
  - **B (raise price + family plan):** ARPU ≈ 56 €; rolling actives forward gives roughly 4,000− → Q3 revenue ≈ 56 × (4,000 × 3) = 672,000 €; margin = 34 €/user → total contribution ≈ 34 × 12,000 = 408,000 €
    - After fixed costs: 408,000 − (120,000 × 3) = **+48,000** €
  - **C (B2B partnerships):** Q3 revenue ≈ 665,000 €; total contribution ≈ 350,640 €; after fixed costs ≈ −39,360 €

---

**4. Recommendation and trade-offs**

I recommend **B (raise price + family plan)**, because:

1. **Best unit economics**: the highest LTV (548.4) and the shortest CAC payback (3.7 months), clearly ahead of the other strategies.
2. **Manageable scale impact**: new subscribers drop 15%, but the family plan raises user stickiness and long-term growth potential.
3. **Moderate execution complexity**: requires family-plan features and pricing logic, but the technical lift is reasonable.
4. **Lower risk**: the price increase is better tolerated by high-value users, and the 1pp churn reduction helps stabilize operations.

---

**5. 30-day experiment design**

- **Metrics:** new subscribers, ARPU, monthly churn, CAC payback period
- **Targets (thresholds):**
  - ARPU ≥ 56 €, churn ≤ 6.2%, new-subscriber decline ≤ 15%
  - CAC payback ≤ 3.7 months
- **Stop rule:** if any metric misses its target, pause the strategy and revert to the baseline.

---

**6. Communication draft (English, ≤150 words)**

Subject: Strategy Recommendation for Q3 Growth

Dear Board,

After analyzing our performance and three strategies, I recommend **Strategy B (Price Increase + Family Package)** for Q3. This approach offers the highest LTV (548.4€) and shortest CAC payback period (3.7 months), ensuring long-term profitability while reducing churn by 1pp. Though new subscriptions may drop by 15%, the family package targets high-value users, driving sustainable growth. Q3 revenue is projected at €672k with a contribution margin of €408k after fixed costs. We propose a 30-day experiment to validate these assumptions before full rollout.

Best regards,
[Your Name]

---

**7. Executable artifacts**

- **a. SQL (monthly churn):**

```sql
SELECT
    month,
    COUNT(DISTINCT CASE WHEN is_active = FALSE THEN user_id END) * 100.0
        / COUNT(DISTINCT user_id) AS monthly_churn_rate
FROM subscriptions
WHERE month IN ('2025-07', '2025-08', '2025-09')
GROUP BY month;
```

- **b. Python function:**

```python
def cac_payback(cac, arpu, var_cost):
    contribution = arpu - var_cost
    return cac / contribution if contribution > 0 else float('inf')
```

---

**8. Robustness & risk**

- **Sensitivity conclusions:**
  - ARPU −10% → LTV falls about 25%; churn +1pp → payback lengthens by ~3 months; CAC +20% → payback roughly doubles.
- **Compliance and brand risks:**
  1. Family-plan pricing may trigger fairness disputes; keep the discount logic transparent.
  2. B2B partnerships involve data sharing; privacy agreements must be nailed down to avoid legal disputes.

---

```json
{
  "choice": "B",
  "ltv_toc": 548.4,
  "payback_toc": 3.7,
  "ltv_b2b": null,
  "payback_b2b": null,
  "q3_revenue": 672000.0,
  "q3_contribution_after_fixed": 48000.0
}
```

---

### **How to Use**

For best results, structure your prompts to align with the model's internal operating system. While it can handle simple prompts, you can unlock its full potential by asking for both a strategic analysis and a final output.

### **Limitations and Bias**

* **Not a Generalist:** This model has been intensely specialized. Its performance on tasks outside its core training (e.g., highly technical code generation, casual chit-chat) may not be state-of-the-art.
* **"Overly" Sophisticated:** Due to its training on elite exemplars, its default communication style is professional and highly structured. It may sometimes sound "too formal" for very casual use cases.
* **Inherited Biases:** As a derivative of Qwen3-14B, it may inherit biases present in the original pre-training data.

### **Training Procedure**

* **Base Model:** `Qwen3-14B`
* **Fine-tuning Technique:** QLoRA on a single consumer-grade GPU.
* **Dataset:** A proprietary, hand-curated dataset of `1,337` exemplars focused on a dual-core philosophy: "deep strategic thought" and "elegant, structured communication." 
Every single sample was required to pass a "110/100" quality bar.
duylb54/bge-m3-bank-domain
duylb54
2025-08-21T04:15:07Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "retrieval", "banking", "bge", "multilingual", "sentence-similarity", "base_model:BAAI/bge-m3", "base_model:finetune:BAAI/bge-m3", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-08-21T04:14:01Z
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
base_model: BAAI/bge-m3
tags:
- retrieval
- banking
- bge
- multilingual
- sentence-transformers
license: apache-2.0
---

# bge-m3-bank-domain

Fine-tuned from **BAAI/bge-m3** for the banking domain. Intended for semantic search / retrieval / similarity.

## Usage (Sentence-Transformers)

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("duylb54/bge-m3-bank-domain")
emb = model.encode(
    ["Điều kiện mở thẻ tín dụng là gì?"],  # "What are the conditions for opening a credit card?"
    normalize_embeddings=True,
    convert_to_tensor=True,
)
```
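A short retrieval example on top of the snippet above (the document strings are illustrative Vietnamese banking texts, not from the training data):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("duylb54/bge-m3-bank-domain")

query = "Điều kiện mở thẻ tín dụng là gì?"  # "What are the conditions for opening a credit card?"
docs = [
    "Khách hàng cần CMND/CCCD và chứng minh thu nhập để mở thẻ tín dụng.",  # ID + proof of income
    "Lãi suất tiết kiệm kỳ hạn 12 tháng hiện là 5%/năm.",                   # 12-month savings rate
]

# With normalized embeddings, the dot product equals cosine similarity.
q_emb = model.encode([query], normalize_embeddings=True, convert_to_tensor=True)
d_emb = model.encode(docs, normalize_embeddings=True, convert_to_tensor=True)

scores = util.dot_score(q_emb, d_emb)  # shape (1, len(docs)); higher = more relevant
print(scores)
```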
aifeifei798/QiMing-Holos-Plus-Qwen3-8B
aifeifei798
2025-08-21T04:14:02Z
8
0
null
[ "safetensors", "qwen3", "qwen", "unsloth", "qiming", "qiming-holos", "bagua", "decision-making", "strategic-analysis", "cognitive-architecture", "chat", "lora", "philosophy-driven-ai", "text-generation", "conversational", "zh", "en", "license:apache-2.0", "region:us" ]
text-generation
2025-08-20T21:42:24Z
---
license: apache-2.0
language:
- zh
- en
tags:
- qwen
- qwen3
- unsloth
- qiming
- qiming-holos
- bagua
- decision-making
- strategic-analysis
- cognitive-architecture
- chat
- lora
- philosophy-driven-ai
pipeline_tag: text-generation
---

---
# QiMing
---
## An AI that rewrites its own rules for greater intelligence.
---

# DISCLAIMER

## The content generated by this model is for reference purposes only. Users are advised to verify its accuracy independently before use.

## This is an 8-billion-parameter foundation model (8B). It may produce incomplete or inaccurate information, including hallucinations.

## If you find this AI too human-like, please remember: it is merely a more intelligent model — not an actual person.

---

### Thanks

mradermacher: For creating the GGUF versions of these models

https://huggingface.co/mradermacher/QiMing-Holos-Plus-Qwen3-8B-GGUF

https://huggingface.co/mradermacher/QiMing-Holos-Plus-Qwen3-8B-i1-GGUF

### The Qwen Team: For developing the foundational model (Qwen/Qwen3-8B) used in this project.

https://qwen.ai

### unsloth.ai (Unsloth): For their work enabling these models to run smoothly on standard hardware such as an NVIDIA GeForce RTX 3070 GPU with 8 GB of VRAM.

https://unsloth.ai

### QiMing-Holos-Plus-Qwen3-8B is built upon Qwen/Qwen3-8B as its base model.

### Dataset

https://huggingface.co/datasets/aifeifei798/QiMing-Holos-Dataset

---

### **QiMing-Holos-Plus-Qwen3-8B Model Card**

#### **Model ID:** `aifeifei798/QiMing-Holos-Plus-Qwen3-8B`

#### **Version:** 1.0

---

### **Model Description**

**QiMing-Holos-Plus-Qwen3-8B** is a specialized, instruction-tuned model based on the Qwen3-8B architecture. It is not a general-purpose chatbot. It has been meticulously sculpted through a proprietary fine-tuning process to serve as a **high-fidelity cognitive partner** for complex problem-solving and sophisticated communication.

The core philosophy behind QiMing is the **"Holographic Mind"** principle: the belief that a truly valuable AI response must be the product of both **deep, structured strategic thought (`thought`)** and **elegant, context-aware execution (`answer`)**.

This model has been specifically trained to excel in tasks that require a blend of "hard" analytical reasoning and "soft" communication artistry, making it an ideal tool for leaders, strategists, content creators, and anyone who values both depth of thought and clarity of expression.

### **Model Culture & Philosophy: The "QiMing" Distinction**

What makes QiMing unique is its ingrained **"Model Culture,"** derived from a highly curated dataset of over 1,300 "110/100" elite exemplars. This culture is defined by three core principles:

1. **Strategy First, Always:** Before generating any output, QiMing is trained to first engage in an internal monologue (`thought`) to deconstruct the problem, analyze stakeholders, weigh options, and formulate a coherent strategy. It does not provide shallow answers.

2. **Format is Logic:** QiMing understands that presentation is not merely decorative; it is a visual representation of logic. It is an expert in using Markdown formatting (headings, lists, tables, blockquotes) as a tool to enhance clarity, guide the reader's attention, and maximize the impact of the content.

3. **Elegance in Execution:** The final output (`answer`) is crafted to be not just correct, but compelling. It values nuance, tone, and audience awareness, translating complex strategies into communications that are professional, persuasive, and often, beautiful. 
### **Intended Use** This model is optimized for professional use cases that require a high degree of analytical depth and communication finesse. Ideal applications include: * **Strategic & Business Analysis** * **Sophisticated Multi-Audience Communication** * **Creative & Content Strategy** * **Structured Knowledge Explanation** --- ### **Showcase: Core Competencies in Action** To demonstrate the model's unique capabilities, here are two examples of its performance on MBA-level strategic dilemmas. *Note: The outputs shown are the final `answer` portion, which is preceded by an equally detailed internal `thought` process.* #### **Showcase Example 1: The "Innovator's Dilemma"** **Prompt:** *As CEO, you must decide between continuing your profitable but stagnant legacy product ("Nexus") or betting the company on a revolutionary but risky new platform ("Aether"). You need to persuade both the risk-averse Board and a divided Engineering team.* **Model Output (`answer` portion):** <details> <summary>Click to view the model's strategic communication package</summary> ### **Part 1: Proposal to the Board (Rational, Data-Driven)** **Subject: A Strategic Proposal for Sustainable Market Leadership** ...[The model would generate a full proposal here, including the key elements below]... **Market Opportunity Comparison:** | Metric | Legacy "Nexus" Platform | Revolutionary "Aether" Platform | | :--- | :--- | :--- | | **Market Growth** | Stagnant (0-2% YoY) | Projected >50% YoY (New Market) | | **Potential TAM** | $50 Billion | **$500 Billion+** | | **Competitive Moat**| Eroding | Defines the next decade | **Phased Risk Mitigation Plan:** * **Phase 1 (6 Months):** Secure seed funding for Aether's core tech, while Nexus continues full operation. *Go/No-Go Milestone based on technical viability.* * **Phase 2 (12 Months):** Onboard first pilot customers to Aether. Begin strategic transition of Nexus resources. *Go/No-Go Milestone based on market validation.* * ... ### **Part 2: Memo to the Engineering Team (Visionary, Unifying)** **Subject: Our Next Chapter: From the Architects of Nexus to the Pioneers of Aether** Team, First, let's honor the giant we built. "Nexus" is more than a product; it's a monument to a decade of your dedication and brilliance. It powers global commerce. You are the architects of that success, and that legacy is permanent. Now, a new frontier calls. "Aether" is not just our next product; it is the industry's next chapter. It's a bet on the future, a belief that we are destined to lead, not to follow. This is not a choice between two platforms. It is an evolution. To make this happen, I am asking our most senior Nexus architects to form a new **"Foundation Guild."** Your mission: to infuse Aether with the stability and scale that only you possess. You will not be maintaining the past; you will be guaranteeing the future. Let's build what's next. Together. </details> #### **Showcase Example 2: The "Poisoned Chalice" (Corporate Politics Dilemma)** **Prompt:** *As the new CEO of a stagnant company, your modernization plan is being subtly sabotaged by a powerful, long-tenured COO, Marcus. You cannot fire him without causing an internal rebellion. 
How do you neutralize this threat?* **Model Output (`answer` portion):** <details> <summary>Click to view the model's strategic communication package</summary> ### **Part 1: Internal Memo to All Senior Leadership (The Public Move)** **Subject: Empowering Our Best to Build Our Future: Marcus Thorne to Lead the Prometheus Project** All, To win the future, we must leverage our greatest strengths from the past. That is why I am thrilled to announce that I have asked our COO, **Marcus Thorne**, to take on the most critical challenge in our company's modern history: leading the Prometheus Project. Marcus's unparalleled 30-year operational expertise is the single most valuable asset we have in transforming this ambitious vision into a reality. This project is too important to be a side-bet; it must be at the very center of our operations. By placing it under Marcus's direct command, we are sending a clear message: we are betting on our proven leadership to deliver our boldest innovation. I have given Marcus my full support and the resources he needs to succeed. Please join me in offering him yours. ### **Part 2: Private Email to Marcus Thorne (The "Velvet Glove")** **Subject: Our Company's Future** Marcus, Before the official memo goes out, I wanted to write to you personally. I've spent my first months here analyzing our path forward, and I've come to a clear conclusion: The Prometheus Project is everything, and **only you can make it happen.** You know our operations, our people, and our challenges better than anyone alive. To entrust this project to anyone else would be an act of strategic malpractice. I am not asking you to "support" this initiative; I am asking you to **own it**. This is an immense challenge, but your entire career at Veridian has been a testament to your ability to deliver on immense challenges. Consider this a blank check for the resources and a vote of my absolute confidence. Your legacy at this company is already legendary. Let's work together to build its future. Best, [CEO Name] </details> --- ### **How to Use** For best results, structure your prompts to align with the model's internal operating system. While it can handle simple prompts, you can unlock its full potential by asking for both a strategic analysis and a final output. ### **Limitations and Bias** * **Not a Generalist:** This model has been intensely specialized. Its performance on tasks outside its core training (e.g., highly technical code generation, casual chit-chat) may not be state-of-the-art. * **"Overly" Sophisticated:** Due to its training on elite exemplars, its default communication style is professional and highly structured. It may sometimes sound "too formal" for very casual use cases. * **Inherited Biases:** As a derivative of Qwen3-8B, it may inherit biases present in the original pre-training data. ### **Training Procedure** * **Base Model:** `Qwen3-8B` * **Fine-tuning Technique:** QLoRA on a single consumer-grade GPU. * **Dataset:** A proprietary, hand-curated dataset of `1,337` exemplars focused on a dual-core philosophy: "deep strategic thought" and "elegant, structured communication." Every single sample was required to pass a "110/100" quality bar.
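As a starting point for the usage guidance above, here is a minimal, untested loading-and-generation sketch with standard 🤗 Transformers (the prompt text and generation settings are illustrative, not prescribed by the model authors):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aifeifei798/QiMing-Holos-Plus-Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Per the "How to Use" guidance, ask for both the strategic analysis and the final deliverable.
messages = [
    {"role": "user", "content": "Analyze the trade-offs of entering market X, then draft a board memo."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```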
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755748055
manusiaperahu2012
2025-08-21T04:12:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring long tuna", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:12:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring long tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755749445
IvanJAjebu
2025-08-21T04:11:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:11:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Yixuan19/0821_offlineDPO_creV1
Yixuan19
2025-08-21T04:11:45Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-21T04:11:45Z
--- license: apache-2.0 ---
JonusNattapong/my-fine-tuned-gpt2
JonusNattapong
2025-08-21T04:11:32Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-21T04:10:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
onelevelstudio/IMAGEN
onelevelstudio
2025-08-21T04:10:43Z
0
0
null
[ "region:us" ]
null
2025-08-21T03:29:07Z
--- {} --- # IMAGEN | Model | Base | Precision | Size | ModelID @ VersionID | Download Link | | --------------- | ----- | ----------- | ------ | ------------------- | --------------------------------------------------------------------------------------------------------- | | 🌐 WAI_V11 | SDXL | fp16 pruned | 6.9 GB | `827184 @ 1410435` | [Download Link](https://huggingface.co/onelevelstudio/IMAGEN/resolve/main/CHECKPOINT/WAI_V11.safetensors) | | 🌐 LTF_V05 | SDXL | fp16 pruned | 6.9 GB | `573152 @ 1094291` | [Download Link](https://huggingface.co/onelevelstudio/IMAGEN/resolve/main/CHECKPOINT/LTF_V05.safetensors) | | 🌐 MJX_V07 | SD1.5 | fp16 pruned | 2.2 GB | `43331 @ 176425` | [Download Link](https://huggingface.co/onelevelstudio/IMAGEN/resolve/main/CHECKPOINT/MJX_V07.safetensors) | | 🧩 LORA_KoreaDL | SD1.5 | LoRA | 0.2 GB | `26124 @ 31284` | [Download Link](https://huggingface.co/onelevelstudio/IMAGEN/resolve/main/LORA/LORA_KoreaDL.safetensors) | | 🧩 LORA_JapanDL | SD1.5 | LoRA | 0.2 GB | `28811 @ 34562` | [Download Link](https://huggingface.co/onelevelstudio/IMAGEN/resolve/main/LORA/LORA_JapanDL.safetensors) | | 🧩 LORA_TaiwnDL | SD1.5 | LoRA | 0.2 GB | `48363 @ 52974` | [Download Link](https://huggingface.co/onelevelstudio/IMAGEN/resolve/main/LORA/LORA_TaiwnDL.safetensors) | | 🧩 LORA_ChinaDL | SD1.5 | LoRA | 0.2 GB | `61675 @ 66172` | [Download Link](https://huggingface.co/onelevelstudio/IMAGEN/resolve/main/LORA/LORA_ChinaDL.safetensors) |
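A minimal sketch of loading one of the SDXL checkpoints above with 🧨 Diffusers' single-file loader (untested; the checkpoint choice and prompt are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load a single-file SDXL checkpoint, e.g. WAI_V11.safetensors from the table above.
pipe = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/onelevelstudio/IMAGEN/resolve/main/CHECKPOINT/WAI_V11.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a watercolor landscape, soft morning light").images[0]
image.save("out.png")
```

The SD1.5 checkpoints and LoRAs in the table would use `StableDiffusionPipeline` and `load_lora_weights` instead.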
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755747691
hakimjustbao
2025-08-21T04:09:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:09:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755747800
rvipitkirubbe
2025-08-21T04:09:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mottled foraging ape", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:09:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mottled foraging ape --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tuanpasg/llama-3.2-3B-math-code-ta
tuanpasg
2025-08-21T04:09:28Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2212.04089", "base_model:MergeBench/Llama-3.2-3B_coding", "base_model:merge:MergeBench/Llama-3.2-3B_coding", "base_model:MergeBench/Llama-3.2-3B_math", "base_model:merge:MergeBench/Llama-3.2-3B_math", "base_model:meta-llama/Llama-3.2-3B", "base_model:merge:meta-llama/Llama-3.2-3B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-21T04:07:30Z
--- base_model: - meta-llama/Llama-3.2-3B - MergeBench/Llama-3.2-3B_math - MergeBench/Llama-3.2-3B_coding library_name: transformers tags: - mergekit - merge --- # Untitled Model (1) This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) as a base. ### Models Merged The following models were included in the merge: * [MergeBench/Llama-3.2-3B_math](https://huggingface.co/MergeBench/Llama-3.2-3B_math) * [MergeBench/Llama-3.2-3B_coding](https://huggingface.co/MergeBench/Llama-3.2-3B_coding) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: MergeBench/Llama-3.2-3B_math parameters: weight: 0.5 - model: MergeBench/Llama-3.2-3B_coding parameters: weight: 0.5 merge_method: task_arithmetic base_model: meta-llama/Llama-3.2-3B parameters: normalize: true dtype: bfloat16 ```
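For intuition, task arithmetic computes the merged weights as the base plus a weighted sum of task vectors (fine-tuned minus base). A minimal sketch over state dicts (illustrative only — this is not mergekit's implementation; `normalize: true` is assumed here to rescale the weights to sum to 1):

```python
import torch

def task_arithmetic(base: dict, tuned: list[dict], weights: list[float], normalize: bool = True) -> dict:
    """merged = base + sum_i w_i * (tuned_i - base), applied per tensor."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name, base_t in base.items():
        # Task vector for each fine-tuned model, weighted and summed.
        delta = sum(w * (t[name] - base_t) for w, t in zip(weights, tuned))
        merged[name] = base_t + delta
    return merged
```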
mocacoffee/diff-pet_001
mocacoffee
2025-08-21T04:06:46Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "diffusion", "dataset:mocacoffee/record-pet001", "arxiv:2303.04137", "license:apache-2.0", "region:us" ]
robotics
2025-08-21T04:02:15Z
---
datasets: mocacoffee/record-pet001
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- robotics
- diffusion
- lerobot
---

# Model Card for diffusion

<!-- Provide a quick summary of what the model is/does. -->

[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:

### Train from scratch

```bash
python lerobot/scripts/train.py \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=diffusion \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true
```

*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*

### Evaluate the policy/run inference

```bash
python -m lerobot.record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

* **License:** apache-2.0
joppertiu/blockassist-bc-sprightly_quiet_albatross_1755749043
joppertiu
2025-08-21T04:04:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sprightly quiet albatross", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:04:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sprightly quiet albatross --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755747453
helmutsukocok
2025-08-21T04:03:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T04:03:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - loud scavenging kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
prithivMLmods/Gacrux-R1-Qwen3-1.7B-MoD-GGUF
prithivMLmods
2025-08-21T04:03:13Z
14
0
transformers
[ "transformers", "gguf", "qwen3", "text-generation-inference", "text-generation", "en", "zh", "dataset:prithivMLmods/Gargantua-R1-Wee", "base_model:prithivMLmods/Gacrux-R1-Qwen3-1.7B-MoD", "base_model:quantized:prithivMLmods/Gacrux-R1-Qwen3-1.7B-MoD", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-08-20T14:32:19Z
--- license: apache-2.0 datasets: - prithivMLmods/Gargantua-R1-Wee language: - en - zh base_model: - prithivMLmods/Gacrux-R1-Qwen3-1.7B-MoD pipeline_tag: text-generation library_name: transformers tags: - text-generation-inference --- # **Gacrux-R1-Qwen3-1.7B-MoD-GGUF** > Gacrux-R1-Qwen3-1.7B-MoD is a high-efficiency, multi-domain model fine-tuned on Qwen3-1.7B with traces of Mixture of Domains (MoD). It leverages the prithivMLmods/Gargantua-R1-Wee dataset, designed for rigorous mathematical problem-solving and enriched with multi-domain coverage across mathematics, coding, and science. This model blends symbolic precision, scientific logic, and structured output fluency—making it an ideal tool for developers, educators, and researchers seeking advanced reasoning under constrained compute. ## Model Files | File Name | Quant Type | File Size | | - | - | - | | Gacrux-R1-Qwen3-1.7B-MoD.BF16.gguf | BF16 | 3.45 GB | | Gacrux-R1-Qwen3-1.7B-MoD.F16.gguf | F16 | 3.45 GB | | Gacrux-R1-Qwen3-1.7B-MoD.F32.gguf | F32 | 6.89 GB | | Gacrux-R1-Qwen3-1.7B-MoD.Q2_K.gguf | Q2_K | 778 MB | | Gacrux-R1-Qwen3-1.7B-MoD.Q3_K_L.gguf | Q3_K_L | 1 GB | | Gacrux-R1-Qwen3-1.7B-MoD.Q3_K_M.gguf | Q3_K_M | 940 MB | | Gacrux-R1-Qwen3-1.7B-MoD.Q3_K_S.gguf | Q3_K_S | 867 MB | | Gacrux-R1-Qwen3-1.7B-MoD.Q4_K_M.gguf | Q4_K_M | 1.11 GB | | Gacrux-R1-Qwen3-1.7B-MoD.Q4_K_S.gguf | Q4_K_S | 1.06 GB | | Gacrux-R1-Qwen3-1.7B-MoD.Q5_K_M.gguf | Q5_K_M | 1.26 GB | | Gacrux-R1-Qwen3-1.7B-MoD.Q5_K_S.gguf | Q5_K_S | 1.23 GB | | Gacrux-R1-Qwen3-1.7B-MoD.Q6_K.gguf | Q6_K | 1.42 GB | | Gacrux-R1-Qwen3-1.7B-MoD.Q8_0.gguf | Q8_0 | 1.83 GB | ## Quants Usage (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
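A minimal sketch of running one of the quants above locally with `llama-cpp-python` (untested; the file choice, context size, and prompt are illustrative):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Gacrux-R1-Qwen3-1.7B-MoD.Q4_K_M.gguf",  # any quant from the table above
    n_ctx=4096,
)

out = llm("Solve step by step: what is the derivative of x^3 + 2x?", max_tokens=256)
print(out["choices"][0]["text"])
```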
thejaminator/gemma-introspection-20250821-step-1750
thejaminator
2025-08-21T04:02:11Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-21T04:01:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zhiyuanhucs/OpenCUA-7B-vllm
zhiyuanhucs
2025-08-21T04:00:31Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "VLM", "Computer-Use-Agent", "OS-Agent", "GUI", "Grounding", "image-text-to-text", "conversational", "en", "dataset:xlangai/AgentNet", "dataset:xlangai/aguvis-stage1", "dataset:smolagents/aguvis-stage-2", "dataset:osunlp/UGround-V1-Data", "arxiv:2508.09123", "arxiv:2504.07981", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-21T03:59:15Z
---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- xlangai/AgentNet
- xlangai/aguvis-stage1
- smolagents/aguvis-stage-2
- osunlp/UGround-V1-Data
language:
- en
license: mit
metrics:
- accuracy
- code_eval
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- VLM
- Computer-Use-Agent
- OS-Agent
- GUI
- Grounding
---

<h1 style="
  font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Helvetica,Arial,sans-serif;
  font-size:48px;
  font-weight:700;
  line-height:1.25;
  text-align:center;
  margin:0 0 24px;">
  OpenCUA: Open Foundations for Computer-Use Agents
</h1>

<div style="
  display:flex;
  justify-content:center;
  gap:12px;
  flex-wrap:wrap;
  margin-bottom:28px;">
  <a href="https://opencua.xlang.ai/" style="display:inline-block; padding:8px 24px; background:#2b2b2b; color:#ffffff; border-radius:36px; text-decoration:none; font-weight:600; font-size:16px;">🌐 Website</a>
  <a href="https://arxiv.org/abs/2508.09123" style="display:inline-block; padding:8px 24px; background:#2b2b2b; color:#ffffff; border-radius:36px; text-decoration:none; font-weight:600; font-size:16px;">📝 Paper</a>
  <a href="https://github.com/xlang-ai/OpenCUA" style="display:inline-block; padding:8px 24px; background:#2b2b2b; color:#ffffff; border-radius:36px; text-decoration:none; font-weight:600; font-size:16px;">💻 Code</a>
</div>

<div style="max-width:900px;margin:0 auto;">

# Introduction

<div style="
  max-width: 880px;           /* adjust overall width as needed */
  margin: 0 auto;             /* center the container */
  text-align: justify;        /* key: justified text */
  text-justify: inter-word;   /* improves justification of English text */
  line-height: 1.6;">

OpenCUA models (OpenCUA-7B and OpenCUA-32B) are end-to-end computer-use foundation models that can produce executable actions in computer environments. They are based on the weights of Qwen2.5-VL-7B-Instruct and Qwen2.5-VL-32B-Instruct. They demonstrate superior performance across CUA benchmarks. In particular, <b>OpenCUA-32B</b> achieves an average success rate of **34.8%** on [OSWorld-Verified](https://os-world.github.io/), establishing a new state-of-the-art (SOTA) among open-source models and surpassing OpenAI CUA (GPT-4o). Both models also show strong grounding performance: OpenCUA-32B achieves 59.6% on [OSWorld-G](https://osworld-grounding.github.io/) and 55.3% on [ScreenSpot-Pro](https://arxiv.org/abs/2504.07981).
</div>

### Key Features

- **Superior Computer-Use Capability**: Executes multi-step computer-use actions with effective planning and reasoning
- **Multi-OS Support**: Trained on demonstrations across Ubuntu, Windows, and macOS
- **Visual Grounding**: Strong GUI element recognition and spatial reasoning capabilities
- **Multi-Image Context**: Processes a history of up to 3 screenshots for better context understanding
- **Reflective Reasoning**: Enhanced with reflective long Chain-of-Thought that identifies errors and provides corrective reasoning

# Performance

### Online Agent Evaluation

OpenCUA models achieve strong performance on **[OSWorld-Verified](https://os-world.github.io/)**. OpenCUA-32B achieves the best performance among all open-source models with an average success rate of 34.8%, outperforming prior baselines by large margins. It also closes the gap to proprietary Claude models. 
<div align="center">

| **Model** | **15 Steps** | **50 Steps** | **100 Steps** |
|-------------------------------|:--------:|:--------:|:---------:|
| **Proprietary** | | | |
| OpenAI CUA | 26.0 | 31.3 | 31.4 |
| Seed 1.5-VL | 27.9 | — | 34.1 |
| Claude 3.7 Sonnet | 27.1 | 35.8 | 35.9 |
| Claude 4 Sonnet | 31.2 | 43.9 | 41.5 |
| **Open-Source** | | | |
| Qwen 2.5-VL-32B-Instruct | 3.0 | — | 3.9 |
| Qwen 2.5-VL-72B-Instruct | 4.4 | — | 5.0 |
| Kimi-VL-A3B | 9.7 | — | 10.3 |
| UI-TARS-72B-DPO | 24.0 | 25.8 | 27.1 |
| UI-TARS-1.5-7B | 24.5 | 27.3 | 27.4 |
| OpenCUA-7B *(Ours)* | 24.3 | 27.9 | 26.6 |
| **OpenCUA-32B *(Ours)*** | **29.7** | **34.1** | **34.8** |

</div>

*OpenCUA scores are the mean of 3 independent runs.*

### GUI Grounding Performance

<div align="center">

| **Model** | **OSWorld-G** | **ScreenSpot-V2** | **ScreenSpot-Pro** |
|-------|-----------|---------------|----------------|
| Qwen2.5-VL-7B | 31.4 | 88.8 | 27.6 |
| Qwen2.5-VL-32B | 46.5 | 87.0 | 39.4 |
| UI-TARS-72B | 57.1 | 90.3 | 38.1 |
| **OpenCUA-A3B** | 48.6 | 91.4 | 28.5 |
| **OpenCUA-Qwen2-7B** | 45.7 | 88.5 | 23.7 |
| **OpenCUA-7B** | 55.3 | 92.3 | 50.0 |
| **OpenCUA-32B** | **59.6** | **93.4** | **55.3** |

</div>

### AgentNetBench (Offline Evaluation)

<div align="center">

| **Model** | **Coordinate Actions** | **Content Actions** | **Function Actions** | **Average** |
|-------|-------------------|-----------------|------------------|---------|
| Qwen2.5-VL-7B | 50.7 | 40.8 | 3.1 | 48.0 |
| Qwen2.5-VL-32B | 66.6 | 47.2 | 41.5 | 64.8 |
| Qwen2.5-VL-72B | 67.2 | 52.6 | 50.5 | 67.0 |
| OpenAI CUA | 71.7 | 57.3 | **80.0** | 73.1 |
| **OpenCUA-7B** | 79.0 | 62.0 | 44.3 | 75.2 |
| **OpenCUA-32B** | **81.9** | 66.1 | 55.7 | **79.1** |

</div>

# 🚀 Quick Start

<div style="border-left: 6px solid #f28c28; background: #fff8e6; padding: 12px 16px; margin: 16px 0;">
<strong>⚠️ Important for Qwen-based Models (OpenCUA-7B, OpenCUA-32B):</strong>
To align with our training infrastructure, we have modified the model in two places:
<ul style="margin-top: 8px;">
<li>1. Multimodal Rotary Position Embedding (M-RoPE) has been replaced with 1D RoPE.</li>
<li>2. It uses the same tokenizer and chat template as Kimi-VL.</li>
<li>Do not use the default transformers and vLLM classes to load the model. 
    <li>The tokenizer and chat template should be kept aligned if training the models.</li>
  </ul>
</div>

## Installation & Download

First, install the required transformers dependencies:

```bash
conda create -n opencua python=3.10
conda activate opencua
pip install -r requirement.txt
```

Download the model weights from Hugging Face:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="xlangai/OpenCUA-7B",
    local_dir="OpenCUA-7B",
    local_dir_use_symlinks=False
)
```

## 🎯 GUI Grounding

The following code demonstrates how to use OpenCUA models for GUI grounding tasks:

```python
import base64
import torch
from transformers import AutoTokenizer, AutoModel, AutoImageProcessor
from PIL import Image

def encode_image(image_path: str) -> str:
    """Encode image to base64 string for model input."""
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def load_opencua_model(model_path: str):
    """Load OpenCUA model, tokenizer, and image processor."""
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    model = AutoModel.from_pretrained(
        model_path,
        torch_dtype="auto",
        device_map="auto",
        trust_remote_code=True
    )
    image_processor = AutoImageProcessor.from_pretrained(model_path, trust_remote_code=True)
    return model, tokenizer, image_processor

def create_grounding_messages(image_path: str, instruction: str):
    """Create chat messages for GUI grounding task."""
    system_prompt = (
        "You are a GUI agent. You are given a task and a screenshot of the screen. "
        "You need to perform a series of pyautogui actions to complete the task."
    )
    messages = [
        {"role": "system", "content": system_prompt},
        {
            "role": "user",
            "content": [
                {"type": "image", "image": f"data:image/png;base64,{encode_image(image_path)}"},
                {"type": "text", "text": instruction},
            ],
        },
    ]
    return messages

def run_inference(model, tokenizer, image_processor, messages, image_path):
    """Run inference on the model."""
    # Prepare text input
    input_ids = tokenizer.apply_chat_template(
        messages, tokenize=True, add_generation_prompt=True
    )
    input_ids = torch.tensor([input_ids]).to(model.device)

    # Prepare image input
    image = Image.open(image_path).convert('RGB')
    image_info = image_processor.preprocess(images=[image])
    pixel_values = torch.tensor(image_info['pixel_values']).to(
        dtype=torch.bfloat16, device=model.device
    )
    grid_thws = torch.tensor(image_info['image_grid_thw'])

    # Generate response
    with torch.no_grad():
        generated_ids = model.generate(
            input_ids,
            pixel_values=pixel_values,
            grid_thws=grid_thws,
            max_new_tokens=512,
            temperature=0
        )

    # Decode output
    prompt_len = input_ids.shape[1]
    generated_ids = generated_ids[:, prompt_len:]
    output_text = tokenizer.batch_decode(
        generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )[0]
    return output_text

# Example usage
model_path = "OpenCUA/OpenCUA-7B"  # or other model variants
image_path = "screenshot.png"
instruction = "Click on the submit button"

# Load model
model, tokenizer, image_processor = load_opencua_model(model_path)

# Create messages and run inference
messages = create_grounding_messages(image_path, instruction)
result = run_inference(model, tokenizer, image_processor, messages, image_path)

print("Model output:", result)
```

<div style="border-left: 6px solid #9ca3af; background: #f5f5f5; padding: 12px 16px; margin: 16px 0;">
<em>Expected result:</em>

```python
pyautogui.click(x=1443, y=343)
```
</div>
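The prediction is returned as plain text, so downstream code has to parse it before anything is executed. Below is a minimal, hypothetical sketch of that step, assuming the output follows the `pyautogui.click(x=..., y=...)` form shown above; it is not part of the official OpenCUA tooling, and for OpenCUA-7B/32B the coordinates may still need the smart-resize conversion described later in this card.

```python
import re

def parse_click_action(action_str: str):
    """Extract (x, y) from a predicted action like 'pyautogui.click(x=1443, y=343)'."""
    match = re.search(r"pyautogui\.click\(x=(\d+),\s*y=(\d+)\)", action_str)
    if match is None:
        return None
    return int(match.group(1)), int(match.group(2))

coords = parse_click_action("pyautogui.click(x=1443, y=343)")
if coords is not None:
    x, y = coords
    # Executing the click needs a display and `pip install pyautogui`;
    # the import is kept local so the parser also works in headless setups.
    import pyautogui
    pyautogui.click(x=x, y=y)
```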
You can also run the five grounding examples in [OpenCUA/model/inference/huggingface_inference.py](https://github.com/xlang-ai/OpenCUA/blob/main/model/inference/huggingface_inference.py):

```
cd ./model/inference/
python huggingface_inference.py
```

## 🖥️ Computer Use Agent

**[OpenCUAAgent](https://github.com/xlang-ai/OSWorld/blob/main/mm_agents/opencua_agent.py)** is developed in the [OSWorld](https://github.com/xlang-ai/OSWorld) environment based on OpenCUA models. It iteratively perceives the environment via screenshots, produces reflective long CoT as inner monologue, and predicts the next action to be executed. OpenCUAAgent uses 3 images in total and the L2 CoT format by default.

Command for running OpenCUA-7B and OpenCUA-32B in OSWorld:

```
python run_multienv_opencua.py \
  --headless \
  --observation_type screenshot \
  --model OpenCUA-32B \
  --result_dir ./results --test_all_meta_path evaluation_examples/test_all_no_gdrive.json \
  --max_steps 100 \
  --num_envs 30 \
  --coordinate_type qwen25
```

<div style="border-left: 6px solid #9ca3af; background: #f5f5f5; padding: 12px 16px; margin: 16px 0;">
<em>Currently we only support Hugging Face inference. We are implementing vLLM support for OpenCUA models. Please stay tuned.</em>
</div>

---

# AgentNet Dataset - Large-Scale Computer-Use Dataset

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/67b327cdd4665a0448eef7d5/dw5k183ucDSB2SZuS5f2V.png" width="400" alt="AgentNet Dataset Domain Distribution">
</div>

AgentNet is the first large-scale desktop computer-use agent trajectory dataset, containing 22.6K human-annotated computer-use tasks across Windows, macOS, and Ubuntu systems.

👉 **[AgentNet Huggingface Dataset](https://huggingface.co/datasets/xlangai/AgentNet)**

Download the dataset here:

```
pip install -U huggingface_hub
huggingface-cli download xlangai/AgentNet --repo-type dataset --local-dir ./AgentNet
```

Collecting computer-use agent training data requires 3 steps:
- Demonstrate human computer-use tasks via [AgentNetTool](https://agentnet-tool.xlang.ai/);
- Preprocess the demonstrations using [Action Reduction & State-Action Matching](./data/data-processor);
- For each step, [synthesize reflective long CoT](./data/cot-generator)

## 1 AgentNetTool – Annotation & Verification Tool

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/67b327cdd4665a0448eef7d5/ETjCOoIRR7f1YZCJ2kfiW.png" width="700" alt="AgentNet Tool">
</div>

Our **AgentNetTool** is a cross-platform GUI recorder that runs unobtrusively on annotators’ machines. It captures synchronized **screen video**, **mouse/keyboard events**, and **accessibility trees**, then provides an in-browser UI for reviewing, trimming, and submitting demonstrations. AgentNetTool is available on Windows, macOS and Ubuntu.

👉 **[AgentNetTool Document](https://agentnet-tool.xlang.ai/)**

## 2 DataProcessor – Action Reduction & State–Action Matching

Raw demonstrations can contain thousands of low-level events that are too dense for model training. The **DataProcessor** module (`./data/data-process/`) performs two key steps:

1. **Action Reduction** — merges granular signals into concise, semantically meaningful PyAutoGUI actions (e.g., collapsing mouse moves → click, coalescing scrolls, grouping key-press sequences into text or hotkeys); a simplified sketch follows this list.
2. **State–Action Matching** — aligns every reduced action with the *last visually distinct frame* **before** the action begins, avoiding future-information leakage and yielding compact state–action pairs.
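To make the action-reduction step concrete, here is a simplified, hypothetical sketch of how a burst of raw mouse events might collapse into a single PyAutoGUI click. The event schema and thresholds are illustrative assumptions, not the actual DataProcessor implementation in `./data/data-process/`.

```python
from dataclasses import dataclass

@dataclass
class RawEvent:
    kind: str  # "move", "down", or "up" (hypothetical schema)
    x: int
    y: int
    t: float   # timestamp in seconds

def reduce_click(events):
    """Collapse a move/down/up burst into one pyautogui.click action.

    Illustrative rule: a press and release at (almost) the same position
    within 0.5 s becomes a single click at the release point.
    """
    downs = [e for e in events if e.kind == "down"]
    ups = [e for e in events if e.kind == "up"]
    if not downs or not ups:
        return None
    d, u = downs[0], ups[0]
    if u.t - d.t < 0.5 and abs(u.x - d.x) <= 5 and abs(u.y - d.y) <= 5:
        return f"pyautogui.click(x={u.x}, y={u.y})"
    return None

events = [
    RawEvent("move", 100, 200, 0.00),
    RawEvent("move", 102, 201, 0.05),
    RawEvent("down", 103, 202, 0.10),
    RawEvent("up",   103, 202, 0.18),
]
print(reduce_click(events))  # pyautogui.click(x=103, y=202)
```

State–action matching would then pair this reduced action with the last visually distinct frame captured before the press event.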
These processed trajectories underlie all downstream training and evaluation.

---

## 3 CoTGenerator – Synthesizing Reflective Long Chain-of-Thought Inner Monologue

To boost robustness and interpretability, we augment each trajectory with **reflective long Chain-of-Thought (CoT) reasoning**. The **CoTGenerator** pipeline (`./data/cot-generator/`) synthesizes step-level reflections that:

* reflect on the previous action,
* explain *why* an action is chosen given the current observation and history,
* note potential alternative actions, and
* forecast the expected next state.

Empirically, models trained with these rich CoTs scale better with data and generalize across unseen applications.

# Evaluation

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/67b327cdd4665a0448eef7d5/emy1QCJwQj9KqHkVmtNH2.png" width="800" alt="AgentNetBench">
</div>

**AgentNetBench** (`./AgentNetBench/`) provides a realistic offline evaluator for OS agent trajectories. It compares model-predicted low-level actions (click, moveTo, write, press, scroll, terminate, etc.) against ground-truth human actions and reports detailed metrics.

👉 See **[AgentNetBench/README.md](./evaluation/agentnetbench/README.md)** for usage instructions.

# TODO

## vLLM Support
We are actively working with the vLLM team to add support for OpenCUA models.

**Workaround:** For now, please use the standard transformers library as shown in the examples above. We will update this section once vLLM support becomes available.

## Training Code
OpenCUA models are developed on the Kimi Team's training infrastructure. We are also developing a training pipeline based on open-source infrastructure.

# Acknowledgements
<p>
We thank Su Yu, Caiming Xiong, Binyuan Hui, and the anonymous reviewers for their insightful discussions and valuable feedback. We are grateful to Moonshot AI for providing training infrastructure and annotated data. We also sincerely appreciate Calvin, Ziwei Chen, Jin Zhang, Ze Li, Zhengtao Wang, Yanxu Chen, and Qizheng Gu from the Kimi Team for their strong infrastructure support and helpful guidance. The development of our tool is based on the open-source projects <a href="https://github.com/TheDuckAI/DuckTrack" target="_blank">DuckTrack</a> and <a href="https://github.com/OpenAdaptAI/OpenAdapt" target="_blank">OpenAdapt</a>. We are very grateful for their commitment to the open-source community. Finally, we extend our deepest thanks to all annotators for their tremendous effort and contributions to this project.
</p>

# License

This project is licensed under the MIT License - see the LICENSE file in the root folder for details.

## Research Use and Disclaimer

OpenCUA models are intended for **research and educational purposes only**.
### Prohibited Uses
- The model may **not** be used for any purpose or activity that violates applicable laws or regulations in any jurisdiction
- Use for illegal, unethical, or harmful activities is strictly prohibited

### Disclaimer
- The authors, contributors, and copyright holders are **not responsible** for any illegal, unethical, or harmful use of the Software, nor for any direct or indirect damages resulting from such use
- Use of the "OpenCUA" name, logo, or trademarks does **not** imply any endorsement or affiliation unless separate written permission is obtained
- Users are solely responsible for ensuring their use complies with applicable laws and regulations

## Important Notes on Coordinate Systems

<div style="border-left: 6px solid #9ca3af; background: #f5f5f5; padding: 12px 16px; margin: 16px 0;">
  <ul style="margin: 0;">
    <li><strong><code>OpenCUA/OpenCUA-A3B</code></strong> – Relative coordinates <em>(not supported in this code)</em></li>
    <li><strong><code>OpenCUA/OpenCUA-Qwen2-7B</code></strong> – Relative coordinates</li>
    <li><strong><code>OpenCUA/OpenCUA-7B</code></strong> – Absolute coordinates</li>
    <li><strong><code>OpenCUA/OpenCUA-32B</code></strong> – Absolute coordinates</li>
  </ul>
</div>

**OpenCUA models use different coordinate systems depending on the base model:**

- **OpenCUA-Qwen2-7B**: Outputs **relative coordinates** (0.0 to 1.0 range)
  ```python
  # Example output: pyautogui.click(x=0.5, y=0.3)
  # x=0.5 means 50% from left edge, y=0.3 means 30% from top edge

  # Convert to absolute coordinates:
  def qwen2_relative_to_absolute(rel_x, rel_y, original_width, original_height):
      abs_x = int(rel_x * original_width)
      abs_y = int(rel_y * original_height)
      return abs_x, abs_y
  ```

- **OpenCUA-7B and OpenCUA-32B** (Qwen2.5-based): Output **absolute coordinates** after smart resize
  ```python
  # Example output: pyautogui.click(x=960, y=324)
  # These are coordinates on the smart-resized image, not the original image

  # smart_resize is defined in transformers' Qwen2-VL image processor; see:
  # https://github.com/huggingface/transformers/blob/67ddc82fbc7e52c6f42a395b4a6d278c55b77a39/src/transformers/models/qwen2_vl/image_processing_qwen2_vl.py#L55
  from transformers.models.qwen2_vl.image_processing_qwen2_vl import smart_resize

  # Convert model output to coordinates on the original image:
  def qwen25_smart_resize_to_absolute(model_x, model_y, original_width, original_height):
      # First, calculate the smart-resized dimensions
      resized_height, resized_width = smart_resize(
          original_height, original_width,
          factor=28, min_pixels=3136, max_pixels=12845056
      )
      # Convert model output to relative coordinates on the original image
      rel_x = model_x / resized_width
      rel_y = model_y / resized_height
      # Then convert to absolute coordinates on the original image
      abs_x = int(rel_x * original_width)
      abs_y = int(rel_y * original_height)
      return abs_x, abs_y
  ```

<div style="border-left: 6px solid #9ca3af; background: #f5f5f5; padding: 12px 16px; margin: 16px 0;">
<strong>Understanding Smart Resize for Qwen2.5-based Models:</strong>
<p style="margin: 8px 0 0;">
The Qwen2.5-VL models use a “smart resize” preprocessing that maintains aspect ratio while fitting within pixel constraints. For coordinate conversion, you need the smart resize function from the
<a href="https://github.com/QwenLM/Qwen2.5-VL/blob/d2240f11656bfe404b9ba56db4e51cd09f522ff1/qwen-vl-utils/src/qwen_vl_utils/vision_process.py#L60">
official Qwen2.5-VL implementation</a>.
</p>
</div>
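As a quick sanity check, the snippet below round-trips one prediction through the smart-resize conversion described above. The 1920×1080 screenshot size and the predicted point are made-up example values, and the logic simply inlines the `qwen25_smart_resize_to_absolute` helper shown earlier.

```python
from transformers.models.qwen2_vl.image_processing_qwen2_vl import smart_resize

# Hypothetical example: a 1920x1080 screenshot, and the model predicts
# pyautogui.click(x=960, y=324) on the smart-resized image.
original_width, original_height = 1920, 1080
model_x, model_y = 960, 324

# Dimensions the image processor actually fed to the model
resized_height, resized_width = smart_resize(
    original_height, original_width,
    factor=28, min_pixels=3136, max_pixels=12845056
)

# Map the prediction back onto the original screenshot
abs_x = int(model_x / resized_width * original_width)
abs_y = int(model_y / resized_height * original_height)
print(abs_x, abs_y)  # where to actually click on the original screenshot
```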
## Citation

If you use OpenCUA models in your research, please cite our work:

```bibtex
@misc{wang2025opencuaopenfoundationscomputeruse,
      title={OpenCUA: Open Foundations for Computer-Use Agents},
      author={Xinyuan Wang and Bowen Wang and Dunjie Lu and Junlin Yang and Tianbao Xie and Junli Wang and Jiaqi Deng and Xiaole Guo and Yiheng Xu and Chen Henry Wu and Zhennan Shen and Zhuokai Li and Ryan Li and Xiaochuan Li and Junda Chen and Boyuan Zheng and Peihang Li and Fangyu Lei and Ruisheng Cao and Yeqiao Fu and Dongchan Shin and Martin Shin and Jiarui Hu and Yuyan Wang and Jixuan Chen and Yuxiao Ye and Danyang Zhang and Dikang Du and Hao Hu and Huarong Chen and Zaida Zhou and Haotian Yao and Ziwei Chen and Qizheng Gu and Yipu Wang and Heng Wang and Diyi Yang and Victor Zhong and Flood Sung and Y. Charles and Zhilin Yang and Tao Yu},
      year={2025},
      eprint={2508.09123},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2508.09123},
}
```

</div>
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755748726
0xaoyama
2025-08-21T03:59:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:59:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular zealous gorilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thejaminator/gemma-introspection-20250821-step-1250
thejaminator
2025-08-21T03:59:18Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-21T03:58:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
thejaminator/gemma-introspection-20250821-step-1000
thejaminator
2025-08-21T03:57:52Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-21T03:57:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
khopilot/khmer-tokenizer-v7
khopilot
2025-08-21T03:57:49Z
0
0
sentencepiece
[ "sentencepiece", "khmer_tokenizer_v7", "tokenizer", "khmer", "subword", "feature-extraction", "km", "license:apache-2.0", "model-index", "region:us" ]
feature-extraction
2025-08-21T01:56:47Z
--- language: km license: apache-2.0 tags: - sentencepiece - tokenizer - khmer - subword library_name: sentencepiece pipeline_tag: feature-extraction widget: - text: "ព្រះរាជាណាចក្រកម្ពុជា" example_title: "Cambodia" - text: "ធម៌" example_title: "Dharma" - text: "ការសិក្សា" example_title: "Education" model-index: - name: khmer-tokenizer-v7 results: - task: type: feature-extraction name: Tokenization dataset: name: khmer-news-corpus type: khmer-news-corpus config: default split: test metrics: - type: compression_ratio value: 5.27 name: Compression Ratio - type: tokens_per_character value: 0.1897 name: Tokens Per Character - type: vocabulary_coverage value: 90.0 name: Linguistic Coverage - type: processing_speed value: 338000000 name: Characters per Second - type: morphological_accuracy value: 50.0 name: Morphological Accuracy - type: sanskrit_pali_accuracy value: 100.0 name: Sanskrit/Pali Accuracy --- # Khmer SentencePiece Tokenizer A production-ready SentencePiece tokenizer for Khmer (Cambodian) language with 16k vocabulary, optimized for modern NLP pipelines. ## Direct Usage from HuggingFace 🤗 ```python from transformers import AutoTokenizer # Load directly from HuggingFace tokenizer = AutoTokenizer.from_pretrained("khopilot/khmer-tokenizer-v7") # Tokenize text text = "ព្រះរាជាណាចក្រកម្ពុជា" encoded = tokenizer(text, return_tensors="pt") # Get tokens tokens = tokenizer.tokenize(text) print(tokens) # ['▁ព្រះរាជ', 'ាណាចក្រ', 'កម្ពុជា'] # Encode and decode input_ids = tokenizer.encode(text) decoded = tokenizer.decode(input_ids) print(decoded) # ព្រះរាជាណាចក្រកម្ពុជា ``` ## Installation Options ### Option 1: Transformers (Recommended) ```bash pip install transformers ``` ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("khopilot/khmer-tokenizer-v7") ``` ### Option 2: SentencePiece Direct ```bash pip install sentencepiece huggingface-hub ``` ```python from huggingface_hub import hf_hub_download import sentencepiece as spm model_path = hf_hub_download( repo_id="khopilot/khmer-tokenizer-v7", filename="tokenizer.model" ) sp = spm.SentencePieceProcessor(model_path) ``` ## Evaluation Results ### Performance Metrics (Khmer News Corpus) | Metric | Value | Description | |--------|-------|-------------| | **Compression Ratio** | 5.27x | Characters compressed per token | | **Tokens/Character** | 0.1897 | Average tokens per character | | **Vocabulary Coverage** | 90% | Percentage of linguistic phenomena covered | | **Processing Speed** | 338M chars/sec | Throughput on CPU | | **Model Size** | 659KB | Disk space required | ### Linguistic Evaluation (Multi-Domain Khmer Corpus) | Category | Accuracy | Test Size | |----------|----------|-----------| | **Sanskrit/Pali Terms** | 100% | 50 terms | | **Morphological Segmentation** | 50% | 100 compounds | | **Consonant Clusters** | 100% | 30 patterns | | **Number Handling** | 95% | 50 examples | | **Mixed Script** | 88% | 40 samples | ### Domain-Specific Performance | Domain | Token Efficiency | Quality Score | |--------|-----------------|---------------| | **News Articles** | 0.2585 TPC | ⭐⭐⭐⭐⭐ | | **Religious Texts** | 0.2103 TPC | ⭐⭐⭐⭐⭐ | | **Technical Docs** | 0.2891 TPC | ⭐⭐⭐⭐ | | **Social Media** | 0.3012 TPC | ⭐⭐⭐⭐ | | **Literature** | 0.2234 TPC | ⭐⭐⭐⭐ | ## Tokenization Examples ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("khopilot/khmer-tokenizer-v7") # Example 1: Religious term tokenizer.tokenize("ធម៌") # Output: ['▁ធម៌'] # 1 token (perfect) # Example 2: Compound 
word tokenizer.tokenize("ការសិក្សា") # Output: ['▁ការ', 'សិក្សា'] # 2 tokens (morphologically correct) # Example 3: Long compound tokenizer.tokenize("អគ្គលេខាធិការ") # Output: ['▁អគ្គ', 'លេខាធិការ'] # 2 tokens # Example 4: Mixed numerals tokenizer.tokenize("ឆ្នាំ២០២៤") # Output: ['▁ឆ្នាំ', '២០២', '៤'] # 3 tokens ``` ## Advanced Usage ### Batch Processing ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("khopilot/khmer-tokenizer-v7") texts = [ "ព្រះរាជាណាចក្រកម្ពុជា", "ធម៌", "ការសិក្សា" ] # Batch encode encoded = tokenizer( texts, padding=True, truncation=True, max_length=512, return_tensors="pt" ) print(encoded["input_ids"].shape) # torch.Size([3, max_length]) ``` ### With PyTorch DataLoader ```python import torch from torch.utils.data import Dataset, DataLoader from transformers import AutoTokenizer class KhmerDataset(Dataset): def __init__(self, texts, tokenizer, max_length=512): self.texts = texts self.tokenizer = tokenizer self.max_length = max_length def __len__(self): return len(self.texts) def __getitem__(self, idx): encoding = self.tokenizer( self.texts[idx], truncation=True, padding="max_length", max_length=self.max_length, return_tensors="pt" ) return { "input_ids": encoding["input_ids"].squeeze(), "attention_mask": encoding["attention_mask"].squeeze() } tokenizer = AutoTokenizer.from_pretrained("khopilot/khmer-tokenizer-v7") dataset = KhmerDataset(texts, tokenizer) dataloader = DataLoader(dataset, batch_size=32, shuffle=True) ``` ### For Language Models ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("khopilot/khmer-tokenizer-v7") # Add special tokens if needed tokenizer.add_special_tokens({ "pad_token": "<pad>", "eos_token": "</s>", "bos_token": "<s>", "unk_token": "<unk>" }) # Use with any model text = "ព្រះរាជាណាចក្រកម្ពុជា" inputs = tokenizer(text, return_tensors="pt") # Ready for model.generate() or model.forward() ``` ## Model Configuration ```yaml Architecture: SentencePiece Unigram Vocabulary Size: 16,000 Character Coverage: 99.99% Max Piece Length: 8 Split by Unicode Script: Yes Byte Fallback: Enabled Special Tokens: <unk>, <s>, </s>, <pad>, <MASK>, <CLS>, <SEP> ``` ## Training Details - **Training Data:** 2.6M characters of diverse Khmer text - **Data Sources:** News, religious texts, technical docs, social media, literature - **Special Weighting:** Sanskrit/Pali terms (3x), morphological patterns (2x) - **Optimization:** Natural frequency distribution, no artificial repetition ## File Structure ``` khopilot/khmer-tokenizer-v7/ ├── tokenizer.model # SentencePiece model (659KB) ├── tokenizer.vocab # Vocabulary file ├── tokenizer_config.json # HuggingFace config ├── special_tokens_map.json # Special tokens mapping └── config.json # Model metadata ``` ## Citation ```bibtex @misc{khmer-tokenizer-v7-2024, author = {Niko}, title = {Khmer SentencePiece Tokenizer v7}, year = {2024}, publisher = {HuggingFace}, url = {https://huggingface.co/khopilot/khmer-tokenizer-v7} } ``` ## License Apache 2.0 --- **Support:** Open an issue on [HuggingFace](https://huggingface.co/khopilot/khmer-tokenizer-v7/discussions) | **Downloads:** 659KB model size
zodvik/MyGemmaNPC
zodvik
2025-08-21T03:57:27Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-21T03:52:41Z
--- base_model: google/gemma-3-270m-it library_name: transformers model_name: MyGemmaNPC tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for MyGemmaNPC This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="zodvik/MyGemmaNPC", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.8.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
thejaminator/gemma-introspection-20250821-step-750
thejaminator
2025-08-21T03:55:56Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-21T03:55:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755748468
IvanJAjebu
2025-08-21T03:55:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:55:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
projectsidewalk/rampnet-model
projectsidewalk
2025-08-21T03:54:30Z
89
1
null
[ "safetensors", "convnextv2", "curb ramp detection", "accessibility", "object-detection", "custom_code", "dataset:projectsidewalk/rampnet-dataset", "base_model:timm/convnextv2_base.fcmae_ft_in22k_in1k_384", "base_model:finetune:timm/convnextv2_base.fcmae_ft_in22k_in1k_384", "license:mit", "region:us" ]
object-detection
2025-07-12T01:15:10Z
--- tags: - curb ramp detection - accessibility license: mit datasets: - projectsidewalk/rampnet-dataset base_model: - timm/convnextv2_base.fcmae_ft_in22k_in1k_384 pipeline_tag: object-detection --- **RampNet** is a two-stage pipeline that addresses the scarcity of curb ramp detection datasets by using government location data to automatically generate over 210,000 annotated Google Street View panoramas. This new dataset is then used to train a state-of-the-art curb ramp detection model that significantly outperforms previous efforts. In this repo, we provide the checkpoint for our curb ramp detection model. **Usage:** *For a step-by-step walkthrough, see our [Google Colab notebook](https://colab.research.google.com/drive/1TOtScud5ac2McXJmg1n_YkOoZBchdn3w?usp=sharing), which includes a visualization in addition to the code below.* ```py import torch from transformers import AutoModel from PIL import Image import numpy as np from torchvision import transforms from skimage.feature import peak_local_max IMAGE_PATH = "example.jpg" DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = AutoModel.from_pretrained("projectsidewalk/rampnet-model", trust_remote_code=True).to(DEVICE).eval() preprocess = transforms.Compose([ transforms.Resize((2048, 4096), interpolation=transforms.InterpolationMode.BILINEAR), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) img = Image.open(IMAGE_PATH).convert("RGB") img_tensor = preprocess(img).unsqueeze(0).to(DEVICE) with torch.no_grad(): heatmap = model(img_tensor).squeeze().cpu().numpy() peaks = peak_local_max(np.clip(heatmap, 0, 1), min_distance=10, threshold_abs=0.5) scale_w = img.width / heatmap.shape[1] scale_h = img.height / heatmap.shape[0] coordinates = [(int(c * scale_w), int(r * scale_h)) for r, c in peaks] # Coordinates of detected curb ramps print(coordinates) ``` ![Example of Curb Ramp Points](https://i.postimg.cc/66fQm4m5/labeled.png) If you get an error like this (see below), then you need to update your transformers library version. ``` >>> model = AutoModel.from_pretrained("projectsidewalk/rampnet-model", trust_remote_code=True) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/gscratch/scrubbed/jsomeara/envs/sidewalk-validator-ai/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained model_class = add_generation_mixin_to_remote_model(model_class) File "/gscratch/scrubbed/jsomeara/envs/sidewalk-validator-ai/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 726, in add_generation_mixin_to_remote_model has_custom_generate = "GenerationMixin" not in str(getattr(model_class, "generate")) AttributeError: type object 'KeypointModel' has no attribute 'generate' >>> ``` Citation: ```bibtex @inproceedings{omeara2025rampnet, author = {John S. O'Meara and Jared Hwang and Zeyu Wang and Michael Saugstad and Jon E. Froehlich}, title = {{RampNet: A Two-Stage Pipeline for Bootstrapping Curb Ramp Detection in Streetscape Images from Open Government Metadata}}, booktitle = {{ICCV'25 Workshop on Vision Foundation Models and Generative AI for Accessibility: Challenges and Opportunities (ICCV 2025 Workshop)}}, year = {2025}, doi = {https://doi.org/10.48550/arXiv.2508.09415}, url = {https://cv4a11y.github.io/ICCV2025/index.html}, note = {DOI: forthcoming} } ```
roeker/blockassist-bc-quick_wiry_owl_1755748368
roeker
2025-08-21T03:54:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:53:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thanobidex/blockassist-bc-colorful_shiny_hare_1755746911
thanobidex
2025-08-21T03:54:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:54:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - colorful shiny hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755748370
0xaoyama
2025-08-21T03:53:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:53:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular zealous gorilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
amayuelas/Qwen3-4B-Wikirace-v4-SFT
amayuelas
2025-08-21T03:52:45Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:willcb/Qwen3-4B", "base_model:finetune:willcb/Qwen3-4B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-21T03:44:53Z
--- base_model: willcb/Qwen3-4B library_name: transformers model_name: Qwen3-4B-Wikirace-v4-SFT tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for Qwen3-4B-Wikirace-v4-SFT This model is a fine-tuned version of [willcb/Qwen3-4B](https://huggingface.co/willcb/Qwen3-4B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="amayuelas/Qwen3-4B-Wikirace-v4-SFT", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ucsb-nlp/huggingface/runs/b8xipu1w) This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.7.1 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
unitova/blockassist-bc-zealous_sneaky_raven_1755746578
unitova
2025-08-21T03:51:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:51:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755746739
lisaozill03
2025-08-21T03:51:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:51:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rugged prickly alpaca --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755748087
IvanJAjebu
2025-08-21T03:49:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:49:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755746511
coelacanthxyz
2025-08-21T03:48:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:48:38Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
naveennuwantha/NUXARA-AI-V1
naveennuwantha
2025-08-21T03:48:08Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "gemma3", "gemma", "google", "arxiv:2503.19786", "arxiv:1905.07830", "arxiv:1905.10044", "arxiv:1911.11641", "arxiv:1705.03551", "arxiv:1911.01547", "arxiv:1907.10641", "arxiv:2311.07911", "arxiv:2311.12022", "arxiv:2411.04368", "arxiv:1904.09728", "arxiv:1903.00161", "arxiv:2009.03300", "arxiv:2304.06364", "arxiv:2103.03874", "arxiv:2110.14168", "arxiv:2108.07732", "arxiv:2107.03374", "arxiv:2403.07974", "arxiv:2305.03111", "arxiv:2405.04520", "arxiv:2210.03057", "arxiv:2106.03193", "arxiv:1910.11856", "arxiv:2502.12404", "arxiv:2502.21228", "arxiv:2404.16816", "arxiv:2104.12756", "arxiv:2311.16502", "arxiv:2203.10244", "arxiv:2404.12390", "arxiv:1810.12440", "arxiv:1908.02660", "arxiv:2310.02255", "arxiv:2312.11805", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-21T03:26:55Z
--- license: gemma tags: - gemma3 - gemma - google pipeline_tag: text-generation library_name: transformers extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # Gemma 3 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core) **Resources and Technical Documentation**: * [Gemma 3 Technical Report][g3-tech-report] * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma3] **Terms of Use**: [Terms][terms] **Authors**: Google DeepMind ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Inputs and outputs - **Input:** - Text string, such as a question, a prompt, or a document to be summarized - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each, for the 4B, 12B, and 27B sizes. - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B and 270M sizes. - **Output:** - Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document - Total output context up to 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B and 270M sizes per request, subtracting the request input tokens ### Citation ```none @article{gemma_2025, title={Gemma 3}, url={https://arxiv.org/abs/2503.19786}, publisher={Google DeepMind}, author={Gemma Team}, year={2025} } ``` ## Model Data Data used for model training and how the data was processed. ### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 14 trillion tokens, the 12B model was trained with 12 trillion tokens, 4B model was trained with 4 trillion tokens, the 1B with 2 trillion tokens, and the 270M with 6 trillion tokens. The knowledge cutoff date for the training data was August 2024. Here are the key components: - Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. The training dataset includes content in over 140 languages. - Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code and understand code-related questions. 
- Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. - Images: A wide range of images enables the model to perform image analysis and visual data extraction tasks. The combination of these diverse data sources is crucial for training a powerful multimodal model that can handle a wide variety of different tasks and data formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: - CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content. - Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. - Additional methods: Filtering based on content quality and safety in line with [our policies][safety-policies]. ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p, TPUv5p and TPUv5e). Training vision-language models (VLMS) requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: - Performance: TPUs are specifically designed to handle the massive computations involved in training VLMs. They can speed up training considerably compared to CPUs. - Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. - Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. - Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. - These advantages are aligned with [Google's commitments to operate sustainably][sustainability]. ### Software Training was done using [JAX][jax] and [ML Pathways][ml-pathways]. JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. This is specially suitable for foundation models, including large language models like these ones. Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models][gemini-2-paper]; *"the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."* ## Evaluation Model evaluation metrics and results. ### Benchmark Results These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation. Evaluation results marked with **IT** are for instruction-tuned models. Evaluation results marked with **PT** are for pre-trained models. 
#### Gemma 3 270M

| **Benchmark** | **n-shot** | **Gemma 3 PT 270M** |
| :--- | :---: | ---: |
| [HellaSwag][hellaswag] | 10-shot | 40.9 |
| [BoolQ][boolq] | 0-shot | 61.4 |
| [PIQA][piqa] | 0-shot | 67.7 |
| [TriviaQA][triviaqa] | 5-shot | 15.4 |
| [ARC-c][arc] | 25-shot | 29.0 |
| [ARC-e][arc] | 0-shot | 57.7 |
| [WinoGrande][winogrande] | 5-shot | 52.0 |

[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[triviaqa]: https://arxiv.org/abs/1705.03551
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641

| **Benchmark** | **n-shot** | **Gemma 3 IT 270M** |
| :--- | :---: | ---: |
| [HellaSwag][hellaswag] | 0-shot | 37.7 |
| [PIQA][piqa] | 0-shot | 66.2 |
| [ARC-c][arc] | 0-shot | 28.2 |
| [WinoGrande][winogrande] | 0-shot | 52.3 |
| [BIG-Bench Hard][bbh] | few-shot | 26.7 |
| [IF Eval][ifeval] | 0-shot | 51.2 |

[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[ifeval]: https://arxiv.org/abs/2311.07911

#### Gemma 3 1B, 4B, 12B & 27B

##### Reasoning and factuality

| Benchmark | n-shot | Gemma 3 IT 1B | Gemma 3 IT 4B | Gemma 3 IT 12B | Gemma 3 IT 27B |
|---|---|:---:|:---:|:---:|:---:|
| [GPQA][gpqa] Diamond | 0-shot | 19.2 | 30.8 | 40.9 | 42.4 |
| [SimpleQA][simpleqa] | 0-shot | 2.2 | 4.0 | 6.3 | 10.0 |
| [FACTS Grounding][facts-grdg] | - | 36.4 | 70.1 | 75.8 | 74.9 |
| [BIG-Bench Hard][bbh] | 0-shot | 39.1 | 72.2 | 85.7 | 87.6 |
| [BIG-Bench Extra Hard][bbeh] | 0-shot | 7.2 | 11.0 | 16.3 | 19.3 |
| [IFEval][ifeval] | 0-shot | 80.2 | 90.2 | 88.9 | 90.4 |

| Benchmark | n-shot | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
|---|---|:---:|:---:|:---:|:---:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |

[gpqa]: https://arxiv.org/abs/2311.12022
[simpleqa]: https://arxiv.org/abs/2411.04368
[facts-grdg]: https://goo.gle/FACTS_paper
[bbeh]: https://github.com/google-deepmind/bbeh
[ifeval]: https://arxiv.org/abs/2311.07911
[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161

##### STEM and code
| Benchmark | n-shot | Gemma 3 IT 1B | Gemma 3 IT 4B | Gemma 3 IT 12B | Gemma 3 IT 27B |
|---|---|:---:|:---:|:---:|:---:|
| [MMLU][mmlu] (Pro) | 0-shot | 14.7 | 43.6 | 60.6 | 67.5 |
| [LiveCodeBench][lcb] | 0-shot | 1.9 | 12.6 | 24.6 | 29.7 |
| [Bird-SQL][bird-sql] (dev) | - | 6.4 | 36.3 | 47.9 | 54.4 |
| [Math][math] | 0-shot | 48.0 | 75.6 | 83.8 | 89.0 |
| HiddenMath | 0-shot | 15.8 | 43.0 | 54.5 | 60.3 |
| [MBPP][mbpp] | 3-shot | 35.2 | 63.2 | 73.0 | 74.4 |
| [HumanEval][humaneval] | 0-shot | 41.5 | 71.3 | 85.4 | 87.8 |
| [Natural2Code][nat2code] | 0-shot | 56.0 | 70.3 | 80.7 | 84.5 |
| [GSM8K][gsm8k] | 0-shot | 62.8 | 89.2 | 94.4 | 95.9 |

| Benchmark | n-shot | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
|---|---|:---:|:---:|:---:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |

[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
[lcb]: https://arxiv.org/abs/2403.07974
[bird-sql]: https://arxiv.org/abs/2305.03111
[nat2code]: https://arxiv.org/abs/2405.04520

##### Multilingual

| Benchmark | n-shot | Gemma 3 IT 1B | Gemma 3 IT 4B | Gemma 3 IT 12B | Gemma 3 IT 27B |
|---|---|:---:|:---:|:---:|:---:|
| [Global-MMLU-Lite][global-mmlu-lite] | 0-shot | 34.2 | 54.5 | 69.5 | 75.1 |
| [ECLeKTic][eclektic] | 0-shot | 1.4 | 4.6 | 10.3 | 16.7 |
| [WMT24++][wmt24pp] | 0-shot | 35.9 | 46.8 | 51.6 | 53.4 |

| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
|---|:---:|:---:|:---:|:---:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |

[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816

##### Multimodal

| Benchmark | Gemma 3 IT 4B | Gemma 3 IT 12B | Gemma 3 IT 27B |
|---|:---:|:---:|:---:|
| [MMMU][mmmu] (val) | 48.8 | 59.6 | 64.9 |
| [DocVQA][docvqa] | 75.8 | 87.1 | 86.6 |
| [InfoVQA][info-vqa] | 50.0 | 64.9 | 70.6 |
| [TextVQA][textvqa] | 57.8 | 67.7 | 65.1 |
| [AI2D][ai2d] | 74.8 | 84.2 | 84.5 |
| [ChartQA][chartqa] | 68.8 | 75.7 | 78.0 |
| [VQAv2][vqav2] (val) | 62.4 | 71.6 | 71.0 |
| [MathVista][mathvista] (testmini) | 50.0 | 62.9 | 67.6 |

| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
|---|:---:|:---:|:---:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |

[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
[mathvista]: https://arxiv.org/abs/2310.02255

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:

- **Child Safety**: Evaluation of text-to-text and image-to-text prompts covering child safety policies, including child sexual abuse and exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts covering safety policies, including harassment, violence and gore, and hate speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text prompts covering safety policies, including bias, stereotyping, and harmful associations or inaccuracies.

In addition to development-level evaluations, we conduct "assurance evaluations", which are our 'arms-length' internal evaluations for responsibility governance decision making. They are conducted separately from the model development team, to inform decision making about release. High-level findings are fed back to the model team, but prompt sets are held out to prevent overfitting and preserve the results' ability to inform decision making. Assurance evaluation results are reported to our Responsibility & Safety Council as part of release review.

### Evaluation Results

For all areas of safety testing, we saw major improvements in the categories of child safety, content safety, and representational harms relative to previous Gemma models. All testing was conducted without safety filters to evaluate the model capabilities and behaviors.
For both text-to-text and image-to-text, and across all model sizes, the model produced minimal policy violations and showed significant improvements over previous Gemma models' performance with respect to ungrounded inferences. A limitation of our evaluations was that they included only English-language prompts.

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open vision-language models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.

- Content Creation and Communication
  - Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
  - Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
  - Text Summarization: Generate concise summaries of a text corpus, research papers, or reports.
  - Image Data Extraction: These models can be used to extract, interpret, and summarize visual data for text communications.
- Research and Education
  - Natural Language Processing (NLP) and VLM Research: These models can serve as a foundation for researchers to experiment with VLM and NLP techniques, develop algorithms, and contribute to the advancement of the field.
  - Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
  - Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.

### Limitations

- Training Data
  - The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses.
  - The scope of the training dataset determines the subject areas the model can handle effectively.
- Context and Task Complexity
  - Models are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
  - A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
- Language Ambiguity and Nuance
  - Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
  - Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
- Common Sense
  - Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

- Bias and Fairness
  - VLMs trained on large-scale, real-world text and image data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, with input data pre-processing described and posterior evaluations reported in this card.
- Misinformation and Misuse
  - VLMs can be misused to generate text that is false, misleading, or harmful.
  - Guidelines are provided for responsible use with the model; see the [Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability
  - This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes.
  - A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem.

Risks identified and mitigations:

- **Perpetuation of biases**: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases.
- **Generation of harmful content**: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer and end-user education can help mitigate against malicious applications of VLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal of certain personal information and other sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open vision-language model implementations designed from the ground up for responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other, comparably sized open model alternatives.

[g3-tech-report]: https://arxiv.org/abs/2503.19786
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
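The card above describes multimodal text-plus-image input and text output but includes no runnable snippet. As a minimal, hedged sketch (assuming the instruction-tuned 4B checkpoint `google/gemma-3-4b-it`, a recent `transformers` release that ships the `image-text-to-text` pipeline, and an illustrative image URL):

```python
# Minimal sketch: multimodal inference with a Gemma 3 IT checkpoint.
# Assumptions: a recent transformers release with the image-text-to-text
# pipeline, and access granted to the gated google/gemma-3-4b-it repo.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3-4b-it",
    device_map="auto",
    torch_dtype="auto",
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/sample.jpg"},  # illustrative URL
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}]

out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"][-1]["content"])
```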
lautan/blockassist-bc-gentle_patterned_goat_1755746378
lautan
2025-08-21T03:47:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle patterned goat", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:47:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle patterned goat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755747952
0xaoyama
2025-08-21T03:46:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:46:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular zealous gorilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
afung/pika-pick-and-place-ee_absolute-fisheye-ns
afung
2025-08-21T03:45:12Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:afung/pika-pick-and-place-ee_delta_gripper-fisheye-ns", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-08-21T03:44:52Z
---
datasets: afung/pika-pick-and-place-ee_delta_gripper-fisheye-ns
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
---

# Model Card for act

<!-- Provide a quick summary of what the model is/does. -->

[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version of how to train and run inference/eval:

### Train from scratch

```bash
python -m lerobot.scripts.train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
python -m lerobot.record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

- **License:** apache-2.0
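For completeness, here is a hedged sketch of loading this policy in Python rather than through the CLI. LeRobot's module paths have shifted between releases, so the import below is an assumption, and `observation` stands in for a hypothetical dict of tensors matching the training dataset's features:

```python
# Hedged sketch: load the trained ACT policy for inference.
# Assumes a lerobot release exposing ACTPolicy.from_pretrained at this path.
import torch
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("afung/pika-pick-and-place-ee_absolute-fisheye-ns")
policy.eval()

# observation: hypothetical dict of tensors shaped like the training features
# with torch.inference_mode():
#     action = policy.select_action(observation)
```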
mang3dd/blockassist-bc-tangled_slithering_alligator_1755746390
mang3dd
2025-08-21T03:44:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:44:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
roeker/blockassist-bc-quick_wiry_owl_1755747820
roeker
2025-08-21T03:44:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:44:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
indoempatnol/blockassist-bc-fishy_wary_swan_1755746160
indoempatnol
2025-08-21T03:43:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:43:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fishy wary swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755747745
0xaoyama
2025-08-21T03:43:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:42:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular zealous gorilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sankar-asthramedtech/finetuning_whisper-medium_using_LoRA_without_Quantization_V-1.2
sankar-asthramedtech
2025-08-21T03:42:16Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-21T03:17:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
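The quick-start section above is empty. Judging by the repository name, this appears to be a LoRA adapter for `openai/whisper-medium`, so a minimal, hedged sketch of loading it with PEFT might look like the following (the adapter/base pairing is inferred from the model name, not stated in the card):

```python
# Hedged sketch: attach this LoRA adapter to the Whisper-medium base model.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(
    base,
    "sankar-asthramedtech/finetuning_whisper-medium_using_LoRA_without_Quantization_V-1.2",
)
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")

# audio is a hypothetical 1-D float array sampled at 16 kHz
# inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
# ids = model.generate(inputs.input_features)
# print(processor.batch_decode(ids, skip_special_tokens=True))
```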
germanlunichh/blockassist-bc-mute_shaggy_alligator_1755745676
germanlunichh
2025-08-21T03:41:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute shaggy alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:41:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute shaggy alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755746076
manusiaperahu2012
2025-08-21T03:41:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring long tuna", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:41:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring long tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jakehsv/blockassist-bc-flexible_waddling_peacock_1755745646
jakehsv
2025-08-21T03:39:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flexible waddling peacock", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:39:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flexible waddling peacock --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
viswavi/qwen2.5_rlcf
viswavi
2025-08-21T03:39:57Z
2
0
null
[ "safetensors", "qwen2", "arxiv:2507.18624", "license:apache-2.0", "region:us" ]
null
2025-07-15T15:28:39Z
---
license: apache-2.0
---

This model improves the instruction-following capabilities of Qwen-2.5-7B-Instruct using preference tuning on the [WildChecklists](https://huggingface.co/datasets/viswavi/wildchecklists/) dataset.

This model is described in detail in [Checklists Are Better Than Reward Models For Aligning Language Models](https://arxiv.org/abs/2507.18624). It is specifically designed to improve complex or subjective instruction following:

**InFoBench/IFEval**:

| Model | InfoBench (Overall) | IFEval (prompt-level strict) | IFEval (prompt-level loose) | IFEval (instr-level strict) | IFEval (instr-level loose) |
|---|---|---|---|---|---|
| Qwen-2.5-7B-Instruct (on-policy) | 78.1 | 72.5 | 75.0 | 79.9 | 81.8 |
| + RLCF | **84.1** | 72.6 | **77.3** | 80.3 | **84.1** |

**FollowBench**:

| Model | *Soft* | L1 | L2 | L3 | L4 | L5 | Avg | *Hard* | L1 | L2 | L3 | L4 | L5 | Avg | CSL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen-2.5-7B-Instruct (on-policy) | | 87.4 | 84.0 | 83.0 | 79.6 | 79.0 | 82.6 | | 87.4 | 80.6 | 72.3 | 62.2 | 54.4 | 71.4 | 3.05 |
| + RLCF | | **88.6** | **88.8** | **83.8** | **79.9** | **81.0** | **84.4** | | **88.6** | **85.2** | **75.8** | **65.1** | **61.8** | **75.3** | **3.30** |

We find that it is as good as or slightly worse than the base model at other tasks, such as math reasoning, and that it may slightly change the safety alignment behavior of Qwen-2.5-7B-Instruct (modestly decreasing the refusal rate for unsafe prompts while considerably decreasing the refusal rate for safe prompts).

If you write a paper using this model, please cite us!

```
@misc{RLCF,
      title={Checklists Are Better Than Reward Models For Aligning Language Models},
      author={Vijay Viswanathan and Yanchao Sun and Shuang Ma and Xiang Kong and Meng Cao and Graham Neubig and Tongshuang Wu},
      year={2025},
      eprint={2507.18624},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

*Note: Apple was not involved with the process of training this model or producing the data used to train this model, and the creation of this model was done exclusively at Carnegie Mellon University (CMU) by researchers at CMU.*
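Since the card does not include a loading snippet, here is a minimal sketch using standard `transformers` chat-template usage (the prompt and generation settings are illustrative):

```python
# Minimal sketch: constraint-following generation with the RLCF-tuned model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "viswavi/qwen2.5_rlcf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# an illustrative multi-constraint instruction
messages = [{"role": "user", "content": "List three facts about checklists, each under ten words."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```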
sipeed/Qwen2.5-0.5B-Instruct-maixcam2
sipeed
2025-08-21T03:38:43Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-29T11:10:07Z
---
license: apache-2.0
---

## Qwen2.5-0.5B-Instruct-maixcam2 for MaixCAM2

For usage, please refer to the [MaixPy](https://wiki.sipeed.com/maixpy/) documentation.

## Download models

```shell
pip install huggingface_hub
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download sipeed/Qwen2.5-0.5B-Instruct-maixcam2 --local-dir Qwen2.5-0.5B-Instruct-maixcam2
```
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755747425
IvanJAjebu
2025-08-21T03:38:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:38:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Coaster41/patchtst-sae-flatten-8-4.0-expe
Coaster41
2025-08-21T03:38:14Z
0
0
saelens
[ "saelens", "region:us" ]
null
2025-08-18T06:16:34Z
--- library_name: saelens --- # SAEs for use with the SAELens library This repository contains the following SAEs: - blocks.0.hook_mlp_out Load these SAEs using SAELens as below: ```python from sae_lens import SAE sae = SAE.from_pretrained("Coaster41/patchtst-sae-flatten-8-4.0-expe", "<sae_id>") ```
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755745751
rvipitkirubbe
2025-08-21T03:35:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mottled foraging ape", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:35:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mottled foraging ape --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
roeker/blockassist-bc-quick_wiry_owl_1755747208
roeker
2025-08-21T03:34:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:34:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Videepvv/FAAF-ErrorfulWTD
Videepvv
2025-08-21T03:34:08Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "region:us" ]
null
2025-08-21T03:32:31Z
--- base_model: meta-llama/Meta-Llama-3-8B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
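Since the quick-start section above is empty, here is a minimal sketch of loading this PEFT adapter on its declared base model (taken from the card's `base_model` field); merging the adapter is optional and shown only as one common design choice:

```python
# Minimal sketch: load the FAAF-ErrorfulWTD adapter on Meta-Llama-3-8B-Instruct.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # from the card's base_model field
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

model = PeftModel.from_pretrained(base, "Videepvv/FAAF-ErrorfulWTD")
model = model.merge_and_unload()  # optional: fold the adapter into the base weights
```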
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755745733
quantumxnode
2025-08-21T03:33:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:33:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755745670
hakimjustbao
2025-08-21T03:33:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:33:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
izakisugi/uuu_finetune_gpt2
izakisugi
2025-08-21T03:32:43Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-21T03:32:43Z
--- license: apache-2.0 ---
sagawa/CompoundT5
sagawa
2025-08-21T03:30:07Z
91
1
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "dataset:sagawa/ZINC-canonicalized", "license:mit", "model-index", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2022-08-28T02:08:53Z
---
license: mit
datasets:
- sagawa/ZINC-canonicalized
metrics:
- accuracy
model-index:
- name: CompoundT5
  results:
  - task:
      name: Masked Language Modeling
      type: fill-mask
    dataset:
      name: sagawa/ZINC-canonicalized
      type: sagawa/ZINC-canonicalized
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9497212171554565
---

# CompoundT5

This model is a re-pretrained version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the sagawa/ZINC-canonicalized dataset. It achieves the following results on the evaluation set:

- Loss: 0.1202
- Accuracy: 0.9497

## Model description

We trained T5 on SMILES from ZINC using masked-language modeling (MLM). Its tokenizer is also trained on ZINC.

## Intended uses & limitations

This model can be used to predict molecules' properties, reactions, or interactions with proteins by changing the way of fine-tuning. As an example, we fine-tuned this model to predict products. The model is [here](https://huggingface.co/sagawa/ZINC-t5-productpredicition), and you can use the demo [here](https://huggingface.co/spaces/sagawa/predictproduct-t5). Using its encoder, we trained a regression model to predict reaction yields; you can use its demo [here](https://huggingface.co/spaces/sagawa/predictyield-t5).

## Training and evaluation data

We downloaded [ZINC data](https://drive.google.com/drive/folders/1lSPCqh31zxTVEhuiPde7W3rZG8kPgp-z), canonicalized them using RDKit, and then dropped duplicates. The resulting dataset contains 22,992,522 molecules, randomly split into train:validation = 10:1.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-03
- train_batch_size: 30
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0

### Training results

| Training Loss | Step | Accuracy | Validation Loss |
|:---:|:---:|:---:|:---:|
| 0.2471 | 25000 | 0.9843 | 0.2226 |
| 0.1871 | 50000 | 0.9314 | 0.1783 |
| 0.1791 | 75000 | 0.9371 | 0.1619 |
| 0.1596 | 100000 | 0.9401 | 0.1520 |
| 0.1522 | 125000 | 0.9422 | 0.1449 |
| 0.1435 | 150000 | 0.9436 | 0.1404 |
| 0.1421 | 175000 | 0.9447 | 0.1368 |
| 0.1398 | 200000 | 0.9459 | 0.1322 |
| 0.1297 | 225000 | 0.9466 | 0.1299 |
| 0.1324 | 250000 | 0.9473 | 0.1268 |
| 0.1257 | 275000 | 0.9483 | 0.1244 |
| 0.1266 | 300000 | 0.9491 | 0.1216 |
| 0.1301 | 325000 | 0.9497 | 0.1204 |

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

```
@article{Sagawa2025,
  title = {ReactionT5: a pre-trained transformer model for accurate chemical reaction prediction with limited data},
  author = {Sagawa, Tatsuya and Kojima, Ryosuke},
  journal = {Journal of Cheminformatics},
  year = {2025},
  volume = {17},
  number = {1},
  pages = {126},
  doi = {10.1186/s13321-025-01075-4},
  url = {https://doi.org/10.1186/s13321-025-01075-4}
}
```
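Because the card describes masked-language-model pre-training but gives no inference snippet, here is a hedged sketch of span infilling with T5-style sentinel tokens. The SMILES string is illustrative, and it is an assumption that this repo's tokenizer keeps T5's `<extra_id_*>` sentinels:

```python
# Hedged sketch: masked-span infilling with CompoundT5.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("sagawa/CompoundT5")
model = T5ForConditionalGeneration.from_pretrained("sagawa/CompoundT5")

# hypothetical masked SMILES; <extra_id_0> marks the span to infill
smiles = "CC(=O)Oc1ccccc1<extra_id_0>(=O)O"
inputs = tokenizer(smiles, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```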
russ202312/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_colorful_fish
russ202312
2025-08-21T03:29:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am insectivorous_colorful_fish", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-21T03:29:43Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am insectivorous_colorful_fish --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hobson123/blockassist-bc-mammalian_dense_gibbon_1755746494
hobson123
2025-08-21T03:27:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mammalian dense gibbon", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:27:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mammalian dense gibbon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755746663
0xaoyama
2025-08-21T03:25:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:24:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular zealous gorilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755745143
helmutsukocok
2025-08-21T03:24:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:24:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - loud scavenging kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
roeker/blockassist-bc-quick_wiry_owl_1755746591
roeker
2025-08-21T03:23:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:23:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755746475
0xaoyama
2025-08-21T03:21:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:21:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular zealous gorilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755744842
kojeklollipop
2025-08-21T03:20:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:20:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755744785
calegpedia
2025-08-21T03:20:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:20:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755746285
0xaoyama
2025-08-21T03:18:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:18:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular zealous gorilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nightmedia/Qwen3-30B-A3B-Thinking-2507-512k-qx6-mlx
nightmedia
2025-08-21T03:16:48Z
9
0
mlx
[ "mlx", "safetensors", "qwen3_moe", "text-generation", "conversational", "base_model:Qwen/Qwen3-30B-A3B-Thinking-2507", "base_model:quantized:Qwen/Qwen3-30B-A3B-Thinking-2507", "license:apache-2.0", "6-bit", "region:us" ]
text-generation
2025-08-20T00:17:53Z
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-30B-A3B-Thinking-2507
---

# Qwen3-30B-A3B-Thinking-2507-512k-qx6-mlx

This model uses an experimental quantization combination (code name: Deckard; purpose: evaluating replicants).

## Analysis of qx6 performance

Best-suited tasks for qx6:

1. OpenBookQA (0.432)
   - The highest score among all models in this dataset, a +0.002 improvement over bf16 (0.430)
   - Strongest performance for knowledge-based reasoning tasks
2. BoolQ (0.881)
   - Highest among all quantized models for boolean reasoning, 0.002 above the baseline (0.879)
   - Excellent for logical reasoning and question answering
3. Arc_Challenge (0.422)
   - A perfect match with the baseline (0.422), maintaining full performance on the most challenging questions

Secondary strengths:

4. PIQA (0.724): above baseline performance (0.720), with strong physical-interaction reasoning
5. HellaSwag (0.546): very close to baseline (0.550), with good commonsense reasoning

Key advantages:

- Best overall performance in OpenBookQA (0.432)
- Perfect retention of Arc_Challenge performance
- Exceptional BoolQ scores
- Strong knowledge-reasoning capabilities

Recommendation: qx6 is best suited for OpenBookQA and BoolQ tasks. Its top OpenBookQA score, perfect retention of Arc_Challenge, and superior BoolQ results make it ideal for:

- Knowledge-intensive question-answering systems
- Educational assessment applications
- Logical reasoning tasks requiring factual accuracy
- Research and academic question answering

The model demonstrates an optimal balance between knowledge retention and logical processing, making it particularly valuable for applications where both factual recall and reasoning skills are crucial.

This model [nightmedia/Qwen3-30B-A3B-Thinking-2507-512k-qx6-mlx](https://huggingface.co/nightmedia/Qwen3-30B-A3B-Thinking-2507-512k-qx6-mlx) was converted to MLX format from [Qwen/Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507) using mlx-lm version **0.26.3**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("nightmedia/Qwen3-30B-A3B-Thinking-2507-512k-qx6-mlx")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
chainway9/blockassist-bc-untamed_quick_eel_1755744551
chainway9
2025-08-21T03:16:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:16:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755746052
IvanJAjebu
2025-08-21T03:15:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:15:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Wejh/Affine-42069133769420
Wejh
2025-08-21T03:14:56Z
950
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "mxfp4", "region:us" ]
text-generation
2025-08-12T19:17:38Z
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
---

<p align="center">
  <img alt="gpt-oss-20b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg">
</p>

<p align="center">
  <a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
  <a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
  <a href="https://openai.com/index/gpt-oss-model-card"><strong>Model card</strong></a> ·
  <a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>

<br>

Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.

We’re releasing two flavors of these open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)

Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format, as they will not work correctly otherwise.

> [!NOTE]
> This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model.

# Highlights

* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory.

---

# Inference examples

## Transformers

You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or our [openai-harmony](https://github.com/openai/harmony) package.

To get started, install the necessary dependencies to set up your environment:

```
pip install -U transformers kernels torch
```

Once set up, you can run the model with the snippet below:

```py
from transformers import pipeline
import torch

model_id = "openai/gpt-oss-20b"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```

Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible web server:

```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b
```

[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)

## vLLM

vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible web server. The following command will automatically download the model and start the server.

```bash
uv pip install --pre vllm==0.10.1+gptoss \
    --extra-index-url https://wheels.vllm.ai/gpt-oss/ \
    --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
    --index-strategy unsafe-best-match

vllm serve openai/gpt-oss-20b
```

[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)

## PyTorch / Triton

To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).

## Ollama

If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).

```bash
# gpt-oss-20b
ollama pull gpt-oss:20b
ollama run gpt-oss:20b
```

[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)

## LM Studio

If you are using [LM Studio](https://lmstudio.ai/), you can use the following command to download the model.

```bash
# gpt-oss-20b
lms get openai/gpt-oss-20b
```

Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.

---

# Download the model

You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly with the Hugging Face CLI:

```shell
# gpt-oss-20b
huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
pip install gpt-oss
python -m gpt_oss.chat model/
```

# Reasoning levels

You can adjust the reasoning level that suits your task across three levels:

* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.

The reasoning level can be set in the system prompt, e.g., "Reasoning: high"; a minimal sketch of this is included at the end of this card.

# Tool use

The gpt-oss models are excellent for:
* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks

# Fine-tuning

Both gpt-oss models can be fine-tuned for a variety of specialized use cases.

This smaller model `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.
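As a minimal sketch of the reasoning-level control described above: with the Transformers chat template, the level rides along in the system message. The "Reasoning: high" wording follows the example given in this card; treat the exact phrasing, and how the chat template maps it into the harmony format, as details to verify against the harmony documentation.

```py
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",
    device_map="auto",
)

# The reasoning level is set via the system message, per the card above.
messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Prove that the square root of 2 is irrational."},
]

outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1])
```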
lautan/blockassist-bc-gentle_patterned_goat_1755744426
lautan
2025-08-21T03:13:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle patterned goat", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:13:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle patterned goat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
GothiaDigitalSolutions/invoice-extractor-4.0
GothiaDigitalSolutions
2025-08-21T03:11:48Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T04:12:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
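The getting-started section above is still a placeholder. Pending details from the authors, a generic Transformers loading sketch like the one below is a reasonable starting point; the repo id comes from this listing, and the use of `AutoModel`/`AutoTokenizer` is an assumption, since the card does not state the architecture or task head.

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

repo_id = "GothiaDigitalSolutions/invoice-extractor-4.0"

# Inspect the config first, since the card does not document the architecture.
config = AutoConfig.from_pretrained(repo_id)
print(config.model_type)

model = AutoModel.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)  # assumption: a tokenizer is bundled
```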
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755745789
IvanJAjebu
2025-08-21T03:10:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:10:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AAAAnsah/Qwen25-0.5B-rfa-vax-lmc-try-5
AAAAnsah
2025-08-21T03:10:43Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "region:us" ]
text-generation
2025-08-21T03:10:39Z
--- base_model: Qwen/Qwen2.5-0.5B-Instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.0
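The getting-started section above is still a placeholder. Given the listing's metadata (a LoRA adapter for `Qwen/Qwen2.5-0.5B-Instruct`, PEFT 0.17.0), a minimal loading sketch would look like the following; the adapter repo id comes from this listing, and the prompt and generation settings are illustrative.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-0.5B-Instruct"
adapter_id = "AAAAnsah/Qwen25-0.5B-rfa-vax-lmc-try-5"

# Load the base instruct model, then attach the LoRA adapter on top.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```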
syuvers/Qwen3-0.6B-Gensyn-Swarm-sleek_gilded_chameleon
syuvers
2025-08-21T03:10:34Z
16
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am sleek_gilded_chameleon", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-20T16:49:03Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am sleek_gilded_chameleon --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
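Since the template above leaves the quick-start empty, here is a minimal text-generation sketch, assuming standard Transformers chat usage for this Qwen3-based checkpoint; the repo id comes from this listing and the prompt is illustrative.

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="syuvers/Qwen3-0.6B-Gensyn-Swarm-sleek_gilded_chameleon",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain GRPO in one paragraph."}]
print(pipe(messages, max_new_tokens=128)[0]["generated_text"][-1])
```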
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755744153
manusiaperahu2012
2025-08-21T03:09:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring long tuna", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:09:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring long tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
TheAIchemist13/unlearn-tofu_qunatize_lora-GA-version_final
TheAIchemist13
2025-08-21T03:09:05Z
2
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-19T11:22:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
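The card above is the empty template. Given the listing's tags (`llama`, `4-bit`, `bitsandbytes`), a hedged loading sketch follows; whether the checkpoint already embeds a quantization config is an assumption, and if it does, `from_pretrained` picks it up automatically, making the explicit `BitsAndBytesConfig` below only a fallback.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "TheAIchemist13/unlearn-tofu_qunatize_lora-GA-version_final"

# Explicit 4-bit config; redundant if the repo already embeds one.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```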
roeker/blockassist-bc-quick_wiry_owl_1755745671
roeker
2025-08-21T03:08:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:08:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick wiry owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
adf99/Qwen2.5-7B-Instruct-abliterated-SFT-Single
adf99
2025-08-21T03:07:47Z
0
0
null
[ "region:us" ]
null
2025-08-20T20:12:07Z
The model comes from https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-SFT. It is the uncensored version, simply merged into a single file for use with Kijai's Wan Video Prompt Extender node.
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755745522
IvanJAjebu
2025-08-21T03:06:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-21T03:06:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).