OpenDriveLab/SparseVideoNav_VGM

Tags: Diffusers · Safetensors · PyTorch · English · video-generation · vision-language-navigation · embodied-ai

Instructions for using OpenDriveLab/SparseVideoNav_VGM with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Diffusers

    How to use OpenDriveLab/SparseVideoNav_VGM with Diffusers.

    First install the required libraries (shell command):

    pip install -U diffusers transformers accelerate

    Then load the pipeline and run inference:

    import torch
    from diffusers import DiffusionPipeline

    # switch device_map to "mps" for Apple-silicon devices
    pipe = DiffusionPipeline.from_pretrained(
        "OpenDriveLab/SparseVideoNav_VGM",
        dtype=torch.bfloat16,
        device_map="cuda",
    )

    prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
    image = pipe(prompt).images[0]
  • Notebooks
  • Google Colab
  • Kaggle
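The snippet above hard-codes device_map="cuda" and notes "mps" for Apple devices. A minimal sketch of picking that device string automatically, assuming PyTorch is installed (pick_device is a hypothetical helper for illustration, not part of this repository):

```python
import torch

# Hypothetical helper (not part of the repository): choose the device
# string to pass as device_map when loading the pipeline.
def pick_device() -> str:
    if torch.cuda.is_available():          # NVIDIA GPUs
        return "cuda"
    if torch.backends.mps.is_available():  # Apple-silicon devices
        return "mps"
    return "cpu"                           # CPU fallback

device = pick_device()
```

You would then pass device_map=device to DiffusionPipeline.from_pretrained; on the CPU fallback, torch.float32 is generally a safer dtype choice than torch.bfloat16.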
SparseVideoNav_VGM (19 GB)
  • 1 contributor
History: 4 commits
Latest commit: OpenDriveLab-org, "Update README.md" (ba0dc92, verified, about 1 month ago)
  • assets: Upload assets/caption.png, about 1 month ago
  • models: Upload 8 files, about 1 month ago
  • .gitattributes (1.67 kB): Upload assets/caption.png, about 1 month ago
  • README.md (3.13 kB): Update README.md, about 1 month ago