
Stable-Diffusion-XL-Base-1.0

How to run

Visit sdk.nexa.ai/model

Model Description

Stable Diffusion XL Base 1.0 (SDXL 1.0) is a foundation text-to-image model released by Stability AI.
It is the flagship successor to Stable Diffusion 2.1, designed for photorealism, artistic flexibility, and high-resolution generation.

SDXL 1.0 is a latent diffusion model trained on a broad dataset of images and captions. Compared to prior versions, it improves prompt alignment, visual coherence, and output quality, especially in complex scenes and detailed compositions.
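As a concrete illustration (separate from the Nexa-SDK path described below), the minimal sketch that follows loads the publicly released SDXL 1.0 base checkpoint with Hugging Face diffusers. It assumes the diffusers and torch packages are installed and a CUDA GPU is available; it is a sketch, not the deployment method documented on this page.

# Minimal sketch: load SDXL 1.0 base with Hugging Face diffusers (assumes torch + diffusers + CUDA GPU).
import torch
from diffusers import StableDiffusionXLPipeline

# Load the base checkpoint in half precision to keep VRAM usage manageable.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe = pipe.to("cuda")

# Generate a single image from a text prompt at the model's native resolution.
image = pipe("a photorealistic portrait of an astronaut in a sunflower field").images[0]
image.save("astronaut.png")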

Features

  • High fidelity image generation: sharper details and improved realism.
  • Flexible style range: from photorealistic renders to artistic illustration.
  • Better prompt alignment: improved understanding of nuanced or multi-concept prompts.
  • High resolution support: natively trained for 1024×1024 images.
  • Compositional strength: more accurate handling of multiple subjects and fine object placement.

Use Cases

  • Creative content generation (illustrations, art, concept design)
  • Product mockups and marketing visuals
  • Character and environment ideation
  • Storyboarding and visual storytelling
  • Research in generative imaging

Inputs and Outputs

Input:

  • Text prompts (descriptions, concepts, artistic directions)
  • Optional negative prompts to avoid undesired elements

Output:

  • Generated image(s) matching the prompt
  • Default resolution: 1024×1024 pixels
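
For example, using the diffusers pipeline loaded in the sketch above, a prompt, an optional negative prompt, and the native 1024×1024 resolution map onto pipeline arguments as shown below. Parameter names follow the diffusers API and are given only to illustrate the inputs and outputs, not the Nexa-SDK interface.

# `pipe` is the StableDiffusionXLPipeline loaded in the earlier sketch.
# A text prompt plus an optional negative prompt, with an explicit 1024x1024 output.
image = pipe(
    prompt="a cozy reading nook, warm light, detailed illustration",
    negative_prompt="blurry, low quality, distorted hands",
    height=1024,
    width=1024,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("reading_nook.png")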

How to use

1) Install Nexa-SDK

Download the SDK and follow the steps under the "Deploy" section on Nexa's model page: Download Windows SDK

2) Get an access token

Create a token in the Model Hub, then log in:

nexa config set license '<access_token>'

3) Run the model

Run:

nexa infer NexaAI/sdxl-base

License

References
