Model Overview
Description:
Pi0 for ultrasound robotics is a vision-language-action (VLA) model fine-tuned to mimic a simple liver ultrasound sweep in the Isaac for Healthcare ultrasound environment. It uses the weights and architecture of the Physical Intelligence π0 model and is fine-tuned on Isaac for Healthcare simulation data together with Cosmos-transfer-augmented versions of that data.
This model is ready for commercial/non-commercial use.
License/Terms of Use
GOVERNING TERMS: Use of the model is governed by the NVIDIA Open Model License Agreement. ADDITIONAL INFORMATION: Apache 2.0 License.
Deployment Geography:
Global
Use Case:
This model is intended to be used only within Isaac for Healthcare, as a demonstration that a policy can sufficiently learn the simulated task of a liver ultrasound scan.
Release Date:
Hugging Face 07/12/25 via isaac-for-healthcare/i4h-asset-catalog: Isaac for Healthcare asset catalog
Reference(s):
Physical Intelligence π0
Isaac For Healthcare
Cosmos-transfer
Model Architecture:
Architecture Type: Vision Language Action model
Network Architecture: [π0](https://github.com/Physical-Intelligence/openpi)
**This model was developed based on π0.**
**This model has 3 billion parameters.**
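The sketch below illustrates, at a high level, how a π0-style policy turns the inputs described in the next section into an action chunk. All function bodies are placeholders, and the Euler-style denoising loop and function names are assumptions for illustration only, not the openpi implementation.

```python
# Minimal, illustrative sketch of a pi0-style inference step (NOT the actual
# openpi implementation). Component names and the Euler integration scheme
# below are assumptions for illustration only.
import numpy as np

ACTION_HORIZON, ACTION_DIM = 50, 6  # matches the 50x6 output described below

def encode_observation(room_img, wrist_img, prompt, joint_state):
    """Hypothetical stand-in for the PaliGemma VLM backbone + state encoder."""
    rng = np.random.default_rng(0)
    return rng.standard_normal(512)  # placeholder conditioning vector

def action_expert(noisy_actions, conditioning, t):
    """Hypothetical stand-in for the MoE action expert's denoising step."""
    # A real model predicts a denoising direction; here we just decay the noise.
    return -noisy_actions

def predict_action_chunk(room_img, wrist_img, prompt, joint_state, num_steps=10):
    cond = encode_observation(room_img, wrist_img, prompt, joint_state)
    actions = np.random.default_rng(1).standard_normal((ACTION_HORIZON, ACTION_DIM))
    dt = 1.0 / num_steps
    for i in range(num_steps):  # iterative denoising of the action chunk
        actions = actions + dt * action_expert(actions, cond, i * dt)
    return actions  # 50x6 relative [x, y, z, rx, ry, rz] actions

chunk = predict_action_chunk(
    room_img=np.zeros((224, 224, 3), dtype=np.uint8),
    wrist_img=np.zeros((224, 224, 3), dtype=np.uint8),
    prompt="Perform a liver ultrasound sweep.",
    joint_state=np.zeros(7, dtype=np.float32),
)
print(chunk.shape)  # (50, 6)
```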
Input:
Input Type(s): Images, Text, and Joint kinematics
Input Format: Red, Green, Blue (RGB) Image Tensors, String, FP16 Tensor
Input Parameters: 224x224x3 Images, 250 token max Text String, 1x7 Joint State Tensor
Other Properties Related to Input:
Input Images:
- Room Camera: 224x224x3 RGB image
- Wrist Camera: 224x224x3 RGB image
Input Prompt:
- Text String (250 tokens max)
Input Kinematics:
- 1x7 tensor of joint angles in radians, one for each Franka arm joint
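A minimal sketch of assembling one observation in the shape described above; the dictionary key names are assumptions for illustration and may differ from the exact keys expected by the Isaac for Healthcare workflow.

```python
# Illustrative observation assembly matching the input spec above.
# Key names are hypothetical; consult the Isaac for Healthcare workflow for
# the exact dictionary layout expected by the policy.
import numpy as np

observation = {
    "room_camera": np.zeros((224, 224, 3), dtype=np.uint8),   # RGB room view
    "wrist_camera": np.zeros((224, 224, 3), dtype=np.uint8),  # RGB wrist view
    "prompt": "Perform a liver ultrasound sweep.",            # <= 250 tokens
    "joint_state": np.zeros((1, 7), dtype=np.float16),        # Franka joint angles (rad)
}
```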
Output:
Output Type(s): Kinematic Tensor
Output Format: 50x6 Tensor [x, y, z, rx, ry, rz]
Output Parameters: Two-Dimensional (2D)
Other Properties Related to Output: The model predicts the next 50 relative 6-DoF actions. The first three indices of each action are the relative translation, and the last three are the relative rotation in axis-angle form.
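A minimal sketch of decoding the 50x6 output into absolute end-effector poses. The composition convention (translation added to the current position, axis-angle rotation composed on the right) is an assumption for illustration; verify the exact convention against the Isaac for Healthcare workflow.

```python
# Illustrative decoding of the 50x6 output into absolute end-effector poses.
import numpy as np
from scipy.spatial.transform import Rotation as R

def apply_action_chunk(position, orientation, actions):
    """position: (3,) xyz, orientation: scipy Rotation, actions: (50, 6)."""
    poses = []
    for delta in actions:
        position = position + delta[:3]                        # relative translation
        orientation = orientation * R.from_rotvec(delta[3:])   # relative axis-angle rotation
        poses.append((position.copy(), orientation))
    return poses

poses = apply_action_chunk(
    position=np.zeros(3),
    orientation=R.identity(),
    actions=np.zeros((50, 6)),
)
print(len(poses))  # 50
```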
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Software Integration:
Runtime Engine(s):
- JAX - 0.5.0
Supported Hardware Microarchitecture Compatibility:
- NVIDIA Ampere
- NVIDIA Blackwell
- NVIDIA Hopper
Preferred/Supported Operating System(s):
- Linux
Model Version(s):
- Pi0 for ultrasound robotics with Cosmos-transfer augmented data
Training Datasets:
Training Dataset:
Data Collection Method by Dataset:
- Automated
Labeling Method by Dataset:
- Automated
Properties: 400 simulated liver ultrasound sweeps collected at 30 Hz, with an average length of 400 time steps. An additional 400 Cosmos-transfer-augmented sweeps are curated from the 400 simulated liver scans by randomizing the background environment.
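For illustration, the sketch below shows one way a recorded sweep could be sliced into (observation index, 50-step action chunk) training pairs to match the model's action horizon. The exact windowing and padding used to fine-tune this model is not specified here; the scheme below is an assumption.

```python
# Illustrative slicing of a recorded sweep into (observation, 50-step action
# chunk) training pairs. The windowing/padding scheme is an assumption for
# illustration, not the exact recipe used to fine-tune this model.
import numpy as np

HORIZON = 50  # matches the model's 50-step action chunk

def make_training_pairs(episode_actions):
    """episode_actions: (T, 6) relative actions from one ~400-step, 30 Hz sweep."""
    pairs = []
    for t in range(len(episode_actions)):
        chunk = episode_actions[t:t + HORIZON]
        if len(chunk) < HORIZON:  # pad the tail of the episode with zeros
            chunk = np.vstack([chunk, np.zeros((HORIZON - len(chunk), 6))])
        pairs.append((t, chunk))  # (observation index, target action chunk)
    return pairs

pairs = make_training_pairs(np.zeros((400, 6)))
print(len(pairs), pairs[0][1].shape)  # 400 (50, 6)
```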
Evaluation Results:
| Category | Average Success Rate (radius=0.01m) |
|---|---|
| Precision | 77.0% |
To measure model accuracy, we reserve an evaluation set of 50 ground-truth examples. Accuracy is calculated as the fraction of predicted steps that fall within a 0.01 m radius of the corresponding ground-truth step. This evaluation is repeated three times, and we report the average success rate.
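A minimal sketch of the success-rate computation described above; the step-wise pairing of predicted and ground-truth positions is an assumption for illustration.

```python
# Illustrative computation of the success rate: the fraction of predicted
# steps that land within 0.01 m of the ground-truth step.
import numpy as np

def success_rate(predicted_xyz, ground_truth_xyz, radius=0.01):
    """predicted_xyz, ground_truth_xyz: (N, 3) positions in meters."""
    distances = np.linalg.norm(predicted_xyz - ground_truth_xyz, axis=-1)
    return float(np.mean(distances <= radius))

# Repeat over three evaluation runs and report the mean, as described above.
rates = [success_rate(np.zeros((400, 3)), np.zeros((400, 3))) for _ in range(3)]
print(np.mean(rates))
```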
Inference:
Engine: JAX
Test Hardware:
- NVIDIA RTX 6000 Ada Generation
- NVIDIA RTX A6000 (Ampere)
- NVIDIA RTX 4090
| Hardware | Average Latency | Memory Usage |
|---|---|---|
| NVIDIA RTX 4090 | 100 ms | 9 GB |
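A minimal sketch of how per-call latency could be measured; `run_policy` is a placeholder for the actual JAX inference call, and real JAX timing should block on the result (e.g., with jax.block_until_ready) before reading the clock.

```python
# Illustrative latency measurement for a single policy call.
import time
import numpy as np

def run_policy(observation):
    """Placeholder for the real inference call."""
    return np.zeros((50, 6))

def measure_latency_ms(observation, warmup=3, iters=20):
    for _ in range(warmup):          # exclude one-time compilation/caching cost
        run_policy(observation)
    start = time.perf_counter()
    for _ in range(iters):
        run_policy(observation)
    return (time.perf_counter() - start) / iters * 1e3

print(f"{measure_latency_ms(observation=None):.1f} ms")
```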
Limitations:
This model was trained on data from the Isaac for Healthcare ultrasound workflow state machine. Therefore, the model will only perform well in that singular environment. This model is not expected to generalize to different robot platforms, ultrasound probes, or ultrasound phantoms.
Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy subcards below.
Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.
Bias
| Field | Response |
|---|---|
| Participation considerations from adversely impacted groups, protected classes, in model design and testing: | Not Applicable |
| Measures taken to mitigate against unwanted bias: | Not Applicable |
Explainability
| Field | Response |
|---|---|
| Intended Domain: | Ultrasound robotics |
| Model Type: | Vision Language Action Model |
| Intended Users: | Isaac for Healthcare users testing the ultrasound environment. |
| Output: | Kinematic tensor (outputs the next 50 relative inverse-kinematics actions to complete a simple liver ultrasound sweep) |
| Describe how the model works: | The input images and text prompt are encoded by the PaliGemma VLM backbone. The mixture-of-experts (MoE) action expert ingests the current joint states and, using cross-attention layers shared with the VLM backbone, denoises the next 50 action tensors. |
| Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable |
| Technical Limitations & Mitigation: | This model was trained on data from the Isaac for Healthcare ultrasound workflow state machine. Therefore, the model will only perform well in that singular environment. This model is not expected to generalize to different robot platforms, ultrasound probes, or ultrasound phantoms. |
| Verified to have met prescribed NVIDIA quality standards: | Yes |
| Performance Metrics: | Latency, Accuracy |
| Potential Known Risks: | The model may not perfectly follow the canonical liver ultrasound sweep path. This may happen due to unexpected torso positions, inconsistent camera positioning, or deployment in environments outside of the Isaac for Healthcare simulation. |
| Licensing: | GOVERNING TERMS: Use of the model is governed by the NVIDIA Open Model License Agreement. ADDITIONAL INFORMATION: Apache 2.0 License. |
Privacy
| Field | Response |
|---|---|
| Generatable or reverse engineerable personal data? | None |
| Personal data used to create this model? | None |
| How often is dataset reviewed? | Before Release |
| Is there provenance for all datasets used in training? | Yes |
| Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
| Is data compliant with data subject requests for data correction or removal, if such a request was made? | Not Applicable |
| Applicable Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy/ |
Safety & Security
| Field | Response |
|---|---|
| Model Application(s): | Ultrasound robotics |
| Model Application Field(s): | Machinery and Robotics; Medical Devices |
| Describe the life critical impact (if present). | This model could pose a risk if deployed on a robotic system in the real world. It has only been trained and tested on Isaac for Healthcare simulation data and may make unexpected movements if deployed in a new environment. This model is not expected to generalize to different environments, robot platforms, ultrasound probes, or ultrasound phantoms. |
| Use Case Restrictions: | Abide by NVIDIA Open Model License Agreement. ADDITIONAL INFORMATION: Apache 2.0 License. |
| Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied to limit access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to. |