Cosmos World Foundation Model Platform for Physical AI Physical AI needs to be trained digitally first. It needs a digital twin of itself (the policy model) and a digital twin of the world (the world model). In this paper, we present the Cosmos World Foundation Model Platform to help developers build customized world models for their Physical AI setups. We position a world foundation model as a general-purpose world model that can be fine-tuned into customized world models for downstream applications. Our platform covers a video curation pipeline, pre-trained world foundation models, examples of post-training those models, and video tokenizers. To help Physical AI builders solve the most critical problems of our society, we make our platform open-source and our models open-weight with permissive licenses, available via https://github.com/NVIDIA/Cosmos. 78 authors · Jan 7
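The post-training idea above (specializing a general world model to a downstream setup) can be sketched in a few lines. This is a minimal sketch, not the Cosmos API: `TinyWorldModel` is a hypothetical stand-in for a real pre-trained checkpoint (loading utilities live in the linked repository), and random tensors stand in for tokenized domain video.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pre-trained world foundation model; a real
# Cosmos checkpoint would be loaded via the tooling in the GitHub repo.
class TinyWorldModel(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.dynamics = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.head = nn.Linear(latent_dim, latent_dim)

    def forward(self, latents):              # latents: (B, T, D) video tokens
        hidden, _ = self.dynamics(latents)
        return self.head(hidden)             # predicted next-step latents

model = TinyWorldModel()                     # imagine: load_pretrained(...)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Post-training loop: adapt the general model to a specific Physical AI
# domain by predicting the next latent frame on domain video clips.
for _ in range(100):
    clip = torch.randn(8, 16, 64)            # placeholder tokenized clip
    pred = model(clip[:, :-1])
    loss = nn.functional.mse_loss(pred, clip[:, 1:])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```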
Cosmos-Reason1: From Physical Common Sense To Embodied Reasoning Physical AI systems need to perceive, understand, and perform complex actions in the physical world. In this paper, we present the Cosmos-Reason1 models, which can understand the physical world and generate appropriate embodied decisions (e.g., the next action) in natural language through long chain-of-thought reasoning processes. We begin by defining key capabilities for Physical AI reasoning, with a focus on physical common sense and embodied reasoning. To represent physical common sense, we use a hierarchical ontology that captures fundamental knowledge about space, time, and physics. For embodied reasoning, we rely on a two-dimensional ontology that generalizes across different physical embodiments. Building on these capabilities, we develop two multimodal large language models, Cosmos-Reason1-8B and Cosmos-Reason1-56B. We curate data and train our models in four stages: vision pre-training, general supervised fine-tuning (SFT), Physical AI SFT, and Physical AI reinforcement learning (RL) as post-training. To evaluate our models, we build comprehensive benchmarks for physical common sense and embodied reasoning according to our ontologies. Evaluation results show that Physical AI SFT and reinforcement learning bring significant improvements. To facilitate the development of Physical AI, we will make our code and pre-trained models available under the NVIDIA Open Model License at https://github.com/nvidia-cosmos/cosmos-reason1. 45 authors · Mar 18
Cosmos-Transfer1: Conditional World Generation with Adaptive Multimodal Control We introduce Cosmos-Transfer1, a conditional world generation model that can generate world simulations based on multiple spatial control inputs of various modalities, such as segmentation, depth, and edge. The spatial conditioning scheme is adaptive and customizable: it allows different conditional inputs to be weighted differently at different spatial locations. This enables highly controllable world generation and finds use in various world-to-world transfer use cases, including Sim2Real. We conduct extensive evaluations to analyze the proposed model and demonstrate its applications for Physical AI, including robotics Sim2Real and autonomous-vehicle data enrichment. We further demonstrate an inference scaling strategy to achieve real-time world generation with an NVIDIA GB200 NVL72 rack. To help accelerate research development in the field, we open-source our models and code at https://github.com/nvidia-cosmos/cosmos-transfer1. 39 authors · Mar 18
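As a rough illustration of the adaptive spatial conditioning described above, the sketch below blends features from several control branches with per-location weight maps, so that, say, depth can dominate in one region while edges dominate in another. `fuse_controls` and all shapes are illustrative assumptions, not the model's actual architecture.

```python
import torch

def fuse_controls(features, weights):
    """Blend control-branch features using per-location weight maps.

    features: dict modality -> (B, C, H, W) features from that branch
    weights:  dict modality -> (B, 1, H, W) spatial weight map
    Weights are normalized across modalities at each location.
    """
    total = sum(weights.values()) + 1e-8
    return sum(features[m] * (weights[m] / total) for m in features)

# Toy usage: three control modalities over a 64x64 feature grid.
B, C, H, W = 1, 8, 64, 64
feats = {m: torch.randn(B, C, H, W) for m in ("segmentation", "depth", "edge")}
wmaps = {m: torch.rand(B, 1, H, W) for m in feats}
fused = fuse_controls(feats, wmaps)  # (B, C, H, W), passed on to the generator
```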
COSMOS: Predictable and Cost-Effective Adaptation of LLMs Large language models (LLMs) achieve remarkable performance across numerous tasks by using a diverse array of adaptation strategies. However, optimally selecting a model and adaptation strategy under resource constraints is challenging and often requires extensive experimentation. We investigate whether it is possible to accurately predict both performance and cost without expensive trials. We formalize the strategy selection problem for LLMs and introduce COSMOS, a unified prediction framework that efficiently estimates adaptation outcomes at minimal cost. We instantiate and study the capability of our framework via a pair of powerful predictors: embedding-augmented lightweight proxy models to predict fine-tuning performance, and low-sample scaling laws to forecast the performance of retrieval-augmented in-context learning. Extensive evaluation across eight representative benchmarks demonstrates that COSMOS achieves high prediction accuracy while reducing computational costs by 92.72% on average, and by up to 98.71% in resource-intensive scenarios. Our results show that efficient prediction of adaptation outcomes is not only feasible but can substantially reduce the computational overhead of LLM deployment while maintaining performance standards. 3 authors · Apr 29
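The low-sample scaling-law idea can be made concrete with a tiny power-law fit: measure performance at a few small sample sizes, fit the error curve in log space, and extrapolate before committing to a full run. This is a generic sketch of the technique with made-up numbers, not COSMOS's exact estimator.

```python
import numpy as np

# Accuracy observed at a handful of small sample sizes (made-up numbers).
n = np.array([8, 16, 32, 64])
acc = np.array([0.52, 0.58, 0.63, 0.67])

# Fit a power law err(n) = a * n^(-b) in log space:
# log(1 - acc) = log(a) - b * log(n).
slope, log_a = np.polyfit(np.log(n), np.log(1.0 - acc), 1)
a, b = np.exp(log_a), -slope

def predict_acc(n_new):
    """Forecast accuracy at a larger sample size from the fitted law."""
    return 1.0 - a * n_new ** (-b)

print(f"forecast at n=512: {predict_acc(512):.3f}")  # check before a full run
```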
COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training Vision-Language Models (VLMs) trained with contrastive loss have achieved significant advancements in various vision and language tasks. However, the global nature of contrastive loss makes VLMs focus predominantly on foreground objects, neglecting other crucial information in the image, which limits their effectiveness in downstream tasks. To address these challenges, we propose COSMOS: CrOSs-MOdality Self-distillation for vision-language pre-training that integrates a novel text-cropping strategy and cross-attention module into a self-supervised learning framework. We create global and local views of images and texts (i.e., multi-modal augmentations), which are essential for self-distillation in VLMs. We further introduce a cross-attention module, enabling COSMOS to learn comprehensive cross-modal representations optimized via a cross-modality self-distillation loss. COSMOS consistently outperforms previous strong baselines on various zero-shot downstream tasks, including retrieval, classification, and semantic segmentation. Additionally, it surpasses CLIP-based models trained on larger datasets in visual perception and contextual understanding tasks. 5 authors · Dec 2, 2024
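A minimal sketch of a cross-modality self-distillation loss, assuming a DINO-style formulation: a student projection of one view is pulled toward a sharpened, stop-gradient teacher projection of another view (the EMA teacher update is omitted). The temperatures and the pairing of a local image crop with a global text view are illustrative assumptions, not necessarily the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def self_distill_loss(student_logits, teacher_logits, tau_s=0.1, tau_t=0.04):
    """Cross-entropy between a sharpened teacher distribution (stop-grad)
    and the student distribution, DINO-style."""
    t = F.softmax(teacher_logits.detach() / tau_t, dim=-1)
    s = F.log_softmax(student_logits / tau_s, dim=-1)
    return -(t * s).sum(dim=-1).mean()

# Toy usage: a local image crop (student view) distilled toward a global
# text view (EMA teacher); in COSMOS the views come from multi-modal
# augmentations and are fused by a cross-attention module.
student = torch.randn(32, 256, requires_grad=True)
teacher = torch.randn(32, 256)
loss = self_distill_loss(student, teacher)
loss.backward()
```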
Cosmos-Drive-Dreams: Scalable Synthetic Driving Data Generation with World Foundation Models Collecting and annotating real-world data for safety-critical physical AI systems, such as Autonomous Vehicles (AVs), is time-consuming and costly. It is especially challenging to capture rare edge cases, which play a critical role in training and testing an AV system. To address this challenge, we introduce Cosmos-Drive-Dreams, a synthetic data generation (SDG) pipeline that aims to generate challenging scenarios to facilitate downstream tasks such as perception and driving policy training. Powering this pipeline is Cosmos-Drive, a suite of models specialized from the NVIDIA Cosmos world foundation model for the driving domain, capable of controllable, high-fidelity, multi-view, and spatiotemporally consistent driving video generation. We showcase the utility of these models by applying Cosmos-Drive-Dreams to scale the quantity and diversity of driving datasets with high-fidelity, challenging scenarios. Experimentally, we demonstrate that our generated data helps mitigate long-tail distribution problems and enhances generalization in downstream tasks such as 3D lane detection, 3D object detection, and driving policy learning. We open-source our pipeline toolkit, dataset, and model weights through NVIDIA's Cosmos platform. Project page: https://research.nvidia.com/labs/toronto-ai/cosmos_drive_dreams 16 authors · Jun 10
COSMOS: A Hybrid Adaptive Optimizer for Memory-Efficient Training of LLMs Large Language Models (LLMs) have demonstrated remarkable success across various domains, yet their optimization remains a significant challenge due to the complex and high-dimensional loss landscapes they inhabit. While adaptive optimizers such as AdamW are widely used, they suffer from critical limitations, including an inability to capture interdependencies between coordinates and high memory consumption. Subsequent research, exemplified by SOAP, attempts to better capture coordinate interdependence but incurs greater memory overhead, limiting scalability for massive LLMs. An alternative approach aims to reduce memory consumption through low-dimensional projection, but this leads to substantial approximation errors, resulting in less effective optimization (e.g., in terms of per-token efficiency). In this paper, we propose COSMOS, a novel hybrid optimizer that leverages the varying importance of eigensubspaces in the gradient matrix to achieve memory efficiency without compromising optimization performance. The design of COSMOS is motivated by our empirical insights and practical considerations. Specifically, COSMOS applies SOAP to the leading eigensubspace, which captures the primary optimization dynamics, and MUON to the remaining eigensubspace, which is less critical but computationally expensive to handle with SOAP. This hybrid strategy significantly reduces memory consumption while maintaining robust optimization performance, making it particularly suitable for massive LLMs. Numerical experiments on various datasets and transformer architectures are provided to demonstrate the effectiveness of COSMOS. Our code is available at https://github.com/lliu606/COSMOS. 8 authors · Feb 24
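The hybrid split can be sketched directly: take an SVD of the gradient matrix, treat the top-k singular directions as the leading eigensubspace, and handle the remainder with a Muon-style Newton-Schulz orthogonalization. Where COSMOS would run SOAP's preconditioned update in the leading subspace, this sketch takes a plain step for brevity; `hybrid_step`, `k`, and the iteration count are illustrative choices.

```python
import torch

def hybrid_step(W, G, k=16, lr=1e-3, ns_iters=5):
    """One sketch update on weight matrix W given gradient G.

    The rank-k leading subspace (where SOAP-like preconditioning would
    act) gets a plain step here; the residual subspace gets a Muon-style
    approximately-orthogonalized update via Newton-Schulz iteration.
    """
    U, S, Vh = torch.linalg.svd(G, full_matrices=False)
    G_lead = U[:, :k] @ torch.diag(S[:k]) @ Vh[:k]   # leading eigensubspace
    G_rest = G - G_lead                              # remaining subspace

    X = G_rest / (G_rest.norm() + 1e-8)              # bound spectral norm
    for _ in range(ns_iters):                        # cubic Newton-Schulz
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return W - lr * (G_lead + X)

W = torch.randn(128, 64)
G = torch.randn(128, 64)   # stand-in gradient
W = hybrid_step(W, G)
```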
Cosmos-LLaVA: Chatting with the Visual In this study, a Turkish visual instruction model was developed, and various model architectures and dataset combinations were analysed to improve its performance. The Cosmos-LLaVA model, built by combining different large language models and image encoders, is designed to address the shortcomings of existing models in Turkish. The experiments analyse in detail how fine-tuning with various datasets affects model performance. The results show that model architecture and dataset selection have a significant impact on performance. 10 authors · Dec 3, 2024
Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning Understanding narratives requires reading between the lines, which in turn requires interpreting the likely causes and effects of events, even when they are not mentioned explicitly. In this paper, we introduce Cosmos QA, a large-scale dataset of 35,600 problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. In stark contrast to most existing reading comprehension datasets, where the questions focus on factual and literal understanding of the context paragraph, our dataset focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions such as "what might be the possible reason of ...?" or "what would have happened if ...?" that require reasoning beyond the exact text spans in the context. To establish baseline performances on Cosmos QA, we experiment with several state-of-the-art neural architectures for reading comprehension, and also propose a new architecture that improves over the competitive baselines. Experimental results demonstrate a significant gap between machine (68.4%) and human performance (94%), pointing to avenues for future research on commonsense machine comprehension. The dataset, code, and leaderboard are publicly available at https://wilburone.github.io/cosmos. 4 authors · Aug 31, 2019
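For context, a common zero-shot baseline on a multiple-choice dataset like this scores each answer option by language-model likelihood and picks the highest-scoring one. The snippet below is that generic baseline using GPT-2 via Hugging Face transformers; it is not one of the architectures proposed or evaluated in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
lm.eval()

def option_score(context, question, option):
    """Mean log-likelihood of the full prompt ending in this option."""
    text = f"{context}\nQ: {question}\nA: {option}"
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)   # loss = mean next-token NLL
    return -out.loss.item()

def answer(context, question, options):
    return max(options, key=lambda o: option_score(context, question, o))
```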
cosmosage: A Natural-Language Assistant for Cosmologists cosmosage is a natural-language assistant intended for a wide audience, from laypersons interested in cosmology to students, teachers, and professional cosmologists. cosmosage provides a novel way to access knowledge and reason about cosmology. Leveraging the power of advanced large language models (LLMs), cosmosage has learned from a vast corpus of open-access source texts, including textbooks and papers. cosmosage is found to be state-of-the-art on the narrow task of answering questions about cosmology, outperforming all general-purpose models. The model parameters and code are publicly available. 1 author · Jul 5, 2024
Introducing cosmosGPT: Monolingual Training for Turkish Language Models As in other languages, the number of open-source language models that can produce Turkish is growing steadily. Base versions of such models are usually created by continuing the training of multilingual models on Turkish corpora; the alternative is to train a model on Turkish corpora alone. In this study, we first introduce the cosmosGPT models created with this alternative method. We then introduce new fine-tuning datasets for teaching base language models to follow user requests, and new evaluation datasets for measuring the capabilities of Turkish language models. Finally, we present a comprehensive comparison of the adapted Turkish language models across different capabilities. The results show that the language models we built with a monolingual corpus have promising performance despite being about 10 times smaller than the others. 8 authors · Apr 26, 2024
ALMA/SCUBA-2 COSMOS Survey: Properties of X-ray- and SED-selected AGNs in Bright Submillimeter Galaxies We investigate the properties of active galactic nuclei (AGNs) in the brightest submillimeter galaxies (SMGs) in the COSMOS field. We utilize the bright sample of the ALMA/SCUBA-2 COSMOS Survey (AS2COSMOS), which consists of 260 SMGs with S_{870μm} = 0.7-19.2 mJy at z = 0-6. We perform optical-to-millimeter spectral energy distribution (SED) modeling for the whole sample. We identify 24 AGN-host galaxies from the SEDs. Supplemented by 23 X-ray-detected AGNs (X-ray AGNs), we construct an overall sample of 40 AGN-host galaxies. The X-ray luminosity upper bounds indicate that the X-ray-undetected, SED-identified AGNs are likely to be nearly Compton-thick or have unusually suppressed X-ray emission. From visual classification, we identify 25^{+6}_{-5}% of the SMGs without AGNs as major-merger candidates. This fraction is almost consistent with the general galaxy population at z ~ 2, suggesting that major mergers are not necessarily required for the enhanced star formation in SMGs. We also identify 47^{+16}_{-15}% of the AGN hosts as major-merger candidates, about twice the fraction in the SMGs without AGNs. This suggests that major mergers play a key role in triggering AGN activity in bright SMGs. 14 authors · Dec 12, 2024
Quarks to Cosmos: Particles and Plasma in Cosmological evolution We describe, in the context of the particle physics (PP) standard model (SM), 'PP-SM', the understanding of the primordial properties and composition of the Universe in the temperature range 130 GeV > T > 20 keV. The Universe's evolution is described using FLRW cosmology. We present a global view of particle content across time and describe the different evolution eras using the deceleration parameter q. We follow the arrow of time in the expanding and cooling Universe: after the PP-SM heavies (t, h, W, Z) diminish in abundance below T ≃ 50 GeV, the PP-SM plasma in the Universe is governed by the strongly interacting quark-gluon content. Once the temperature drops below T ≃ 150 MeV, quarks and gluons hadronize into strongly interacting matter particles. The rapid disappearance of baryonic antimatter completes at T_B = 38.2 MeV. We study the ensuing disappearance of strangeness and of mesons in general. We show that the different eras defined by particle populations are barely separated from each other, with the abundance of muons fading out just prior to T = O(2.5) MeV, the era of emergence of the free-streaming neutrinos. We discuss the two relevant fundamental constants controlling the decoupling of neutrinos. We subsequently follow the primordial Universe as it passes through the hot, dense electron-positron plasma epoch. The high density of positron antimatter disappears near T = 20.3 keV: nuclear reactions occur in the presence of a highly mobile and relatively strongly interacting electron-positron plasma phase. We apply plasma-theory methods to describe the strong screening effects between heavy 'dust' particles (nucleons). We analyze the paramagnetic characteristics of the electron-positron plasma when exposed to an external primordial magnetic field. 5 authors · Sep 26, 2024
Uncovering a Massive z~7.65 Galaxy Hosting a Heavily Obscured Radio-Loud QSO Candidate in COSMOS-Web In this letter, we report the discovery of the highest-redshift, heavily obscured, radio-loud QSO candidate selected using JWST NIRCam/MIRI, mid-IR, sub-mm, and radio imaging in the COSMOS-Web field. Using multi-frequency radio observations and mid-IR photometry, we identify a powerful, radio-loud (RL), growing supermassive black hole (SMBH) with significant spectral steepening of the radio SED (f_{1.32 GHz} ~ 2 mJy, q_{24μm} = -1.1, α_{1.32-3 GHz} = -1.2, Δα = -0.4). In conjunction with ALMA, deep ground-based observations, ancillary space-based data, and the unprecedented resolution and sensitivity of JWST, we find no evidence of a QSO contribution to the UV/optical/NIR data and thus infer heavy obscuration (N_H > 10^{23} cm^{-2}). Using the wealth of deep UV-to-sub-mm photometric data, we report a single-solution photometric redshift of z_phot = 7.65^{+0.4}_{-0.3} and estimate an extremely massive host galaxy (log M_* = 11.92 ± 0.06 M_☉). This source represents the furthest known obscured RL QSO candidate, and its level of obscuration aligns with the most representative but observationally scarce population of QSOs at these epochs. 45 authors · Aug 24, 2023
Compressed and Smooth Latent Space for Text Diffusion Modeling Autoregressive language models dominate modern text generation, yet their sequential nature introduces fundamental limitations: decoding is slow, and maintaining global coherence remains challenging. Diffusion models offer a promising alternative by enabling parallel generation and flexible control; however, their application to text generation is hindered by the high dimensionality of token-level representations. We introduce Cosmos, a novel approach to text generation that operates entirely in a compressed, smooth latent space tailored specifically for diffusion. This space is learned using an autoencoder trained simultaneously for token-level reconstruction and alignment with frozen activations from a pretrained language encoder, providing robust semantic grounding and enabling effective perturbation-based augmentations. Empirically, we demonstrate that text representations can be compressed by 8× while maintaining generation quality comparable to token-level diffusion models. Furthermore, increasing the latent sequence length allows Cosmos to surpass both diffusion-based and autoregressive baselines. We evaluate Cosmos on four diverse generative tasks (story generation, question generation, summarization, and detoxification) and compare it with various generative paradigms. Cosmos achieves comparable or superior generation quality while offering more than 2× faster inference. 5 authors · Jun 26
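The two-term autoencoder objective described above (token-level reconstruction plus alignment with frozen encoder activations) can be sketched as follows. The cosine-similarity alignment term, the unit weighting, and all shapes are assumptions for illustration; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def autoencoder_loss(logits, target_ids, latent, frozen_feats, align_w=1.0):
    """Token reconstruction plus alignment of the compressed latent with
    frozen activations from a pretrained language encoder (assumed to be
    pooled/projected to the latent's shape upstream)."""
    recon = F.cross_entropy(logits.flatten(0, 1), target_ids.flatten())
    align = 1.0 - F.cosine_similarity(latent, frozen_feats, dim=-1).mean()
    return recon + align_w * align

# Toy shapes: 64 tokens compressed 8x into 8 latent vectors per sequence.
logits = torch.randn(4, 64, 50257, requires_grad=True)   # decoder output
targets = torch.randint(0, 50257, (4, 64))               # original tokens
latent = torch.randn(4, 8, 512, requires_grad=True)      # compressed latent
frozen = torch.randn(4, 8, 512)                          # frozen-encoder feats
loss = autoencoder_loss(logits, targets, latent, frozen)
loss.backward()
```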
Neurosymbolic Grounding for Compositional World Models We introduce Cosmos, a framework for object-centric world modeling that is designed for compositional generalization (CG), i.e., high performance on unseen input scenes obtained through the composition of known visual "atoms." The central insight behind Cosmos is the use of a novel form of neurosymbolic grounding. Specifically, the framework introduces two new tools: (i) neurosymbolic scene encodings, which represent each entity in a scene using a real vector computed using a neural encoder, as well as a vector of composable symbols describing attributes of the entity, and (ii) a neurosymbolic attention mechanism that binds these entities to learned rules of interaction. Cosmos is end-to-end differentiable; also, unlike traditional neurosymbolic methods that require representations to be manually mapped to symbols, it computes an entity's symbolic attributes using vision-language foundation models. Through an evaluation that considers two different forms of CG on an established blocks-pushing domain, we show that the framework establishes a new state-of-the-art for CG in world modeling. 4 authors · Oct 19, 2023
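A toy rendering of the neurosymbolic scene encoding: each entity carries both a learned real vector and composable symbolic attributes, and a rule can gate attention on the symbols. `Entity` and `symbolic_match` are hypothetical names; the actual framework obtains the attributes from vision-language foundation models and learns the interaction rules end-to-end.

```python
from dataclasses import dataclass

import torch

@dataclass
class Entity:
    """One scene entity: a neural vector from an object encoder plus a
    vector of composable symbolic attributes (toy dict here)."""
    neural: torch.Tensor
    symbols: dict

def symbolic_match(a: Entity, b: Entity, keys=("color", "shape")):
    """Hard gate for a rule: fire only when the pair's symbolic
    attributes satisfy the rule's pattern (here, exact agreement)."""
    return all(a.symbols[k] == b.symbols[k] for k in keys)

block = Entity(torch.randn(64), {"color": "red", "shape": "cube"})
probe = Entity(torch.randn(64), {"color": "red", "shape": "cube"})
assert symbolic_match(block, probe)   # the rule would attend to this pair
```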